Dataset schema (one record per function; fields appear in this order):
- code: string, 66 to 870k characters
- docstring: string, 19 to 26.7k characters
- func_name: string, 1 to 138 characters
- language: string, 1 distinct value
- repo: string, 7 to 68 characters
- path: string, 5 to 324 characters
- url: string, 46 to 389 characters
- license: string, 7 distinct values
def _plot_ice_lines(
    self,
    preds,
    feature_values,
    n_ice_to_plot,
    ax,
    pd_plot_idx,
    n_total_lines_by_plot,
    individual_line_kw,
):
    """Plot the ICE lines.

    Parameters
    ----------
    preds : ndarray of shape (n_instances, n_grid_points)
        The predictions computed for all points of `feature_values` for a
        given feature for all samples in `X`.
    feature_values : ndarray of shape (n_grid_points,)
        The feature values for which the predictions have been computed.
    n_ice_to_plot : int
        The number of ICE lines to plot.
    ax : Matplotlib axes
        The axis on which to plot the ICE lines.
    pd_plot_idx : int
        The sequential index of the plot. It will be unraveled to find the
        matching 2D position in the grid layout.
    n_total_lines_by_plot : int
        The total number of lines expected to be plot on the axis.
    individual_line_kw : dict
        Dict with keywords passed when plotting the ICE lines.
    """
    rng = check_random_state(self.random_state)
    # subsample ice
    ice_lines_idx = rng.choice(
        preds.shape[0],
        n_ice_to_plot,
        replace=False,
    )
    ice_lines_subsampled = preds[ice_lines_idx, :]
    # plot the subsampled ice
    for ice_idx, ice in enumerate(ice_lines_subsampled):
        line_idx = np.unravel_index(
            pd_plot_idx * n_total_lines_by_plot + ice_idx, self.lines_.shape
        )
        self.lines_[line_idx] = ax.plot(
            feature_values, ice.ravel(), **individual_line_kw
        )[0]
Plot the ICE lines. Parameters ---------- preds : ndarray of shape (n_instances, n_grid_points) The predictions computed for all points of `feature_values` for a given feature for all samples in `X`. feature_values : ndarray of shape (n_grid_points,) The feature values for which the predictions have been computed. n_ice_to_plot : int The number of ICE lines to plot. ax : Matplotlib axes The axis on which to plot the ICE lines. pd_plot_idx : int The sequential index of the plot. It will be unraveled to find the matching 2D position in the grid layout. n_total_lines_by_plot : int The total number of lines expected to be plot on the axis. individual_line_kw : dict Dict with keywords passed when plotting the ICE lines.
_plot_ice_lines
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/partial_dependence.py
BSD-3-Clause
def _plot_average_dependence(
    self,
    avg_preds,
    feature_values,
    ax,
    pd_line_idx,
    line_kw,
    categorical,
    bar_kw,
):
    """Plot the average partial dependence.

    Parameters
    ----------
    avg_preds : ndarray of shape (n_grid_points,)
        The average predictions for all points of `feature_values` for a
        given feature for all samples in `X`.
    feature_values : ndarray of shape (n_grid_points,)
        The feature values for which the predictions have been computed.
    ax : Matplotlib axes
        The axis on which to plot the average PD.
    pd_line_idx : int
        The sequential index of the plot. It will be unraveled to find the
        matching 2D position in the grid layout.
    line_kw : dict
        Dict with keywords passed when plotting the PD plot.
    categorical : bool
        Whether feature is categorical.
    bar_kw: dict
        Dict with keywords passed when plotting the PD bars (categorical).
    """
    if categorical:
        bar_idx = np.unravel_index(pd_line_idx, self.bars_.shape)
        self.bars_[bar_idx] = ax.bar(feature_values, avg_preds, **bar_kw)[0]
        ax.tick_params(axis="x", rotation=90)
    else:
        line_idx = np.unravel_index(pd_line_idx, self.lines_.shape)
        self.lines_[line_idx] = ax.plot(
            feature_values,
            avg_preds,
            **line_kw,
        )[0]
Plot the average partial dependence. Parameters ---------- avg_preds : ndarray of shape (n_grid_points,) The average predictions for all points of `feature_values` for a given feature for all samples in `X`. feature_values : ndarray of shape (n_grid_points,) The feature values for which the predictions have been computed. ax : Matplotlib axes The axis on which to plot the average PD. pd_line_idx : int The sequential index of the plot. It will be unraveled to find the matching 2D position in the grid layout. line_kw : dict Dict with keywords passed when plotting the PD plot. categorical : bool Whether feature is categorical. bar_kw: dict Dict with keywords passed when plotting the PD bars (categorical).
_plot_average_dependence
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/partial_dependence.py
BSD-3-Clause
def _plot_two_way_partial_dependence(
    self,
    avg_preds,
    feature_values,
    feature_idx,
    ax,
    pd_plot_idx,
    Z_level,
    contour_kw,
    categorical,
    heatmap_kw,
):
    """Plot 2-way partial dependence.

    Parameters
    ----------
    avg_preds : ndarray of shape (n_instances, n_grid_points, n_grid_points)
        The average predictions for all points of `feature_values[0]` and
        `feature_values[1]` for some given features for all samples in `X`.
    feature_values : seq of 1d array
        A sequence of array of the feature values for which the predictions
        have been computed.
    feature_idx : tuple of int
        The indices of the target features
    ax : Matplotlib axes
        The axis on which to plot the ICE and PDP lines.
    pd_plot_idx : int
        The sequential index of the plot. It will be unraveled to find the
        matching 2D position in the grid layout.
    Z_level : ndarray of shape (8, 8)
        The Z-level used to encode the average predictions.
    contour_kw : dict
        Dict with keywords passed when plotting the contours.
    categorical : bool
        Whether features are categorical.
    heatmap_kw: dict
        Dict with keywords passed when plotting the PD heatmap (categorical).
    """
    if categorical:
        import matplotlib.pyplot as plt

        default_im_kw = dict(interpolation="nearest", cmap="viridis")
        im_kw = {**default_im_kw, **heatmap_kw}

        data = avg_preds[self.target_idx]
        im = ax.imshow(data, **im_kw)
        text = None
        cmap_min, cmap_max = im.cmap(0), im.cmap(1.0)

        text = np.empty_like(data, dtype=object)
        # print text with appropriate color depending on background
        thresh = (data.max() + data.min()) / 2.0
        for flat_index in range(data.size):
            row, col = np.unravel_index(flat_index, data.shape)
            color = cmap_max if data[row, col] < thresh else cmap_min

            values_format = ".2f"
            text_data = format(data[row, col], values_format)

            text_kwargs = dict(ha="center", va="center", color=color)
            text[row, col] = ax.text(col, row, text_data, **text_kwargs)

        fig = ax.figure
        fig.colorbar(im, ax=ax)
        ax.set(
            xticks=np.arange(len(feature_values[1])),
            yticks=np.arange(len(feature_values[0])),
            xticklabels=feature_values[1],
            yticklabels=feature_values[0],
            xlabel=self.feature_names[feature_idx[1]],
            ylabel=self.feature_names[feature_idx[0]],
        )

        plt.setp(ax.get_xticklabels(), rotation="vertical")

        heatmap_idx = np.unravel_index(pd_plot_idx, self.heatmaps_.shape)
        self.heatmaps_[heatmap_idx] = im
    else:
        from matplotlib import transforms

        XX, YY = np.meshgrid(feature_values[0], feature_values[1])
        Z = avg_preds[self.target_idx].T
        CS = ax.contour(XX, YY, Z, levels=Z_level, linewidths=0.5, colors="k")
        contour_idx = np.unravel_index(pd_plot_idx, self.contours_.shape)
        self.contours_[contour_idx] = ax.contourf(
            XX,
            YY,
            Z,
            levels=Z_level,
            vmax=Z_level[-1],
            vmin=Z_level[0],
            **contour_kw,
        )
        ax.clabel(CS, fmt="%2.2f", colors="k", fontsize=10, inline=True)

        trans = transforms.blended_transform_factory(ax.transData, ax.transAxes)
        # create the decile line for the vertical axis
        xlim, ylim = ax.get_xlim(), ax.get_ylim()
        vlines_idx = np.unravel_index(pd_plot_idx, self.deciles_vlines_.shape)
        self.deciles_vlines_[vlines_idx] = ax.vlines(
            self.deciles[feature_idx[0]],
            0,
            0.05,
            transform=trans,
            color="k",
        )
        # create the decile line for the horizontal axis
        hlines_idx = np.unravel_index(pd_plot_idx, self.deciles_hlines_.shape)
        self.deciles_hlines_[hlines_idx] = ax.hlines(
            self.deciles[feature_idx[1]],
            0,
            0.05,
            transform=trans,
            color="k",
        )
        # reset xlim and ylim since they are overwritten by hlines and
        # vlines
        ax.set_xlim(xlim)
        ax.set_ylim(ylim)

        # set xlabel if it is not already set
        if not ax.get_xlabel():
            ax.set_xlabel(self.feature_names[feature_idx[0]])
        ax.set_ylabel(self.feature_names[feature_idx[1]])
Plot 2-way partial dependence. Parameters ---------- avg_preds : ndarray of shape (n_instances, n_grid_points, n_grid_points) The average predictions for all points of `feature_values[0]` and `feature_values[1]` for some given features for all samples in `X`. feature_values : seq of 1d array A sequence of array of the feature values for which the predictions have been computed. feature_idx : tuple of int The indices of the target features ax : Matplotlib axes The axis on which to plot the ICE and PDP lines. pd_plot_idx : int The sequential index of the plot. It will be unraveled to find the matching 2D position in the grid layout. Z_level : ndarray of shape (8, 8) The Z-level used to encode the average predictions. contour_kw : dict Dict with keywords passed when plotting the contours. categorical : bool Whether features are categorical. heatmap_kw: dict Dict with keywords passed when plotting the PD heatmap (categorical).
_plot_two_way_partial_dependence
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/partial_dependence.py
BSD-3-Clause
def test_input_data_dimension(pyplot):
    """Check that we raise an error when `X` does not have exactly 2 features."""
    X, y = make_classification(n_samples=10, n_features=4, random_state=0)

    clf = LogisticRegression().fit(X, y)

    msg = "n_features must be equal to 2. Got 4 instead."
    with pytest.raises(ValueError, match=msg):
        DecisionBoundaryDisplay.from_estimator(estimator=clf, X=X)
Check that we raise an error when `X` does not have exactly 2 features.
test_input_data_dimension
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
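For context, here is a minimal usage sketch (not part of the test suite; the synthetic data and variable names are illustrative) showing the valid call that the test above contrasts with: `DecisionBoundaryDisplay.from_estimator` applied to a classifier trained on exactly two features.

# Hedged sketch: a valid call with a 2-feature X, so no ValueError is raised.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=100, n_features=2, n_redundant=0, random_state=0)
clf = LogisticRegression().fit(X, y)

# With a 2D feature matrix, the display computes a grid and draws the boundary.
disp = DecisionBoundaryDisplay.from_estimator(clf, X, response_method="predict")
disp.ax_.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.show()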
def test_check_boundary_response_method_error():
    """Check error raised for multi-output multi-class classifiers by
    `_check_boundary_response_method`.
    """

    class MultiLabelClassifier:
        classes_ = [np.array([0, 1]), np.array([0, 1])]

    err_msg = "Multi-label and multi-output multi-class classifiers are not supported"
    with pytest.raises(ValueError, match=err_msg):
        _check_boundary_response_method(MultiLabelClassifier(), "predict", None)
Check error raised for multi-output multi-class classifiers by `_check_boundary_response_method`.
test_check_boundary_response_method_error
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_check_boundary_response_method(
    estimator, response_method, class_of_interest, expected_prediction_method
):
    """Check the behaviour of `_check_boundary_response_method` for the supported
    cases.
    """
    prediction_method = _check_boundary_response_method(
        estimator, response_method, class_of_interest
    )
    assert prediction_method == expected_prediction_method
Check the behaviour of `_check_boundary_response_method` for the supported cases.
test_check_boundary_response_method
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multiclass_predict(pyplot): """Check multiclass `response=predict` gives expected results.""" grid_resolution = 10 eps = 1.0 X, y = make_classification(n_classes=3, n_informative=3, random_state=0) X = X[:, [0, 1]] lr = LogisticRegression(random_state=0).fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( lr, X, response_method="predict", grid_resolution=grid_resolution, eps=1.0 ) x0_min, x0_max = X[:, 0].min() - eps, X[:, 0].max() + eps x1_min, x1_max = X[:, 1].min() - eps, X[:, 1].max() + eps xx0, xx1 = np.meshgrid( np.linspace(x0_min, x0_max, grid_resolution), np.linspace(x1_min, x1_max, grid_resolution), ) response = lr.predict(np.c_[xx0.ravel(), xx1.ravel()]) assert_allclose(disp.response, response.reshape(xx0.shape)) assert_allclose(disp.xx0, xx0) assert_allclose(disp.xx1, xx1)
Check multiclass `response=predict` gives expected results.
test_multiclass_predict
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_decision_boundary_display_classifier( pyplot, fitted_clf, response_method, plot_method ): """Check that decision boundary is correct.""" fig, ax = pyplot.subplots() eps = 2.0 disp = DecisionBoundaryDisplay.from_estimator( fitted_clf, X, grid_resolution=5, response_method=response_method, plot_method=plot_method, eps=eps, ax=ax, ) assert isinstance(disp.surface_, pyplot.matplotlib.contour.QuadContourSet) assert disp.ax_ == ax assert disp.figure_ == fig x0, x1 = X[:, 0], X[:, 1] x0_min, x0_max = x0.min() - eps, x0.max() + eps x1_min, x1_max = x1.min() - eps, x1.max() + eps assert disp.xx0.min() == pytest.approx(x0_min) assert disp.xx0.max() == pytest.approx(x0_max) assert disp.xx1.min() == pytest.approx(x1_min) assert disp.xx1.max() == pytest.approx(x1_max) fig2, ax2 = pyplot.subplots() # change plotting method for second plot disp.plot(plot_method="pcolormesh", ax=ax2, shading="auto") assert isinstance(disp.surface_, pyplot.matplotlib.collections.QuadMesh) assert disp.ax_ == ax2 assert disp.figure_ == fig2
Check that decision boundary is correct.
test_decision_boundary_display_classifier
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_decision_boundary_display_outlier_detector( pyplot, response_method, plot_method ): """Check that decision boundary is correct for outlier detector.""" fig, ax = pyplot.subplots() eps = 2.0 outlier_detector = IsolationForest(random_state=0).fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( outlier_detector, X, grid_resolution=5, response_method=response_method, plot_method=plot_method, eps=eps, ax=ax, ) assert isinstance(disp.surface_, pyplot.matplotlib.contour.QuadContourSet) assert disp.ax_ == ax assert disp.figure_ == fig x0, x1 = X[:, 0], X[:, 1] x0_min, x0_max = x0.min() - eps, x0.max() + eps x1_min, x1_max = x1.min() - eps, x1.max() + eps assert disp.xx0.min() == pytest.approx(x0_min) assert disp.xx0.max() == pytest.approx(x0_max) assert disp.xx1.min() == pytest.approx(x1_min) assert disp.xx1.max() == pytest.approx(x1_max)
Check that decision boundary is correct for outlier detector.
test_decision_boundary_display_outlier_detector
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_decision_boundary_display_regressor(pyplot, response_method, plot_method): """Check that we can display the decision boundary for a regressor.""" X, y = load_diabetes(return_X_y=True) X = X[:, :2] tree = DecisionTreeRegressor().fit(X, y) fig, ax = pyplot.subplots() eps = 2.0 disp = DecisionBoundaryDisplay.from_estimator( tree, X, response_method=response_method, ax=ax, eps=eps, plot_method=plot_method, ) assert isinstance(disp.surface_, pyplot.matplotlib.contour.QuadContourSet) assert disp.ax_ == ax assert disp.figure_ == fig x0, x1 = X[:, 0], X[:, 1] x0_min, x0_max = x0.min() - eps, x0.max() + eps x1_min, x1_max = x1.min() - eps, x1.max() + eps assert disp.xx0.min() == pytest.approx(x0_min) assert disp.xx0.max() == pytest.approx(x0_max) assert disp.xx1.min() == pytest.approx(x1_min) assert disp.xx1.max() == pytest.approx(x1_max) fig2, ax2 = pyplot.subplots() # change plotting method for second plot disp.plot(plot_method="pcolormesh", ax=ax2, shading="auto") assert isinstance(disp.surface_, pyplot.matplotlib.collections.QuadMesh) assert disp.ax_ == ax2 assert disp.figure_ == fig2
Check that we can display the decision boundary for a regressor.
test_decision_boundary_display_regressor
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multilabel_classifier_error(pyplot, response_method):
    """Check that multilabel classifier raises correct error."""
    X, y = make_multilabel_classification(random_state=0)
    X = X[:, :2]
    tree = DecisionTreeClassifier().fit(X, y)

    msg = "Multi-label and multi-output multi-class classifiers are not supported"
    with pytest.raises(ValueError, match=msg):
        DecisionBoundaryDisplay.from_estimator(
            tree,
            X,
            response_method=response_method,
        )
Check that multilabel classifier raises correct error.
test_multilabel_classifier_error
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multi_output_multi_class_classifier_error(pyplot, response_method):
    """Check that multi-output multi-class classifier raises correct error."""
    X = np.asarray([[0, 1], [1, 2]])
    y = np.asarray([["tree", "cat"], ["cat", "tree"]])
    tree = DecisionTreeClassifier().fit(X, y)

    msg = "Multi-label and multi-output multi-class classifiers are not supported"
    with pytest.raises(ValueError, match=msg):
        DecisionBoundaryDisplay.from_estimator(
            tree,
            X,
            response_method=response_method,
        )
Check that multi-output multi-class classifier raises correct error.
test_multi_output_multi_class_classifier_error
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multioutput_regressor_error(pyplot):
    """Check that multioutput regressor raises correct error."""
    X = np.asarray([[0, 1], [1, 2]])
    y = np.asarray([[0, 1], [4, 1]])
    tree = DecisionTreeRegressor().fit(X, y)
    with pytest.raises(ValueError, match="Multi-output regressors are not supported"):
        DecisionBoundaryDisplay.from_estimator(tree, X, response_method="predict")
Check that multioutput regressor raises correct error.
test_multioutput_regressor_error
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_regressor_unsupported_response(pyplot, response_method):
    """Check that requesting an unsupported `response_method` for a regressor
    raises an informative error.
    """
    X, y = load_diabetes(return_X_y=True)
    X = X[:, :2]
    tree = DecisionTreeRegressor().fit(X, y)
    err_msg = "should either be a classifier to be used with response_method"
    with pytest.raises(ValueError, match=err_msg):
        DecisionBoundaryDisplay.from_estimator(
            tree, X, response_method=response_method
        )
Check that requesting an unsupported `response_method` for a regressor raises an informative error.
test_regressor_unsupported_response
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_dataframe_labels_used(pyplot, fitted_clf): """Check that column names are used for pandas.""" pd = pytest.importorskip("pandas") df = pd.DataFrame(X, columns=["col_x", "col_y"]) # pandas column names are used by default _, ax = pyplot.subplots() disp = DecisionBoundaryDisplay.from_estimator(fitted_clf, df, ax=ax) assert ax.get_xlabel() == "col_x" assert ax.get_ylabel() == "col_y" # second call to plot will have the names fig, ax = pyplot.subplots() disp.plot(ax=ax) assert ax.get_xlabel() == "col_x" assert ax.get_ylabel() == "col_y" # axes with a label will not get overridden fig, ax = pyplot.subplots() ax.set(xlabel="hello", ylabel="world") disp.plot(ax=ax) assert ax.get_xlabel() == "hello" assert ax.get_ylabel() == "world" # labels get overridden only if provided to the `plot` method disp.plot(ax=ax, xlabel="overwritten_x", ylabel="overwritten_y") assert ax.get_xlabel() == "overwritten_x" assert ax.get_ylabel() == "overwritten_y" # labels do not get inferred if provided to `from_estimator` _, ax = pyplot.subplots() disp = DecisionBoundaryDisplay.from_estimator( fitted_clf, df, ax=ax, xlabel="overwritten_x", ylabel="overwritten_y" ) assert ax.get_xlabel() == "overwritten_x" assert ax.get_ylabel() == "overwritten_y"
Check that column names are used for pandas.
test_dataframe_labels_used
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_string_target(pyplot):
    """Check that decision boundary works with classifiers trained on string labels."""
    iris = load_iris()
    X = iris.data[:, [0, 1]]

    # Use strings as target
    y = iris.target_names[iris.target]
    log_reg = LogisticRegression().fit(X, y)

    # Does not raise
    DecisionBoundaryDisplay.from_estimator(
        log_reg,
        X,
        grid_resolution=5,
        response_method="predict",
    )
Check that decision boundary works with classifiers trained on string labels.
test_string_target
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_dataframe_support(pyplot, constructor_name): """Check that passing a dataframe at fit and to the Display does not raise warnings. Non-regression test for: * https://github.com/scikit-learn/scikit-learn/issues/23311 * https://github.com/scikit-learn/scikit-learn/issues/28717 """ df = _convert_container( X, constructor_name=constructor_name, columns_name=["col_x", "col_y"] ) estimator = LogisticRegression().fit(df, y) with warnings.catch_warnings(): # no warnings linked to feature names validation should be raised warnings.simplefilter("error", UserWarning) DecisionBoundaryDisplay.from_estimator(estimator, df, response_method="predict")
Check that passing a dataframe at fit and to the Display does not raise warnings. Non-regression test for: * https://github.com/scikit-learn/scikit-learn/issues/23311 * https://github.com/scikit-learn/scikit-learn/issues/28717
test_dataframe_support
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_class_of_interest_binary(pyplot, response_method): """Check the behaviour of passing `class_of_interest` for plotting the output of `predict_proba` and `decision_function` in the binary case. """ iris = load_iris() X = iris.data[:100, :2] y = iris.target[:100] assert_array_equal(np.unique(y), [0, 1]) estimator = LogisticRegression().fit(X, y) # We will check that `class_of_interest=None` is equivalent to # `class_of_interest=estimator.classes_[1]` disp_default = DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=None, ) disp_class_1 = DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=estimator.classes_[1], ) assert_allclose(disp_default.response, disp_class_1.response) # we can check that `_get_response_values` modifies the response when targeting # the other class, i.e. 1 - p(y=1|x) for `predict_proba` and -decision_function # for `decision_function`. disp_class_0 = DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=estimator.classes_[0], ) if response_method == "predict_proba": assert_allclose(disp_default.response, 1 - disp_class_0.response) else: assert response_method == "decision_function" assert_allclose(disp_default.response, -disp_class_0.response)
Check the behaviour of passing `class_of_interest` for plotting the output of `predict_proba` and `decision_function` in the binary case.
test_class_of_interest_binary
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_class_of_interest_multiclass(pyplot, response_method): """Check the behaviour of passing `class_of_interest` for plotting the output of `predict_proba` and `decision_function` in the multiclass case. """ iris = load_iris() X = iris.data[:, :2] y = iris.target # the target are numerical labels class_of_interest_idx = 2 estimator = LogisticRegression().fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=class_of_interest_idx, ) # we will check that we plot the expected values as response grid = np.concatenate([disp.xx0.reshape(-1, 1), disp.xx1.reshape(-1, 1)], axis=1) response = getattr(estimator, response_method)(grid)[:, class_of_interest_idx] assert_allclose(response.reshape(*disp.response.shape), disp.response) # make the same test but this time using target as strings y = iris.target_names[iris.target] estimator = LogisticRegression().fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=iris.target_names[class_of_interest_idx], ) grid = np.concatenate([disp.xx0.reshape(-1, 1), disp.xx1.reshape(-1, 1)], axis=1) response = getattr(estimator, response_method)(grid)[:, class_of_interest_idx] assert_allclose(response.reshape(*disp.response.shape), disp.response) # check that we raise an error for unknown labels # this test should already be handled in `_get_response_values` but we can have this # test here as well err_msg = "class_of_interest=2 is not a valid label: It should be one of" with pytest.raises(ValueError, match=err_msg): DecisionBoundaryDisplay.from_estimator( estimator, X, response_method=response_method, class_of_interest=class_of_interest_idx, )
Check the behaviour of passing `class_of_interest` for plotting the output of `predict_proba` and `decision_function` in the multiclass case.
test_class_of_interest_multiclass
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multiclass_plot_max_class(pyplot, response_method): """Check plot correct when plotting max multiclass class.""" import matplotlib as mpl # In matplotlib < v3.5, default value of `pcolormesh(shading)` is 'flat', which # results in the last row and column being dropped. Thus older versions produce # a 99x99 grid, while newer versions produce a 100x100 grid. if parse_version(mpl.__version__) < parse_version("3.5"): pytest.skip("`pcolormesh` in Matplotlib >= 3.5 gives smaller grid size.") X, y = load_iris_2d_scaled() clf = LogisticRegression().fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( clf, X, plot_method="pcolormesh", response_method=response_method, ) grid = np.concatenate([disp.xx0.reshape(-1, 1), disp.xx1.reshape(-1, 1)], axis=1) response = getattr(clf, response_method)(grid).reshape(*disp.response.shape) assert_allclose(response, disp.response) assert len(disp.surface_) == len(clf.classes_) # Get which class has highest response and check it is plotted highest_class = np.argmax(response, axis=2) for idx, quadmesh in enumerate(disp.surface_): # Note quadmesh mask is True (i.e. masked) when `idx` is NOT the highest class assert_array_equal( highest_class != idx, quadmesh.get_array().mask.reshape(*highest_class.shape), )
Check plot correct when plotting max multiclass class.
test_multiclass_plot_max_class
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multiclass_colors_cmap(pyplot, plot_method, multiclass_colors): """Check correct cmap used for all `multiclass_colors` inputs.""" import matplotlib as mpl if parse_version(mpl.__version__) < parse_version("3.5"): pytest.skip( "Matplotlib >= 3.5 is needed for `==` to check equivalence of colormaps" ) X, y = load_iris_2d_scaled() clf = LogisticRegression().fit(X, y) disp = DecisionBoundaryDisplay.from_estimator( clf, X, plot_method=plot_method, multiclass_colors=multiclass_colors, ) if multiclass_colors == "plasma": colors = mpl.pyplot.get_cmap(multiclass_colors, len(clf.classes_)).colors else: colors = [mpl.colors.to_rgba(color) for color in multiclass_colors] cmaps = [ mpl.colors.LinearSegmentedColormap.from_list( f"colormap_{class_idx}", [(1.0, 1.0, 1.0, 1.0), (r, g, b, 1.0)] ) for class_idx, (r, g, b, _) in enumerate(colors) ] for idx, quad in enumerate(disp.surface_): assert quad.cmap == cmaps[idx]
Check correct cmap used for all `multiclass_colors` inputs.
test_multiclass_colors_cmap
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_multiclass_plot_max_class_cmap_kwarg(pyplot):
    """Check `cmap` kwarg ignored when using plotting max multiclass class."""
    X, y = load_iris_2d_scaled()
    clf = LogisticRegression().fit(X, y)

    msg = (
        "Plotting max class of multiclass 'decision_function' or 'predict_proba', "
        "thus 'multiclass_colors' used and 'cmap' kwarg ignored."
    )
    with pytest.warns(UserWarning, match=msg):
        DecisionBoundaryDisplay.from_estimator(clf, X, cmap="viridis")
Check `cmap` kwarg ignored when using plotting max multiclass class.
test_multiclass_plot_max_class_cmap_kwarg
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_subclass_named_constructors_return_type_is_subclass(pyplot):
    """Check that named constructors return the correct type when subclassed.

    Non-regression test for:
    https://github.com/scikit-learn/scikit-learn/pull/27675
    """
    clf = LogisticRegression().fit(X, y)

    class SubclassOfDisplay(DecisionBoundaryDisplay):
        pass

    curve = SubclassOfDisplay.from_estimator(estimator=clf, X=X)

    assert isinstance(curve, SubclassOfDisplay)
Check that named constructors return the correct type when subclassed. Non-regression test for: https://github.com/scikit-learn/scikit-learn/pull/27675
test_subclass_named_constructors_return_type_is_subclass
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_boundary_decision_display.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_boundary_decision_display.py
BSD-3-Clause
def test_partial_dependence_overwrite_labels( pyplot, clf_diabetes, diabetes, kind, line_kw, label, ): """Test that make sure that we can overwrite the label of the PDP plot""" disp = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, [0, 2], grid_resolution=25, feature_names=diabetes.feature_names, kind=kind, line_kw=line_kw, ) for ax in disp.axes_.ravel(): if label is None: assert ax.get_legend() is None else: legend_text = ax.get_legend().get_texts() assert len(legend_text) == 1 assert legend_text[0].get_text() == label
Test that make sure that we can overwrite the label of the PDP plot
test_partial_dependence_overwrite_labels
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_grid_resolution_with_categorical(pyplot, categorical_features, array_type): """Check that we raise a ValueError when the grid_resolution is too small respect to the number of categories in the categorical features targeted. """ X = [["A", 1, "A"], ["B", 0, "C"], ["C", 2, "B"]] column_name = ["col_A", "col_B", "col_C"] X = _convert_container(X, array_type, columns_name=column_name) y = np.array([1.2, 0.5, 0.45]).T preprocessor = make_column_transformer((OneHotEncoder(), categorical_features)) model = make_pipeline(preprocessor, LinearRegression()) model.fit(X, y) err_msg = ( "resolution of the computed grid is less than the minimum number of categories" ) with pytest.raises(ValueError, match=err_msg): PartialDependenceDisplay.from_estimator( model, X, features=["col_C"], feature_names=column_name, categorical_features=categorical_features, grid_resolution=2, )
Check that we raise a ValueError when the grid_resolution is too small respect to the number of categories in the categorical features targeted.
test_grid_resolution_with_categorical
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
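As a complement to the test above, a hedged sketch of a call that satisfies the constraint being checked: the grid resolution must be at least the number of categories of the targeted categorical feature. The tiny DataFrame and column names mirror the test and are purely illustrative.

# Sketch (assumed setup): "col_C" has 3 categories, so grid_resolution=3 is accepted,
# while grid_resolution=2 would raise the ValueError exercised by the test.
import numpy as np
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.inspection import PartialDependenceDisplay
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

X = pd.DataFrame(
    [["A", 1, "A"], ["B", 0, "C"], ["C", 2, "B"]],
    columns=["col_A", "col_B", "col_C"],
)
y = np.array([1.2, 0.5, 0.45])

model = make_pipeline(
    make_column_transformer((OneHotEncoder(), ["col_A", "col_C"])),
    LinearRegression(),
).fit(X, y)

PartialDependenceDisplay.from_estimator(
    model,
    X,
    features=["col_C"],
    categorical_features=["col_A", "col_C"],
    grid_resolution=3,  # >= number of categories, so no error is raised
)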
def test_partial_dependence_kind_list( pyplot, clf_diabetes, diabetes, ): """Check that we can provide a list of strings to kind parameter.""" matplotlib = pytest.importorskip("matplotlib") disp = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, features=[0, 2, (1, 2)], grid_resolution=20, kind=["both", "both", "average"], ) for idx in [0, 1]: assert all( [ isinstance(line, matplotlib.lines.Line2D) for line in disp.lines_[0, idx].ravel() ] ) assert disp.contours_[0, idx] is None assert disp.contours_[0, 2] is not None assert all([line is None for line in disp.lines_[0, 2].ravel()])
Check that we can provide a list of strings to kind parameter.
test_partial_dependence_kind_list
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_partial_dependence_kind_error( pyplot, clf_diabetes, diabetes, features, kind, ): """Check that we raise an informative error when 2-way PD is requested together with 1-way PD/ICE""" warn_msg = ( "ICE plot cannot be rendered for 2-way feature interactions. 2-way " "feature interactions mandates PD plots using the 'average' kind" ) with pytest.raises(ValueError, match=warn_msg): PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, features=features, grid_resolution=20, kind=kind, )
Check that we raise an informative error when 2-way PD is requested together with 1-way PD/ICE
test_partial_dependence_kind_error
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_plot_partial_dependence_lines_kw( pyplot, clf_diabetes, diabetes, line_kw, pd_line_kw, ice_lines_kw, expected_colors, ): """Check that passing `pd_line_kw` and `ice_lines_kw` will act on the specific lines in the plot. """ disp = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, [0, 2], grid_resolution=20, feature_names=diabetes.feature_names, n_cols=2, kind="both", line_kw=line_kw, pd_line_kw=pd_line_kw, ice_lines_kw=ice_lines_kw, ) line = disp.lines_[0, 0, -1] assert line.get_color() == expected_colors[0], ( f"{line.get_color()}!={expected_colors[0]}\n{line_kw} and {pd_line_kw}" ) if pd_line_kw is not None: if "linestyle" in pd_line_kw: assert line.get_linestyle() == pd_line_kw["linestyle"] elif "ls" in pd_line_kw: assert line.get_linestyle() == pd_line_kw["ls"] else: assert line.get_linestyle() == "--" line = disp.lines_[0, 0, 0] assert line.get_color() == expected_colors[1], ( f"{line.get_color()}!={expected_colors[1]}" ) if ice_lines_kw is not None: if "linestyle" in ice_lines_kw: assert line.get_linestyle() == ice_lines_kw["linestyle"] elif "ls" in ice_lines_kw: assert line.get_linestyle() == ice_lines_kw["ls"] else: assert line.get_linestyle() == "-"
Check that passing `pd_line_kw` and `ice_lines_kw` will act on the specific lines in the plot.
test_plot_partial_dependence_lines_kw
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_partial_dependence_display_wrong_len_kind( pyplot, clf_diabetes, diabetes, ): """Check that we raise an error when `kind` is a list with a wrong length. This case can only be triggered using the `PartialDependenceDisplay.from_estimator` method. """ disp = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, features=[0, 2], grid_resolution=20, kind="average", # len(kind) != len(features) ) # alter `kind` to be a list with a length different from length of `features` disp.kind = ["average"] err_msg = ( r"When `kind` is provided as a list of strings, it should contain as many" r" elements as `features`. `kind` contains 1 element\(s\) and `features`" r" contains 2 element\(s\)." ) with pytest.raises(ValueError, match=err_msg): disp.plot()
Check that we raise an error when `kind` is a list with a wrong length. This case can only be triggered using the `PartialDependenceDisplay.from_estimator` method.
test_partial_dependence_display_wrong_len_kind
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_partial_dependence_display_kind_centered_interaction(
    pyplot,
    kind,
    clf_diabetes,
    diabetes,
):
    """Check that we properly center ICE and PD when passing kind as a string and as a
    list."""
    disp = PartialDependenceDisplay.from_estimator(
        clf_diabetes,
        diabetes.data,
        [0, 1],
        kind=kind,
        centered=True,
        subsample=5,
    )

    assert all([ln._y[0] == 0.0 for ln in disp.lines_.ravel() if ln is not None])
Check that we properly center ICE and PD when passing kind as a string and as a list.
test_partial_dependence_display_kind_centered_interaction
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_partial_dependence_display_with_constant_sample_weight( pyplot, clf_diabetes, diabetes, ): """Check that the utilization of a constant sample weight maintains the standard behavior. """ disp = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, [0, 1], kind="average", method="brute", ) sample_weight = np.ones_like(diabetes.target) disp_sw = PartialDependenceDisplay.from_estimator( clf_diabetes, diabetes.data, [0, 1], sample_weight=sample_weight, kind="average", method="brute", ) assert np.array_equal( disp.pd_results[0]["average"], disp_sw.pd_results[0]["average"] )
Check that the utilization of a constant sample weight maintains the standard behavior.
test_partial_dependence_display_with_constant_sample_weight
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def test_subclass_named_constructors_return_type_is_subclass(
    pyplot, diabetes, clf_diabetes
):
    """Check that named constructors return the correct type when subclassed.

    Non-regression test for:
    https://github.com/scikit-learn/scikit-learn/pull/27675
    """

    class SubclassOfDisplay(PartialDependenceDisplay):
        pass

    curve = SubclassOfDisplay.from_estimator(
        clf_diabetes,
        diabetes.data,
        [0, 2, (0, 2)],
    )

    assert isinstance(curve, SubclassOfDisplay)
Check that named constructors return the correct type when subclassed. Non-regression test for: https://github.com/scikit-learn/scikit-learn/pull/27675
test_subclass_named_constructors_return_type_is_subclass
python
scikit-learn/scikit-learn
sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/inspection/_plot/tests/test_plot_partial_dependence.py
BSD-3-Clause
def make_dataset(X, y, sample_weight, random_state=None): """Create ``Dataset`` abstraction for sparse and dense inputs. This also returns the ``intercept_decay`` which is different for sparse datasets. Parameters ---------- X : array-like, shape (n_samples, n_features) Training data y : array-like, shape (n_samples, ) Target values. sample_weight : numpy array of shape (n_samples,) The weight of each sample random_state : int, RandomState instance or None (default) Determines random number generation for dataset random sampling. It is not used for dataset shuffling. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- dataset The ``Dataset`` abstraction intercept_decay The intercept decay """ rng = check_random_state(random_state) # seed should never be 0 in SequentialDataset64 seed = rng.randint(1, np.iinfo(np.int32).max) if X.dtype == np.float32: CSRData = CSRDataset32 ArrayData = ArrayDataset32 else: CSRData = CSRDataset64 ArrayData = ArrayDataset64 if sp.issparse(X): dataset = CSRData(X.data, X.indptr, X.indices, y, sample_weight, seed=seed) intercept_decay = SPARSE_INTERCEPT_DECAY else: X = np.ascontiguousarray(X) dataset = ArrayData(X, y, sample_weight, seed=seed) intercept_decay = 1.0 return dataset, intercept_decay
Create ``Dataset`` abstraction for sparse and dense inputs. This also returns the ``intercept_decay`` which is different for sparse datasets. Parameters ---------- X : array-like, shape (n_samples, n_features) Training data y : array-like, shape (n_samples, ) Target values. sample_weight : numpy array of shape (n_samples,) The weight of each sample random_state : int, RandomState instance or None (default) Determines random number generation for dataset random sampling. It is not used for dataset shuffling. Pass an int for reproducible output across multiple function calls. See :term:`Glossary <random_state>`. Returns ------- dataset The ``Dataset`` abstraction intercept_decay The intercept decay
make_dataset
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def _preprocess_data( X, y, *, fit_intercept, copy=True, copy_y=True, sample_weight=None, check_input=True, ): """Common data preprocessing for fitting linear models. This helper is in charge of the following steps: - Ensure that `sample_weight` is an array or `None`. - If `check_input=True`, perform standard input validation of `X`, `y`. - Perform copies if requested to avoid side-effects in case of inplace modifications of the input. Then, if `fit_intercept=True` this preprocessing centers both `X` and `y` as follows: - if `X` is dense, center the data and store the mean vector in `X_offset`. - if `X` is sparse, store the mean in `X_offset` without centering `X`. The centering is expected to be handled by the linear solver where appropriate. - in either case, always center `y` and store the mean in `y_offset`. - both `X_offset` and `y_offset` are always weighted by `sample_weight` if not set to `None`. If `fit_intercept=False`, no centering is performed and `X_offset`, `y_offset` are set to zero. Returns ------- X_out : {ndarray, sparse matrix} of shape (n_samples, n_features) If copy=True a copy of the input X is triggered, otherwise operations are inplace. If input X is dense, then X_out is centered. y_out : {ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Centered version of y. Possibly performed inplace on input y depending on the copy_y parameter. X_offset : ndarray of shape (n_features,) The mean per column of input X. y_offset : float or ndarray of shape (n_features,) X_scale : ndarray of shape (n_features,) Always an array of ones. TODO: refactor the code base to make it possible to remove this unused variable. """ xp, _, device_ = get_namespace_and_device(X, y, sample_weight) n_samples, n_features = X.shape X_is_sparse = sp.issparse(X) if isinstance(sample_weight, numbers.Number): sample_weight = None if sample_weight is not None: sample_weight = xp.asarray(sample_weight) if check_input: X = check_array( X, copy=copy, accept_sparse=["csr", "csc"], dtype=supported_float_dtypes(xp) ) y = check_array(y, dtype=X.dtype, copy=copy_y, ensure_2d=False) else: y = xp.astype(y, X.dtype, copy=copy_y) if copy: if X_is_sparse: X = X.copy() else: X = _asarray_with_order(X, order="K", copy=True, xp=xp) dtype_ = X.dtype if fit_intercept: if X_is_sparse: X_offset, X_var = mean_variance_axis(X, axis=0, weights=sample_weight) else: X_offset = _average(X, axis=0, weights=sample_weight, xp=xp) X_offset = xp.astype(X_offset, X.dtype, copy=False) X -= X_offset y_offset = _average(y, axis=0, weights=sample_weight, xp=xp) y -= y_offset else: X_offset = xp.zeros(n_features, dtype=X.dtype, device=device_) if y.ndim == 1: y_offset = xp.asarray(0.0, dtype=dtype_, device=device_) else: y_offset = xp.zeros(y.shape[1], dtype=dtype_, device=device_) # XXX: X_scale is no longer needed. It is an historic artifact from the # time where linear model exposed the normalize parameter. X_scale = xp.ones(n_features, dtype=X.dtype, device=device_) return X, y, X_offset, y_offset, X_scale
Common data preprocessing for fitting linear models. This helper is in charge of the following steps: - Ensure that `sample_weight` is an array or `None`. - If `check_input=True`, perform standard input validation of `X`, `y`. - Perform copies if requested to avoid side-effects in case of inplace modifications of the input. Then, if `fit_intercept=True` this preprocessing centers both `X` and `y` as follows: - if `X` is dense, center the data and store the mean vector in `X_offset`. - if `X` is sparse, store the mean in `X_offset` without centering `X`. The centering is expected to be handled by the linear solver where appropriate. - in either case, always center `y` and store the mean in `y_offset`. - both `X_offset` and `y_offset` are always weighted by `sample_weight` if not set to `None`. If `fit_intercept=False`, no centering is performed and `X_offset`, `y_offset` are set to zero. Returns ------- X_out : {ndarray, sparse matrix} of shape (n_samples, n_features) If copy=True a copy of the input X is triggered, otherwise operations are inplace. If input X is dense, then X_out is centered. y_out : {ndarray, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Centered version of y. Possibly performed inplace on input y depending on the copy_y parameter. X_offset : ndarray of shape (n_features,) The mean per column of input X. y_offset : float or ndarray of shape (n_features,) X_scale : ndarray of shape (n_features,) Always an array of ones. TODO: refactor the code base to make it possible to remove this unused variable.
_preprocess_data
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def _rescale_data(X, y, sample_weight, inplace=False):
    """Rescale data sample-wise by square root of sample_weight.

    For many linear models, this enables easy support for sample_weight because

        (y - X w)' S (y - X w)

    with S = diag(sample_weight) becomes

        ||y_rescaled - X_rescaled w||_2^2

    when setting

        y_rescaled = sqrt(S) y
        X_rescaled = sqrt(S) X

    Returns
    -------
    X_rescaled : {array-like, sparse matrix}

    y_rescaled : {array-like, sparse matrix}
    """
    # Assume that _validate_data and _check_sample_weight have been called by
    # the caller.
    xp, _ = get_namespace(X, y, sample_weight)
    n_samples = X.shape[0]
    sample_weight_sqrt = xp.sqrt(sample_weight)

    if sp.issparse(X) or sp.issparse(y):
        sw_matrix = sparse.dia_matrix(
            (sample_weight_sqrt, 0), shape=(n_samples, n_samples)
        )

    if sp.issparse(X):
        X = safe_sparse_dot(sw_matrix, X)
    else:
        if inplace:
            X *= sample_weight_sqrt[:, None]
        else:
            X = X * sample_weight_sqrt[:, None]

    if sp.issparse(y):
        y = safe_sparse_dot(sw_matrix, y)
    else:
        if inplace:
            if y.ndim == 1:
                y *= sample_weight_sqrt
            else:
                y *= sample_weight_sqrt[:, None]
        else:
            if y.ndim == 1:
                y = y * sample_weight_sqrt
            else:
                y = y * sample_weight_sqrt[:, None]
    return X, y, sample_weight_sqrt
Rescale data sample-wise by square root of sample_weight. For many linear models, this enables easy support for sample_weight because (y - X w)' S (y - X w) with S = diag(sample_weight) becomes ||y_rescaled - X_rescaled w||_2^2 when setting y_rescaled = sqrt(S) y X_rescaled = sqrt(S) X Returns ------- X_rescaled : {array-like, sparse matrix} y_rescaled : {array-like, sparse matrix}
_rescale_data
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
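The identity stated in the docstring above can be checked numerically. A small self-contained sketch (plain NumPy, not using the private helper itself) verifying that the weighted residual sum of squares equals the unweighted one after rescaling by sqrt(sample_weight):

# Numerical check of (y - Xw)' S (y - Xw) == ||sqrt(S) y - sqrt(S) X w||^2.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(20, 3)
y = rng.randn(20)
w = rng.randn(3)
sample_weight = rng.uniform(0.5, 2.0, size=20)

residual = y - X @ w
weighted_rss = residual @ (sample_weight * residual)

sqrt_sw = np.sqrt(sample_weight)
X_rescaled = X * sqrt_sw[:, None]
y_rescaled = y * sqrt_sw
rescaled_rss = np.sum((y_rescaled - X_rescaled @ w) ** 2)

assert np.isclose(weighted_rss, rescaled_rss)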
def decision_function(self, X):
    """
    Predict confidence scores for samples.

    The confidence score for a sample is proportional to the signed
    distance of that sample to the hyperplane.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The data matrix for which we want to get the confidence scores.

    Returns
    -------
    scores : ndarray of shape (n_samples,) or (n_samples, n_classes)
        Confidence scores per `(n_samples, n_classes)` combination. In the
        binary case, confidence score for `self.classes_[1]` where >0 means
        this class would be predicted.
    """
    check_is_fitted(self)
    xp, _ = get_namespace(X)

    X = validate_data(self, X, accept_sparse="csr", reset=False)
    scores = safe_sparse_dot(X, self.coef_.T, dense_output=True) + self.intercept_
    return (
        xp.reshape(scores, (-1,))
        if (scores.ndim > 1 and scores.shape[1] == 1)
        else scores
    )
Predict confidence scores for samples. The confidence score for a sample is proportional to the signed distance of that sample to the hyperplane. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data matrix for which we want to get the confidence scores. Returns ------- scores : ndarray of shape (n_samples,) or (n_samples, n_classes) Confidence scores per `(n_samples, n_classes)` combination. In the binary case, confidence score for `self.classes_[1]` where >0 means this class would be predicted.
decision_function
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def predict(self, X):
    """
    Predict class labels for samples in X.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        The data matrix for which we want to get the predictions.

    Returns
    -------
    y_pred : ndarray of shape (n_samples,)
        Vector containing the class labels for each sample.
    """
    xp, _ = get_namespace(X)
    scores = self.decision_function(X)
    if len(scores.shape) == 1:
        indices = xp.astype(scores > 0, indexing_dtype(xp))
    else:
        indices = xp.argmax(scores, axis=1)

    return xp.take(self.classes_, indices, axis=0)
Predict class labels for samples in X. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) The data matrix for which we want to get the predictions. Returns ------- y_pred : ndarray of shape (n_samples,) Vector containing the class labels for each sample.
predict
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
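A brief usage sketch tying the two methods above together (the synthetic data is illustrative): in the binary case, `predict` is equivalent to thresholding `decision_function` at zero and mapping the result through `classes_`.

# Hedged sketch: decision_function scores versus predicted labels for a binary model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=50, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

scores = clf.decision_function(X)  # shape (n_samples,), signed distance to hyperplane
pred = clf.predict(X)              # class labels

# predict == classes_[score > 0] in the binary case
assert np.array_equal(pred, clf.classes_[(scores > 0).astype(int)])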
def _predict_proba_lr(self, X):
    """Probability estimation for OvR logistic regression.

    Positive class probabilities are computed as
    1. / (1. + np.exp(-self.decision_function(X)));
    multiclass is handled by normalizing that over all classes.
    """
    prob = self.decision_function(X)
    expit(prob, out=prob)
    if prob.ndim == 1:
        return np.vstack([1 - prob, prob]).T
    else:
        # OvR normalization, like LibLinear's predict_probability
        prob /= prob.sum(axis=1).reshape((prob.shape[0], -1))
        return prob
Probability estimation for OvR logistic regression. Positive class probabilities are computed as 1. / (1. + np.exp(-self.decision_function(X))); multiclass is handled by normalizing that over all classes.
_predict_proba_lr
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def densify(self):
    """
    Convert coefficient matrix to dense array format.

    Converts the ``coef_`` member (back) to a numpy.ndarray. This is the
    default format of ``coef_`` and is required for fitting, so calling
    this method is only required on models that have previously been
    sparsified; otherwise, it is a no-op.

    Returns
    -------
    self
        Fitted estimator.
    """
    msg = "Estimator, %(name)s, must be fitted before densifying."
    check_is_fitted(self, msg=msg)
    if sp.issparse(self.coef_):
        self.coef_ = self.coef_.toarray()
    return self
Convert coefficient matrix to dense array format. Converts the ``coef_`` member (back) to a numpy.ndarray. This is the default format of ``coef_`` and is required for fitting, so calling this method is only required on models that have previously been sparsified; otherwise, it is a no-op. Returns ------- self Fitted estimator.
densify
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def sparsify(self):
    """
    Convert coefficient matrix to sparse format.

    Converts the ``coef_`` member to a scipy.sparse matrix, which for
    L1-regularized models can be much more memory- and storage-efficient
    than the usual numpy.ndarray representation.

    The ``intercept_`` member is not converted.

    Returns
    -------
    self
        Fitted estimator.

    Notes
    -----
    For non-sparse models, i.e. when there are not many zeros in ``coef_``,
    this may actually *increase* memory usage, so use this method with
    care. A rule of thumb is that the number of zero elements, which can
    be computed with ``(coef_ == 0).sum()``, must be more than 50% for this
    to provide significant benefits.

    After calling this method, further fitting with the partial_fit
    method (if any) will not work until you call densify.
    """
    msg = "Estimator, %(name)s, must be fitted before sparsifying."
    check_is_fitted(self, msg=msg)
    self.coef_ = sp.csr_matrix(self.coef_)
    return self
Convert coefficient matrix to sparse format. Converts the ``coef_`` member to a scipy.sparse matrix, which for L1-regularized models can be much more memory- and storage-efficient than the usual numpy.ndarray representation. The ``intercept_`` member is not converted. Returns ------- self Fitted estimator. Notes ----- For non-sparse models, i.e. when there are not many zeros in ``coef_``, this may actually *increase* memory usage, so use this method with care. A rule of thumb is that the number of zero elements, which can be computed with ``(coef_ == 0).sum()``, must be more than 50% for this to provide significant benefits. After calling this method, further fitting with the partial_fit method (if any) will not work until you call densify.
sparsify
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
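A short usage sketch for the method above (illustrative data and hyperparameters): fit an L1-regularized model, check the zero fraction mentioned in the Notes, then switch between sparse and dense coefficient storage.

# Hedged sketch: sparsify/densify round trip on an L1-penalized logistic regression.
import scipy.sparse as sp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=50, n_informative=5, random_state=0)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)

zero_fraction = (clf.coef_ == 0).mean()  # rule of thumb: worthwhile above ~50%
clf.sparsify()
assert sp.issparse(clf.coef_)
clf.densify()  # back to a dense ndarray, e.g. before any further fitting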
def fit(self, X, y, sample_weight=None): """ Fit linear model. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X's dtype if necessary. sample_weight : array-like of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.17 parameter *sample_weight* support to LinearRegression. Returns ------- self : object Fitted Estimator. """ n_jobs_ = self.n_jobs accept_sparse = False if self.positive else ["csr", "csc", "coo"] X, y = validate_data( self, X, y, accept_sparse=accept_sparse, y_numeric=True, multi_output=True, force_writeable=True, ) has_sw = sample_weight is not None if has_sw: sample_weight = _check_sample_weight( sample_weight, X, dtype=X.dtype, ensure_non_negative=True ) # Note that neither _rescale_data nor the rest of the fit method of # LinearRegression can benefit from in-place operations when X is a # sparse matrix. Therefore, let's not copy X when it is sparse. copy_X_in_preprocess_data = self.copy_X and not sp.issparse(X) X, y, X_offset, y_offset, X_scale = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=copy_X_in_preprocess_data, sample_weight=sample_weight, ) if has_sw: # Sample weight can be implemented via a simple rescaling. Note # that we safely do inplace rescaling when _preprocess_data has # already made a copy if requested. X, y, sample_weight_sqrt = _rescale_data( X, y, sample_weight, inplace=copy_X_in_preprocess_data ) if self.positive: if y.ndim < 2: self.coef_ = optimize.nnls(X, y)[0] else: # scipy.optimize.nnls cannot handle y with shape (M, K) outs = Parallel(n_jobs=n_jobs_)( delayed(optimize.nnls)(X, y[:, j]) for j in range(y.shape[1]) ) self.coef_ = np.vstack([out[0] for out in outs]) elif sp.issparse(X): X_offset_scale = X_offset / X_scale if has_sw: def matvec(b): return X.dot(b) - sample_weight_sqrt * b.dot(X_offset_scale) def rmatvec(b): return X.T.dot(b) - X_offset_scale * b.dot(sample_weight_sqrt) else: def matvec(b): return X.dot(b) - b.dot(X_offset_scale) def rmatvec(b): return X.T.dot(b) - X_offset_scale * b.sum() X_centered = sparse.linalg.LinearOperator( shape=X.shape, matvec=matvec, rmatvec=rmatvec ) if y.ndim < 2: self.coef_ = lsqr(X_centered, y, atol=self.tol, btol=self.tol)[0] else: # sparse_lstsq cannot handle y with shape (M, K) outs = Parallel(n_jobs=n_jobs_)( delayed(lsqr)( X_centered, y[:, j].ravel(), atol=self.tol, btol=self.tol ) for j in range(y.shape[1]) ) self.coef_ = np.vstack([out[0] for out in outs]) else: # cut-off ratio for small singular values cond = max(X.shape) * np.finfo(X.dtype).eps self.coef_, _, self.rank_, self.singular_ = linalg.lstsq(X, y, cond=cond) self.coef_ = self.coef_.T if y.ndim == 1: self.coef_ = np.ravel(self.coef_) self._set_intercept(X_offset, y_offset, X_scale) return self
Fit linear model. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X's dtype if necessary. sample_weight : array-like of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.17 parameter *sample_weight* support to LinearRegression. Returns ------- self : object Fitted Estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
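A short, hedged sketch of calling the ``fit`` shown above with per-sample weights and with the ``positive=True`` branch that dispatches to ``scipy.optimize.nnls``; the toy data is made up for illustration:

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([1.9, 4.1, 6.0, 8.2])
w = np.array([1.0, 1.0, 1.0, 3.0])  # up-weight the last sample

reg = LinearRegression().fit(X, y, sample_weight=w)
print(reg.coef_, reg.intercept_)

# positive=True constrains all coefficients to be non-negative (solved via NNLS).
reg_pos = LinearRegression(positive=True).fit(X, y)
print(reg_pos.coef_)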
def _check_precomputed_gram_matrix(
    X, precompute, X_offset, X_scale, rtol=None, atol=1e-5
):
    """Computes a single element of the gram matrix and compares it to
    the corresponding element of the user-supplied gram matrix.

    If the values do not match, a ValueError is raised.

    Parameters
    ----------
    X : ndarray of shape (n_samples, n_features)
        Data array.

    precompute : array-like of shape (n_features, n_features)
        User-supplied gram matrix.

    X_offset : ndarray of shape (n_features,)
        Array of feature means used to center design matrix.

    X_scale : ndarray of shape (n_features,)
        Array of feature scale factors used to normalize design matrix.

    rtol : float, default=None
        Relative tolerance; see numpy.allclose. If None, it is set to 1e-4
        for arrays of dtype numpy.float32 and 1e-7 otherwise.

    atol : float, default=1e-5
        Absolute tolerance; see :func:`numpy.allclose`. Note that the default
        here is more tolerant than the default for
        :func:`numpy.testing.assert_allclose`, where `atol=0`.

    Raises
    ------
    ValueError
        Raised when the provided Gram matrix is not consistent.
    """
    n_features = X.shape[1]
    f1 = n_features // 2
    f2 = min(f1 + 1, n_features - 1)

    v1 = (X[:, f1] - X_offset[f1]) * X_scale[f1]
    v2 = (X[:, f2] - X_offset[f2]) * X_scale[f2]

    expected = np.dot(v1, v2)
    actual = precompute[f1, f2]

    dtypes = [precompute.dtype, expected.dtype]
    if rtol is None:
        rtols = [1e-4 if dtype == np.float32 else 1e-7 for dtype in dtypes]
        rtol = max(rtols)

    if not np.isclose(expected, actual, rtol=rtol, atol=atol):
        raise ValueError(
            "Gram matrix passed in via 'precompute' parameter "
            "did not pass validation when a single element was "
            "checked - please check that it was computed "
            f"properly. For element ({f1},{f2}) we computed "
            f"{expected} but the user-supplied value was "
            f"{actual}."
        )
Computes a single element of the gram matrix and compares it to the corresponding element of the user-supplied gram matrix. If the values do not match, a ValueError is raised. Parameters ---------- X : ndarray of shape (n_samples, n_features) Data array. precompute : array-like of shape (n_features, n_features) User-supplied gram matrix. X_offset : ndarray of shape (n_features,) Array of feature means used to center design matrix. X_scale : ndarray of shape (n_features,) Array of feature scale factors used to normalize design matrix. rtol : float, default=None Relative tolerance; see numpy.allclose. If None, it is set to 1e-4 for arrays of dtype numpy.float32 and 1e-7 otherwise. atol : float, default=1e-5 Absolute tolerance; see :func:`numpy.allclose`. Note that the default here is more tolerant than the default for :func:`numpy.testing.assert_allclose`, where `atol=0`. Raises ------ ValueError Raised when the provided Gram matrix is not consistent.
_check_precomputed_gram_matrix
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
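To make the consistency requirement concrete, here is a small sketch; it relies on the private helper, so the import path is an assumption that may change between versions. The Gram matrix must be computed on the centered, scaled design matrix, otherwise the spot check raises:

import numpy as np
from sklearn.linear_model._base import _check_precomputed_gram_matrix  # private API

rng = np.random.RandomState(0)
X = rng.rand(20, 4)
X_offset = X.mean(axis=0)
X_scale = np.ones(X.shape[1])

# Consistent: Gram computed from the centered (and here unscaled) design matrix.
Xc = (X - X_offset) * X_scale
_check_precomputed_gram_matrix(X, Xc.T @ Xc, X_offset, X_scale)  # passes silently

# Inconsistent: an uncentered Gram matrix fails the single-element check.
try:
    _check_precomputed_gram_matrix(X, X.T @ X, X_offset, X_scale)
except ValueError as exc:
    print(exc)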
def _pre_fit( X, y, Xy, precompute, fit_intercept, copy, check_input=True, sample_weight=None, ): """Function used at beginning of fit in linear models with L1 or L0 penalty. This function applies _preprocess_data and additionally computes the gram matrix `precompute` as needed as well as `Xy`. """ n_samples, n_features = X.shape if sparse.issparse(X): # copy is not needed here as X is not modified inplace when X is sparse precompute = False X, y, X_offset, y_offset, X_scale = _preprocess_data( X, y, fit_intercept=fit_intercept, copy=False, check_input=check_input, sample_weight=sample_weight, ) else: # copy was done in fit if necessary X, y, X_offset, y_offset, X_scale = _preprocess_data( X, y, fit_intercept=fit_intercept, copy=copy, check_input=check_input, sample_weight=sample_weight, ) # Rescale only in dense case. Sparse cd solver directly deals with # sample_weight. if sample_weight is not None: # This triggers copies anyway. X, y, _ = _rescale_data(X, y, sample_weight=sample_weight) if hasattr(precompute, "__array__"): if fit_intercept and not np.allclose(X_offset, np.zeros(n_features)): warnings.warn( ( "Gram matrix was provided but X was centered to fit " "intercept: recomputing Gram matrix." ), UserWarning, ) # TODO: instead of warning and recomputing, we could just center # the user provided Gram matrix a-posteriori (after making a copy # when `copy=True`). # recompute Gram precompute = "auto" Xy = None elif check_input: # If we're going to use the user's precomputed gram matrix, we # do a quick check to make sure its not totally bogus. _check_precomputed_gram_matrix(X, precompute, X_offset, X_scale) # precompute if n_samples > n_features if isinstance(precompute, str) and precompute == "auto": precompute = n_samples > n_features if precompute is True: # make sure that the 'precompute' array is contiguous. precompute = np.empty(shape=(n_features, n_features), dtype=X.dtype, order="C") np.dot(X.T, X, out=precompute) if not hasattr(precompute, "__array__"): Xy = None # cannot use Xy if precompute is not Gram if hasattr(precompute, "__array__") and Xy is None: common_dtype = np.result_type(X.dtype, y.dtype) if y.ndim == 1: # Xy is 1d, make sure it is contiguous. Xy = np.empty(shape=n_features, dtype=common_dtype, order="C") np.dot(X.T, y, out=Xy) else: # Make sure that Xy is always F contiguous even if X or y are not # contiguous: the goal is to make it fast to extract the data for a # specific target. n_targets = y.shape[1] Xy = np.empty(shape=(n_features, n_targets), dtype=common_dtype, order="F") np.dot(y.T, X, out=Xy.T) return X, y, X_offset, y_offset, X_scale, precompute, Xy
Function used at beginning of fit in linear models with L1 or L0 penalty. This function applies _preprocess_data and additionally computes the gram matrix `precompute` as needed as well as `Xy`.
_pre_fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_base.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_base.py
BSD-3-Clause
def fit(self, X, y, sample_weight=None): """Fit the model. Parameters ---------- X : ndarray of shape (n_samples, n_features) Training data. y : ndarray of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. sample_weight : ndarray of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.20 parameter *sample_weight* support to BayesianRidge. Returns ------- self : object Returns the instance itself. """ X, y = validate_data( self, X, y, dtype=[np.float64, np.float32], force_writeable=True, y_numeric=True, ) dtype = X.dtype n_samples, n_features = X.shape sw_sum = n_samples y_var = y.var() if sample_weight is not None: sample_weight = _check_sample_weight(sample_weight, X, dtype=dtype) sw_sum = sample_weight.sum() y_mean = np.average(y, weights=sample_weight) y_var = np.average((y - y_mean) ** 2, weights=sample_weight) X, y, X_offset_, y_offset_, X_scale_ = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=self.copy_X, sample_weight=sample_weight, ) if sample_weight is not None: # Sample weight can be implemented via a simple rescaling. X, y, _ = _rescale_data(X, y, sample_weight) self.X_offset_ = X_offset_ self.X_scale_ = X_scale_ # Initialization of the values of the parameters eps = np.finfo(np.float64).eps # Add `eps` in the denominator to omit division by zero alpha_ = self.alpha_init lambda_ = self.lambda_init if alpha_ is None: alpha_ = 1.0 / (y_var + eps) if lambda_ is None: lambda_ = 1.0 # Avoid unintended type promotion to float64 with numpy 2 alpha_ = np.asarray(alpha_, dtype=dtype) lambda_ = np.asarray(lambda_, dtype=dtype) verbose = self.verbose lambda_1 = self.lambda_1 lambda_2 = self.lambda_2 alpha_1 = self.alpha_1 alpha_2 = self.alpha_2 self.scores_ = list() coef_old_ = None XT_y = np.dot(X.T, y) # Let M, N = n_samples, n_features and K = min(M, N). # The posterior covariance matrix needs Vh_full: (N, N). # The full SVD is only required when n_samples < n_features. 
# When n_samples < n_features, K=M and full_matrices=True # U: (M, M), S: M, Vh_full: (N, N), Vh: (M, N) # When n_samples > n_features, K=N and full_matrices=False # U: (M, N), S: N, Vh_full: (N, N), Vh: (N, N) U, S, Vh_full = linalg.svd(X, full_matrices=(n_samples < n_features)) K = len(S) eigen_vals_ = S**2 eigen_vals_full = np.zeros(n_features, dtype=dtype) eigen_vals_full[0:K] = eigen_vals_ Vh = Vh_full[0:K, :] # Convergence loop of the bayesian ridge regression for iter_ in range(self.max_iter): # update posterior mean coef_ based on alpha_ and lambda_ and # compute corresponding sse (sum of squared errors) coef_, sse_ = self._update_coef_( X, y, n_samples, n_features, XT_y, U, Vh, eigen_vals_, alpha_, lambda_ ) if self.compute_score: # compute the log marginal likelihood s = self._log_marginal_likelihood( n_samples, n_features, sw_sum, eigen_vals_, alpha_, lambda_, coef_, sse_, ) self.scores_.append(s) # Update alpha and lambda according to (MacKay, 1992) gamma_ = np.sum((alpha_ * eigen_vals_) / (lambda_ + alpha_ * eigen_vals_)) lambda_ = (gamma_ + 2 * lambda_1) / (np.sum(coef_**2) + 2 * lambda_2) alpha_ = (sw_sum - gamma_ + 2 * alpha_1) / (sse_ + 2 * alpha_2) # Check for convergence if iter_ != 0 and np.sum(np.abs(coef_old_ - coef_)) < self.tol: if verbose: print("Convergence after ", str(iter_), " iterations") break coef_old_ = np.copy(coef_) self.n_iter_ = iter_ + 1 # return regularization parameters and corresponding posterior mean, # log marginal likelihood and posterior covariance self.alpha_ = alpha_ self.lambda_ = lambda_ self.coef_, sse_ = self._update_coef_( X, y, n_samples, n_features, XT_y, U, Vh, eigen_vals_, alpha_, lambda_ ) if self.compute_score: # compute the log marginal likelihood s = self._log_marginal_likelihood( n_samples, n_features, sw_sum, eigen_vals_, alpha_, lambda_, coef_, sse_, ) self.scores_.append(s) self.scores_ = np.array(self.scores_) # posterior covariance self.sigma_ = np.dot( Vh_full.T, Vh_full / (alpha_ * eigen_vals_full + lambda_)[:, np.newaxis] ) self._set_intercept(X_offset_, y_offset_, X_scale_) return self
Fit the model. Parameters ---------- X : ndarray of shape (n_samples, n_features) Training data. y : ndarray of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. sample_weight : ndarray of shape (n_samples,), default=None Individual weights for each sample. .. versionadded:: 0.20 parameter *sample_weight* support to BayesianRidge. Returns ------- self : object Returns the instance itself.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_bayes.py
BSD-3-Clause
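A hedged usage sketch of the fit above: with ``compute_score=True`` the log marginal likelihood is recorded at each iteration, and the converged noise and weight precisions are exposed as ``alpha_`` and ``lambda_`` (printed values depend on the random data):

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import BayesianRidge

X, y = make_regression(n_samples=100, n_features=10, noise=5.0, random_state=0)

br = BayesianRidge(compute_score=True, max_iter=300).fit(X, y)
print(br.n_iter_)        # iterations until the coefficient change fell below tol
print(br.alpha_)         # estimated noise precision
print(br.lambda_)        # estimated weight precision
print(br.scores_[-1])    # log marginal likelihood after the final update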
def predict(self, X, return_std=False):
    """Predict using the linear model.

    In addition to the mean of the predictive distribution, also its
    standard deviation can be returned.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Samples.

    return_std : bool, default=False
        Whether to return the standard deviation of posterior prediction.

    Returns
    -------
    y_mean : array-like of shape (n_samples,)
        Mean of predictive distribution of query points.

    y_std : array-like of shape (n_samples,)
        Standard deviation of predictive distribution of query points.
    """
    y_mean = self._decision_function(X)
    if not return_std:
        return y_mean
    else:
        sigmas_squared_data = (np.dot(X, self.sigma_) * X).sum(axis=1)
        y_std = np.sqrt(sigmas_squared_data + (1.0 / self.alpha_))
        return y_mean, y_std
Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_std : bool, default=False Whether to return the standard deviation of posterior prediction. Returns ------- y_mean : array-like of shape (n_samples,) Mean of predictive distribution of query points. y_std : array-like of shape (n_samples,) Standard deviation of predictive distribution of query points.
predict
python
scikit-learn/scikit-learn
sklearn/linear_model/_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_bayes.py
BSD-3-Clause
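A minimal sketch of the predictive standard deviation computed above, which combines the posterior covariance term with the noise variance ``1 / alpha_`` (toy data assumed):

import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.RandomState(0)
X = rng.rand(50, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(50)

br = BayesianRidge().fit(X, y)
y_mean, y_std = br.predict(X[:3], return_std=True)
print(y_mean.shape, y_std.shape)  # (3,) (3,)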
def _update_coef_(
    self, X, y, n_samples, n_features, XT_y, U, Vh, eigen_vals_, alpha_, lambda_
):
    """Update posterior mean and compute corresponding sse (sum of squared errors).

    Posterior mean is given by coef_ = scaled_sigma_ * X.T * y where
    scaled_sigma_ = (lambda_/alpha_ * np.eye(n_features)
                     + np.dot(X.T, X))^-1
    """
    if n_samples > n_features:
        coef_ = np.linalg.multi_dot(
            [Vh.T, Vh / (eigen_vals_ + lambda_ / alpha_)[:, np.newaxis], XT_y]
        )
    else:
        coef_ = np.linalg.multi_dot(
            [X.T, U / (eigen_vals_ + lambda_ / alpha_)[None, :], U.T, y]
        )

    # Note: we do not need to explicitly use the weights in this sum because
    # y and X were preprocessed by _rescale_data to handle the weights.
    sse_ = np.sum((y - np.dot(X, coef_)) ** 2)

    return coef_, sse_
Update posterior mean and compute corresponding sse (sum of squared errors). Posterior mean is given by coef_ = scaled_sigma_ * X.T * y where scaled_sigma_ = (lambda_/alpha_ * np.eye(n_features) + np.dot(X.T, X))^-1
_update_coef_
python
scikit-learn/scikit-learn
sklearn/linear_model/_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_bayes.py
BSD-3-Clause
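The docstring's closed form can be checked numerically against the public estimator. A sketch, assuming ``fit_intercept=False`` so no centering is involved; the comparison should hold up to numerical precision because the final ``coef_`` is recomputed with the converged precisions:

import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.RandomState(0)
X = rng.randn(60, 4)
y = X @ np.array([0.5, -1.0, 2.0, 0.0]) + 0.05 * rng.randn(60)

br = BayesianRidge(fit_intercept=False).fit(X, y)

# coef_ = (lambda_/alpha_ * I + X^T X)^{-1} X^T y, the ridge-like posterior mean.
closed_form = np.linalg.solve(
    br.lambda_ / br.alpha_ * np.eye(X.shape[1]) + X.T @ X, X.T @ y
)
print(np.allclose(br.coef_, closed_form))  # expected True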
def fit(self, X, y): """Fit the model according to the given training data and parameters. Iterative procedure to maximize the evidence Parameters ---------- X : array-like of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target values (integers). Will be cast to X's dtype if necessary. Returns ------- self : object Fitted estimator. """ X, y = validate_data( self, X, y, dtype=[np.float64, np.float32], force_writeable=True, y_numeric=True, ensure_min_samples=2, ) dtype = X.dtype n_samples, n_features = X.shape coef_ = np.zeros(n_features, dtype=dtype) X, y, X_offset_, y_offset_, X_scale_ = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=self.copy_X ) self.X_offset_ = X_offset_ self.X_scale_ = X_scale_ # Launch the convergence loop keep_lambda = np.ones(n_features, dtype=bool) lambda_1 = self.lambda_1 lambda_2 = self.lambda_2 alpha_1 = self.alpha_1 alpha_2 = self.alpha_2 verbose = self.verbose # Initialization of the values of the parameters eps = np.finfo(np.float64).eps # Add `eps` in the denominator to omit division by zero if `np.var(y)` # is zero. # Explicitly set dtype to avoid unintended type promotion with numpy 2. alpha_ = np.asarray(1.0 / (np.var(y) + eps), dtype=dtype) lambda_ = np.ones(n_features, dtype=dtype) self.scores_ = list() coef_old_ = None def update_coeff(X, y, coef_, alpha_, keep_lambda, sigma_): coef_[keep_lambda] = alpha_ * np.linalg.multi_dot( [sigma_, X[:, keep_lambda].T, y] ) return coef_ update_sigma = ( self._update_sigma if n_samples >= n_features else self._update_sigma_woodbury ) # Iterative procedure of ARDRegression for iter_ in range(self.max_iter): sigma_ = update_sigma(X, alpha_, lambda_, keep_lambda) coef_ = update_coeff(X, y, coef_, alpha_, keep_lambda, sigma_) # Update alpha and lambda sse_ = np.sum((y - np.dot(X, coef_)) ** 2) gamma_ = 1.0 - lambda_[keep_lambda] * np.diag(sigma_) lambda_[keep_lambda] = (gamma_ + 2.0 * lambda_1) / ( (coef_[keep_lambda]) ** 2 + 2.0 * lambda_2 ) alpha_ = (n_samples - gamma_.sum() + 2.0 * alpha_1) / (sse_ + 2.0 * alpha_2) # Prune the weights with a precision over a threshold keep_lambda = lambda_ < self.threshold_lambda coef_[~keep_lambda] = 0 # Compute the objective function if self.compute_score: s = (lambda_1 * np.log(lambda_) - lambda_2 * lambda_).sum() s += alpha_1 * log(alpha_) - alpha_2 * alpha_ s += 0.5 * ( fast_logdet(sigma_) + n_samples * log(alpha_) + np.sum(np.log(lambda_)) ) s -= 0.5 * (alpha_ * sse_ + (lambda_ * coef_**2).sum()) self.scores_.append(s) # Check for convergence if iter_ > 0 and np.sum(np.abs(coef_old_ - coef_)) < self.tol: if verbose: print("Converged after %s iterations" % iter_) break coef_old_ = np.copy(coef_) if not keep_lambda.any(): break self.n_iter_ = iter_ + 1 if keep_lambda.any(): # update sigma and mu using updated params from the last iteration sigma_ = update_sigma(X, alpha_, lambda_, keep_lambda) coef_ = update_coeff(X, y, coef_, alpha_, keep_lambda, sigma_) else: sigma_ = np.array([]).reshape(0, 0) self.coef_ = coef_ self.alpha_ = alpha_ self.sigma_ = sigma_ self.lambda_ = lambda_ self._set_intercept(X_offset_, y_offset_, X_scale_) return self
Fit the model according to the given training data and parameters. Iterative procedure to maximize the evidence. Parameters ---------- X : array-like of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. Returns ------- self : object Fitted estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_bayes.py
BSD-3-Clause
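A usage sketch of ARD's pruning behaviour described above: features whose precision ``lambda_`` exceeds ``threshold_lambda`` have their coefficients set to zero, so with few informative features most coefficients should end up (near) zero; the exact counts below are data-dependent:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ARDRegression

X, y = make_regression(n_samples=100, n_features=30, n_informative=3, noise=1.0, random_state=0)

ard = ARDRegression().fit(X, y)
print(ard.lambda_.shape)                 # one precision per feature: (30,)
print((np.abs(ard.coef_) > 1e-3).sum())  # roughly the number of informative features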
def predict(self, X, return_std=False):
    """Predict using the linear model.

    In addition to the mean of the predictive distribution, also its
    standard deviation can be returned.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Samples.

    return_std : bool, default=False
        Whether to return the standard deviation of posterior prediction.

    Returns
    -------
    y_mean : array-like of shape (n_samples,)
        Mean of predictive distribution of query points.

    y_std : array-like of shape (n_samples,)
        Standard deviation of predictive distribution of query points.
    """
    y_mean = self._decision_function(X)
    if return_std is False:
        return y_mean
    else:
        col_index = self.lambda_ < self.threshold_lambda
        X = _safe_indexing(X, indices=col_index, axis=1)
        sigmas_squared_data = (np.dot(X, self.sigma_) * X).sum(axis=1)
        y_std = np.sqrt(sigmas_squared_data + (1.0 / self.alpha_))
        return y_mean, y_std
Predict using the linear model. In addition to the mean of the predictive distribution, also its standard deviation can be returned. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Samples. return_std : bool, default=False Whether to return the standard deviation of posterior prediction. Returns ------- y_mean : array-like of shape (n_samples,) Mean of predictive distribution of query points. y_std : array-like of shape (n_samples,) Standard deviation of predictive distribution of query points.
predict
python
scikit-learn/scikit-learn
sklearn/linear_model/_bayes.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_bayes.py
BSD-3-Clause
def _set_order(X, y, order="C"):
    """Change the order of X and y if necessary.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Training data.

    y : ndarray of shape (n_samples,)
        Target values.

    order : {None, 'C', 'F'}
        If 'C', dense arrays are returned as C-ordered, sparse matrices in csr
        format. If 'F', dense arrays are returned as F-ordered, sparse matrices
        in csc format.

    Returns
    -------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Training data with guaranteed order.

    y : ndarray of shape (n_samples,)
        Target values with guaranteed order.
    """
    if order not in [None, "C", "F"]:
        raise ValueError(
            "Unknown value for order. Got {} instead of None, 'C' or 'F'.".format(order)
        )
    sparse_X = sparse.issparse(X)
    sparse_y = sparse.issparse(y)
    if order is not None:
        sparse_format = "csc" if order == "F" else "csr"
        if sparse_X:
            X = X.asformat(sparse_format, copy=False)
        else:
            X = np.asarray(X, order=order)
        if sparse_y:
            y = y.asformat(sparse_format)
        else:
            y = np.asarray(y, order=order)
    return X, y
Change the order of X and y if necessary. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : ndarray of shape (n_samples,) Target values. order : {None, 'C', 'F'} If 'C', dense arrays are returned as C-ordered, sparse matrices in csr format. If 'F', dense arrays are returned as F-ordered, sparse matrices in csc format. Returns ------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data with guaranteed order. y : ndarray of shape (n_samples,) Target values with guaranteed order.
_set_order
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
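A small sketch of the helper above (private import, so this is illustrative rather than a stable API): dense arrays come back in the requested memory order, sparse inputs in the matching csr/csc format:

import numpy as np
from scipy import sparse
from sklearn.linear_model._coordinate_descent import _set_order  # private API

X = np.arange(12, dtype=float).reshape(4, 3)   # C-ordered by default
y = np.arange(4, dtype=float)

X_f, y_f = _set_order(X, y, order="F")
print(X_f.flags["F_CONTIGUOUS"])  # True

X_sp, _ = _set_order(sparse.csr_matrix(X), y, order="F")
print(X_sp.format)  # 'csc', the sparse counterpart of Fortran order here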
def _alpha_grid( X, y, Xy=None, l1_ratio=1.0, fit_intercept=True, eps=1e-3, n_alphas=100, copy_X=True, sample_weight=None, ): """Compute the grid of alpha values for elastic net parameter search Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication y : ndarray of shape (n_samples,) or (n_samples, n_outputs) Target values Xy : array-like of shape (n_features,) or (n_features, n_outputs),\ default=None Xy = np.dot(X.T, y) that can be precomputed. l1_ratio : float, default=1.0 The elastic net mixing parameter, with ``0 < l1_ratio <= 1``. For ``l1_ratio = 0`` the penalty is an L2 penalty. (currently not supported) ``For l1_ratio = 1`` it is an L1 penalty. For ``0 < l1_ratio <1``, the penalty is a combination of L1 and L2. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3`` n_alphas : int, default=100 Number of alphas along the regularization path fit_intercept : bool, default=True Whether to fit an intercept or not copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. sample_weight : ndarray of shape (n_samples,), default=None """ if l1_ratio == 0: raise ValueError( "Automatic alpha grid generation is not supported for" " l1_ratio=0. Please supply a grid by providing " "your estimator with the appropriate `alphas=` " "argument." ) if Xy is not None: Xyw = Xy else: X, y, X_offset, _, _ = _preprocess_data( X, y, fit_intercept=fit_intercept, copy=copy_X, sample_weight=sample_weight, check_input=False, ) if sample_weight is not None: if y.ndim > 1: yw = y * sample_weight.reshape(-1, 1) else: yw = y * sample_weight else: yw = y if sparse.issparse(X): Xyw = safe_sparse_dot(X.T, yw, dense_output=True) - np.sum(yw) * X_offset else: Xyw = np.dot(X.T, yw) if Xyw.ndim == 1: Xyw = Xyw[:, np.newaxis] if sample_weight is not None: n_samples = sample_weight.sum() else: n_samples = X.shape[0] alpha_max = np.sqrt(np.sum(Xyw**2, axis=1)).max() / (n_samples * l1_ratio) if alpha_max <= np.finfo(np.float64).resolution: return np.full(n_alphas, np.finfo(np.float64).resolution) return np.geomspace(alpha_max, alpha_max * eps, num=n_alphas)
Compute the grid of alpha values for elastic net parameter search. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. y : ndarray of shape (n_samples,) or (n_samples, n_outputs) Target values. Xy : array-like of shape (n_features,) or (n_features, n_outputs), default=None Xy = np.dot(X.T, y) that can be precomputed. l1_ratio : float, default=1.0 The elastic net mixing parameter, with ``0 < l1_ratio <= 1``. For ``l1_ratio = 0`` the penalty is an L2 penalty (currently not supported). For ``l1_ratio = 1`` it is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3``. n_alphas : int, default=100 Number of alphas along the regularization path. fit_intercept : bool, default=True Whether to fit an intercept or not. copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. sample_weight : ndarray of shape (n_samples,), default=None
_alpha_grid
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
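The grid's largest value has a simple interpretation that can be checked with the public ``Lasso`` estimator alone: for ``l1_ratio=1`` and centered data, ``alpha_max = max |Xc^T yc| / n_samples`` is the smallest alpha at which all coefficients are exactly zero. A sketch on made-up data:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X = rng.randn(80, 5)
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + 0.1 * rng.randn(80)

Xc = X - X.mean(axis=0)
yc = y - y.mean()
alpha_max = np.max(np.abs(Xc.T @ yc)) / X.shape[0]

print(Lasso(alpha=alpha_max).fit(X, y).coef_)       # all (near) zero
print(Lasso(alpha=alpha_max / 10).fit(X, y).coef_)  # several features enter the model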
def lasso_path( X, y, *, eps=1e-3, n_alphas=100, alphas=None, precompute="auto", Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, **params, ): """Compute Lasso path with coordinate descent. The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is:: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 For multi-output tasks it is:: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21 Where:: ||W||_21 = \\sum_i \\sqrt{\\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the :ref:`User Guide <lasso>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If ``y`` is mono-output then ``X`` can be sparse. y : {array-like, sparse matrix} of shape (n_samples,) or \ (n_samples, n_targets) Target values. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3``. n_alphas : int, default=100 Number of alphas along the regularization path. alphas : array-like, default=None List of alphas where to compute the models. If ``None`` alphas are set automatically. precompute : 'auto', bool or array-like of shape \ (n_features, n_features), default='auto' Whether to use a precomputed Gram matrix to speed up calculations. If set to ``'auto'`` let us decide. The Gram matrix can also be passed as argument. Xy : array-like of shape (n_features,) or (n_features, n_targets),\ default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. coef_init : array-like of shape (n_features, ), default=None The initial values of the coefficients. verbose : bool or int, default=False Amount of verbosity. return_n_iter : bool, default=False Whether to return the number of iterations or not. positive : bool, default=False If set to True, forces coefficients to be positive. (Only allowed when ``y.ndim == 1``). **params : kwargs Keyword arguments passed to the coordinate descent solver. Returns ------- alphas : ndarray of shape (n_alphas,) The alphas along the path where models are computed. coefs : ndarray of shape (n_features, n_alphas) or \ (n_targets, n_features, n_alphas) Coefficients along the path. dual_gaps : ndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iters : list of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See Also -------- lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. Lasso : The Lasso is a linear model that estimates sparse coefficients. LassoLars : Lasso model fit with Least Angle Regression a.k.a. Lars. LassoCV : Lasso linear model with iterative fitting along a regularization path. LassoLarsCV : Cross-validated Lasso using the LARS algorithm. sklearn.decomposition.sparse_encode : Estimator that can be used to transform signals into sparse linear combination of atoms from a fixed. Notes ----- For an example, see :ref:`examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py <sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py>`. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. 
Note that in certain cases, the Lars solver may be significantly faster to implement this functionality. In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path Examples -------- Comparing lasso_path and lars_path with interpolation: >>> import numpy as np >>> from sklearn.linear_model import lasso_path >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T >>> y = np.array([1, 2, 3.1]) >>> # Use lasso_path to compute a coefficient path >>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5]) >>> print(coef_path) [[0. 0. 0.46874778] [0.2159048 0.4425765 0.23689075]] >>> # Now use lars_path and 1D linear interpolation to compute the >>> # same path >>> from sklearn.linear_model import lars_path >>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso') >>> from scipy import interpolate >>> coef_path_continuous = interpolate.interp1d(alphas[::-1], ... coef_path_lars[:, ::-1]) >>> print(coef_path_continuous([5., 1., .5])) [[0. 0. 0.46915237] [0.2159048 0.4425765 0.23668876]] """ return enet_path( X, y, l1_ratio=1.0, eps=eps, n_alphas=n_alphas, alphas=alphas, precompute=precompute, Xy=Xy, copy_X=copy_X, coef_init=coef_init, verbose=verbose, positive=positive, return_n_iter=return_n_iter, **params, )
Compute Lasso path with coordinate descent. The Lasso optimization function varies for mono and multi-outputs. For mono-output tasks it is:: (1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1 For multi-output tasks it is:: (1 / (2 * n_samples)) * ||Y - XW||^2_Fro + alpha * ||W||_21 Where:: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the :ref:`User Guide <lasso>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If ``y`` is mono-output then ``X`` can be sparse. y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Target values. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3``. n_alphas : int, default=100 Number of alphas along the regularization path. alphas : array-like, default=None List of alphas where to compute the models. If ``None`` alphas are set automatically. precompute : 'auto', bool or array-like of shape (n_features, n_features), default='auto' Whether to use a precomputed Gram matrix to speed up calculations. If set to ``'auto'`` let us decide. The Gram matrix can also be passed as argument. Xy : array-like of shape (n_features,) or (n_features, n_targets), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. coef_init : array-like of shape (n_features, ), default=None The initial values of the coefficients. verbose : bool or int, default=False Amount of verbosity. return_n_iter : bool, default=False Whether to return the number of iterations or not. positive : bool, default=False If set to True, forces coefficients to be positive. (Only allowed when ``y.ndim == 1``). **params : kwargs Keyword arguments passed to the coordinate descent solver. Returns ------- alphas : ndarray of shape (n_alphas,) The alphas along the path where models are computed. coefs : ndarray of shape (n_features, n_alphas) or (n_targets, n_features, n_alphas) Coefficients along the path. dual_gaps : ndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iters : list of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. See Also -------- lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. Lasso : The Lasso is a linear model that estimates sparse coefficients. LassoLars : Lasso model fit with Least Angle Regression a.k.a. Lars. LassoCV : Lasso linear model with iterative fitting along a regularization path. LassoLarsCV : Cross-validated Lasso using the LARS algorithm. sklearn.decomposition.sparse_encode : Estimator that can be used to transform signals into sparse linear combination of atoms from a fixed. Notes ----- For an example, see :ref:`examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py <sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py>`. To avoid unnecessary memory duplication the X argument of the fit method should be directly passed as a Fortran-contiguous numpy array. Note that in certain cases, the Lars solver may be significantly faster to implement this functionality. 
In particular, linear interpolation can be used to retrieve model coefficients between the values output by lars_path Examples -------- Comparing lasso_path and lars_path with interpolation: >>> import numpy as np >>> from sklearn.linear_model import lasso_path >>> X = np.array([[1, 2, 3.1], [2.3, 5.4, 4.3]]).T >>> y = np.array([1, 2, 3.1]) >>> # Use lasso_path to compute a coefficient path >>> _, coef_path, _ = lasso_path(X, y, alphas=[5., 1., .5]) >>> print(coef_path) [[0. 0. 0.46874778] [0.2159048 0.4425765 0.23689075]] >>> # Now use lars_path and 1D linear interpolation to compute the >>> # same path >>> from sklearn.linear_model import lars_path >>> alphas, active, coef_path_lars = lars_path(X, y, method='lasso') >>> from scipy import interpolate >>> coef_path_continuous = interpolate.interp1d(alphas[::-1], ... coef_path_lars[:, ::-1]) >>> print(coef_path_continuous([5., 1., .5])) [[0. 0. 0.46915237] [0.2159048 0.4425765 0.23668876]]
lasso_path
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
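Complementing the interpolation example in the docstring, a short sketch of letting ``lasso_path`` build its own geometric alpha grid from ``eps`` and ``n_alphas`` and returning the per-alpha iteration counts; the shapes are the point here, the data is arbitrary:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import lasso_path

X, y = make_regression(n_samples=100, n_features=8, n_informative=3, random_state=0)

alphas, coefs, gaps, n_iters = lasso_path(X, y, eps=1e-3, n_alphas=20, return_n_iter=True)
print(alphas.shape, coefs.shape, gaps.shape)  # (20,) (8, 20) (20,)
print(alphas[0] / alphas[-1])                 # 1 / eps, i.e. ~1000
print(len(n_iters))                           # 20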
def enet_path( X, y, *, l1_ratio=0.5, eps=1e-3, n_alphas=100, alphas=None, precompute="auto", Xy=None, copy_X=True, coef_init=None, verbose=False, return_n_iter=False, positive=False, check_input=True, **params, ): """Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is:: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is:: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where:: ||W||_21 = \\sum_i \\sqrt{\\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the :ref:`User Guide <elastic_net>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If ``y`` is mono-output then ``X`` can be sparse. y : {array-like, sparse matrix} of shape (n_samples,) or \ (n_samples, n_targets) Target values. l1_ratio : float, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). ``l1_ratio=1`` corresponds to the Lasso. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3``. n_alphas : int, default=100 Number of alphas along the regularization path. alphas : array-like, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute : 'auto', bool or array-like of shape \ (n_features, n_features), default='auto' Whether to use a precomputed Gram matrix to speed up calculations. If set to ``'auto'`` let us decide. The Gram matrix can also be passed as argument. Xy : array-like of shape (n_features,) or (n_features, n_targets),\ default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. coef_init : array-like of shape (n_features, ), default=None The initial values of the coefficients. verbose : bool or int, default=False Amount of verbosity. return_n_iter : bool, default=False Whether to return the number of iterations or not. positive : bool, default=False If set to True, forces coefficients to be positive. (Only allowed when ``y.ndim == 1``). check_input : bool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **params : kwargs Keyword arguments passed to the coordinate descent solver. Returns ------- alphas : ndarray of shape (n_alphas,) The alphas along the path where models are computed. coefs : ndarray of shape (n_features, n_alphas) or \ (n_targets, n_features, n_alphas) Coefficients along the path. dual_gaps : ndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iters : list of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when ``return_n_iter`` is set to True). See Also -------- MultiTaskElasticNet : Multi-task ElasticNet model trained with L1/L2 mixed-norm \ as regularizer. MultiTaskElasticNetCV : Multi-task L1/L2 ElasticNet with built-in cross-validation. ElasticNet : Linear regression with combined L1 and L2 priors as regularizer. 
ElasticNetCV : Elastic Net model with iterative fitting along a regularization path. Notes ----- For an example, see :ref:`examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py <sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py>`. Examples -------- >>> from sklearn.linear_model import enet_path >>> from sklearn.datasets import make_regression >>> X, y, true_coef = make_regression( ... n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0 ... ) >>> true_coef array([ 0. , 0. , 0. , 97.9, 45.7]) >>> alphas, estimated_coef, _ = enet_path(X, y, n_alphas=3) >>> alphas.shape (3,) >>> estimated_coef array([[ 0., 0.787, 0.568], [ 0., 1.120, 0.620], [-0., -2.129, -1.128], [ 0., 23.046, 88.939], [ 0., 10.637, 41.566]]) """ X_offset_param = params.pop("X_offset", None) X_scale_param = params.pop("X_scale", None) sample_weight = params.pop("sample_weight", None) tol = params.pop("tol", 1e-4) max_iter = params.pop("max_iter", 1000) random_state = params.pop("random_state", None) selection = params.pop("selection", "cyclic") if len(params) > 0: raise ValueError("Unexpected parameters in params", params.keys()) # We expect X and y to be already Fortran ordered when bypassing # checks if check_input: X = check_array( X, accept_sparse="csc", dtype=[np.float64, np.float32], order="F", copy=copy_X, ) y = check_array( y, accept_sparse="csc", dtype=X.dtype.type, order="F", copy=False, ensure_2d=False, ) if Xy is not None: # Xy should be a 1d contiguous array or a 2D C ordered array Xy = check_array( Xy, dtype=X.dtype.type, order="C", copy=False, ensure_2d=False ) n_samples, n_features = X.shape multi_output = False if y.ndim != 1: multi_output = True n_targets = y.shape[1] if multi_output and positive: raise ValueError("positive=True is not allowed for multi-output (y.ndim != 1)") # MultiTaskElasticNet does not support sparse matrices if not multi_output and sparse.issparse(X): if X_offset_param is not None: # As sparse matrices are not actually centered we need this to be passed to # the CD solver. 
X_sparse_scaling = X_offset_param / X_scale_param X_sparse_scaling = np.asarray(X_sparse_scaling, dtype=X.dtype) else: X_sparse_scaling = np.zeros(n_features, dtype=X.dtype) # X should have been passed through _pre_fit already if function is called # from ElasticNet.fit if check_input: X, y, _, _, _, precompute, Xy = _pre_fit( X, y, Xy, precompute, fit_intercept=False, copy=False, check_input=check_input, ) if alphas is None: # No need to normalize of fit_intercept: it has been done # above alphas = _alpha_grid( X, y, Xy=Xy, l1_ratio=l1_ratio, fit_intercept=False, eps=eps, n_alphas=n_alphas, copy_X=False, ) elif len(alphas) > 1: alphas = np.sort(alphas)[::-1] # make sure alphas are properly ordered n_alphas = len(alphas) dual_gaps = np.empty(n_alphas) n_iters = [] rng = check_random_state(random_state) if selection not in ["random", "cyclic"]: raise ValueError("selection should be either random or cyclic.") random = selection == "random" if not multi_output: coefs = np.empty((n_features, n_alphas), dtype=X.dtype) else: coefs = np.empty((n_targets, n_features, n_alphas), dtype=X.dtype) if coef_init is None: coef_ = np.zeros(coefs.shape[:-1], dtype=X.dtype, order="F") else: coef_ = np.asfortranarray(coef_init, dtype=X.dtype) for i, alpha in enumerate(alphas): # account for n_samples scaling in objectives between here and cd_fast l1_reg = alpha * l1_ratio * n_samples l2_reg = alpha * (1.0 - l1_ratio) * n_samples if not multi_output and sparse.issparse(X): model = cd_fast.sparse_enet_coordinate_descent( w=coef_, alpha=l1_reg, beta=l2_reg, X_data=X.data, X_indices=X.indices, X_indptr=X.indptr, y=y, sample_weight=sample_weight, X_mean=X_sparse_scaling, max_iter=max_iter, tol=tol, rng=rng, random=random, positive=positive, ) elif multi_output: model = cd_fast.enet_coordinate_descent_multi_task( coef_, l1_reg, l2_reg, X, y, max_iter, tol, rng, random ) elif isinstance(precompute, np.ndarray): # We expect precompute to be already Fortran ordered when bypassing # checks if check_input: precompute = check_array(precompute, dtype=X.dtype.type, order="C") model = cd_fast.enet_coordinate_descent_gram( coef_, l1_reg, l2_reg, precompute, Xy, y, max_iter, tol, rng, random, positive, ) elif precompute is False: model = cd_fast.enet_coordinate_descent( coef_, l1_reg, l2_reg, X, y, max_iter, tol, rng, random, positive ) else: raise ValueError( "Precompute should be one of True, False, 'auto' or array-like. Got %r" % precompute ) coef_, dual_gap_, eps_, n_iter_ = model coefs[..., i] = coef_ # we correct the scale of the returned dual gap, as the objective # in cd_fast is n_samples * the objective in this docstring. dual_gaps[i] = dual_gap_ / n_samples n_iters.append(n_iter_) if verbose: if verbose > 2: print(model) elif verbose > 1: print("Path: %03i out of %03i" % (i, n_alphas)) else: sys.stderr.write(".") if return_n_iter: return alphas, coefs, dual_gaps, n_iters return alphas, coefs, dual_gaps
Compute elastic net path with coordinate descent. The elastic net optimization function varies for mono and multi-outputs. For mono-output tasks it is:: 1 / (2 * n_samples) * ||y - Xw||^2_2 + alpha * l1_ratio * ||w||_1 + 0.5 * alpha * (1 - l1_ratio) * ||w||^2_2 For multi-output tasks it is:: (1 / (2 * n_samples)) * ||Y - XW||_Fro^2 + alpha * l1_ratio * ||W||_21 + 0.5 * alpha * (1 - l1_ratio) * ||W||_Fro^2 Where:: ||W||_21 = \sum_i \sqrt{\sum_j w_{ij}^2} i.e. the sum of norm of each row. Read more in the :ref:`User Guide <elastic_net>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If ``y`` is mono-output then ``X`` can be sparse. y : {array-like, sparse matrix} of shape (n_samples,) or (n_samples, n_targets) Target values. l1_ratio : float, default=0.5 Number between 0 and 1 passed to elastic net (scaling between l1 and l2 penalties). ``l1_ratio=1`` corresponds to the Lasso. eps : float, default=1e-3 Length of the path. ``eps=1e-3`` means that ``alpha_min / alpha_max = 1e-3``. n_alphas : int, default=100 Number of alphas along the regularization path. alphas : array-like, default=None List of alphas where to compute the models. If None alphas are set automatically. precompute : 'auto', bool or array-like of shape (n_features, n_features), default='auto' Whether to use a precomputed Gram matrix to speed up calculations. If set to ``'auto'`` let us decide. The Gram matrix can also be passed as argument. Xy : array-like of shape (n_features,) or (n_features, n_targets), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. copy_X : bool, default=True If ``True``, X will be copied; else, it may be overwritten. coef_init : array-like of shape (n_features, ), default=None The initial values of the coefficients. verbose : bool or int, default=False Amount of verbosity. return_n_iter : bool, default=False Whether to return the number of iterations or not. positive : bool, default=False If set to True, forces coefficients to be positive. (Only allowed when ``y.ndim == 1``). check_input : bool, default=True If set to False, the input validation checks are skipped (including the Gram matrix when provided). It is assumed that they are handled by the caller. **params : kwargs Keyword arguments passed to the coordinate descent solver. Returns ------- alphas : ndarray of shape (n_alphas,) The alphas along the path where models are computed. coefs : ndarray of shape (n_features, n_alphas) or (n_targets, n_features, n_alphas) Coefficients along the path. dual_gaps : ndarray of shape (n_alphas,) The dual gaps at the end of the optimization for each alpha. n_iters : list of int The number of iterations taken by the coordinate descent optimizer to reach the specified tolerance for each alpha. (Is returned when ``return_n_iter`` is set to True). See Also -------- MultiTaskElasticNet : Multi-task ElasticNet model trained with L1/L2 mixed-norm as regularizer. MultiTaskElasticNetCV : Multi-task L1/L2 ElasticNet with built-in cross-validation. ElasticNet : Linear regression with combined L1 and L2 priors as regularizer. ElasticNetCV : Elastic Net model with iterative fitting along a regularization path. Notes ----- For an example, see :ref:`examples/linear_model/plot_lasso_lasso_lars_elasticnet_path.py <sphx_glr_auto_examples_linear_model_plot_lasso_lasso_lars_elasticnet_path.py>`. 
Examples -------- >>> from sklearn.linear_model import enet_path >>> from sklearn.datasets import make_regression >>> X, y, true_coef = make_regression( ... n_samples=100, n_features=5, n_informative=2, coef=True, random_state=0 ... ) >>> true_coef array([ 0. , 0. , 0. , 97.9, 45.7]) >>> alphas, estimated_coef, _ = enet_path(X, y, n_alphas=3) >>> alphas.shape (3,) >>> estimated_coef array([[ 0., 0.787, 0.568], [ 0., 1.120, 0.620], [-0., -2.129, -1.128], [ 0., 23.046, 88.939], [ 0., 10.637, 41.566]])
enet_path
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
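As a complement to the docstring example, a hedged sketch showing how to evaluate the same explicit alpha grid under several ``l1_ratio`` values; lower values put more weight on the L2 term of the mixed penalty:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import enet_path

X, y = make_regression(n_samples=100, n_features=8, n_informative=3, random_state=0)
alphas = [1.0, 0.1, 0.01]

for l1_ratio in (1.0, 0.5, 0.1):
    path_alphas, coefs, gaps = enet_path(X, y, l1_ratio=l1_ratio, alphas=alphas)
    print(l1_ratio, coefs.shape, gaps.shape)  # coefs is (8, 3); one dual gap per alpha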
def fit(self, X, y, sample_weight=None, check_input=True): """Fit model with coordinate descent. Parameters ---------- X : {ndarray, sparse matrix, sparse array} of (n_samples, n_features) Data. Note that large sparse matrices and arrays requiring `int64` indices are not accepted. y : ndarray of shape (n_samples,) or (n_samples, n_targets) Target. Will be cast to X's dtype if necessary. sample_weight : float or array-like of shape (n_samples,), default=None Sample weights. Internally, the `sample_weight` vector will be rescaled to sum to `n_samples`. .. versionadded:: 0.23 check_input : bool, default=True Allow to bypass several input checking. Don't use this parameter unless you know what you do. Returns ------- self : object Fitted estimator. Notes ----- Coordinate descent is an algorithm that considers each column of data at a time hence it will automatically convert the X input as a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format. """ if self.alpha == 0: warnings.warn( ( "With alpha=0, this algorithm does not converge " "well. You are advised to use the LinearRegression " "estimator" ), stacklevel=2, ) # Remember if X is copied X_copied = False # We expect X and y to be float64 or float32 Fortran ordered arrays # when bypassing checks if check_input: X_copied = self.copy_X and self.fit_intercept X, y = validate_data( self, X, y, accept_sparse="csc", order="F", dtype=[np.float64, np.float32], force_writeable=True, accept_large_sparse=False, copy=X_copied, multi_output=True, y_numeric=True, ) y = check_array( y, order="F", copy=False, dtype=X.dtype.type, ensure_2d=False ) n_samples, n_features = X.shape alpha = self.alpha if isinstance(sample_weight, numbers.Number): sample_weight = None if sample_weight is not None: if check_input: sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) # TLDR: Rescale sw to sum up to n_samples. # Long: The objective function of Enet # # 1/2 * np.average(squared error, weights=sw) # + alpha * penalty (1) # # is invariant under rescaling of sw. # But enet_path coordinate descent minimizes # # 1/2 * sum(squared error) + alpha' * penalty (2) # # and therefore sets # # alpha' = n_samples * alpha (3) # # inside its function body, which results in objective (2) being # equivalent to (1) in case of no sw. # With sw, however, enet_path should set # # alpha' = sum(sw) * alpha (4) # # Therefore, we use the freedom of Eq. (1) to rescale sw before # calling enet_path, i.e. # # sw *= n_samples / sum(sw) # # such that sum(sw) = n_samples. This way, (3) and (4) are the same. sample_weight = sample_weight * (n_samples / np.sum(sample_weight)) # Note: Alternatively, we could also have rescaled alpha instead # of sample_weight: # # alpha *= np.sum(sample_weight) / n_samples # Ensure copying happens only once, don't do it again if done above. # X and y will be rescaled if sample_weight is not None, order='F' # ensures that the returned X and y are still F-contiguous. 
should_copy = self.copy_X and not X_copied X, y, X_offset, y_offset, X_scale, precompute, Xy = _pre_fit( X, y, None, self.precompute, fit_intercept=self.fit_intercept, copy=should_copy, check_input=check_input, sample_weight=sample_weight, ) # coordinate descent needs F-ordered arrays and _pre_fit might have # called _rescale_data if check_input or sample_weight is not None: X, y = _set_order(X, y, order="F") if y.ndim == 1: y = y[:, np.newaxis] if Xy is not None and Xy.ndim == 1: Xy = Xy[:, np.newaxis] n_targets = y.shape[1] if not self.warm_start or not hasattr(self, "coef_"): coef_ = np.zeros((n_targets, n_features), dtype=X.dtype, order="F") else: coef_ = self.coef_ if coef_.ndim == 1: coef_ = coef_[np.newaxis, :] dual_gaps_ = np.zeros(n_targets, dtype=X.dtype) self.n_iter_ = [] for k in range(n_targets): if Xy is not None: this_Xy = Xy[:, k] else: this_Xy = None _, this_coef, this_dual_gap, this_iter = self.path( X, y[:, k], l1_ratio=self.l1_ratio, eps=None, n_alphas=None, alphas=[alpha], precompute=precompute, Xy=this_Xy, copy_X=True, coef_init=coef_[k], verbose=False, return_n_iter=True, positive=self.positive, check_input=False, # from here on **params tol=self.tol, X_offset=X_offset, X_scale=X_scale, max_iter=self.max_iter, random_state=self.random_state, selection=self.selection, sample_weight=sample_weight, ) coef_[k] = this_coef[:, 0] dual_gaps_[k] = this_dual_gap[0] self.n_iter_.append(this_iter[0]) if n_targets == 1: self.n_iter_ = self.n_iter_[0] self.coef_ = coef_[0] self.dual_gap_ = dual_gaps_[0] else: self.coef_ = coef_ self.dual_gap_ = dual_gaps_ self._set_intercept(X_offset, y_offset, X_scale) # check for finiteness of coefficients if not all(np.isfinite(w).all() for w in [self.coef_, self.intercept_]): raise ValueError( "Coordinate descent iterations resulted in non-finite parameter" " values. The input data may contain large values and need to" " be preprocessed." ) # return self for chaining fit and predict calls return self
Fit model with coordinate descent. Parameters ---------- X : {ndarray, sparse matrix, sparse array} of (n_samples, n_features) Data. Note that large sparse matrices and arrays requiring `int64` indices are not accepted. y : ndarray of shape (n_samples,) or (n_samples, n_targets) Target. Will be cast to X's dtype if necessary. sample_weight : float or array-like of shape (n_samples,), default=None Sample weights. Internally, the `sample_weight` vector will be rescaled to sum to `n_samples`. .. versionadded:: 0.23 check_input : bool, default=True Allow to bypass several input checks. Don't use this parameter unless you know what you are doing. Returns ------- self : object Fitted estimator. Notes ----- Coordinate descent is an algorithm that considers each column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
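A usage sketch for the fit above, with Fortran-ordered input (to avoid the internal copy mentioned in the Notes) and per-sample weights, which are rescaled internally to sum to ``n_samples``; the data is illustrative:

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(100, 6))
y = X @ np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.3]) + 0.1 * rng.randn(100)
w = rng.uniform(0.5, 2.0, size=100)

enet = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y, sample_weight=w)
print(enet.coef_)
print(enet.dual_gap_, enet.n_iter_)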
def _decision_function(self, X):
    """Decision function of the linear model.

    Parameters
    ----------
    X : numpy array or scipy.sparse matrix of shape (n_samples, n_features)

    Returns
    -------
    T : ndarray of shape (n_samples,)
        The predicted decision function.
    """
    check_is_fitted(self)
    if sparse.issparse(X):
        return safe_sparse_dot(X, self.coef_.T, dense_output=True) + self.intercept_
    else:
        return super()._decision_function(X)
Decision function of the linear model. Parameters ---------- X : numpy array or scipy.sparse matrix of shape (n_samples, n_features) Returns ------- T : ndarray of shape (n_samples,) The predicted decision function.
_decision_function
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
def _path_residuals( X, y, sample_weight, train, test, fit_intercept, path, path_params, alphas=None, l1_ratio=1, X_order=None, dtype=None, ): """Returns the MSE for the models computed by 'path'. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weight : None or array-like of shape (n_samples,) Sample weights. train : list of indices The indices of the train set. test : list of indices The indices of the test set. path : callable Function returning a list of models on the path. See enet_path for an example of signature. path_params : dictionary Parameters passed to the path function. alphas : array-like, default=None Array of float that is used for cross-validation. If not provided, computed using 'path'. l1_ratio : float, default=1 float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For ``l1_ratio = 0`` the penalty is an L2 penalty. For ``l1_ratio = 1`` it is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2. X_order : {'F', 'C'}, default=None The order of the arrays expected by the path function to avoid memory copies. dtype : a numpy dtype, default=None The dtype of the arrays expected by the path function to avoid memory copies. """ X_train = X[train] y_train = y[train] X_test = X[test] y_test = y[test] if sample_weight is None: sw_train, sw_test = None, None else: sw_train = sample_weight[train] sw_test = sample_weight[test] n_samples = X_train.shape[0] # TLDR: Rescale sw_train to sum up to n_samples on the training set. # See TLDR and long comment inside ElasticNet.fit. sw_train *= n_samples / np.sum(sw_train) # Note: Alternatively, we could also have rescaled alpha instead # of sample_weight: # # alpha *= np.sum(sample_weight) / n_samples if not sparse.issparse(X): for array, array_input in ( (X_train, X), (y_train, y), (X_test, X), (y_test, y), ): if array.base is not array_input and not array.flags["WRITEABLE"]: # fancy indexing should create a writable copy but it doesn't # for read-only memmaps (cf. numpy#14132). array.setflags(write=True) if y.ndim == 1: precompute = path_params["precompute"] else: # No Gram variant of multi-task exists right now. # Fall back to default enet_multitask precompute = False X_train, y_train, X_offset, y_offset, X_scale, precompute, Xy = _pre_fit( X_train, y_train, None, precompute, fit_intercept=fit_intercept, copy=False, sample_weight=sw_train, ) path_params = path_params.copy() path_params["Xy"] = Xy path_params["X_offset"] = X_offset path_params["X_scale"] = X_scale path_params["precompute"] = precompute path_params["copy_X"] = False path_params["alphas"] = alphas # needed for sparse cd solver path_params["sample_weight"] = sw_train if "l1_ratio" in path_params: path_params["l1_ratio"] = l1_ratio # Do the ordering and type casting here, as if it is done in the path, # X is copied and a reference is kept here X_train = check_array(X_train, accept_sparse="csc", dtype=dtype, order=X_order) alphas, coefs, _ = path(X_train, y_train, **path_params) del X_train, y_train if y.ndim == 1: # Doing this so that it becomes coherent with multioutput. 
coefs = coefs[np.newaxis, :, :] y_offset = np.atleast_1d(y_offset) y_test = y_test[:, np.newaxis] intercepts = y_offset[:, np.newaxis] - np.dot(X_offset, coefs) X_test_coefs = safe_sparse_dot(X_test, coefs) residues = X_test_coefs - y_test[:, :, np.newaxis] residues += intercepts if sample_weight is None: this_mse = (residues**2).mean(axis=0) else: this_mse = np.average(residues**2, weights=sw_test, axis=0) return this_mse.mean(axis=0)
Returns the MSE for the models computed by 'path'. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weight : None or array-like of shape (n_samples,) Sample weights. train : list of indices The indices of the train set. test : list of indices The indices of the test set. path : callable Function returning a list of models on the path. See enet_path for an example of signature. path_params : dictionary Parameters passed to the path function. alphas : array-like, default=None Array of float that is used for cross-validation. If not provided, computed using 'path'. l1_ratio : float, default=1 float between 0 and 1 passed to ElasticNet (scaling between l1 and l2 penalties). For ``l1_ratio = 0`` the penalty is an L2 penalty. For ``l1_ratio = 1`` it is an L1 penalty. For ``0 < l1_ratio < 1``, the penalty is a combination of L1 and L2. X_order : {'F', 'C'}, default=None The order of the arrays expected by the path function to avoid memory copies. dtype : a numpy dtype, default=None The dtype of the arrays expected by the path function to avoid memory copies.
_path_residuals
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
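What `_path_residuals` computes for one fold can be sketched with the public `enet_path` function: fit a path of coefficients on the training indices only, then score every alpha on the held-out indices. The fold split, number of alphas, and data below are made up for illustration, and centering/intercept handling is skipped for brevity:

import numpy as np
from sklearn.linear_model import enet_path

rng = np.random.RandomState(0)
X = rng.randn(80, 5)
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.5]) + 0.1 * rng.randn(80)
train, test = np.arange(60), np.arange(60, 80)

# Coefficients along a path of alphas, computed on the training fold only.
alphas, coefs, _ = enet_path(X[train], y[train], l1_ratio=0.5, n_alphas=20)

# Test-fold residuals and MSE for every alpha on the path.
residues = X[test] @ coefs - y[test][:, np.newaxis]
mse_per_alpha = (residues ** 2).mean(axis=0)
print(mse_per_alpha.shape)  # (20,), one MSE per alpha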
def fit(self, X, y, sample_weight=None, **params): """Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. Note that large sparse matrices and arrays requiring `int64` indices are not accepted. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weight : float or array-like of shape (n_samples,), \ default=None Sample weights used for fitting and evaluation of the weighted mean squared error of each cv-fold. Note that the cross validated MSE that is finally used to find the best model is the unweighted mean over the (weighted) MSEs of each test fold. **params : dict, default=None Parameters to be passed to the CV splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of fitted model. """ _raise_for_params(params, self, "fit") # TODO(1.9): remove n_alphas and alphas={"warn", None}; set alphas=100 by # default. Remove these deprecations messages and use self.alphas directly # instead of self._alphas. if self.n_alphas == "deprecated": self._alphas = 100 else: warnings.warn( "'n_alphas' was deprecated in 1.7 and will be removed in 1.9. " "'alphas' now accepts an integer value which removes the need to pass " "'n_alphas'. The default value of 'alphas' will change from None to " "100 in 1.9. Pass an explicit value to 'alphas' and leave 'n_alphas' " "to its default value to silence this warning.", FutureWarning, ) self._alphas = self.n_alphas if isinstance(self.alphas, str) and self.alphas == "warn": # - If self.n_alphas == "deprecated", both are left to their default values # so we don't warn since the future default behavior will be the same as # the current default behavior. # - If self.n_alphas != "deprecated", then we already warned about it # and the warning message mentions the future self.alphas default, so # no need to warn a second time. pass elif self.alphas is None: warnings.warn( "'alphas=None' is deprecated and will be removed in 1.9, at which " "point the default value will be set to 100. Set 'alphas=100' " "to silence this warning.", FutureWarning, ) else: self._alphas = self.alphas # This makes sure that there is no duplication in memory. # Dealing right with copy_X is important in the following: # Multiple functions touch X and subsamples of X and can induce a # lot of duplication of memory copy_X = self.copy_X and self.fit_intercept check_y_params = dict( copy=False, dtype=[np.float64, np.float32], ensure_2d=False ) if isinstance(X, np.ndarray) or sparse.issparse(X): # Keep a reference to X reference_to_old_X = X # Let us not impose fortran ordering so far: it is # not useful for the cross-validation loop and will be done # by the model fitting itself # Need to validate separately here. # We can't pass multi_output=True because that would allow y to be # csr. We also want to allow y to be 64 or 32 but check_X_y only # allows to convert for 64. 
check_X_params = dict( accept_sparse="csc", dtype=[np.float64, np.float32], force_writeable=True, copy=False, accept_large_sparse=False, ) X, y = validate_data( self, X, y, validate_separately=(check_X_params, check_y_params) ) if sparse.issparse(X): if hasattr(reference_to_old_X, "data") and not np.may_share_memory( reference_to_old_X.data, X.data ): # X is a sparse matrix and has been copied copy_X = False elif not np.may_share_memory(reference_to_old_X, X): # X has been copied copy_X = False del reference_to_old_X else: # Need to validate separately here. # We can't pass multi_output=True because that would allow y to be # csr. We also want to allow y to be 64 or 32 but check_X_y only # allows to convert for 64. check_X_params = dict( accept_sparse="csc", dtype=[np.float64, np.float32], order="F", force_writeable=True, copy=copy_X, ) X, y = validate_data( self, X, y, validate_separately=(check_X_params, check_y_params) ) copy_X = False check_consistent_length(X, y) if not self._is_multitask(): if y.ndim > 1 and y.shape[1] > 1: raise ValueError( "For multi-task outputs, use MultiTask%s" % self.__class__.__name__ ) y = column_or_1d(y, warn=True) else: if sparse.issparse(X): raise TypeError("X should be dense but a sparse matrix waspassed") elif y.ndim == 1: raise ValueError( "For mono-task outputs, use %sCV" % self.__class__.__name__[9:] ) if isinstance(sample_weight, numbers.Number): sample_weight = None if sample_weight is not None: sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype) model = self._get_estimator() # All LinearModelCV parameters except 'cv' are acceptable path_params = self.get_params() # Pop `intercept` that is not parameter of the path function path_params.pop("fit_intercept", None) if "l1_ratio" in path_params: l1_ratios = np.atleast_1d(path_params["l1_ratio"]) # For the first path, we need to set l1_ratio path_params["l1_ratio"] = l1_ratios[0] else: l1_ratios = [ 1, ] path_params.pop("cv", None) path_params.pop("n_jobs", None) n_l1_ratio = len(l1_ratios) check_scalar_alpha = partial( check_scalar, target_type=Real, min_val=0.0, include_boundaries="left", ) if isinstance(self._alphas, Integral): alphas = [ _alpha_grid( X, y, l1_ratio=l1_ratio, fit_intercept=self.fit_intercept, eps=self.eps, n_alphas=self._alphas, copy_X=self.copy_X, sample_weight=sample_weight, ) for l1_ratio in l1_ratios ] else: # Making sure alphas entries are scalars. for index, alpha in enumerate(self._alphas): check_scalar_alpha(alpha, f"alphas[{index}]") # Making sure alphas is properly ordered. alphas = np.tile(np.sort(self._alphas)[::-1], (n_l1_ratio, 1)) # We want n_alphas to be the number of alphas used for each l1_ratio. n_alphas = len(alphas[0]) path_params.update({"n_alphas": n_alphas}) path_params["copy_X"] = copy_X # We are not computing in parallel, we can modify X # inplace in the folds if effective_n_jobs(self.n_jobs) > 1: path_params["copy_X"] = False # init cross-validation generator cv = check_cv(self.cv) if _routing_enabled(): splitter_supports_sample_weight = get_routing_for_object(cv).consumes( method="split", params=["sample_weight"] ) if ( sample_weight is not None and not splitter_supports_sample_weight and not has_fit_parameter(self, "sample_weight") ): raise ValueError( "The CV splitter and underlying estimator do not support" " sample weights." 
) if splitter_supports_sample_weight: params["sample_weight"] = sample_weight routed_params = process_routing(self, "fit", **params) if sample_weight is not None and not has_fit_parameter( self, "sample_weight" ): # MultiTaskElasticNetCV does not (yet) support sample_weight sample_weight = None else: routed_params = Bunch() routed_params.splitter = Bunch(split=Bunch()) # Compute path for all folds and compute MSE to get the best alpha folds = list(cv.split(X, y, **routed_params.splitter.split)) best_mse = np.inf # We do a double for loop folded in one, in order to be able to # iterate in parallel on l1_ratio and folds jobs = ( delayed(_path_residuals)( X, y, sample_weight, train, test, self.fit_intercept, self.path, path_params, alphas=this_alphas, l1_ratio=this_l1_ratio, X_order="F", dtype=X.dtype.type, ) for this_l1_ratio, this_alphas in zip(l1_ratios, alphas) for train, test in folds ) mse_paths = Parallel( n_jobs=self.n_jobs, verbose=self.verbose, prefer="threads", )(jobs) mse_paths = np.reshape(mse_paths, (n_l1_ratio, len(folds), -1)) # The mean is computed over folds. mean_mse = np.mean(mse_paths, axis=1) self.mse_path_ = np.squeeze(np.moveaxis(mse_paths, 2, 1)) for l1_ratio, l1_alphas, mse_alphas in zip(l1_ratios, alphas, mean_mse): i_best_alpha = np.argmin(mse_alphas) this_best_mse = mse_alphas[i_best_alpha] if this_best_mse < best_mse: best_alpha = l1_alphas[i_best_alpha] best_l1_ratio = l1_ratio best_mse = this_best_mse self.l1_ratio_ = best_l1_ratio self.alpha_ = best_alpha if isinstance(self._alphas, Integral): self.alphas_ = np.asarray(alphas) if n_l1_ratio == 1: self.alphas_ = self.alphas_[0] # Remove duplicate alphas in case alphas is provided. else: self.alphas_ = np.asarray(alphas[0]) # Refit the model with the parameters selected common_params = { name: value for name, value in self.get_params().items() if name in model.get_params() } model.set_params(**common_params) model.alpha = best_alpha model.l1_ratio = best_l1_ratio model.copy_X = copy_X precompute = getattr(self, "precompute", None) if isinstance(precompute, str) and precompute == "auto": model.precompute = False if sample_weight is None: # MultiTaskElasticNetCV does not (yet) support sample_weight, even # not sample_weight=None. model.fit(X, y) else: model.fit(X, y, sample_weight=sample_weight) if not hasattr(self, "l1_ratio"): del self.l1_ratio_ self.coef_ = model.coef_ self.intercept_ = model.intercept_ self.dual_gap_ = model.dual_gap_ self.n_iter_ = model.n_iter_ return self
Fit linear model with coordinate descent. Fit is on grid of alphas and best alpha estimated by cross-validation. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Pass directly as Fortran-contiguous data to avoid unnecessary memory duplication. If y is mono-output, X can be sparse. Note that large sparse matrices and arrays requiring `int64` indices are not accepted. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. sample_weight : float or array-like of shape (n_samples,), default=None Sample weights used for fitting and evaluation of the weighted mean squared error of each cv-fold. Note that the cross validated MSE that is finally used to find the best model is the unweighted mean over the (weighted) MSEs of each test fold. **params : dict, default=None Parameters to be passed to the CV splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of fitted model.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
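In the common case this `fit` is reached through the public CV estimators. A short usage sketch with ElasticNetCV on synthetic data (grid sizes and values chosen arbitrarily):

import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.RandomState(0)
X = rng.randn(100, 8)
y = X[:, 0] - 2 * X[:, 3] + 0.1 * rng.randn(100)

# Cross-validate over an explicit alpha grid and two l1_ratio values.
reg = ElasticNetCV(l1_ratio=[0.2, 0.8], alphas=np.logspace(-3, 0, 20), cv=5)
reg.fit(X, y)
print(reg.alpha_, reg.l1_ratio_)  # best penalty combination found by CV
print(reg.mse_path_.shape)        # MSE for each (l1_ratio, alpha, fold)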
def get_metadata_routing(self): """Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. """ router = ( MetadataRouter(owner=self.__class__.__name__) .add_self_request(self) .add( splitter=check_cv(self.cv), method_mapping=MethodMapping().add(caller="fit", callee="split"), ) ) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
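The router built here can be inspected directly on an instance; a minimal sketch (the printed representation is indicative only and depends on the scikit-learn version):

from sklearn.linear_model import ElasticNetCV

# The router declares that parameters passed to ``fit`` may be routed to the
# CV splitter's ``split`` method (e.g. ``groups`` for group-based splitters).
router = ElasticNetCV(cv=5).get_metadata_routing()
print(router)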
def fit(self, X, y): """Fit MultiTaskElasticNet model with coordinate descent. Parameters ---------- X : ndarray of shape (n_samples, n_features) Data. y : ndarray of shape (n_samples, n_targets) Target. Will be cast to X's dtype if necessary. Returns ------- self : object Fitted estimator. Notes ----- Coordinate descent is an algorithm that considers each column of data at a time hence it will automatically convert the X input as a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation it is advised to allocate the initial data in memory directly using that format. """ # Need to validate separately here. # We can't pass multi_output=True because that would allow y to be csr. check_X_params = dict( dtype=[np.float64, np.float32], order="F", force_writeable=True, copy=self.copy_X and self.fit_intercept, ) check_y_params = dict(ensure_2d=False, order="F") X, y = validate_data( self, X, y, validate_separately=(check_X_params, check_y_params) ) check_consistent_length(X, y) y = y.astype(X.dtype) if hasattr(self, "l1_ratio"): model_str = "ElasticNet" else: model_str = "Lasso" if y.ndim == 1: raise ValueError("For mono-task outputs, use %s" % model_str) n_samples, n_features = X.shape n_targets = y.shape[1] X, y, X_offset, y_offset, X_scale = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=False ) if not self.warm_start or not hasattr(self, "coef_"): self.coef_ = np.zeros( (n_targets, n_features), dtype=X.dtype.type, order="F" ) l1_reg = self.alpha * self.l1_ratio * n_samples l2_reg = self.alpha * (1.0 - self.l1_ratio) * n_samples self.coef_ = np.asfortranarray(self.coef_) # coef contiguous in memory random = self.selection == "random" ( self.coef_, self.dual_gap_, self.eps_, self.n_iter_, ) = cd_fast.enet_coordinate_descent_multi_task( self.coef_, l1_reg, l2_reg, X, y, self.max_iter, self.tol, check_random_state(self.random_state), random, ) # account for different objective scaling here and in cd_fast self.dual_gap_ /= n_samples self._set_intercept(X_offset, y_offset, X_scale) # return self for chaining fit and predict calls return self
Fit MultiTaskElasticNet model with coordinate descent. Parameters ---------- X : ndarray of shape (n_samples, n_features) Data. y : ndarray of shape (n_samples, n_targets) Target. Will be cast to X's dtype if necessary. Returns ------- self : object Fitted estimator. Notes ----- Coordinate descent is an algorithm that considers one column of data at a time, hence it will automatically convert the X input to a Fortran-contiguous numpy array if necessary. To avoid memory re-allocation, it is advised to allocate the initial data in memory directly in that format.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_coordinate_descent.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_coordinate_descent.py
BSD-3-Clause
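A short usage sketch with two targets; following the Notes above, X is passed as a Fortran-ordered array to avoid an internal copy (the data and penalty values are synthetic):

import numpy as np
from sklearn.linear_model import MultiTaskElasticNet

rng = np.random.RandomState(0)
X = np.asfortranarray(rng.randn(60, 5))
W = np.array([[1.0, 0.0, -2.0, 0.0, 0.5],
              [0.5, 0.0, -1.0, 0.0, 1.0]]).T  # shared sparsity pattern across targets
Y = X @ W + 0.05 * rng.randn(60, 2)

reg = MultiTaskElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, Y)
print(reg.coef_.shape)       # (n_targets, n_features) == (2, 5)
print(reg.intercept_.shape)  # (2,)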
def _huber_loss_and_gradient(w, X, y, epsilon, alpha, sample_weight=None): """Returns the Huber loss and the gradient. Parameters ---------- w : ndarray, shape (n_features + 1,) or (n_features + 2,) Feature vector. w[:n_features] gives the coefficients w[-1] gives the scale factor and if the intercept is fit w[-2] gives the intercept factor. X : ndarray of shape (n_samples, n_features) Input data. y : ndarray of shape (n_samples,) Target vector. epsilon : float Robustness of the Huber estimator. alpha : float Regularization parameter. sample_weight : ndarray of shape (n_samples,), default=None Weight assigned to each sample. Returns ------- loss : float Huber loss. gradient : ndarray, shape (len(w)) Returns the derivative of the Huber loss with respect to each coefficient, intercept and the scale as a vector. """ _, n_features = X.shape fit_intercept = n_features + 2 == w.shape[0] if fit_intercept: intercept = w[-2] sigma = w[-1] w = w[:n_features] n_samples = np.sum(sample_weight) # Calculate the values where |y - X'w -c / sigma| > epsilon # The values above this threshold are outliers. linear_loss = y - safe_sparse_dot(X, w) if fit_intercept: linear_loss -= intercept abs_linear_loss = np.abs(linear_loss) outliers_mask = abs_linear_loss > epsilon * sigma # Calculate the linear loss due to the outliers. # This is equal to (2 * M * |y - X'w -c / sigma| - M**2) * sigma outliers = abs_linear_loss[outliers_mask] num_outliers = np.count_nonzero(outliers_mask) n_non_outliers = X.shape[0] - num_outliers # n_sq_outliers includes the weight give to the outliers while # num_outliers is just the number of outliers. outliers_sw = sample_weight[outliers_mask] n_sw_outliers = np.sum(outliers_sw) outlier_loss = ( 2.0 * epsilon * np.sum(outliers_sw * outliers) - sigma * n_sw_outliers * epsilon**2 ) # Calculate the quadratic loss due to the non-outliers.- # This is equal to |(y - X'w - c)**2 / sigma**2| * sigma non_outliers = linear_loss[~outliers_mask] weighted_non_outliers = sample_weight[~outliers_mask] * non_outliers weighted_loss = np.dot(weighted_non_outliers.T, non_outliers) squared_loss = weighted_loss / sigma if fit_intercept: grad = np.zeros(n_features + 2) else: grad = np.zeros(n_features + 1) # Gradient due to the squared loss. X_non_outliers = -axis0_safe_slice(X, ~outliers_mask, n_non_outliers) grad[:n_features] = ( 2.0 / sigma * safe_sparse_dot(weighted_non_outliers, X_non_outliers) ) # Gradient due to the linear loss. signed_outliers = np.ones_like(outliers) signed_outliers_mask = linear_loss[outliers_mask] < 0 signed_outliers[signed_outliers_mask] = -1.0 X_outliers = axis0_safe_slice(X, outliers_mask, num_outliers) sw_outliers = sample_weight[outliers_mask] * signed_outliers grad[:n_features] -= 2.0 * epsilon * (safe_sparse_dot(sw_outliers, X_outliers)) # Gradient due to the penalty. grad[:n_features] += alpha * 2.0 * w # Gradient due to sigma. grad[-1] = n_samples grad[-1] -= n_sw_outliers * epsilon**2 grad[-1] -= squared_loss / sigma # Gradient due to the intercept. if fit_intercept: grad[-2] = -2.0 * np.sum(weighted_non_outliers) / sigma grad[-2] -= 2.0 * epsilon * np.sum(sw_outliers) loss = n_samples * sigma + squared_loss + outlier_loss loss += alpha * np.dot(w, w) return loss, grad
Returns the Huber loss and the gradient. Parameters ---------- w : ndarray, shape (n_features + 1,) or (n_features + 2,) Feature vector. w[:n_features] gives the coefficients, w[-1] gives the scale factor, and if the intercept is fit, w[-2] gives the intercept. X : ndarray of shape (n_samples, n_features) Input data. y : ndarray of shape (n_samples,) Target vector. epsilon : float Robustness of the Huber estimator. alpha : float Regularization parameter. sample_weight : ndarray of shape (n_samples,), default=None Weight assigned to each sample. Returns ------- loss : float Huber loss. gradient : ndarray, shape (len(w),) The derivative of the Huber loss with respect to each coefficient, the intercept and the scale, as a vector.
_huber_loss_and_gradient
python
scikit-learn/scikit-learn
sklearn/linear_model/_huber.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_huber.py
BSD-3-Clause
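The analytic gradient returned by this helper can be checked numerically. The sketch below imports the private function (its module path is internal and may change between versions), uses no intercept so that ``w`` is (coefficients..., sigma), and verifies the gradient with scipy on synthetic data:

import numpy as np
from scipy.optimize import check_grad
from sklearn.linear_model._huber import _huber_loss_and_gradient  # private helper

rng = np.random.RandomState(0)
X = rng.randn(40, 3)
y = X @ np.array([1.0, -1.0, 2.0]) + rng.randn(40)
sw = np.ones(40)  # unit sample weights (the helper expects an array here)

# w = (coef_0, coef_1, coef_2, sigma); sigma must stay strictly positive.
w0 = np.array([0.5, -0.5, 1.0, 1.2])
loss = lambda w: _huber_loss_and_gradient(w, X, y, 1.35, 0.001, sw)[0]
grad = lambda w: _huber_loss_and_gradient(w, X, y, 1.35, 0.001, sw)[1]
print(check_grad(loss, grad, w0))  # should be small relative to the gradient norm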
def fit(self, X, y, sample_weight=None): """Fit the model according to the given training data. Parameters ---------- X : array-like, shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like, shape (n_samples,) Target vector relative to X. sample_weight : array-like, shape (n_samples,) Weight given to each sample. Returns ------- self : object Fitted `HuberRegressor` estimator. """ X, y = validate_data( self, X, y, copy=False, accept_sparse=["csr"], y_numeric=True, dtype=[np.float64, np.float32], ) sample_weight = _check_sample_weight(sample_weight, X) if self.warm_start and hasattr(self, "coef_"): parameters = np.concatenate((self.coef_, [self.intercept_, self.scale_])) else: if self.fit_intercept: parameters = np.zeros(X.shape[1] + 2) else: parameters = np.zeros(X.shape[1] + 1) # Make sure to initialize the scale parameter to a strictly # positive value: parameters[-1] = 1 # Sigma or the scale factor should be non-negative. # Setting it to be zero might cause undefined bounds hence we set it # to a value close to zero. bounds = np.tile([-np.inf, np.inf], (parameters.shape[0], 1)) bounds[-1][0] = np.finfo(np.float64).eps * 10 opt_res = optimize.minimize( _huber_loss_and_gradient, parameters, method="L-BFGS-B", jac=True, args=(X, y, self.epsilon, self.alpha, sample_weight), options={"maxiter": self.max_iter, "gtol": self.tol, "iprint": -1}, bounds=bounds, ) parameters = opt_res.x if opt_res.status == 2: raise ValueError( "HuberRegressor convergence failed: l-BFGS-b solver terminated with %s" % opt_res.message ) self.n_iter_ = _check_optimize_result("lbfgs", opt_res, self.max_iter) self.scale_ = parameters[-1] if self.fit_intercept: self.intercept_ = parameters[-2] else: self.intercept_ = 0.0 self.coef_ = parameters[: X.shape[1]] residual = np.abs(y - safe_sparse_dot(X, self.coef_) - self.intercept_) self.outliers_ = residual > self.scale_ * self.epsilon return self
Fit the model according to the given training data. Parameters ---------- X : array-like, shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like, shape (n_samples,) Target vector relative to X. sample_weight : array-like, shape (n_samples,) Weight given to each sample. Returns ------- self : object Fitted `HuberRegressor` estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_huber.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_huber.py
BSD-3-Clause
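Usage sketch: fit on data with a few gross outliers and read off the boolean ``outliers_`` mask computed at the end of `fit` (synthetic data, arbitrary hyperparameters):

import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = X @ np.array([1.0, 3.0]) + 0.2 * rng.randn(100)
y[:5] += 20.0  # corrupt a few targets to create outliers

huber = HuberRegressor(epsilon=1.35, alpha=1e-4).fit(X, y)
print(huber.coef_, huber.intercept_)
print(np.flatnonzero(huber.outliers_)[:10])  # sample indices flagged as outliers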
def _fit(self, X, y, max_iter, alpha, fit_path, Xy=None): """Auxiliary method to fit the model using X, y as training data""" n_features = X.shape[1] X, y, X_offset, y_offset, X_scale = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=self.copy_X ) if y.ndim == 1: y = y[:, np.newaxis] n_targets = y.shape[1] Gram = self._get_gram(self.precompute, X, y) self.alphas_ = [] self.n_iter_ = [] self.coef_ = np.empty((n_targets, n_features), dtype=X.dtype) if fit_path: self.active_ = [] self.coef_path_ = [] for k in range(n_targets): this_Xy = None if Xy is None else Xy[:, k] alphas, active, coef_path, n_iter_ = lars_path( X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X, copy_Gram=True, alpha_min=alpha, method=self.method, verbose=max(0, self.verbose - 1), max_iter=max_iter, eps=self.eps, return_path=True, return_n_iter=True, positive=self.positive, ) self.alphas_.append(alphas) self.active_.append(active) self.n_iter_.append(n_iter_) self.coef_path_.append(coef_path) self.coef_[k] = coef_path[:, -1] if n_targets == 1: self.alphas_, self.active_, self.coef_path_, self.coef_ = [ a[0] for a in (self.alphas_, self.active_, self.coef_path_, self.coef_) ] self.n_iter_ = self.n_iter_[0] else: for k in range(n_targets): this_Xy = None if Xy is None else Xy[:, k] alphas, _, self.coef_[k], n_iter_ = lars_path( X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X, copy_Gram=True, alpha_min=alpha, method=self.method, verbose=max(0, self.verbose - 1), max_iter=max_iter, eps=self.eps, return_path=False, return_n_iter=True, positive=self.positive, ) self.alphas_.append(alphas) self.n_iter_.append(n_iter_) if n_targets == 1: self.alphas_ = self.alphas_[0] self.n_iter_ = self.n_iter_[0] self._set_intercept(X_offset, y_offset, X_scale) return self
Auxiliary method to fit the model using X, y as training data
_fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
def fit(self, X, y, Xy=None): """Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Xy : array-like of shape (n_features,) or (n_features, n_targets), \ default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. Returns ------- self : object Returns an instance of self. """ X, y = validate_data( self, X, y, force_writeable=True, y_numeric=True, multi_output=True ) alpha = getattr(self, "alpha", 0.0) if hasattr(self, "n_nonzero_coefs"): alpha = 0.0 # n_nonzero_coefs parametrization takes priority max_iter = self.n_nonzero_coefs else: max_iter = self.max_iter if self.jitter is not None: rng = check_random_state(self.random_state) noise = rng.uniform(high=self.jitter, size=len(y)) y = y + noise self._fit( X, y, max_iter=max_iter, alpha=alpha, fit_path=self.fit_path, Xy=Xy, ) return self
Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Xy : array-like of shape (n_features,) or (n_features, n_targets), default=None Xy = np.dot(X.T, y) that can be precomputed. It is useful only when the Gram matrix is precomputed. Returns ------- self : object Returns an instance of self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
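Usage sketch with the public Lars estimator; as described above, ``n_nonzero_coefs`` takes priority over ``alpha`` when it is set (synthetic data, illustrative values):

import numpy as np
from sklearn.linear_model import Lars

rng = np.random.RandomState(0)
X = rng.randn(50, 10)
y = X[:, 2] - 2 * X[:, 7] + 0.01 * rng.randn(50)

lars = Lars(n_nonzero_coefs=2).fit(X, y)
print(lars.active_)           # indices of the selected features (e.g. the informative ones)
print(lars.coef_path_.shape)  # (n_features, n_alphas): coefficients along the path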
def _lars_path_residues( X_train, y_train, X_test, y_test, Gram=None, copy=True, method="lar", verbose=False, fit_intercept=True, max_iter=500, eps=np.finfo(float).eps, positive=False, ): """Compute the residues on left-out data for a full LARS path Parameters ----------- X_train : array-like of shape (n_samples, n_features) The data to fit the LARS on y_train : array-like of shape (n_samples,) The target variable to fit LARS on X_test : array-like of shape (n_samples, n_features) The data to compute the residues on y_test : array-like of shape (n_samples,) The target variable to compute the residues on Gram : None, 'auto' or array-like of shape (n_features, n_features), \ default=None Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram matrix is precomputed from the given X, if there are more samples than features copy : bool, default=True Whether X_train, X_test, y_train and y_test should be copied; if False, they may be overwritten. method : {'lar' , 'lasso'}, default='lar' Specifies the returned model. Select ``'lar'`` for Least Angle Regression, ``'lasso'`` for the Lasso. verbose : bool or int, default=False Sets the amount of verbosity fit_intercept : bool, default=True whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). positive : bool, default=False Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. See reservations for using this option in combination with method 'lasso' for expected small values of alpha in the doc of LassoLarsCV and LassoLarsIC. max_iter : int, default=500 Maximum number of iterations to perform. eps : float, default=np.finfo(float).eps The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the ``tol`` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization. Returns -------- alphas : array-like of shape (n_alphas,) Maximum of covariances (in absolute value) at each iteration. ``n_alphas`` is either ``max_iter`` or ``n_features``, whichever is smaller. active : list Indices of active variables at the end of the path. coefs : array-like of shape (n_features, n_alphas) Coefficients along the path residues : array-like of shape (n_alphas, n_samples) Residues of the prediction on the test data """ X_train = _check_copy_and_writeable(X_train, copy) y_train = _check_copy_and_writeable(y_train, copy) X_test = _check_copy_and_writeable(X_test, copy) y_test = _check_copy_and_writeable(y_test, copy) if fit_intercept: X_mean = X_train.mean(axis=0) X_train -= X_mean X_test -= X_mean y_mean = y_train.mean(axis=0) y_train = as_float_array(y_train, copy=False) y_train -= y_mean y_test = as_float_array(y_test, copy=False) y_test -= y_mean alphas, active, coefs = lars_path( X_train, y_train, Gram=Gram, copy_X=False, copy_Gram=False, method=method, verbose=max(0, verbose - 1), max_iter=max_iter, eps=eps, positive=positive, ) residues = np.dot(X_test, coefs) - y_test[:, np.newaxis] return alphas, active, coefs, residues.T
Compute the residues on left-out data for a full LARS path Parameters ----------- X_train : array-like of shape (n_samples, n_features) The data to fit the LARS on y_train : array-like of shape (n_samples,) The target variable to fit LARS on X_test : array-like of shape (n_samples, n_features) The data to compute the residues on y_test : array-like of shape (n_samples,) The target variable to compute the residues on Gram : None, 'auto' or array-like of shape (n_features, n_features), default=None Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram matrix is precomputed from the given X, if there are more samples than features copy : bool, default=True Whether X_train, X_test, y_train and y_test should be copied; if False, they may be overwritten. method : {'lar' , 'lasso'}, default='lar' Specifies the returned model. Select ``'lar'`` for Least Angle Regression, ``'lasso'`` for the Lasso. verbose : bool or int, default=False Sets the amount of verbosity fit_intercept : bool, default=True whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). positive : bool, default=False Restrict coefficients to be >= 0. Be aware that you might want to remove fit_intercept which is set True by default. See reservations for using this option in combination with method 'lasso' for expected small values of alpha in the doc of LassoLarsCV and LassoLarsIC. max_iter : int, default=500 Maximum number of iterations to perform. eps : float, default=np.finfo(float).eps The machine-precision regularization in the computation of the Cholesky diagonal factors. Increase this for very ill-conditioned systems. Unlike the ``tol`` parameter in some iterative optimization-based algorithms, this parameter does not control the tolerance of the optimization. Returns -------- alphas : array-like of shape (n_alphas,) Maximum of covariances (in absolute value) at each iteration. ``n_alphas`` is either ``max_iter`` or ``n_features``, whichever is smaller. active : list Indices of active variables at the end of the path. coefs : array-like of shape (n_features, n_alphas) Coefficients along the path residues : array-like of shape (n_alphas, n_samples) Residues of the prediction on the test data
_lars_path_residues
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
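The per-fold computation above can be sketched with the public ``lars_path`` function: fit the path on a training fold, then form residuals of the test fold at every knot of the path. Fold indices and data are made up, and the centering done by the private helper is skipped for brevity:

import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.RandomState(0)
X = rng.randn(80, 6)
y = X[:, 1] - X[:, 4] + 0.05 * rng.randn(80)
train, test = np.arange(60), np.arange(60, 80)

alphas, active, coefs = lars_path(X[train], y[train], method="lasso")
residues = X[test] @ coefs - y[test][:, np.newaxis]
print(alphas.shape, residues.shape)  # (n_alphas,), (n_test, n_alphas)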
def fit(self, X, y, **params): """Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. **params : dict, default=None Parameters to be passed to the CV splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of self. """ _raise_for_params(params, self, "fit") X, y = validate_data(self, X, y, force_writeable=True, y_numeric=True) X = as_float_array(X, copy=self.copy_X) y = as_float_array(y, copy=self.copy_X) # init cross-validation generator cv = check_cv(self.cv, classifier=False) if _routing_enabled(): routed_params = process_routing(self, "fit", **params) else: routed_params = Bunch(splitter=Bunch(split={})) # As we use cross-validation, the Gram matrix is not precomputed here Gram = self.precompute if hasattr(Gram, "__array__"): warnings.warn( 'Parameter "precompute" cannot be an array in ' '%s. Automatically switch to "auto" instead.' % self.__class__.__name__ ) Gram = "auto" cv_paths = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)( delayed(_lars_path_residues)( X[train], y[train], X[test], y[test], Gram=Gram, copy=False, method=self.method, verbose=max(0, self.verbose - 1), fit_intercept=self.fit_intercept, max_iter=self.max_iter, eps=self.eps, positive=self.positive, ) for train, test in cv.split(X, y, **routed_params.splitter.split) ) all_alphas = np.concatenate(next(zip(*cv_paths))) # Unique also sorts all_alphas = np.unique(all_alphas) # Take at most max_n_alphas values stride = int(max(1, int(len(all_alphas) / float(self.max_n_alphas)))) all_alphas = all_alphas[::stride] mse_path = np.empty((len(all_alphas), len(cv_paths))) for index, (alphas, _, _, residues) in enumerate(cv_paths): alphas = alphas[::-1] residues = residues[::-1] if alphas[0] != 0: alphas = np.r_[0, alphas] residues = np.r_[residues[0, np.newaxis], residues] if alphas[-1] != all_alphas[-1]: alphas = np.r_[alphas, all_alphas[-1]] residues = np.r_[residues, residues[-1, np.newaxis]] this_residues = interpolate.interp1d(alphas, residues, axis=0)(all_alphas) this_residues **= 2 mse_path[:, index] = np.mean(this_residues, axis=-1) mask = np.all(np.isfinite(mse_path), axis=-1) all_alphas = all_alphas[mask] mse_path = mse_path[mask] # Select the alpha that minimizes left-out error i_best_alpha = np.argmin(mse_path.mean(axis=-1)) best_alpha = all_alphas[i_best_alpha] # Store our parameters self.alpha_ = best_alpha self.cv_alphas_ = all_alphas self.mse_path_ = mse_path # Now compute the full model using best_alpha # it will call a lasso internally when self if LassoLarsCV # as self.method == 'lasso' self._fit( X, y, max_iter=self.max_iter, alpha=best_alpha, Xy=None, fit_path=True, ) return self
Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. **params : dict, default=None Parameters to be passed to the CV splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
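Usage sketch with LassoLarsCV, one of the public estimators that drives this `fit`; the number of folds and the data are arbitrary:

import numpy as np
from sklearn.linear_model import LassoLarsCV

rng = np.random.RandomState(0)
X = rng.randn(120, 10)
y = X[:, 0] + 0.5 * X[:, 5] + 0.1 * rng.randn(120)

reg = LassoLarsCV(cv=5).fit(X, y)
print(reg.alpha_)                                 # alpha minimizing the cross-validated MSE
print(reg.cv_alphas_.shape, reg.mse_path_.shape)  # (n_cv_alphas,), (n_cv_alphas, n_folds)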
def get_metadata_routing(self): """Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. """ router = MetadataRouter(owner=self.__class__.__name__).add( splitter=check_cv(self.cv), method_mapping=MethodMapping().add(caller="fit", callee="split"), ) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
def fit(self, X, y, copy_X=None): """Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. copy_X : bool, default=None If provided, this parameter will override the choice of copy_X made at instance creation. If ``True``, X will be copied; else, it may be overwritten. Returns ------- self : object Returns an instance of self. """ if copy_X is None: copy_X = self.copy_X X, y = validate_data(self, X, y, force_writeable=True, y_numeric=True) X, y, Xmean, ymean, Xstd = _preprocess_data( X, y, fit_intercept=self.fit_intercept, copy=copy_X ) Gram = self.precompute alphas_, _, coef_path_, self.n_iter_ = lars_path( X, y, Gram=Gram, copy_X=copy_X, copy_Gram=True, alpha_min=0.0, method="lasso", verbose=self.verbose, max_iter=self.max_iter, eps=self.eps, return_n_iter=True, positive=self.positive, ) n_samples = X.shape[0] if self.criterion == "aic": criterion_factor = 2 elif self.criterion == "bic": criterion_factor = log(n_samples) else: raise ValueError( f"criterion should be either bic or aic, got {self.criterion!r}" ) residuals = y[:, np.newaxis] - np.dot(X, coef_path_) residuals_sum_squares = np.sum(residuals**2, axis=0) degrees_of_freedom = np.zeros(coef_path_.shape[1], dtype=int) for k, coef in enumerate(coef_path_.T): mask = np.abs(coef) > np.finfo(coef.dtype).eps if not np.any(mask): continue # get the number of degrees of freedom equal to: # Xc = X[:, mask] # Trace(Xc * inv(Xc.T, Xc) * Xc.T) ie the number of non-zero coefs degrees_of_freedom[k] = np.sum(mask) self.alphas_ = alphas_ if self.noise_variance is None: self.noise_variance_ = self._estimate_noise_variance( X, y, positive=self.positive ) else: self.noise_variance_ = self.noise_variance self.criterion_ = ( n_samples * np.log(2 * np.pi * self.noise_variance_) + residuals_sum_squares / self.noise_variance_ + criterion_factor * degrees_of_freedom ) n_best = np.argmin(self.criterion_) self.alpha_ = alphas_[n_best] self.coef_ = coef_path_[:, n_best] self._set_intercept(Xmean, ymean, Xstd) return self
Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. copy_X : bool, default=None If provided, this parameter will override the choice of copy_X made at instance creation. If ``True``, X will be copied; else, it may be overwritten. Returns ------- self : object Returns an instance of self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
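Usage sketch with LassoLarsIC; the information criterion selects alpha along the Lasso-LARS path without cross-validation (synthetic data with many more samples than features, as required for the noise-variance estimate):

import numpy as np
from sklearn.linear_model import LassoLarsIC

rng = np.random.RandomState(0)
X = rng.randn(200, 12)
y = X[:, 0] - 2 * X[:, 3] + 0.5 * rng.randn(200)

bic = LassoLarsIC(criterion="bic").fit(X, y)
print(bic.alpha_)            # alpha minimizing the BIC
print(bic.criterion_.shape)  # criterion value at every knot of the path
print(bic.noise_variance_)   # OLS-based noise estimate used in the criterion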
def _estimate_noise_variance(self, X, y, positive): """Compute an estimate of the variance with an OLS model. Parameters ---------- X : ndarray of shape (n_samples, n_features) Data to be fitted by the OLS model. We expect the data to be centered. y : ndarray of shape (n_samples,) Associated target. positive : bool, default=False Restrict coefficients to be >= 0. This should be inline with the `positive` parameter from `LassoLarsIC`. Returns ------- noise_variance : float An estimator of the noise variance of an OLS model. """ if X.shape[0] <= X.shape[1] + self.fit_intercept: raise ValueError( f"You are using {self.__class__.__name__} in the case where the number " "of samples is smaller than the number of features. In this setting, " "getting a good estimate for the variance of the noise is not " "possible. Provide an estimate of the noise variance in the " "constructor." ) # X and y are already centered and we don't need to fit with an intercept ols_model = LinearRegression(positive=positive, fit_intercept=False) y_pred = ols_model.fit(X, y).predict(X) return np.sum((y - y_pred) ** 2) / ( X.shape[0] - X.shape[1] - self.fit_intercept )
Compute an estimate of the variance with an OLS model. Parameters ---------- X : ndarray of shape (n_samples, n_features) Data to be fitted by the OLS model. We expect the data to be centered. y : ndarray of shape (n_samples,) Associated target. positive : bool, default=False Restrict coefficients to be >= 0. This should be in line with the `positive` parameter from `LassoLarsIC`. Returns ------- noise_variance : float An estimator of the noise variance of an OLS model.
_estimate_noise_variance
python
scikit-learn/scikit-learn
sklearn/linear_model/_least_angle.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_least_angle.py
BSD-3-Clause
def init_zero_coef(self, X, dtype=None): """Allocate coef of correct shape with zeros. Parameters: ----------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. dtype : data-type, default=None Overrides the data type of coef. With dtype=None, coef will have the same dtype as X. Returns ------- coef : ndarray of shape (n_dof,) or (n_classes, n_dof) Coefficients of a linear model. """ n_features = X.shape[1] n_classes = self.base_loss.n_classes if self.fit_intercept: n_dof = n_features + 1 else: n_dof = n_features if self.base_loss.is_multiclass: coef = np.zeros_like(X, shape=(n_classes, n_dof), dtype=dtype, order="F") else: coef = np.zeros_like(X, shape=n_dof, dtype=dtype) return coef
Allocate coef of correct shape with zeros. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. dtype : data-type, default=None Overrides the data type of coef. With dtype=None, coef will have the same dtype as X. Returns ------- coef : ndarray of shape (n_dof,) or (n_classes, n_dof) Coefficients of a linear model.
init_zero_coef
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
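`LinearModelLoss` lives in the private module `sklearn.linear_model._linear_loss`; a sketch of the coefficient shapes produced by `init_zero_coef` for a binary and a multiclass base loss (module paths and loss classes are internal and may change between versions):

import numpy as np
from sklearn._loss.loss import HalfBinomialLoss, HalfMultinomialLoss  # private
from sklearn.linear_model._linear_loss import LinearModelLoss          # private

X = np.zeros((8, 4))

binary = LinearModelLoss(base_loss=HalfBinomialLoss(), fit_intercept=True)
print(binary.init_zero_coef(X).shape)  # (5,) == n_features + 1 for the intercept

multi = LinearModelLoss(base_loss=HalfMultinomialLoss(n_classes=3), fit_intercept=True)
print(multi.init_zero_coef(X).shape)   # (3, 5), F-ordered, one row per class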
def weight_intercept(self, coef): """Helper function to get coefficients and intercept. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). Returns ------- weights : ndarray of shape (n_features,) or (n_classes, n_features) Coefficients without intercept term. intercept : float or ndarray of shape (n_classes,) Intercept terms. """ if not self.base_loss.is_multiclass: if self.fit_intercept: intercept = coef[-1] weights = coef[:-1] else: intercept = 0.0 weights = coef else: # reshape to (n_classes, n_dof) if coef.ndim == 1: weights = coef.reshape((self.base_loss.n_classes, -1), order="F") else: weights = coef if self.fit_intercept: intercept = weights[:, -1] weights = weights[:, :-1] else: intercept = 0.0 return weights, intercept
Helper function to get coefficients and intercept. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). Returns ------- weights : ndarray of shape (n_features,) or (n_classes, n_features) Coefficients without intercept term. intercept : float or ndarray of shape (n_classes,) Intercept terms.
weight_intercept
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
def weight_intercept_raw(self, coef, X): """Helper function to get coefficients, intercept and raw_prediction. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Returns ------- weights : ndarray of shape (n_features,) or (n_classes, n_features) Coefficients without intercept term. intercept : float or ndarray of shape (n_classes,) Intercept terms. raw_prediction : ndarray of shape (n_samples,) or \ (n_samples, n_classes) """ weights, intercept = self.weight_intercept(coef) if not self.base_loss.is_multiclass: raw_prediction = X @ weights + intercept else: # weights has shape (n_classes, n_dof) raw_prediction = X @ weights.T + intercept # ndarray, likely C-contiguous return weights, intercept, raw_prediction
Helper function to get coefficients, intercept and raw_prediction. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. Returns ------- weights : ndarray of shape (n_features,) or (n_classes, n_features) Coefficients without intercept term. intercept : float or ndarray of shape (n_classes,) Intercept terms. raw_prediction : ndarray of shape (n_samples,) or (n_samples, n_classes) Raw prediction values in link space, i.e. the dot product of X with the weights plus the intercept.
weight_intercept_raw
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
def loss( self, coef, X, y, sample_weight=None, l2_reg_strength=0.0, n_threads=1, raw_prediction=None, ): """Compute the loss as weighted average over point-wise losses. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of \ shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- loss : float Weighted average of losses per sample, plus penalty. """ if raw_prediction is None: weights, intercept, raw_prediction = self.weight_intercept_raw(coef, X) else: weights, intercept = self.weight_intercept(coef) loss = self.base_loss.loss( y_true=y, raw_prediction=raw_prediction, sample_weight=None, n_threads=n_threads, ) loss = np.average(loss, weights=sample_weight) return loss + self.l2_penalty(weights, l2_reg_strength)
Compute the loss as weighted average over point-wise losses. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- loss : float Weighted average of losses per sample, plus penalty.
loss
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
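A small sketch evaluating the penalized loss for a binary problem; these are the same private helpers as above, so names and paths are subject to change, and the data is synthetic:

import numpy as np
from sklearn._loss.loss import HalfBinomialLoss                 # private
from sklearn.linear_model._linear_loss import LinearModelLoss   # private

rng = np.random.RandomState(0)
X = rng.randn(30, 4)
y = (rng.rand(30) < 0.5).astype(np.float64)  # targets must be floats in {0, 1}

linloss = LinearModelLoss(base_loss=HalfBinomialLoss(), fit_intercept=True)
coef = rng.randn(5)  # 4 weights followed by the intercept (last entry)

value = linloss.loss(coef, X, y, l2_reg_strength=1.0)
print(value)  # mean log loss plus the L2 penalty on the weights (intercept excluded)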
def loss_gradient( self, coef, X, y, sample_weight=None, l2_reg_strength=0.0, n_threads=1, raw_prediction=None, ): """Computes the sum of loss and gradient w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of \ shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- loss : float Weighted average of losses per sample, plus penalty. gradient : ndarray of shape coef.shape The gradient of the loss. """ (n_samples, n_features), n_classes = X.shape, self.base_loss.n_classes n_dof = n_features + int(self.fit_intercept) if raw_prediction is None: weights, intercept, raw_prediction = self.weight_intercept_raw(coef, X) else: weights, intercept = self.weight_intercept(coef) loss, grad_pointwise = self.base_loss.loss_gradient( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) sw_sum = n_samples if sample_weight is None else np.sum(sample_weight) loss = loss.sum() / sw_sum loss += self.l2_penalty(weights, l2_reg_strength) grad_pointwise /= sw_sum if not self.base_loss.is_multiclass: grad = np.empty_like(coef, dtype=weights.dtype) grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights if self.fit_intercept: grad[-1] = grad_pointwise.sum() else: grad = np.empty((n_classes, n_dof), dtype=weights.dtype, order="F") # grad_pointwise.shape = (n_samples, n_classes) grad[:, :n_features] = grad_pointwise.T @ X + l2_reg_strength * weights if self.fit_intercept: grad[:, -1] = grad_pointwise.sum(axis=0) if coef.ndim == 1: grad = grad.ravel(order="F") return loss, grad
Computes the sum of loss and gradient w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- loss : float Weighted average of losses per sample, plus penalty. gradient : ndarray of shape coef.shape The gradient of the loss.
loss_gradient
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
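The analytic gradient returned by `loss_gradient` can be verified against finite differences; a sketch with scipy, using the same private helpers and arbitrary synthetic data as above:

import numpy as np
from scipy.optimize import check_grad
from sklearn._loss.loss import HalfBinomialLoss                 # private
from sklearn.linear_model._linear_loss import LinearModelLoss   # private

rng = np.random.RandomState(0)
X = rng.randn(40, 3)
y = (rng.rand(40) < 0.5).astype(np.float64)

linloss = LinearModelLoss(base_loss=HalfBinomialLoss(), fit_intercept=True)
coef0 = rng.randn(4)  # 3 weights + intercept

f = lambda w: linloss.loss(w, X, y, l2_reg_strength=0.5)
g = lambda w: linloss.loss_gradient(w, X, y, l2_reg_strength=0.5)[1]
print(check_grad(f, g, coef0))  # expected to be small (finite-difference error only)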
def gradient( self, coef, X, y, sample_weight=None, l2_reg_strength=0.0, n_threads=1, raw_prediction=None, ): """Computes the gradient w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of \ shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- gradient : ndarray of shape coef.shape The gradient of the loss. """ (n_samples, n_features), n_classes = X.shape, self.base_loss.n_classes n_dof = n_features + int(self.fit_intercept) if raw_prediction is None: weights, intercept, raw_prediction = self.weight_intercept_raw(coef, X) else: weights, intercept = self.weight_intercept(coef) grad_pointwise = self.base_loss.gradient( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) sw_sum = n_samples if sample_weight is None else np.sum(sample_weight) grad_pointwise /= sw_sum if not self.base_loss.is_multiclass: grad = np.empty_like(coef, dtype=weights.dtype) grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights if self.fit_intercept: grad[-1] = grad_pointwise.sum() return grad else: grad = np.empty((n_classes, n_dof), dtype=weights.dtype, order="F") # gradient.shape = (n_samples, n_classes) grad[:, :n_features] = grad_pointwise.T @ X + l2_reg_strength * weights if self.fit_intercept: grad[:, -1] = grad_pointwise.sum(axis=0) if coef.ndim == 1: return grad.ravel(order="F") else: return grad
Computes the gradient w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. raw_prediction : C-contiguous array of shape (n_samples,) or array of shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- gradient : ndarray of shape coef.shape The gradient of the loss.
gradient
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
def gradient_hessian( self, coef, X, y, sample_weight=None, l2_reg_strength=0.0, n_threads=1, gradient_out=None, hessian_out=None, raw_prediction=None, ): """Computes gradient and hessian w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. gradient_out : None or ndarray of shape coef.shape A location into which the gradient is stored. If None, a new array might be created. hessian_out : None or ndarray of shape (n_dof, n_dof) or \ (n_classes * n_dof, n_classes * n_dof) A location into which the hessian is stored. If None, a new array might be created. raw_prediction : C-contiguous array of shape (n_samples,) or array of \ shape (n_samples, n_classes) Raw prediction values (in link space). If provided, these are used. If None, then raw_prediction = X @ coef + intercept is calculated. Returns ------- gradient : ndarray of shape coef.shape The gradient of the loss. hessian : ndarray of shape (n_dof, n_dof) or \ (n_classes, n_dof, n_dof, n_classes) Hessian matrix. hessian_warning : bool True if pointwise hessian has more than 25% of its elements non-positive. """ (n_samples, n_features), n_classes = X.shape, self.base_loss.n_classes n_dof = n_features + int(self.fit_intercept) if raw_prediction is None: weights, intercept, raw_prediction = self.weight_intercept_raw(coef, X) else: weights, intercept = self.weight_intercept(coef) sw_sum = n_samples if sample_weight is None else np.sum(sample_weight) # Allocate gradient. if gradient_out is None: grad = np.empty_like(coef, dtype=weights.dtype, order="F") elif gradient_out.shape != coef.shape: raise ValueError( f"gradient_out is required to have shape coef.shape = {coef.shape}; " f"got {gradient_out.shape}." ) elif self.base_loss.is_multiclass and not gradient_out.flags.f_contiguous: raise ValueError("gradient_out must be F-contiguous.") else: grad = gradient_out # Allocate hessian. n = coef.size # for multinomial this equals n_dof * n_classes if hessian_out is None: hess = np.empty((n, n), dtype=weights.dtype) elif hessian_out.shape != (n, n): raise ValueError( f"hessian_out is required to have shape ({n, n}); got " f"{hessian_out.shape=}." ) elif self.base_loss.is_multiclass and ( not hessian_out.flags.c_contiguous and not hessian_out.flags.f_contiguous ): raise ValueError("hessian_out must be contiguous.") else: hess = hessian_out if not self.base_loss.is_multiclass: grad_pointwise, hess_pointwise = self.base_loss.gradient_hessian( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) grad_pointwise /= sw_sum hess_pointwise /= sw_sum # For non-canonical link functions and far away from the optimum, the # pointwise hessian can be negative. We take care that 75% of the hessian # entries are positive. 
hessian_warning = ( np.average(hess_pointwise <= 0, weights=sample_weight) > 0.25 ) hess_pointwise = np.abs(hess_pointwise) grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights if self.fit_intercept: grad[-1] = grad_pointwise.sum() if hessian_warning: # Exit early without computing the hessian. return grad, hess, hessian_warning hess[:n_features, :n_features] = sandwich_dot(X, hess_pointwise) if l2_reg_strength > 0: # The L2 penalty enters the Hessian on the diagonal only. To add those # terms, we use a flattened view of the array. order = "C" if hess.flags.c_contiguous else "F" hess.reshape(-1, order=order)[: (n_features * n_dof) : (n_dof + 1)] += ( l2_reg_strength ) if self.fit_intercept: # With intercept included as added column to X, the hessian becomes # hess = (X, 1)' @ diag(h) @ (X, 1) # = (X' @ diag(h) @ X, X' @ h) # ( h @ X, sum(h)) # The left upper part has already been filled, it remains to compute # the last row and the last column. Xh = X.T @ hess_pointwise hess[:-1, -1] = Xh hess[-1, :-1] = Xh hess[-1, -1] = hess_pointwise.sum() else: # Here we may safely assume HalfMultinomialLoss aka categorical # cross-entropy. # HalfMultinomialLoss computes only the diagonal part of the hessian, i.e. # diagonal in the classes. Here, we want the full hessian. Therefore, we # call gradient_proba. grad_pointwise, proba = self.base_loss.gradient_proba( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) grad_pointwise /= sw_sum grad = grad.reshape((n_classes, n_dof), order="F") grad[:, :n_features] = grad_pointwise.T @ X + l2_reg_strength * weights if self.fit_intercept: grad[:, -1] = grad_pointwise.sum(axis=0) if coef.ndim == 1: grad = grad.ravel(order="F") # The full hessian matrix, i.e. not only the diagonal part, dropping most # indices, is given by: # # hess = X' @ h @ X # # Here, h is a priori a 4-dimensional matrix of shape # (n_samples, n_samples, n_classes, n_classes). It is diagonal its first # two dimensions (the ones with n_samples), i.e. it is # effectively a 3-dimensional matrix (n_samples, n_classes, n_classes). # # h = diag(p) - p' p # # or with indices k and l for classes # # h_kl = p_k * delta_kl - p_k * p_l # # with p_k the (predicted) probability for class k. Only the dimension in # n_samples multiplies with X. # For 3 classes and n_samples = 1, this looks like ("@" is a bit misused # here): # # hess = X' @ (h00 h10 h20) @ X # (h10 h11 h12) # (h20 h12 h22) # = (X' @ diag(h00) @ X, X' @ diag(h10), X' @ diag(h20)) # (X' @ diag(h10) @ X, X' @ diag(h11), X' @ diag(h12)) # (X' @ diag(h20) @ X, X' @ diag(h12), X' @ diag(h22)) # # Now coef of shape (n_classes * n_dof) is contiguous in n_classes. # Therefore, we want the hessian to follow this convention, too, i.e. # hess[:n_classes, :n_classes] = (x0' @ h00 @ x0, x0' @ h10 @ x0, ..) # (x0' @ h10 @ x0, x0' @ h11 @ x0, ..) # (x0' @ h20 @ x0, x0' @ h12 @ x0, ..) # is the first feature, x0, for all classes. In our implementation, we # still want to take advantage of BLAS "X.T @ X". Therefore, we have some # index/slicing battle to fight. if sample_weight is not None: sw = sample_weight / sw_sum else: sw = 1.0 / sw_sum for k in range(n_classes): # Diagonal terms (in classes) hess_kk. # Note that this also writes to some of the lower triangular part. h = proba[:, k] * (1 - proba[:, k]) * sw hess[ k : n_classes * n_features : n_classes, k : n_classes * n_features : n_classes, ] = sandwich_dot(X, h) if self.fit_intercept: # See above in the non multiclass case. 
Xh = X.T @ h hess[ k : n_classes * n_features : n_classes, n_classes * n_features + k, ] = Xh hess[ n_classes * n_features + k, k : n_classes * n_features : n_classes, ] = Xh hess[n_classes * n_features + k, n_classes * n_features + k] = ( h.sum() ) # Off diagonal terms (in classes) hess_kl. for l in range(k + 1, n_classes): # Upper triangle (in classes). h = -proba[:, k] * proba[:, l] * sw hess[ k : n_classes * n_features : n_classes, l : n_classes * n_features : n_classes, ] = sandwich_dot(X, h) if self.fit_intercept: Xh = X.T @ h hess[ k : n_classes * n_features : n_classes, n_classes * n_features + l, ] = Xh hess[ n_classes * n_features + k, l : n_classes * n_features : n_classes, ] = Xh hess[n_classes * n_features + k, n_classes * n_features + l] = ( h.sum() ) # Fill lower triangle (in classes). hess[l::n_classes, k::n_classes] = hess[k::n_classes, l::n_classes] if l2_reg_strength > 0: # See above in the non multiclass case. order = "C" if hess.flags.c_contiguous else "F" hess.reshape(-1, order=order)[ : (n_classes**2 * n_features * n_dof) : (n_classes * n_dof + 1) ] += l2_reg_strength # The pointwise hessian is always non-negative for the multinomial loss. hessian_warning = False return grad, hess, hessian_warning
Computes gradient and hessian w.r.t. coef.

        Parameters
        ----------
        coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,)
            Coefficients of a linear model.
            If shape (n_classes * n_dof,), the classes of one feature are contiguous,
            i.e. one reconstructs the 2d-array via
            coef.reshape((n_classes, -1), order="F").
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Training data.
        y : contiguous array of shape (n_samples,)
            Observed, true target values.
        sample_weight : None or contiguous array of shape (n_samples,), default=None
            Sample weights.
        l2_reg_strength : float, default=0.0
            L2 regularization strength.
        n_threads : int, default=1
            Number of OpenMP threads to use.
        gradient_out : None or ndarray of shape coef.shape
            A location into which the gradient is stored. If None, a new array
            might be created.
        hessian_out : None or ndarray of shape (n_dof, n_dof) or
            (n_classes * n_dof, n_classes * n_dof)
            A location into which the hessian is stored. If None, a new array
            might be created.
        raw_prediction : C-contiguous array of shape (n_samples,) or array of
            shape (n_samples, n_classes)
            Raw prediction values (in link space). If provided, these are used. If
            None, then raw_prediction = X @ coef + intercept is calculated.

        Returns
        -------
        gradient : ndarray of shape coef.shape
            The gradient of the loss.

        hessian : ndarray of shape (n_dof, n_dof) or
            (n_classes, n_dof, n_dof, n_classes)
            Hessian matrix.

        hessian_warning : bool
            True if pointwise hessian has more than 25% of its elements non-positive.
gradient_hessian
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
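A hedged sketch of one (unguarded) Newton step built from `gradient_hessian`, under the same private-API assumptions as in the previous example. The update `coef - H^{-1} g` is only illustrative; a real solver (e.g. the Newton-Cholesky path below) adds line search and fallback logic when `hessian_warning` is raised.

import numpy as np
from sklearn._loss.loss import HalfBinomialLoss
from sklearn.linear_model._linear_loss import LinearModelLoss

rng = np.random.RandomState(0)
X = rng.standard_normal(size=(50, 4))
y = (X[:, 0] + 0.1 * rng.standard_normal(size=50) > 0).astype(np.float64)

loss = LinearModelLoss(base_loss=HalfBinomialLoss(), fit_intercept=True)
coef = np.zeros(X.shape[1] + 1)

grad, hess, hess_warning = loss.gradient_hessian(coef, X, y, l2_reg_strength=1e-3)
if not hess_warning:
    # One plain Newton update: coef_new = coef - H^{-1} g
    coef = coef - np.linalg.solve(hess, grad)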
def gradient_hessian_product( self, coef, X, y, sample_weight=None, l2_reg_strength=0.0, n_threads=1 ): """Computes gradient and hessp (hessian product function) w.r.t. coef. Parameters ---------- coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,) Coefficients of a linear model. If shape (n_classes * n_dof,), the classes of one feature are contiguous, i.e. one reconstructs the 2d-array via coef.reshape((n_classes, -1), order="F"). X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : contiguous array of shape (n_samples,) Observed, true target values. sample_weight : None or contiguous array of shape (n_samples,), default=None Sample weights. l2_reg_strength : float, default=0.0 L2 regularization strength n_threads : int, default=1 Number of OpenMP threads to use. Returns ------- gradient : ndarray of shape coef.shape The gradient of the loss. hessp : callable Function that takes in a vector input of shape of gradient and and returns matrix-vector product with hessian. """ (n_samples, n_features), n_classes = X.shape, self.base_loss.n_classes n_dof = n_features + int(self.fit_intercept) weights, intercept, raw_prediction = self.weight_intercept_raw(coef, X) sw_sum = n_samples if sample_weight is None else np.sum(sample_weight) if not self.base_loss.is_multiclass: grad_pointwise, hess_pointwise = self.base_loss.gradient_hessian( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) grad_pointwise /= sw_sum hess_pointwise /= sw_sum grad = np.empty_like(coef, dtype=weights.dtype) grad[:n_features] = X.T @ grad_pointwise + l2_reg_strength * weights if self.fit_intercept: grad[-1] = grad_pointwise.sum() # Precompute as much as possible: hX, hX_sum and hessian_sum hessian_sum = hess_pointwise.sum() if sparse.issparse(X): hX = ( sparse.dia_matrix((hess_pointwise, 0), shape=(n_samples, n_samples)) @ X ) else: hX = hess_pointwise[:, np.newaxis] * X if self.fit_intercept: # Calculate the double derivative with respect to intercept. # Note: In case hX is sparse, hX.sum is a matrix object. hX_sum = np.squeeze(np.asarray(hX.sum(axis=0))) # prevent squeezing to zero-dim array if n_features == 1 hX_sum = np.atleast_1d(hX_sum) # With intercept included and l2_reg_strength = 0, hessp returns # res = (X, 1)' @ diag(h) @ (X, 1) @ s # = (X, 1)' @ (hX @ s[:n_features], sum(h) * s[-1]) # res[:n_features] = X' @ hX @ s[:n_features] + sum(h) * s[-1] # res[-1] = 1' @ hX @ s[:n_features] + sum(h) * s[-1] def hessp(s): ret = np.empty_like(s) if sparse.issparse(X): ret[:n_features] = X.T @ (hX @ s[:n_features]) else: ret[:n_features] = np.linalg.multi_dot([X.T, hX, s[:n_features]]) ret[:n_features] += l2_reg_strength * s[:n_features] if self.fit_intercept: ret[:n_features] += s[-1] * hX_sum ret[-1] = hX_sum @ s[:n_features] + hessian_sum * s[-1] return ret else: # Here we may safely assume HalfMultinomialLoss aka categorical # cross-entropy. # HalfMultinomialLoss computes only the diagonal part of the hessian, i.e. # diagonal in the classes. Here, we want the matrix-vector product of the # full hessian. Therefore, we call gradient_proba. 
grad_pointwise, proba = self.base_loss.gradient_proba( y_true=y, raw_prediction=raw_prediction, sample_weight=sample_weight, n_threads=n_threads, ) grad_pointwise /= sw_sum grad = np.empty((n_classes, n_dof), dtype=weights.dtype, order="F") grad[:, :n_features] = grad_pointwise.T @ X + l2_reg_strength * weights if self.fit_intercept: grad[:, -1] = grad_pointwise.sum(axis=0) # Full hessian-vector product, i.e. not only the diagonal part of the # hessian. Derivation with some index battle for input vector s: # - sample index i # - feature indices j, m # - class indices k, l # - 1_{k=l} is one if k=l else 0 # - p_i_k is the (predicted) probability that sample i belongs to class k # for all i: sum_k p_i_k = 1 # - s_l_m is input vector for class l and feature m # - X' = X transposed # # Note: Hessian with dropping most indices is just: # X' @ p_k (1(k=l) - p_l) @ X # # result_{k j} = sum_{i, l, m} Hessian_{i, k j, m l} * s_l_m # = sum_{i, l, m} (X')_{ji} * p_i_k * (1_{k=l} - p_i_l) # * X_{im} s_l_m # = sum_{i, m} (X')_{ji} * p_i_k # * (X_{im} * s_k_m - sum_l p_i_l * X_{im} * s_l_m) # # See also https://github.com/scikit-learn/scikit-learn/pull/3646#discussion_r17461411 def hessp(s): s = s.reshape((n_classes, -1), order="F") # shape = (n_classes, n_dof) if self.fit_intercept: s_intercept = s[:, -1] s = s[:, :-1] # shape = (n_classes, n_features) else: s_intercept = 0 tmp = X @ s.T + s_intercept # X_{im} * s_k_m tmp += (-proba * tmp).sum(axis=1)[:, np.newaxis] # - sum_l .. tmp *= proba # * p_i_k if sample_weight is not None: tmp *= sample_weight[:, np.newaxis] # hess_prod = empty_like(grad), but we ravel grad below and this # function is run after that. hess_prod = np.empty((n_classes, n_dof), dtype=weights.dtype, order="F") hess_prod[:, :n_features] = (tmp.T @ X) / sw_sum + l2_reg_strength * s if self.fit_intercept: hess_prod[:, -1] = tmp.sum(axis=0) / sw_sum if coef.ndim == 1: return hess_prod.ravel(order="F") else: return hess_prod if coef.ndim == 1: return grad.ravel(order="F"), hessp return grad, hessp
Computes gradient and hessp (hessian product function) w.r.t. coef.

        Parameters
        ----------
        coef : ndarray of shape (n_dof,), (n_classes, n_dof) or (n_classes * n_dof,)
            Coefficients of a linear model.
            If shape (n_classes * n_dof,), the classes of one feature are contiguous,
            i.e. one reconstructs the 2d-array via
            coef.reshape((n_classes, -1), order="F").
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Training data.
        y : contiguous array of shape (n_samples,)
            Observed, true target values.
        sample_weight : None or contiguous array of shape (n_samples,), default=None
            Sample weights.
        l2_reg_strength : float, default=0.0
            L2 regularization strength.
        n_threads : int, default=1
            Number of OpenMP threads to use.

        Returns
        -------
        gradient : ndarray of shape coef.shape
            The gradient of the loss.

        hessp : callable
            Function that takes a vector input of the same shape as gradient and
            returns the matrix-vector product with the hessian.
gradient_hessian_product
python
scikit-learn/scikit-learn
sklearn/linear_model/_linear_loss.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_linear_loss.py
BSD-3-Clause
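A sketch of using the returned `hessp` callable for a multinomial problem, again assuming the private `LinearModelLoss` / `HalfMultinomialLoss` API shown in this file. The coefficient layout (classes contiguous, raveled as with `order="F"`) follows the docstring above; the target is cast to float because the loss routines expect floating-point class indices.

import numpy as np
from sklearn._loss.loss import HalfMultinomialLoss
from sklearn.linear_model._linear_loss import LinearModelLoss

rng = np.random.RandomState(0)
n_samples, n_features, n_classes = 30, 4, 3
X = rng.standard_normal(size=(n_samples, n_features))
y = rng.randint(n_classes, size=n_samples).astype(np.float64)

loss = LinearModelLoss(
    base_loss=HalfMultinomialLoss(n_classes=n_classes), fit_intercept=True
)
coef = np.zeros(n_classes * (n_features + 1))  # raveled, classes contiguous

grad, hessp = loss.gradient_hessian_product(coef, X, y, l2_reg_strength=1e-2)
s = rng.standard_normal(size=coef.shape)
print(grad.shape, hessp(s).shape)  # both equal coef.shape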
def _check_multi_class(multi_class, solver, n_classes):
    """Computes the multi class type, either "multinomial" or "ovr".

    For `n_classes` > 2 and a solver that supports it, returns "multinomial".
    For all other cases, in particular binary classification, return "ovr".
    """
    if multi_class == "auto":
        if solver in ("liblinear",):
            multi_class = "ovr"
        elif n_classes > 2:
            multi_class = "multinomial"
        else:
            multi_class = "ovr"
    if multi_class == "multinomial" and solver in ("liblinear",):
        raise ValueError("Solver %s does not support a multinomial backend." % solver)
    return multi_class
Computes the multi class type, either "multinomial" or "ovr".

    For `n_classes` > 2 and a solver that supports it, returns "multinomial".
    For all other cases, in particular binary classification, return "ovr".
_check_multi_class
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
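A small illustration of the dispatch implemented by this private helper, assuming it stays importable from `sklearn.linear_model._logistic` (an internal path that may change):

from sklearn.linear_model._logistic import _check_multi_class  # private helper

assert _check_multi_class("auto", "lbfgs", 3) == "multinomial"
assert _check_multi_class("auto", "lbfgs", 2) == "ovr"
assert _check_multi_class("auto", "liblinear", 3) == "ovr"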
def _logistic_regression_path( X, y, pos_class=None, Cs=10, fit_intercept=True, max_iter=100, tol=1e-4, verbose=0, solver="lbfgs", coef=None, class_weight=None, dual=False, penalty="l2", intercept_scaling=1.0, multi_class="auto", random_state=None, check_input=True, max_squared_sum=None, sample_weight=None, l1_ratio=None, n_threads=1, ): """Compute a Logistic Regression model for a list of regularization parameters. This is an implementation that uses the result of the previous model to speed up computations along the set of solutions, making it faster than sequentially calling LogisticRegression for the different parameters. Note that there will be no speedup with liblinear solver, since it does not handle warm-starting. Read more in the :ref:`User Guide <logistic_regression>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Input data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Input data, target values. pos_class : int, default=None The class with respect to which we perform a one-vs-all fit. If None, then it is assumed that the given problem is binary. Cs : int or array-like of shape (n_cs,), default=10 List of values for the regularization parameter or integer specifying the number of regularization parameters that should be used. In this case, the parameters will be chosen in a logarithmic scale between 1e-4 and 1e4. fit_intercept : bool, default=True Whether to fit an intercept for the model. In this case the shape of the returned array is (n_cs, n_features + 1). max_iter : int, default=100 Maximum number of iterations for the solver. tol : float, default=1e-4 Stopping criterion. For the newton-cg and lbfgs solvers, the iteration will stop when ``max{|g_i | i = 1, ..., n} <= tol`` where ``g_i`` is the i-th component of the gradient. verbose : int, default=0 For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}, \ default='lbfgs' Numerical solver to use. coef : array-like of shape (n_features,), default=None Initialization value for coefficients of logistic regression. Useless for liblinear solver. class_weight : dict or 'balanced', default=None Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. dual : bool, default=False Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features. penalty : {'l1', 'l2', 'elasticnet'}, default='l2' Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. intercept_scaling : float, default=1. Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes ``intercept_scaling * synthetic_feature_weight``. Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. 
To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased. multi_class : {'ovr', 'multinomial', 'auto'}, default='auto' If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'. .. versionadded:: 0.18 Stochastic Average Gradient descent solver for 'multinomial' case. .. versionchanged:: 0.22 Default changed from 'ovr' to 'auto' in 0.22. random_state : int, RandomState instance, default=None Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. See :term:`Glossary <random_state>` for details. check_input : bool, default=True If False, the input arrays X and y will not be checked. max_squared_sum : float, default=None Maximum squared sum of X over samples. Used only in SAG solver. If None, it will be computed, going through all the samples. The value should be precomputed to speed up cross validation. sample_weight : array-like of shape(n_samples,), default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. l1_ratio : float, default=None The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio <1``, the penalty is a combination of L1 and L2. n_threads : int, default=1 Number of OpenMP threads to use. Returns ------- coefs : ndarray of shape (n_cs, n_features) or (n_cs, n_features + 1) List of coefficients for the Logistic Regression model. If fit_intercept is set to True then the second dimension will be n_features + 1, where the last item represents the intercept. For ``multiclass='multinomial'``, the shape is (n_classes, n_cs, n_features) or (n_classes, n_cs, n_features + 1). Cs : ndarray Grid of Cs used for cross-validation. n_iter : array of shape (n_cs,) Actual number of iteration for each Cs. Notes ----- You might get slightly different results with the solver liblinear than with the others since this uses LIBLINEAR which penalizes the intercept. .. versionchanged:: 0.19 The "copy" parameter was removed. """ if isinstance(Cs, numbers.Integral): Cs = np.logspace(-4, 4, Cs) solver = _check_solver(solver, penalty, dual) # Preprocessing. if check_input: X = check_array( X, accept_sparse="csr", dtype=np.float64, accept_large_sparse=solver not in ["liblinear", "sag", "saga"], ) y = check_array(y, ensure_2d=False, dtype=None) check_consistent_length(X, y) n_samples, n_features = X.shape classes = np.unique(y) random_state = check_random_state(random_state) multi_class = _check_multi_class(multi_class, solver, len(classes)) if pos_class is None and multi_class != "multinomial": if classes.size > 2: raise ValueError("To fit OvR, use the pos_class argument") # np.unique(y) gives labels in sorted order. pos_class = classes[1] if sample_weight is not None or class_weight is not None: sample_weight = _check_sample_weight(sample_weight, X, dtype=X.dtype, copy=True) # If class_weights is a dict (provided by the user), the weights # are assigned to the original labels. 
If it is "balanced", then # the class_weights are assigned after masking the labels with a OvR. le = LabelEncoder() if isinstance(class_weight, dict) or ( multi_class == "multinomial" and class_weight is not None ): class_weight_ = compute_class_weight( class_weight, classes=classes, y=y, sample_weight=sample_weight ) sample_weight *= class_weight_[le.fit_transform(y)] # For doing a ovr, we need to mask the labels first. For the # multinomial case this is not necessary. if multi_class == "ovr": w0 = np.zeros(n_features + int(fit_intercept), dtype=X.dtype) mask = y == pos_class y_bin = np.ones(y.shape, dtype=X.dtype) if solver == "liblinear": mask_classes = np.array([-1, 1]) y_bin[~mask] = -1.0 else: # HalfBinomialLoss, used for those solvers, represents y in [0, 1] instead # of in [-1, 1]. mask_classes = np.array([0, 1]) y_bin[~mask] = 0.0 # for compute_class_weight if class_weight == "balanced": class_weight_ = compute_class_weight( class_weight, classes=mask_classes, y=y_bin, sample_weight=sample_weight, ) sample_weight *= class_weight_[le.fit_transform(y_bin)] else: if solver in ["sag", "saga", "lbfgs", "newton-cg", "newton-cholesky"]: # SAG, lbfgs, newton-cg and newton-cholesky multinomial solvers need # LabelEncoder, not LabelBinarizer, i.e. y as a 1d-array of integers. # LabelEncoder also saves memory compared to LabelBinarizer, especially # when n_classes is large. le = LabelEncoder() Y_multi = le.fit_transform(y).astype(X.dtype, copy=False) else: # For liblinear solver, apply LabelBinarizer, i.e. y is one-hot encoded. lbin = LabelBinarizer() Y_multi = lbin.fit_transform(y) if Y_multi.shape[1] == 1: Y_multi = np.hstack([1 - Y_multi, Y_multi]) w0 = np.zeros( (classes.size, n_features + int(fit_intercept)), order="F", dtype=X.dtype ) # IMPORTANT NOTE: # All solvers relying on LinearModelLoss need to scale the penalty with n_samples # or the sum of sample weights because the implemented logistic regression # objective here is (unfortunately) # C * sum(pointwise_loss) + penalty # instead of (as LinearModelLoss does) # mean(pointwise_loss) + 1/C * penalty if solver in ["lbfgs", "newton-cg", "newton-cholesky"]: # This needs to be calculated after sample_weight is multiplied by # class_weight. It is even tested that passing class_weight is equivalent to # passing sample_weights according to class_weight. sw_sum = n_samples if sample_weight is None else np.sum(sample_weight) if coef is not None: # it must work both giving the bias term and not if multi_class == "ovr": if coef.size not in (n_features, w0.size): raise ValueError( "Initialization coef is of shape %d, expected shape %d or %d" % (coef.size, n_features, w0.size) ) w0[: coef.size] = coef else: # For binary problems coef.shape[0] should be 1, otherwise it # should be classes.size. n_classes = classes.size if n_classes == 2: n_classes = 1 if coef.shape[0] != n_classes or coef.shape[1] not in ( n_features, n_features + 1, ): raise ValueError( "Initialization coef is of shape (%d, %d), expected " "shape (%d, %d) or (%d, %d)" % ( coef.shape[0], coef.shape[1], classes.size, n_features, classes.size, n_features + 1, ) ) if n_classes == 1: w0[0, : coef.shape[1]] = -coef w0[1, : coef.shape[1]] = coef else: w0[:, : coef.shape[1]] = coef if multi_class == "multinomial": if solver in ["lbfgs", "newton-cg", "newton-cholesky"]: # scipy.optimize.minimize and newton-cg accept only ravelled parameters, # i.e. 1d-arrays. LinearModelLoss expects classes to be contiguous and # reconstructs the 2d-array via w0.reshape((n_classes, -1), order="F"). 
# As w0 is F-contiguous, ravel(order="F") also avoids a copy. w0 = w0.ravel(order="F") loss = LinearModelLoss( base_loss=HalfMultinomialLoss(n_classes=classes.size), fit_intercept=fit_intercept, ) target = Y_multi if solver == "lbfgs": func = loss.loss_gradient elif solver == "newton-cg": func = loss.loss grad = loss.gradient hess = loss.gradient_hessian_product # hess = [gradient, hessp] warm_start_sag = {"coef": w0.T} else: target = y_bin if solver == "lbfgs": loss = LinearModelLoss( base_loss=HalfBinomialLoss(), fit_intercept=fit_intercept ) func = loss.loss_gradient elif solver == "newton-cg": loss = LinearModelLoss( base_loss=HalfBinomialLoss(), fit_intercept=fit_intercept ) func = loss.loss grad = loss.gradient hess = loss.gradient_hessian_product # hess = [gradient, hessp] elif solver == "newton-cholesky": loss = LinearModelLoss( base_loss=HalfBinomialLoss(), fit_intercept=fit_intercept ) warm_start_sag = {"coef": np.expand_dims(w0, axis=1)} coefs = list() n_iter = np.zeros(len(Cs), dtype=np.int32) for i, C in enumerate(Cs): if solver == "lbfgs": l2_reg_strength = 1.0 / (C * sw_sum) iprint = [-1, 50, 1, 100, 101][ np.searchsorted(np.array([0, 1, 2, 3]), verbose) ] opt_res = optimize.minimize( func, w0, method="L-BFGS-B", jac=True, args=(X, target, sample_weight, l2_reg_strength, n_threads), options={ "maxiter": max_iter, "maxls": 50, # default is 20 "iprint": iprint, "gtol": tol, "ftol": 64 * np.finfo(float).eps, }, ) n_iter_i = _check_optimize_result( solver, opt_res, max_iter, extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG, ) w0, loss = opt_res.x, opt_res.fun elif solver == "newton-cg": l2_reg_strength = 1.0 / (C * sw_sum) args = (X, target, sample_weight, l2_reg_strength, n_threads) w0, n_iter_i = _newton_cg( grad_hess=hess, func=func, grad=grad, x0=w0, args=args, maxiter=max_iter, tol=tol, verbose=verbose, ) elif solver == "newton-cholesky": l2_reg_strength = 1.0 / (C * sw_sum) sol = NewtonCholeskySolver( coef=w0, linear_loss=loss, l2_reg_strength=l2_reg_strength, tol=tol, max_iter=max_iter, n_threads=n_threads, verbose=verbose, ) w0 = sol.solve(X=X, y=target, sample_weight=sample_weight) n_iter_i = sol.iteration elif solver == "liblinear": if len(classes) > 2: warnings.warn( "Using the 'liblinear' solver for multiclass classification is " "deprecated. An error will be raised in 1.8. Either use another " "solver which supports the multinomial loss or wrap the estimator " "in a OneVsRestClassifier to keep applying a one-versus-rest " "scheme.", FutureWarning, ) ( coef_, intercept_, n_iter_i, ) = _fit_liblinear( X, target, C, fit_intercept, intercept_scaling, None, penalty, dual, verbose, max_iter, tol, random_state, sample_weight=sample_weight, ) if fit_intercept: w0 = np.concatenate([coef_.ravel(), intercept_]) else: w0 = coef_.ravel() # n_iter_i is an array for each class. However, `target` is always encoded # in {-1, 1}, so we only take the first element of n_iter_i. 
n_iter_i = n_iter_i.item() elif solver in ["sag", "saga"]: if multi_class == "multinomial": target = target.astype(X.dtype, copy=False) loss = "multinomial" else: loss = "log" # alpha is for L2-norm, beta is for L1-norm if penalty == "l1": alpha = 0.0 beta = 1.0 / C elif penalty == "l2": alpha = 1.0 / C beta = 0.0 else: # Elastic-Net penalty alpha = (1.0 / C) * (1 - l1_ratio) beta = (1.0 / C) * l1_ratio w0, n_iter_i, warm_start_sag = sag_solver( X, target, sample_weight, loss, alpha, beta, max_iter, tol, verbose, random_state, False, max_squared_sum, warm_start_sag, is_saga=(solver == "saga"), ) else: raise ValueError( "solver must be one of {'liblinear', 'lbfgs', " "'newton-cg', 'sag'}, got '%s' instead" % solver ) if multi_class == "multinomial": n_classes = max(2, classes.size) if solver in ["lbfgs", "newton-cg", "newton-cholesky"]: multi_w0 = np.reshape(w0, (n_classes, -1), order="F") else: multi_w0 = w0 if n_classes == 2: multi_w0 = multi_w0[1][np.newaxis, :] coefs.append(multi_w0.copy()) else: coefs.append(w0.copy()) n_iter[i] = n_iter_i return np.array(coefs), np.array(Cs), n_iter
Compute a Logistic Regression model for a list of regularization parameters. This is an implementation that uses the result of the previous model to speed up computations along the set of solutions, making it faster than sequentially calling LogisticRegression for the different parameters. Note that there will be no speedup with liblinear solver, since it does not handle warm-starting. Read more in the :ref:`User Guide <logistic_regression>`. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Input data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Input data, target values. pos_class : int, default=None The class with respect to which we perform a one-vs-all fit. If None, then it is assumed that the given problem is binary. Cs : int or array-like of shape (n_cs,), default=10 List of values for the regularization parameter or integer specifying the number of regularization parameters that should be used. In this case, the parameters will be chosen in a logarithmic scale between 1e-4 and 1e4. fit_intercept : bool, default=True Whether to fit an intercept for the model. In this case the shape of the returned array is (n_cs, n_features + 1). max_iter : int, default=100 Maximum number of iterations for the solver. tol : float, default=1e-4 Stopping criterion. For the newton-cg and lbfgs solvers, the iteration will stop when ``max{|g_i | i = 1, ..., n} <= tol`` where ``g_i`` is the i-th component of the gradient. verbose : int, default=0 For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'}, default='lbfgs' Numerical solver to use. coef : array-like of shape (n_features,), default=None Initialization value for coefficients of logistic regression. Useless for liblinear solver. class_weight : dict or 'balanced', default=None Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))``. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. dual : bool, default=False Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features. penalty : {'l1', 'l2', 'elasticnet'}, default='l2' Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. intercept_scaling : float, default=1. Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equal to intercept_scaling is appended to the instance vector. The intercept becomes ``intercept_scaling * synthetic_feature_weight``. Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased. multi_class : {'ovr', 'multinomial', 'auto'}, default='auto' If the option chosen is 'ovr', then a binary problem is fit for each label. 
For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. 'auto' selects 'ovr' if the data is binary, or if solver='liblinear', and otherwise selects 'multinomial'. .. versionadded:: 0.18 Stochastic Average Gradient descent solver for 'multinomial' case. .. versionchanged:: 0.22 Default changed from 'ovr' to 'auto' in 0.22. random_state : int, RandomState instance, default=None Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. See :term:`Glossary <random_state>` for details. check_input : bool, default=True If False, the input arrays X and y will not be checked. max_squared_sum : float, default=None Maximum squared sum of X over samples. Used only in SAG solver. If None, it will be computed, going through all the samples. The value should be precomputed to speed up cross validation. sample_weight : array-like of shape(n_samples,), default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. l1_ratio : float, default=None The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio <1``, the penalty is a combination of L1 and L2. n_threads : int, default=1 Number of OpenMP threads to use. Returns ------- coefs : ndarray of shape (n_cs, n_features) or (n_cs, n_features + 1) List of coefficients for the Logistic Regression model. If fit_intercept is set to True then the second dimension will be n_features + 1, where the last item represents the intercept. For ``multiclass='multinomial'``, the shape is (n_classes, n_cs, n_features) or (n_classes, n_cs, n_features + 1). Cs : ndarray Grid of Cs used for cross-validation. n_iter : array of shape (n_cs,) Actual number of iteration for each Cs. Notes ----- You might get slightly different results with the solver liblinear than with the others since this uses LIBLINEAR which penalizes the intercept. .. versionchanged:: 0.19 The "copy" parameter was removed.
_logistic_regression_path
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
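A minimal sketch of computing a regularization path with this private helper on a toy dataset. The import path, signature and returned shapes are taken from the code above and may differ in other scikit-learn versions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model._logistic import _logistic_regression_path  # private

X, y = make_classification(n_samples=100, n_features=5, random_state=0)
coefs, Cs, n_iter = _logistic_regression_path(
    X, y, Cs=np.logspace(-2, 2, 5), solver="lbfgs", max_iter=200
)
print(coefs.shape)  # (n_Cs, n_features + 1) for this binary fit with an intercept
print(n_iter)       # iterations spent on each C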
def _log_reg_scoring_path( X, y, train, test, *, pos_class, Cs, scoring, fit_intercept, max_iter, tol, class_weight, verbose, solver, penalty, dual, intercept_scaling, multi_class, random_state, max_squared_sum, sample_weight, l1_ratio, score_params, ): """Computes scores across logistic_regression_path Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target labels. train : list of indices The indices of the train set. test : list of indices The indices of the test set. pos_class : int The class with respect to which we perform a one-vs-all fit. If None, then it is assumed that the given problem is binary. Cs : int or list of floats Each of the values in Cs describes the inverse of regularization strength. If Cs is as an int, then a grid of Cs values are chosen in a logarithmic scale between 1e-4 and 1e4. scoring : str, callable or None The scoring method to use for cross-validation. Options: - str: see :ref:`scoring_string_names` for options. - callable: a scorer callable object (e.g., function) with signature ``scorer(estimator, X, y)``. See :ref:`scoring_callable` for details. - `None`: :ref:`accuracy <accuracy_score>` is used. fit_intercept : bool If False, then the bias term is set to zero. Else the last term of each coef_ gives us the intercept. max_iter : int Maximum number of iterations for the solver. tol : float Tolerance for stopping criteria. class_weight : dict or 'balanced' Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))`` Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. verbose : int For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'} Decides which solver to use. penalty : {'l1', 'l2', 'elasticnet'} Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. dual : bool Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features. intercept_scaling : float Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equals to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased. multi_class : {'auto', 'ovr', 'multinomial'} If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. random_state : int, RandomState instance Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. 
See :term:`Glossary <random_state>` for details. max_squared_sum : float Maximum squared sum of X over samples. Used only in SAG solver. If None, it will be computed, going through all the samples. The value should be precomputed to speed up cross validation. sample_weight : array-like of shape(n_samples,) Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. l1_ratio : float The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio <1``, the penalty is a combination of L1 and L2. score_params : dict Parameters to pass to the `score` method of the underlying scorer. Returns ------- coefs : ndarray of shape (n_cs, n_features) or (n_cs, n_features + 1) List of coefficients for the Logistic Regression model. If fit_intercept is set to True then the second dimension will be n_features + 1, where the last item represents the intercept. Cs : ndarray Grid of Cs used for cross-validation. scores : ndarray of shape (n_cs,) Scores obtained for each Cs. n_iter : ndarray of shape(n_cs,) Actual number of iteration for each Cs. """ X_train = X[train] X_test = X[test] y_train = y[train] y_test = y[test] sw_train, sw_test = None, None if sample_weight is not None: sample_weight = _check_sample_weight(sample_weight, X) sw_train = sample_weight[train] sw_test = sample_weight[test] coefs, Cs, n_iter = _logistic_regression_path( X_train, y_train, Cs=Cs, l1_ratio=l1_ratio, fit_intercept=fit_intercept, solver=solver, max_iter=max_iter, class_weight=class_weight, pos_class=pos_class, multi_class=multi_class, tol=tol, verbose=verbose, dual=dual, penalty=penalty, intercept_scaling=intercept_scaling, random_state=random_state, check_input=False, max_squared_sum=max_squared_sum, sample_weight=sw_train, ) log_reg = LogisticRegression(solver=solver, multi_class=multi_class) # The score method of Logistic Regression has a classes_ attribute. if multi_class == "ovr": log_reg.classes_ = np.array([-1, 1]) elif multi_class == "multinomial": log_reg.classes_ = np.unique(y_train) else: raise ValueError( "multi_class should be either multinomial or ovr, got %d" % multi_class ) if pos_class is not None: mask = y_test == pos_class y_test = np.ones(y_test.shape, dtype=np.float64) y_test[~mask] = -1.0 scores = list() scoring = get_scorer(scoring) for w in coefs: if multi_class == "ovr": w = w[np.newaxis, :] if fit_intercept: log_reg.coef_ = w[:, :-1] log_reg.intercept_ = w[:, -1] else: log_reg.coef_ = w log_reg.intercept_ = 0.0 if scoring is None: scores.append(log_reg.score(X_test, y_test, sample_weight=sw_test)) else: score_params = score_params or {} score_params = _check_method_params(X=X, params=score_params, indices=test) scores.append(scoring(log_reg, X_test, y_test, **score_params)) return coefs, Cs, np.array(scores), n_iter
Computes scores across logistic_regression_path Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target labels. train : list of indices The indices of the train set. test : list of indices The indices of the test set. pos_class : int The class with respect to which we perform a one-vs-all fit. If None, then it is assumed that the given problem is binary. Cs : int or list of floats Each of the values in Cs describes the inverse of regularization strength. If Cs is as an int, then a grid of Cs values are chosen in a logarithmic scale between 1e-4 and 1e4. scoring : str, callable or None The scoring method to use for cross-validation. Options: - str: see :ref:`scoring_string_names` for options. - callable: a scorer callable object (e.g., function) with signature ``scorer(estimator, X, y)``. See :ref:`scoring_callable` for details. - `None`: :ref:`accuracy <accuracy_score>` is used. fit_intercept : bool If False, then the bias term is set to zero. Else the last term of each coef_ gives us the intercept. max_iter : int Maximum number of iterations for the solver. tol : float Tolerance for stopping criteria. class_weight : dict or 'balanced' Weights associated with classes in the form ``{class_label: weight}``. If not given, all classes are supposed to have weight one. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as ``n_samples / (n_classes * np.bincount(y))`` Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified. verbose : int For the liblinear and lbfgs solvers set verbose to any positive number for verbosity. solver : {'lbfgs', 'liblinear', 'newton-cg', 'newton-cholesky', 'sag', 'saga'} Decides which solver to use. penalty : {'l1', 'l2', 'elasticnet'} Used to specify the norm used in the penalization. The 'newton-cg', 'sag' and 'lbfgs' solvers support only l2 penalties. 'elasticnet' is only supported by the 'saga' solver. dual : bool Dual or primal formulation. Dual formulation is only implemented for l2 penalty with liblinear solver. Prefer dual=False when n_samples > n_features. intercept_scaling : float Useful only when the solver 'liblinear' is used and self.fit_intercept is set to True. In this case, x becomes [x, self.intercept_scaling], i.e. a "synthetic" feature with constant value equals to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased. multi_class : {'auto', 'ovr', 'multinomial'} If the option chosen is 'ovr', then a binary problem is fit for each label. For 'multinomial' the loss minimised is the multinomial loss fit across the entire probability distribution, *even when the data is binary*. 'multinomial' is unavailable when solver='liblinear'. random_state : int, RandomState instance Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the data. See :term:`Glossary <random_state>` for details. max_squared_sum : float Maximum squared sum of X over samples. Used only in SAG solver. If None, it will be computed, going through all the samples. 
The value should be precomputed to speed up cross validation. sample_weight : array-like of shape(n_samples,) Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. l1_ratio : float The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent to using ``penalty='l1'``. For ``0 < l1_ratio <1``, the penalty is a combination of L1 and L2. score_params : dict Parameters to pass to the `score` method of the underlying scorer. Returns ------- coefs : ndarray of shape (n_cs, n_features) or (n_cs, n_features + 1) List of coefficients for the Logistic Regression model. If fit_intercept is set to True then the second dimension will be n_features + 1, where the last item represents the intercept. Cs : ndarray Grid of Cs used for cross-validation. scores : ndarray of shape (n_cs,) Scores obtained for each Cs. n_iter : ndarray of shape(n_cs,) Actual number of iteration for each Cs.
_log_reg_scoring_path
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
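`_log_reg_scoring_path` is the per-fold worker behind the public `LogisticRegressionCV` estimator. Rather than calling the private helper with its many keyword-only arguments, here is a public-API sketch that exercises the same path:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegressionCV(Cs=np.logspace(-2, 2, 5), cv=3, scoring="accuracy")
clf.fit(X, y)
print(clf.C_)  # best C per class (a single entry for binary y)
for label, fold_scores in clf.scores_.items():
    print(label, fold_scores.shape)  # cross-validation scores per fold and per C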
def fit(self, X, y, sample_weight=None): """ Fit the model according to the given training data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target vector relative to X. sample_weight : array-like of shape (n_samples,) default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. .. versionadded:: 0.17 *sample_weight* support to LogisticRegression. Returns ------- self Fitted estimator. Notes ----- The SAGA solver supports both float64 and float32 bit arrays. """ solver = _check_solver(self.solver, self.penalty, self.dual) if self.penalty != "elasticnet" and self.l1_ratio is not None: warnings.warn( "l1_ratio parameter is only used when penalty is " "'elasticnet'. Got " "(penalty={})".format(self.penalty) ) if self.penalty == "elasticnet" and self.l1_ratio is None: raise ValueError("l1_ratio must be specified when penalty is elasticnet.") if self.penalty is None: if self.C != 1.0: # default values warnings.warn( "Setting penalty=None will ignore the C and l1_ratio parameters" ) # Note that check for l1_ratio is done right above C_ = np.inf penalty = "l2" else: C_ = self.C penalty = self.penalty if solver == "lbfgs": _dtype = np.float64 else: _dtype = [np.float64, np.float32] X, y = validate_data( self, X, y, accept_sparse="csr", dtype=_dtype, order="C", accept_large_sparse=solver not in ["liblinear", "sag", "saga"], ) check_classification_targets(y) self.classes_ = np.unique(y) # TODO(1.8) remove multi_class multi_class = self.multi_class if self.multi_class == "multinomial" and len(self.classes_) == 2: warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. From then on, binary problems will be fit as proper binary " " logistic regression models (as if multi_class='ovr' were set)." " Leave it to its default value to avoid this warning." ), FutureWarning, ) elif self.multi_class in ("multinomial", "auto"): warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. From then on, it will always use 'multinomial'." " Leave it to its default value to avoid this warning." ), FutureWarning, ) elif self.multi_class == "ovr": warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. Use OneVsRestClassifier(LogisticRegression(..)) instead." " Leave it to its default value to avoid this warning." ), FutureWarning, ) else: # Set to old default value. multi_class = "auto" multi_class = _check_multi_class(multi_class, solver, len(self.classes_)) if solver == "liblinear": if len(self.classes_) > 2: warnings.warn( "Using the 'liblinear' solver for multiclass classification is " "deprecated. An error will be raised in 1.8. Either use another " "solver which supports the multinomial loss or wrap the estimator " "in a OneVsRestClassifier to keep applying a one-versus-rest " "scheme.", FutureWarning, ) if effective_n_jobs(self.n_jobs) != 1: warnings.warn( "'n_jobs' > 1 does not have any effect when" " 'solver' is set to 'liblinear'. 
Got 'n_jobs'" " = {}.".format(effective_n_jobs(self.n_jobs)) ) self.coef_, self.intercept_, self.n_iter_ = _fit_liblinear( X, y, self.C, self.fit_intercept, self.intercept_scaling, self.class_weight, self.penalty, self.dual, self.verbose, self.max_iter, self.tol, self.random_state, sample_weight=sample_weight, ) return self if solver in ["sag", "saga"]: max_squared_sum = row_norms(X, squared=True).max() else: max_squared_sum = None n_classes = len(self.classes_) classes_ = self.classes_ if n_classes < 2: raise ValueError( "This solver needs samples of at least 2 classes" " in the data, but the data contains only one" " class: %r" % classes_[0] ) if len(self.classes_) == 2: n_classes = 1 classes_ = classes_[1:] if self.warm_start: warm_start_coef = getattr(self, "coef_", None) else: warm_start_coef = None if warm_start_coef is not None and self.fit_intercept: warm_start_coef = np.append( warm_start_coef, self.intercept_[:, np.newaxis], axis=1 ) # Hack so that we iterate only once for the multinomial case. if multi_class == "multinomial": classes_ = [None] warm_start_coef = [warm_start_coef] if warm_start_coef is None: warm_start_coef = [None] * n_classes path_func = delayed(_logistic_regression_path) # The SAG solver releases the GIL so it's more efficient to use # threads for this solver. if solver in ["sag", "saga"]: prefer = "threads" else: prefer = "processes" # TODO: Refactor this to avoid joblib parallelism entirely when doing binary # and multinomial multiclass classification and use joblib only for the # one-vs-rest multiclass case. if ( solver in ["lbfgs", "newton-cg", "newton-cholesky"] and len(classes_) == 1 and effective_n_jobs(self.n_jobs) == 1 ): # In the future, we would like n_threads = _openmp_effective_n_threads() # For the time being, we just do n_threads = 1 else: n_threads = 1 fold_coefs_ = Parallel(n_jobs=self.n_jobs, verbose=self.verbose, prefer=prefer)( path_func( X, y, pos_class=class_, Cs=[C_], l1_ratio=self.l1_ratio, fit_intercept=self.fit_intercept, tol=self.tol, verbose=self.verbose, solver=solver, multi_class=multi_class, max_iter=self.max_iter, class_weight=self.class_weight, check_input=False, random_state=self.random_state, coef=warm_start_coef_, penalty=penalty, max_squared_sum=max_squared_sum, sample_weight=sample_weight, n_threads=n_threads, ) for class_, warm_start_coef_ in zip(classes_, warm_start_coef) ) fold_coefs_, _, n_iter_ = zip(*fold_coefs_) self.n_iter_ = np.asarray(n_iter_, dtype=np.int32)[:, 0] n_features = X.shape[1] if multi_class == "multinomial": self.coef_ = fold_coefs_[0][0] else: self.coef_ = np.asarray(fold_coefs_) self.coef_ = self.coef_.reshape( n_classes, n_features + int(self.fit_intercept) ) if self.fit_intercept: self.intercept_ = self.coef_[:, -1] self.coef_ = self.coef_[:, :-1] else: self.intercept_ = np.zeros(n_classes) return self
Fit the model according to the given training data.

        Parameters
        ----------
        X : {array-like, sparse matrix} of shape (n_samples, n_features)
            Training vector, where `n_samples` is the number of samples and
            `n_features` is the number of features.

        y : array-like of shape (n_samples,)
            Target vector relative to X.

        sample_weight : array-like of shape (n_samples,), default=None
            Array of weights that are assigned to individual samples.
            If not provided, then each sample is given unit weight.

            .. versionadded:: 0.17
               *sample_weight* support to LogisticRegression.

        Returns
        -------
        self
            Fitted estimator.

        Notes
        -----
        The SAGA solver supports both float64 and float32 bit arrays.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
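Typical public usage of the `fit` method documented above, on a small multiclass dataset:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(C=1.0, solver="lbfgs", max_iter=1000).fit(X, y)
print(clf.coef_.shape)       # (n_classes, n_features) == (3, 4)
print(clf.intercept_.shape)  # (3,)
print(clf.n_iter_)           # iterations used by the solver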
def predict_proba(self, X):
        """
        Probability estimates.

        The returned estimates for all classes are ordered by the
        label of classes.

        For a multi_class problem, if multi_class is set to be "multinomial"
        the softmax function is used to find the predicted probability of
        each class.
        Else use a one-vs-rest approach, i.e. calculate the probability
        of each class assuming it to be positive using the logistic function
        and normalize these values across all the classes.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Vector to be scored, where `n_samples` is the number of samples and
            `n_features` is the number of features.

        Returns
        -------
        T : array-like of shape (n_samples, n_classes)
            Returns the probability of the sample for each class in the model,
            where classes are ordered as they are in ``self.classes_``.
        """
        check_is_fitted(self)

        ovr = self.multi_class in ["ovr", "warn"] or (
            self.multi_class in ["auto", "deprecated"]
            and (self.classes_.size <= 2 or self.solver == "liblinear")
        )
        if ovr:
            return super()._predict_proba_lr(X)
        else:
            decision = self.decision_function(X)
            if decision.ndim == 1:
                # Workaround for multi_class="multinomial" and binary outcomes
                # which requires softmax prediction with only a 1D decision.
                decision_2d = np.c_[-decision, decision]
            else:
                decision_2d = decision
            return softmax(decision_2d, copy=False)
Probability estimates.

        The returned estimates for all classes are ordered by the
        label of classes.

        For a multi_class problem, if multi_class is set to be "multinomial"
        the softmax function is used to find the predicted probability of
        each class.
        Else use a one-vs-rest approach, i.e. calculate the probability
        of each class assuming it to be positive using the logistic function
        and normalize these values across all the classes.

        Parameters
        ----------
        X : array-like of shape (n_samples, n_features)
            Vector to be scored, where `n_samples` is the number of samples and
            `n_features` is the number of features.

        Returns
        -------
        T : array-like of shape (n_samples, n_classes)
            Returns the probability of the sample for each class in the model,
            where classes are ordered as they are in ``self.classes_``.
predict_proba
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
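A short public-API example of `predict_proba` on a multinomial fit; each row of the returned array sums to one across classes:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)
proba = clf.predict_proba(X[:3])
print(proba.shape)                          # (3, 3): n_samples x n_classes
print(np.allclose(proba.sum(axis=1), 1.0))  # True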
def fit(self, X, y, sample_weight=None, **params): """Fit the model according to the given training data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target vector relative to X. sample_weight : array-like of shape (n_samples,) default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. **params : dict Parameters to pass to the underlying splitter and scorer. .. versionadded:: 1.4 Returns ------- self : object Fitted LogisticRegressionCV estimator. """ _raise_for_params(params, self, "fit") solver = _check_solver(self.solver, self.penalty, self.dual) if self.penalty == "elasticnet": if ( self.l1_ratios is None or len(self.l1_ratios) == 0 or any( ( not isinstance(l1_ratio, numbers.Number) or l1_ratio < 0 or l1_ratio > 1 ) for l1_ratio in self.l1_ratios ) ): raise ValueError( "l1_ratios must be a list of numbers between " "0 and 1; got (l1_ratios=%r)" % self.l1_ratios ) l1_ratios_ = self.l1_ratios else: if self.l1_ratios is not None: warnings.warn( "l1_ratios parameter is only used when penalty " "is 'elasticnet'. Got (penalty={})".format(self.penalty) ) l1_ratios_ = [None] X, y = validate_data( self, X, y, accept_sparse="csr", dtype=np.float64, order="C", accept_large_sparse=solver not in ["liblinear", "sag", "saga"], ) check_classification_targets(y) class_weight = self.class_weight # Encode for string labels label_encoder = LabelEncoder().fit(y) y = label_encoder.transform(y) if isinstance(class_weight, dict): class_weight = { label_encoder.transform([cls])[0]: v for cls, v in class_weight.items() } # The original class labels classes = self.classes_ = label_encoder.classes_ encoded_labels = label_encoder.transform(label_encoder.classes_) # TODO(1.8) remove multi_class multi_class = self.multi_class if self.multi_class == "multinomial" and len(self.classes_) == 2: warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. From then on, binary problems will be fit as proper binary " " logistic regression models (as if multi_class='ovr' were set)." " Leave it to its default value to avoid this warning." ), FutureWarning, ) elif self.multi_class in ("multinomial", "auto"): warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. From then on, it will always use 'multinomial'." " Leave it to its default value to avoid this warning." ), FutureWarning, ) elif self.multi_class == "ovr": warnings.warn( ( "'multi_class' was deprecated in version 1.5 and will be removed in" " 1.7. Use OneVsRestClassifier(LogisticRegressionCV(..)) instead." " Leave it to its default value to avoid this warning." ), FutureWarning, ) else: # Set to old default value. 
multi_class = "auto" multi_class = _check_multi_class(multi_class, solver, len(classes)) if solver in ["sag", "saga"]: max_squared_sum = row_norms(X, squared=True).max() else: max_squared_sum = None if _routing_enabled(): routed_params = process_routing( self, "fit", sample_weight=sample_weight, **params, ) else: routed_params = Bunch() routed_params.splitter = Bunch(split={}) routed_params.scorer = Bunch(score=params) if sample_weight is not None: routed_params.scorer.score["sample_weight"] = sample_weight # init cross-validation generator cv = check_cv(self.cv, y, classifier=True) folds = list(cv.split(X, y, **routed_params.splitter.split)) # Use the label encoded classes n_classes = len(encoded_labels) if n_classes < 2: raise ValueError( "This solver needs samples of at least 2 classes" " in the data, but the data contains only one" " class: %r" % classes[0] ) if n_classes == 2: # OvR in case of binary problems is as good as fitting # the higher label n_classes = 1 encoded_labels = encoded_labels[1:] classes = classes[1:] # We need this hack to iterate only once over labels, in the case of # multi_class = multinomial, without changing the value of the labels. if multi_class == "multinomial": iter_encoded_labels = iter_classes = [None] else: iter_encoded_labels = encoded_labels iter_classes = classes # compute the class weights for the entire dataset y if class_weight == "balanced": class_weight = compute_class_weight( class_weight, classes=np.arange(len(self.classes_)), y=y, sample_weight=sample_weight, ) class_weight = dict(enumerate(class_weight)) path_func = delayed(_log_reg_scoring_path) # The SAG solver releases the GIL so it's more efficient to use # threads for this solver. if self.solver in ["sag", "saga"]: prefer = "threads" else: prefer = "processes" fold_coefs_ = Parallel(n_jobs=self.n_jobs, verbose=self.verbose, prefer=prefer)( path_func( X, y, train, test, pos_class=label, Cs=self.Cs, fit_intercept=self.fit_intercept, penalty=self.penalty, dual=self.dual, solver=solver, tol=self.tol, max_iter=self.max_iter, verbose=self.verbose, class_weight=class_weight, scoring=self.scoring, multi_class=multi_class, intercept_scaling=self.intercept_scaling, random_state=self.random_state, max_squared_sum=max_squared_sum, sample_weight=sample_weight, l1_ratio=l1_ratio, score_params=routed_params.scorer.score, ) for label in iter_encoded_labels for train, test in folds for l1_ratio in l1_ratios_ ) # _log_reg_scoring_path will output different shapes depending on the # multi_class param, so we need to reshape the outputs accordingly. # Cs is of shape (n_classes . n_folds . n_l1_ratios, n_Cs) and all the # rows are equal, so we just take the first one. # After reshaping, # - scores is of shape (n_classes, n_folds, n_Cs . n_l1_ratios) # - coefs_paths is of shape # (n_classes, n_folds, n_Cs . n_l1_ratios, n_features) # - n_iter is of shape # (n_classes, n_folds, n_Cs . n_l1_ratios) or # (1, n_folds, n_Cs . 
n_l1_ratios) coefs_paths, Cs, scores, n_iter_ = zip(*fold_coefs_) self.Cs_ = Cs[0] if multi_class == "multinomial": coefs_paths = np.reshape( coefs_paths, (len(folds), len(l1_ratios_) * len(self.Cs_), n_classes, -1), ) # equiv to coefs_paths = np.moveaxis(coefs_paths, (0, 1, 2, 3), # (1, 2, 0, 3)) coefs_paths = np.swapaxes(coefs_paths, 0, 1) coefs_paths = np.swapaxes(coefs_paths, 0, 2) self.n_iter_ = np.reshape( n_iter_, (1, len(folds), len(self.Cs_) * len(l1_ratios_)) ) # repeat same scores across all classes scores = np.tile(scores, (n_classes, 1, 1)) else: coefs_paths = np.reshape( coefs_paths, (n_classes, len(folds), len(self.Cs_) * len(l1_ratios_), -1), ) self.n_iter_ = np.reshape( n_iter_, (n_classes, len(folds), len(self.Cs_) * len(l1_ratios_)) ) scores = np.reshape(scores, (n_classes, len(folds), -1)) self.scores_ = dict(zip(classes, scores)) self.coefs_paths_ = dict(zip(classes, coefs_paths)) self.C_ = list() self.l1_ratio_ = list() self.coef_ = np.empty((n_classes, X.shape[1])) self.intercept_ = np.zeros(n_classes) for index, (cls, encoded_label) in enumerate( zip(iter_classes, iter_encoded_labels) ): if multi_class == "ovr": scores = self.scores_[cls] coefs_paths = self.coefs_paths_[cls] else: # For multinomial, all scores are the same across classes scores = scores[0] # coefs_paths will keep its original shape because # logistic_regression_path expects it this way if self.refit: # best_index is between 0 and (n_Cs . n_l1_ratios - 1) # for example, with n_cs=2 and n_l1_ratios=3 # the layout of scores is # [c1, c2, c1, c2, c1, c2] # l1_1 , l1_2 , l1_3 best_index = scores.sum(axis=0).argmax() best_index_C = best_index % len(self.Cs_) C_ = self.Cs_[best_index_C] self.C_.append(C_) best_index_l1 = best_index // len(self.Cs_) l1_ratio_ = l1_ratios_[best_index_l1] self.l1_ratio_.append(l1_ratio_) if multi_class == "multinomial": coef_init = np.mean(coefs_paths[:, :, best_index, :], axis=1) else: coef_init = np.mean(coefs_paths[:, best_index, :], axis=0) # Note that y is label encoded and hence pos_class must be # the encoded label / None (for 'multinomial') w, _, _ = _logistic_regression_path( X, y, pos_class=encoded_label, Cs=[C_], solver=solver, fit_intercept=self.fit_intercept, coef=coef_init, max_iter=self.max_iter, tol=self.tol, penalty=self.penalty, class_weight=class_weight, multi_class=multi_class, verbose=max(0, self.verbose - 1), random_state=self.random_state, check_input=False, max_squared_sum=max_squared_sum, sample_weight=sample_weight, l1_ratio=l1_ratio_, ) w = w[0] else: # Take the best scores across every fold and the average of # all coefficients corresponding to the best scores. 
best_indices = np.argmax(scores, axis=1) if multi_class == "ovr": w = np.mean( [coefs_paths[i, best_indices[i], :] for i in range(len(folds))], axis=0, ) else: w = np.mean( [ coefs_paths[:, i, best_indices[i], :] for i in range(len(folds)) ], axis=0, ) best_indices_C = best_indices % len(self.Cs_) self.C_.append(np.mean(self.Cs_[best_indices_C])) if self.penalty == "elasticnet": best_indices_l1 = best_indices // len(self.Cs_) self.l1_ratio_.append(np.mean(l1_ratios_[best_indices_l1])) else: self.l1_ratio_.append(None) if multi_class == "multinomial": self.C_ = np.tile(self.C_, n_classes) self.l1_ratio_ = np.tile(self.l1_ratio_, n_classes) self.coef_ = w[:, : X.shape[1]] if self.fit_intercept: self.intercept_ = w[:, -1] else: self.coef_[index] = w[: X.shape[1]] if self.fit_intercept: self.intercept_[index] = w[-1] self.C_ = np.asarray(self.C_) self.l1_ratio_ = np.asarray(self.l1_ratio_) self.l1_ratios_ = np.asarray(l1_ratios_) # if elasticnet was used, add the l1_ratios dimension to some # attributes if self.l1_ratios is not None: # with n_cs=2 and n_l1_ratios=3 # the layout of scores is # [c1, c2, c1, c2, c1, c2] # l1_1 , l1_2 , l1_3 # To get a 2d array with the following layout # l1_1, l1_2, l1_3 # c1 [[ . , . , . ], # c2 [ . , . , . ]] # We need to first reshape and then transpose. # The same goes for the other arrays for cls, coefs_path in self.coefs_paths_.items(): self.coefs_paths_[cls] = coefs_path.reshape( (len(folds), self.l1_ratios_.size, self.Cs_.size, -1) ) self.coefs_paths_[cls] = np.transpose( self.coefs_paths_[cls], (0, 2, 1, 3) ) for cls, score in self.scores_.items(): self.scores_[cls] = score.reshape( (len(folds), self.l1_ratios_.size, self.Cs_.size) ) self.scores_[cls] = np.transpose(self.scores_[cls], (0, 2, 1)) self.n_iter_ = self.n_iter_.reshape( (-1, len(folds), self.l1_ratios_.size, self.Cs_.size) ) self.n_iter_ = np.transpose(self.n_iter_, (0, 1, 3, 2)) return self
Fit the model according to the given training data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training vector, where `n_samples` is the number of samples and `n_features` is the number of features. y : array-like of shape (n_samples,) Target vector relative to X. sample_weight : array-like of shape (n_samples,) default=None Array of weights that are assigned to individual samples. If not provided, then each sample is given unit weight. **params : dict Parameters to pass to the underlying splitter and scorer. .. versionadded:: 1.4 Returns ------- self : object Fitted LogisticRegressionCV estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
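A hedged usage sketch of the fit method documented above: it fits LogisticRegressionCV on a standard dataset and inspects the cross-validated attributes built during fit (C_, scores_). The dataset, Cs, cv and max_iter values are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)   # scaling helps the default lbfgs solver converge
clf = LogisticRegressionCV(Cs=5, cv=3, max_iter=1000).fit(X, y)
print(clf.C_)                 # best C per class (a single entry for a binary problem)
print(clf.scores_[1].shape)   # (n_folds, n_Cs) for the positive class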
def score(self, X, y, sample_weight=None, **score_params): """Score using the `scoring` option on the given test data and labels. Parameters ---------- X : array-like of shape (n_samples, n_features) Test samples. y : array-like of shape (n_samples,) True labels for X. sample_weight : array-like of shape (n_samples,), default=None Sample weights. **score_params : dict Parameters to pass to the `score` method of the underlying scorer. .. versionadded:: 1.4 Returns ------- score : float Score of self.predict(X) w.r.t. y. """ _raise_for_params(score_params, self, "score") scoring = self._get_scorer() if _routing_enabled(): routed_params = process_routing( self, "score", sample_weight=sample_weight, **score_params, ) else: routed_params = Bunch() routed_params.scorer = Bunch(score={}) if sample_weight is not None: routed_params.scorer.score["sample_weight"] = sample_weight return scoring( self, X, y, **routed_params.scorer.score, )
Score using the `scoring` option on the given test data and labels. Parameters ---------- X : array-like of shape (n_samples, n_features) Test samples. y : array-like of shape (n_samples,) True labels for X. sample_weight : array-like of shape (n_samples,), default=None Sample weights. **score_params : dict Parameters to pass to the `score` method of the underlying scorer. .. versionadded:: 1.4 Returns ------- score : float Score of self.predict(X) w.r.t. y.
score
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
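A hedged sketch of the score method above: because it honours the estimator's `scoring` option, the value printed here is a negative log-loss rather than accuracy. Dataset and settings are illustrative.

from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegressionCV(cv=3, scoring="neg_log_loss").fit(X, y)
print(clf.score(X, y))   # negative log-loss; closer to 0 is better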
def get_metadata_routing(self): """Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. """ router = ( MetadataRouter(owner=self.__class__.__name__) .add_self_request(self) .add( splitter=self.cv, method_mapping=MethodMapping().add(caller="fit", callee="split"), ) .add( scorer=self._get_scorer(), method_mapping=MethodMapping() .add(caller="score", callee="score") .add(caller="fit", callee="score"), ) ) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/linear_model/_logistic.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_logistic.py
BSD-3-Clause
def _cholesky_omp(X, y, n_nonzero_coefs, tol=None, copy_X=True, return_path=False): """Orthogonal Matching Pursuit step using the Cholesky decomposition. Parameters ---------- X : ndarray of shape (n_samples, n_features) Input dictionary. Columns are assumed to have unit norm. y : ndarray of shape (n_samples,) Input targets. n_nonzero_coefs : int Targeted number of non-zero elements. tol : float, default=None Targeted squared error, if not None overrides n_nonzero_coefs. copy_X : bool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. Returns ------- gamma : ndarray of shape (n_nonzero_coefs,) Non-zero elements of the solution. idx : ndarray of shape (n_nonzero_coefs,) Indices of the positions of the elements in gamma within the solution vector. coef : ndarray of shape (n_features, n_nonzero_coefs) The first k values of column k correspond to the coefficient value for the active features at that step. The lower left triangle contains garbage. Only returned if ``return_path=True``. n_active : int Number of active features at convergence. """ if copy_X: X = X.copy("F") else: # even if we are allowed to overwrite, still copy it if bad order X = np.asfortranarray(X) min_float = np.finfo(X.dtype).eps nrm2, swap = linalg.get_blas_funcs(("nrm2", "swap"), (X,)) (potrs,) = get_lapack_funcs(("potrs",), (X,)) alpha = np.dot(X.T, y) residual = y gamma = np.empty(0) n_active = 0 indices = np.arange(X.shape[1]) # keeping track of swapping max_features = X.shape[1] if tol is not None else n_nonzero_coefs L = np.empty((max_features, max_features), dtype=X.dtype) if return_path: coefs = np.empty_like(L) while True: lam = np.argmax(np.abs(np.dot(X.T, residual))) if lam < n_active or alpha[lam] ** 2 < min_float: # atom already selected or inner product too small warnings.warn(premature, RuntimeWarning, stacklevel=2) break if n_active > 0: # Updates the Cholesky decomposition of X' X L[n_active, :n_active] = np.dot(X[:, :n_active].T, X[:, lam]) linalg.solve_triangular( L[:n_active, :n_active], L[n_active, :n_active], trans=0, lower=1, overwrite_b=True, check_finite=False, ) v = nrm2(L[n_active, :n_active]) ** 2 Lkk = linalg.norm(X[:, lam]) ** 2 - v if Lkk <= min_float: # selected atoms are dependent warnings.warn(premature, RuntimeWarning, stacklevel=2) break L[n_active, n_active] = sqrt(Lkk) else: L[0, 0] = linalg.norm(X[:, lam]) X.T[n_active], X.T[lam] = swap(X.T[n_active], X.T[lam]) alpha[n_active], alpha[lam] = alpha[lam], alpha[n_active] indices[n_active], indices[lam] = indices[lam], indices[n_active] n_active += 1 # solves LL'x = X'y as a composition of two triangular systems gamma, _ = potrs( L[:n_active, :n_active], alpha[:n_active], lower=True, overwrite_b=False ) if return_path: coefs[:n_active, n_active - 1] = gamma residual = y - np.dot(X[:, :n_active], gamma) if tol is not None and nrm2(residual) ** 2 <= tol: break elif n_active == max_features: break if return_path: return gamma, indices[:n_active], coefs[:, :n_active], n_active else: return gamma, indices[:n_active], n_active
Orthogonal Matching Pursuit step using the Cholesky decomposition. Parameters ---------- X : ndarray of shape (n_samples, n_features) Input dictionary. Columns are assumed to have unit norm. y : ndarray of shape (n_samples,) Input targets. n_nonzero_coefs : int Targeted number of non-zero elements. tol : float, default=None Targeted squared error, if not None overrides n_nonzero_coefs. copy_X : bool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. Returns ------- gamma : ndarray of shape (n_nonzero_coefs,) Non-zero elements of the solution. idx : ndarray of shape (n_nonzero_coefs,) Indices of the positions of the elements in gamma within the solution vector. coef : ndarray of shape (n_features, n_nonzero_coefs) The first k values of column k correspond to the coefficient value for the active features at that step. The lower left triangle contains garbage. Only returned if ``return_path=True``. n_active : int Number of active features at convergence.
_cholesky_omp
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
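A hedged, simplified illustration of the greedy loop that _cholesky_omp implements: at each step the atom most correlated with the residual is added and the coefficients are refit on the active set. This naive version re-solves a least-squares problem per step instead of incrementally updating a Cholesky factor of X'X; the naive_omp helper and the data are assumptions made for the example, not part of the library.

import numpy as np

def naive_omp(X, y, n_nonzero_coefs):
    residual = y.copy()
    active = []
    for _ in range(n_nonzero_coefs):
        lam = int(np.argmax(np.abs(X.T @ residual)))              # most correlated atom
        active.append(lam)
        gamma, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)  # refit on the active set
        residual = y - X[:, active] @ gamma
    return gamma, np.array(active)

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
X /= np.linalg.norm(X, axis=0)                     # unit-norm columns, as assumed above
y = X[:, [2, 7, 11]] @ np.array([1.5, -2.0, 0.5])  # sparse, noiseless target
print(naive_omp(X, y, 3)[1])                       # expected to recover atoms {2, 7, 11}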
def _gram_omp( Gram, Xy, n_nonzero_coefs, tol_0=None, tol=None, copy_Gram=True, copy_Xy=True, return_path=False, ): """Orthogonal Matching Pursuit step on a precomputed Gram matrix. This function uses the Cholesky decomposition method. Parameters ---------- Gram : ndarray of shape (n_features, n_features) Gram matrix of the input data matrix. Xy : ndarray of shape (n_features,) Input targets. n_nonzero_coefs : int Targeted number of non-zero elements. tol_0 : float, default=None Squared norm of y, required if tol is not None. tol : float, default=None Targeted squared error, if not None overrides n_nonzero_coefs. copy_Gram : bool, default=True Whether the gram matrix must be copied by the algorithm. A false value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway. copy_Xy : bool, default=True Whether the covariance vector Xy must be copied by the algorithm. If False, it may be overwritten. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. Returns ------- gamma : ndarray of shape (n_nonzero_coefs,) Non-zero elements of the solution. idx : ndarray of shape (n_nonzero_coefs,) Indices of the positions of the elements in gamma within the solution vector. coefs : ndarray of shape (n_features, n_nonzero_coefs) The first k values of column k correspond to the coefficient value for the active features at that step. The lower left triangle contains garbage. Only returned if ``return_path=True``. n_active : int Number of active features at convergence. """ Gram = Gram.copy("F") if copy_Gram else np.asfortranarray(Gram) if copy_Xy or not Xy.flags.writeable: Xy = Xy.copy() min_float = np.finfo(Gram.dtype).eps nrm2, swap = linalg.get_blas_funcs(("nrm2", "swap"), (Gram,)) (potrs,) = get_lapack_funcs(("potrs",), (Gram,)) indices = np.arange(len(Gram)) # keeping track of swapping alpha = Xy tol_curr = tol_0 delta = 0 gamma = np.empty(0) n_active = 0 max_features = len(Gram) if tol is not None else n_nonzero_coefs L = np.empty((max_features, max_features), dtype=Gram.dtype) L[0, 0] = 1.0 if return_path: coefs = np.empty_like(L) while True: lam = np.argmax(np.abs(alpha)) if lam < n_active or alpha[lam] ** 2 < min_float: # selected same atom twice, or inner product too small warnings.warn(premature, RuntimeWarning, stacklevel=3) break if n_active > 0: L[n_active, :n_active] = Gram[lam, :n_active] linalg.solve_triangular( L[:n_active, :n_active], L[n_active, :n_active], trans=0, lower=1, overwrite_b=True, check_finite=False, ) v = nrm2(L[n_active, :n_active]) ** 2 Lkk = Gram[lam, lam] - v if Lkk <= min_float: # selected atoms are dependent warnings.warn(premature, RuntimeWarning, stacklevel=3) break L[n_active, n_active] = sqrt(Lkk) else: L[0, 0] = sqrt(Gram[lam, lam]) Gram[n_active], Gram[lam] = swap(Gram[n_active], Gram[lam]) Gram.T[n_active], Gram.T[lam] = swap(Gram.T[n_active], Gram.T[lam]) indices[n_active], indices[lam] = indices[lam], indices[n_active] Xy[n_active], Xy[lam] = Xy[lam], Xy[n_active] n_active += 1 # solves LL'x = X'y as a composition of two triangular systems gamma, _ = potrs( L[:n_active, :n_active], Xy[:n_active], lower=True, overwrite_b=False ) if return_path: coefs[:n_active, n_active - 1] = gamma beta = np.dot(Gram[:, :n_active], gamma) alpha = Xy - beta if tol is not None: tol_curr += delta delta = np.inner(gamma, beta[:n_active]) tol_curr -= delta if abs(tol_curr) <= tol: break elif n_active == max_features: break if return_path: return gamma, 
indices[:n_active], coefs[:, :n_active], n_active else: return gamma, indices[:n_active], n_active
Orthogonal Matching Pursuit step on a precomputed Gram matrix. This function uses the Cholesky decomposition method. Parameters ---------- Gram : ndarray of shape (n_features, n_features) Gram matrix of the input data matrix. Xy : ndarray of shape (n_features,) Input targets. n_nonzero_coefs : int Targeted number of non-zero elements. tol_0 : float, default=None Squared norm of y, required if tol is not None. tol : float, default=None Targeted squared error, if not None overrides n_nonzero_coefs. copy_Gram : bool, default=True Whether the gram matrix must be copied by the algorithm. A false value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway. copy_Xy : bool, default=True Whether the covariance vector Xy must be copied by the algorithm. If False, it may be overwritten. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. Returns ------- gamma : ndarray of shape (n_nonzero_coefs,) Non-zero elements of the solution. idx : ndarray of shape (n_nonzero_coefs,) Indices of the positions of the elements in gamma within the solution vector. coefs : ndarray of shape (n_features, n_nonzero_coefs) The first k values of column k correspond to the coefficient value for the active features at that step. The lower left triangle contains garbage. Only returned if ``return_path=True``. n_active : int Number of active features at convergence.
_gram_omp
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
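A hedged check of the Gram-based step through its public wrapper: fed with X.T @ X and X.T @ y, orthogonal_mp_gram should reproduce the solution computed from X and y by orthogonal_mp. The dataset and sparsity level are illustrative.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import orthogonal_mp, orthogonal_mp_gram

X, y = make_regression(n_features=30, n_informative=5, noise=0.1, random_state=0)
coef_xy = orthogonal_mp(X, y, n_nonzero_coefs=5)
coef_gram = orthogonal_mp_gram(X.T @ X, X.T @ y, n_nonzero_coefs=5)
print(np.allclose(coef_xy, coef_gram))   # both routes should agree numerically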
def orthogonal_mp( X, y, *, n_nonzero_coefs=None, tol=None, precompute=False, copy_X=True, return_path=False, return_n_iter=False, ): r"""Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form: When parametrized by the number of non-zero coefficients using `n_nonzero_coefs`: argmin ||y - X\gamma||^2 subject to ||\gamma||_0 <= n_{nonzero coefs} When parametrized by error using the parameter `tol`: argmin ||\gamma||_0 subject to ||y - X\gamma||^2 <= tol Read more in the :ref:`User Guide <omp>`. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data. Columns are assumed to have unit norm. y : ndarray of shape (n_samples,) or (n_samples, n_targets) Input targets. n_nonzero_coefs : int, default=None Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features. tol : float, default=None Maximum squared norm of the residual. If not None, overrides n_nonzero_coefs. precompute : 'auto' or bool, default=False Whether to perform precomputations. Improves performance when n_targets or n_samples is very large. copy_X : bool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iter : bool, default=False Whether or not to return the number of iterations. Returns ------- coef : ndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If `return_path=True`, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis generates coefficients in increasing order of active features. n_iters : array-like or int Number of active features across every target. Returned only if `return_n_iter` is set to True. See Also -------- OrthogonalMatchingPursuit : Orthogonal Matching Pursuit model. orthogonal_mp_gram : Solve OMP problems using Gram matrix and the product X.T * y. lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. sklearn.decomposition.sparse_encode : Sparse coding. Notes ----- Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (https://www.di.ens.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf Examples -------- >>> from sklearn.datasets import make_regression >>> from sklearn.linear_model import orthogonal_mp >>> X, y = make_regression(noise=4, random_state=0) >>> coef = orthogonal_mp(X, y) >>> coef.shape (100,) >>> X[:1,] @ coef array([-78.68]) """ X = check_array(X, order="F", copy=copy_X) copy_X = False if y.ndim == 1: y = y.reshape(-1, 1) y = check_array(y) if y.shape[1] > 1: # subsequent targets will be affected copy_X = True if n_nonzero_coefs is None and tol is None: # default for n_nonzero_coefs is 0.1 * n_features # but at least one. 
n_nonzero_coefs = max(int(0.1 * X.shape[1]), 1) if tol is None and n_nonzero_coefs > X.shape[1]: raise ValueError( "The number of atoms cannot be more than the number of features" ) if precompute == "auto": precompute = X.shape[0] > X.shape[1] if precompute: G = np.dot(X.T, X) G = np.asfortranarray(G) Xy = np.dot(X.T, y) if tol is not None: norms_squared = np.sum((y**2), axis=0) else: norms_squared = None return orthogonal_mp_gram( G, Xy, n_nonzero_coefs=n_nonzero_coefs, tol=tol, norms_squared=norms_squared, copy_Gram=copy_X, copy_Xy=False, return_path=return_path, ) if return_path: coef = np.zeros((X.shape[1], y.shape[1], X.shape[1])) else: coef = np.zeros((X.shape[1], y.shape[1])) n_iters = [] for k in range(y.shape[1]): out = _cholesky_omp( X, y[:, k], n_nonzero_coefs, tol, copy_X=copy_X, return_path=return_path ) if return_path: _, idx, coefs, n_iter = out coef = coef[:, :, : len(idx)] for n_active, x in enumerate(coefs.T): coef[idx[: n_active + 1], k, n_active] = x[: n_active + 1] else: x, idx, n_iter = out coef[idx, k] = x n_iters.append(n_iter) if y.shape[1] == 1: n_iters = n_iters[0] if return_n_iter: return np.squeeze(coef), n_iters else: return np.squeeze(coef)
Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems. An instance of the problem has the form: When parametrized by the number of non-zero coefficients using `n_nonzero_coefs`: argmin ||y - X\gamma||^2 subject to ||\gamma||_0 <= n_{nonzero coefs} When parametrized by error using the parameter `tol`: argmin ||\gamma||_0 subject to ||y - X\gamma||^2 <= tol Read more in the :ref:`User Guide <omp>`. Parameters ---------- X : array-like of shape (n_samples, n_features) Input data. Columns are assumed to have unit norm. y : ndarray of shape (n_samples,) or (n_samples, n_targets) Input targets. n_nonzero_coefs : int, default=None Desired number of non-zero entries in the solution. If None (by default) this value is set to 10% of n_features. tol : float, default=None Maximum squared norm of the residual. If not None, overrides n_nonzero_coefs. precompute : 'auto' or bool, default=False Whether to perform precomputations. Improves performance when n_targets or n_samples is very large. copy_X : bool, default=True Whether the design matrix X must be copied by the algorithm. A false value is only helpful if X is already Fortran-ordered, otherwise a copy is made anyway. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iter : bool, default=False Whether or not to return the number of iterations. Returns ------- coef : ndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If `return_path=True`, this contains the whole coefficient path. In this case its shape is (n_features, n_features) or (n_features, n_targets, n_features) and iterating over the last axis generates coefficients in increasing order of active features. n_iters : array-like or int Number of active features across every target. Returned only if `return_n_iter` is set to True. See Also -------- OrthogonalMatchingPursuit : Orthogonal Matching Pursuit model. orthogonal_mp_gram : Solve OMP problems using Gram matrix and the product X.T * y. lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. sklearn.decomposition.sparse_encode : Sparse coding. Notes ----- Orthogonal matching pursuit was introduced in S. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (https://www.di.ens.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf Examples -------- >>> from sklearn.datasets import make_regression >>> from sklearn.linear_model import orthogonal_mp >>> X, y = make_regression(noise=4, random_state=0) >>> coef = orthogonal_mp(X, y) >>> coef.shape (100,) >>> X[:1,] @ coef array([-78.68])
orthogonal_mp
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
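A hedged sketch of the return_path option described above: each step along the last axis adds exactly one active feature. The shapes shown assume a single target and the illustrative dataset below.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import orthogonal_mp

X, y = make_regression(n_features=20, n_informative=4, noise=1.0, random_state=0)
path = orthogonal_mp(X, y, n_nonzero_coefs=4, return_path=True)
print(path.shape)   # (n_features, n_active) == (20, 4) for a single target
print([np.count_nonzero(path[:, k]) for k in range(path.shape[1])])   # [1, 2, 3, 4]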
def orthogonal_mp_gram( Gram, Xy, *, n_nonzero_coefs=None, tol=None, norms_squared=None, copy_Gram=True, copy_Xy=True, return_path=False, return_n_iter=False, ): """Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the :ref:`User Guide <omp>`. Parameters ---------- Gram : array-like of shape (n_features, n_features) Gram matrix of the input data: `X.T * X`. Xy : array-like of shape (n_features,) or (n_features, n_targets) Input targets multiplied by `X`: `X.T * y`. n_nonzero_coefs : int, default=None Desired number of non-zero entries in the solution. If `None` (by default) this value is set to 10% of n_features. tol : float, default=None Maximum squared norm of the residual. If not `None`, overrides `n_nonzero_coefs`. norms_squared : array-like of shape (n_targets,), default=None Squared L2 norms of the lines of `y`. Required if `tol` is not None. copy_Gram : bool, default=True Whether the gram matrix must be copied by the algorithm. A `False` value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway. copy_Xy : bool, default=True Whether the covariance vector `Xy` must be copied by the algorithm. If `False`, it may be overwritten. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iter : bool, default=False Whether or not to return the number of iterations. Returns ------- coef : ndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If `return_path=True`, this contains the whole coefficient path. In this case its shape is `(n_features, n_features)` or `(n_features, n_targets, n_features)` and iterating over the last axis yields coefficients in increasing order of active features. n_iters : list or int Number of active features across every target. Returned only if `return_n_iter` is set to True. See Also -------- OrthogonalMatchingPursuit : Orthogonal Matching Pursuit model (OMP). orthogonal_mp : Solves n_targets Orthogonal Matching Pursuit problems. lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. sklearn.decomposition.sparse_encode : Generic sparse coding. Each column of the result is the solution to a Lasso problem. Notes ----- Orthogonal matching pursuit was introduced in G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (https://www.di.ens.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. 
https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf Examples -------- >>> from sklearn.datasets import make_regression >>> from sklearn.linear_model import orthogonal_mp_gram >>> X, y = make_regression(noise=4, random_state=0) >>> coef = orthogonal_mp_gram(X.T @ X, X.T @ y) >>> coef.shape (100,) >>> X[:1,] @ coef array([-78.68]) """ Gram = check_array(Gram, order="F", copy=copy_Gram) Xy = np.asarray(Xy) if Xy.ndim > 1 and Xy.shape[1] > 1: # or subsequent target will be affected copy_Gram = True if Xy.ndim == 1: Xy = Xy[:, np.newaxis] if tol is not None: norms_squared = [norms_squared] if copy_Xy or not Xy.flags.writeable: # Make the copy once instead of many times in _gram_omp itself. Xy = Xy.copy() if n_nonzero_coefs is None and tol is None: n_nonzero_coefs = int(0.1 * len(Gram)) if tol is not None and norms_squared is None: raise ValueError( "Gram OMP needs the precomputed norms in order " "to evaluate the error sum of squares." ) if tol is not None and tol < 0: raise ValueError("Epsilon cannot be negative") if tol is None and n_nonzero_coefs <= 0: raise ValueError("The number of atoms must be positive") if tol is None and n_nonzero_coefs > len(Gram): raise ValueError( "The number of atoms cannot be more than the number of features" ) if return_path: coef = np.zeros((len(Gram), Xy.shape[1], len(Gram)), dtype=Gram.dtype) else: coef = np.zeros((len(Gram), Xy.shape[1]), dtype=Gram.dtype) n_iters = [] for k in range(Xy.shape[1]): out = _gram_omp( Gram, Xy[:, k], n_nonzero_coefs, norms_squared[k] if tol is not None else None, tol, copy_Gram=copy_Gram, copy_Xy=False, return_path=return_path, ) if return_path: _, idx, coefs, n_iter = out coef = coef[:, :, : len(idx)] for n_active, x in enumerate(coefs.T): coef[idx[: n_active + 1], k, n_active] = x[: n_active + 1] else: x, idx, n_iter = out coef[idx, k] = x n_iters.append(n_iter) if Xy.shape[1] == 1: n_iters = n_iters[0] if return_n_iter: return np.squeeze(coef), n_iters else: return np.squeeze(coef)
Gram Orthogonal Matching Pursuit (OMP). Solves n_targets Orthogonal Matching Pursuit problems using only the Gram matrix X.T * X and the product X.T * y. Read more in the :ref:`User Guide <omp>`. Parameters ---------- Gram : array-like of shape (n_features, n_features) Gram matrix of the input data: `X.T * X`. Xy : array-like of shape (n_features,) or (n_features, n_targets) Input targets multiplied by `X`: `X.T * y`. n_nonzero_coefs : int, default=None Desired number of non-zero entries in the solution. If `None` (by default) this value is set to 10% of n_features. tol : float, default=None Maximum squared norm of the residual. If not `None`, overrides `n_nonzero_coefs`. norms_squared : array-like of shape (n_targets,), default=None Squared L2 norms of the lines of `y`. Required if `tol` is not None. copy_Gram : bool, default=True Whether the gram matrix must be copied by the algorithm. A `False` value is only helpful if it is already Fortran-ordered, otherwise a copy is made anyway. copy_Xy : bool, default=True Whether the covariance vector `Xy` must be copied by the algorithm. If `False`, it may be overwritten. return_path : bool, default=False Whether to return every value of the nonzero coefficients along the forward path. Useful for cross-validation. return_n_iter : bool, default=False Whether or not to return the number of iterations. Returns ------- coef : ndarray of shape (n_features,) or (n_features, n_targets) Coefficients of the OMP solution. If `return_path=True`, this contains the whole coefficient path. In this case its shape is `(n_features, n_features)` or `(n_features, n_targets, n_features)` and iterating over the last axis yields coefficients in increasing order of active features. n_iters : list or int Number of active features across every target. Returned only if `return_n_iter` is set to True. See Also -------- OrthogonalMatchingPursuit : Orthogonal Matching Pursuit model (OMP). orthogonal_mp : Solves n_targets Orthogonal Matching Pursuit problems. lars_path : Compute Least Angle Regression or Lasso path using LARS algorithm. sklearn.decomposition.sparse_encode : Generic sparse coding. Each column of the result is the solution to a Lasso problem. Notes ----- Orthogonal matching pursuit was introduced in G. Mallat, Z. Zhang, Matching pursuits with time-frequency dictionaries, IEEE Transactions on Signal Processing, Vol. 41, No. 12. (December 1993), pp. 3397-3415. (https://www.di.ens.fr/~mallat/papiers/MallatPursuit93.pdf) This implementation is based on Rubinstein, R., Zibulevsky, M. and Elad, M., Efficient Implementation of the K-SVD Algorithm using Batch Orthogonal Matching Pursuit Technical Report - CS Technion, April 2008. https://www.cs.technion.ac.il/~ronrubin/Publications/KSVD-OMP-v2.pdf Examples -------- >>> from sklearn.datasets import make_regression >>> from sklearn.linear_model import orthogonal_mp_gram >>> X, y = make_regression(noise=4, random_state=0) >>> coef = orthogonal_mp_gram(X.T @ X, X.T @ y) >>> coef.shape (100,) >>> X[:1,] @ coef array([-78.68])
orthogonal_mp_gram
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
def fit(self, X, y): """Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X's dtype if necessary. Returns ------- self : object Returns an instance of self. """ X, y = validate_data(self, X, y, multi_output=True, y_numeric=True) n_features = X.shape[1] X, y, X_offset, y_offset, X_scale, Gram, Xy = _pre_fit( X, y, None, self.precompute, self.fit_intercept, copy=True ) if y.ndim == 1: y = y[:, np.newaxis] if self.n_nonzero_coefs is None and self.tol is None: # default for n_nonzero_coefs is 0.1 * n_features # but at least one. self.n_nonzero_coefs_ = max(int(0.1 * n_features), 1) elif self.tol is not None: self.n_nonzero_coefs_ = None else: self.n_nonzero_coefs_ = self.n_nonzero_coefs if Gram is False: coef_, self.n_iter_ = orthogonal_mp( X, y, n_nonzero_coefs=self.n_nonzero_coefs_, tol=self.tol, precompute=False, copy_X=True, return_n_iter=True, ) else: norms_sq = np.sum(y**2, axis=0) if self.tol is not None else None coef_, self.n_iter_ = orthogonal_mp_gram( Gram, Xy=Xy, n_nonzero_coefs=self.n_nonzero_coefs_, tol=self.tol, norms_squared=norms_sq, copy_Gram=True, copy_Xy=True, return_n_iter=True, ) self.coef_ = coef_.T self._set_intercept(X_offset, y_offset, X_scale) return self
Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) or (n_samples, n_targets) Target values. Will be cast to X's dtype if necessary. Returns ------- self : object Returns an instance of self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
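A hedged usage sketch of the estimator-level fit above; n_nonzero_coefs and the dataset are illustrative assumptions.

from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuit

X, y = make_regression(n_features=50, n_informative=5, noise=2.0, random_state=0)
reg = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(X, y)
print(reg.n_iter_)               # number of selected atoms, here 5
print((reg.coef_ != 0).sum())    # sparsity of the fitted coefficient vector
print(reg.score(X, y))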
def _omp_path_residues( X_train, y_train, X_test, y_test, copy=True, fit_intercept=True, max_iter=100, ): """Compute the residues on left-out data for a full LARS path. Parameters ---------- X_train : ndarray of shape (n_samples, n_features) The data to fit the LARS on. y_train : ndarray of shape (n_samples) The target variable to fit LARS on. X_test : ndarray of shape (n_samples, n_features) The data to compute the residues on. y_test : ndarray of shape (n_samples) The target variable to compute the residues on. copy : bool, default=True Whether X_train, X_test, y_train and y_test should be copied. If False, they may be overwritten. fit_intercept : bool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). max_iter : int, default=100 Maximum numbers of iterations to perform, therefore maximum features to include. 100 by default. Returns ------- residues : ndarray of shape (n_samples, max_features) Residues of the prediction on the test data. """ if copy: X_train = X_train.copy() y_train = y_train.copy() X_test = X_test.copy() y_test = y_test.copy() if fit_intercept: X_mean = X_train.mean(axis=0) X_train -= X_mean X_test -= X_mean y_mean = y_train.mean(axis=0) y_train = as_float_array(y_train, copy=False) y_train -= y_mean y_test = as_float_array(y_test, copy=False) y_test -= y_mean coefs = orthogonal_mp( X_train, y_train, n_nonzero_coefs=max_iter, tol=None, precompute=False, copy_X=False, return_path=True, ) if coefs.ndim == 1: coefs = coefs[:, np.newaxis] return np.dot(coefs.T, X_test.T) - y_test
Compute the residues on left-out data for a full LARS path. Parameters ---------- X_train : ndarray of shape (n_samples, n_features) The data to fit the LARS on. y_train : ndarray of shape (n_samples) The target variable to fit LARS on. X_test : ndarray of shape (n_samples, n_features) The data to compute the residues on. y_test : ndarray of shape (n_samples) The target variable to compute the residues on. copy : bool, default=True Whether X_train, X_test, y_train and y_test should be copied. If False, they may be overwritten. fit_intercept : bool, default=True Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be centered). max_iter : int, default=100 Maximum numbers of iterations to perform, therefore maximum features to include. 100 by default. Returns ------- residues : ndarray of shape (n_samples, max_features) Residues of the prediction on the test data.
_omp_path_residues
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
def fit(self, X, y, **fit_params): """Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. **fit_params : dict Parameters to pass to the underlying splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of self. """ _raise_for_params(fit_params, self, "fit") X, y = validate_data(self, X, y, y_numeric=True, ensure_min_features=2) X = as_float_array(X, copy=False, ensure_all_finite=False) cv = check_cv(self.cv, classifier=False) if _routing_enabled(): routed_params = process_routing(self, "fit", **fit_params) else: # TODO(SLEP6): remove when metadata routing cannot be disabled. routed_params = Bunch() routed_params.splitter = Bunch(split={}) max_iter = ( min(max(int(0.1 * X.shape[1]), 5), X.shape[1]) if not self.max_iter else self.max_iter ) cv_paths = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)( delayed(_omp_path_residues)( X[train], y[train], X[test], y[test], self.copy, self.fit_intercept, max_iter, ) for train, test in cv.split(X, **routed_params.splitter.split) ) min_early_stop = min(fold.shape[0] for fold in cv_paths) mse_folds = np.array( [(fold[:min_early_stop] ** 2).mean(axis=1) for fold in cv_paths] ) best_n_nonzero_coefs = np.argmin(mse_folds.mean(axis=0)) + 1 self.n_nonzero_coefs_ = best_n_nonzero_coefs omp = OrthogonalMatchingPursuit( n_nonzero_coefs=best_n_nonzero_coefs, fit_intercept=self.fit_intercept, ).fit(X, y) self.coef_ = omp.coef_ self.intercept_ = omp.intercept_ self.n_iter_ = omp.n_iter_ return self
Fit the model using X, y as training data. Parameters ---------- X : array-like of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. Will be cast to X's dtype if necessary. **fit_params : dict Parameters to pass to the underlying splitter. .. versionadded:: 1.4 Only available if `enable_metadata_routing=True`, which can be set by using ``sklearn.set_config(enable_metadata_routing=True)``. See :ref:`Metadata Routing User Guide <metadata_routing>` for more details. Returns ------- self : object Returns an instance of self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
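A hedged sketch of the cross-validated fit above: n_nonzero_coefs_ is chosen from the left-out residues computed fold by fold via _omp_path_residues. The dataset and cv value are illustrative.

from sklearn.datasets import make_regression
from sklearn.linear_model import OrthogonalMatchingPursuitCV

X, y = make_regression(n_features=60, n_informative=6, noise=2.0, random_state=0)
reg = OrthogonalMatchingPursuitCV(cv=5).fit(X, y)
print(reg.n_nonzero_coefs_)   # sparsity level selected by cross-validation
print(reg.score(X, y))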
def get_metadata_routing(self): """Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information. """ router = MetadataRouter(owner=self.__class__.__name__).add( splitter=self.cv, method_mapping=MethodMapping().add(caller="fit", callee="split"), ) return router
Get metadata routing of this object. Please check :ref:`User Guide <metadata_routing>` on how the routing mechanism works. .. versionadded:: 1.4 Returns ------- routing : MetadataRouter A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/linear_model/_omp.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_omp.py
BSD-3-Clause
def fit(self, X, y, coef_init=None, intercept_init=None): """Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. coef_init : ndarray of shape (n_classes, n_features) The initial coefficients to warm-start the optimization. intercept_init : ndarray of shape (n_classes,) The initial intercept to warm-start the optimization. Returns ------- self : object Fitted estimator. """ self._more_validate_params() lr = "pa1" if self.loss == "hinge" else "pa2" return self._fit( X, y, alpha=1.0, C=self.C, loss="hinge", learning_rate=lr, coef_init=coef_init, intercept_init=intercept_init, )
Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. coef_init : ndarray of shape (n_classes, n_features) The initial coefficients to warm-start the optimization. intercept_init : ndarray of shape (n_classes,) The initial intercept to warm-start the optimization. Returns ------- self : object Fitted estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_passive_aggressive.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_passive_aggressive.py
BSD-3-Clause
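A hedged usage sketch of the Passive Aggressive classifier fit above with the hinge (PA-I) loss; data and hyperparameters are illustrative.

from sklearn.datasets import make_classification
from sklearn.linear_model import PassiveAggressiveClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = PassiveAggressiveClassifier(C=1.0, loss="hinge", max_iter=1000,
                                  random_state=0).fit(X, y)
print(clf.score(X, y))   # training accuracy of the fitted model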
def partial_fit(self, X, y): """Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Subset of training data. y : numpy array of shape [n_samples] Subset of target values. Returns ------- self : object Fitted estimator. """ if not hasattr(self, "coef_"): self._more_validate_params(for_partial_fit=True) lr = "pa1" if self.loss == "epsilon_insensitive" else "pa2" return self._partial_fit( X, y, alpha=1.0, C=self.C, loss="epsilon_insensitive", learning_rate=lr, max_iter=1, sample_weight=None, coef_init=None, intercept_init=None, )
Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Subset of training data. y : numpy array of shape [n_samples] Subset of target values. Returns ------- self : object Fitted estimator.
partial_fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_passive_aggressive.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_passive_aggressive.py
BSD-3-Clause
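A hedged sketch of the online usage that partial_fit enables: the regressor is updated one mini-batch at a time instead of seeing the full dataset at once. Batch size and data are illustrative.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import PassiveAggressiveRegressor

X, y = make_regression(n_samples=1000, n_features=10, noise=1.0, random_state=0)
reg = PassiveAggressiveRegressor(C=0.5, random_state=0)
for X_batch, y_batch in zip(np.array_split(X, 10), np.array_split(y, 10)):
    reg.partial_fit(X_batch, y_batch)   # one Passive Aggressive update pass per mini-batch
print(reg.score(X, y))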
def fit(self, X, y, coef_init=None, intercept_init=None): """Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : numpy array of shape [n_samples] Target values. coef_init : array, shape = [n_features] The initial coefficients to warm-start the optimization. intercept_init : array, shape = [1] The initial intercept to warm-start the optimization. Returns ------- self : object Fitted estimator. """ self._more_validate_params() lr = "pa1" if self.loss == "epsilon_insensitive" else "pa2" return self._fit( X, y, alpha=1.0, C=self.C, loss="epsilon_insensitive", learning_rate=lr, coef_init=coef_init, intercept_init=intercept_init, )
Fit linear model with Passive Aggressive algorithm. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : numpy array of shape [n_samples] Target values. coef_init : array, shape = [n_features] The initial coefficients to warm-start the optimization. intercept_init : array, shape = [1] The initial intercept to warm-start the optimization. Returns ------- self : object Fitted estimator.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_passive_aggressive.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_passive_aggressive.py
BSD-3-Clause
def fit(self, X, y, sample_weight=None): """Fit the model according to the given training data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- self : object Returns self. """ X, y = validate_data( self, X, y, accept_sparse=["csc", "csr", "coo"], y_numeric=True, multi_output=False, ) sample_weight = _check_sample_weight(sample_weight, X) n_features = X.shape[1] n_params = n_features if self.fit_intercept: n_params += 1 # Note that centering y and X with _preprocess_data does not work # for quantile regression. # The objective is defined as 1/n * sum(pinball loss) + alpha * L1. # So we rescale the penalty term, which is equivalent. alpha = np.sum(sample_weight) * self.alpha if self.solver == "interior-point" and sp_version >= parse_version("1.11.0"): raise ValueError( f"Solver {self.solver} is not anymore available in SciPy >= 1.11.0." ) if sparse.issparse(X) and self.solver not in ["highs", "highs-ds", "highs-ipm"]: raise ValueError( f"Solver {self.solver} does not support sparse X. " "Use solver 'highs' for example." ) # make default solver more stable if self.solver_options is None and self.solver == "interior-point": solver_options = {"lstsq": True} else: solver_options = self.solver_options # After rescaling alpha, the minimization problem is # min sum(pinball loss) + alpha * L1 # Use linear programming formulation of quantile regression # min_x c x # A_eq x = b_eq # 0 <= x # x = (s0, s, t0, t, u, v) = slack variables >= 0 # intercept = s0 - t0 # coef = s - t # c = (0, alpha * 1_p, 0, alpha * 1_p, quantile * 1_n, (1-quantile) * 1_n) # residual = y - X@coef - intercept = u - v # A_eq = (1_n, X, -1_n, -X, diag(1_n), -diag(1_n)) # b_eq = y # p = n_features # n = n_samples # 1_n = vector of length n with entries equal one # see https://stats.stackexchange.com/questions/384909/ # # Filtering out zero sample weights from the beginning makes life # easier for the linprog solver. indices = np.nonzero(sample_weight)[0] n_indices = len(indices) # use n_mask instead of n_samples if n_indices < len(sample_weight): sample_weight = sample_weight[indices] X = _safe_indexing(X, indices) y = _safe_indexing(y, indices) c = np.concatenate( [ np.full(2 * n_params, fill_value=alpha), sample_weight * self.quantile, sample_weight * (1 - self.quantile), ] ) if self.fit_intercept: # do not penalize the intercept c[0] = 0 c[n_params] = 0 if self.solver in ["highs", "highs-ds", "highs-ipm"]: # Note that highs methods always use a sparse CSC memory layout internally, # even for optimization problems parametrized using dense numpy arrays. # Therefore, we work with CSC matrices as early as possible to limit # unnecessary repeated memory copies. 
eye = sparse.eye(n_indices, dtype=X.dtype, format="csc") if self.fit_intercept: ones = sparse.csc_matrix(np.ones(shape=(n_indices, 1), dtype=X.dtype)) A_eq = sparse.hstack([ones, X, -ones, -X, eye, -eye], format="csc") else: A_eq = sparse.hstack([X, -X, eye, -eye], format="csc") else: eye = np.eye(n_indices) if self.fit_intercept: ones = np.ones((n_indices, 1)) A_eq = np.concatenate([ones, X, -ones, -X, eye, -eye], axis=1) else: A_eq = np.concatenate([X, -X, eye, -eye], axis=1) b_eq = y result = linprog( c=c, A_eq=A_eq, b_eq=b_eq, method=self.solver, options=solver_options, ) solution = result.x if not result.success: failure = { 1: "Iteration limit reached.", 2: "Problem appears to be infeasible.", 3: "Problem appears to be unbounded.", 4: "Numerical difficulties encountered.", } warnings.warn( "Linear programming for QuantileRegressor did not succeed.\n" f"Status is {result.status}: " + failure.setdefault(result.status, "unknown reason") + "\n" + "Result message of linprog:\n" + result.message, ConvergenceWarning, ) # positive slack - negative slack # solution is an array with (params_pos, params_neg, u, v) params = solution[:n_params] - solution[n_params : 2 * n_params] self.n_iter_ = result.nit if self.fit_intercept: self.coef_ = params[1:] self.intercept_ = params[0] else: self.coef_ = params self.intercept_ = 0.0 return self
Fit the model according to the given training data. Parameters ---------- X : {array-like, sparse matrix} of shape (n_samples, n_features) Training data. y : array-like of shape (n_samples,) Target values. sample_weight : array-like of shape (n_samples,), default=None Sample weights. Returns ------- self : object Returns self.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_quantile.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_quantile.py
BSD-3-Clause
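A hedged sketch of the quantile fit above: because the linear program minimises the pinball loss, roughly a `quantile` fraction of the training targets should fall below each fitted line. The dataset, alpha=0 and the "highs" solver are illustrative choices.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import QuantileRegressor

X, y = make_regression(n_samples=300, n_features=3, noise=10.0, random_state=0)
for q in (0.1, 0.5, 0.9):
    reg = QuantileRegressor(quantile=q, alpha=0.0, solver="highs").fit(X, y)
    print(q, np.mean(y <= reg.predict(X)))   # empirical coverage, approximately q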
def _dynamic_max_trials(n_inliers, n_samples, min_samples, probability): """Determine number trials such that at least one outlier-free subset is sampled for the given inlier/outlier ratio. Parameters ---------- n_inliers : int Number of inliers in the data. n_samples : int Total number of samples in the data. min_samples : int Minimum number of samples chosen randomly from original data. probability : float Probability (confidence) that one outlier-free sample is generated. Returns ------- trials : int Number of trials. """ inlier_ratio = n_inliers / float(n_samples) nom = max(_EPSILON, 1 - probability) denom = max(_EPSILON, 1 - inlier_ratio**min_samples) if nom == 1: return 0 if denom == 1: return float("inf") return abs(float(np.ceil(np.log(nom) / np.log(denom))))
Determine number trials such that at least one outlier-free subset is sampled for the given inlier/outlier ratio. Parameters ---------- n_inliers : int Number of inliers in the data. n_samples : int Total number of samples in the data. min_samples : int Minimum number of samples chosen randomly from original data. probability : float Probability (confidence) that one outlier-free sample is generated. Returns ------- trials : int Number of trials.
_dynamic_max_trials
python
scikit-learn/scikit-learn
sklearn/linear_model/_ransac.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ransac.py
BSD-3-Clause
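A hedged worked example of the formula implemented above, ceil(log(1 - probability) / log(1 - inlier_ratio**min_samples)). The simplified mirror below drops the _EPSILON and edge-case guards of the original; with a 50% inlier ratio, min_samples=4 and 99% confidence it yields 72 trials.

import numpy as np

def simple_dynamic_max_trials(n_inliers, n_samples, min_samples, probability):
    # simplified mirror of _dynamic_max_trials, without the epsilon guards
    inlier_ratio = n_inliers / n_samples
    nom = 1 - probability
    denom = 1 - inlier_ratio ** min_samples
    return int(np.ceil(np.log(nom) / np.log(denom)))

print(simple_dynamic_max_trials(50, 100, 4, 0.99))   # -> 72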
def fit(self, X, y, sample_weight=None, **fit_params):
    """Fit estimator using RANSAC algorithm.

    Parameters
    ----------
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Training data.

    y : array-like of shape (n_samples,) or (n_samples, n_targets)
        Target values.

    sample_weight : array-like of shape (n_samples,), default=None
        Individual weights for each sample. Raises an error if
        sample_weight is passed and the estimator's fit method does not
        support it.

        .. versionadded:: 0.18

    **fit_params : dict
        Parameters routed to the `fit` method of the sub-estimator via the
        metadata routing API.

        .. versionadded:: 1.5

            Only available if
            `sklearn.set_config(enable_metadata_routing=True)` is set. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for more
            details.

    Returns
    -------
    self : object
        Fitted `RANSACRegressor` estimator.

    Raises
    ------
    ValueError
        If no valid consensus set could be found. This occurs if
        `is_data_valid` and `is_model_valid` return False for all
        `max_trials` randomly chosen sub-samples.
    """
    # Need to validate separately here. We can't pass multi_output=True
    # because that would allow y to be csr. Delay expensive finiteness
    # check to the estimator's own input validation.
    _raise_for_params(fit_params, self, "fit")
    check_X_params = dict(accept_sparse="csr", ensure_all_finite=False)
    check_y_params = dict(ensure_2d=False)
    X, y = validate_data(
        self, X, y, validate_separately=(check_X_params, check_y_params)
    )
    check_consistent_length(X, y)

    if self.estimator is not None:
        estimator = clone(self.estimator)
    else:
        estimator = LinearRegression()

    if self.min_samples is None:
        if not isinstance(estimator, LinearRegression):
            raise ValueError(
                "`min_samples` needs to be explicitly set when estimator "
                "is not a LinearRegression."
            )
        min_samples = X.shape[1] + 1
    elif 0 < self.min_samples < 1:
        min_samples = np.ceil(self.min_samples * X.shape[0])
    elif self.min_samples >= 1:
        min_samples = self.min_samples
    if min_samples > X.shape[0]:
        raise ValueError(
            "`min_samples` may not be larger than number "
            "of samples: n_samples = %d." % (X.shape[0])
        )

    if self.residual_threshold is None:
        # MAD (median absolute deviation)
        residual_threshold = np.median(np.abs(y - np.median(y)))
    else:
        residual_threshold = self.residual_threshold

    if self.loss == "absolute_error":
        if y.ndim == 1:
            loss_function = lambda y_true, y_pred: np.abs(y_true - y_pred)
        else:
            loss_function = lambda y_true, y_pred: np.sum(
                np.abs(y_true - y_pred), axis=1
            )
    elif self.loss == "squared_error":
        if y.ndim == 1:
            loss_function = lambda y_true, y_pred: (y_true - y_pred) ** 2
        else:
            loss_function = lambda y_true, y_pred: np.sum(
                (y_true - y_pred) ** 2, axis=1
            )
    elif callable(self.loss):
        loss_function = self.loss

    random_state = check_random_state(self.random_state)

    try:  # Not all estimators accept a random_state
        estimator.set_params(random_state=random_state)
    except ValueError:
        pass

    estimator_fit_has_sample_weight = has_fit_parameter(estimator, "sample_weight")
    estimator_name = type(estimator).__name__
    if sample_weight is not None and not estimator_fit_has_sample_weight:
        raise ValueError(
            "%s does not support sample_weight. Sample"
            " weights are only used for the calibration"
            " itself." % estimator_name
        )

    if sample_weight is not None:
        fit_params["sample_weight"] = sample_weight

    if _routing_enabled():
        routed_params = process_routing(self, "fit", **fit_params)
    else:
        routed_params = Bunch()
        routed_params.estimator = Bunch(fit={}, predict={}, score={})
        if sample_weight is not None:
            sample_weight = _check_sample_weight(sample_weight, X)
            routed_params.estimator.fit = {"sample_weight": sample_weight}

    n_inliers_best = 1
    score_best = -np.inf
    inlier_mask_best = None
    X_inlier_best = None
    y_inlier_best = None
    inlier_best_idxs_subset = None
    self.n_skips_no_inliers_ = 0
    self.n_skips_invalid_data_ = 0
    self.n_skips_invalid_model_ = 0

    # number of data samples
    n_samples = X.shape[0]
    sample_idxs = np.arange(n_samples)

    self.n_trials_ = 0
    max_trials = self.max_trials
    while self.n_trials_ < max_trials:
        self.n_trials_ += 1

        if (
            self.n_skips_no_inliers_
            + self.n_skips_invalid_data_
            + self.n_skips_invalid_model_
        ) > self.max_skips:
            break

        # choose random sample set
        subset_idxs = sample_without_replacement(
            n_samples, min_samples, random_state=random_state
        )
        X_subset = X[subset_idxs]
        y_subset = y[subset_idxs]

        # check if random sample set is valid
        if self.is_data_valid is not None and not self.is_data_valid(
            X_subset, y_subset
        ):
            self.n_skips_invalid_data_ += 1
            continue

        # cut `fit_params` down to `subset_idxs`
        fit_params_subset = _check_method_params(
            X, params=routed_params.estimator.fit, indices=subset_idxs
        )

        # fit model for current random sample set
        estimator.fit(X_subset, y_subset, **fit_params_subset)

        # check if estimated model is valid
        if self.is_model_valid is not None and not self.is_model_valid(
            estimator, X_subset, y_subset
        ):
            self.n_skips_invalid_model_ += 1
            continue

        # residuals of all data for current random sample model
        y_pred = estimator.predict(X)
        residuals_subset = loss_function(y, y_pred)

        # classify data into inliers and outliers
        inlier_mask_subset = residuals_subset <= residual_threshold
        n_inliers_subset = np.sum(inlier_mask_subset)

        # less inliers -> skip current random sample
        if n_inliers_subset < n_inliers_best:
            self.n_skips_no_inliers_ += 1
            continue

        # extract inlier data set
        inlier_idxs_subset = sample_idxs[inlier_mask_subset]
        X_inlier_subset = X[inlier_idxs_subset]
        y_inlier_subset = y[inlier_idxs_subset]

        # cut `fit_params` down to `inlier_idxs_subset`
        score_params_inlier_subset = _check_method_params(
            X, params=routed_params.estimator.score, indices=inlier_idxs_subset
        )

        # score of inlier data set
        score_subset = estimator.score(
            X_inlier_subset,
            y_inlier_subset,
            **score_params_inlier_subset,
        )

        # same number of inliers but worse score -> skip current random
        # sample
        if n_inliers_subset == n_inliers_best and score_subset < score_best:
            continue

        # save current random sample as best sample
        n_inliers_best = n_inliers_subset
        score_best = score_subset
        inlier_mask_best = inlier_mask_subset
        X_inlier_best = X_inlier_subset
        y_inlier_best = y_inlier_subset
        inlier_best_idxs_subset = inlier_idxs_subset

        max_trials = min(
            max_trials,
            _dynamic_max_trials(
                n_inliers_best, n_samples, min_samples, self.stop_probability
            ),
        )

        # break if sufficient number of inliers or score is reached
        if n_inliers_best >= self.stop_n_inliers or score_best >= self.stop_score:
            break

    # if none of the iterations met the required criteria
    if inlier_mask_best is None:
        if (
            self.n_skips_no_inliers_
            + self.n_skips_invalid_data_
            + self.n_skips_invalid_model_
        ) > self.max_skips:
            raise ValueError(
                "RANSAC skipped more iterations than `max_skips` without"
                " finding a valid consensus set. Iterations were skipped"
                " because each randomly chosen sub-sample failed the"
                " passing criteria. See estimator attributes for"
                " diagnostics (n_skips*)."
            )
        else:
            raise ValueError(
                "RANSAC could not find a valid consensus set. All"
                " `max_trials` iterations were skipped because each"
                " randomly chosen sub-sample failed the passing criteria."
                " See estimator attributes for diagnostics (n_skips*)."
            )
    else:
        if (
            self.n_skips_no_inliers_
            + self.n_skips_invalid_data_
            + self.n_skips_invalid_model_
        ) > self.max_skips:
            warnings.warn(
                (
                    "RANSAC found a valid consensus set but exited"
                    " early due to skipping more iterations than"
                    " `max_skips`. See estimator attributes for"
                    " diagnostics (n_skips*)."
                ),
                ConvergenceWarning,
            )

    # estimate final model using all inliers
    fit_params_best_idxs_subset = _check_method_params(
        X, params=routed_params.estimator.fit, indices=inlier_best_idxs_subset
    )
    estimator.fit(X_inlier_best, y_inlier_best, **fit_params_best_idxs_subset)

    self.estimator_ = estimator
    self.inlier_mask_ = inlier_mask_best
    return self
Fit estimator using RANSAC algorithm.

Parameters
----------
X : {array-like, sparse matrix} of shape (n_samples, n_features)
    Training data.

y : array-like of shape (n_samples,) or (n_samples, n_targets)
    Target values.

sample_weight : array-like of shape (n_samples,), default=None
    Individual weights for each sample. Raises an error if sample_weight
    is passed and the estimator's fit method does not support it.

    .. versionadded:: 0.18

**fit_params : dict
    Parameters routed to the `fit` method of the sub-estimator via the
    metadata routing API.

    .. versionadded:: 1.5

        Only available if
        `sklearn.set_config(enable_metadata_routing=True)` is set. See
        :ref:`Metadata Routing User Guide <metadata_routing>` for more
        details.

Returns
-------
self : object
    Fitted `RANSACRegressor` estimator.

Raises
------
ValueError
    If no valid consensus set could be found. This occurs if
    `is_data_valid` and `is_model_valid` return False for all
    `max_trials` randomly chosen sub-samples.
fit
python
scikit-learn/scikit-learn
sklearn/linear_model/_ransac.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ransac.py
BSD-3-Clause
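As a quick orientation for the `fit` routine above, here is a hypothetical end-to-end usage sketch on synthetic data with injected outliers. The printed attributes (`estimator_`, `inlier_mask_`, `n_trials_`) follow the public RANSACRegressor API; the data, variable names, and hyperparameters are illustrative only.

import numpy as np
from sklearn.linear_model import RANSACRegressor

rng = np.random.RandomState(0)
X = rng.uniform(-5, 5, size=(200, 1))
y = 3.0 * X.ravel() + 1.0 + rng.normal(scale=0.5, size=200)
y[:20] += 30.0 * rng.uniform(size=20)  # corrupt the first 20 targets with gross outliers

# Default base estimator is LinearRegression (see the `fit` code above).
ransac = RANSACRegressor(random_state=0).fit(X, y)

print(ransac.estimator_.coef_, ransac.estimator_.intercept_)  # refit on inliers only
print("inliers kept:", int(ransac.inlier_mask_.sum()), "trials run:", ransac.n_trials_)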
def predict(self, X, **params):
    """Predict using the estimated model.

    This is a wrapper for `estimator_.predict(X)`.

    Parameters
    ----------
    X : {array-like or sparse matrix} of shape (n_samples, n_features)
        Input data.

    **params : dict
        Parameters routed to the `predict` method of the sub-estimator via
        the metadata routing API.

        .. versionadded:: 1.5

            Only available if
            `sklearn.set_config(enable_metadata_routing=True)` is set. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for more
            details.

    Returns
    -------
    y : array, shape = [n_samples] or [n_samples, n_targets]
        Returns predicted values.
    """
    check_is_fitted(self)
    X = validate_data(
        self,
        X,
        ensure_all_finite=False,
        accept_sparse=True,
        reset=False,
    )

    _raise_for_params(params, self, "predict")

    if _routing_enabled():
        predict_params = process_routing(self, "predict", **params).estimator[
            "predict"
        ]
    else:
        predict_params = {}

    return self.estimator_.predict(X, **predict_params)
Predict using the estimated model.

This is a wrapper for `estimator_.predict(X)`.

Parameters
----------
X : {array-like or sparse matrix} of shape (n_samples, n_features)
    Input data.

**params : dict
    Parameters routed to the `predict` method of the sub-estimator via the
    metadata routing API.

    .. versionadded:: 1.5

        Only available if
        `sklearn.set_config(enable_metadata_routing=True)` is set. See
        :ref:`Metadata Routing User Guide <metadata_routing>` for more
        details.

Returns
-------
y : array, shape = [n_samples] or [n_samples, n_targets]
    Returns predicted values.
predict
python
scikit-learn/scikit-learn
sklearn/linear_model/_ransac.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ransac.py
BSD-3-Clause
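Continuing the hypothetical `ransac` and `X` objects from the sketch after the `fit` record: `predict` is a thin delegation to the final, inlier-refitted sub-estimator, so (absent routed metadata) the two calls below agree.

X_new = np.linspace(-5, 5, 5).reshape(-1, 1)  # reuses numpy from the earlier sketch
y_new = ransac.predict(X_new)
assert np.allclose(y_new, ransac.estimator_.predict(X_new))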
def score(self, X, y, **params):
    """Return the score of the prediction.

    This is a wrapper for `estimator_.score(X, y)`.

    Parameters
    ----------
    X : {array-like or sparse matrix} of shape (n_samples, n_features)
        Training data.

    y : array-like of shape (n_samples,) or (n_samples, n_targets)
        Target values.

    **params : dict
        Parameters routed to the `score` method of the sub-estimator via
        the metadata routing API.

        .. versionadded:: 1.5

            Only available if
            `sklearn.set_config(enable_metadata_routing=True)` is set. See
            :ref:`Metadata Routing User Guide <metadata_routing>` for more
            details.

    Returns
    -------
    z : float
        Score of the prediction.
    """
    check_is_fitted(self)
    X = validate_data(
        self,
        X,
        ensure_all_finite=False,
        accept_sparse=True,
        reset=False,
    )

    _raise_for_params(params, self, "score")

    if _routing_enabled():
        score_params = process_routing(self, "score", **params).estimator["score"]
    else:
        score_params = {}

    return self.estimator_.score(X, y, **score_params)
Return the score of the prediction.

This is a wrapper for `estimator_.score(X, y)`.

Parameters
----------
X : {array-like or sparse matrix} of shape (n_samples, n_features)
    Training data.

y : array-like of shape (n_samples,) or (n_samples, n_targets)
    Target values.

**params : dict
    Parameters routed to the `score` method of the sub-estimator via the
    metadata routing API.

    .. versionadded:: 1.5

        Only available if
        `sklearn.set_config(enable_metadata_routing=True)` is set. See
        :ref:`Metadata Routing User Guide <metadata_routing>` for more
        details.

Returns
-------
z : float
    Score of the prediction.
score
python
scikit-learn/scikit-learn
sklearn/linear_model/_ransac.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ransac.py
BSD-3-Clause
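Likewise, `score` simply defers to the sub-estimator's `score` (R^2 for the default LinearRegression base), evaluated with the final inlier-fitted model on whatever data is passed in. Continuing the same hypothetical objects:

r2_all = ransac.score(X, y)  # includes the corrupted samples
r2_inliers = ransac.score(X[ransac.inlier_mask_], y[ransac.inlier_mask_])
print(r2_all, r2_inliers)  # the all-sample score is typically lower when outliers are present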
def get_metadata_routing(self):
    """Get metadata routing of this object.

    Please check :ref:`User Guide <metadata_routing>` on how the routing
    mechanism works.

    .. versionadded:: 1.5

    Returns
    -------
    routing : MetadataRouter
        A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
        routing information.
    """
    router = MetadataRouter(owner=self.__class__.__name__).add(
        estimator=self.estimator,
        method_mapping=MethodMapping()
        .add(caller="fit", callee="fit")
        .add(caller="fit", callee="score")
        .add(caller="score", callee="score")
        .add(caller="predict", callee="predict"),
    )
    return router
Get metadata routing of this object.

Please check :ref:`User Guide <metadata_routing>` on how the routing
mechanism works.

.. versionadded:: 1.5

Returns
-------
routing : MetadataRouter
    A :class:`~sklearn.utils.metadata_routing.MetadataRouter` encapsulating
    routing information.
get_metadata_routing
python
scikit-learn/scikit-learn
sklearn/linear_model/_ransac.py
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/_ransac.py
BSD-3-Clause
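The method mapping above routes metadata from this estimator's `fit` to the sub-estimator's `fit` and `score`, and mirrors `predict` and `score`. Below is a minimal sketch of using that routing, assuming scikit-learn >= 1.5 with `enable_metadata_routing=True`; the explicit `set_fit_request`/`set_score_request` calls are shown defensively because sample weights are forwarded both when fitting random subsets and when scoring the inlier set, and the exact validation behaviour for unset requests may vary by version. Data and variable names are illustrative only.

import numpy as np
from sklearn import set_config
from sklearn.linear_model import LinearRegression, RANSACRegressor

set_config(enable_metadata_routing=True)

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 1))
y = 2.0 * X.ravel() + rng.normal(scale=0.1, size=100)
w = np.ones_like(y)  # illustrative weights

base = (
    LinearRegression()
    .set_fit_request(sample_weight=True)    # weights routed to the sub-estimator's fit
    .set_score_request(sample_weight=True)  # ... and to the inlier scoring step
)
reg = RANSACRegressor(estimator=base, random_state=0).fit(X, y, sample_weight=w)

set_config(enable_metadata_routing=False)  # restore the default configuration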