code (stringlengths 66–870k) | docstring (stringlengths 19–26.7k) | func_name (stringlengths 1–138) | language (stringclasses 1 value) | repo (stringlengths 7–68) | path (stringlengths 5–324) | url (stringlengths 46–389) | license (stringclasses 7 values)
---|---|---|---|---|---|---|---|
def sqeuclidean_row_norms(X, num_threads):
    """Compute the squared euclidean norm of the rows of X in parallel.

    Parameters
    ----------
    X : ndarray or CSR matrix of shape (n_samples, n_features)
        Input data. Must be C-contiguous.
    num_threads : int
        The number of OpenMP threads to use.

    Returns
    -------
    sqeuclidean_row_norms : ndarray of shape (n_samples,)
        Array containing the squared euclidean norm of each row of X.
    """
    if X.dtype == np.float64:
        return np.asarray(_sqeuclidean_row_norms64(X, num_threads))
    if X.dtype == np.float32:
        return np.asarray(_sqeuclidean_row_norms32(X, num_threads))
    raise ValueError(
        "Only float64 or float32 datasets are supported at this time, "
        f"got: X.dtype={X.dtype}."
    )
|
Compute the squared euclidean norm of the rows of X in parallel.
Parameters
----------
X : ndarray or CSR matrix of shape (n_samples, n_features)
Input data. Must be C-contiguous.
num_threads : int
The number of OpenMP threads to use.
Returns
-------
sqeuclidean_row_norms : ndarray of shape (n_samples,)
Array containing the squared euclidean norm of each row of X.
|
sqeuclidean_row_norms
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
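As a quick sanity check, the reduction above agrees with a one-line NumPy computation on dense input. A minimal sketch (note that `num_threads` only affects the parallel Cython path, not the result):

```python
# NumPy reference for sqeuclidean_row_norms on dense, C-contiguous input;
# einsum computes the dot product of each row with itself.
import numpy as np

X = np.ascontiguousarray(np.random.rand(5, 3), dtype=np.float64)
row_norms = np.einsum("ij,ij->i", X, X)
assert row_norms.shape == (X.shape[0],)
```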
def is_usable_for(cls, X, Y, metric) -> bool:
    """Return True if the dispatcher can be used for the
    given parameters.

    Parameters
    ----------
    X : {ndarray, sparse matrix} of shape (n_samples_X, n_features)
        Input data.
    Y : {ndarray, sparse matrix} of shape (n_samples_Y, n_features)
        Input data.
    metric : str, default='euclidean'
        The distance metric to use.
        For a list of available metrics, see the documentation of
        :class:`~sklearn.metrics.DistanceMetric`.

    Returns
    -------
    True if the dispatcher can be used, else False.
    """
    # FIXME: the current Cython implementation is too slow for a large number of
    # features. We temporarily disable it to fall back on SciPy's implementation.
    # See: https://github.com/scikit-learn/scikit-learn/issues/28191
    if (
        issparse(X)
        and issparse(Y)
        and isinstance(metric, str)
        and "euclidean" in metric
    ):
        return False

    def is_numpy_c_ordered(X):
        return hasattr(X, "flags") and getattr(X.flags, "c_contiguous", False)

    def is_valid_sparse_matrix(X):
        return (
            issparse(X)
            and X.format == "csr"
            and
            # TODO: support CSR matrices without non-zero elements
            X.nnz > 0
            and
            # TODO: support CSR matrices with int64 indices and indptr
            # See: https://github.com/scikit-learn/scikit-learn/issues/23653
            X.indices.dtype == X.indptr.dtype == np.int32
        )

    is_usable = (
        get_config().get("enable_cython_pairwise_dist", True)
        and (is_numpy_c_ordered(X) or is_valid_sparse_matrix(X))
        and (is_numpy_c_ordered(Y) or is_valid_sparse_matrix(Y))
        and X.dtype == Y.dtype
        and X.dtype in (np.float32, np.float64)
        and (metric in cls.valid_metrics() or isinstance(metric, DistanceMetric))
    )
    return is_usable
|
Return True if the dispatcher can be used for the
given parameters.
Parameters
----------
X : {ndarray, sparse matrix} of shape (n_samples_X, n_features)
Input data.
Y : {ndarray, sparse matrix} of shape (n_samples_Y, n_features)
Input data.
metric : str, default='euclidean'
The distance metric to use.
For a list of available metrics, see the documentation of
:class:`~sklearn.metrics.DistanceMetric`.
Returns
-------
True if the dispatcher can be used, else False.
|
is_usable_for
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
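The checks in `is_usable_for` are easy to exercise directly. A minimal sketch of inputs that pass the dense and sparse branches, assuming SciPy is available:

```python
# Inputs satisfying the is_usable_for checks above: a C-contiguous float64
# array, and a CSR matrix with stored values and int32 index arrays.
import numpy as np
from scipy.sparse import random as sparse_random

X_dense = np.ascontiguousarray(np.random.rand(10, 4))
assert X_dense.flags.c_contiguous and X_dense.dtype == np.float64

X_sparse = sparse_random(10, 4, density=0.5, format="csr", dtype=np.float64)
X_sparse.indices = X_sparse.indices.astype(np.int32)
X_sparse.indptr = X_sparse.indptr.astype(np.int32)
assert X_sparse.nnz > 0
```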
def compute(
    cls,
    X,
    Y,
    **kwargs,
):
    """Compute the reduction.

    Parameters
    ----------
    X : ndarray or CSR matrix of shape (n_samples_X, n_features)
        Input data.
    Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
        Input data.
    **kwargs : additional parameters for the reduction

    Notes
    -----
    This method is an abstract class method: it has to be implemented
    for all subclasses.
    """
|
Compute the reduction.
Parameters
----------
X : ndarray or CSR matrix of shape (n_samples_X, n_features)
Input data.
Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
Input data.
**kwargs : additional parameters for the reduction
Notes
-----
This method is an abstract class method: it has to be implemented
for all subclasses.
|
compute
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
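The concrete reductions in the following rows all implement this abstract classmethod with the same dtype-dispatch shape. A minimal sketch of that pattern (the `_Reduction64`/`_Reduction32` names are hypothetical stand-ins, not scikit-learn classes):

```python
# Hypothetical sketch of the dtype-dispatch pattern shared by the concrete
# compute() implementations below.
import numpy as np


class _Reduction64:  # stand-in for a float64-specialized backend
    @classmethod
    def compute(cls, X, Y, **kwargs):
        return "float64 path"


class _Reduction32:  # stand-in for a float32-specialized backend
    @classmethod
    def compute(cls, X, Y, **kwargs):
        return "float32 path"


class Reduction:
    @classmethod
    def compute(cls, X, Y, **kwargs):
        if X.dtype == Y.dtype == np.float64:
            return _Reduction64.compute(X, Y, **kwargs)
        if X.dtype == Y.dtype == np.float32:
            return _Reduction32.compute(X, Y, **kwargs)
        raise ValueError("Only float64 or float32 dataset pairs are supported.")
```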
def compute(
    cls,
    X,
    Y,
    k,
    metric="euclidean",
    chunk_size=None,
    metric_kwargs=None,
    strategy=None,
    return_distance=False,
):
    """Compute the argkmin reduction.

    Parameters
    ----------
    X : ndarray or CSR matrix of shape (n_samples_X, n_features)
        Input data.
    Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
        Input data.
    k : int
        The k for the argkmin reduction.
    metric : str, default='euclidean'
        The distance metric to use for argkmin.
        For a list of available metrics, see the documentation of
        :class:`~sklearn.metrics.DistanceMetric`.
    chunk_size : int, default=None
        The number of vectors per chunk. If None (default), the value is
        looked up in the scikit-learn configuration under
        `pairwise_dist_chunk_size`, and 256 is used if it is not set.
    metric_kwargs : dict, default=None
        Keyword arguments to pass to specified metric function.
    strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
        The chunking strategy defining which dataset the parallelization is
        made on.
        For both strategies, the computation happens with two nested loops,
        respectively on chunks of X and chunks of Y.
        Strategies differ on which loop (outer or inner) is made to run
        in parallel with the Cython `prange` construct:
        - 'parallel_on_X' dispatches chunks of X uniformly on threads.
          Each thread then iterates on all the chunks of Y. This strategy is
          embarrassingly parallel and comes with no data structure
          synchronisation.
        - 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
          Each thread processes all the chunks of X in turn. This strategy is
          a sequence of embarrassingly parallel subtasks (the inner loop on Y
          chunks) with intermediate data structure synchronisation at each
          iteration of the sequential outer loop on X chunks.
        - 'auto' relies on a simple heuristic to choose between
          'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large
          enough, 'parallel_on_X' is usually the most efficient strategy.
          When `X.shape[0]` is small but `Y.shape[0]` is large,
          'parallel_on_Y' brings more opportunity for parallelism and is
          therefore more efficient.
        - None (default) looks up `pairwise_dist_parallel_strategy` in the
          scikit-learn configuration, and uses 'auto' if it is not set.
    return_distance : boolean, default=False
        Return distances between each X vector and its
        argkmin if set to True.

    Returns
    -------
    If return_distance=False:
      - argkmin_indices : ndarray of shape (n_samples_X, k)
        Indices of the argkmin for each vector in X.
    If return_distance=True:
      - argkmin_distances : ndarray of shape (n_samples_X, k)
        Distances to the argkmin for each vector in X.
      - argkmin_indices : ndarray of shape (n_samples_X, k)
        Indices of the argkmin for each vector in X.

    Notes
    -----
    This classmethod inspects the argument values to dispatch to the
    dtype-specialized implementation of :class:`ArgKmin`.
    This allows decoupling the API entirely from the implementation details
    whilst maintaining RAII: all temporarily allocated data structures
    necessary for the concrete implementation are therefore freed when this
    classmethod returns.
    """
    if X.dtype == Y.dtype == np.float64:
        return ArgKmin64.compute(
            X=X,
            Y=Y,
            k=k,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
            return_distance=return_distance,
        )
    if X.dtype == Y.dtype == np.float32:
        return ArgKmin32.compute(
            X=X,
            Y=Y,
            k=k,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
            return_distance=return_distance,
        )
    raise ValueError(
        "Only float64 or float32 dataset pairs are supported at this time, "
        f"got: X.dtype={X.dtype} and Y.dtype={Y.dtype}."
    )
|
Compute the argkmin reduction.
Parameters
----------
X : ndarray or CSR matrix of shape (n_samples_X, n_features)
Input data.
Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
Input data.
k : int
The k for the argkmin reduction.
metric : str, default='euclidean'
The distance metric to use for argkmin.
For a list of available metrics, see the documentation of
:class:`~sklearn.metrics.DistanceMetric`.
chunk_size : int, default=None
The number of vectors per chunk. If None (default), the value is looked up
in the scikit-learn configuration under `pairwise_dist_chunk_size`, and 256
is used if it is not set.
metric_kwargs : dict, default=None
Keyword arguments to pass to specified metric function.
strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
The chunking strategy defining which dataset the parallelization is made on.
For both strategies, the computation happens with two nested loops,
respectively on chunks of X and chunks of Y.
Strategies differ on which loop (outer or inner) is made to run
in parallel with the Cython `prange` construct:
- 'parallel_on_X' dispatches chunks of X uniformly on threads.
Each thread then iterates on all the chunks of Y. This strategy is
embarrassingly parallel and comes with no data structure
synchronisation.
- 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
Each thread processes all the chunks of X in turn. This strategy is
a sequence of embarrassingly parallel subtasks (the inner loop on Y
chunks) with intermediate data structure synchronisation at each
iteration of the sequential outer loop on X chunks.
- 'auto' relies on a simple heuristic to choose between
'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large enough,
'parallel_on_X' is usually the most efficient strategy.
When `X.shape[0]` is small but `Y.shape[0]` is large, 'parallel_on_Y'
brings more opportunity for parallelism and is therefore more efficient.
- None (default) looks up `pairwise_dist_parallel_strategy` in the
scikit-learn configuration, and uses 'auto' if it is not set.
return_distance : boolean, default=False
Return distances between each X vector and its
argkmin if set to True.
Returns
-------
If return_distance=False:
- argkmin_indices : ndarray of shape (n_samples_X, k)
Indices of the argkmin for each vector in X.
If return_distance=True:
- argkmin_distances : ndarray of shape (n_samples_X, k)
Distances to the argkmin for each vector in X.
- argkmin_indices : ndarray of shape (n_samples_X, k)
Indices of the argkmin for each vector in X.
Notes
-----
This classmethod inspects the argument values to dispatch to the
dtype-specialized implementation of :class:`ArgKmin`.
This allows decoupling the API entirely from the implementation details
whilst maintaining RAII: all temporarily allocated data structures necessary
for the concrete implementation are therefore freed when this classmethod
returns.
|
compute
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
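On small inputs, the result of `ArgKmin.compute` with the default euclidean metric can be cross-checked against a brute-force NumPy computation. A minimal sketch:

```python
# Brute-force argkmin reference: pairwise squared euclidean distances,
# then the k smallest per row of X.
import numpy as np

rng = np.random.RandomState(0)
X, Y, k = rng.rand(6, 3), rng.rand(20, 3), 3

sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
argkmin_indices = np.argsort(sq_dists, axis=1)[:, :k]
argkmin_distances = np.sqrt(np.take_along_axis(sq_dists, argkmin_indices, axis=1))
assert argkmin_indices.shape == (X.shape[0], k)
```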
def compute(
    cls,
    X,
    Y,
    radius,
    metric="euclidean",
    chunk_size=None,
    metric_kwargs=None,
    strategy=None,
    return_distance=False,
    sort_results=False,
):
    """Return the results of the reduction for the given arguments.

    Parameters
    ----------
    X : ndarray or CSR matrix of shape (n_samples_X, n_features)
        Input data.
    Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
        Input data.
    radius : float
        The radius defining the neighborhood.
    metric : str, default='euclidean'
        The distance metric to use.
        For a list of available metrics, see the documentation of
        :class:`~sklearn.metrics.DistanceMetric`.
    chunk_size : int, default=None
        The number of vectors per chunk. If None (default), the value is
        looked up in the scikit-learn configuration under
        `pairwise_dist_chunk_size`, and 256 is used if it is not set.
    metric_kwargs : dict, default=None
        Keyword arguments to pass to specified metric function.
    strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
        The chunking strategy defining which dataset the parallelization is
        made on.
        For both strategies, the computation happens with two nested loops,
        respectively on chunks of X and chunks of Y.
        Strategies differ on which loop (outer or inner) is made to run
        in parallel with the Cython `prange` construct:
        - 'parallel_on_X' dispatches chunks of X uniformly on threads.
          Each thread then iterates on all the chunks of Y. This strategy is
          embarrassingly parallel and comes with no data structure
          synchronisation.
        - 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
          Each thread processes all the chunks of X in turn. This strategy is
          a sequence of embarrassingly parallel subtasks (the inner loop on Y
          chunks) with intermediate data structure synchronisation at each
          iteration of the sequential outer loop on X chunks.
        - 'auto' relies on a simple heuristic to choose between
          'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large
          enough, 'parallel_on_X' is usually the most efficient strategy.
          When `X.shape[0]` is small but `Y.shape[0]` is large,
          'parallel_on_Y' brings more opportunity for parallelism and is
          therefore more efficient despite the synchronization step at each
          iteration of the outer loop on chunks of `X`.
        - None (default) looks up `pairwise_dist_parallel_strategy` in the
          scikit-learn configuration, and uses 'auto' if it is not set.
    return_distance : boolean, default=False
        Return distances between each X vector and its neighbors if set to
        True.
    sort_results : boolean, default=False
        Sort results with respect to distances between each X vector and its
        neighbors if set to True.

    Returns
    -------
    If return_distance=False:
      - neighbors_indices : ndarray of n_samples_X ndarrays
        Indices of the neighbors for each vector in X.
    If return_distance=True:
      - neighbors_indices : ndarray of n_samples_X ndarrays
        Indices of the neighbors for each vector in X.
      - neighbors_distances : ndarray of n_samples_X ndarrays
        Distances to the neighbors for each vector in X.

    Notes
    -----
    This classmethod inspects the argument values to dispatch to the
    dtype-specialized implementation of :class:`RadiusNeighbors`.
    This allows decoupling the API entirely from the implementation details
    whilst maintaining RAII: all temporarily allocated data structures
    necessary for the concrete implementation are therefore freed when this
    classmethod returns.
    """
    if X.dtype == Y.dtype == np.float64:
        return RadiusNeighbors64.compute(
            X=X,
            Y=Y,
            radius=radius,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
            sort_results=sort_results,
            return_distance=return_distance,
        )
    if X.dtype == Y.dtype == np.float32:
        return RadiusNeighbors32.compute(
            X=X,
            Y=Y,
            radius=radius,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
            sort_results=sort_results,
            return_distance=return_distance,
        )
    raise ValueError(
        "Only float64 or float32 dataset pairs are supported at this time, "
        f"got: X.dtype={X.dtype} and Y.dtype={Y.dtype}."
    )
|
Return the results of the reduction for the given arguments.
Parameters
----------
X : ndarray or CSR matrix of shape (n_samples_X, n_features)
Input data.
Y : ndarray or CSR matrix of shape (n_samples_Y, n_features)
Input data.
radius : float
The radius defining the neighborhood.
metric : str, default='euclidean'
The distance metric to use.
For a list of available metrics, see the documentation of
:class:`~sklearn.metrics.DistanceMetric`.
chunk_size : int, default=None
The number of vectors per chunk. If None (default), the value is looked up
in the scikit-learn configuration under `pairwise_dist_chunk_size`, and 256
is used if it is not set.
metric_kwargs : dict, default=None
Keyword arguments to pass to specified metric function.
strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
The chunking strategy defining which dataset the parallelization is made on.
For both strategies, the computation happens with two nested loops,
respectively on chunks of X and chunks of Y.
Strategies differ on which loop (outer or inner) is made to run
in parallel with the Cython `prange` construct:
- 'parallel_on_X' dispatches chunks of X uniformly on threads.
Each thread then iterates on all the chunks of Y. This strategy is
embarrassingly parallel and comes with no data structure
synchronisation.
- 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
Each thread processes all the chunks of X in turn. This strategy is
a sequence of embarrassingly parallel subtasks (the inner loop on Y
chunks) with intermediate data structure synchronisation at each
iteration of the sequential outer loop on X chunks.
- 'auto' relies on a simple heuristic to choose between
'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large enough,
'parallel_on_X' is usually the most efficient strategy.
When `X.shape[0]` is small but `Y.shape[0]` is large, 'parallel_on_Y'
brings more opportunity for parallelism and is therefore more efficient
despite the synchronization step at each iteration of the outer loop
on chunks of `X`.
- None (default) looks up `pairwise_dist_parallel_strategy` in the
scikit-learn configuration, and uses 'auto' if it is not set.
return_distance : boolean, default=False
Return distances between each X vector and its neighbors if set to True.
sort_results : boolean, default=False
Sort results with respect to distances between each X vector and its
neighbors if set to True.
Returns
-------
If return_distance=False:
- neighbors_indices : ndarray of n_samples_X ndarrays
Indices of the neighbors for each vector in X.
If return_distance=True:
- neighbors_indices : ndarray of n_samples_X ndarrays
Indices of the neighbors for each vector in X.
- neighbors_distances : ndarray of n_samples_X ndarrays
Distances to the neighbors for each vector in X.
Notes
-----
This classmethod inspects the argument values to dispatch to the
dtype-specialized implementation of :class:`RadiusNeighbors`.
This allows decoupling the API entirely from the implementation details
whilst maintaining RAII: all temporarily allocated data structures necessary
for the concrete implementation are therefore freed when this classmethod
returns.
|
compute
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
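As with argkmin, a brute-force NumPy reference illustrates the radius-neighbors reduction; the results are ragged, which is why the docstring above describes ndarrays of ndarrays. A minimal sketch:

```python
# Brute-force radius-neighbors reference on small dense inputs; each row of
# X gets a variable-length array of neighbor indices within `radius`.
import numpy as np

rng = np.random.RandomState(0)
X, Y, radius = rng.rand(5, 3), rng.rand(30, 3), 0.5

dists = np.sqrt(((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1))
neighbors_indices = [np.flatnonzero(row <= radius) for row in dists]
neighbors_distances = [row[idx] for row, idx in zip(dists, neighbors_indices)]
```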
def compute(
    cls,
    X,
    Y,
    k,
    weights,
    Y_labels,
    unique_Y_labels,
    metric="euclidean",
    chunk_size=None,
    metric_kwargs=None,
    strategy=None,
):
    """Compute the argkmin reduction.

    Parameters
    ----------
    X : ndarray of shape (n_samples_X, n_features)
        The input array to be labelled.
    Y : ndarray of shape (n_samples_Y, n_features)
        The input array whose class membership is provided through the
        `Y_labels` parameter.
    k : int
        The number of nearest neighbors to consider.
    weights : {'uniform', 'distance'}
        The weighting scheme applied over the `Y_labels` of `Y` when
        computing the weighted mode of the labels.
    Y_labels : ndarray
        An array containing the index of the class membership of the
        associated samples in `Y`. This is used in labeling `X`.
    unique_Y_labels : ndarray
        An array containing all unique indices contained in the
        corresponding `Y_labels` array.
    metric : str, default='euclidean'
        The distance metric to use. For a list of available metrics, see
        the documentation of :class:`~sklearn.metrics.DistanceMetric`.
        Currently does not support `'precomputed'`.
    chunk_size : int, default=None
        The number of vectors per chunk. If None (default), the value is
        looked up in the scikit-learn configuration under
        `pairwise_dist_chunk_size`, and 256 is used if it is not set.
    metric_kwargs : dict, default=None
        Keyword arguments to pass to specified metric function.
    strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
        The chunking strategy defining which dataset the parallelization is
        made on.
        For both strategies, the computation happens with two nested loops,
        respectively on chunks of X and chunks of Y.
        Strategies differ on which loop (outer or inner) is made to run
        in parallel with the Cython `prange` construct:
        - 'parallel_on_X' dispatches chunks of X uniformly on threads.
          Each thread then iterates on all the chunks of Y. This strategy is
          embarrassingly parallel and comes with no data structure
          synchronisation.
        - 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
          Each thread processes all the chunks of X in turn. This strategy is
          a sequence of embarrassingly parallel subtasks (the inner loop on Y
          chunks) with intermediate data structure synchronisation at each
          iteration of the sequential outer loop on X chunks.
        - 'auto' relies on a simple heuristic to choose between
          'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large
          enough, 'parallel_on_X' is usually the most efficient strategy.
          When `X.shape[0]` is small but `Y.shape[0]` is large,
          'parallel_on_Y' brings more opportunity for parallelism and is
          therefore more efficient despite the synchronization step at each
          iteration of the outer loop on chunks of `X`.
        - None (default) looks up `pairwise_dist_parallel_strategy` in the
          scikit-learn configuration, and uses 'auto' if it is not set.

    Returns
    -------
    probabilities : ndarray of shape (n_samples_X, n_classes)
        An array containing the class probabilities for each sample.

    Notes
    -----
    This classmethod is responsible for introspecting the argument
    values to dispatch to the most appropriate implementation of
    :class:`PairwiseDistancesArgKmin`.
    This allows decoupling the API entirely from the implementation details
    whilst maintaining RAII: all temporarily allocated data structures
    necessary for the concrete implementation are therefore freed when this
    classmethod returns.
    """
    if weights not in {"uniform", "distance"}:
        raise ValueError(
            "Only the 'uniform' or 'distance' weights options are supported"
            f" at this time. Got: {weights=}."
        )
    if X.dtype == Y.dtype == np.float64:
        return ArgKminClassMode64.compute(
            X=X,
            Y=Y,
            k=k,
            weights=weights,
            Y_labels=np.array(Y_labels, dtype=np.intp),
            unique_Y_labels=np.array(unique_Y_labels, dtype=np.intp),
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
        )
    if X.dtype == Y.dtype == np.float32:
        return ArgKminClassMode32.compute(
            X=X,
            Y=Y,
            k=k,
            weights=weights,
            Y_labels=np.array(Y_labels, dtype=np.intp),
            unique_Y_labels=np.array(unique_Y_labels, dtype=np.intp),
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
        )
    raise ValueError(
        "Only float64 or float32 dataset pairs are supported at this time, "
        f"got: X.dtype={X.dtype} and Y.dtype={Y.dtype}."
    )
|
Compute the argkmin reduction.
Parameters
----------
X : ndarray of shape (n_samples_X, n_features)
The input array to be labelled.
Y : ndarray of shape (n_samples_Y, n_features)
The input array whose class membership is provided through the
`Y_labels` parameter.
k : int
The number of nearest neighbors to consider.
weights : {'uniform', 'distance'}
The weighting scheme applied over the `Y_labels` of `Y` when computing the
weighted mode of the labels.
Y_labels : ndarray
An array containing the index of the class membership of the
associated samples in `Y`. This is used in labeling `X`.
unique_Y_labels : ndarray
An array containing all unique indices contained in the
corresponding `Y_labels` array.
metric : str, default='euclidean'
The distance metric to use. For a list of available metrics, see
the documentation of :class:`~sklearn.metrics.DistanceMetric`.
Currently does not support `'precomputed'`.
chunk_size : int, default=None
The number of vectors per chunk. If None (default), the value is looked up
in the scikit-learn configuration under `pairwise_dist_chunk_size`, and 256
is used if it is not set.
metric_kwargs : dict, default=None
Keyword arguments to pass to specified metric function.
strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
The chunking strategy defining which dataset the parallelization is made on.
For both strategies, the computation happens with two nested loops,
respectively on chunks of X and chunks of Y.
Strategies differ on which loop (outer or inner) is made to run
in parallel with the Cython `prange` construct:
- 'parallel_on_X' dispatches chunks of X uniformly on threads.
Each thread then iterates on all the chunks of Y. This strategy is
embarrassingly parallel and comes with no data structure
synchronisation.
- 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
Each thread processes all the chunks of X in turn. This strategy is
a sequence of embarrassingly parallel subtasks (the inner loop on Y
chunks) with intermediate data structure synchronisation at each
iteration of the sequential outer loop on X chunks.
- 'auto' relies on a simple heuristic to choose between
'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large enough,
'parallel_on_X' is usually the most efficient strategy.
When `X.shape[0]` is small but `Y.shape[0]` is large, 'parallel_on_Y'
brings more opportunity for parallelism and is therefore more efficient
despite the synchronization step at each iteration of the outer loop
on chunks of `X`.
- None (default) looks up `pairwise_dist_parallel_strategy` in the
scikit-learn configuration, and uses 'auto' if it is not set.
Returns
-------
probabilities : ndarray of shape (n_samples_X, n_classes)
An array containing the class probabilities for each sample.
Notes
-----
This classmethod is responsible for introspecting the argument
values to dispatch to the most appropriate implementation of
:class:`PairwiseDistancesArgKmin`.
This allows decoupling the API entirely from the implementation details
whilst maintaining RAII: all temporarily allocated data structures necessary
for the concrete implementation are therefore freed when this classmethod
returns.
|
compute
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
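Conceptually, the class-mode reduction turns the argkmin step into per-class probabilities. A minimal sketch with 'uniform' weights (the argkmin indices here are random placeholders for the output of a real neighbor search):

```python
# From k-nearest-neighbor indices to class probabilities with uniform
# weights: count neighbor labels per class, then normalize.
import numpy as np

rng = np.random.RandomState(0)
Y_labels = rng.randint(0, 3, size=20)              # class index per row of Y
argkmin_indices = rng.randint(0, 20, size=(5, 4))  # placeholder argkmin output

n_classes = int(Y_labels.max()) + 1
probabilities = np.zeros((argkmin_indices.shape[0], n_classes))
for i, neigh in enumerate(argkmin_indices):
    counts = np.bincount(Y_labels[neigh], minlength=n_classes)
    probabilities[i] = counts / counts.sum()
# probabilities has shape (n_samples_X, n_classes), as in the docstring.
```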
def compute(
    cls,
    X,
    Y,
    radius,
    weights,
    Y_labels,
    unique_Y_labels,
    outlier_label,
    metric="euclidean",
    chunk_size=None,
    metric_kwargs=None,
    strategy=None,
):
    """Return the results of the reduction for the given arguments.

    Parameters
    ----------
    X : ndarray of shape (n_samples_X, n_features)
        The input array to be labelled.
    Y : ndarray of shape (n_samples_Y, n_features)
        The input array whose class membership is provided through
        the `Y_labels` parameter.
    radius : float
        The radius defining the neighborhood.
    weights : {'uniform', 'distance'}
        The weighting scheme applied to the `Y_labels` when computing the
        weighted mode of the labels.
    Y_labels : ndarray
        An array containing the index of the class membership of the
        associated samples in `Y`. This is used in labeling `X`.
    unique_Y_labels : ndarray
        An array containing all unique class labels.
    outlier_label : int, default=None
        Label for outlier samples (samples with no neighbors within the
        given radius). In the default case, when the value is None, a
        ValueError is raised if any outlier is detected. The outlier label
        should be selected from among the unique 'Y' labels. If it is set
        to a different value, a warning is raised and all class
        probabilities of outliers are set to 0.
    metric : str, default='euclidean'
        The distance metric to use. For a list of available metrics, see
        the documentation of :class:`~sklearn.metrics.DistanceMetric`.
        Currently does not support `'precomputed'`.
    chunk_size : int, default=None
        The number of vectors per chunk. If None (default), the value is
        looked up in the scikit-learn configuration under
        `pairwise_dist_chunk_size`, and 256 is used if it is not set.
    metric_kwargs : dict, default=None
        Keyword arguments to pass to specified metric function.
    strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
        The chunking strategy defining which dataset the parallelization is
        made on.
        For both strategies, the computation happens with two nested loops,
        respectively on chunks of X and chunks of Y.
        Strategies differ on which loop (outer or inner) is made to run
        in parallel with the Cython `prange` construct:
        - 'parallel_on_X' dispatches chunks of X uniformly on threads.
          Each thread then iterates on all the chunks of Y. This strategy is
          embarrassingly parallel and comes with no data structure
          synchronisation.
        - 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
          Each thread processes all the chunks of X in turn. This strategy is
          a sequence of embarrassingly parallel subtasks (the inner loop on Y
          chunks) with intermediate data structure synchronisation at each
          iteration of the sequential outer loop on X chunks.
        - 'auto' relies on a simple heuristic to choose between
          'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large
          enough, 'parallel_on_X' is usually the most efficient strategy.
          When `X.shape[0]` is small but `Y.shape[0]` is large,
          'parallel_on_Y' brings more opportunity for parallelism and is
          therefore more efficient despite the synchronization step at each
          iteration of the outer loop on chunks of `X`.
        - None (default) looks up `pairwise_dist_parallel_strategy` in the
          scikit-learn configuration, and uses 'auto' if it is not set.

    Returns
    -------
    probabilities : ndarray of shape (n_samples_X, n_classes)
        An array containing the class probabilities for each sample.
    """
    if weights not in {"uniform", "distance"}:
        raise ValueError(
            "Only the 'uniform' or 'distance' weights options are supported"
            f" at this time. Got: {weights=}."
        )
    if X.dtype == Y.dtype == np.float64:
        return RadiusNeighborsClassMode64.compute(
            X=X,
            Y=Y,
            radius=radius,
            weights=weights,
            Y_labels=np.array(Y_labels, dtype=np.intp),
            unique_Y_labels=np.array(unique_Y_labels, dtype=np.intp),
            outlier_label=outlier_label,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
        )
    if X.dtype == Y.dtype == np.float32:
        return RadiusNeighborsClassMode32.compute(
            X=X,
            Y=Y,
            radius=radius,
            weights=weights,
            Y_labels=np.array(Y_labels, dtype=np.intp),
            unique_Y_labels=np.array(unique_Y_labels, dtype=np.intp),
            outlier_label=outlier_label,
            metric=metric,
            chunk_size=chunk_size,
            metric_kwargs=metric_kwargs,
            strategy=strategy,
        )
    raise ValueError(
        "Only float64 or float32 dataset pairs are supported at this time, "
        f"got: X.dtype={X.dtype} and Y.dtype={Y.dtype}."
    )
|
Return the results of the reduction for the given arguments.
Parameters
----------
X : ndarray of shape (n_samples_X, n_features)
The input array to be labelled.
Y : ndarray of shape (n_samples_Y, n_features)
The input array whose class membership is provided through
the `Y_labels` parameter.
radius : float
The radius defining the neighborhood.
weights : {'uniform', 'distance'}
The weighting scheme applied to the `Y_labels` when computing the
weighted mode of the labels.
Y_labels : ndarray
An array containing the index of the class membership of the
associated samples in `Y`. This is used in labeling `X`.
unique_Y_labels : ndarray
An array containing all unique class labels.
outlier_label : int, default=None
Label for outlier samples (samples with no neighbors within the given
radius). In the default case, when the value is None, a ValueError is
raised if any outlier is detected. The outlier label should be selected
from among the unique 'Y' labels. If it is set to a different value, a
warning is raised and all class probabilities of outliers are set to 0.
metric : str, default='euclidean'
The distance metric to use. For a list of available metrics, see
the documentation of :class:`~sklearn.metrics.DistanceMetric`.
Currently does not support `'precomputed'`.
chunk_size : int, default=None
The number of vectors per chunk. If None (default), the value is looked up
in the scikit-learn configuration under `pairwise_dist_chunk_size`, and 256
is used if it is not set.
metric_kwargs : dict, default=None
Keyword arguments to pass to specified metric function.
strategy : str, {'auto', 'parallel_on_X', 'parallel_on_Y'}, default=None
The chunking strategy defining which dataset the parallelization is made on.
For both strategies, the computation happens with two nested loops,
respectively on chunks of X and chunks of Y.
Strategies differ on which loop (outer or inner) is made to run
in parallel with the Cython `prange` construct:
- 'parallel_on_X' dispatches chunks of X uniformly on threads.
Each thread then iterates on all the chunks of Y. This strategy is
embarrassingly parallel and comes with no data structure
synchronisation.
- 'parallel_on_Y' dispatches chunks of Y uniformly on threads.
Each thread processes all the chunks of X in turn. This strategy is
a sequence of embarrassingly parallel subtasks (the inner loop on Y
chunks) with intermediate data structure synchronisation at each
iteration of the sequential outer loop on X chunks.
- 'auto' relies on a simple heuristic to choose between
'parallel_on_X' and 'parallel_on_Y': when `X.shape[0]` is large enough,
'parallel_on_X' is usually the most efficient strategy.
When `X.shape[0]` is small but `Y.shape[0]` is large, 'parallel_on_Y'
brings more opportunity for parallelism and is therefore more efficient
despite the synchronization step at each iteration of the outer loop
on chunks of `X`.
- None (default) looks up `pairwise_dist_parallel_strategy` in the
scikit-learn configuration, and uses 'auto' if it is not set.
Returns
-------
probabilities : ndarray of shape (n_samples_X, n_classes)
An array containing the class probabilities for each sample.
|
compute
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_pairwise_distances_reduction/_dispatcher.py
|
BSD-3-Clause
|
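The radius variant is the same idea with ragged neighborhoods. A minimal sketch in which a row with no neighbors keeps an all-zero probability row, matching the behaviour described for a non-default `outlier_label`:

```python
# Class probabilities from radius neighborhoods; an empty neighborhood
# (an outlier) keeps an all-zero probability row.
import numpy as np

Y_labels = np.array([0, 1, 1, 2, 0])
n_classes = 3
neighbors_indices = [np.array([0, 1]), np.array([], dtype=int), np.array([3, 4])]

probabilities = np.zeros((len(neighbors_indices), n_classes))
for i, neigh in enumerate(neighbors_indices):
    if neigh.size == 0:
        continue  # outlier row: probabilities stay 0
    counts = np.bincount(Y_labels[neigh], minlength=n_classes)
    probabilities[i] = counts / counts.sum()
```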
def plot(
    self,
    *,
    include_values=True,
    cmap="viridis",
    xticks_rotation="horizontal",
    values_format=None,
    ax=None,
    colorbar=True,
    im_kw=None,
    text_kw=None,
):
    """Plot visualization.

    Parameters
    ----------
    include_values : bool, default=True
        Includes values in confusion matrix.
    cmap : str or matplotlib Colormap, default='viridis'
        Colormap recognized by matplotlib.
    xticks_rotation : {'vertical', 'horizontal'} or float, \
            default='horizontal'
        Rotation of xtick labels.
    values_format : str, default=None
        Format specification for values in confusion matrix. If `None`,
        the format specification is 'd' or '.2g', whichever is shorter.
    ax : matplotlib axes, default=None
        Axes object to plot on. If `None`, a new figure and axes is
        created.
    colorbar : bool, default=True
        Whether or not to add a colorbar to the plot.
    im_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.imshow` call.
    text_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.text` call.

        .. versionadded:: 1.2

    Returns
    -------
    display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`
        Returns a :class:`~sklearn.metrics.ConfusionMatrixDisplay` instance
        that contains all the information to plot the confusion matrix.
    """
    check_matplotlib_support("ConfusionMatrixDisplay.plot")
    import matplotlib.pyplot as plt

    if ax is None:
        fig, ax = plt.subplots()
    else:
        fig = ax.figure

    cm = self.confusion_matrix
    n_classes = cm.shape[0]

    default_im_kw = dict(interpolation="nearest", cmap=cmap)
    im_kw = im_kw or {}
    im_kw = _validate_style_kwargs(default_im_kw, im_kw)
    text_kw = text_kw or {}

    self.im_ = ax.imshow(cm, **im_kw)
    self.text_ = None
    cmap_min, cmap_max = self.im_.cmap(0), self.im_.cmap(1.0)

    if include_values:
        self.text_ = np.empty_like(cm, dtype=object)

        # print text with appropriate color depending on background
        thresh = (cm.max() + cm.min()) / 2.0

        for i, j in product(range(n_classes), range(n_classes)):
            color = cmap_max if cm[i, j] < thresh else cmap_min

            if values_format is None:
                text_cm = format(cm[i, j], ".2g")
                if cm.dtype.kind != "f":
                    text_d = format(cm[i, j], "d")
                    if len(text_d) < len(text_cm):
                        text_cm = text_d
            else:
                text_cm = format(cm[i, j], values_format)

            default_text_kwargs = dict(ha="center", va="center", color=color)
            text_kwargs = _validate_style_kwargs(default_text_kwargs, text_kw)

            self.text_[i, j] = ax.text(j, i, text_cm, **text_kwargs)

    if self.display_labels is None:
        display_labels = np.arange(n_classes)
    else:
        display_labels = self.display_labels

    if colorbar:
        fig.colorbar(self.im_, ax=ax)
    ax.set(
        xticks=np.arange(n_classes),
        yticks=np.arange(n_classes),
        xticklabels=display_labels,
        yticklabels=display_labels,
        ylabel="True label",
        xlabel="Predicted label",
    )

    ax.set_ylim((n_classes - 0.5, -0.5))
    plt.setp(ax.get_xticklabels(), rotation=xticks_rotation)

    self.figure_ = fig
    self.ax_ = ax
    return self
|
Plot visualization.
Parameters
----------
include_values : bool, default=True
Includes values in confusion matrix.
cmap : str or matplotlib Colormap, default='viridis'
Colormap recognized by matplotlib.
xticks_rotation : {'vertical', 'horizontal'} or float, default='horizontal'
Rotation of xtick labels.
values_format : str, default=None
Format specification for values in confusion matrix. If `None`,
the format specification is 'd' or '.2g', whichever is shorter.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
colorbar : bool, default=True
Whether or not to add a colorbar to the plot.
im_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
text_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.text` call.
.. versionadded:: 1.2
Returns
-------
display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`
Returns a :class:`~sklearn.metrics.ConfusionMatrixDisplay` instance
that contains all the information to plot the confusion matrix.
|
plot
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/confusion_matrix.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/confusion_matrix.py
|
BSD-3-Clause
|
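Unlike the `from_*` constructors in the following rows, the `plot` docstring carries no usage example; here is a short one, building the display directly from a precomputed confusion matrix (the matrix values are arbitrary):

```python
# Plotting a precomputed confusion matrix via ConfusionMatrixDisplay.plot.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay

cm = np.array([[5, 2], [1, 7]])
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=["neg", "pos"])
disp.plot(values_format="d", colorbar=True)
plt.show()
```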
def from_estimator(
    cls,
    estimator,
    X,
    y,
    *,
    labels=None,
    sample_weight=None,
    normalize=None,
    display_labels=None,
    include_values=True,
    xticks_rotation="horizontal",
    values_format=None,
    cmap="viridis",
    ax=None,
    colorbar=True,
    im_kw=None,
    text_kw=None,
):
    """Plot Confusion Matrix given an estimator and some data.

    For general information regarding `scikit-learn` visualization tools, see
    the :ref:`Visualization Guide <visualizations>`.
    For guidance on interpreting these plots, refer to the
    :ref:`Model Evaluation Guide <confusion_matrix>`.

    .. versionadded:: 1.0

    Parameters
    ----------
    estimator : estimator instance
        Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
        in which the last estimator is a classifier.
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Input values.
    y : array-like of shape (n_samples,)
        Target values.
    labels : array-like of shape (n_classes,), default=None
        List of labels to index the confusion matrix. This may be used to
        reorder or select a subset of labels. If `None` is given, those
        that appear at least once in `y_true` or `y_pred` are used in
        sorted order.
    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights.
    normalize : {'true', 'pred', 'all'}, default=None
        Whether to normalize the counts displayed in the matrix:
        - if `'true'`, the confusion matrix is normalized over the true
          conditions (i.e. rows);
        - if `'pred'`, the confusion matrix is normalized over the
          predicted conditions (i.e. columns);
        - if `'all'`, the confusion matrix is normalized by the total
          number of samples;
        - if `None` (default), the confusion matrix will not be normalized.
    display_labels : array-like of shape (n_classes,), default=None
        Target names used for plotting. By default, `labels` will be used
        if it is defined, otherwise the unique labels of `y_true` and
        `y_pred` will be used.
    include_values : bool, default=True
        Includes values in confusion matrix.
    xticks_rotation : {'vertical', 'horizontal'} or float, \
            default='horizontal'
        Rotation of xtick labels.
    values_format : str, default=None
        Format specification for values in confusion matrix. If `None`, the
        format specification is 'd' or '.2g', whichever is shorter.
    cmap : str or matplotlib Colormap, default='viridis'
        Colormap recognized by matplotlib.
    ax : matplotlib Axes, default=None
        Axes object to plot on. If `None`, a new figure and axes is
        created.
    colorbar : bool, default=True
        Whether or not to add a colorbar to the plot.
    im_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.imshow` call.
    text_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.text` call.

        .. versionadded:: 1.2

    Returns
    -------
    display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`

    See Also
    --------
    ConfusionMatrixDisplay.from_predictions : Plot the confusion matrix
        given the true and predicted labels.

    Examples
    --------
    >>> import matplotlib.pyplot as plt
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.metrics import ConfusionMatrixDisplay
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.svm import SVC
    >>> X, y = make_classification(random_state=0)
    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, random_state=0)
    >>> clf = SVC(random_state=0)
    >>> clf.fit(X_train, y_train)
    SVC(random_state=0)
    >>> ConfusionMatrixDisplay.from_estimator(
    ...     clf, X_test, y_test)
    <...>
    >>> plt.show()

    For a detailed example of using a confusion matrix to evaluate a
    Support Vector Classifier, please see
    :ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
    """
    method_name = f"{cls.__name__}.from_estimator"
    check_matplotlib_support(method_name)
    if not is_classifier(estimator):
        raise ValueError(f"{method_name} only supports classifiers")
    y_pred = estimator.predict(X)

    return cls.from_predictions(
        y,
        y_pred,
        sample_weight=sample_weight,
        labels=labels,
        normalize=normalize,
        display_labels=display_labels,
        include_values=include_values,
        cmap=cmap,
        ax=ax,
        xticks_rotation=xticks_rotation,
        values_format=values_format,
        colorbar=colorbar,
        im_kw=im_kw,
        text_kw=text_kw,
    )
|
Plot Confusion Matrix given an estimator and some data.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the
:ref:`Model Evaluation Guide <confusion_matrix>`.
.. versionadded:: 1.0
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
labels : array-like of shape (n_classes,), default=None
List of labels to index the confusion matrix. This may be used to
reorder or select a subset of labels. If `None` is given, those
that appear at least once in `y_true` or `y_pred` are used in
sorted order.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
normalize : {'true', 'pred', 'all'}, default=None
Whether to normalize the counts displayed in the matrix:
- if `'true'`, the confusion matrix is normalized over the true
conditions (i.e. rows);
- if `'pred'`, the confusion matrix is normalized over the
predicted conditions (i.e. columns);
- if `'all'`, the confusion matrix is normalized by the total
number of samples;
- if `None` (default), the confusion matrix will not be normalized.
display_labels : array-like of shape (n_classes,), default=None
Target names used for plotting. By default, `labels` will be used
if it is defined, otherwise the unique labels of `y_true` and
`y_pred` will be used.
include_values : bool, default=True
Includes values in confusion matrix.
xticks_rotation : {'vertical', 'horizontal'} or float, default='horizontal'
Rotation of xtick labels.
values_format : str, default=None
Format specification for values in confusion matrix. If `None`, the
format specification is 'd' or '.2g', whichever is shorter.
cmap : str or matplotlib Colormap, default='viridis'
Colormap recognized by matplotlib.
ax : matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
colorbar : bool, default=True
Whether or not to add a colorbar to the plot.
im_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
text_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.text` call.
.. versionadded:: 1.2
Returns
-------
display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`
See Also
--------
ConfusionMatrixDisplay.from_predictions : Plot the confusion matrix
given the true and predicted labels.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import ConfusionMatrixDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> ConfusionMatrixDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
For a detailed example of using a confusion matrix to evaluate a
Support Vector Classifier, please see
:ref:`sphx_glr_auto_examples_model_selection_plot_confusion_matrix.py`
|
from_estimator
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/confusion_matrix.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/confusion_matrix.py
|
BSD-3-Clause
|
def from_predictions(
    cls,
    y_true,
    y_pred,
    *,
    labels=None,
    sample_weight=None,
    normalize=None,
    display_labels=None,
    include_values=True,
    xticks_rotation="horizontal",
    values_format=None,
    cmap="viridis",
    ax=None,
    colorbar=True,
    im_kw=None,
    text_kw=None,
):
    """Plot Confusion Matrix given true and predicted labels.

    For general information regarding `scikit-learn` visualization tools, see
    the :ref:`Visualization Guide <visualizations>`.
    For guidance on interpreting these plots, refer to the
    :ref:`Model Evaluation Guide <confusion_matrix>`.

    .. versionadded:: 1.0

    Parameters
    ----------
    y_true : array-like of shape (n_samples,)
        True labels.
    y_pred : array-like of shape (n_samples,)
        The predicted labels given by the method `predict` of a
        classifier.
    labels : array-like of shape (n_classes,), default=None
        List of labels to index the confusion matrix. This may be used to
        reorder or select a subset of labels. If `None` is given, those
        that appear at least once in `y_true` or `y_pred` are used in
        sorted order.
    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights.
    normalize : {'true', 'pred', 'all'}, default=None
        Whether to normalize the counts displayed in the matrix:
        - if `'true'`, the confusion matrix is normalized over the true
          conditions (i.e. rows);
        - if `'pred'`, the confusion matrix is normalized over the
          predicted conditions (i.e. columns);
        - if `'all'`, the confusion matrix is normalized by the total
          number of samples;
        - if `None` (default), the confusion matrix will not be normalized.
    display_labels : array-like of shape (n_classes,), default=None
        Target names used for plotting. By default, `labels` will be used
        if it is defined, otherwise the unique labels of `y_true` and
        `y_pred` will be used.
    include_values : bool, default=True
        Includes values in confusion matrix.
    xticks_rotation : {'vertical', 'horizontal'} or float, \
            default='horizontal'
        Rotation of xtick labels.
    values_format : str, default=None
        Format specification for values in confusion matrix. If `None`, the
        format specification is 'd' or '.2g', whichever is shorter.
    cmap : str or matplotlib Colormap, default='viridis'
        Colormap recognized by matplotlib.
    ax : matplotlib Axes, default=None
        Axes object to plot on. If `None`, a new figure and axes is
        created.
    colorbar : bool, default=True
        Whether or not to add a colorbar to the plot.
    im_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.imshow` call.
    text_kw : dict, default=None
        Dict with keywords passed to `matplotlib.pyplot.text` call.

        .. versionadded:: 1.2

    Returns
    -------
    display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`

    See Also
    --------
    ConfusionMatrixDisplay.from_estimator : Plot the confusion matrix
        given an estimator, the data, and the label.

    Examples
    --------
    >>> import matplotlib.pyplot as plt
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.metrics import ConfusionMatrixDisplay
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.svm import SVC
    >>> X, y = make_classification(random_state=0)
    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, random_state=0)
    >>> clf = SVC(random_state=0)
    >>> clf.fit(X_train, y_train)
    SVC(random_state=0)
    >>> y_pred = clf.predict(X_test)
    >>> ConfusionMatrixDisplay.from_predictions(
    ...     y_test, y_pred)
    <...>
    >>> plt.show()
    """
    check_matplotlib_support(f"{cls.__name__}.from_predictions")

    if display_labels is None:
        if labels is None:
            display_labels = unique_labels(y_true, y_pred)
        else:
            display_labels = labels

    cm = confusion_matrix(
        y_true,
        y_pred,
        sample_weight=sample_weight,
        labels=labels,
        normalize=normalize,
    )

    disp = cls(confusion_matrix=cm, display_labels=display_labels)

    return disp.plot(
        include_values=include_values,
        cmap=cmap,
        ax=ax,
        xticks_rotation=xticks_rotation,
        values_format=values_format,
        colorbar=colorbar,
        im_kw=im_kw,
        text_kw=text_kw,
    )
|
Plot Confusion Matrix given true and predicted labels.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the
:ref:`Model Evaluation Guide <confusion_matrix>`.
.. versionadded:: 1.0
Parameters
----------
y_true : array-like of shape (n_samples,)
True labels.
y_pred : array-like of shape (n_samples,)
The predicted labels given by the method `predict` of a
classifier.
labels : array-like of shape (n_classes,), default=None
List of labels to index the confusion matrix. This may be used to
reorder or select a subset of labels. If `None` is given, those
that appear at least once in `y_true` or `y_pred` are used in
sorted order.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
normalize : {'true', 'pred', 'all'}, default=None
Whether to normalize the counts displayed in the matrix:
- if `'true'`, the confusion matrix is normalized over the true
conditions (i.e. rows);
- if `'pred'`, the confusion matrix is normalized over the
predicted conditions (i.e. columns);
- if `'all'`, the confusion matrix is normalized by the total
number of samples;
- if `None` (default), the confusion matrix will not be normalized.
display_labels : array-like of shape (n_classes,), default=None
Target names used for plotting. By default, `labels` will be used
if it is defined, otherwise the unique labels of `y_true` and
`y_pred` will be used.
include_values : bool, default=True
Includes values in confusion matrix.
xticks_rotation : {'vertical', 'horizontal'} or float, default='horizontal'
Rotation of xtick labels.
values_format : str, default=None
Format specification for values in confusion matrix. If `None`, the
format specification is 'd' or '.2g', whichever is shorter.
cmap : str or matplotlib Colormap, default='viridis'
Colormap recognized by matplotlib.
ax : matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
colorbar : bool, default=True
Whether or not to add a colorbar to the plot.
im_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.imshow` call.
text_kw : dict, default=None
Dict with keywords passed to `matplotlib.pyplot.text` call.
.. versionadded:: 1.2
Returns
-------
display : :class:`~sklearn.metrics.ConfusionMatrixDisplay`
See Also
--------
ConfusionMatrixDisplay.from_estimator : Plot the confusion matrix
given an estimator, the data, and the label.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import ConfusionMatrixDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0)
>>> clf.fit(X_train, y_train)
SVC(random_state=0)
>>> y_pred = clf.predict(X_test)
>>> ConfusionMatrixDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
|
from_predictions
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/confusion_matrix.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/confusion_matrix.py
|
BSD-3-Clause
|
def from_estimator(
    cls,
    estimator,
    X,
    y,
    *,
    sample_weight=None,
    drop_intermediate=True,
    response_method="auto",
    pos_label=None,
    name=None,
    ax=None,
    **kwargs,
):
    """Plot DET curve given an estimator and data.

    For general information regarding `scikit-learn` visualization tools, see
    the :ref:`Visualization Guide <visualizations>`.
    For guidance on interpreting these plots, refer to the
    :ref:`Model Evaluation Guide <det_curve>`.

    .. versionadded:: 1.0

    Parameters
    ----------
    estimator : estimator instance
        Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
        in which the last estimator is a classifier.
    X : {array-like, sparse matrix} of shape (n_samples, n_features)
        Input values.
    y : array-like of shape (n_samples,)
        Target values.
    sample_weight : array-like of shape (n_samples,), default=None
        Sample weights.
    drop_intermediate : bool, default=True
        Whether to drop thresholds where true positives (tp) do not change
        from the previous or subsequent threshold. All points with the same
        tp value have the same `fnr` and thus same y coordinate.

        .. versionadded:: 1.7

    response_method : {'predict_proba', 'decision_function', 'auto'}, \
            default='auto'
        Specifies whether to use :term:`predict_proba` or
        :term:`decision_function` as the predicted target response. If set
        to 'auto', :term:`predict_proba` is tried first and if it does not
        exist :term:`decision_function` is tried next.
    pos_label : int, float, bool or str, default=None
        The label of the positive class. When `pos_label=None`, if `y_true`
        is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
        error will be raised.
    name : str, default=None
        Name of DET curve for labeling. If `None`, use the name of the
        estimator.
    ax : matplotlib axes, default=None
        Axes object to plot on. If `None`, a new figure and axes is
        created.
    **kwargs : dict
        Additional keyword arguments passed to matplotlib `plot` function.

    Returns
    -------
    display : :class:`~sklearn.metrics.DetCurveDisplay`
        Object that stores computed values.

    See Also
    --------
    det_curve : Compute error rates for different probability thresholds.
    DetCurveDisplay.from_predictions : Plot DET curve given the true and
        predicted labels.

    Examples
    --------
    >>> import matplotlib.pyplot as plt
    >>> from sklearn.datasets import make_classification
    >>> from sklearn.metrics import DetCurveDisplay
    >>> from sklearn.model_selection import train_test_split
    >>> from sklearn.svm import SVC
    >>> X, y = make_classification(n_samples=1000, random_state=0)
    >>> X_train, X_test, y_train, y_test = train_test_split(
    ...     X, y, test_size=0.4, random_state=0)
    >>> clf = SVC(random_state=0).fit(X_train, y_train)
    >>> DetCurveDisplay.from_estimator(
    ...     clf, X_test, y_test)
    <...>
    >>> plt.show()
    """
    y_pred, pos_label, name = cls._validate_and_get_response_values(
        estimator,
        X,
        y,
        response_method=response_method,
        pos_label=pos_label,
        name=name,
    )

    return cls.from_predictions(
        y_true=y,
        y_pred=y_pred,
        sample_weight=sample_weight,
        drop_intermediate=drop_intermediate,
        name=name,
        ax=ax,
        pos_label=pos_label,
        **kwargs,
    )
|
Plot DET curve given an estimator and data.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the
:ref:`Model Evaluation Guide <det_curve>`.
.. versionadded:: 1.0
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop thresholds where true positives (tp) do not change
from the previous or subsequent threshold. All points with the same
tp value have the same `fnr` and thus same y coordinate.
.. versionadded:: 1.7
    response_method : {'predict_proba', 'decision_function', 'auto'}, default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the predicted target response. If set
to 'auto', :term:`predict_proba` is tried first and if it does not
exist :term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The label of the positive class. When `pos_label=None`, if `y_true`
is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
error will be raised.
name : str, default=None
Name of DET curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
**kwargs : dict
        Additional keyword arguments passed to matplotlib's `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds.
DetCurveDisplay.from_predictions : Plot DET curve given the true and
predicted labels.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import DetCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(n_samples=1000, random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.4, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> DetCurveDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
|
from_estimator
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/det_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/det_curve.py
|
BSD-3-Clause
|
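A minimal sketch of overlaying several DET curves on one Axes via the `ax` parameter of `from_estimator`; the classifier choices and variable names below are illustrative, not prescribed by the API above.

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import DetCurveDisplay
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

fig, ax = plt.subplots()
for clf in (SVC(random_state=0), RandomForestClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    # Passing the same Axes draws both curves in a single plot; each curve
    # is labeled with its estimator's name by default.
    DetCurveDisplay.from_estimator(clf, X_test, y_test, ax=ax)
plt.show()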
def from_predictions(
cls,
y_true,
y_pred,
*,
sample_weight=None,
drop_intermediate=True,
pos_label=None,
name=None,
ax=None,
**kwargs,
):
"""Plot the DET curve given the true and predicted labels.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the
:ref:`Model Evaluation Guide <det_curve>`.
.. versionadded:: 1.0
Parameters
----------
y_true : array-like of shape (n_samples,)
True labels.
y_pred : array-like of shape (n_samples,)
Target scores, can either be probability estimates of the positive
class, confidence values, or non-thresholded measure of decisions
(as returned by `decision_function` on some classifiers).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop thresholds where true positives (tp) do not change
from the previous or subsequent threshold. All points with the same
        tp value have the same `fnr` and thus the same y coordinate.
.. versionadded:: 1.7
pos_label : int, float, bool or str, default=None
The label of the positive class. When `pos_label=None`, if `y_true`
is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
error will be raised.
name : str, default=None
Name of DET curve for labeling. If `None`, name will be set to
`"Classifier"`.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
**kwargs : dict
        Additional keyword arguments passed to matplotlib's `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds.
DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
some data.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import DetCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(n_samples=1000, random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.4, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> y_pred = clf.decision_function(X_test)
>>> DetCurveDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
"""
pos_label_validated, name = cls._validate_from_predictions_params(
y_true, y_pred, sample_weight=sample_weight, pos_label=pos_label, name=name
)
fpr, fnr, _ = det_curve(
y_true,
y_pred,
pos_label=pos_label,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
)
viz = cls(
fpr=fpr,
fnr=fnr,
estimator_name=name,
pos_label=pos_label_validated,
)
return viz.plot(ax=ax, name=name, **kwargs)
|
Plot the DET curve given the true and predicted labels.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the
:ref:`Model Evaluation Guide <det_curve>`.
.. versionadded:: 1.0
Parameters
----------
y_true : array-like of shape (n_samples,)
True labels.
y_pred : array-like of shape (n_samples,)
Target scores, can either be probability estimates of the positive
class, confidence values, or non-thresholded measure of decisions
(as returned by `decision_function` on some classifiers).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop thresholds where true positives (tp) do not change
from the previous or subsequent threshold. All points with the same
        tp value have the same `fnr` and thus the same y coordinate.
.. versionadded:: 1.7
pos_label : int, float, bool or str, default=None
The label of the positive class. When `pos_label=None`, if `y_true`
is in {-1, 1} or {0, 1}, `pos_label` is set to 1, otherwise an
error will be raised.
name : str, default=None
Name of DET curve for labeling. If `None`, name will be set to
`"Classifier"`.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
**kwargs : dict
        Additional keyword arguments passed to matplotlib's `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
See Also
--------
det_curve : Compute error rates for different probability thresholds.
DetCurveDisplay.from_estimator : Plot DET curve given an estimator and
some data.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import DetCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(n_samples=1000, random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, test_size=0.4, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> y_pred = clf.decision_function(X_test)
>>> DetCurveDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
|
from_predictions
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/det_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/det_curve.py
|
BSD-3-Clause
|
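A small hedged example of `from_predictions` with string class labels, where `pos_label` must be given explicitly; the toy scores below are made up purely for illustration.

import numpy as np
from sklearn.metrics import DetCurveDisplay

# Illustrative scores only; with non-{0, 1} labels, pos_label is required.
y_true = np.array(["spam", "ham", "spam", "spam", "ham", "ham", "spam", "ham"])
y_score = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.1, 0.7, 0.5])
DetCurveDisplay.from_predictions(y_true, y_score, pos_label="spam", name="toy model")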
def plot(self, ax=None, *, name=None, **kwargs):
"""Plot visualization.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str, default=None
Name of DET curve for labeling. If `None`, use `estimator_name` if
it is not `None`, otherwise no labeling is shown.
**kwargs : dict
        Additional keyword arguments passed to matplotlib's `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
"""
self.ax_, self.figure_, name = self._validate_plot_params(ax=ax, name=name)
line_kwargs = {} if name is None else {"label": name}
line_kwargs.update(**kwargs)
# We have the following bounds:
# sp.stats.norm.ppf(0.0) = -np.inf
# sp.stats.norm.ppf(1.0) = np.inf
# We therefore clip to eps and 1 - eps to not provide infinity to matplotlib.
eps = np.finfo(self.fpr.dtype).eps
self.fpr = self.fpr.clip(eps, 1 - eps)
self.fnr = self.fnr.clip(eps, 1 - eps)
(self.line_,) = self.ax_.plot(
sp.stats.norm.ppf(self.fpr),
sp.stats.norm.ppf(self.fnr),
**line_kwargs,
)
info_pos_label = (
f" (Positive label: {self.pos_label})" if self.pos_label is not None else ""
)
xlabel = "False Positive Rate" + info_pos_label
ylabel = "False Negative Rate" + info_pos_label
self.ax_.set(xlabel=xlabel, ylabel=ylabel)
if "label" in line_kwargs:
self.ax_.legend(loc="lower right")
ticks = [0.001, 0.01, 0.05, 0.20, 0.5, 0.80, 0.95, 0.99, 0.999]
tick_locations = sp.stats.norm.ppf(ticks)
tick_labels = [
"{:.0%}".format(s) if (100 * s).is_integer() else "{:.1%}".format(s)
for s in ticks
]
self.ax_.set_xticks(tick_locations)
self.ax_.set_xticklabels(tick_labels)
self.ax_.set_xlim(-3, 3)
self.ax_.set_yticks(tick_locations)
self.ax_.set_yticklabels(tick_labels)
self.ax_.set_ylim(-3, 3)
return self
|
Plot visualization.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str, default=None
Name of DET curve for labeling. If `None`, use `estimator_name` if
it is not `None`, otherwise no labeling is shown.
**kwargs : dict
        Additional keyword arguments passed to matplotlib's `plot` function.
Returns
-------
display : :class:`~sklearn.metrics.DetCurveDisplay`
Object that stores computed values.
|
plot
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/det_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/det_curve.py
|
BSD-3-Clause
|
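As a sketch of calling `plot` directly, the display can be built from precomputed `det_curve` output; `y_test` and `y_score` are assumed to exist already (e.g. from a classifier's `decision_function`), and the estimator name is illustrative.

import matplotlib.pyplot as plt
from sklearn.metrics import DetCurveDisplay, det_curve

fpr, fnr, _ = det_curve(y_test, y_score)  # y_test, y_score assumed above
disp = DetCurveDisplay(fpr=fpr, fnr=fnr, estimator_name="SVC")
# Extra keyword arguments are forwarded to matplotlib's plot.
disp.plot(linestyle="--")
plt.show()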
def plot(
self,
ax=None,
*,
name=None,
plot_chance_level=False,
chance_level_kw=None,
despine=False,
**kwargs,
):
"""Plot visualization.
Extra keyword arguments will be passed to matplotlib's `plot`.
Parameters
----------
ax : Matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str, default=None
Name of precision recall curve for labeling. If `None`, use
`estimator_name` if not `None`, otherwise no labeling is shown.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
Object that stores computed values.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
"""
self.ax_, self.figure_, name = self._validate_plot_params(ax=ax, name=name)
default_line_kwargs = {"drawstyle": "steps-post"}
if self.average_precision is not None and name is not None:
default_line_kwargs["label"] = (
f"{name} (AP = {self.average_precision:0.2f})"
)
elif self.average_precision is not None:
default_line_kwargs["label"] = f"AP = {self.average_precision:0.2f}"
elif name is not None:
default_line_kwargs["label"] = name
line_kwargs = _validate_style_kwargs(default_line_kwargs, kwargs)
(self.line_,) = self.ax_.plot(self.recall, self.precision, **line_kwargs)
info_pos_label = (
f" (Positive label: {self.pos_label})" if self.pos_label is not None else ""
)
xlabel = "Recall" + info_pos_label
ylabel = "Precision" + info_pos_label
self.ax_.set(
xlabel=xlabel,
xlim=(-0.01, 1.01),
ylabel=ylabel,
ylim=(-0.01, 1.01),
aspect="equal",
)
if plot_chance_level:
if self.prevalence_pos_label is None:
raise ValueError(
"You must provide prevalence_pos_label when constructing the "
"PrecisionRecallDisplay object in order to plot the chance "
"level line. Alternatively, you may use "
"PrecisionRecallDisplay.from_estimator or "
"PrecisionRecallDisplay.from_predictions "
"to automatically set prevalence_pos_label"
)
default_chance_level_line_kw = {
"label": f"Chance level (AP = {self.prevalence_pos_label:0.2f})",
"color": "k",
"linestyle": "--",
}
if chance_level_kw is None:
chance_level_kw = {}
chance_level_line_kw = _validate_style_kwargs(
default_chance_level_line_kw, chance_level_kw
)
(self.chance_level_,) = self.ax_.plot(
(0, 1),
(self.prevalence_pos_label, self.prevalence_pos_label),
**chance_level_line_kw,
)
else:
self.chance_level_ = None
if despine:
_despine(self.ax_)
if "label" in line_kwargs or plot_chance_level:
self.ax_.legend(loc="lower left")
return self
|
Plot visualization.
Extra keyword arguments will be passed to matplotlib's `plot`.
Parameters
----------
ax : Matplotlib Axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str, default=None
Name of precision recall curve for labeling. If `None`, use
`estimator_name` if not `None`, otherwise no labeling is shown.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
Object that stores computed values.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
|
plot
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/precision_recall_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/precision_recall_curve.py
|
BSD-3-Clause
|
def from_estimator(
cls,
estimator,
X,
y,
*,
sample_weight=None,
drop_intermediate=False,
response_method="auto",
pos_label=None,
name=None,
ax=None,
plot_chance_level=False,
chance_level_kw=None,
despine=False,
**kwargs,
):
"""Plot precision-recall curve given an estimator and some data.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <precision_recall_f_measure_metrics>`.
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=False
Whether to drop some suboptimal thresholds which would not appear
on a plotted precision-recall curve. This is useful in order to
create lighter precision-recall curves.
.. versionadded:: 1.3
response_method : {'predict_proba', 'decision_function', 'auto'}, \
default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the
        precision and recall metrics. By default, `estimator.classes_[1]`
is considered as the positive class.
name : str, default=None
Name for labeling curve. If `None`, no name is used.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
See Also
--------
PrecisionRecallDisplay.from_predictions : Plot precision-recall curve
using estimated probabilities or output of decision function.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> PrecisionRecallDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
"""
y_pred, pos_label, name = cls._validate_and_get_response_values(
estimator,
X,
y,
response_method=response_method,
pos_label=pos_label,
name=name,
)
return cls.from_predictions(
y,
y_pred,
sample_weight=sample_weight,
name=name,
pos_label=pos_label,
drop_intermediate=drop_intermediate,
ax=ax,
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kw,
despine=despine,
**kwargs,
)
|
Plot precision-recall curve given an estimator and some data.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <precision_recall_f_measure_metrics>`.
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=False
Whether to drop some suboptimal thresholds which would not appear
on a plotted precision-recall curve. This is useful in order to
create lighter precision-recall curves.
.. versionadded:: 1.3
response_method : {'predict_proba', 'decision_function', 'auto'}, default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the
        precision and recall metrics. By default, `estimator.classes_[1]`
is considered as the positive class.
name : str, default=None
Name for labeling curve. If `None`, no name is used.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
See Also
--------
PrecisionRecallDisplay.from_predictions : Plot precision-recall curve
using estimated probabilities or output of decision function.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> PrecisionRecallDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
|
from_estimator
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/precision_recall_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/precision_recall_curve.py
|
BSD-3-Clause
|
def from_predictions(
cls,
y_true,
y_pred,
*,
sample_weight=None,
drop_intermediate=False,
pos_label=None,
name=None,
ax=None,
plot_chance_level=False,
chance_level_kw=None,
despine=False,
**kwargs,
):
"""Plot precision-recall curve given binary class predictions.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <precision_recall_f_measure_metrics>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
True binary labels.
y_pred : array-like of shape (n_samples,)
Estimated probabilities or output of decision function.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=False
Whether to drop some suboptimal thresholds which would not appear
on a plotted precision-recall curve. This is useful in order to
create lighter precision-recall curves.
.. versionadded:: 1.3
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the
precision and recall metrics.
name : str, default=None
Name for labeling curve. If `None`, name will be set to
`"Classifier"`.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
See Also
--------
PrecisionRecallDisplay.from_estimator : Plot precision-recall curve
using an estimator.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> y_pred = clf.predict_proba(X_test)[:, 1]
>>> PrecisionRecallDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
"""
pos_label, name = cls._validate_from_predictions_params(
y_true, y_pred, sample_weight=sample_weight, pos_label=pos_label, name=name
)
precision, recall, _ = precision_recall_curve(
y_true,
y_pred,
pos_label=pos_label,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
)
average_precision = average_precision_score(
y_true, y_pred, pos_label=pos_label, sample_weight=sample_weight
)
class_count = Counter(y_true)
prevalence_pos_label = class_count[pos_label] / sum(class_count.values())
viz = cls(
precision=precision,
recall=recall,
average_precision=average_precision,
estimator_name=name,
pos_label=pos_label,
prevalence_pos_label=prevalence_pos_label,
)
return viz.plot(
ax=ax,
name=name,
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kw,
despine=despine,
**kwargs,
)
|
Plot precision-recall curve given binary class predictions.
For general information regarding `scikit-learn` visualization tools, see
the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <precision_recall_f_measure_metrics>`.
Parameters
----------
y_true : array-like of shape (n_samples,)
True binary labels.
y_pred : array-like of shape (n_samples,)
Estimated probabilities or output of decision function.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=False
Whether to drop some suboptimal thresholds which would not appear
on a plotted precision-recall curve. This is useful in order to
create lighter precision-recall curves.
.. versionadded:: 1.3
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the
precision and recall metrics.
name : str, default=None
Name for labeling curve. If `None`, name will be set to
`"Classifier"`.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
plot_chance_level : bool, default=False
Whether to plot the chance level. The chance level is the prevalence
of the positive label computed from the data passed during
:meth:`from_estimator` or :meth:`from_predictions` call.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
Returns
-------
display : :class:`~sklearn.metrics.PrecisionRecallDisplay`
See Also
--------
PrecisionRecallDisplay.from_estimator : Plot precision-recall curve
using an estimator.
Notes
-----
The average precision (cf. :func:`~sklearn.metrics.average_precision_score`)
in scikit-learn is computed without any interpolation. To be consistent
with this metric, the precision-recall curve is plotted without any
interpolation as well (step-wise style).
You can change this style by passing the keyword argument
`drawstyle="default"`. However, the curve will not be strictly
consistent with the reported average precision.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import PrecisionRecallDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = LogisticRegression()
>>> clf.fit(X_train, y_train)
LogisticRegression()
>>> y_pred = clf.predict_proba(X_test)[:, 1]
>>> PrecisionRecallDisplay.from_predictions(
... y_test, y_pred)
<...>
>>> plt.show()
|
from_predictions
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/precision_recall_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/precision_recall_curve.py
|
BSD-3-Clause
|
def plot(
self,
ax=None,
*,
kind="residual_vs_predicted",
scatter_kwargs=None,
line_kwargs=None,
):
"""Plot visualization.
Extra keyword arguments will be passed to matplotlib's ``plot``.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, \
default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores computed values.
"""
check_matplotlib_support(f"{self.__class__.__name__}.plot")
expected_kind = ("actual_vs_predicted", "residual_vs_predicted")
if kind not in expected_kind:
raise ValueError(
f"`kind` must be one of {', '.join(expected_kind)}. "
f"Got {kind!r} instead."
)
import matplotlib.pyplot as plt
if scatter_kwargs is None:
scatter_kwargs = {}
if line_kwargs is None:
line_kwargs = {}
default_scatter_kwargs = {"color": "tab:blue", "alpha": 0.8}
default_line_kwargs = {"color": "black", "alpha": 0.7, "linestyle": "--"}
scatter_kwargs = _validate_style_kwargs(default_scatter_kwargs, scatter_kwargs)
line_kwargs = _validate_style_kwargs(default_line_kwargs, line_kwargs)
scatter_kwargs = {**default_scatter_kwargs, **scatter_kwargs}
line_kwargs = {**default_line_kwargs, **line_kwargs}
if ax is None:
_, ax = plt.subplots()
if kind == "actual_vs_predicted":
max_value = max(np.max(self.y_true), np.max(self.y_pred))
min_value = min(np.min(self.y_true), np.min(self.y_pred))
self.line_ = ax.plot(
[min_value, max_value], [min_value, max_value], **line_kwargs
)[0]
x_data, y_data = self.y_pred, self.y_true
xlabel, ylabel = "Predicted values", "Actual values"
self.scatter_ = ax.scatter(x_data, y_data, **scatter_kwargs)
# force to have a squared axis
ax.set_aspect("equal", adjustable="datalim")
ax.set_xticks(np.linspace(min_value, max_value, num=5))
ax.set_yticks(np.linspace(min_value, max_value, num=5))
else: # kind == "residual_vs_predicted"
self.line_ = ax.plot(
[np.min(self.y_pred), np.max(self.y_pred)],
[0, 0],
**line_kwargs,
)[0]
self.scatter_ = ax.scatter(
self.y_pred, self.y_true - self.y_pred, **scatter_kwargs
)
xlabel, ylabel = "Predicted values", "Residuals (actual - predicted)"
ax.set(xlabel=xlabel, ylabel=ylabel)
self.ax_ = ax
self.figure_ = ax.figure
return self
|
Plot visualization.
Extra keyword arguments will be passed to matplotlib's ``plot``.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores computed values.
|
plot
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/regression.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/regression.py
|
BSD-3-Clause
|
def from_estimator(
cls,
estimator,
X,
y,
*,
kind="residual_vs_predicted",
subsample=1_000,
random_state=None,
ax=None,
scatter_kwargs=None,
line_kwargs=None,
):
"""Plot the prediction error given a regressor and some data.
For general information regarding `scikit-learn` visualization tools,
read more in the :ref:`Visualization Guide <visualizations>`.
For details regarding interpreting these plots, refer to the
:ref:`Model Evaluation Guide <visualization_regression_evaluation>`.
.. versionadded:: 1.2
Parameters
----------
estimator : estimator instance
Fitted regressor or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a regressor.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, \
default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
subsample : float, int or None, default=1_000
        Subsampling of the samples shown on the scatter plot. If `float`,
        it should be between 0 and 1 and represents the proportion of the
        original dataset. If `int`, it represents the number of samples
        displayed on the scatter plot. If `None`, no subsampling is
        applied. By default, at most 1,000 samples are displayed.
random_state : int or RandomState, default=None
Controls the randomness when `subsample` is not `None`.
See :term:`Glossary <random_state>` for details.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores the computed values.
See Also
--------
PredictionErrorDisplay : Prediction error visualization for regression.
PredictionErrorDisplay.from_predictions : Prediction error visualization
given the true and predicted targets.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import PredictionErrorDisplay
>>> X, y = load_diabetes(return_X_y=True)
>>> ridge = Ridge().fit(X, y)
>>> disp = PredictionErrorDisplay.from_estimator(ridge, X, y)
>>> plt.show()
"""
check_matplotlib_support(f"{cls.__name__}.from_estimator")
y_pred = estimator.predict(X)
return cls.from_predictions(
y_true=y,
y_pred=y_pred,
kind=kind,
subsample=subsample,
random_state=random_state,
ax=ax,
scatter_kwargs=scatter_kwargs,
line_kwargs=line_kwargs,
)
|
Plot the prediction error given a regressor and some data.
For general information regarding `scikit-learn` visualization tools,
read more in the :ref:`Visualization Guide <visualizations>`.
For details regarding interpreting these plots, refer to the
:ref:`Model Evaluation Guide <visualization_regression_evaluation>`.
.. versionadded:: 1.2
Parameters
----------
estimator : estimator instance
Fitted regressor or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a regressor.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
subsample : float, int or None, default=1_000
        Subsampling of the samples shown on the scatter plot. If `float`,
        it should be between 0 and 1 and represents the proportion of the
        original dataset. If `int`, it represents the number of samples
        displayed on the scatter plot. If `None`, no subsampling is
        applied. By default, at most 1,000 samples are displayed.
random_state : int or RandomState, default=None
Controls the randomness when `subsample` is not `None`.
See :term:`Glossary <random_state>` for details.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores the computed values.
See Also
--------
PredictionErrorDisplay : Prediction error visualization for regression.
PredictionErrorDisplay.from_predictions : Prediction error visualization
given the true and predicted targets.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import PredictionErrorDisplay
>>> X, y = load_diabetes(return_X_y=True)
>>> ridge = Ridge().fit(X, y)
>>> disp = PredictionErrorDisplay.from_estimator(ridge, X, y)
>>> plt.show()
|
from_estimator
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/regression.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/regression.py
|
BSD-3-Clause
|
def from_predictions(
cls,
y_true,
y_pred,
*,
kind="residual_vs_predicted",
subsample=1_000,
random_state=None,
ax=None,
scatter_kwargs=None,
line_kwargs=None,
):
"""Plot the prediction error given the true and predicted targets.
For general information regarding `scikit-learn` visualization tools,
read more in the :ref:`Visualization Guide <visualizations>`.
For details regarding interpreting these plots, refer to the
:ref:`Model Evaluation Guide <visualization_regression_evaluation>`.
.. versionadded:: 1.2
Parameters
----------
y_true : array-like of shape (n_samples,)
True target values.
y_pred : array-like of shape (n_samples,)
Predicted target values.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, \
default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
subsample : float, int or None, default=1_000
        Subsampling of the samples shown on the scatter plot. If `float`,
        it should be between 0 and 1 and represents the proportion of the
        original dataset. If `int`, it represents the number of samples
        displayed on the scatter plot. If `None`, no subsampling is
        applied. By default, at most 1,000 samples are displayed.
random_state : int or RandomState, default=None
Controls the randomness when `subsample` is not `None`.
See :term:`Glossary <random_state>` for details.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores the computed values.
See Also
--------
PredictionErrorDisplay : Prediction error visualization for regression.
PredictionErrorDisplay.from_estimator : Prediction error visualization
given an estimator and some data.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import PredictionErrorDisplay
>>> X, y = load_diabetes(return_X_y=True)
>>> ridge = Ridge().fit(X, y)
>>> y_pred = ridge.predict(X)
>>> disp = PredictionErrorDisplay.from_predictions(y_true=y, y_pred=y_pred)
>>> plt.show()
"""
check_matplotlib_support(f"{cls.__name__}.from_predictions")
random_state = check_random_state(random_state)
n_samples = len(y_true)
if isinstance(subsample, numbers.Integral):
if subsample <= 0:
raise ValueError(
f"When an integer, subsample={subsample} should be positive."
)
elif isinstance(subsample, numbers.Real):
if subsample <= 0 or subsample >= 1:
raise ValueError(
f"When a floating-point, subsample={subsample} should"
" be in the (0, 1) range."
)
subsample = int(n_samples * subsample)
if subsample is not None and subsample < n_samples:
indices = random_state.choice(np.arange(n_samples), size=subsample)
y_true = _safe_indexing(y_true, indices, axis=0)
y_pred = _safe_indexing(y_pred, indices, axis=0)
viz = cls(
y_true=y_true,
y_pred=y_pred,
)
return viz.plot(
ax=ax,
kind=kind,
scatter_kwargs=scatter_kwargs,
line_kwargs=line_kwargs,
)
|
Plot the prediction error given the true and predicted targets.
For general information regarding `scikit-learn` visualization tools,
read more in the :ref:`Visualization Guide <visualizations>`.
For details regarding interpreting these plots, refer to the
:ref:`Model Evaluation Guide <visualization_regression_evaluation>`.
.. versionadded:: 1.2
Parameters
----------
y_true : array-like of shape (n_samples,)
True target values.
y_pred : array-like of shape (n_samples,)
Predicted target values.
kind : {"actual_vs_predicted", "residual_vs_predicted"}, default="residual_vs_predicted"
The type of plot to draw:
- "actual_vs_predicted" draws the observed values (y-axis) vs.
the predicted values (x-axis).
- "residual_vs_predicted" draws the residuals, i.e. difference
between observed and predicted values, (y-axis) vs. the predicted
values (x-axis).
subsample : float, int or None, default=1_000
        Subsampling of the samples shown on the scatter plot. If `float`,
        it should be between 0 and 1 and represents the proportion of the
        original dataset. If `int`, it represents the number of samples
        displayed on the scatter plot. If `None`, no subsampling is
        applied. By default, at most 1,000 samples are displayed.
random_state : int or RandomState, default=None
Controls the randomness when `subsample` is not `None`.
See :term:`Glossary <random_state>` for details.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
scatter_kwargs : dict, default=None
Dictionary with keywords passed to the `matplotlib.pyplot.scatter`
call.
line_kwargs : dict, default=None
Dictionary with keyword passed to the `matplotlib.pyplot.plot`
call to draw the optimal line.
Returns
-------
display : :class:`~sklearn.metrics.PredictionErrorDisplay`
Object that stores the computed values.
See Also
--------
PredictionErrorDisplay : Prediction error visualization for regression.
PredictionErrorDisplay.from_estimator : Prediction error visualization
given an estimator and some data.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import load_diabetes
>>> from sklearn.linear_model import Ridge
>>> from sklearn.metrics import PredictionErrorDisplay
>>> X, y = load_diabetes(return_X_y=True)
>>> ridge = Ridge().fit(X, y)
>>> y_pred = ridge.predict(X)
>>> disp = PredictionErrorDisplay.from_predictions(y_true=y, y_pred=y_pred)
>>> plt.show()
|
from_predictions
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/regression.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/regression.py
|
BSD-3-Clause
|
def plot(
self,
ax=None,
*,
name=None,
curve_kwargs=None,
plot_chance_level=False,
chance_level_kw=None,
despine=False,
**kwargs,
):
"""Plot visualization.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str or list of str, default=None
Name for labeling legend entries. The number of legend entries
is determined by `curve_kwargs`.
To label each curve, provide a list of strings. To avoid labeling
individual curves that have the same appearance, this cannot be used in
conjunction with `curve_kwargs` being a dictionary or None. If a
        string is provided, it will be used either to label the single legend entry
        or, if there are multiple legend entries, to label each individual curve with
the same name. If `None`, set to `name` provided at `RocCurveDisplay`
initialization. If still `None`, no name is shown in the legend.
.. versionadded:: 1.7
curve_kwargs : dict or list of dict, default=None
        Keyword arguments to be passed to matplotlib's `plot` function
to draw individual ROC curves. For single curve plotting, should be
a dictionary. For multi-curve plotting, if a list is provided the
parameters are applied to the ROC curves of each CV fold
sequentially and a legend entry is added for each curve.
If a single dictionary is provided, the same parameters are applied
to all ROC curves and a single legend entry for all curves is added,
labeled with the mean ROC AUC score.
.. versionadded:: 1.7
plot_chance_level : bool, default=False
Whether to plot the chance level.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
.. deprecated:: 1.7
kwargs is deprecated and will be removed in 1.9. Pass matplotlib
arguments to `curve_kwargs` as a dictionary instead.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
Object that stores computed values.
"""
fpr, tpr, roc_auc, name = self._validate_plot_params(ax=ax, name=name)
n_curves = len(fpr)
if not isinstance(curve_kwargs, list) and n_curves > 1:
if roc_auc:
legend_metric = {"mean": np.mean(roc_auc), "std": np.std(roc_auc)}
else:
legend_metric = {"mean": None, "std": None}
else:
roc_auc = roc_auc if roc_auc is not None else [None] * n_curves
legend_metric = {"metric": roc_auc}
curve_kwargs = self._validate_curve_kwargs(
n_curves,
name,
legend_metric,
"AUC",
curve_kwargs=curve_kwargs,
**kwargs,
)
default_chance_level_line_kw = {
"label": "Chance level (AUC = 0.5)",
"color": "k",
"linestyle": "--",
}
if chance_level_kw is None:
chance_level_kw = {}
chance_level_kw = _validate_style_kwargs(
default_chance_level_line_kw, chance_level_kw
)
self.line_ = []
for fpr, tpr, line_kw in zip(fpr, tpr, curve_kwargs):
self.line_.extend(self.ax_.plot(fpr, tpr, **line_kw))
# Return single artist if only one curve is plotted
if len(self.line_) == 1:
self.line_ = self.line_[0]
info_pos_label = (
f" (Positive label: {self.pos_label})" if self.pos_label is not None else ""
)
xlabel = "False Positive Rate" + info_pos_label
ylabel = "True Positive Rate" + info_pos_label
self.ax_.set(
xlabel=xlabel,
xlim=(-0.01, 1.01),
ylabel=ylabel,
ylim=(-0.01, 1.01),
aspect="equal",
)
if plot_chance_level:
(self.chance_level_,) = self.ax_.plot((0, 1), (0, 1), **chance_level_kw)
else:
self.chance_level_ = None
if despine:
_despine(self.ax_)
if curve_kwargs[0].get("label") is not None or (
plot_chance_level and chance_level_kw.get("label") is not None
):
self.ax_.legend(loc="lower right")
return self
|
Plot visualization.
Parameters
----------
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str or list of str, default=None
Name for labeling legend entries. The number of legend entries
is determined by `curve_kwargs`.
To label each curve, provide a list of strings. To avoid labeling
individual curves that have the same appearance, this cannot be used in
conjunction with `curve_kwargs` being a dictionary or None. If a
        string is provided, it will be used either to label the single legend entry
        or, if there are multiple legend entries, to label each individual curve with
the same name. If `None`, set to `name` provided at `RocCurveDisplay`
initialization. If still `None`, no name is shown in the legend.
.. versionadded:: 1.7
curve_kwargs : dict or list of dict, default=None
        Keyword arguments to be passed to matplotlib's `plot` function
to draw individual ROC curves. For single curve plotting, should be
a dictionary. For multi-curve plotting, if a list is provided the
parameters are applied to the ROC curves of each CV fold
sequentially and a legend entry is added for each curve.
If a single dictionary is provided, the same parameters are applied
to all ROC curves and a single legend entry for all curves is added,
labeled with the mean ROC AUC score.
.. versionadded:: 1.7
plot_chance_level : bool, default=False
Whether to plot the chance level.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
.. deprecated:: 1.7
kwargs is deprecated and will be removed in 1.9. Pass matplotlib
arguments to `curve_kwargs` as a dictionary instead.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
Object that stores computed values.
|
plot
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/roc_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/roc_curve.py
|
BSD-3-Clause
|
def from_estimator(
cls,
estimator,
X,
y,
*,
sample_weight=None,
drop_intermediate=True,
response_method="auto",
pos_label=None,
name=None,
ax=None,
curve_kwargs=None,
plot_chance_level=False,
chance_level_kw=None,
despine=False,
**kwargs,
):
"""Create a ROC Curve display from an estimator.
For general information regarding `scikit-learn` visualization tools,
see the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <roc_metrics>`.
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop thresholds where the resulting point is collinear
with its neighbors in ROC space. This has no effect on the ROC AUC
or visual shape of the curve, but reduces the number of plotted
points.
    response_method : {'predict_proba', 'decision_function', 'auto'}, \
default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the ROC AUC.
        By default, `estimator.classes_[1]` is considered
as the positive class.
name : str, default=None
Name of ROC Curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
curve_kwargs : dict, default=None
        Keyword arguments to be passed to matplotlib's `plot` function.
.. versionadded:: 1.7
plot_chance_level : bool, default=False
Whether to plot the chance level.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
.. deprecated:: 1.7
kwargs is deprecated and will be removed in 1.9. Pass matplotlib
arguments to `curve_kwargs` as a dictionary instead.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
The ROC Curve display.
See Also
--------
roc_curve : Compute Receiver operating characteristic (ROC) curve.
RocCurveDisplay.from_predictions : ROC Curve visualization given the
        probabilities or scores of a classifier.
roc_auc_score : Compute the area under the ROC curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import RocCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> RocCurveDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
"""
y_score, pos_label, name = cls._validate_and_get_response_values(
estimator,
X,
y,
response_method=response_method,
pos_label=pos_label,
name=name,
)
return cls.from_predictions(
y_true=y,
y_score=y_score,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
name=name,
ax=ax,
pos_label=pos_label,
curve_kwargs=curve_kwargs,
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kw,
despine=despine,
**kwargs,
)
|
Create a ROC Curve display from an estimator.
For general information regarding `scikit-learn` visualization tools,
see the :ref:`Visualization Guide <visualizations>`.
For guidance on interpreting these plots, refer to the :ref:`Model
Evaluation Guide <roc_metrics>`.
Parameters
----------
estimator : estimator instance
Fitted classifier or a fitted :class:`~sklearn.pipeline.Pipeline`
in which the last estimator is a classifier.
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop thresholds where the resulting point is collinear
with its neighbors in ROC space. This has no effect on the ROC AUC
or visual shape of the curve, but reduces the number of plotted
points.
    response_method : {'predict_proba', 'decision_function', 'auto'}, default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the ROC AUC.
        By default, `estimator.classes_[1]` is considered
as the positive class.
name : str, default=None
Name of ROC Curve for labeling. If `None`, use the name of the
estimator.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is created.
curve_kwargs : dict, default=None
        Keyword arguments to be passed to matplotlib's `plot` function.
.. versionadded:: 1.7
plot_chance_level : bool, default=False
Whether to plot the chance level.
.. versionadded:: 1.3
chance_level_kw : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
.. versionadded:: 1.3
despine : bool, default=False
Whether to remove the top and right spines from the plot.
.. versionadded:: 1.6
**kwargs : dict
Keyword arguments to be passed to matplotlib's `plot`.
.. deprecated:: 1.7
kwargs is deprecated and will be removed in 1.9. Pass matplotlib
arguments to `curve_kwargs` as a dictionary instead.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
The ROC Curve display.
See Also
--------
roc_curve : Compute Receiver operating characteristic (ROC) curve.
RocCurveDisplay.from_predictions : ROC Curve visualization given the
probabilities or scores of a classifier.
roc_auc_score : Compute the area under the ROC curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import RocCurveDisplay
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> X_train, X_test, y_train, y_test = train_test_split(
... X, y, random_state=0)
>>> clf = SVC(random_state=0).fit(X_train, y_train)
>>> RocCurveDisplay.from_estimator(
... clf, X_test, y_test)
<...>
>>> plt.show()
|
from_estimator
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/roc_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/roc_curve.py
|
BSD-3-Clause
|
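A minimal sketch of the `curve_kwargs` migration described in the record above (a sketch assuming scikit-learn >= 1.7, not canonical documentation): matplotlib styling that used to go through the now-deprecated `**kwargs` is passed as a dictionary instead.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import RocCurveDisplay
X, y = make_classification(random_state=0)
clf = LogisticRegression().fit(X, y)
# Preferred since 1.7: matplotlib `plot` arguments go in `curve_kwargs`
RocCurveDisplay.from_estimator(clf, X, y, curve_kwargs={"alpha": 0.8, "linestyle": "--"})
plt.show()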
def from_cv_results(
cls,
cv_results,
X,
y,
*,
sample_weight=None,
drop_intermediate=True,
response_method="auto",
pos_label=None,
ax=None,
name=None,
curve_kwargs=None,
plot_chance_level=False,
chance_level_kwargs=None,
despine=False,
):
"""Create a multi-fold ROC curve display given cross-validation results.
.. versionadded:: 1.7
Parameters
----------
cv_results : dict
Dictionary as returned by :func:`~sklearn.model_selection.cross_validate`
using `return_estimator=True` and `return_indices=True` (i.e., dictionary
should contain the keys "estimator" and "indices").
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop some suboptimal thresholds which would not appear
on a plotted ROC curve. This is useful to create lighter ROC curves.
response_method : {'predict_proba', 'decision_function', 'auto'}, default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the ROC AUC
metrics. By default, `estimator.classes_[1]` is considered
as the positive class.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str or list of str, default=None
Name for labeling legend entries. The number of legend entries
is determined by `curve_kwargs`.
To label each curve individually, provide a list of strings. Because
curves that share the same appearance should not be labeled
individually, a list cannot be combined with `curve_kwargs` being a
dictionary or None. If a single string is provided, it labels the
single legend entry, or, when there are multiple legend entries, each
curve with that same name. If `None`, no name is shown in the legend.
curve_kwargs : dict or list of dict, default=None
Keywords arguments to be passed to matplotlib's `plot` function
to draw individual ROC curves. If a list is provided the
parameters are applied to the ROC curves of each CV fold
sequentially and a legend entry is added for each curve.
If a single dictionary is provided, the same parameters are applied
to all ROC curves and a single legend entry for all curves is added,
labeled with the mean ROC AUC score.
plot_chance_level : bool, default=False
Whether to plot the chance level.
chance_level_kwargs : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
despine : bool, default=False
Whether to remove the top and right spines from the plot.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
The multi-fold ROC curve display.
See Also
--------
roc_curve : Compute Receiver operating characteristic (ROC) curve.
RocCurveDisplay.from_estimator : ROC Curve visualization given an
estimator and some data.
RocCurveDisplay.from_predictions : ROC Curve visualization given the
probabilities or scores of a classifier.
roc_auc_score : Compute the area under the ROC curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import RocCurveDisplay
>>> from sklearn.model_selection import cross_validate
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> clf = SVC(random_state=0)
>>> cv_results = cross_validate(
... clf, X, y, cv=3, return_estimator=True, return_indices=True)
>>> RocCurveDisplay.from_cv_results(cv_results, X, y)
<...>
>>> plt.show()
"""
pos_label_ = cls._validate_from_cv_results_params(
cv_results,
X,
y,
sample_weight=sample_weight,
pos_label=pos_label,
)
fpr_folds, tpr_folds, auc_folds = [], [], []
for estimator, test_indices in zip(
cv_results["estimator"], cv_results["indices"]["test"]
):
y_true = _safe_indexing(y, test_indices)
y_pred, _ = _get_response_values_binary(
estimator,
_safe_indexing(X, test_indices),
response_method=response_method,
pos_label=pos_label_,
)
sample_weight_fold = (
None
if sample_weight is None
else _safe_indexing(sample_weight, test_indices)
)
fpr, tpr, _ = roc_curve(
y_true,
y_pred,
pos_label=pos_label_,
sample_weight=sample_weight_fold,
drop_intermediate=drop_intermediate,
)
roc_auc = auc(fpr, tpr)
fpr_folds.append(fpr)
tpr_folds.append(tpr)
auc_folds.append(roc_auc)
viz = cls(
fpr=fpr_folds,
tpr=tpr_folds,
name=name,
roc_auc=auc_folds,
pos_label=pos_label_,
)
return viz.plot(
ax=ax,
curve_kwargs=curve_kwargs,
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kwargs,
despine=despine,
)
|
Create a multi-fold ROC curve display given cross-validation results.
.. versionadded:: 1.7
Parameters
----------
cv_results : dict
Dictionary as returned by :func:`~sklearn.model_selection.cross_validate`
using `return_estimator=True` and `return_indices=True` (i.e., dictionary
should contain the keys "estimator" and "indices").
X : {array-like, sparse matrix} of shape (n_samples, n_features)
Input values.
y : array-like of shape (n_samples,)
Target values.
sample_weight : array-like of shape (n_samples,), default=None
Sample weights.
drop_intermediate : bool, default=True
Whether to drop some suboptimal thresholds which would not appear
on a plotted ROC curve. This is useful to create lighter ROC curves.
response_method : {'predict_proba', 'decision_function', 'auto'}, default='auto'
Specifies whether to use :term:`predict_proba` or
:term:`decision_function` as the target response. If set to 'auto',
:term:`predict_proba` is tried first and if it does not exist
:term:`decision_function` is tried next.
pos_label : int, float, bool or str, default=None
The class considered as the positive class when computing the ROC AUC
metrics. By default, `estimator.classes_[1]` is considered
as the positive class.
ax : matplotlib axes, default=None
Axes object to plot on. If `None`, a new figure and axes is
created.
name : str or list of str, default=None
Name for labeling legend entries. The number of legend entries
is determined by `curve_kwargs`.
To label each curve individually, provide a list of strings. Because
curves that share the same appearance should not be labeled
individually, a list cannot be combined with `curve_kwargs` being a
dictionary or None. If a single string is provided, it labels the
single legend entry, or, when there are multiple legend entries, each
curve with that same name. If `None`, no name is shown in the legend.
curve_kwargs : dict or list of dict, default=None
Keywords arguments to be passed to matplotlib's `plot` function
to draw individual ROC curves. If a list is provided the
parameters are applied to the ROC curves of each CV fold
sequentially and a legend entry is added for each curve.
If a single dictionary is provided, the same parameters are applied
to all ROC curves and a single legend entry for all curves is added,
labeled with the mean ROC AUC score.
plot_chance_level : bool, default=False
Whether to plot the chance level.
chance_level_kwargs : dict, default=None
Keyword arguments to be passed to matplotlib's `plot` for rendering
the chance level line.
despine : bool, default=False
Whether to remove the top and right spines from the plot.
Returns
-------
display : :class:`~sklearn.metrics.RocCurveDisplay`
The multi-fold ROC curve display.
See Also
--------
roc_curve : Compute Receiver operating characteristic (ROC) curve.
RocCurveDisplay.from_estimator : ROC Curve visualization given an
estimator and some data.
RocCurveDisplay.from_predictions : ROC Curve visualization given the
probabilities or scores of a classifier.
roc_auc_score : Compute the area under the ROC curve.
Examples
--------
>>> import matplotlib.pyplot as plt
>>> from sklearn.datasets import make_classification
>>> from sklearn.metrics import RocCurveDisplay
>>> from sklearn.model_selection import cross_validate
>>> from sklearn.svm import SVC
>>> X, y = make_classification(random_state=0)
>>> clf = SVC(random_state=0)
>>> cv_results = cross_validate(
... clf, X, y, cv=3, return_estimator=True, return_indices=True)
>>> RocCurveDisplay.from_cv_results(cv_results, X, y)
<...>
>>> plt.show()
|
from_cv_results
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/roc_curve.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/roc_curve.py
|
BSD-3-Clause
|
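A minimal sketch of per-fold styling with `from_cv_results` (assuming scikit-learn >= 1.7): a list passed to `name` must be paired with a list of `curve_kwargs`, giving each fold its own legend entry.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import RocCurveDisplay
from sklearn.model_selection import cross_validate
X, y = make_classification(random_state=0)
cv_results = cross_validate(
    LogisticRegression(), X, y, cv=3, return_estimator=True, return_indices=True
)
# One legend entry per fold: a `name` list requires a `curve_kwargs` list
RocCurveDisplay.from_cv_results(
    cv_results, X, y,
    name=["fold 0", "fold 1", "fold 2"],
    curve_kwargs=[{"color": "tab:blue"}, {"color": "tab:orange"}, {"color": "tab:green"}],
)
plt.show()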
def test_display_curve_error_classifier(pyplot, data, data_binary, Display):
"""Check that a proper error is raised when only binary classification is
supported."""
X, y = data
X_binary, y_binary = data_binary
clf = DecisionTreeClassifier().fit(X, y)
# Case 1: multiclass classifier with multiclass target
msg = "Expected 'estimator' to be a binary classifier. Got 3 classes instead."
with pytest.raises(ValueError, match=msg):
Display.from_estimator(clf, X, y)
# Case 2: multiclass classifier with binary target
with pytest.raises(ValueError, match=msg):
Display.from_estimator(clf, X_binary, y_binary)
# Case 3: binary classifier with multiclass target
clf = DecisionTreeClassifier().fit(X_binary, y_binary)
msg = "The target y is not binary. Got multiclass type of target."
with pytest.raises(ValueError, match=msg):
Display.from_estimator(clf, X, y)
|
Check that a proper error is raised for non-binary problems, since only
binary classification is supported.
|
test_display_curve_error_classifier
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
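The common-display tests in this file rely on shared pytest fixtures (`pyplot`, `data`, `data_binary`, `Display`). A hypothetical sketch of what the data fixtures could look like; the actual definitions live in the test module and its parametrization and may differ:
import pytest
from sklearn.datasets import make_classification
@pytest.fixture
def data():
    # assumption: a small multiclass problem
    return make_classification(n_classes=3, n_informative=5, random_state=0)
@pytest.fixture
def data_binary():
    # assumption: a small binary problem
    return make_classification(n_classes=2, random_state=0)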
def test_display_curve_error_regression(pyplot, data_binary, Display):
"""Check that we raise an error with regressor."""
# Case 1: regressor
X, y = data_binary
regressor = DecisionTreeRegressor().fit(X, y)
msg = "Expected 'estimator' to be a binary classifier. Got DecisionTreeRegressor"
with pytest.raises(ValueError, match=msg):
Display.from_estimator(regressor, X, y)
# Case 2: regression target
classifier = DecisionTreeClassifier().fit(X, y)
# Force `y_true` to be seen as a regression problem
y = y + 0.5
msg = "The target y is not binary. Got continuous type of target."
with pytest.raises(ValueError, match=msg):
Display.from_estimator(classifier, X, y)
with pytest.raises(ValueError, match=msg):
Display.from_predictions(y, regressor.fit(X, y).predict(X))
|
Check that we raise an error with a regressor.
|
test_display_curve_error_regression
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_error_no_response(
pyplot,
data_binary,
response_method,
msg,
Display,
):
"""Check that a proper error is raised when the response method requested
is not defined for the given trained classifier."""
X, y = data_binary
class MyClassifier(ClassifierMixin, BaseEstimator):
def fit(self, X, y):
self.classes_ = [0, 1]
return self
clf = MyClassifier().fit(X, y)
with pytest.raises(AttributeError, match=msg):
Display.from_estimator(clf, X, y, response_method=response_method)
|
Check that a proper error is raised when the response method requested
is not defined for the given trained classifier.
|
test_display_curve_error_no_response
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_estimator_name_multiple_calls(
pyplot,
data_binary,
Display,
constructor_name,
):
"""Check that passing `name` when calling `plot` will overwrite the original name
in the legend."""
X, y = data_binary
clf_name = "my hand-crafted name"
clf = LogisticRegression().fit(X, y)
y_pred = clf.predict_proba(X)[:, 1]
# safeguard for the binary if/else construction
assert constructor_name in ("from_estimator", "from_predictions")
if constructor_name == "from_estimator":
disp = Display.from_estimator(clf, X, y, name=clf_name)
else:
disp = Display.from_predictions(y, y_pred, name=clf_name)
assert disp.estimator_name == clf_name
pyplot.close("all")
disp.plot()
assert clf_name in disp.line_.get_label()
pyplot.close("all")
clf_name = "another_name"
disp.plot(name=clf_name)
assert clf_name in disp.line_.get_label()
|
Check that passing `name` when calling `plot` will overwrite the original name
in the legend.
|
test_display_curve_estimator_name_multiple_calls
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_not_fitted_errors_old_name(pyplot, data_binary, clf, Display):
"""Check that a proper error is raised when the classifier is not
fitted."""
X, y = data_binary
# clone since we parametrize the test and the classifier will be fitted
# when testing the second and subsequent plotting function
model = clone(clf)
with pytest.raises(NotFittedError):
Display.from_estimator(model, X, y)
model.fit(X, y)
disp = Display.from_estimator(model, X, y)
assert model.__class__.__name__ in disp.line_.get_label()
assert disp.estimator_name == model.__class__.__name__
|
Check that a proper error is raised when the classifier is not
fitted.
|
test_display_curve_not_fitted_errors_old_name
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_not_fitted_errors(pyplot, data_binary, clf, Display):
"""Check that a proper error is raised when the classifier is not fitted."""
X, y = data_binary
# clone since we parametrize the test and the classifier will be fitted
# when testing the second and subsequent plotting function
model = clone(clf)
with pytest.raises(NotFittedError):
Display.from_estimator(model, X, y)
model.fit(X, y)
disp = Display.from_estimator(model, X, y)
assert model.__class__.__name__ in disp.line_.get_label()
assert disp.name == model.__class__.__name__
|
Check that a proper error is raised when the classifier is not fitted.
|
test_display_curve_not_fitted_errors
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_n_samples_consistency(pyplot, data_binary, Display):
"""Check the error raised when `y_pred` or `sample_weight` have inconsistent
length."""
X, y = data_binary
classifier = DecisionTreeClassifier().fit(X, y)
msg = "Found input variables with inconsistent numbers of samples"
with pytest.raises(ValueError, match=msg):
Display.from_estimator(classifier, X[:-2], y)
with pytest.raises(ValueError, match=msg):
Display.from_estimator(classifier, X, y[:-2])
with pytest.raises(ValueError, match=msg):
Display.from_estimator(classifier, X, y, sample_weight=np.ones(X.shape[0] - 2))
|
Check the error raised when `y_pred` or `sample_weight` has an inconsistent
length.
|
test_display_curve_n_samples_consistency
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_display_curve_error_pos_label(pyplot, data_binary, Display):
"""Check consistence of error message when `pos_label` should be specified."""
X, y = data_binary
y = y + 10
classifier = DecisionTreeClassifier().fit(X, y)
y_pred = classifier.predict_proba(X)[:, -1]
msg = r"y_true takes value in {10, 11} and pos_label is not specified"
with pytest.raises(ValueError, match=msg):
Display.from_predictions(y, y_pred)
|
Check consistency of the error message when `pos_label` should be specified.
|
test_display_curve_error_pos_label
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_classifier_display_curve_named_constructor_return_type(
pyplot, data_binary, Display, constructor
):
"""Check that named constructors return the correct type when subclassed.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/pull/27675
"""
X, y = data_binary
# This can be anything - we just need to check the named constructor return
# type so the only requirement here is instantiating the class without error
y_pred = y
classifier = LogisticRegression().fit(X, y)
class SubclassOfDisplay(Display):
pass
if constructor == "from_predictions":
curve = SubclassOfDisplay.from_predictions(y, y_pred)
else: # constructor == "from_estimator"
curve = SubclassOfDisplay.from_estimator(classifier, X, y)
assert isinstance(curve, SubclassOfDisplay)
|
Check that named constructors return the correct type when subclassed.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/pull/27675
|
test_classifier_display_curve_named_constructor_return_type
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_common_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_common_curve_display.py
|
BSD-3-Clause
|
def test_confusion_matrix_display_validation(pyplot):
"""Check that we raise the proper error when validating parameters."""
X, y = make_classification(
n_samples=100, n_informative=5, n_classes=5, random_state=0
)
with pytest.raises(NotFittedError):
ConfusionMatrixDisplay.from_estimator(SVC(), X, y)
regressor = SVR().fit(X, y)
y_pred_regressor = regressor.predict(X)
y_pred_classifier = SVC().fit(X, y).predict(X)
err_msg = "ConfusionMatrixDisplay.from_estimator only supports classifiers"
with pytest.raises(ValueError, match=err_msg):
ConfusionMatrixDisplay.from_estimator(regressor, X, y)
err_msg = "Mix type of y not allowed, got types"
with pytest.raises(ValueError, match=err_msg):
# Force `y_true` to be seen as a regression problem
ConfusionMatrixDisplay.from_predictions(y + 0.5, y_pred_classifier)
with pytest.raises(ValueError, match=err_msg):
ConfusionMatrixDisplay.from_predictions(y, y_pred_regressor)
err_msg = "Found input variables with inconsistent numbers of samples"
with pytest.raises(ValueError, match=err_msg):
ConfusionMatrixDisplay.from_predictions(y, y_pred_classifier[::2])
|
Check that we raise the proper error when validating parameters.
|
test_confusion_matrix_display_validation
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_confusion_matrix_display_custom_labels(
pyplot, constructor_name, with_labels, with_display_labels
):
"""Check the resulting plot when labels are given."""
n_classes = 5
X, y = make_classification(
n_samples=100, n_informative=5, n_classes=n_classes, random_state=0
)
classifier = SVC().fit(X, y)
y_pred = classifier.predict(X)
# safeguard for the binary if/else construction
assert constructor_name in ("from_estimator", "from_predictions")
ax = pyplot.gca()
labels = [2, 1, 0, 3, 4] if with_labels else None
display_labels = ["b", "d", "a", "e", "f"] if with_display_labels else None
cm = confusion_matrix(y, y_pred, labels=labels)
common_kwargs = {
"ax": ax,
"display_labels": display_labels,
"labels": labels,
}
if constructor_name == "from_estimator":
disp = ConfusionMatrixDisplay.from_estimator(classifier, X, y, **common_kwargs)
else:
disp = ConfusionMatrixDisplay.from_predictions(y, y_pred, **common_kwargs)
assert_allclose(disp.confusion_matrix, cm)
if with_display_labels:
expected_display_labels = display_labels
elif with_labels:
expected_display_labels = labels
else:
expected_display_labels = list(range(n_classes))
expected_display_labels_str = [str(name) for name in expected_display_labels]
x_ticks = [tick.get_text() for tick in disp.ax_.get_xticklabels()]
y_ticks = [tick.get_text() for tick in disp.ax_.get_yticklabels()]
assert_array_equal(disp.display_labels, expected_display_labels)
assert_array_equal(x_ticks, expected_display_labels_str)
assert_array_equal(y_ticks, expected_display_labels_str)
|
Check the resulting plot when labels are given.
|
test_confusion_matrix_display_custom_labels
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_confusion_matrix_display(pyplot, constructor_name):
"""Check the behaviour of the default constructor without using the class
methods."""
n_classes = 5
X, y = make_classification(
n_samples=100, n_informative=5, n_classes=n_classes, random_state=0
)
classifier = SVC().fit(X, y)
y_pred = classifier.predict(X)
# safeguard for the binary if/else construction
assert constructor_name in ("from_estimator", "from_predictions")
cm = confusion_matrix(y, y_pred)
common_kwargs = {
"normalize": None,
"include_values": True,
"cmap": "viridis",
"xticks_rotation": 45.0,
}
if constructor_name == "from_estimator":
disp = ConfusionMatrixDisplay.from_estimator(classifier, X, y, **common_kwargs)
else:
disp = ConfusionMatrixDisplay.from_predictions(y, y_pred, **common_kwargs)
assert_allclose(disp.confusion_matrix, cm)
assert disp.text_.shape == (n_classes, n_classes)
rotations = [tick.get_rotation() for tick in disp.ax_.get_xticklabels()]
assert_allclose(rotations, 45.0)
image_data = disp.im_.get_array().data
assert_allclose(image_data, cm)
disp.plot(cmap="plasma")
assert disp.im_.get_cmap().name == "plasma"
disp.plot(include_values=False)
assert disp.text_ is None
disp.plot(xticks_rotation=90.0)
rotations = [tick.get_rotation() for tick in disp.ax_.get_xticklabels()]
assert_allclose(rotations, 90.0)
disp.plot(values_format="e")
expected_text = np.array([format(v, "e") for v in cm.ravel(order="C")])
text_text = np.array([t.get_text() for t in disp.text_.ravel(order="C")])
assert_array_equal(expected_text, text_text)
|
Check the behaviour of the default constructor without using the class
methods.
|
test_confusion_matrix_display
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
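A minimal sketch of the public API the record above exercises, using only documented parameters of `ConfusionMatrixDisplay.from_predictions`:
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]
disp = ConfusionMatrixDisplay.from_predictions(
    y_true, y_pred, cmap="viridis", xticks_rotation=45.0, values_format="d"
)
disp.plot(include_values=False)  # re-plot without the per-cell annotations
plt.show()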
def test_confusion_matrix_contrast(pyplot):
"""Check that the text color is appropriate depending on background."""
cm = np.eye(2) / 2
disp = ConfusionMatrixDisplay(cm, display_labels=[0, 1])
disp.plot(cmap=pyplot.cm.gray)
# diagonal text is black
assert_allclose(disp.text_[0, 0].get_color(), [0.0, 0.0, 0.0, 1.0])
assert_allclose(disp.text_[1, 1].get_color(), [0.0, 0.0, 0.0, 1.0])
# off-diagonal text is white
assert_allclose(disp.text_[0, 1].get_color(), [1.0, 1.0, 1.0, 1.0])
assert_allclose(disp.text_[1, 0].get_color(), [1.0, 1.0, 1.0, 1.0])
disp.plot(cmap=pyplot.cm.gray_r)
# diagonal text is white
assert_allclose(disp.text_[0, 1].get_color(), [0.0, 0.0, 0.0, 1.0])
assert_allclose(disp.text_[1, 0].get_color(), [0.0, 0.0, 0.0, 1.0])
# off-diagonal text is black
assert_allclose(disp.text_[0, 0].get_color(), [1.0, 1.0, 1.0, 1.0])
assert_allclose(disp.text_[1, 1].get_color(), [1.0, 1.0, 1.0, 1.0])
# Regression test for #15920
cm = np.array([[19, 34], [32, 58]])
disp = ConfusionMatrixDisplay(cm, display_labels=[0, 1])
disp.plot(cmap=pyplot.cm.Blues)
min_color = pyplot.cm.Blues(0)
max_color = pyplot.cm.Blues(255)
assert_allclose(disp.text_[0, 0].get_color(), max_color)
assert_allclose(disp.text_[0, 1].get_color(), max_color)
assert_allclose(disp.text_[1, 0].get_color(), max_color)
assert_allclose(disp.text_[1, 1].get_color(), min_color)
|
Check that the text color is appropriate depending on background.
|
test_confusion_matrix_contrast
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_confusion_matrix_pipeline(pyplot, clf):
"""Check the behaviour of the plotting with more complex pipeline."""
n_classes = 5
X, y = make_classification(
n_samples=100, n_informative=5, n_classes=n_classes, random_state=0
)
with pytest.raises(NotFittedError):
ConfusionMatrixDisplay.from_estimator(clf, X, y)
clf.fit(X, y)
y_pred = clf.predict(X)
disp = ConfusionMatrixDisplay.from_estimator(clf, X, y)
cm = confusion_matrix(y, y_pred)
assert_allclose(disp.confusion_matrix, cm)
assert disp.text_.shape == (n_classes, n_classes)
|
Check the behaviour of the plotting with a more complex pipeline.
|
test_confusion_matrix_pipeline
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_confusion_matrix_with_unknown_labels(pyplot, constructor_name):
"""Check that when labels=None, the unique values in `y_pred` and `y_true`
will be used.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/pull/18405
"""
n_classes = 5
X, y = make_classification(
n_samples=100, n_informative=5, n_classes=n_classes, random_state=0
)
classifier = SVC().fit(X, y)
y_pred = classifier.predict(X)
# create labels in `y_true` that were not seen during fitting and are not
# present in 'classifier.classes_'
y = y + 1
# safeguard for the binary if/else construction
assert constructor_name in ("from_estimator", "from_predictions")
common_kwargs = {"labels": None}
if constructor_name == "from_estimator":
disp = ConfusionMatrixDisplay.from_estimator(classifier, X, y, **common_kwargs)
else:
disp = ConfusionMatrixDisplay.from_predictions(y, y_pred, **common_kwargs)
display_labels = [tick.get_text() for tick in disp.ax_.get_xticklabels()]
expected_labels = [str(i) for i in range(n_classes + 1)]
assert_array_equal(expected_labels, display_labels)
|
Check that when labels=None, the unique values in `y_pred` and `y_true`
will be used.
Non-regression test for:
https://github.com/scikit-learn/scikit-learn/pull/18405
|
test_confusion_matrix_with_unknown_labels
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_colormap_max(pyplot):
"""Check that the max color is used for the color of the text."""
gray = pyplot.get_cmap("gray", 1024)
confusion_matrix = np.array([[1.0, 0.0], [0.0, 1.0]])
disp = ConfusionMatrixDisplay(confusion_matrix)
disp.plot(cmap=gray)
color = disp.text_[1, 0].get_color()
assert_allclose(color, [1.0, 1.0, 1.0, 1.0])
|
Check that the max color is used for the color of the text.
|
test_colormap_max
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_im_kw_adjust_vmin_vmax(pyplot):
"""Check that im_kw passes kwargs to imshow"""
confusion_matrix = np.array([[0.48, 0.04], [0.08, 0.4]])
disp = ConfusionMatrixDisplay(confusion_matrix)
disp.plot(im_kw=dict(vmin=0.0, vmax=0.8))
clim = disp.im_.get_clim()
assert clim[0] == pytest.approx(0.0)
assert clim[1] == pytest.approx(0.8)
|
Check that `im_kw` passes kwargs to matplotlib's `imshow`.
|
test_im_kw_adjust_vmin_vmax
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
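A minimal sketch of the `im_kw` passthrough checked above: the dictionary is forwarded to matplotlib's `imshow`, so `vmin`/`vmax` control the color limits.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import ConfusionMatrixDisplay
cm = np.array([[0.48, 0.04], [0.08, 0.40]])
disp = ConfusionMatrixDisplay(cm)
disp.plot(im_kw={"vmin": 0.0, "vmax": 0.8})  # forwarded verbatim to imshow
print(disp.im_.get_clim())  # (0.0, 0.8)
plt.show()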
def test_confusion_matrix_text_kw(pyplot):
"""Check that text_kw is passed to the text call."""
font_size = 15.0
X, y = make_classification(random_state=0)
classifier = SVC().fit(X, y)
# from_estimator passes the font size
disp = ConfusionMatrixDisplay.from_estimator(
classifier, X, y, text_kw={"fontsize": font_size}
)
for text in disp.text_.reshape(-1):
assert text.get_fontsize() == font_size
# `plot` updates the text to the new font size
new_font_size = 20.0
disp.plot(text_kw={"fontsize": new_font_size})
for text in disp.text_.reshape(-1):
assert text.get_fontsize() == new_font_size
# from_predictions passes the font size
y_pred = classifier.predict(X)
disp = ConfusionMatrixDisplay.from_predictions(
y, y_pred, text_kw={"fontsize": font_size}
)
for text in disp.text_.reshape(-1):
assert text.get_fontsize() == font_size
|
Check that text_kw is passed to the text call.
|
test_confusion_matrix_text_kw
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_confusion_matrix_display.py
|
BSD-3-Clause
|
def test_precision_recall_chance_level_line(
pyplot,
chance_level_kw,
constructor_name,
):
"""Check the chance level line plotting behavior."""
X, y = make_classification(n_classes=2, n_samples=50, random_state=0)
pos_prevalence = Counter(y)[1] / len(y)
lr = LogisticRegression()
y_pred = lr.fit(X, y).predict_proba(X)[:, 1]
if constructor_name == "from_estimator":
display = PrecisionRecallDisplay.from_estimator(
lr,
X,
y,
plot_chance_level=True,
chance_level_kw=chance_level_kw,
)
else:
display = PrecisionRecallDisplay.from_predictions(
y,
y_pred,
plot_chance_level=True,
chance_level_kw=chance_level_kw,
)
import matplotlib as mpl
assert isinstance(display.chance_level_, mpl.lines.Line2D)
assert tuple(display.chance_level_.get_xdata()) == (0, 1)
assert tuple(display.chance_level_.get_ydata()) == (pos_prevalence, pos_prevalence)
# Checking for chance level line styles
if chance_level_kw is None:
assert display.chance_level_.get_color() == "k"
else:
assert display.chance_level_.get_color() == "r"
|
Check the chance level line plotting behavior.
|
test_precision_recall_chance_level_line
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
BSD-3-Clause
|
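A minimal sketch of the behaviour tested above: for precision-recall curves the chance level is a horizontal line at the positive-class prevalence (assuming scikit-learn >= 1.3 for `plot_chance_level`).
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import PrecisionRecallDisplay
X, y = make_classification(n_classes=2, n_samples=50, random_state=0)
clf = LogisticRegression().fit(X, y)
# The chance line height equals the prevalence of the positive class
PrecisionRecallDisplay.from_estimator(
    clf, X, y, plot_chance_level=True, chance_level_kw={"color": "r"}
)
plt.show()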
def test_precision_recall_display_name(pyplot, constructor_name, default_label):
"""Check the behaviour of the name parameters"""
X, y = make_classification(n_classes=2, n_samples=100, random_state=0)
pos_label = 1
classifier = LogisticRegression().fit(X, y)
y_pred = classifier.predict_proba(X)[:, pos_label]
# safeguard for the binary if/else construction
assert constructor_name in ("from_estimator", "from_predictions")
if constructor_name == "from_estimator":
display = PrecisionRecallDisplay.from_estimator(classifier, X, y)
else:
display = PrecisionRecallDisplay.from_predictions(
y, y_pred, pos_label=pos_label
)
average_precision = average_precision_score(y, y_pred, pos_label=pos_label)
# check that the default name is used
assert display.line_.get_label() == default_label.format(average_precision)
# check that the name can be set
display.plot(name="MySpecialEstimator")
assert (
display.line_.get_label()
== f"MySpecialEstimator (AP = {average_precision:.2f})"
)
|
Check the behaviour of the `name` parameter.
|
test_precision_recall_display_name
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
BSD-3-Clause
|
def test_default_labels(pyplot, average_precision, estimator_name, expected_label):
"""Check the default labels used in the display."""
precision = np.array([1, 0.5, 0])
recall = np.array([0, 0.5, 1])
display = PrecisionRecallDisplay(
precision,
recall,
average_precision=average_precision,
estimator_name=estimator_name,
)
display.plot()
assert display.line_.get_label() == expected_label
|
Check the default labels used in the display.
|
test_default_labels
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_precision_recall_display.py
|
BSD-3-Clause
|
def test_prediction_error_display_raise_error(
pyplot, class_method, regressor, params, err_type, err_msg
):
"""Check that we raise the proper error when making the parameters
# validation."""
with pytest.raises(err_type, match=err_msg):
if class_method == "from_estimator":
PredictionErrorDisplay.from_estimator(regressor, X, y, **params)
else:
y_pred = regressor.predict(X)
PredictionErrorDisplay.from_predictions(y_true=y, y_pred=y_pred, **params)
|
Check that we raise the proper error when validating the parameters.
|
test_prediction_error_display_raise_error
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_predict_error_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_predict_error_display.py
|
BSD-3-Clause
|
def test_from_estimator_not_fitted(pyplot):
"""Check that we raise a `NotFittedError` when the passed regressor is not
fitted."""
regressor = Ridge()
with pytest.raises(NotFittedError, match="is not fitted yet."):
PredictionErrorDisplay.from_estimator(regressor, X, y)
|
Check that we raise a `NotFittedError` when the passed regressor is not
fitted.
|
test_from_estimator_not_fitted
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_predict_error_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_predict_error_display.py
|
BSD-3-Clause
|
def test_prediction_error_display(pyplot, regressor_fitted, class_method, kind):
"""Check the default behaviour of the display."""
if class_method == "from_estimator":
display = PredictionErrorDisplay.from_estimator(
regressor_fitted, X, y, kind=kind
)
else:
y_pred = regressor_fitted.predict(X)
display = PredictionErrorDisplay.from_predictions(
y_true=y, y_pred=y_pred, kind=kind
)
if kind == "actual_vs_predicted":
assert_allclose(display.line_.get_xdata(), display.line_.get_ydata())
assert display.ax_.get_xlabel() == "Predicted values"
assert display.ax_.get_ylabel() == "Actual values"
assert display.line_ is not None
else:
assert display.ax_.get_xlabel() == "Predicted values"
assert display.ax_.get_ylabel() == "Residuals (actual - predicted)"
assert display.line_ is not None
assert display.ax_.get_legend() is None
|
Check the default behaviour of the display.
|
test_prediction_error_display
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_predict_error_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_predict_error_display.py
|
BSD-3-Clause
|
def test_plot_prediction_error_ax(pyplot, regressor_fitted, class_method):
"""Check that we can pass an axis to the display."""
_, ax = pyplot.subplots()
if class_method == "from_estimator":
display = PredictionErrorDisplay.from_estimator(regressor_fitted, X, y, ax=ax)
else:
y_pred = regressor_fitted.predict(X)
display = PredictionErrorDisplay.from_predictions(
y_true=y, y_pred=y_pred, ax=ax
)
assert display.ax_ is ax
|
Check that we can pass an axis to the display.
|
test_plot_prediction_error_ax
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_predict_error_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_predict_error_display.py
|
BSD-3-Clause
|
def test_prediction_error_custom_artist(
pyplot, regressor_fitted, class_method, scatter_kwargs, line_kwargs
):
"""Check that we can tune the style of the line and the scatter."""
extra_params = {
"kind": "actual_vs_predicted",
"scatter_kwargs": scatter_kwargs,
"line_kwargs": line_kwargs,
}
if class_method == "from_estimator":
display = PredictionErrorDisplay.from_estimator(
regressor_fitted, X, y, **extra_params
)
else:
y_pred = regressor_fitted.predict(X)
display = PredictionErrorDisplay.from_predictions(
y_true=y, y_pred=y_pred, **extra_params
)
if line_kwargs is not None:
assert display.line_.get_linestyle() == "-"
assert display.line_.get_color() == "red"
else:
assert display.line_.get_linestyle() == "--"
assert display.line_.get_color() == "black"
assert display.line_.get_alpha() == 0.7
if scatter_kwargs is not None:
assert_allclose(display.scatter_.get_facecolor(), [[0.0, 0.0, 1.0, 0.9]])
assert_allclose(display.scatter_.get_edgecolor(), [[0.0, 0.0, 1.0, 0.9]])
else:
assert display.scatter_.get_alpha() == 0.8
|
Check that we can tune the style of the line and the scatter.
|
test_prediction_error_custom_artist
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_predict_error_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_predict_error_display.py
|
BSD-3-Clause
|
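A minimal sketch of styling the two artists of `PredictionErrorDisplay`, mirroring the kwargs exercised by the test above:
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import PredictionErrorDisplay
X, y = make_regression(random_state=0)
reg = Ridge().fit(X, y)
PredictionErrorDisplay.from_estimator(
    reg, X, y, kind="actual_vs_predicted",
    scatter_kwargs={"color": "blue", "alpha": 0.9},  # styles the scatter points
    line_kwargs={"color": "red", "linestyle": "-"},  # styles the identity line
)
plt.show()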
def _check_figure_axes_and_labels(display, pos_label):
"""Check mpl axes and figure defaults are correct."""
import matplotlib as mpl
assert isinstance(display.ax_, mpl.axes.Axes)
assert isinstance(display.figure_, mpl.figure.Figure)
assert display.ax_.get_adjustable() == "box"
assert display.ax_.get_aspect() in ("equal", 1.0)
assert display.ax_.get_xlim() == display.ax_.get_ylim() == (-0.01, 1.01)
expected_pos_label = 1 if pos_label is None else pos_label
expected_ylabel = f"True Positive Rate (Positive label: {expected_pos_label})"
expected_xlabel = f"False Positive Rate (Positive label: {expected_pos_label})"
assert display.ax_.get_ylabel() == expected_ylabel
assert display.ax_.get_xlabel() == expected_xlabel
|
Check mpl axes and figure defaults are correct.
|
_check_figure_axes_and_labels
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_display_plotting(
pyplot,
response_method,
data_binary,
with_sample_weight,
drop_intermediate,
with_strings,
constructor_name,
default_name,
):
"""Check the overall plotting behaviour for single curve."""
X, y = data_binary
pos_label = None
if with_strings:
y = np.array(["c", "b"])[y]
pos_label = "c"
if with_sample_weight:
rng = np.random.RandomState(42)
sample_weight = rng.randint(1, 4, size=(X.shape[0]))
else:
sample_weight = None
lr = LogisticRegression()
lr.fit(X, y)
y_score = getattr(lr, response_method)(X)
y_score = y_score if y_score.ndim == 1 else y_score[:, 1]
if constructor_name == "from_estimator":
display = RocCurveDisplay.from_estimator(
lr,
X,
y,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
pos_label=pos_label,
curve_kwargs={"alpha": 0.8},
)
else:
display = RocCurveDisplay.from_predictions(
y,
y_score,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
pos_label=pos_label,
curve_kwargs={"alpha": 0.8},
)
fpr, tpr, _ = roc_curve(
y,
y_score,
sample_weight=sample_weight,
drop_intermediate=drop_intermediate,
pos_label=pos_label,
)
assert_allclose(display.roc_auc, auc(fpr, tpr))
assert_allclose(display.fpr, fpr)
assert_allclose(display.tpr, tpr)
assert display.name == default_name
import matplotlib as mpl
_check_figure_axes_and_labels(display, pos_label)
assert isinstance(display.line_, mpl.lines.Line2D)
assert display.line_.get_alpha() == 0.8
expected_label = f"{default_name} (AUC = {display.roc_auc:.2f})"
assert display.line_.get_label() == expected_label
|
Check the overall plotting behaviour for single curve.
|
test_roc_curve_display_plotting
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_plot_parameter_length_validation(pyplot, params, err_msg):
"""Check `plot` parameter length validation performed correctly."""
display = RocCurveDisplay(**params)
if err_msg:
with pytest.raises(ValueError, match=err_msg):
display.plot()
else:
# No error should be raised
display.plot()
|
Check that `plot` parameter length validation is performed correctly.
|
test_roc_curve_plot_parameter_length_validation
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_display_kwargs_deprecation(pyplot, data_binary, constructor_name):
"""Check **kwargs deprecated correctly in favour of `curve_kwargs`."""
X, y = data_binary
lr = LogisticRegression()
lr.fit(X, y)
fpr = np.array([0, 0.5, 1])
tpr = np.array([0, 0.5, 1])
# Error when both `curve_kwargs` and `**kwargs` provided
with pytest.raises(ValueError, match="Cannot provide both `curve_kwargs`"):
if constructor_name == "from_estimator":
RocCurveDisplay.from_estimator(
lr, X, y, curve_kwargs={"alpha": 1}, label="test"
)
elif constructor_name == "from_predictions":
RocCurveDisplay.from_predictions(
y, y, curve_kwargs={"alpha": 1}, label="test"
)
else:
RocCurveDisplay(fpr=fpr, tpr=tpr).plot(
curve_kwargs={"alpha": 1}, label="test"
)
# Warning when `**kwargs` is provided
with pytest.warns(FutureWarning, match=r"`\*\*kwargs` is deprecated and will be"):
if constructor_name == "from_estimator":
RocCurveDisplay.from_estimator(lr, X, y, label="test")
elif constructor_name == "from_predictions":
RocCurveDisplay.from_predictions(y, y, label="test")
else:
RocCurveDisplay(fpr=fpr, tpr=tpr).plot(label="test")
|
Check that `**kwargs` is deprecated correctly in favour of `curve_kwargs`.
|
test_roc_curve_display_kwargs_deprecation
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_plot_legend_label(pyplot, data_binary, name, curve_kwargs, roc_auc):
"""Check legend label correct with all `curve_kwargs`, `name` combinations."""
fpr = [np.array([0, 0.5, 1]), np.array([0, 0.5, 1]), np.array([0, 0.5, 1])]
tpr = [np.array([0, 0.5, 1]), np.array([0, 0.5, 1]), np.array([0, 0.5, 1])]
if not isinstance(curve_kwargs, list) and isinstance(name, list):
with pytest.raises(ValueError, match="To avoid labeling individual curves"):
RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc).plot(
name=name, curve_kwargs=curve_kwargs
)
else:
display = RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc).plot(
name=name, curve_kwargs=curve_kwargs
)
legend = display.ax_.get_legend()
if legend is None:
# No legend is created, exit test early
assert name is None
assert roc_auc is None
return
else:
legend_labels = [text.get_text() for text in legend.get_texts()]
if isinstance(curve_kwargs, list):
# Multiple labels in legend
assert len(legend_labels) == 3
for idx, label in enumerate(legend_labels):
if name is None:
expected_label = "AUC = 1.00" if roc_auc else None
assert label == expected_label
elif isinstance(name, str):
expected_label = "single (AUC = 1.00)" if roc_auc else "single"
assert label == expected_label
else:
# `name` is a list of different strings
expected_label = (
f"{name[idx]} (AUC = 1.00)" if roc_auc else f"{name[idx]}"
)
assert label == expected_label
else:
# Single label in legend
assert len(legend_labels) == 1
if name is None:
expected_label = "AUC = 1.00 +/- 0.00" if roc_auc else None
assert legend_labels[0] == expected_label
else:
# name is single string
expected_label = "single (AUC = 1.00 +/- 0.00)" if roc_auc else "single"
assert legend_labels[0] == expected_label
|
Check the legend label is correct for all `curve_kwargs` and `name` combinations.
|
test_roc_curve_plot_legend_label
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_from_cv_results_legend_label(
pyplot, data_binary, name, curve_kwargs
):
"""Check legend label correct with all `curve_kwargs`, `name` combinations."""
X, y = data_binary
n_cv = 3
cv_results = cross_validate(
LogisticRegression(), X, y, cv=n_cv, return_estimator=True, return_indices=True
)
if not isinstance(curve_kwargs, list) and isinstance(name, list):
with pytest.raises(ValueError, match="To avoid labeling individual curves"):
RocCurveDisplay.from_cv_results(
cv_results, X, y, name=name, curve_kwargs=curve_kwargs
)
else:
display = RocCurveDisplay.from_cv_results(
cv_results, X, y, name=name, curve_kwargs=curve_kwargs
)
legend = display.ax_.get_legend()
legend_labels = [text.get_text() for text in legend.get_texts()]
if isinstance(curve_kwargs, list):
# Multiple labels in legend
assert len(legend_labels) == 3
auc = ["0.62", "0.66", "0.55"]
for idx, label in enumerate(legend_labels):
if name is None:
assert label == f"AUC = {auc[idx]}"
elif isinstance(name, str):
assert label == f"single (AUC = {auc[idx]})"
else:
# `name` is a list of different strings
assert label == f"{name[idx]} (AUC = {auc[idx]})"
else:
# Single label in legend
assert len(legend_labels) == 1
if name is None:
assert legend_labels[0] == "AUC = 0.61 +/- 0.05"
else:
# name is single string
assert legend_labels[0] == "single (AUC = 0.61 +/- 0.05)"
|
Check the legend label is correct for all `curve_kwargs` and `name` combinations.
|
test_roc_curve_from_cv_results_legend_label
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_from_cv_results_curve_kwargs(pyplot, data_binary, curve_kwargs):
"""Check line kwargs passed correctly in `from_cv_results`."""
X, y = data_binary
cv_results = cross_validate(
LogisticRegression(), X, y, cv=3, return_estimator=True, return_indices=True
)
display = RocCurveDisplay.from_cv_results(
cv_results, X, y, curve_kwargs=curve_kwargs
)
for idx, line in enumerate(display.line_):
color = line.get_color()
if curve_kwargs is None:
# Default color
assert color == "blue"
elif isinstance(curve_kwargs, Mapping):
# All curves "red"
assert color == "red"
else:
assert color == curve_kwargs[idx]["c"]
|
Check line kwargs passed correctly in `from_cv_results`.
|
test_roc_curve_from_cv_results_curve_kwargs
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def _check_chance_level(plot_chance_level, chance_level_kw, display):
"""Check chance level line and line styles correct."""
import matplotlib as mpl
if plot_chance_level:
assert isinstance(display.chance_level_, mpl.lines.Line2D)
assert tuple(display.chance_level_.get_xdata()) == (0, 1)
assert tuple(display.chance_level_.get_ydata()) == (0, 1)
else:
assert display.chance_level_ is None
# Checking for chance level line styles
if plot_chance_level and chance_level_kw is None:
assert display.chance_level_.get_color() == "k"
assert display.chance_level_.get_linestyle() == "--"
assert display.chance_level_.get_label() == "Chance level (AUC = 0.5)"
elif plot_chance_level:
if "c" in chance_level_kw:
assert display.chance_level_.get_color() == chance_level_kw["c"]
else:
assert display.chance_level_.get_color() == chance_level_kw["color"]
if "lw" in chance_level_kw:
assert display.chance_level_.get_linewidth() == chance_level_kw["lw"]
else:
assert display.chance_level_.get_linewidth() == chance_level_kw["linewidth"]
if "ls" in chance_level_kw:
assert display.chance_level_.get_linestyle() == chance_level_kw["ls"]
else:
assert display.chance_level_.get_linestyle() == chance_level_kw["linestyle"]
|
Check that the chance level line and its styles are correct.
|
_check_chance_level
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_chance_level_line(
pyplot,
data_binary,
plot_chance_level,
chance_level_kw,
label,
constructor_name,
):
"""Check chance level plotting behavior of `from_predictions`, `from_estimator`."""
X, y = data_binary
lr = LogisticRegression()
lr.fit(X, y)
y_score = lr.predict_proba(X)[:, 1]
if constructor_name == "from_estimator":
display = RocCurveDisplay.from_estimator(
lr,
X,
y,
curve_kwargs={"alpha": 0.8, "label": label},
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kw,
)
else:
display = RocCurveDisplay.from_predictions(
y,
y_score,
curve_kwargs={"alpha": 0.8, "label": label},
plot_chance_level=plot_chance_level,
chance_level_kw=chance_level_kw,
)
import matplotlib as mpl
assert isinstance(display.line_, mpl.lines.Line2D)
assert display.line_.get_alpha() == 0.8
assert isinstance(display.ax_, mpl.axes.Axes)
assert isinstance(display.figure_, mpl.figure.Figure)
_check_chance_level(plot_chance_level, chance_level_kw, display)
# Checking for legend behaviour
if plot_chance_level and chance_level_kw is not None:
if label is not None or chance_level_kw.get("label") is not None:
legend = display.ax_.get_legend()
assert legend is not None # Legend should be present if any label is set
legend_labels = [text.get_text() for text in legend.get_texts()]
if label is not None:
assert label in legend_labels
if chance_level_kw.get("label") is not None:
assert chance_level_kw["label"] in legend_labels
else:
assert display.ax_.get_legend() is None
|
Check chance level plotting behavior of `from_predictions`, `from_estimator`.
|
test_roc_curve_chance_level_line
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
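A minimal sketch of the chance level behaviour these tests verify through `_check_chance_level`: for ROC curves the chance line is the diagonal from (0, 0) to (1, 1), and `chance_level_kw` overrides its style.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import RocCurveDisplay
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
RocCurveDisplay.from_predictions(
    y_true, y_score, plot_chance_level=True,
    chance_level_kw={"color": "r", "linestyle": ":", "label": "chance"},
)
plt.show()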
def test_roc_curve_chance_level_line_from_cv_results(
pyplot,
data_binary,
plot_chance_level,
chance_level_kw,
curve_kwargs,
):
"""Check chance level plotting behavior with `from_cv_results`."""
X, y = data_binary
n_cv = 3
cv_results = cross_validate(
LogisticRegression(), X, y, cv=n_cv, return_estimator=True, return_indices=True
)
display = RocCurveDisplay.from_cv_results(
cv_results,
X,
y,
plot_chance_level=plot_chance_level,
chance_level_kwargs=chance_level_kw,
curve_kwargs=curve_kwargs,
)
import matplotlib as mpl
assert all(isinstance(line, mpl.lines.Line2D) for line in display.line_)
# Ensure the curve line kwargs are passed through correctly as well
if curve_kwargs:
assert all(line.get_alpha() == 0.8 for line in display.line_)
assert isinstance(display.ax_, mpl.axes.Axes)
assert isinstance(display.figure_, mpl.figure.Figure)
_check_chance_level(plot_chance_level, chance_level_kw, display)
legend = display.ax_.get_legend()
# There is always a legend, to indicate each 'Fold' curve
assert legend is not None
legend_labels = [text.get_text() for text in legend.get_texts()]
if plot_chance_level and chance_level_kw is not None:
if chance_level_kw.get("label") is not None:
assert chance_level_kw["label"] in legend_labels
else:
assert len(legend_labels) == 1
|
Check chance level plotting behavior with `from_cv_results`.
|
test_roc_curve_chance_level_line_from_cv_results
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_display_complex_pipeline(pyplot, data_binary, clf, constructor_name):
"""Check the behaviour with complex pipeline."""
X, y = data_binary
clf = clone(clf)
if constructor_name == "from_estimator":
with pytest.raises(NotFittedError):
RocCurveDisplay.from_estimator(clf, X, y)
clf.fit(X, y)
if constructor_name == "from_estimator":
display = RocCurveDisplay.from_estimator(clf, X, y)
name = clf.__class__.__name__
else:
display = RocCurveDisplay.from_predictions(y, y)
name = "Classifier"
assert name in display.line_.get_label()
assert display.name == name
|
Check the behaviour with a complex pipeline.
|
test_roc_curve_display_complex_pipeline
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_roc_curve_display_default_labels(
pyplot, roc_auc, name, curve_kwargs, expected_labels
):
"""Check the default labels used in the display."""
fpr = [np.array([0, 0.5, 1]), np.array([0, 0.3, 1])]
tpr = [np.array([0, 0.5, 1]), np.array([0, 0.3, 1])]
disp = RocCurveDisplay(fpr=fpr, tpr=tpr, roc_auc=roc_auc, name=name).plot(
curve_kwargs=curve_kwargs
)
for idx, expected_label in enumerate(expected_labels):
assert disp.line_[idx].get_label() == expected_label
|
Check the default labels used in the display.
|
test_roc_curve_display_default_labels
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_y_score_and_y_pred_specified_error():
"""Check that an error is raised when both y_score and y_pred are specified."""
y_true = np.array([0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
y_pred = np.array([0.2, 0.3, 0.5, 0.1])
with pytest.raises(
ValueError, match="`y_pred` and `y_score` cannot be both specified"
):
RocCurveDisplay.from_predictions(y_true, y_score=y_score, y_pred=y_pred)
|
Check that an error is raised when both y_score and y_pred are specified.
|
test_y_score_and_y_pred_specified_error
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
def test_y_pred_deprecation_warning(pyplot):
"""Check that a warning is raised when y_pred is specified."""
y_true = np.array([0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
with pytest.warns(FutureWarning, match="y_pred is deprecated in 1.7"):
display_y_pred = RocCurveDisplay.from_predictions(y_true, y_pred=y_score)
assert_allclose(display_y_pred.fpr, [0, 0.5, 0.5, 1])
assert_allclose(display_y_pred.tpr, [0, 0, 1, 1])
display_y_score = RocCurveDisplay.from_predictions(y_true, y_score)
assert_allclose(display_y_score.fpr, [0, 0.5, 0.5, 1])
assert_allclose(display_y_score.tpr, [0, 0, 1, 1])
|
Check that a warning is raised when y_pred is specified.
|
test_y_pred_deprecation_warning
|
python
|
scikit-learn/scikit-learn
|
sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/_plot/tests/test_roc_curve_display.py
|
BSD-3-Clause
|
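A minimal sketch of the post-deprecation call (assuming scikit-learn >= 1.7): pass scores positionally or as `y_score`, not via the deprecated `y_pred`.
import numpy as np
from sklearn.metrics import RocCurveDisplay
y_true = np.array([0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
disp = RocCurveDisplay.from_predictions(y_true, y_score)  # no FutureWarning
print(disp.fpr, disp.tpr)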
def _check_shape(param, param_shape, name):
"""Validate the shape of the input parameter 'param'.
Parameters
----------
param : array
param_shape : tuple
name : str
"""
param = np.array(param)
if param.shape != param_shape:
raise ValueError(
"The parameter '%s' should have the shape of %s, but got %s"
% (name, param_shape, param.shape)
)
|
Validate the shape of the input parameter 'param'.
Parameters
----------
param : array
param_shape : tuple
name : str
|
_check_shape
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
|
def _initialize_parameters(self, X, random_state):
"""Initialize the model parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
random_state : RandomState
A random number generator instance that controls the random seed
used for the method chosen to initialize the parameters.
"""
n_samples, _ = X.shape
if self.init_params == "kmeans":
resp = np.zeros((n_samples, self.n_components), dtype=X.dtype)
label = (
cluster.KMeans(
n_clusters=self.n_components, n_init=1, random_state=random_state
)
.fit(X)
.labels_
)
resp[np.arange(n_samples), label] = 1
elif self.init_params == "random":
resp = np.asarray(
random_state.uniform(size=(n_samples, self.n_components)), dtype=X.dtype
)
resp /= resp.sum(axis=1)[:, np.newaxis]
elif self.init_params == "random_from_data":
resp = np.zeros((n_samples, self.n_components), dtype=X.dtype)
indices = random_state.choice(
n_samples, size=self.n_components, replace=False
)
resp[indices, np.arange(self.n_components)] = 1
elif self.init_params == "k-means++":
resp = np.zeros((n_samples, self.n_components), dtype=X.dtype)
_, indices = kmeans_plusplus(
X,
self.n_components,
random_state=random_state,
)
resp[indices, np.arange(self.n_components)] = 1
self._initialize(X, resp)
|
Initialize the model parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
random_state : RandomState
A random number generator instance that controls the random seed
used for the method chosen to initialize the parameters.
|
_initialize_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
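A minimal sketch of the four initialization strategies this method dispatches on, exercised through scikit-learn's public GaussianMixture estimator (the two-cluster toy data is a made-up example):
import numpy as np
from sklearn.mixture import GaussianMixture
# Two well-separated 1-D clusters (hypothetical toy data).
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
for init in ("kmeans", "random", "random_from_data", "k-means++"):
    gm = GaussianMixture(n_components=2, init_params=init, random_state=0).fit(X)
    # Each strategy typically recovers means near 0.1 and 5.1 on this data.
    print(init, np.sort(gm.means_.ravel()))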
|
def fit(self, X, y=None):
"""Estimate model parameters with the EM algorithm.
The method fits the model ``n_init`` times and sets the parameters with
which the model has the largest likelihood or lower bound. Within each
trial, the method iterates between E-step and M-step for ``max_iter``
        times until the change in likelihood or lower bound is less than
        ``tol``; otherwise, a ``ConvergenceWarning`` is raised.
If ``warm_start`` is ``True``, then ``n_init`` is ignored and a single
initialization is performed upon the first call. Upon consecutive
calls, training starts where it left off.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
self : object
The fitted mixture.
"""
# parameters are validated in fit_predict
self.fit_predict(X, y)
return self
|
Estimate model parameters with the EM algorithm.
The method fits the model ``n_init`` times and sets the parameters with
which the model has the largest likelihood or lower bound. Within each
trial, the method iterates between E-step and M-step for ``max_iter``
times until the change in likelihood or lower bound is less than
``tol``; otherwise, a ``ConvergenceWarning`` is raised.
If ``warm_start`` is ``True``, then ``n_init`` is ignored and a single
initialization is performed upon the first call. Upon consecutive
calls, training starts where it left off.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
self : object
The fitted mixture.
|
fit
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
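A short usage sketch of ``fit`` with multiple restarts, assuming the public GaussianMixture estimator and synthetic data:
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
# Two synthetic Gaussian blobs.
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])
gm = GaussianMixture(n_components=2, n_init=3, random_state=0).fit(X)
# Attributes set by fit: convergence flag, iteration count of the best
# initialization, and the best lower bound over the n_init runs.
print(gm.converged_, gm.n_iter_, gm.lower_bound_)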
|
def fit_predict(self, X, y=None):
"""Estimate model parameters using X and predict the labels for X.
The method fits the model n_init times and sets the parameters with
which the model has the largest likelihood or lower bound. Within each
trial, the method iterates between E-step and M-step for `max_iter`
        times until the change in likelihood or lower bound is less than
        `tol`; otherwise, a :class:`~sklearn.exceptions.ConvergenceWarning` is
raised. After fitting, it predicts the most probable label for the
input data points.
.. versionadded:: 0.20
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
labels : array, shape (n_samples,)
Component labels.
"""
X = validate_data(self, X, dtype=[np.float64, np.float32], ensure_min_samples=2)
if X.shape[0] < self.n_components:
raise ValueError(
"Expected n_samples >= n_components "
f"but got n_components = {self.n_components}, "
f"n_samples = {X.shape[0]}"
)
self._check_parameters(X)
# if we enable warm_start, we will have a unique initialisation
do_init = not (self.warm_start and hasattr(self, "converged_"))
n_init = self.n_init if do_init else 1
max_lower_bound = -np.inf
best_lower_bounds = []
self.converged_ = False
random_state = check_random_state(self.random_state)
n_samples, _ = X.shape
for init in range(n_init):
self._print_verbose_msg_init_beg(init)
if do_init:
self._initialize_parameters(X, random_state)
lower_bound = -np.inf if do_init else self.lower_bound_
current_lower_bounds = []
if self.max_iter == 0:
best_params = self._get_parameters()
best_n_iter = 0
else:
converged = False
for n_iter in range(1, self.max_iter + 1):
prev_lower_bound = lower_bound
log_prob_norm, log_resp = self._e_step(X)
self._m_step(X, log_resp)
lower_bound = self._compute_lower_bound(log_resp, log_prob_norm)
current_lower_bounds.append(lower_bound)
change = lower_bound - prev_lower_bound
self._print_verbose_msg_iter_end(n_iter, change)
if abs(change) < self.tol:
converged = True
break
                self._print_verbose_msg_init_end(lower_bound, converged)
                if lower_bound > max_lower_bound or max_lower_bound == -np.inf:
                    max_lower_bound = lower_bound
                    best_params = self._get_parameters()
                    best_n_iter = n_iter
                    best_lower_bounds = current_lower_bounds
                    self.converged_ = converged
# Should only warn about convergence if max_iter > 0, otherwise
# the user is assumed to have used 0-iters initialization
# to get the initial means.
if not self.converged_ and self.max_iter > 0:
warnings.warn(
(
"Best performing initialization did not converge. "
"Try different init parameters, or increase max_iter, "
"tol, or check for degenerate data."
),
ConvergenceWarning,
)
self._set_parameters(best_params)
self.n_iter_ = best_n_iter
self.lower_bound_ = max_lower_bound
self.lower_bounds_ = best_lower_bounds
# Always do a final e-step to guarantee that the labels returned by
# fit_predict(X) are always consistent with fit(X).predict(X)
# for any value of max_iter and tol (and any random_state).
_, log_resp = self._e_step(X)
return log_resp.argmax(axis=1)
|
Estimate model parameters using X and predict the labels for X.
The method fits the model n_init times and sets the parameters with
which the model has the largest likelihood or lower bound. Within each
trial, the method iterates between E-step and M-step for `max_iter`
times until the change in likelihood or lower bound is less than
`tol`; otherwise, a :class:`~sklearn.exceptions.ConvergenceWarning` is
raised. After fitting, it predicts the most probable label for the
input data points.
.. versionadded:: 0.20
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
y : Ignored
Not used, present for API consistency by convention.
Returns
-------
labels : array, shape (n_samples,)
Component labels.
|
fit_predict
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
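The final E-step above is what makes ``fit_predict(X)`` agree with ``fit(X).predict(X)``; a quick consistency check, assuming the public GaussianMixture estimator:
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
labels_a = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
labels_b = GaussianMixture(n_components=2, random_state=0).fit(X).predict(X)
assert np.array_equal(labels_a, labels_b)  # holds for any max_iter and tol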
|
def _e_step(self, X):
"""E step.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
log_prob_norm : float
Mean of the logarithms of the probabilities of each sample in X
log_responsibility : array, shape (n_samples, n_components)
            Logarithm of the posterior probabilities (or responsibilities)
            of each sample in X.
"""
log_prob_norm, log_resp = self._estimate_log_prob_resp(X)
return np.mean(log_prob_norm), log_resp
|
E step.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
log_prob_norm : float
Mean of the logarithms of the probabilities of each sample in X
log_responsibility : array, shape (n_samples, n_components)
    Logarithm of the posterior probabilities (or responsibilities)
    of each sample in X.
|
_e_step
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
|
def score_samples(self, X):
"""Compute the log-likelihood of each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
log_prob : array, shape (n_samples,)
Log-likelihood of each sample in `X` under the current model.
"""
check_is_fitted(self)
X = validate_data(self, X, reset=False)
return logsumexp(self._estimate_weighted_log_prob(X), axis=1)
|
Compute the log-likelihood of each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
log_prob : array, shape (n_samples,)
Log-likelihood of each sample in `X` under the current model.
|
score_samples
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
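A small sketch relating ``score_samples`` to ``score``, which in scikit-learn averages the per-sample log-likelihood (synthetic data):
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
log_prob = gm.score_samples(X)  # shape (200,): one log-likelihood per sample
assert np.isclose(log_prob.mean(), gm.score(X))  # score() is their mean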
|
def predict(self, X):
"""Predict the labels for the data samples in X using trained model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
labels : array, shape (n_samples,)
Component labels.
"""
check_is_fitted(self)
X = validate_data(self, X, reset=False)
return self._estimate_weighted_log_prob(X).argmax(axis=1)
|
Predict the labels for the data samples in X using the trained model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
labels : array, shape (n_samples,)
Component labels.
|
predict
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
|
def predict_proba(self, X):
"""Evaluate the components' density for each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
resp : array, shape (n_samples, n_components)
            Posterior probability (responsibility) of each Gaussian component
            for each sample in X.
"""
check_is_fitted(self)
X = validate_data(self, X, reset=False)
_, log_resp = self._estimate_log_prob_resp(X)
return np.exp(log_resp)
|
Evaluate the posterior probability of each component for each sample.
Parameters
----------
X : array-like of shape (n_samples, n_features)
List of n_features-dimensional data points. Each row
corresponds to a single data point.
Returns
-------
resp : array, shape (n_samples, n_components)
    Posterior probability (responsibility) of each Gaussian component
    for each sample in X.
|
predict_proba
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
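Because the returned values are exponentiated log-responsibilities, each row is a posterior distribution over components; a quick sanity check (synthetic data, public GaussianMixture API assumed):
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
X = rng.normal(size=(100, 2))
gm = GaussianMixture(n_components=3, random_state=0).fit(X)
resp = gm.predict_proba(X)
assert np.allclose(resp.sum(axis=1), 1.0)  # rows are probability vectors
assert np.array_equal(resp.argmax(axis=1), gm.predict(X))  # argmax == predict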
|
def sample(self, n_samples=1):
"""Generate random samples from the fitted Gaussian distribution.
Parameters
----------
n_samples : int, default=1
Number of samples to generate.
Returns
-------
X : array, shape (n_samples, n_features)
Randomly generated sample.
        y : array, shape (n_samples,)
Component labels.
"""
check_is_fitted(self)
if n_samples < 1:
raise ValueError(
"Invalid value for 'n_samples': %d . The sampling requires at "
"least one sample." % (self.n_components)
)
_, n_features = self.means_.shape
rng = check_random_state(self.random_state)
n_samples_comp = rng.multinomial(n_samples, self.weights_)
if self.covariance_type == "full":
X = np.vstack(
[
rng.multivariate_normal(mean, covariance, int(sample))
for (mean, covariance, sample) in zip(
self.means_, self.covariances_, n_samples_comp
)
]
)
elif self.covariance_type == "tied":
X = np.vstack(
[
rng.multivariate_normal(mean, self.covariances_, int(sample))
for (mean, sample) in zip(self.means_, n_samples_comp)
]
)
else:
X = np.vstack(
[
mean
+ rng.standard_normal(size=(sample, n_features))
* np.sqrt(covariance)
for (mean, covariance, sample) in zip(
self.means_, self.covariances_, n_samples_comp
)
]
)
y = np.concatenate(
[np.full(sample, j, dtype=int) for j, sample in enumerate(n_samples_comp)]
)
return (X, y)
|
Generate random samples from the fitted mixture distribution.
Parameters
----------
n_samples : int, default=1
Number of samples to generate.
Returns
-------
X : array, shape (n_samples, n_features)
Randomly generated sample.
y : array, shape (n_samples,)
Component labels.
|
sample
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
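A usage sketch: component labels drawn by ``sample`` should roughly reproduce the fitted mixing weights (synthetic data, public GaussianMixture API assumed):
import numpy as np
from sklearn.mixture import GaussianMixture
rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (300, 2)), rng.normal(6, 1, (100, 2))])
gm = GaussianMixture(n_components=2, random_state=0).fit(X)
X_new, y_new = gm.sample(n_samples=1000)
# Empirical component proportions should be close to gm.weights_ (~0.75/0.25).
print(np.bincount(y_new) / 1000, gm.weights_)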
|
def _estimate_log_prob_resp(self, X):
"""Estimate log probabilities and responsibilities for each sample.
Compute the log probabilities, weighted log probabilities per
component and responsibilities for each sample in X with respect to
the current state of the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
log_prob_norm : array, shape (n_samples,)
log p(X)
log_responsibilities : array, shape (n_samples, n_components)
logarithm of the responsibilities
"""
weighted_log_prob = self._estimate_weighted_log_prob(X)
log_prob_norm = logsumexp(weighted_log_prob, axis=1)
with np.errstate(under="ignore"):
# ignore underflow
log_resp = weighted_log_prob - log_prob_norm[:, np.newaxis]
return log_prob_norm, log_resp
|
Estimate log probabilities and responsibilities for each sample.
Compute the log probabilities, weighted log probabilities per
component and responsibilities for each sample in X with respect to
the current state of the model.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
log_prob_norm : array, shape (n_samples,)
log p(X)
log_responsibilities : array, shape (n_samples, n_components)
logarithm of the responsibilities
|
_estimate_log_prob_resp
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
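The normalization step is a plain log-sum-exp; a self-contained numeric sketch with a made-up 2x3 weighted log-probability matrix:
import numpy as np
from scipy.special import logsumexp
# Hypothetical weighted log-probabilities for 2 samples and 3 components.
weighted_log_prob = np.log([[0.1, 0.2, 0.7], [0.5, 0.4, 0.1]])
log_prob_norm = logsumexp(weighted_log_prob, axis=1)  # log p(x_i)
log_resp = weighted_log_prob - log_prob_norm[:, np.newaxis]
assert np.allclose(np.exp(log_resp).sum(axis=1), 1.0)  # responsibilities sum to 1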
|
def _print_verbose_msg_init_end(self, lb, init_has_converged):
"""Print verbose message on the end of iteration."""
converged_msg = "converged" if init_has_converged else "did not converge"
if self.verbose == 1:
print(f"Initialization {converged_msg}.")
elif self.verbose >= 2:
t = time() - self._init_prev_time
print(
f"Initialization {converged_msg}. time lapse {t:.5f}s\t lower bound"
f" {lb:.5f}."
)
|
Print a verbose message at the end of an initialization.
|
_print_verbose_msg_init_end
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_base.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_base.py
|
BSD-3-Clause
|
def _log_wishart_norm(degrees_of_freedom, log_det_precisions_chol, n_features):
"""Compute the log of the Wishart distribution normalization term.
Parameters
----------
degrees_of_freedom : array-like of shape (n_components,)
The number of degrees of freedom on the covariance Wishart
distributions.
    log_det_precisions_chol : array-like of shape (n_components,)
        The log-determinant of the Cholesky factor of the precision matrix
        for each component.
n_features : int
The number of features.
    Returns
    -------
log_wishart_norm : array-like of shape (n_components,)
The log normalization of the Wishart distribution.
"""
# To simplify the computation we have removed the np.log(np.pi) term
return -(
degrees_of_freedom * log_det_precisions_chol
+ degrees_of_freedom * n_features * 0.5 * math.log(2.0)
+ np.sum(
gammaln(0.5 * (degrees_of_freedom - np.arange(n_features)[:, np.newaxis])),
0,
)
)
|
Compute the log of the Wishart distribution normalization term.
Parameters
----------
degrees_of_freedom : array-like of shape (n_components,)
The number of degrees of freedom on the covariance Wishart
distributions.
log_det_precisions_chol : array-like of shape (n_components,)
    The log-determinant of the Cholesky factor of the precision matrix
    for each component.
n_features : int
The number of features.
Returns
-------
log_wishart_norm : array-like of shape (n_components,)
The log normalization of the Wishart distribution.
|
_log_wishart_norm
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
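For reference, the full normalization term of the Wishart distribution $\mathcal{W}(\Lambda \mid \mathbf{W}, \nu)$ in Bishop-style notation is sketched below; the code drops the constant $\ln \pi$ term and supplies the determinant part through ``log_det_precisions_chol``:
\ln B(\mathbf{W}, \nu) = -\frac{\nu}{2} \ln |\mathbf{W}|
    - \frac{\nu D}{2} \ln 2
    - \frac{D(D-1)}{4} \ln \pi
    - \sum_{i=1}^{D} \ln \Gamma\!\left(\frac{\nu + 1 - i}{2}\right)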
|
def _check_parameters(self, X):
"""Check that the parameters are well defined.
Parameters
----------
X : array-like of shape (n_samples, n_features)
"""
self._check_weights_parameters()
self._check_means_parameters(X)
self._check_precision_parameters(X)
self._checkcovariance_prior_parameter(X)
|
Check that the parameters are well defined.
Parameters
----------
X : array-like of shape (n_samples, n_features)
|
_check_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _check_weights_parameters(self):
"""Check the parameter of the Dirichlet distribution."""
if self.weight_concentration_prior is None:
self.weight_concentration_prior_ = 1.0 / self.n_components
else:
self.weight_concentration_prior_ = self.weight_concentration_prior
|
Check the parameter of the Dirichlet distribution.
|
_check_weights_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _check_means_parameters(self, X):
"""Check the parameters of the Gaussian distribution.
Parameters
----------
X : array-like of shape (n_samples, n_features)
"""
_, n_features = X.shape
if self.mean_precision_prior is None:
self.mean_precision_prior_ = 1.0
else:
self.mean_precision_prior_ = self.mean_precision_prior
if self.mean_prior is None:
self.mean_prior_ = X.mean(axis=0)
else:
self.mean_prior_ = check_array(
self.mean_prior, dtype=[np.float64, np.float32], ensure_2d=False
)
_check_shape(self.mean_prior_, (n_features,), "means")
|
Check the parameters of the Gaussian distribution.
Parameters
----------
X : array-like of shape (n_samples, n_features)
|
_check_means_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _check_precision_parameters(self, X):
"""Check the prior parameters of the precision distribution.
Parameters
----------
X : array-like of shape (n_samples, n_features)
"""
_, n_features = X.shape
if self.degrees_of_freedom_prior is None:
self.degrees_of_freedom_prior_ = n_features
elif self.degrees_of_freedom_prior > n_features - 1.0:
self.degrees_of_freedom_prior_ = self.degrees_of_freedom_prior
else:
raise ValueError(
"The parameter 'degrees_of_freedom_prior' "
"should be greater than %d, but got %.3f."
% (n_features - 1, self.degrees_of_freedom_prior)
)
|
Check the prior parameters of the precision distribution.
Parameters
----------
X : array-like of shape (n_samples, n_features)
|
_check_precision_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _checkcovariance_prior_parameter(self, X):
"""Check the `covariance_prior_`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
"""
_, n_features = X.shape
if self.covariance_prior is None:
self.covariance_prior_ = {
"full": np.atleast_2d(np.cov(X.T)),
"tied": np.atleast_2d(np.cov(X.T)),
"diag": np.var(X, axis=0, ddof=1),
"spherical": np.var(X, axis=0, ddof=1).mean(),
}[self.covariance_type]
elif self.covariance_type in ["full", "tied"]:
self.covariance_prior_ = check_array(
self.covariance_prior, dtype=[np.float64, np.float32], ensure_2d=False
)
_check_shape(
self.covariance_prior_,
(n_features, n_features),
"%s covariance_prior" % self.covariance_type,
)
_check_precision_matrix(self.covariance_prior_, self.covariance_type)
elif self.covariance_type == "diag":
self.covariance_prior_ = check_array(
self.covariance_prior, dtype=[np.float64, np.float32], ensure_2d=False
)
_check_shape(
self.covariance_prior_,
(n_features,),
"%s covariance_prior" % self.covariance_type,
)
_check_precision_positivity(self.covariance_prior_, self.covariance_type)
# spherical case
else:
self.covariance_prior_ = self.covariance_prior
|
Check the `covariance_prior_`.
Parameters
----------
X : array-like of shape (n_samples, n_features)
|
_checkcovariance_prior_parameter
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _initialize(self, X, resp):
"""Initialization of the mixture parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
resp : array-like of shape (n_samples, n_components)
"""
nk, xk, sk = _estimate_gaussian_parameters(
X, resp, self.reg_covar, self.covariance_type
)
self._estimate_weights(nk)
self._estimate_means(nk, xk)
self._estimate_precisions(nk, xk, sk)
|
Initialization of the mixture parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
resp : array-like of shape (n_samples, n_components)
|
_initialize
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_weights(self, nk):
"""Estimate the parameters of the Dirichlet distribution.
Parameters
----------
nk : array-like of shape (n_components,)
"""
if self.weight_concentration_prior_type == "dirichlet_process":
# For dirichlet process weight_concentration will be a tuple
# containing the two parameters of the beta distribution
self.weight_concentration_ = (
1.0 + nk,
(
self.weight_concentration_prior_
+ np.hstack((np.cumsum(nk[::-1])[-2::-1], 0))
),
)
else:
# case Variational Gaussian mixture with dirichlet distribution
self.weight_concentration_ = self.weight_concentration_prior_ + nk
|
Estimate the parameters of the Dirichlet distribution.
Parameters
----------
nk : array-like of shape (n_components,)
|
_estimate_weights
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
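A numeric sketch of the Dirichlet-process branch: the second beta parameter of component k aggregates the effective counts of all components after k (the counts below are made up):
import numpy as np
nk = np.array([30.0, 15.0, 5.0])  # hypothetical effective counts per component
prior = 0.5                       # hypothetical weight_concentration_prior_
a = 1.0 + nk
b = prior + np.hstack((np.cumsum(nk[::-1])[-2::-1], 0))
print(a)  # [31. 16.  6.]
print(b)  # [20.5  5.5  0.5]: b[0] = prior + nk[1] + nk[2], and so on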
|
def _estimate_means(self, nk, xk):
"""Estimate the parameters of the Gaussian distribution.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
"""
self.mean_precision_ = self.mean_precision_prior_ + nk
self.means_ = (
self.mean_precision_prior_ * self.mean_prior_ + nk[:, np.newaxis] * xk
) / self.mean_precision_[:, np.newaxis]
|
Estimate the parameters of the Gaussian distribution.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
|
_estimate_means
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_precisions(self, nk, xk, sk):
"""Estimate the precisions parameters of the precision distribution.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like
            The shape depends on `covariance_type`:
'full' : (n_components, n_features, n_features)
'tied' : (n_features, n_features)
'diag' : (n_components, n_features)
'spherical' : (n_components,)
"""
{
"full": self._estimate_wishart_full,
"tied": self._estimate_wishart_tied,
"diag": self._estimate_wishart_diag,
"spherical": self._estimate_wishart_spherical,
}[self.covariance_type](nk, xk, sk)
self.precisions_cholesky_ = _compute_precision_cholesky(
self.covariances_, self.covariance_type
)
|
Estimate the parameters of the precision distribution.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like
    The shape depends on `covariance_type`:
'full' : (n_components, n_features, n_features)
'tied' : (n_features, n_features)
'diag' : (n_components, n_features)
'spherical' : (n_components,)
|
_estimate_precisions
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_wishart_full(self, nk, xk, sk):
"""Estimate the full Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components, n_features, n_features)
"""
_, n_features = xk.shape
        # Warning: in some printings of Bishop's book, there is a typo in
        # formula 10.63; `degrees_of_freedom_k = degrees_of_freedom_0 + Nk`
        # is the correct formula.
self.degrees_of_freedom_ = self.degrees_of_freedom_prior_ + nk
self.covariances_ = np.empty((self.n_components, n_features, n_features))
for k in range(self.n_components):
diff = xk[k] - self.mean_prior_
self.covariances_[k] = (
self.covariance_prior_
+ nk[k] * sk[k]
+ nk[k]
* self.mean_precision_prior_
/ self.mean_precision_[k]
* np.outer(diff, diff)
)
        # Contrary to the original Bishop book, we normalize the covariances
self.covariances_ /= self.degrees_of_freedom_[:, np.newaxis, np.newaxis]
|
Estimate the full Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components, n_features, n_features)
|
_estimate_wishart_full
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_wishart_tied(self, nk, xk, sk):
"""Estimate the tied Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_features, n_features)
"""
_, n_features = xk.shape
        # Warning: in some printings of Bishop's book, there is a typo in
        # formula 10.63; `degrees_of_freedom_k = degrees_of_freedom_0 + Nk`
        # is the correct formula.
self.degrees_of_freedom_ = (
self.degrees_of_freedom_prior_ + nk.sum() / self.n_components
)
diff = xk - self.mean_prior_
self.covariances_ = (
self.covariance_prior_
+ sk * nk.sum() / self.n_components
+ self.mean_precision_prior_
/ self.n_components
* np.dot((nk / self.mean_precision_) * diff.T, diff)
)
        # Contrary to the original Bishop book, we normalize the covariances
self.covariances_ /= self.degrees_of_freedom_
|
Estimate the tied Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_features, n_features)
|
_estimate_wishart_tied
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_wishart_diag(self, nk, xk, sk):
"""Estimate the diag Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components, n_features)
"""
_, n_features = xk.shape
        # Warning: in some printings of Bishop's book, there is a typo in
        # formula 10.63; `degrees_of_freedom_k = degrees_of_freedom_0 + Nk`
        # is the correct formula.
self.degrees_of_freedom_ = self.degrees_of_freedom_prior_ + nk
diff = xk - self.mean_prior_
self.covariances_ = self.covariance_prior_ + nk[:, np.newaxis] * (
sk
+ (self.mean_precision_prior_ / self.mean_precision_)[:, np.newaxis]
* np.square(diff)
)
        # Contrary to the original Bishop book, we normalize the covariances
self.covariances_ /= self.degrees_of_freedom_[:, np.newaxis]
|
Estimate the diag Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components, n_features)
|
_estimate_wishart_diag
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _estimate_wishart_spherical(self, nk, xk, sk):
"""Estimate the spherical Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components,)
"""
_, n_features = xk.shape
        # Warning: in some printings of Bishop's book, there is a typo in
        # formula 10.63; `degrees_of_freedom_k = degrees_of_freedom_0 + Nk`
        # is the correct formula.
self.degrees_of_freedom_ = self.degrees_of_freedom_prior_ + nk
diff = xk - self.mean_prior_
self.covariances_ = self.covariance_prior_ + nk * (
sk
+ self.mean_precision_prior_
/ self.mean_precision_
* np.mean(np.square(diff), 1)
)
        # Contrary to the original Bishop book, we normalize the covariances
self.covariances_ /= self.degrees_of_freedom_
|
Estimate the spherical Wishart distribution parameters.
Parameters
----------
nk : array-like of shape (n_components,)
xk : array-like of shape (n_components, n_features)
sk : array-like of shape (n_components,)
|
_estimate_wishart_spherical
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _m_step(self, X, log_resp):
"""M step.
Parameters
----------
X : array-like of shape (n_samples, n_features)
log_resp : array-like of shape (n_samples, n_components)
            Logarithm of the posterior probabilities (or responsibilities)
            of each sample in X.
"""
n_samples, _ = X.shape
nk, xk, sk = _estimate_gaussian_parameters(
X, np.exp(log_resp), self.reg_covar, self.covariance_type
)
self._estimate_weights(nk)
self._estimate_means(nk, xk)
self._estimate_precisions(nk, xk, sk)
|
M step.
Parameters
----------
X : array-like of shape (n_samples, n_features)
log_resp : array-like of shape (n_samples, n_components)
    Logarithm of the posterior probabilities (or responsibilities)
    of each sample in X.
|
_m_step
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _compute_lower_bound(self, log_resp, log_prob_norm):
"""Estimate the lower bound of the model.
The lower bound on the likelihood (of the training data with respect to
        the model) is used to detect convergence and has to increase at
each iteration.
Parameters
----------
        log_resp : array, shape (n_samples, n_components)
            Logarithm of the posterior probabilities (or responsibilities)
            of each sample in X.
log_prob_norm : float
Logarithm of the probability of each sample in X.
Returns
-------
lower_bound : float
"""
# Contrary to the original formula, we have done some simplification
# and removed all the constant terms.
(n_features,) = self.mean_prior_.shape
# We removed `.5 * n_features * np.log(self.degrees_of_freedom_)`
# because the precision matrix is normalized.
log_det_precisions_chol = _compute_log_det_cholesky(
self.precisions_cholesky_, self.covariance_type, n_features
) - 0.5 * n_features * np.log(self.degrees_of_freedom_)
if self.covariance_type == "tied":
log_wishart = self.n_components * np.float64(
_log_wishart_norm(
self.degrees_of_freedom_, log_det_precisions_chol, n_features
)
)
else:
log_wishart = np.sum(
_log_wishart_norm(
self.degrees_of_freedom_, log_det_precisions_chol, n_features
)
)
if self.weight_concentration_prior_type == "dirichlet_process":
log_norm_weight = -np.sum(
betaln(self.weight_concentration_[0], self.weight_concentration_[1])
)
else:
log_norm_weight = _log_dirichlet_norm(self.weight_concentration_)
return (
-np.sum(np.exp(log_resp) * log_resp)
- log_wishart
- log_norm_weight
- 0.5 * n_features * np.sum(np.log(self.mean_precision_))
)
|
Estimate the lower bound of the model.
The lower bound on the likelihood (of the training data with respect to
the model) is used to detect convergence and has to increase at
each iteration.
Parameters
----------
log_resp : array, shape (n_samples, n_components)
    Logarithm of the posterior probabilities (or responsibilities)
    of each sample in X.
log_prob_norm : float
Logarithm of the probability of each sample in X.
Returns
-------
lower_bound : float
|
_compute_lower_bound
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_bayesian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_bayesian_mixture.py
|
BSD-3-Clause
|
def _check_weights(weights, n_components):
"""Check the user provided 'weights'.
Parameters
----------
weights : array-like of shape (n_components,)
The proportions of components of each mixture.
n_components : int
Number of components.
Returns
-------
weights : array, shape (n_components,)
"""
weights = check_array(weights, dtype=[np.float64, np.float32], ensure_2d=False)
_check_shape(weights, (n_components,), "weights")
# check range
if any(np.less(weights, 0.0)) or any(np.greater(weights, 1.0)):
raise ValueError(
"The parameter 'weights' should be in the range "
"[0, 1], but got max value %.5f, min value %.5f"
            % (np.max(weights), np.min(weights))
)
# check normalization
atol = 1e-6 if weights.dtype == np.float32 else 1e-8
if not np.allclose(np.abs(1.0 - np.sum(weights)), 0.0, atol=atol):
raise ValueError(
"The parameter 'weights' should be normalized, but got sum(weights) = %.5f"
% np.sum(weights)
)
return weights
|
Check the user provided 'weights'.
Parameters
----------
weights : array-like of shape (n_components,)
The proportions of components of each mixture.
n_components : int
Number of components.
Returns
-------
weights : array, shape (n_components,)
|
_check_weights
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
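These checks surface through the public API when ``weights_init`` is supplied; a sketch with synthetic data (public GaussianMixture API assumed):
import numpy as np
from sklearn.mixture import GaussianMixture
X = np.random.RandomState(0).rand(20, 2)
# Normalized weights in [0, 1] pass the check.
GaussianMixture(n_components=2, weights_init=[0.5, 0.5], random_state=0).fit(X)
try:
    # These sum to 1.4, so the normalization check raises at fit time.
    GaussianMixture(n_components=2, weights_init=[0.7, 0.7], random_state=0).fit(X)
except ValueError as exc:
    print(exc)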
|
def _check_means(means, n_components, n_features):
"""Validate the provided 'means'.
Parameters
----------
means : array-like of shape (n_components, n_features)
The centers of the current components.
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
means : array, (n_components, n_features)
"""
means = check_array(means, dtype=[np.float64, np.float32], ensure_2d=False)
_check_shape(means, (n_components, n_features), "means")
return means
|
Validate the provided 'means'.
Parameters
----------
means : array-like of shape (n_components, n_features)
The centers of the current components.
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
means : array, (n_components, n_features)
|
_check_means
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _check_precision_positivity(precision, covariance_type):
"""Check a precision vector is positive-definite."""
if np.any(np.less_equal(precision, 0.0)):
raise ValueError("'%s precision' should be positive" % covariance_type)
|
Check that a precision vector is positive.
|
_check_precision_positivity
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _check_precision_matrix(precision, covariance_type):
"""Check a precision matrix is symmetric and positive-definite."""
if not (
np.allclose(precision, precision.T) and np.all(linalg.eigvalsh(precision) > 0.0)
):
raise ValueError(
"'%s precision' should be symmetric, positive-definite" % covariance_type
)
|
Check that a precision matrix is symmetric and positive-definite.
|
_check_precision_matrix
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _check_precisions(precisions, covariance_type, n_components, n_features):
"""Validate user provided precisions.
Parameters
----------
precisions : array-like
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : str
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
precisions : array
"""
precisions = check_array(
precisions,
dtype=[np.float64, np.float32],
ensure_2d=False,
allow_nd=covariance_type == "full",
)
precisions_shape = {
"full": (n_components, n_features, n_features),
"tied": (n_features, n_features),
"diag": (n_components, n_features),
"spherical": (n_components,),
}
_check_shape(
precisions, precisions_shape[covariance_type], "%s precision" % covariance_type
)
_check_precisions = {
"full": _check_precisions_full,
"tied": _check_precision_matrix,
"diag": _check_precision_positivity,
"spherical": _check_precision_positivity,
}
_check_precisions[covariance_type](precisions, covariance_type)
return precisions
|
Validate user provided precisions.
Parameters
----------
precisions : array-like
'full' : shape of (n_components, n_features, n_features)
'tied' : shape of (n_features, n_features)
'diag' : shape of (n_components, n_features)
'spherical' : shape of (n_components,)
covariance_type : str
n_components : int
Number of components.
n_features : int
Number of features.
Returns
-------
precisions : array
|
_check_precisions
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _estimate_gaussian_covariances_full(resp, X, nk, means, reg_covar):
"""Estimate the full covariance matrices.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features, n_features)
The covariance matrix of the current components.
"""
n_components, n_features = means.shape
covariances = np.empty((n_components, n_features, n_features), dtype=X.dtype)
for k in range(n_components):
diff = X - means[k]
covariances[k] = np.dot(resp[:, k] * diff.T, diff) / nk[k]
covariances[k].flat[:: n_features + 1] += reg_covar
return covariances
|
Estimate the full covariance matrices.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features, n_features)
The covariance matrix of the current components.
|
_estimate_gaussian_covariances_full
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _estimate_gaussian_covariances_tied(resp, X, nk, means, reg_covar):
"""Estimate the tied covariance matrix.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariance : array, shape (n_features, n_features)
The tied covariance matrix of the components.
"""
avg_X2 = np.dot(X.T, X)
avg_means2 = np.dot(nk * means.T, means)
covariance = avg_X2 - avg_means2
covariance /= nk.sum()
covariance.flat[:: len(covariance) + 1] += reg_covar
return covariance
|
Estimate the tied covariance matrix.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariance : array, shape (n_features, n_features)
The tied covariance matrix of the components.
|
_estimate_gaussian_covariances_tied
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _estimate_gaussian_covariances_diag(resp, X, nk, means, reg_covar):
"""Estimate the diagonal covariance vectors.
Parameters
----------
    resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features)
The covariance vector of the current components.
"""
avg_X2 = np.dot(resp.T, X * X) / nk[:, np.newaxis]
avg_means2 = means**2
return avg_X2 - avg_means2 + reg_covar
|
Estimate the diagonal covariance vectors.
Parameters
----------
resp : array-like of shape (n_samples, n_components)
X : array-like of shape (n_samples, n_features)
nk : array-like of shape (n_components,)
means : array-like of shape (n_components, n_features)
reg_covar : float
Returns
-------
covariances : array, shape (n_components, n_features)
The covariance vector of the current components.
|
_estimate_gaussian_covariances_diag
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
|
def _estimate_gaussian_parameters(X, resp, reg_covar, covariance_type):
"""Estimate the Gaussian distribution parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input data array.
resp : array-like of shape (n_samples, n_components)
The responsibilities for each data sample in X.
reg_covar : float
The regularization added to the diagonal of the covariance matrices.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
nk : array-like of shape (n_components,)
The numbers of data samples in the current components.
means : array-like of shape (n_components, n_features)
The centers of the current components.
covariances : array-like
The covariance matrix of the current components.
        The shape depends on the covariance_type.
"""
nk = resp.sum(axis=0) + 10 * np.finfo(resp.dtype).eps
means = np.dot(resp.T, X) / nk[:, np.newaxis]
covariances = {
"full": _estimate_gaussian_covariances_full,
"tied": _estimate_gaussian_covariances_tied,
"diag": _estimate_gaussian_covariances_diag,
"spherical": _estimate_gaussian_covariances_spherical,
}[covariance_type](resp, X, nk, means, reg_covar)
return nk, means, covariances
|
Estimate the Gaussian distribution parameters.
Parameters
----------
X : array-like of shape (n_samples, n_features)
The input data array.
resp : array-like of shape (n_samples, n_components)
The responsibilities for each data sample in X.
reg_covar : float
The regularization added to the diagonal of the covariance matrices.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
nk : array-like of shape (n_components,)
The numbers of data samples in the current components.
means : array-like of shape (n_components, n_features)
The centers of the current components.
covariances : array-like
The covariance matrix of the current components.
    The shape depends on the covariance_type.
|
_estimate_gaussian_parameters
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
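The soft-count and weighted-mean computations can be reproduced in a few lines of NumPy; a self-contained sketch with random responsibilities:
import numpy as np
rng = np.random.RandomState(0)
X = rng.rand(6, 2)
resp = rng.rand(6, 3)
resp /= resp.sum(axis=1, keepdims=True)  # rows sum to 1, like E-step output
nk = resp.sum(axis=0) + 10 * np.finfo(resp.dtype).eps  # soft counts, kept above 0
means = resp.T @ X / nk[:, np.newaxis]                 # responsibility-weighted means
print(nk.shape, means.shape)  # (3,) (3, 2)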
|
def _compute_precision_cholesky(covariances, covariance_type):
"""Compute the Cholesky decomposition of the precisions.
Parameters
----------
covariances : array-like
The covariance matrix of the current components.
        The shape depends on the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
        The Cholesky decomposition of the sample precisions of the current
        components. The shape depends on the covariance_type.
"""
estimate_precision_error_message = (
"Fitting the mixture model failed because some components have "
"ill-defined empirical covariance (for instance caused by singleton "
"or collapsed samples). Try to decrease the number of components, "
"increase reg_covar, or scale the input data."
)
dtype = covariances.dtype
if dtype == np.float32:
estimate_precision_error_message += (
" The numerical accuracy can also be improved by passing float64"
" data instead of float32."
)
if covariance_type == "full":
n_components, n_features, _ = covariances.shape
precisions_chol = np.empty((n_components, n_features, n_features), dtype=dtype)
for k, covariance in enumerate(covariances):
try:
cov_chol = linalg.cholesky(covariance, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol[k] = linalg.solve_triangular(
cov_chol, np.eye(n_features, dtype=dtype), lower=True
).T
elif covariance_type == "tied":
_, n_features = covariances.shape
try:
cov_chol = linalg.cholesky(covariances, lower=True)
except linalg.LinAlgError:
raise ValueError(estimate_precision_error_message)
precisions_chol = linalg.solve_triangular(
cov_chol, np.eye(n_features, dtype=dtype), lower=True
).T
else:
if np.any(np.less_equal(covariances, 0.0)):
raise ValueError(estimate_precision_error_message)
precisions_chol = 1.0 / np.sqrt(covariances)
return precisions_chol
|
Compute the Cholesky decomposition of the precisions.
Parameters
----------
covariances : array-like
The covariance matrix of the current components.
    The shape depends on the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
    The Cholesky decomposition of the sample precisions of the current
    components. The shape depends on the covariance_type.
|
_compute_precision_cholesky
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
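For the 'full' case, the computed factor is an upper-triangular U whose product U Uᵀ equals the precision matrix; a one-component verification sketch:
import numpy as np
from scipy import linalg
cov = np.array([[2.0, 0.3], [0.3, 1.0]])  # a single SPD covariance (made up)
cov_chol = linalg.cholesky(cov, lower=True)
prec_chol = linalg.solve_triangular(cov_chol, np.eye(2), lower=True).T
# prec_chol is upper-triangular and factors the precision matrix.
assert np.allclose(prec_chol @ prec_chol.T, np.linalg.inv(cov))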
|
def _compute_precision_cholesky_from_precisions(precisions, covariance_type):
r"""Compute the Cholesky decomposition of precisions using precisions themselves.
As implemented in :func:`_compute_precision_cholesky`, the `precisions_cholesky_` is
an upper-triangular matrix for each Gaussian component, which can be expressed as
the $UU^T$ factorization of the precision matrix for each Gaussian component, where
$U$ is an upper-triangular matrix.
In order to use the Cholesky decomposition to get $UU^T$, the precision matrix
    $\Lambda$ needs to be permuted such that its rows and columns are reversed, which
can be done by applying a similarity transformation with an exchange matrix $J$,
where the 1 elements reside on the anti-diagonal and all other elements are 0. In
particular, the Cholesky decomposition of the transformed precision matrix is
$J\Lambda J=LL^T$, where $L$ is a lower-triangular matrix. Because $\Lambda=UU^T$
and $J=J^{-1}=J^T$, the `precisions_cholesky_` for each Gaussian component can be
expressed as $JLJ$.
Refer to #26415 for details.
Parameters
----------
precisions : array-like
The precision matrix of the current components.
The shape depends on the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
        The Cholesky decomposition of the sample precisions of the current
        components. The shape depends on the covariance_type.
"""
if covariance_type == "full":
precisions_cholesky = np.array(
[
_flipudlr(linalg.cholesky(_flipudlr(precision), lower=True))
for precision in precisions
]
)
elif covariance_type == "tied":
precisions_cholesky = _flipudlr(
linalg.cholesky(_flipudlr(precisions), lower=True)
)
else:
precisions_cholesky = np.sqrt(precisions)
return precisions_cholesky
|
Compute the Cholesky decomposition of precisions using precisions themselves.
As implemented in :func:`_compute_precision_cholesky`, the `precisions_cholesky_` is
an upper-triangular matrix for each Gaussian component, which can be expressed as
the $UU^T$ factorization of the precision matrix for each Gaussian component, where
$U$ is an upper-triangular matrix.
In order to use the Cholesky decomposition to get $UU^T$, the precision matrix
$\Lambda$ needs to be permuted such that its rows and columns are reversed, which
can be done by applying a similarity transformation with an exchange matrix $J$,
where the 1 elements reside on the anti-diagonal and all other elements are 0. In
particular, the Cholesky decomposition of the transformed precision matrix is
$J\Lambda J=LL^T$, where $L$ is a lower-triangular matrix. Because $\Lambda=UU^T$
and $J=J^{-1}=J^T$, the `precisions_cholesky_` for each Gaussian component can be
expressed as $JLJ$.
Refer to #26415 for details.
Parameters
----------
precisions : array-like
The precision matrix of the current components.
The shape depends on the covariance_type.
covariance_type : {'full', 'tied', 'diag', 'spherical'}
The type of precision matrices.
Returns
-------
precisions_cholesky : array-like
    The Cholesky decomposition of the sample precisions of the current
    components. The shape depends on the covariance_type.
|
_compute_precision_cholesky_from_precisions
|
python
|
scikit-learn/scikit-learn
|
sklearn/mixture/_gaussian_mixture.py
|
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/mixture/_gaussian_mixture.py
|
BSD-3-Clause
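A self-contained sketch of the exchange-matrix trick described above, with a hypothetical ``_flipudlr`` helper standing in for the module-private one:
import numpy as np
from scipy import linalg
def _flipudlr(a):
    """Reverse both rows and columns, i.e. the similarity transform J a J."""
    return np.flipud(np.fliplr(a))
prec = np.array([[2.0, 0.3], [0.3, 1.0]])  # hypothetical SPD precision matrix
U = _flipudlr(linalg.cholesky(_flipudlr(prec), lower=True))
assert np.allclose(U, np.triu(U))  # U is upper-triangular
assert np.allclose(U @ U.T, prec)  # and U @ U.T recovers the precision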