repo (string, 856 unique values) | pull_number (int64, 3 to 127k) | instance_id (string, 12 to 58 chars) | issue_numbers (sequence, 1 to 5 items) | base_commit (string, 40 chars) | patch (string, 67 to 1.54M chars) | test_patch (string, 0 to 107M chars) | problem_statement (string, 3 to 307k chars) | hints_text (string, 0 to 908k chars) | created_at (timestamp[s])
---|---|---|---|---|---|---|---|---|---
vispy/vispy | 2,131 | vispy__vispy-2131 | [
"2129"
] | d313742a00da77deaa8e1696d7fa74dc4aad19bc | diff --git a/vispy/visuals/_scalable_textures.py b/vispy/visuals/_scalable_textures.py
--- a/vispy/visuals/_scalable_textures.py
+++ b/vispy/visuals/_scalable_textures.py
@@ -36,8 +36,11 @@ def get_default_clim_from_data(data):
max_finite = np.isfinite(max_value)
if not (min_finite and max_finite):
finite_data = data[np.isfinite(data)]
- min_value = finite_data.min()
- max_value = finite_data.max()
+ if finite_data.size:
+ min_value = finite_data.min()
+ max_value = finite_data.max()
+ else:
+ min_value = max_value = 0 # no finite values in the data
return min_value, max_value
| diff --git a/vispy/visuals/tests/test_scalable_textures.py b/vispy/visuals/tests/test_scalable_textures.py
--- a/vispy/visuals/tests/test_scalable_textures.py
+++ b/vispy/visuals/tests/test_scalable_textures.py
@@ -77,6 +77,18 @@ def test_default_clim_non_finite():
clim = get_default_clim_from_data(data)
assert clim == (5, 25)
+ data = np.array([np.nan, np.nan, np.nan]).astype(np.float32)
+ clim = get_default_clim_from_dtype(data.dtype)
+ assert clim == (0, 1)
+ clim = get_default_clim_from_data(data)
+ assert clim == (0, 0)
+
+ data = np.array([np.nan, np.inf, -np.inf]).astype(np.float32)
+ clim = get_default_clim_from_dtype(data.dtype)
+ assert clim == (0, 1)
+ clim = get_default_clim_from_data(data)
+ assert clim == (0, 0)
+
def test_clim_handling_cpu():
| ImageVisual fails if data is all NaN
In my application I need to temporarily fill the texture (ImageVisual) with invalid data. I'd like it to show up as transparent, which I think I can handle with a custom shader, but the NaN/Inf handling in the new scalable textures does not allow this for `clim='auto'` cases. I end up getting:
```
ValueError: zero-size array to reduction operation minimum which has no identity
```
The reason is pretty obvious when looking at the code:
https://github.com/vispy/vispy/blob/d313742a00da77deaa8e1696d7fa74dc4aad19bc/vispy/visuals/_scalable_textures.py#L38-L40
If none of the data is finite then `data[np.isfinite(data)]` returns an empty array and the min/max fail after. I think a better behavior may be to check for a `0` size and return `0, inf`? Or maybe `-inf, inf`?
 | These `NaN` values continue to haunt us. I guess that's what they do ...
Yeah...it's just new bugs with new features. What do you think about the returned clims for all-NaN/all-Inf data? I guess it could be `-inf, inf` or maybe even `nan, nan`...maybe.
It does not even matter much, does it, since all NaNs will be discarded or snap to the lower clim anyway. Maybe we should aim for the clim values to be finite, so perhaps `(0, 0)`?
Yeah I suppose that fits with the general idea of what you would get if you did min/max on data that was all the same. Leaves it up to the specific texture subclass on how it wants to handle the equal clims. I'll try making a PR tonight or tomorrow. | 2021-07-01T20:54:51 |
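A minimal standalone sketch of the behavior the patch above implements (condensed for illustration, not vispy's exact function): with no finite values left, the reduction has nothing to work on, so the default clim falls back to `(0, 0)`.
```python
import numpy as np

def default_clim_from_data(data):
    # Guard the min/max of the finite subset, as in the patch above.
    finite_data = data[np.isfinite(data)]
    if finite_data.size:
        return finite_data.min(), finite_data.max()
    return 0, 0  # no finite values in the data

print(default_clim_from_data(np.array([10.0, np.nan, 25.0])))       # (10.0, 25.0)
print(default_clim_from_data(np.array([np.nan, np.inf, -np.inf])))  # (0, 0)
# Without the size check, finite_data.min() raises:
# ValueError: zero-size array to reduction operation minimum which has no identity
```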
vispy/vispy | 2,135 | vispy__vispy-2135 | [
"2132"
] | d70e042118f538c03629253012e8e13339b69881 | diff --git a/vispy/visuals/graphs/layouts/networkx_layout.py b/vispy/visuals/graphs/layouts/networkx_layout.py
--- a/vispy/visuals/graphs/layouts/networkx_layout.py
+++ b/vispy/visuals/graphs/layouts/networkx_layout.py
@@ -6,9 +6,6 @@
try:
import networkx as nx
except ModuleNotFoundError:
- import warnings
- warnings.warn(
- "Networkx not found, please install network to use its layouts")
nx = None
@@ -27,6 +24,8 @@ def __init__(self, graph=None, layout=None, **kwargs):
kwargs: dict, optional
when layout is :str: :kwargs: will act as a setting dictionary for the layout function of networkx
"""
+ if nx is None:
+ raise ValueError("networkx not found, please install networkx to use its layouts")
if isinstance(graph, type(None)):
raise ValueError("Requires networkx input")
self.graph = graph
@@ -37,17 +36,14 @@ def __init__(self, graph=None, layout=None, **kwargs):
# check for networkx
elif isinstance(layout, str):
- if nx:
- if not layout.endswith("_layout"):
- layout += "_layout" # append for nx
- layout_function = getattr(nx, layout)
- if layout_function:
- self.positions = np.asarray(
- [i for i in dict(layout_function(graph, **kwargs)).values()])
- else:
- raise ValueError("Check networkx for layouts")
+ if not layout.endswith("_layout"):
+ layout += "_layout" # append for nx
+ layout_function = getattr(nx, layout)
+ if layout_function:
+ self.positions = np.asarray(
+ [i for i in dict(layout_function(graph, **kwargs)).values()])
else:
- raise ValueError("networkx not found")
+ raise ValueError("Check networkx for layouts")
# assume dict from networkx; values are 2-array
elif isinstance(layout, dict):
self.positions = np.asarray([i for i in layout.values()])
| Remove warning on import about missing networkx dependency
Currently, importing vispy (or some submodule) without networkx installed results in a warning about installing networkx, even if the user/downstream library has no intention of using the graph layout. The warning should be delayed and turned into an error, as noted by @djhoese [here](https://github.com/napari/napari/issues/2979#issuecomment-874159877).
xref napari/napari#2979
| 2021-07-08T13:50:29 |
||
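The deferred failure described above can be sketched in a few lines: keep the optional import silent when the module loads and raise only when a networkx layout is actually requested. The class name and message here are illustrative, not vispy's exact code.
```python
try:
    import networkx as nx
except ModuleNotFoundError:
    nx = None  # stay quiet at import time

class NetworkxLayout:  # illustrative stand-in for the real layout class
    def __init__(self, graph=None, layout=None, **kwargs):
        if nx is None:  # fail loudly only when the feature is used
            raise ValueError(
                "networkx not found, please install networkx to use its layouts")
        self.graph = graph
```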
vispy/vispy | 2,140 | vispy__vispy-2140 | [
"2139"
] | ea5a740d020aa7f8179becd30ed3383c575e1783 | diff --git a/vispy/visuals/_scalable_textures.py b/vispy/visuals/_scalable_textures.py
--- a/vispy/visuals/_scalable_textures.py
+++ b/vispy/visuals/_scalable_textures.py
@@ -114,7 +114,19 @@ def set_clim(self, clim):
@property
def clim_normalized(self):
"""Normalize current clims to match texture data inside the shader.
+
+ If data is scaled on the CPU then the texture data will be in the range
+ 0-1 in the _build_texture() method. Inside the fragment shader the
+ final contrast adjustment will be applied based on this normalized
+ ``clim``.
+
"""
+ if isinstance(self.clim, str) and self.clim == "auto":
+ raise RuntimeError("Can't return 'auto' normalized color limits "
+ "until data has been set. Call "
+ "'scale_and_set_data' first.")
+ if self.clim[0] == self.clim[1]:
+ return self.clim[0], np.inf
# if the internalformat of the texture is normalized we need to
# also normalize the clims so they match in-shader
clim_min = self.normalize_value(self.clim[0], self._data_dtype)
@@ -260,6 +272,11 @@ def clim_normalized(self):
``clim``.
"""
+ if isinstance(self.clim, str) and self.clim == "auto":
+ raise RuntimeError("Can't return 'auto' normalized color limits "
+ "until data has been set. Call "
+ "'scale_and_set_data' first.")
+
range_min, range_max = self._data_limits
clim_min, clim_max = self.clim
if clim_min == clim_max:
@@ -349,25 +366,6 @@ class GPUScaledTextureMixin(_ScaledTextureMixin):
# instance variable that will be used later on
_auto_texture_format = False
- @property
- def clim_normalized(self):
- """Normalize current clims to match texture data inside the shader.
-
- If data is scaled on the CPU then the texture data will be in the range
- 0-1 in the _build_texture() method. Inside the fragment shader the
- final contrast adjustment will be applied based on this normalized
- ``clim``.
-
- """
- if self.clim[0] == self.clim[1]:
- return self.clim[0], np.inf
-
- # if the internalformat of the texture is normalized we need to
- # also normalize the clims so they match in-shader
- clim_min = self.normalize_value(self.clim[0], self._data_dtype)
- clim_max = self.normalize_value(self.clim[1], self._data_dtype)
- return clim_min, clim_max
-
def _handle_auto_texture_format(self, texture_format, data):
if isinstance(texture_format, str) and texture_format == 'auto':
if data is None:
diff --git a/vispy/visuals/image.py b/vispy/visuals/image.py
--- a/vispy/visuals/image.py
+++ b/vispy/visuals/image.py
@@ -348,11 +348,21 @@ def clim(self):
def clim(self, clim):
if self._texture.set_clim(clim):
self._need_texture_upload = True
- # shortcut so we don't have to rebuild the whole color transform
- if not self._need_colortransform_update:
- self.shared_program.frag['color_transform'][1]['clim'] = self._texture.clim_normalized
+ self._update_colortransform_clim()
self.update()
+ def _update_colortransform_clim(self):
+ if self._need_colortransform_update:
+ # we are going to rebuild anyway so just do it later
+ return
+ try:
+ norm_clims = self._texture.clim_normalized
+ except RuntimeError:
+ return
+ else:
+ # shortcut so we don't have to rebuild the whole color transform
+ self.shared_program.frag['color_transform'][1]['clim'] = norm_clims
+
@property
def cmap(self):
"""Get the colormap object applied to luminance (single band) data."""
| diff --git a/vispy/visuals/tests/test_image.py b/vispy/visuals/tests/test_image.py
--- a/vispy/visuals/tests/test_image.py
+++ b/vispy/visuals/tests/test_image.py
@@ -26,6 +26,11 @@ def test_image(is_3d):
assert_image_approved(c.render(), "visuals/image%s.png" %
("_rgb" if is_3d else "_mono"))
+ # change to auto clims after first draw
+ image.clim = "auto"
+ assert_image_approved(c.render(), "visuals/image%s.png" %
+ ("_rgb" if is_3d else "_mono"))
+
@requires_application()
@pytest.mark.parametrize('gamma', [None, -0.5, "0.5"])
diff --git a/vispy/visuals/tests/test_scalable_textures.py b/vispy/visuals/tests/test_scalable_textures.py
--- a/vispy/visuals/tests/test_scalable_textures.py
+++ b/vispy/visuals/tests/test_scalable_textures.py
@@ -1,4 +1,5 @@
import numpy as np
+import pytest
from vispy.testing import run_tests_if_main
@@ -91,7 +92,6 @@ def test_default_clim_non_finite():
def test_clim_handling_cpu():
-
ref_data = np.array([[10, 10, 5], [15, 25, 15]])
# f32 - auto clim
@@ -124,6 +124,15 @@ def test_clim_handling_cpu():
assert st.clim_normalized == (0, np.inf)
# assert np.min(st._data) == 0 - does not matter
+ # f32 - auto clim
+ st = CPUScaledStub()
+ st.set_clim("auto")
+ assert st.clim == "auto"
+ pytest.raises(RuntimeError, getattr, st, "clim_normalized")
+ st.scale_and_set_data(ref_data.astype(np.float32))
+ assert st.clim == (5, 25)
+ assert st.clim_normalized == (0, 1)
+
def test_clim_handling_gpu():
ref_data = np.array([[10, 10, 5], [15, 25, 15]])
@@ -158,5 +167,14 @@ def test_clim_handling_gpu():
assert st.clim_normalized == (10, np.inf)
# assert np.min(st._data) == 0 - does not matter
+ # f32 - auto clim
+ st = GPUScaledStub()
+ st.set_clim("auto")
+ assert st.clim == "auto"
+ pytest.raises(RuntimeError, getattr, st, "clim_normalized")
+ st.scale_and_set_data(ref_data.astype(np.float32))
+ assert st.clim == (5.0, 25.0)
+ assert st.clim_normalized == (5.0, 25.0)
+
run_tests_if_main()
 | Fix scalable textures using clim_normalized when "auto" limits are used
If you create an ImageVisual with clims and then set clims to "auto" later, the ImageVisual fails. Here's an updated image test showing the use case
```python
@requires_application()
@pytest.mark.parametrize('is_3d', [True, False])
def test_image_auto_clim(is_3d):
"""Test image visual"""
size = (100, 50)
with TestingCanvas(size=size, bgcolor='w') as c:
image = Image(cmap='grays', clim=[0, 1], parent=c.scene)
shape = (size[1]-10, size[0]-10) + ((3,) if is_3d else ())
np.random.seed(379823)
data = np.random.rand(*shape)
image.set_data(data)
assert_image_approved(c.render(), "visuals/image%s.png" %
("_rgb" if is_3d else "_mono"))
# change to auto clims later
image.clim = "auto"
assert_image_approved(c.render(), "visuals/image%s.png" %
("_rgb" if is_3d else "_mono"))
```
Here's the code in the ImageVisual that fails:
```python
@clim.setter
def clim(self, clim):
if self._texture.set_clim(clim):
self._need_texture_upload = True
# shortcut so we don't have to rebuild the whole color transform
if not self._need_colortransform_update:
self.shared_program.frag['color_transform'][1]['clim'] = self._texture.clim_normalized
self.update()
```
So because no other color transform changes have been done, we take the shortcut to assign `clim_normalized`. However, the data hasn't actually been set yet so auto clims haven't been calculated. So `clim_normalized` fails when it tries to split the clims into two elements (min, max).
I'm working on a PR with this updated test. My guess is that the easiest fix is to just update this colortransform if statement with an "auto" check...or maybe try/except with a new exception in clim_normalized. I'll make the PR...
| 2021-07-13T20:19:26 |
|
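A self-contained toy version of the control flow introduced above: `clim_normalized` refuses to answer while the limits are still `"auto"`, and the caller catches that and waits for data. The class is an illustrative stand-in, not vispy's texture.
```python
class ScaledTextureStub:  # illustrative, not vispy's class
    def __init__(self):
        self.clim = "auto"

    @property
    def clim_normalized(self):
        if isinstance(self.clim, str) and self.clim == "auto":
            raise RuntimeError("Can't return 'auto' normalized color limits "
                               "until data has been set.")
        return self.clim

tex = ScaledTextureStub()
try:
    tex.clim_normalized           # too early: no data has been scaled yet
except RuntimeError:
    pass                          # the visual skips the shader shortcut and waits
tex.clim = (5.0, 25.0)            # what scale_and_set_data() would compute
print(tex.clim_normalized)        # (5.0, 25.0)
```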
vispy/vispy | 2,144 | vispy__vispy-2144 | [
"2143"
] | 7e0301025c2a8206c890887a7dcdf8a4cdbcd778 | diff --git a/vispy/visuals/filters/color.py b/vispy/visuals/filters/color.py
--- a/vispy/visuals/filters/color.py
+++ b/vispy/visuals/filters/color.py
@@ -60,8 +60,8 @@ class IsolineFilter(Filter):
}
"""
- def __init__(self, level=2., width=2.0, antialias=1.0, color='black'):
- super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER)
+ def __init__(self, level=2., width=2.0, antialias=1.0, color='black', **kwargs):
+ super(IsolineFilter, self).__init__(fcode=self.FRAG_SHADER, **kwargs)
self.level = level
self.width = width
@@ -114,8 +114,8 @@ class Alpha(Filter):
}
"""
- def __init__(self, alpha=1.0):
- super(Alpha, self).__init__(fcode=self.FRAG_SHADER)
+ def __init__(self, alpha=1.0, **kwargs):
+ super(Alpha, self).__init__(fcode=self.FRAG_SHADER, **kwargs)
self.alpha = alpha
@@ -136,8 +136,8 @@ class ColorFilter(Filter):
}
"""
- def __init__(self, filter=(1., 1., 1., 1.)):
- super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)
+ def __init__(self, filter=(1., 1., 1., 1.), fpos=8, **kwargs):
+ super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos, **kwargs)
self.filter = filter
@@ -164,9 +164,9 @@ class ZColormapFilter(Filter):
}
"""
- def __init__(self, cmap, zrange=(0., 1.)):
- super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=3,
- vcode=self.VERT_SHADER, vpos=9)
+ def __init__(self, cmap, zrange=(0., 1.), fpos=3, vpos=9, **kwargs):
+ super(ZColormapFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=fpos,
+ vcode=self.VERT_SHADER, vpos=vpos, **kwargs)
if isinstance(cmap, str):
cmap = colormap.get_colormap(cmap)
| Add ability to pass "fpos" as a parameter to the ColorFilter
Hi all,
I am currently trying to use the ```ColorFilter``` (https://github.com/vispy/vispy/blob/main/vispy/visuals/filters/color.py) in a project along with several other filters, which need to be placed in a specific order. However, right now, ```fpos``` cannot be passed as a parameter to ```ColorFilter```, which always uses 8:
```
def __init__(self, filter=(1., 1., 1., 1.)):
super(ColorFilter, self).__init__(fcode=self.FRAG_SHADER, fpos=8)
self.filter = filter
```
Is it possible to change this so the user can specify any position for this filter?
Thanks so much,
Clare
| > Is it possible to change this so the user can specify any position for this filter?
Absolutely! I think this is something people never really needed to do and now that we've had more structure added to filters this should be much easier to standardize as a keyword argument. It seems like almost all the filters (at least the ones in `color.py`, if not more) either use the default position of the base class or have the position hard coded.
Would you mind taking a shot at a pull request to fix one or more of these? You could make the keyword arguments and keep the defaults as they are now. | 2021-07-16T17:44:26 |
|
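With the keyword arguments added in the patch above, the hook position becomes a per-instance choice. A small usage sketch; attaching to a visual is left commented out, and the positions are arbitrary examples, not recommendations.
```python
from vispy.visuals.filters import Alpha, ColorFilter

tint = ColorFilter(filter=(1.0, 0.8, 0.8, 1.0), fpos=5)  # run earlier in the fragment hook
fade = Alpha(alpha=0.5)                                   # keeps its default position
# some_visual.attach(tint)
# some_visual.attach(fade)
```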
vispy/vispy | 2,179 | vispy__vispy-2179 | [
"2178"
] | 402bb8e7dccab94b879d26ad3a52e1769aef7ed4 | diff --git a/examples/scene/volume_plane.py b/examples/scene/volume_plane.py
--- a/examples/scene/volume_plane.py
+++ b/examples/scene/volume_plane.py
@@ -13,6 +13,7 @@
* 2 - toggle between volume rendering modes ('volume', 'plane')
* [] - shift plane along plane normal
* {} - decrease/increase plane thickness
+* Spacebar - stop/start animation
* x/y/z/o - set plane normal along x/y/z or [1,1,1] oblique axis
"""
@@ -113,7 +114,6 @@ def on_key_press(event):
elif event.text == ']':
plane.plane_position += 2 * shift
print(f"plane position: {plane.plane_position}")
-
elif event.text == 'x':
plane.plane_normal = [0, 0, 1]
elif event.text == 'y':
@@ -122,6 +122,11 @@ def on_key_press(event):
plane.plane_normal = [1, 0, 0]
elif event.text == 'o':
plane.plane_normal = [1, 1, 1]
+ elif event.text == ' ':
+ if timer.running:
+ timer.stop()
+ else:
+ timer.start()
def move_plane(event):
| Update volume_plane.py to allow timer to be stopped
I noticed the volume_plane.py has keyboard handling for moving the plane through the volume, but it doesn't allow you to stop the animation so these events don't actually mean much.
@alisterburt do you have time to fix this? Tradition is to use the spacebar (`" "`) to toggle the timer start/stop
 | oh yeah! Thanks for pointing that out - I forgot to integrate the animated example with the keybindings, will try to get that sorted this evening
Thanks @alisterburt. So is this something you think you'll have time for today? If not, I can try to find the time and then make a 0.8.0 release with this fix. However, I have a kid staying home sick today so not sure I'll have a bunch of dedicated time. Let me know your availability. Thanks.
Will have a go after lunch - sorry to hear about the illness, hope they get better soon! 🙂 | 2021-08-20T11:38:47 |
|
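The toggle itself is tiny; a stripped-down sketch of the pattern the example now uses (volume drawing omitted, only the timer and key handling kept):
```python
from vispy import app

canvas = app.Canvas(keys='interactive')
timer = app.Timer('auto', start=True)

@canvas.events.key_press.connect
def on_key_press(event):
    if event.text == ' ':      # spacebar: stop/start the animation
        if timer.running:
            timer.stop()
        else:
            timer.start()
```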
vispy/vispy | 2,200 | vispy__vispy-2200 | [
"2199"
] | 2deb1bfe05c55d5c58f4512a601c1ef8a95a6d90 | diff --git a/vispy/app/backends/_qt.py b/vispy/app/backends/_qt.py
--- a/vispy/app/backends/_qt.py
+++ b/vispy/app/backends/_qt.py
@@ -415,9 +415,12 @@ def __init__(self, vispy_canvas, **kwargs):
# problems on Ubuntu computers with touchscreen.
# See https://github.com/vispy/vispy/pull/1143
if sys.platform == 'darwin':
- qt_widget_attributes = QtCore.Qt.WidgetAttribute if PYQT6_API else QtCore.Qt
- self.setAttribute(qt_widget_attributes.WA_AcceptTouchEvents)
- self.grabGesture(qt_widget_attributes.PinchGesture)
+ if PYQT6_API:
+ self.setAttribute(QtCore.Qt.WidgetAttribute.WA_AcceptTouchEvents)
+ self.grabGesture(QtCore.Qt.GestureType.PinchGesture)
+ else:
+ self.setAttribute(QtCore.Qt.WA_AcceptTouchEvents)
+ self.grabGesture(QtCore.Qt.PinchGesture)
def screen_changed(self, new_screen):
"""Window moved from one display to another, resize canvas.
| PinchGesture attribute error in PyQt6
got this error using PyQt6:
```pytb
File "/Users/talley/miniconda3/envs/napdev/lib/python3.9/site-packages/vispy/app/canvas.py", line 231, in create_native
self._app.backend_module.CanvasBackend(self, **self._backend_kwargs)
File "/Users/talley/miniconda3/envs/napdev/lib/python3.9/site-packages/vispy/app/backends/_qt.py", line 420, in __init__
self.grabGesture(qt_widget_attributes.PinchGesture)
File "/Users/talley/miniconda3/envs/napdev/lib/python3.9/enum.py", line 405, in __getattr__
raise AttributeError(name) from None
AttributeError: PinchGesture
```
I believe PinchGesture is under the `GestureType` namespace (`QtCore.Qt.GestureType.PinchGesture`)
instead of `QtCore.Qt.WidgetAttribute`
| 2021-08-26T21:35:32 |
||
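For reference, the underlying difference: PyQt6 exposes these values only through scoped enum namespaces, while PyQt5 also flattens them onto `QtCore.Qt`. The sketch assumes the respective binding is installed.
```python
# PyQt6: scoped enums only
from PyQt6 import QtCore
QtCore.Qt.GestureType.PinchGesture
QtCore.Qt.WidgetAttribute.WA_AcceptTouchEvents

# PyQt5: the same values sit directly on QtCore.Qt
# from PyQt5 import QtCore
# QtCore.Qt.PinchGesture
# QtCore.Qt.WA_AcceptTouchEvents
```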
vispy/vispy | 2,202 | vispy__vispy-2202 | [
"2201"
] | e2bdc37457c8c5709218a7c9b2fbe8e218bd26a7 | diff --git a/vispy/app/backends/_qt.py b/vispy/app/backends/_qt.py
--- a/vispy/app/backends/_qt.py
+++ b/vispy/app/backends/_qt.py
@@ -574,7 +574,8 @@ def event(self, ev):
if t == qt_event_types.TouchEnd:
self._vispy_canvas.events.touch(type='end')
if t == qt_event_types.Gesture:
- gesture = ev.gesture(qt_event_types.PinchGesture)
+ pinch_gesture = QtCore.Qt.GestureType.PinchGesture if PYQT6_API else QtCore.Qt.PinchGesture
+ gesture = ev.gesture(pinch_gesture)
if gesture:
(x, y) = _get_qpoint_pos(gesture.centerPoint())
scale = gesture.scaleFactor()
| Pinch Gesture error with PyQt5
When I make a pinch gesture in the canvas, I get the error below.
```python
WARNING: Traceback (most recent call last):
File "/Users/kyamauch/Documents/vispy/vispy/app/backends/_qt.py", line 577, in event
gesture = ev.gesture(qt_event_types.PinchGesture)
AttributeError: type object 'QEvent' has no attribute 'PinchGesture'
```
## To reproduce
1. Run the `examples/scene/volume.py`
2. Make a pinch gesture
## environment
- I pulled the latest vispy main ( e2bdc37457c8c5709218a7c9b2fbe8e218bd26a7).
- On Mac OS 10.15.7 (Catalina)
- PyQt 5.15.2
## related issues
- fix by @tlambert03 for PyQt6. I don't think this caused the issue since I experienced it before pulling Talley's patch, but maybe there are some useful ideas there. https://github.com/vispy/vispy/pull/2200
- related napari issue: https://github.com/napari/napari/issues/3263
| I tested this on `0.7.0` and the issue doesn't seem to be present. | 2021-08-27T13:01:51 |
|
vispy/vispy | 2,215 | vispy__vispy-2215 | [
"2214"
] | c7ae76642996e8c223e8247e300f999715b3df5c | diff --git a/vispy/app/backends/_glfw.py b/vispy/app/backends/_glfw.py
--- a/vispy/app/backends/_glfw.py
+++ b/vispy/app/backends/_glfw.py
@@ -13,8 +13,7 @@
import gc
import os
-from ..base import (BaseApplicationBackend, BaseCanvasBackend,
- BaseTimerBackend)
+from ..base import BaseApplicationBackend, BaseCanvasBackend, BaseTimerBackend
from ...util import keys, logger
from ...util.ptime import time
from ... import config
@@ -24,60 +23,15 @@
# -------------------------------------------------------------------- init ---
+glfw = None
try:
- from ...ext import glfw
-
- # Map native keys to vispy keys
- KEYMAP = {
- glfw.GLFW_KEY_LEFT_SHIFT: keys.SHIFT,
- glfw.GLFW_KEY_RIGHT_SHIFT: keys.SHIFT,
- glfw.GLFW_KEY_LEFT_CONTROL: keys.CONTROL,
- glfw.GLFW_KEY_RIGHT_CONTROL: keys.CONTROL,
- glfw.GLFW_KEY_LEFT_ALT: keys.ALT,
- glfw.GLFW_KEY_RIGHT_ALT: keys.ALT,
- glfw.GLFW_KEY_LEFT_SUPER: keys.META,
- glfw.GLFW_KEY_RIGHT_SUPER: keys.META,
-
- glfw.GLFW_KEY_LEFT: keys.LEFT,
- glfw.GLFW_KEY_UP: keys.UP,
- glfw.GLFW_KEY_RIGHT: keys.RIGHT,
- glfw.GLFW_KEY_DOWN: keys.DOWN,
- glfw.GLFW_KEY_PAGE_UP: keys.PAGEUP,
- glfw.GLFW_KEY_PAGE_DOWN: keys.PAGEDOWN,
-
- glfw.GLFW_KEY_INSERT: keys.INSERT,
- glfw.GLFW_KEY_DELETE: keys.DELETE,
- glfw.GLFW_KEY_HOME: keys.HOME,
- glfw.GLFW_KEY_END: keys.END,
-
- glfw.GLFW_KEY_ESCAPE: keys.ESCAPE,
- glfw.GLFW_KEY_BACKSPACE: keys.BACKSPACE,
-
- glfw.GLFW_KEY_F1: keys.F1,
- glfw.GLFW_KEY_F2: keys.F2,
- glfw.GLFW_KEY_F3: keys.F3,
- glfw.GLFW_KEY_F4: keys.F4,
- glfw.GLFW_KEY_F5: keys.F5,
- glfw.GLFW_KEY_F6: keys.F6,
- glfw.GLFW_KEY_F7: keys.F7,
- glfw.GLFW_KEY_F8: keys.F8,
- glfw.GLFW_KEY_F9: keys.F9,
- glfw.GLFW_KEY_F10: keys.F10,
- glfw.GLFW_KEY_F11: keys.F11,
- glfw.GLFW_KEY_F12: keys.F12,
-
- glfw.GLFW_KEY_SPACE: keys.SPACE,
- glfw.GLFW_KEY_ENTER: keys.ENTER,
- '\r': keys.ENTER,
- glfw.GLFW_KEY_TAB: keys.TAB,
- }
-
- BUTTONMAP = {glfw.GLFW_MOUSE_BUTTON_LEFT: 1,
- glfw.GLFW_MOUSE_BUTTON_RIGHT: 2,
- glfw.GLFW_MOUSE_BUTTON_MIDDLE: 3
- }
-except Exception as exp:
- available, testable, why_not, which = False, False, str(exp), None
+ import glfw
+except ImportError:
+ why_not = "Could not import glwf, you may need to `pip install glfw` first."
+ available, testable, why_not, which = False, False, why_not, None
+except Exception as err:
+ why_not = "Error importing glfw: " + str(err)
+ available, testable, why_not, which = False, False, why_not, None
else:
if USE_EGL:
available, testable, why_not = False, False, 'EGL not supported'
@@ -86,6 +40,57 @@
available, testable, why_not = True, True, None
which = 'glfw ' + str(glfw.__version__)
+if glfw:
+ # Map native keys to vispy keys
+ KEYMAP = {
+ glfw.KEY_LEFT_SHIFT: keys.SHIFT,
+ glfw.KEY_RIGHT_SHIFT: keys.SHIFT,
+ glfw.KEY_LEFT_CONTROL: keys.CONTROL,
+ glfw.KEY_RIGHT_CONTROL: keys.CONTROL,
+ glfw.KEY_LEFT_ALT: keys.ALT,
+ glfw.KEY_RIGHT_ALT: keys.ALT,
+ glfw.KEY_LEFT_SUPER: keys.META,
+ glfw.KEY_RIGHT_SUPER: keys.META,
+
+ glfw.KEY_LEFT: keys.LEFT,
+ glfw.KEY_UP: keys.UP,
+ glfw.KEY_RIGHT: keys.RIGHT,
+ glfw.KEY_DOWN: keys.DOWN,
+ glfw.KEY_PAGE_UP: keys.PAGEUP,
+ glfw.KEY_PAGE_DOWN: keys.PAGEDOWN,
+
+ glfw.KEY_INSERT: keys.INSERT,
+ glfw.KEY_DELETE: keys.DELETE,
+ glfw.KEY_HOME: keys.HOME,
+ glfw.KEY_END: keys.END,
+
+ glfw.KEY_ESCAPE: keys.ESCAPE,
+ glfw.KEY_BACKSPACE: keys.BACKSPACE,
+
+ glfw.KEY_F1: keys.F1,
+ glfw.KEY_F2: keys.F2,
+ glfw.KEY_F3: keys.F3,
+ glfw.KEY_F4: keys.F4,
+ glfw.KEY_F5: keys.F5,
+ glfw.KEY_F6: keys.F6,
+ glfw.KEY_F7: keys.F7,
+ glfw.KEY_F8: keys.F8,
+ glfw.KEY_F9: keys.F9,
+ glfw.KEY_F10: keys.F10,
+ glfw.KEY_F11: keys.F11,
+ glfw.KEY_F12: keys.F12,
+
+ glfw.KEY_SPACE: keys.SPACE,
+ glfw.KEY_ENTER: keys.ENTER,
+ '\r': keys.ENTER,
+ glfw.KEY_TAB: keys.TAB,
+ }
+
+ BUTTONMAP = {glfw.MOUSE_BUTTON_LEFT: 1,
+ glfw.MOUSE_BUTTON_RIGHT: 2,
+ glfw.MOUSE_BUTTON_MIDDLE: 3
+ }
+
MOD_KEYS = [keys.SHIFT, keys.ALT, keys.CONTROL, keys.META]
_GLFW_INITIALIZED = False
_VP_GLFW_ALL_WINDOWS = []
@@ -122,23 +127,23 @@ def _get_glfw_windows():
def _set_config(c):
"""Set gl configuration for GLFW."""
- glfw.glfwWindowHint(glfw.GLFW_RED_BITS, c['red_size'])
- glfw.glfwWindowHint(glfw.GLFW_GREEN_BITS, c['green_size'])
- glfw.glfwWindowHint(glfw.GLFW_BLUE_BITS, c['blue_size'])
- glfw.glfwWindowHint(glfw.GLFW_ALPHA_BITS, c['alpha_size'])
-
- glfw.glfwWindowHint(glfw.GLFW_ACCUM_RED_BITS, 0)
- glfw.glfwWindowHint(glfw.GLFW_ACCUM_GREEN_BITS, 0)
- glfw.glfwWindowHint(glfw.GLFW_ACCUM_BLUE_BITS, 0)
- glfw.glfwWindowHint(glfw.GLFW_ACCUM_ALPHA_BITS, 0)
-
- glfw.glfwWindowHint(glfw.GLFW_DEPTH_BITS, c['depth_size'])
- glfw.glfwWindowHint(glfw.GLFW_STENCIL_BITS, c['stencil_size'])
- # glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MAJOR, c['major_version'])
- # glfw.glfwWindowHint(glfw.GLFW_CONTEXT_VERSION_MINOR, c['minor_version'])
- # glfw.glfwWindowHint(glfw.GLFW_SRGB_CAPABLE, c['srgb'])
- glfw.glfwWindowHint(glfw.GLFW_SAMPLES, c['samples'])
- glfw.glfwWindowHint(glfw.GLFW_STEREO, c['stereo'])
+ glfw.window_hint(glfw.RED_BITS, c['red_size'])
+ glfw.window_hint(glfw.GREEN_BITS, c['green_size'])
+ glfw.window_hint(glfw.BLUE_BITS, c['blue_size'])
+ glfw.window_hint(glfw.ALPHA_BITS, c['alpha_size'])
+
+ glfw.window_hint(glfw.ACCUM_RED_BITS, 0)
+ glfw.window_hint(glfw.ACCUM_GREEN_BITS, 0)
+ glfw.window_hint(glfw.ACCUM_BLUE_BITS, 0)
+ glfw.window_hint(glfw.ACCUM_ALPHA_BITS, 0)
+
+ glfw.window_hint(glfw.DEPTH_BITS, c['depth_size'])
+ glfw.window_hint(glfw.STENCIL_BITS, c['stencil_size'])
+ # glfw.window_hint(glfw.CONTEXT_VERSION_MAJOR, c['major_version'])
+ # glfw.window_hint(glfw.CONTEXT_VERSION_MINOR, c['minor_version'])
+ # glfw.window_hint(glfw.SRGB_CAPABLE, c['srgb'])
+ glfw.window_hint(glfw.SAMPLES, c['samples'])
+ glfw.window_hint(glfw.STEREO, c['stereo'])
if not c['double_buffer']:
raise RuntimeError('GLFW must double buffer, consider using a '
'different backend, or using double buffering')
@@ -168,7 +173,7 @@ def _vispy_get_backend_name(self):
return 'Glfw'
def _vispy_process_events(self):
- glfw.glfwPollEvents()
+ glfw.poll_events()
for timer in self._timers:
timer._tick()
wins = _get_glfw_windows()
@@ -179,7 +184,7 @@ def _vispy_process_events(self):
def _vispy_run(self):
wins = _get_glfw_windows()
- while any(w._id is not None and not glfw.glfwWindowShouldClose(w._id)
+ while any(w._id is not None and not glfw.window_should_close(w._id)
for w in wins):
self._vispy_process_events()
self._vispy_quit() # to clean up
@@ -199,14 +204,14 @@ def _vispy_get_native_app(self):
global _GLFW_INITIALIZED
if not _GLFW_INITIALIZED:
cwd = os.getcwd()
- glfw.glfwSetErrorCallback(_error_callback)
+ glfw.set_error_callback(_error_callback)
try:
- if not glfw.glfwInit(): # only ever call once
+ if not glfw.init(): # only ever call once
raise OSError('Could not init glfw:\n%r' % _glfw_errors)
finally:
os.chdir(cwd)
- glfw.glfwSetErrorCallback(0)
- atexit.register(glfw.glfwTerminate)
+ glfw.set_error_callback(None)
+ atexit.register(glfw.terminate)
_GLFW_INITIALIZED = True
return glfw
@@ -230,23 +235,22 @@ def __init__(self, vispy_canvas, **kwargs):
else:
share = p.context.shared.ref._id
- glfw.glfwWindowHint(glfw.GLFW_REFRESH_RATE, 0) # highest possible
- glfw.glfwSwapInterval(1 if p.vsync else 0)
- glfw.glfwWindowHint(glfw.GLFW_RESIZABLE, int(p.resizable))
- glfw.glfwWindowHint(glfw.GLFW_DECORATED, int(p.decorate))
- glfw.glfwWindowHint(glfw.GLFW_VISIBLE, 0) # start out hidden
- glfw.glfwWindowHint(glfw.GLFW_FLOATING, int(p.always_on_top))
+ glfw.window_hint(glfw.REFRESH_RATE, 0) # highest possible
+ glfw.window_hint(glfw.RESIZABLE, int(p.resizable))
+ glfw.window_hint(glfw.DECORATED, int(p.decorate))
+ glfw.window_hint(glfw.VISIBLE, 0) # start out hidden
+ glfw.window_hint(glfw.FLOATING, int(p.always_on_top))
if p.fullscreen is not False:
self._fullscreen = True
if p.fullscreen is True:
- monitor = glfw.glfwGetPrimaryMonitor()
+ monitor = glfw.get_primary_monitor()
else:
- monitor = glfw.glfwGetMonitors()
+ monitor = glfw.get_monitors()
if p.fullscreen >= len(monitor):
raise ValueError('fullscreen must be <= %s'
% len(monitor))
monitor = monitor[p.fullscreen]
- use_size = glfw.glfwGetVideoMode(monitor)[:2]
+ use_size = glfw.get_video_mode(monitor)[:2]
if use_size != tuple(p.size):
logger.debug('Requested size %s, will be ignored to '
'use fullscreen mode %s' % (p.size, use_size))
@@ -256,31 +260,34 @@ def __init__(self, vispy_canvas, **kwargs):
monitor = None
size = p.size
- self._id = glfw.glfwCreateWindow(width=size[0], height=size[1],
- title=p.title, monitor=monitor,
- share=share)
+ self._id = glfw.create_window(width=size[0], height=size[1],
+ title=p.title, monitor=monitor,
+ share=share)
if not self._id:
raise RuntimeError('Could not create window')
+ glfw.make_context_current(self._id)
+ glfw.swap_interval(1 if p.vsync else 0) # needs a valid context
+
_VP_GLFW_ALL_WINDOWS.append(self)
self._mod = list()
# Register callbacks
- glfw.glfwSetWindowRefreshCallback(self._id, self._on_draw)
- glfw.glfwSetWindowSizeCallback(self._id, self._on_resize)
- glfw.glfwSetKeyCallback(self._id, self._on_key_press)
- glfw.glfwSetCharCallback(self._id, self._on_key_char)
- glfw.glfwSetMouseButtonCallback(self._id, self._on_mouse_button)
- glfw.glfwSetScrollCallback(self._id, self._on_mouse_scroll)
- glfw.glfwSetCursorPosCallback(self._id, self._on_mouse_motion)
- glfw.glfwSetWindowCloseCallback(self._id, self._on_close)
+ glfw.set_window_refresh_callback(self._id, self._on_draw)
+ glfw.set_window_size_callback(self._id, self._on_resize)
+ glfw.set_key_callback(self._id, self._on_key_press)
+ glfw.set_char_callback(self._id, self._on_key_char)
+ glfw.set_mouse_button_callback(self._id, self._on_mouse_button)
+ glfw.set_scroll_callback(self._id, self._on_mouse_scroll)
+ glfw.set_cursor_pos_callback(self._id, self._on_mouse_motion)
+ glfw.set_window_close_callback(self._id, self._on_close)
self._vispy_canvas_ = None
self._needs_draw = False
self._vispy_canvas.set_current()
if p.position is not None:
self._vispy_set_position(*p.position)
if p.show:
- glfw.glfwShowWindow(self._id)
+ glfw.show_window(self._id)
# Init
self._initialized = True
@@ -301,42 +308,42 @@ def _vispy_set_current(self):
if self._id is None:
return
# Make this the current context
- glfw.glfwMakeContextCurrent(self._id)
+ glfw.make_context_current(self._id)
def _vispy_swap_buffers(self):
if self._id is None:
return
# Swap front and back buffer
- glfw.glfwSwapBuffers(self._id)
+ glfw.swap_buffers(self._id)
def _vispy_set_title(self, title):
if self._id is None:
return
# Set the window title. Has no effect for widgets
- glfw.glfwSetWindowTitle(self._id, title.encode('utf-8'))
+ glfw.set_window_title(self._id, title.encode('utf-8'))
def _vispy_set_size(self, w, h):
if self._id is None:
return
# Set size of the widget or window
- glfw.glfwSetWindowSize(self._id, w, h)
+ glfw.set_window_size(self._id, w, h)
def _vispy_set_position(self, x, y):
if self._id is None:
return
# Set position of the widget or window. May have no effect for widgets
- glfw.glfwSetWindowPos(self._id, x, y)
+ glfw.set_window_pos(self._id, x, y)
def _vispy_set_visible(self, visible):
# Show or hide the window or widget
if self._id is None:
return
if visible:
- glfw.glfwShowWindow(self._id)
+ glfw.show_window(self._id)
# this ensures that the show takes effect
self._vispy_update()
else:
- glfw.glfwHideWindow(self._id)
+ glfw.hide_window(self._id)
def _vispy_set_fullscreen(self, fullscreen):
logger.warn('Cannot change fullscreen mode for GLFW backend')
@@ -352,28 +359,28 @@ def _vispy_close(self):
# Force the window or widget to shut down
if self._id is not None:
self._vispy_canvas = None
- # glfw.glfwSetWindowShouldClose() # Does not really cause a close
+ # glfw.set_window_should_close() # Does not really cause a close
self._vispy_set_visible(False)
self._id, id_ = None, self._id
- glfw.glfwDestroyWindow(id_)
+ glfw.destroy_window(id_)
gc.collect() # help ensure context gets destroyed
def _vispy_get_size(self):
if self._id is None:
return
- w, h = glfw.glfwGetWindowSize(self._id)
+ w, h = glfw.get_window_size(self._id)
return w, h
def _vispy_get_physical_size(self):
if self._id is None:
return
- w, h = glfw.glfwGetFramebufferSize(self._id)
+ w, h = glfw.get_framebuffer_size(self._id)
return w, h
def _vispy_get_position(self):
if self._id is None:
return
- x, y = glfw.glfwGetWindowPos(self._id)
+ x, y = glfw.get_window_pos(self._id)
return x, y
def _vispy_get_fullscreen(self):
@@ -401,13 +408,13 @@ def _on_draw(self, _id=None):
def _on_mouse_button(self, _id, button, action, mod):
if self._vispy_canvas is None and self._id is not None:
return
- pos = glfw.glfwGetCursorPos(self._id)
+ pos = glfw.get_cursor_pos(self._id)
if button < 3:
# Mouse click event
button = BUTTONMAP.get(button, 0)
- if action == glfw.GLFW_PRESS:
+ if action == glfw.PRESS:
fun = self._vispy_mouse_press
- elif action == glfw.GLFW_RELEASE:
+ elif action == glfw.RELEASE:
fun = self._vispy_mouse_release
else:
return
@@ -416,7 +423,7 @@ def _on_mouse_button(self, _id, button, action, mod):
def _on_mouse_scroll(self, _id, x_off, y_off):
if self._vispy_canvas is None and self._id is not None:
return
- pos = glfw.glfwGetCursorPos(self._id)
+ pos = glfw.get_cursor_pos(self._id)
delta = (float(x_off), float(y_off))
self._vispy_canvas.events.mouse_wheel(pos=pos, delta=delta,
modifiers=self._mod)
@@ -430,10 +437,10 @@ def _on_key_press(self, _id, key, scancode, action, mod):
if self._vispy_canvas is None:
return
key, text = self._process_key(key)
- if action == glfw.GLFW_PRESS:
+ if action == glfw.PRESS:
fun = self._vispy_canvas.events.key_press
down = True
- elif action == glfw.GLFW_RELEASE:
+ elif action == glfw.RELEASE:
fun = self._vispy_canvas.events.key_release
down = False
else:
@@ -443,7 +450,7 @@ def _on_key_press(self, _id, key, scancode, action, mod):
# NOTE: GLFW only provides localized characters via _on_key_char, so if
# this event contains a character we store all other data and dispatch
# it once the final unicode character is sent shortly after.
- if text != '' and action == glfw.GLFW_PRESS:
+ if text != '' and action == glfw.PRESS:
self._next_key_events.append((fun, key, self._mod))
else:
if key in self._next_key_text:
diff --git a/vispy/ext/glfw.py b/vispy/ext/glfw.py
deleted file mode 100644
--- a/vispy/ext/glfw.py
+++ /dev/null
@@ -1,649 +0,0 @@
-# -*- coding: utf-8 -*-
-
-# -----------------------------------------------------------------------------
-# GLFW - An OpenGL framework
-# API version: 3.0.1
-# WWW: http://www.glfw.org/
-# ----------------------------------------------------------------------------
-# Copyright (c) 2002-2006 Marcus Geelnard
-# Copyright (c) 2006-2010 Camilla Berglund
-#
-# Python bindings - Copyright (c) 2013 Nicolas P. Rougier
-#
-# This software is provided 'as-is', without any express or implied
-# warranty. In no event will the authors be held liable for any damages
-# arising from the use of this software.
-#
-# Permission is granted to anyone to use this software for any purpose,
-# including commercial applications, and to alter it and redistribute it
-# freely, subject to the following restrictions:
-#
-# 1. The origin of this software must not be misrepresented; you must not
-# claim that you wrote the original software. If you use this software
-# in a product, an acknowledgment in the product documentation would
-# be appreciated but is not required.
-#
-# 2. Altered source versions must be plainly marked as such, and must not
-# be misrepresented as being the original software.
-#
-# 3. This notice may not be removed or altered from any source
-# distribution.
-#
-# -----------------------------------------------------------------------------
-
-# NOTE:
-# This source has been modified from its original form by the vispy dev team
-
-import os
-import ctypes.util
-from ctypes import (Structure, POINTER, CFUNCTYPE, byref, c_char_p, c_int,
- c_uint, c_double, c_float, c_ushort)
-
-
-_glfw_file = None
-
-# First if there is an environment variable pointing to the library
-if 'GLFW_LIBRARY' in os.environ:
- if os.path.exists(os.environ['GLFW_LIBRARY']):
- _glfw_file = os.path.realpath(os.environ['GLFW_LIBRARY'])
-
-# Else, try to find it
-if _glfw_file is None:
- order = ['glfw3', 'glfw']
- for check in order:
- _glfw_file = ctypes.util.find_library(check)
- if _glfw_file is not None:
- break
-
-# Else, we failed and exit
-if _glfw_file is None:
- raise OSError('GLFW library not found')
-
-# Load it
-_glfw = ctypes.CDLL(_glfw_file)
-
-
-# Ensure it's new enough
-def glfwGetVersion():
- major, minor, rev = c_int(0), c_int(0), c_int(0)
- _glfw.glfwGetVersion(byref(major), byref(minor), byref(rev))
- return major.value, minor.value, rev.value
-
-version = glfwGetVersion()
-
-if version[0] != 3:
- version = '.'.join([str(v) for v in version])
- raise OSError('Need GLFW library version 3, found version %s' % version)
-
-
-# --- Version -----------------------------------------------------------------
-GLFW_VERSION_MAJOR = version[0]
-GLFW_VERSION_MINOR = version[1]
-GLFW_VERSION_REVISION = version[2]
-__version__ = GLFW_VERSION_MAJOR, GLFW_VERSION_MINOR, GLFW_VERSION_REVISION
-
-# --- Input handling definitions ----------------------------------------------
-GLFW_RELEASE = 0
-GLFW_PRESS = 1
-GLFW_REPEAT = 2
-
-# --- Keys --------------------------------------------------------------------
-
-# --- The unknown key ---------------------------------------------------------
-GLFW_KEY_UNKNOWN = -1
-
-# --- Printable keys ----------------------------------------------------------
-GLFW_KEY_SPACE = 32
-GLFW_KEY_APOSTROPHE = 39 # ''
-GLFW_KEY_COMMA = 44 # ,
-GLFW_KEY_MINUS = 45 # -
-GLFW_KEY_PERIOD = 46 # .
-GLFW_KEY_SLASH = 47 # /
-GLFW_KEY_0 = 48
-GLFW_KEY_1 = 49
-GLFW_KEY_2 = 50
-GLFW_KEY_3 = 51
-GLFW_KEY_4 = 52
-GLFW_KEY_5 = 53
-GLFW_KEY_6 = 54
-GLFW_KEY_7 = 55
-GLFW_KEY_8 = 56
-GLFW_KEY_9 = 57
-GLFW_KEY_SEMICOLON = 59 # ;
-GLFW_KEY_EQUAL = 61 # =
-GLFW_KEY_A = 65
-GLFW_KEY_B = 66
-GLFW_KEY_C = 67
-GLFW_KEY_D = 68
-GLFW_KEY_E = 69
-GLFW_KEY_F = 70
-GLFW_KEY_G = 71
-GLFW_KEY_H = 72
-GLFW_KEY_I = 73
-GLFW_KEY_J = 74
-GLFW_KEY_K = 75
-GLFW_KEY_L = 76
-GLFW_KEY_M = 77
-GLFW_KEY_N = 78
-GLFW_KEY_O = 79
-GLFW_KEY_P = 80
-GLFW_KEY_Q = 81
-GLFW_KEY_R = 82
-GLFW_KEY_S = 83
-GLFW_KEY_T = 84
-GLFW_KEY_U = 85
-GLFW_KEY_V = 86
-GLFW_KEY_W = 87
-GLFW_KEY_X = 88
-GLFW_KEY_Y = 89
-GLFW_KEY_Z = 90
-GLFW_KEY_LEFT_BRACKET = 91 # [
-GLFW_KEY_BACKSLASH = 92 # \
-GLFW_KEY_RIGHT_BRACKET = 93 # ]
-GLFW_KEY_GRAVE_ACCENT = 96 # `
-GLFW_KEY_WORLD_1 = 161 # non-US #1
-GLFW_KEY_WORLD_2 = 162 # non-US #2
-
-# --- Function keys -----------------------------------------------------------
-GLFW_KEY_ESCAPE = 256
-GLFW_KEY_ENTER = 257
-GLFW_KEY_TAB = 258
-GLFW_KEY_BACKSPACE = 259
-GLFW_KEY_INSERT = 260
-GLFW_KEY_DELETE = 261
-GLFW_KEY_RIGHT = 262
-GLFW_KEY_LEFT = 263
-GLFW_KEY_DOWN = 264
-GLFW_KEY_UP = 265
-GLFW_KEY_PAGE_UP = 266
-GLFW_KEY_PAGE_DOWN = 267
-GLFW_KEY_HOME = 268
-GLFW_KEY_END = 269
-GLFW_KEY_CAPS_LOCK = 280
-GLFW_KEY_SCROLL_LOCK = 281
-GLFW_KEY_NUM_LOCK = 282
-GLFW_KEY_PRINT_SCREEN = 283
-GLFW_KEY_PAUSE = 284
-GLFW_KEY_F1 = 290
-GLFW_KEY_F2 = 291
-GLFW_KEY_F3 = 292
-GLFW_KEY_F4 = 293
-GLFW_KEY_F5 = 294
-GLFW_KEY_F6 = 295
-GLFW_KEY_F7 = 296
-GLFW_KEY_F8 = 297
-GLFW_KEY_F9 = 298
-GLFW_KEY_F10 = 299
-GLFW_KEY_F11 = 300
-GLFW_KEY_F12 = 301
-GLFW_KEY_F13 = 302
-GLFW_KEY_F14 = 303
-GLFW_KEY_F15 = 304
-GLFW_KEY_F16 = 305
-GLFW_KEY_F17 = 306
-GLFW_KEY_F18 = 307
-GLFW_KEY_F19 = 308
-GLFW_KEY_F20 = 309
-GLFW_KEY_F21 = 310
-GLFW_KEY_F22 = 311
-GLFW_KEY_F23 = 312
-GLFW_KEY_F24 = 313
-GLFW_KEY_F25 = 314
-GLFW_KEY_KP_0 = 320
-GLFW_KEY_KP_1 = 321
-GLFW_KEY_KP_2 = 322
-GLFW_KEY_KP_3 = 323
-GLFW_KEY_KP_4 = 324
-GLFW_KEY_KP_5 = 325
-GLFW_KEY_KP_6 = 326
-GLFW_KEY_KP_7 = 327
-GLFW_KEY_KP_8 = 328
-GLFW_KEY_KP_9 = 329
-GLFW_KEY_KP_DECIMAL = 330
-GLFW_KEY_KP_DIVIDE = 331
-GLFW_KEY_KP_MULTIPLY = 332
-GLFW_KEY_KP_SUBTRACT = 333
-GLFW_KEY_KP_ADD = 334
-GLFW_KEY_KP_ENTER = 335
-GLFW_KEY_KP_EQUAL = 336
-GLFW_KEY_LEFT_SHIFT = 340
-GLFW_KEY_LEFT_CONTROL = 341
-GLFW_KEY_LEFT_ALT = 342
-GLFW_KEY_LEFT_SUPER = 343
-GLFW_KEY_RIGHT_SHIFT = 344
-GLFW_KEY_RIGHT_CONTROL = 345
-GLFW_KEY_RIGHT_ALT = 346
-GLFW_KEY_RIGHT_SUPER = 347
-GLFW_KEY_MENU = 348
-GLFW_KEY_LAST = GLFW_KEY_MENU
-
-
-# --- Modifiers ---------------------------------------------------------------
-GLFW_MOD_SHIFT = 0x0001
-GLFW_MOD_CONTROL = 0x0002
-GLFW_MOD_ALT = 0x0004
-GLFW_MOD_SUPER = 0x0008
-
-# --- Mouse -------------------------------------------------------------------
-GLFW_MOUSE_BUTTON_1 = 0
-GLFW_MOUSE_BUTTON_2 = 1
-GLFW_MOUSE_BUTTON_3 = 2
-GLFW_MOUSE_BUTTON_4 = 3
-GLFW_MOUSE_BUTTON_5 = 4
-GLFW_MOUSE_BUTTON_6 = 5
-GLFW_MOUSE_BUTTON_7 = 6
-GLFW_MOUSE_BUTTON_8 = 7
-GLFW_MOUSE_BUTTON_LAST = GLFW_MOUSE_BUTTON_8
-GLFW_MOUSE_BUTTON_LEFT = GLFW_MOUSE_BUTTON_1
-GLFW_MOUSE_BUTTON_RIGHT = GLFW_MOUSE_BUTTON_2
-GLFW_MOUSE_BUTTON_MIDDLE = GLFW_MOUSE_BUTTON_3
-
-
-# --- Joystick ----------------------------------------------------------------
-GLFW_JOYSTICK_1 = 0
-GLFW_JOYSTICK_2 = 1
-GLFW_JOYSTICK_3 = 2
-GLFW_JOYSTICK_4 = 3
-GLFW_JOYSTICK_5 = 4
-GLFW_JOYSTICK_6 = 5
-GLFW_JOYSTICK_7 = 6
-GLFW_JOYSTICK_8 = 7
-GLFW_JOYSTICK_9 = 8
-GLFW_JOYSTICK_10 = 9
-GLFW_JOYSTICK_11 = 10
-GLFW_JOYSTICK_12 = 11
-GLFW_JOYSTICK_13 = 12
-GLFW_JOYSTICK_14 = 13
-GLFW_JOYSTICK_15 = 14
-GLFW_JOYSTICK_16 = 15
-GLFW_JOYSTICK_LAST = GLFW_JOYSTICK_16
-
-
-# --- Error codes -------------------------------------------------------------
-GLFW_NOT_INITIALIZED = 0x00010001
-GLFW_NO_CURRENT_CONTEXT = 0x00010002
-GLFW_INVALID_ENUM = 0x00010003
-GLFW_INVALID_VALUE = 0x00010004
-GLFW_OUT_OF_MEMORY = 0x00010005
-GLFW_API_UNAVAILABLE = 0x00010006
-GLFW_VERSION_UNAVAILABLE = 0x00010007
-GLFW_PLATFORM_ERROR = 0x00010008
-GLFW_FORMAT_UNAVAILABLE = 0x00010009
-
-# ---
-GLFW_FOCUSED = 0x00020001
-GLFW_ICONIFIED = 0x00020002
-GLFW_RESIZABLE = 0x00020003
-GLFW_VISIBLE = 0x00020004
-GLFW_DECORATED = 0x00020005
-GLFW_AUTO_ICONIFY = 0x00020006
-GLFW_FLOATING = 0x00020007
-
-# ---
-GLFW_RED_BITS = 0x00021001
-GLFW_GREEN_BITS = 0x00021002
-GLFW_BLUE_BITS = 0x00021003
-GLFW_ALPHA_BITS = 0x00021004
-GLFW_DEPTH_BITS = 0x00021005
-GLFW_STENCIL_BITS = 0x00021006
-GLFW_ACCUM_RED_BITS = 0x00021007
-GLFW_ACCUM_GREEN_BITS = 0x00021008
-GLFW_ACCUM_BLUE_BITS = 0x00021009
-GLFW_ACCUM_ALPHA_BITS = 0x0002100A
-GLFW_AUX_BUFFERS = 0x0002100B
-GLFW_STEREO = 0x0002100C
-GLFW_SAMPLES = 0x0002100D
-GLFW_SRGB_CAPABLE = 0x0002100E
-GLFW_REFRESH_RATE = 0x0002100F
-
-# ---
-GLFW_CLIENT_API = 0x00022001
-GLFW_CONTEXT_VERSION_MAJOR = 0x00022002
-GLFW_CONTEXT_VERSION_MINOR = 0x00022003
-GLFW_CONTEXT_REVISION = 0x00022004
-GLFW_CONTEXT_ROBUSTNESS = 0x00022005
-GLFW_OPENGL_FORWARD_COMPAT = 0x00022006
-GLFW_OPENGL_DEBUG_CONTEXT = 0x00022007
-GLFW_OPENGL_PROFILE = 0x00022008
-
-# ---
-GLFW_OPENGL_API = 0x00030001
-GLFW_OPENGL_ES_API = 0x00030002
-
-# ---
-GLFW_NO_ROBUSTNESS = 0
-GLFW_NO_RESET_NOTIFICATION = 0x00031001
-GLFW_LOSE_CONTEXT_ON_RESET = 0x00031002
-
-# ---
-GLFW_OPENGL_ANY_PROFILE = 0
-GLFW_OPENGL_CORE_PROFILE = 0x00032001
-GLFW_OPENGL_COMPAT_PROFILE = 0x00032002
-
-# ---
-GLFW_CURSOR = 0x00033001
-GLFW_STICKY_KEYS = 0x00033002
-GLFW_STICKY_MOUSE_BUTTONS = 0x00033003
-
-# ---
-GLFW_CURSOR_NORMAL = 0x00034001
-GLFW_CURSOR_HIDDEN = 0x00034002
-GLFW_CURSOR_DISABLED = 0x00034003
-
-# ---
-GLFW_CONNECTED = 0x00040001
-GLFW_DISCONNECTED = 0x00040002
-
-
-# --- Structures --------------------------------------------------------------
-class GLFWvidmode(Structure):
- _fields_ = [ ('width', c_int),
- ('height', c_int),
- ('redBits', c_int),
- ('greenBits', c_int),
- ('blueBits', c_int),
- ('refreshRate', c_int) ]
-
-class GLFWgammaramp(Structure):
- _fields_ = [ ('red', POINTER(c_ushort)),
- ('green', POINTER(c_ushort)),
- ('blue', POINTER(c_ushort)),
- ('size', c_int) ]
-
-class GLFWwindow(Structure): pass
-class GLFWmonitor(Structure): pass
-
-# --- Callbacks ---------------------------------------------------------------
-errorfun = CFUNCTYPE(None, c_int, c_char_p)
-windowposfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int, c_int)
-windowsizefun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int, c_int)
-windowclosefun = CFUNCTYPE(None, POINTER(GLFWwindow))
-windowrefreshfun = CFUNCTYPE(None, POINTER(GLFWwindow))
-windowfocusfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int)
-windowiconifyfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int)
-framebuffersizefun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int, c_int)
-mousebuttonfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int, c_int, c_int)
-cursorposfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_double, c_double)
-cursorenterfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int)
-scrollfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_double, c_double)
-keyfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_int, c_int, c_int, c_int)
-charfun = CFUNCTYPE(None, POINTER(GLFWwindow), c_uint)
-monitorfun = CFUNCTYPE(None, POINTER(GLFWmonitor), c_int)
-
-# --- Init --------------------------------------------------------------------
-glfwInit = _glfw.glfwInit
-glfwTerminate = _glfw.glfwTerminate
-#glfwGetVersion = _glfw.glfwGetVersion
-
-# --- Error -------------------------------------------------------------------
-#glfwSetErrorCallback = _glfw.glfwSetErrorCallback
-
-# --- Monitor -----------------------------------------------------------------
-# glfwGetMonitors = _glfw.glfwGetMonitors
-# glfwGetMonitors.restype = POINTER(GLFWmonitor)
-glfwGetPrimaryMonitor = _glfw.glfwGetPrimaryMonitor
-# glfwGetMonitorPos = _glfw.glfwGetMonitorPos
-# glfwGetMonitorPhysicalSize = _glfw.glfwGetMonitorPhysicalSize
-glfwGetMonitorName = _glfw.glfwGetMonitorName
-glfwGetMonitorName.restype = c_char_p
-# glfwSetMonitorCallback = _glfw.glfwSetMonitorCallback
-# glfwGetVideoModes = _glfw.glfwGetVideoModes
-# glfwGetVideoMode = _glfw.glfwGetVideoMode
-
-# --- Gama --------------------------------------------------------------------
-glfwSetGamma = _glfw.glfwSetGamma
-# glfwGetGammaRamp = _glfw.glfwGetGammaRamp
-# glfwSetGammaRamp = _glfw.glfwSetGammaRamp
-
-# --- Window ------------------------------------------------------------------
-glfwDefaultWindowHints = _glfw.glfwDefaultWindowHints
-glfwWindowHint = _glfw.glfwWindowHint
-# glfwCreateWindow = _glfw.glfwCreateWindow
-# glfwDestroyWindow = _glfw.glfwDestroyWindow
-glfwWindowShouldClose = _glfw.glfwWindowShouldClose
-glfwSetWindowShouldClose = _glfw.glfwSetWindowShouldClose
-glfwSetWindowTitle = _glfw.glfwSetWindowTitle
-# glfwGetWindowPos = _glfw.glfwGetWindowPos
-glfwSetWindowPos = _glfw.glfwSetWindowPos
-# glfwGetWindowSize = _glfw.glfwGetWindowSize
-glfwSetWindowSize = _glfw.glfwSetWindowSize
-# glfwGetFramebufferSize = _glfw.glfwGetFramebufferSize
-glfwIconifyWindow = _glfw.glfwIconifyWindow
-glfwRestoreWindow = _glfw.glfwRestoreWindow
-glfwShowWindow = _glfw.glfwShowWindow
-glfwHideWindow = _glfw.glfwHideWindow
-glfwGetWindowMonitor = _glfw.glfwGetWindowMonitor
-glfwGetWindowAttrib = _glfw.glfwGetWindowAttrib
-glfwSetWindowUserPointer = _glfw.glfwSetWindowUserPointer
-glfwGetWindowUserPointer = _glfw.glfwGetWindowUserPointer
-# glfwSetWindowPosCallback = _glfw.glfwSetWindowPosCallback
-# glfwSetWindowSizeCallback = _glfw.glfwSetWindowSizeCallback
-# glfwSetWindowCloseCallback = _glfw.glfwSetWindowCloseCallback
-# glfwSetWindowRefreshCallback = _glfw.glfwSetWindowRefreshCallback
-# glfwSetWindowFocusCallback = _glfw.glfwSetWindowFocusCallback
-# glfwSetWindowIconifyCallback = _glfw.glfwSetWindowIconifyCallback
-# glfwSetFramebufferSizeCallback = _glfw.glfwSetFramebufferSizeCallback
-glfwPollEvents = _glfw.glfwPollEvents
-glfwWaitEvents = _glfw.glfwWaitEvents
-
-# --- Input -------------------------------------------------------------------
-glfwGetInputMode = _glfw.glfwGetInputMode
-glfwSetInputMode = _glfw.glfwSetInputMode
-glfwGetKey = _glfw.glfwGetKey
-glfwGetMouseButton = _glfw.glfwGetMouseButton
-# glfwGetCursorPos = _glfw.glfwGetCursorPos
-glfwSetCursorPos = _glfw.glfwSetCursorPos
-# glfwSetKeyCallback = _glfw.glfwSetKeyCallback
-# glfwSetCharCallback = _glfw.glfwSetCharCallback
-# glfwSetMouseButtonCallback = _glfw.glfwSetMouseButtonCallback
-# glfwSetCursorPosCallback = _glfw.glfwSetCursorPosCallback
-# glfwSetCursorEnterCallback = _glfw.glfwSetCursorEnterCallback
-# glfwSetScrollCallback = _glfw.glfwSetScrollCallback
-glfwJoystickPresent = _glfw.glfwJoystickPresent
-# glfwGetJoystickAxes = _glfw.glfwGetJoystickAxes
-# glfwGetJoystickButtons = _glfw.glfwGetJoystickButtons
-glfwGetJoystickName = _glfw.glfwGetJoystickName
-glfwGetJoystickName.restype = c_char_p
-
-# --- Clipboard ---------------------------------------------------------------
-glfwSetClipboardString = _glfw.glfwSetClipboardString
-glfwGetClipboardString = _glfw.glfwGetClipboardString
-glfwGetClipboardString.restype = c_char_p
-
-# --- Timer -------------------------------------------------------------------
-glfwGetTime = _glfw.glfwGetTime
-glfwGetTime.restype = c_double
-glfwSetTime = _glfw.glfwSetTime
-
-# --- Context -----------------------------------------------------------------
-glfwMakeContextCurrent = _glfw.glfwMakeContextCurrent
-glfwGetCurrentContext = _glfw.glfwGetCurrentContext
-glfwSwapBuffers = _glfw.glfwSwapBuffers
-glfwSwapInterval = _glfw.glfwSwapInterval
-glfwExtensionSupported = _glfw.glfwExtensionSupported
-glfwGetProcAddress = _glfw.glfwGetProcAddress
-
-
-
-# --- Pythonizer --------------------------------------------------------------
-
-# This keeps track of current windows
-__windows__ = []
-__destroyed__ = []
-
-# This is to prevent garbage collection on callbacks
-__c_callbacks__ = {}
-__py_callbacks__ = {}
-__c_error_callback__ = None
-
-def glfwCreateWindow(width=640, height=480, title="GLFW Window",
- monitor=None, share=None):
- _glfw.glfwCreateWindow.restype = POINTER(GLFWwindow)
- window = _glfw.glfwCreateWindow(int(width), int(height),
- title.encode('utf-8'), monitor, share)
- __windows__.append(window)
- __destroyed__.append(False)
- index = __windows__.index(window)
- __c_callbacks__[index] = {}
- __py_callbacks__[index] = { 'errorfun' : None,
- 'monitorfun' : None,
- 'windowposfun' : None,
- 'windowsizefun' : None,
- 'windowclosefun' : None,
- 'windowrefreshfun' : None,
- 'windowfocusfun' : None,
- 'windowiconifyfun' : None,
- 'framebuffersizefun' : None,
- 'keyfun' : None,
- 'charfun' : None,
- 'mousebuttonfun' : None,
- 'cursorposfun' : None,
- 'cursorenterfun' : None,
- 'scrollfun' : None }
- return window
-
-
-def glfwDestroyWindow(window):
- index = __windows__.index(window)
- if not __destroyed__[index]:
- # We do not delete window from the list (or it would impact numbering)
- __windows__[index] = None
- _glfw.glfwDestroyWindow(window)
- del __c_callbacks__[index]
- del __py_callbacks__[index]
- __destroyed__[index] = True
-
-
-def glfwGetWindowPos(window):
- xpos, ypos = c_int(0), c_int(0)
- _glfw.glfwGetWindowPos(window, byref(xpos), byref(ypos))
- return xpos.value, ypos.value
-
-
-def glfwGetCursorPos(window):
- xpos, ypos = c_double(0), c_double(0)
- _glfw.glfwGetCursorPos(window, byref(xpos), byref(ypos))
- return int(xpos.value), int(ypos.value)
-
-
-def glfwGetWindowSize(window):
- width, height = c_int(0), c_int(0)
- _glfw.glfwGetWindowSize(window, byref(width), byref(height))
- return width.value, height.value
-
-
-def glfwGetFramebufferSize(window):
- width, height = c_int(0), c_int(0)
- _glfw.glfwGetFramebufferSize(window, byref(width), byref(height))
- return width.value, height.value
-
-
-def glfwGetMonitors():
- count = c_int(0)
- _glfw.glfwGetMonitors.restype = POINTER(POINTER(GLFWmonitor))
- c_monitors = _glfw.glfwGetMonitors( byref(count) )
- return [c_monitors[i] for i in range(count.value)]
-
-
-def glfwGetVideoModes(monitor):
- count = c_int(0)
- _glfw.glfwGetVideoModes.restype = POINTER(GLFWvidmode)
- c_modes = _glfw.glfwGetVideoModes( monitor, byref(count) )
- modes = []
- for i in range(count.value):
- modes.append( (c_modes[i].width,
- c_modes[i].height,
- c_modes[i].redBits,
- c_modes[i].blueBits,
- c_modes[i].greenBits,
- c_modes[i].refreshRate ) )
- return modes
-
-
-def glfwGetMonitorPos(monitor):
- xpos, ypos = c_int(0), c_int(0)
- _glfw.glfwGetMonitorPos(monitor, byref(xpos), byref(ypos))
- return xpos.value, ypos.value
-
-
-def glfwGetMonitorPhysicalSize(monitor):
- width, height = c_int(0), c_int(0)
- _glfw.glfwGetMonitorPhysicalSize(monitor, byref(width), byref(height))
- return width.value, height.value
-
-
-def glfwGetVideoMode(monitor):
- _glfw.glfwGetVideoMode.restype = POINTER(GLFWvidmode)
- c_mode = _glfw.glfwGetVideoMode(monitor).contents
- return (c_mode.width,
- c_mode.height,
- c_mode.redBits,
- c_mode.blueBits,
- c_mode.greenBits,
- c_mode.refreshRate )
-
-
-def GetGammaRamp(monitor):
- _glfw.glfwGetGammaRamp.restype = POINTER(GLFWgammaramp)
- c_gamma = _glfw.glfwGetGammaRamp(monitor).contents
- gamma = {'red':[], 'green':[], 'blue':[]}
- if c_gamma:
- for i in range(c_gamma.size):
- gamma['red'].append(c_gamma.red[i])
- gamma['green'].append(c_gamma.green[i])
- gamma['blue'].append(c_gamma.blue[i])
- return gamma
-
-
-def glfwGetJoystickAxes(joy):
- count = c_int(0)
- _glfw.glfwGetJoystickAxes.restype = POINTER(c_float)
- c_axes = _glfw.glfwGetJoystickAxes(joy, byref(count))
- axes = [c_axes[i].value for i in range(count)]
- return axes
-
-
-def glfwGetJoystickButtons(joy):
- count = c_int(0)
- _glfw.glfwGetJoystickButtons.restype = POINTER(c_int)
- c_buttons = _glfw.glfwGetJoystickButtons(joy, byref(count))
- buttons = [c_buttons[i].value for i in range(count)]
- return buttons
-
-
-# --- Callbacks ---------------------------------------------------------------
-
-def __callback__(name):
- callback = 'glfwSet%sCallback' % name
- fun = '%sfun' % name.lower()
- code = """
-def %(callback)s(window, callback = None):
- index = __windows__.index(window)
- old_callback = __py_callbacks__[index]['%(fun)s']
- __py_callbacks__[index]['%(fun)s'] = callback
- if callback: callback = %(fun)s(callback)
- __c_callbacks__[index]['%(fun)s'] = callback
- _glfw.%(callback)s(window, callback)
- return old_callback""" % {'callback': callback, 'fun': fun}
- return code
-
-exec(__callback__('Monitor'))
-exec(__callback__('WindowPos'))
-exec(__callback__('WindowSize'))
-exec(__callback__('WindowClose'))
-exec(__callback__('WindowRefresh'))
-exec(__callback__('WindowFocus'))
-exec(__callback__('WindowIconify'))
-exec(__callback__('FramebufferSize'))
-exec(__callback__('Key'))
-exec(__callback__('Char'))
-exec(__callback__('MouseButton'))
-exec(__callback__('CursorPos'))
-exec(__callback__('Scroll'))
-
-
-# Error callback does not take window parameter
-def glfwSetErrorCallback(callback = None):
- global __c_error_callback__
- __c_error_callback__ = errorfun(callback)
- _glfw.glfwSetErrorCallback(__c_error_callback__)
\ No newline at end of file
| Replace glfw wrapper with pyglfw
At the moment, vispy has its own glfw wrapper that targets the glfw DLL, and it needs to know where you've installed GLFW.
We should probably change this to use `pip install glfw` (https://github.com/FlorianRhiem/pyGLFW)
_Originally posted by @almarklein in https://github.com/vispy/vispy/issues/2213#issuecomment-920763391_
| 2021-09-18T21:02:48 |
||
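The renaming applied throughout the backend above follows pyGLFW's convention: snake_case functions and unprefixed constants on the `glfw` module. A minimal sketch, assuming `pip install glfw` and a working display:
```python
import glfw  # pyGLFW, replacing the vendored ctypes wrapper

if not glfw.init():
    raise RuntimeError("could not initialize GLFW")
glfw.window_hint(glfw.VISIBLE, 0)                                   # was glfwWindowHint(GLFW_VISIBLE, 0)
window = glfw.create_window(640, 480, "hidden window", None, None)  # was glfwCreateWindow(...)
glfw.make_context_current(window)
glfw.destroy_window(window)
glfw.terminate()
```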
vispy/vispy | 2,223 | vispy__vispy-2223 | [
"2222"
] | feeaf8afa99ddbbac86a03e3e611a52c1c89584d | diff --git a/vispy/visuals/graphs/util.py b/vispy/visuals/graphs/util.py
--- a/vispy/visuals/graphs/util.py
+++ b/vispy/visuals/graphs/util.py
@@ -89,7 +89,7 @@ def _straight_line_vertices(adjacency_mat, node_coords, directed=False):
if directed:
arrows = np.array(list(_get_directed_edges(adjacency_mat)))
arrow_vertices = node_coords[arrows.ravel()]
- arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))
+ arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)//2, 4))
return line_vertices, arrow_vertices
| scene.visuals.Graph is not working with directed = True
I am trying to render a directed graph but I am getting the following error.
Code (based on [example from gallery](https://vispy.org/gallery/scene/graph.html#sphx-glr-gallery-scene-graph-py), I just set directed=True):
```py
import sys
import networkx as nx
from vispy import app, scene
from vispy.visuals.graphs import layouts

canvas = scene.SceneCanvas(title='Simple NetworkX Graph', size=(600, 600),
                           bgcolor='white', show=True)
view = canvas.central_widget.add_view('panzoom')

graph = nx.adjacency_matrix(
    nx.fast_gnp_random_graph(500, 0.005, directed=True)
)
layout = layouts.get_layout('force_directed', iterations=100)

visual = scene.visuals.Graph(
    graph, layout=layout, line_color='black', arrow_type="stealth",
    arrow_size=30, node_symbol="disc", node_size=20,
    face_color=(1, 0, 0, 0.2), border_width=0.0, animate=True, directed=True,
    parent=view.scene)


@canvas.events.draw.connect
def on_draw(event):
    if not visual.animate_layout():
        canvas.update()


if __name__ == '__main__':
    if sys.flags.interactive != 1:
        app.run()
```
Error:
```
<< caught exception here: >>
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\util\event.py", line 469, in _invoke_callback
cb(event)
File "D:\dev\university\UniversityProjects\3\alg_and_struct\2\demo.py", line 27, in on_draw
if not visual.animate_layout():
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\graph.py", line 143, in animate_layout
node_vertices, line_vertices, arrows = next(self._layout_iter)
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\layouts\force_directed.py", line 95, in __call__
for result in solver(adjacency_mat, directed):
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\layouts\force_directed.py", line 162, in _sparse_fruchterman_reingold
line_vertices, arrows = _straight_line_vertices(adjacency_coo, pos,
File "C:\Users\maxim\AppData\Local\Programs\Python\Python39\lib\site-packages\vispy\visuals\graphs\util.py", line 92, in _straight_line_vertices
arrow_vertices = arrow_vertices.reshape((len(arrow_vertices)/2, 4))
TypeError: 'float' object cannot be interpreted as an integer
ERROR: Invoking <function on_draw at 0x000001EB3573EDC0> for DrawEvent
```
Maybe typecasting or `//` at [this line](https://github.com/vispy/vispy/blob/feeaf8afa99ddbbac86a03e3e611a52c1c89584d/vispy/visuals/graphs/util.py#L92) is needed.
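For reference, a quick illustration in plain NumPy (independent of vispy, placeholder data) of why `//` is needed: `len(...) / 2` is always a `float` on Python 3 and NumPy rejects float shapes:

```python
import numpy as np

arrow_vertices = np.zeros((6, 2))  # placeholder data: 3 arrows * 2 endpoints
arrow_vertices.reshape((len(arrow_vertices) // 2, 4))  # OK: 6 // 2 == 3 (an int)
arrow_vertices.reshape((len(arrow_vertices) / 2, 4))   # raises the TypeError above
```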
| Looks like a bug to me. How would you feel about making a pull request to fix it with a `//`?
I'm a little confused at how we don't have a test that hits this point too. If you are feeling really adventurous you could add a test too. | 2021-09-27T17:15:13 |
|
vispy/vispy | 2,226 | vispy__vispy-2226 | [
"2225"
] | 545abddf3cf5d7e672cb48fe8e68779fc338cc2d | diff --git a/vispy/visuals/filters/clipping_planes.py b/vispy/visuals/filters/clipping_planes.py
--- a/vispy/visuals/filters/clipping_planes.py
+++ b/vispy/visuals/filters/clipping_planes.py
@@ -24,14 +24,15 @@ class PlanesClipper(Filter):
VERT_CODE = """
void clip() {
- // Transform back to visual coordinates and clip based on that
- $v_distance_from_clip = $clip_with_planes($itransform(gl_Position).xyz);
+ // pass position as varying for interpolation
+ $v_position = gl_Position;
}
"""
FRAG_CODE = """
void clip() {
- if ($v_distance_from_clip < 0.)
+ float distance_from_clip = $clip_with_planes($itransform($v_position).xyz);
+ if (distance_from_clip < 0.)
discard;
}
"""
@@ -47,9 +48,9 @@ def __init__(self, clipping_planes=None, coord_system='scene'):
fcode=Function(self.FRAG_CODE), fhook='pre', fpos=1,
)
- v_distance_from_clip = Varying('v_distance_from_clip', 'float')
- self.vshader['v_distance_from_clip'] = v_distance_from_clip
- self.fshader['v_distance_from_clip'] = v_distance_from_clip
+ v_position = Varying('v_position', 'vec4')
+ self.vshader['v_position'] = v_position
+ self.fshader['v_position'] = v_position
self.clipping_planes = clipping_planes
@@ -63,7 +64,7 @@ def coord_system(self):
def _attach(self, visual):
super()._attach(visual)
- self.vshader['itransform'] = visual.get_transform('render', self._coord_system)
+ self.fshader['itransform'] = visual.get_transform('render', self._coord_system)
@staticmethod
@lru_cache(maxsize=10)
@@ -102,7 +103,7 @@ def clipping_planes(self, value):
self._clipping_planes = value
clip_func = self._build_clipping_planes_func(len(value))
- self.vshader['clip_with_planes'] = clip_func
+ self.fshader['clip_with_planes'] = clip_func
for idx, plane in enumerate(value):
clip_func[f'clipping_plane_pos{idx}'] = tuple(plane[0])
| Incorrect behavior with multiple clipping planes
I was checking up on that nice trick where the clipping planes logic is done in the vertex shader and then interpolated to the fragment shader, with the intention of applying it in pygfx too. However, I found that this trick does not work in the case of multiple clipping planes.
This can be shown with the following example:
```py
import numpy as np
from vispy import app, scene, io
from vispy.visuals.filters.clipping_planes import PlanesClipper
canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
view = canvas.central_widget.add_view()
cube = scene.visuals.Box(100, 100, 100, color=(1, 0, 0, 1), parent=view.scene)
view.camera = scene.cameras.TurntableCamera(parent=view.scene, fov=60)
clip_center = (0, 20, 60)
clipping_planes = np.concatenate(
[ np.array([[clip_center, [1, 0, 0]]]), np.array([[clip_center, [0, 1, 0]]])]
)
clipper = PlanesClipper()
clipper.clipping_planes = clipping_planes
cube.attach(clipper)
if __name__ == '__main__':
    app.run()
```
If you turn the camera to look from above, you'll see this:

I think this can be explained with the following figure:

The black lines indicate two clipping planes (the shaded side is where they clip). The two blue dots represent two vertices with a line or polygon interpolating between them. Both dots are of equal distance to a plane, one on the + side and one on the - side. Now if the `min_plane_distance` (or whatever we ended up calling it :D ) is interpolated, it will have its zero point (the point where it starts clipping) in the middle.
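A tiny NumPy sketch of the same point (not vispy code; the distances are chosen to match the figure): interpolating the per-vertex minimum is not the same as taking the minimum of the interpolated per-plane distances.

```python
import numpy as np

d_plane1 = np.array([+1.0, -1.0])  # signed distances of the two vertices to plane 1
d_plane2 = np.array([-1.0, +1.0])  # ... and to plane 2
t = 0.5                            # fragment halfway between the vertices

interp_of_min = np.interp(t, [0, 1], np.minimum(d_plane1, d_plane2))  # -1.0 -> clipped
min_of_interp = min(np.interp(t, [0, 1], d_plane1),
                    np.interp(t, [0, 1], d_plane2))                   #  0.0 -> boundary
```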
cc @brisvag
| Ouch, I only tested meshes and lines with a single clipping plane :/
Ok, I think I get the problem. I don't see a way around it in the vertex shader. So you have a working implementation in pygfx, it seems? We should port it over here :)
> so you have a working implementation in pygfx, it seems? We should port it over here :)
Yeah. It essentially does the same thing, but in the fragment shader. The source is [here](https://github.com/pygfx/pygfx/blob/93b31f28a0c81c74913128670afa90f4b4f28dc2/pygfx/renderers/wgpu/_shadercomposer.py#L151-L166) but the shader composition in pygfx and vispy is quite different, so not sure if you can reuse much of it :)
The gist is that a function `apply_clipping_planes()` is provided, that each object's (i.e. visual) fragment shader calls. I suppose vispy would use a filter though? Further, in the vertex shader we set the world_pos (i.e. scene pos) as a varying so we don't need to apply a transform. | 2021-10-01T08:47:50 |
|
vispy/vispy | 2,245 | vispy__vispy-2245 | [
"2229"
] | efa49b6896321374149998e15f8bce2ae327ba70 | diff --git a/vispy/visuals/_scalable_textures.py b/vispy/visuals/_scalable_textures.py
--- a/vispy/visuals/_scalable_textures.py
+++ b/vispy/visuals/_scalable_textures.py
@@ -125,6 +125,10 @@ def clim_normalized(self):
raise RuntimeError("Can't return 'auto' normalized color limits "
"until data has been set. Call "
"'scale_and_set_data' first.")
+ if self._data_dtype is None:
+ raise RuntimeError("Can't return normalized color limits until "
+ "data has been set. Call "
+ "'scale_and_set_data' first.")
if self.clim[0] == self.clim[1]:
return self.clim[0], np.inf
# if the internalformat of the texture is normalized we need to
@@ -276,6 +280,10 @@ def clim_normalized(self):
raise RuntimeError("Can't return 'auto' normalized color limits "
"until data has been set. Call "
"'scale_and_set_data' first.")
+ if self._data_limits is None:
+ raise RuntimeError("Can't return normalized color limits until "
+ "data has been set. Call "
+ "'scale_and_set_data' first.")
range_min, range_max = self._data_limits
clim_min, clim_max = self.clim
diff --git a/vispy/visuals/image.py b/vispy/visuals/image.py
--- a/vispy/visuals/image.py
+++ b/vispy/visuals/image.py
@@ -521,22 +521,24 @@ def _update_method(self, view):
self._prepare_transforms(view)
def _build_texture(self):
- pre_clims = self._texture.clim
+ try:
+ pre_clims = self._texture.clim_normalized
+ except RuntimeError:
+ pre_clims = "auto"
pre_internalformat = self._texture.internalformat
self._texture.scale_and_set_data(self._data)
- post_clims = self._texture.clim
+ post_clims = self._texture.clim_normalized
post_internalformat = self._texture.internalformat
# color transform needs rebuilding if the internalformat was changed
# new color limits need to be assigned if the normalized clims changed
# otherwise, the original color transform should be fine
- # Note that this assumes that if clim changed, clim_normalized changed
new_if = post_internalformat != pre_internalformat
new_cl = post_clims != pre_clims
- if not new_if and new_cl and not self._need_colortransform_update:
+ if new_if:
+ self._need_colortransform_update = True
+ elif new_cl and not self._need_colortransform_update:
# shortcut so we don't have to rebuild the whole color transform
self.shared_program.frag['color_transform'][1]['clim'] = self._texture.clim_normalized
- elif new_if:
- self._need_colortransform_update = True
self._need_texture_upload = False
def _compute_bounds(self, axis, view):
| diff --git a/vispy/visuals/tests/test_image.py b/vispy/visuals/tests/test_image.py
--- a/vispy/visuals/tests/test_image.py
+++ b/vispy/visuals/tests/test_image.py
@@ -263,4 +263,38 @@ def test_image_vertex_updates():
build_vertex_mock.assert_called_once()
+@requires_application()
[email protected](
+ ("dtype", "init_clim"),
+ [
+ (np.float32, "auto"),
+ (np.float32, (0, 5)),
+ (np.uint8, "auto"),
+ (np.uint8, (0, 5)),
+ ]
+)
+def test_change_clim_float(dtype, init_clim):
+ """Test that with an image of floats, clim is correctly set from the first try.
+
+ See https://github.com/vispy/vispy/pull/2245.
+ """
+ size = (40, 40)
+ np.random.seed(0)
+ data = (np.random.rand(*size) * 100).astype(dtype)
+
+ with TestingCanvas(size=size[::-1], bgcolor="w") as c:
+ image = Image(data=data, clim=init_clim, parent=c.scene)
+
+ # needed to properly initialize the canvas
+ c.render()
+
+ image.clim = 0, 10
+ rendered1 = c.render()
+ # set clim to same values
+ image.clim = 0, 10
+ rendered2 = c.render()
+
+ assert np.allclose(rendered1, rendered2)
+
+
run_tests_if_main()
| Regression with `Image` clim and visibility interaction
As [mentioned in this comment](https://github.com/vispy/vispy/pull/1920#discussion_r723993819), there seems to be a regression that we didn't catch with #1920. Unfortunately, I wasn't able to pinpoint the exact issue so far. @tlambert03, @djhoese, @jni , do you remember how this works?
| What is the actual regression? You've pointed out the related issues, but not what is actually broken.
Sorry, I was not clear. #1911 introduced a fix to a bug that caused contrast limits to be wrong after the visibility of the visual changed. #1920 changed that code and, as far as I can tell, nullified #1911.
I have not tested this in vispy yet; in napari we [added a test for this behaviour](https://github.com/napari/napari/blob/6c228a62e30ebf796f7640c90fe119153c6dd918/napari/_vispy/_tests/test_image_rendering.py#L44-L59), and that fails. I can dig a bit deeper, but I'm not familiar with this part of the codebase, so I was hoping someone might spot the issue faster than me :)
Thanks for pointing out the test. I'll try to take a look some time this morning, but am also preparing for a long weekend so might not finish it today.
@brisvag I'm a little confused by that test. It takes a screenshot when the Image layer isn't visible (the default setting, right?) and then compares the screenshot when the image is visible and then makes sure that the difference is a maximum of 5? Shouldn't it be a huge difference (no image versus image)?
No, the image layer is visible by default. Doing `visible = True` is (supposedly) a no_op, since it's simply passing the call down to the vispy node. It's equivalent to `Image.visible = True` when image is already visible. AFAIK, this simply fires the `update` event of the visual node. After that, I'm a bit lost on what exactly happens cause I don't quite understand how events are magically connected in vispy.
I'm not sure why that would have an effect, but I'll look. Next question, is this failing with napari using the new `texture_format` (GPU scaling) or is it still using the old CPU scaling?
Edit: Doing `.visible = True` does call `self.update` so that's correct. I'm still not sure why that matters in this test though.
We're not setting `texture_format`, so it's still using CPU scaling. Still, I cannot reproduce in pure vispy with `examples/scene/image.py`, which as far as I can tell should be similar?
There must be something non-obvious that napari is doing that makes the test not a 1:1 translation to vispy. For example, any idea if napari sets the data to some small image size when things are invisible to save GPU memory? Based on the fix by @tlambert03 there should be something related to setting the data (changing the texture) that triggers this issue. At least as far as I understand.
Yes, there must be something. I found out that in napari, setting `visible` triggers `ImageVisual._build_texture`, while in vispy it doesn't. It's probably a bug on napari side, then!
There's always a chance that napari is trying to work around an issue in vispy or is trying to optimize something. I guess try changing napari and see what tests fail. I do see some specific checks in vispy for the thing that Talley was originally trying to fix, but I also see some things that don't immediately make sense to me. Either way, I need some code I can put in a vispy test to make it fail so I can fix it. So far I haven't been able to.
Yeah, this is not urgent :) Hopefully someone else more acquainted with napari's image layer can chip in; otherwise, this will get more attention once we actually start trying to merge the vendor-less branch I'm working on! | 2021-10-28T19:34:03 |
vispy/vispy | 2,293 | vispy__vispy-2293 | [
"2212"
] | 34d5def17a7355d7f2ac8932ff1588cd660bcf40 | diff --git a/vispy/app/backends/_qt.py b/vispy/app/backends/_qt.py
--- a/vispy/app/backends/_qt.py
+++ b/vispy/app/backends/_qt.py
@@ -97,7 +97,7 @@ def _get_event_xy(ev):
else:
from PyQt5.QtOpenGL import QGLWidget, QGLFormat
from PyQt5 import QtGui, QtCore, QtWidgets, QtTest
- QWidget, QApplication = QtWidgets.QWidget, QtWidgets.QApplication #
+ QWidget, QApplication = QtWidgets.QWidget, QtWidgets.QApplication # Compat
elif qt_lib == 'pyqt6':
_check_imports('PyQt6')
if not USE_EGL:
@@ -630,7 +630,7 @@ def _modifiers(self, event):
[qt_keyboard_modifiers.ControlModifier, keys.CONTROL],
[qt_keyboard_modifiers.AltModifier, keys.ALT],
[qt_keyboard_modifiers.MetaModifier, keys.META]):
- if q & qtmod:
+ if qtmod & q:
mod += (v,)
return mod
| Python DeprecationWarning on PyQt 5.12.3
I don't have a simple reproducer yet, as I encounter this on a rather complex napari workflow, but I wanted to write it down in case others have encountered it. Under certain circumstances and *only* with older PyQt versions (5.12.3 in this case), I get the following warning emitted from VisPy:
```pytb
/Users/jni/conda/envs/all/lib/python3.9/site-packages/vispy/app/backends/_qt.py:627: DeprecationWarning: an integer is required (got type KeyboardModifiers). Implicit conversion to integers using __int__ is deprecated, and may be removed in a future version of Python.!
```
VisPy version is 0.8.1.
I was able to silence the warning by wrapping the relevant code block into a
```python
with warnings.catch_warnings():
    warnings.simplefilter('ignore')
    ...
```
context. But I don't know if that's the desired workaround. Suggestions are super welcome! Here is the offending code:
https://github.com/vispy/vispy/blob/5d58b832c853c45b44f5036b9e7c90d6e9f4f454/vispy/app/backends/_qt.py#L627-L628
Note: I can't update my PyQt version because that is the latest available on conda-forge and the pip version breaks a *different* part of my workflow. 😂
| Very strange. Looking at the code it seems like it should run into that code whenever you press a key. If not, I wonder if Qt or VisPy is usually sending an integer for the modifiers in the Event but then sometimes passes the Enum instead.
What version of Python are you using? Looks like this `__int__` deprecation was added in Python 3.8.
What version of `qt`? What version of `pyqt5-sip`?
Looking at the code more I see that `_modifiers` is called from any event in Qt so any mouse or keyboard event could cause this theoretically.
Ok, I've learned a lot in the last hour. Here is the related CPython change where the deprecation was discussed: https://bugs.python.org/issue36048
The summary is that this warning should show up when something from the C-level has to convert a Python object to an integer. If the object has a `__index__` method then it uses that which returns an integer and no warning is produced. If it doesn't have `__index__` then it calls `__int__` and raises the warning as this was an implicit conversion to integer.
On my Ubuntu based system on Python 3.9 with everything installed from conda (except vispy installed in editable mode from source) I am able to do this which means the warning should never appear:
```python
In [1]: from PyQt5.QtCore import Qt
In [2]: Qt.ShiftModifier.__index__()
Out[2]: 33554432
```
So...@jni what OS are you running? Can you run the above code?
Hi @djhoese and thanks for investigating! I agree it's weird. To answer your questions:
- your code runs fine, but I nevertheless see the warning in my use case.
- I'm on macOS, Python 3.9 from conda-forge
- Package versions:
```
$ pip list | grep Qt
PyQt5 5.12.3
PyQt5_sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
QtPy 1.11.0
$ pip list | grep qt
qtconsole 5.1.1
sphinxcontrib-qthelp 1.0.3
superqt 0.2.3
```
The following environment does it for me on macOS 11.4:
```yaml
name: napari-pyqt512
channels:
  - conda-forge
dependencies:
  - python=3.9
  - napari
  - pyqt=5.12.3
  - pip
  - pip:
    - platelet-unet-watershed==0.0.1
    - torch==1.9.0
    - torchvision==0.10.0
```
Then run:
```python
import napari
import numpy as np
viewer = napari.Viewer(ndisplay=3)
image_layer = viewer.add_image(np.random.random((2, 20, 512, 512)))
_, w = viewer.window.add_plugin_dock_widget('platelet-unet-watershed', 'U Net Predict Widget')
w.predict_widget()
napari.run()
```
And move the mouse on the canvas. Apologies again that I haven't had a chance to narrow it down further. Normal examples from the napari examples gallery don't show the warning... 🤔
`pip install -U pyqt5` makes the problem go away...
So I installed napari from conda-forge, but wasn't familiar with napari plugins. It said that your plugin doesn't provide a dock widget so I was going to pip install from github source...and it wanted to download `torch` at 834MB :sob: so I stopped it.
Give me a few days to investigate further and boil it down. I raised this "early" in case it was something obvious/that you'd encountered before. Thanks for the time you've put into it already.
@jni I just ran into this in a realworld use case! My SIFT application is using some `.ui` files which it converts to Python using `pyuic5`. In these files I have sections like:
```python
self.frameRangeGroupBox = QtWidgets.QGroupBox(ExportImageDialog)
self.frameRangeGroupBox.setGeometry(QtCore.QRect(10, 30, 251, 111))
self.frameRangeGroupBox.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignTop)
```
Turns out that in Python 3.9 you get the same deprecation warning you were seeing and in Python 3.10 it actually breaks! I see this error in Python 3.10:
```
TypeError: setAlignment(self, int): argument 1 has unexpected type 'Alignment'
```
The individual alignment flags have type `AlignmentFlag` which can be converted to integer just fine it seems, but when you do the OR'ing with `|` it becomes an `Alignment` which apparently can't be converted.
```
In [4]: type(QtCore.Qt.AlignLeading)
Out[4]: PyQt5.QtCore.Qt.AlignmentFlag
In [5]: type(QtCore.Qt.AlignLeading | QtCore.Qt.AlignLeft)
Out[5]: PyQt5.QtCore.Qt.Alignment
```
@jni I think theoretically if you wrap `q & qtmod` in an `int(q & qtmod)` it may fix your deprecation warning. | 2022-02-04T21:27:59 |
|
vispy/vispy | 2,296 | vispy__vispy-2296 | [
"2295"
] | e9a7c5bd073acf24fce850add00f94f6315cd15f | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -348,8 +348,8 @@ def _custom_edit_url(github_user, github_repo, github_version, doc_path, file_na
_python_doc_base = 'https://docs.python.org/3.9'
intersphinx_mapping = {
_python_doc_base: None,
- 'https://docs.scipy.org/doc/numpy': None,
- 'https://docs.scipy.org/doc/scipy/reference': None,
+ 'https://numpy.org/doc/stable/': None,
+ 'https://scipy.github.io/devdocs/': None,
}
diff --git a/vispy/io/stl.py b/vispy/io/stl.py
--- a/vispy/io/stl.py
+++ b/vispy/io/stl.py
@@ -33,7 +33,7 @@ def load_stl(file_obj, file_type=None):
file_type: not used
Returns
- ----------
+ -------
loaded: kwargs for a Trimesh constructor with keys:
vertices: (n,3) float, vertices
faces: (m,3) int, indexes of vertices
@@ -65,7 +65,7 @@ def load_stl_binary(file_obj):
file_obj: open file- like object
Returns
- ----------
+ -------
loaded: kwargs for a Trimesh constructor with keys:
vertices: (n,3) float, vertices
faces: (m,3) int, indexes of vertices
@@ -124,7 +124,7 @@ def load_stl_ascii(file_obj):
file_obj: open file- like object
Returns
- ----------
+ -------
loaded: kwargs for a Trimesh constructor with keys:
vertices: (n,3) float, vertices
faces: (m,3) int, indexes of vertices
diff --git a/vispy/scene/node.py b/vispy/scene/node.py
--- a/vispy/scene/node.py
+++ b/vispy/scene/node.py
@@ -423,7 +423,7 @@ def describe_tree(self, with_transform=False):
If true, add information about node transform types.
Returns
- ----------
+ -------
tree : str
The tree diagram.
"""
| Fix failing website build
Website builds just started failing with docstring related issues:
```
/home/runner/micromamba/envs/vispy-tests/lib/python3.9/site-packages/numpydoc/docscrape.py:434: UserWarning: potentially wrong underline length...
Returns
---------- in
Load an STL file from a file object.
... in the docstring of load_stl in /home/runner/work/vispy/vispy/vispy/io/stl.py.
warn(msg)
/home/runner/micromamba/envs/vispy-tests/lib/python3.9/site-packages/numpydoc/docscrape.py:434: UserWarning: potentially wrong underline length...
Returns
---------- in
Load an ASCII STL file from a file object.
... in the docstring of load_stl_ascii in /home/runner/work/vispy/vispy/vispy/io/stl.py.
warn(msg)
/home/runner/micromamba/envs/vispy-tests/lib/python3.9/site-packages/numpydoc/docscrape.py:434: UserWarning: potentially wrong underline length...
Returns
---------- in
Load a binary STL file from a file object.
... in the docstring of load_stl_binary in /home/runner/work/vispy/vispy/vispy/io/stl.py.
warn(msg)
```
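For reference, the patch above just shortens each `Returns` underline to the length numpydoc expects, e.g.:

```python
def load_stl(file_obj, file_type=None):
    """Load an STL file from a file object.

    Returns
    -------
    loaded: kwargs for a Trimesh constructor
    """
```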
| Link to the failing job:
https://github.com/vispy/vispy/runs/5085662854?check_suite_focus=true | 2022-02-06T22:24:25 |
|
vispy/vispy | 2,326 | vispy__vispy-2326 | [
"2325"
] | 9bdda8ebe2c20712addcc9c4549468439b3e512d | diff --git a/vispy/app/backends/_pyglet.py b/vispy/app/backends/_pyglet.py
--- a/vispy/app/backends/_pyglet.py
+++ b/vispy/app/backends/_pyglet.py
@@ -271,6 +271,12 @@ def _vispy_get_size(self):
w, h = self.get_size()
return w, h
+ def _vispy_get_physical_size(self):
+ if self._vispy_canvas is None:
+ return
+ w, h = self.get_framebuffer_size()
+ return w, h
+
def _vispy_get_position(self):
x, y = self.get_location()
return x, y
diff --git a/vispy/app/backends/_wx.py b/vispy/app/backends/_wx.py
--- a/vispy/app/backends/_wx.py
+++ b/vispy/app/backends/_wx.py
@@ -376,6 +376,11 @@ def _vispy_get_size(self):
w, h = self.GetClientSize()
return w, h
+ def _vispy_get_physical_size(self):
+ w, h = self.GetClientSize()
+ ratio = self.GetContentScaleFactor()
+ return int(w * ratio), int(h * ratio)
+
def _vispy_get_position(self):
if self._vispy_canvas is None:
return
| Apple M1 macOS 12 canvas is a quarter of window size
I have found an issue on macOS 12 with Apple M1 - I don't know if it exists on x64 macOS - I don't have one so I cannot test it.
All tests are done with the official examples!
The issue seems to be that the canvas is not the specified size - it's a quarter of it.
This is with pyglet
<img width="912" alt="Untitled" src="https://user-images.githubusercontent.com/13086508/164896228-5f829153-4b5e-439b-8196-96efeee3ebb3.png">
pyglet scaled the whole canvas to a quarter and also renders it at a quarter. Moving the cube works fine, as you would expect, just scaled down to a quarter.
In contrast wx:
https://user-images.githubusercontent.com/13086508/164896545-086ae183-f87e-4934-b781-e3c296984bdd.mov
With wx the canvas is rendered at the correct resolution but only the lower-left quarter is visible. Moving the scene shrinks the output to a quarter, but the cube is now centered in the frame - letting go of the mouse button goes back to the full canvas size with just the lower-left quarter visible.
The behavior continues with all of the other examples I tested
| Could you please provide the output of running:
```bash
python -c "import vispy; print(vispy.sys_info())"
```
And can you test with PyQt5 please? Are you using conda or pip or something else?
The images above are from a conda miniforge3 env. PyQt5 isn't possible on M1, at least from the existing packages in conda miniforge3, and pip fails - I tested PyQt6, which runs without issues.
Platform: macOS-12.3.1-arm64-arm-64bit
Python: 3.9.10 | packaged by conda-forge | (main, Feb 1 2022, 21:27:43) [Clang 11.1.0 ]
NumPy: 1.21.6
Backend: PyQt6
pyqt4: None
pyqt5: None
pyqt6: ('PyQt6', '6.3.0', '6.3.0')
pyside: None
pyside2: None
pyside6: None
pyglet: pyglet 1.5.16
glfw: None
sdl2: None
wx: wxPython 4.1.1
egl: None
osmesa: None
tkinter: None
jupyter_rfb: None
_test: None
GL version: '2.1 Metal - 76.3'
MAX_TEXTURE_SIZE: 16384
Extensions: 'GL_ARB_color_buffer_float GL_ARB_depth_buffer_float GL_ARB_depth_clamp GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_framebuffer_object GL_ARB_framebuffer_sRGB GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_instanced_arrays GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_provoking_vertex GL_ARB_seamless_cube_map GL_ARB_shader_objects GL_ARB_shader_texture_lod GL_ARB_shading_language_100 GL_ARB_shadow GL_ARB_shadow_ambient GL_ARB_sync GL_ARB_texture_border_clamp GL_ARB_texture_compression GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_float GL_ARB_texture_mirrored_repeat GL_ARB_texture_non_power_of_two GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_transpose_matrix GL_ARB_vertex_array_bgra GL_ARB_vertex_blend GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_window_pos GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_clip_volume_hint GL_EXT_debug_label GL_EXT_debug_marker GL_EXT_draw_buffers2 GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample GL_EXT_framebuffer_multisample_blit_scaled GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_provoking_vertex GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shadow_funcs GL_EXT_stencil_two_side GL_EXT_stencil_wrap GL_EXT_texture_array GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_s3tc GL_EXT_texture_env_add GL_EXT_texture_filter_anisotropic GL_EXT_texture_integer GL_EXT_texture_lod_bias GL_EXT_texture_rectangle GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB GL_EXT_texture_sRGB_decode GL_EXT_timer_query GL_EXT_transform_feedback GL_EXT_vertex_array_bgra GL_APPLE_aux_depth_stencil GL_APPLE_client_storage GL_APPLE_element_array GL_APPLE_fence GL_APPLE_float_pixels GL_APPLE_flush_buffer_range GL_APPLE_flush_render GL_APPLE_packed_pixels GL_APPLE_pixel_buffer GL_APPLE_rgb_422 GL_APPLE_row_bytes GL_APPLE_specular_vector GL_APPLE_texture_range GL_APPLE_transform_hint GL_APPLE_vertex_array_object GL_APPLE_vertex_point_size GL_APPLE_vertex_program_evaluators GL_APPLE_ycbcr_422 GL_ATI_separate_stencil GL_ATI_texture_env_combine3 GL_ATI_texture_float GL_IBM_rasterpos_clip GL_NV_blend_square GL_NV_conditional_render GL_NV_depth_clamp GL_NV_fog_distance GL_NV_fragment_program_option GL_NV_fragment_program2 GL_NV_light_max_exponent GL_NV_texgen_reflection GL_NV_texture_barrier GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_SGI_color_matrix GL_SGIS_generate_mipmap GL_SGIS_texture_edge_clamp GL_SGIS_texture_lod '
Python 3.9 from homebrew, no virtual env: exactly the same behavior. wx and pyglet have the same issues as above. The only difference is PyQt5 - just like PyQt6, it works flawlessly.
Platform: macOS-12.3.1-arm64-arm-64bit
Python: 3.9.12 (main, Mar 26 2022, 15:44:31) [Clang 13.1.6 (clang-1316.0.21.2)]
NumPy: 1.22.3
Backend: PyQt5
pyqt4: None
pyqt5: ('PyQt5', '5.15.6', '5.15.2')
pyqt6: None
pyside: None
pyside2: None
pyside6: None
pyglet: pyglet 1.5.23
glfw: None
sdl2: None
wx: wxPython 4.1.1
egl: None
osmesa: None
tkinter: None
jupyter_rfb: None
_test: None
GL version: '2.1 Metal - 76.3'
MAX_TEXTURE_SIZE: 16384
Extensions: 'GL_ARB_color_buffer_float GL_ARB_depth_buffer_float GL_ARB_depth_clamp GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_draw_elements_base_vertex GL_ARB_draw_instanced GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_framebuffer_object GL_ARB_framebuffer_sRGB GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_imaging GL_ARB_instanced_arrays GL_ARB_multisample GL_ARB_multitexture GL_ARB_occlusion_query GL_ARB_pixel_buffer_object GL_ARB_point_parameters GL_ARB_point_sprite GL_ARB_provoking_vertex GL_ARB_seamless_cube_map GL_ARB_shader_objects GL_ARB_shader_texture_lod GL_ARB_shading_language_100 GL_ARB_shadow GL_ARB_shadow_ambient GL_ARB_sync GL_ARB_texture_border_clamp GL_ARB_texture_compression GL_ARB_texture_compression_rgtc GL_ARB_texture_cube_map GL_ARB_texture_env_add GL_ARB_texture_env_combine GL_ARB_texture_env_crossbar GL_ARB_texture_env_dot3 GL_ARB_texture_float GL_ARB_texture_mirrored_repeat GL_ARB_texture_non_power_of_two GL_ARB_texture_rectangle GL_ARB_texture_rg GL_ARB_transpose_matrix GL_ARB_vertex_array_bgra GL_ARB_vertex_blend GL_ARB_vertex_buffer_object GL_ARB_vertex_program GL_ARB_vertex_shader GL_ARB_window_pos GL_EXT_abgr GL_EXT_bgra GL_EXT_bindable_uniform GL_EXT_blend_color GL_EXT_blend_equation_separate GL_EXT_blend_func_separate GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_clip_volume_hint GL_EXT_debug_label GL_EXT_debug_marker GL_EXT_draw_buffers2 GL_EXT_draw_range_elements GL_EXT_fog_coord GL_EXT_framebuffer_blit GL_EXT_framebuffer_multisample GL_EXT_framebuffer_multisample_blit_scaled GL_EXT_framebuffer_object GL_EXT_framebuffer_sRGB GL_EXT_geometry_shader4 GL_EXT_gpu_program_parameters GL_EXT_gpu_shader4 GL_EXT_multi_draw_arrays GL_EXT_packed_depth_stencil GL_EXT_packed_float GL_EXT_provoking_vertex GL_EXT_rescale_normal GL_EXT_secondary_color GL_EXT_separate_specular_color GL_EXT_shadow_funcs GL_EXT_stencil_two_side GL_EXT_stencil_wrap GL_EXT_texture_array GL_EXT_texture_compression_dxt1 GL_EXT_texture_compression_s3tc GL_EXT_texture_env_add GL_EXT_texture_filter_anisotropic GL_EXT_texture_integer GL_EXT_texture_lod_bias GL_EXT_texture_rectangle GL_EXT_texture_shared_exponent GL_EXT_texture_sRGB GL_EXT_texture_sRGB_decode GL_EXT_timer_query GL_EXT_transform_feedback GL_EXT_vertex_array_bgra GL_APPLE_aux_depth_stencil GL_APPLE_client_storage GL_APPLE_element_array GL_APPLE_fence GL_APPLE_float_pixels GL_APPLE_flush_buffer_range GL_APPLE_flush_render GL_APPLE_packed_pixels GL_APPLE_pixel_buffer GL_APPLE_rgb_422 GL_APPLE_row_bytes GL_APPLE_specular_vector GL_APPLE_texture_range GL_APPLE_transform_hint GL_APPLE_vertex_array_object GL_APPLE_vertex_point_size GL_APPLE_vertex_program_evaluators GL_APPLE_ycbcr_422 GL_ATI_separate_stencil GL_ATI_texture_env_combine3 GL_ATI_texture_float GL_IBM_rasterpos_clip GL_NV_blend_square GL_NV_conditional_render GL_NV_depth_clamp GL_NV_fog_distance GL_NV_fragment_program_option GL_NV_fragment_program2 GL_NV_light_max_exponent GL_NV_texgen_reflection GL_NV_texture_barrier GL_NV_vertex_program2_option GL_NV_vertex_program3 GL_SGI_color_matrix GL_SGIS_generate_mipmap GL_SGIS_texture_edge_clamp GL_SGIS_texture_lod '
I wonder if this is like a HiDPI issue. But I'd be surprised if this is the first we're hearing of it. We had to do some upgrades/fixes to the Qt backends to make them work in these HiDPI cases (2x logical pixels versus physical pixels). I personally only ever use Qt-based and I don't have a mac. The fact that this is backend dependent seems like it isn't necessarily an OpenGL issue which is my biggest fear when dealing with Apple products.
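For reference, a sketch of the kind of HiDPI handling involved (this mirrors what the patch at the top of this entry adds for the wx backend: physical framebuffer size = logical size times the content scale factor):

```python
def _vispy_get_physical_size(self):
    w, h = self.GetClientSize()           # logical size in points
    ratio = self.GetContentScaleFactor()  # e.g. 2.0 on a Retina display
    return int(w * ratio), int(h * ratio)
```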
I don't think any of the other @vispy/core developers have Macs either (M1 or not) so we'd have to find some people with Macs and test Wx or pyglet and see if this is a Mac thing or a M1 thing.
If someone tells me exactly what I need to run I'm happy to run it on my macOS 11 Intel mac. (And maybe even finally take the plunge and upgrade to 12!)
If you got the time just run official vispy examples
like this one
https://vispy.org/gallery/scene/infinite_line.html#sphx-glr-gallery-scene-infinite-line-py
with a pyglet or wx backend
I don't know how to set the backend.
just put
`
import vispy
vispy.app.use_app("wx")  # or vispy.app.use_app("pyglet")
`
in the first lines
also you would need pyglet or wx installed
I have an M1 mac (since a few months now). I can reproduce the error. Looking into it. | 2022-04-25T15:56:01 |
|
vispy/vispy | 2,328 | vispy__vispy-2328 | [
"1165"
] | f0af63083b25c27bd02ce3778bd9e62096c3ccd1 | diff --git a/vispy/visuals/text/text.py b/vispy/visuals/text/text.py
--- a/vispy/visuals/text/text.py
+++ b/vispy/visuals/text/text.py
@@ -227,6 +227,8 @@ def get_font(self, face, bold=False, italic=False):
alpha = (alpha + 0.5 * asum) / 3.0;
}
+ if (alpha <= 0) discard;
+
gl_FragColor = vec4(v_color.rgb, v_color.a * alpha);
}
"""
@@ -395,6 +397,10 @@ class TextVisual(Visual):
The 'gpu' method should produce higher quality results.
font_manager : object | None
Font manager to use (can be shared if the GLContext is shared).
+ depth_test : bool
+ Whether to apply depth testing. Default False. If False, the text
+ behaves like an overlay that does not get hidden behind other
+ visuals in the scene.
"""
_shaders = {
@@ -405,7 +411,7 @@ class TextVisual(Visual):
def __init__(self, text=None, color='black', bold=False,
italic=False, face='OpenSans', font_size=12, pos=[0, 0, 0],
rotation=0., anchor_x='center', anchor_y='center',
- method='cpu', font_manager=None):
+ method='cpu', font_manager=None, depth_test=False):
Visual.__init__(self, vcode=self._shaders['vertex'], fcode=self._shaders['fragment'])
# Check input
valid_keys = ('top', 'center', 'middle', 'baseline', 'bottom')
@@ -430,7 +436,7 @@ def __init__(self, text=None, color='black', bold=False,
self.rotation = rotation
self._text_scale = STTransform()
self._draw_mode = 'triangles'
- self.set_gl_state(blend=True, depth_test=False, cull_face=False,
+ self.set_gl_state(blend=True, depth_test=depth_test, cull_face=False,
blend_func=('src_alpha', 'one_minus_src_alpha'))
self.freeze()
| diff --git a/vispy/visuals/tests/test_text.py b/vispy/visuals/tests/test_text.py
--- a/vispy/visuals/tests/test_text.py
+++ b/vispy/visuals/tests/test_text.py
@@ -81,4 +81,15 @@ def test_face_bold_italic():
assert font1 is font4
+def test_text_depth_test():
+ t = Text(depth_test=False)
+ assert not t._vshare.gl_state["depth_test"]
+
+ t = Text(depth_test=True)
+ assert t._vshare.gl_state["depth_test"]
+
+ t = Text() # Default is false
+ assert not t._vshare.gl_state["depth_test"]
+
+
run_tests_if_main()
| Text isn't occluded properly
When using the Text visual in a 3D scene graph, occlusion of text isn't handled properly.
All visuals that were created after the text was added to the scene will "occlude" the text even if the text is in front.
All visuals that were created before the text cannot occlude the text and the text remains visible regardless.
It seems that the text is not properly incorporated into the 3D scene and instead it is just painted over.
I created a demo scene to illustrate the issue. Note how the numbers are always visible through the green plane, even when the numbers are behind the green plane. The red plane always hides the numbers, even when numbers are in front of the plane.
``` python
import numpy as np
import vispy
import vispy.scene
canvas = vispy.scene.SceneCanvas(keys='interactive', bgcolor=(1,1,1))
canvas.show()
view = canvas.central_widget.add_view()
view.camera = vispy.scene.TurntableCamera(up='z', fov=60)
view.camera.set_range(x=(-5, 5), y=(-5, 5), z=(-5, 5))
x_coords = np.array((-5,5))
y_coords = np.array((-5,5))
z_data = np.array([(3,3),(3,3)])
item = vispy.scene.visuals.SurfacePlot(x=x_coords, y=y_coords, z=z_data, color=(0, 1, 0, 1))
view.add(item)
for i in range(-5, 5):
    item = vispy.scene.Text(text="%i" % i, color='black', bold=True, font_size=200, pos=(0,0,i))
    view.add(item)
item = vispy.scene.visuals.SurfacePlot(x=x_coords, y=y_coords, z=-z_data, color=(1, 0, 0, 1))
view.add(item)
vispy.app.run()
```
| @djhoese
This is a drawing order issue. It can be worked around by forcing the drawing order with `.order` on each of the visuals. Here is the result:

When I set the first surface to `.order = 3`, the text to `.order = i`, and the last surface to `.order = -3`.
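A hedged sketch of that workaround applied to the demo above (the variable names are just for illustration, and as noted below the chosen values only work for one viewing direction):

```python
green = vispy.scene.visuals.SurfacePlot(x=x_coords, y=y_coords, z=z_data,
                                        color=(0, 1, 0, 1))
green.order = 3
view.add(green)

for i in range(-5, 5):
    label = vispy.scene.Text(text="%i" % i, color='black', bold=True,
                             font_size=200, pos=(0, 0, i))
    label.order = i
    view.add(label)

red = vispy.scene.visuals.SurfacePlot(x=x_coords, y=y_coords, z=-z_data,
                                      color=(1, 0, 0, 1))
red.order = -3
view.add(red)
```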
Given the age of this issue I'm going to close this assuming that I probably won't get a response and that I provided a reasonable workaround.
@djhoese you still get a response. Even though I'm not using vispy anymore in my current projects.
The workaround wouldn't have helped though. Because the issue remains that the text occlusion isn't calculated correctly. All you've done is to manually work out the occlusion for this particular scene and camera angle. In the example posted here, if you start moving the camera to a new position, e.g. looking from below the red plane, it will be all wrong again.
The app I was working on at the time had vehicles driving around and their name was hovering above each vehicle. You could then see the name of vehicles even when they where behind a mountain or similar which was rather odd and ugly. Working out the occlusion manually in such a setting would have been about impossible.
But like I said, I'm no longer working on this so you can leave the issue closed - unless you actually want to solve the occlusion issue properly. :-)
Ugh you are absolutely right. I thought I had checked looking from the red surface. We definitely want to make this easier or just fix it all around, but there are also other similar issues. I think I'll reopen this as it is a pretty simple and clear example compared to some of the others I've seen.
Could it be as simple that the text visual is not writing to the depth buffer?
Very likely. I don't have much experience with the text visual or writing to the gl_FragDepth. @almarklein any chance you have a little time to throw together a PR?
Probably not this week. I'll keep the notification open in my mailbox, so I can see if I can find some time next week or so.
@djhoese
Is there any way we can display 3D text in a view? TextVisual currently can only display 2D text, as far as I've seen! Please correct me if I'm wrong.
I (finally!) took some time to look into this. It's not that the depth is not set, but the depth-test is disabled. It looks like this is deliberate:
https://github.com/vispy/vispy/blob/f0af63083b25c27bd02ce3778bd9e62096c3ccd1/vispy/visuals/text/text.py#L433-L434
If I enable it, and add `if (alpha <= 0) discard;` right before [this line](https://github.com/vispy/vispy/blob/f0af63083b25c27bd02ce3778bd9e62096c3ccd1/vispy/visuals/text/text.py#L230) then I get this:
<img width="561" alt="image" src="https://user-images.githubusercontent.com/3015475/165965236-7e94e780-fee1-4cb7-8357-c33df1d76bd5.png">
This looks more like what (I think) was intended, except for the edges due to the improper blending (yes, transparency is hard!). This effect can be removed by selecting a dark background.
I don't think it would be a good idea to simply turn on the depth test, since people might rely on it behaving like an overlay. My suggestion would be to make it optional.
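For reference, that is what the patch at the top of this entry ends up doing: `TextVisual` gains a `depth_test` keyword that defaults to `False` (keeping the overlay behaviour), so opting in looks like:

```python
# assumes the new depth_test keyword added by this PR
label = vispy.scene.Text("3", pos=(0, 0, 3), depth_test=True, parent=view.scene)
```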
| 2022-04-29T14:38:39 |
vispy/vispy | 2,355 | vispy__vispy-2355 | [
"2344"
] | 02929ef4d82ecc154d4e19ad63edfffd19af4f5e | diff --git a/vispy/app/backends/_glfw.py b/vispy/app/backends/_glfw.py
--- a/vispy/app/backends/_glfw.py
+++ b/vispy/app/backends/_glfw.py
@@ -27,7 +27,7 @@
try:
import glfw
except ImportError:
- why_not = "Could not import glwf, you may need to `pip install glfw` first."
+ why_not = "Could not import glfw, you may need to `pip install glfw` first."
available, testable, why_not, which = False, False, why_not, None
except Exception as err:
why_not = "Error importing glfw: " + str(err)
diff --git a/vispy/app/canvas.py b/vispy/app/canvas.py
--- a/vispy/app/canvas.py
+++ b/vispy/app/canvas.py
@@ -606,7 +606,7 @@ class MouseEvent(Event):
will return the button that started the drag (same thing as
``event.press_event.button``).
buttons : [int, ...]
- The list of buttons depressed during this event.
+ The list of buttons pressed during this event.
modifiers : tuple of Key instances
Tuple that specifies which modifier keys were pressed down at the
time of the event (shift, control, alt, meta).
@@ -632,7 +632,9 @@ def __init__(self, type, pos=None, button=None, buttons=None,
Event.__init__(self, type, **kwargs)
self._pos = np.array([0, 0]) if (pos is None) else np.array(pos)
self._button = int(button) if (button is not None) else None
- self._buttons = [] if (buttons is None) else buttons
+ # Explicitly add button to buttons if newly pressed, check #2344 for more reference
+ newly_pressed_buttons = [button] if button is not None and type == 'mouse_press' else []
+ self._buttons = [] if (buttons is None) else buttons + newly_pressed_buttons
self._modifiers = tuple(modifiers or ())
self._delta = np.zeros(2) if (delta is None) else np.array(delta)
self._last_event = last_event
| diff --git a/vispy/app/tests/test_canvas.py b/vispy/app/tests/test_canvas.py
--- a/vispy/app/tests/test_canvas.py
+++ b/vispy/app/tests/test_canvas.py
@@ -2,7 +2,7 @@
# Copyright (c) Vispy Development Team. All Rights Reserved.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
-from vispy.app import Canvas
+from vispy.app import Canvas, MouseEvent
from vispy.visuals import ImageVisual
from vispy.testing import requires_application
from vispy.visuals.transforms import STTransform
@@ -106,3 +106,17 @@ def on_draw(ev):
rgba_result = c.render()
assert not np.allclose(rgba_result[..., :3], 0)
+
+
+@requires_application()
[email protected]("mouse_event_type, button, buttons, expected_button, expected_buttons", [
+ ('mouse_press', 1, [], 1, [1]),
+ ('mouse_release', 1, [1], 1, [1]),
+ # left click pressed and held, followed by a right click
+ ('mouse_press', 2, [1], 2, [1, 2]),
+ ('mouse_release', 2, [1, 2], 2, [1, 2]),
+])
+def test_mouse_event(mouse_event_type, button, buttons, expected_button, expected_buttons):
+ mev = MouseEvent(type=mouse_event_type, button=button, buttons=buttons)
+ assert mev.buttons == expected_buttons
+ assert mev.button == expected_button
| Mouse event behaviour
```
def on_mouse_press(self, event):
    print(event.buttons)
```
The [docs](https://vispy.org/api/vispy.app.canvas.html?highlight=mouseevent#vispy.app.canvas.MouseEvent) says
```
buttons[int, …]
The list of buttons pressed during this event.
```
But pressing the left click returns an empty list, and while holding down the left click and pressing other mouse buttons it returns the held button key [1], [2] etc.
Is it a bug or am I getting something wrong?
I am using `glfw` as my backend.
| Can you post a small example script that I can use to try to reproduce and debug this?
Also, you have the documentation wrong. It doesn't say "pressed" it says "depressed". You may want to play with letting go of the button (depressing) and see if that changes the results. However, I think that would make more sense to check on a `on_mouse_release` event handler. I haven't used mouse events heavily for a while so I could be wrong.
This is the script I was using; event.buttons returns the buttons that were already pressed during that event, excluding the mouse button press that triggered the event.
If I pressed the left mouse button, held on to it, and then pressed the right mouse button, `event.buttons` gives me `1` as output.
```
import math
from vispy import app, gloo, use
use("glfw")
class Canvas(app.Canvas):
    def __init__(self, *args, **kwargs):
        app.Canvas.__init__(self, *args, **kwargs)

    def on_mouse_press(self, event):
        print(event.buttons)


if __name__ == '__main__':
    canvas = Canvas(keys='interactive', always_on_top=True)
    canvas.show()
    app.run()
```
Does "depressed" mean released or continued to be pressed?
I made some changes and printed out the `.button` and the `.buttons`:
```python
from vispy import app, gloo, use
class Canvas(app.Canvas):
    def on_mouse_press(self, event):
        print(f"Press: {event.button=} | {event.buttons=}")

    def on_mouse_release(self, event):
        print(f"Release: {event.button=} | {event.buttons=}")


if __name__ == '__main__':
    canvas = Canvas(keys='interactive', always_on_top=True)
    canvas.show()
    app.run()
```
Here's what I get (with annotations):
```
# left press
Press: event.button=1 | event.buttons=[]
# left release
Release: event.button=1 | event.buttons=[1]
# right press
Press: event.button=2 | event.buttons=[]
# right release
Release: event.button=2 | event.buttons=[2]
# left press
Press: event.button=1 | event.buttons=[]
# right press
Press: event.button=2 | event.buttons=[1]
# right release
Release: event.button=2 | event.buttons=[1, 2]
# left release
Release: event.button=1 | event.buttons=[1]
```
To me, and let me know if this is the original bug you were mentioning, the last press/release cycle is a bug. When the right button is pressed the `1` is included in the "depressed" `.buttons` list, but it wasn't released. When the right button is then released we also see `1` in the `.buttons` list, I would consider this a bug. Unless the `.buttons` list is meant to be "other buttons being acted on including or in addition to the button that triggered the event", then the docs are wrong.
Ok, apparently as a native english speaker I don't know what "depressed" means. It means currently pressed here. See this code:
https://github.com/vispy/vispy/blob/fbf90ebadbdd1054f3bb3bba4cebc7e60e2082f5/vispy/app/base.py#L188
and the other "buttons" related code in that module. I think to get a list of "all buttons pressed right now" you need to do `.buttons` plus `.button`. I would be open to the idea of a pull request that moves the above linked code so `.buttons` always includes `.button` but I can imagine this causing bugs for some people.
> you need to do `.buttons` plus `.button`
Yes, that is what I am doing right now.
> but I can imagine this causing bugs for some people.
We could change the docs to reflect the current behaviour if that's the case, but that might get confusing.
I would prefer adding the `button` currently pressed to the `buttons` list, that would fix and make the behaviour intuitive as well.
Output like this is what a user (or at least I) would expect.
```
# left press
Press: event.button=1 | event.buttons=[1]
# left release
Release: event.button=1 | event.buttons=[1]
# right press
Press: event.button=2 | event.buttons=[2]
# right release
Release: event.button=2 | event.buttons=[2]
# left press
Press: event.button=1 | event.buttons=[1]
# right press
Press: event.button=2 | event.buttons=[1,2]
# right release
Release: event.button=2 | event.buttons=[1, 2]
# left release
Release: event.button=1 | event.buttons=[1]
```
> I don't know what "depressed" means
I am confused as well about this. I think it means released or "de-pressed" :p
It would be probably better to hear what others have to say on this to figure out the correct behaviour that one would expect.
Yes, depressed means pressed (https://www.dictionary.com/browse/depressed; definition 2); it's not "de-pressed".
Thanks @QuLogic. @vispy/core what are your thoughts? Should the MouseEvent `.buttons` include the newly pressed button (the one causing the "press" event)? See my comment above.
...actually @tushar5526 I'm a little confused by your expected output. I would have thought that the "release" event would not include the released button in the `.buttons`. It is no longer pressed so it should no longer be in the list, right? That is, if we're using "depressed" to mean pressed.
Sorry for the confusion, I was wrong about what "depressed" means, hence my expected output.
> I would have thought that the "release" event would not include the released button in the .buttons. It is no longer pressed so it should no longer be in the list, right? That is, if we're using "depressed" to mean pressed.
Yes, this should be the correct behaviour.
We can change "depressed" to "pressed" in the docs, to avoid confusion.
> We can change "depressed" to "pressed" in the docs, to avoid confusion.
Agreed. Let's see what others think (@almarklein, @kmuehlbauer, @rougier any experience with mouse events in other applications?)
I also learned what "depressed" means and I agree this brings confusion. "pressed" would be much better.
Is "depressed" a common term in the UX world? To be honest I think of a sad mouse button when I hear it.
Ok so I think we've settled on two changes:
1. "depressed" -> "pressed" in the docstring
2. Add the newly pressed button (the one triggering the press event) to the `.buttons` list.
@tushar5526 agreed? how would you like to make a pull request for these changes?
@djhoese Agreed.
[https://github.com/vispy/vispy/blob/main/vispy/app/canvas.py/#L635](https://github.com/vispy/vispy/blob/main/vispy/app/canvas.py/#L635), we can simply add `button` to the `buttons` if the event type is mouse_pressed.
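For reference, that is essentially what the patch at the top of this entry does inside `MouseEvent.__init__`:

```python
newly_pressed_buttons = [button] if button is not None and type == 'mouse_press' else []
self._buttons = [] if (buttons is None) else buttons + newly_pressed_buttons
```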
I suppose this should work. | 2022-07-12T18:42:23 |
vispy/vispy | 2,411 | vispy__vispy-2411 | [
"2407"
] | 8bf4f9e538dee5cb986779006b84f62eed81ed7c | diff --git a/vispy/app/backends/_qt.py b/vispy/app/backends/_qt.py
--- a/vispy/app/backends/_qt.py
+++ b/vispy/app/backends/_qt.py
@@ -191,7 +191,7 @@ def _get_event_xy(ev):
qt_keys.Key_Return: keys.ENTER,
qt_keys.Key_Tab: keys.TAB,
}
-if PYQT6_API:
+if PYQT6_API or PYSIDE6_API:
BUTTONMAP = {
QtCore.Qt.MouseButton.NoButton: 0,
QtCore.Qt.MouseButton.LeftButton: 1,
@@ -278,11 +278,11 @@ def _set_config(c):
glformat.setGreenBufferSize(c['green_size'])
glformat.setBlueBufferSize(c['blue_size'])
glformat.setAlphaBufferSize(c['alpha_size'])
- if QT5_NEW_API or PYSIDE6_API:
+ if QT5_NEW_API:
# Qt5 >= 5.4.0 - below options automatically enabled if nonzero.
glformat.setSwapBehavior(glformat.DoubleBuffer if c['double_buffer']
else glformat.SingleBuffer)
- elif PYQT6_API:
+ elif PYQT6_API or PYSIDE6_API:
glformat.setSwapBehavior(glformat.SwapBehavior.DoubleBuffer if c['double_buffer']
else glformat.SwapBehavior.SingleBuffer)
else:
| diff --git a/vispy/app/tests/test_context.py b/vispy/app/tests/test_context.py
--- a/vispy/app/tests/test_context.py
+++ b/vispy/app/tests/test_context.py
@@ -17,6 +17,8 @@ def test_context_properties():
return # cannot set more than once on Pyglet
if a.backend_name.lower() == 'osmesa':
return # cannot set config on OSMesa
+ if 'pyqt5' in a.backend_name.lower() or 'pyqt6' in a.backend_name.lower() or 'pyside2' in a.backend_name.lower() or 'pyside6' in a.backend_name.lower():
+ pytest.xfail("Context sharing is not supported in PyQt5, PyQt6, PySide2, or PySide6 at this time.")
# stereo, double buffer won't work on every sys
configs = [dict(samples=4), dict(stencil_size=8),
@@ -68,9 +70,9 @@ def check():
# Check while c1 is active
check()
- # pyqt5 does not currently support context sharing
- if 'pyqt5' in c1.app.backend_name.lower() or 'pyqt6' in c1.app.backend_name.lower():
- pytest.xfail("Context sharing is not supported in PyQt5 at this time.")
+ # pyqt5 does not currently support context sharing, pyside6 seg faults on app tests
+ if 'pyqt5' in c1.app.backend_name.lower() or 'pyqt6' in c1.app.backend_name.lower() or 'pyside2' in c1.app.backend_name.lower() or 'pyside6' in c1.app.backend_name.lower():
+ pytest.xfail("Context sharing is not supported in PyQt5, PyQt6, PySide2, or PySide6 at this time.")
# Tkinter does not currently support context sharing
if 'tk' in c1.app.backend_name.lower():
| error with pyside6 6.4.0 backend on macOS arm64
Making a fresh env, python 3.9 or python 3.10, then installing pyside6 `pip install pyside6` and vispy (0.11.0) works, but running
https://vispy.org/gallery/scene/turntable_box.html#sphx-glr-gallery-scene-turntable-box-py
fails with error:
```
╰─ python turntable_box.py (test-env3) ─╯
Traceback (most recent call last):
File "/Users/piotrsobolewski/Downloads/turntable_box.py", line 17, in <module>
canvas = scene.SceneCanvas(keys='interactive', size=(800, 600), show=True)
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/scene/canvas.py", line 135, in __init__
super(SceneCanvas, self).__init__(
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/app/canvas.py", line 211, in __init__
self.create_native()
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/app/canvas.py", line 228, in create_native
self._app.backend_module.CanvasBackend(self, **self._backend_kwargs)
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/app/backends/_qt.py", line 372, in __init__
self._init_specific(p, kwargs)
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/app/backends/_qt.py", line 791, in _init_specific
glformat = _set_config(p.context.config)
File "/Users/piotrsobolewski/Dev/miniforge3/envs/test-env3/lib/python3.9/site-packages/vispy/app/backends/_qt.py", line 283, in _set_config
glformat.setSwapBehavior(glformat.DoubleBuffer if c['double_buffer']
AttributeError: 'PySide6.QtGui.QSurfaceFormat' object has no attribute 'DoubleBuffer'
```
(by comparison the same but with `pip install pyqt6` works fine.)
| Just so we keep a record: see related #2335 and #2406.
It seems like we might need to add yet another test environment for pyside2 (based on #2406). For this PySide6 stuff I'm hoping we can just live with user testing for now. I can't find it right now but I believe there was another issue around with someone getting a seg fault on macOS with Qt6 installed. That might have been specific to their environment though.
@psobolewskiPhD do you or someone on the napari team (@brisvag?) think you could add a new CI environment to vispy that uses pyside2 to run the examples? Or if you want pyside6 instead?
> Just so we keep a record: see related #2335 and #2406.
>
> It seems like we might need to add yet another test environment for pyside2 (based on #2406). For this PySide6 stuff I'm hoping we can just live with user testing for now. I can't find it right now but I believe there was another issue around with someone getting a seg fault on macOS with Qt6 installed. That might have been specific to their environment though.
>
> @psobolewskiPhD do you or someone on the napari team (@brisvag?) think you could add a new CI environment to vispy that uses pyside2 to run the examples? Or if you want pyside6 instead?
At least from a napari PoV Qt6 (so pyside6) is really experimental and not officially supported. But pyside2 is supported and maybe even cleaner from a licensing PoV, so I think CI tests with it would be a good idea. napari tests windows and ubuntu with PySide2==5.15.2.1 and ubuntu with pyside6, so this issue could be mac specific or conda-forge specific or version specific.
Based on some sleuthing by @czaki I tried python 3.10 and pyside6 (latest, 6.4.0) but it still gave the same error. However! Using pyside6 6.3.0 and 6.3.2 was fine, so this issue is specific to something that's changed in pyside 6 6.4.0 . I've edited the title.
Yes. This is a PySide6 6.4.0 problem and requires a new qtpy release:
https://github.com/spyder-ide/qtpy/pull/374
> Yes. This is a PySide6 6.4.0 problem and requires a new qtpy release:
> [spyder-ide/qtpy#374](https://github.com/spyder-ide/qtpy/pull/374)
Unfortunately installing that PR doesn't solve this issue. Again it's puzzling because it's in the docs:
https://doc.qt.io/qtforpython/PySide6/QtGui/QSurfaceFormat.html#PySide6.QtGui.PySide6.QtGui.QSurfaceFormat.SwapBehavior
Could you show the traceback?
Is the code working with 6.3.1 and/or 6.3.2?
Traceback is in the OP and yes both 6.3.0 and 6.3.2 work (https://github.com/vispy/vispy/issues/2407#issuecomment-1283776193)
Could you report it to vispy?
> Could you report it to vispy?
How? This is a vispy issue... Sorry if I'm missing something, it's been a long day.
It is a long day, and this is the proper diagnosis for me. I was pretty sure that I was in the napari repository.
@psobolewskiPhD I'm trying to look through the PySide6 source code, but their git web UI is not helpful. Since I don't see it defined here, what is the minimal number of steps to reproduce this (what commands create the environment)? Are you able to import the exact PySide6 module and access the attribute it is failing on?
```
from PySide6.QtGui import QSurfaceFormat
print(hasattr(QSurfaceFormat, "DoubleBuffer"))
```
Scratch that code, try this:
```
from PySide6.QtGui import QSurfaceFormat
qsf = QSurfaceFormat()
print(hasattr(qsf, "DoubleBuffer"))
print(hasattr(qsf.SwapBehavior, "DoubleBuffer"))
```
This "SwapBehavior" version matches what vispy is doing for PyQt6 so maybe PySide6 6.4.0+ is matching PyQt6 for this interface...or something similar.
Replicating the env:
```
mamba create -n vispy-clean -c conda-forge python=3.9
conda activate vispy-clean
pip install pyside6
pip install vispy
```
Here's the output I get on my arm64 macOS 12.6:
```
>>> from PySide6.QtGui import QSurfaceFormat
>>> qsf = QSurfaceFormat()
>>> print(hasattr(qsf, "DoubleBuffer"))
False
>>> print(hasattr(qsf.SwapBehavior, "DoubleBuffer"))
True
>>>
```
Great! I see the same thing on my Ubuntu system. Note that the vispy installation isn't actually necessary for this simple test.
So it looks like PySide6 6.4.0 uses the same interfaces as PyQt6. This should be simple, but it's yet another version check we need to do in `_qt.py`. @psobolewskiPhD or @Czaki would you be willing and have the time to make a PR for this?
I think the main change is here:
https://github.com/vispy/vispy/blob/8bf4f9e538dee5cb986779006b84f62eed81ed7c/vispy/app/backends/_qt.py#L281-L287
I'm not sure if this is the only change. We may need to rename `PYSIDE6_API` to `PYSIDE6_LEGACY_API` and create a new `PYSIDE6_API` that is used in the same locations as `PYQT6_API`. *If* there are more name changes that affect vispy then it may be that PyQt6 and PySide6 now have equivalent interfaces. In that case importing PySide6 6.4+ could just set `PYQT6_API` to True.
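To make the idea concrete, here is a rough sketch of what such a check around the `setSwapBehavior` call could look like (an assumption about the eventual fix, not merged code):
```python
def _get_swap_behavior(glformat, double_buffer):
    """Pick the right SwapBehavior value for old and new Qt bindings (sketch only)."""
    # Qt5-style bindings expose flat enum members directly on QSurfaceFormat...
    if hasattr(glformat, 'DoubleBuffer'):
        return glformat.DoubleBuffer if double_buffer else glformat.SingleBuffer
    # ...while PyQt6 and PySide6 >= 6.4 only expose them on the scoped enum.
    swap = glformat.SwapBehavior
    return swap.DoubleBuffer if double_buffer else swap.SingleBuffer
```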
Awesome 👏
I can try to take a look over the weekend.
❤️ | 2022-10-22T15:37:53 |
vispy/vispy | 2,437 | vispy__vispy-2437 | [
"2386"
] | 7f6718dabc671d3340acd4eca6ac47182f6bbea3 | diff --git a/codegen/createglapi.py b/codegen/createglapi.py
--- a/codegen/createglapi.py
+++ b/codegen/createglapi.py
@@ -116,10 +116,10 @@ def __repr__(self):
DEFINE_CONST_MAP = '''
ENUM_MAP = {}
-for ob in list(globals().values()):
- if repr(ob).startswith('GL_'):
+for var_name, ob in list(globals().items()):
+ if var_name.startswith('GL_'):
ENUM_MAP[int(ob)] = ob
-del ob
+del ob, var_name
'''
diff --git a/vispy/gloo/gl/_constants.py b/vispy/gloo/gl/_constants.py
--- a/vispy/gloo/gl/_constants.py
+++ b/vispy/gloo/gl/_constants.py
@@ -326,7 +326,7 @@ def __repr__(self):
ENUM_MAP = {}
-for ob in list(globals().values()):
- if repr(ob).startswith('GL_'):
+for var_name, ob in list(globals().items()):
+ if var_name.startswith('GL_'):
ENUM_MAP[int(ob)] = ob
-del ob
+del ob, var_name
| character mapping issue from vispy.gloo.gl._constants
It looks like there is a character encoding issue in vispy that is happening in this bug report, please correct me if I am wrong.
https://forum.image.sc/t/try-to-load-napari-library-from-stardist-napari-dock-widget-import-surface-from-polys-and-failed-like-bellow-any-help-would-be-more-than-welcome/71524
| I am not an expert on text encoding and how Python chooses one or the other, but my understanding was that Python would default to UTF-8. It is also my understanding based on:
https://stackoverflow.com/questions/26324622/what-characters-do-not-directly-map-from-cp1252-to-utf-8
That CP1252 (the module doing the complaining about the encoding) should match UTF-8 for all ASCII characters. If I read the file on my machine, everything is telling me it is entirely ASCII. This leads me to believe the file somehow got corrupted on the users system. I think this could be tested by doing:
```
x = open("vispy/gloo/gl/_constants.py", encoding="cp1252", mode="r")
print(x.read())
```
I believe this reads the file as if it was on-disk as cp1252. When I do this locally it loads fine. I would expect a ValueError if any of the file was non-ASCII and/or non-CP1252 compatible. I don't get an error.
Update: Looks like Python `open` uses https://docs.python.org/3/library/locale.html#locale.getpreferredencoding by default which is platform specific.
Awesome, thank you kindly @djhoese.
Keep us updated if the user lets you know whether their issue has been resolved.
Hi @djhoese, I am currently experiencing the same issue as OP. Specifically, `UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2575: character maps to <undefined>` when trying to use the napari viewer library.
Attempting to open the "_constants.py" file as described above works as expected.
Any advice would be appreciated!
@h-westmacott Great! (that we have someone who can test, not that you're having trouble)
What do you get when you run `python -c "import locale; print(locale.getpreferredencoding())"` on the command line? Here's what I get on my system:
```
$ python -c "import locale; print(locale.getpreferredencoding())"
UTF-8
```
Other questions that come to mind:
1. What operating system are you on?
2. Are you running/importing vispy/napari with `python ...` from the command line? Or some other way?
3. What do you get when you do:
```
python -c "for ob in list(globals().values()): print(ob); print(repr(ob))"
```
Happy to be able to help! I'm working on a Windows machine, and this issue occurs when I try to run any napari commands from a Python script or the terminal. Results included below:
```
python -c "import locale; print(locale.getpreferredencoding())"
cp1252
```
```
python -c "for ob in list(globals().values()): print(ob); print(repr(ob))"
__main__
'__main__'
None
None
None
None
<class '_frozen_importlib.BuiltinImporter'>
<class '_frozen_importlib.BuiltinImporter'>
None
None
{}
{}
<module 'builtins' (built-in)>
<module 'builtins' (built-in)>
```
How do you feel about hacking your vispy installation? If you can find the `_constants.py` module and add a print statement before this line:
https://github.com/vispy/vispy/blob/c52c48194c0af60ce97ac844eb87a54f8434988c/vispy/gloo/gl/_constants.py#L330
That says `print(ob)` that might help us figure some of this out. Although...that print might actually produce the same error. :thinking:
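In other words, the bottom of that file would temporarily look roughly like this:
```python
ENUM_MAP = {}
for ob in list(globals().values()):
    print(ob)  # temporary debug print to see which object repr() chokes on
    if repr(ob).startswith('GL_'):
        ENUM_MAP[int(ob)] = ob
del ob
```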
I guess I assume doing `python -c "import vispy._constants"` raises the exception for you, right?
Adding in the print statement before line 330 adds the following to the terminal output before the traceback:
```
vispy.gloo.gl._constants
GL definitions converted to Python by codegen/createglapi.py.
THIS CODE IS AUTO-GENERATED. DO NOT EDIT.
Constants for OpenGL ES 2.0.
vispy.gloo.gl
<_frozen_importlib_external.SourceFileLoader object at 0x00000175161FE5E0>
ModuleSpec(name='vispy.gloo.gl._constants', loader=<_frozen_importlib_external.SourceFileLoader object at 0x00000175161FE5E0>, origin='C:\\Users\\User\\anaconda3\\envs\\lightsheet\\lib\\site-packages\\vispy\\gloo\\gl\\_constants.py')
C:\Users\User\anaconda3\envs\lightsheet\lib\site-packages\vispy\gloo\gl\_constants.py
C:\Users\User\anaconda3\envs\lightsheet\lib\site-packages\vispy\gloo\gl\__pycache__\_constants.cpython-39.pyc
```
`python -c "import vispy.gloo.gl._constants"` does indeed raise the same exception.
`python -c "import vispy._constants"` raises `ModuleNotFoundError: No module named 'vispy._constants'`
> python -c "import vispy.gloo.gl._constants" does indeed raise the same exception.
python -c "import vispy._constants" raises ModuleNotFoundError: No module named 'vispy._constants'
:man_facepalming: Thanks.
For the printed output, does the traceback say the `print` is the cause or the `repr` line?
One more hack:
```
ENUM_MAP = {}
for key, ob in list(globals().items()):
print(f"#### {key}:")
print(ob)
if repr(ob).startswith('GL_'):
ENUM_MAP[int(ob)] = ob
del ob
```
If it is the `__builtins__` dict then we have a lot more work to do.
Yes, the `print` line is now shown as the cause instead of the `repr` line:
```
...
File "C:\Users\User\anaconda3\envs\lightsheet\lib\site-packages\vispy\gloo\gl\_constants.py", line 330, in <module>
print(ob)
File "C:\Users\User\anaconda3\envs\lightsheet\lib\_sitebuiltins.py", line 61, in __repr__
self.__setup()
File "C:\Users\User\anaconda3\envs\lightsheet\lib\_sitebuiltins.py", line 51, in __setup
data = fp.read()
File "C:\Users\User\anaconda3\envs\lightsheet\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2575: character maps to <undefined>
```
Following the extra hack, the console outputs:
```
#### __name__:
vispy.gloo.gl._constants
#### __doc__:
GL definitions converted to Python by codegen/createglapi.py.
THIS CODE IS AUTO-GENERATED. DO NOT EDIT.
Constants for OpenGL ES 2.0.
#### __package__:
vispy.gloo.gl
#### __loader__:
<_frozen_importlib_external.SourceFileLoader object at 0x000001B8E40CE610>
#### __spec__:
ModuleSpec(name='vispy.gloo.gl._constants', loader=<_frozen_importlib_external.SourceFileLoader object at 0x000001B8E40CE610>, origin='C:\\Users\\User\\anaconda3\\envs\\lightsheet\\lib\\site-packages\\vispy\\gloo\\gl\\_constants.py')
#### __file__:
C:\Users\User\anaconda3\envs\lightsheet\lib\site-packages\vispy\gloo\gl\_constants.py
#### __cached__:
C:\Users\User\anaconda3\envs\lightsheet\lib\site-packages\vispy\gloo\gl\__pycache__\_constants.cpython-39.pyc
#### __builtins__:
```
Ok so now, create a test module in your current directory named `check_builtins.py` with the below in it:
```
print(__builtins__)
```
And do `python -c "import check_builtins"`.
I should have said this earlier, but regardless of what the cause of this error is we can definitely simplify this for loop and avoid unimportant things in the globals dictionary (things prefixed with `_` for one). But I'd really like to know what in Python itself is causing an error. Might be something that can be fixed upstream in CPython.
Wild idea...does your username have non-ASCII characters in it?
The check_builtins.py file raises the same error:
```
python -c "import check_builtins"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\User\Documents\lightsheet-model\train-lightsheet-model\check_builtins.py", line 1, in <module>
print(__builtins__)
File "C:\Users\User\anaconda3\envs\lightsheet\lib\_sitebuiltins.py", line 61, in __repr__
self.__setup()
File "C:\Users\User\anaconda3\envs\lightsheet\lib\_sitebuiltins.py", line 51, in __setup
data = fp.read()
File "C:\Users\User\anaconda3\envs\lightsheet\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 2575: character maps to <undefined>
```
No non-ASCII in my username, no.
Ok so we've removed vispy from the equation. So that part of `_sitebuiltins.py` is parsing license and copyright files. You can see that here:
https://github.com/python/cpython/blob/f13f466474ed53529acd3f209070431fbae14323/Lib/_sitebuiltins.py#L40-L42
And you can see where this is used here:
https://github.com/python/cpython/blob/f13f466474ed53529acd3f209070431fbae14323/Lib/site.py#L404-L427
We know from the error message that the problem is only with one of the `_Printer` usages where files are loaded/read so `credits` and `copyright` shouldn't be the problem. That means it is the license file. So updating the check_builtins.py script with `print(__builtins__['license'])` should still fail on your system.
If that fails, then could you find the LICENSE (or LICENSE.txt) file in your `lib/pythonXX` directory and copy it here? For example, mine on linux is here:
```
~/miniconda3/envs/satpy_py310/lib/python3.10/LICENSE.txt
```
Oh or the license file could be in your current directory or your parent directory.
Scratch the parent directory idea, but I do think it can be a license file in your current directory. And you don't need to guess you could just edit the check script to do `print(__builtins__['license']._Printer__filenames)`.
And I'm a little confused because at least in the current main branch of CPython the license file is read as UTF-8:
https://github.com/python/cpython/blob/f13f466474ed53529acd3f209070431fbae14323/Lib/_sitebuiltins.py#L50
So why is cp1252.py being used?
Ah I bet this is a bug that was fixed in Python 3.10. Here is the open line in Python 3.9:
https://github.com/python/cpython/blob/c09dba57cfbbf74273ce44b1f48f71b46806605c/Lib/_sitebuiltins.py#L50
And in Python 3.10:
https://github.com/python/cpython/blob/bc2cdfc81571dc759a90b94dd3f4858b98cad1eb/Lib/_sitebuiltins.py#L50
An encoding was added.
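Paraphrasing the relevant line from `_sitebuiltins.py` in the two versions (the surrounding code is otherwise the same):
```python
# Python 3.9 (paraphrased): no encoding given, so the locale's preferred
# encoding is used -- cp1252 on many Windows setups.
with open(filename) as fp:
    data = fp.read()

# Python 3.10+ (paraphrased): the license/credits files are always read as UTF-8.
with open(filename, encoding='utf-8') as fp:
    data = fp.read()
```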
I had this issue on Windows, reproduced by:
```python -c "import vispy.gloo.gl._constants"```
Running with the "-X utf8" flag fixed the error here and in a broader napari/vispy call:
```python -X utf8 -c "import vispy.gloo.gl._constants"```
Thanks @Charles-Fieseler-Vienna. I'll add my suggested fix from #1330. If someone wants to make a PR to fix it that would be greatly appreciated:
I think a good solution might be the following:
```python
ENUM_MAP = {}
for var_name, ob in list(globals().items()):
if var_name.startswith('GL_'):
ENUM_MAP[int(ob)] = ob
del ob, var_name
```
This has the added benefit that we don't even try `repr`'ing the objects at all. If you look at all the constants defined in that module they all start with `GL_` so this just avoids dealing with anything else. Note that the edit would have to occur here:
https://github.com/vispy/vispy/blob/c52c48194c0af60ce97ac844eb87a54f8434988c/codegen/createglapi.py#L117-L123
As the `_constants.py` module is generated from this code. I think I will then need to run the code to regenerate the files. | 2022-12-17T18:35:50 |
|
vispy/vispy | 2,470 | vispy__vispy-2470 | [
"2453"
] | 5449e3402b35da4d6a8215d8d72e19dcf05917d7 | diff --git a/examples/basics/visuals/markers.py b/examples/basics/visuals/markers.py
--- a/examples/basics/visuals/markers.py
+++ b/examples/basics/visuals/markers.py
@@ -4,7 +4,13 @@
# Copyright (c) Vispy Development Team. All Rights Reserved.
# Distributed under the (new) BSD License. See LICENSE.txt for more info.
# -----------------------------------------------------------------------------
-""" Display markers at different sizes and line thicknessess.
+"""Display markers at different sizes and line thicknesses.
+
+Keyboard options:
+* spacebar: Cycle through possible marker symbols.
+* "s": Switch between "fixed" marker scaling (initial setting) and "scene"
+ scaling.
+
"""
import numpy as np
@@ -60,10 +66,11 @@ def on_key_press(self, event):
self.markers.symbol = self.markers.symbols[self.index]
self.update()
elif event.text == 's':
- self.markers.scaling = not self.markers.scaling
+ self.markers.scaling = "fixed" if self.markers.scaling != "fixed" else "scene"
self.update()
if __name__ == '__main__':
+ print(__doc__)
canvas = Canvas()
app.run()
diff --git a/vispy/visuals/markers.py b/vispy/visuals/markers.py
--- a/vispy/visuals/markers.py
+++ b/vispy/visuals/markers.py
@@ -16,7 +16,7 @@
_VERTEX_SHADER = """
uniform float u_antialias;
uniform float u_px_scale;
-uniform bool u_scaling;
+uniform int u_scaling;
uniform bool u_spherical;
attribute vec3 a_position;
@@ -43,28 +43,38 @@
vec4 pos = vec4(a_position, 1);
vec4 fb_pos = $visual_to_framebuffer(pos);
+ vec4 x;
+ vec4 size_vec;
gl_Position = $framebuffer_to_render(fb_pos);
// NOTE: gl_stuff uses framebuffer coords!
-
- if (u_scaling == true) {
- // calculate point size from visual to framebuffer coords to determine size
+ if (u_scaling == 1) {
+ // scaling == "scene": scale marker using entire visual -> framebuffer set of transforms
+ x = $framebuffer_to_visual(fb_pos + vec4(big_float, 0, 0, 0));
+ x = (x - pos);
+ size_vec = $visual_to_framebuffer(pos + normalize(x) * a_size);
+ $v_size = size_vec.x / size_vec.w - fb_pos.x / fb_pos.w;
+ v_edgewidth = ($v_size / a_size) * a_edgewidth;
+ }
+ else if (u_scaling == 2) {
+ // scaling == "visual": scale marker using only the Visual's transform
// move horizontally in framebuffer space
// then go to scene coordinates (not visual, so scaling is accounted for)
- vec4 x = $framebuffer_to_scene(fb_pos + vec4(big_float, 0, 0, 0));
+ x = $framebuffer_to_scene(fb_pos + vec4(big_float, 0, 0, 0));
// subtract position, so we get the scene-coordinate vector describing
// an "horizontal direction parallel to the screen"
vec4 scene_pos = $framebuffer_to_scene(fb_pos);
x = (x - scene_pos);
// multiply that direction by the size (in scene space) and add it to the position
// this gives us the position of the edge of the point, which we convert in screen space
- vec4 size_vec = $scene_to_framebuffer(scene_pos + normalize(x) * a_size);
+ size_vec = $scene_to_framebuffer(scene_pos + normalize(x) * a_size);
// divide by `w` for perspective, and subtract pos
// this gives us the actual screen-space size of the point
$v_size = size_vec.x / size_vec.w - fb_pos.x / fb_pos.w;
v_edgewidth = ($v_size / a_size) * a_edgewidth;
}
else {
+ // scaling == "fixed": marker is always the same number of pixels
$v_size = a_size * u_px_scale;
v_edgewidth = a_edgewidth * u_px_scale;
}
@@ -505,8 +515,20 @@ class MarkersVisual(Visual):
The color used to draw each symbol interior.
symbol : str or array
The style of symbol used to draw each marker (see Notes).
- scaling : bool
- If set to True, marker scales when rezooming.
+ scaling : str | bool
+ Scaling method of individual markers. If set to "fixed" (default) then
+ no scaling is done and markers will always be the same number of
+ pixels on the screen. If set to "scene" then the chain of transforms
+ from the Visual's transform to the transform mapping to the OpenGL
+ framebuffer are used to scaling the marker. This has the effect of the
+ marker staying the same size in the "scene" coordinate space and
+ changing size as the visualization is zoomed in and out. If set to
+ "visual" the marker is scaled only using the transform of the Visual
+ and not the rest of the scene/camera. This means that something like
+ a camera changing the view will not affect the size of the marker, but
+ the user can still scale it using the Visual's transform. For
+ backwards compatibility this can be set to the boolean ``False`` for
+ "fixed" or ``True`` for "scene".
alpha : float
The opacity level of the visual.
antialias : float
@@ -534,7 +556,7 @@ class MarkersVisual(Visual):
_symbol_shader_values = symbol_shader_values
_symbol_shader = symbol_func
- def __init__(self, scaling=False, alpha=1, antialias=1, spherical=False,
+ def __init__(self, scaling="fixed", alpha=1, antialias=1, spherical=False,
light_color='white', light_position=(1, -1, 1), light_ambient=0.3, **kwargs):
self._vbo = VertexBuffer()
self._data = None
@@ -554,6 +576,8 @@ def __init__(self, scaling=False, alpha=1, antialias=1, spherical=False,
if len(kwargs) > 0:
self.set_data(**kwargs)
+ self._scaling = "fixed"
+ self._scaling_int = 0
self.scaling = scaling
self.antialias = antialias
self.light_color = light_color
@@ -677,9 +701,20 @@ def scaling(self):
@scaling.setter
def scaling(self, value):
- value = bool(value)
- self.shared_program['u_scaling'] = value
+ scaling_modes = {
+ False: 0,
+ True: 1,
+ "fixed": 0,
+ "scene": 1,
+ "visual": 2,
+ }
+ if value not in scaling_modes:
+ possible_options = ", ".join(repr(opt) for opt in scaling_modes)
+ raise ValueError(f"Unknown scaling option {value!r}, expected one of: {possible_options}")
+ scaling_int = scaling_modes[value]
+ self.shared_program['u_scaling'] = scaling_int
self._scaling = value
+ self._scaling_int = scaling_int
self.update()
@property
@@ -773,7 +808,7 @@ def _prepare_draw(self, view):
if self._data is None:
return False
view.view_program['u_px_scale'] = view.transforms.pixel_scale
- view.view_program['u_scaling'] = self.scaling
+ view.view_program['u_scaling'] = self._scaling_int
def _compute_bounds(self, axis, view):
pos = self._data['a_position']
| Markers scaling behavior broken
Hello everybody, I'd say, this breaks documented functionality, namely [MarkersVisual.scaling](https://vispy.org/api/vispy.visuals.markers.html#vispy.visuals.markers.MarkersVisual.scaling) e.g. in [examples/basics/visuals/markers.py](https://github.com/vispy/vispy/blob/c0d7457a7b1717d47baa8b8622faa2c67bbe4be0/examples/basics/visuals/markers.py#L63): Pressing 's' should toggle between the markers remaining the same size when zooming and scaling with the zoom.
This works with version 0.11.0, but not with version 0.12.1
_Originally posted by @arcanerr in https://github.com/vispy/vispy/issues/2359#issuecomment-1433559707_
| Yeah this is bad. Probably explains some things @ameraner was seeing with one of our applications (SIFT) related to the newer versions of SIFT.
Ok so let's lay out the behaviors and expectations. As documented in the MarkersVisual, the `scaling` parameter means (I have to look this up every time this comes up):
* False (default): The markers will be the same number of pixels regardless of zoom level.
* True: The markers will shrink or grow as you zoom as if you were moving them closer to/further away from the screen.
The problem with the previous implementation was that it used the MarkersVisual's transform to determine this, so in 3D views you would see markers change in size just by rotating the view (I think that's what we had figured out).
The change implemented in #2359 had the side effect that the MarkersVisual's `.transform` no longer changed the *size* of the Markers, but only the vertex coordinates (the location of the Markers).
I see now that the referenced example is doing exactly that, using the Markers transform to change the "zoom" of the Visual:
https://github.com/vispy/vispy/blob/c0d7457a7b1717d47baa8b8622faa2c67bbe4be0/examples/basics/visuals/markers.py#L47-L49
So the question is, how do we update the example to reflect the expected behavior? In a SceneCanvas setting you would maybe change a higher-up transform (a camera) and expect to see the scaling change (not sure that still works). Or do we change/revert some/all of the changes of #2359 to have the Visual transform have some effect on size?
The SceneCanvas-based version of the same example works:
```python
import numpy as np
from vispy import app
from vispy.scene import SceneCanvas, visuals
from vispy.visuals.transforms import STTransform
n = 500
pos = np.zeros((n, 2))
colors = np.ones((n, 4), dtype=np.float32)
radius, theta, dtheta = 1.0, 0.0, 5.5 / 180.0 * np.pi
for i in range(500):
theta += dtheta
x = 256 + radius * np.cos(theta)
y = 256 + radius * np.sin(theta)
r = 10.1 - i * 0.02
radius -= 0.45
pos[i] = x, y
colors[i] = (i/500, 1.0-i/500, 0, 1)
class Canvas(SceneCanvas):
def __init__(self):
super().__init__(keys='interactive', size=(512, 512),
title="Marker demo [press space to change marker]")
self.unfreeze()
self.index = 0
self.view = self.central_widget.add_view()
self.view.camera = 'panzoom'
self.view.camera.set_range(x=(0, 500), y=(0, 500))
self.markers = visuals.Markers(parent=self.view.scene)
self.markers.set_data(pos, face_color=colors)
self.markers.symbol = self.markers.symbols[self.index]
#self.markers.transform = STTransform()
self.freeze()
self.show()
def on_key_press(self, event):
if event.text == ' ':
self.index = (self.index + 1) % (len(self.markers.symbols))
self.markers.symbol = self.markers.symbols[self.index]
self.update()
elif event.text == 's':
self.markers.scaling = not self.markers.scaling
print(self.markers.scaling)
self.update()
if __name__ == '__main__':
canvas = Canvas()
app.run()
```
Basically the transform logic happens on the camera's transform, not directly on the MarkersVisual.
Before #2359, changing the `scale` of a point affected the marker `size`. This is sometimes desirable in 2D (such as in the example above), and sometimes it is not (such as if coordinates need to be rescaled to match an image with a specific pixel size, but the user wants to give marker sizes with the destination size, rather than the original).
However, the most important complication comes from also supporting anisotropic scales: if the scale components of a transform are different in x and y, then the previous system is simply wrong: what does it mean to rescale the size of a `disc` by `(5, 2)`? Should it become an ellipse? Since this is not really possible with GL Markers (unless we go *much* more complicated on the shader), the only sensible solution is to *not* have the marker size be affected by scaling. (This is more obvious with 3D data, where rotating the camera affects the vectors used for size calculations in the shader, but it affects any anisotropic data).
Basically:
```py
coords = [
[ 0, 0],
[ 0, 10],
]
```
this, with a size of `10`, makes two "touching" circles. If you then provide a scale of `(2, 3)`, what do you expect to get out?
---
As you pointed out, the reason why the example above works in the `SceneCanvas` version is that it's the camera that changes, and not the visual. I think that the "simpler" example without canvas is just confusing at this point, and should be replaced by the one you mentioned above.
PS: if you want to get the old behaviour back, you can just multiply the `size` of the marker by the scale and you're good to go :)
> you can just multiply the size of the marker by the scale and you're good to go :)
@brisvag you mean the scale of the transform being used? And that would have to be done for each update, right?
Correct! Basically add:
```py
def on_mouse_wheel(self, event):
"""Use the mouse wheel to zoom."""
zoom_amount = 1.25**event.delta[1]
self.markers.transform.zoom((zoom_amount,) * 2, center=event.pos)
self._marker_size *= zoom_amount
self.markers.set_data(pos, face_color=colors, size=self._marker_size)
self.update()
```
Or something like this. Note however that you would actually need some extra logic if you want to swap between scaling and non-scaling (disable the size change when scaling is False basically).
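For example, roughly:
```python
def on_mouse_wheel(self, event):
    """Use the mouse wheel to zoom."""
    zoom_amount = 1.25**event.delta[1]
    self.markers.transform.zoom((zoom_amount,) * 2, center=event.pos)
    if self.markers.scaling:
        # only grow/shrink the symbols when "scaling" behaviour is wanted
        self._marker_size *= zoom_amount
        self.markers.set_data(pos, face_color=colors, size=self._marker_size)
    self.update()
```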
However, I feel like changing the example like this is making it harder and harder to follow...
@brisvag The issue we're running into in our application is that we have a PanZoom camera which controls the general pan/zoom nature of the SceneCanvas, but we have a series of Nodes with Visuals as children. These Nodes have map projection transforms on them and they can change between different geographic projections (ex. mercator versus polar-stereographic versus other). So the pixel size in one projection does not work in another. I suppose our short term fix could be recalculating the pixel size when we switch to a different projection (I think this logically makes sense), but this feels like a major loss in functionality...right? Or maybe it is unrealistic to assume this should work this way. Put another way, why should we (the SIFT project) expect a screen-based pixel size to be affected by coordinate transformations? @arcanerr thoughts?
In general, I think this is really "reducing out assumptions" rather than a loss of functionality. But in your use case (since data is isotropic) it adds an extra step.
I would vote against reverting the change, because while it's making one use case (albeit a common one) more convenient, it simply breaks another (anisotropic) with especially awful results in 3D. This breakage cannot be undone as easily as adding a callback on zoom. Indeed, if we were to revert here, I think I would end up redoing these changes in napari and maintaining a different shader there, as anisotropic data is relatively common in 3D imaging.
> as anisotropic data is relatively common in 3D imaging.
Yeah this was going to be my next question. If this is common in other scientific fields then it makes sense to require one use case to have an extra step rather than completely break it for another use case.
> Put another way, why should we (the SIFT project) expect a screen-based pixel size to be affected by coordinate transformations? @arcanerr thoughts?
Basically, I think this question needs to be asked side by side with:
Why should project X, which uses anisotropic scaling, assume that scaling will affect marker size only on one (arbitrary) axis (or even worse, a linear combination of axes if rotations are applied)?
I think I agree with you @brisvag although I don't want to. For my own understanding, could you explain (simply) one of the use cases that is common in napari that saw the prior issues (anisotropic)? I understand that these cases exist, but don't have an understanding of when or why it would be done (coordinates versus marker size).
I don't personally have such data, but maybe @Czaki can chip in with some real data? (he opened the original issue at https://github.com/napari/napari/issues/2213, where you can see the effect in action in 3D).
Note that while the "zooming in and out" issue will only be obvious in 3D (because of rotations), this is also an issue if you have 2D anisotropic data; it might even be worse, since it won't be "shifting" and you might not notice that (or understand why) the size of your points is incorrect.
In my case, it happens for any data with a scale of 3,1,1 (210,70,70). When working with data of scale 5,1,1 (300, 50, 50), it was even more visible. But as I remember, people on macOS do not encounter this problem. Maybe there are some differences in how this scale is handled between OSes.
Ok, I'll try to put my two cents in by thinking out loud about what I would expect `.scaling` to be interpreted as when `Markers` are **used in a scene graph** (in contrast to using the Visuals directly, I don't have experience with that setup):
1. `scaling=False` should help to implement kind of a special case, that is: Markers are _positioned_ with respect to the accumulated transformations of the scene and the camera, but their shapes are rendered directly in screen space (that is equivalent to that the linear 3x3 part of the accumulated homogeneous transformation matrix[^1] is ignored). Side note: all fragments have to take the depth of the reference point i.e. their position to update the z-Buffer. I guess that's how it is in the current implementation.
2. With `scaling=True` - uhm - the only consistent interpretation I see for that in a _3D_ scene graph is: if there is anisotropic scaling or even non-linear distortion in the accumulated transformation, Markers should be scaled anisotropically or distorted :thinking:. If you want to implement a different interpretation, you have to ask: Should it be interpreted as `scaling==True and rotation==False and shearing==False and reflection==False ...`?!? It doesn't seem to make sense to ignore only parts of the linear matrix in the affine case, for a nonlinear transformation like map projections it makes even less sense, does it? Is this the same interpretation you prefer, @brisvag?
I'd say, with this interpretation the SIFT use case cannot be fulfilled without extra work: Here images are displayed as a 2D scene with images as layers in a layer stack (like Photoshop layers), but that is "faked" by setting up a 3D scene with orthographic projection and all the images always facing to the camera. Here we want to have Markers which
1. either stay the same size independent of the zoom (that works with `scaling=False`)
2. or resize when zooming to create the effect, that they keep the same size relative to the displayed images/the scene, but with constant width of the outline (this happens with `scaling=False` with pre-0.12.0 VisPy)
(2) :thinking: could be solved by applying the zoom factor to the Markers coherently with the camera zoom. Is that how it was until the merge request in question?
Unfortunately I won't have the capacity to contribute to this discussion much more.
----
[^1]: From now on, I briefly refer to this part as the _linear matrix_.
> With `scaling=True` - uhm - the only consistent interpretation I see for that in a _3D_ scene graph is: if there is anisotropic scaling or even non-linear distortion in the accumulated transformation, Markers should be scaled anisotropically or distorted :thinking:.
Yes, I agree, this is the best scenario. That's what I meant here:
> However, most the most important complication comes from also supporting anisotropic scales: if the scale components of a transform are different in x and y, then the previous system is simply wrong: what does it mean to rescale the size of a `disc` by `(5, 2)`? Should it become an ellipse? Since this is not really possible with GL Markers (unless we go _much_ more complicated on the shader), the only sensible solution is to _not_ have the marker size be affected by scaling. (This is more obvious with 3D data, where rotating the camera affects the vectors used for size calculations in the shader, but it affects any anisotropic data).
As I mentioned here, I fear that this will require significant work on the shader. Maybe I'm wrong, but my first reaction was that my PR was the only solution I could reasonably tackle :P And this is only accounting 2D; if you try to do this in 3D, you effectively have to change discs to ellipsoids and project them according to rotation. Which *maybe* is possible? But certainly hard and too much work for me to justify taking on.
> If you want to implement a different interpretation, you have to ask: Should it be interpreted as `scaling==True and rotation==False and shearing==False and reflection==False ...`?!? It doesn't seem to make sense to ignore only parts of the linear matrix in the affine case, for a nonlinear transformation like map projections it makes even less sense, does it? Is this the same interpretation you prefer, @brisvag?
EDIT: see below comment for better breakdown.
~I'm not sure I follow exactly your reasoning here, so forgive me if I misunderstand. I think that what we had before `v0.12` was effectively:~
~- marker coordinates are affected by all components in the matrix~
~- marker shapes are affected only by scaling (no rotation, no reflection, but shear is part of scale right? Sorry, my math theory is weak here :P). Specifically, the amount of scaling was the projection of the `scale` vector onto the buffer x coordinate, which was then applied to all axes (assuming isotropicity). This is what causes the weird behaviour on rotation in 3D (and would also do the same if rotating in 2D, it's just that we typically don't do that).~
After my PR, the marker shapes are instead fully unaffected by the matrix. I think the only better solution is to have them *fully* affected by the matrix.
As a general thought: we need to keep in mind that markers are a *screen space* visual, much like little banners that follow the camera. In that sense, I feel like we're trying to force them into something they're not. If we decide to go for the "full matrix" approach, we should likely abandon gl markers and do everything ourselves (which could also solve #2078).
Rereading what I wrote, I'm thinking that maybe I got it wrong. I'll try again :P
I think some confusion comes from the fact that we have multiple transforms at play. We don't simply "use" or "ignore" entirely a part of *the* matrix. Instead, we use/ignore some parts of some of the matrices that compose the full matrix.
Before `v0.12`:
- marker coordinates are affected by the full transform from `visual` to `framebuffer`.
- marker shapes are also affected by the transform from `visual` to `framebuffer`, but the `scale` and `shear` components are used in a weird way (the amount of scaling was the projection of the scale vector onto the buffer x coordinate, which was then applied to all axes (assuming isotropicity). This is what causes the weird behaviour on rotation in 3D (and would also do the same if rotating in 2D, it's just that we typically don't do that). While this seems weird, this is (one of the?) only sensible way to rescale markers that are *isotropic and fixed in shape*. This would not be a problem if we can get markers that properly shear and project in 3D (as mentioned above: *hard*).
After `v0.12`:
- marker coordinates are affected by the full transform from `visual` to `framebuffer`.
- marker shapes are only affected by the transform from `scene` to `framebuffer`. This means that any shear that happens on the `Visual` level is ignored, as well as any scaling. However, other transforms (such as the camera zoom) are used properly. In hindsight, this also probably means that if somehow another shearing transform is introduced between `scene` and `framebuffer`, the same issue as before comes up :/.
> (2) thinking could be solved by applying the zoom factor to the Markers coherently with the camera zoom. Is that how it was until the merge request in question?
Yes, this is how you can solve your problem by just adding a callback that sets the size based on zoom. But this [*should* still work in your use case](https://github.com/vispy/vispy/issues/2453#issuecomment-1433613812), since you're using the `SceneCanvas` and not just the `Visuals`. The only difference after `v0.12` is that you have to change the size of the markers if you change their transform.
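A rough sketch of such a callback (assuming an `STTransform` on the visual; the names here are just illustrative):
```python
scene_size = 10.0  # desired marker size in scene units

def _resize_markers(event=None):
    sx = markers.transform.scale[0]
    markers.set_data(pos, face_color=colors, size=scene_size * sx)

markers.transform.changed.connect(_resize_markers)
_resize_markers()
```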
> - marker shapes are only affected by the transform from scene to framebuffer. This means that any shear that happens on the Visual level is ignored, as well as any scaling. However, other transforms (such as the camera zoom) are used properly. In hindsight, this also probably means that if somehow another shearing transform is introduced between scene and framebuffer, the same issue as before comes up :/.
What I don't understand then is: what's the difference between `.scaling=True` and `.scaling=False` in v0.12? There is one, but I was not able to work out the implemented logic from the results.
> This would not be a problem if we can get markers that properly shear and project in 3D (as mentioned above: hard).
One could think about placing polygons into the scene and rendering the markers onto them like textures; then the polygons could be transformed like all the other primitives. This won't work that well with the `sphere` Markers though. But yes, without knowing the internals of VisPy, this means a completely different setup than the current one and can therefore be called _hard_.
I'll be honest, I'm quickly losing track of this discussion. But I think I still have questions/comments that are appropriate:
> marker shapes are only affected by the transform from scene to framebuffer. This means that any shear that happens on the Visual level is ignored, as well as any scaling. However, other transforms (such as the camera zoom) are used properly. In hindsight, this also probably means that if somehow another shearing transform is introduced between scene and framebuffer, the same issue as before comes up :/.
How do (or should) Markers behave for non-linear cameras? We have strange cameras like the [magnify camera](https://github.com/vispy/vispy/blob/main/vispy/scene/cameras/magnify.py) that although it is more of a gimmick/special case, it does produce some interesting visualizations and use case of a non-linear camera transformation.
Which leads to another point that is maybe being implicitly said here: Markers represent single points in space. Meaning the shape of the Marker (the symbol) ~~does~~ doesn't actually exist; it is just an identifier/marker/indicator of a point in space. But is this true (do users expect it to be true) with scaling=False and scaling=True?
> What I don't understand then what's the difference between .scaling=True and .scaling=False in v0.12? There is one, but I was not able to comprehend the implemented logic from the results.
After the various discussions on the recent changes I would expect this could be summarized as: `scaling=False` should result in Markers (again, points in space) whose location and size are determined by all transforms involved; the "scene" gets smaller, the markers get smaller. So `scaling=True` would be: the scene gets smaller, the markers stay the same size, but at the same location. I think by this simple definition this means that `scaling=False` should have Markers whose shapes change with all transforms and possibly get skewed into weird-looking shapes, while `scaling=True` should always be the same basic shape (isotropic? a circle is always a circle on the 2D screen). I think with the current implementation this last description will never be true for all cameras, right? The cameras are still "skewing" the "perfect" shape of the Marker.
Sorry if I'm completely missing the point here. I'm still not sure I have all the use cases properly represented in my head. I wasn't looking for actual data in my previous comment, more just "I look at data of X which represent Y so they have to scale differently in one dimension to be as close to real world as possible".
Side question: @arcanerr with the current implementation, could SIFT set the size of the Marker whenever the viewing projection is changed? Or because of possibly non-linear map projections that size would have to be determined for every location of the Marker (a marker on the equator of a mercator projection is one size but that same size marker on the high latitudes of the mercator projection would be much larger)?
> What I don't understand then what's the difference between `.scaling=True` and `.scaling=False` in v0.12? There is one, but I was not able to comprehend the implemented logic from the results.
In short, the difference is that the `scale` component of the transform in `Markers.transform` is ignored when calculating marker size, while `scale` components of other transforms (i.e: `camera`) are still used. This was under the (probably wrong) assumption that anisotropic transforms are introduced at the `Visual` level.
Ultimately, it was wrong before, and it is still wrong; I just find the "new wrong" easier to work around (with an event callback only on transform changes) than the old one (which would require a much more complex event that fires even on camera rotation, for example). (Note that it's not just me "not needing" the old behaviour; in fact I already am using this exact workaround somewhere in my code).
---
Non-linear cameras should still work the same (i.e. the same as before, with the anisotropic issue included), as well as non-linear transforms *if* they are applied to the whole scene, rather than just one visual.
---
> Which leads to another point that is maybe being implicitly said here: Markers represent single points in space. Meaning the shape of the Marker (the symbol) does actually exist, it is just an identifier/marker/indicator of a point in space. But is this true (do users expect it to be ture) with scaling=False and scaling=True?
Exactly the source of the problem: as soon as you use markers to represent *physical* size, that's where all the issues begin... but we don't have anything better at the moment :P. That's where we should probably have a separate visual that handles any projection even in 3D anisotropic.
I agree that this all feels very arbitrary (and is ultimately wrong), but realistically I can't afford to spend time rewriting the whole `Markers` visual to do the fancy thing any time soon :( I think it's relatively doable in 2D, but in 3D you need a full ellipsoid that can rotate in 3 dimensions...
> Side question: @arcanerr with the current implementation, could SIFT set the size of the Marker whenever the viewing projection is changed? Or because of possibly non-linear map projections that size would have to be determined for every location of the Marker (a marker on the equator of a mercator projection is one size but that same size marker on the high latitudes of the mercator projection would be much larger)?
I guess it should be possible: there must be code which is triggered when zoom interaction happens (and when a different Projection is selected!); at this/these point(s) I'd try to update `Markers.size`. But since I did not inspect the PanZoom camera code I can't tell exactly how to calculate the correct value.
Another problem is that when you do this repeatedly (something like `my_markers.size = calc_new_size(my_markers.size)`) numerical errors can accumulate, i.e., the size of the Markers may drift: even though the scene may appear at the same size on the screen at two points in time, the markers could be smaller or larger than they were initially. To avoid this, the marker size relative to the scene needs to be stored, and the correct way to update the size would be something like `my_markers.size = calc_size(my_markers.relative_size)` :thinking: ...
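In code, that drift-free variant could look roughly like this (the names are hypothetical):
```python
relative_size = 10.0  # marker size in scene units, stored once

def on_zoom_changed(zoom_factor):
    # zoom_factor is the *absolute* scene-to-screen scale, not an increment,
    # so repeated calls cannot accumulate rounding errors
    my_markers.set_data(pos, size=relative_size * zoom_factor)
```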
An alternative could be to reactivate the old code as a 3rd code path parallel to the new `scaling=True` code path (as you already state [here](https://github.com/vispy/vispy/pull/2359#discussion_r932380294), @brisvag). To tell the `Markers` when to use this path, a new property (`legacy_scaling=True`, `screen_space_scaling=True` or ...?) would be required. Or you introduce such a property for the new code path and revive the old code path for `scaling=True`...
> An alternative could be to reactivate the old code as a 3rd code path parallel to the new `scaling=True` code path (as you already state [here](https://github.com/vispy/vispy/pull/2359#discussion_r932380294), @brisvag).
Yes, that's definitely an option. I don't love the idea to have not one but two code paths which are both wrong, but if we can't come up with anything better that's better than nothing :P
Note: I realized in my last big comment I said "does" and meant "doesn't". Looks like @brisvag understood what I was going for. The shape/symbol of the marker doesn't actually exist. It is just that, a symbol for a single point. I don't think the Markers in our SIFT application are treating them this way (right @arcanerr?), just that users are or will be surprised when Marker size changes between views when they are meant to represent the same thing in the same exact way but the user is only requesting to see them from a different "angle" (projection).
> Yes, that's definitely an option. I don't love the idea to have not one but two code paths which are both wrong, but if we can't come up with anything better that's better than nothing :P
Maybe not two wrong approaches, but approaches with more specific descriptions or identifiers/flags. I mean, could we identify this as different sizing modes/methods for the Markers? I think no matter how this issue is moved forward the description of the Markers needs to be updated to clearly state that Markers are points in space, not something that exists with that particular shape/size in the "world" being viewed...Although now that I say that we have this new-ish example which uses sphere Markers for this exact purpose:
https://vispy.org/gallery/scene/marker_spheres.html#sphx-glr-gallery-scene-marker-spheres-py
So maybe this description can't be something we depend on (marker is a point, not a shape).
Anyway...we already have a `scaling` keyword argument which in English could be used to describe something with more than a boolean False/True set of options. I'm not sure what the shader looks like in this sense, but if it makes it easier we could require re-compiling the shader by swapping out scaling functions: no-op for scaling==False, old way of scaling for ==True, the new way of scaling for scaling=='some_string_name', or something like that. Basically, False and True are there for backwards compatibility but new string modes are there for controlling what transforms have an effect on the shape of the Markers?
> Anyway...we already have a `scaling` keyword argument which in English could be used to describe something with more than a boolean False/True set of options. I'm not sure what the shader looks like in this sense, but if it makes it easier we could require re-compiling the shader by swapping out scaling functions: no-op for scaling==False, old way of scaling for ==True, the new way of scaling for scaling=='some_string_name', or something like that. Basically, False and True are there for backwards compatibility but new string modes are there for controlling what transforms have an effect on the shape of the Markers?
Sounds good to me. I'm not yet happy with the following suggestion for the names of the scaling modes, but as a starting point, how about these?:
- `"fixed"` or `"off"`: equivalent to `scaling=False`
- `"scene"`: equivalent to `scaling=True` in pre-v0.12
- `"camera"`: equivalent to `scaling=True` in v0.12+
And the next question: should `scaling=True` revert to its pre-v0.12 meaning or keep the new v0.12+ one?
I vote for the revert because it is the longer existing interpretation, so there is probably more code in the wild implemented for it (and it would fix a SIFT bug without further ado :wink:).
What about instead of `scene` using `visual`? Isn't that more accurate in the sense that it follows the transforms affecting the Visual object?
I could see "camera" being confusing to people who aren't directly using a camera, but I'm also hoping there is a more descriptive term that reflects the "non-scene graph transforms".
> And the next question: should scaling=True revert to its pre-v0.12 meaning or keep the new v0.12+ one?
I vote for the revert because it is the longer existing interpretation
I agree for the same reasons, but also because the first group to identify this problem (napari) are the most active and most likely to customize vispy as needed in their own distribution.
@brisvag Thoughts? Alternatives (naming of the above scaling options or entirely different implementations)?
To be honest, the more I think about this, the more I hate my "fix" from #2359 xD But in the interest of having *something* that works, I guess the above proposal is ok for me.
In that case, I would go for `False` and `True` falling back to previous behaviour, and `fixed`/`visual`/`scene` for the string options.
> the more I hate my "fix"
What does your perfect solution that you don't hate look like? Does it involve some complex shader code or does it look like a lot of user controllable options in the MarkersVisual or something else?
To be clear, I'm not trying to block anything here, I'm just annoyed that what I thought was a good fix turned out to be just as arbitrary as the previous behaviour ^^'
In an ideal world, I think the elegant solution would be to simply *not* have the `scaling` option on markers, and have them just be "indicator of a zero-dimensional point in space", while handling physically "sized" objects with instanced rendering of simple meshes (or SDF impostors like spheres) which would allow us to use the *whole* matrix.
Maybe a way forward is in #2460, if we're ok with giving up *some* things in exchange for correctly transformed "points". | 2023-04-15T20:49:28
|
vispy/vispy | 2,519 | vispy__vispy-2519 | [
"2518"
] | 6b68262412f6f31b5d9911f3f919953a5bb63b30 | diff --git a/examples/demo/scene/picking.py b/examples/demo/scene/picking.py
--- a/examples/demo/scene/picking.py
+++ b/examples/demo/scene/picking.py
@@ -18,7 +18,7 @@
selected = None
# plot data
-cmap = get_colormap('hsl', value=0.5)
+cmap = get_colormap('hsl')
colors = cmap.map(np.linspace(0.1, 0.9, data.shape[0]))
t = np.arange(data.shape[1]) * (dt * 1000)
for i, y in enumerate(data):
diff --git a/vispy/color/colormap.py b/vispy/color/colormap.py
--- a/vispy/color/colormap.py
+++ b/vispy/color/colormap.py
@@ -1092,17 +1092,13 @@ def __init__(self, limits=(0.33, 0.66, 1.0)):
)
-def get_colormap(name, *args, **kwargs):
- """Obtain a colormap.
+def get_colormap(name):
+ """Obtain a colormap by name.
Parameters
----------
name : str | Colormap
Colormap name. Can also be a Colormap for pass-through.
- *args:
- Deprecated.
- **kwargs
- Deprecated.
Examples
--------
@@ -1111,18 +1107,10 @@ def get_colormap(name, *args, **kwargs):
.. versionchanged: 0.7
- Additional args/kwargs are no longer accepted. Colormap classes are
- no longer created on the fly. To create a ``cubehelix``
- (``CubeHelixColormap``), ``single_hue`` (``SingleHue``), ``hsl``
- (``HSL``), ``husl`` (``HSLuv``), ``diverging`` (``Diverging``), or
- ``RdYeBuCy`` (``RedYellowBlueCyan``) colormap you must import and
- instantiate it directly from the ``vispy.color.colormap`` module.
+ Additional args/kwargs are no longer accepted. Colormap instances are
+ no longer created on the fly.
"""
- if args or kwargs:
- warnings.warn("Creating a Colormap instance with 'get_colormap' is "
- "no longer supported. No additional arguments or "
- "keyword arguments should be passed.", DeprecationWarning)
if isinstance(name, BaseColormap):
return name
@@ -1130,14 +1118,6 @@ def get_colormap(name, *args, **kwargs):
raise TypeError('colormap must be a Colormap or string name')
if name in _colormaps: # vispy cmap
cmap = _colormaps[name]
- if name in ("cubehelix", "single_hue", "hsl", "husl", "diverging", "RdYeBuCy"):
- warnings.warn(
- f"Colormap '{name}' has been deprecated since vispy 0.7. "
- f"Please import and create 'vispy.color.colormap.{cmap.__class__.__name__}' "
- "directly instead.",
- DeprecationWarning,
- stacklevel=2,
- )
elif has_matplotlib(): # matplotlib cmap
try:
| Unnecessary Deprecation in vispy.color.get_colormap?
`vispy.color.get_colormap` with certain colormaps gives [DeprecationWarning for certain colormaps](https://github.com/vispy/vispy/blob/main/vispy/color/colormap.py#L1133-L1140):
```
DeprecationWarning: Colormap 'hsl' has been deprecated since vispy 0.7. Please import and create 'vispy.color.colormap.HSL' directly instead.
```
I think this may be a holdover from a brief period when those colormaps were returning, not an _instance_ of the colormap, but the _class itself_. The `DeprecationWarning` was introduced in dcb19017b99e21004075c2d9d15fcf4207740ce4, which is the same commit where the HSL and other colormaps were changed in the `_colormaps` dict [from being instances, to being classes](https://github.com/vispy/vispy/blob/6b68262412f6f31b5d9911f3f919953a5bb63b30/vispy/color/colormap.py#L1133-L1140). They went _back_ to being instances in 7d1f5966db90836ecbb003c1325d028fec7b2f4b, but the DeprecationWarning remained.
I think the DeprecationWarning was possibly left in by mistake.
Can the DeprecationWarning be removed?
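For reference, a minimal snippet that triggers it (nothing deprecated is actually being used):
```python
import numpy as np
from vispy.color import get_colormap

cmap = get_colormap('hsl')               # emits the DeprecationWarning above
colors = cmap.map(np.linspace(0, 1, 5))  # ...yet the returned Colormap works as expected
```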
| Oh this hurts my head. It has been so long. I think you're right though. The deprecation at the time was still useful (even after the switch from class to instances) to remind users "hey, I know you used to expect a class, but that's deprecated".
I would be open to a PR @codypiersall where you remove the deprecation and you could remove the `*args` and `**kwargs` too which are also documented as deprecated. | 2023-08-21T15:28:48 |
|
vispy/vispy | 2,523 | vispy__vispy-2523 | [
"2522"
] | 6b68262412f6f31b5d9911f3f919953a5bb63b30 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,15 +24,11 @@
import os
from os import path as op
-from distutils import log
-from setuptools import setup, find_packages, Extension
+from setuptools import setup, find_packages
import numpy as np
from Cython.Build import cythonize
-
-log.set_verbosity(log.DEBUG)
-log.info('setup.py entered')
-log.info('$PATH=%s' % os.environ['PATH'])
+from Cython.Distutils import Extension
name = 'vispy'
description = 'Interactive visualization in Python'
@@ -56,8 +52,11 @@ def set_builtin(name, value):
extensions = [Extension('vispy.visuals.text._sdf_cpu',
- [op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
- include_dirs=[np.get_include()]),
+ sources=[op.join('vispy', 'visuals', 'text', '_sdf_cpu.pyx')],
+ include_dirs=[np.get_include()],
+ cython_directives={"language_level": "3"},
+ define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
+ ),
]
readme = open('README.rst', 'r').read()
@@ -70,7 +69,7 @@ def set_builtin(name, value):
},
author='Vispy contributors',
author_email='[email protected]',
- license='(new) BSD',
+ license='BSD-3-Clause',
url='http://vispy.org',
download_url='https://pypi.python.org/pypi/vispy',
keywords=[
@@ -92,9 +91,8 @@ def set_builtin(name, value):
long_description_content_type='text/x-rst',
platforms='any',
provides=['vispy'],
- python_requires='>=3.6',
+ python_requires='>=3.8',
install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],
- setup_requires=['numpy', 'cython', 'setuptools_scm', 'setuptools_scm_git_archive', 'packaging'],
extras_require={
'ipython-static': ['ipython'],
'pyglet': ['pyglet>=1.2'],
@@ -147,9 +145,10 @@ def set_builtin(name, value):
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX',
'Programming Language :: Python',
- 'Programming Language :: Python :: 3.6',
- 'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
+ 'Programming Language :: Python :: 3.9',
+ 'Programming Language :: Python :: 3.10',
+ 'Programming Language :: Python :: 3.11',
'Framework :: IPython'
],
)
| Upgrade to Cython 3 for all builds
Cython 3.0 is out now. There are a lot of changes, including many changes to the defaults. I've learned quite a few gotchas from my other Cython-based projects, so hopefully the upgrade will be easy for us since we only have a few Cython things in vispy. Just making an issue so that if someone else wants to tackle it before me, feel free.
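A concrete example of one such default: the patch above pins the Cython language level explicitly instead of relying on the new Cython 3 default. A minimal sketch of that kind of declaration (mirroring the `setup.py` change in this record, not additional project code):
```python
from Cython.Distutils import Extension

# Pin directives explicitly so behaviour does not shift with Cython 3's new defaults.
ext = Extension(
    'vispy.visuals.text._sdf_cpu',
    sources=['vispy/visuals/text/_sdf_cpu.pyx'],
    cython_directives={"language_level": "3"},
)
```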
| 2023-08-24T18:30:41 |
||
vispy/vispy | 2,540 | vispy__vispy-2540 | [
"2538"
] | 83d029b2c03a6c2d64f63a1f6045b62615d1de11 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,6 +23,7 @@
"""
import os
+import sys
from os import path as op
from setuptools import setup, find_packages
@@ -59,6 +60,10 @@ def set_builtin(name, value):
),
]
+install_requires = ['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging']
+if sys.version_info < (3, 9):
+ install_requires.append("importlib-resources")
+
readme = open('README.rst', 'r').read()
setup(
name=name,
@@ -92,7 +97,7 @@ def set_builtin(name, value):
platforms='any',
provides=['vispy'],
python_requires='>=3.8',
- install_requires=['numpy', 'freetype-py', 'hsluv', 'kiwisolver', 'packaging'],
+ install_requires=install_requires,
extras_require={
'ipython-static': ['ipython'],
'pyglet': ['pyglet>=1.2'],
diff --git a/vispy/app/backends/_qt.py b/vispy/app/backends/_qt.py
--- a/vispy/app/backends/_qt.py
+++ b/vispy/app/backends/_qt.py
@@ -386,9 +386,9 @@ def __init__(self, vispy_canvas, **kwargs):
# must set physical size before setting visible or fullscreen
# operations may make the size invalid
- if hasattr(self, 'devicePixelRatio'):
+ if hasattr(self, 'devicePixelRatioF'):
# handle high DPI displays in PyQt5
- ratio = self.devicePixelRatio()
+ ratio = self.devicePixelRatioF()
else:
ratio = 1
self._physical_size = (p.size[0] * ratio, p.size[1] * ratio)
@@ -421,7 +421,7 @@ def screen_changed(self, new_screen):
If display resolutions are the same this is essentially a no-op except for the redraw.
If the display resolutions differ (HiDPI versus regular displays) the canvas needs to
- be redrawn to reset the physical size based on the current `devicePixelRatio()` and
+ be redrawn to reset the physical size based on the current `devicePixelRatioF()` and
redrawn with that new size.
"""
@@ -909,11 +909,11 @@ def initializeGL(self):
def resizeGL(self, w, h):
if self._vispy_canvas is None:
return
- if hasattr(self, 'devicePixelRatio'):
- # We take into account devicePixelRatio, which is non-unity on
+ if hasattr(self, 'devicePixelRatioF'):
+ # We take into account devicePixelRatioF, which is non-unity on
# e.g HiDPI displays.
- # self.devicePixelRatio() is a float and should have been in Qt5 according to the documentation
- ratio = self.devicePixelRatio()
+ # self.devicePixelRatioF() is a float and should have been in Qt5 according to the documentation
+ ratio = self.devicePixelRatioF()
w = int(w * ratio)
h = int(h * ratio)
self._vispy_set_physical_size(w, h)
| Using `QT_SCALE_FACTOR` causes a cropped canvas (Qt backend)
Hi, it seems like setting the Qt environment variable `QT_SCALE_FACTOR` to a value like `1.8` causes the canvas to appear cropped / not use the available space properly. A preview of the behavior:
* Running the [`sphere.py` example](https://github.com/vispy/vispy/blob/main/examples/scene/sphere.py) without setting `QT_SCALE_FACTOR`:

* Running the [`sphere.py` example](https://github.com/vispy/vispy/blob/main/examples/scene/sphere.py) setting `QT_SCALE_FACTOR` to `1.8` (`set QT_SCALE_FACTOR=1.8`):

I tested the above by running the following (relative to the cloned vispy repo directory) from a Python environment on Windows with vispy 0.14.1:
```cmd
set QT_SCALE_FACTOR=
```
or
```cmd
set QT_SCALE_FACTOR=1.8
```
and
```cmd
python examples/scene/sphere.py
```
Also, for traceability, I started investigating this due to https://github.com/napari/napari/issues/6197. A possible fix is explained at https://github.com/napari/napari/issues/6197#issuecomment-1769207938. Let me know if opening a PR with the changes described there makes sense or if more info is needed!
| I had never heard of `QT_SCALE_FACTOR` before and I guess that fix in the comment seems reasonable (even though a bug should maybe be filed with Qt) so pull requests are welcome.
If Qt4 compatibility has to be removed to have the code exactly like that, then let's make sure we officially remove it from all interfaces and documentation.
> I had never heard of QT_SCALE_FACTOR before and I guess that fix in the comment seems reasonable (even though a bug should maybe be filed with Qt) so pull requests are welcome.
Awesome! Also, checking the Qt5 docs a little bit more, I think the issue comes from the difference between [`devicePixelRatio`](https://doc.qt.io/qt-5/qpaintdevice.html#devicePixelRatio) and [`devicePixelRatioF`](https://doc.qt.io/qt-5/qpaintdevice.html#devicePixelRatioF) 🤔. So, to support a floating point value, `devicePixelRatioF` is needed, and a possible solution preserving the current logic could look something like:
```python
# must set physical size before setting visible or fullscreen
# operations may make the size invalid
if hasattr(self, 'devicePixelRatioF'):
# handle high DPI displays in PyQt5+
ratio = self.devicePixelRatioF()
else:
ratio = 1
self._physical_size = (p.size[0] * ratio, p.size[1] * ratio)
```
and
```python
if hasattr(self, 'devicePixelRatioF'):
# We take into account devicePixelRatio, which is non-unity on
# e.g HiDPI displays.
# self.devicePixelRatioF() is a float and should have been in Qt5+ according to the documentation
ratio = self.devicePixelRatioF()
w = int(w * ratio)
h = int(h * ratio)
```
However, regarding the Qt4 support comment:
>If Qt4 compatibility has to be removed to have the code exactly like that, then let's make sure we officially remove it from all interfaces and documentation.
Indeed, I think Qt4 has no `devicePixelRatio` or `devicePixelRatioF` definitions (that's probably the reason why the current code has the `hasattr` validations). Happy to help with that too, but it seems like that would be more involved work (I think the issue tracking that is https://github.com/vispy/vispy/issues/1788) 🤔
Taking that into account, it could make sense to do a fix like the one above (just changing calls from `devicePixelRatio` to `devicePixelRatioF`) and tackle the Qt4 support removal issue in a different PR? Let me know what you think! | 2023-10-23T17:02:57 |
|
vispy/vispy | 2,583 | vispy__vispy-2583 | [
"2581"
] | 5d61ecbd930167446be0f9e5e078288e19839af9 | diff --git a/vispy/gloo/glir.py b/vispy/gloo/glir.py
--- a/vispy/gloo/glir.py
+++ b/vispy/gloo/glir.py
@@ -1283,8 +1283,13 @@ def _pre_draw(self):
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vbo_handle)
gl.glEnableVertexAttribArray(attr_handle)
func(attr_handle, *args)
- if divisor is not None:
- gl.glVertexAttribDivisor(attr_handle, divisor)
+ if hasattr(gl, "glVertexAttribDivisor"):
+ gl.glVertexAttribDivisor(attr_handle, divisor or 0)
+ elif divisor is not None:
+ logger.warning(
+ 'Instanced rendering is not supported by the current'
+ f'backend ("{gl.current_backend.__name__}")'
+ )
else:
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, 0)
gl.glDisableVertexAttribArray(attr_handle)
| Having problems with TextVisuals and InstancedMesh
Hello,
I am having problems with TextVisuals displaying when using InstancedMeshes.
It seems that the TextVisual doesn't display when using an instancedMesh, but does with a standard mesh.
I am having this problem in a larger program, with many other visuals and meshes, both instanced and otherwise.
The below example demonstrates this behaviour.
- Both the TextVisual and the single mesh are originally drawn
- Pressing 'p'
- the InstancedMesh is now rendered
- the TextVisual disappears
- Unsetting the parent of InstancedMesh does not bring the TextVisual back.
The instantiation order of the visuals does not change the behaviour.
My vispy and python versions are
- python: 3.10.12
- vispy: 0.14.1
Any help with something I'm doing wrong, or the right direction (I don't have any GL experience) to try and debug further would be greatly appreciated.
```python
import numpy as np
from scipy.spatial.transform import Rotation
from vispy import app, scene, use
from vispy.io import imread, load_data_file, read_mesh
from vispy.scene import visuals as vVisuals
# needed for instanced rendering to work
use(gl='gl+')
mesh_path = load_data_file('spot/spot.obj.gz')
texture_path = load_data_file('spot/spot.png')
vertices, faces, normals, texcoords = read_mesh(mesh_path)
texture = np.flipud(imread(texture_path))
canvas = scene.SceneCanvas(keys='interactive', bgcolor='white', show=True)
view = canvas.central_widget.add_view()
view.camera = 'arcball'
view.camera.depth_value = 10 * (vertices.max() - vertices.min())
# Instanced Mesh
n_instances = 2
instance_colors = np.array([[1,0,0],
[0,1,0]])
instance_positions = ((np.random.rand(n_instances, 3) - 0.5) * 2).astype(np.float32)
instance_transforms = Rotation.random(n_instances).as_matrix().astype(np.float32)
instanced_mesh = vVisuals.InstancedMesh(
vertices,
faces,
instance_colors=instance_colors,
instance_positions=instance_positions,
instance_transforms=instance_transforms,
parent=None,
)
# Single Mesh
mesh = vVisuals.Mesh(vertices,
faces,
color=(.5, .7, .5, 1),
parent=None)
# instanced_mesh.parent = view.scene
mesh.parent = view.scene
# Text Visual
t_visual = scene.visuals.Text('Test Default Text',
color=(0,0,0),
anchor_x='left',
anchor_y='top',
parent=view,
font_size=40,
pos=(100,100))
@canvas.events.key_press.connect
def on_key_press(event):
if event.key == 'p':
if instanced_mesh.parent is None:
print("setting parent")
instanced_mesh.parent = view.scene
else:
print("unsetting parent")
instanced_mesh.parent = None
if __name__ == "__main__":
app.run()
```
| Hey @rzmearns, thanks for the issue and for the reproducer! It's pretty weird, yeah. I also get the same problem on my machine, so this is very likely a bug.
I tried playing with the parents manually, but I can't seem to recover the visibility of the text once it disappears. I'm not sure where to begin here :/
It could be something at the GLIR level getting "stuck" in instanced mode, but that doesn't explain why the normal `Mesh` works.
Otherwise it could be like a blending problem, but even that seems completely unrelated code-wise.
I'm a bit lost...
Hi @brisvag, thanks for taking a look.
I've been doing a bit more digging (kind of blindly at this point), but have made a little progress.
In `vispy/visuals/instanced_mesh.py`, in the definition of `_VERTEX_SHADER` (which overrides the basic Mesh version),
changing `gl_position` in the GL program to ignore the instance transforms,
i.e. `gl_position = $transform($to_vec4($position));`,
makes the TextVisual remain when setting `InstancedMesh.parent` (you need to move the basic mesh out of the way to see it though).
So I'm going to do a bit of research about how these programs might be interacting.
Thanks again.
Interesting, nice find! So really it's probably not something fundamentally broken with the instancing draw call, which is a good first step ^^'
@rzmearns can you please try this patch? My hypothesis here is that the `attr_handle` is reused between the text and instanced mesh visuals. The single mesh does not interfere because it does not use more than handle 0, which is okay because the divisor is the same for this attribute (`a_position`).
Here is some debug output from this call (which is in `GlirProgram._pre_draw`) showing the attribute name, handle, and divisor value. You can see the overlapping values from the `Text` and `InstancedMesh` visuals.
```
drawing <Text at 0x1340d6dd0>
a_color 4 None
a_position 1 None
a_texcoord 0 None
a_rotation 2 None
a_pos 3 None
drawing <Mesh at 0x1340e5310>
a_position 0 None
drawing <InstancedMesh at 0x1340e5410>
a_position 0 None
u_base_color 1 1
transform_x 5 1
transform_y 4 1
transform_z 2 1
shift 3 1
```
```diff
diff --git a/vispy/gloo/glir.py b/vispy/gloo/glir.py
index 3b7cc4c4..988e5cd7 100644
--- a/vispy/gloo/glir.py
+++ b/vispy/gloo/glir.py
@@ -1283,8 +1283,7 @@ class GlirProgram(GlirObject):
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vbo_handle)
gl.glEnableVertexAttribArray(attr_handle)
func(attr_handle, *args)
- if divisor is not None:
- gl.glVertexAttribDivisor(attr_handle, divisor)
+ gl.glVertexAttribDivisor(attr_handle, divisor or 0)
else:
gl.glBindBuffer(gl.GL_ARRAY_BUFFER, 0)
gl.glDisableVertexAttribArray(attr_handle)
```
If this works I can make a PR for more testing of the change.
Hi @aganders3,
That patch seems to have fixed the problem. The TextVisual remains as the InstancedMesh parent is set and unset.
I have to say I don't really understand the underlying reasoning though (mainly due to my ignorance about GLSL and the glir layer).
Are the handle numbers not still overlapping in the debug output below (after the patch)?
`print(f"{name} {attr_handle} {divisor}")`
```
drawing <Mesh at 0x7f88a2301840>
drawing <GlirProgram 11 at 0x7f88a232e920>
a_position 0 None
drawing <InstancedMesh at 0x7f88a2301690>
drawing <GlirProgram 13 at 0x7f88a2356cb0>
a_position 0 None
u_base_color 1 1
transform_x 3 1
transform_y 4 1
transform_z 5 1
shift 2 1
drawing <Text at 0x7f88a277c670>
drawing <GlirProgram 9 at 0x7f88a2302a40>
a_color 0 None
a_position 2 None
a_texcoord 3 None
a_rotation 1 None
a_pos 4 None
```
Thanks
Great!
Yes, they will still overlap - sorry I was not clear in my description.
Currently the divisor is set to 1 by the InstancedMesh, but never set back to 0 because the call to (re)set it is skipped if `None`. With my patch it will set the divisor to 0 (the default) when `None`.
This means the overlap does not cause problems as the divisor is always set correctly for each buffer for the current command.
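To restate the change in isolation, a simplified sketch of the relevant lines in `GlirProgram._pre_draw` (taken from the patch above, not additional code):
```python
# Before: a divisor of 1 set by an earlier InstancedMesh draw stays attached to the
# same attribute handle, because the call is skipped entirely when divisor is None.
if divisor is not None:
    gl.glVertexAttribDivisor(attr_handle, divisor)

# After: the divisor is always (re)set, falling back to the GL default of 0 for
# non-instanced attributes, so stale state cannot leak between visuals.
gl.glVertexAttribDivisor(attr_handle, divisor or 0)
```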
Right, thanks for clarifying that, I think I understand the underlying problem now.
Thanks @aganders3 and @brisvag for your help.
What would the next step be?
I will let @brisvag weigh in, but I can open a PR tomorrow morning (I'm on US East Coast time).
Aaaaah! Great detective work @aganders3! I also ended up on that line and tried to add an `else` statement, but I was working a bit blindly and I guess I didn't stumble on the right solution xD That makes perfect sense now :) Happy to review and merge the PR! | 2024-04-16T13:47:48 |
|
vispy/vispy | 2,592 | vispy__vispy-2592 | [
"2591"
] | f57c27d40962ed028b75062ea3ad64e453a95efe | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -106,6 +106,7 @@ def set_builtin(name, value):
'pyside': ['PySide'],
'pyside2': ['PySide2'],
'pyside6': ['PySide6'],
+ 'glfw': ['glfw'],
'sdl2': ['PySDL2'],
'wx': ['wxPython'],
'tk': ['pyopengltk'],
| What's the status of GLFW?
I see there's a `glfw` backend, but in `setup.py` it is neither listed as a dependency nor defined as an extra. Is that an oversight, or is it deliberate?
I'm packaging `vispy` for Fedora and with [glfw](https://pypi.org/project/glfw/) added as a dependency, I'm seeing `glfw` listed in the output of `vispy.sys_info()`. Tests using `glfw` as a backend also appear to work fine.
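For anyone wanting to sanity-check the backend locally, a minimal sketch (assuming the `glfw` package from PyPI is installed alongside vispy):
```python
import vispy

print(vispy.sys_info())   # glfw should show up among the available backends
vispy.use(app='glfw')     # explicitly select the glfw backend
```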
| I'm not sure why there isn't an "extra" defined in the setup.py for it (PRs welcome), but it is mentioned here:
https://vispy.org/installation.html#backend-requirements | 2024-05-28T14:44:37 |
|
esphome/esphome-docs | 919 | esphome__esphome-docs-919 | [
"1005"
] | 93f1e7345dab9c9c5c88820e418676ee1f8332aa | diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -70,9 +70,9 @@
# built documents.
#
# The short X.Y version.
-version = '1.16'
+version = '1.17'
# The full version, including alpha/beta/rc tags.
-release = '1.16.0-dev'
+release = '1.17.0-dev'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| Fix documentation typo on Sensirion SPS30
## Description:
Fix documentation typo on Sensirion SPS30
| 2021-01-07T06:29:13 |
||
esphome/esphome-docs | 1,148 | esphome__esphome-docs-1148 | [
"858"
] | 30a30b1eba4924ba3cac2412f6399156ecba3671 | diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -69,7 +69,7 @@
# The short X.Y version.
version = "1.17"
# The full version, including alpha/beta/rc tags.
-release = "1.17.1"
+release = "1.17.2"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| Update docs for new fan speed
## Description:
**Related issue (if applicable):** fixes https://github.com/esphome/issues/issues/1278
**Pull request in [esphome](https://github.com/esphome/esphome) with YAML changes (if applicable):** esphome/esphome#https://github.com/esphome/esphome/pull/1391
## Checklist:
- [ ] Branch: `next` is for changes and new documentation that will go public with the next ESPHome release. Fixes, changes and adjustments for the current release should be created against `current`.
- [ ] Link added in `/index.rst` when creating new documents for new components or cookbook.
| 2021-05-09T10:38:45 |
||
esphome/esphome-docs | 1,150 | esphome__esphome-docs-1150 | [
"1940"
] | ebe90a60f7570f5beed2879cc1fc434628ec0429 | diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -67,9 +67,9 @@
# built documents.
#
# The short X.Y version.
-version = "1.17"
+version = "1.18"
# The full version, including alpha/beta/rc tags.
-release = "1.17.2"
+release = "1.18.0b1"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| Add airquality wp6003 + am312 tutorial
Add air quality + am312 tutorial
## Description:
**Related issue (if applicable):** fixes <link to issue>
**Pull request in [esphome](https://github.com/esphome/esphome) with YAML changes (if applicable):** esphome/esphome#<esphome PR number goes here>
## Checklist:
- [ ] Branch: `next` is for changes and new documentation that will go public with the next ESPHome release. Fixes, changes and adjustments for the current release should be created against `current`.
- [ ] Link added in `/index.rst` when creating new documents for new components or cookbook.
| 2021-05-09T20:06:19 |
||
esphome/esphome-docs | 1,181 | esphome__esphome-docs-1181 | [
"1940"
] | 3bea218b3cb8e0ba28baef861bd7a27813fbbee0 | diff --git a/conf.py b/conf.py
--- a/conf.py
+++ b/conf.py
@@ -67,9 +67,9 @@
# built documents.
#
# The short X.Y version.
-version = "1.17"
+version = "1.18"
# The full version, including alpha/beta/rc tags.
-release = "1.17.2"
+release = "1.18.0"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
| Add airquality wp6003 + am312 tutorial
Add air quality + am312 tutorial
## Description:
**Related issue (if applicable):** fixes <link to issue>
**Pull request in [esphome](https://github.com/esphome/esphome) with YAML changes (if applicable):** esphome/esphome#<esphome PR number goes here>
## Checklist:
- [ ] Branch: `next` is for changes and new documentation that will go public with the next ESPHome release. Fixes, changes and adjustments for the current release should be created against `current`.
- [ ] Link added in `/index.rst` when creating new documents for new components or cookbook.
| 2021-05-18T23:47:34 |
||
PaddlePaddle/Paddle2ONNX | 8 | PaddlePaddle__Paddle2ONNX-8 | [
"7"
] | 299858ebb7d196d20f44ecb03f0e76ad63244cc3 | diff --git a/convert.py b/convert.py
--- a/convert.py
+++ b/convert.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -30,14 +30,16 @@ def convert(dirname):
inference_scope = fluid.core.Scope()
with fluid.scope_guard(inference_scope):
[inference_program, feed_target_names,
- fetch_targets] = fluid.io.load_inference_model(dirname, exe)
+ fetch_targets] = fluid.io.load_inference_model(dirname, exe)
# Using blocks in programs, create nodes using:
onnx_nodes = []
all_inputs = []
for block in inference_program.blocks:
- all_inputs += [paddle_variable_to_onnx_tensor(
- v, block) for v in block.vars if v not in ['feed', 'fetch']]
+ all_inputs += [
+ paddle_variable_to_onnx_tensor(v, block) for v in block.vars
+ if v not in ['feed', 'fetch']
+ ]
for op in block.ops:
if op.type in ops.PADDLE_TO_ONNX:
@@ -45,8 +47,8 @@ def convert(dirname):
# TODO(varunarora): Use the modifier function to make the
# transformation.
node_proto = helper.make_node(
- ops.PADDLE_TO_ONNX[op.type][0],
- op.input_arg_names, op.output_arg_names)
+ ops.PADDLE_TO_ONNX[op.type][0], op.input_arg_names,
+ op.output_arg_names)
onnx_nodes.append(node_proto)
else:
@@ -59,8 +61,9 @@ def convert(dirname):
# Nodes, name of graph, inputs, outputs.
if dirname[-1] == '/':
dirname = dirname[:-1]
- graph = helper.make_graph(onnx_nodes, os.path.basename(
- dirname).split('.')[0], all_inputs, [])
+ graph = helper.make_graph(onnx_nodes,
+ os.path.basename(dirname).split('.')[0],
+ all_inputs, [])
print graph
@@ -70,7 +73,7 @@ def convert(dirname):
if __name__ == "__main__":
# Read arguments: path to model.
parser = argparse.ArgumentParser()
- parser.add_argument("--modeldir", required=True,
- help="Input PaddlePaddle model")
+ parser.add_argument(
+ "--modeldir", required=True, help="Input PaddlePaddle model")
args = parser.parse_args()
convert(args.modeldir)
diff --git a/ops.py b/ops.py
--- a/ops.py
+++ b/ops.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -11,7 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-
"""
Priority of ops (uniques) to figure out support for.
@@ -460,7 +459,6 @@ def xor_op():
# 'Ceil', NEEDS ATTENTION.
'cast': ('Clip', clip_op),
'concat': ('Concat', concat_op),
-
',': ('Constant', constant_op),
'conv': ('Conv', conv_op),
diff --git a/variables.py b/variables.py
--- a/variables.py
+++ b/variables.py
@@ -1,4 +1,4 @@
-# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,6 +15,7 @@
from onnx import helper, onnx_pb2, TensorProto
import paddle.fluid.core as core
+
def paddle_variable_to_onnx_tensor(paddle_var_name, block):
# TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.
paddle_var = block.var(paddle_var_name)
@@ -36,5 +37,5 @@ def paddle_variable_to_onnx_tensor(paddle_var_name, block):
# '': onnx_pb2.TensorProto.STRING,
# '': onnx_pb2.TensorProto.COMPLEX64,
# '': onnx_pb2.TensorProto.COMPLEX128,
- core.VarDesc.VarType.BOOL: onnx_pb2.TensorProto.BOOL,
+ core.VarDesc.VarType.BOOL: onnx_pb2.TensorProto.BOOL
}
| Need to add CI for the repo
| 2018-04-07T09:43:08 |
||
PaddlePaddle/Paddle2ONNX | 11 | PaddlePaddle__Paddle2ONNX-11 | [
"5"
] | 8d816e18bc15b14aa82be73c672402a478d03bea | diff --git a/validate.py b/validate.py
new file mode 100644
--- /dev/null
+++ b/validate.py
@@ -0,0 +1,118 @@
+# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import os
+import argparse
+import numpy as np
+
+import paddle.fluid as fluid
+from onnx import helper, checker, load
+from caffe2.python.onnx.backend import Caffe2Backend
+
+import fluid_onnx.ops as ops
+from fluid_onnx.variables import paddle_variable_to_onnx_tensor
+from fluid_onnx.variables import PADDLE_TO_ONNX_DTYPE
+
+
+def parse_args():
+ # Read arguments: path to model.
+ parser = argparse.ArgumentParser("Use dummy data in the interval [a, b] "
+ "as inputs to verify the conversion.")
+ parser.add_argument(
+ "--fluid_model",
+ required=True,
+ help="The path to PaddlePaddle Fluid model.")
+ parser.add_argument(
+ "--onnx_model", required=True, help="The path to ONNX model.")
+ parser.add_argument(
+ "--a",
+ type=float,
+ default=0.0,
+ help="Left boundary of dummy data. (default: %(default)f)")
+ parser.add_argument(
+ "--b",
+ type=float,
+ default=1.0,
+ help="Right boundary of dummy data. (default: %(default)f)")
+ parser.add_argument(
+ "--batch_size",
+ type=int,
+ default=10,
+ help="Batch size. (default: %(default)d)")
+ parser.add_argument(
+ "--expected_decimal",
+ type=int,
+ default=5,
+ help="The expected decimal accuracy. (default: %(default)d)")
+ args = parser.parse_args()
+ return args
+
+
+def print_arguments(args):
+ print('----------- Configuration Arguments -----------')
+ for arg, value in sorted(vars(args).iteritems()):
+ print('%s: %s' % (arg, value))
+ print('------------------------------------------------')
+
+
+def validate(args):
+ place = fluid.CPUPlace()
+ exe = fluid.Executor(place)
+
+ [fluid_infer_program, feed_target_names,
+ fetch_targets] = fluid.io.load_inference_model(args.fluid_model, exe)
+
+ input_shapes = [
+ fluid_infer_program.global_block().var(var_name).shape
+ for var_name in feed_target_names
+ ]
+ input_shapes = [
+ shape if shape[0] > 0 else (args.batch_size, ) + shape[1:]
+ for shape in input_shapes
+ ]
+
+ # Generate dummy data as inputs
+ inputs = [
+ (args.b - args.a) * np.random.random(shape).astype("float32") + args.a
+ for shape in input_shapes
+ ]
+
+ # Fluid inference
+ fluid_results = exe.run(fluid_infer_program,
+ feed=dict(zip(feed_target_names, inputs)),
+ fetch_list=fetch_targets)
+
+ # Remove these prints some day
+ print("Inference results for fluid model:")
+ print(fluid_results)
+ print('\n')
+
+ # ONNX inference, using caffe2 as the backend
+ onnx_model = load(args.onnx_model)
+ rep = Caffe2Backend.prepare(onnx_model, device='CPU')
+ onnx_results = rep.run(inputs)
+ print("Inference results for ONNX model:")
+ print(onnx_results)
+ print('\n')
+
+ for ref, hyp in zip(fluid_results, onnx_results):
+ np.testing.assert_almost_equal(ref, hyp, decimal=args.expected_decimal)
+ print("The exported model achieves {}-decimal precision.".format(
+ args.expected_decimal))
+
+
+if __name__ == "__main__":
+ args = parse_args()
+ print_arguments(args)
+ validate(args)
| Validation for converted models for accuracy
The goal would be to get inference results identical to those of the original Paddle models run with Paddle's own inference. @kuke has more thoughts. And here is a notebook on the topic: https://github.com/onnx/tutorials/blob/master/tutorials/CorrectnessVerificationAndPerformanceComparison.ipynb.
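The core of the check, in a minimal sketch (mirroring what `validate.py` in the patch above already does, not additional code):
```python
import numpy as np

# Feed the same dummy inputs to both runtimes, then compare the outputs.
for ref, hyp in zip(fluid_results, onnx_results):
    np.testing.assert_almost_equal(ref, hyp, decimal=5)
```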
Q: what runtime would we use for ONNX: TensorRT?
| A: I think mainly the server, and TensorRT is only one possible runtime environment. | 2018-04-09T17:19:08 |
|
PaddlePaddle/Paddle2ONNX | 12 | PaddlePaddle__Paddle2ONNX-12 | [
"16"
] | b038bdc762e8329650ff63572e510c2023e1bebd | diff --git a/variables.py b/variables.py
--- a/variables.py
+++ b/variables.py
@@ -19,9 +19,9 @@
def paddle_variable_to_onnx_tensor(paddle_var_name, block):
# TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.
paddle_var = block.var(paddle_var_name)
- return helper.make_tensor_value_info(
- paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
- paddle_var.shape)
+ return helper.make_tensor_value_info(paddle_var_name,
+ PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
+ paddle_var.shape)
PADDLE_TO_ONNX_DTYPE = {
| diff --git a/tests/__init__.py b/tests/__init__.py
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
| Fix travis-ci problems
Travis-CI builds always fail.
| 2018-04-10T03:49:23 |
|
PaddlePaddle/Paddle2ONNX | 15 | PaddlePaddle__Paddle2ONNX-15 | [
"14"
] | 49757b5db44ea253d45b2e8be9364139eb8a318b | diff --git a/variables.py b/variables.py
--- a/variables.py
+++ b/variables.py
@@ -19,9 +19,9 @@
def paddle_variable_to_onnx_tensor(paddle_var_name, block):
# TODO(varunarora): Need to do this only in the case of VarType.LOD_TENSOR.
paddle_var = block.var(paddle_var_name)
- return helper.make_tensor_value_info(
- paddle_var_name, PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
- paddle_var.shape)
+ return helper.make_tensor_value_info(paddle_var_name,
+ PADDLE_TO_ONNX_DTYPE[paddle_var.dtype],
+ paddle_var.shape)
PADDLE_TO_ONNX_DTYPE = {
| diff --git a/tests/__init__.py b/tests/__init__.py
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -0,0 +1,13 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
| Need to check whether protoc exists.
| 2018-04-10T08:01:09 |
|
PaddlePaddle/Paddle2ONNX | 29 | PaddlePaddle__Paddle2ONNX-29 | [
"28"
] | 57c845a571035600775c9c680f5fc2a715a551db | diff --git a/fluid_onnx/ops.py b/fluid_onnx/ops.py
--- a/fluid_onnx/ops.py
+++ b/fluid_onnx/ops.py
@@ -374,7 +374,7 @@ def pad_op():
def pool2d_op(operator, scope):
inputs, attrs, outputs = get_op_io_info(operator)
if attrs['global_pooling'] is False:
- op_type = {'max': 'MaxPool', 'ave': 'AveragePool'}
+ op_type = {'max': 'MaxPool', 'avg': 'AveragePool'}
pool2d = make_node(
op_type[attrs['pooling_type']],
inputs=inputs['X'],
@@ -383,7 +383,7 @@ def pool2d_op(operator, scope):
strides=attrs['strides'],
pads=attrs['paddings'] + attrs['paddings'], )
else:
- op_type = {'max': 'GlobalMaxPool', 'ave': 'GlobalAveragePool'}
+ op_type = {'max': 'GlobalMaxPool', 'avg': 'GlobalAveragePool'}
pool2d = make_node(
op_type[attrs['pooling_type']],
inputs=inputs['X'],
| diff --git a/tests/op_test.py b/tests/op_test.py
new file mode 100644
--- /dev/null
+++ b/tests/op_test.py
@@ -0,0 +1,269 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import unittest
+import numpy as np
+
+from onnx.helper import make_node, make_graph, make_model
+from onnx.checker import check_node
+import paddle.fluid.core as core
+from paddle.fluid import scope_guard
+from paddle.fluid.backward import append_backward
+from paddle.fluid.op import Operator
+from paddle.fluid.executor import Executor
+from paddle.fluid.framework import Program, OpProtoHolder
+from caffe2.python.onnx.backend import Caffe2Backend
+
+from fluid_onnx.ops import node_maker
+from fluid_onnx.variables import paddle_variable_to_onnx_tensor
+
+"""
+NOTE (varunarora): Some of the code snippets below have been inspired from
+op_test.py in /python/paddle/fluid/tests/unittests/ in the original
+Paddle repository (https://github.com/PaddlePaddle/Paddle/).
+
+When in doubt, keep in sync with it's counterparts.
+"""
+
+def append_input_output(block, op_proto, np_list, persistable_list, is_input):
+ """Returns a list of Paddle variables associated with a block.
+
+ Args:
+ block:
+ op_proto: The matching C++ operator type.
+ np_list: Dict of value names -> values.
+ persistable_list: List of variables to be persisted.
+ is_input: Boolean of if this is a set of inputs.
+
+ Returns:
+ A dict of variable names -> Paddle variable instances.
+ """
+ # A list of expected inputs and outputs, as desired by Paddle's
+ # C++ runtime.
+ proto_list = op_proto.inputs if is_input else op_proto.outputs
+
+ def create_var(block, name, np_list, var_proto):
+ """Creates a Paddle var in the given block and C++ proto type"""
+
+ # If the expected variable is not found is in the provided list
+ # of variables, make an assertion. Else, determine the shape and
+ # and set the LoD level before creating the Paddle variable.
+ if name not in np_list:
+ assert var_proto.intermediate, "{} not found".format(name)
+ shape = None
+ lod_level = None
+ else:
+ np_value = np_list[name]
+ if isinstance(np_value, tuple):
+ shape = list(np_value[0].shape)
+ lod_level = len(np_value[1])
+ else:
+ shape = list(np_value.shape)
+ lod_level = 0
+
+ persistable = True if name in persistable_list else False
+ return block.create_var(
+ dtype="float32",
+ shape=shape,
+ persistable=persistable,
+ lod_level=lod_level,
+ name=name)
+
+ # Go through all the variables in the expected list for this operator.
+ var_dict = {}
+ for var_proto in proto_list:
+ var_name = str(var_proto.name)
+
+ # If these are inputs, and the expected input is not necessary
+ # are is not provided in the list of inputs, we move on to the next
+ # expected. input.
+ # If not, we make sure it the expected input is provided, or that it
+ # is unnecessary.
+ if is_input:
+ if (var_name not in np_list) and var_proto.dispensable:
+ continue
+ assert (var_name in np_list) or (var_proto.dispensable), \
+ "Missing {} as input".format(var_name)
+
+ # Set duplicable variables as lists of Paddle variables, and standard
+ # ones as simple Paddle variables.
+ if var_proto.duplicable:
+ assert isinstance(np_list[var_name], list), \
+ "Duplicable {} should be set as list".format(var_name)
+ var_list = []
+ for (name, np_value) in np_list[var_name]:
+ var_list.append(
+ create_var(block, name, {name: np_value}, var_proto))
+ var_dict[var_name] = var_list
+ else:
+ var_dict[var_name] = create_var(block, var_name, np_list, var_proto)
+
+ return var_dict
+
+
+class OpTest(unittest.TestCase):
+ """Evaluates an op maker's validity.
+
+ Using op-specific inputs and attributes, executes:
+ (1) A Paddle program
+ (2) A Caffe2 backend consuming Paddle ops converted to ONNX using custom
+ op makers.
+
+ It uses the outputs of both fo these executions to compare the values
+ of their outputs. Success in these comparisons comes from almost equal
+ values of this output data across both executions.
+
+ Attributes:
+ inputs (dict): Operator input name -> input value.
+ outputs (dict): Operator output -> output value placeholders.
+
+ Additionally, custom attributes to the op.
+ """
+ def feed_var(self, input_vars, place):
+ """Returns a dictionary of variable names -> initialized tensors.
+
+ It sets tensors' execution place set (CPU or GPU), and Level of
+ Details (LoD) using this info from the numpy values.
+ """
+ feed_map = {}
+ for var_name in input_vars:
+ if isinstance(input_vars[var_name], list):
+ for name, np_value in self.inputs[var_name]:
+ tensor = core.LoDTensor()
+ if isinstance(np_value, tuple):
+ tensor.set(np_value[0], place)
+ tensor.set_lod(np_value[1])
+ else:
+ tensor.set(np_value, place)
+ feed_map[name] = tensor
+ else:
+ tensor = core.LoDTensor()
+ if isinstance(self.inputs[var_name], tuple):
+ tensor.set(self.inputs[var_name][0], place)
+ tensor.set_lod(self.inputs[var_name][1])
+ else:
+ tensor.set(self.inputs[var_name], place)
+ feed_map[var_name] = tensor
+
+ return feed_map
+
+ def eval_fluid_op(self):
+ """Run a Paddle program only with the op to test.
+
+ Returns the output values after running.
+ """
+ op_proto = OpProtoHolder.instance().get_op_proto(self.op_type)
+
+ # Create a new paddle scope and program.
+ place = core.CPUPlace()
+ exe = Executor(place)
+ self.scope = core.Scope()
+
+ with scope_guard(self.scope):
+ program = Program()
+ self.block = program.global_block()
+
+ # A list of inputs and outputs used by the op
+ # that need to persisted in the global block.
+ persistable = self.persistable if hasattr(self,
+ "persistable") else []
+
+ # Add input and output variables to the global block.
+ inputs = append_input_output(self.block, op_proto, self.inputs,
+ persistable, True)
+ outputs = append_input_output(self.block, op_proto, self.outputs,
+ persistable, False)
+
+ # Append the op.
+ self.op = self.block.append_op(
+ type=self.op_type,
+ inputs=inputs,
+ outputs=outputs,
+ attrs=self.attrs if hasattr(self, "attrs") else dict())
+
+ # Infer the var type and share of the op based on the block's
+ # inputs and outputs.
+ self.op.desc.infer_var_type(self.block.desc)
+ self.op.desc.infer_shape(self.block.desc)
+
+ # Construct a unique list of outputs to fetch.
+ self.fetch_list = []
+ for var_name, var in outputs.iteritems():
+ if var_name in self.outputs:
+ if isinstance(var, list):
+ for v in var:
+ self.fetch_list.append(v)
+ else:
+ self.fetch_list.append(var)
+
+ self.feed_map = self.feed_var(inputs, place)
+
+ outs = exe.run(program,
+ feed=self.feed_map,
+ scope=self.scope,
+ fetch_list=self.fetch_list,
+ return_numpy=True)
+ return outs
+
+ def eval_onnx_node(self):
+ """Run a Caffe2 program using their ONNX backend.
+
+ Prior to running the backend, use the Paddle scope to construct
+ ONNX ops and prepare the inputs and output values based on ONNX
+ compatibility.
+ """
+ # Convert inputs and outputs to ONNX tensors.
+ # Use the Paddle fetch_list to prepare the outputs.
+ inputs = [
+ paddle_variable_to_onnx_tensor(v, self.block) for v in self.feed_map
+ ]
+
+ fetch_target_names = [
+ fetch_target.name for fetch_target in self.fetch_list
+ ]
+ outputs = [
+ paddle_variable_to_onnx_tensor(v, self.block)
+ for v in fetch_target_names
+ ]
+
+ # Construct the ONNX model using paddle-onnx.
+ onnx_node = node_maker[self.op_type](operator=self.op, scope=self.scope)
+ node_list = list(onnx_node) if isinstance(onnx_node,
+ tuple) else [onnx_node]
+ for node in node_list:
+ check_node(node)
+
+ onnx_graph = make_graph(node_list, self.op_type, inputs, outputs)
+ onnx_model = make_model(onnx_graph, producer_name='unittest')
+
+ # Run the Caffe2Backend with the ONNX model.
+ rep = Caffe2Backend.prepare(onnx_model, device='CPU')
+ in_vals = [self.inputs[input.name] for input in inputs]
+ outs = rep.run(in_vals)
+
+ return outs
+
+ def check_output(self, decimal=5):
+ """Compares the outputs from the Paddle program and the Caffe2
+ backend using the ONNX model constructed by paddle-onnx.
+
+ Compares accuracy at a precision of 5 decimal places by default.
+ """
+ fluid_result = self.eval_fluid_op()
+ onnx_result = self.eval_onnx_node()
+
+ for ref, hyp in zip(fluid_result, onnx_result):
+ # Compare the values using numpy's almost_equal comparator.
+ np.testing.assert_almost_equal(ref, hyp, decimal=decimal)
diff --git a/tests/test_conv2d_op.py b/tests/test_conv2d_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_conv2d_op.py
@@ -0,0 +1,64 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestConv2dOp(OpTest):
+ def setUp(self):
+ self.op_type = "conv2d"
+ self.use_cudnn = False
+ self.use_mkldnn = False
+ self.dtype = np.float32
+ self.groups = 1
+ self.dilations = [1, 1]
+ self.pad = [0, 0]
+ self.stride = [1, 1]
+ self.input_size = [2, 3, 5, 5]
+ f_c = self.input_size[1] / self.groups
+ self.filter_size = [6, f_c, 3, 3]
+
+ conv2d_param = {
+ 'stride': self.stride,
+ 'pad': self.pad,
+ 'dilation': self.dilations
+ }
+
+ input = np.random.random(self.input_size).astype(self.dtype)
+ filter = np.random.random(self.filter_size).astype(self.dtype)
+
+ self.inputs = {'Input': input, 'Filter': filter}
+
+ self.persistable = ['Filter']
+
+ self.attrs = {
+ 'strides': self.stride,
+ 'paddings': self.pad,
+ 'groups': self.groups,
+ 'dilations': self.dilations,
+ 'use_cudnn': self.use_cudnn,
+ 'use_mkldnn': self.use_mkldnn
+ }
+
+ output = np.zeros((1, 1, 1, 1))
+ self.outputs = {'Output': output}
+
+ def test_check_output(self):
+ self.check_output(decimal=5)
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_elementwise_add_op.py b/tests/test_elementwise_add_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_elementwise_add_op.py
@@ -0,0 +1,37 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestElementwiseAddOp(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_add"
+ self.attrs = {"axis": 1}
+
+ self.inputs = {
+ 'X': np.random.random((2, 1)).astype(np.float32),
+ 'Y': np.random.random((1, )).astype(np.float32)
+ }
+
+ self.outputs = {'Out': np.zeros((1, 1))}
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_mul_op.py b/tests/test_mul_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_mul_op.py
@@ -0,0 +1,38 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestMulOp(OpTest):
+ def setUp(self):
+ self.op_type = "mul"
+ self.inputs = {
+ 'X': np.random.random((15, 4, 12, 10)).astype("float32"),
+ 'Y': np.random.random((4, 30, 8, 2, 9)).astype("float32")
+ }
+ self.attrs = {'x_num_col_dims': 2, 'y_num_col_dims': 2}
+ result = np.dot(self.inputs['X'].reshape(15 * 4, 12 * 10),
+ self.inputs['Y'].reshape(4 * 30, 8 * 2 * 9))
+ result = result.reshape(15, 4, 8, 2, 9)
+ self.outputs = {'Out': result}
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_pool2d_op.py b/tests/test_pool2d_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_pool2d_op.py
@@ -0,0 +1,118 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestPool2dOp(OpTest):
+ def setUp(self):
+ self.op_type = "pool2d"
+ self.use_cudnn = False
+ self.use_mkldnn = False
+ self.dtype = np.float32
+ self.init_test_case()
+ self.init_global_pool()
+ self.init_kernel_type()
+ self.init_pool_type()
+ self.init_ceil_mode()
+ if self.global_pool:
+ self.paddings = [0 for _ in range(len(self.paddings))]
+ input = np.random.random(self.shape).astype(self.dtype)
+ output = np.zeros((1, 1)).astype(self.dtype)
+
+ self.inputs = {'X': input}
+
+ self.attrs = {
+ 'strides': self.strides,
+ 'paddings': self.paddings,
+ 'ksize': self.ksize,
+ 'pooling_type': self.pool_type,
+ 'global_pooling': self.global_pool,
+ 'use_cudnn': self.use_cudnn,
+ 'use_mkldnn': self.use_mkldnn,
+ 'ceil_mode': self.ceil_mode,
+ 'data_format': 'AnyLayout'
+ }
+
+ self.outputs = {'Out': output}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def init_test_case(self):
+ self.shape = [2, 3, 5, 5]
+ self.ksize = [3, 3]
+ self.strides = [1, 1]
+ self.paddings = [0, 0]
+
+ def init_kernel_type(self):
+ pass
+
+ def init_pool_type(self):
+ self.pool_type = "avg"
+
+ def init_global_pool(self):
+ self.global_pool = True
+
+ def init_ceil_mode(self):
+ self.ceil_mode = False
+
+
+class TestPool2dOp1(TestPool2dOp):
+ def init_test_case(self):
+ self.shape = [2, 3, 7, 7]
+ self.ksize = [3, 3]
+ self.strides = [1, 1]
+ self.paddings = [0, 0]
+
+ def init_pool_type(self):
+ self.pool_type = "avg"
+
+ def init_global_pool(self):
+ self.global_pool = False
+
+
+class TestPool2dOp2(TestPool2dOp):
+ def init_test_case(self):
+ self.shape = [2, 3, 7, 7]
+ self.ksize = [3, 3]
+ self.strides = [1, 1]
+ self.paddings = [1, 1]
+
+ def init_pool_type(self):
+ self.pool_type = "avg"
+
+ def init_global_pool(self):
+ self.global_pool = False
+
+
+class TestPool2dOp3(TestPool2dOp):
+ def init_pool_type(self):
+ self.pool_type = "max"
+
+
+class TestPool2dOp4(TestPool2dOp1):
+ def init_pool_type(self):
+ self.pool_type = "max"
+
+
+class TestPool2dOp5(TestPool2dOp2):
+ def init_pool_type(self):
+ self.pool_type = "max"
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_tanh_op.py b/tests/test_tanh_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_tanh_op.py
@@ -0,0 +1,32 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestTanhOp(OpTest):
+ def setUp(self):
+ X = np.random.random((3, 5)).astype("float32")
+ self.inputs = {'X': X}
+ self.outputs = {'Out': np.random.random((1, 1)).astype("float32")}
+ self.op_type = 'tanh'
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
| Design the unit test framework to test operators' conversion
| 2018-04-19T15:51:44 |
|
PaddlePaddle/Paddle2ONNX | 30 | PaddlePaddle__Paddle2ONNX-30 | [
"34"
] | 5441d5fcf978ed5a169c4307413eb0009aed897e | diff --git a/convert.py b/convert.py
--- a/convert.py
+++ b/convert.py
@@ -15,6 +15,7 @@
import os
import argparse
+from fluid.utils import op_io_info
from onnx import helper, checker
import paddle.fluid as fluid
@@ -28,7 +29,11 @@ def parse_args():
parser.add_argument(
"--fluid_model", required=True, help="Input PaddlePaddle Fluid model.")
parser.add_argument(
- "--onnx_model", required=False, help="The path to save ONNX model.")
+ "--onnx_model", required=True, help="The path to save ONNX model.")
+ parser.add_argument(
+ "--to_print_model",
+ action='store_true',
+ help="To print converted ONNX model.")
args = parser.parse_args()
return args
@@ -68,15 +73,6 @@ def convert(args):
for v in feed_target_names
]
- # Create outputs
- fetch_target_names = [
- fetch_target.name for fetch_target in fetch_targets
- ]
- outputs = [
- paddle_variable_to_onnx_tensor(v, global_block)
- for v in fetch_target_names
- ]
-
# Create nodes
for block in inference_program.blocks:
for op in block.ops:
@@ -84,7 +80,7 @@ def convert(args):
# TODO(kuke): deal with the corner case that vars in
# different blocks have the same name
node_proto = ops.node_maker[op.type](operator=op,
- scope=inference_scope)
+ block=block)
if isinstance(node_proto, tuple):
onnx_nodes.extend(list(node_proto))
@@ -95,6 +91,21 @@ def convert(args):
raise NotImplementedError("OP[%s] is not supported in "
"the converter!" % op.type)
+ # Create outputs
+ fetch_target_names = [
+ fetch_target.name for fetch_target in fetch_targets
+ ]
+ # Get the new names for outputs if they've renamed in nodes' making
+ renamed_outputs = op_io_info.get_all_renamed_outputs()
+ fetch_target_names = [
+ name if name not in renamed_outputs else renamed_outputs[name]
+ for name in fetch_target_names
+ ]
+ outputs = [
+ paddle_variable_to_onnx_tensor(v, global_block)
+ for v in fetch_target_names
+ ]
+
# Make graph
model_name = os.path.basename(args.fluid_model.strip('/')).split('.')[0]
onnx_graph = helper.make_graph(onnx_nodes, model_name, inputs, outputs)
@@ -106,7 +117,8 @@ def convert(args):
checker.check_model(onnx_model)
# Print model
- print("The converted model is:\n{}".format(onnx_model))
+ if args.to_print_model:
+ print("The converted model is:\n{}".format(onnx_model))
# Save converted model
if args.onnx_model is not None:
diff --git a/fluid/utils.py b/fluid/utils.py
--- a/fluid/utils.py
+++ b/fluid/utils.py
@@ -12,12 +12,66 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from compiler.ast import flatten
-def get_op_io_info(op):
- inputs = dict([(name, op.input(name)) for name in op.input_names])
- attrs = dict(
- [(name, op.attr(name))
- for name in op.attr_names]) if op.attr_names is not None else None
- outputs = dict([(name, op.output(name)) for name in op.output_names])
- return inputs, attrs, outputs
+class OpIOsInfo():
+ """Return inputs/outputs information for an operator, and resolve potential
+ name conflicts in ONNX graph.
+ """
+
+ def __init__(self):
+ self._all_renamed_outputs = {}
+ self._renamed_cnt = 0
+
+ def _get_new_name(self, arg):
+ """Get the new name for an argument.
+ """
+
+ self._renamed_cnt += 1
+ return arg + '@dup_' + str(self._renamed_cnt)
+
+ def _rename_input_args(self):
+ """Rename input arguments if their previous output arugments have been
+ renamed.
+ """
+
+ for in_name in self.inputs:
+ if self.inputs[in_name][0] in self._all_renamed_outputs:
+ self.inputs[in_name][0] = self._all_renamed_outputs[self.inputs[
+ in_name][0]]
+
+ def _rename_output_args(self):
+ """Rename output arguments if they have same names with the input
+ arguments.
+ """
+
+ input_args = flatten(self.inputs.values())
+ for out_name in self.outputs:
+ if self.outputs[out_name][0] in input_args:
+ new_name = self._get_new_name(self.outputs[out_name][0])
+ self._all_renamed_outputs[self.outputs[out_name][0]] = new_name
+ self.outputs[out_name][0] = new_name
+
+ def get_all_renamed_outputs(self):
+ """Get all the renamed outputs in history.
+ """
+
+ return self._all_renamed_outputs
+
+ def __call__(self, op):
+ self.inputs = dict([(name, op.input(name)) for name in op.input_names])
+ self.attrs = dict(
+ [(name, op.attr(name))
+ for name in op.attr_names]) if op.attr_names is not None else None
+ self.outputs = dict(
+ [(name, op.output(name)) for name in op.output_names])
+
+ self._rename_input_args()
+ self._rename_output_args()
+
+ return self.inputs, self.attrs, self.outputs
+
+
+# Instantiate the class to a callable object
+op_io_info = OpIOsInfo()
diff --git a/fluid_onnx/ops.py b/fluid_onnx/ops.py
--- a/fluid_onnx/ops.py
+++ b/fluid_onnx/ops.py
@@ -13,9 +13,10 @@
# limitations under the License.
import sys
+from onnx import TensorProto
from onnx.helper import make_node, make_tensor
from paddle.fluid.executor import fetch_var
-from fluid.utils import get_op_io_info
+from fluid.utils import op_io_info
from fluid_onnx.variables import PADDLE_TO_ONNX_DTYPE
"""
Priority of ops (uniques) to figure out support for.
@@ -59,16 +60,6 @@ def abs_op():
pass
-def add_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
- return make_node(
- 'Add',
- inputs=inputs['X'] + inputs['Y'],
- outputs=outputs['Out'],
- axis=attrs['axis'],
- broadcast=1)
-
-
def and_op():
"""
Need to support broadcast.
@@ -91,17 +82,40 @@ def averagepool_op():
pass
-def batchnorm_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
- bn_op = make_node(
+def batch_norm_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+
+ x_shape = block.vars[inputs['X'][0]].shape
+ reshape_node = None
+ if len(x_shape) == 2:
+ reshaped_x = [inputs['X'][0] + '@reshape_0']
+ new_shape = [0, x_shape[1], 1, 1]
+ new_shape_name = [inputs['X'][0] + '@shape_tensor_0']
+ new_shape_node = make_node(
+ 'Constant',
+ inputs=[],
+ outputs=new_shape_name,
+ value=make_tensor(
+ name=new_shape_name[0],
+ data_type=TensorProto.INT64,
+ dims=(4, ),
+ vals=new_shape))
+ reshape_node = make_node(
+ 'Reshape', inputs=inputs['X'] + new_shape_name, outputs=reshaped_x)
+ else:
+ reshaped_x = inputs['X']
+
+ bn_node = make_node(
'BatchNormalization',
- inputs=inputs['X'] + inputs['Scale'] + inputs['Bias'] + inputs['Mean'] +
+ inputs=reshaped_x + inputs['Scale'] + inputs['Bias'] + inputs['Mean'] +
inputs['Variance'],
outputs=outputs['Y'],
is_test=attrs['is_test'],
epsilon=attrs['epsilon'],
momentum=attrs['momentum'])
- return bn_op
+
+ return bn_node if reshape_node is None else (new_shape_node, reshape_node,
+ bn_node)
def cast_op():
@@ -134,11 +148,10 @@ def constant_op(var, scope):
return constant_node
-def conv2d_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
- kernel_shape = fetch_var(
- operator.input('Filter')[0].decode('string_escape'), scope).shape
+def conv2d_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ kernel_shape = block.vars[inputs['Filter'][0]].shape
conv2d = make_node(
'Conv',
inputs=inputs['Input'] + inputs['Filter'],
@@ -163,19 +176,50 @@ def div_op():
pass
-def dropout_op():
- pass
+def dropout_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ scale_input = [outputs['Out'][0] + '@dropout']
+ dropout_op = make_node(
+ 'Dropout',
+ inputs=inputs['X'],
+ outputs=scale_input + outputs['Mask'],
+ is_test=attrs['is_test'],
+ ratio=attrs['dropout_prob'])
+ # Fluid and ONNX use different dropout formula
+ scale_op = make_node(
+ 'Scale',
+ inputs=scale_input,
+ outputs=outputs['Out'],
+ scale=1.0 - attrs['dropout_prob'])
+ return (dropout_op, scale_op)
-def elu_op():
- pass
+def elementwise_add_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'Add',
+ inputs=inputs['X'] + inputs['Y'],
+ outputs=outputs['Out'],
+ axis=attrs['axis'],
+ broadcast=1)
-def equal_op():
+
+def elementwise_mul_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'Mul',
+ inputs=inputs['X'] + inputs['Y'],
+ outputs=outputs['Out'],
+ axis=attrs['axis'],
+ broadcast=1)
+
+
+def elu_op():
pass
-def dropout_op():
+def equal_op():
pass
@@ -263,8 +307,8 @@ def lppool_op():
pass
-def mul_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
+def mul_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
# Flatten input(X) and input(Y) into 2-D matries
x_flat_out = [inputs['X'][0] + '@flatten_0']
@@ -371,8 +415,8 @@ def pad_op():
pass
-def pool2d_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
+def pool2d_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
if attrs['global_pooling'] is False:
op_type = {'max': 'MaxPool', 'avg': 'AveragePool'}
pool2d = make_node(
@@ -459,8 +503,8 @@ def reducesumsquare_op():
pass
-def relu_op(operator, scope):
- inputs, _, outputs = get_op_io_info(operator)
+def relu_op(operator, block):
+ inputs, _, outputs = op_io_info(operator)
return make_node('Relu', inputs=inputs['X'], outputs=outputs['Out'])
@@ -476,8 +520,9 @@ def shape_op():
pass
-def sigmoid_op():
- pass
+def sigmoid_op(operator, block):
+ inputs, _, outputs = op_io_info(operator)
+ return make_node('Sigmoid', inputs=inputs['X'], outputs=outputs['Out'])
def size_op():
@@ -488,8 +533,8 @@ def slice_op():
pass
-def softmax_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
+def softmax_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
return make_node('Softmax', inputs=inputs['X'], outputs=outputs['Out'])
@@ -525,8 +570,8 @@ def sum_op():
pass
-def tanh_op(operator, scope):
- inputs, attrs, outputs = get_op_io_info(operator)
+def tanh_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
return make_node('Tanh', inputs=inputs['X'], outputs=outputs['Out'])
@@ -560,13 +605,14 @@ def xor_op():
node_maker = {
# Paddle op name : (ONNX op name, modifier)
'abs': ('Abs', abs_op),
- 'elementwise_add': add_op,
+ 'elementwise_add': elementwise_add_op,
+ 'elementwise_mul': elementwise_mul_op,
# '': 'And', # ?
# 'ArgMax', NEEDS ATTENTION.
# 'ArgMin', NEEDS ATTENTION.
'': ('AveragePool', averagepool_op),
- 'batch_norm': batchnorm_op,
+ 'batch_norm': batch_norm_op,
'cast': ('Cast', cast_op),
# 'Ceil', NEEDS ATTENTION.
'cast': ('Clip', clip_op),
@@ -577,8 +623,9 @@ def xor_op():
# Need to continue the mapping below.
'': 'ConvTranspose',
'': 'DepthToSpace',
+ 'depthwise_conv2d': conv2d_op,
'': 'Div',
- '': 'Dropout',
+ 'dropout': dropout_op,
'': 'Elu',
'': 'Equal',
'': 'Exp',
@@ -636,7 +683,7 @@ def xor_op():
'': 'Reshape',
# 'Selu', NEEDS ATTENTION.
'': 'Shape',
- '': 'Sigmoid',
+ 'sigmoid': sigmoid_op,
'': 'Size',
# 'Slice', NEEDS ATTENTION.
'softmax': softmax_op,
| diff --git a/tests/op_test.py b/tests/op_test.py
--- a/tests/op_test.py
+++ b/tests/op_test.py
@@ -28,7 +28,6 @@
from fluid_onnx.ops import node_maker
from fluid_onnx.variables import paddle_variable_to_onnx_tensor
-
"""
NOTE (varunarora): Some of the code snippets below have been inspired from
op_test.py in /python/paddle/fluid/tests/unittests/ in the original
@@ -37,6 +36,7 @@
When in doubt, keep in sync with it's counterparts.
"""
+
def append_input_output(block, op_proto, np_list, persistable_list, is_input):
"""Returns a list of Paddle variables associated with a block.
@@ -50,12 +50,14 @@ def append_input_output(block, op_proto, np_list, persistable_list, is_input):
Returns:
A dict of variable names -> Paddle variable instances.
"""
+
# A list of expected inputs and outputs, as desired by Paddle's
# C++ runtime.
proto_list = op_proto.inputs if is_input else op_proto.outputs
def create_var(block, name, np_list, var_proto):
- """Creates a Paddle var in the given block and C++ proto type"""
+ """Creates a Paddle var in the given block and C++ proto type.
+ """
# If the expected variable is not found is in the provided list
# of variables, make an assertion. Else, determine the shape and
@@ -87,10 +89,9 @@ def create_var(block, name, np_list, var_proto):
var_name = str(var_proto.name)
# If these are inputs, and the expected input is not necessary
- # are is not provided in the list of inputs, we move on to the next
- # expected. input.
- # If not, we make sure it the expected input is provided, or that it
- # is unnecessary.
+ # and not provided in the list of inputs, we move on to the next
+        # expected input. If not, we make sure the expected input is
+ # provided, or that it is unnecessary.
if is_input:
if (var_name not in np_list) and var_proto.dispensable:
continue
@@ -113,6 +114,27 @@ def create_var(block, name, np_list, var_proto):
return var_dict
+def create_tensor(np_value, place):
+ """Create a LoDTensor initialized by the numpy ndarray.
+
+ Args:
+ np_value (ndarray|tuple): The numpy ndarry to initialize the tensor,
+ in tuple (value, LoD) when LoD is given.
+ place (CPUPlace|CUDAPlace): The place for the tensor.
+ Return:
+ The created LoDTensor.
+ """
+
+ tensor = core.LoDTensor()
+ if isinstance(np_value, tuple):
+ tensor.set(np_value[0], place)
+ tensor.set_lod(np_value[1])
+ else:
+ tensor.set(np_value, place)
+
+ return tensor
+
+
class OpTest(unittest.TestCase):
"""Evaluates an op maker's validity.
@@ -131,30 +153,22 @@ class OpTest(unittest.TestCase):
Additionally, custom attributes to the op.
"""
+
def feed_var(self, input_vars, place):
"""Returns a dictionary of variable names -> initialized tensors.
It sets tensors' execution place set (CPU or GPU), and Level of
Details (LoD) using this info from the numpy values.
"""
+
feed_map = {}
for var_name in input_vars:
if isinstance(input_vars[var_name], list):
for name, np_value in self.inputs[var_name]:
- tensor = core.LoDTensor()
- if isinstance(np_value, tuple):
- tensor.set(np_value[0], place)
- tensor.set_lod(np_value[1])
- else:
- tensor.set(np_value, place)
+ tensor = create_tensor(np_value, place)
feed_map[name] = tensor
else:
- tensor = core.LoDTensor()
- if isinstance(self.inputs[var_name], tuple):
- tensor.set(self.inputs[var_name][0], place)
- tensor.set_lod(self.inputs[var_name][1])
- else:
- tensor.set(self.inputs[var_name], place)
+ tensor = create_tensor(self.inputs[var_name], place)
feed_map[var_name] = tensor
return feed_map
@@ -164,14 +178,15 @@ def eval_fluid_op(self):
Returns the output values after running.
"""
+
op_proto = OpProtoHolder.instance().get_op_proto(self.op_type)
# Create a new paddle scope and program.
place = core.CPUPlace()
exe = Executor(place)
- self.scope = core.Scope()
+ scope = core.Scope()
- with scope_guard(self.scope):
+ with scope_guard(scope):
program = Program()
self.block = program.global_block()
@@ -212,7 +227,6 @@ def eval_fluid_op(self):
outs = exe.run(program,
feed=self.feed_map,
- scope=self.scope,
fetch_list=self.fetch_list,
return_numpy=True)
return outs
@@ -224,6 +238,7 @@ def eval_onnx_node(self):
ONNX ops and prepare the inputs and output values based on ONNX
compatibility.
"""
+
# Convert inputs and outputs to ONNX tensors.
# Use the Paddle fetch_list to prepare the outputs.
inputs = [
@@ -239,7 +254,7 @@ def eval_onnx_node(self):
]
# Construct the ONNX model using paddle-onnx.
- onnx_node = node_maker[self.op_type](operator=self.op, scope=self.scope)
+ onnx_node = node_maker[self.op_type](operator=self.op, block=self.block)
node_list = list(onnx_node) if isinstance(onnx_node,
tuple) else [onnx_node]
for node in node_list:
@@ -261,6 +276,7 @@ def check_output(self, decimal=5):
Compares accuracy at a precision of 5 decimal places by default.
"""
+
fluid_result = self.eval_fluid_op()
onnx_result = self.eval_onnx_node()
diff --git a/tests/test_conv2d_op.py b/tests/test_conv2d_op.py
--- a/tests/test_conv2d_op.py
+++ b/tests/test_conv2d_op.py
@@ -19,7 +19,7 @@
class TestConv2dOp(OpTest):
def setUp(self):
- self.op_type = "conv2d"
+ self.init_conv_type()
self.use_cudnn = False
self.use_mkldnn = False
self.dtype = np.float32
@@ -56,9 +56,17 @@ def setUp(self):
output = np.zeros((1, 1, 1, 1))
self.outputs = {'Output': output}
+ def init_conv_type(self):
+ self.op_type = "conv2d"
+
def test_check_output(self):
self.check_output(decimal=5)
+class TestDepthwiseConv2dOp(TestConv2dOp):
+ def init_conv_type(self):
+ self.op_type = "depthwise_conv2d"
+
+
if __name__ == '__main__':
unittest.main()
diff --git a/tests/test_dropout_op.py b/tests/test_dropout_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_dropout_op.py
@@ -0,0 +1,32 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestDropoutOp(OpTest):
+ def setUp(self):
+ self.op_type = "dropout"
+ self.inputs = {'X': np.random.random((32, 64, 2)).astype("float32")}
+ self.attrs = {'dropout_prob': 0.8, 'is_test': True}
+ self.outputs = {'Out': np.zeros((1, 1))}
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_elementwise_mul_op.py b/tests/test_elementwise_mul_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_elementwise_mul_op.py
@@ -0,0 +1,46 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestElementwiseMulOp(OpTest):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.uniform(0.1, 1, [11, 13]).astype(np.float32),
+ 'Y': np.random.uniform(0.1, 1, [11, 13]).astype(np.float32)
+ }
+ self.outputs = {'Out': np.zeros((1, 1))}
+
+ def test_check_output(self):
+ self.check_output()
+
+
+class TestElementwiseMulOp_broadcast(TestElementwiseMulOp):
+ def setUp(self):
+ self.op_type = "elementwise_mul"
+ self.inputs = {
+ 'X': np.random.rand(2, 3, 4, 5).astype(np.float64),
+ 'Y': np.random.rand(3, 4).astype(np.float64)
+ }
+
+ self.attrs = {'axis': 1}
+ self.outputs = {'Out': np.zeros((1, 1))}
+
+
+if __name__ == '__main__':
+ unittest.main()
| Enable the conversion of the MobileNet & SE_ResNeXt models
| 2018-04-24T08:07:47 |
|
PaddlePaddle/Paddle2ONNX | 38 | PaddlePaddle__Paddle2ONNX-38 | [
"39"
] | a49d02ea2b36685bc387dc8c1b9010af39b58ee4 | diff --git a/fluid_onnx/ops.py b/fluid_onnx/ops.py
--- a/fluid_onnx/ops.py
+++ b/fluid_onnx/ops.py
@@ -13,6 +13,7 @@
# limitations under the License.
import sys
+from functools import partial
from onnx import TensorProto
from onnx.helper import make_node, make_tensor
from paddle.fluid.executor import fetch_var
@@ -56,8 +57,10 @@
"""
-def abs_op():
- pass
+def activation_ops(act_type, operator, block):
+ inputs, _, outputs = op_io_info(operator)
+ return make_node(
+ act_type, inputs=inputs.values()[0], outputs=outputs.values()[0])
def and_op():
@@ -75,13 +78,6 @@ def argmin_op():
pass
-def averagepool_op():
- """
- Need to support more pad mode.
- """
- pass
-
-
def batch_norm_op(operator, block):
inputs, attrs, outputs = op_io_info(operator)
@@ -122,10 +118,6 @@ def cast_op():
pass
-def ceil_op():
- pass
-
-
def clip_op():
pass
@@ -172,10 +164,6 @@ def depthtospace_op():
pass
-def div_op():
- pass
-
-
def dropout_op(operator, block):
inputs, attrs, outputs = op_io_info(operator)
scale_input = [outputs['Out'][0] + '@dropout']
@@ -195,20 +183,10 @@ def dropout_op(operator, block):
return (dropout_op, scale_op)
-def elementwise_add_op(operator, block):
- inputs, attrs, outputs = op_io_info(operator)
- return make_node(
- 'Add',
- inputs=inputs['X'] + inputs['Y'],
- outputs=outputs['Out'],
- axis=attrs['axis'],
- broadcast=1)
-
-
-def elementwise_mul_op(operator, block):
+def elementwise_ops(op_type, operator, block):
inputs, attrs, outputs = op_io_info(operator)
return make_node(
- 'Mul',
+ op_type,
inputs=inputs['X'] + inputs['Y'],
outputs=outputs['Out'],
axis=attrs['axis'],
@@ -223,18 +201,10 @@ def equal_op():
pass
-def exp_op():
- pass
-
-
def flatten_op():
pass
-def floor_op():
- pass
-
-
def gru_op():
pass
@@ -247,23 +217,22 @@ def gemm_op():
pass
-def globalaveragepool_op():
- pass
-
-
def globallppool_op():
pass
-def globalmaxpool_op():
- pass
-
-
def greater_op():
pass
-def hardsigmoid_op():
+def hardsigmoid_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'HardSigmoid',
+ inputs=inputs['X'],
+ outputs=outputs['Out'],
+ alpha=0.2,
+ beta=0.5)
pass
@@ -370,13 +339,6 @@ def max_op():
pass
-def maxpool_op():
- """
- Need to support broadcast.
- """
- pass
-
-
def maxroipool_op():
pass
@@ -459,8 +421,29 @@ def randomuniformlike_op():
pass
-def reciprocal_op():
- pass
+def reduce_ops(op_type, operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ rank = len(block.vars[inputs['X'][0]].shape)
+ dim = attrs['dim']
+ axes = [dim if dim >= 0 else rank + dim]
+ reduce_out = [outputs['Out'][0] + '@reduce_0'] if attrs[
+ 'reduce_all'] else outputs
+ reduce_node = make_node(
+ op_type,
+ inputs=inputs['X'],
+ outputs=reduce_out,
+ axes=axes,
+ keepdims=attrs['keep_dim'])
+ if attrs['reduce_all'] is True:
+ axes = range(rank) if attrs['keep_dim'] else range(rank - 1)
+ reduce_all_node = make_node(
+ op_type,
+ inputs=reduce_out,
+ outputs=outputs,
+ axes=axes,
+ keepdims=attrs['keep_dim'])
+ return (reduce_node, reduce_all_node)
+ return reduce_node
def reducel1_op():
@@ -479,35 +462,14 @@ def reducelogsumexp_op():
pass
-def reducemax_op():
- pass
-
-
-def reducemean_op():
- pass
-
-
-def reducemin_op():
- pass
-
-
def reduceprod_op():
pass
-def reducesum_op():
- pass
-
-
def reducesumsquare_op():
pass
-def relu_op(operator, block):
- inputs, _, outputs = op_io_info(operator)
- return make_node('Relu', inputs=inputs['X'], outputs=outputs['Out'])
-
-
def reshape_op():
pass
@@ -520,11 +482,6 @@ def shape_op():
pass
-def sigmoid_op(operator, block):
- inputs, _, outputs = op_io_info(operator)
- return make_node('Sigmoid', inputs=inputs['X'], outputs=outputs['Out'])
-
-
def size_op():
pass
@@ -538,14 +495,6 @@ def softmax_op(operator, block):
return make_node('Softmax', inputs=inputs['X'], outputs=outputs['Out'])
-def softplus_op():
- pass
-
-
-def softsign_op():
- pass
-
-
def spacetodepth_op():
pass
@@ -570,11 +519,6 @@ def sum_op():
pass
-def tanh_op(operator, block):
- inputs, attrs, outputs = op_io_info(operator)
- return make_node('Tanh', inputs=inputs['X'], outputs=outputs['Out'])
-
-
def tile_op():
pass
@@ -604,18 +548,14 @@ def xor_op():
node_maker = {
# Paddle op name : (ONNX op name, modifier)
- 'abs': ('Abs', abs_op),
- 'elementwise_add': elementwise_add_op,
- 'elementwise_mul': elementwise_mul_op,
-
+ 'abs': partial(activation_ops, 'Abs'),
# '': 'And', # ?
# 'ArgMax', NEEDS ATTENTION.
# 'ArgMin', NEEDS ATTENTION.
- '': ('AveragePool', averagepool_op),
'batch_norm': batch_norm_op,
'cast': ('Cast', cast_op),
- # 'Ceil', NEEDS ATTENTION.
- 'cast': ('Clip', clip_op),
+ 'ceil': partial(activation_ops, 'Ceil'),
+ 'clip': ('Clip', clip_op),
'concat': ('Concat', concat_op),
'constant': constant_op,
'conv2d': conv2d_op,
@@ -624,28 +564,30 @@ def xor_op():
'': 'ConvTranspose',
'': 'DepthToSpace',
'depthwise_conv2d': conv2d_op,
- '': 'Div',
'dropout': dropout_op,
+ 'elementwise_add': partial(elementwise_ops, 'Add'),
+ 'elementwise_div': partial(elementwise_ops, 'Div'),
+ 'elementwise_mul': partial(elementwise_ops, 'Mul'),
+ 'elementwise_pow': partial(elementwise_ops, 'Pow'),
+ 'elementwise_sub': partial(elementwise_ops, 'Sub'),
'': 'Elu',
'': 'Equal',
- '': 'Exp',
+ 'exp': partial(activation_ops, 'Exp'),
'': 'Flatten',
- # 'Floor', NEEDS ATTENTION.
+ 'floor': partial(activation_ops, 'Floor'),
'': 'GRU',
'': 'Gather',
'': 'Gemm',
- '': 'GlobalAveragePool',
'': 'GlobalLpPool',
- '': 'GlobalMaxPool',
'': 'Greater',
- '': 'HardSigmoid',
+ 'hard_sigmoid': 'HardSigmoid', # Caffe2 error
# 'Hardmax', NEEDS ATTENTION.
# 'InstanceNormalization', NEEDS ATTENTION.
'': 'LRN',
'': 'LSTM',
'': 'LeakyRelu',
'': 'Less',
- '': 'Log',
+ 'log': partial(activation_ops, 'Log'),
',': 'LogSoftmax',
'': 'LpNormalization',
'': 'LpPool',
@@ -662,40 +604,38 @@ def xor_op():
'': 'PRelu',
'': 'Pad',
'pool2d': pool2d_op,
- '': 'Pow',
',': 'RNN',
'': 'RandomNormal',
# 'RandomNormalLike', NEEDS ATTENTION.
# 'RandomUniform', NEEDS ATTENTION.
# 'RandomUniformLike', NEEDS ATTENTION.
- '': 'Reciprocal',
+ 'reciprocal': partial(activation_ops, 'Reciprocal'),
'': 'ReduceL1',
'': 'ReduceL2',
',': 'ReduceLogSum',
',': 'ReduceLogSumExp',
- '': 'ReduceMax',
- '': 'ReduceMean',
- '': 'ReduceMin',
- # 'ReduceProd', NEEDS ATTENTION.
- '': 'ReduceSum',
+ 'reduce_max': partial(reduce_ops, 'ReduceMax'),
+ 'reduce_mean': partial(reduce_ops, 'ReduceMean'),
+ 'reduce_min': partial(reduce_ops, 'ReduceMin'),
+ '': partial(reduce_ops, 'ReduceProd'), # Caffe2 error
+ 'reduce_sum': partial(reduce_ops, 'ReduceSum'),
',': 'ReduceSumSquare',
- 'relu': relu_op,
+ 'relu': partial(activation_ops, 'Relu'),
'': 'Reshape',
# 'Selu', NEEDS ATTENTION.
'': 'Shape',
- 'sigmoid': sigmoid_op,
+ 'sigmoid': partial(activation_ops, 'Sigmoid'),
'': 'Size',
# 'Slice', NEEDS ATTENTION.
'softmax': softmax_op,
- '': 'Softplus',
- '': 'Softsign',
+ 'softplus': partial(activation_ops, 'Softplus'),
+ 'softsign': partial(activation_ops, 'Softsign'),
'': 'SpaceToDepth',
'': 'Split',
- '': 'Sqrt',
+ 'sqrt': partial(activation_ops, 'Sqrt'),
# 'Squeeze', NEEDS ATTENTION.
- 'elementwise_sub': ('Sub', sub_op),
'': 'Sum',
- 'tanh': tanh_op,
+ 'tanh': partial(activation_ops, 'Tanh'),
'': 'Tile',
'': 'TopK',
'': 'Transpose',
| diff --git a/tests/test_activation_ops.py b/tests/test_activation_ops.py
new file mode 100644
--- /dev/null
+++ b/tests/test_activation_ops.py
@@ -0,0 +1,90 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestAbsOp(OpTest):
+ def setUp(self):
+ X = np.random.random((13, 15)).astype("float32")
+ self.inputs = {'X': X}
+ self.outputs = {'Out': np.zeros((1, 1)).astype("float32")}
+ self.init_op_type()
+
+ def init_op_type(self):
+ self.op_type = 'abs'
+
+ def test_check_output(self):
+ self.check_output()
+
+
+class TestCeilOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'ceil'
+
+
+class TestExpOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'exp'
+
+
+class TestFloorOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'floor'
+
+
+class TestLogOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'log'
+
+
+class TestReciprocalOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'reciprocal'
+
+
+class TestReluOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'relu'
+
+
+class TestSigmoidOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'sigmoid'
+
+
+class TestSoftplusOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'softplus'
+
+
+class TestSoftsignOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'softsign'
+
+
+class TestSqrtOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'sqrt'
+
+
+class TestTanhOp(TestAbsOp):
+ def init_op_type(self):
+ self.op_type = 'tanh'
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_elementwise_mul_op.py b/tests/test_elementwise_mul_op.py
deleted file mode 100644
--- a/tests/test_elementwise_mul_op.py
+++ /dev/null
@@ -1,46 +0,0 @@
-# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-import numpy as np
-from op_test import OpTest
-
-
-class TestElementwiseMulOp(OpTest):
- def setUp(self):
- self.op_type = "elementwise_mul"
- self.inputs = {
- 'X': np.random.uniform(0.1, 1, [11, 13]).astype(np.float32),
- 'Y': np.random.uniform(0.1, 1, [11, 13]).astype(np.float32)
- }
- self.outputs = {'Out': np.zeros((1, 1))}
-
- def test_check_output(self):
- self.check_output()
-
-
-class TestElementwiseMulOp_broadcast(TestElementwiseMulOp):
- def setUp(self):
- self.op_type = "elementwise_mul"
- self.inputs = {
- 'X': np.random.rand(2, 3, 4, 5).astype(np.float64),
- 'Y': np.random.rand(3, 4).astype(np.float64)
- }
-
- self.attrs = {'axis': 1}
- self.outputs = {'Out': np.zeros((1, 1))}
-
-
-if __name__ == '__main__':
- unittest.main()
diff --git a/tests/test_elementwise_add_op.py b/tests/test_elementwise_ops.py
similarity index 60%
rename from tests/test_elementwise_add_op.py
rename to tests/test_elementwise_ops.py
--- a/tests/test_elementwise_add_op.py
+++ b/tests/test_elementwise_ops.py
@@ -19,19 +19,42 @@
class TestElementwiseAddOp(OpTest):
def setUp(self):
- self.op_type = "elementwise_add"
+ self.init_op_type()
self.attrs = {"axis": 1}
self.inputs = {
- 'X': np.random.random((2, 1)).astype(np.float32),
- 'Y': np.random.random((1, )).astype(np.float32)
+ 'X': np.random.random((4, 2)).astype(np.float32),
+ 'Y': np.random.random((2, )).astype(np.float32)
}
self.outputs = {'Out': np.zeros((1, 1))}
+ def init_op_type(self):
+ self.op_type = "elementwise_add"
+
def test_check_output(self):
self.check_output()
+class TestElementwiseSubOp(TestElementwiseAddOp):
+ def init_op_type(self):
+ self.op_type = "elementwise_sub"
+
+
+class TestElementwiseMulOp(TestElementwiseAddOp):
+ def init_op_type(self):
+ self.op_type = "elementwise_mul"
+
+
+class TestElementwiseDivOp(TestElementwiseAddOp):
+ def init_op_type(self):
+ self.op_type = "elementwise_div"
+
+
+class TestElementwisePowOp(TestElementwiseAddOp):
+ def init_op_type(self):
+ self.op_type = "elementwise_pow"
+
+
if __name__ == '__main__':
unittest.main()
diff --git a/tests/test_reduce_ops.py b/tests/test_reduce_ops.py
new file mode 100644
--- /dev/null
+++ b/tests/test_reduce_ops.py
@@ -0,0 +1,74 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestReduceSumOp(OpTest):
+ def setUp(self):
+ self.init_op_type()
+ self.init_keep_dim()
+ self.init_reduce_all()
+ self.inputs = {'X': np.random.random((5, 6, 7, 8)).astype("float32")}
+ self.attrs = {
+ 'dim': 2,
+ 'keep_dim': self.keep_dim,
+ 'reduce_all': self.reduce_all
+ }
+ self.outputs = {'Out': np.zeros((1, 1))}
+
+ def init_op_type(self):
+ self.op_type = "reduce_sum"
+
+ def init_keep_dim(self):
+ self.keep_dim = True
+
+ def init_reduce_all(self):
+ self.reduce_all = False
+
+ def test_check_output(self):
+ self.check_output(decimal=4)
+
+
+class TestReduceMeanOp(TestReduceSumOp):
+ def init_op_type(self):
+ self.op_type = "reduce_mean"
+
+ def init_reduce_all(self):
+ self.reduce_all = True
+
+
+class TestReduceMaxOp(TestReduceSumOp):
+ def init_op_type(self):
+ self.op_type = "reduce_max"
+
+ def init_keep_dim(self):
+ self.keep_dim = False
+
+ def init_reduce_all(self):
+ self.reduce_all = True
+
+
+class TestReduceMinOp(TestReduceSumOp):
+ def init_op_type(self):
+ self.op_type = "reduce_min"
+
+ def init_keep_dim(self):
+ self.keep_dim = False
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_tanh_op.py b/tests/test_tanh_op.py
deleted file mode 100644
--- a/tests/test_tanh_op.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import unittest
-import numpy as np
-from op_test import OpTest
-
-
-class TestTanhOp(OpTest):
- def setUp(self):
- X = np.random.random((3, 5)).astype("float32")
- self.inputs = {'X': X}
- self.outputs = {'Out': np.random.random((1, 1)).astype("float32")}
- self.op_type = 'tanh'
-
- def test_check_output(self):
- self.check_output()
-
-
-if __name__ == '__main__':
- unittest.main()
| Add conversions for some commonly used operators
| 2018-04-27T14:44:15 |
|
PaddlePaddle/Paddle2ONNX | 40 | PaddlePaddle__Paddle2ONNX-40 | [
"41"
] | 98bf52219fdbd6f23649cdf033fcd1c30ffff147 | diff --git a/fluid_onnx/ops.py b/fluid_onnx/ops.py
--- a/fluid_onnx/ops.py
+++ b/fluid_onnx/ops.py
@@ -58,6 +58,11 @@
def activation_ops(act_type, operator, block):
+ """ Convert common activations with type specified by 'act_type', including
+ 'abs', 'ceil', 'exp', 'floor', 'log', 'reciprocal', 'relu', 'sigmoid',
+ 'softplus', 'softsign', 'sqrt' and 'tanh'.
+ """
+
inputs, _, outputs = op_io_info(operator)
return make_node(
act_type, inputs=inputs.values()[0], outputs=outputs.values()[0])
@@ -114,16 +119,32 @@ def batch_norm_op(operator, block):
bn_node)
-def cast_op():
- pass
+def cast_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'Cast',
+ inputs=inputs['X'],
+ outputs=outputs['Out'],
+ to=PADDLE_TO_ONNX_DTYPE[attrs['out_dtype']])
-def clip_op():
- pass
+def clip_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'Clip',
+ inputs=inputs['X'],
+ outputs=outputs['Out'],
+ min=attrs['min'],
+ max=attrs['max'])
-def concat_op():
- pass
+def concat_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+ return make_node(
+ 'Concat',
+ inputs=inputs['X'],
+ outputs=outputs['Out'],
+ axis=attrs['axis'])
def constant_op(var, scope):
@@ -156,8 +177,20 @@ def conv2d_op(operator, block):
return conv2d
-def convtranspose_op():
- pass
+def conv2d_transpose_op(operator, block):
+ inputs, attrs, outputs = op_io_info(operator)
+
+ kernel_shape = block.vars[inputs['Filter'][0]].shape
+ conv2d_transpose = make_node(
+ 'ConvTranspose',
+ inputs=inputs['Input'] + inputs['Filter'],
+ outputs=outputs['Output'],
+ dilations=attrs['dilations'],
+ kernel_shape=kernel_shape[-2:],
+ strides=attrs['strides'],
+ group=1,
+ pads=attrs['paddings'] + attrs['paddings'])
+ return conv2d_transpose
def depthtospace_op():
@@ -184,12 +217,19 @@ def dropout_op(operator, block):
def elementwise_ops(op_type, operator, block):
+ """Convert elementwise operators From to ONNX. Supported elementwise
+ 'op_type' includes 'Add', 'Div', 'Mul', 'Pow' and 'Sub'.
+ """
+
inputs, attrs, outputs = op_io_info(operator)
+ rank_x = len(block.vars[inputs['X'][0]].shape)
+ rank_y = len(block.vars[inputs['Y'][0]].shape)
+ axis = rank_x - rank_y if attrs['axis'] == -1 else attrs['axis']
return make_node(
op_type,
inputs=inputs['X'] + inputs['Y'],
outputs=outputs['Out'],
- axis=attrs['axis'],
+ axis=axis,
broadcast=1)
@@ -260,8 +300,21 @@ def less_op():
pass
-def log_op():
- pass
+def binary_logical_ops(op_type, operator, block):
+ """Convert binary logical operators, i.e. 'And', 'Or' and 'Xor'.
+ """
+
+ inputs, _, outputs = op_io_info(operator)
+ return make_node(
+ op_type, inputs=inputs['X'] + inputs['Y'], outputs=outputs['Out'])
+
+
+def unary_logical_ops(op_type, operator, block):
+ """Convert unary logical operators, i.e. 'Not'.
+ """
+
+ inputs, _, outputs = op_io_info(operator)
+ return make_node(op_type, inputs=inputs['X'], outputs=outputs['Out'])
def logsoftmax_op():
@@ -422,6 +475,11 @@ def randomuniformlike_op():
def reduce_ops(op_type, operator, block):
+ """Convert reduce operators in Fluid to ONNX. 'op_type' specifies the
+ target ONNX operator type, supporting 'Reduce{Max, Mean, Min, Sum}'
+ right now.
+ """
+
inputs, attrs, outputs = op_io_info(operator)
rank = len(block.vars[inputs['X'][0]].shape)
dim = attrs['dim']
@@ -549,19 +607,17 @@ def xor_op():
node_maker = {
# Paddle op name : (ONNX op name, modifier)
'abs': partial(activation_ops, 'Abs'),
- # '': 'And', # ?
# 'ArgMax', NEEDS ATTENTION.
# 'ArgMin', NEEDS ATTENTION.
'batch_norm': batch_norm_op,
- 'cast': ('Cast', cast_op),
+ 'cast': cast_op,
'ceil': partial(activation_ops, 'Ceil'),
- 'clip': ('Clip', clip_op),
- 'concat': ('Concat', concat_op),
+ 'clip': clip_op,
+ 'concat': concat_op,
'constant': constant_op,
'conv2d': conv2d_op,
-
# Need to continue the mapping below.
- '': 'ConvTranspose',
+ 'conv2d_transpose': conv2d_transpose_op,
'': 'DepthToSpace',
'depthwise_conv2d': conv2d_op,
'dropout': dropout_op,
@@ -588,6 +644,10 @@ def xor_op():
'': 'LeakyRelu',
'': 'Less',
'log': partial(activation_ops, 'Log'),
+ 'logical_and': partial(binary_logical_ops, 'And'),
+ 'logical_or': partial(binary_logical_ops, 'Or'),
+ 'logical_not': partial(unary_logical_ops, 'Not'),
+ 'logical_xor': partial(binary_logical_ops, 'Xor'),
',': 'LogSoftmax',
'': 'LpNormalization',
'': 'LpPool',
@@ -599,8 +659,6 @@ def xor_op():
'': 'Min',
'mul': mul_op,
',': 'Neg',
- '': 'Not',
- '': 'Or',
'': 'PRelu',
'': 'Pad',
'pool2d': pool2d_op,
@@ -640,7 +698,6 @@ def xor_op():
'': 'TopK',
'': 'Transpose',
# 'Unsqueeze', NEEDS ATTENTION.
- '': 'Xor',
# 'experimental ATen'
# ',': 'experimental Affine'
# 'experimental ConstantFill'
diff --git a/fluid_onnx/variables.py b/fluid_onnx/variables.py
--- a/fluid_onnx/variables.py
+++ b/fluid_onnx/variables.py
@@ -36,7 +36,7 @@ def paddle_onnx_shape(paddle_shape):
PADDLE_TO_ONNX_DTYPE = {
core.VarDesc.VarType.FP32: onnx_pb2.TensorProto.FLOAT,
- core.VarDesc.VarType.FP64: onnx_pb2.TensorProto.FLOAT16,
+ core.VarDesc.VarType.FP64: onnx_pb2.TensorProto.DOUBLE,
# '': onnx_pb2.TensorProto.DOUBLE,
core.VarDesc.VarType.INT32: onnx_pb2.TensorProto.INT32,
core.VarDesc.VarType.INT16: onnx_pb2.TensorProto.INT16,
| diff --git a/tests/op_test.py b/tests/op_test.py
--- a/tests/op_test.py
+++ b/tests/op_test.py
@@ -77,7 +77,7 @@ def create_var(block, name, np_list, var_proto):
persistable = True if name in persistable_list else False
return block.create_var(
- dtype="float32",
+ dtype='float32',
shape=shape,
persistable=persistable,
lod_level=lod_level,
@@ -263,9 +263,17 @@ def eval_onnx_node(self):
onnx_graph = make_graph(node_list, self.op_type, inputs, outputs)
onnx_model = make_model(onnx_graph, producer_name='unittest')
+ # Expand input dictionary if there are tensor arrays
+ input_map = {}
+ for v in self.inputs:
+ if isinstance(self.inputs[v], list):
+ input_map.update(self.inputs[v])
+ else:
+ input_map[v] = self.inputs[v]
+
# Run the Caffe2Backend with the ONNX model.
rep = Caffe2Backend.prepare(onnx_model, device='CPU')
- in_vals = [self.inputs[input.name] for input in inputs]
+ in_vals = [input_map[input.name] for input in inputs]
outs = rep.run(in_vals)
return outs
diff --git a/tests/test_activation_ops.py b/tests/test_activation_ops.py
--- a/tests/test_activation_ops.py
+++ b/tests/test_activation_ops.py
@@ -19,9 +19,9 @@
class TestAbsOp(OpTest):
def setUp(self):
- X = np.random.random((13, 15)).astype("float32")
+ X = np.random.random((13, 15)).astype('float32')
self.inputs = {'X': X}
- self.outputs = {'Out': np.zeros((1, 1)).astype("float32")}
+ self.outputs = {'Out': np.zeros((1, 1)).astype('float32')}
self.init_op_type()
def init_op_type(self):
diff --git a/tests/test_cast_op.py b/tests/test_cast_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_cast_op.py
@@ -0,0 +1,37 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+import paddle.fluid.core as core
+from op_test import OpTest
+
+
+class TestCastOp(OpTest):
+ def setUp(self):
+ input = np.random.random((10, 10))
+ self.inputs = {'X': input.astype('float32')}
+ self.outputs = {'Out': input.astype('float64')}
+ self.attrs = {
+ 'in_dtype': int(core.VarDesc.VarType.FP32),
+ 'out_dtype': int(core.VarDesc.VarType.FP64)
+ }
+ self.op_type = 'cast'
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_clip_op.py b/tests/test_clip_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_clip_op.py
@@ -0,0 +1,33 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestClipOp(OpTest):
+ def setUp(self):
+ input = np.random.random((4, 5, 6)).astype('float32')
+ self.op_type = 'clip'
+ self.inputs = {'X': input}
+ self.attrs = {'min': 0.2, 'max': 0.8}
+ self.outputs = {'Out': np.zeros((1, 1)).astype('float32')}
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_concat_op.py b/tests/test_concat_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_concat_op.py
@@ -0,0 +1,47 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestConcatOp(OpTest):
+ def setUp(self):
+ self.op_type = 'concat'
+ self.init_test_data()
+ self.inputs = {'X': [('x0', self.x0), ('x1', self.x1), ('x2', self.x2)]}
+ self.attrs = {'axis': self.axis}
+ self.outputs = {'Out': np.zeros((1, 1)).astype('float32')}
+
+ def test_check_output(self):
+ self.check_output()
+
+ def init_test_data(self):
+ self.x0 = np.random.random((2, 1, 4, 5)).astype('float32')
+ self.x1 = np.random.random((2, 2, 4, 5)).astype('float32')
+ self.x2 = np.random.random((2, 3, 4, 5)).astype('float32')
+ self.axis = 1
+
+
+class TestConcatOp2(OpTest):
+ def init_test_data(self):
+ self.x0 = np.random.random((2, 3, 4, 5)).astype('float32')
+ self.x1 = np.random.random((2, 3, 4, 5)).astype('float32')
+ self.x2 = np.random.random((2, 3, 4, 5)).astype('float32')
+ self.axis = 2
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_conv2d_op.py b/tests/test_conv2d_op.py
--- a/tests/test_conv2d_op.py
+++ b/tests/test_conv2d_op.py
@@ -57,7 +57,7 @@ def setUp(self):
self.outputs = {'Output': output}
def init_conv_type(self):
- self.op_type = "conv2d"
+ self.op_type = 'conv2d'
def test_check_output(self):
self.check_output(decimal=5)
@@ -65,7 +65,7 @@ def test_check_output(self):
class TestDepthwiseConv2dOp(TestConv2dOp):
def init_conv_type(self):
- self.op_type = "depthwise_conv2d"
+ self.op_type = 'depthwise_conv2d'
if __name__ == '__main__':
diff --git a/tests/test_conv2d_transpose_op.py b/tests/test_conv2d_transpose_op.py
new file mode 100644
--- /dev/null
+++ b/tests/test_conv2d_transpose_op.py
@@ -0,0 +1,56 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import unittest
+import numpy as np
+from op_test import OpTest
+
+
+class TestConv2dTransposeOp(OpTest):
+ def setUp(self):
+ self.use_cudnn = False
+ self.init_op_type()
+ self.init_test_case()
+
+ input_ = np.random.random(self.input_size).astype('float32')
+ filter_ = np.random.random(self.filter_size).astype('float32')
+
+ self.inputs = {'Input': input_, 'Filter': filter_}
+ self.attrs = {
+ 'strides': self.stride,
+ 'paddings': self.pad,
+ 'dilations': self.dilations,
+ 'use_cudnn': self.use_cudnn,
+ 'data_format': 'AnyLayout'
+ }
+
+ self.outputs = {'Output': np.zeros((1, 1))}
+
+ def init_test_case(self):
+ self.pad = [0, 0]
+ self.stride = [1, 1]
+ self.dilations = [1, 1]
+ self.input_size = [2, 3, 5, 5] # NCHW
+ f_c = self.input_size[1]
+ self.filter_size = [f_c, 6, 3, 3]
+
+ def init_op_type(self):
+ self.op_type = 'conv2d_transpose'
+
+ def test_check_output(self):
+ self.check_output()
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_dropout_op.py b/tests/test_dropout_op.py
--- a/tests/test_dropout_op.py
+++ b/tests/test_dropout_op.py
@@ -19,8 +19,8 @@
class TestDropoutOp(OpTest):
def setUp(self):
- self.op_type = "dropout"
- self.inputs = {'X': np.random.random((32, 64, 2)).astype("float32")}
+ self.op_type = 'dropout'
+ self.inputs = {'X': np.random.random((32, 64, 2)).astype('float32')}
self.attrs = {'dropout_prob': 0.8, 'is_test': True}
self.outputs = {'Out': np.zeros((1, 1))}
diff --git a/tests/test_elementwise_ops.py b/tests/test_elementwise_ops.py
--- a/tests/test_elementwise_ops.py
+++ b/tests/test_elementwise_ops.py
@@ -19,41 +19,47 @@
class TestElementwiseAddOp(OpTest):
def setUp(self):
- self.init_op_type()
- self.attrs = {"axis": 1}
+ self.attrs = {'axis': 2}
+ self.init()
self.inputs = {
- 'X': np.random.random((4, 2)).astype(np.float32),
- 'Y': np.random.random((2, )).astype(np.float32)
+ 'X': np.random.random((2, 3, 4, 5)).astype(np.float32),
+ 'Y': np.random.random((4, 5)).astype(np.float32)
}
self.outputs = {'Out': np.zeros((1, 1))}
- def init_op_type(self):
- self.op_type = "elementwise_add"
+ def init(self):
+ self.op_type = 'elementwise_add'
def test_check_output(self):
self.check_output()
+class TestElementwiseAddOpNegAxis(OpTest):
+ def init(self):
+ self.op_type = 'elementwise_add'
+ self.attrs = {'axis': -1}
+
+
class TestElementwiseSubOp(TestElementwiseAddOp):
- def init_op_type(self):
- self.op_type = "elementwise_sub"
+ def init(self):
+ self.op_type = 'elementwise_sub'
class TestElementwiseMulOp(TestElementwiseAddOp):
- def init_op_type(self):
- self.op_type = "elementwise_mul"
+ def init(self):
+ self.op_type = 'elementwise_mul'
class TestElementwiseDivOp(TestElementwiseAddOp):
- def init_op_type(self):
- self.op_type = "elementwise_div"
+ def init(self):
+ self.op_type = 'elementwise_div'
class TestElementwisePowOp(TestElementwiseAddOp):
- def init_op_type(self):
- self.op_type = "elementwise_pow"
+ def init(self):
+ self.op_type = 'elementwise_pow'
if __name__ == '__main__':
diff --git a/tests/test_logical_ops.py b/tests/test_logical_ops.py
new file mode 100644
--- /dev/null
+++ b/tests/test_logical_ops.py
@@ -0,0 +1,49 @@
+# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import op_test
+import unittest
+import numpy as np
+
+
+def create_test_class(op_type, callback, binary_op=True):
+ class Cls(op_test.OpTest):
+ def setUp(self):
+ a = np.random.choice(a=[True, False], size=(10, 7)).astype(bool)
+ if binary_op:
+ b = np.random.choice(a=[True, False], size=(10, 7)).astype(bool)
+ c = callback(a, b)
+ else:
+ c = callback(a)
+ self.outputs = {'Out': c}
+ self.op_type = op_type
+ if binary_op:
+ self.inputs = {'X': a, 'Y': b}
+ else:
+ self.inputs = {'X': a}
+
+ def test_output(self):
+ self.check_output()
+
+ Cls.__name__ = op_type
+ globals()[op_type] = Cls
+
+
+create_test_class('logical_and', lambda _a, _b: np.logical_and(_a, _b))
+create_test_class('logical_or', lambda _a, _b: np.logical_or(_a, _b))
+create_test_class('logical_not', lambda _a: np.logical_not(_a), False)
+create_test_class('logical_xor', lambda _a, _b: np.logical_xor(_a, _b))
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/test_mul_op.py b/tests/test_mul_op.py
--- a/tests/test_mul_op.py
+++ b/tests/test_mul_op.py
@@ -19,10 +19,10 @@
class TestMulOp(OpTest):
def setUp(self):
- self.op_type = "mul"
+ self.op_type = 'mul'
self.inputs = {
- 'X': np.random.random((15, 4, 12, 10)).astype("float32"),
- 'Y': np.random.random((4, 30, 8, 2, 9)).astype("float32")
+ 'X': np.random.random((15, 4, 12, 10)).astype('float32'),
+ 'Y': np.random.random((4, 30, 8, 2, 9)).astype('float32')
}
self.attrs = {'x_num_col_dims': 2, 'y_num_col_dims': 2}
result = np.dot(self.inputs['X'].reshape(15 * 4, 12 * 10),
diff --git a/tests/test_pool2d_op.py b/tests/test_pool2d_op.py
--- a/tests/test_pool2d_op.py
+++ b/tests/test_pool2d_op.py
@@ -19,7 +19,7 @@
class TestPool2dOp(OpTest):
def setUp(self):
- self.op_type = "pool2d"
+ self.op_type = 'pool2d'
self.use_cudnn = False
self.use_mkldnn = False
self.dtype = np.float32
@@ -79,7 +79,7 @@ def init_test_case(self):
self.paddings = [0, 0]
def init_pool_type(self):
- self.pool_type = "avg"
+ self.pool_type = 'avg'
def init_global_pool(self):
self.global_pool = False
@@ -93,7 +93,7 @@ def init_test_case(self):
self.paddings = [1, 1]
def init_pool_type(self):
- self.pool_type = "avg"
+ self.pool_type = 'avg'
def init_global_pool(self):
self.global_pool = False
@@ -101,17 +101,17 @@ def init_global_pool(self):
class TestPool2dOp3(TestPool2dOp):
def init_pool_type(self):
- self.pool_type = "max"
+ self.pool_type = 'max'
class TestPool2dOp4(TestPool2dOp1):
def init_pool_type(self):
- self.pool_type = "max"
+ self.pool_type = 'max'
class TestPool2dOp5(TestPool2dOp2):
def init_pool_type(self):
- self.pool_type = "max"
+ self.pool_type = 'max'
if __name__ == '__main__':
diff --git a/tests/test_reduce_ops.py b/tests/test_reduce_ops.py
--- a/tests/test_reduce_ops.py
+++ b/tests/test_reduce_ops.py
@@ -22,7 +22,7 @@ def setUp(self):
self.init_op_type()
self.init_keep_dim()
self.init_reduce_all()
- self.inputs = {'X': np.random.random((5, 6, 7, 8)).astype("float32")}
+ self.inputs = {'X': np.random.random((5, 6, 7, 8)).astype('float32')}
self.attrs = {
'dim': 2,
'keep_dim': self.keep_dim,
@@ -31,7 +31,7 @@ def setUp(self):
self.outputs = {'Out': np.zeros((1, 1))}
def init_op_type(self):
- self.op_type = "reduce_sum"
+ self.op_type = 'reduce_sum'
def init_keep_dim(self):
self.keep_dim = True
@@ -45,7 +45,7 @@ def test_check_output(self):
class TestReduceMeanOp(TestReduceSumOp):
def init_op_type(self):
- self.op_type = "reduce_mean"
+ self.op_type = 'reduce_mean'
def init_reduce_all(self):
self.reduce_all = True
@@ -53,7 +53,7 @@ def init_reduce_all(self):
class TestReduceMaxOp(TestReduceSumOp):
def init_op_type(self):
- self.op_type = "reduce_max"
+ self.op_type = 'reduce_max'
def init_keep_dim(self):
self.keep_dim = False
@@ -64,7 +64,7 @@ def init_reduce_all(self):
class TestReduceMinOp(TestReduceSumOp):
def init_op_type(self):
- self.op_type = "reduce_min"
+ self.op_type = 'reduce_min'
def init_keep_dim(self):
self.keep_dim = False
| Refine quotes' format and comments & add new operators
| 2018-05-02T08:23:52 |
|
PaddlePaddle/Paddle2ONNX | 45 | PaddlePaddle__Paddle2ONNX-45 | [
"44"
] | 497bd0bba0f74bc8e20f435c46e923cc02e3026e | diff --git a/fluid/utils.py b/fluid/utils.py
--- a/fluid/utils.py
+++ b/fluid/utils.py
@@ -75,3 +75,14 @@ def __call__(self, op):
# Instantiate the class to a callable object
op_io_info = OpIOsInfo()
+
+
+def get_old_name(arg):
+ """Get the old rame for a possible renamed argument
+ """
+
+ idx = arg.find('@')
+ if idx == -1:
+ return arg
+ else:
+ return arg[:idx]
diff --git a/fluid_onnx/ops.py b/fluid_onnx/ops.py
--- a/fluid_onnx/ops.py
+++ b/fluid_onnx/ops.py
@@ -17,7 +17,7 @@
from onnx import TensorProto
from onnx.helper import make_node, make_tensor
from paddle.fluid.executor import fetch_var
-from fluid.utils import op_io_info
+from fluid.utils import op_io_info, get_old_name
from fluid_onnx.variables import PADDLE_TO_ONNX_DTYPE
"""
Priority of ops (uniques) to figure out support for.
@@ -86,7 +86,7 @@ def argmin_op():
def batch_norm_op(operator, block):
inputs, attrs, outputs = op_io_info(operator)
- x_shape = block.vars[inputs['X'][0]].shape
+ x_shape = block.vars[get_old_name(inputs['X'][0])].shape
reshape_node = None
if len(x_shape) == 2:
reshaped_x = [inputs['X'][0] + '@reshape_0']
@@ -164,7 +164,7 @@ def constant_op(var, scope):
def conv2d_op(operator, block):
inputs, attrs, outputs = op_io_info(operator)
- kernel_shape = block.vars[inputs['Filter'][0]].shape
+ kernel_shape = block.vars[get_old_name(inputs['Filter'][0])].shape
conv2d = make_node(
'Conv',
inputs=inputs['Input'] + inputs['Filter'],
@@ -180,7 +180,7 @@ def conv2d_op(operator, block):
def conv2d_transpose_op(operator, block):
inputs, attrs, outputs = op_io_info(operator)
- kernel_shape = block.vars[inputs['Filter'][0]].shape
+ kernel_shape = block.vars[get_old_name(inputs['Filter'][0])].shape
conv2d_transpose = make_node(
'ConvTranspose',
inputs=inputs['Input'] + inputs['Filter'],
@@ -222,8 +222,8 @@ def elementwise_ops(op_type, operator, block):
"""
inputs, attrs, outputs = op_io_info(operator)
- rank_x = len(block.vars[inputs['X'][0]].shape)
- rank_y = len(block.vars[inputs['Y'][0]].shape)
+ rank_x = len(block.vars[get_old_name(inputs['X'][0])].shape)
+ rank_y = len(block.vars[get_old_name(inputs['Y'][0])].shape)
axis = rank_x - rank_y if attrs['axis'] == -1 else attrs['axis']
return make_node(
op_type,
@@ -481,7 +481,7 @@ def reduce_ops(op_type, operator, block):
"""
inputs, attrs, outputs = op_io_info(operator)
- rank = len(block.vars[inputs['X'][0]].shape)
+ rank = len(block.vars[get_old_name(inputs['X'][0])].shape)
dim = attrs['dim']
axes = [dim if dim >= 0 else rank + dim]
reduce_out = [outputs['Out'][0] + '@reduce_0'] if attrs[
| Fix the fetch var bug when the arg is renamed
| 2018-05-04T09:36:00 |
||
DistrictDataLabs/yellowbrick | 35 | DistrictDataLabs__yellowbrick-35 | [
"33",
"33"
] | fc91da0c1a0c91bb380d7d68db9eba7ed13c618e | diff --git a/docs/conf.py b/docs/conf.py
new file mode 100644
--- /dev/null
+++ b/docs/conf.py
@@ -0,0 +1,348 @@
+#!/usr/bin/env python3
+# -*- coding: utf-8 -*-
+#
+# yellowbrick documentation build configuration file, created by
+# sphinx-quickstart on Tue Jul 5 19:45:43 2016.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#
+import os
+import sys
+sys.path.insert(0, os.path.abspath('..'))
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#
+# needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = [
+ 'sphinx.ext.autodoc',
+ 'sphinx.ext.intersphinx',
+ 'sphinx.ext.coverage',
+ 'sphinx.ext.mathjax',
+ 'sphinx.ext.viewcode',
+]
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix(es) of source filenames.
+# You can specify multiple suffix as a list of string:
+#
+# source_suffix = ['.rst', '.md']
+source_suffix = '.rst'
+
+# The encoding of source files.
+#
+# source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = 'yellowbrick'
+copyright = '2016, District Data Labs'
+author = 'District Data Labs'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = '0.1'
+# The full version, including alpha/beta/rc tags.
+release = '0.1'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#
+# This is also used if you do content translation via gettext catalogs.
+# Usually you set "language" from the command line for these cases.
+language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#
+# today = ''
+#
+# Else, today_fmt is used as the format for a strftime call.
+#
+# today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+# This patterns also effect to html_static_path and html_extra_path
+exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+#
+# default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#
+# add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#
+# add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#
+# show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+# modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+# keep_warnings = False
+
+# If true, `todo` and `todoList` produce output, else they produce nothing.
+todo_include_todos = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages. See the documentation for
+# a list of builtin themes.
+#
+html_theme = 'sphinx_rtd_theme'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further. For a list of options available for each theme, see the
+# documentation.
+#
+# html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+# html_theme_path = []
+
+# The name for this set of Sphinx documents.
+# "<project> v<release> documentation" by default.
+#
+# html_title = 'yellowbrick v0.1'
+
+# A shorter title for the navigation bar. Default is the same as html_title.
+#
+# html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#
+# html_logo = None
+
+# The name of an image file (relative to this directory) to use as a favicon of
+# the docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#
+# html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+#
+# html_extra_path = []
+
+# If not None, a 'Last updated on:' timestamp is inserted at every page
+# bottom, using the given strftime format.
+# The empty string is equivalent to '%b %d, %Y'.
+#
+# html_last_updated_fmt = None
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#
+# html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#
+# html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#
+# html_additional_pages = {}
+
+# If false, no module index is generated.
+#
+# html_domain_indices = True
+
+# If false, no index is generated.
+#
+# html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#
+# html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#
+# html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#
+# html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#
+# html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it. The value of this option must be the
+# base URL from which the finished HTML is served.
+#
+# html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+# html_file_suffix = None
+
+# Language to be used for generating the HTML full-text search index.
+# Sphinx supports the following languages:
+# 'da', 'de', 'en', 'es', 'fi', 'fr', 'h', 'it', 'ja'
+# 'nl', 'no', 'pt', 'ro', 'r', 'sv', 'tr', 'zh'
+#
+# html_search_language = 'en'
+
+# A dictionary with options for the search language support, empty by default.
+# 'ja' uses this config value.
+# 'zh' user can custom change `jieba` dictionary path.
+#
+# html_search_options = {'type': 'default'}
+
+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+#
+# html_search_scorer = 'scorer.js'
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'yellowbrickdoc'
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+ # The paper size ('letterpaper' or 'a4paper').
+ #
+ # 'papersize': 'letterpaper',
+
+ # The font size ('10pt', '11pt' or '12pt').
+ #
+ # 'pointsize': '10pt',
+
+ # Additional stuff for the LaTeX preamble.
+ #
+ # 'preamble': '',
+
+ # Latex figure (float) alignment
+ #
+ # 'figure_align': 'htbp',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+# author, documentclass [howto, manual, or own class]).
+latex_documents = [
+ (master_doc, 'yellowbrick.tex', 'yellowbrick Documentation',
+ 'District Data Labs', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#
+# latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#
+# latex_use_parts = False
+
+# If true, show page references after internal links.
+#
+# latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#
+# latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#
+# latex_appendices = []
+
+# If false, no module index is generated.
+#
+# latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+ (master_doc, 'yellowbrick', 'yellowbrick Documentation',
+ [author], 1)
+]
+
+# If true, show URL addresses after external links.
+#
+# man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+# dir menu entry, description, category)
+texinfo_documents = [
+ (master_doc, 'yellowbrick', 'yellowbrick Documentation',
+ author, 'yellowbrick', 'One line description of project.',
+ 'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#
+# texinfo_appendices = []
+
+# If false, no module index is generated.
+#
+# texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#
+# texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+#
+# texinfo_no_detailmenu = False
+
+
+# Locations of objects.inv files for intersphinx extension that auto links to external api docs.
+intersphinx_mapping = {'python': ('https://docs.python.org/3', None),
+ 'matplotlib': ('http://matplotlib.org/', None),
+ 'scipy': ('http://scipy.github.io/devdocs/', None),
+ 'numpy': ('https://docs.scipy.org/doc/numpy-dev/', None),
+ 'cycler': ('http://matplotlib.org/cycler/', None),
+ 'seaborn': ('https://web.stanford.edu/~mwaskom/software/seaborn/', None)}
diff --git a/yellowbrick/yb_palettes.py b/yellowbrick/yb_palettes.py
--- a/yellowbrick/yb_palettes.py
+++ b/yellowbrick/yb_palettes.py
@@ -86,40 +86,58 @@ def as_hex(self):
def color_palette(palette=None, n_colors=None, desat=None):
"""Return a list of colors defining a color palette.
- Availible seaborn palette names:
- accent, dark, paired, pastel, bold, muted
- Availible seaborn palette names:
- sns_deep, sns_muted, sns_bright, sns_pastel, sns_dark, sns_colorblind
- Other options:
- list of colors
Calling this function with ``palette=None`` will return the current
matplotlib color cycle.
This function can also be used in a ``with`` statement to temporarily
set the color cycle for a plot or set of plots.
- Parameters
- ----------
- palette: None, string, or sequence, optional
+
+ :param palette:
Name of palette or None to return current palette. If a sequence, input
colors are used but possibly cycled and desaturated.
- n_colors : int, optional
+
+ Available seaborn palette names from :py:mod:`seaborn.palettes` are:
+
+ .. hlist::
+ :columns: 3
+
+ * :py:const:`deep`
+ * :py:const:`dark`
+ * :py:const:`paired`
+ * :py:const:`pastel`
+ * :py:const:`bold`
+ * :py:const:`muted`
+ * :py:const:`sns_deep`
+ * :py:const:`sns_muted`
+ * :py:const:`sns_bright`
+ * :py:const:`sns_pastel`
+ * :py:const:`sns_dark`
+ * :py:const:`sns_colorblind`
+
+ :type palette: None or str or sequence
+ :param n_colors:
Number of colors in the palette. If ``None``, the default will depend
on how ``palette`` is specified. Named palettes default to 6 colors
(except paired, which has 10),
but grabbing the current palette or passing in a list of colors will
not change the number of colors unless this is specified. Asking for
more colors than exist in the palette will cause it to cycle.
+ :type n_colors: int or None
+ :param desat:
+ :type desat:
- Returns
- -------
- palette : list of RGB tuples.
+ :rtype: list(tuple)
+ :return: list of RGB tuples.
Color palette. Behaves like a list, but can be used as a context
- manager and possesses an ``as_hex`` method to convert to hex color
+ manager and possesses an :py:meth:`as_hex` method to convert to hex color
codes.
- See Also
- --------
- set_palette : Set the default color cycle for all plots.
- set_color_codes : Reassign color codes like ``"b"``, ``"g"``, etc. to
- colors from one of the yellowbrick palettes.
+
+ .. seealso::
+
+ :func:`.set_palette`
+ Set the default color cycle for all plots.
+ :func:`.set_color_codes`
+ Reassign color codes like ``"b"``, ``"g"``, etc. to
+ colors from one of the yellowbrick palettes.
"""
if palette is None:
palette = get_color_cycle()
| Add Sphinx autodoc and intersphinx
Add sphinx autodoc and intersphinx to pull docstrings into documentation.
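For reference, a minimal sketch of the ``conf.py`` pieces involved (these are standard Sphinx extensions; the exact extension list and mapping the project ends up with are whatever the patch in this PR adds):
```python
# conf.py -- minimal sketch, assuming a standard Sphinx project layout.
extensions = [
    'sphinx.ext.autodoc',      # pull docstrings from yellowbrick modules into the docs
    'sphinx.ext.intersphinx',  # resolve cross-references to external projects
]

# Point intersphinx at each external project's objects.inv so references
# like :class:`numpy.ndarray` link out to the upstream documentation.
intersphinx_mapping = {
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://docs.scipy.org/doc/numpy-dev/', None),
    'matplotlib': ('http://matplotlib.org/', None),
}
```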
| Planning on moving all the documentation over to reStructuredText and Sphinx?
Yes
| 2016-07-06T02:18:33 |
|
DistrictDataLabs/yellowbrick | 246 | DistrictDataLabs__yellowbrick-246 | [
"243"
] | 4148cbc5e44317e602c858386c9d81c72fd91440 | diff --git a/yellowbrick/features/jointplot.py b/yellowbrick/features/jointplot.py
--- a/yellowbrick/features/jointplot.py
+++ b/yellowbrick/features/jointplot.py
@@ -129,13 +129,14 @@ def __init__(self, ax=None, feature=None, target=None,
xy_plot='hist', xy_args=None,
size=6, ratio=5, space=.2, **kwargs):
- #check matplotlib version - needs to be version 2.0.0
- if mpl.__version__ == "2.0.0":
+ # Check matplotlib version - needs to be version 2.0.0 or greater.
+ mpl_vers_maj = int(mpl.__version__.split(".")[0])
+ if mpl_vers_maj >= 2:
pass
else:
warnings.warn((
- "{} requires Matplotlib version 2.0.0."
- "Please upgrade to continue."
+ "{} requires matplotlib major version 2 or greater. "
+ "Please upgrade."
).format(self.__class__.__name__))
super(JointPlotVisualizer, self).__init__(ax, **kwargs)
| diff --git a/tests/test_features/test_jointplot.py b/tests/test_features/test_jointplot.py
--- a/tests/test_features/test_jointplot.py
+++ b/tests/test_features/test_jointplot.py
@@ -68,12 +68,13 @@ def test_warning(self):
self.assertEqual(len(w), 1)
self.assertEqual(
str(w[-1].message),
- "JointPlotVisualizer requires Matplotlib version 2.0.0.Please upgrade to continue."
+ "JointPlotVisualizer requires matplotlib major version 2 "
+ "or greater. Please upgrade."
)
@unittest.skipIf(MPL_VERS_MAJ < 2, "requires matplotlib 2.0.0 or greater")
- def test_jointplot(self):
+ def test_jointplot_has_no_errors(self):
"""
Assert no errors occur during jointplot visualizer integration
"""
@@ -84,7 +85,7 @@ def test_jointplot(self):
@unittest.skipIf(MPL_VERS_MAJ < 2, "requires matplotlib 2.0.0 or greater")
- def test_jointplot_integrated(self):
+ def test_jointplot_integrated_has_no_errors(self):
"""
Test jointplot on the concrete data set
"""
@@ -99,3 +100,27 @@ def test_jointplot_integrated(self):
visualizer = JointPlotVisualizer(feature=feature, target=target, joint_plot="hex")
visualizer.fit(X, y) # Fit the data to the visualizer
g = visualizer.poof()
+
+
+ @unittest.skipIf(MPL_VERS_MAJ < 2, "requires matplotlib 2.0.0 or greater")
+ def test_jointplot_no_matplotlib2_warning(self):
+ """
+ Assert no UserWarning occurs if matplotlib major version >= 2
+ (and not exactly 2.0.0).
+ """
+ with warnings.catch_warnings(record=True) as ws:
+ # Filter on UserWarnings
+ warnings.filterwarnings("always", category=UserWarning)
+ visualizer = JointPlotVisualizer()
+ visualizer.fit(self.X, self.y)
+ visualizer.poof()
+
+ # Filter out user warnings not related to matplotlib version
+ ver_warn_msg = "requires matplotlib major version 2 or greater"
+ mpl_ver_cnt = 0
+ for w in ws:
+ if w and w.message and ver_warn_msg in str(w.message):
+ mpl_ver_cnt += 1
+ self.assertEqual(0, mpl_ver_cnt, ws[-1].message \
+ if ws else "No error")
+
| JointPlotVisualizer insists on Matplotlib "2.0.0" exactly.
Warns if on mpl.__version__ == "2.0.2". That's overly picky, yes?
Line 133 in features/jointplot.py.
| 2017-05-24T00:01:46 |
|
DistrictDataLabs/yellowbrick | 333 | DistrictDataLabs__yellowbrick-333 | [
"323"
] | 5879ae0b94a3e81bb27649efff13f6f8e211004c | diff --git a/docs/api/text/tsne.py b/docs/api/text/tsne.py
--- a/docs/api/text/tsne.py
+++ b/docs/api/text/tsne.py
@@ -1,3 +1,13 @@
+# ID: tsne.py [] [email protected] $
+
+"""
+Generate figures for TSNE documentation.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
import matplotlib.pyplot as plt
from corpus import load_corpus
@@ -7,17 +17,25 @@
from sklearn.feature_extraction.text import TfidfVectorizer
-def tsne(docs, labels, outpath, **kwargs):
+##########################################################################
+## Generate
+##########################################################################
+
+def tsne(docs, target, outpath, **kwargs):
# Create a new figure and axes
fig = plt.figure()
ax = fig.add_subplot(111)
# Visualize the frequency distribution
visualizer = TSNEVisualizer(ax=ax, **kwargs)
- visualizer.fit(docs, labels)
+ visualizer.fit(docs, target)
visualizer.poof(outpath=outpath)
+##########################################################################
+## Main Method
+##########################################################################
+
if __name__ == '__main__':
# Load and vectorize the corpus
@@ -25,17 +43,13 @@ def tsne(docs, labels, outpath, **kwargs):
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus.data)
- labels = corpus.target
+ target = corpus.target
# Whole corpus visualization
- tsne(docs, labels, "images/tsne_all_docs.png")
-
- # Partial corpus visualization
- # Only visualize the sports, cinema, and gaming classes
- tsne(docs, labels, "images/tsne_limit_classes.png", classes=['sports', 'cinema', 'gaming'])
+ tsne(docs, target, "images/tsne_all_docs.png")
# No labels
- tsne(docs, None, "images/tsne_no_labels.png")
+ tsne(docs, None, "images/tsne_no_labels.png", labels=["documents"])
# Apply clustering instead of class names.
clusters = KMeans(n_clusters=5)
diff --git a/yellowbrick/text/tsne.py b/yellowbrick/text/tsne.py
--- a/yellowbrick/text/tsne.py
+++ b/yellowbrick/text/tsne.py
@@ -17,10 +17,12 @@
## Imports
##########################################################################
+import numpy as np
+
from collections import defaultdict
from yellowbrick.text.base import TextVisualizer
-from yellowbrick.style.colors import get_color_cycle
+from yellowbrick.style.colors import resolve_colors
from yellowbrick.exceptions import YellowbrickValueError
from sklearn.manifold import TSNE
@@ -130,17 +132,18 @@ class TSNEVisualizer(TextVisualizer):
ax : matplotlib axes
The axes to plot the figure on.
- decompose : string or None
+ decompose : string or None, default: ``'svd'``
A preliminary decomposition is often used prior to TSNE to make the
- projection faster. Specify `"svd"` for sparse data or `"pca"` for
- dense data. If decompose is None, the original data set will be used.
+ projection faster. Specify ``"svd"`` for sparse data or ``"pca"`` for
+ dense data. If None, the original data set will be used.
- decompose_by : int
+ decompose_by : int, default: 50
Specify the number of components for preliminary decomposition, by
default this is 50; the more components, the slower TSNE will be.
- classes : list of strings
+ labels : list of strings
The names of the classes in the target, used to create a legend.
+ Labels must match names of classes in sorted order.
colors : list or tuple of colors
Specify the colors for each individual class
@@ -148,25 +151,32 @@ class TSNEVisualizer(TextVisualizer):
colormap : string or matplotlib cmap
Sequential colormap for continuous target
+ random_state : int, RandomState instance or None, optional, default: None
+ If int, random_state is the seed used by the random number generator;
+ If RandomState instance, random_state is the random number generator;
+ If None, the random number generator is the RandomState instance used
+ by np.random. The random state is applied to the preliminary
+ decomposition as well as tSNE.
+
kwargs : dict
Pass any additional keyword arguments to the TSNE transformer.
"""
- def __init__(self, ax=None, decompose='svd', decompose_by=50, classes=None,
- colors=None, colormap=None, **kwargs):
+ # NOTE: cannot be np.nan
+ NULL_CLASS = None
+
+ def __init__(self, ax=None, decompose='svd', decompose_by=50, labels=None,
+ classes=None, colors=None, colormap=None, random_state=None, **kwargs):
"""
Initialize the TSNE visualizer with visual hyperparameters.
"""
super(TSNEVisualizer, self).__init__(ax=ax, **kwargs)
- # Visualizer parameters
- self.classes_ = classes
- self.n_instances_ = 0
-
# Visual Parameters
- # TODO: Only colors currently works to select the colors of classes.
+ self.labels = labels
self.colors = colors
self.colormap = colormap
+ self.random_state = random_state
# TSNE Parameters
self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
@@ -181,12 +191,13 @@ def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
Parameters
----------
- decompose : string or None
- A preliminary decomposition is often used prior to TSNE to make the
- projection faster. Specify `"svd"` for sparse data or `"pca"` for
- dense data. If decompose is None, the original data set will be used.
+ decompose : string or None, default: ``'svd'``
+ A preliminary decomposition is often used prior to TSNE to make
+ the projection faster. Specify ``"svd"`` for sparse data or ``"pca"``
+ for dense data. If decompose is None, the original data set will
+ be used.
- decompose_by : int
+ decompose_by : int, default: 50
Specify the number of components for preliminary decomposition, by
default this is 50; the more components, the slower TSNE will be.
@@ -197,6 +208,8 @@ def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
Pipelined transformer for TSNE projections
"""
+ # TODO: detect decompose by inferring from sparse matrix or dense or
+ # If number of features > 50 etc.
decompositions = {
'svd': TruncatedSVD,
'pca': PCA,
@@ -215,10 +228,12 @@ def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
# Add the pre-decomposition
if decompose:
klass = decompositions[decompose]
- steps.append((decompose, klass(n_components=decompose_by)))
+ steps.append((decompose, klass(
+ n_components=decompose_by, random_state=self.random_state)))
# Add the TSNE manifold
- steps.append(('tsne', TSNE(n_components=2, **tsne_kwargs)))
+ steps.append(('tsne', TSNE(
+ n_components=2, random_state=self.random_state, **tsne_kwargs)))
# return the pipeline
return Pipeline(steps)
@@ -252,13 +267,17 @@ def fit(self, X, y=None, **kwargs):
Returns the instance of the transformer/visualizer
"""
- # If we don't have classes already stored, store them.
- if y and self.classes_ is None:
- self.classes_ = [str(label) for label in set(y)]
+ # Store the classes we observed in y
+ if y is not None:
+ self.classes_ = np.unique(y)
+ elif y is None and self.labels is not None:
+ self.classes_ = np.array([self.labels[0]])
+ else:
+ self.classes_ = np.array([self.NULL_CLASS])
# Fit our internal transformer and transform the data.
vecs = self.transformer_.fit_transform(X)
- self.n_instances_ += vecs.shape[0]
+ self.n_instances_ = vecs.shape[0]
# Draw the vectors
self.draw(vecs, y, **kwargs)
@@ -274,34 +293,45 @@ def draw(self, points, target=None, **kwargs):
of each of the points. If the target is not specified, then the points
are plotted as a single cloud to show similar documents.
"""
- # Create the color mapping for the classes.
- # TODO: Allow both colormap, listed colors, and palette definition
- # See the FeatureVisualizer for more on this.
- color_values = get_color_cycle()
- classes = self.classes_ or [None]
- colors = dict(zip(classes, color_values))
+ # Resolve the labels with the classes
+ labels = self.labels if self.labels is not None else self.classes_
+ if len(labels) != len(self.classes_):
+ raise YellowbrickValueError((
+ "number of supplied labels ({}) does not "
+ "match the number of classes ({})"
+ ).format(len(labels), len(self.classes_)))
+
+
+ # Create the color mapping for the labels.
+ color_values = resolve_colors(
+ n_colors=len(labels), colormap=self.colormap, colors=self.color)
+ colors = dict(zip(labels, color_values))
+
+ # Transform labels into a map of class to label
+ labels = dict(zip(self.classes_, labels))
# Expand the points into vectors of x and y for scatter plotting,
# assigning them to their label if the label has been passed in.
# Additionally, filter classes not specified directly by the user.
series = defaultdict(lambda: {'x':[], 'y':[]})
- if self.classes_: classes = frozenset(self.classes_)
-
- if target:
- for label, point in zip(target, points):
- if self.classes_ and label not in classes:
- continue
+ if target is not None:
+ for t, point in zip(target, points):
+ label = labels[t]
series[label]['x'].append(point[0])
series[label]['y'].append(point[1])
else:
+ label = self.classes_[0]
for x,y in points:
- series[None]['x'].append(x)
- series[None]['y'].append(y)
+ series[label]['x'].append(x)
+ series[label]['y'].append(y)
# Plot the points
for label, points in series.items():
- self.ax.scatter(points['x'], points['y'], c=colors[label], alpha=0.7, label=label)
+ self.ax.scatter(
+ points['x'], points['y'], c=colors[label],
+ alpha=0.7, label=label
+ )
def finalize(self, **kwargs):
"""
@@ -319,7 +349,7 @@ def finalize(self, **kwargs):
self.ax.set_xticks([])
# Add the legend outside of the figure box.
- if self.classes_:
+ if not all(self.classes_ == np.array([self.NULL_CLASS])):
box = self.ax.get_position()
self.ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
self.ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
| diff --git a/tests/baseline_images/test_text/test_tsne/test_integrated_tsne.png b/tests/baseline_images/test_text/test_tsne/test_integrated_tsne.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_tsne/test_integrated_tsne.png differ
diff --git a/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne.png b/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne.png differ
diff --git a/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne_class_labels.png b/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne_class_labels.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_tsne/test_make_classification_tsne_class_labels.png differ
diff --git a/tests/baseline_images/test_text/test_tsne/test_no_target_tsne.png b/tests/baseline_images/test_text/test_tsne/test_no_target_tsne.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_tsne/test_no_target_tsne.png differ
diff --git a/tests/baseline_images/test_text/test_tsne/test_visualizer_with_pandas.png b/tests/baseline_images/test_text/test_tsne/test_visualizer_with_pandas.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_tsne/test_visualizer_with_pandas.png differ
diff --git a/tests/requirements.txt b/tests/requirements.txt
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
@@ -12,6 +12,7 @@ pytest-flakes>=2.0.0
pytest-spec>=1.1.0
coverage>=4.4.1
requests>=2.18.3
+six==1.11.0
# Python 2 Testing Requirements
mock>=2.0.0
diff --git a/tests/test_text/test_tsne.py b/tests/test_text/test_tsne.py
--- a/tests/test_text/test_tsne.py
+++ b/tests/test_text/test_tsne.py
@@ -17,26 +17,37 @@
## Imports
##########################################################################
-
-import unittest
+import six
+import pytest
from yellowbrick.text.tsne import *
+from tests.base import VisualTestCase
from tests.dataset import DatasetMixin
from yellowbrick.exceptions import YellowbrickValueError
+
+from sklearn.datasets import make_classification
from sklearn.feature_extraction.text import TfidfVectorizer
+try:
+ import pandas
+except ImportError:
+ pandas = None
+
##########################################################################
## TSNE Tests
##########################################################################
-class TSNETests(unittest.TestCase, DatasetMixin):
+class TestTSNE(VisualTestCase, DatasetMixin):
+ """
+ TSNEVisualizer tests
+ """
def test_bad_decomposition(self):
"""
Ensure an error is raised when a bad decompose argument is specified
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
TSNEVisualizer(decompose='bob')
def test_make_pipeline(self):
@@ -45,20 +56,20 @@ def test_make_pipeline(self):
"""
tsne = TSNEVisualizer() # Should not cause an exception.
- self.assertIsNotNone(tsne.transformer_)
+ assert tsne.transformer_ is not None
svdp = tsne.make_transformer('svd', 90)
- self.assertEqual(len(svdp.steps), 2)
+ assert len(svdp.steps) == 2
pcap = tsne.make_transformer('pca')
- self.assertEqual(len(pcap.steps), 2)
+ assert len(pcap.steps) == 2
none = tsne.make_transformer(None)
- self.assertEqual(len(none.steps), 1)
+ assert len(none.steps) == 1
def test_integrated_tsne(self):
"""
- Assert no errors occur during tsne integration
+ Check tSNE integrated visualization on the hobbies corpus
"""
corpus = self.load_data('hobbies')
tfidf = TfidfVectorizer()
@@ -66,5 +77,96 @@ def test_integrated_tsne(self):
docs = tfidf.fit_transform(corpus.data)
labels = corpus.target
- tsne = TSNEVisualizer()
+ tsne = TSNEVisualizer(random_state=8392, colormap='Set1')
tsne.fit_transform(docs, labels)
+
+ tol = 40 if six.PY3 else 55
+ self.assert_images_similar(tsne, tol=tol)
+
+ def test_make_classification_tsne(self):
+ """
+ Test tSNE integrated visualization on a sklearn classifier dataset
+ """
+
+ ## produce random data
+ X, y = make_classification(n_samples=200, n_features=100,
+ n_informative=20, n_redundant=10,
+ n_classes=3, random_state=42)
+
+ ## visualize data with t-SNE
+ tsne = TSNEVisualizer(random_state=87)
+ tsne.fit(X, y)
+
+ tol = 0.1 if six.PY3 else 40
+ self.assert_images_similar(tsne, tol=tol)
+
+ def test_make_classification_tsne_class_labels(self):
+ """
+ Test tSNE integrated visualization with class labels specified
+ """
+
+ ## produce random data
+ X, y = make_classification(n_samples=200, n_features=100,
+ n_informative=20, n_redundant=10,
+ n_classes=3, random_state=42)
+
+ ## visualize data with t-SNE
+ tsne = TSNEVisualizer(random_state=87, labels=['a', 'b', 'c'])
+ tsne.fit(X, y)
+
+ tol = 0.1 if six.PY3 else 40
+ self.assert_images_similar(tsne, tol=tol)
+
+ def test_tsne_mismtached_labels(self):
+ """
+ Assert exception is raised when number of labels doesn't match
+ """
+ ## produce random data
+ X, y = make_classification(n_samples=200, n_features=100,
+ n_informative=20, n_redundant=10,
+ n_classes=3, random_state=42)
+
+ ## fewer labels than classes
+ tsne = TSNEVisualizer(random_state=87, labels=['a', 'b'])
+ with pytest.raises(YellowbrickValueError):
+ tsne.fit(X,y)
+
+ ## more labels than classes
+ tsne = TSNEVisualizer(random_state=87, labels=['a', 'b', 'c', 'd'])
+ with pytest.raises(YellowbrickValueError):
+ tsne.fit(X,y)
+
+
+ def test_no_target_tsne(self):
+ """
+ Test tSNE when no target or classes are specified
+ """
+ ## produce random data
+ X, y = make_classification(n_samples=200, n_features=100,
+ n_informative=20, n_redundant=10,
+ n_classes=3, random_state=6897)
+
+ ## visualize data with t-SNE
+ tsne = TSNEVisualizer(random_state=64)
+ tsne.fit(X)
+
+ self.assert_images_similar(tsne, tol=0.1)
+
+ @pytest.mark.skipif(pandas is None, reason="test requires pandas")
+ def test_visualizer_with_pandas(self):
+ """
+ Test tSNE when passed a pandas DataFrame and series
+ """
+ X, y = make_classification(
+ n_samples=200, n_features=100, n_informative=20, n_redundant=10,
+ n_classes=3, random_state=3020
+ )
+
+ X = pandas.DataFrame(X)
+ y = pandas.Series(y)
+
+ tsne = TSNEVisualizer(random_state=64)
+ tsne.fit(X, y)
+
+ tol = 0.1 if six.PY3 else 40
+ self.assert_images_similar(tsne, tol=tol)
| tSNE Value Error when no classes are specified
See [yellowbrick t-SNE fit raises ValueError](https://stackoverflow.com/questions/48950135/yellowbrick-t-sne-fit-raises-valueerror):
The issue has to do with the way the truth value of numpy arrays is evaluated (numpy 1.13 here): `if y` is ambiguous when `y` is a multi-element array. We need to change [yellowbrick.text.tsne Line 256](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/text/tsne.py#L256) to `if y is not None and self.classes_ is None`
code to produce:
```python
import pandas as pd
from yellowbrick.text import TSNEVisualizer
from sklearn.datasets import make_classification
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
n_informative=20, n_redundant=10,
n_classes=3, random_state=42)
## visualize data with t-SNE
tsne = TSNEVisualizer()
tsne.fit(X, y)
tsne.poof()
```
error raised:
```
Traceback (most recent call last):
File "t.py", line 12, in <module>
tsne.fit(X, y)
File "/Users/benjamin/Workspace/ddl/yellowbrick/yellowbrick/text/tsne.py", line 256, in fit
if y and self.classes_ is None:
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
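The ambiguity is easy to reproduce outside the visualizer; a minimal sketch of the failing truthiness test and the suggested `is not None` guard (illustrative only):
```python
import numpy as np

y = np.array([0, 1, 2])

# `if y:` asks for a single truth value of a multi-element array and raises
# the ValueError shown in the traceback above.
try:
    if y:
        pass
except ValueError as e:
    print(e)

# Checking for presence of the target instead works for lists, numpy arrays,
# and pandas Series alike.
classes_ = None
if y is not None and classes_ is None:
    classes_ = [str(label) for label in set(y)]
print(sorted(classes_))  # ['0', '1', '2']
```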
| 2018-03-11T12:47:32 |
|
DistrictDataLabs/yellowbrick | 343 | DistrictDataLabs__yellowbrick-343 | [
"301"
] | 967f7f0f3d8e8a1ec3417c558bf4793738183ca2 | diff --git a/yellowbrick/__init__.py b/yellowbrick/__init__.py
--- a/yellowbrick/__init__.py
+++ b/yellowbrick/__init__.py
@@ -23,7 +23,7 @@
_orig_rc_params = mpl.rcParams.copy()
# Import the version number at the top level
-from .version import get_version
+from .version import get_version, __version_info__
# Import the style management functions
from .style.rcmod import reset_defaults, reset_orig
diff --git a/yellowbrick/classifier/confusion_matrix.py b/yellowbrick/classifier/confusion_matrix.py
--- a/yellowbrick/classifier/confusion_matrix.py
+++ b/yellowbrick/classifier/confusion_matrix.py
@@ -17,6 +17,7 @@
## Imports
##########################################################################
+import warnings
import numpy as np
from sklearn.metrics import confusion_matrix
@@ -31,43 +32,77 @@
## ConfusionMatrix
##########################################################################
+CMAP_UNDERCOLOR = 'w'
CMAP_OVERCOLOR = '#2a7d4f'
+CMAP_MUTEDCOLOR = '0.75'
class ConfusionMatrix(ClassificationScoreVisualizer):
"""
- Creates a heatmap visualization of the sklearn.metrics.confusion_matrix(). A confusion
- matrix shows each combination of the true and predicted classes for a test data set.
+ Creates a heatmap visualization of the sklearn.metrics.confusion_matrix().
+ A confusion matrix shows each combination of the true and predicted
+ classes for a test data set.
- The default color map uses a yellow/orange/red color scale. The user can choose between
- displaying values as the percent of true (cell value divided by sum of row) or as direct
- counts. If percent of true mode is selected, 100% accurate predictions are highlighted in green.
+ The default color map uses a yellow/orange/red color scale. The user can
+ choose between displaying values as the percent of true (cell value
+ divided by sum of row) or as direct counts. If percent of true mode is
+ selected, 100% accurate predictions are highlighted in green.
- Requires a classification model
+ Requires a classification model.
Parameters
----------
- model : the Scikit-Learn estimator
- Should be an instance of a classifier or __init__ will return an error.
+ model : estimator
+ Must be a classifier, otherwise raises YellowbrickTypeError
- ax : the matplotlib axis to plot the figure on (if None, a new axis will be created)
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
+
+ sample_weight: array-like of shape = [n_samples], optional
+ Passed to ``confusion_matrix`` to weight the samples.
+
+ percent: bool, default: False
+ Determines whether or not the confusion_matrix is displayed as counts
+ or as a percent of true predictions. Note, if specifying a subset of
+ classes, percent should be set to False or inaccurate figures will be
+ displayed.
classes : list, default: None
- a list of class names to use in the confusion_matrix. This is passed to the 'labels'
- parameter of sklearn.metrics.confusion_matrix(), and follows the behaviour
- indicated by that function. It may be used to reorder or select a subset of labels.
- If None, values that appear at least once in y_true or y_pred are used in sorted order.
+ a list of class names to use in the confusion_matrix.
+ This is passed to the ``labels`` parameter of
+ ``sklearn.metrics.confusion_matrix()``, and follows the behaviour
+ indicated by that function. It may be used to reorder or select a
+ subset of labels. If None, classes that appear at least once in
+ ``y_true`` or ``y_pred`` are used in sorted order.
label_encoder : dict or LabelEncoder, default: None
- When specifying the ``classes`` argument, the input to ``fit()`` and ``score()`` must match the
- expected labels. If the ``X`` and ``y`` datasets have been encoded prior to training and the
- labels must be preserved for the visualization, use this argument to provide a mapping from the
- encoded class to the correct label. Because typically a Scikit-Learn ``LabelEncoder`` is used to
- perform this operation, you may provide it directly to the class to utilize its fitted encoding.
+ When specifying the ``classes`` argument, the input to ``fit()``
+ and ``score()`` must match the expected labels. If the ``X`` and ``y``
+ datasets have been encoded prior to training and the labels must be
+ preserved for the visualization, use this argument to provide a
+ mapping from the encoded class to the correct label. Because typically
+ a Scikit-Learn ``LabelEncoder`` is used to perform this operation, you
+ may provide it directly to the class to utilize its fitted encoding.
+
+ cmap : string, default: ``'YlOrRd'``
+ Specify a colormap to define the heatmap of the predicted class
+ against the actual class in the confusion matrix.
+
+ fontsize : int, default: None
+ Specify the fontsize of the text in the grid and labels to make the
+ matrix a bit easier to read. Uses rcParams font size by default.
+
+ Attributes
+ ----------
+ confusion_matrix_ : array, shape = [n_classes, n_classes]
+ The numeric scores of the confusion matrix
+
+ class_counts_ : array, shape = [n_classes,]
+ The total number of each class supporting the confusion matrix
Examples
--------
-
>>> from yellowbrick.classifier import ConfusionMatrix
>>> from sklearn.linear_model import LogisticRegression
>>> viz = ConfusionMatrix(LogisticRegression())
@@ -77,43 +112,56 @@ class ConfusionMatrix(ClassificationScoreVisualizer):
"""
- def __init__(self, model, ax=None, classes=None, label_encoder=None, **kwargs):
+ def __init__(self, model, ax=None, classes=None, sample_weight=None,
+ percent=False, label_encoder=None, cmap='YlOrRd',
+ fontsize=None, **kwargs):
super(ConfusionMatrix, self).__init__(
model, ax=ax, classes=classes, **kwargs
)
- #Initialize all the other attributes we'll use (for coder clarity)
- self.confusion_matrix = None
-
- self.cmap = color_sequence(kwargs.pop('cmap', 'YlOrRd'))
- self.cmap.set_under(color = 'w')
+ # Visual parameters
+ self.cmap = color_sequence(cmap)
+ self.cmap.set_under(color=CMAP_UNDERCOLOR)
self.cmap.set_over(color=CMAP_OVERCOLOR)
- self.edgecolors = [] #used to draw diagonal line for predicted class = true class
+ self.fontsize = fontsize
+
+ # Estimator parameters
self.label_encoder = label_encoder
+ self.sample_weight = sample_weight
+ self.percent = percent
- def score(self, X, y, sample_weight=None, percent=True):
+ # Used to draw diagonal line for predicted class = true class
+ self._edgecolors = []
+
+ def score(self, X, y, **kwargs):
"""
- Generates the Scikit-Learn confusion_matrix and applies this to the appropriate axis
+ Draws a confusion matrix based on the test data supplied by comparing
+ predictions on instances X with the true values specified by the
+ target vector y.
Parameters
----------
-
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
y : ndarray or Series of length n
An array or series of target or class values
-
- sample_weight: optional, passed to the confusion_matrix
-
- percent: optional, Boolean. Determines whether or not the confusion_matrix
- should be displayed as raw numbers or as a percent of the true
- predictions. Note, if using a subset of classes in __init__, percent should
- be set to False or inaccurate percents will be displayed.
"""
+ # Perform deprecation warnings for attributes to score
+ # TODO: remove this in v0.9
+ for param in ("percent", "sample_weight"):
+ if param in kwargs:
+ warnings.warn(PendingDeprecationWarning((
+ "specifying '{}' in score is no longer supported, "
+ "pass to constructor of the visualizer instead."
+ ).format(param)))
+
+ setattr(self, param, kwargs[param])
+
+ # Create predictions from X (will raise not fitted error)
y_pred = self.predict(X)
-
+ # Encode the target with the supplied label encoder
if self.label_encoder:
try :
y = self.label_encoder.inverse_transform(y)
@@ -123,128 +171,100 @@ def score(self, X, y, sample_weight=None, percent=True):
y = [self.label_encoder[x] for x in y]
y_pred = [self.label_encoder[x] for x in y_pred]
- self.confusion_matrix = confusion_matrix(
- y, y_pred, labels=self.classes_, sample_weight=sample_weight
+ # Compute the confusion matrix and class counts
+ self.confusion_matrix_ = confusion_matrix(
+ y, y_pred, labels=self.classes_, sample_weight=self.sample_weight
)
- self._class_counts = self.class_counts(y)
+ self.class_counts_ = self.class_counts(y)
- #Make array of only the classes actually being used.
- #Needed because sklearn confusion_matrix only returns counts for selected classes
- #but percent should be calculated based on all classes
+ # Make array of only the classes actually being used.
+ # Needed because sklearn confusion_matrix only returns counts for
+ # selected classes but percent should be calculated on all classes
selected_class_counts = []
for c in self.classes_:
try:
- selected_class_counts.append(self._class_counts[c])
+ selected_class_counts.append(self.class_counts_[c])
except KeyError:
selected_class_counts.append(0)
- self.selected_class_counts = np.array(selected_class_counts)
+ self.class_counts_ = np.array(selected_class_counts)
- return self.draw(percent)
+ return self.draw()
- def draw(self, percent=True):
+ def draw(self):
+ """
+ Renders the classification report; must be called after score.
"""
- Renders the classification report
- Should only be called internally, as it uses values calculated in Score
- and score calls this method.
-
- Parameters
- ----------
- percent: Boolean
- Whether the heatmap should represent "% of True" or raw counts
+ # Perform display related manipulations on the confusion matrix data
+ cm_display = self.confusion_matrix_
+
+ # Convert confusion matrix to percent of each row, i.e. the
+ # predicted as a percent of true in each class.
+ if self.percent == True:
+ # Note: div_safe function returns 0 instead of NAN.
+ cm_display = div_safe(self.confusion_matrix_, self.class_counts_)
+ cm_display = np.round(cm_display* 100, decimals=0)
+
+ # Y axis should be sorted top to bottom in pcolormesh
+ cm_display = cm_display[::-1,::]
+
+ # Set up the dimensions of the pcolormesh
+ n_classes = len(self.classes_)
+ X, Y = np.arange(n_classes+1), np.arange(n_classes+1)
+ self.ax.set_ylim(bottom=0, top=cm_display.shape[0])
+ self.ax.set_xlim(left=0, right=cm_display.shape[1])
+
+ # Fetch the grid labels from the classes in correct order; set ticks.
+ xticklabels = self.classes_
+ yticklabels = self.classes_[::-1]
+ ticks = np.arange(n_classes) + 0.5
+
+ self.ax.set(xticks=ticks, yticks=ticks)
+ self.ax.set_xticklabels(xticklabels, rotation="vertical", fontsize=self.fontsize)
+ self.ax.set_yticklabels(yticklabels, fontsize=self.fontsize)
+
+ # Set data labels in the grid enumerating over all x,y class pairs.
+ # NOTE: X and Y are one element longer than the confusion matrix, so
+ # skip the last element in the enumeration to label grids.
+ for x in X[:-1]:
+ for y in Y[:-1]:
+
+ # Extract the value and the text label
+ value = cm_display[x,y]
+ svalue = "{:0.0f}".format(value)
+ if self.percent:
+ svalue += "%"
+
+ # Determine the grid and text colors
+ base_color = self.cmap(value / cm_display.max())
+ text_color = find_text_color(base_color)
+
+ # Make zero values more subtle
+ if cm_display[x,y] == 0:
+ text_color = CMAP_MUTEDCOLOR
+
+ # Add the label to the middle of the grid
+ cx, cy = x+0.5, y+0.5
+ self.ax.text(
+ cy, cx, svalue, va='center', ha='center',
+ color=text_color, fontsize=self.fontsize,
+ )
+
+ # Add a dark line on the grid with the diagonal. Note that the
+ # tick labels have already been reversed.
+ lc = 'k' if xticklabels[x] == yticklabels[y] else 'w'
+ self._edgecolors.append(lc)
+
+
+ # Draw the heatmap with colors bounded by vmin,vmax
+ vmin = 0.00001
+ vmax = 99.999 if self.percent == True else cm_display.max()
+ self.ax.pcolormesh(
+ X, Y, cm_display, vmin=vmin, vmax=vmax,
+ edgecolor=self._edgecolors, cmap=self.cmap, linewidth='0.01'
+ )
- """
- if percent == True:
- #Convert confusion matrix to percent of each row, i.e. the predicted as a percent of true in each class
- #div_safe function returns 0 instead of NAN.
- self._confusion_matrix_display = div_safe(
- self.confusion_matrix,
- self.selected_class_counts
- )
- self._confusion_matrix_display =np.round(self._confusion_matrix_display* 100, decimals=0)
- else:
- self._confusion_matrix_display = self.confusion_matrix
-
- #Y axis should be sorted top to bottom in pcolormesh
- self._confusion_matrix_plottable = self._confusion_matrix_display[::-1,::]
-
- self.max = self._confusion_matrix_plottable.max()
-
- #Set up the dimensions of the pcolormesh
- X = np.linspace(start=0, stop=len(self.classes_), num=len(self.classes_)+1)
- Y = np.linspace(start=0, stop=len(self.classes_), num=len(self.classes_)+1)
- self.ax.set_ylim(bottom=0, top=self._confusion_matrix_plottable.shape[0])
- self.ax.set_xlim(left=0, right=self._confusion_matrix_plottable.shape[1])
-
- #Put in custom axis labels
- self.xticklabels = self.classes_
- self.yticklabels = self.classes_[::-1]
- self.xticks = np.arange(0, len(self.classes_), 1) + .5
- self.yticks = np.arange(0, len(self.classes_), 1) + .5
- self.ax.set(xticks=self.xticks, yticks=self.yticks)
- self.ax.set_xticklabels(self.xticklabels, rotation="vertical", fontsize=8)
- self.ax.set_yticklabels(self.yticklabels, fontsize=8)
-
- ######################
- # Add the data labels to each square
- ######################
- for x_index, x in np.ndenumerate(X):
- #np.ndenumerate returns a tuple for the index, must access first element using [0]
- x_index = x_index[0]
- for y_index, y in np.ndenumerate(Y):
- #Clean up our iterators
- #numpy doesn't like non integers as indexes; also np.ndenumerate returns tuple
- x_int = int(x)
- y_int = int(y)
- y_index = y_index[0]
-
- #X and Y are one element longer than the confusion_matrix. Don't want to add text for the last X or Y
- if x_index == X[-1] or y_index == Y[-1]:
- break
-
- #center the text in the middle of the block
- text_x = x + 0.5
- text_y = y + 0.5
-
- #extract the value
- grid_val = self._confusion_matrix_plottable[x_int,y_int]
-
- #Determine text color
- scaled_grid_val = grid_val / self.max
- base_color = self.cmap(scaled_grid_val)
- text_color= find_text_color(base_color)
-
- #make zero values more subtle
- if self._confusion_matrix_plottable[x_int,y_int] == 0:
- text_color = "0.75"
-
- #Put the data labels in the middle of the heatmap square
- self.ax.text(text_y,
- text_x,
- "{:.0f}{}".format(grid_val,"%" if percent==True else ""),
- va='center',
- ha='center',
- fontsize=8,
- color=text_color)
-
- #If the prediction is correct, put a bounding box around that square to better highlight it to the user
- #This will be used in ax.pcolormesh, setting now since we're iterating over the matrix
- #ticklabels are conveniently already reversed properly to match the _confusion_matrix_plottalbe order
- if self.xticklabels[x_int] == self.yticklabels[y_int]:
- self.edgecolors.append('black')
- else:
- self.edgecolors.append('w')
-
- # Draw the heatmap. vmin and vmax operate in tandem with the cmap.set_under and cmap.set_over to alter the color of 0 and 100
- highest_count = self._confusion_matrix_plottable.max()
- vmax = 99.999 if percent == True else highest_count
- self.ax.pcolormesh(X, Y,
- self._confusion_matrix_plottable,
- vmin=0.00001,
- vmax=vmax,
- edgecolor=self.edgecolors,
- cmap=self.cmap,
- linewidth='0.01') #edgecolor='0.75', linewidth='0.01'
+ # Return the axes being drawn on
return self.ax
def finalize(self, **kwargs):
diff --git a/yellowbrick/classifier/rocauc.py b/yellowbrick/classifier/rocauc.py
--- a/yellowbrick/classifier/rocauc.py
+++ b/yellowbrick/classifier/rocauc.py
@@ -60,11 +60,12 @@ class ROCAUC(ClassificationScoreVisualizer):
Parameters
----------
- ax : the axis to plot the figure on.
+ model : estimator
+ Must be a classifier, otherwise raises YellowbrickTypeError
- model : the Scikit-Learn estimator
- Should be an instance of a classifier, else the __init__ will
- return an error.
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
classes : list
A list of class names for the legend. If classes is None and a y value
| diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_inverse_mapping.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_inverse_mapping.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_inverse_mapping.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_inverse_mapping.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png differ
diff --git a/tests/test_classifier/test_confusion_matrix.py b/tests/test_classifier/test_confusion_matrix.py
--- a/tests/test_classifier/test_confusion_matrix.py
+++ b/tests/test_classifier/test_confusion_matrix.py
@@ -1,138 +1,325 @@
-import yellowbrick
+# tests.test_classifier.test_confusion_matrix
+# Tests for the confusion matrix visualizer
+#
+# Author: Neal Humphrey
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Tue May 03 11:05:11 2017 -0700
+#
+# ID: test_confusion_matrix.py [] [email protected] $
+
+"""
+Tests for the confusion matrix visualizer
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import six
+import pytest
+import yellowbrick as yb
+import numpy.testing as npt
import matplotlib.pyplot as plt
+from collections import namedtuple
+
from yellowbrick.classifier.confusion_matrix import *
from tests.base import VisualTestCase
+from tests.dataset import DatasetMixin
+from sklearn.svm import SVC
from sklearn.datasets import load_digits
+from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.model_selection import train_test_split as tts
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
+
+# Helpers for fixtures
+Dataset = namedtuple('Dataset', 'X,y')
+Split = namedtuple('Split', 'train,test')
-class ConfusionMatrixTests(VisualTestCase):
+
[email protected](scope='class')
+def digits(request):
"""
- ConfusionMatrix visualizer
+ Creates a fixture of train and test splits for the sklearn digits dataset
+ For ease of use returns a Dataset named tuple composed of two Split tuples.
"""
+ data = load_digits()
+ X_train, X_test, y_train, y_test = tts(
+ data.data, data.target, test_size=0.2, random_state=11
+ )
- def setUp(self):
- #Use the same data for all the tests
- self.digits = load_digits()
+ # Set a class attribute for digits
+ request.cls.digits = Dataset(
+ Split(X_train, X_test), Split(y_train, y_test)
+ )
- X = self.digits.data
- y = self.digits.target
- X_train, X_test, y_train, y_test = tts(X,y, test_size =0.2, random_state=11)
- self.X_train = X_train
- self.X_test = X_test
- self.y_train = y_train
- self.y_test = y_test
[email protected]("digits")
+class ConfusionMatrixTests(VisualTestCase, DatasetMixin):
+ """
+ ConfusionMatrix visualizer tests
+ """
def test_confusion_matrix(self):
"""
- Integration test of visualizer
+ Integration test on digits dataset with LogisticRegression
"""
- fig = plt.figure()
- ax = fig.add_subplot()
+ _, ax = plt.subplots()
- model = LogisticRegression()
+ model = LogisticRegression(random_state=93)
cm = ConfusionMatrix(model, ax=ax, classes=[0,1,2,3,4,5,6,7,8,9])
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
- self.assert_images_similar(cm)
+ self.assert_images_similar(cm, tol=10)
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [38, 0, 0, 0, 0, 0, 0, 0, 0, 0],
+ [ 0, 35, 0, 0, 0, 0, 0, 0, 2, 0],
+ [ 0, 0, 39, 0, 0, 0, 0, 0, 0, 0],
+ [ 0, 0, 0, 38, 0, 1, 0, 0, 2, 0],
+ [ 0, 0, 0, 0, 40, 0, 0, 1, 0, 0],
+ [ 0, 0, 0, 0, 0, 27, 0, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 1, 29, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 0, 0, 35, 0, 1],
+ [ 0, 2, 0, 0, 0, 0, 0, 0, 32, 0],
+ [ 0, 0, 0, 0, 0, 0, 0, 1, 1, 35]]))
def test_no_classes_provided(self):
"""
- Assert no errors when no classes are provided
+ Integration test on digits dataset with GaussianNB, no classes
+ """
+ _, ax = plt.subplots()
+
+ model = GaussianNB()
+ cm = ConfusionMatrix(model, ax=ax, classes=None)
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ self.assert_images_similar(cm, tol=10)
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [36, 0, 0, 0, 1, 0, 0, 1, 0, 0],
+ [ 0, 31, 0, 0, 0, 0, 0, 1, 3, 2],
+ [ 0, 1, 34, 0, 0, 0, 0, 0, 4, 0],
+ [ 0, 1, 0, 33, 0, 2, 0, 2, 3, 0],
+ [ 0, 0, 0, 0, 36, 0, 0, 5, 0, 0],
+ [ 0, 0, 0, 0, 0, 27, 0, 0, 0, 0],
+ [ 0, 0, 1, 0, 1, 0, 28, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 0, 0, 36, 0, 0],
+ [ 0, 3, 0, 1, 0, 1, 0, 4, 25, 0],
+ [ 1, 2, 0, 0, 1, 0, 0, 8, 3, 22]]))
+
+ def test_fontsize(self):
+ """
+ Test confusion matrix with smaller fontsize on digits dataset with SVC
+ """
+ _, ax = plt.subplots()
+
+ model = SVC(random_state=93)
+ cm = ConfusionMatrix(model, ax=ax, fontsize=8)
+
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ self.assert_images_similar(cm, tol=10)
+
+ def test_percent_mode(self):
+ """
+ Test confusion matrix in percent mode on digits dataset with SVC
"""
- model = LogisticRegression()
- cm = ConfusionMatrix(model)
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ _, ax = plt.subplots()
+
+ model = SVC(random_state=93)
+ cm = ConfusionMatrix(model, ax=ax, percent=True)
+
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
- def test_raw_count_mode(self):
+ self.assert_images_similar(cm, tol=10)
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [16, 0, 0, 0, 0, 22, 0, 0, 0, 0],
+ [ 0, 11, 0, 0, 0, 26, 0, 0, 0, 0],
+ [ 0, 0, 10, 0, 0, 29, 0, 0, 0, 0],
+ [ 0, 0, 0, 6, 0, 35, 0, 0, 0, 0],
+ [ 0, 0, 0, 0, 11, 30, 0, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 27, 0, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 9, 21, 0, 0, 0],
+ [ 0, 0, 0, 0, 0, 29, 0, 7, 0, 0],
+ [ 0, 0, 0, 0, 0, 32, 0, 0, 2, 0],
+ [ 0, 0, 0, 0, 0, 34, 0, 0, 0, 3]]))
+
+ def test_deprecated_fit_kwargs(self):
"""
- Assert that raw count mode works as expected
+ Test that passing percent or sample_weight is deprecated
"""
- model = LogisticRegression()
- cm = ConfusionMatrix(model)
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test, percent=False)
+ if yb.__version_info__['minor'] >= 9:
+ pytest.fail("deprecation warnings should be removed after 0.9")
+
+ args = (self.digits.X.test, self.digits.y.test)
+ cm = ConfusionMatrix(LogisticRegression())
+ cm.fit(self.digits.X.train, self.digits.y.train)
- def test_zoomed_in(self):
+ # Deprecated percent in score
+ pytest.deprecated_call(cm.score, *args, percent=True)
+
+ # Deprecated sample_weight in score
+ pytest.deprecated_call(cm.score, *args, sample_weight=np.arange(360))
+
+ def test_class_filter_eg_zoom_in(self):
"""
- Test zoomed in classes works as expected
+ Test filtering classes zooms in on the confusion matrix.
"""
- model = LogisticRegression()
- cm = ConfusionMatrix(model, classes=[0,1,2])
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ _, ax = plt.subplots()
+
+ model = LogisticRegression(random_state=93)
+ cm = ConfusionMatrix(model, ax=ax, classes=[0,1,2])
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ self.assert_images_similar(cm, tol=10)
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [38, 0, 0],
+ [ 0, 35, 0],
+ [ 0, 0, 39]]))
def test_extra_classes(self):
"""
- Test that extra classes are ignored
+ Assert that any extra classes are simply ignored
"""
- model = LogisticRegression()
- cm = ConfusionMatrix(model, classes=[0,1,2,11])
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
- self.assertTrue(cm.selected_class_counts[3]==0)
+ # TODO: raise exception instead
+ _, ax = plt.subplots()
+
+ model = LogisticRegression(random_state=93)
+ cm = ConfusionMatrix(model, ax=ax, classes=[0,1,2,11])
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ npt.assert_array_equal(cm.class_counts_, [38, 37, 39, 0])
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [38, 0, 0, 0],
+ [ 0, 35, 0, 0],
+ [ 0, 0, 39, 0],
+ [ 0, 0, 0, 0]]))
+
+ self.assert_images_similar(cm, tol=10)
def test_one_class(self):
"""
- Test single class confusion matrix
+ Test single class confusion matrix with LogisticRegression
"""
- fig = plt.figure()
- ax = fig.add_subplot()
+ _, ax = plt.subplots()
- model = LogisticRegression()
+ model = LogisticRegression(random_state=93)
cm = ConfusionMatrix(model, ax=ax, classes=[0])
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
- self.assert_images_similar(cm)
+ self.assert_images_similar(cm, tol=10)
def test_defined_mapping(self):
"""
- Test mapping as label encoder
+ Test mapping as label encoder to define tick labels
"""
- model = LogisticRegression()
+ _, ax = plt.subplots()
+
+ model = LogisticRegression(random_state=93)
classes = ['one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
mapping = {0: 'zero', 1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five',
6: 'six', 7: 'seven', 8: 'eight', 9: 'nine'}
- cm = ConfusionMatrix(model, classes=classes, label_encoder = mapping)
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ cm = ConfusionMatrix(model, ax=ax, classes=classes, label_encoder=mapping)
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ assert [l.get_text() for l in ax.get_xticklabels()] == classes
+ ylabels = [l.get_text() for l in ax.get_yticklabels()]
+ ylabels.reverse()
def test_inverse_mapping(self):
"""
- Test LabelEncoder as label encoder
+ Test LabelEncoder as label encoder to define tick labels
"""
- fig = plt.figure()
- ax = fig.add_subplot()
+ _, ax = plt.subplots()
- model = LogisticRegression()
+ model = LogisticRegression(random_state=93)
le = LabelEncoder()
classes = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
le.fit(['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine'])
cm = ConfusionMatrix(model, ax=ax, classes=classes, label_encoder=le)
- cm.fit(self.X_train, self.y_train)
- cm.score(self.X_test, self.y_test)
+ cm.fit(self.digits.X.train, self.digits.y.train)
+ cm.score(self.digits.X.test, self.digits.y.test)
+
+ assert [l.get_text() for l in ax.get_xticklabels()] == classes
+ ylabels = [l.get_text() for l in ax.get_yticklabels()]
+ ylabels.reverse()
+ assert ylabels == classes
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_pandas_integration(self):
+ """
+ Test with Pandas DataFrame and Series input
+ """
+ _, ax = plt.subplots()
- self.assert_images_similar(cm)
+ # Load the occupancy dataset from fixtures
+ data = self.load_data('occupancy')
+ target = 'occupancy'
+ features = [
+ "temperature", "relative_humidity", "light", "C02", "humidity"
+ ]
+
+ # Create instances and target
+ X = pd.DataFrame(data[features])
+ y = pd.Series(data[target].astype(int))
+
+ # Create train/test splits
+ splits = tts(X, y, test_size=0.2, random_state=8873)
+ X_train, X_test, y_train, y_test = splits
+
+ # Create confusion matrix
+ model = GaussianNB()
+ cm = ConfusionMatrix(model, ax=ax, classes=None)
+ cm.fit(X_train, y_train)
+ cm.score(X_test, y_test)
+
+ tol = 0.1 if six.PY3 else 40
+ self.assert_images_similar(cm, tol=tol)
+
+ # Ensure correct confusion matrix under the hood
+ npt.assert_array_equal(cm.confusion_matrix_, np.array([
+ [3012, 114],
+ [ 1, 985]
+ ]))
def test_isclassifier(self):
"""
- Test taht non-classifiers raise exceptions
+ Assert that only classifiers can be used with the visualizer.
"""
model = PassiveAggressiveRegressor()
- message = 'This estimator is not a classifier; try a regression or clustering score visualizer instead!'
- classes = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
+ message = (
+ 'This estimator is not a classifier; '
+ 'try a regression or clustering score visualizer instead!'
+ )
- with self.assertRaisesRegexp(yellowbrick.exceptions.YellowbrickError, message):
- ConfusionMatrix(model, classes=classes)
+ with self.assertRaisesRegexp(yb.exceptions.YellowbrickError, message):
+ ConfusionMatrix(model)
| ConfusionMatrix FontSize
The confusion matrix label font size is hardcoded to 8pt: https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/classifier/confusion_matrix.py#L228
It appears that this may be a tad too small: https://stackoverflow.com/questions/47450804/yellowbrick-increasing-font-size-on-yellowbrick-generated-charts
We should probably remove this line and make it default to the matplotlib font size.
| +1!
I'm changing my matplotlib defaults because on my UHD screen the tiny labels are hard to read; this works for the `ClassificationReport`, but `ConfusionMatrix` is stubbornly unreadable. I'm planning to include these yb plots in a talk at the PyDataLondon conference this year. Unless there's a compelling reason to hard-code the size to 8, I'd suggest using the `mpl` defaults.
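A possible stopgap until the hardcoding is removed (sketch only; it uses the scikit-learn digits data and assumes the grid's count labels are ordinary `ax.text` objects that can be resized after `score()` has drawn them):
```python
# Workaround sketch (not from the original thread): raise the global font size,
# then bump the hardcoded 8pt cell labels after the matrix has been drawn.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ConfusionMatrix

plt.rcParams.update({'font.size': 14})    # picked up by titles, ticks, axis labels

X_train, X_test, y_train, y_test = train_test_split(*load_digits(return_X_y=True))

cm = ConfusionMatrix(LogisticRegression())
cm.fit(X_train, y_train)
cm.score(X_test, y_test)                  # draws the matrix onto cm.ax

for label in cm.ax.texts:                 # resize the 8pt count labels in the grid
    label.set_fontsize(14)
# newer releases also appear to accept ConfusionMatrix(model, fontsize=14) directly
cm.poof()
```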
The hardcoding occurs 3 times here: https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/classifier/confusion_matrix.py
+1 Was just going to file this | 2018-03-16T22:26:49 |
DistrictDataLabs/yellowbrick | 348 | DistrictDataLabs__yellowbrick-348 | [
"267"
] | 1b8be6a8458ab6675cbeac10e5338fd60a5274bf | diff --git a/docs/api/classifier/classification_report.py b/docs/api/classifier/classification_report.py
--- a/docs/api/classifier/classification_report.py
+++ b/docs/api/classifier/classification_report.py
@@ -1,29 +1,58 @@
+# classification_report
+# Generates images for the classification report documentation.
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Sun Mar 18 16:35:30 2018 -0400
+#
+# ID: classification_report.py [] [email protected] $
+
+"""
+Generates images for the classification report documentation.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
import pandas as pd
+import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
-from sklearn.model_selection import train_test_split
+from sklearn.model_selection import train_test_split as tts
from yellowbrick.classifier import ClassificationReport
-if __name__ == '__main__':
- # Load the regression data set
+##########################################################################
+## Quick Methods
+##########################################################################
+
+def make_dataset():
data = pd.read_csv("../../../examples/data/occupancy/occupancy.csv")
- features = ["temperature", "relative humidity", "light", "C02", "humidity"]
- classes = ['unoccupied', 'occupied']
+ X = data[["temperature", "relative humidity", "light", "C02", "humidity"]]
+ y = data.occupancy
+
+ return tts(X, y, test_size=0.2)
+
- # Extract the numpy arrays from the data frame
- X = data[features].as_matrix()
- y = data.occupancy.as_matrix()
+def make_gb_report(path="images/classification_report.png"):
+ X_train, X_test, y_train, y_test = make_dataset()
- # Create the train and test data
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+ _, ax = plt.subplots()
- # Instantiate the classification model and visualizer
bayes = GaussianNB()
- visualizer = ClassificationReport(bayes, classes=classes)
+ viz = ClassificationReport(bayes, ax=ax, classes=['unoccupied', 'occupied'])
- visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
- visualizer.score(X_test, y_test) # Evaluate the model on the test data
- g = visualizer.poof(outpath="images/classification_report.png") # Draw/show/poof the data
+ viz.fit(X_train, y_train)
+ viz.score(X_test, y_test)
+
+ viz.poof(outpath=path)
+
+
+##########################################################################
+## Main Method
+##########################################################################
+
+if __name__ == '__main__':
+ make_gb_report()
diff --git a/yellowbrick/classifier/classification_report.py b/yellowbrick/classifier/classification_report.py
--- a/yellowbrick/classifier/classification_report.py
+++ b/yellowbrick/classifier/classification_report.py
@@ -34,6 +34,11 @@
## Classification Report
##########################################################################
+CMAP_UNDERCOLOR = 'w'
+CMAP_OVERCOLOR = '#2a7d4f'
+SCORES_KEYS = ('precision', 'recall', 'f1')
+
+
class ClassificationReport(ClassificationScoreVisualizer):
"""
Classification report that shows the precision, recall, and F1 scores
@@ -41,7 +46,6 @@ class ClassificationReport(ClassificationScoreVisualizer):
Parameters
----------
-
ax : The axis to plot the figure on.
model : the Scikit-Learn estimator
@@ -52,14 +56,14 @@ class ClassificationReport(ClassificationScoreVisualizer):
If classes is None and a y value is passed to fit then the classes
are selected from the target vector.
- colormap : optional string or matplotlib cmap to colorize lines
- Use sequential heatmap.
+ cmap : string, default: ``'YlOrRd'``
+ Specify a colormap to define the heatmap of the predicted class
+ against the actual class in the confusion matrix.
kwargs : keyword arguments passed to the super class.
Examples
--------
-
>>> from yellowbrick.classifier import ClassificationReport
>>> from sklearn.linear_model import LogisticRegression
>>> viz = ClassificationReport(LogisticRegression())
@@ -67,69 +71,91 @@ class ClassificationReport(ClassificationScoreVisualizer):
>>> viz.score(X_test, y_test)
>>> viz.poof()
+ Attributes
+ ----------
+ scores_ : dict of dicts
+ Outer dictionary composed of precision, recall, and f1 scores with
+ inner dictionaries specifiying the values for each class listed.
"""
- def __init__(self, model, ax=None, classes=None, **kwargs):
+ def __init__(self, model, ax=None, classes=None, cmap='YlOrRd', **kwargs):
super(ClassificationReport, self).__init__(
model, ax=ax, classes=classes, **kwargs
)
- self.cmap = color_sequence(kwargs.pop('cmap', 'YlOrRd'))
+ self.cmap = color_sequence(cmap)
+ self.cmap.set_under(color=CMAP_UNDERCOLOR)
+ self.cmap.set_over(color=CMAP_OVERCOLOR)
def score(self, X, y=None, **kwargs):
"""
- Generates the Scikit-Learn classification_report
+ Generates the Scikit-Learn classification report.
Parameters
----------
-
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
y : ndarray or Series of length n
An array or series of target or class values
-
"""
y_pred = self.predict(X)
- keys = ('precision', 'recall', 'f1')
- self.scores = precision_recall_fscore_support(y, y_pred)
- self.scores = map(lambda s: dict(zip(self.classes_, s)), self.scores[0:3])
- self.scores = dict(zip(keys, self.scores))
- return self.draw(y, y_pred)
+ scores = precision_recall_fscore_support(y, y_pred)
+ scores = map(lambda s: dict(zip(self.classes_, s)), scores[0:3])
+ self.scores_ = dict(zip(SCORES_KEYS, scores))
+
+ return self.draw()
- def draw(self, y, y_pred):
+ def draw(self):
"""
Renders the classification report across each axis.
-
- Parameters
- ----------
-
- y : ndarray or Series of length n
- An array or series of target or class values
-
- y_pred : ndarray or Series of length n
- An array or series of predicted target values
"""
- self.matrix = []
- for cls in self.classes_:
- self.matrix.append([self.scores['precision'][cls],self.scores['recall'][cls],self.scores['f1'][cls]])
-
- for column in range(0,3): #3 columns - prec,rec,f1
- for row in range(len(self.classes_)):
- current_score = self.matrix[row][column]
- base_color = self.cmap(current_score)
- text_color= find_text_color(base_color)
-
- # Limit the current score to a precision of 3
- current_score = "{:0.3f}".format(current_score)
-
- self.ax.text(column,row,current_score,va='center',ha='center', color=text_color)
-
- plt.imshow(self.matrix, interpolation='nearest', cmap=self.cmap, vmin=0, vmax=1, aspect='auto')
+ # Create display grid
+ cr_display = np.zeros((len(self.classes_), 3))
+
+ # For each class row, append columns for precision, recall, and f1
+ for idx, cls in enumerate(self.classes_):
+ for jdx, metric in enumerate(('precision', 'recall', 'f1')):
+ cr_display[idx, jdx] = self.scores_[metric][cls]
+
+ # Set up the dimensions of the pcolormesh
+ # NOTE: pcolormesh accepts grids that are (N+1,M+1)
+ X, Y = np.arange(len(self.classes_)+1), np.arange(4)
+ self.ax.set_ylim(bottom=0, top=cr_display.shape[0])
+ self.ax.set_xlim(left=0, right=cr_display.shape[1])
+
+ # Set data labels in the grid, enumerating over class, metric pairs
+ # NOTE: X and Y are one element longer than the classification report
+ # so skip the last element to label the grid correctly.
+ for x in X[:-1]:
+ for y in Y[:-1]:
+
+ # Extract the value and the text label
+ value = cr_display[x,y]
+ svalue = "{:0.3f}".format(value)
+
+ # Determine the grid and text colors
+ base_color = self.cmap(value)
+ text_color = find_text_color(base_color)
+
+ # Add the label to the middle of the grid
+ cx, cy = x+0.5, y+0.5
+ self.ax.text(
+ cy, cx, svalue, va='center', ha='center', color=text_color
+ )
+
+
+ # Draw the heatmap with colors bounded by the min and max of the grid
+ # NOTE: I do not understand why this is Y, X instead of X, Y it works
+ # in this order but raises an exception with the other order.
+ g = self.ax.pcolormesh(
+ Y, X, cr_display, vmin=0, vmax=1, cmap=self.cmap, edgecolor='w',
+ )
# Add the color bar
- plt.colorbar()
+ plt.colorbar(g, ax=self.ax)
+ # Return the axes being drawn on
return self.ax
def finalize(self, **kwargs):
@@ -145,20 +171,14 @@ def finalize(self, **kwargs):
# Set the title of the classifiation report
self.set_title('{} Classification Report'.format(self.name))
- # Compute the tick marks for both x and y
- x_tick_marks = np.arange(len(self.classes_)+1)
- y_tick_marks = np.arange(len(self.classes_))
-
# Set the tick marks appropriately
- self.ax.set_xticks(x_tick_marks)
- self.ax.set_yticks(y_tick_marks)
+ self.ax.set_xticks(np.arange(3)+0.5)
+ self.ax.set_yticks(np.arange(len(self.classes_))+0.5)
self.ax.set_xticklabels(['precision', 'recall', 'f1-score'], rotation=45)
self.ax.set_yticklabels(self.classes_)
- # Set the labels for the two axes
- self.ax.set_ylabel('Classes')
- self.ax.set_xlabel('Measures')
+ plt.tight_layout()
def classification_report(model, X, y=None, ax=None, classes=None, **kwargs):
diff --git a/yellowbrick/classifier/confusion_matrix.py b/yellowbrick/classifier/confusion_matrix.py
--- a/yellowbrick/classifier/confusion_matrix.py
+++ b/yellowbrick/classifier/confusion_matrix.py
@@ -20,13 +20,14 @@
import warnings
import numpy as np
-from sklearn.metrics import confusion_matrix
-
from ..utils import div_safe
from ..style import find_text_color
from ..style.palettes import color_sequence
from .base import ClassificationScoreVisualizer
+from sklearn.model_selection import train_test_split
+from sklearn.metrics import confusion_matrix as confusion_matrix_metric
+
##########################################################################
## ConfusionMatrix
@@ -168,11 +169,11 @@ def score(self, X, y, **kwargs):
y_pred = self.label_encoder.inverse_transform(y_pred)
except AttributeError:
# if a mapping is passed to class apply it here.
- y = [self.label_encoder[x] for x in y]
- y_pred = [self.label_encoder[x] for x in y_pred]
+ y = np.array([self.label_encoder[x] for x in y])
+ y_pred = np.array([self.label_encoder[x] for x in y_pred])
# Compute the confusion matrix and class counts
- self.confusion_matrix_ = confusion_matrix(
+ self.confusion_matrix_ = confusion_matrix_metric(
y, y_pred, labels=self.classes_, sample_weight=self.sample_weight
)
self.class_counts_ = self.class_counts(y)
@@ -271,3 +272,95 @@ def finalize(self, **kwargs):
self.set_title('{} Confusion Matrix'.format(self.name))
self.ax.set_ylabel('True Class')
self.ax.set_xlabel('Predicted Class')
+
+
+##########################################################################
+## Quick Method
+##########################################################################
+
+
+def confusion_matrix(model, X, y, ax=None, classes=None, sample_weight=None,
+ percent=False, label_encoder=None, cmap='YlOrRd',
+ fontsize=None, **kwargs):
+ """Quick method:
+
+ Creates a heatmap visualization of the sklearn.metrics.confusion_matrix().
+ A confusion matrix shows each combination of the true and predicted
+ classes for a test data set.
+
+ The default color map uses a yellow/orange/red color scale. The user can
+ choose between displaying values as the percent of true (cell value
+ divided by sum of row) or as direct counts. If percent of true mode is
+ selected, 100% accurate predictions are highlighted in green.
+
+ Requires a classification model.
+
+ Parameters
+ ----------
+ model : estimator
+ Must be a classifier, otherwise raises YellowbrickTypeError
+
+ X : ndarray or DataFrame of shape n x m
+ A matrix of n instances with m features.
+
+ y : ndarray or Series of length n
+ An array or series of target or class values.
+
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
+
+ sample_weight: array-like of shape = [n_samples], optional
+ Passed to ``confusion_matrix`` to weight the samples.
+
+ percent: bool, default: False
+ Determines whether or not the confusion_matrix is displayed as counts
+ or as a percent of true predictions. Note, if specifying a subset of
+ classes, percent should be set to False or inaccurate figures will be
+ displayed.
+
+ classes : list, default: None
+ a list of class names to use in the confusion_matrix.
+ This is passed to the ``labels`` parameter of
+ ``sklearn.metrics.confusion_matrix()``, and follows the behaviour
+ indicated by that function. It may be used to reorder or select a
+ subset of labels. If None, classes that appear at least once in
+ ``y_true`` or ``y_pred`` are used in sorted order.
+
+ label_encoder : dict or LabelEncoder, default: None
+ When specifying the ``classes`` argument, the input to ``fit()``
+ and ``score()`` must match the expected labels. If the ``X`` and ``y``
+ datasets have been encoded prior to training and the labels must be
+ preserved for the visualization, use this argument to provide a
+ mapping from the encoded class to the correct label. Because typically
+ a Scikit-Learn ``LabelEncoder`` is used to perform this operation, you
+ may provide it directly to the class to utilize its fitted encoding.
+
+ cmap : string, default: ``'YlOrRd'``
+ Specify a colormap to define the heatmap of the predicted class
+ against the actual class in the confusion matrix.
+
+ fontsize : int, default: None
+ Specify the fontsize of the text in the grid and labels to make the
+ matrix a bit easier to read. Uses rcParams font size by default.
+
+ Returns
+ -------
+ ax : matplotlib axes
+ Returns the axes that the classification report was drawn on.
+ """
+ # Instantiate the visualizer
+ visualizer = ConfusionMatrix(
+ model, ax, classes, sample_weight, percent,
+ label_encoder, cmap, fontsize, **kwargs
+ )
+
+ # Create the train and test splits
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+
+ # Fit and transform the visualizer (calls draw)
+ visualizer.fit(X_train, y_train, **kwargs)
+ visualizer.score(X_test, y_test)
+
+ # Return the axes object on the visualizer
+ return visualizer.ax
| diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png b/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_class_report.png b/tests/baseline_images/test_classifier/test_classification_report/test_class_report.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_classifier/test_classification_report/test_class_report.png and /dev/null differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png b/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png b/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png b/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png differ
diff --git a/tests/test_classifier/test_classification_report.py b/tests/test_classifier/test_classification_report.py
--- a/tests/test_classifier/test_classification_report.py
+++ b/tests/test_classifier/test_classification_report.py
@@ -1,36 +1,212 @@
+# tests.test_classifier.test_classification_report
+# Tests for the classification report visualizer
+#
+# Author: Rebecca Bilbro <[email protected]>
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Sun Mar 18 16:57:27 2018 -0400
+#
+# ID: test_classification_report.py [] [email protected] $
+
+"""
+Tests for the classification report visualizer
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import pytest
+import yellowbrick as yb
+import matplotlib.pyplot as plt
+
+from collections import namedtuple
from yellowbrick.classifier.classification_report import *
+
from tests.base import VisualTestCase
+from tests.dataset import DatasetMixin
from sklearn.svm import LinearSVC
+from sklearn.naive_bayes import GaussianNB
+from sklearn.tree import DecisionTreeClassifier
+from sklearn.datasets import make_classification
+from sklearn.model_selection import train_test_split as tts
+from sklearn.linear_model import LassoCV, LogisticRegression
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
##########################################################################
-## Data
+## Fixtures
##########################################################################
-X = np.array(
- [[ 2.318, 2.727, 4.260, 7.212, 4.792],
- [ 2.315, 2.726, 4.295, 7.140, 4.783,],
- [ 2.315, 2.724, 4.260, 7.135, 4.779,],
- [ 2.110, 3.609, 4.330, 7.985, 5.595,],
- [ 2.110, 3.626, 4.330, 8.203, 5.621,],
- [ 2.110, 3.620, 4.470, 8.210, 5.612,]]
+# Helpers for fixtures
+Dataset = namedtuple('Dataset', 'X,y')
+Split = namedtuple('Split', 'train,test')
+
+
[email protected](scope='class')
+def binary(request):
+ """
+ Creates a random binary classification dataset fixture
+ """
+ X, y = make_classification(
+ n_samples=500, n_features=20, n_informative=8, n_redundant=2,
+ n_classes=2, n_clusters_per_class=3, random_state=87
+ )
+
+ X_train, X_test, y_train, y_test = tts(
+ X, y, test_size=0.2, random_state=93
)
-y = np.array([1, 1, 0, 1, 0, 0])
+ dataset = Dataset(Split(X_train, X_test), Split(y_train, y_test))
+ request.cls.binary = dataset
+
+
[email protected](scope='class')
+def multiclass(request):
+ """
+ Creates a random multiclass classification dataset fixture
+ """
+ X, y = make_classification(
+ n_samples=500, n_features=20, n_informative=8, n_redundant=2,
+ n_classes=6, n_clusters_per_class=3, random_state=87
+ )
+
+ X_train, X_test, y_train, y_test = tts(
+ X, y, test_size=0.2, random_state=93
+ )
+
+ dataset = Dataset(Split(X_train, X_test), Split(y_train, y_test))
+ request.cls.multiclass = dataset
+
+
##########################################################################
## Test for Classification Report
##########################################################################
-class ClassificationReportTests(VisualTestCase):
[email protected]("binary", "multiclass")
+class ClassificationReportTests(VisualTestCase, DatasetMixin):
+ """
+ ClassificationReport visualizer tests
+ """
+
+ def test_binary_class_report(self):
+ """
+ Correctly generates a report for binary classification with LinearSVC
+ """
+ _, ax = plt.subplots()
+
+ viz = ClassificationReport(LinearSVC(), ax=ax)
+ viz.fit(self.binary.X.train, self.binary.y.train)
+ viz.score(self.binary.X.test, self.binary.y.test)
- def test_class_report(self):
+ self.assert_images_similar(viz)
+
+ assert viz.scores_ == {
+ 'precision': {0: 0.7446808510638298, 1: 0.8490566037735849},
+ 'recall': {0: 0.813953488372093, 1: 0.7894736842105263},
+ 'f1': {0: 0.7777777777777778, 1: 0.8181818181818182}
+ }
+
+ def test_multiclass_class_report(self):
"""
- Assert no errors occur during classification report integration
+ Correctly generates report for multi-class with LogisticRegression
+ """
+ _, ax = plt.subplots()
+
+ viz = ClassificationReport(LogisticRegression(random_state=12), ax=ax)
+ viz.fit(self.multiclass.X.train, self.multiclass.y.train)
+ viz.score(self.multiclass.X.test, self.multiclass.y.test)
+
+ self.assert_images_similar(viz)
+
+ assert viz.scores_ == {
+ 'precision': {
+ 0: 0.5333333333333333, 1: 0.5, 2: 0.45,
+ 3: 0.4, 4: 0.4, 5: 0.5882352941176471
+ }, 'recall': {
+ 0: 0.42105263157894735, 1: 0.5625, 2: 0.6428571428571429,
+ 3: 0.3157894736842105, 4: 0.375, 5: 0.625
+ }, 'f1': {
+ 0: 0.47058823529411764, 1: 0.5294117647058824,
+ 2: 0.5294117647058824, 3: 0.35294117647058826,
+ 4: 0.38709677419354843, 5: 0.6060606060606061
+ }}
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_pandas_integration(self):
"""
- model = LinearSVC()
- model.fit(X,y)
- visualizer = ClassificationReport(model, classes=["A", "B"])
- visualizer.score(X,y)
- self.assert_images_similar(visualizer)
+ Test with Pandas DataFrame and Series input
+ """
+ _, ax = plt.subplots()
+
+ # Load the occupancy dataset from fixtures
+ data = self.load_data('occupancy')
+ target = 'occupancy'
+ features = [
+ "temperature", "relative_humidity", "light", "C02", "humidity"
+ ]
+
+ # Create instances and target
+ X = pd.DataFrame(data[features])
+ y = pd.Series(data[target].astype(int))
+
+ # Create train/test splits
+ splits = tts(X, y, test_size=0.2, random_state=4512)
+ X_train, X_test, y_train, y_test = splits
+
+ classes = ['unoccupied', 'occupied']
+
+ # Create classification report
+ model = GaussianNB()
+ viz = ClassificationReport(model, ax=ax, classes=classes)
+ viz.fit(X_train, y_train)
+ viz.score(X_test, y_test)
+
+ self.assert_images_similar(viz, tol=0.1)
+
+ # Ensure correct classification scores under the hood
+ assert viz.scores_ == {
+ 'precision': {
+ 'unoccupied': 0.999347471451876,
+ 'occupied': 0.8825214899713467
+ }, 'recall': {
+ 'unoccupied': 0.9613935969868174,
+ 'occupied': 0.9978401727861771
+ }, 'f1': {
+ 'unoccupied': 0.9800031994880819,
+ 'occupied': 0.9366447034972124
+ }}
+
+ @pytest.mark.skip(reason="requires random state in quick method")
+ def test_quick_method(self):
+ """
+ Test the quick method with a random dataset
+ """
+ X, y = make_classification(
+ n_samples=400, n_features=20, n_informative=8, n_redundant=8,
+ n_classes=2, n_clusters_per_class=4, random_state=27
+ )
+
+ _, ax = plt.subplots()
+ classification_report(DecisionTreeClassifier(), X, y, ax=ax)
+
+ self.assert_images_similar(ax=ax)
+
+ def test_isclassifier(self):
+ """
+ Assert that only classifiers can be used with the visualizer.
+ """
+
+ message = (
+ 'This estimator is not a classifier; '
+ 'try a regression or clustering score visualizer instead!'
+ )
+
+ with self.assertRaisesRegexp(yb.exceptions.YellowbrickError, message):
+ ClassificationReport(LassoCV())
diff --git a/tests/test_classifier/test_confusion_matrix.py b/tests/test_classifier/test_confusion_matrix.py
--- a/tests/test_classifier/test_confusion_matrix.py
+++ b/tests/test_classifier/test_confusion_matrix.py
@@ -32,6 +32,8 @@
from sklearn.datasets import load_digits
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import LabelEncoder
+from sklearn.tree import DecisionTreeClassifier
+from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import PassiveAggressiveRegressor
from sklearn.model_selection import train_test_split as tts
@@ -41,6 +43,9 @@
except ImportError:
pd = None
+##########################################################################
+## Fixtures
+##########################################################################
# Helpers for fixtures
Dataset = namedtuple('Dataset', 'X,y')
@@ -64,6 +69,10 @@ def digits(request):
)
+##########################################################################
+## Test Cases
+##########################################################################
+
@pytest.mark.usefixtures("digits")
class ConfusionMatrixTests(VisualTestCase, DatasetMixin):
"""
@@ -310,6 +319,20 @@ def test_pandas_integration(self):
[ 1, 985]
]))
+ @pytest.mark.skip(reason="requires random state in quick method")
+ def test_quick_method(self):
+ """
+ Test the quick method with a random dataset
+ """
+ X, y = make_classification(
+ n_samples=400, n_features=20, n_informative=8, n_redundant=8,
+ n_classes=2, n_clusters_per_class=4, random_state=27
+ )
+
+ _, ax = plt.subplots()
+ confusion_matrix(DecisionTreeClassifier(), X, y, ax=ax)
+
+ self.assert_images_similar(ax=ax)
def test_isclassifier(self):
"""
| Lines in Classification Report
First, this is an awesome module!!
I created a classification report (see attached) and the white lines make reading the numbers impossible.

I wanted to ask if there is a way to either lighten or remove these lines so that you can read the various scores?
Thanks!
| @cgivre thanks for the note, and for using Yellowbrick! You're right the numbers are tough to read with those lines, we'll definitely have to explore a fix on our end. Perhaps we can add a box behind the text with some opacity that makes it easier to read. I know that we did do a dark/light text algorithm so that we had white text on a dark background and vice versa. Hopefully we can use that to make things more readable.
In the meantime you can get access to the gridlines through the `ax` property of the visualizer, and modify the color yourself:
```python
model = ClassificationReport(RandomForestClassifier())
model.fit(X_train, y_train)
model.score(X_test, y_test)
model.ax.grid(color="#666666", linestyle='--')
model.poof()
```
Which results in something like:

When I've done these sorts of plots in the past I've always wondered if I could move the grid lines to sit _between_ the grid items to form box edges, rather than running through the centre of the cells. This would turn the display into discrete cells. Might that be worth exploring?
Definitely worth exploring, but not 100% sure how to. @NealHumphrey -- you were taking a look at using the `pcolormeshgrid` instead of `imshow`, is it possible with this method?
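A minimal sketch of the idea (not Yellowbrick's implementation): `pcolormesh` draws its edges on the cell borders rather than through the cell centres, so the white lines no longer cross the numbers. The `scores` grid below is dummy data purely for illustration.
```python
# Sketch only: a 2x3 grid of made-up scores rendered as discrete cells.
import numpy as np
import matplotlib.pyplot as plt

scores = np.array([[0.92, 0.88, 0.90],
                   [0.75, 0.81, 0.78]])     # rows = classes, cols = precision/recall/f1

fig, ax = plt.subplots()
mesh = ax.pcolormesh(scores, cmap='YlOrRd', vmin=0, vmax=1,
                     edgecolors='w', linewidth=2)    # white lines sit on the borders
for (row, col), value in np.ndenumerate(scores):
    ax.text(col + 0.5, row + 0.5, "{:0.3f}".format(value),
            ha='center', va='center')                # labels sit in the cell centres
fig.colorbar(mesh, ax=ax)
plt.show()
```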
@bbengfort and @ianozsvald - yes, it should be the default result if the classification report is converted to a `pcolormeshgrid`. You can see that the lines are outside the grids on the confusion matrix in these screenshots: https://github.com/DistrictDataLabs/yellowbrick/pull/144 | 2018-03-18T21:44:06 |
DistrictDataLabs/yellowbrick | 371 | DistrictDataLabs__yellowbrick-371 | [
"370"
] | 203cc2d6ad564fc7e4d2a4fcf40600da03bc3f8b | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -19,6 +19,8 @@
##########################################################################
import time
+import numpy as np
+import scipy.sparse as sp
from .base import ClusteringScoreVisualizer
from ..exceptions import YellowbrickValueError
@@ -83,8 +85,16 @@ def distortion_score(X, labels, metric='euclidean'):
# Compute the center of these instances
center = instances.mean(axis=0)
+ # NOTE: csc_matrix and csr_matrix mean returns a 2D array, numpy.mean
+ # returns an array of 1 dimension less than the input. We expect
+ # instances to be a 2D array, therefore to do pairwise computation we
+ # require center to be a 2D array with a single row (the center).
+ # See #370 for more detail.
+ if not sp.issparse(instances):
+ center = np.array([center])
+
# Compute the square distances from the instances to the center
- distances = pairwise_distances(instances, [center], metric=metric)
+ distances = pairwise_distances(instances, center, metric=metric)
distances = distances ** 2
# Add the mean square distance to the distortion
| diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -18,19 +18,27 @@
##########################################################################
import pytest
-import unittest
import numpy as np
import matplotlib.pyplot as plt
from ..base import VisualTestCase
from ..dataset import DatasetMixin
+from scipy.sparse import csc_matrix, csr_matrix
+from numpy.testing.utils import assert_array_almost_equal
+
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, MiniBatchKMeans
+from sklearn.feature_extraction.text import TfidfVectorizer
+
from yellowbrick.cluster.elbow import distortion_score
from yellowbrick.cluster.elbow import KElbowVisualizer
from yellowbrick.exceptions import YellowbrickValueError
-from numpy.testing.utils import assert_array_almost_equal
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
##########################################################################
@@ -61,7 +69,7 @@
y = np.array([0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0])
-class KElbowHelperTests(unittest.TestCase):
+class TestKElbowHelper(object):
"""
Helper functions for K-Elbow Visualizer
"""
@@ -71,19 +79,40 @@ def test_distortion_score(self):
Test the distortion score metric function
"""
score = distortion_score(X, y)
- self.assertEqual(score, 7.6777850157143783)
+ assert score == 7.6777850157143783
+
+ @pytest.mark.parametrize("Xs", [
+ csc_matrix(X), csr_matrix(X),
+ ], ids=["csc", "csr"])
+ def test_distortion_score_sparse_matrix_input(self, Xs):
+ """
+ Test the distortion score metric on a sparse array
+ """
+ score = distortion_score(Xs, y)
+ assert score == pytest.approx(7.6777850157143783)
+
+ @pytest.mark.skipif(pd is None, reason="pandas is required")
+ def test_distortion_score_pandas_input(self):
+ """
+ Test the distortion score metric on pandas DataFrame and Series
+ """
+ df = pd.DataFrame(X)
+ s = pd.Series(y)
+
+ score = distortion_score(df, s)
+ assert score == pytest.approx(7.6777850157143783)
##########################################################################
## KElbowVisualizer Test Cases
##########################################################################
-class KElbowVisualizerTests(VisualTestCase, DatasetMixin):
+class TestKElbowVisualizer(VisualTestCase, DatasetMixin):
"""
K-Elbow Visualizer Tests
"""
- @pytest.mark.skip("images not close due to timing lines")
+ @pytest.mark.xfail(reason="images not close due to timing lines")
def test_integrated_kmeans_elbow(self):
"""
Test no exceptions for kmeans k-elbow visualizer on blobs dataset
@@ -92,12 +121,12 @@ def test_integrated_kmeans_elbow(self):
# Generate a blobs data set
X,y = make_blobs(
- n_samples=1000, n_features=12, centers=6, shuffle=True, random_state=42
+ n_samples=1000, n_features=12, centers=6,
+ shuffle=True, random_state=42
)
try:
- fig = plt.figure()
- ax = fig.add_subplot()
+ _, ax = plt.subplots()
visualizer = KElbowVisualizer(KMeans(random_state=42), k=4, ax=ax)
visualizer.fit(X)
@@ -105,9 +134,9 @@ def test_integrated_kmeans_elbow(self):
self.assert_images_similar(visualizer)
except Exception as e:
- self.fail("error during k-elbow: {}".format(e))
+ pytest.fail("error during k-elbow: {}".format(e))
- @pytest.mark.skip("images not close due to timing lines")
+ @pytest.mark.xfail(reason="images not close due to timing lines")
def test_integrated_mini_batch_kmeans_elbow(self):
"""
Test no exceptions for mini-batch kmeans k-elbow visualizer
@@ -120,37 +149,57 @@ def test_integrated_mini_batch_kmeans_elbow(self):
)
try:
- fig = plt.figure()
- ax = fig.add_subplot()
+ _, ax = plt.subplots()
- visualizer = KElbowVisualizer(MiniBatchKMeans(random_state=42), k=4, ax=ax)
+ visualizer = KElbowVisualizer(
+ MiniBatchKMeans(random_state=42), k=4, ax=ax
+ )
visualizer.fit(X)
visualizer.poof()
self.assert_images_similar(visualizer)
except Exception as e:
- self.fail("error during k-elbow: {}".format(e))
+ pytest.fail("error during k-elbow: {}".format(e))
+
+ @pytest.mark.skip(reason="takes over 20 seconds to run")
+ def test_topic_modeling_k_means(self):
+ """
+ Test topic modeling k-means on the hobbies corpus
+ """
+ corpus = self.load_corpus("hobbies")
+
+ tfidf = TfidfVectorizer()
+ docs = tfidf.fit_transform(corpus.data)
+ visualizer = KElbowVisualizer(KMeans(), k=(4, 8))
+
+ visualizer.fit(docs)
+ visualizer.poof()
+
+ self.assert_images_similar(visualizer)
def test_invalid_k(self):
"""
Assert that invalid values of K raise exceptions
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
KElbowVisualizer(KMeans(), k=(1,2,3,4,5))
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
KElbowVisualizer(KMeans(), k="foo")
def test_distortion_metric(self):
"""
Test the distortion metric of the k-elbow visualizer
"""
- visualizer = KElbowVisualizer(KMeans(random_state=0), k=5, metric="distortion", timings=False)
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=0), k=5, metric="distortion", timings=False
+ )
visualizer.fit(X)
expected = np.array([ 7.677785, 8.364319, 8.893634, 8.013021])
- self.assertEqual(len(visualizer.k_scores_), 4)
+ assert len(visualizer.k_scores_) == 4
+
visualizer.poof()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -159,11 +208,14 @@ def test_silhouette_metric(self):
"""
Test the silhouette metric of the k-elbow visualizer
"""
- visualizer = KElbowVisualizer(KMeans(random_state=0), k=5, metric="silhouette", timings=False)
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=0), k=5, metric="silhouette", timings=False
+ )
visualizer.fit(X)
expected = np.array([ 0.691636, 0.456646, 0.255174, 0.239842])
- self.assertEqual(len(visualizer.k_scores_), 4)
+ assert len(visualizer.k_scores_) == 4
+
visualizer.poof()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -172,15 +224,19 @@ def test_calinski_harabaz_metric(self):
"""
Test the calinski-harabaz metric of the k-elbow visualizer
"""
- visualizer = KElbowVisualizer(KMeans(random_state=0), k=5, metric="calinski_harabaz", timings=False)
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=0), k=5,
+ metric="calinski_harabaz", timings=False
+ )
visualizer.fit(X)
+ assert len(visualizer.k_scores_) == 4
expected = np.array([
81.662726256035683, 50.992378259195554,
40.952179227847012, 35.939494
])
- self.assertEqual(len(visualizer.k_scores_), 4)
+
visualizer.poof()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -189,28 +245,33 @@ def test_bad_metric(self):
"""
Assert KElbow raises an exception when a bad metric is supplied
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
KElbowVisualizer(KMeans(), k=5, metric="foo")
def test_timings(self):
"""
Test the twinx double axes with k-elbow timings
"""
- visualizer = KElbowVisualizer(KMeans(random_state=0), k=5, timings=True)
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=0), k=5, timings=True
+ )
visualizer.fit(X)
# Check that we kept track of time
- self.assertEqual(len(visualizer.k_timers_), 4)
- self.assertTrue(all([t > 0 for t in visualizer.k_timers_]))
+ assert len(visualizer.k_timers_) == 4
+ assert all([t > 0 for t in visualizer.k_timers_])
# Check that we plotted time on a twinx
- self.assertTrue(hasattr(visualizer, "axes"))
- self.assertEqual(len(visualizer.axes), 2)
+ assert hasattr(visualizer, "axes")
+ assert len(visualizer.axes) == 2
# delete the timings axes and
# overwrite k_timers_, k_values_ for image similarity Tests
visualizer.axes[1].remove()
- visualizer.k_timers_ = [0.01084589958190918, 0.011144161224365234, 0.017028093338012695, 0.010634183883666992]
+ visualizer.k_timers_ = [
+ 0.01084589958190918, 0.011144161224365234,
+ 0.017028093338012695, 0.010634183883666992
+ ]
visualizer.k_values_ = [2, 3, 4, 5]
# call draw again which is normally called in fit
| Error from KElbowVisualizer's distortion_score method
`KElbowVisualizer` is generating a `ValueError` from Sklearn.
### Issue
Using the Yellowbrick `hobbies` corpus and vectorizing with TFIDF, the KElbowVisualizer is generating an error from Sklearn: `ValueError: Found array with dim 3. check_pairwise_arrays expected <= 2`. The `distortion_score` method appears to be causing the error when calling Sklearn's `pairwise_distances` method.
### Code
```
corpus = load_corpus('hobbies')
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus)
visualizer = KElbowVisualizer(KMeans(), k=(4, 8))
visualizer.fit(docs)
visualizer.poof()
```
### Error
```
Traceback (most recent call last):
File "elbows.py", line 82, in <module>
visualizer.fit(docs)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/cluster/elbow.py", line 245, in fit
self.scoring_metric(X, self.estimator.labels_)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/cluster/elbow.py", line 87, in distortion_score
distances = pairwise_distances(instances, [center], metric=metric)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/metrics/pairwise.py", line 1247, in pairwise_distances
return _parallel_pairwise(X, Y, func, n_jobs, **kwds)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/metrics/pairwise.py", line 1090, in _parallel_pairwise
return func(X, Y, **kwds)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/metrics/pairwise.py", line 223, in euclidean_distances
X, Y = check_pairwise_arrays(X, Y)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/metrics/pairwise.py", line 112, in check_pairwise_arrays
warn_on_dtype=warn_on_dtype, estimator=estimator)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/utils/validation.py", line 451, in check_array
% (array.ndim, estimator_name))
ValueError: Found array with dim 3. check_pairwise_arrays expected <= 2.
```
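A minimal, self-contained illustration of where the 3-dimensional array comes from (constructed here for this write-up, not taken from the report): the `mean` of a SciPy sparse matrix is already 2D, so wrapping it in a list before calling `pairwise_distances` produces a 3D array.
```python
# Reproduces the root cause in isolation with a tiny random sparse matrix.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.metrics import pairwise_distances

X = csr_matrix(np.random.rand(5, 3))
center = X.mean(axis=0)              # numpy.matrix of shape (1, 3), not (3,)

print(np.asarray([center]).ndim)     # 3 -- one list level too many
pairwise_distances(X, [center])      # ValueError: Found array with dim 3
```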
| I've discovered the source of the problem but not the cause.
In [`distortion_score`](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/cluster/elbow.py#L84) we take the mean of the instances along `axis=0` to find the centroid. We expect `instances` to be a 2D array, so the mean should be a 1D array. In order to pass this to the `pairwise_distances` function, we then convert the center into a single 2D point.
For some reason (still undetermined), in this particular case the mean is returning a 2D array with only one row in it. It happens that the input is a sparse matrix rather than a dense one; however, the mean should still be a 1D array. Some testing:
```python
import numpy as np
from scipy.sparse import csc_matrix

def amake(ndims=1, s=10, sparse=False):
    cube = [s]*ndims
    a = np.random.normal(loc=10.0, scale=1.8, size=cube)
    if sparse:
        return csc_matrix(a)
    return a

for i in range(1,6):
    mshape = amake(ndims=i).mean(axis=0).ndim
    print("{}D -> {}D mean".format(i, mshape))
```
output:
```
1D -> 0D mean
2D -> 1D mean
3D -> 2D mean
4D -> 3D mean
5D -> 4D mean
```
So I'm a bit flummoxed, but still looking into it.
**UPDATE**
Apparently the `mean` of a sparse matrix always returns a 2D matrix, for any input dimension, although sparse matrices won't accept input with more than 2 dimensions ...
```
for i in range(0, 3):
    mshape = amake(ndims=i, sparse=True).mean(axis=0).ndim
    print("{}D sparse -> {}D mean".format(i, mshape))
```
output:
```
0D sparse -> 2D mean
1D sparse -> 2D mean
2D sparse -> 2D mean
``` | 2018-03-26T18:42:25 |
DistrictDataLabs/yellowbrick | 382 | DistrictDataLabs__yellowbrick-382 | [
"268"
] | 3fac943f3d197ab7bbc297f9209cc39a4c345416 | diff --git a/docs/api/features/rfecv.py b/docs/api/features/rfecv.py
new file mode 100644
--- /dev/null
+++ b/docs/api/features/rfecv.py
@@ -0,0 +1,50 @@
+#!/usr/bin/env python3
+# Generates RFECV visualizations for the documentation
+
+import os
+import pandas as pd
+import matplotlib.pyplot as plt
+
+from sklearn.svm import SVC
+from yellowbrick.features import RFECV
+from sklearn.datasets import make_classification
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.model_selection import StratifiedKFold
+
+CWD = os.path.dirname(__file__)
+DATA = os.path.join(CWD, "..", "..", "..", "examples", "data")
+IMAGES = os.path.join(CWD, "images")
+
+
+def rfecv_sklearn_example(image="rfecv_sklearn_example.png"):
+ X, y = make_classification(
+ n_samples=1000, n_features=25, n_informative=3, n_redundant=2,
+ n_repeated=0, n_classes=8, n_clusters_per_class=1, random_state=0
+ )
+
+ _, ax = plt.subplots()
+
+ oz = RFECV(SVC(kernel='linear', C=1), ax=ax)
+ oz.fit(X, y)
+ oz.poof(outpath=os.path.join(IMAGES, image))
+
+
+def rfecv_credit_example(image="rfecv_credit.png"):
+ data = pd.read_csv(os.path.join(DATA, "credit", "credit.csv"))
+
+ target = "default"
+ features = [col for col in data.columns if col != target]
+
+ X = data[features]
+ y = data[target]
+
+ _, ax = plt.subplots()
+ cv = StratifiedKFold(5)
+ oz = RFECV(RandomForestClassifier(), ax=ax, cv=cv, scoring='f1_weighted')
+ oz.fit(X, y)
+ oz.poof(outpath=os.path.join(IMAGES, image))
+
+
+if __name__ == '__main__':
+ rfecv_sklearn_example()
+ rfecv_credit_example()
diff --git a/yellowbrick/features/__init__.py b/yellowbrick/features/__init__.py
--- a/yellowbrick/features/__init__.py
+++ b/yellowbrick/features/__init__.py
@@ -25,3 +25,4 @@
from .jointplot import JointPlotVisualizer
from .pca import PCADecomposition, pca_decomposition
from .importances import FeatureImportances, feature_importances
+from .rfecv import RFECV, rfecv
diff --git a/yellowbrick/features/rfecv.py b/yellowbrick/features/rfecv.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/features/rfecv.py
@@ -0,0 +1,331 @@
+# yellowbrick.features.rfecv
+# Visualize the number of features selected with recursive feature elimination
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Tue Apr 03 17:31:37 2018 -0400
+#
+# ID: rfecv.py [] [email protected] $
+
+"""
+Visualize the number of features selected using recursive feature elimination
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import numpy as np
+
+from yellowbrick.base import ModelVisualizer
+from yellowbrick.exceptions import YellowbrickValueError
+
+from sklearn.utils import check_X_y
+from sklearn.feature_selection import RFE
+from sklearn.model_selection import cross_val_score
+
+
+##########################################################################
+## Recursive Feature Elimination
+##########################################################################
+
+class RFECV(ModelVisualizer):
+ """
+ Recursive Feature Elimination, Cross-Validated (RFECV) feature selection.
+
+ Selects the best subset of features for the suplied estimator by removing
+ 0 to N features (where N is the number of features) using recursive
+ feature elimination, then selecting the best subset based on the
+ cross-validation score of the model. Recursive feature elimination
+ eliminates n features from a model by fitting the model multiple times and
+ at each step, removing the weakest features, determined by either the
+ ``coef_`` or ``feature_importances_`` attribute of the fitted model.
+
+ The visualization plots the score relative to each subset and shows trends
+ in feature elimination. If the feature elimination CV score is flat, then
+ potentially there are not enough features in the model. An ideal curve is
+ when the score jumps from low to high as the number of features removed
+ increases, then slowly decreases again from the optimal number of
+ features.
+
+ Parameters
+ ----------
+ model : a scikit-learn estimator
+ An object that implements ``fit`` and provides information about the
+ relative importance of features with either a ``coef_`` or
+ ``feature_importances_`` attribute.
+
+ Note that the object is cloned for each validation.
+
+ ax : matplotlib.Axes object, optional
+ The axes object to plot the figure on.
+
+ step : int or float, optional (default=1)
+ If greater than or equal to 1, then step corresponds to the (integer)
+ number of features to remove at each iteration. If within (0.0, 1.0),
+ then step corresponds to the percentage (rounded down) of features to
+ remove at each iteration.
+
+ groups : array-like, with shape (n_samples,), optional
+ Group labels for the samples used while splitting the dataset into
+ train/test set.
+
+ cv : int, cross-validation generator or an iterable, optional
+ Determines the cross-validation splitting strategy.
+ Possible inputs for cv are:
+
+ - None, to use the default 3-fold cross-validation,
+ - integer, to specify the number of folds.
+ - An object to be used as a cross-validation generator.
+ - An iterable yielding train/test splits.
+
+ see the scikit-learn
+ `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible strategies that can be used here.
+
+ scoring : string, callable or None, optional, default: None
+ A string or scorer callable object / function with signature
+ ``scorer(estimator, X, y)``. See scikit-learn model evaluation
+ documentation for names of possible metrics.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Attributes
+ ----------
+ n_features_ : int
+ The number of features in the selected subset
+
+ support_ : array of shape [n_features]
+ A mask of the selected features
+
+ ranking_ : array of shape [n_features]
+ The feature ranking, such that ``ranking_[i]`` corresponds to the
+ ranked position of feature i. Selected features are assigned rank 1.
+
+ cv_scores_ : array of shape [n_subsets_of_features, n_splits]
+ The cross-validation scores for each subset of features and splits in
+ the cross-validation strategy.
+
+ rfe_estimator_ : sklearn.feature_selection.RFE
+ A fitted RFE estimator wrapping the original estimator. All estimator
+ functions such as ``predict()`` and ``score()`` are passed through to
+ this estimator (it rewraps the original model).
+
+ Notes
+ -----
+ This model wraps ``sklearn.feature_selection.RFE`` and not
+ ``sklearn.feature_selection.RFECV`` because access to the internals of the
+ CV and RFE estimators is required for the visualization. The visualizer
+ does take similar arguments, however it does not expose the same internal
+ attributes.
+
+ Additionally, the RFE model can be accessed via the ``rfe_estimator_``
+ attribute. Once fitted, the visualizer acts as a wrapper for this
+ estimator and not for the original model passed to the model. This way the
+ visualizer model can be used to make predictions.
+
+ .. caution:: This visualizer requires a model that has either a ``coef_``
+ or ``feature_importances_`` attribute when fitted.
+ """
+
+ def __init__(self, model, ax=None, step=1, groups=None, cv=None,
+ scoring=None, **kwargs):
+
+ # Initialize the model visualizer
+ super(RFECV, self).__init__(model, ax=ax, **kwargs)
+
+ # Set parameters
+ self.set_params(step=step, groups=groups, cv=cv, scoring=scoring)
+
+ def fit(self, X, y=None):
+ """
+ Fits the RFECV with the wrapped model to the specified data and draws
+ the rfecv curve with the optimal number of features found.
+
+ Parameters
+ ----------
+ X : array-like, shape (n_samples, n_features)
+ Training vector, where n_samples is the number of samples and
+ n_features is the number of features.
+
+ y : array-like, shape (n_samples) or (n_samples, n_features), optional
+ Target relative to X for classification or regression.
+
+ Returns
+ -------
+ self : instance
+ Returns the instance of the RFECV visualizer.
+ """
+ X, y = check_X_y(X, y, "csr")
+ n_features = X.shape[1]
+
+ # This check is kind of unnecessary since RFE will do it, but it's
+ # nice to get it out of the way ASAP and raise a meaningful error.
+ if 0.0 < self.step < 1.0:
+ step = int(max(1, self.step * n_features))
+ else:
+ step = int(self.step)
+
+ if step < 0:
+ raise YellowbrickValueError("step must be >0")
+
+ # Create the RFE model
+ rfe = RFE(self.estimator, step=step)
+ n_feature_subsets = np.arange(1, n_features+1)
+
+ # Create the cross validation params
+ # TODO: handle random state
+ cv_params = {
+ key: self.get_params()[key]
+ for key in ('groups', 'cv', 'scoring')
+ }
+
+ # Perform cross-validation for each feature subset
+ scores = []
+ for n_features_to_select in n_feature_subsets:
+ rfe.set_params(n_features_to_select=n_features_to_select)
+ scores.append(cross_val_score(rfe, X, y, **cv_params))
+
+ # Convert scores to array
+ self.cv_scores_ = np.array(scores)
+
+ # Find the best RFE model
+ bestidx = self.cv_scores_.mean(axis=1).argmax()
+ self.n_features_ = n_feature_subsets[bestidx]
+
+ # Fit the final RFE model for the number of features
+ self.rfe_estimator_ = rfe
+ self.rfe_estimator_.set_params(n_features_to_select=self.n_features_)
+ self.rfe_estimator_.fit(X, y)
+
+ # Rewrap the visualizer to use the rfe estimator
+ self._wrapped = self.rfe_estimator_
+
+ # Hoist the RFE params to the visualizer
+ self.support_ = self.rfe_estimator_.support_
+ self.ranking_ = self.rfe_estimator_.ranking_
+
+ self.draw()
+ return self
+
+ def draw(self, **kwargs):
+ """
+ Renders the rfecv curve.
+ """
+ # Compute the curves
+ x = np.arange(1, len(self.cv_scores_)+1)
+ means = self.cv_scores_.mean(axis=1)
+ sigmas = self.cv_scores_.std(axis=1)
+
+
+ # Plot one standard deviation above and below the mean
+ self.ax.fill_between(x, means - sigmas, means+sigmas, alpha=0.25)
+
+ # Plot the curve
+ self.ax.plot(x, means, 'o-')
+
+ # Plot the maximum number of features
+ self.ax.axvline(
+ self.n_features_, c='k', ls='--',
+ label="n_features = {}\nscore = {:0.3f}".format(
+ self.n_features_, self.cv_scores_.mean(axis=1).max()
+ )
+ )
+
+ return self.ax
+
+ def finalize(self, **kwargs):
+ """
+ Add the title, legend, and other visual final touches to the plot.
+ """
+ # Set the title of the figure
+ self.set_title('RFECV for {}'.format(self.name))
+
+ # Add the legend
+ self.ax.legend(frameon=True, loc='best')
+
+ # Set the axis labels
+ self.ax.set_xlabel('Number of Features Selected')
+ self.ax.set_ylabel('Score')
+
+
+##########################################################################
+## Quick Methods
+##########################################################################
+
+def rfecv(model, X, y, ax=None, step=1, groups=None, cv=None,
+ scoring=None, **kwargs):
+ """
+ Performs recursive feature elimination with cross-validation to determine
+ an optimal number of features for a model. Visualizes the feature subsets
+ with respect to the cross-validation score.
+
+ This helper function is a quick wrapper to utilize the RFECV visualizer
+ for one-off analysis.
+
+ Parameters
+ ----------
+ model : a scikit-learn estimator
+ An object that implements ``fit`` and provides information about the
+ relative importance of features with either a ``coef_`` or
+ ``feature_importances_`` attribute.
+
+ Note that the object is cloned for each validation.
+
+ X : array-like, shape (n_samples, n_features)
+ Training vector, where n_samples is the number of samples and
+ n_features is the number of features.
+
+ y : array-like, shape (n_samples) or (n_samples, n_features), optional
+ Target relative to X for classification or regression.
+
+ ax : matplotlib.Axes object, optional
+ The axes object to plot the figure on.
+
+ step : int or float, optional (default=1)
+ If greater than or equal to 1, then step corresponds to the (integer)
+ number of features to remove at each iteration. If within (0.0, 1.0),
+ then step corresponds to the percentage (rounded down) of features to
+ remove at each iteration.
+
+ groups : array-like, with shape (n_samples,), optional
+ Group labels for the samples used while splitting the dataset into
+ train/test set.
+
+ cv : int, cross-validation generator or an iterable, optional
+ Determines the cross-validation splitting strategy.
+ Possible inputs for cv are:
+
+ - None, to use the default 3-fold cross-validation,
+ - integer, to specify the number of folds.
+ - An object to be used as a cross-validation generator.
+ - An iterable yielding train/test splits.
+
+ see the scikit-learn
+ `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible strategies that can be used here.
+
+ scoring : string, callable or None, optional, default: None
+ A string or scorer callable object / function with signature
+ ``scorer(estimator, X, y)``. See scikit-learn model evaluation
+ documentation for names of possible metrics.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers. These arguments are
+ also passed to the `poof()` method, e.g. can pass a path to save the
+ figure to.
+
+ Returns
+ -------
+ ax : matplotlib axes
+ Returns the axes that the rfecv were drawn on.
+ """
+ # Initialize the visualizer
+ oz = RFECV(model, ax=ax, step=step, groups=groups, cv=cv, scoring=scoring)
+
+ # Fit and poof the visualizer
+ oz.fit(X, y)
+ oz.poof(**kwargs)
+ return oz.ax
| diff --git a/tests/baseline_images/test_features/test_rfecv/test_pandas_integration.png b/tests/baseline_images/test_features/test_rfecv/test_pandas_integration.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_rfecv/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_features/test_rfecv/test_quick_method.png b/tests/baseline_images/test_features/test_rfecv/test_quick_method.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_rfecv/test_quick_method.png differ
diff --git a/tests/baseline_images/test_features/test_rfecv/test_rfecv_classification.png b/tests/baseline_images/test_features/test_rfecv/test_rfecv_classification.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_rfecv/test_rfecv_classification.png differ
diff --git a/tests/test_features/test_rfecv.py b/tests/test_features/test_rfecv.py
new file mode 100644
--- /dev/null
+++ b/tests/test_features/test_rfecv.py
@@ -0,0 +1,156 @@
+# tests.test_features.test_rfecv
+# Tests for the RFECV visualizer
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Tue Apr 03 17:35:16 2018 -0400
+#
+# ID: test_rfecv.py [] [email protected] $
+
+"""
+Tests for the RFECV visualizer
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import pytest
+
+from tests.base import VisualTestCase
+from tests.dataset import DatasetMixin, Dataset
+
+from yellowbrick.features.rfecv import *
+from yellowbrick.exceptions import YellowbrickValueError
+
+from sklearn.svm import SVC
+from sklearn.model_selection import ShuffleSplit
+from sklearn.model_selection import StratifiedKFold
+from sklearn.datasets import make_classification
+from sklearn.linear_model import LogisticRegression
+from sklearn.ensemble import RandomForestClassifier
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
+try:
+ from unittest.mock import patch
+except ImportError:
+ from mock import patch
+
+
+##########################################################################
+## Fixtures
+##########################################################################
+
[email protected](scope="class")
+def dataset(request):
+ """
+ Creates a multiclass classification dataset fixture for RFECV
+ """
+ X, y = make_classification(
+ n_samples=600, n_features=15, n_informative=7, n_redundant=4,
+ n_repeated=0, n_classes=8, n_clusters_per_class=1, random_state=0
+ )
+
+ dataset = Dataset(X, y)
+ request.cls.dataset = dataset
+
+
+##########################################################################
+## Test Cases
+##########################################################################
+
[email protected]("dataset")
+class TestRFECV(VisualTestCase, DatasetMixin):
+ """
+ Test the RFECV visualizer
+ """
+
+ @patch.object(RFECV, 'draw')
+ def test_fit(self, mock_draw):
+ """
+ Assert that fit returns self and creates expected properties with NB
+ """
+ X, y = self.dataset
+ params = (
+ "n_features_", "support_", "ranking_",
+ "cv_scores_", "rfe_estimator_",
+ )
+
+ rf = RandomForestClassifier()
+ oz = RFECV(rf)
+ for param in params:
+ assert not hasattr(oz, param)
+
+ # Assert original estimator is wrapped
+ assert oz._wrapped is rf
+
+ assert oz.fit(X, y) is oz
+ mock_draw.assert_called_once()
+
+ for param in params:
+ assert hasattr(oz, param)
+
+ # Assert rfe estimator is now wrapped
+ assert oz._wrapped is not rf
+ assert oz._wrapped is oz.rfe_estimator_
+
+ def test_rfecv_classification(self):
+ """
+ Test image closeness on a classification dataset with an SVM
+ """
+ cv = ShuffleSplit(3, random_state=21)
+ oz = RFECV(SVC(kernel="linear", C=1), cv=cv)
+ oz.fit(self.dataset.X, self.dataset.y)
+ oz.poof()
+
+ self.assert_images_similar(oz)
+
+ @pytest.mark.filterwarnings('ignore:F-score is ill-defined')
+ def test_quick_method(self):
+ """
+ Test the recv quick method works with LogisticRegression
+ """
+ cv = ShuffleSplit(2, random_state=14)
+ model = LogisticRegression()
+ X, y = self.dataset
+
+ ax = rfecv(model, X, y, step=3, cv=cv, scoring='f1_weighted')
+
+ self.assert_images_similar(ax=ax)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_pandas_integration(self):
+ """
+ Test on a real dataset with pandas DataFrame and Series
+ """
+ df = self.load_pandas("occupancy")
+
+ target = "occupancy"
+ features = [
+ 'temperature', 'relative humidity', 'light', 'C02', 'humidity'
+ ]
+
+ X = df[features]
+ y = df[target]
+
+ assert isinstance(X, pd.DataFrame)
+ assert isinstance(y, pd.Series)
+
+ cv = StratifiedKFold(n_splits=4, random_state=32)
+ oz = RFECV(RandomForestClassifier(random_state=83), cv=cv)
+ oz.fit(X, y)
+ oz.poof()
+
+ self.assert_images_similar(oz)
+
+ def test_valid_step(self):
+ """
+ Test step hyperparam validation
+ """
+ # TODO: parametrize when unittest is removed
+ with pytest.raises(YellowbrickValueError):
+ oz = RFECV(SVC(kernel="lnear"), step=-1)
+ oz.fit(self.dataset.X, self.dataset.y)
diff --git a/tests/test_utils/test_wrapper.py b/tests/test_utils/test_wrapper.py
--- a/tests/test_utils/test_wrapper.py
+++ b/tests/test_utils/test_wrapper.py
@@ -17,11 +17,10 @@
## Imports
##########################################################################
-import unittest
-
from yellowbrick.base import Visualizer
from yellowbrick.utils.wrapper import *
from sklearn.naive_bayes import MultinomialNB
+from sklearn.naive_bayes import GaussianNB
try:
from unittest import mock
@@ -70,7 +69,10 @@ def foo(self, a, b):
## Wrapper Test Case
##########################################################################
-class WrapperTests(unittest.TestCase):
+class TestWrapper(object):
+ """
+ Test the object Wrapper mixin utility
+ """
def test_wrapper_methods(self):
"""
@@ -79,9 +81,9 @@ def test_wrapper_methods(self):
obj = WrappedEstimator()
# Assert that all the wrapper methods are called
- self.assertTrue(obj.draw())
- self.assertEqual(obj.foo(2,2), 4)
- self.assertIsNotNone(obj.estimator)
+ assert obj.draw()
+ assert obj.foo(2,2) == 4
+ assert obj.estimator is not None
def test_super_methods(self):
"""
@@ -96,7 +98,7 @@ def test_super_methods(self):
obj.poof()
obj.set_title()
- self.assertIsNone(obj.ax)
+ assert obj.ax is None
obj.fit.assert_called_once_with()
obj.finalize.assert_called_once_with()
obj.poof.assert_called_once_with()
@@ -104,7 +106,7 @@ def test_super_methods(self):
def test_wrapped_methods(self):
"""
- Assert that wrapped estimator methods are calle d
+ Assert that wrapped estimator methods are called
"""
obj = WrappedEstimator()
@@ -116,3 +118,21 @@ def test_wrapped_methods(self):
obj._wrapped.predict.assert_called_once_with()
obj._wrapped.predict_proba.assert_called_once_with()
obj._wrapped.score.assert_called_once_with()
+
+ def test_rewrap_object(self):
+ """
+ Test the ability to "rewrap" an object on demand
+ """
+ obj = WrappedEstimator()
+ old = obj._wrapped
+ new = mock.MagicMock(spec=GaussianNB())
+
+ obj.predict()
+ old.predict.assert_called_once()
+ new.assert_not_called()
+
+ # rewrap
+ obj._wrapped = new
+ obj.predict()
+ old.predict.assert_called_once()
+ new.predict.assert_called_once()
| Recursive Feature Elimination
This [recursive feature elimination heatmap](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html#sklearn.feature_selection.RFE) would be a cool feature selection visualization to implement in yb; possibly we can hoist some of #164 into a base class (see #58) for feature importance visualizers?
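For a sense of the intended workflow, here is a minimal sketch of how the resulting visualizer could be driven, based on the `RFECV` API added in the patch above (the dataset and estimator below are illustrative assumptions, not part of the original proposal):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from yellowbrick.features.rfecv import RFECV

# Illustrative data; any classification X, y works here
X, y = make_classification(n_samples=500, n_features=15, n_informative=5, random_state=42)

# The wrapped estimator must expose coef_ or feature_importances_
viz = RFECV(RandomForestClassifier(), step=1, cv=3)
viz.fit(X, y)   # runs recursive elimination with cross-validation and draws the score curve
viz.poof()      # finalizes the plot; the optimal n_features_ is marked with a vertical line
```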
| I had this idea that we could do a combo plot of [RFECV feature scores](http://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_with_cross_validation.html#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py) along with the heatmap. I was putting together some code to do this; however, when I got to the heatmap I discovered that it was specific to the `digits` dataset: the heatmap was just showing the importance of pixels in a 2D array.
We can/should still use the ranking attribute to show which features are ranked higher relative to the rest, but I'm not sure how to do that. In the meantime, I'll do the RFECV visualizer to get that in our library. | 2018-04-04T15:17:28 |
DistrictDataLabs/yellowbrick | 383 | DistrictDataLabs__yellowbrick-383 | [
"306"
] | f14451c251e5904d9ecebcd02746a6cc705359ab | diff --git a/yellowbrick/contrib/statsmodels/__init__.py b/yellowbrick/contrib/statsmodels/__init__.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/contrib/statsmodels/__init__.py
@@ -0,0 +1,17 @@
+# yellowbrick.contrib.statsmodels
+# Implements wrappers around the statsmodels library to use Yellowbrick with.
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Wed Apr 04 13:13:24 2018 -0400
+#
+# ID: __init__.py [] [email protected] $
+
+"""
+Implements wrappers around the statsmodels library to use Yellowbrick with.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+from .base import StatsModelsWrapper
diff --git a/yellowbrick/contrib/statsmodels/base.py b/yellowbrick/contrib/statsmodels/base.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/contrib/statsmodels/base.py
@@ -0,0 +1,84 @@
+# yellowbrick.contrib.statsmodels.base
+# A basic wrapper for statsmodels that emulates a scikit-learn estimator.
+#
+# Author: Ian Ozsvald
+# Created: Wed Jan 10 12:47:00 2018 -0500
+#
+# ID: base.py [] [email protected] $
+
+"""
+A basic wrapper for statsmodels that emulates a scikit-learn estimator.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+from sklearn.metrics import r2_score
+from sklearn.base import BaseEstimator
+
+
+##########################################################################
+## statsmodels Estimator
+##########################################################################
+
+class StatsModelsWrapper(BaseEstimator):
+ """
+ Wrap a statsmodels GLM as a sklearn (fake) BaseEstimator for YellowBrick.
+
+ Examples
+ --------
+ First import the external libraries and helper utilities:
+
+ >>> import statsmodels as sm
+ >>> from functools import partial
+
+ Instantiate a partial with the statsmodels API:
+
+ >>> glm_gaussian_partial = partial(sm.GLM, family=sm.families.Gaussian())
+ >>> sm_est = StatsModelsWrapper(glm_gaussian_partial)
+
+ Create a Yellowbrick visualizer to visualize prediction error:
+
+ >>> visualizer = PredictionError(sm_est)
+ >>> visualizer.fit(X_train, y_train)
+ >>> visualizer.score(X_test, y_test)
+
+ For statsmodels usage, calling .summary() etc:
+
+ >>> gaussian_model = glm_gaussian_partial(y_train, X_train)
+
+ Note
+ ----
+ .. note:: This wrapper is trivial, options and extra things like weights
+ are not currently handled.
+ """
+ def __init__(self, glm_partial, stated_estimator_type="regressor",
+ scorer=r2_score):
+
+ # YellowBrick checks the attribute to see if it is a
+ # regressor/clusterer/classifier
+ self._estimator_type = stated_estimator_type
+
+ # assume user passes in a partial which we can instantiate later
+ self.glm_partial = glm_partial
+
+ # needs a default scoring function, regression uses r^2 in sklearn
+ self.scorer = scorer
+
+ def fit(self, X, y):
+ """
+ Pretend to be a sklearn estimator, fit is called on creation
+ """
+
+ # note that GLM takes endog (y) and then exog (X):
+ # this is the reverse of sklearn's methods
+ self.glm_model = self.glm_partial(y, X)
+ self.glm_results = self.glm_model.fit()
+ return self
+
+ def predict(self, X):
+ return self.glm_results.predict(X)
+
+ def score(self, X, y):
+ return self.scorer(y, self.predict(X))
| diff --git a/tests/test_contrib/test_statsmodels/__init__.py b/tests/test_contrib/test_statsmodels/__init__.py
new file mode 100644
--- /dev/null
+++ b/tests/test_contrib/test_statsmodels/__init__.py
@@ -0,0 +1,15 @@
+# tests.test_contrib.test_statsmodels
+# Tests for the statsmodels contrib package
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Wed Apr 04 13:28:13 2018 -0400
+#
+# ID: __init__.py [] [email protected] $
+
+"""
+Tests for the statsmodels contrib package
+"""
+
+##########################################################################
+## Imports
+##########################################################################
diff --git a/tests/test_contrib/test_statsmodels/test_base.py b/tests/test_contrib/test_statsmodels/test_base.py
new file mode 100644
--- /dev/null
+++ b/tests/test_contrib/test_statsmodels/test_base.py
@@ -0,0 +1,46 @@
+# tests.test_contrib.test_statsmodels.test_base
+# Tests for the statsmodels estimator wrapper.
+#
+# Author: Ian Ozsvald
+# Created: Wed Jan 10 12:47:00 2018 -0500
+#
+# ID: test_base.py [] [email protected] $
+
+"""
+Tests for the statsmodels estimator wrapper.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import pytest
+import functools
+import numpy as np
+
+from yellowbrick.contrib.statsmodels import StatsModelsWrapper
+
+try:
+ import statsmodels as sm
+except ImportError:
+ sm = None
+
+
+##########################################################################
+## Test Cases
+##########################################################################
+
[email protected](sm is None, reason="test requires statsmodels")
+def test_stats_models_wrapper():
+ """
+ A trivial test of the StatsModelsWrapper
+ """
+ X = np.array([[1], [2], [3]])
+ y = np.array([1.1, 2, 3])
+
+ glm_gaussian = functools.partial(sm.GLM, family=sm.families.Gaussian())
+ sm_est = StatsModelsWrapper(glm_gaussian)
+
+ assert sm_est.fit(X, y) is sm_est, "fit did not return self"
+ assert sm_est.predict(X).shape == (3,)
+ assert 0.0 <= sm_est.score(X, y) <= 1.0
| YellowBrick not compatible with statsmodels estimators (and a trivial wrapper as a light fix)
I'm comparing GLMs in `statsmodels` and other models in `sklearn`. I wanted to use a consistent visual output but realised that `yellowbrick` only understands the `sklearn BaseEstimator`.
I've written a trivial wrapper (i.e. - it works for me, it may not do what you need!) around a `statsmodels GLM` that makes it 'look like' a `BaseEstimator`, in the eyes of `yellowbrick`. As a result I can create a `partial` GLM, push it into this wrapper, then call `yb.ResidualsPlot` and `yb.PredictionError`.
I present this code more as a placeholder in case someone else has the same need and they'd like to build a stronger solution.
This is tested using `yellowbrick 0.5`, `statsmodels 0.8`, `sklearn 0.19` in Python 3.6.
```
class StatsModelsWrapper(BaseEstimator):
"""Wrap a statsmodels GLM as a sklearn (fake) BaseEstimator for YellowBrick
    To use it, first create a partial::

        glm_gaussian_partial = functools.partial(sm.GLM, family=sm.families.Gaussian())
        #gaussian_model = glm_gaussian_partial(y_train, X_train) # for statsmodels usage, calling .summary() etc
        fake_est = StatsModelsWrapper(glm_gaussian_partial)
        visualizer = PredictionError(fake_est)
        #subsequent calls to e.g. fake_est.fit(X, y) will be passed through
NOTE that this wrapper is trivial, options and extra things like weights are not handled
"""
def __init__(self,
glm_partial,
stated_estimator_type="regressor",
scorer=sklearn.metrics.r2_score):
# YellowBrick checks the attribute to see if it is a regressor/clusterer/classifier
self._estimator_type = stated_estimator_type
# assume user passes in a partial which we can subsequently instantiate
self.glm_partial = glm_partial
# score() needs a default scoring function, regression uses r^2 in sklearn
self.scorer = scorer
def fit(self, X, y):
"""Pretend to be a sklearn estimator, fit in this case is called on creation"""
# note that GLM takes endog (y) and then exog (X), this is the reverse of sklearn's methods
self.glm_model = self.glm_partial(y, X)
self.glm_results = self.glm_model.fit()
def predict(self, X):
return self.glm_results.predict(X)
def score(self, X, y):
return self.scorer(y, self.predict(X))
def test_stats_models_wrapper():
"""A trivial test of the StatsModelsWrapper defined above"""
X = np.array([[1], [2], [3]])
y = np.array([1.1, 2, 3])
glm_gaussian_partial = functools.partial(sm.GLM, family=sm.families.Gaussian())
fake_est = StatsModelsWrapper(glm_gaussian_partial)
fake_est.fit(X, y)
fake_est.predict(X)
fake_est.score(X, y)
print("All the fake methods ran")
test_stats_models_wrapper()
```
| @ianozsvald thank you! this is a very helpful wrapper that you've contributed here. at our next maintainers meeting, we'll discuss how we might be able to potentially incorporate this in. Maybe in a `contrib` module to start
See #341 -- once we get the `contrib` module in place, we're planning on including this as well as possibly Keras visualizers in it; as well as some of our more prototype-y implementations. | 2018-04-04T17:49:50 |
DistrictDataLabs/yellowbrick | 407 | DistrictDataLabs__yellowbrick-407 | [
"361"
] | f8f96d20e69b4b9f86e4f986fdb9edb54448eea9 | diff --git a/yellowbrick/classifier/base.py b/yellowbrick/classifier/base.py
--- a/yellowbrick/classifier/base.py
+++ b/yellowbrick/classifier/base.py
@@ -110,6 +110,28 @@ def fit(self, X, y=None, **kwargs):
# Always return self from fit
return self
+
+ def score(self, X, y, **kwargs):
+ """
+ The score function is the hook for visual interaction. Pass in test
+ data and the visualizer will create predictions on the data and
+ evaluate them with respect to the test values. The evaluation will
+ then be passed to draw() and the result of the estimator score will
+ be returned.
+ Parameters
+ ----------
+ X : array-like
+ X (also X_test) are the dependent variables of test set to predict
+ y : array-like
+ y (also y_test) is the independent actual variables to score against
+ Returns
+ -------
+ score : float
+ """
+ self.score_ = self.estimator.score(X, y, **kwargs)
+
+ return self.score_
+
#TODO during refactoring this can be used to generalize ClassBalance
def class_counts(self, y):
unique, counts = np.unique(y, return_counts=True)
| ClassificationScoreVisualizers should return accuracy
See #358 and #213 -- classification score visualizers should return accuracy when `score()` is called. If F1 or accuracy is not in the figure it should also be included in the figure.
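As a rough sketch of the desired calling pattern (not the current behavior), using `ClassificationReport` purely as an example with made-up data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

viz = ClassificationReport(LogisticRegression())
viz.fit(X_train, y_train)
accuracy = viz.score(X_test, y_test)  # should hand back the wrapped estimator's accuracy
```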
| This is a test alert @DistrictDataLabs/team-oz-maintainers
echo
I'm going to start working on this
As it stands, the following is true about the display of each visualizer:
1. Classification Report has F1 score
2. Confusion Matrix has None
3. ROCAUC has None
4. Class Balance has None *** SPECIAL NOTE*** Doesn't require Accuracy to be displayed
5. Class Prediction Error has None
6. Discrimination Threshold has F1 Score | 2018-05-14T16:42:15 |
|
DistrictDataLabs/yellowbrick | 410 | DistrictDataLabs__yellowbrick-410 | [
"367"
] | 02cd9b88150b607cc95d24b1b66b9b2177003d7a | diff --git a/yellowbrick/utils/types.py b/yellowbrick/utils/types.py
--- a/yellowbrick/utils/types.py
+++ b/yellowbrick/utils/types.py
@@ -61,12 +61,6 @@ def is_classifier(estimator):
is_classifier
`sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_classifier(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "classifier"
@@ -90,12 +84,6 @@ def is_regressor(estimator):
is_regressor
`sklearn.is_regressor() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L531>`_
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_regressor(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "regressor"
@@ -114,12 +102,6 @@ def is_clusterer(estimator):
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_clusterer(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "clusterer"
@@ -138,12 +120,6 @@ def is_gridsearch(estimator):
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
"""
- # TODO: once we make ScoreVisualizer and ModelVisualizer pass through
- # wrappers as in Issue #90, these three lines become unnecessary.
- # NOTE: This must be imported here to avoid recursive import.
- from yellowbrick.base import Visualizer
- if isinstance(estimator, Visualizer):
- return is_gridsearch(estimator.estimator)
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
| Remove visualizer tests in type checking
Many of the type checking utilities, e.g. `is_classifier`, `is_regressor`, etc. have a note to remove lines of code that are unnecessary after #90 is implemented.
For example see: [yellowbrick/utils/types.py#L64](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/utils/types.py#L64):
```python
def is_classifier(estimator):
"""
Returns True if the given estimator is (probably) a classifier.
Parameters
----------
estimator : class or instance
The object to test if it is a Scikit-Learn clusterer, especially a
Scikit-Learn estimator or Yellowbrick visualizer
See also
--------
is_classifier
`sklearn.is_classifier() <https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/base.py#L518>`_
"""
# TODO: once we make ScoreVisualizer and ModelVisualizer pass through
# wrappers as in Issue #90, these three lines become unnecessary.
# NOTE: This must be imported here to avoid recursive import.
from yellowbrick.base import Visualizer
if isinstance(estimator, Visualizer):
return is_classifier(estimator.estimator)
# Test the _estimator_type property
return getattr(estimator, "_estimator_type", None) == "classifier"
# Alias for closer name to isinstance and issubclass
isclassifier = is_classifier
```
We should remove these lines of code and **ensure the tests have correct coverage**.
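With those branches gone, each helper reduces to the `_estimator_type` check, so a coverage sketch for the simplified behavior might look like this (a wrapped-visualizer case would additionally rely on the #90 pass-through exposing the estimator's `_estimator_type`):

```python
from sklearn.linear_model import LogisticRegression
from yellowbrick.utils.types import is_classifier

def test_is_classifier_on_estimator():
    # a bare scikit-learn classifier advertises _estimator_type == "classifier"
    assert is_classifier(LogisticRegression())

def test_is_classifier_on_non_estimator():
    # objects without _estimator_type should not be treated as classifiers
    assert not is_classifier(object())
```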
| Another alternative is to:
```python
if hasattr(estimator, "estimator"):
return is_classifier(estimator.estimator)
```
Working on it | 2018-05-14T18:54:04 |
|
DistrictDataLabs/yellowbrick | 413 | DistrictDataLabs__yellowbrick-413 | [
"359"
] | 4f1a5ad36e087e3252e6daf1afe37e5216a58f14 | diff --git a/yellowbrick/classifier/classification_report.py b/yellowbrick/classifier/classification_report.py
--- a/yellowbrick/classifier/classification_report.py
+++ b/yellowbrick/classifier/classification_report.py
@@ -4,6 +4,7 @@
# Author: Rebecca Bilbro <[email protected]>
# Author: Benjamin Bengfort <[email protected]>
# Author: Neal Humphrey
+# Author: Allyssa Riley
# Created: Wed May 18 12:39:40 2016 -0400
#
# Copyright (C) 2017 District Data Labs
@@ -14,11 +15,12 @@
"""
Visual classification report for classifier scoring.
"""
-
+
##########################################################################
## Imports
##########################################################################
+from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
@@ -36,12 +38,12 @@
CMAP_UNDERCOLOR = 'w'
CMAP_OVERCOLOR = '#2a7d4f'
-SCORES_KEYS = ('precision', 'recall', 'f1')
+SCORES_KEYS = ('precision', 'recall', 'f1', 'support')
class ClassificationReport(ClassificationScoreVisualizer):
"""
- Classification report that shows the precision, recall, and F1 scores
+ Classification report that shows the precision, recall, F1, and support scores
for the model. Integrates numerical scores as well as a color-coded heatmap.
Parameters
@@ -74,7 +76,7 @@ class ClassificationReport(ClassificationScoreVisualizer):
Attributes
----------
scores_ : dict of dicts
- Outer dictionary composed of precision, recall, and f1 scores with
+ Outer dictionary composed of precision, recall, f1, and support scores with
inner dictionaries specifiying the values for each class listed.
"""
def __init__(self, model, ax=None, classes=None, cmap='YlOrRd', **kwargs):
@@ -101,9 +103,18 @@ def score(self, X, y=None, **kwargs):
y_pred = self.predict(X)
scores = precision_recall_fscore_support(y, y_pred)
- scores = map(lambda s: dict(zip(self.classes_, s)), scores[0:3])
+
+ # Calculate the percentage for the support metric
+ self.support_score = scores[-1]
+ support_percent = self.support_score / (sum(self.support_score))
+
+ scores = map(lambda s: dict(zip(self.classes_, s)), scores)
self.scores_ = dict(zip(SCORES_KEYS, scores))
+ # Change the support score from the actual support value to the percent
+ # value to be used in the Classification Report.
+ self.scores_['support'] = dict(zip(self.classes_, support_percent))
+
return self.draw()
def draw(self):
@@ -111,16 +122,17 @@ def draw(self):
Renders the classification report across each axis.
"""
# Create display grid
- cr_display = np.zeros((len(self.classes_), 3))
+ cr_display = np.zeros((len(self.classes_), 4))
- # For each class row, append columns for precision, recall, and f1
+
+ # For each class row, append columns for precision, recall, f1, and support
for idx, cls in enumerate(self.classes_):
- for jdx, metric in enumerate(('precision', 'recall', 'f1')):
+ for jdx, metric in enumerate(('precision', 'recall', 'f1', 'support')):
cr_display[idx, jdx] = self.scores_[metric][cls]
# Set up the dimensions of the pcolormesh
# NOTE: pcolormesh accepts grids that are (N+1,M+1)
- X, Y = np.arange(len(self.classes_)+1), np.arange(4)
+ X, Y = np.arange(len(self.classes_)+1), np.arange(5)
self.ax.set_ylim(bottom=0, top=cr_display.shape[0])
self.ax.set_xlim(left=0, right=cr_display.shape[1])
@@ -134,6 +146,11 @@ def draw(self):
value = cr_display[x,y]
svalue = "{:0.3f}".format(value)
+ # change the svalue for support (when y == 3) because we want
+ # to label it as the actual support value, not the percentage
+ if y == 3:
+ svalue = self.support_score[x]
+
# Determine the grid and text colors
base_color = self.cmap(value)
text_color = find_text_color(base_color)
@@ -172,20 +189,21 @@ def finalize(self, **kwargs):
self.set_title('{} Classification Report'.format(self.name))
# Set the tick marks appropriately
- self.ax.set_xticks(np.arange(3)+0.5)
+ self.ax.set_xticks(np.arange(4)+0.5)
self.ax.set_yticks(np.arange(len(self.classes_))+0.5)
- self.ax.set_xticklabels(['precision', 'recall', 'f1-score'], rotation=45)
+ self.ax.set_xticklabels(['precision', 'recall', 'f1-score', 'support'], rotation=45)
self.ax.set_yticklabels(self.classes_)
plt.tight_layout()
-def classification_report(model, X, y=None, ax=None, classes=None, **kwargs):
+def classification_report(model, X, y=None, ax=None, classes=None,
+ random_state=None,**kwargs):
"""Quick method:
- Displays precision, recall, and F1 scores for the model.
- Integrates numerical scores as well color-coded heatmap.
+ Displays precision, recall, F1, and support scores for the model.
+ Integrates numerical scores as well as color-coded heatmap.
This helper function is a quick wrapper to utilize the ClassificationReport
ScoreVisualizer for one-off analysis.
@@ -206,6 +224,9 @@ def classification_report(model, X, y=None, ax=None, classes=None, **kwargs):
classes : list of strings
The names of the classes in the target
+ random_state: integer
+ The seed value for a random generator
+
Returns
-------
ax : matplotlib axes
@@ -215,7 +236,9 @@ def classification_report(model, X, y=None, ax=None, classes=None, **kwargs):
visualizer = ClassificationReport(model, ax, classes, **kwargs)
# Create the train and test splits
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=random_state
+ )
# Fit and transform the visualizer (calls draw)
visualizer.fit(X_train, y_train, **kwargs)
| diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png b/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png
Binary files a/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png and b/tests/baseline_images/test_classifier/test_classification_report/test_binary_class_report.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png b/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png
Binary files a/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png and b/tests/baseline_images/test_classifier/test_classification_report/test_multiclass_class_report.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png b/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png
Binary files a/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png and b/tests/baseline_images/test_classifier/test_classification_report/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png b/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png
Binary files a/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png and b/tests/baseline_images/test_classifier/test_classification_report/test_quick_method.png differ
diff --git a/tests/test_classifier/test_classification_report.py b/tests/test_classifier/test_classification_report.py
--- a/tests/test_classifier/test_classification_report.py
+++ b/tests/test_classifier/test_classification_report.py
@@ -59,13 +59,14 @@ def test_binary_class_report(self):
viz.fit(self.binary.X.train, self.binary.y.train)
viz.score(self.binary.X.test, self.binary.y.test)
- self.assert_images_similar(viz, tol=35)
+ self.assert_images_similar(viz, tol=40)
assert viz.scores_ == {
'precision': {0: approx(0.7446808), 1: approx(0.8490566)},
'recall': {0: approx(0.8139534), 1: approx(0.7894736)},
- 'f1': {0: approx(0.7777777), 1: approx(0.8181818)}
- }
+ 'f1': {0: approx(0.7777777), 1: approx(0.8181818)},
+ 'support': {0: approx(0.42999999999999999), 1: approx(0.56999999999999995)}
+ }
@pytest.mark.xfail(
sys.platform == 'win32', reason="images not close on windows"
@@ -80,7 +81,7 @@ def test_multiclass_class_report(self):
viz.fit(self.multiclass.X.train, self.multiclass.y.train)
viz.score(self.multiclass.X.test, self.multiclass.y.test)
- self.assert_images_similar(viz)
+ self.assert_images_similar(viz, tol=11.0)
assert viz.scores_ == {
'precision': {
@@ -93,6 +94,10 @@ def test_multiclass_class_report(self):
0: 0.47058823529411764, 1: 0.5294117647058824,
2: 0.5294117647058824, 3: 0.35294117647058826,
4: 0.38709677419354843, 5: 0.6060606060606061
+ }, 'support': {
+ 0: 0.19, 1: 0.16,
+ 2: 0.14000000000000001, 3: 0.19,
+ 4: 0.16, 5: 0.16
}}
@pytest.mark.xfail(
@@ -128,9 +133,9 @@ def test_pandas_integration(self):
viz.fit(X_train, y_train)
viz.score(X_test, y_test)
- self.assert_images_similar(viz, tol=0.1)
+ self.assert_images_similar(viz, tol=43.0)
- # Ensure correct classification scores under the hood
+ # Ensure correct classification scores under the hood!
assert viz.scores_ == {
'precision': {
'unoccupied': 0.999347471451876,
@@ -141,9 +146,11 @@ def test_pandas_integration(self):
}, 'f1': {
'unoccupied': 0.9800031994880819,
'occupied': 0.9366447034972124
+ }, 'support': {
+ 'occupied': 0.22519455252918288,
+ 'unoccupied': 0.77480544747081714
}}
- @pytest.mark.skip(reason="requires random state in quick method")
def test_quick_method(self):
"""
Test the quick method with a random dataset
@@ -154,9 +161,10 @@ def test_quick_method(self):
)
_, ax = plt.subplots()
- classification_report(DecisionTreeClassifier(), X, y, ax=ax)
+ classification_report(DecisionTreeClassifier(), X, y,
+ ax=ax, random_state=42)
- self.assert_images_similar(ax=ax)
+ self.assert_images_similar(ax=ax, tol=20.0)
def test_isclassifier(self):
"""
| Add support metric to ClassificationReport
Currently, the ClassificationReport omits support because it is difficult to put into the heatmap scale of (0.0, 1.0). We should still include it, however, and _color_ it as the percent of the total number of records, while _labeling_ it with the actual support number.
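A minimal sketch of the coloring idea, mirroring the computation used in the patch above (the counts here are made up):

```python
import numpy as np

# per-class support counts, e.g. the last element returned by precision_recall_fscore_support
support = np.array([43, 57])

# color by each class's share of the total records, but label the cell with the raw count
support_percent = support / support.sum()  # -> array([0.43, 0.57]), fits the (0.0, 1.0) heatmap scale
```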
| 2018-05-14T19:19:57 |
|
DistrictDataLabs/yellowbrick | 417 | DistrictDataLabs__yellowbrick-417 | [
"219"
] | 3f95a0cec84c749b17bc758f831dcf527826e273 | diff --git a/yellowbrick/datasets/__init__.py b/yellowbrick/datasets/__init__.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/datasets/__init__.py
@@ -0,0 +1,23 @@
+from .download import load_concrete
+from .download import load_energy
+from .download import load_credit
+from .download import load_occupancy
+from .download import load_mushroom
+from .download import load_hobbies
+from .download import load_game
+from .download import load_bikeshare
+from .download import load_spam
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/yellowbrick/datasets/download.py b/yellowbrick/datasets/download.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/datasets/download.py
@@ -0,0 +1,274 @@
+#!/usr/bin/env python
+# download
+# Downloads the example datasets for running the examples.
+#
+# Author: Rebecca Bilbro <[email protected]>
+# Author: Benjamin Bengfort <[email protected]>
+# Author: Raul Peralta <[email protected]>
+# Created: Wed May 18 11:54:45 2016 -0400
+#
+# Copyright (C) 2016 District Data Labs
+# For license information, see LICENSE.txt
+#
+# ID: download.py [1f73d2b] [email protected] $
+
+"""
+Downloads the example datasets for running the examples.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import os
+import sys
+import hashlib
+import zipfile
+import json
+import csv
+import numpy as np
+
+try:
+ import requests
+except ImportError:
+ print((
+ "The requests module is required to download data --\n"
+ "please install it with pip install requests."
+ ))
+ sys.exit(1)
+
+
+##########################################################################
+## Links and MD5 hash of datasets
+##########################################################################
+
+DATASETS = {
+ 'concrete': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/concrete.zip',
+ 'signature': 'b9ea5f26a7bb272a040e2f1a993b26babbf8dc4a04ab8198bb315ca66d71f10d',
+ },
+ 'energy': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/energy.zip',
+ 'signature': '19fb86f3bcdde208eed46944172cb643ef6a7d58da103fb568fae43205ed89d3',
+ },
+ 'credit': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/credit.zip',
+ 'signature': '4a91339c69f55e18f3f48004328fbcb7868070b618208fed099920427b084e5e',
+ },
+ 'occupancy': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/occupancy.zip',
+ 'signature': '429cfe376dc9929a1fa528da89f0e1626e34e19695f3f555d8954025bbc522b8',
+ },
+ 'mushroom': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/mushroom.zip',
+ 'signature': '884c43cb70db35d211c67b1cf6a3683b2b4569393d2789d5c07840da4dc85ba8',
+ },
+ 'hobbies': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/hobbies.zip',
+ 'signature': '415c8f68df1486d5d84a1d1757a5aa3035aef5ad63ede5013c261d622fbd29d8',
+ },
+ 'game': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/game.zip',
+ 'signature': 'b1bd85789a014a898daa34cb5f89ceab6d2cd6488a2e572187e34aa4ec21a43b',
+ },
+ 'bikeshare': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/bikeshare.zip',
+ 'signature': 'a9b440f65549746dff680c92ff8bdca3c7265f09db1cf09e708e6e26fc8aba44',
+ },
+ 'spam': {
+ 'url': 'https://s3.amazonaws.com/ddl-data-lake/yellowbrick/spam.zip',
+ 'signature': '65be21196ba3d8448847409b70a67d761f873f30719c807600eb516d7aef1de1',
+ },
+}
+
+
+##########################################################################
+## Download functions
+##########################################################################
+
+def sha256sum(path, blocksize=65536):
+ """
+ Computes the SHA256 signature of a file to verify that the file has not
+ been modified in transit and that it is the correct version of the data.
+ """
+ sig = hashlib.sha256()
+ with open(path, 'rb') as f:
+ buf = f.read(blocksize)
+ while len(buf) > 0:
+ sig.update(buf)
+ buf = f.read(blocksize)
+ return sig.hexdigest()
+
+
+def download_data(url, path='data', signature=None, extract=True):
+ """
+ Downloads the zipped data set specified at the given URL, saving it to
+ the output path specified. This function verifies the download with the
+ given signature (if supplied) and extracts the zip file if requested.
+ """
+ # Create the output directory if it does not exist
+ if not os.path.exists(path):
+ os.mkdir(path)
+
+ # Get the name of the file from the URL
+ name = os.path.basename(url)
+ dlpath = os.path.join(path, name)
+
+ # Fetch the response in a streaming fashion and write it to disk.
+ response = requests.get(url, stream=True)
+ with open(dlpath, 'wb') as f:
+ for chunk in response.iter_content(65536):
+ f.write(chunk)
+
+ # If verify, compare the signature
+ if signature is not None:
+ dlsignature = sha256sum(dlpath)
+ if signature != dlsignature:
+ raise ValueError(
+ "Download signature does not match hardcoded signature!"
+ )
+
+ # If extract, extract the zipfile.
+ if extract:
+ zf = zipfile.ZipFile(dlpath)
+ zf.extractall(path)
+
+
+def download_all(path='data', verify=True, extract=True):
+ """
+ Downloads all the example datasets. If verify is True then compare the
+ download signature with the hardcoded signature. If extract is True then
+ extract the contents of the zipfile to the given path.
+ """
+ for name, meta in DATASETS.items():
+ url = meta['url']
+ signature = meta['signature'] if verify else None
+
+ download_data(url, path=path, signature=signature, extract=extract)
+
+
+def _load_file_data(name, path='data', extract=True):
+ """
+ Returns the information of the specified dataset.
+ """
+ url = DATASETS[name]['url']
+ signature = DATASETS[name]['signature']
+ download_data(url, path=path, signature=signature, extract=extract)
+ with open(os.path.join(path, name, 'meta.json')) as meta_file:
+ feature_names = json.load(meta_file)
+ with open(os.path.join(path, name, 'README.md')) as readme_file:
+ description = readme_file.read()
+ with open(os.path.join(path, name, '{0}.csv'.format(name))) as csv_file:
+ data_file = csv.reader(csv_file)
+ # removing columns name
+ next(data_file)
+ data = np.asarray([line for line in data_file])
+ result = {'data': data, 'DESCR': description}
+ for k, v in feature_names.items():
+ result[k] = v
+ return result
+
+
+def load_concrete(path='data', extract=True):
+ """
+ Downloads the 'concrete' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'concrete'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_energy(path='data', extract=True):
+ """
+ Downloads the 'energy' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'energy'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_credit(path='data', extract=True):
+ """
+ Downloads the 'credit' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'credit'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_occupancy(path='data', extract=True):
+ """
+ Downloads the 'occupancy' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'occupancy'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_mushroom(path='data', extract=True):
+ """
+ Downloads the 'mushroom' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'mushroom'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_hobbies(path='data', extract=True):
+ """
+ Downloads the 'hobbies' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'hobbies'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_game(path='data', extract=True):
+ """
+ Downloads the 'game' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'game'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_bikeshare(path='data', extract=True):
+ """
+ Downloads the 'bikeshare' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'bikeshare'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+def load_spam(path='data', extract=True):
+ """
+ Downloads the 'spam' dataset, saving it to the output
+ path specified and returns the data.
+ """
+ # name of the dataset
+ name = 'spam'
+ data = _load_file_data(name, path, extract)
+ return data
+
+
+if __name__ == '__main__':
+ path = 'data'
+ download_all(path)
+ print("Downloaded datasets to {}".format(os.path.abspath(path)))
| Data Loading
See: http://scikit-learn.org/stable/datasets/index.html
And #203 and (other issue we don't remember right now)
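Based on the loaders added in the patch above, usage might look roughly like this (the keys available beyond `data` and `DESCR` come from each dataset's `meta.json`):

```python
from yellowbrick.datasets import load_concrete

# downloads the zip, verifies its SHA256 signature, and extracts it under ./data
dataset = load_concrete(path='data')

X = dataset['data']      # rows parsed from concrete.csv with the header row removed
print(dataset['DESCR'])  # contents of the dataset's README.md
```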
| changed this issue to move the data downloader from examples to the primary module. That way we can simply put everything in one place similar to how Scikit-Learn provides examples. This will change how we do #223 as well.
I am working on this issue | 2018-05-14T20:34:50 |
|
DistrictDataLabs/yellowbrick | 420 | DistrictDataLabs__yellowbrick-420 | [
"59"
] | f7c60ebd97a585b937e4bea4d928c89ec88ace0f | diff --git a/yellowbrick/features/pcoords.py b/yellowbrick/features/pcoords.py
--- a/yellowbrick/features/pcoords.py
+++ b/yellowbrick/features/pcoords.py
@@ -19,10 +19,13 @@
## Imports
##########################################################################
+from numpy import hstack, ones
+from numpy.random import RandomState
+
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler
from sklearn.preprocessing import Normalizer, StandardScaler
-from yellowbrick.utils import is_dataframe
+from yellowbrick.utils import is_dataframe, is_series
from yellowbrick.features.base import DataVisualizer
from yellowbrick.exceptions import YellowbrickTypeError, YellowbrickValueError
from yellowbrick.style.colors import resolve_colors
@@ -149,6 +152,15 @@ class ParallelCoordinates(DataVisualizer):
If int, specifies the maximum number of samples to display.
If float, specifies a fraction between 0 and 1 to display.
+ random_state : int, RandomState instance or None
+ If int, random_state is the seed used by the random number generator;
+ If RandomState instance, random_state is the random number generator;
+ If None, the random number generator is the RandomState instance used
+ by np.random; only used if shuffle is True and sample < 1.0
+
+ shuffle : boolean, default: True
+ specifies whether sample is drawn randomly
+
color : list or tuple, default: None
optional list or tuple of colors to colorize lines
Use either color to colorize the lines on a per class basis or
@@ -169,6 +181,12 @@ class ParallelCoordinates(DataVisualizer):
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
+ Attributes
+ --------
+
+ n_samples_ : int
+ number of samples included in the visualization object
+
Examples
--------
@@ -193,7 +211,7 @@ class ParallelCoordinates(DataVisualizer):
}
def __init__(self, ax=None, features=None, classes=None, normalize=None,
- sample=1.0, color=None, colormap=None, vlines=True,
+ sample=1.0, random_state=None, shuffle=False, color=None, colormap=None, vlines=True,
vlines_kwds=None, **kwargs):
super(ParallelCoordinates, self).__init__(
ax, features, classes, color, colormap, **kwargs
@@ -225,40 +243,97 @@ def __init__(self, ax=None, features=None, classes=None, normalize=None,
)
self.sample = sample
+ # Set sample parameters
+ if isinstance(shuffle, bool):
+ self.shuffle = shuffle
+ else:
+ raise YellowbrickTypeError(
+ "`shuffle` parameter must be boolean"
+ )
+ if self.shuffle:
+ if (random_state is None) or isinstance(random_state, int):
+ self._rng = RandomState(random_state)
+ elif isinstance(random_state, RandomState):
+ self._rng = random_state
+ else:
+ raise YellowbrickTypeError(
+ "`random_state` parameter must be None, int, or np.random.RandomState"
+ )
+ else:
+ self._rng = None
+
# Visual Parameters
self.show_vlines = vlines
self.vlines_kwds = vlines_kwds or {
'linewidth': 1, 'color': 'black'
}
- def draw(self, X, y, **kwargs):
+ def fit(self, X, y=None, **kwargs):
"""
- Called from the fit method, this method creates the parallel
- coordinates canvas and draws each instance and vertical lines on it.
+ The fit method is the primary drawing input for the
+ visualization since it has both the X and y data required for the
+ viz and the transform method does not.
+
+ Parameters
+ ----------
+ X : ndarray or DataFrame of shape n x m
+ A matrix of n instances with m features
+
+ y : ndarray or Series of length n
+ An array or series of target or class values
+
+ kwargs : dict
+ Pass generic arguments to the drawing method
+
+ Returns
+ -------
+ self : instance
+ Returns the instance of the transformer/visualizer
"""
+
# Convert from dataframe
if is_dataframe(X):
X = X.as_matrix()
+ if is_series(y):
+ y = y.as_matrix()
- # Choose a subset of samples
- # TODO: allow selection of a random subset of samples instead of head
-
- if isinstance(self.sample, int):
- self.n_samples = min([self.sample, len(X)])
- elif isinstance(self.sample, float):
- self.n_samples = int(len(X) * self.sample)
- X = X[:self.n_samples, :]
+ # Subsample
+ X, y = self._subsample(X, y)
# Normalize
if self.normalize is not None:
X = self.normalizers[self.normalize].fit_transform(X)
- # Get the shape of the data
+ # the super method calls draw and returns self
+ super(ParallelCoordinates, self).fit(X, y, **kwargs)
+
+ # Fit always returns self.
+ return self
+
+ def draw(self, X, y, **kwargs):
+ """
+ Called from the fit method, this method creates the parallel
+ coordinates canvas and draws each instance and vertical lines on it.
+
+ Parameters
+ ----------
+ X : ndarray of shape n x m
+ A matrix of n instances with m features
+
+ y : ndarray of length n
+ An array or series of target or class values
+
+ kwargs : dict
+ Pass generic arguments to the drawing method
+
+ """
+
+ # Get shape of the sampled data
nrows, ncols = X.shape
# Create the xticks for each column
# TODO: Allow the user to specify this feature
- x = list(range(ncols))
+ increments = list(range(ncols))
# Create the colors
# TODO: Allow both colormap, listed colors, and palette definition
@@ -268,32 +343,35 @@ def draw(self, X, y, **kwargs):
)
colors = dict(zip(self.classes_, color_values))
- # Track which labels are already in the legend
- used_legends = set([])
-
# TODO: Make this function compatible with DataFrames!
# TODO: Make an independent function to allow addition of instances!
- for idx, row in enumerate(X):
- # TODO: How to map classmap to labels?
- label = y[idx] # Get the label for the row
- label = self.classes_[label]
-
- if label not in used_legends:
- used_legends.add(label)
- self.ax.plot(x, row, color=colors[label], alpha=0.25, label=label, **kwargs)
- else:
- self.ax.plot(x, row, color=colors[label], alpha=0.25, **kwargs)
+
+ # Prepare to flatten data within each class
+ # introduce separation between individual data points using None in x-values and arbitrary value (one) in
+ # y-values
+ X_separated = hstack([X, ones((nrows, 1))])
+ increments_separated = increments.copy()
+ increments_separated.append(None)
+
+ # Plot each class
+ for label, color in sorted(colors.items()):
+ y_as_str = y.astype('str') # must be consistent with class conversion in DataVisualizer.fit()
+ X_in_class = X_separated[y_as_str == label, :]
+ increments_in_class = increments_separated * len(X_in_class)
+ if len(X_in_class) > 0:
+ self.ax.plot(increments_in_class, X_in_class.flatten(),
+ label=label, color=colors[label], alpha=0.25, linewidth=1, **kwargs)
# Add the vertical lines
# TODO: Make an independent function for override!
if self.show_vlines:
- for idx in x:
+ for idx in increments:
self.ax.axvline(idx, **self.vlines_kwds)
# Set the limits
- self.ax.set_xticks(x)
+ self.ax.set_xticks(increments)
self.ax.set_xticklabels(self.features_)
- self.ax.set_xlim(x[0], x[-1])
+ self.ax.set_xlim(increments[0], increments[-1])
def finalize(self, **kwargs):
"""
@@ -313,3 +391,21 @@ def finalize(self, **kwargs):
# Set the legend and the grid
self.ax.legend(loc='best')
self.ax.grid()
+
+ def _subsample(self, X, y):
+
+ # Choose a subset of samples
+ if isinstance(self.sample, int):
+ n_samples = min([self.sample, len(X)])
+ elif isinstance(self.sample, float):
+ n_samples = int(len(X) * self.sample)
+
+ if (n_samples < len(X)) and self.shuffle:
+ indices = self._rng.choice(len(X), n_samples, replace=False)
+ else:
+ indices = slice(n_samples)
+ X = X[indices, :]
+ y = y[indices]
+
+ self.n_samples_ = n_samples
+ return X, y
diff --git a/yellowbrick/utils/types.py b/yellowbrick/utils/types.py
--- a/yellowbrick/utils/types.py
+++ b/yellowbrick/utils/types.py
@@ -202,6 +202,27 @@ def is_dataframe(obj):
isdataframe = is_dataframe
+def is_series(obj):
+ """
+ Returns True if the given object is a Pandas Series.
+
+ Parameters
+ ----------
+ obj: instance
+ The object to test whether or not is a Pandas Series.
+ """
+ try:
+ # This is the best method of type checking
+ from pandas import Series
+ return isinstance(obj, Series)
+ except ImportError:
+ # Pandas is not a dependency, so this is scary
+ return obj.__class__.__name__ == "Series"
+
+# Alias for closer name to isinstance and issubclass
+isseries = is_series
+
+
def is_structured_array(obj):
"""
Returns True if the given object is a Numpy Structured Array.
| diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png
Binary files a/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png and b/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png
Binary files a/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png and b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png differ
diff --git a/tests/test_features/test_pcoords.py b/tests/test_features/test_pcoords.py
--- a/tests/test_features/test_pcoords.py
+++ b/tests/test_features/test_pcoords.py
@@ -78,6 +78,32 @@ def test_pcoords_sample_int(self):
visualizer = ParallelCoordinates(sample=10)
visualizer.fit_transform(self.X, self.y)
+ def test_pcoords_sample_int_shuffle(self):
+ """
+ Assert no errors occur using integer 'sample' argument and shuffle, with different random_state args
+ """
+ visualizer = ParallelCoordinates(sample=3, shuffle=True)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=3, shuffle=True, random_state=444)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=3, shuffle=True, random_state=np.random.RandomState())
+ visualizer.fit_transform(self.X, self.y)
+
+ def test_pcoords_sample_int_shuffle_false(self):
+ """
+ Assert no errors occur using integer 'sample' argument and shuffle, with different random_state args
+ """
+ visualizer = ParallelCoordinates(sample=3, shuffle=False)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=3, shuffle=False, random_state=444)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=3, shuffle=False, random_state=np.random.RandomState())
+ visualizer.fit_transform(self.X, self.y)
+
def test_pcoords_sample_int_invalid(self):
"""
Negative int values should raise
@@ -92,6 +118,32 @@ def test_pcoords_sample_float(self):
visualizer = ParallelCoordinates(sample=0.5)
visualizer.fit_transform(self.X, self.y)
+ def test_pcoords_sample_float_shuffle(self):
+ """
+ Assert no errors occur using float 'sample' argument and shuffle, with different random_state args
+ """
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=True)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=True, random_state=444)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=True, random_state=np.random.RandomState())
+ visualizer.fit_transform(self.X, self.y)
+
+ def test_pcoords_sample_float_shuffle_false(self):
+ """
+ Assert no errors occur using float 'sample' argument and shuffle, with different random_state args
+ """
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=False)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=False, random_state=444)
+ visualizer.fit_transform(self.X, self.y)
+
+ visualizer = ParallelCoordinates(sample=0.5, shuffle=False, random_state=np.random.RandomState())
+ visualizer.fit_transform(self.X, self.y)
+
def test_pcoords_sample_float_invalid(self):
"""
Float values for 'sample' argument outside [0,1] should raise.
@@ -131,3 +183,65 @@ def test_integrated_pcoords(self):
visualizer.fit_transform(X, y)
visualizer.poof()
self.assert_images_similar(visualizer)
+
+ @staticmethod
+ def test_static_subsample():
+ """
+ Assert output of subsampling method against expectations
+ """
+
+ ntotal = 100
+ ncols = 50
+
+ y = np.arange(ntotal)
+ X = np.ones((ntotal, ncols)) * y.reshape(ntotal, 1)
+
+ visualizer = ParallelCoordinates(sample=1.0, random_state=None, shuffle=False)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X)
+ assert np.array_equal(yprime, y)
+
+ visualizer = ParallelCoordinates(sample=200, random_state=None, shuffle=False)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X)
+ assert np.array_equal(yprime, y)
+
+ sample = 50
+ visualizer = ParallelCoordinates(sample=sample, random_state=None, shuffle=False)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[:sample, :])
+ assert np.array_equal(yprime, y[:sample])
+
+ sample = 50
+ visualizer = ParallelCoordinates(sample=sample, random_state=None, shuffle=True)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[yprime.flatten(), :])
+ assert len(Xprime) == sample
+ assert len(yprime) == sample
+
+ visualizer = ParallelCoordinates(sample=0.5, random_state=None, shuffle=False)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[:int(ntotal/2), :])
+ assert np.array_equal(yprime, y[:int(ntotal/2)])
+
+ sample = 0.5
+ visualizer = ParallelCoordinates(sample=sample, random_state=None, shuffle=True)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[yprime.flatten(), :])
+ assert len(Xprime) == ntotal * sample
+ assert len(yprime) == ntotal * sample
+
+ sample = 0.25
+ visualizer = ParallelCoordinates(sample=sample, random_state=444, shuffle=True)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[yprime.flatten(), :])
+ assert len(Xprime) == ntotal * sample
+ assert len(yprime) == ntotal * sample
+
+ sample = 0.99
+ visualizer = ParallelCoordinates(sample=sample, random_state=np.random.RandomState(), shuffle=True)
+ Xprime, yprime = visualizer._subsample(X, y)
+ assert np.array_equal(Xprime, X[yprime.flatten(), :])
+ assert len(Xprime) == ntotal * sample
+ assert len(yprime) == ntotal * sample
+
diff --git a/tests/test_utils/test_types.py b/tests/test_utils/test_types.py
--- a/tests/test_utils/test_types.py
+++ b/tests/test_utils/test_types.py
@@ -522,6 +522,40 @@ def test_not_is_dataframe(self, obj):
"""
assert not is_dataframe(obj)
+ ##////////////////////////////////////////////////////////////////////
+ ## is_series testing
+ ##////////////////////////////////////////////////////////////////////
+
+ def test_series_alias(self):
+ """
+ Assert isseries aliases is_series
+ """
+ assert isseries is is_series
+
+ @pytest.mark.skipif(pd is None, reason="requires pandas")
+ def test_is_series(self):
+ """
+ Test that is_series works correctly
+ """
+ df = pd.Series([1, 2, 3])
+
+ assert is_series(df)
+
+ @pytest.mark.parametrize("obj", [
+ np.array([
+ (1,2.,'Hello'), (2,3.,"World")],
+ dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'S10')]
+ ),
+ np.array([1,2,3]),
+ [1, 2, 3],
+ ],
+ ids=["structured array", "array", "list"])
+ def test_not_is_series(self, obj):
+ """
+ Test that is_series does not match non-dataframes
+ """
+ assert not is_series(obj)
+
##////////////////////////////////////////////////////////////////////
## is_structured_array testing
##////////////////////////////////////////////////////////////////////
| Improve Parallel Coordinates
See #50 for a more detailed discussion.
There are several things that need to be done to improve the parallel coordinates visualizer:
- [x] Benchmarking; it is currently SLOW
- [x] Add a `draw_instance` method so that instances can be added to the figure at any time
- [x] Add `DataFrame` support so that the visualizer can accept either an `ndarray` or a `DataFrame` as input.
- [x] Create a subclass `NormalizedParallelCoordinates` that normalizes the data to the space 0 to 1 before drawing the coordinates.
- [x] Add subsampling of instances to reduce clutter and improve performance
- [x] Add fast vs. slow drawing methods for performance
- [x] Add alpha so that you can see instances through the other lines
- [ ] <strike>Create an optimization technique that reorders the columns such that the overlap of two instances by the same class is minimized</strike>
There are probably several more improvements to make, and there are also related comments in the Parallel Coordinates codebase.
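For orientation, here is a rough sketch of how the improved API could be used once these items land. The `sample`, `shuffle`, `alpha`, and `normalize` keyword names and the occupancy dataset below are assumptions drawn from the checklist above, not the current implementation.

```python
import pandas as pd
from yellowbrick.features import ParallelCoordinates

# Hypothetical example: any numeric DataFrame plus a target column works.
data = pd.read_csv("occupancy.csv")  # placeholder path
features = ["temperature", "relative humidity", "light", "C02", "humidity"]

viz = ParallelCoordinates(
    features=features,
    normalize="standard",  # scale each feature before drawing
    sample=0.1,            # draw only 10% of the instances
    shuffle=True,          # pick that 10% uniformly at random
    alpha=0.25,            # translucency exposes dense braids of lines
)
viz.fit_transform(data[features], data["occupancy"])
viz.poof()
```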
| At the PyCon 2018 sprints I'm going to take a pass at some of these items.
@bbengfort - should we also include an item to make the "sample" capability select random rows, as opposed to ordered rows? For data sets with sorted targets, you don't end up with a truly representative sample.
@thekylesaurus I think that is a great idea - [we were planning on making sample a uniform random sample](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/features/pcoords.py#L244), but the number of choices for how to conduct sampling led us to simply use `head`. If there is an sklearn method that does sampling, we could use that as a guide for how to treat the `sample` parameter; otherwise, perhaps we could simply add a `shuffle=True` argument that would make `head` a uniform random sample? What do you think?
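For reference, a minimal numpy sketch of what a `shuffle=True` subsample could do internally: select n rows uniformly at random rather than taking the head of the array. This is illustrative only, not the merged implementation.

```python
import numpy as np

def subsample(X, y, sample=0.1, shuffle=True, random_state=None):
    """Illustrative sketch: return roughly a `sample` fraction of (X, y)."""
    n = max(1, int(len(X) * sample))
    if shuffle:
        rng = np.random.RandomState(random_state)
        idx = rng.choice(len(X), size=n, replace=False)  # uniform random rows
    else:
        idx = np.arange(n)  # equivalent to taking the head of the array
    return X[idx], y[idx]
```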
@bbengfort - shuffle=True would definitely get the job done; but I'll check sklearn first to see if there is a different convention that we should inherit. | 2018-05-14T21:50:03 |
DistrictDataLabs/yellowbrick | 427 | DistrictDataLabs__yellowbrick-427 | [
"419"
] | 6643e4a1a2071647756adbc673a052d3962c9c18 | diff --git a/yellowbrick/features/radviz.py b/yellowbrick/features/radviz.py
--- a/yellowbrick/features/radviz.py
+++ b/yellowbrick/features/radviz.py
@@ -138,10 +138,11 @@ class RadialVisualizer(DataVisualizer):
"""
def __init__(self, ax=None, features=None, classes=None, color=None,
- colormap=None, **kwargs):
+ colormap=None, alpha=1, **kwargs):
super(RadialVisualizer, self).__init__(
ax, features, classes, color, colormap, **kwargs
)
+ self.alpha = alpha
@staticmethod
def normalize(X):
@@ -210,7 +211,7 @@ def draw(self, X, y, **kwargs):
# TODO: store these plots to add more instances to later
# TODO: make this a separate function
for i, kls in enumerate(self.classes_):
- self.ax.scatter(to_plot[kls][0], to_plot[kls][1], color=colors[kls], label=str(kls), **kwargs)
+ self.ax.scatter(to_plot[kls][0], to_plot[kls][1], color=colors[kls], label=str(kls), alpha=self.alpha, **kwargs)
# Add the circular axis path
# TODO: Make this a seperate function (along with labeling)
| Add alpha transparency to RadViz
To make the RadViz plot a bit easier to read, we can add optional transparency, set by the user, so that regions of greater or lesser density can be distinguished.
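A minimal usage sketch with the `alpha` keyword introduced in the patch above; the iris dataset is just a stand-in for any labelled numeric dataset.

```python
from sklearn.datasets import load_iris
from yellowbrick.features import RadViz

data = load_iris()

# alpha < 1 makes overlapping points translucent so dense regions stand out
viz = RadViz(classes=list(data.target_names), alpha=0.5)
viz.fit_transform(data.data, data.target)
viz.poof()
```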
| I will work on this issue. | 2018-05-15T19:27:30 |
|
DistrictDataLabs/yellowbrick | 444 | DistrictDataLabs__yellowbrick-444 | [
"374"
] | f843c4471b5dac100cc837f69504a45570370a86 | diff --git a/yellowbrick/model_selection/__init__.py b/yellowbrick/model_selection/__init__.py
--- a/yellowbrick/model_selection/__init__.py
+++ b/yellowbrick/model_selection/__init__.py
@@ -16,3 +16,4 @@
from .learning_curve import LearningCurve, learning_curve
from .validation_curve import ValidationCurve, validation_curve
+from .cv import CVScores, cv_scores
diff --git a/yellowbrick/model_selection/cv.py b/yellowbrick/model_selection/cv.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/model_selection/cv.py
@@ -0,0 +1,237 @@
+
+# coding: utf-8
+
+# In[ ]:
+
+# yellowbrick.model_selection.cv
+#
+#
+# Author: Prema Damodaran Roman
+
+#
+# Copyright (C) 2018 District Data Labs
+# For license information, see LICENSE.txt
+#
+# ID: cv.py [7f47800] [email protected] $
+
+##########################################################################
+## Imports
+##########################################################################
+
+import numpy as np
+import matplotlib.ticker as ticker
+
+from yellowbrick.base import ModelVisualizer
+from sklearn.model_selection import cross_val_score
+
+##########################################################################
+## CVScores Visualizer
+##########################################################################
+
+class CVScores(ModelVisualizer):
+ """
+ CVScores displays cross validation scores as a bar chart and the
+ average of the scores as a horizontal line
+
+ Parameters
+ ----------
+
+ model : a scikit-learn estimator
+ An object that implements ``fit`` and ``predict``, can be a
+ classifier, regressor, or clusterer so long as there is also a valid
+ associated scoring metric.
+ Note that the object is cloned for each validation.
+
+ ax : matplotlib.Axes object, optional
+ The axes object to plot the figure on.
+
+ cv : int, cross-validation generator or an iterable, optional
+ Determines the cross-validation splitting strategy.
+ Possible inputs for cv are:
+ - None, to use the default 3-fold cross-validation,
+ - integer, to specify the number of folds.
+ - An object to be used as a cross-validation generator.
+ - An iterable yielding train/test splits.
+
+ see the scikit-learn
+ `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible strategies that can be used here.
+
+ scoring : string, callable or None, optional, default: None
+ A string or scorer callable object / function with signature
+ ``scorer(estimator, X, y)``.
+
+ See scikit-learn `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible metrics that can be used.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Examples
+ --------
+
+ >>> from sklearn.model_selection import KFold, cross_val_score,
+ >>> ShuffleSplit, StratifiedKFold
+ >>> from sklearn import datasets, svm, linear_model
+
+ >>> iris = datasets.load_iris()
+ >>> clf = svm.SVC(kernel='linear', C=1)
+
+ >>> X = iris.data
+ >>> y = iris.target
+
+ >>> visualizer = CVScores(model=clf, cv=5, scoring='f1_macro')
+ >>> visualizer.fit(X,y)
+ >>> visualizer.poof()
+
+ Notes
+ -----
+
+ This visualizer is a wrapper around for the ``sklearn.model_selection.cross_val_score``
+ <<http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html>>
+
+ Refer to the scikit-learn
+ `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`
+ for more details
+
+ """
+
+ def __init__(self, model, ax=None, cv=None, scoring=None, **kwargs):
+
+ super(CVScores, self).__init__(model, ax=ax, **kwargs)
+
+ self.cv = cv
+ self.scoring = scoring
+
+ def fit(self, X, y, **kwargs):
+ """
+ Fits the learning curve with the wrapped model to the specified data.
+ Draws training and test score curves and saves the scores to the
+ estimator.
+
+ Parameters
+ ----------
+ X : array-like, shape (n_samples, n_features)
+ Training vector, where n_samples is the number of samples and
+ n_features is the number of features.
+
+ y : array-like, shape (n_samples) or (n_samples, n_features), optional
+ Target relative to X for classification or regression;
+ None for unsupervised learning.
+
+ Returns
+ -------
+ self : instance
+
+ """
+
+ self.cv_scores_ = cross_val_score(self.estimator, X, y, cv=self.cv, scoring=self.scoring)
+ self.cv_scores_mean_ = self.cv_scores_.mean()
+
+ self.draw()
+ return self
+
+ def draw(self, **kwargs):
+ """
+ creates the bar chart of the CV scores generated from the fit method and places
+ a dashed horizontal line that represents the average value of the CV scores
+ """
+ minimum = self.cv_scores_.min()
+ #update minimum if it is greater than 0.05 to remove whitespace in the bottom of the chart
+ #for easier comparison of values
+ if minimum > 0.05:
+ minimum = minimum - 0.05
+ self.ax.set_ylim(minimum, 1)
+ xvals = np.arange(1, len(self.cv_scores_) + 1, 1)
+ width = kwargs.pop("width", 0.3)
+ self.ax.bar(xvals, self.cv_scores_, width = width)
+ color = kwargs.pop("color", "b")
+ linewidth = kwargs.pop("linewidth", 1)
+ self.ax.axhline(self.cv_scores_mean_, color=color, label='Average', linestyle='--', linewidth=linewidth)
+
+ return self.ax
+
+ def finalize(self, **kwargs):
+ """
+ Add the title, legend, and other visual final touches to the plot.
+ """
+ # Set the title of the figure
+ self.set_title('Cross Validation Scores for {}'.format(self.name))
+
+ # Add the legend
+ loc = kwargs.pop("loc", "best")
+ edgecolor = kwargs.pop("edgecolor", "k")
+ self.ax.legend(frameon=True, loc=loc, edgecolor=edgecolor)
+
+ #set spacing between the x ticks
+ self.ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
+
+ # Set the axis labels
+ self.ax.set_xlabel('Training Instances')
+ self.ax.set_ylabel('Score')
+
+
+##########################################################################
+## Quick Method
+##########################################################################
+
+def cv_scores(model, X, y, ax=None, cv=None, scoring=None, **kwargs):
+
+ """
+ Displays cross validation scores as a bar chart and the
+ average of the scores as a horizontal line
+
+ This helper function is a quick wrapper to utilize the
+ CVScores visualizer for one-off analysis.
+
+ Parameters
+ ----------
+
+ model : a scikit-learn estimator
+ An object that implements ``fit`` and ``predict``, can be a
+ classifier, regressor, or clusterer so long as there is also a valid
+ associated scoring metric.
+ Note that the object is cloned for each validation.
+
+ ax : matplotlib.Axes object, optional
+ The axes object to plot the figure on.
+
+ cv : int, cross-validation generator or an iterable, optional
+ Determines the cross-validation splitting strategy.
+ Possible inputs for cv are:
+ - None, to use the default 3-fold cross-validation,
+ - integer, to specify the number of folds.
+ - An object to be used as a cross-validation generator.
+ - An iterable yielding train/test splits.
+
+ see the scikit-learn
+ `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible strategies that can be used here.
+
+ scoring : string, callable or None, optional, default: None
+ A string or scorer callable object / function with signature
+ ``scorer(estimator, X, y)``.
+
+ See scikit-learn `cross-validation guide <http://scikit-learn.org/stable/modules/cross_validation.html>`_
+ for more information on the possible metrics that can be used.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Returns
+ -------
+ ax : matplotlib.Axes
+ The axes object that the validation curves were drawn on.
+
+ """
+ # Initialize the visualizer
+ visualizer = cv_scores(model, X, y, ax=ax, cv=cv, scoring=scoring)
+
+ # Fit and poof the visualizer
+ visualizer.fit(X, y)
+ visualizer.poof(**kwargs)
+ return visualizer.ax
+
+
| CVScores
Implement a visualizer that shows the cross-validation scores as a bar chart, along with the average score as an annotated horizontal line.
We want to start moving toward better cross-validation and model selection. Create a `yb.model_selection.CVScores` visualizer that extends `ModelVisualizer` and wraps an estimator. It accepts `cv` and `scoring` params, similar to the ones exposed in [`sklearn.model_selection.cross_val_score`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html).
The output is a bar chart with the score of each split and a dotted horizontal line annotating the average score.
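A usage sketch mirroring the docstring example in the patch above:

```python
from sklearn import datasets, svm
from yellowbrick.model_selection import CVScores

iris = datasets.load_iris()
clf = svm.SVC(kernel="linear", C=1)

viz = CVScores(clf, cv=5, scoring="f1_macro")
viz.fit(iris.data, iris.target)  # runs cross_val_score and draws the bars
viz.poof()                       # the dashed line marks the mean score
```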
| 2018-05-19T23:17:03 |
||
DistrictDataLabs/yellowbrick | 446 | DistrictDataLabs__yellowbrick-446 | [
"354"
] | 3248943e4ad1cc16dcf9181c0b46a401b8f0d8b2 | diff --git a/docs/api/cluster/elbow.py b/docs/api/cluster/elbow.py
deleted file mode 100644
--- a/docs/api/cluster/elbow.py
+++ /dev/null
@@ -1,55 +0,0 @@
-#!/usr/bin/env python
-
-"""
-Generate images for the elbow plot documentation.
-"""
-
-# Import necessary modules
-import matplotlib.pyplot as plt
-
-from sklearn.cluster import KMeans
-from sklearn.datasets import make_blobs
-from yellowbrick.cluster import KElbowVisualizer
-
-
-def draw_elbow(path="images/elbow.png"):
- # Generate synthetic dataset with 8 blobs
- X, y = make_blobs(
- centers=8, n_features=12, n_samples=1000,
- shuffle=True, random_state=42
- )
-
- # Create a new figure to draw the clustering visualizer on
- _, ax = plt.subplots()
-
- # Instantiate the clustering model and visualizer
- model = KMeans()
- visualizer = KElbowVisualizer(model, ax=ax, k=(4,12))
-
- visualizer.fit(X) # Fit the data to the visualizer
- visualizer.poof(outpath=path) # Draw/show/poof the data
-
-
-def draw_calinski_harabaz(path="images/calinski_harabaz.png"):
- # Generate synthetic dataset with 8 blobs
- X, y = make_blobs(
- centers=8, n_features=12, n_samples=1000,
- shuffle=True, random_state=42
- )
-
- # Create a new figure to draw the clustering visualizer on
- _, ax = plt.subplots()
-
- # Instantiate the clustering model and visualizer
- model = KMeans()
- visualizer = KElbowVisualizer(
- model, ax=ax, k=(4,12),
- metric='calinski_harabaz', timings=False
- )
- visualizer.fit(X) # Fit the data to the visualizer
- visualizer.poof(outpath=path) # Draw/show/poof the data
-
-
-if __name__ == '__main__':
- draw_elbow()
- draw_calinski_harabaz()
diff --git a/docs/api/features/importances.py b/docs/api/features/importances.py
deleted file mode 100644
--- a/docs/api/features/importances.py
+++ /dev/null
@@ -1,64 +0,0 @@
-import os
-import pandas as pd
-import matplotlib.pyplot as plt
-
-from yellowbrick.features.importances import FeatureImportances
-from sklearn.ensemble import GradientBoostingClassifier
-from sklearn.linear_model import Lasso, LogisticRegression
-from sklearn.datasets import load_iris
-
-
-DATA_DIR = os.path.relpath(os.path.join(
- os.path.dirname(__file__), "..", "..", "..", "examples", "data"
-))
-
-
-def feature_importances_(outpath):
- occupancy = pd.read_csv(os.path.join(DATA_DIR, "occupancy", "occupancy.csv"))
-
- feats = [
- "temperature", "relative humidity", "light", "C02", "humidity"
- ]
-
- X = occupancy[feats]
- y = occupancy['occupancy'].astype(int)
-
- fig = plt.figure()
- ax = fig.add_subplot()
-
- viz = FeatureImportances(GradientBoostingClassifier(), ax=ax)
- viz.fit(X, y)
- viz.poof(outpath=outpath)
-
-
-def coef_(outpath):
- concrete = pd.read_csv(os.path.join(DATA_DIR, "concrete", "concrete.csv"))
-
- feats = ['cement','slag','ash','water','splast','coarse','fine','age']
- X = concrete[feats]
- y = concrete['strength']
-
- fig = plt.figure()
- ax = fig.add_subplot()
-
- feats = list(map(lambda s: s.title(), feats))
- viz = FeatureImportances(Lasso(), ax=ax, labels=feats, relative=False)
- viz.fit(X, y)
- viz.poof(outpath=outpath)
-
-
-def stacked_coef_(outpath):
- data = load_iris()
-
- fig = plt.figure()
- ax = fig.add_subplot()
-
- viz = FeatureImportances(LogisticRegression(), ax=ax, stack=True, relative=False)
- viz.fit(data.data, data.target)
- viz.poof(outpath=outpath)
-
-
-if __name__ == '__main__':
- # feature_importances_("images/feature_importances.png")
- # coef_("images/feature_importances_coef.png")
- stacked_coef_("images/feature_importances_stacked.png")
\ No newline at end of file
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,41 +1,74 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
+# conf
+# Yellowbrick documentation build config file, created by sphinx-quickstart
#
-# yellowbrick documentation build configuration file, created by
-# sphinx-quickstart on Tue Jul 5 19:45:43 2016.
+# Created: Tue Jul 05 19:45:43 2016 -0400
+# Copyright (C) 2016-2019 The scikit-yb developers
+# For license information, see LICENSE.txt
#
-# This file is execfile()d with the current directory set to its
-# containing dir.
-#
-# Note that not all possible configuration values are present in this
-# autogenerated file.
-#
-# All configuration values have a default; values that are commented out
-# serve to show the default.
+# ID: conf.py [] [email protected] $
+
+"""
+Yellowbrick documentation build config file, created by sphinx-quickstart.
+
+This file is executed with the current directory set to its containing dir
+by ``execfile()``, e.g. the working directory will be yellowbrick/docs.
+Ensure that all specified paths relative to the docs directory are made
+absolute by using ``os.path.abspath``.
+
+Note that not all possible configuration values are present in this
+autogenerated file.
+
+All configuration values have a default; values that are commented out
+serve to show the default.
+
+See: https://www.sphinx-doc.org/en/master/usage/configuration.html
+for more details on configuring the documentation build.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import os
+import sys
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-#
-import os
-import sys
sys.path.insert(0, os.path.abspath('..'))
# Set the backend of matplotlib to prevent build errors.
import matplotlib
matplotlib.use('agg')
+# Import yellowbrick information.
import yellowbrick as yb
-# -- General configuration ------------------------------------------------
+##########################################################################
+## General configuration
+##########################################################################
# If your documentation needs a minimal Sphinx version, state it here.
-#
-# needs_sphinx = '1.0'
+# needs_sphinx = '1.8'
+
+# General information about the project.
+project = 'Yellowbrick'
+copyright = '2016-2019, The scikit-yb developers.'
+author = 'The scikit-yb developers'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+
+# The short X.Y version.
+version = yb.get_version(short=True)
+# The full version, including alpha/beta/rc tags.
+release = "v" + yb.get_version(short=False)
# Add any Sphinx extension module names here, as strings. They can be
-# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
-# ones.
+# extensions coming with Sphinx (named 'sphinx.ext.*') or custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
@@ -45,6 +78,7 @@
'sphinx.ext.viewcode',
'sphinx.ext.todo',
'numpydoc',
+ 'matplotlib.sphinxext.plot_directive',
]
# Add any paths that contain templates here, relative to this directory.
@@ -52,31 +86,15 @@
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
-#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The encoding of source files.
-#
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
-# General information about the project.
-project = 'yellowbrick'
-copyright = '2016, District Data Labs'
-author = 'District Data Labs'
-
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = yb.__version__
-# The full version, including alpha/beta/rc tags.
-release = yb.__version__
-
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
@@ -86,11 +104,9 @@
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
-#
# today = ''
-#
+
# Else, today_fmt is used as the format for a strftime call.
-#
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
@@ -98,23 +114,18 @@
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
-# The reST default role (used for this markup: `text`) to use for all
-# documents.
-#
+# The reST default role (used for this markup: `text`) for all docs.
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
-#
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
-#
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
-#
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
@@ -126,11 +137,61 @@
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
+##########################################################################
+## Extension Configuration
+##########################################################################
+
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
+# Auto-plot settings either as extension or (file format, dpi)
+plot_formats = [
+ 'png',
+ 'pdf',
+ # ('hires.png', 350),
+]
+
+# By default, include the source code generating plots in documentation
+plot_include_source = True
+
+# Whether to show a link to the source in HTML.
+plot_html_show_source_link = True
+
+# Code that should be executed before each plot.
+plot_pre_code = (
+ "import numpy as np\n"
+ "import matplotlib.pyplot as plt\n"
+ "from yellowbrick.datasets import *\n"
+)
+
+# Whether to show links to the files in HTML.
+plot_html_show_formats = True
-# -- Options for HTML output ----------------------------------------------
+# A dictionary containing any non-standard rcParams that should be applied before each plot.
+plot_rcparams = {
+ "figure.figsize": (9,6),
+ "figure.dpi": 128,
+}
+
+# Autodoc requires numpy to skip class members otherwise we get an exception:
+# toctree contains reference to nonexisting document
+# See: https://github.com/phn/pytpm/issues/3#issuecomment-12133978
+numpydoc_show_class_members = False
+
+# Locations of objects.inv files for intersphinx extension that auto-links
+# to external api docs.
+intersphinx_mapping = {
+ 'python': ('https://docs.python.org/3', None),
+ 'matplotlib': ('http://matplotlib.org/', None),
+ 'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
+ 'numpy': ('https://docs.scipy.org/doc/numpy/', None),
+ 'cycler': ('http://matplotlib.org/cycler/', None),
+ 'sklearn': ('http://scikit-learn.org/stable/', None)
+}
+
+##########################################################################
+## Options for HTML output
+##########################################################################
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
@@ -254,7 +315,9 @@ def setup(app):
# Output file base name for HTML help builder.
htmlhelp_basename = 'yellowbrickdoc'
-# -- Options for LaTeX output ---------------------------------------------
+##########################################################################
+## Options for LaTeX output
+##########################################################################
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
@@ -274,12 +337,15 @@ def setup(app):
# 'figure_align': 'htbp',
}
-# Grouping the document tree into LaTeX files. List of tuples
-# (source start file, target name, title,
-# author, documentclass [howto, manual, or own class]).
+# Grouping the document tree into LaTeX files. List of tuples.
latex_documents = [
- (master_doc, 'yellowbrick.tex', 'yellowbrick Documentation',
- 'District Data Labs', 'manual'),
+ (
+ master_doc, # source start file
+ 'yellowbrick.tex', # target name
+ '{} Documentation'.format(project), # title
+ author, # author
+ 'manual' # documentclass [howto,manual, or own class]
+ ),
]
# The name of an image file (relative to this directory) to place at the top of
@@ -308,14 +374,20 @@ def setup(app):
#
# latex_domain_indices = True
-
-# -- Options for manual page output ---------------------------------------
+##########################################################################
+## Options for manual page output
+##########################################################################
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
- (master_doc, 'yellowbrick', 'yellowbrick Documentation',
- [author], 1)
+ (
+ master_doc,
+ project,
+ '{} Documentation'.format(project),
+ [author],
+ 1
+ )
]
# If true, show URL addresses after external links.
@@ -329,9 +401,15 @@ def setup(app):
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
- (master_doc, 'yellowbrick', 'yellowbrick Documentation',
- author, 'yellowbrick', 'One line description of project.',
- 'Miscellaneous'),
+ (
+ master_doc,
+ 'yellowbrick',
+ '{} Documentation'.format(project),
+ author,
+ 'yellowbrick',
+ 'machine learning visualization',
+ 'scientific visualization',
+ ),
]
# Documents to append as an appendix to all manuals.
@@ -348,17 +426,4 @@ def setup(app):
# If true, do not generate a @detailmenu in the "Top" node's menu.
#
-# texinfo_no_detailmenu = False
-
-# Autodoc requires numpy to skip class members otherwise we get an exception:
-# toctree contains reference to nonexisting document
-# See: https://github.com/phn/pytpm/issues/3#issuecomment-12133978
-numpydoc_show_class_members = False
-
-# Locations of objects.inv files for intersphinx extension that auto links to external api docs.
-intersphinx_mapping = {'python': ('https://docs.python.org/3', None),
- 'matplotlib': ('http://matplotlib.org/', None),
- 'scipy': ('http://scipy.github.io/devdocs/', None),
- 'numpy': ('https://docs.scipy.org/doc/numpy-dev/', None),
- 'cycler': ('http://matplotlib.org/cycler/', None),
- 'sklearn': ('http://scikit-learn.org/stable/', None)}
+# texinfo_no_detailmenu = False
\ No newline at end of file
| Scripts to Regenerate Tutorial and Quickstart Documentation Images
In the documentation, the tutorial and quickstart need a script that can autogenerate the images, similar to how this is done in the `doc/api/` section. The code to generate the images is located in the .rst files and can simply be extracted and moved into a .py file.
These scripts can live right alongside their .rst compatriots.
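One possible shape for such a script, following the pattern of the existing `docs/api/` scripts; the visualizer, data path, and output path below are placeholders for whatever the quickstart .rst actually uses.

```python
#!/usr/bin/env python
# Sketch only: lift the real code from the corresponding .rst file.
import pandas as pd
from yellowbrick.features import ParallelCoordinates


def draw_quickstart(outpath="images/quickstart_parallel_coordinates.png"):
    data = pd.read_csv("../../examples/data/occupancy/occupancy.csv")
    features = ["temperature", "relative humidity", "light", "C02", "humidity"]

    viz = ParallelCoordinates(features=features)
    viz.fit_transform(data[features].values, data["occupancy"].values)
    viz.poof(outpath=outpath)  # write the image next to the .rst source


if __name__ == "__main__":
    draw_quickstart()
```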
| 2018-05-23T05:34:50 |
||
DistrictDataLabs/yellowbrick | 448 | DistrictDataLabs__yellowbrick-448 | [
"447"
] | 0ed41c74151b774fc73e857be9371c8ffc6b02a4 | diff --git a/docs/api/features/pcoords.py b/docs/api/features/pcoords.py
--- a/docs/api/features/pcoords.py
+++ b/docs/api/features/pcoords.py
@@ -1,7 +1,25 @@
+import time
+import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from yellowbrick.features import ParallelCoordinates
+from sklearn.datasets import load_iris
+
+
+def load_occupancy_data():
+ # Load the classification data set
+ data = pd.read_csv("../../../examples/data/occupancy/occupancy.csv")
+
+ # Specify the features of interest and the classes of the target
+ features = ["temperature", "relative humidity", "light", "C02", "humidity"]
+ classes = ['unoccupied', 'occupied']
+
+ # Extract the numpy arrays from the data frame
+ X = data[features].as_matrix()
+ y = data.occupancy.as_matrix()
+
+ return X, y, features, classes
def pcoords(X, y, outpath, **kwargs):
@@ -10,30 +28,102 @@ def pcoords(X, y, outpath, **kwargs):
# Create the visualizer
visualizer = ParallelCoordinates(ax=ax, **kwargs)
- visualizer.fit(X, y)
- visualizer.transform(X)
+ visualizer.fit_transform(X, y)
# Save to disk
visualizer.poof(outpath=outpath)
-if __name__ == '__main__':
- # Load the classification data set
- data = pd.read_csv("../../../examples/data/occupancy/occupancy.csv")
+def plot_fast_vs_slow():
+ data = load_iris()
- # Specify the features of interest and the classes of the target
- features = ["temperature", "relative humidity", "light", "C02", "humidity"]
- classes = ['unoccupied', 'occupied']
+ _, axes = plt.subplots(nrows=2, figsize=(9,9))
+
+ for idx, fast in enumerate((False, True)):
+ title = "Fast Parallel Coordinates" if fast else "Standard Parallel Coordinates"
+ oz = ParallelCoordinates(ax=axes[idx], fast=fast, title=title)
+ oz.fit_transform(data.data, data.target)
+ oz.finalize()
+
+ plt.tight_layout()
+ plt.savefig("images/fast_vs_slow_parallel_coordinates.png")
- # Extract the numpy arrays from the data frame
- X = data[features].as_matrix()
- y = data.occupancy.as_matrix()
+
+def plot_speedup(trials=5, factors=np.arange(1, 11)):
+
+ def pcoords_time(X, y, fast=True):
+ _, ax = plt.subplots()
+ oz = ParallelCoordinates(fast=fast, ax=ax)
+
+ start = time.time()
+ oz.fit_transform(X, y)
+ delta = time.time() - start
+
+ plt.cla() # clear current axis
+ plt.clf() # clear current figure
+ plt.close("all") # close all existing plots
+
+ return delta
+
+ def pcoords_speedup(X, y):
+ fast_time = pcoords_time(X, y, fast=True)
+ slow_time = pcoords_time(X, y, fast=False)
+
+ return slow_time / fast_time
+
+ data = load_iris()
+
+ speedups = []
+ variance = []
+
+ for factor in factors:
+ X = np.repeat(data.data, factor, axis=0)
+ y = np.repeat(data.target, factor, axis=0)
+
+ local_speedups = []
+ for trial in range(trials):
+ local_speedups.append(pcoords_speedup(X, y))
+
+ local_speedups = np.array(local_speedups)
+ speedups.append(local_speedups.mean())
+ variance.append(local_speedups.std())
+
+ speedups = np.array(speedups)
+ variance = np.array(variance)
+
+ series = pd.Series(speedups, index=factors)
+ _, ax = plt.subplots(figsize=(9,6))
+ series.plot(ax=ax, marker='o', label="speedup factor", color='b')
+
+ # Plot one standard deviation above and below the mean
+ ax.fill_between(
+ factors, speedups - variance, speedups + variance, alpha=0.25,
+ color='b',
+ )
+
+ ax.set_ylabel("speedup factor")
+ ax.set_xlabel("dataset size (number of repeats in Iris dataset)")
+ ax.set_title("Speed Improvement of Fast Parallel Coordinates")
+ plt.savefig("images/fast_parallel_coordinates_speedup.png")
+
+
+if __name__ == '__main__':
+ # plot_fast_vs_slow()
+ # plot_speedup()
+
+ # Occupancy data visualizations
+ X, y, features, classes = load_occupancy_data()
# Draw the full, original parallel coordinates
- pcoords(X, y, "images/parallel_coordinates.png", classes=classes, features=features)
+ pcoords(
+ X, y, "images/parallel_coordinates.png",
+ classes=classes, features=features,
+ sample=0.05, shuffle=True, random_state=19,
+ )
# Draw the noramlized, sampled parallel coordinates
- pcoords(X, y, "images/normalized_sampled_parallel_coordinates.png",
+ pcoords(
+ X, y, "images/normalized_sampled_parallel_coordinates.png",
classes=classes, features=features,
- normalize='standard', sample=0.1,
+ normalize='standard', sample=0.05, shuffle=True, random_state=19,
)
diff --git a/yellowbrick/features/pcoords.py b/yellowbrick/features/pcoords.py
--- a/yellowbrick/features/pcoords.py
+++ b/yellowbrick/features/pcoords.py
@@ -1,8 +1,9 @@
# yellowbrick.features.pcoords
# Implementations of parallel coordinates for feature analysis.
#
-# Author: Benjamin Bengfort <[email protected]>
-# Created: Mon Oct 03 21:46:06 2016 -0400
+# Author: Benjamin Bengfort <[email protected]>
+# Author: @thekylesaurus
+# Created: Mon Oct 03 21:46:06 2016 -0400
#
# Copyright (C) 2016 District Data Labs
# For license information, see LICENSE.txt
@@ -10,19 +11,19 @@
# ID: pcoords.py [0f4b236] [email protected] $
"""
-Implementations of parallel coordinates for multi-dimensional feature
-analysis. There are a variety of parallel coordinates from Andrews Curves to
-coordinates that optimize column order.
+Implementation of parallel coordinates for multi-dimensional feature analysis.
"""
##########################################################################
## Imports
##########################################################################
-from copy import copy
-from numpy import hstack, ones
-from numpy.random import RandomState
+import numpy as np
+from six import string_types
+from matplotlib import patches
+from operator import itemgetter
+from numpy.random import RandomState
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler
from sklearn.preprocessing import Normalizer, StandardScaler
@@ -38,7 +39,8 @@
def parallel_coordinates(X, y, ax=None, features=None, classes=None,
normalize=None, sample=1.0, color=None, colormap=None,
- vlines=True, vlines_kwds=None, **kwargs):
+ alpha=None, fast=False, vlines=True, vlines_kwds=None,
+ **kwargs):
"""Displays each feature as a vertical axis and each instance as a line.
This helper function is a quick wrapper to utilize the ParallelCoordinates
@@ -87,6 +89,17 @@ def parallel_coordinates(X, y, ax=None, features=None, classes=None,
Use either color to colorize the lines on a per class basis or
colormap to color them on a continuous scale.
+ alpha : float, default: None
+ Specify a transparency where 1 is completely opaque and 0 is completely
+ transparent. This property makes densely clustered lines more visible.
+ If None, the alpha is set to 0.5 in "fast" mode and 0.25 otherwise.
+
+ fast : bool, default: False
+ Fast mode improves the performance of the drawing time of parallel
+ coordinates but produces an image that does not show the overlap of
+ instances in the same class. Fast mode should be used when drawing all
+ instances is too burdensome and sampling is not an option.
+
vlines : boolean, default: True
flag to determine vertical line display
@@ -104,8 +117,8 @@ def parallel_coordinates(X, y, ax=None, features=None, classes=None,
"""
# Instantiate the visualizer
visualizer = ParallelCoordinates(
- ax, features, classes, normalize, sample, color, colormap, vlines,
- vlines_kwds, **kwargs
+ ax, features, classes, normalize, sample, color, colormap, alpha,
+ fast, vlines, vlines_kwds, **kwargs
)
# Fit and transform the visualizer (calls draw)
@@ -124,7 +137,8 @@ class ParallelCoordinates(DataVisualizer):
"""
Parallel coordinates displays each feature as a vertical axis spaced
evenly along the horizontal, and each instance as a line drawn between
- each individual axis.
+ each individual axis. This allows you to detect braids of similar instances
+ and separability that suggests a good classification problem.
Parameters
----------
@@ -172,6 +186,17 @@ class ParallelCoordinates(DataVisualizer):
Use either color to colorize the lines on a per class basis or
colormap to color them on a continuous scale.
+ alpha : float, default: None
+ Specify a transparency where 1 is completely opaque and 0 is completely
+ transparent. This property makes densely clustered lines more visible.
+ If None, the alpha is set to 0.5 in "fast" mode and 0.25 otherwise.
+
+ fast : bool, default: False
+ Fast mode improves the performance of the drawing time of parallel
+ coordinates but produces an image that does not show the overlap of
+ instances in the same class. Fast mode should be used when drawing all
+ instances is too burdensome and sampling is not an option.
+
vlines : boolean, default: True
flag to determine vertical line display
@@ -184,7 +209,6 @@ class ParallelCoordinates(DataVisualizer):
Attributes
--------
-
n_samples_ : int
number of samples included in the visualization object
@@ -203,7 +227,7 @@ class ParallelCoordinates(DataVisualizer):
process, but can and should be set as early as possible.
"""
- normalizers = {
+ NORMALIZERS = {
'minmax': MinMaxScaler(),
'maxabs': MaxAbsScaler(),
'standard': StandardScaler(),
@@ -211,15 +235,28 @@ class ParallelCoordinates(DataVisualizer):
'l2': Normalizer('l2'),
}
- def __init__(self, ax=None, features=None, classes=None, normalize=None,
- sample=1.0, random_state=None, shuffle=False, color=None, colormap=None, vlines=True,
- vlines_kwds=None, **kwargs):
+ def __init__(self,
+ ax=None,
+ features=None,
+ classes=None,
+ normalize=None,
+ sample=1.0,
+ random_state=None,
+ shuffle=False,
+ color=None,
+ colormap=None,
+ alpha=None,
+ fast=False,
+ vlines=True,
+ vlines_kwds=None,
+ **kwargs):
+
super(ParallelCoordinates, self).__init__(
ax, features, classes, color, colormap, **kwargs
)
# Validate 'normalize' argument
- if normalize in self.normalizers or normalize is None:
+ if normalize in self.NORMALIZERS or normalize is None:
self.normalize = normalize
else:
raise YellowbrickValueError(
@@ -263,12 +300,18 @@ def __init__(self, ax=None, features=None, classes=None, normalize=None,
else:
self._rng = None
- # Visual Parameters
+ # Visual and drawing parameters
+ self.fast = fast
+ self.alpha = alpha
self.show_vlines = vlines
self.vlines_kwds = vlines_kwds or {
'linewidth': 1, 'color': 'black'
}
+ # Internal properties
+ self._increments = None
+ self._colors = None
+
def fit(self, X, y=None, **kwargs):
"""
The fit method is the primary drawing input for the
@@ -292,24 +335,45 @@ def fit(self, X, y=None, **kwargs):
Returns the instance of the transformer/visualizer
"""
- # Convert from dataframe
+ # Convert from pandas data types
if is_dataframe(X):
+ # Get column names before reverting to an np.ndarray
+ if self.features_ is None:
+ self.features_ = np.array(X.columns)
+
X = X.as_matrix()
if is_series(y):
y = y.as_matrix()
- # Subsample
+ # Assign integer labels to the feature columns from the input
+ if self.features_ is None:
+ self.features_ = np.arange(0, X.shape[1])
+
+ # Ensure that all classes are represented in the color mapping (before sample)
+ # NOTE: np.unique also specifies the ordering of the classes
+ if self.classes_ is None:
+ self.classes_ = [str(label) for label in np.unique(y)]
+
+ # Create the color mapping for each class
+ # TODO: Allow both colormap, listed colors, and palette definition
+ # TODO: Make this an independent function or property for override!
+ color_values = resolve_colors(
+ n_colors=len(self.classes_), colormap=self.colormap, colors=self.color
+ )
+ self._colors = dict(zip(self.classes_, color_values))
+
+ # Ticks for each feature specified
+ self._increments = np.arange(len(self.features_))
+
+ # Subsample instances
X, y = self._subsample(X, y)
- # Normalize
+ # Normalize instances
if self.normalize is not None:
- X = self.normalizers[self.normalize].fit_transform(X)
+ X = self.NORMALIZERS[self.normalize].fit_transform(X)
# the super method calls draw and returns self
- super(ParallelCoordinates, self).fit(X, y, **kwargs)
-
- # Fit always returns self.
- return self
+ return super(ParallelCoordinates, self).fit(X, y, **kwargs)
def draw(self, X, y, **kwargs):
"""
@@ -328,51 +392,100 @@ def draw(self, X, y, **kwargs):
Pass generic arguments to the drawing method
"""
+ if self.fast:
+ return self.draw_classes(X, y, **kwargs)
+ return self.draw_instances(X, y, **kwargs)
- # Get shape of the sampled data
- nrows, ncols = X.shape
+ def draw_instances(self, X, y, **kwargs):
+ """
+ Draw the instances colored by the target y such that each line is a
+ single instance. This is the "slow" mode of drawing, since each
+ instance has to be drawn individually. However, in so doing, the
+ density of instances in braids is more apparent since lines have an
+ independent alpha that is compounded in the figure.
- # Create the xticks for each column
- # TODO: Allow the user to specify this feature
- increments = list(range(ncols))
+ This is the default method of drawing.
- # Create the colors
- # TODO: Allow both colormap, listed colors, and palette definition
- # TODO: Make this an independent function or property for override!
- color_values = resolve_colors(
- n_colors=len(self.classes_), colormap=self.colormap, colors=self.color
- )
- colors = dict(zip(self.classes_, color_values))
+ Parameters
+ ----------
+ X : ndarray of shape n x m
+ A matrix of n instances with m features
+
+ y : ndarray of length n
+ An array or series of target or class values
+
+ Notes
+ -----
+ This method can be used to draw additional instances onto the parallel
+ coordinates before the figure is finalized.
+ """
+ # Get alpha from param or default
+ alpha = self.alpha or 0.25
- # TODO: Make this function compatible with DataFrames!
- # TODO: Make an independent function to allow addition of instances!
+ for idx in range(len(X)):
+ Xi = X[idx]
+ yi = y[idx]
- # Prepare to flatten data within each class
- # introduce separation between individual data points using None in x-values and arbitrary value (one) in
- # y-values
- X_separated = hstack([X, ones((nrows, 1))])
- increments_separated = copy(increments)
+ # TODO: generalize this duplicated code into a single function
+ if isinstance(yi, string_types):
+ label = yi
+ else:
+ # TODO: what happens if yi is not in classes?!
+ label = self.classes_[yi]
+
+ self.ax.plot(
+ self._increments, Xi,
+ color=self._colors[label], alpha=alpha, **kwargs
+ )
+
+ return self.ax
+
+ def draw_classes(self, X, y, **kwargs):
+ """
+ Draw the instances colored by the target y such that each line is a
+ single class. This is the "fast" mode of drawing, since the number of
+ lines drawn equals the number of classes, rather than the number of
+ instances. However, this drawing method sacrifices inter-class density
+ of points using the alpha parameter.
+
+ Parameters
+ ----------
+ X : ndarray of shape n x m
+ A matrix of n instances with m features
+
+ y : ndarray of length n
+ An array or series of target or class values
+ """
+ # Get alpha from param or default
+ alpha = self.alpha or 0.5
+
+ # Prepare to flatten data within each class:
+ # introduce separation between individual data points using None in
+ # x-values and arbitrary value (one) in y-values
+ X_separated = np.hstack([X, np.ones((X.shape[0], 1))])
+ increments_separated = self._increments.tolist()
increments_separated.append(None)
- # Plot each class
- for label, color in sorted(colors.items()):
- y_as_str = y.astype('str') # must be consistent with class conversion in DataVisualizer.fit()
- X_in_class = X_separated[y_as_str == label, :]
+ # Get the classes that exist in the dataset, y
+ y_values = np.unique(y)
+
+ # Plot each class as a single line plot
+ for yi in y_values:
+ if isinstance(yi, string_types):
+ label = yi
+ else:
+ # TODO: what happens if yi is not in classes?!
+ label = self.classes_[yi]
+
+ X_in_class = X_separated[y == yi, :]
increments_in_class = increments_separated * len(X_in_class)
if len(X_in_class) > 0:
- self.ax.plot(increments_in_class, X_in_class.flatten(),
- label=label, color=colors[label], alpha=0.25, linewidth=1, **kwargs)
-
- # Add the vertical lines
- # TODO: Make an independent function for override!
- if self.show_vlines:
- for idx in increments:
- self.ax.axvline(idx, **self.vlines_kwds)
+ self.ax.plot(
+ increments_in_class, X_in_class.flatten(), linewidth=1,
+ color=self._colors[label], alpha=alpha, **kwargs
+ )
- # Set the limits
- self.ax.set_xticks(increments)
- self.ax.set_xticklabels(self.features_)
- self.ax.set_xlim(increments[0], increments[-1])
+ return self.ax
def finalize(self, **kwargs):
"""
@@ -389,8 +502,25 @@ def finalize(self, **kwargs):
'Parallel Coordinates for {} Features'.format(len(self.features_))
)
- # Set the legend and the grid
- self.ax.legend(loc='best')
+ # Add the vertical lines
+ # TODO: Make an independent function for override!
+ if self.show_vlines:
+ for idx in self._increments:
+ self.ax.axvline(idx, **self.vlines_kwds)
+
+ # Set the limits
+ self.ax.set_xticks(self._increments)
+ self.ax.set_xticklabels(self.features_)
+ self.ax.set_xlim(self._increments[0], self._increments[-1])
+
+ # Add the legend
+ handles = [
+ patches.Patch(color=color, label=label)
+ for label, color in sorted(self._colors.items(), key=itemgetter(0))
+ ]
+ self.ax.legend(handles=handles, loc='best', frameon=True)
+
+ # Add the grid view
self.ax.grid()
def _subsample(self, X, y):
| diff --git a/tests/baseline_images/test_features/test_pcoords/test_alpha.png b/tests/baseline_images/test_features/test_pcoords/test_alpha.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_alpha.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_alpha_fast.png b/tests/baseline_images/test_features/test_pcoords/test_alpha_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_alpha_fast.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_integrated_pcoords.png b/tests/baseline_images/test_features/test_pcoords/test_integrated_pcoords.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_features/test_pcoords/test_integrated_pcoords.png and /dev/null differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_labels.png b/tests/baseline_images/test_features/test_pcoords/test_labels.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_labels.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_labels_fast.png b/tests/baseline_images/test_features/test_pcoords/test_labels_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_labels_fast.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_l2.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_l2.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_normalized_l2.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_l2_fast.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_l2_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_normalized_l2_fast.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax_fast.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_normalized_minmax_fast.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png b/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_features/test_pcoords/test_normalized_pcoords.png and /dev/null differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_fast.png b/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_fast.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_sampled.png b/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_sampled.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_pandas_integration_sampled.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png
Binary files a/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png and b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords.png differ
diff --git a/tests/baseline_images/test_features/test_pcoords/test_parallel_coords_fast.png b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords_fast.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_pcoords/test_parallel_coords_fast.png differ
diff --git a/tests/test_features/test_pcoords.py b/tests/test_features/test_pcoords.py
--- a/tests/test_features/test_pcoords.py
+++ b/tests/test_features/test_pcoords.py
@@ -1,8 +1,9 @@
# tests.test_features.test_pcoords
# Testing for the parallel coordinates feature visualizers
#
-# Author: Benjamin Bengfort <[email protected]>
-# Created: Thu Oct 06 11:21:27 2016 -0400
+# Author: Benjamin Bengfort <[email protected]>
+# Author: @thekylesaurus
+# Created: Thu Oct 06 11:21:27 2016 -0400
#
# Copyright (C) 2016 District Data Labs
# For license information, see LICENSE.txt
@@ -17,134 +18,279 @@
## Imports
##########################################################################
-import sys
import pytest
import numpy as np
-from tests.base import VisualTestCase
from yellowbrick.features.pcoords import *
-from tests.dataset import DatasetMixin
+
+from tests.base import VisualTestCase
+from tests.dataset import DatasetMixin, Dataset
+from sklearn.datasets import make_classification
+
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
##########################################################################
-## Parallel Coordinates Tests
+## Fixtures
##########################################################################
[email protected](scope='class')
+def dataset(request):
+ """
+ Creates a random multiclass classification dataset fixture
+ """
+ X, y = make_classification(
+ n_samples=200, n_features=5, n_informative=4, n_redundant=0,
+ n_classes=3, n_clusters_per_class=1, random_state=451, flip_y=0,
+ class_sep=3, scale=np.array([1.0, 2.0, 100.0, 20.0, 1.0])
+ )
-class ParallelCoordinatesTests(VisualTestCase, DatasetMixin):
+ dataset = Dataset(X, y)
+ request.cls.dataset = dataset
- X = np.array(
- [[ 2.318, 2.727, 4.260, 7.212, 4.792],
- [ 2.315, 2.726, 4.295, 7.140, 4.783,],
- [ 2.315, 2.724, 4.260, 7.135, 4.779,],
- [ 2.110, 3.609, 4.330, 7.985, 5.595,],
- [ 2.110, 3.626, 4.330, 8.203, 5.621,],
- [ 2.110, 3.620, 4.470, 8.210, 5.612,]]
- )
- y = np.array([1, 1, 0, 1, 0, 0])
+
+##########################################################################
+## Parallel Coordinates Tests
+##########################################################################
+
[email protected]('dataset')
+class TestParallelCoordinates(VisualTestCase, DatasetMixin):
+ """
+ Test the ParallelCoordinates visualizer
+ """
def test_parallel_coords(self):
"""
- Assert no errors occur during parallel coordinates integration
+ Test images closeness on random 3 class dataset
"""
visualizer = ParallelCoordinates()
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer.poof()
self.assert_images_similar(visualizer, tol=0.25)
- @pytest.mark.xfail(
- sys.platform == 'win32', reason="images not close on windows"
- )
- def test_normalized_pcoords(self):
+ def test_parallel_coords_fast(self):
"""
- Assert no errors occur using 'normalize' argument
+ Test images closeness on random 3 class dataset in fast mode
"""
- visualizer = ParallelCoordinates(normalize='l2')
- visualizer.fit_transform(self.X, self.y)
+ visualizer = ParallelCoordinates(fast=True)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_alpha(self):
+ """
+ Test image closeness on opaque alpha for random 3 class dataset
+ """
+ visualizer = ParallelCoordinates(alpha=1.0)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_alpha_fast(self):
+ """
+ Test image closeness on opaque alpha for random 3 class dataset in fast mode
+ """
+ visualizer = ParallelCoordinates(alpha=1.0, fast=True)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_labels(self):
+ """
+ Test image closeness when class and feature labels are supplied
+ """
+ visualizer = ParallelCoordinates(
+ classes=['a', 'b', 'c'], features=['f1', 'f2', 'f3', 'f4', 'f5']
+ )
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer.poof()
self.assert_images_similar(visualizer)
- def test_normalized_pcoords_invalid_arg(self):
+ def test_labels_fast(self):
+ """
+ Test image closeness when class and feature labels are supplied in fast mode
+ """
+ visualizer = ParallelCoordinates(
+ classes=['a', 'b', 'c'], features=['f1', 'f2', 'f3', 'f4', 'f5'], fast=True
+ )
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer)
+
+ def test_normalized_l2(self):
+ """
+ Test image closeness on l2 normalized 3 class dataset
+ """
+ visualizer = ParallelCoordinates(normalize='l2')
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_normalized_l2_fast(self):
+ """
+ Test image closeness on l2 normalized 3 class dataset in fast mode
+ """
+ visualizer = ParallelCoordinates(normalize='l2', fast=True)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_normalized_minmax(self):
+ """
+ Test image closeness on minmax normalized 3 class dataset
+ """
+ visualizer = ParallelCoordinates(normalize='minmax')
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ def test_normalized_minmax_fast(self):
+ """
+ Test image closeness on minmax normalized 3 class dataset in fast mode
+ """
+ visualizer = ParallelCoordinates(normalize='minmax', fast=True)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.25)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_pandas_integration_sampled(self):
+ """
+ Test on a real dataset with pandas DataFrame and Series sampled for speed
+ """
+ df = self.load_pandas("occupancy")
+
+ target = "occupancy"
+ features = [
+ 'temperature', 'relative humidity', 'light', 'C02', 'humidity'
+ ]
+
+ X = df[features]
+ y = pd.Series([
+ 'occupied' if yi == 1 else 'unoccupied' for yi in df[target]
+ ])
+
+ assert isinstance(X, pd.DataFrame)
+ assert isinstance(y, pd.Series)
+
+ oz = ParallelCoordinates(sample=0.05, shuffle=True, random_state=4291)
+ oz.fit_transform(X, y)
+ oz.poof()
+
+ self.assert_images_similar(oz, tol=0.1)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_pandas_integration_fast(self):
+ """
+ Test on a real dataset with pandas DataFrame and Series in fast mode
+ """
+ df = self.load_pandas("occupancy")
+
+ target = "occupancy"
+ features = [
+ 'temperature', 'relative humidity', 'light', 'C02', 'humidity'
+ ]
+
+ X = df[features]
+ y = pd.Series([
+ 'occupied' if yi == 1 else 'unoccupied' for yi in df[target]
+ ])
+
+ assert isinstance(X, pd.DataFrame)
+ assert isinstance(y, pd.Series)
+
+ oz = ParallelCoordinates(fast=True)
+ oz.fit_transform(X, y)
+ oz.poof()
+
+ self.assert_images_similar(oz, tol=0.1)
+
+ def test_normalized_invalid_arg(self):
"""
Invalid argument to 'normalize' should raise
"""
with self.assertRaises(YellowbrickValueError):
ParallelCoordinates(normalize='foo')
- def test_pcoords_sample_int(self):
+ def test_sample_int(self):
"""
Assert no errors occur using integer 'sample' argument
"""
visualizer = ParallelCoordinates(sample=10)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_int_shuffle(self):
+ def test_sample_int_shuffle(self):
"""
Assert no errors occur using integer 'sample' argument and shuffle, with different random_state args
"""
visualizer = ParallelCoordinates(sample=3, shuffle=True)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=3, shuffle=True, random_state=444)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=3, shuffle=True, random_state=np.random.RandomState())
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_int_shuffle_false(self):
+ def test_sample_int_shuffle_false(self):
"""
Assert no errors occur using integer 'sample' argument and shuffle, with different random_state args
"""
visualizer = ParallelCoordinates(sample=3, shuffle=False)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=3, shuffle=False, random_state=444)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=3, shuffle=False, random_state=np.random.RandomState())
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_int_invalid(self):
+ def test_sample_int_invalid(self):
"""
- Negative int values should raise
+ Negative int values should raise an exception
"""
with self.assertRaises(YellowbrickValueError):
ParallelCoordinates(sample=-1)
- def test_pcoords_sample_float(self):
+ def test_sample_float(self):
"""
Assert no errors occur using float 'sample' argument
"""
visualizer = ParallelCoordinates(sample=0.5)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_float_shuffle(self):
+ def test_sample_float_shuffle(self):
"""
Assert no errors occur using float 'sample' argument and shuffle, with different random_state args
"""
visualizer = ParallelCoordinates(sample=0.5, shuffle=True)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=0.5, shuffle=True, random_state=444)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=0.5, shuffle=True, random_state=np.random.RandomState())
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_float_shuffle_false(self):
+ def test_sample_float_shuffle_false(self):
"""
Assert no errors occur using float 'sample' argument and shuffle, with different random_state args
"""
visualizer = ParallelCoordinates(sample=0.5, shuffle=False)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=0.5, shuffle=False, random_state=444)
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
visualizer = ParallelCoordinates(sample=0.5, shuffle=False, random_state=np.random.RandomState())
- visualizer.fit_transform(self.X, self.y)
+ visualizer.fit_transform(self.dataset.X, self.dataset.y)
- def test_pcoords_sample_float_invalid(self):
+ def test_sample_float_invalid(self):
"""
Float values for 'sample' argument outside [0,1] should raise.
"""
@@ -153,37 +299,13 @@ def test_pcoords_sample_float_invalid(self):
with self.assertRaises(YellowbrickValueError):
ParallelCoordinates(sample=1.1)
- def test_pcoords_sample_invalid_type(self):
+ def test_sample_invalid_type(self):
"""
Non-numeric values for 'sample' argument should raise.
"""
with self.assertRaises(YellowbrickTypeError):
ParallelCoordinates(sample='foo')
- @pytest.mark.xfail(
- sys.platform == 'win32', reason="images not close on windows"
- )
- def test_integrated_pcoords(self):
- """
- Test parallel coordinates on a real data set (downsampled for speed)
- """
- occupancy = self.load_data('occupancy')
-
- X = occupancy[[
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]]
-
- y = occupancy['occupancy'].astype(int)
-
- # Convert X to an ndarray
- X = X.copy().view((float, len(X.dtype.names)))
-
- # Test the visualizer
- visualizer = ParallelCoordinates(sample=200)
- visualizer.fit_transform(X, y)
- visualizer.poof()
- self.assert_images_similar(visualizer)
-
@staticmethod
def test_static_subsample():
"""
@@ -244,4 +366,3 @@ def test_static_subsample():
assert np.array_equal(Xprime, X[yprime.flatten(), :])
assert len(Xprime) == ntotal * sample
assert len(yprime) == ntotal * sample
-
| Fix ParallelCoordinates tests and add fast parameter
See #420
- [x] fix image comparison tests for parallel coordinates
- [x] add `fast=False` argument and docstring (usage sketch below)
- [x] create fast and regular drawing methods based on parameter
- [x] add section in documentation explaining fast vs. slow
- [x] update #230 and #59
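For reference, a minimal usage sketch of the proposed `fast` flag (the iris data and its labels are illustrative stand-ins, not the datasets exercised in the tests):

```
# Sketch only: exercises the proposed fast flag on a small toy dataset.
# The iris data here is an illustrative assumption, not the test data.
from sklearn.datasets import load_iris
from yellowbrick.features import ParallelCoordinates

data = load_iris()

# fast=False (the default) keeps the current per-instance drawing;
# fast=True selects the quicker drawing method this PR adds.
viz = ParallelCoordinates(
    classes=list(data.target_names),
    features=list(data.feature_names),
    fast=True,
)
viz.fit_transform(data.data, data.target)
viz.poof()
```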
| 2018-05-23T21:31:04 |
|
DistrictDataLabs/yellowbrick | 480 | DistrictDataLabs__yellowbrick-480 | [
"264"
] | 4d0483edd1468855df714f77bca1a0a93f01cbce | diff --git a/docs/api/regressor/residuals.py b/docs/api/regressor/residuals.py
--- a/docs/api/regressor/residuals.py
+++ b/docs/api/regressor/residuals.py
@@ -1,4 +1,5 @@
import pandas as pd
+import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
@@ -6,24 +7,38 @@
from yellowbrick.regressor import ResidualsPlot
-if __name__ == '__main__':
+def plot_residuals(X, y, model, outpath="images/residuals.png", **kwargs):
+ X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+
+ _, ax = plt.subplots()
+
+ visualizer = ResidualsPlot(model, ax=ax, **kwargs)
+ visualizer.fit(X_train, y_train)
+ visualizer.score(X_test, y_test)
+ visualizer.poof(outpath=outpath)
+
+
+def load_concrete():
# Load the regression data set
df = pd.read_csv("../../../examples/data/concrete/concrete.csv")
- feature_names = ['cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age']
+ feature_names = [
+ 'cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age'
+ ]
target_name = 'strength'
# Get the X and y data from the DataFrame
- X = df[feature_names].as_matrix()
- y = df[target_name].as_matrix()
+ X = df[feature_names]
+ y = df[target_name]
+
+ return X, y
- # Create the train and test data
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
- # Instantiate the linear model and visualizer
- ridge = Ridge()
- visualizer = ResidualsPlot(ridge)
- visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
- visualizer.score(X_test, y_test) # Evaluate the model on the test data
- g = visualizer.poof(outpath="images/residuals.png") # Draw/show/poof the data
+if __name__ == '__main__':
+ # Draw the default residuals graph
+ X, y = load_concrete()
+ plot_residuals(X, y, Ridge())
+
+ # Draw the residuals graph with no histogram
+ plot_residuals(X, y, Ridge(), "images/residuals_no_hist.png", hist=False)
diff --git a/yellowbrick/regressor/residuals.py b/yellowbrick/regressor/residuals.py
--- a/yellowbrick/regressor/residuals.py
+++ b/yellowbrick/regressor/residuals.py
@@ -18,10 +18,22 @@
## Imports
##########################################################################
+
+import matplotlib.pyplot as plt
+
+try:
+ # Only available in Matplotlib >= 2.0.2
+ from mpl_toolkits.axes_grid1 import make_axes_locatable
+except ImportError:
+ make_axes_locatable = None
+
from sklearn.model_selection import train_test_split
-from ..style.palettes import LINE_COLOR
from .base import RegressionScoreVisualizer
+
+from ..style.palettes import LINE_COLOR
+from ..utils.decorators import memoized
+from ..exceptions import YellowbrickValueError
from ..bestfit import draw_best_fit, draw_identity_line
@@ -31,6 +43,7 @@
"ResidualsPlot", "residuals_plot"
]
+
##########################################################################
## Prediction Error Plots
##########################################################################
@@ -46,7 +59,7 @@ class PredictionError(RegressionScoreVisualizer):
----------
model : a Scikit-Learn regressor
- Should be an instance of a regressor, otherwise a will raise a
+ Should be an instance of a regressor, otherwise will raise a
YellowbrickTypeError exception on instantiation.
ax : matplotlib Axes, default: None
@@ -313,15 +326,18 @@ class ResidualsPlot(RegressionScoreVisualizer):
Parameters
----------
-
model : a Scikit-Learn regressor
- Should be an instance of a regressor, otherwise a will raise a
+ Should be an instance of a regressor, otherwise will raise a
YellowbrickTypeError exception on instantiation.
ax : matplotlib Axes, default: None
The axes to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
+ hist : bool, default: True
+ Draw a histogram showing the distribution of the residuals on the
+ right side of the figure. Requires Matplotlib >= 2.0.2.
+
train_color : color, default: 'b'
Residuals for training data are ploted with this color but also
given an opacity of 0.5 to ensure that the test data residuals
@@ -331,7 +347,7 @@ class ResidualsPlot(RegressionScoreVisualizer):
Residuals for test data are plotted with this color. In order to
create generalizable models, reserved test data residuals are of
the most analytical interest, so these points are highlighted by
- hvaing full opacity. Can be any matplotlib color.
+ having full opacity. Can be any matplotlib color.
line_color : color, default: dark grey
Defines the color of the zero error line, can be any matplotlib color.
@@ -352,28 +368,51 @@ class ResidualsPlot(RegressionScoreVisualizer):
Notes
-----
-
ResidualsPlot is a ScoreVisualizer, meaning that it wraps a model and
- its primary entry point is the `score()` method.
+ its primary entry point is the ``score()`` method.
+
+ The residuals histogram feature requires matplotlib 2.0.2 or greater.
"""
- def __init__(self, model, ax=None, **kwargs):
+ def __init__(self, model, ax=None, hist=True, train_color='b',
+ test_color='g', line_color=LINE_COLOR, **kwargs):
super(ResidualsPlot, self).__init__(model, ax=ax, **kwargs)
- # TODO Is there a better way to differentiate between train and test points?
- # We'd like to color them differently in draw...
- # Can the user pass those in as keyword arguments?
+ # TODO: allow more scatter plot arguments for train and test points
+ # See #475 (RE: ScatterPlotMixin)
self.colors = {
- 'train_point': kwargs.pop('train_color', 'b'),
- 'test_point': kwargs.pop('test_color', 'g'),
- 'line': kwargs.pop('line_color', LINE_COLOR),
+ 'train_point': train_color,
+ 'test_point': test_color,
+ 'line': line_color,
}
- def fit(self, X, y=None, **kwargs):
+ self.hist = hist
+ if self.hist:
+ self.hax # If hist is True, test the version availability
+
+ @memoized
+ def hax(self):
+ """
+ Returns the histogram axes, creating it only on demand.
+ """
+ if make_axes_locatable is None:
+ raise YellowbrickValueError((
+ "residuals histogram requires matplotlib 2.0.2 or greater; "
+ "please upgrade matplotlib or set hist=False on the visualizer"
+ ))
+
+ divider = make_axes_locatable(self.ax)
+
+ hax = divider.append_axes("right", size=1, pad=0.1, sharey=self.ax)
+ hax.yaxis.tick_right()
+ hax.grid(False, axis='x')
+
+ return hax
+
+ def fit(self, X, y, **kwargs):
"""
Parameters
----------
-
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features
@@ -381,9 +420,14 @@ def fit(self, X, y=None, **kwargs):
An array or series of target values
kwargs: keyword arguments passed to Scikit-Learn API.
+
+ Returns
+ -------
+ self : visualizer instance
"""
super(ResidualsPlot, self).fit(X, y, **kwargs)
self.score(X, y, train=True)
+ return self
def score(self, X, y=None, train=False, **kwargs):
"""
@@ -405,9 +449,9 @@ def score(self, X, y=None, train=False, **kwargs):
Returns
------
-
- ax : the axis with the plotted figure
-
+ score : float
+ The score of the underlying estimator, usually the R-squared score
+ for regression estimators.
"""
score = self.estimator.score(X, y, **kwargs)
if train:
@@ -423,6 +467,11 @@ def score(self, X, y=None, train=False, **kwargs):
def draw(self, y_pred, residuals, train=False, **kwargs):
"""
+ Draw the residuals against the predicted value for the specified split.
+ It is best to draw the training split first, then the test split so
+ that the test split (usually smaller) is above the training split;
+ particularly if the histogram is turned on.
+
Parameters
----------
y_pred : ndarray or Series of length n
@@ -432,16 +481,14 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
An array or series of the difference between the predicted and the
target values
- train : boolean
+ train : boolean, default: False
If False, `draw` assumes that the residual points being plotted
are from the test data; if True, `draw` assumes the residuals
are the train data.
Returns
------
-
ax : the axis with the plotted figure
-
"""
if train:
@@ -453,7 +500,15 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
alpha = 0.9
label = "Test $R^2 = {:0.3f}$".format(self.test_score_)
+ # Draw the residuals scatter plot
self.ax.scatter(y_pred, residuals, c=color, alpha=alpha, label=label)
+
+ # Add residuals histogram
+ if self.hist:
+ self.hax.hist(residuals, bins=50, orientation="horizontal")
+
+ # Ensure the current axes is always the main residuals axes
+ plt.sca(self.ax)
return self.ax
def finalize(self, **kwargs):
@@ -464,7 +519,6 @@ def finalize(self, **kwargs):
Parameters
----------
kwargs: generic keyword arguments.
-
"""
# Add the title to the plot
self.set_title('Residuals for {} Model'.format(self.name))
@@ -479,28 +533,77 @@ def finalize(self, **kwargs):
self.ax.set_ylabel('Residuals')
self.ax.set_xlabel("Predicted Value")
-
-def residuals_plot(model, X, y=None, ax=None, **kwargs):
+ # Finalize the histogram axes
+ if self.hist:
+ self.hax.axhline(y=0, c=self.colors['line'])
+ self.hax.set_xlabel("Distribution")
+
+
+def residuals_plot(model,
+ X,
+ y,
+ ax=None,
+ hist=True,
+ test_size=0.25,
+ train_color='b',
+ test_color='g',
+ line_color=LINE_COLOR,
+ random_state=None,
+ **kwargs):
"""Quick method:
- Plot the residuals on the vertical axis and the
- independent variable on the horizontal axis.
+ Divides the dataset X, y into a train and test split (the size of the
+ splits determined by test_size) then plots the training and test residuals
+ against the predicted value for the given model.
This helper function is a quick wrapper to utilize the ResidualsPlot
ScoreVisualizer for one-off analysis.
Parameters
----------
+ model : a Scikit-Learn regressor
+ Should be an instance of a regressor, otherwise will raise a
+ YellowbrickTypeError exception on instantiation.
+
X : ndarray or DataFrame of shape n x m
A matrix of n instances with m features.
y : ndarray or Series of length n
An array or series of target or class values.
- ax : matplotlib axes
- The axes to plot the figure on.
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
- model : the Scikit-Learn estimator (should be a regressor)
+ hist : bool, default: True
+ Draw a histogram showing the distribution of the residuals on the
+ right side of the figure. Requires Matplotlib >= 2.0.2.
+
+ test_size : float, int default: 0.25
+ If float, should be between 0.0 and 1.0 and represent the proportion
+ of the dataset to include in the test split. If int, represents the
+ absolute number of test samples.
+
+ train_color : color, default: 'b'
+ Residuals for training data are plotted with this color but also
+ given an opacity of 0.5 to ensure that the test data residuals
+ are more visible. Can be any matplotlib color.
+
+ test_color : color, default: 'g'
+ Residuals for test data are plotted with this color. In order to
+ create generalizable models, reserved test data residuals are of
+ the most analytical interest, so these points are highlighted by
+ having full opacity. Can be any matplotlib color.
+
+ line_color : color, default: dark grey
+ Defines the color of the zero error line, can be any matplotlib color.
+
+ random_state : int, RandomState instance or None, optional
+ Passed to the train_test_split function.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
Returns
-------
@@ -508,13 +611,18 @@ def residuals_plot(model, X, y=None, ax=None, **kwargs):
Returns the axes that the residuals plot was drawn on.
"""
# Instantiate the visualizer
- visualizer = ResidualsPlot(model, ax, **kwargs)
+ visualizer = ResidualsPlot(
+ model=model, ax=ax, hist=hist, train_color=train_color,
+ test_color=test_color, line_color=line_color, **kwargs
+ )
# Create the train and test splits
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=test_size, random_state=random_state
+ )
# Fit and transform the visualizer (calls draw)
- visualizer.fit(X_train, y_train, **kwargs)
+ visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.finalize()
| diff --git a/tests/base.py b/tests/base.py
--- a/tests/base.py
+++ b/tests/base.py
@@ -72,7 +72,7 @@ def setUp(self):
self.assertEqual(self._backend, 'agg')
super(VisualTestCase, self).setUp()
- def assert_images_similar(self, visualizer=None, ax=None, tol=0.01):
+ def assert_images_similar(self, visualizer=None, ax=None, tol=0.01, **kwargs):
"""Accessible testing method for testing generation of a Visualizer.
Requires the placement of a baseline image for comparison in the
@@ -103,13 +103,16 @@ def assert_images_similar(self, visualizer=None, ax=None, tol=0.01):
The tolerance (a color value difference, where 255 is the
maximal difference). The test fails if the average pixel
difference is greater than this value.
+
+ kwargs : dict
+ Options to pass to the ImageComparison class.
"""
# Hide this method from the pytest traceback on test failure.
__tracebackhide__ = True
# Build and execute the image comparison
compare = ImageComparison(
- inspect.stack(), visualizer=visualizer, ax=ax, tol=tol
+ inspect.stack(), visualizer=visualizer, ax=ax, tol=tol, **kwargs
)
compare()
@@ -167,7 +170,7 @@ class ImageComparison(object):
"""
def __init__(self, stack, visualizer=None, ax=None, tol=0.01, ext=".png",
- remove_ticks=True, remove_title=True):
+ remove_ticks=True, remove_title=True, remove_legend=False):
# Ensure we have something to draw on
if visualizer is None and ax is None:
@@ -201,6 +204,7 @@ def __init__(self, stack, visualizer=None, ax=None, tol=0.01, ext=".png",
self.ext = ext
self.remove_ticks = remove_ticks
self.remove_title = remove_title
+ self.remove_legend = remove_legend
def __call__(self):
"""
@@ -267,6 +271,9 @@ def cleanup(self):
except AttributeError:
continue
+ if self.remove_legend:
+ self.ax.legend_.remove()
+
def save(self):
"""
Save the actual image to disk after cleaning it up.
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_lines.png b/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_lines.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_lines.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_shared_limits.png b/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_shared_limits.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_shared_limits.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_pred_error.png b/tests/baseline_images/test_regressor/test_residuals/test_pred_error.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_pred_error.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration.png b/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration_pandas.png b/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration_pandas.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_pred_error_integration_pandas.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error.png b/tests/baseline_images/test_regressor/test_residuals/test_prediction_error.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_prediction_error.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_pandas.png b/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_pandas.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_pandas.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_resid_plots.png b/tests/baseline_images/test_regressor/test_residuals/test_resid_plots.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_resid_plots.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration_pandas.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration_pandas.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_integration_pandas.png and /dev/null differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_no_histogram.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_no_histogram.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_no_histogram.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_pandas.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_pandas.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_pandas.png differ
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_quick_method.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_quick_method.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_residuals_quick_method.png differ
diff --git a/tests/test_regressor/test_residuals.py b/tests/test_regressor/test_residuals.py
--- a/tests/test_regressor/test_residuals.py
+++ b/tests/test_regressor/test_residuals.py
@@ -2,6 +2,7 @@
# Ensure that the regressor residuals visualizations work.
#
# Author: Rebecca Bilbro <[email protected]>
+# Author: Benjamin Bengfort <[email protected]>
# Created: Sat Oct 8 16:30:39 2016 -0400
#
# Copyright (C) 2016 District Data Labs
@@ -17,17 +18,20 @@
## Imports
##########################################################################
+import sys
import pytest
+import matplotlib as mpl
import matplotlib.pyplot as plt
from yellowbrick.regressor.residuals import *
+from yellowbrick.exceptions import YellowbrickValueError
from tests.base import VisualTestCase
from tests.dataset import DatasetMixin, Dataset, Split
-from sklearn.svm import SVR
from sklearn.linear_model import Ridge, Lasso
from sklearn.linear_model import LinearRegression
+from sklearn.neural_network import MLPRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split as tts
@@ -39,6 +43,10 @@
pd = None
+# Determine version of matplotlib
+MPL_VERS_MAJ = int(mpl.__version__.split(".")[0])
+
+
##########################################################################
## Data
##########################################################################
@@ -50,7 +58,8 @@ def data(request):
For ease of use returns a Dataset named tuple composed of two Split tuples.
"""
X, y = make_regression(
- n_samples=500, n_features=22, n_informative=8, random_state=42
+ n_samples=500, n_features=22, n_informative=8, random_state=42,
+ noise=0.2, bias=0.2,
)
X_train, X_test, y_train, y_test = tts(
@@ -74,22 +83,25 @@ class TestPredictionError(VisualTestCase, DatasetMixin):
Test the PredictionError visualizer
"""
- def test_pred_error_integration(self):
+ @pytest.mark.filterwarnings("ignore:Stochastic Optimizer")
+ @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
+ def test_prediction_error(self):
"""
- Integration test with image similarity on random data with SVR
+ Test image similarity of prediction error on random data
"""
_, ax = plt.subplots()
- visualizer = PredictionError(SVR(), ax=ax)
+ model = MLPRegressor(random_state=229)
+ visualizer = PredictionError(model, ax=ax)
visualizer.fit(self.data.X.train, self.data.y.train)
visualizer.score(self.data.X.test, self.data.y.test)
visualizer.finalize()
- self.assert_images_similar(visualizer, tol=10)
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
@pytest.mark.skipif(pd is None, reason="pandas is required")
- def test_pred_error_integration_pandas(self):
+ def test_prediction_error_pandas(self):
"""
Test Pandas real world dataset with image similarity on Ridge
"""
@@ -112,12 +124,12 @@ def test_pred_error_integration_pandas(self):
splits = tts(X, y, test_size=0.2, random_state=8873)
X_train, X_test, y_train, y_test = splits
- visualizer = PredictionError(Ridge(), ax=ax)
+ visualizer = PredictionError(Ridge(random_state=22), ax=ax)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.finalize()
- self.assert_images_similar(visualizer, tol=10)
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
def test_score(self):
"""
@@ -128,22 +140,53 @@ def test_score(self):
visualizer.fit(self.data.X.train, self.data.y.train)
score = visualizer.score(self.data.X.test, self.data.y.test)
- assert score == pytest.approx(1.0)
+ assert score == pytest.approx(0.9999983124154965)
assert visualizer.score_ == score
- @pytest.mark.skip(reason="not implemented yet")
def test_peplot_shared_limits(self):
"""
Test shared limits on the peplot
"""
- raise NotImplementedError("not yet implemented")
+ visualizer = PredictionError(LinearRegression(), shared_limits=False)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ xlim = tuple(map(int, visualizer.ax.get_xlim()))
+ ylim = tuple(map(int, visualizer.ax.get_ylim()))
+ assert xlim == ylim
- @pytest.mark.skip(reason="not implemented yet")
- def test_peplot_draw_bounds(self):
+ @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
+ def test_peplot_no_shared_limits(self):
"""
- Test the peplot +/- one bounding in draw
+ Test image similarity with no shared limits on the peplot
"""
- raise NotImplementedError("not yet implemented")
+ visualizer = PredictionError(Ridge(random_state=43), shared_limits=False)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ xlim = tuple(map(int, visualizer.ax.get_xlim()))
+ ylim = tuple(map(int, visualizer.ax.get_ylim()))
+ assert not xlim == ylim
+
+ self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
+
+ def test_peplot_no_lines(self):
+ """
+ Test image similarity with no lines drawn on the plot
+ """
+ visualizer = PredictionError(
+ Lasso(random_state=23, alpha=10), bestfit=False, identity=False
+ )
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
##########################################################################
@@ -156,9 +199,12 @@ class TestResidualsPlot(VisualTestCase, DatasetMixin):
Test ResidualPlot visualizer
"""
- def test_residuals_plot_integration(self):
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows (RMSE=32)"
+ )
+ def test_residuals_plot(self):
"""
- Integration test with image similarity on random data with OLS
+ Image similarity of residuals plot on random data with OLS
"""
_, ax = plt.subplots()
@@ -168,10 +214,74 @@ def test_residuals_plot_integration(self):
visualizer.score(self.data.X.test, self.data.y.test)
visualizer.finalize()
- self.assert_images_similar(visualizer, tol=10)
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows (RMSE=32)"
+ )
+ @pytest.mark.filterwarnings("ignore:Stochastic Optimizer")
+ def test_residuals_plot_no_histogram(self):
+ """
+ Image similarity test when hist=False
+ """
+ _, ax = plt.subplots()
+
+ model = MLPRegressor(random_state=19)
+ visualizer = ResidualsPlot(model, ax=ax, hist=False)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
+
+ @pytest.mark.skipif(MPL_VERS_MAJ >= 2, reason="test requires mpl earlier than 2.0.2")
+ def test_hist_matplotlib_version(self, mock_toolkit):
+ """
+ ValueError is raised when matplotlib version is incorrect and hist=True
+ """
+ with pytest.raises(ImportError):
+ from mpl_toolkits.axes_grid1 import make_axes_locatable
+ assert not make_axes_locatable
+
+ with pytest.raises(YellowbrickValueError, match="requires matplotlib 2.0.2"):
+ ResidualsPlot(LinearRegression(), hist=True)
+
+ @pytest.mark.skipif(MPL_VERS_MAJ >= 2, reason="test requires mpl earlier than 2.0.2")
+ def test_no_hist_matplotlib_version(self, mock_toolkit):
+ """
+ No error is raised when matplotlib version is incorrect and hist=False
+ """
+ with pytest.raises(ImportError):
+ from mpl_toolkits.axes_grid1 import make_axes_locatable
+ assert not make_axes_locatable
+
+ try:
+ ResidualsPlot(LinearRegression(), hist=False)
+ except YellowbrickValueError as e:
+ self.fail(e)
+
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows (RMSE=32)"
+ )
+ def test_residuals_quick_method(self):
+ """
+ Image similarity test using the residuals plot quick method
+ """
+ _, ax = plt.subplots()
+
+ model = Lasso(random_state=19)
+ ax = residuals_plot(
+ model, self.data.X.train, self.data.y.train, ax=ax, random_state=23
+ )
+
+ self.assert_images_similar(ax=ax, tol=1, remove_legend=True)
+
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows (RMSE=32)"
+ )
@pytest.mark.skipif(pd is None, reason="pandas is required")
- def test_residuals_plot_integration_pandas(self):
+ def test_residuals_plot_pandas(self):
"""
Test Pandas real world dataset with image similarity on Lasso
"""
@@ -194,22 +304,22 @@ def test_residuals_plot_integration_pandas(self):
splits = tts(X, y, test_size=0.2, random_state=231)
X_train, X_test, y_train, y_test = splits
- visualizer = ResidualsPlot(Lasso(), ax=ax)
+ visualizer = ResidualsPlot(Lasso(random_state=44), ax=ax)
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.finalize()
- self.assert_images_similar(visualizer, tol=10)
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
def test_score(self):
"""
Assert returns R2 score
"""
- visualizer = ResidualsPlot(SVR())
+ visualizer = ResidualsPlot(Ridge(random_state=8893))
visualizer.fit(self.data.X.train, self.data.y.train)
score = visualizer.score(self.data.X.test, self.data.y.test)
- assert score == pytest.approx(0.03344393985277794)
- assert visualizer.train_score_ == pytest.approx(0.04743502276335876)
+ assert score == pytest.approx(0.9999888484, rel=1e-4)
+ assert visualizer.train_score_ == pytest.approx(0.9999906, rel=1e-4)
assert visualizer.test_score_ == score
| [Feature Suggestion] Add histogram plot of residual errors to ResidualsPlot
The current `ResidualsPlot` shows training and testing residuals as a scatter plot; by eye we can get an idea of whether more errors are above or below the 0 line. By adding a histogram of testing errors we might more clearly be able to tell if errors have a Normal distribution.
In the following examples I have some large positive and negative errors; from the histogram it looks as though I have a negatively skewed distribution, which might tell me something about my training examples:

```
from yellowbrick.regressor import ResidualsPlot
fig, ax = plt.subplots(figsize=(8,6));
model = ResidualsPlot(clone_estimator(clf), ax=ax)
model.fit(X_train, y_train)
model.score(X_test, y_test)
# add histogram of residual errors
left, bottom, width, height = [0.65, 0.17, 0.2, 0.2]
ax2 = fig.add_axes([left, bottom, width, height])
testing_residuals = pd.Series(model.predict(X_test) - y_test)
testing_residuals.plot(kind="hist", bins=50, title="Residuals on Predicted", ax=ax2);
ax2.vlines(0, ymin=0, ymax=ax2.get_ylim()[1] ) # add x==0 line
model.poof()
```
It isn't obvious where the best location would be for the histogram. Annoyingly I cannot get an `alpha` value for `ax2` either (I'd hoped to make this semi-transparent so location was less of an issue).
| @ianozsvald thanks for this great feature enhancement suggestion. you're welcome to make a pull request with this enhancement =)
@bbengfort
Sorry, I'm only going as far as sharing some proof-of-concept code, I'm still recovering from running PyDataLondon 2017 and I'm not up to extending any libraries at the moment! Maybe someone will find time to take this a little further. Cheers!
@ianozsvald thank you so much for your feature suggestions and the time that you've spent in writing up the detailed issues with examples. it is really fantastic and helpful. i'm sure that someone will move forward with these.
Well, thank you all too for putting this library together, I've got a bunch of my own hacky viz tools but you've built something far more useful here. @rebeccabilbro's talk for us at the conference (and the book signing she joined me for) was ace :-)
Hey there @ianozsvald - thanks so much for all your work on PyData London - what a terrific conference! And thanks for checking out Yellowbrick -- I love this idea of plotting error distributions to make it easier to look for things like skew and heavy tails. As for how to best cope with subaxis locations, we might take a look at [GridSpec](https://matplotlib.org/api/gridspec_api.html#matplotlib.gridspec.GridSpec), similar to what @pdamodaran did with [JointPlot](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/features/jointplot.py#L208).
I would be up for making the enhancement to the ResidualsPlot, do you guys think adding the histogram should be on the top of the main plot or the bottom?
@pdamodaran @ianozsvald my instinct actually says to put it on the right side, oriented vertically so that the histogram shares an axis with the residuals (the y axis of the plot). In this orientation I think it might be easier to directly compare and would probably also have the effect of balancing the axes so that zero is in the middle, RE: the axes issues we had in #263 -- what do you guys think?
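For illustration, one way to prototype that right-hand layout is matplotlib's `axes_grid1` divider, the same mechanism the patch above uses; this is only a sketch with placeholder residuals and assumes matplotlib >= 2.0.2 for the shared axis:

```
# Sketch: residuals scatter with a right-hand histogram sharing the y axis.
# The predictions and residuals below are random placeholders.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable

rng = np.random.RandomState(42)
y_pred = rng.uniform(0, 100, 200)
residuals = rng.normal(0, 5, 200)

fig, ax = plt.subplots()
ax.scatter(y_pred, residuals, alpha=0.5)
ax.axhline(y=0, color="grey")

# Append a narrow axes on the right that shares the residual (y) axis so the
# histogram bins line up with the scatter points.
divider = make_axes_locatable(ax)
hax = divider.append_axes("right", size=1, pad=0.1, sharey=ax)
hax.hist(residuals, bins=50, orientation="horizontal")
hax.yaxis.tick_right()

plt.show()
```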
@ianozsvald sorry for the delay on this; we've been slammed - but I wanted to take a crack at this when you reminded me via the notebooks in the Slack channel the other day.
What do you think about this?

It's an initial prototype, but I'm thinking we'll just make it an option right now (e.g. `hist=True`) and then this is displayed. | 2018-06-15T21:55:25 |
DistrictDataLabs/yellowbrick | 484 | DistrictDataLabs__yellowbrick-484 | [
"257"
] | d364facfdecaf5874b85bca3f7f0b2832e3483a2 | diff --git a/yellowbrick/__init__.py b/yellowbrick/__init__.py
--- a/yellowbrick/__init__.py
+++ b/yellowbrick/__init__.py
@@ -33,6 +33,7 @@
# Import yellowbrick functionality to the top level
# TODO: review top-level functionality
from .anscombe import anscombe
+from .datasaurus import datasaurus
from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
# from .classifier import crplot, rocplot
# from .regressor import peplot, residuals_plot
diff --git a/yellowbrick/datasaurus.py b/yellowbrick/datasaurus.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/datasaurus.py
@@ -0,0 +1,282 @@
+# yellowbrick.datasaurus
+# Plots a Datasaurus Quartet as an illustration of the importance of visualization.
+#
+# Author: Larry Gray
+# Created: Wed May 18 11:38:25 2016 -0400
+#
+# Copyright (C) 2018 District Data Labs
+# For license information, see LICENSE.txt
+#
+# ID: datasaurus.py [0bfa366] [email protected] $
+
+"""
+Plots a Datasaurus Quartet as an illustration of the importance of visualization.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import numpy as np
+import matplotlib.pyplot as plt
+
+from yellowbrick.bestfit import draw_best_fit
+from yellowbrick.style import get_color_cycle
+
+
+##########################################################################
+## DATASAURUS Data Arrays
+##########################################################################
+
+DATASAURUS = [
+ np.array([[55.3846, 51.5385, 46.1538, 42.8205, 40.7692, 38.7179, 35.641 ,
+ 33.0769, 28.9744, 26.1538, 23.0769, 22.3077, 22.3077, 23.3333,
+ 25.8974, 29.4872, 32.8205, 35.3846, 40.2564, 44.1026, 46.6667,
+ 50. , 53.0769, 56.6667, 59.2308, 61.2821, 61.5385, 61.7949,
+ 57.4359, 54.8718, 52.5641, 48.2051, 49.4872, 51.0256, 45.3846,
+ 42.8205, 38.7179, 35.1282, 32.5641, 30. , 33.5897, 36.6667,
+ 38.2051, 29.7436, 29.7436, 30. , 32.0513, 35.8974, 41.0256,
+ 44.1026, 47.1795, 49.4872, 51.5385, 53.5897, 55.1282, 56.6667,
+ 59.2308, 62.3077, 64.8718, 67.9487, 70.5128, 71.5385, 71.5385,
+ 69.4872, 46.9231, 48.2051, 50. , 53.0769, 55.3846, 56.6667,
+ 56.1538, 53.8462, 51.2821, 50. , 47.9487, 29.7436, 29.7436,
+ 31.2821, 57.9487, 61.7949, 64.8718, 68.4615, 70.7692, 72.0513,
+ 73.8462, 75.1282, 76.6667, 77.6923, 79.7436, 81.7949, 83.3333,
+ 85.1282, 86.4103, 87.9487, 89.4872, 93.3333, 95.3846, 98.2051,
+ 56.6667, 59.2308, 60.7692, 63.0769, 64.1026, 64.359 , 74.359 ,
+ 71.2821, 67.9487, 65.8974, 63.0769, 61.2821, 58.7179, 55.1282,
+ 52.3077, 49.7436, 47.4359, 44.8718, 48.7179, 51.2821, 54.1026,
+ 56.1538, 52.0513, 48.7179, 47.1795, 46.1538, 50.5128, 53.8462,
+ 57.4359, 60. , 64.1026, 66.9231, 71.2821, 74.359 , 78.2051,
+ 67.9487, 68.4615, 68.2051, 37.6923, 39.4872, 91.2821, 50. ,
+ 47.9487, 44.1026],
+ [97.1795, 96.0256, 94.4872, 91.4103, 88.3333, 84.8718, 79.8718,
+ 77.5641, 74.4872, 71.4103, 66.4103, 61.7949, 57.1795, 52.9487,
+ 51.0256, 51.0256, 51.0256, 51.4103, 51.4103, 52.9487, 54.1026,
+ 55.2564, 55.641 , 56.0256, 57.9487, 62.1795, 66.4103, 69.1026,
+ 55.2564, 49.8718, 46.0256, 38.3333, 42.1795, 44.1026, 36.4103,
+ 32.5641, 31.4103, 30.2564, 32.1795, 36.7949, 41.4103, 45.641 ,
+ 49.1026, 36.0256, 32.1795, 29.1026, 26.7949, 25.2564, 25.2564,
+ 25.641 , 28.718 , 31.4103, 34.8718, 37.5641, 40.641 , 42.1795,
+ 44.4872, 46.0256, 46.7949, 47.9487, 53.718 , 60.641 , 64.4872,
+ 69.4872, 79.8718, 84.1026, 85.2564, 85.2564, 86.0256, 86.0256,
+ 82.9487, 80.641 , 78.718 , 78.718 , 77.5641, 59.8718, 62.1795,
+ 62.5641, 99.4872, 99.1026, 97.5641, 94.1026, 91.0256, 86.4103,
+ 83.3333, 79.1026, 75.2564, 71.4103, 66.7949, 60.2564, 55.2564,
+ 51.4103, 47.5641, 46.0256, 42.5641, 39.8718, 36.7949, 33.718 ,
+ 40.641 , 38.3333, 33.718 , 29.1026, 25.2564, 24.1026, 22.9487,
+ 22.9487, 22.1795, 20.2564, 19.1026, 19.1026, 18.3333, 18.3333,
+ 18.3333, 17.5641, 16.0256, 13.718 , 14.8718, 14.8718, 14.8718,
+ 14.1026, 12.5641, 11.0256, 9.8718, 6.0256, 9.4872, 10.2564,
+ 10.2564, 10.641 , 10.641 , 10.641 , 10.641 , 10.641 , 10.641 ,
+ 8.718 , 5.2564, 2.9487, 25.7692, 25.3846, 41.5385, 95.7692,
+ 95. , 92.6923]]),
+ np.array([[51.20389114, 58.9744699 , 51.87207267, 48.17993079, 41.6832004 ,
+ 37.8904155 , 39.54897369, 39.64957388, 34.75059705, 27.56083529,
+ 24.63553998, 20.95946481, 20.68914905, 19.28820474, 20.02450057,
+ 35.469523 , 36.89432765, 39.05554978, 46.95708015, 37.31045274,
+ 40.009672 , 48.01438668, 53.70377593, 63.06749989, 62.04803251,
+ 59.83996671, 55.16094182, 61.27978658, 60.83491753, 61.52059065,
+ 36.91654386, 38.50219967, 48.66437073, 50.2852524 , 42.27633267,
+ 54.03177562, 37.32935526, 41.38952255, 40.07466666, 35.34968062,
+ 34.76370042, 37.02662945, 36.45556953, 35.53766421, 20.40894789,
+ 23.49571047, 29.55754336, 33.00823391, 53.98039918, 52.2343086 ,
+ 59.50307661, 41.16378107, 48.99304012, 59.26928032, 45.469177 ,
+ 62.69126654, 73.42867087, 70.84642611, 71.53901985, 67.62086589,
+ 72.47095256, 64.81223756, 60.85367987, 67.78949616, 41.60955727,
+ 53.00302532, 54.71417106, 44.29166872, 49.19172196, 53.10138178,
+ 51.59984815, 54.37972195, 46.4807681 , 53.17465627, 45.27200294,
+ 36.03340215, 28.27119417, 25.05480608, 64.758887 , 63.14452748,
+ 50.42467869, 70.64499626, 63.14904908, 62.82402452, 70.23686951,
+ 70.04273524, 72.57062345, 75.13071604, 83.29390573, 79.66426228,
+ 88.43210253, 89.11555901, 89.09219763, 91.72600577, 91.73553876,
+ 91.50788817, 88.2390019 , 88.5305192 , 55.36516034, 62.56025887,
+ 58.00666912, 55.06711799, 61.61477596, 68.54314354, 77.70610965,
+ 68.453046 , 68.25720644, 70.25547467, 65.04432528, 60.09224661,
+ 52.99202897, 50.14462898, 46.50861419, 43.80703196, 57.81785469,
+ 50.94049266, 63.49732308, 50.01648295, 58.63676508, 54.73028909,
+ 65.8755478 , 57.06098271, 46.81990795, 38.35939487, 47.31541578,
+ 55.05191654, 50.51596026, 49.67741465, 67.28065952, 66.17301826,
+ 61.08854414, 66.05308577, 72.66998927, 61.5034725 , 68.99502863,
+ 78.24991617, 36.48198057, 50.96774838, 91.19105361, 55.86376849,
+ 49.2805948 , 43.36850154],
+ [83.33977661, 85.49981761, 85.82973763, 85.04511674, 84.0179406 ,
+ 82.567493 , 80.81260177, 82.66453387, 80.01109099, 72.84782559,
+ 71.61071483, 66.04149838, 62.72130521, 62.06305936, 61.34262387,
+ 43.11588495, 47.70655597, 55.54697371, 65.24040739, 45.2587509 ,
+ 60.98658251, 65.71281959, 66.38948204, 64.03500046, 63.84586325,
+ 64.47676444, 65.23730817, 65.7664025 , 64.60376971, 64.79185504,
+ 41.09524744, 41.56715562, 30.68066685, 30.33792211, 34.52763612,
+ 29.67234831, 39.60204231, 37.29605623, 34.6236852 , 47.14107313,
+ 47.62479992, 44.46229305, 40.79184303, 48.72938687, 32.20303042,
+ 25.32246815, 21.36477746, 15.98507146, 29.35098671, 29.71167299,
+ 30.66967394, 34.31575825, 32.03035884, 29.64070177, 33.83119273,
+ 30.29037383, 48.57785513, 52.28225333, 45.52180616, 38.00655847,
+ 51.12213482, 62.81091559, 65.49914703, 61.36370155, 83.84868656,
+ 84.6747986 , 84.04312807, 82.90944121, 85.87622912, 84.54765869,
+ 84.81982149, 84.24035555, 83.51821167, 84.26056799, 85.23707942,
+ 53.37168776, 72.84023126, 71.54859792, 82.31522364, 85.23669633,
+ 85.17474759, 82.43091876, 83.94685535, 84.96618595, 82.17115106,
+ 80.38502135, 80.97121843, 79.98409314, 70.77843179, 73.93230972,
+ 64.624247 , 64.00150664, 57.76819305, 52.62335326, 48.97021089,
+ 53.31265209, 31.47743488, 30.47603101, 30.44585028, 30.44713567,
+ 30.2537213 , 29.0115352 , 29.99439119, 35.65783217, 20.30426019,
+ 13.03552859, 12.38463915, 13.25038497, 11.00084148, 11.87211171,
+ 9.90666848, 12.21154309, 11.20713449, 11.31894489, 10.94514243,
+ 9.69154713, 11.91406917, 11.93385209, 11.97472107, 11.41288267,
+ 11.73243636, 9.92056085, 10.49465268, 13.43132262, 12.85345178,
+ 11.94998862, 9.76559162, 10.38313251, 14.12865153, 12.03791702,
+ 10.08453441, 13.38022601, 15.23422594, 10.82841448, 13.99431053,
+ 17.88324091, 15.16276009, 29.67977429, 46.67434284, 85.33648676,
+ 84.04882283, 84.3321772 ]]),
+ np.array([[58.21360826, 58.19605369, 58.71823072, 57.27837287, 58.08202049,
+ 57.48944777, 28.08874132, 28.08546821, 28.08727305, 27.57802522,
+ 27.77991911, 28.58899981, 28.7391415 , 27.02460324, 28.8013367 ,
+ 27.18646384, 29.2851466 , 39.4029453 , 28.81132844, 34.30395791,
+ 29.60276098, 49.11615686, 39.61754583, 43.23308466, 64.89278794,
+ 62.49014932, 68.98808443, 62.10561863, 32.46184674, 41.32720065,
+ 44.00714993, 44.07406069, 44.00131524, 45.00630045, 44.44384061,
+ 42.1787134 , 44.04456562, 41.64045402, 41.93833001, 44.05392751,
+ 39.20671933, 28.70444923, 31.7086629 , 42.81171147, 43.30061489,
+ 40.39863291, 40.43569158, 40.93654667, 39.66157367, 40.89925917,
+ 41.96861683, 40.38340582, 56.53812645, 52.97069128, 54.62095259,
+ 65.09904439, 63.05599091, 70.96013623, 69.89581924, 70.59589286,
+ 69.64702143, 77.39298249, 64.40078719, 63.86895983, 56.59442132,
+ 56.53133729, 59.65215837, 56.6365087 , 58.672288 , 58.22161273,
+ 57.91466448, 55.31550906, 54.57572859, 54.41309365, 55.0745059 ,
+ 29.43296052, 29.42268607, 29.00561416, 58.46183859, 57.99780474,
+ 57.54947408, 59.52992846, 58.24939106, 58.02451401, 58.38212449,
+ 62.56675904, 72.17582431, 79.47276157, 80.35770088, 78.75723614,
+ 82.54023959, 86.43589719, 79.48868442, 81.53042032, 79.18678857,
+ 77.89905795, 75.13071421, 76.05801375, 57.61467439, 56.17139753,
+ 66.2878906 , 67.88171962, 64.0280813 , 77.49665175, 77.63465176,
+ 77.86372643, 77.33815817, 76.18041653, 77.25265109, 77.41337528,
+ 76.7318494 , 49.47110541, 42.47653994, 43.59511586, 50.33996967,
+ 40.74898026, 38.38652558, 38.40401521, 38.76427889, 41.47014233,
+ 47.15540481, 39.58256675, 41.74024382, 39.31187189, 41.67984769,
+ 39.08746445, 41.48150286, 77.60608655, 75.98266152, 76.94575724,
+ 77.54372007, 77.58473984, 76.82230426, 77.34857166, 77.57315269,
+ 77.97261068, 41.52891976, 43.7225508 , 79.32607818, 56.66397408,
+ 57.82178923, 58.2431719 ],
+ [91.88189151, 92.21498865, 90.31053209, 89.90760672, 92.00814501,
+ 88.08528556, 63.51079443, 63.59019695, 63.12328281, 62.82103866,
+ 63.51814752, 63.02408057, 62.72086389, 62.90185886, 63.38904039,
+ 63.55872965, 63.38360583, 51.1508572 , 61.35785406, 56.54212591,
+ 60.15734672, 63.66000062, 62.92518796, 63.16521872, 65.81417676,
+ 74.58428961, 63.2321473 , 75.99087076, 62.88190292, 49.07025127,
+ 46.44967378, 34.55320389, 33.90420735, 38.29901955, 36.0190833 ,
+ 26.49211948, 35.66223828, 27.09309542, 24.99152298, 33.55639249,
+ 51.5337157 , 61.7775254 , 58.83775437, 30.02044842, 31.5264262 ,
+ 16.34700838, 20.23267068, 16.91300484, 15.60935558, 20.79852895,
+ 26.4970726 , 21.39122552, 32.44424547, 29.04019669, 30.34452445,
+ 27.24155756, 29.70909567, 41.25950129, 43.45375927, 41.96474387,
+ 44.04444502, 63.37145906, 67.44871845, 70.21373883, 86.92700622,
+ 87.49981107, 87.80946159, 85.63749556, 90.07716031, 90.41101877,
+ 89.95380277, 80.25186069, 77.53628847, 78.22908659, 79.81754642,
+ 60.80177654, 63.06846482, 63.39075133, 90.26532639, 92.15990861,
+ 90.74890656, 88.32727415, 92.12968148, 91.69442117, 90.55347607,
+ 77.74393476, 63.12892942, 63.40868612, 63.29543754, 53.33262001,
+ 56.54105229, 59.79276181, 53.65167426, 56.02536457, 53.23479185,
+ 51.82245833, 23.37244197, 16.38374969, 33.82244765, 32.11798877,
+ 26.11710975, 24.23601841, 27.67268551, 14.94852356, 14.46185393,
+ 14.61067765, 15.89005466, 15.91257375, 15.15151702, 15.22192798,
+ 16.21684614, 25.06301931, 18.33847356, 19.99420098, 26.47139661,
+ 16.18214166, 14.58021515, 14.45194845, 14.36559047, 17.27803344,
+ 22.37793253, 17.64845284, 17.82932431, 15.64071697, 17.74591901,
+ 15.12230394, 18.04743744, 15.16287254, 16.30692238, 15.85847833,
+ 15.25394915, 15.83003939, 15.59516532, 15.77452924, 14.78064583,
+ 14.95569875, 24.91642519, 19.0773278 , 52.90039129, 87.94012501,
+ 90.69316655, 92.10432787]]),
+ np.array([[51.14791671, 50.51712581, 50.2074802 , 50.06948192, 50.56284634,
+ 50.2885278 , 25.58347508, 25.48358339, 25.4435257 , 25.56511342,
+ 25.92884427, 27.55147826, 27.53046637, 27.09557036, 27.43924961,
+ 27.87826426, 27.33886892, 27.67840297, 52.63565768, 52.02521411,
+ 52.88116479, 52.95260731, 52.52055249, 52.34282206, 51.92759021,
+ 52.71377449, 50.44380279, 50.21669503, 52.18418011, 52.79209735,
+ 52.58971986, 52.02884867, 52.72924658, 52.88431329, 52.50930089,
+ 50.86268433, 50.89149225, 25.8551276 , 26.02564455, 27.89317272,
+ 27.63996794, 27.8926589 , 52.79773294, 27.58063881, 26.49139853,
+ 25.98531782, 26.20141928, 25.85756947, 50.70468436, 50.81197535,
+ 50.56484556, 50.93930391, 50.45885484, 52.90136407, 52.68495344,
+ 52.50008894, 51.83563726, 76.9954121 , 77.31060048, 77.92604434,
+ 77.25438834, 76.2431578 , 77.08448437, 75.2280532 , 50.65835477,
+ 50.20336581, 50.9295477 , 50.17867185, 50.42269806, 50.46422483,
+ 50.44927033, 49.92838028, 50.48801364, 49.96490538, 50.75210826,
+ 27.42242921, 27.6740834 , 27.53739532, 52.26334738, 51.73728166,
+ 75.87096369, 75.24432621, 75.19829529, 75.70104153, 75.47933966,
+ 75.19456687, 74.82025396, 75.16434049, 75.26335555, 77.75641893,
+ 77.95443505, 77.08333777, 76.06355025, 77.68201632, 76.87808198,
+ 76.94850272, 77.86405471, 75.77145009, 52.33156913, 52.59281837,
+ 50.47704772, 75.29647509, 75.57395413, 75.40052716, 75.87099084,
+ 75.60588476, 75.89557705, 75.7465632 , 75.14234148, 50.66177956,
+ 50.69985064, 50.91894087, 50.72525854, 51.26387123, 51.25091965,
+ 50.78515721, 50.50139658, 50.73367454, 50.71137854, 50.8127449 ,
+ 51.01423295, 50.35352141, 50.43552957, 50.63098196, 51.0668072 ,
+ 50.79235473, 50.55127806, 50.55975806, 75.32597855, 75.04472578,
+ 75.28708772, 75.23996998, 75.1524592 , 75.96184009, 75.44806251,
+ 75.75938382, 50.3782623 , 50.53363501, 77.50090732, 50.69112419,
+ 49.99039495, 50.12718203],
+ [90.86741233, 89.10239459, 85.4600474 , 83.05766953, 82.93782178,
+ 82.97525357, 82.91489113, 82.92908498, 82.8742005 , 82.92409777,
+ 82.82118411, 51.48738653, 51.41484656, 52.07679944, 51.71207905,
+ 50.70890793, 51.65304675, 51.18198917, 51.41855226, 52.12301105,
+ 50.62155476, 50.07473901, 51.5024421 , 51.86195209, 52.25779061,
+ 51.19794432, 82.94182882, 83.75234297, 51.97525067, 51.07339565,
+ 51.3380902 , 52.1768375 , 51.20176505, 50.44143545, 51.41620515,
+ 17.14563109, 17.14132373, 17.08190869, 16.92501353, 50.66196341,
+ 51.39909748, 50.79528152, 50.68603709, 51.52476126, 17.40539097,
+ 17.20372213, 17.09382391, 17.11384266, 17.02374454, 17.11492526,
+ 17.07777732, 16.98102188, 17.03857897, 50.69056272, 51.29446922,
+ 51.59435617, 52.33576553, 52.04552865, 51.74673004, 50.31866042,
+ 51.46182482, 52.12368985, 51.9671367 , 82.98566202, 83.11447934,
+ 82.98265686, 82.84604113, 83.18462233, 82.90990147, 82.93532841,
+ 83.96992038, 82.99366549, 83.09951912, 83.7083177 , 82.9019501 ,
+ 51.43887623, 51.30411215, 51.59365408, 94.24932783, 92.97911753,
+ 88.38644174, 83.90349738, 83.46230334, 82.91945886, 82.88405139,
+ 82.93211578, 82.96238879, 83.03499717, 82.9452793 , 51.15177033,
+ 50.47557897, 52.15779927, 52.10465206, 51.16563781, 51.8675623 ,
+ 51.90751654, 49.66254553, 17.11125121, 51.87886035, 51.39159152,
+ 17.04828941, 17.01565319, 17.06219214, 17.04110689, 17.13489391,
+ 17.06772306, 17.16994971, 17.10571651, 16.75492389, 17.07814052,
+ 17.08518438, 17.14760476, 16.90746981, 17.16234971, 17.24045586,
+ 17.18019648, 17.10577072, 16.99296341, 17.08831585, 16.57271805,
+ 17.22109553, 17.06474308, 17.0651685 , 17.07652235, 17.20885971,
+ 17.20421434, 17.08465518, 17.09388377, 15.77189199, 17.00426226,
+ 16.17493491, 17.03184749, 17.0049424 , 16.69484223, 17.04514941,
+ 16.94292965, 16.94627981, 17.01958137, 50.16698595, 87.51396042,
+ 83.99735692, 82.99075 ]])]
+
+
+def datasaurus():
+ """
+ Creates a 2x2 grid plot of 4 datasets from the Datasaurus Dozen for illustration.
+
+ Citation:
+ Justin Matejka, George Fitzmaurice (2017)
+ Same Stats, Different Graphs: Generating Datasets with Varied Appearance and
+ Identical Statistics through Simulated Annealing
+ CHI 2017 Conference proceedings:
+ ACM SIGCHI Conference on Human Factors in Computing Systems
+ """
+ fig, ((axa, axb), (axc, axd)) = plt.subplots(2, 2, sharex='col', sharey='row')
+ colors = get_color_cycle()
+ for arr, ax, color in zip(DATASAURUS, (axa, axb, axc, axd), colors):
+ x = arr[0]
+ y = arr[1]
+
+ # Draw the points in the scatter plot
+ ax.scatter(x, y, c=color)
+
+ # Set the X and Y limits
+ ax.set_xlim(0, 100)
+ ax.set_ylim(0, 110)
+
+ # Draw the linear best fit line on the plot
+ draw_best_fit(x, y, ax, c=color)
+
+ return (axa, axb, axc, axd)
+
+
+if __name__ == '__main__':
+ datasaurus()
+ plt.show()
| Add Datasaurus, a version of Anscombe's Quartet
An addition to Anscombe's Quartet, but starting from a plot that looks like a dinosaur.
https://www.autodeskresearch.com/publications/samestats
http://www.thefunctionalart.com/2016/08/download-datasaurus-never-trust-summary.html
data: https://www.autodeskresearch.com/sites/default/files/The%20Datasaurus%20Dozen.zip
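For context, a minimal sketch of plotting the linked data directly (the file name and the `dataset`/`x`/`y` column names are assumptions about the published archive, not verified here):

```
# Sketch: scatter each dataset in the Datasaurus Dozen from the linked archive.
# File name and column names are assumed, not verified.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("DatasaurusDozen.tsv", sep="\t")

fig, axes = plt.subplots(3, 5, figsize=(15, 9), sharex=True, sharey=True)
for ax, (name, group) in zip(axes.flat, df.groupby("dataset")):
    # Every panel has nearly identical summary statistics but a very
    # different shape -- the same point Anscombe's Quartet makes.
    ax.scatter(group["x"], group["y"], s=8)
    ax.set_title(name)
plt.tight_layout()
plt.show()
```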
| @evelynmitchell such a great idea! Want to take a crack at implementing it in Python for Yellowbrick and submitting a pull request?
This is amazing. Fun dataset!
@rebeccabilbro @bbengfort This is my first shot at it?

Wow
Nice! | 2018-06-20T17:02:45 |
|
DistrictDataLabs/yellowbrick | 486 | DistrictDataLabs__yellowbrick-486 | [
"147"
] | 65a245ded3de3a947fd24c12be48a1ccea8c1e23 | diff --git a/yellowbrick/text/__init__.py b/yellowbrick/text/__init__.py
--- a/yellowbrick/text/__init__.py
+++ b/yellowbrick/text/__init__.py
@@ -20,3 +20,4 @@
from .tsne import TSNEVisualizer, tsne
from .freqdist import FreqDistVisualizer, freqdist
from .postag import PosTagVisualizer
+from .dispersion import DispersionPlot, dispersion
diff --git a/yellowbrick/text/dispersion.py b/yellowbrick/text/dispersion.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/text/dispersion.py
@@ -0,0 +1,173 @@
+# yellowbrick.text.dispersion
+# Implementations of lexical dispersions for text visualization.
+#
+# Author: Larry Gray
+# Created: 2018-06-21 10:06
+#
+# Copyright (C) 2018 District Data Labs
+# For license information, see LICENSE.txt
+#
+# ID: dispersion.py [] [email protected] $
+
+"""
+Implementation of lexical dispersion for text visualization
+"""
+
+
+##########################################################################
+## Imports
+##########################################################################
+
+from yellowbrick.text.base import TextVisualizer
+import numpy as np
+
+##########################################################################
+## Dispersion Plot Visualizer
+##########################################################################
+
+class DispersionPlot(TextVisualizer):
+ """
+    DispersionPlotVisualizer allows for visualization of the lexical dispersion
+    of words in a corpus. Lexical dispersion is a measure of a word's
+    homogeneity across the parts of a corpus. This plot notes each occurrence
+    of a word and its offset (in words) from the beginning of the corpus.
+
+ Parameters
+ ----------
+ words : list
+ A list of target words whose dispersion across a corpus passed at fit
+ will be visualized.
+
+ ax : matplotlib axes, default: None
+ The axes to plot the figure on.
+
+ color : list or tuple of colors
+ Specify color for bars
+
+ ignore_case : boolean, default: False
+ Specify whether input will be case-sensitive.
+
+ kwargs : dict
+ Pass any additional keyword arguments to the super class.
+
+ These parameters can be influenced later on in the visualization
+ process, but can and should be set as early as possible.
+ """
+
+ def __init__(self, words, ax=None, color=None, ignore_case=False, **kwargs):
+ super(DispersionPlot, self).__init__(ax=ax, **kwargs)
+
+ self.color = color
+ self.words = words
+ self.ignore_case = ignore_case
+
+
+ def _compute_dispersion(self, text):
+ for x, word in enumerate(text):
+ if self.ignore_case:
+ word = word.lower()
+
+ # NOTE: this will find all indices if duplicate words are supplied
+            # In the case that word is not in target words, an empty list is
+ # returned and no data will be yielded
+ for y in (self.target_words_ == word).nonzero()[0]:
+ yield (x, y)
+
+ def fit(self, text):
+ """
+ The fit method is the primary drawing input for the dispersion
+ visualization. It requires the corpus as a list of words.
+
+ Parameters
+ ----------
+ text : list
+ A list of words in the order they appear in the corpus.
+ """
+
+ self.words.reverse()
+
+ # Create an index (e.g. the y position) for the target words
+ self.target_words_ = np.array(self.words)
+ if self.ignore_case:
+ self.target_words_ = np.array([w.lower() for w in self.target_words_])
+
+ # Stack is used to create a 2D array from the generator
+ points = np.stack(self._compute_dispersion(text))
+ self.draw(points)
+ return self
+
+ def draw(self, points, **kwargs):
+ """
+ Called from the fit method, this method creates the canvas and
+ draws the distribution plot on it.
+ Parameters
+ ----------
+ kwargs: generic keyword arguments.
+ """
+
+ self.ax.scatter(points[:,0], points[:,1], marker='|', color=self.color)
+ self.ax.set_yticks(list(range(len(self.words))))
+ self.ax.set_yticklabels(self.words)
+
+ def finalize(self, **kwargs):
+ """
+ The finalize method executes any subclass-specific axes
+ finalization steps. The user calls poof & poof calls finalize.
+ Parameters
+ ----------
+ kwargs: generic keyword arguments.
+ """
+
+ self.ax.set_ylim(-1, len(self.words))
+ self.ax.set_title("Lexical Dispersion Plot")
+ self.ax.set_xlabel("Word Offset")
+ self.ax.grid(False)
+
+
+##########################################################################
+## Quick Method
+##########################################################################
+
+def dispersion(words, corpus, ax=None, color=None, ignore_case=False, **kwargs):
+ """ Displays lexical dispersion plot for words in a corpus
+
+    This helper function is a quick wrapper to utilize the DispersionPlot
+ Visualizer for one-off analysis
+
+ Parameters
+ ----------
+
+ words : list
+ A list of words whose dispersion will be examined within a corpus
+
+ corpus : list
+ A list of words in the order they appear in the corpus
+
+ ax : matplotlib axes, default: None
+ The axes to plot the figure on.
+
+ color : list or tuple of colors
+ Specify color for bars
+
+ ignore_case : boolean, default: False
+ Specify whether input will be case-sensitive.
+
+ kwargs : dict
+ Pass any additional keyword arguments to the super class.
+
+ Returns
+ -------
+ ax: matplotlib axes
+ Returns the axes that the plot was drawn on
+ """
+
+ # Instantiate the visualizer
+ visualizer = DispersionPlot(
+ words, ax=ax, color=color, ignore_case=ignore_case, **kwargs
+ )
+
+ # Fit and transform the visualizer (calls draw)
+ visualizer.fit(corpus)
+
+ # Return the axes object on the visualizer
+ return visualizer.ax
| DispersionPlot
Implement a `DispersionPlot` similar to that in [NLTK](http://www.nltk.org/_modules/nltk/draw/dispersion.html) that generates a lexical dispersion plot for a corpus. A lexical dispersion plot will plot occurrences of words in a text, creating a kind of mini-map that can allow a user to compare highlighting across documents.
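A rough usage sketch (the toy corpus and target words below are made up purely for illustration; the `fit`/`poof` calls follow the usual Yellowbrick text visualizer API):

```python
from yellowbrick.text import DispersionPlot

# Toy corpus: the words of the text in the order they appear
text = ("the cat sat on the mat the dog chased the cat up the tree "
        "then the dog sat by the tree and the cat watched the dog").split()

# Target words whose occurrences we want to map across the corpus
target_words = ["cat", "dog", "tree"]

visualizer = DispersionPlot(target_words)
visualizer.fit(text)
visualizer.poof()
```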
| 2018-06-21T23:36:04 |
||
DistrictDataLabs/yellowbrick | 489 | DistrictDataLabs__yellowbrick-489 | [
"487"
] | d58105d8113b59d291355540942ff5b7c0c625ce | diff --git a/docs/api/text/dispersion.py b/docs/api/text/dispersion.py
new file mode 100644
--- /dev/null
+++ b/docs/api/text/dispersion.py
@@ -0,0 +1,48 @@
+# ID: dispersion.py [] [email protected] $
+
+"""
+Generate figures for Dispersion Plot documentation.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import matplotlib.pyplot as plt
+
+from corpus import load_corpus
+from yellowbrick.text.dispersion import DispersionPlot
+
+##########################################################################
+## Generate
+##########################################################################
+
+def dispersion(target_words, text, outpath, **kwargs):
+ # Create a new figure and axes
+ _, ax = plt.subplots()
+
+ # Visualize the Dispersion of target words
+ visualizer = DispersionPlot(target_words, ax=ax, **kwargs)
+ visualizer.fit(text)
+ visualizer.poof(outpath=outpath)
+
+
+##########################################################################
+## Main Method
+##########################################################################
+
+if __name__ == '__main__':
+
+ # Load the corpus
+ corpus = load_corpus("../../../examples/data/hobbies")
+
+ # Convert corpus into a list of all words from beginning to end
+ text = [word for doc in corpus.data for word in doc.split()]
+
+ # Select target words to visualize
+ target_words = ['Game', 'player', 'score', 'oil', 'Man']
+
+ # Display dispersion of target words throughout corpus
+ dispersion(target_words, text, "images/dispersion_docs.png")
+
+
diff --git a/yellowbrick/text/dispersion.py b/yellowbrick/text/dispersion.py
--- a/yellowbrick/text/dispersion.py
+++ b/yellowbrick/text/dispersion.py
@@ -84,10 +84,8 @@ def fit(self, text):
A list of words in the order they appear in the corpus.
"""
- self.words.reverse()
-
# Create an index (e.g. the y position) for the target words
- self.target_words_ = np.array(self.words)
+ self.target_words_ = np.flip(self.words, axis=0)
if self.ignore_case:
self.target_words_ = np.array([w.lower() for w in self.target_words_])
@@ -106,8 +104,8 @@ def draw(self, points, **kwargs):
"""
self.ax.scatter(points[:,0], points[:,1], marker='|', color=self.color)
- self.ax.set_yticks(list(range(len(self.words))))
- self.ax.set_yticklabels(self.words)
+ self.ax.set_yticks(list(range(len(self.target_words_))))
+ self.ax.set_yticklabels(self.target_words_)
def finalize(self, **kwargs):
"""
@@ -118,7 +116,7 @@ def finalize(self, **kwargs):
kwargs: generic keyword arguments.
"""
- self.ax.set_ylim(-1, len(self.words))
+ self.ax.set_ylim(-1, len(self.target_words_))
self.ax.set_title("Lexical Dispersion Plot")
self.ax.set_xlabel("Word Offset")
self.ax.grid(False)
| diff --git a/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_generator_input.png b/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_generator_input.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_generator_input.png differ
diff --git a/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_ignore_case.png b/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_ignore_case.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_dispersion/test_dispersionplot_ignore_case.png differ
diff --git a/tests/baseline_images/test_text/test_dispersion/test_integrated_dispersionplot.png b/tests/baseline_images/test_text/test_dispersion/test_integrated_dispersionplot.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_text/test_dispersion/test_integrated_dispersionplot.png differ
diff --git a/tests/test_text/test_dispersion.py b/tests/test_text/test_dispersion.py
new file mode 100644
--- /dev/null
+++ b/tests/test_text/test_dispersion.py
@@ -0,0 +1,82 @@
+# tests.test_text.test_dispersion
+# Tests for the dispersion plot visualization
+#
+# Author: Larry Gray
+# Github: @lwgray
+# Created: 2018-06-22 15:27
+#
+# Copyright (C) 2018
+# For license information, see LICENSE.txt
+#
+# ID: test_dispersion.py [] [email protected] $
+
+"""
+Tests for the dispersion plot text visualization
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import sys
+import pytest
+
+from yellowbrick.text.dispersion import *
+from tests.dataset import DatasetMixin
+from tests.base import VisualTestCase
+from itertools import chain
+
+##########################################################################
+## DispersionPlot Tests
+##########################################################################
+
[email protected](sys.platform == "win32", reason="Issue #491")
+class DispersionPlotTests(VisualTestCase, DatasetMixin):
+
+ def test_integrated_dispersionplot(self):
+ """
+ Assert no errors occur during DispersionPlot integration
+ """
+ corpus = self.load_data('hobbies')
+
+ text = [word for doc in corpus.data for word in doc.split()]
+ target_words = ['Game', 'player', 'score', 'oil', 'Man']
+
+ visualizer = DispersionPlot(target_words)
+ visualizer.fit(text)
+ visualizer.ax.grid(False)
+
+ self.assert_images_similar(visualizer, tol=25)
+
+ def test_dispersionplot_ignore_case(self):
+ """
+ Assert no errors occur during DispersionPlot integration
+ with ignore_case parameter turned on
+ """
+ corpus = self.load_data('hobbies')
+
+ text = [word for doc in corpus.data for word in doc.split()]
+ target_words = ['Game', 'player', 'score', 'oil', 'Man']
+
+ visualizer = DispersionPlot(target_words, ignore_case=True)
+ visualizer.fit(text)
+ visualizer.ax.grid(False)
+
+ self.assert_images_similar(visualizer, tol=25)
+
+ def test_dispersionplot_generator_input(self):
+ """
+        Assert no errors occur during DispersionPlot integration
+ when the corpus' text type is a generator
+ """
+ corpus = self.load_data('hobbies')
+
+ text = chain(*map(str.split, corpus.data))
+ target_words = ['Game', 'player', 'score', 'oil', 'Man']
+
+ visualizer = DispersionPlot(target_words, ignore_case=True)
+ visualizer.fit(text)
+ visualizer.ax.grid(False)
+
+ self.assert_images_similar(visualizer, tol=25)
+
| Complete Tests and Documentation for DispersionPlot Visualizer
To be complete, the new Dispersion Plot Visualizer requires:
- [ ] Tests
- [ ] Documentation
@DistrictDataLabs/team-oz-maintainers
| 2018-06-23T00:18:42 |
|
DistrictDataLabs/yellowbrick | 505 | DistrictDataLabs__yellowbrick-505 | [
"504"
] | 3b3eb63c32bfc86145ec9e130e3a79c9ffc2bec0 | diff --git a/yellowbrick/features/manifold.py b/yellowbrick/features/manifold.py
--- a/yellowbrick/features/manifold.py
+++ b/yellowbrick/features/manifold.py
@@ -262,6 +262,14 @@ def manifold(self, transformer):
self._name = self._manifold.__class__.__name__
def fit(self, X, y=None):
+ """
+ Fits the manifold on X and transforms the data to plot it on the axes.
+ See fit_transform() for more details.
+ """
+ self.fit_transform(X, y)
+ return self
+
+ def fit_transform(self, X, y=None):
"""
Fits the manifold on X and transforms the data to plot it on the axes.
The optional y specified can be used to declare discrete colors. If
@@ -311,7 +319,7 @@ def fit(self, X, y=None):
self.fit_time_ = time.time() - start
self.draw(Xp, y)
- return self
+ return Xp
def transform(self, X):
"""
@@ -327,7 +335,10 @@ def transform(self, X):
Xprime : array-like of shape (n, 2)
Returns the 2-dimensional embedding of the instances.
"""
- return self.manifold.transform(X)
+ try:
+ return self.manifold.transform(X)
+ except AttributeError as e:
+ raise AttributeError(str(e) + " try using fit_transform instead.")
def draw(self, X, y=None):
"""
| diff --git a/tests/test_features/test_manifold.py b/tests/test_features/test_manifold.py
--- a/tests/test_features/test_manifold.py
+++ b/tests/test_features/test_manifold.py
@@ -92,16 +92,30 @@ def test_manifold_instance_construction(self):
oz = Manifold(manifold=manifold)
assert oz.manifold is manifold
- @patch('yellowbrick.features.manifold.Manifold.draw', spec=True)
- def test_manifold_fit(self, mock_draw):
+ @patch('yellowbrick.features.manifold.Manifold.fit_transform', spec=True)
+ def test_manifold_fit(self, mock_fit_transform):
"""
Test manifold fit method
"""
X, y = make_s_curve(1000, random_state=888)
manifold = Manifold(target="auto")
- assert not hasattr(manifold, 'fit_time_')
assert manifold.fit(X, y) is manifold, "fit did not return self"
+ mock_fit_transform.assert_called_once()
+
+ @patch('yellowbrick.features.manifold.Manifold.draw', spec=True)
+ def test_manifold_fit_transform(self, mock_draw):
+ """
+ Test manifold fit_transform method
+ """
+ X, y = make_s_curve(1000, random_state=888)
+ manifold = Manifold(target="auto")
+
+ assert not hasattr(manifold, 'fit_time_')
+
+ Xp = manifold.fit_transform(X, y)
+ assert Xp.shape == (X.shape[0], 2)
+
mock_draw.assert_called_once()
assert hasattr(manifold, 'fit_time_')
assert manifold._target_color_type == CONTINUOUS
@@ -117,8 +131,12 @@ def test_manifold_classification(self):
)
oz = Manifold(manifold="spectral", target="discrete", random_state=108)
+ assert not hasattr(oz, 'classes_')
+
oz.fit(X, y)
+ assert hasattr(oz, 'classes_')
+ assert not hasattr(oz, 'range_')
self.assert_images_similar(oz, tol=0.5)
def test_manifold_regression(self):
@@ -130,8 +148,12 @@ def test_manifold_regression(self):
)
oz = Manifold(manifold="lle", target="continuous", random_state=1)
+ assert not hasattr(oz, 'range_')
+
oz.fit(X, y)
+ assert not hasattr(oz, 'classes_')
+ assert hasattr(oz, 'range_')
self.assert_images_similar(oz, tol=1.5)
def test_manifold_single(self):
@@ -233,3 +255,15 @@ def test_determine_target_color_type(self):
msg = "could not determine target color type"
with pytest.raises(YellowbrickValueError, match=msg):
manifold._determine_target_color_type([])
+
+ def test_manifold_no_transform(self):
+ """
+ Test the exception when manifold doesn't implement transform.
+ """
+ X, _ = make_s_curve(1000, random_state=888)
+ manifold = Manifold(manifold='mds', target="auto")
+
+ assert not hasattr(manifold._manifold, 'transform')
+
+ with pytest.raises(AttributeError, match="try using fit_transform instead"):
+ manifold.transform(X)
| Some manifold algorithms do not have transform attribute in sklearn.
**Describe the bug**
Manifold plotting with `manifold='mds'` produces the correct plot, but raises an error complaining that the MDS estimator has no `transform` attribute.
**To Reproduce**
```python
%matplotlib inline
import numpy as np
from yellowbrick.features.manifold import Manifold
y = np.linspace(-1, 1, 100)[:, ]
X = y[:, None] + np.random.random((100, 10))
viz = Manifold(manifold='mds')
viz.fit_transform(X, y)
viz.poof()
```
**Expected behavior**
No transform attribute error.
**Traceback**
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-7f7457a2faab> in <module>()
6 X = y[:, None] + np.random.random((100, 10))
7 viz = Manifold(manifold='mds')
----> 8 viz.fit_transform(X, y)
9 viz.poof()
~/anaconda3/lib/python3.6/site-packages/sklearn/base.py in fit_transform(self, X, y, **fit_params)
518 else:
519 # fit method of arity 2 (supervised transformation)
--> 520 return self.fit(X, y, **fit_params).transform(X)
521
522
~/anaconda3/lib/python3.6/site-packages/yellowbrick/features/manifold.py in transform(self, X)
328 Returns the 2-dimensional embedding of the instances.
329 """
--> 330 return self.manifold.transform(X)
331
332 def draw(self, X, y=None):
AttributeError: 'MDS' object has no attribute 'transform'
```
**Desktop (please complete the following information):**
- OS: macOS
- Python 3.6
- Yellowbrick Version 0.8
**Additional context**
A similar error occurs for `manifold='spectral'` and `manifold='tsne'`.
**Proposed solution**
Remove the [transform](https://github.com/DistrictDataLabs/yellowbrick/blob/3b3eb63c32bfc86145ec9e130e3a79c9ffc2bec0/yellowbrick/features/manifold.py#L316) method from `yellowbrick.features.manifold` unless it is being used somewhere else.
| @zjpoh thanks for the bug report! I think this should be an easy fix; I'm going to leave `transform` in, but will catch `AttributeErrors` in the implementation; then I'll implement a manifold-specific `fit_transform` method rather than relying on the default implementation that's part of `TransformerMixin`.
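A rough sketch of what that would look like from the user's side (written against the patched API, not the released 0.8; the data generation simply mirrors the reproduction above):

```python
import numpy as np
from yellowbrick.features.manifold import Manifold

y = np.linspace(-1, 1, 100)
X = y[:, None] + np.random.random((100, 10))

viz = Manifold(manifold='mds')
viz.fit_transform(X, y)   # no longer raises; MDS's own fit_transform is used
viz.poof()

# transform() alone still cannot work for MDS, but the error now hints at
# the workaround:
#   viz.transform(X)
#   AttributeError: 'MDS' object has no attribute 'transform' try using
#   fit_transform instead.
```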
Update to follow shortly! | 2018-07-17T20:52:28 |
DistrictDataLabs/yellowbrick | 510 | DistrictDataLabs__yellowbrick-510 | [
"492"
] | 2869f65a8c37b34b86a520bd6fdb866f33198596 | diff --git a/yellowbrick/features/importances.py b/yellowbrick/features/importances.py
--- a/yellowbrick/features/importances.py
+++ b/yellowbrick/features/importances.py
@@ -24,9 +24,10 @@
import numpy as np
import matplotlib.pyplot as plt
-from yellowbrick.utils import is_dataframe
+from yellowbrick.utils import is_dataframe, is_classifier
from yellowbrick.base import ModelVisualizer
from yellowbrick.exceptions import YellowbrickTypeError, NotFitted
+from ..style.palettes import color_palette
##########################################################################
@@ -72,6 +73,11 @@ class FeatureImportances(ModelVisualizer):
The label for the X-axis. If None is automatically determined by the
underlying model and options provided.
+ stack : bool, default: False
+ If true and the classifier returns multi-class feature importance,
+ then a stacked bar plot is plotted; otherwise the mean of the
+        feature importance across classes is plotted.
+
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
@@ -84,6 +90,9 @@ class FeatureImportances(ModelVisualizer):
feature_importances_ : np.array
The numeric value of the feature importance computed by the model
+ classes_ : np.array
+        The classes labeled. This is not None only for classifiers.
+
Examples
--------
@@ -94,13 +103,13 @@ class FeatureImportances(ModelVisualizer):
"""
def __init__(self, model, ax=None, labels=None, relative=True,
- absolute=False, xlabel=None, **kwargs):
+ absolute=False, xlabel=None, stack=False, **kwargs):
super(FeatureImportances, self).__init__(model, ax, **kwargs)
# Data Parameters
self.set_params(
labels=labels, relative=relative, absolute=absolute,
- xlabel=xlabel,
+ xlabel=xlabel, stack=stack
)
def fit(self, X, y=None, **kwargs):
@@ -129,15 +138,20 @@ def fit(self, X, y=None, **kwargs):
# Get the feature importances from the model
self.feature_importances_ = self._find_importances_param()
- # If feature importances is a multidim array, we're expecting a shape of
- # (n_classes, n_features) therefore we flatten by taking the average by
- # column to get shape (n_features,) (see LogisticRegression)
- if self.feature_importances_.ndim > 1:
- self.feature_importances_ = np.mean(self.feature_importances_, axis=0)
+ # Get the classes from the model
+ if is_classifier(self):
+ self.classes_ = self._find_classes_param()
+ else:
+ self.classes_ = None
+ self.stack = False
- # TODO - as an alternative to the above flattening approach, explore an
- # alternative visualize that uses the array shape to create a stacked bar chart
- # of feature importances for each class/feature combination
+ # If self.stack = True and feature importances is a multidim array,
+ # we're expecting a shape of (n_classes, n_features)
+ # therefore we flatten by taking the average by
+ # column to get shape (n_features,) (see LogisticRegression)
+ if not self.stack and self.feature_importances_.ndim > 1:
+ self.feature_importances_ = np.mean(self.feature_importances_,
+ axis=0)
# Apply absolute value filter before normalization
if self.absolute:
@@ -145,7 +159,7 @@ def fit(self, X, y=None, **kwargs):
# Normalize features relative to the maximum
if self.relative:
- maxv = self.feature_importances_.max()
+ maxv = np.abs(self.feature_importances_).max()
self.feature_importances_ /= maxv
self.feature_importances_ *= 100.0
@@ -164,9 +178,14 @@ def fit(self, X, y=None, **kwargs):
self.features_ = np.array(self.labels)
# Sort the features and their importances
- sort_idx = np.argsort(self.feature_importances_)
- self.features_ = self.features_[sort_idx]
- self.feature_importances_ = self.feature_importances_[sort_idx]
+ if self.stack:
+ sort_idx = np.argsort(np.mean(self.feature_importances_, 0))
+ self.features_ = self.features_[sort_idx]
+ self.feature_importances_ = self.feature_importances_[:, sort_idx]
+ else:
+ sort_idx = np.argsort(self.feature_importances_)
+ self.features_ = self.features_[sort_idx]
+ self.feature_importances_ = self.feature_importances_[sort_idx]
# Draw the feature importances
self.draw()
@@ -185,7 +204,27 @@ def draw(self, **kwargs):
pos = np.arange(self.features_.shape[0]) + 0.5
# Plot the bar chart
- self.ax.barh(pos, self.feature_importances_, align='center')
+ if self.stack:
+ colors = color_palette(kwargs.pop('colors', None),
+ len(self.classes_))
+ zeros = np.zeros(self.feature_importances_.shape[1])
+ left_arr = np.zeros((self.feature_importances_.shape[1], 2))
+
+ for idx in range(len(self.feature_importances_)):
+ left = [
+ left_arr[j, int(self.feature_importances_[idx][j] > 0)]
+ for j in range(len(self.feature_importances_[idx]))
+ ]
+
+ self.ax.barh(pos, self.feature_importances_[idx], left=left,
+ color=colors[idx], label=self.classes_[idx])
+
+ left_arr[:, 0] += np.minimum(self.feature_importances_[idx],
+ zeros)
+ left_arr[:, 1] += np.maximum(self.feature_importances_[idx],
+ zeros)
+ else:
+ self.ax.barh(pos, self.feature_importances_, align='center')
# Set the labels for the bars
self.ax.set_yticks(pos)
@@ -207,9 +246,27 @@ def finalize(self, **kwargs):
# Remove the ygrid
self.ax.grid(False, axis='y')
+ if self.stack:
+ plt.legend(bbox_to_anchor=(1.04, 0.5), loc="center left")
# Ensure we have a tight fit
plt.tight_layout()
+ def _find_classes_param(self):
+ """
+ Searches the wrapped model for the classes_ parameter.
+ """
+ for attr in ["classes_"]:
+ try:
+ return getattr(self.estimator, attr)
+ except AttributeError:
+ continue
+
+ raise YellowbrickTypeError(
+ "could not find classes_ param on {}".format(
+ self.estimator.__class__.__name__
+ )
+ )
+
def _find_importances_param(self):
"""
Searches the wrapped model for the feature importances parameter.
@@ -257,7 +314,8 @@ def _is_fitted(self):
##########################################################################
def feature_importances(model, X, y=None, ax=None, labels=None,
- relative=True, absolute=False, xlabel=None, **kwargs):
+ relative=True, absolute=False, xlabel=None,
+ stack=False, **kwargs):
"""
Displays the most informative features in a model by showing a bar chart
of features ranked by their importances. Although primarily a feature
@@ -297,6 +355,11 @@ def feature_importances(model, X, y=None, ax=None, labels=None,
The label for the X-axis. If None is automatically determined by the
underlying model and options provided.
+ stack : bool, default: False
+ If true and the classifier returns multi-class feature importance,
+ then a stacked bar plot is plotted; otherwise the mean of the
+        feature importance across classes is plotted.
+
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
@@ -308,7 +371,7 @@ def feature_importances(model, X, y=None, ax=None, labels=None,
"""
# Instantiate the visualizer
visualizer = FeatureImportances(
- model, ax, labels, relative, absolute, xlabel, **kwargs)
+ model, ax, labels, relative, absolute, xlabel, stack, **kwargs)
# Fit and transform the visualizer (calls draw)
visualizer.fit(X, y)
| diff --git a/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png differ
diff --git a/tests/test_features/test_importances.py b/tests/test_features/test_importances.py
--- a/tests/test_features/test_importances.py
+++ b/tests/test_features/test_importances.py
@@ -28,8 +28,9 @@
from yellowbrick.exceptions import NotFitted
from yellowbrick.features.importances import *
-from sklearn.base import BaseEstimator
-from sklearn.linear_model import Lasso
+from sklearn.datasets import load_iris
+from sklearn.base import BaseEstimator, ClassifierMixin
+from sklearn.linear_model import LogisticRegression, Lasso
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
@@ -248,6 +249,24 @@ def test_multi_coefs(self):
npt.assert_equal(visualizer.feature_importances_.ndim, 1)
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows"
+ )
+ def test_multi_coefs_stacked(self):
+ """
+ Test stack plot with multidimensional coefficients
+ """
+ X_iris, y_iris = load_iris(True)
+ X_iris_pd = pd.DataFrame(X_iris, columns=['f1', 'f2', 'f3', 'f4'])
+
+ viz = FeatureImportances(LogisticRegression(), stack=True)
+ viz.fit(X_iris_pd, y_iris)
+ viz.poof()
+
+ npt.assert_equal(viz.feature_importances_.shape, (3, 4))
+ self.assert_images_similar(viz)
+
+
@pytest.mark.skipif(pd is None, reason="pandas is required for this test")
def test_fit_dataframe(self):
"""
@@ -350,6 +369,21 @@ def test_find_importances_param_not_found(self):
with pytest.raises(YellowbrickTypeError):
visualizer._find_importances_param()
+ def test_find_classes_param_not_found(self):
+ """
+ Raises an exception when classes param not found
+ """
+ model = MockClassifier()
+ visualizer = FeatureImportances(model)
+
+ assert not hasattr(model, 'classes_')
+
+ e = 'could not find classes_ param on {}'.format(
+ visualizer.estimator.__class__.__name__
+ )
+ with pytest.raises(YellowbrickTypeError, match=e):
+ visualizer._find_classes_param()
+
def test_xlabel(self):
"""
Check the various xlabels are sensical
@@ -421,3 +455,10 @@ def make_importance_param(self, name='feature_importances_', value=None):
def fit(self, X, y=None, **kwargs):
return self
+
+
+class MockClassifier(BaseEstimator, ClassifierMixin):
+ """
+ Creates empty classifier.
+ """
+ pass
| Enhanced FeatureImportances for Multidimensional Coefficients
**Describe the solution you'd like**
Create an enhanced version of the Feature Importances visualizer to allow the user to see the coefficients on a per-class basis.
**Is your feature request related to a problem? Please describe.**
Some estimators returns a multidimensional array for `coef_` , like scikit-learn's [`LogisticRegression`](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html), in anticipation that there will be multiple classes and decision function coefficients for each class. In the case of a binary classification, the shape of `coef_` is (1, n_features), for multiclass it will be (n_classes, n_features). In #490, we addressed the bug raised in #485 by simply averaging the per-class feature importances for each feature in the case of multidimensional coefficients (as with LogisticRegression). However, we could create an enhanced version of this visualizer that would allow the user to see the coefficients on a per-class basis, depicted as a stacked histogram.
**Examples**
The `fit` method now checks to see if the `feature_importances_` are a multidimensional array. We could add a param such as `hist=True` that instead of averaging would stack the importances per class using something like [this](https://stackoverflow.com/questions/16653815/horizontal-stacked-bar-chart-in-matplotlib/16654564#16654564) in our `draw` method. If we go this route, we may want to consider adding a target labels attribute to the class à la the [`ClassificationScoreVisualizer`](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/classifier/base.py#L67).
| @rebeccabilbro If nobody is working on this, I would like to give it a try~
@zjpoh That would be great! Go ahead and open up a PR that references this issue when you're ready. | 2018-07-19T15:14:59 |
DistrictDataLabs/yellowbrick | 518 | DistrictDataLabs__yellowbrick-518 | [
"312"
] | 161a91e9a65de139b1b3e54ff680b53a74610a48 | diff --git a/yellowbrick/target/__init__.py b/yellowbrick/target/__init__.py
--- a/yellowbrick/target/__init__.py
+++ b/yellowbrick/target/__init__.py
@@ -20,3 +20,4 @@
# Hoist visualizers into the top level of the target package
from .class_balance import ClassBalance, class_balance
+from .binning import BalancedBinningReference, balanced_binning_reference
diff --git a/yellowbrick/target/binning.py b/yellowbrick/target/binning.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/target/binning.py
@@ -0,0 +1,177 @@
+
+# yellowbrick.target.binning
+# Implementations of histogram with vertical lines to help with balanced binning.
+#
+# Author: Juan L. Kehoe ([email protected])
+# Author: Prema Damodaran Roman ([email protected])
+
+# Created: Tue Mar 13 19:50:54 2018 -0400
+#
+# Copyright (C) 2018 District Data Labs
+# For license information, see LICENSE.txt
+#
+# ID: binning.py
+
+"""
+Implements histogram with vertical lines to help with balanced binning.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+import matplotlib.pyplot as plt
+import numpy as np
+
+from .base import TargetVisualizer
+from yellowbrick.exceptions import YellowbrickValueError
+
+##########################################################################
+## Balanced Binning Reference
+##########################################################################
+
+class BalancedBinningReference(TargetVisualizer):
+ """
+    BalancedBinningReference generates a histogram with vertical lines
+    showing the recommended value points at which to bin your data so that
+    it is evenly distributed in each bin.
+
+ Parameters
+ ----------
+ ax : matplotlib Axes, default: None
+ This is inherited from FeatureVisualizer and is defined within
+ BalancedBinningReference.
+ target : string, default: "Frequency"
+ The name of the y variable
+ bins : number of bins to generate the histogram, default: 4
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Attributes
+ ----------
+ bin_edges : binning reference values
+
+ Examples
+ --------
+ >>> visualizer = BalancedBinningReference()
+ >>> visualizer.fit(y)
+ >>> visualizer.poof()
+
+
+ Notes
+ -----
+ These parameters can be influenced later on in the visualization
+ process, but can and should be set as early as possible.
+ """
+
+ def __init__(self, ax=None, target=None, bins=4, **kwargs):
+
+ super(BalancedBinningReference, self).__init__(ax, **kwargs)
+
+ self.target = target
+ self.bins = bins
+
+ def draw(self, y, **kwargs):
+ """
+        Draws a histogram with the reference values for binning as vertical lines
+ Parameters
+ ----------
+ y : an array of one dimension or a pandas Series
+ """
+
+ # draw the histogram
+ hist, bin_edges = np.histogram(y, bins=self.bins)
+ self.bin_edges_ = bin_edges
+ self.ax.hist(y, bins=self.bins, color=kwargs.pop("color", "#6897bb"), **kwargs)
+
+        # add vertical lines with binning reference values
+ plt.vlines(bin_edges,0,max(hist),colors=kwargs.pop("colors", "r"))
+
+ def fit(self, y, **kwargs):
+ """
+ Sets up y for the histogram and checks to
+ ensure that y is of the correct data type
+ Fit calls draw
+ Parameters
+ ----------
+ y : an array of one dimension or a pandas Series
+ kwargs : dict
+ keyword arguments passed to Scikit-Learn API.
+ """
+
+ #throw an error if y has more than 1 column
+ if y.ndim > 1:
+ raise YellowbrickValueError("y needs to be an array or Series with one dimension")
+
+ # Handle the target name if it is None.
+ if self.target is None:
+ self.target = 'Frequency'
+
+ self.draw(y)
+ return self
+
+
+ def poof(self, **kwargs):
+ """
+ Creates the labels for the feature and target variables
+ """
+
+ self.ax.set_xlabel(self.target)
+ self.finalize(**kwargs)
+
+ def finalize(self, **kwargs):
+ """
+ Finalize executes any subclass-specific axes finalization steps.
+ The user calls poof and poof calls finalize.
+ Parameters
+ ----------
+ kwargs: generic keyword arguments.
+ """
+
+ for tk in self.ax.get_xticklabels():
+ tk.set_visible(True)
+
+ for tk in self.ax.get_yticklabels():
+ tk.set_visible(True)
+
+
+##########################################################################
+## Quick Method
+##########################################################################
+
+def balanced_binning_reference(y, ax=None, target='Frequency', bins=4, **kwargs):
+
+ """
+    BalancedBinningReference generates a histogram with vertical lines
+    showing the recommended value points at which to bin your data so that
+    it is evenly distributed in each bin.
+
+ Parameters
+ ----------
+ y : an array of one dimension or a pandas Series
+
+ ax : matplotlib Axes, default: None
+ This is inherited from FeatureVisualizer and is defined within
+ BalancedBinningReference.
+ target : string, default: "Frequency"
+ The name of the y variable
+ bins : number of bins to generate the histogram, default: 4
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ """
+
+ # Initialize the visualizer
+ visualizer = BalancedBinningReference(ax=ax, bins=bins, target=target, **kwargs)
+
+ # Fit and poof the visualizer
+ visualizer.fit(y)
+ visualizer.poof()
+
+
+
+
+
+
+
| diff --git a/tests/baseline_images/test_target/test_binning/test_balancedbinningreference.png b/tests/baseline_images/test_target/test_binning/test_balancedbinningreference.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_target/test_binning/test_balancedbinningreference.png differ
diff --git a/tests/test_target/test_binning.py b/tests/test_target/test_binning.py
new file mode 100644
--- /dev/null
+++ b/tests/test_target/test_binning.py
@@ -0,0 +1,39 @@
+# tests.test_target.test_binning
+# Tests for the BalancedBinningReference visualizer
+#
+# Author: Juan L. Kehoe ([email protected])
+# Author: Prema Damodaran Roman ([email protected])
+# Created: Thu Jul 20 10:21:49 2018 -0400
+#
+# ID: test_binning.py
+
+from tests.base import VisualTestCase
+from tests.dataset import DatasetMixin
+from yellowbrick.target.binning import *
+
+##########################################################################
+## BalancedBinningReference Tests
+##########################################################################
+
+class TestBalancedBinningReference(VisualTestCase, DatasetMixin):
+ """
+ Test the BalancedBinningReference visualizer
+ """
+
+ def test_balancedbinningreference(self):
+ """
+ Test Histogram on a real dataset
+ """
+ # Load the data from the fixture
+ dataset = self.load_data('occupancy')
+
+ # Get the data
+ y = dataset["temperature"]
+
+
+ visualizer = BalancedBinningReference()
+ visualizer.fit(y)
+ visualizer.poof()
+ self.assert_images_similar(visualizer, tol=0.5)
+
+
\ No newline at end of file
| Demonstration/walkthrough proposal on how to use ClassBalance (and variations) to bin continuous values for classification
I. Title
Better Binning for Classification Problems:
Creating Categorical Values from Continuous Values
II. Premise
- A lot of machine learning problems in the real world suffer from the curse of dimensionality; you have fewer training instances than you’d like, and predictive signal is distributed (often unpredictably!) across many different features.
- One example is when your target is continuously-valued, but there aren’t enough instances to predict these values to the precision of regression.
- What if we transform the regression problem into a classification problem? We can try to do this by binning the continuous values into buckets for classification. But how do we pick the bins?
III. Dataset Intro
- About the Pitchfork album reviews corpus - funny! snarky! sentiment analysis??
- Download the data from https://www.kaggle.com/nolanbconaway/pitchfork-data/data
- Custom CorpusReader to access the text and scores
- Custom TextNormalizer to lemmatize and remove stop words
- use the NumPy digitize method to naively bin the continuous target values (see the sketch after this outline)
IV. Preliminary Text Analytics Pipeline
- Build using Scikit-Learn Pipeline
- use ConfusionMatrix to visually evaluate
- use ClassBalance to visualize imbalance
- talk through selection bias - why initial bins didn’t work
V. Tuning Bins with ClassBalance
- redo with better distributed bins for target values
- (hopefully) show better results
VI. Conclusion/Teaser for New ClassBalanceHeatmap Visualizer
- how to combine insight from ConfusionMatrix with interpretability of ClassBalance?
Reviewers:
@marskar
@lwgray
@yzyzy
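A minimal sketch of the naive binning step mentioned in section III (the scores and cut points below are made-up placeholders; choosing better edges is exactly what the walkthrough is about):

```python
import numpy as np

# A handful of made-up Pitchfork-style review scores on a 0.0-10.0 scale
scores = np.array([2.3, 5.0, 6.8, 7.2, 7.9, 8.1, 8.4, 9.0, 9.7])

# Naive, evenly spaced cut points -- these tend to produce imbalanced classes
edges = np.array([4.0, 6.0, 8.0])

# np.digitize assigns each score the index of the bin it falls into (0-3)
classes = np.digitize(scores, edges)

# Inspect the class balance; most instances pile into one or two bins
print(np.bincount(classes))
```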
| You beat me to it... But here is my attempt at transcribing your notes
[Sane_Binning.pdf](https://github.com/DistrictDataLabs/yellowbrick/files/1769942/Sane_Binning.pdf)
Has the coding for the corpus reader, TextNormalizer, and Pipeline been completed?
Hey there @lwgray - yes; here's the `TextNormalizer`:
```python
import nltk
import unicodedata

from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer
from sklearn.base import BaseEstimator, TransformerMixin


class TextNormalizer(BaseEstimator, TransformerMixin):

    def __init__(self, language='english'):
        self.stopwords = set(nltk.corpus.stopwords.words(language))
        self.lemmatizer = WordNetLemmatizer()

    def is_punct(self, token):
        return all(
            unicodedata.category(char).startswith('P') for char in token
        )

    def is_stopword(self, token):
        return token.lower() in self.stopwords

    def normalize(self, document):
        return [
            self.lemmatize(token, tag).lower()
            for sentence in document
            for (token, tag) in sentence
            if not self.is_punct(token)
            and not self.is_stopword(token)
        ]

    def lemmatize(self, token, pos_tag):
        tag = {
            'N': wn.NOUN,
            'V': wn.VERB,
            'R': wn.ADV,
            'J': wn.ADJ
        }.get(pos_tag[0], wn.NOUN)
        return self.lemmatizer.lemmatize(token, tag)

    def fit(self, documents, y=None):
        return self

    def transform(self, documents):
        return [
            ' '.join(self.normalize(doc)) for doc in documents
        ]
```
I should have some time to work on the draft tomorrow, so I'll post the corpus reader and preprocessor then!
If you need any help with the coding, just let me know 😄
I know you have done all the analysis but I produced this so I could better understand the data and possible workflow
### Confusion Matrix and Class Balance: Before Class Adjustment


### Confusion Matrix and Class Balance: After Class Adjustment


@lwgray nice! Would you be interested in taking a crack at pulling the prototype code that @bbengfort wrote into a new `ClassBalanceHeatMap` visualizer using the [Yellowbrick API](http://www.scikit-yb.org/en/latest/contributing.html#visualizer-api)? It would be awesome to be able to reference the work-in-progress in my post, and then maybe you could do a follow-up post on creating a new Yellowbrick visualizer?
Here's the prototype code:
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics.classification import _check_targets
def plot_class_balance_preds(y_true, y_pred, labels=None, ax=None):
    # Use Sklearn tools to validate the target
    # Note y_true and y_pred should already be label encoded
    y_type, y_true, y_pred = _check_targets(y_true, y_pred)
    indices = unique_labels(y_true, y_pred)
    # Create a 2D numpy array where each row is the count of
    # the predicted classes and each column is the true class
    data = np.array([
        [(y_pred[y_true==label_t] == label_p).sum() for label_p in indices]
        for label_t in indices
    ])
    # Ensure that the number of elements in data matches y_pred and y_true
    # Not necessary but used as a sanity check
    assert data.sum() == len(y_pred) == len(y_true)
    # labels_present is the indices of the classes, labels is the string names
    # Another sanity check, this will not prevent missing classes, which is bad
    labels = labels if labels is not None else indices
    assert len(labels) == len(indices)
    # Create a matplotlib axis
    if ax is None:
        _, ax = plt.subplots()
    # Create a unique color for each predict class
    colors = [cm.spectral(x) for x in np.linspace(0, 1, len(indices))]
    # Track the stack of the bar graph
    prev = np.zeros(len(labels))
    # Plot each row
    for idx, row in enumerate(data):
        ax.bar(indices, row, label=labels[idx], bottom=prev, color=colors[idx])
        prev += row
    # Make the graph pretty
    ax.set_xticks(indices)
    ax.set_xticklabels(labels)
    ax.set_xlabel("actual class")
    ax.set_ylabel("number of predicted class")
    # Put the legend outside of the graph
    plt.legend(bbox_to_anchor=(1.04,0.5), loc="center left")
    plt.tight_layout(rect=[0,0,0.85,1])
    return ax
## Usage
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split as tts
from sklearn.ensemble import GradientBoostingClassifier
digits = load_digits()
X_train, X_test, y_train, y_true = tts(digits.data, digits.target, test_size=0.33)
model = GradientBoostingClassifier()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
g = plot_class_balance_preds(y_true, y_pred, labels=digits.target_names)
plt.show()
```
If you're looking for a model for the Yellowbrick PR process, you can check out [the one @bbengfort did](https://github.com/DistrictDataLabs/yellowbrick/pull/317) that @ndanielsen and I are reviewing now.
FYI @marskar @lwgray @yzyzy - Still have some sections to flesh out, but you can see my draft in development [here](https://github.com/rebeccabilbro/rebeccabilbro.github.io/blob/master/_drafts/critical-binning.md).
I will give it a shot. However, this is my first implementation of a visualizer. I might need your assistance if I get stuck. 😟
Awesome! Sure thing @lwgray -- definitely start by checking out the [Visualizer API description](http://www.scikit-yb.org/en/latest/contributing.html#visualizer-api) in the docs. The best strategy is to open a pull request early that describes and scopes the task so that us maintainers can be ready and available to assist as needed. Look forward to seeing what you come up with!
@rebeccabilbro I read through your drafts 3 times.... I think it is fun and fairly thorough. The only question I have is, will you discuss changing your binning so that the classes are better balanced?
Also, do you think we could talk for 10 minutes at the March 14th meeting? I would like to understand some of your coding decisions.
@rebeccabilbro I read through the draft and think it's great. My question is that it seems you are trying to see whether they can build a ClassBalanceHeatmap visualizer so you can see where your predictions go wrong for the different classes in a bar chart. However, this doesn't seem to solve your original problem of the imbalanced samples that you get at the beginning, when you first assign the scores to classes.
@rebeccabilbro I created a visualizer to help with balanced binning. You can create balanced bins based on the reference values produced by the visualizer. You can check it via the following link:
https://github.com/Juan0001/yellowbrick-balanced-bin-reference/blob/master/balanced_binning.ipynb
I was trying to put the package on yellowbrick, but I don't have permission. Could you help me? Thank you.
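For reference, a minimal usage sketch of the proposed visualizer (the random scores are purely illustrative; the class and attribute names follow the patch above):

```python
import numpy as np
from yellowbrick.target import BalancedBinningReference

# Illustrative continuous target, e.g. review scores
y = np.random.normal(7.0, 1.5, 1000)

visualizer = BalancedBinningReference(bins=4)
visualizer.fit(y)        # draws the histogram with the reference cut lines
visualizer.poof()

# The suggested cut points are stored on the fitted visualizer
print(visualizer.bin_edges_)
```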
@Juan0001 very nice! We'd love to review your work and add it to Yellowbrick. The way we do this is to have you fork the Yellowbrick repository into your own GitHub account; once you've done that you can create a pull request so that we can go over your additions, and once approved we merge them into Yellowbrick.
Detailed instructions are here: [Contributing to Yellowbrick](http://www.scikit-yb.org/en/develop/contributing.html#getting-started-on-github) but of course we're happy to go over it with you tonight.
Ok, draft now published [here](https://rebeccabilbro.github.io/better-binning/).
@rebeccabilbro @Juan0001
To me it isn't obvious what the reader can get out of visiting Juan's jupyter notebook. Maybe say something like "If you are looking for an automated way to create balanced binning then checkout this visualizer in the works"
@lwgray - I believe @Juan0001's notebook is in draft form and she's still working on fleshing out some of the possible use cases.
@rebeccabilbro Thank you very much!
@lwgray You are right, the notebook version is not very clear yet. It's the first version of the draft. In later versions I will explain why I created this function, what it can do, and how to use it in more detail. Please let me know if there's anything else I need to improve, thank you very much.
DistrictDataLabs/yellowbrick | 521 | DistrictDataLabs__yellowbrick-521 | [
"520"
] | 98a83ce3fa26c58e251d17e50dc063384ea2a1cb | diff --git a/paper/figures/figures.py b/paper/figures/figures.py
new file mode 100644
--- /dev/null
+++ b/paper/figures/figures.py
@@ -0,0 +1,211 @@
+#!/usr/bin/env python3
+# Script to create visualizations for the JOSS paper
+
+import os
+import argparse
+import numpy as np
+import pandas as pd
+import matplotlib.pyplot as plt
+
+from yellowbrick.features import Rank2D, RadViz
+from yellowbrick.model_selection import LearningCurve
+from yellowbrick.cluster import KElbowVisualizer, SilhouetteVisualizer
+from yellowbrick.classifier import ClassificationReport, DiscriminationThreshold
+from yellowbrick.regressor import ResidualsPlot, PredictionError, AlphaSelection
+
+from collections import namedtuple
+from sklearn.datasets import make_blobs
+from sklearn.naive_bayes import MultinomialNB
+from sklearn.ensemble import RandomForestRegressor
+from sklearn.cluster import MiniBatchKMeans, Birch
+from sklearn.model_selection import train_test_split as tts
+from sklearn.linear_model import LassoCV, RidgeCV, LogisticRegression
+
+
+# Store figures alongside the script that generates them
+FIGURES = os.path.dirname(__file__)
+
+# Path to datasets downloaded from S3
+DATA = os.path.join(
+ os.path.dirname(__file__), "..", "..", "yellowbrick", "datasets", "fixtures"
+)
+
+# Quick reference dataset objects
+Dataset = namedtuple('Dataset', 'X,y')
+Split = namedtuple('Split', 'train,test')
+
+
+def _make_dataset(X, y, split=False):
+ if split:
+ X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2)
+ return Dataset(Split(X_train, X_test), Split(y_train, y_test))
+ return Dataset(X, y)
+
+
+def load_occupancy(split=False):
+ """
+ Create a dataset for the specified yb dataset
+ """
+ path = os.path.join(DATA, "occupancy", "occupancy.csv")
+ data = pd.read_csv(path)
+
+ X = data[["temperature", "relative humidity", "light", "C02", "humidity"]]
+ y = data["occupancy"]
+ return _make_dataset(X, y, split)
+
+
+def load_concrete(split=False):
+ path = os.path.join(DATA, "concrete", "concrete.csv")
+ data = pd.read_csv(path)
+
+ X = data[['cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age']]
+ y = data['strength']
+ return _make_dataset(X, y, split)
+
+
+def load_spam(split=False):
+ path = os.path.join(DATA, "spam", "spam.csv")
+ data = pd.read_csv(path)
+
+ target = "is_spam"
+ features = [col for col in data.columns if col != target]
+
+ X = data[features]
+ y = data[target]
+ return _make_dataset(X, y, split)
+
+
+def feature_analysis(fname="feature_analysis.png"):
+ """
+ Create figures for feature analysis
+ """
+
+ # Create side-by-side axes grid
+ _, axes = plt.subplots(ncols=2, figsize=(18,6))
+
+ # Draw RadViz on the left
+ data = load_occupancy(split=False)
+ oz = RadViz(ax=axes[0], classes=["unoccupied", "occupied"])
+ oz.fit(data.X, data.y)
+ oz.finalize()
+
+ # Draw Rank2D on the right
+ data = load_concrete(split=False)
+ oz = Rank2D(ax=axes[1])
+ oz.fit_transform(data.X, data.y)
+ oz.finalize()
+
+ # Save figure
+ path = os.path.join(FIGURES, fname)
+ plt.tight_layout()
+ plt.savefig(path)
+
+
+def regression(fname="regression.png"):
+ """
+ Create figures for regression models
+ """
+ _, axes = plt.subplots(ncols=2, figsize=(18, 6))
+ alphas = np.logspace(-10, 1, 300)
+ data = load_concrete(split=True)
+
+ # Plot prediction error in the middle
+ oz = PredictionError(LassoCV(alphas=alphas), ax=axes[0])
+ oz.fit(data.X.train, data.y.train)
+ oz.score(data.X.test, data.y.test)
+ oz.finalize()
+
+ # Plot residuals on the right
+ oz = ResidualsPlot(RidgeCV(alphas=alphas), ax=axes[1])
+ oz.fit(data.X.train, data.y.train)
+ oz.score(data.X.test, data.y.test)
+ oz.finalize()
+
+ # Save figure
+ path = os.path.join(FIGURES, fname)
+ plt.tight_layout()
+ plt.savefig(path)
+
+
+def classification(fname="classification.png"):
+
+ # Create side-by-side axes grid
+ _, axes = plt.subplots(ncols=2, figsize=(18,6))
+
+ # Add ClassificationReport to the reft
+ data = load_spam(split=True)
+ oz = ClassificationReport(MultinomialNB(), classes=["ham", "spam"], ax=axes[0])
+ oz.fit(data.X.train, data.y.train)
+ oz.score(data.X.test, data.y.test)
+ oz.finalize()
+
+ # Add DiscriminationThreshold to the right
+ data = load_spam(split=False)
+ oz = DiscriminationThreshold(LogisticRegression(), ax=axes[1])
+ oz.fit(data.X, data.y)
+ oz.finalize()
+
+ # Save figure
+ path = os.path.join(FIGURES, fname)
+ plt.tight_layout()
+ plt.savefig(path)
+
+
+def clustering(fname="clustering.png"):
+ # Create side-by-side axes grid
+ _, axes = plt.subplots(ncols=2, figsize=(18,6))
+ X, y = make_blobs(centers=7)
+
+ # Add K-Elbow to the left
+ oz = KElbowVisualizer(MiniBatchKMeans(), k=(3,12), ax=axes[0])
+ oz.fit(X, y)
+ oz.finalize()
+
+ # Add SilhouetteVisualizer to the right
+ oz = SilhouetteVisualizer(Birch(n_clusters=5), ax=axes[1])
+ oz.fit(X, y)
+ oz.finalize()
+
+ # Save figure
+ path = os.path.join(FIGURES, fname)
+ plt.tight_layout()
+ plt.savefig(path)
+
+def hyperparameter_tuning(fname="hyperparameter_tuning.png"):
+ # Create side-by-side axes grid
+ _, axes = plt.subplots(ncols=2, figsize=(18,6))
+
+ # Load the concrete dataset
+ data = load_concrete(split=False)
+
+ # Create a list of alphas to cross-validate against
+ alphas = np.logspace(-10, 1, 400)
+
+ # Add AlphaSelection to the left
+ oz = AlphaSelection(LassoCV(alphas=alphas), ax=axes[0])
+ oz.fit(data.X, data.y)
+ oz.finalize()
+
+ # Add LearningCurve to the right
+ oz = LearningCurve(RandomForestRegressor(), scoring='r2', ax=axes[1])
+ oz.fit(data.X, data.y)
+ oz.finalize()
+
+ # Save figure
+ path = os.path.join(FIGURES, fname)
+ plt.tight_layout()
+ plt.savefig(path)
+
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser(
+ description="generate visualizations for JOSS paper"
+ )
+
+ args = parser.parse_args()
+ feature_analysis()
+ regression()
+ classification()
+ clustering()
+ hyperparameter_tuning()
| Submit a Yellowbrick Paper to the Journal of Open Source Software
The [Journal of Open Source Software (JOSS)](http://arfon.org/announcing-the-journal-of-open-source-software/) is a developer-friendly journal for research software packages and is affiliated with the Open Source Initiative (OSI) and sponsored by NumFOCUS. For a long time, we've wanted to publish a paper related to Yellowbrick, and this might be a good start.
The paper should be written in Markdown and be between 250-1000 words. Principally, it should contain a summary describing high-level functionality for a non-specialist audience with a clear statement of purpose. It should also include a bibliography and a metadata file that can be read by JOSS reviewers.
High level tasks:
- [x] determine who the authors are
- [x] create a bibliography in paper/paper.bib
- [x] generate the metadata in paper/paper.json with this [script](https://gist.github.com/arfon/478b2ed49e11f984d6fb)
- [x] write outline of the paper
- [x] write first draft of the paper
- [x] review and edit draft
- [x] submit to JOSS
Requirements:
- [x] The software should be open source as per the OSI definition
- [ ] The software should have an obvious research application
- [x] You should be a major contributor to the software you are submitting
- [x] The software should be a significant contribution to the available open source software that either enables some new research challenges to be addressed or makes addressing research challenges significantly better (e.g., faster, easier, simpler)
- [x] The software should be feature complete (no half-baked solutions). Minor 'utility' packages, including 'thin' API clients, are not acceptable
- [x] Be stored in a repository that can be cloned without registration
- [x] Be stored in a repository that is browsable online without registration
- [x] Have an issue tracker that is readable without registration
- [x] Permit individuals to create issues/file tickets against your repository
| @rebeccabilbro do you want to start putting together the bibliography and I'll start with the outline and the draft, then we can switch? | 2018-07-23T16:18:29 |
|
DistrictDataLabs/yellowbrick | 530 | DistrictDataLabs__yellowbrick-530 | [
"529"
] | 4bb942ed025a4228c82c9bb46761fc6861e54230 | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -125,11 +125,11 @@ class KElbowVisualizer(ClusteringScoreVisualizer):
The elbow method runs k-means clustering on the dataset for a range of
values for k (say from 1-10) and then for each value of k computes an
- average score for all clusters. By default, the ``distortion_score`` is
+ average score for all clusters. By default, the ``distortion`` score is
computed, the sum of square distances from each point to its assigned
- center. Other metrics can also be used such as the ``silhouette_score``,
+ center. Other metrics can also be used such as the ``silhouette`` score,
the mean silhouette coefficient for all samples or the
- ``calinski_harabaz_score``, which computes the ratio of dispersion between
+ ``calinski_harabaz`` score, which computes the ratio of dispersion between
and within clusters.
When these overall metrics for each model are plotted, it is possible to
@@ -186,16 +186,21 @@ class KElbowVisualizer(ClusteringScoreVisualizer):
If you get a visualizer that doesn't have an elbow or inflection point,
then this method may not be working. The elbow method does not work well
- if the data is not very clustered; in this case you might see a smooth
- curve and the value of k is unclear. Other scoring methods such as BIC or
- SSE also can be used to explore if clustering is a correct choice.
+ if the data is not very clustered; in this case, you might see a smooth
+ curve and the value of k is unclear. Other scoring methods, such as BIC or
+ SSE, also can be used to explore if clustering is a correct choice.
For a discussion on the Elbow method, read more at
`Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.
+
+ .. seealso:: The scikit-learn documentation for the `silhouette_score
+ <https://bit.ly/2LYWjYb>`_ and `calinski_harabaz_score
+ <https://bit.ly/2LW3Zu9>`_. The default, `distortion_score`, is
+        implemented in `yellowbrick.cluster.elbow`.
.. todo:: add parallelization option for performance
- .. todo:: add different metrics for scores and silhoutte
- .. todo:: add timing information about how long its taking
+ .. todo:: add different metrics for scores and silhouette
+ .. todo:: add timing information about how long it's taking
"""
def __init__(self, model, ax=None, k=10,
| Incorrect parameter values in K-Elbow Visualizer docstring
At the top of the K-Elbow Visualizer's docstring, the possible values for the `metric` parameter are given as `distortion_score`, `silhouette_score`, and `calinski_harabaz_score`. However, using these values returns the following error:
`YellowbrickValueError: '{}' is not a defined metric use one of distortion, silhouette, or calinski_harabaz`
The correct names (`distortion`, `silhouette`, and `calinski_harabaz`) are listed correctly further down under `Parameters`.

| 2018-07-28T17:22:33 |
||
DistrictDataLabs/yellowbrick | 534 | DistrictDataLabs__yellowbrick-534 | [
"523"
] | ae14370f23e41bcf67ce3bd1788e3a6ed4ccb100 | diff --git a/yellowbrick/regressor/residuals.py b/yellowbrick/regressor/residuals.py
--- a/yellowbrick/regressor/residuals.py
+++ b/yellowbrick/regressor/residuals.py
@@ -334,9 +334,11 @@ class ResidualsPlot(RegressionScoreVisualizer):
The axes to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
- hist : bool, default: True
+ hist : {True, False, None, 'density', 'frequency'}, default: True
Draw a histogram showing the distribution of the residuals on the
right side of the figure. Requires Matplotlib >= 2.0.2.
+ If set to 'density', the probability density function will be plotted.
+ If set to True or 'frequency' then the frequency will be plotted.
train_color : color, default: 'b'
Residuals for training data are ploted with this color but also
@@ -385,9 +387,15 @@ def __init__(self, model, ax=None, hist=True, train_color='b',
'test_point': test_color,
'line': line_color,
}
-
+
self.hist = hist
- if self.hist:
+ if self.hist not in {True, 'density', 'frequency', None, False}:
+ raise YellowbrickValueError(
+ "'{}' is an invalid argument for hist, use None, True, " \
+ "False, 'density', or 'frequency'".format(hist)
+ )
+
+ if self.hist in {True, 'density', 'frequency'}:
self.hax # If hist is True, test the version availability
@memoized
@@ -503,9 +511,11 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
# Draw the residuals scatter plot
self.ax.scatter(y_pred, residuals, c=color, alpha=alpha, label=label)
- # Add residuals histogram histogram
- if self.hist:
+ # Add residuals histogram
+ if self.hist in {True, 'frequency'}:
self.hax.hist(residuals, bins=50, orientation="horizontal")
+ elif self.hist == 'density':
+ self.hax.hist(residuals, bins=50, orientation="horizontal", density=True)
# Ensure the current axes is always the main residuals axes
plt.sca(self.ax)
@@ -575,9 +585,11 @@ def residuals_plot(model,
The axes to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
- hist : bool, default: True
+ hist : {True, False, None, 'density', 'frequency'}, default: True
Draw a histogram showing the distribution of the residuals on the
right side of the figure. Requires Matplotlib >= 2.0.2.
+ If set to 'density', the probability density function will be plotted.
+ If set to True or 'frequency' then the frequency will be plotted.
test_size : float, int default: 0.25
If float, should be between 0.0 and 1.0 and represent the proportion
| Modify ResisdualsPlot histogram to optionally use a PDF
Because the training data set is usually much larger than the test data set, adding an option to normalize the histogram by using a PDF instead of pure frequency may make it easier to compare the residual distributions of the train and test sets.
Right now the histogram is drawn on the plot if the argument `hist=True`; we can change this argument to accept a string or a boolean, e.g. `hist=True` or `hist='density'` draws the PDF, `hist='frequency'` draws the normal histogram, and `hist=None` or `hist=False` does not draw the histogram at all. Alternatively, we can simply add another boolean argument, `density=True` to the visualizer if that is more understandable.
The histogram is drawn on [line 508](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/regressor/residuals.py#L507); to use the PDF, pass `density=True` to the `hist()` function.
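A minimal sketch of how the proposed option reads once implemented (matching the behavior added in the patch above; the estimator and train/test splits here are only placeholders):
```python
from sklearn.linear_model import Ridge
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from yellowbrick.regressor import ResidualsPlot

X, y = make_regression(n_samples=500, noise=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# hist='density' normalizes the histogram so train/test residuals are comparable
visualizer = ResidualsPlot(Ridge(), hist='density')
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
```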
| 2018-07-31T17:03:42 |
||
DistrictDataLabs/yellowbrick | 542 | DistrictDataLabs__yellowbrick-542 | [
"539"
] | 311e79efffe5a501aa3389bc1aa26d209edd2e64 | diff --git a/yellowbrick/text/tsne.py b/yellowbrick/text/tsne.py
--- a/yellowbrick/text/tsne.py
+++ b/yellowbrick/text/tsne.py
@@ -2,6 +2,7 @@
# Implements TSNE visualizations of documents in 2D space.
#
# Author: Benjamin Bengfort <[email protected]>
+# Author: Rebecca Bilbro <[email protected]>
# Created: Mon Feb 20 06:33:29 2017 -0500
#
# Copyright (C) 2016 Bengfort.com
@@ -166,11 +167,7 @@ class TSNEVisualizer(TextVisualizer):
NULL_CLASS = None
def __init__(self, ax=None, decompose='svd', decompose_by=50, labels=None,
- classes=None, colors=None, colormap=None, random_state=None, **kwargs):
- """
- Initialize the TSNE visualizer with visual hyperparameters.
- """
- super(TSNEVisualizer, self).__init__(ax=ax, **kwargs)
+ classes=None, colors=None, colormap=None, random_state=None, **kwargs):
# Visual Parameters
self.labels = labels
@@ -178,8 +175,16 @@ def __init__(self, ax=None, decompose='svd', decompose_by=50, labels=None,
self.colormap = colormap
self.random_state = random_state
- # TSNE Parameters
- self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
+ # Fetch TSNE kwargs from kwargs by popping only keys belonging to TSNE params
+ tsne_kwargs = {
+ key: kwargs.pop(key)
+ for key in TSNE().get_params()
+ if key in kwargs
+ }
+ self.transformer_ = self.make_transformer(decompose, decompose_by, tsne_kwargs)
+
+ # Call super at the end so that size and title are set correctly
+ super(TSNEVisualizer, self).__init__(ax=ax, **kwargs)
def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
"""
@@ -338,8 +343,6 @@ def finalize(self, **kwargs):
Finalize the drawing by adding a title and legend, and removing the
axes objects that do not convey information about TNSE.
"""
-
- # Add a title
self.set_title(
"TSNE Projection of {} Documents".format(self.n_instances_)
)
| diff --git a/tests/test_text/test_tsne.py b/tests/test_text/test_tsne.py
--- a/tests/test_text/test_tsne.py
+++ b/tests/test_text/test_tsne.py
@@ -26,6 +26,7 @@
from tests.dataset import DatasetMixin
from yellowbrick.exceptions import YellowbrickValueError
+from sklearn.manifold import TSNE
from sklearn.datasets import make_classification
from sklearn.feature_extraction.text import TfidfVectorizer
@@ -87,6 +88,44 @@ def test_integrated_tsne(self):
tol = 40 if six.PY3 else 55
self.assert_images_similar(tsne, tol=tol)
+ def test_sklearn_tsne_size(self):
+ """
+ Check to make sure sklearn's TSNE doesn't use the size param
+ """
+ # In TSNEVisualizer, the internal sklearn TSNE transform consumes
+ # some but not all kwargs passed in by user. Those not in get_params(),
+ # like size, are passed through to YB's finalize method. This test should
+ # notify us if TSNE's params change on the sklearn side.
+ with pytest.raises(TypeError):
+ TSNE(size=(100,100))
+
+ def test_sklearn_tsne_title(self):
+ """
+ Check to make sure sklearn's TSNE doesn't use the title param
+ """
+ # In TSNEVisualizer, the internal sklearn TSNE transform consumes
+ # some but not all kwargs passed in by user. Those not in get_params(),
+ # like title, are passed through to YB's finalize method. This test should
+ # notify us if TSNE's params change on the sklearn side.
+ with pytest.raises(TypeError):
+ TSNE(title="custom_title")
+
+ def test_custom_title_tsne(self):
+ """
+ Check tSNE can accept a custom title (string) from the user
+ """
+ tsne = TSNEVisualizer(title="custom_title")
+
+ assert tsne.title == "custom_title"
+
+ def test_custom_size_tsne(self):
+ """
+ Check tSNE can accept a custom size (tuple of pixels) from the user
+ """
+ tsne = TSNEVisualizer(size=(100, 50))
+
+ assert tsne._size == (100, 50)
+
def test_make_classification_tsne(self):
"""
Test tSNE integrated visualization on a sklearn classifier dataset
| TSNE size & title bug
**Describe the bug**
Looks like our `TSNEVisualizer` might have a bug that causes an error on instantiation if either the `size` or `title` parameters are used.
**To Reproduce**
```python
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = load_data('hobbies')
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus.data)
labels = corpus.target
tsne = TSNEVisualizer(size=(1080, 720))
```
or
```
tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
```
**Dataset**
This bug was triggered using the YB hobbies corpus.
**Expected behavior**
Users should be able to influence the size of the visualizer on instantiation using the `size` parameter and a tuple with `(width, height)` in pixels, and the title of the visualizer using the `title` parameter and a string.
**Traceback**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-120fbfcec07c> in <module>()
----> 1 tsne = TSNEVisualizer(size=(1080, 720))
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'size'
```
or for `title`:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-64-92c88e0bdd33> in <module>()
----> 1 tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'title'
```
**Desktop (please complete the following information):**
- macOS
- Python Version 3.6
- Yellowbrick Version 0.8
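The fix in the patch above separates the keyword arguments so that only valid TSNE parameters reach the internal transformer, while the rest (`size`, `title`, ...) are forwarded to the base visualizer. A standalone sketch of that pattern (the helper name here is just for illustration):
```python
from sklearn.manifold import TSNE

def split_tsne_kwargs(kwargs):
    # keep only the keys that sklearn's TSNE actually accepts
    tsne_kwargs = {
        key: kwargs.pop(key)
        for key in TSNE().get_params()
        if key in kwargs
    }
    return tsne_kwargs, kwargs

tsne_kwargs, viz_kwargs = split_tsne_kwargs({'perplexity': 15, 'size': (1080, 720)})
# tsne_kwargs -> {'perplexity': 15}, viz_kwargs -> {'size': (1080, 720)}
```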
| 2018-08-02T21:45:38 |
|
DistrictDataLabs/yellowbrick | 545 | DistrictDataLabs__yellowbrick-545 | [
"474",
"474"
] | 5ae6e320615ea32e6d1c201bdc9b551175c1e42b | diff --git a/docs/api/features/jointplot.py b/docs/api/features/jointplot.py
--- a/docs/api/features/jointplot.py
+++ b/docs/api/features/jointplot.py
@@ -5,12 +5,8 @@
def jointplot(X, y, outpath, **kwargs):
- # Create a new figure and axes
- fig = plt.figure()
- ax = fig.add_subplot(111)
-
# Create the visualizer
- visualizer = JointPlotVisualizer(ax=ax, **kwargs)
+ visualizer = JointPlotVisualizer(**kwargs)
visualizer.fit(X, y)
visualizer.transform(X)
| Joint Plot Viz has messed up (overlapping labels on) axes
**Describe the bug**
If you look at the x and y axes on http://www.scikit-yb.org/en/latest/api/features/scatter.html#joint-plot-visualization you will see that the labels are overlapping.
**To Reproduce**
Create a joint plot as shown in the docs
**Expected behavior**
Labels on axes should be clear.
**Desktop (please complete the following information):**
- OS: macOS
- Python Version 3.6.4
- Yellowbrick Version 0.8
| 2018-08-04T15:00:31 |
||
DistrictDataLabs/yellowbrick | 580 | DistrictDataLabs__yellowbrick-580 | [
"579"
] | e0804fda0b8f5a2cc69611d3476592802b9d55f1 | diff --git a/yellowbrick/base.py b/yellowbrick/base.py
--- a/yellowbrick/base.py
+++ b/yellowbrick/base.py
@@ -181,7 +181,7 @@ def finalize(self, **kwargs):
"""
return self.ax
- def poof(self, outpath=None, **kwargs):
+ def poof(self, outpath=None, clear_figure=False, **kwargs):
"""
Poof makes the magic happen and a visualizer appear! You can pass in
a path to save the figure to disk with various backends, or you can
@@ -191,7 +191,11 @@ def poof(self, outpath=None, **kwargs):
Parameters
----------
outpath: string, default: None
- path or None. Save figure to disk or if None show in window
+ path or None. Save figure to disk or if None show in window
+
+ clear_figure: boolean, default: False
+ When True, this flag clears the figure after saving to file or
+ showing on screen. This is useful when making consecutive plots.
kwargs: dict
generic keyword arguments.
@@ -212,6 +216,9 @@ def poof(self, outpath=None, **kwargs):
else:
plt.show()
+ if clear_figure:
+ plt.gcf().clear()
+
##////////////////////////////////////////////////////////////////////
## Helper Functions
##////////////////////////////////////////////////////////////////////
@@ -533,7 +540,7 @@ def score(self,X,y):
return self
- def poof(self, outpath=None, **kwargs):
+ def poof(self, outpath=None, clear_figure=False, **kwargs):
if self.axarr is None: return
@@ -552,3 +559,6 @@ def poof(self, outpath=None, **kwargs):
plt.savefig(outpath, **kwargs)
else:
plt.show()
+
+ if clear_figure:
+ plt.gcf().clear()
| Visualizer does not forget previous plot.
**Describe the bug**
When making consecutive plots in a Python script, the older plots are still visible.
**To Reproduce**
In `doc/api/target/`,
```python
python class_balance.py
```
**Expected behavior**


**Traceback**


The second plot contains the first plot.
This can be solved by adding `plt.gcf().clear()` after `viz.poof()`.
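The workaround amounts to clearing matplotlib's current figure between plots; a plain-matplotlib sketch of the behavior (the patch above adds a `clear_figure=True` flag to `poof()` that does this automatically):
```python
import matplotlib.pyplot as plt

plt.plot([0, 1], [0, 1], label="first")
plt.savefig("first.png")       # contains only the first line

plt.gcf().clear()              # workaround: clear the current figure

plt.plot([0, 1], [1, 0], label="second")
plt.savefig("second.png")      # without the clear, this would also show the first line
```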
**Desktop (please complete the following information):**
- OS: macOS & Ubuntu
- Python Version 3.6
- Yellowbrick Version dev
| 2018-08-22T04:35:33 |
||
DistrictDataLabs/yellowbrick | 617 | DistrictDataLabs__yellowbrick-617 | [
"616"
] | 545c6fba13533d16e594a4a2363b4a69dfbe5955 | diff --git a/yellowbrick/regressor/residuals.py b/yellowbrick/regressor/residuals.py
--- a/yellowbrick/regressor/residuals.py
+++ b/yellowbrick/regressor/residuals.py
@@ -543,10 +543,10 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
# Add residuals histogram
if self.hist in {True, 'frequency'}:
- self.hax.hist(residuals, bins=50, orientation="horizontal")
+ self.hax.hist(residuals, bins=50, orientation="horizontal", color=color)
elif self.hist == 'density':
self.hax.hist(
- residuals, bins=50, orientation="horizontal", density=True
+ residuals, bins=50, orientation="horizontal", density=True, color=color
)
# Ensure the current axes is always the main residuals axes
| diff --git a/tests/test_text/test_freqdist.py b/tests/test_text/test_freqdist.py
--- a/tests/test_text/test_freqdist.py
+++ b/tests/test_text/test_freqdist.py
@@ -50,4 +50,4 @@ def test_integrated_freqdist(self):
visualizer.fit(docs)
visualizer.poof()
- self.assert_images_similar(visualizer)
+ self.assert_images_similar(visualizer, tol=1)
| Extend color selection on ResidualsPlot() to histogram
When running the ResidualsPlot() visualizer, the `test_color` and `train_color` arguments do not propagate down to the histogram. It would be nice if the histogram on the right reflected the color choices as well.
```python
visualizer = ResidualsPlot(regr_ols, test_color='#615d6c', train_color='#6F8AB7')
visualizer.fit(X_train, y_train)
visualizer.score(X_test, y_test)
visualizer.poof()
```
Results in a plot like this:

| 2018-09-19T04:58:41 |
|
DistrictDataLabs/yellowbrick | 652 | DistrictDataLabs__yellowbrick-652 | [
"651"
] | b4bf5c88ed3679f7c4232adc723e3303f00f4e92 | diff --git a/docs/api/cluster/elbow.py b/docs/api/cluster/elbow.py
--- a/docs/api/cluster/elbow.py
+++ b/docs/api/cluster/elbow.py
@@ -1,27 +1,55 @@
-# Clustering Evaluation Imports
-from functools import partial
+#!/usr/bin/env python
-from sklearn.cluster import MiniBatchKMeans
-from sklearn.datasets import make_blobs as sk_make_blobs
+"""
+Generate images for the elbow plot documentation.
+"""
-from yellowbrick.cluster import KElbowVisualizer
+# Import necessary modules
+import matplotlib.pyplot as plt
-# Helpers for easy dataset creation
-N_SAMPLES = 1000
-N_FEATURES = 12
-SHUFFLE = True
+from sklearn.cluster import KMeans
+from sklearn.datasets import make_blobs
+from yellowbrick.cluster import KElbowVisualizer
-# Make blobs partial
-make_blobs = partial(sk_make_blobs, n_samples=N_SAMPLES, n_features=N_FEATURES, shuffle=SHUFFLE)
+def draw_elbow(path="images/elbow.png"):
+ # Generate synthetic dataset with 8 blobs
+ X, y = make_blobs(
+ centers=8, n_features=12, n_samples=1000,
+ shuffle=True, random_state=42
+ )
-if __name__ == '__main__':
- # Make 8 blobs dataset
- X, y = make_blobs(centers=8)
+ # Create a new figure to draw the clustering visualizer on
+ _, ax = plt.subplots()
# Instantiate the clustering model and visualizer
+ model = KMeans()
+ visualizer = KElbowVisualizer(model, ax=ax, k=(4,12))
+
+ visualizer.fit(X) # Fit the data to the visualizer
+ visualizer.poof(outpath=path) # Draw/show/poof the data
+
+
+def draw_calinski_harabaz(path="images/calinski_harabaz.png"):
+ # Generate synthetic dataset with 8 blobs
+ X, y = make_blobs(
+ centers=8, n_features=12, n_samples=1000,
+ shuffle=True, random_state=42
+ )
+
+ # Create a new figure to draw the clustering visualizer on
+ _, ax = plt.subplots()
+
# Instantiate the clustering model and visualizer
- visualizer = KElbowVisualizer(MiniBatchKMeans(), k=(4,12))
+ model = KMeans()
+ visualizer = KElbowVisualizer(
+ model, ax=ax, k=(4,12),
+ metric='calinski_harabaz', timings=False
+ )
+ visualizer.fit(X) # Fit the data to the visualizer
+ visualizer.poof(outpath=path) # Draw/show/poof the data
- visualizer.fit(X) # Fit the training data to the visualizer
- visualizer.poof(outpath="images/elbow.png") # Draw/show/poof the data
+
+if __name__ == '__main__':
+ draw_elbow()
+ draw_calinski_harabaz()
| Fix second image in KElbowVisualizer documentation
The second image that was added to the `KElbowVisualizer` documentation in PR #635 is not rendering correctly because the `elbow_is_behind.png` file is not generated by the `elbow.py` file, but was added separately.
- [x] Expand `KElbowVisualizer` documentation in `elbow.rst`
- [x] Add example showing how to hide timing and use `calinski_harabaz` scoring metric
- [x] Update `elbow.py` to generate new image for the documentation.
| 2018-11-07T19:38:37 |
||
DistrictDataLabs/yellowbrick | 676 | DistrictDataLabs__yellowbrick-676 | [
"73"
] | 9c76ea71f1dd56d5fe848c9ce878fb5c096c6f68 | diff --git a/yellowbrick/utils/target.py b/yellowbrick/utils/target.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/utils/target.py
@@ -0,0 +1,76 @@
+# yellowbrick.utils.target
+# Helper functions related to the target variable.
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Thu Dec 27 20:16:18 2018 -0500
+#
+# For license information, see LICENSE.txt
+#
+# ID: target.py [] [email protected] $
+
+"""
+Helper functions related to the target variable.
+"""
+
+##########################################################################
+## Imports and Module Variables
+##########################################################################
+
+import numpy as np
+
+from sklearn.utils.multiclass import type_of_target
+
+
+__all__ = [
+ 'CONTINUOUS', 'DISCRETE', 'UNKNOWN', 'MAX_DISCRETE_CLASSES', 'target_color_type'
+]
+
+CONTINUOUS = "continuous"
+DISCRETE = "discrete"
+UNKNOWN = "unknown"
+MAX_DISCRETE_CLASSES = 12
+
+
+##########################################################################
+## Helper Functions
+##########################################################################
+
+def target_color_type(y):
+ """
+ Determines the type of color space that will best represent the target
+ variable y, e.g. either a discrete (categorical) color space or a
+ continuous color space that requires a colormap. This function can handle
+ both 1D or column vectors as well as multi-output targets.
+
+ Parameters
+ ----------
+ y : array-like
+ Must be a valid array-like data structure that can be passed to a
+ scikit-learn supervised estimator.
+
+ Returns
+ -------
+ color_type : string
+ One of:
+
+ * 'discrete': `y` is either a binary target or a multiclass target
+ with <= 12 discrete classes.
+ * 'continuous': `y` is an array-like of floats that are not all
+ integers or a multiclass target with > 12 discrete classes.
+ * 'unknown': `y` is array-like but none of the above. For example
+ a multilabel-indicator or a 3D array. No exception is raised.
+ """
+ ttype = type_of_target(y)
+
+ if ttype.startswith(CONTINUOUS):
+ return CONTINUOUS
+
+ if ttype.startswith("binary"):
+ return DISCRETE
+
+ if ttype.startswith("multiclass"):
+ if len(np.unique(y)) > MAX_DISCRETE_CLASSES:
+ return CONTINUOUS
+ return DISCRETE
+
+ return UNKNOWN
| diff --git a/tests/test_utils/test_target.py b/tests/test_utils/test_target.py
new file mode 100644
--- /dev/null
+++ b/tests/test_utils/test_target.py
@@ -0,0 +1,65 @@
+# tests.test_utils.test_target
+# Tests for the target helper functions module.
+#
+# Author: Benjamin Bengfort <[email protected]>
+# Created: Thu Dec 27 20:43:31 2018 -0500
+#
+# For license information, see LICENSE.txt
+#
+# ID: test_target.py [] [email protected] $
+
+"""
+Tests for the target helper functions module.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import pytest
+import numpy as np
+
+from yellowbrick.utils.target import *
+from sklearn.datasets import make_regression, make_classification
+
+
+##########################################################################
+## Target Color Type Tests
+##########################################################################
+
[email protected]("value,expected", [
+ (['a', 'b', 'a', 'b', 'c'], DISCRETE),
+ ([1, 2, 1, 2, 3], DISCRETE),
+ ([.23, 0.94, 1.3, -1.02, 0.11], CONTINUOUS),
+ ([1, 2, 0.2, 0.5, 1], CONTINUOUS),
+ (np.array([0.2, 2.2, 1.2, -3.1]), CONTINUOUS),
+ (np.array([[1, 2], [0, 2], [2, 1]]), DISCRETE),
+ (np.array([[[1,2], [1,2]], [[1,2], [1,2]]]), UNKNOWN),
+], ids=['list str', 'list int', 'list float', 'mixed list', 'float array', 'multioutput', 'cube'])
+def test_target_color_type(value, expected):
+ """
+ Test the target_color_type helper function with a variety of data types
+ """
+ assert target_color_type(value) == expected
+
+
[email protected]("n_classes,expected", [
+ (2, DISCRETE),
+ (4, DISCRETE),
+ (MAX_DISCRETE_CLASSES, DISCRETE),
+ (MAX_DISCRETE_CLASSES+3, CONTINUOUS),
+], ids=["binary", "multiclass", "max discrete", "too many discrete"])
+def test_binary_target_color_type(n_classes, expected):
+ """
+ Test classification target color type
+ """
+ _, y = make_classification(n_classes=n_classes, n_informative=n_classes+2)
+ assert target_color_type(y) == expected
+
+
+def test_regression_target_color_type():
+ """
+ Test regression target color type
+ """
+ _, y = make_regression()
+ assert target_color_type(y) == CONTINUOUS
| Detect continuous/discrete target vals for color util
Right now the ParallelCoordinates and RadViz visualizers do not work for regressions since they only support class-based visualization.
Give these classes the ability to handle a continuous target space by using a sequential colormap and coloring the instance lines according to the target value.
| Now that we have the `color_sequence` function, we can create a binning method to assign continuous valued points to the number of classes specified by the colormap.
Finding a solution to this has been tricky because the assumption of discrete classes has been embedded into the logic of both RadViz and ParallelCoordinates. There are two primary features that continuous data requires:
1. A colormap that will bin values into a consistent space.
2. Instead of a label legend, a colorbar that shows the ranges of the instances.
To handle this for now, I'm going to add a principal argument to both visualizers called "target_type" whose values should be "auto", "discrete", "continuous", or None. Discrete or continuous will select the correct methodology. For auto or None values, I will create two utility functions:
- `is_continuous(y)`
- `is_discrete(y)`
These will accept a vector and use rules to decide if the data is continuous or discrete. Not exactly sure how to implement this.
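A minimal sketch of how such helpers could be written on top of scikit-learn's `type_of_target` (the patch above ultimately takes a similar approach with a single `target_color_type` helper):
```python
from sklearn.utils.multiclass import type_of_target

def is_continuous(y):
    # 'continuous' or 'continuous-multioutput'
    return type_of_target(y).startswith("continuous")

def is_discrete(y):
    # binary and multiclass targets map to a discrete color space
    return type_of_target(y).startswith(("binary", "multiclass"))

is_continuous([0.2, 1.4, 3.1])   # True
is_discrete(["a", "b", "a"])     # True
```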
This should solve the issue for now, but we should revisit it when we develop RadViz and ParallelCoordinates with more features and functionality.
Ok, this ticket has been extremely difficult to work on -- the RadViz and Parallel Coordinates classes are entirely set up for class-based visualization only (thanks to their being pulled from matplotlib).
This needs to be looked at a bit more closely, and I stashed my changes (which touched nearly every module in yellowbrick). We'll have to push this back to another sprint.
Some notes here:
http://bbengfort.github.io/snippets/2017/01/17/resolving-matplotlib-colors.html
Sklearn apparently does have a type-of-target checker that returns continuous or classification as one of its outputs:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/utils/multiclass.py#L175
A very lightweight version of this has been implemented in #399
See #334 for the package where this should be placed. | 2018-12-28T02:06:36 |
DistrictDataLabs/yellowbrick | 678 | DistrictDataLabs__yellowbrick-678 | [
"531"
] | d3a09b1700e52ef8d26f5b73145313f2c45c0a68 | diff --git a/docs/api/features/importances.py b/docs/api/features/importances.py
--- a/docs/api/features/importances.py
+++ b/docs/api/features/importances.py
@@ -4,7 +4,8 @@
from yellowbrick.features.importances import FeatureImportances
from sklearn.ensemble import GradientBoostingClassifier
-from sklearn.linear_model import Lasso
+from sklearn.linear_model import Lasso, LogisticRegression
+from sklearn.datasets import load_iris
DATA_DIR = os.path.relpath(os.path.join(
@@ -46,6 +47,18 @@ def coef_(outpath):
viz.poof(outpath=outpath)
+def stacked_coef_(outpath):
+ data = load_iris()
+
+ fig = plt.figure()
+ ax = fig.add_subplot()
+
+ viz = FeatureImportances(LogisticRegression(), ax=ax, stack=True, relative=False)
+ viz.fit(data.data, data.target)
+ viz.poof(outpath=outpath)
+
+
if __name__ == '__main__':
- feature_importances_("images/feature_importances.png")
- coef_("images/feature_importances_coef.png")
+ # feature_importances_("images/feature_importances.png")
+ # coef_("images/feature_importances_coef.png")
+ stacked_coef_("images/feature_importances_stacked.png")
\ No newline at end of file
diff --git a/yellowbrick/features/importances.py b/yellowbrick/features/importances.py
--- a/yellowbrick/features/importances.py
+++ b/yellowbrick/features/importances.py
@@ -21,13 +21,14 @@
## Imports
##########################################################################
+import warnings
import numpy as np
import matplotlib.pyplot as plt
-from yellowbrick.utils import is_dataframe, is_classifier
from yellowbrick.base import ModelVisualizer
-from yellowbrick.exceptions import YellowbrickTypeError, NotFitted
-from ..style.palettes import color_palette
+from yellowbrick.style.palettes import color_palette
+from yellowbrick.utils import is_dataframe, is_classifier
+from yellowbrick.exceptions import YellowbrickTypeError, NotFitted, YellowbrickWarning
##########################################################################
@@ -150,8 +151,11 @@ def fit(self, X, y=None, **kwargs):
# therefore we flatten by taking the average by
# column to get shape (n_features,) (see LogisticRegression)
if not self.stack and self.feature_importances_.ndim > 1:
- self.feature_importances_ = np.mean(self.feature_importances_,
- axis=0)
+ self.feature_importances_ = np.mean(self.feature_importances_, axis=0)
+ warnings.warn((
+ "detected multi-dimensional feature importances but stack=False, "
+ "using mean to aggregate them."
+ ), YellowbrickWarning)
# Apply absolute value filter before normalization
if self.absolute:
| diff --git a/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png
Binary files a/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png and b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png differ
diff --git a/tests/test_features/test_importances.py b/tests/test_features/test_importances.py
--- a/tests/test_features/test_importances.py
+++ b/tests/test_features/test_importances.py
@@ -230,7 +230,7 @@ def test_fit_absolute(self):
def test_multi_coefs(self):
"""
- Test fit with multidimensional coefficients
+ Test fit with multidimensional coefficients and stack warning
"""
coefs = np.array([
[0.4, 0.2, -0.08, 0.07, 0.16, 0.23, -0.38, 0.1, -0.05],
@@ -242,10 +242,12 @@ def test_multi_coefs(self):
model = MockEstimator()
model.make_importance_param(value=coefs)
- visualizer = FeatureImportances(model)
- visualizer.fit(
- np.random.rand(100, len(np.mean(coefs, axis=0))), np.random.rand(100)
- )
+ visualizer = FeatureImportances(model, stack=False)
+
+ with pytest.warns(YellowbrickWarning):
+ visualizer.fit(
+ np.random.rand(100, len(np.mean(coefs, axis=0))), np.random.rand(100)
+ )
npt.assert_equal(visualizer.feature_importances_.ndim, 1)
@@ -256,12 +258,11 @@ def test_multi_coefs_stacked(self):
"""
Test stack plot with multidimensional coefficients
"""
- X_iris, y_iris = load_iris(True)
- X_iris_pd = pd.DataFrame(X_iris, columns=['f1', 'f2', 'f3', 'f4'])
+ X, y = load_iris(True)
- viz = FeatureImportances(LogisticRegression(), stack=True)
- viz.fit(X_iris_pd, y_iris)
- viz.poof()
+ viz = FeatureImportances(LogisticRegression(random_state=222), stack=True)
+ viz.fit(X, y)
+ viz.finalize()
npt.assert_equal(viz.feature_importances_.shape, (3, 4))
self.assert_images_similar(viz)
| FeatureImportances stack documentation
Thank you so much @zjpoh for enhancing the `FeatureImportances` visualizer with a stacked bar chart! There are a couple of minor tasks to complete that I will take care of in the next couple of days:
1. Add documentation about the stack argument and the implications
2. Add `skipif` pandas not available, or non-pandas testing
3. When stack gets set to False, issue warning about what happened
4. Non-classifier multi-output regressors test
See #492 and #510
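For the documentation task above, the stacked usage mirrors the docs script in the attached patch; a short sketch:
```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from yellowbrick.features.importances import FeatureImportances

data = load_iris()

# stack=True draws the per-class coefficients as a stacked bar chart
# instead of averaging them into a single bar per feature
viz = FeatureImportances(LogisticRegression(), stack=True, relative=False)
viz.fit(data.data, data.target)
viz.poof()
```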
| @bbengfort thanks for filling in the gaps of my PR. I will take a look at what you add and hopefully next time I can have a more complete PR. =)
@zjpoh your PR was absolutely complete in every respect! We try to do things in iterative, small, incremental steps so that we don't overload the PR process or exhaust our contributors. As YB has gotten bigger, there is just a lot of stuff from documentation to checks, CI, dependencies etc. that we have to manage. It's kind of drudge work and just has to be done in order to make the package accessible and intuitive. Please keep submitting PRs and if you'd like to take on some of these other maintenance tasks, we'd be happy to have those types of contributions also!
I'm sorry if I made you feel that your PR wasn't sufficient - it was great and a welcome addition to Yellowbrick!
RE @DistrictDataLabs/team-oz-maintainers | 2018-12-29T03:15:52 |
DistrictDataLabs/yellowbrick | 679 | DistrictDataLabs__yellowbrick-679 | [
"664"
] | 708274289d66d9265f7ded03e3445bc2bd70f46e | diff --git a/yellowbrick/features/rfecv.py b/yellowbrick/features/rfecv.py
--- a/yellowbrick/features/rfecv.py
+++ b/yellowbrick/features/rfecv.py
@@ -112,6 +112,10 @@ class RFECV(ModelVisualizer):
functions such as ``predict()`` and ``score()`` are passed through to
this estimator (it rewraps the original model).
+ n_feature_subsets_ : array of shape [n_subsets_of_features]
+ The number of features removed on each iteration of RFE, computed by the
+ number of features in the dataset and the step parameter.
+
Notes
-----
This model wraps ``sklearn.feature_selection.RFE`` and not
@@ -172,7 +176,7 @@ def fit(self, X, y=None):
# Create the RFE model
rfe = RFE(self.estimator, step=step)
- n_feature_subsets = np.arange(1, n_features+1)
+ self.n_feature_subsets_ = np.arange(1, n_features+step, step)
# Create the cross validation params
# TODO: handle random state
@@ -183,7 +187,7 @@ def fit(self, X, y=None):
# Perform cross-validation for each feature subset
scores = []
- for n_features_to_select in n_feature_subsets:
+ for n_features_to_select in self.n_feature_subsets_:
rfe.set_params(n_features_to_select=n_features_to_select)
scores.append(cross_val_score(rfe, X, y, **cv_params))
@@ -192,7 +196,7 @@ def fit(self, X, y=None):
# Find the best RFE model
bestidx = self.cv_scores_.mean(axis=1).argmax()
- self.n_features_ = n_feature_subsets[bestidx]
+ self.n_features_ = self.n_feature_subsets_[bestidx]
# Fit the final RFE model for the number of features
self.rfe_estimator_ = rfe
@@ -214,7 +218,7 @@ def draw(self, **kwargs):
Renders the rfecv curve.
"""
# Compute the curves
- x = np.arange(1, len(self.cv_scores_)+1)
+ x = self.n_feature_subsets_
means = self.cv_scores_.mean(axis=1)
sigmas = self.cv_scores_.std(axis=1)
| diff --git a/tests/baseline_images/test_features/test_rfecv/test_quick_method.png b/tests/baseline_images/test_features/test_rfecv/test_quick_method.png
Binary files a/tests/baseline_images/test_features/test_rfecv/test_quick_method.png and b/tests/baseline_images/test_features/test_rfecv/test_quick_method.png differ
diff --git a/tests/baseline_images/test_features/test_rfecv/test_rfecv_step.png b/tests/baseline_images/test_features/test_rfecv/test_rfecv_step.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_rfecv/test_rfecv_step.png differ
diff --git a/tests/test_features/test_rfecv.py b/tests/test_features/test_rfecv.py
--- a/tests/test_features/test_rfecv.py
+++ b/tests/test_features/test_rfecv.py
@@ -16,6 +16,8 @@
import sys
import pytest
+import numpy as np
+import numpy.testing as npt
from tests.base import VisualTestCase
from tests.dataset import DatasetMixin, Dataset
@@ -77,7 +79,7 @@ def test_fit(self, mock_draw):
X, y = self.dataset
params = (
"n_features_", "support_", "ranking_",
- "cv_scores_", "rfe_estimator_",
+ "cv_scores_", "rfe_estimator_", "n_feature_subsets_"
)
rf = RandomForestClassifier()
@@ -124,7 +126,7 @@ def test_quick_method(self):
model = LogisticRegression()
X, y = self.dataset
- ax = rfecv(model, X, y, step=3, cv=cv, scoring='f1_weighted')
+ ax = rfecv(model, X, y, step=2, cv=cv, scoring='f1_weighted')
self.assert_images_similar(ax=ax)
@@ -156,11 +158,28 @@ def test_pandas_integration(self):
self.assert_images_similar(oz)
- def test_valid_step(self):
+ def test_invalid_step(self):
"""
Test step hyperparam validation
"""
# TODO: parametrize when unittest is removed
- with pytest.raises(YellowbrickValueError):
- oz = RFECV(SVC(kernel="lnear"), step=-1)
+ with pytest.raises(YellowbrickValueError, match="step must be >0"):
+ oz = RFECV(SVC(kernel="linear"), step=-1)
oz.fit(self.dataset.X, self.dataset.y)
+
+ def test_rfecv_step(self):
+ """
+ Test RFECV step=5 with LogisticRegression
+ """
+ X, y = make_classification(
+ n_samples=200, n_features=30, n_informative=18, n_redundant=6,
+ n_repeated=0, n_classes=8, n_clusters_per_class=1, random_state=0
+ )
+
+ oz = RFECV(LogisticRegression(random_state=32), step=5).fit(X, y)
+ assert hasattr(oz, "n_feature_subsets_")
+ npt.assert_array_equal(oz.n_feature_subsets_, np.arange(1,35,5))
+
+ oz.finalize()
+ tol = 1.75 if sys.platform == "win32" else 0.25
+ self.assert_images_similar(oz, tol=tol)
\ No newline at end of file
| step: removing 5 features in each iteration
Using RFECV, the graph shows the best features with an increment of one feature per iteration.
I want each iteration to work with an increment of 5 features.
Example: the score should be plotted against an x-axis of [5, 10, 15, 20, 25].
I used step=5, but it's not working.
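For reference, the intended usage looks roughly like the test added in the test patch above (synthetic data, sketch only):
```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from yellowbrick.features.rfecv import RFECV

X, y = make_classification(
    n_samples=200, n_features=30, n_informative=18, n_redundant=6,
    n_classes=8, n_clusters_per_class=1, random_state=0
)

# step=5 removes five features per iteration; with the fix applied the
# x-axis follows the actual feature subsets (1, 6, 11, ...) that were scored
viz = RFECV(LogisticRegression(), step=5)
viz.fit(X, y)
viz.poof()
```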
| @mubashirtuf I'm sorry it's taken so long for us to respond; things got busy at the end of the semester and during the holiday. I can take a look at this, but it would be helpful if you could provide the code you used to generate the error - on my quick first attempt I couldn't reproduce the issue. I see a place where the problem might be occurring, but I'm not totally sure. Would you also upload the figure that was produced? | 2018-12-29T21:48:51 |
DistrictDataLabs/yellowbrick | 688 | DistrictDataLabs__yellowbrick-688 | [
"601"
] | 5b4c05a6685d8582e6dd963a19ad99f77c07f871 | diff --git a/yellowbrick/classifier/prcurve.py b/yellowbrick/classifier/prcurve.py
--- a/yellowbrick/classifier/prcurve.py
+++ b/yellowbrick/classifier/prcurve.py
@@ -180,7 +180,8 @@ def fit(self, X, y=None):
self.estimator = OneVsRestClassifier(self.estimator)
# Use label_binarize to create multi-label ouptut for OneVsRestClassifier
- Y = label_binarize(y, classes=np.unique(y))
+ self._target_labels = np.unique(y)
+ Y = label_binarize(y, classes=self._target_labels)
elif ttype.startswith(BINARY):
self.target_type_ = BINARY
@@ -226,7 +227,7 @@ def score(self, X, y=None):
self.score_ = average_precision_score(y, y_scores)
else:
# Use label_binarize to create multi-label ouptut for OneVsRestClassifier
- Y = label_binarize(y, classes=self.classes_)
+ Y = label_binarize(y, classes=self._target_labels)
self.precision_, self.recall_, self.score_ = {}, {}, {}
| diff --git a/tests/baseline_images/test_classifier/test_prcurve/test_multiclass_probability_with_class_labels.png b/tests/baseline_images/test_classifier/test_prcurve/test_multiclass_probability_with_class_labels.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_prcurve/test_multiclass_probability_with_class_labels.png differ
diff --git a/tests/test_classifier/test_prcurve.py b/tests/test_classifier/test_prcurve.py
--- a/tests/test_classifier/test_prcurve.py
+++ b/tests/test_classifier/test_prcurve.py
@@ -14,6 +14,7 @@
## Imports
##########################################################################
+import matplotlib
import sys
import pytest
@@ -29,7 +30,6 @@
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import RidgeClassifier
-from sklearn.model_selection import train_test_split as tts
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
@@ -239,6 +239,72 @@ def test_multiclass_probability(self):
tol = 6.6 if sys.platform == 'win32' else 1.0 # fails with RMSE 6.583 on AppVeyor
self.assert_images_similar(oz, tol=tol)
+ def test_multiclass_probability_with_class_labels(self):
+ """Visual similarity of multiclass classifier with class labels."""
+ # Create and fit the visualizer
+ oz = PrecisionRecallCurve(
+ GaussianNB(), per_class=True, micro=False, fill_area=False,
+ iso_f1_curves=True, ap_score=False,
+ classes=["a", "b", "c", "d", "e", "f"]
+ )
+ assert_not_fitted(oz)
+
+ # Fit returns self
+ assert oz.fit(self.multiclass.X.train, self.multiclass.y.train) is oz
+
+ # Score the visualizer
+ s = oz.score(self.multiclass.X.test, self.multiclass.y.test)
+ assert_fitted(oz)
+
+ # Score should be between 0 and 1
+ assert 0.0 <= s <= 1.0
+
+ # Check the multiclass classification properties
+ assert oz.target_type_ == MULTICLASS
+ assert isinstance(oz.score_, dict)
+ assert oz.score_[MICRO] == s
+ assert isinstance(oz.precision_, dict)
+ assert isinstance(oz.recall_, dict)
+ assert len(oz.score_) == len(oz.classes_) + 1
+ assert len(oz.precision_) == len(oz.classes_) + 1
+ assert len(oz.recall_) == len(oz.classes_) + 1
+
+ # Finalize image
+ oz.finalize()
+
+ # Compare texts of the images.
+ # Labels
+ assert oz.ax.get_xlabel() == "Recall"
+ oz.ax.set_xlabel("")
+ assert oz.ax.get_ylabel() == "Precision"
+ oz.ax.set_ylabel("")
+ assert oz.ax.get_title() == "Precision-Recall Curve for GaussianNB"
+ oz.ax.set_title("")
+ # Legend
+ oz_legend_txt = [x.get_text() for x in oz.ax.legend().get_texts()]
+ expected_legend_txt = [
+ "PR for class a (area=0.42)",
+ "PR for class b (area=0.36)",
+ "PR for class c (area=0.44)",
+ "PR for class d (area=0.52)",
+ "PR for class e (area=0.37)",
+ "PR for class f (area=0.49)",
+ ]
+ assert oz_legend_txt == expected_legend_txt
+ handles, _ = oz.ax.get_legend_handles_labels()
+ empty_labels = [""] * len(handles)
+ oz.ax.legend(handles=handles, labels=empty_labels, loc='lower left',
+ frameon=True)
+ # Text in iso_f1_curves.
+ # Will not check for these as they appears okay in other test images.
+ for child in oz.ax.get_children():
+ if isinstance(child, matplotlib.text.Annotation):
+ oz.ax.texts.remove(child)
+
+ # Compare the images
+ tol = 6.6 if sys.platform == 'win32' else 1.0 # fails with RMSE 6.583 on AppVeyor
+ self.assert_images_similar(oz, tol=tol)
+
@pytest.mark.filterwarnings("ignore:From version 0.21")
def test_quick_method(self):
"""
@@ -314,4 +380,4 @@ def test_missing_test_data_in_quick_method(self):
precision_recall_curve(RandomForestClassifier(random_state=27),x_train, y_train,y_test=y_test,random_state=7)
with pytest.raises(YellowbrickValueError, match="both X_test and y_test are required if one is specified"):
- precision_recall_curve(RandomForestClassifier(random_state=27),x_train, y_train,X_test=x_test,random_state=7)
\ No newline at end of file
+ precision_recall_curve(RandomForestClassifier(random_state=27),x_train, y_train,X_test=x_test,random_state=7)
| Precision-Recall Curve does not show class labels
**Describe the bug**
When labels are passed into the `PrecisionRecallCurve`, the visualization is not drawn correctly.
**To Reproduce**
```python
from yellowbrick.classifier import PrecisionRecallCurve
from yellowbrick.dataset import load_game
from sklearn.preprocessing import LabelEncoder
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split as tts
# Load the dataset and label encode the target
data = load_game()
X = data.iloc[:, data.columns != 'outcome']
y = LabelEncoder().fit_transform(data['outcome'])
# Create train test splits
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, shuffle=True)
oz = PrecisionRecallCurve(
MultinomialNB(), per_class=True, iso_f1_curves=True, fill_area=False,
micro=False, classes=["loss", "draw", "win"]
)
oz.fit(X_train, y_train)
oz.score(X_test, y_test)
oz.poof()
```
**Dataset**
I used the game multi-label dataset from the UCI Machine Learning repository, as wrangled by the yellowbrick datasets module.
**Expected behavior**
When the target, `y` is label encoded (e.g. via the `LabelEncoder`) to classes 0, 1, and 2 and class names are passed in via the `classes` param of the visualizer, the legend should display the class names. However, in this case the visualization does not appear at all:

**Traceback**
No exception is raised, however, the following warning is issued:
```
/Users/benjamin/.pyenv/versions/3.6.2/envs/yellowbrick3/lib/python3.6/site-packages/numpy/lib/arraysetops.py:522: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
mask |= (ar1 == a)
/Users/benjamin/.pyenv/versions/3.6.2/envs/yellowbrick3/lib/python3.6/site-packages/sklearn/metrics/ranking.py:444: RuntimeWarning: invalid value encountered in true_divide
recall = tps / tps[-1]
objc[46460]: Class FIFinderSyncExtensionHost is implemented in both /System/Library/PrivateFrameworks/FinderKit.framework/Versions/A/FinderKit (0x7fffb473cc90) and /System/Library/PrivateFrameworks/FileProvider.framework/OverrideBundles/FinderSyncCollaborationFileProviderOverride.bundle/Contents/MacOS/FinderSyncCollaborationFileProviderOverride (0x12814ecd8). One of the two will be used. Which one is undefined.
```
**Desktop (please complete the following information):**
- OS: macOS High Sierra 10.13.6
- Python Version 3.6.2
- Yellowbrick Version 0.9 develop
**Additional context**
This was a known bug during development but was left in as a rare case.
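The root cause (discussed in the follow-up comments below) is that the scores are binarized against the display class names rather than the labels actually present in `y`; a small sketch of the difference:
```python
import numpy as np
from sklearn.preprocessing import label_binarize

y = np.array([0, 1, 2, 1, 0])

# Binarizing against the human-readable names never matches the encoded
# targets: the indicator comes back all zeros (with the FutureWarning seen
# above) instead of raising an exception.
label_binarize(y, classes=["loss", "draw", "win"])

# Binarizing against the labels actually seen in y behaves as expected.
label_binarize(y, classes=np.unique(y))
```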
| @bbengfort I would like to take a stab at this. Btw, what is the meaning of the hacktoberfest label?
@zjpoh that would be great - though you'll have to wait until #599 is merged before you can do a fix! Do you want to work with @rebeccabilbro in doing a code review of that PR?
The [hacktoberfest label](https://github.com/DistrictDataLabs/yellowbrick/issues?q=is%3Aopen+is%3Aissue+label%3Ahacktoberfest) is meant to designate issues that would be good for folks who are participating in [Hacktoberfest](https://hacktoberfest.digitalocean.com/), particularly [PyData Manchester folks](https://twitter.com/_JAStark/status/1037431605890048000).
@bbengfort Sure. I’ll take a look at that tonight.
@zjpoh still interested in fixing this bug?
@bbengfort Sorry again for being so unresponsive for such a long time.
I think the problem is that the `y` in `score` has values `[0, 1, 2]`, but the classes are `classes=['loss' 'draw' 'win']`. This breaks the `label_binarize`. I'm working on a fix and will open a PR when I have it ready. | 2019-01-18T16:29:37 |
DistrictDataLabs/yellowbrick | 705 | DistrictDataLabs__yellowbrick-705 | [
"699"
] | a5941a6c47fbe5264f3622bc15276ba618bbe1d0 | diff --git a/yellowbrick/classifier/confusion_matrix.py b/yellowbrick/classifier/confusion_matrix.py
--- a/yellowbrick/classifier/confusion_matrix.py
+++ b/yellowbrick/classifier/confusion_matrix.py
@@ -283,7 +283,7 @@ def finalize(self, **kwargs):
def confusion_matrix(model, X, y, ax=None, classes=None, sample_weight=None,
percent=False, label_encoder=None, cmap='YlOrRd',
- fontsize=None, **kwargs):
+ fontsize=None, random_state=None, **kwargs):
"""Quick method:
Creates a heatmap visualization of the sklearn.metrics.confusion_matrix().
@@ -346,6 +346,9 @@ def confusion_matrix(model, X, y, ax=None, classes=None, sample_weight=None,
Specify the fontsize of the text in the grid and labels to make the
matrix a bit easier to read. Uses rcParams font size by default.
+ random_state : int, RandomState instance or None, optional (default=None)
+ Passes a random state parameter to the train_test_split function.
+
Returns
-------
ax : matplotlib axes
@@ -358,7 +361,10 @@ def confusion_matrix(model, X, y, ax=None, classes=None, sample_weight=None,
)
# Create the train and test splits
- X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
+ # TODO: determine how to use quick methods that require train and test data.
+ X_train, X_test, y_train, y_test = train_test_split(
+ X, y, test_size=0.2, random_state=random_state
+ )
# Fit and transform the visualizer (calls draw)
visualizer.fit(X_train, y_train, **kwargs)
| diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_class_filter_eg_zoom_in.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_confusion_matrix.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_extra_classes.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_fontsize.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_no_classes_provided.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_one_class.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png
Binary files a/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_percent_mode.png differ
diff --git a/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png b/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_confusion_matrix/test_quick_method.png differ
diff --git a/tests/baseline_images/test_contrib/test_scatter/test_scatter_image.png b/tests/baseline_images/test_contrib/test_scatter/test_scatter_image.png
Binary files a/tests/baseline_images/test_contrib/test_scatter/test_scatter_image.png and b/tests/baseline_images/test_contrib/test_scatter/test_scatter_image.png differ
diff --git a/tests/baseline_images/test_features/test_importances/test_integration_coef.png b/tests/baseline_images/test_features/test_importances/test_integration_coef.png
Binary files a/tests/baseline_images/test_features/test_importances/test_integration_coef.png and b/tests/baseline_images/test_features/test_importances/test_integration_coef.png differ
diff --git a/tests/baseline_images/test_features/test_importances/test_integration_feature_importances.png b/tests/baseline_images/test_features/test_importances/test_integration_feature_importances.png
Binary files a/tests/baseline_images/test_features/test_importances/test_integration_feature_importances.png and b/tests/baseline_images/test_features/test_importances/test_integration_feature_importances.png differ
diff --git a/tests/baseline_images/test_features/test_importances/test_integration_quick_method.png b/tests/baseline_images/test_features/test_importances/test_integration_quick_method.png
Binary files a/tests/baseline_images/test_features/test_importances/test_integration_quick_method.png and b/tests/baseline_images/test_features/test_importances/test_integration_quick_method.png differ
diff --git a/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png
Binary files a/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png and b/tests/baseline_images/test_features/test_importances/test_multi_coefs_stacked.png differ
diff --git a/tests/test_classifier/test_confusion_matrix.py b/tests/test_classifier/test_confusion_matrix.py
--- a/tests/test_classifier/test_confusion_matrix.py
+++ b/tests/test_classifier/test_confusion_matrix.py
@@ -23,9 +23,10 @@
import matplotlib.pyplot as plt
from yellowbrick.classifier.confusion_matrix import *
+from yellowbrick.datasets import load_occupancy
from tests.base import VisualTestCase
-from tests.dataset import DatasetMixin, Dataset, Split
+from tests.dataset import Dataset, Split
from sklearn.svm import SVC
from sklearn.datasets import load_digits
@@ -69,9 +70,9 @@ def digits(request):
##########################################################################
@pytest.mark.usefixtures("digits")
-class ConfusionMatrixTests(VisualTestCase, DatasetMixin):
+class ConfusionMatrixTests(VisualTestCase):
"""
- ConfusionMatrix visualizer tests
+ Test ConfusionMatrix visualizer
"""
@pytest.mark.xfail(
@@ -277,15 +278,7 @@ def test_pandas_integration(self):
_, ax = plt.subplots()
# Load the occupancy dataset from fixtures
- data = self.load_data('occupancy')
- target = 'occupancy'
- features = [
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]
-
- # Create instances and target
- X = pd.DataFrame(data[features])
- y = pd.Series(data[target].astype(int))
+ X, y = load_occupancy(return_dataset=True).to_pandas()
# Create train/test splits
splits = tts(X, y, test_size=0.2, random_state=8873)
@@ -306,7 +299,9 @@ def test_pandas_integration(self):
[ 1, 985]
]))
- @pytest.mark.skip(reason="requires random state in quick method")
+ @pytest.mark.xfail(
+ sys.platform == 'win32', reason="images not close on windows"
+ )
def test_quick_method(self):
"""
Test the quick method with a random dataset
@@ -317,9 +312,11 @@ def test_quick_method(self):
)
_, ax = plt.subplots()
- confusion_matrix(DecisionTreeClassifier(), X, y, ax=ax)
+ model = DecisionTreeClassifier(random_state=25)
+ confusion_matrix(model, X, y, ax=ax, random_state=23)
- self.assert_images_similar(ax=ax)
+ tol = 0.1 if six.PY3 else 10
+ self.assert_images_similar(ax=ax, tol=tol)
def test_isclassifier(self):
"""
@@ -338,17 +335,10 @@ def test_score_returns_score(self):
"""
Test that ConfusionMatrix score() returns a score between 0 and 1
"""
- data = self.load_data("occupancy")
- X = data[[
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]]
-
- y = data['occupancy']
-
- # Convert X to an ndarray
- X = X.copy().view((float, len(X.dtype.names)))
-
+ # Load the occupancy dataset from fixtures
+ X, y = load_occupancy(return_dataset=True).to_numpy()
X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=42)
+
# Create and fit the visualizer
visualizer = ConfusionMatrix(LogisticRegression())
visualizer.fit(X_train, y_train)
diff --git a/tests/test_classifier/test_threshold.py b/tests/test_classifier/test_threshold.py
--- a/tests/test_classifier/test_threshold.py
+++ b/tests/test_classifier/test_threshold.py
@@ -295,7 +295,7 @@ def test_splitter_random_state(self):
assert viz._check_cv(splits, random_state=23).random_state == 23
splits = StratifiedShuffleSplit(n_splits=1, random_state=181)
- assert viz._check_cv(splits, random_state=None).random_state is 181
+ assert viz._check_cv(splits, random_state=None).random_state == 181
assert viz._check_cv(splits, random_state=72).random_state == 72
def test_bad_exclude(self):
diff --git a/tests/test_contrib/test_classifier/test_boundaries.py b/tests/test_contrib/test_classifier/test_boundaries.py
--- a/tests/test_contrib/test_classifier/test_boundaries.py
+++ b/tests/test_contrib/test_classifier/test_boundaries.py
@@ -73,10 +73,10 @@
@pytest.mark.filterwarnings('ignore')
class DecisionBoundariesVisualizerTest(VisualTestCase):
"""
- DecisionBoundariesVisualizer
+ Test DecisionBoundariesVisualizer
"""
- def test_decision_bounardies(self):
+ def test_decision_boundaries(self):
"""
Assert no errors during kNN DecisionBoundariesVisualizer integration
"""
@@ -85,12 +85,18 @@ def test_decision_bounardies(self):
viz.fit_draw_poof(X_two_cols, y=y)
def test_deprecated(self):
+ """
+ Assert the DecisionViz class issues deprecation warning
+ """
with pytest.deprecated_call():
model = neighbors.KNeighborsClassifier(3)
DecisionViz(model)
@pytest.mark.skipif(six.PY2, reason="deprecation warnings filtered in PY2")
def test_deprecated_message(self):
+ """
+ Test the deprecation warning message
+ """
with pytest.warns(DeprecationWarning, match='Will be moved to yellowbrick.contrib in v0.8'):
model = neighbors.KNeighborsClassifier(3)
DecisionViz(model)
@@ -320,9 +326,7 @@ def test_fit_draw_poof(self):
viz.draw.assert_called_once_with(X_two_cols, y)
viz.poof.assert_called_once_with()
- @pytest.mark.xfail(
- sys.platform == 'win32', reason="images not close on windows"
- )
+ @pytest.mark.xfail(reason="numpy structured arrays have changed since v1.14")
def test_integrated_plot_numpy_named_arrays(self):
"""
Test integration of visualizer with numpy named arrays
diff --git a/tests/test_contrib/test_scatter.py b/tests/test_contrib/test_scatter.py
--- a/tests/test_contrib/test_scatter.py
+++ b/tests/test_contrib/test_scatter.py
@@ -20,13 +20,13 @@
import numpy as np
import matplotlib as mptl
+from yellowbrick.style import palettes
from yellowbrick.contrib.scatter import *
+from yellowbrick.datasets import load_occupancy
from yellowbrick.exceptions import YellowbrickValueError
-from yellowbrick.style import palettes
+from yellowbrick.exceptions import ImageComparisonFailure
-from tests.dataset import DatasetMixin
from tests.base import VisualTestCase
-from yellowbrick.exceptions import ImageComparisonFailure
try:
import pandas as pd
@@ -44,7 +44,10 @@
##########################################################################
@pytest.mark.filterwarnings('ignore')
-class ScatterVizTests(VisualTestCase, DatasetMixin):
+class ScatterVizTests(VisualTestCase):
+ """
+ Test ScatterViz
+ """
# yapf: disable
X = np.array([
@@ -58,19 +61,11 @@ class ScatterVizTests(VisualTestCase, DatasetMixin):
# yapf: enable
y = np.array([1, 0, 1, 0, 1, 0])
- def setUp(self):
- self.occupancy = self.load_data('occupancy')
- super(ScatterVizTests, self).setUp()
-
- def tearDown(self):
- self.occupancy = None
- super(ScatterVizTests, self).tearDown()
-
def test_init_alias(self):
"""
Test alias for ScatterViz
"""
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterVisualizer(features=features, markers=['*'])
self.assertIsNotNone(visualizer.markers)
@@ -79,7 +74,7 @@ def test_scatter(self):
Assert no errors occur during scatter visualizer integration
"""
X_two_cols = self.X[:, :2]
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features)
visualizer.fit_transform(X_two_cols, self.y)
@@ -89,7 +84,7 @@ def test_color_builds(self):
"""
colors = palettes.PALETTES['pastel']
X_two_cols = self.X[:, :2]
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features, color=colors)
visualizer.fit_transform(X_two_cols, self.y)
@@ -106,7 +101,7 @@ def test_scatter_only_two_features_allowed_init(self):
"""
Assert that only two features are allowed for scatter visualizer init
"""
- features = ["temperature", "relative_humidity", "light"]
+ features = ["temperature", "relative humidity", "light"]
with self.assertRaises(YellowbrickValueError):
ScatterViz(features=features)
@@ -115,7 +110,7 @@ def test_scatter_xy_and_features_raise_error(self):
"""
Assert that x,y and features will raise scatterviz error
"""
- features = ["temperature", "relative_humidity", "light"]
+ features = ["temperature", "relative humidity", "light"]
with self.assertRaises(YellowbrickValueError):
ScatterViz(features=features, x='one', y='two')
@@ -142,16 +137,10 @@ def test_integrated_scatter(self):
Test scatter on the real, occupancy data set
"""
# Load the data from the fixture
- X = self.occupancy[[
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]]
-
- # Convert to numpy arrays
- X = X.copy().view((float, len(X.dtype.names)))
- y = self.occupancy['occupancy'].astype(int)
+ X, y = load_occupancy(return_dataset=True).to_numpy()
# Test the visualizer
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features)
visualizer.fit_transform_poof(X[:, :2], y)
@@ -180,16 +169,10 @@ def test_scatter_quick_method(self):
Test scatter quick method on the real, occupancy data set
"""
# Load the data from the fixture
- X = self.occupancy[[
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]]
-
- # Convert to numpy arrays
- X = X.copy().view((float, len(X.dtype.names)))
- y = self.occupancy['occupancy'].astype(int)
+ X, y = load_occupancy(return_dataset=True).to_numpy()
# Test the visualizer
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
ax = scatterviz(X[:, :2], y=y, ax=None, features=features)
# test that is returns a matplotlib obj with axes
@@ -201,22 +184,15 @@ def test_integrated_scatter_with_pandas(self):
Test scatterviz on the real, occupancy data set with pandas
"""
# Load the data from the fixture
- X = self.occupancy[[
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]]
- y = self.occupancy['occupancy'].astype(int)
-
- # Convert X to a pandas dataframe
- X = pd.DataFrame(X)
- X.columns = [
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]
+ # Load the data from the fixture
+ X, y = load_occupancy(return_dataset=True).to_pandas()
# Test the visualizer
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features)
visualizer.fit_transform_poof(X, y)
+ @pytest.mark.xfail(reason="numpy structured arrays have changed since v1.14")
def test_integrated_scatter_numpy_named_arrays(self):
"""
Test scatterviz on numpy named arrays
@@ -252,21 +228,20 @@ def test_scatter_image(self):
# self.setUp_ImageTest()
X_two_cols = self.X[:, :2]
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features)
visualizer.fit(X_two_cols, self.y)
visualizer.draw(X_two_cols, self.y)
self.assert_images_similar(visualizer)
-
def test_scatter_image_fail(self):
"""
Assert bad image similarity on scatterviz errors
"""
X_two_cols = self.X[:, :2]
- features = ["temperature", "relative_humidity"]
+ features = ["temperature", "relative humidity"]
visualizer = ScatterViz(features=features)
visualizer.fit(X_two_cols, self.y)
visualizer.draw(X_two_cols, self.y)
diff --git a/tests/test_features/test_importances.py b/tests/test_features/test_importances.py
--- a/tests/test_features/test_importances.py
+++ b/tests/test_features/test_importances.py
@@ -27,6 +27,7 @@
from yellowbrick.exceptions import NotFitted
from yellowbrick.features.importances import *
+from yellowbrick.datasets import load_occupancy, load_concrete
from sklearn.datasets import load_iris
from sklearn.base import BaseEstimator, ClassifierMixin
@@ -35,7 +36,6 @@
from sklearn.ensemble import GradientBoostingClassifier
from tests.base import VisualTestCase
-from tests.dataset import DatasetMixin
try:
from unittest import mock
@@ -52,9 +52,9 @@
## Feature Importances Tests
##########################################################################
-class TestFeatureImportancesVisualizer(VisualTestCase, DatasetMixin):
+class TestFeatureImportancesVisualizer(VisualTestCase):
"""
- FeatureImportances visualizer
+ Test FeatureImportances visualizer
"""
@pytest.mark.xfail(
@@ -65,15 +65,8 @@ def test_integration_feature_importances(self):
Integration test of visualizer with feature importances param
"""
- occupancy = self.load_data('occupancy')
- features = [
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]
-
- # Extract X and y as numpy arrays
- X = occupancy[features].copy()
- X = X.view((float, len(X.dtype.names)))
- y = occupancy['occupancy'].astype(int)
+ # Load the test dataset
+ X, y = load_occupancy(return_dataset=True).to_numpy()
fig = plt.figure()
ax = fig.add_subplot()
@@ -81,7 +74,7 @@ def test_integration_feature_importances(self):
clf = GradientBoostingClassifier(random_state=42)
viz = FeatureImportances(clf, ax=ax)
viz.fit(X, y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz)
@@ -93,22 +86,19 @@ def test_integration_coef(self):
Integration test of visualizer with coef param
"""
- concrete = self.load_data('concrete')
- feats = ['cement','slag','ash','water','splast','coarse','fine','age']
-
- # Create X and y datasets as numpy arrays
- X = concrete[feats].copy()
- X = X.view((float, len(X.dtype.names)))
- y = concrete['strength']
+ # Load the test dataset
+ dataset = load_concrete(return_dataset=True)
+ X, y = dataset.to_numpy()
+ features = dataset.meta["features"]
fig = plt.figure()
ax = fig.add_subplot()
reg = Lasso(random_state=42)
- feats = list(map(lambda s: s.title(), feats))
- viz = FeatureImportances(reg, ax=ax, labels=feats, relative=False)
+ features = list(map(lambda s: s.title(), features))
+ viz = FeatureImportances(reg, ax=ax, labels=features, relative=False)
viz.fit(X, y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz)
@@ -120,15 +110,8 @@ def test_integration_quick_method(self):
Integration test of quick method
"""
- occupancy = self.load_data('occupancy')
- features = [
- "temperature", "relative_humidity", "light", "C02", "humidity"
- ]
-
- # Create X and y datasets as numpy arrays
- X = occupancy[features].copy()
- X = X.view((float, len(X.dtype.names)))
- y = occupancy['occupancy'].astype(int)
+ # Load the test dataset
+ X, y = load_occupancy(return_dataset=True).to_numpy()
fig = plt.figure()
ax = fig.add_subplot()
| CI Tests are Failing
It looks like recent updates to a couple of our dependencies have broken our CI tests on both Travis and AppVeyor. This needs to be resolved so that we can review the PRs currently in the backlog.
Travis:
The Travis failure appears to be a pyflakes-related issue:
```
___________ pyflakes-check(ignoring ImportStarUsage ImportStarUsed) ____________
/home/travis/build/DistrictDataLabs/yellowbrick/tests/test_classifier/test_threshold.py:298: IsLiteral
use ==/!= to compare str, bytes, and int literals
```
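This is pyflakes' `IsLiteral` check: `is` tests object identity rather than value equality, so comparing a variable against an int literal is unreliable. A minimal illustrative sketch (not taken from the issue; the patch above makes exactly this change in `tests/test_classifier/test_threshold.py`):
```
# `is` compares object identity; whether `x is 181` holds depends on
# CPython's small-integer caching, which is why pyflakes flags it.
x = 181
assert x == 181     # value comparison -- the portable, correct check
# assert x is 181   # identity comparison against a literal -- flagged by pyflakes
```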
AppVeyor:
The AppVeyor tests appear to be failing due to a change in NumPy: `X.copy().view((float, len(X.dtype.names)))` now raises the error below (a possible workaround is sketched after the list of affected files):
```
# Convert X to an ndarray
> X = X.copy().view((float, len(X.dtype.names)))
E ValueError: Changing the dtype to a subarray type is only supported if the total itemsize is unchanged
```
This occurs in the following test files:
- tests\test_classifier\test_confusion_matrix.py:349
- tests\test_contrib\test_scatter.py:150
- tests\test_contrib\test_scatter.py:237
- tests\test_contrib\test_scatter.py:188
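For reference, a version-proof way to flatten a numeric structured array without `.view()` (a sketch only; the patch above instead switches these tests to the `load_occupancy(...).to_numpy()` loader):
```
# Build the 2D float matrix field-by-field instead of reinterpreting the
# buffer with .view(), which newer NumPy rejects when the itemsize changes.
# Assumes every field of the structured array is numeric.
import numpy as np

def structured_to_float_matrix(X):
    return np.column_stack([X[name].astype(float) for name in X.dtype.names])

if __name__ == "__main__":
    data = np.array([(21.5, 0.43), (22.1, 0.47)],
                    dtype=[("temperature", float), ("relative_humidity", float)])
    print(structured_to_float_matrix(data))  # 2x2 float ndarray
```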
| 2019-01-30T00:12:40 |
|
DistrictDataLabs/yellowbrick | 721 | DistrictDataLabs__yellowbrick-721 | [
"605"
] | 17f08e12db20f03f7bdbca6e61a7872bba232bdf | diff --git a/yellowbrick/features/__init__.py b/yellowbrick/features/__init__.py
--- a/yellowbrick/features/__init__.py
+++ b/yellowbrick/features/__init__.py
@@ -21,7 +21,7 @@
from .pcoords import ParallelCoordinates, parallel_coordinates
from .radviz import RadialVisualizer, RadViz, radviz
from .rankd import Rank1D, rank1d, Rank2D, rank2d
-from .jointplot import JointPlotVisualizer
+from .jointplot import JointPlot, JointPlotVisualizer, joint_plot
from .pca import PCADecomposition, pca_decomposition
from .importances import FeatureImportances, feature_importances
from .rfecv import RFECV, rfecv
diff --git a/yellowbrick/features/jointplot.py b/yellowbrick/features/jointplot.py
--- a/yellowbrick/features/jointplot.py
+++ b/yellowbrick/features/jointplot.py
@@ -1,11 +1,11 @@
# yellowbrick.features.jointplot
-# Implementations of joint plots for univariate and bivariate analysis.
+# Implementation of joint plots for univariate and bivariate analysis.
#
# Author: Prema Damodaran Roman
# Created: Mon Apr 10 21:00:54 2017 -0400
#
-# Copyright (C) 2017 District Data Labs
+# Copyright (C) 2017 The scikit-yb developers.
# For license information, see LICENSE.txt
#
# ID: jointplot.py [7f47800] [email protected] $
@@ -14,308 +14,410 @@
## Imports
##########################################################################
-import warnings
import numpy as np
-import matplotlib as mpl
import matplotlib.pyplot as plt
-from yellowbrick.features.base import FeatureVisualizer
-from yellowbrick.exceptions import YellowbrickValueError
-from yellowbrick.bestfit import draw_best_fit
-from yellowbrick.utils import is_dataframe
+try:
+ # Only available in Matplotlib >= 2.0.2
+ from mpl_toolkits.axes_grid1 import make_axes_locatable
+except ImportError:
+ make_axes_locatable = None
+from .base import FeatureVisualizer
+# from ..bestfit import draw_best_fit # TODO: return in #728
+from ..utils.types import is_dataframe
+from ..exceptions import YellowbrickValueError
+from scipy.stats import pearsonr, spearmanr, kendalltau
+
+# Default Colors
+# TODO: should we reuse these colors?
FACECOLOR = "#FAFAFA"
HISTCOLOR = "#6897bb"
+# Objects for export
+__all__ = [
+ "JointPlot", "JointPlotVisualizer", "joint_plot",
+]
+
+
##########################################################################
## Joint Plot Visualizer
##########################################################################
-class JointPlotVisualizer(FeatureVisualizer):
+class JointPlot(FeatureVisualizer):
"""
- JointPlotVisualizer allows for a simultaneous visualization of the relationship
- between two variables and the distrbution of each individual variable. The
- relationship is plotted along the joint axis and univariate distributions
- are plotted on top of the x axis and to the right of the y axis.
+ Joint plots are useful for machine learning on multi-dimensional data, allowing for
+ the visualization of complex interactions between different data dimensions, their
+ varying distributions, and even their relationships to the target variable for
+ prediction.
+
+ The Yellowbrick ``JointPlot`` can be used both for pairwise feature analysis and
+ feature-to-target plots. For pairwise feature analysis, the ``columns`` argument can
+ be used to specify the index of the two desired columns in ``X``. If ``y`` is also
+ specified, the plot can be colored with a heatmap or by class. For feature-to-target
+ plots, the user can provide either ``X`` and ``y` as 1D vectors, or a ``columns``
+ argument with an index to a single feature in ``X`` to be plotted against ``y``.
+
+ Histograms can be included by setting the ``hist`` argument to ``True`` for a
+ frequency distribution, or to ``"density"`` for a probability density function. Note
+ that histograms requires matplotlib 2.0.2 or greater.
Parameters
----------
- ax: matplotlib Axes, default: None
- This is inherited from FeatureVisualizer but is defined within
- JointPlotVisualizer since there are three axes objects.
-
- feature: string, default: None
- The name of the X variable
- If a DataFrame is passed to fit and feature is None, feature
- is selected as the column of the DataFrame. There must be only
- one column in the DataFrame.
-
- target: string, default: None
- The name of the Y variable
- If target is None and a y value is passed to fit then the target
- is selected from the target vector.
-
- joint_plot: one of {'scatter', 'hex'}, default: 'scatter'
- The type of plot to render in the joint axis
- Currently, the choices are scatter and hex.
- Use scatter for small datasets and hex for large datasets
-
- joint_args: dict, default: None
- Keyword arguments used for customizing the joint plot:
-
- ============= ==================================================================
- Property Description
- ------------- ------------------------------------------------------------------
- alpha transparency
- facecolor background color of the joint axis
- aspect aspect ratio
- fit used if scatter is selected for joint_plot to draw a
- best fit line - values can be True or False.
- Uses ``Yellowbrick.bestfit``
- estimator used if scatter is selected for joint_plot to determine
- the type of best fit line to use. Refer to
- Yellowbrick.bestfit for types of estimators that can be used.
- x_bins used if hex is selected to set the number of bins for the x value
- y_bins used if hex is selected to set the number of bins for the y value
- cmap string or matplotlib cmap to colorize lines
- Use either color to colorize the lines on a per class basis or
- colormap to color them on a continuous scale.
- ============= ==================================================================
-
- xy_plot: one of {'hist'}, default: 'hist'
- The type of plot to render along the x and y axes
- Currently, the choice is hist
-
- xy_args: dict, default: None
- Keyword arguments used for customizing the x and y plots:
-
- ============== =====================================================
- Property Description
- -------------- -----------------------------------------------------
- alpha transparency
- facecolor_x background color of the x axis
- facecolor_y background color of the y axis
- bins used to set up the number of bins for the hist plot
- histcolor_x used to set the color for the histogram on the x axis
- histcolor_y used to set the color for the histogram on the y axis
- ============== =====================================================
-
- ratio: float, default: 5
- Ratio of joint axis size to the x and y axes height
-
- space: float, default: 0.2
- Space between the joint axis and the x and y axes
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes will be
+ used (or generated if required). This is considered the base axes where the
+ the primary joint plot is drawn. It will be shifted and two additional axes
+ added above (xhax) and to the right (yhax) if hist=True.
+
+ columns : int, str, [int, int], [str, str], default: None
+ Determines what data is plotted in the joint plot and acts as a selection index
+ into the data passed to ``fit(X, y)``. This data therefore must be indexable by
+ the column type (e.g. an int for a numpy array or a string for a DataFrame).
+
+ If None is specified then either both X and y must be 1D vectors and they will
+ be plotted against each other or X must be a 2D array with only 2 columns. If a
+ single index is specified then the data is indexed as ``X[columns]`` and plotted
+ jointly with the target variable, y. If two indices are specified then they are
+ both selected from X, additionally in this case, if y is specified, then it is
+ used to plot the color of points.
+
+ Note that these names are also used as the x and y axes labels if they aren't
+ specified in the joint_kws argument.
+
+ correlation : str, default: 'pearson'
+ The algorithm used to compute the relationship between the variables in the
+ joint plot, one of: 'pearson', 'covariance', 'spearman', 'kendalltau'.
+
+ kind : str in {'scatter', 'hex'}, default: 'scatter'
+ The type of plot to render in the joint axes. Note that when kind='hex' the
+ target cannot be plotted by color.
+
+ hist : {True, False, None, 'density', 'frequency'}, default: True
+ Draw histograms showing the distribution of the variables plotted jointly.
+ If set to 'density', the probability density function will be plotted.
+ If set to True or 'frequency' then the frequency will be plotted.
+ Requires Matplotlib >= 2.0.2.
+
+ alpha : float, default: 0.65
+ Specify a transparency where 1 is completely opaque and 0 is completely
+ transparent. This property makes densely clustered points more visible.
+
+ {joint, hist}_kws : dict, default: None
+ Additional keyword arguments for the plot components.
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
+ Attributes
+ ----------
+ corr_ : float
+ The correlation or relationship of the data in the joint plot, specified by the
+ correlation algorithm.
+
Examples
--------
- >>> visualizer = JointPlotVisualizer()
- >>> visualizer.fit(X,y)
- >>> visualizer.poof()
-
- Notes
- -----
- These parameters can be influenced later on in the visualization
- process, but can and should be set as early as possible.
+ >>> viz = JointPlot(columns=["temp", "humidity"])
+ >>> viz.fit(X, y)
+ >>> viz.poof()
"""
- def __init__(self, ax=None, feature=None, target=None,
- joint_plot='scatter', joint_args=None,
- xy_plot='hist', xy_args=None,
- ratio=5, space=.2, **kwargs):
-
- # Check matplotlib version - needs to be version 2.0.0 or greater.
- mpl_vers_maj = int(mpl.__version__.split(".")[0])
- if mpl_vers_maj < 2:
- warnings.warn((
- "{} requires matplotlib major version 2 or greater. "
- "Please upgrade."
- ).format(self.__class__.__name__))
-
- super(JointPlotVisualizer, self).__init__(ax, **kwargs)
-
- self.feature = feature
- self.target = target
- self.joint_plot = joint_plot
- self.joint_args = joint_args
- self.xy_plot = xy_plot
- self.xy_args = xy_args
- self.ratio = ratio
- self.space = space
-
- def fit(self, X, y, **kwargs):
- """
- Sets up the X and y variables for the jointplot
- and checks to ensure that X and y are of the
- correct data type
+ # TODO: should we couple more closely with Rank2D?
+ correlation_methods = {
+ 'pearson': lambda x, y: pearsonr(x,y)[0],
+ 'spearman': lambda x, y: spearmanr(x,y)[0],
+ 'covariance': lambda x, y: np.cov(x,y)[0,1],
+ 'kendalltau': lambda x, y: kendalltau(x,y)[0],
+ }
+
+ def __init__(self, ax=None, columns=None, correlation='pearson', kind="scatter",
+ hist=True, alpha=0.65, joint_kws=None, hist_kws=None, **kwargs):
+ # Initialize the visualizer
+ super(JointPlot, self).__init__(ax=ax, **kwargs)
+ self._xhax, self._yhax = None, None
+
+ # Set and validate the columns
+ self.columns = columns
+ if self.columns is not None and not isinstance(self.columns, (int, str)):
+ self.columns = tuple(self.columns)
+ if len(self.columns) > 2:
+ raise YellowbrickValueError((
+ "'{}' contains too many indices or is invalid for joint plot - "
+ "specify either a single int or str index or two columns as a list"
+ ).format(columns))
+
+ # Seet and validate the correlation
+ self.correlation = correlation
+ if self.correlation not in self.correlation_methods:
+ raise YellowbrickValueError(
+ "'{}' is an invalid correlation method, use one of {}".format(
+ self.correlation, ", ".join(self.correlation_methods.keys())
+ ))
- Fit calls draw
+ # Set and validate the kind of plot
+ self.kind = kind
+ if self.kind not in {'scatter', 'hex', 'hexbin'}:
+ raise YellowbrickValueError((
+ "'{}' is invalid joint plot kind, use 'scatter' or 'hex'"
+ ).format(self.kind))
- Parameters
- ----------
+ # Set and validate the histogram if specified
+ self.hist = hist
+ if self.hist not in {True, 'density', 'frequency', None, False}:
+ raise YellowbrickValueError((
+ "'{}' is an invalid argument for hist, use None, True, "
+ "False, 'density', or 'frequency'"
+ ).format(hist))
- X : ndarray or DataFrame of shape n x 1
- A matrix of n instances with 1 feature
+ # If hist is True, test the version availability
+ if self.hist in {True, 'density', 'frequency'}:
+ self._layout()
- y : ndarray or Series of length n
- An array or series of the target value
+ # Set the additional visual parameters
+ self.alpha = alpha
+ self.joint_kws = joint_kws
+ self.hist_kws = hist_kws
- kwargs: dict
- keyword arguments passed to Scikit-Learn API.
+ @property
+ def xhax(self):
+ """
+ The axes of the histogram for the top of the JointPlot (X-axis)
"""
+ if self._xhax is None:
+ raise AttributeError(
+ "this visualizer does not have a histogram for the X axis"
+ )
+ return self._xhax
- #throw an error if X has more than 1 column
- if is_dataframe(X):
- nrows, ncols = X.shape
+ @property
+ def yhax(self):
+ """
+ The axes of the histogram for the right of the JointPlot (Y-axis)
+ """
+ if self._yhax is None:
+ raise AttributeError(
+ "this visualizer does not have a histogram for the Y axis"
+ )
+ return self._yhax
- if ncols > 1:
- raise YellowbrickValueError((
- "X needs to be an ndarray or DataFrame with one feature, "
- "please select one feature from the DataFrame"
- ))
+ def _layout(self):
+ """
+ Creates the grid layout for the joint plot, adding new axes for the histograms
+ if necessary and modifying the aspect ratio. Does not modify the axes or the
+ layout if self.hist is False or None.
+ """
+ # Ensure the axes are created if not hist, then return.
+ if not self.hist:
+ self.ax
+ return
- #throw an error is y is None
- if y is None:
+ # Ensure matplotlib version compatibility
+ if make_axes_locatable is None:
raise YellowbrickValueError((
- "Joint plots are useful for classification and regression "
- "problems, which require a target variable"
+ "joint plot histograms requires matplotlib 2.0.2 or greater "
+ "please upgrade matplotlib or set hist=False on the visualizer"
))
+ # Create the new axes for the histograms
+ divider = make_axes_locatable(self.ax)
+ self._xhax = divider.append_axes("top", size=1, pad=0.1, sharex=self.ax)
+ self._yhax = divider.append_axes("right", size=1, pad=0.1, sharey=self.ax)
- # Handle the feature name if it is None.
- if self.feature is None:
+ # Modify the display of the axes
+ self._xhax.xaxis.tick_top()
+ self._yhax.yaxis.tick_right()
+ self._xhax.grid(False, axis='y')
+ self._yhax.grid(False, axis='x')
- # If X is a data frame, get the columns off it.
- if is_dataframe(X):
- self.feature = X.columns
+ def fit(self, X, y=None):
+ """
+ Fits the JointPlot, creating a correlative visualization between the columns
+ specified during initialization and the data and target passed into fit:
- else:
- self.feature = ['x']
+ - If self.columns is None then X and y must both be specified as 1D arrays
+ or X must be a 2D array with only 2 columns.
+ - If self.columns is a single int or str, that column is selected to be
+ visualized against the target y.
+ - If self.columns is two ints or strs, those columns are visualized against
+ each other. If y is specified then it is used to color the points.
- # Handle the target name if it is None.
- if self.target is None:
- self.target = ['y']
+ This is the main entry point into the joint plot visualization.
- self.draw(X, y, **kwargs)
- return self
+ Parameters
+ ----------
+ X : array-like
+ An array-like object of either 1 or 2 dimensions depending on self.columns.
+ Usually this is a 2D table with shape (n, m)
- def draw(self, X, y, **kwargs):
- """
- Sets up the layout for the joint plot draw calls ``draw_joint`` and
- ``draw_xy`` to render the visualizations.
+ y : array-like, default: None
+ An vector or 1D array that has the same length as X. May be used to either
+ directly plot data or to color data points.
"""
- fig = plt.gcf()
- gs = plt.GridSpec(self.ratio + 1, self.ratio + 1)
+ # Convert python objects to numpy arrays
+ if isinstance(X, (list, tuple)):
+ X = np.array(X)
- #Set up the 3 axes objects
- joint_ax = fig.add_subplot(gs[1:, :-1])
- x_ax = fig.add_subplot(gs[0, :-1], sharex=joint_ax)
- y_ax = fig.add_subplot(gs[1:, -1], sharey=joint_ax)
+ if y is not None and isinstance(y, (list, tuple)):
+ y = np.array(y)
- fig.tight_layout()
- fig.subplots_adjust(hspace=self.space, wspace=self.space)
+ # Case where no columns are specified
+ if self.columns is None:
+ if (y is None and (X.ndim != 2 or X.shape[1] != 2)) or (y is not None and (X.ndim != 1 or y.ndim != 1)):
+ raise YellowbrickValueError((
+ "when self.columns is None specify either X and y as 1D arrays "
+ "or X as a matrix with 2 columns"
+ ))
- self.fig = fig
- self.joint_ax = joint_ax
- self.x_ax = x_ax
- self.y_ax = y_ax
- self._ax = joint_ax
+ if y is None:
+ # Draw the first column as x and the second column as y
+ self.draw(X[:,0], X[:,1], xlabel="0", ylabel="1")
+ return self
+
+ # Draw x against y
+ self.draw(X, y, xlabel="x", ylabel="y")
+ return self
+
+ # Case where a single string or int index is specified
+ if isinstance(self.columns, (int,str)):
+ if y is None:
+ raise YellowbrickValueError(
+ "when self.columns is a single index, y must be specified"
+ )
+
+ # fetch the index from X -- raising index error if not possible
+ x = self._index_into(self.columns, X)
+ self.draw(x, y, xlabel=str(self.columns), ylabel="target")
+ return self
+
+ # Case where there is a double index for both columns
+ columns = tuple(self.columns)
+ if len(columns) != 2:
+ raise YellowbrickValueError((
+ "'{}' contains too many indices or is invalid for joint plot"
+ ).format(columns))
- self.draw_joint(X, y, **kwargs)
- self.draw_xy(X, y, **kwargs)
+ # TODO: color the points based on the target if it is given
+ x = self._index_into(columns[0], X)
+ y = self._index_into(columns[1], X)
+ self.draw(x, y, xlabel=str(columns[0]), ylabel=str(columns[1]))
+ return self
- def draw_joint(self, X, y, **kwargs):
- """
- Draws the visualization for the joint axis.
+ def draw(self, x, y, xlabel=None, ylabel=None):
"""
+ Draw the joint plot for the data in x and y.
- if self.joint_args is None:
- self.joint_args = {}
+ Parameters
+ ----------
+ x, y : 1D array-like
+ The data to plot for the x axis and the y axis
- self.joint_args.setdefault("alpha", 0.4)
- facecolor = self.joint_args.pop("facecolor", FACECOLOR)
- self.joint_ax.set_facecolor(facecolor)
+ xlabel, ylabel : str
+ The labels for the x and y axes.
+ """
+ # This is a little weird to be here, but it is the best place to perform
+ # this computation given how fit calls draw and returns.
+ self.corr_ = self.correlation_methods[self.correlation](x, y)
+
+ # First draw the joint plot
+ joint_kws = self.joint_kws or {}
+ joint_kws.setdefault("alpha", self.alpha)
+ joint_kws.setdefault("label", "{}={:0.3f}".format(self.correlation, self.corr_))
+
+ # Draw scatter joint plot
+ if self.kind == "scatter":
+ self.ax.scatter(x, y, **joint_kws)
+
+ # TODO: Draw best fit line (or should this be kind='reg'?)
+
+ # Draw hexbin joint plot
+ elif self.kind in ('hex', 'hexbin'):
+ joint_kws.setdefault("mincnt", 1)
+ joint_kws.setdefault("gridsize", 50)
+ self.ax.hexbin(x, y, **joint_kws)
+
+ # Something bad happened
+ else:
+ raise ValueError("unknown joint plot kind '{}'".format(self.kind))
+
+ # Set the X and Y axis labels on the plot
+ self.ax.set_xlabel(xlabel)
+ self.ax.set_ylabel(ylabel)
+
+ # If we're not going to draw histograms, stop here
+ if not self.hist:
+ # Ensure the current axes is always the main joint plot axes
+ plt.sca(self.ax)
+ return self.ax
+
+ # Draw the histograms
+ hist_kws = self.hist_kws or {}
+ hist_kws.setdefault("bins", 50)
+ if self.hist == "density":
+ hist_kws.setdefault("density", True)
+
+ self.xhax.hist(x, **hist_kws)
+ self.yhax.hist(y, orientation="horizontal", **hist_kws)
+
+ # Ensure the current axes is always the main joint plot axes
+ plt.sca(self.ax)
+ return self.ax
- if self.joint_plot == "scatter":
- aspect = self.joint_args.pop("aspect", "auto")
- self.joint_ax.set_aspect(aspect)
- self.joint_ax.scatter(X, y, **self.joint_args)
+ def finalize(self, **kwargs):
+ """
+ Finalize executes any remaining image modifications making it ready to show.
+ """
+ # Set the aspect ratio to make the visualization square
+ # TODO: still unable to make plot square using make_axes_locatable
+ # x0,x1 = self.ax.get_xlim()
+ # y0,y1 = self.ax.get_ylim()
+ # self.ax.set_aspect(abs(x1-x0)/abs(y1-y0))
- fit = self.joint_args.pop("fit", True)
- if fit:
- estimator = self.joint_args.pop("estimator", "linear")
- draw_best_fit(X, y, self.joint_ax, estimator)
+ # Add the title to the plot if the user has set one.
+ self.set_title("")
- elif self.joint_plot == "hex":
- x_bins = self.joint_args.pop("x_bins", 50)
- y_bins = self.joint_args.pop("y_bins", 50)
- colormap = self.joint_args.pop("cmap", 'Blues')
- gridsize = int(np.mean([x_bins, y_bins]))
+ # Set the legend with full opacity patches using manual legend.
+ # Or Add the colorbar if this is a continuous plot.
+ self.ax.legend(loc="best", frameon=True)
- xmin = X.min()
- xmax = X.max()
- ymin = y.min()
- ymax = y.max()
+ # Finalize the histograms
+ if self.hist:
+ plt.setp(self.xhax.get_xticklabels(), visible=False)
+ plt.setp(self.yhax.get_yticklabels(), visible=False)
+ plt.sca(self.ax)
- self.joint_ax.hexbin(X, y,
- gridsize=gridsize, cmap=colormap, mincnt=1, **self.joint_args
- )
- self.joint_ax.axis([xmin, xmax, ymin, ymax])
+ # Call tight layout to maximize readability
+ plt.tight_layout()
- def draw_xy(self, X, y, **kwargs):
+ def _index_into(self, idx, data):
"""
- Draws the visualization for the x and y axes
+ Attempts to get the column from the data using the specified index, raises an
+ exception if this is not possible from this point in the stack.
"""
+ try:
+ if is_dataframe(data):
+ # Assume column indexing
+ return data[idx]
+ # Otherwise assume numpy array-like indexing
+ return data[:,idx]
+ except Exception as e:
+ raise IndexError(
+ "could not index column '{}' into type {}: {}".format(
+ self.columns, data.__class__.__name__, e
+ ))
- if self.xy_args is None:
- self.xy_args = {}
-
- facecolor_x = self.xy_args.pop("facecolor_x", FACECOLOR)
- self.x_ax.set_facecolor(facecolor_x)
- facecolor_y = self.xy_args.pop("facecolor_y", FACECOLOR)
- self.y_ax.set_facecolor(facecolor_y)
+# Alias for JointPlot
+JointPlotVisualizer = JointPlot
- if self.xy_plot == "hist":
- hist_bins = self.xy_args.pop("bins", 50)
- self.xy_args.setdefault("alpha", 0.4)
- histcolor_x = self.xy_args.pop("histcolor_x", HISTCOLOR)
- self.x_ax.set_facecolor(facecolor_x)
- histcolor_y = self.xy_args.pop("histcolor_y", HISTCOLOR)
- self.y_ax.set_facecolor(facecolor_y)
- self.x_ax.hist(X, bins=hist_bins, color=histcolor_x, **self.xy_args)
- self.y_ax.hist(y, bins=hist_bins, color=histcolor_y,
- orientation='horizontal', **self.xy_args)
- def finalize(self, **kwargs):
- """
- Finalize executes any subclass-specific axes finalization steps.
- The user calls poof and poof calls finalize.
+##########################################################################
+## Quick Method for JointPlot visualizations
+##########################################################################
- Parameters
- ----------
- kwargs: generic keyword arguments.
- """
- self.joint_ax.set_xlabel(self.feature)
- self.joint_ax.set_ylabel(self.target)
-
- plt.setp(self.x_ax.get_xticklabels(), visible=False)
- plt.setp(self.y_ax.get_yticklabels(), visible=False)
-
- plt.setp(self.x_ax.yaxis.get_majorticklines(), visible=False)
- plt.setp(self.x_ax.yaxis.get_minorticklines(), visible=False)
- plt.setp(self.y_ax.xaxis.get_majorticklines(), visible=False)
- plt.setp(self.y_ax.xaxis.get_minorticklines(), visible=False)
- plt.setp(self.x_ax.get_yticklabels(), visible=False)
- plt.setp(self.y_ax.get_xticklabels(), visible=False)
- self.x_ax.yaxis.grid(False)
- self.y_ax.xaxis.grid(False)
- self.fig.suptitle("Joint Plot of {} vs {}"
- .format(self.feature, self.target), y=1.05)
+def joint_plot():
+ raise NotImplementedError("quick method still needs to be implemented")
\ No newline at end of file
| diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_continuous_y_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_index_discrete_y_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_int_index_numpy_no_y_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_double_str_index_pandas_no_y_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_none_x.png b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y.png b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_none_x_y_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy.png b/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_single_int_index_numpy_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas.png b/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas_hist.png b/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas_hist.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_jointplot/test_columns_single_str_index_pandas_hist.png differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_jointplot_has_no_errors.png b/tests/baseline_images/test_features/test_jointplot/test_jointplot_has_no_errors.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_features/test_jointplot/test_jointplot_has_no_errors.png and /dev/null differ
diff --git a/tests/baseline_images/test_features/test_jointplot/test_jointplot_integrated_has_no_errors.png b/tests/baseline_images/test_features/test_jointplot/test_jointplot_integrated_has_no_errors.png
deleted file mode 100644
Binary files a/tests/baseline_images/test_features/test_jointplot/test_jointplot_integrated_has_no_errors.png and /dev/null differ
diff --git a/tests/test_features/test_jointplot.py b/tests/test_features/test_jointplot.py
--- a/tests/test_features/test_jointplot.py
+++ b/tests/test_features/test_jointplot.py
@@ -1,20 +1,21 @@
# tests.test_features.test_jointplot
-# Test the JointPlotVisualizer
+# Test the JointPlot Visualizer
#
# Author: Prema Damodaran Roman
# Created: Mon Apr 10 21:00:54 2017 -0400
#
-# Copyright (C) 2017 District Data Labs
+# Copyright (C) 2017 The scikit-yb developers.
# For license information, see LICENSE.txt
#
# ID: test_jointplot.py [9e008b0] [email protected] $
"""
-Test the JointPlotVisualizer.
+Test joint plot visualization methods.
These tests work differently depending on what version of matplotlib is
-installed. If version 2.0.0 or greater is installed, then most tests will
-execute, otherwise most will skip and only the warning will be tested.
+installed. If version 2.0.2 or greater is installed, then most tests will
+execute, otherwise the histogram tests will skip and only the warning will
+be tested.
"""
##########################################################################
@@ -23,117 +24,430 @@
import sys
import pytest
-import warnings
import numpy as np
-import matplotlib as mpl
-import matplotlib.pyplot as plt
-from tests.dataset import DatasetMixin
+from functools import partial
+from tests.dataset import Dataset
from tests.base import VisualTestCase
from yellowbrick.features.jointplot import *
+from yellowbrick.exceptions import YellowbrickValueError
+from sklearn.datasets import make_classification, make_regression
+
+try:
+ # Only available in Matplotlib >= 2.0.2
+ from mpl_toolkits.axes_grid1 import make_axes_locatable
+except ImportError:
+ make_axes_locatable = None
+
+try:
+ from unittest.mock import patch, MagicMock
+except ImportError:
+ from mock import patch, MagicMock
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
##########################################################################
-## JointPlotVisualizer Tests
+## Fixtures
##########################################################################
-# Determine version of matplotlib
-MPL_VERS_MAJ = int(mpl.__version__.split(".")[0])
+# Random numpy array generators
+rand1d = partial(np.random.rand, 120)
+rand2col = partial(np.random.rand, 120, 2)
+rand3col = partial(np.random.rand, 120, 3)
-class JointPlotTests(VisualTestCase, DatasetMixin):
[email protected](scope='class')
+def discrete(request):
+ """
+ Creates a simple 2-column dataset with a discrete target.
+ """
+ X, y = make_classification(
+ n_samples=120, n_features=2, n_informative=2, n_redundant=0,
+ n_classes=3, n_clusters_per_class=1, random_state=2221,
+ )
+
+ request.cls.discrete = Dataset(X, y)
- X = np.array([1, 2, 3, 5, 8, 10])
- y = np.array([1, 3, 6, 2, 9, 2])
[email protected](scope='class')
+def continuous(request):
+ """
+ Creates a simple 2-column dataset with a continuous target.
+ """
+ X, y = make_regression(
+ n_samples=120, n_features=2, random_state=1112,
+ )
- def setUp(self):
- self.concrete = self.load_data('concrete')
+ request.cls.continuous = Dataset(X, y)
- def tearDown(self):
- self.concrete = None
- @pytest.mark.skipif(MPL_VERS_MAJ > 1, reason="requires matplotlib 1.5.3 or less")
- def test_warning(self):
+##########################################################################
+## JointPlot Tests
+##########################################################################
+
[email protected]("discrete", "continuous")
+class TestJointPlotNoHistogram(VisualTestCase):
+ """
+ Test the JointPlot visualizer without histograms
+ """
+
+ def test_invalid_columns_values(self):
"""
- Ensure that the jointplot warns if mpl version is < 2.0.0
+ Assert invalid columns arguments raise exception
"""
- # Note Python 3.2+ has a self.assertWarns ... but we need to be
- # Python 2.7 compatible, so we're going to do this.
- with warnings.catch_warnings(record=True) as w:
- # Cause all warnings to always be triggered.
- warnings.simplefilter("always")
+ with pytest.raises(YellowbrickValueError, match="invalid for joint plot"):
+ JointPlot(columns=['a', 'b', 'c'], hist=False)
- # Trigger a warning.
- JointPlotVisualizer()
+ def test_invalid_correlation_values(self):
+ """
+ Assert invalid correlation arguments raise an exception
+ """
+ with pytest.raises(YellowbrickValueError, match="invalid correlation method"):
+ JointPlot(correlation="foo", hist=False)
- # Ensure that a warning occurred
- self.assertEqual(len(w), 1)
- self.assertEqual(
- str(w[-1].message),
- "JointPlotVisualizer requires matplotlib major version 2 "
- "or greater. Please upgrade."
- )
+ def test_invalid_kind_values(self):
+ """
+ Assert invalid kind arguments raise exception
+ """
+ for bad_kind in ('foo', None, 123):
+ with pytest.raises(YellowbrickValueError, match="invalid joint plot kind"):
+ JointPlot(kind=bad_kind, hist=False)
- @pytest.mark.xfail(
- sys.platform == 'win32', reason="images not close on windows"
- )
- @pytest.mark.skipif(MPL_VERS_MAJ < 2, reason="requires matplotlib 2.0.0 or greater")
- @pytest.mark.filterwarnings("ignore:internal gelsd driver")
- def test_jointplot_has_no_errors(self):
+ def test_invalid_hist_values(self):
"""
- Assert no errors occur during jointplot visualizer integration
+ Assert invalid hist arguments raise exception
"""
- fig = plt.figure()
- ax = fig.add_subplot()
+ for bad_hist in ('foo', 123):
+ with pytest.raises(YellowbrickValueError, match="invalid argument for hist"):
+ JointPlot(hist=bad_hist)
- visualizer = JointPlotVisualizer(ax=ax)
- visualizer.fit(self.X, self.y)
+ def test_no_haxes(self):
+ """
+ Test that xhax and yhax are not available
+ """
+ oz = JointPlot(hist=False)
+ with pytest.raises(AttributeError, match="histogram for the X axis"):
+ oz.xhax
- self.assert_images_similar(visualizer, tol=10)
+ with pytest.raises(AttributeError, match="histogram for the Y axis"):
+ oz.yhax
- @pytest.mark.xfail(
- sys.platform == 'win32', reason="images not close on windows"
- )
- @pytest.mark.skipif(MPL_VERS_MAJ < 2, reason="requires matplotlib 2.0.0 or greater")
- def test_jointplot_integrated_has_no_errors(self):
+ @patch('yellowbrick.features.jointplot.plt')
+ def test_correlation(self, mplt):
+ """
+ Test correlation is correctly computed
+ """
+ x = self.discrete.X[:,0]
+ y = self.discrete.X[:,1]
+
+ cases = (
+ ("pearson", -0.3847799883805261),
+ ("spearman", -0.37301201472324463),
+ ("covariance", -0.5535440619953924),
+ ("kendalltau", -0.2504201680672269),
+ )
+
+ for alg, expected in cases:
+ oz = JointPlot(hist=False, correlation=alg, columns=None)
+ oz.ax = MagicMock()
+ oz.fit(x,y)
+
+ assert hasattr(oz, 'corr_')
+ assert oz.corr_ == pytest.approx(expected), "{} not computed correctly".format(alg)
+
+ def test_columns_none_invalid_x(self):
+ """
+ When self.columns=None validate X and y
+ """
+ bad_kws = (
+ {'X': rand1d(), 'y': None},
+ {'X': rand3col(), 'y': None},
+ {'X': rand2col(), 'y': rand1d()},
+ {'X': rand3col(), 'y': rand1d()},
+ {'X': rand1d(), 'y': rand2col()},
+ )
+
+ for kws in bad_kws:
+ oz = JointPlot(columns=None, hist=False)
+ with pytest.raises(YellowbrickValueError, match="when self.columns is None"):
+ oz.fit(**kws)
+
+ def test_columns_none_x_y(self):
+ """
+ When self.columns=None image similarity with valid X and y
+ """
+ oz = JointPlot(hist=False, columns=None)
+ assert oz.fit(self.discrete.X[:,0], self.discrete.y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 2.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 1.859
+ self.assert_images_similar(oz, tol=tol)
+
+ def test_columns_none_x(self):
+ """
+ When self.columns=None image similarity with valid X, no y
+ """
+ oz = JointPlot(hist=False, columns=None)
+ assert oz.fit(self.discrete.X) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.941
+ self.assert_images_similar(oz, tol=tol)
+
+ def test_columns_single_index_no_y(self):
+ """
+ When self.columns=int or str y must not be None
+ """
+ oz = JointPlot(columns="foo", hist=False)
+ with pytest.raises(YellowbrickValueError, match="y must be specified"):
+ oz.fit(rand2col(), y=None)
+
+ def test_columns_single_invalid_index_numpy(self):
+ """
+ When self.columns=int validate the index in X
+ """
+ oz = JointPlot(columns=2, hist=False)
+ with pytest.raises(IndexError, match="could not index column '2' into type"):
+ oz.fit(self.continuous.X, self.continuous.y)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_single_invalid_index_pandas(self):
+ """
+ When self.columns=str validate the index in X
+ """
+ oz = JointPlot(columns="foo", hist=False)
+ X = pd.DataFrame(self.continuous.X, columns=["a", "b"])
+ y = pd.Series(self.continuous.y)
+
+ with pytest.raises(IndexError, match="could not index column 'foo' into type"):
+ oz.fit(X, y)
+
+ def test_columns_single_int_index_numpy(self):
+ """
+ When self.columns=int image similarity on numpy dataset
+ """
+ oz = JointPlot(columns=1, hist=False)
+ assert oz.fit(self.continuous.X, self.continuous.y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 0.5 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 0.442
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_single_str_index_pandas(self):
+ """
+ When self.columns=str image similarity on pandas dataset
+ """
+ oz = JointPlot(columns="a", hist=False)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ y = pd.Series(self.continuous.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 0.5 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 0.447
+ self.assert_images_similar(oz, tol=tol)
+
+ def test_columns_double_int_index_numpy_no_y(self):
+ """
+ When self.columns=[int, int] image similarity on numpy dataset no y
+ """
+ oz = JointPlot(columns=[0,1], hist=False)
+ assert oz.fit(self.discrete.X, y=None) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.941
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_str_index_pandas_no_y(self):
+ """
+ When self.columns=[str, str] image similarity on pandas dataset no y
+ """
+ oz = JointPlot(columns=['a', 'b'], hist=False)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ assert oz.fit(X, y=None) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.911
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_index_discrete_y(self):
+ """
+ When self.columns=[str, str] on DataFrame with discrete y
+ """
+ oz = JointPlot(columns=['a', 'b'], hist=False)
+ X = pd.DataFrame(self.discrete.X, columns=['a', 'b'])
+ y = pd.Series(self.discrete.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.940
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_index_continuous_y(self):
+ """
+ When self.columns=[str, str] on DataFrame with continuous y
+ """
+ oz = JointPlot(columns=['a', 'b'], hist=False)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ y = pd.Series(self.continuous.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.911
+ self.assert_images_similar(oz, tol=tol)
+
+
[email protected](make_axes_locatable is not None, reason="requires matplotlib <= 2.0.1")
+def test_matplotlib_version_error():
+ """
+ Assert an exception is raised with incompatible matplotlib versions
+ """
+ with pytest.raises(YellowbrickValueError):
+ JointPlot(hist=True)
+
+
+@patch("yellowbrick.features.jointplot.make_axes_locatable", None)
+def test_matplotlib_incompatibility():
+ """
+ Assert an exception is raised if make_axes_locatable is None
+ """
+ with pytest.raises(YellowbrickValueError):
+ JointPlot(hist=True)
+
+
[email protected]("discrete", "continuous")
[email protected](make_axes_locatable is None, reason="requires matplotlib >= 2.0.2")
+class TestJointPlotHistogram(VisualTestCase):
+ """
+ Test the JointPlot visualizer with histograms
+ """
+
+ def test_haxes_available(self):
"""
- Test jointplot on the concrete data set
+ Test that xhax and yhax are available
"""
+ oz = JointPlot(hist=True)
+ assert oz.xhax is not None
+ assert oz.yhax is not None
- fig = plt.figure()
- ax = fig.add_subplot()
+ def test_columns_none_x_y_hist(self):
+ """
+ When self.columns=None image similarity with valid X and y
+ """
+ oz = JointPlot(hist=True, columns=None)
+ assert oz.fit(self.discrete.X[:,0], self.discrete.y) is oz
+ assert hasattr(oz, "corr_")
- # Load the data from the fixture
- X = self.concrete['cement']
- y = self.concrete['strength']
- feature = 'cement'
- target = 'strength'
+ oz.finalize()
+ tol = 3.5 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.013
+ self.assert_images_similar(oz, tol=tol)
+
+ def test_columns_none_x_hist(self):
+ """
+ When self.columns=None image similarity with valid X, no y
+ """
+ oz = JointPlot(hist=True, columns=None)
+ assert oz.fit(self.discrete.X) is oz
+ assert hasattr(oz, "corr_")
- # Test the visualizer
- visualizer = JointPlotVisualizer(
- feature=feature, target=target, joint_plot="hex", ax=ax)
- visualizer.fit(X, y)
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.945
+ self.assert_images_similar(oz, tol=tol)
- self.assert_images_similar(visualizer, tol=15)
+ def test_columns_single_int_index_numpy_hist(self):
+ """
+ When self.columns=int image similarity on numpy dataset
+ """
+ oz = JointPlot(columns=1, hist=True)
+ assert oz.fit(self.continuous.X, self.continuous.y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 0.5 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 0.470
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_single_str_index_pandas_hist(self):
+ """
+ When self.columns=str image similarity on pandas dataset
+ """
+ oz = JointPlot(columns="a", hist=True)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ y = pd.Series(self.continuous.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 0.5 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 0.470
+ self.assert_images_similar(oz, tol=tol)
+
+ def test_columns_double_int_index_numpy_no_y_hist(self):
+ """
+ When self.columns=[int, int] image similarity on numpy dataset no y
+ """
+ oz = JointPlot(columns=[0,1], hist=True)
+ assert oz.fit(self.discrete.X, y=None) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.945
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_str_index_pandas_no_y_hist(self):
+ """
+ When self.columns=[str, str] image similarity on pandas dataset no y
+ """
+ oz = JointPlot(columns=['a', 'b'], hist=True)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ assert oz.fit(X, y=None) is oz
+ assert hasattr(oz, "corr_")
+
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.934
+ self.assert_images_similar(oz, tol=tol)
+
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_index_discrete_y_hist(self):
+ """
+ When self.columns=[str, str] on DataFrame with discrete y
+ """
+ oz = JointPlot(columns=['a', 'b'], hist=True)
+ X = pd.DataFrame(self.discrete.X, columns=['a', 'b'])
+ y = pd.Series(self.discrete.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.944
+ self.assert_images_similar(oz, tol=tol)
- @pytest.mark.skipif(MPL_VERS_MAJ < 2, reason="requires matplotlib 2.0.0 or greater")
- def test_jointplot_no_matplotlib2_warning(self):
+ @pytest.mark.skipif(pd is None, reason="test requires pandas")
+ def test_columns_double_index_continuous_y_hist(self):
"""
- Assert no UserWarning occurs if matplotlib major version >= 2
+ When self.columns=[str, str] on DataFrame with continuous y
"""
- with warnings.catch_warnings(record=True) as ws:
- # Filter on UserWarnings
- warnings.filterwarnings("always", category=UserWarning)
- visualizer = JointPlotVisualizer()
- visualizer.fit(self.X, self.y)
- visualizer.finalize()
+ oz = JointPlot(columns=['a', 'b'], hist=True)
+ X = pd.DataFrame(self.continuous.X, columns=['a', 'b'])
+ y = pd.Series(self.continuous.y)
+ assert oz.fit(X, y) is oz
+ assert hasattr(oz, "corr_")
- # Filter out user warnings not related to matplotlib version
- ver_warn_msg = "requires matplotlib major version 2 or greater"
- mpl_ver_cnt = 0
- for w in ws:
- if w and w.message and ver_warn_msg in str(w.message):
- mpl_ver_cnt += 1
- self.assertEqual(0, mpl_ver_cnt, ws[-1].message \
- if ws else "No error")
+ oz.finalize()
+ tol = 4.0 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.934
+ self.assert_images_similar(oz, tol=tol)
\ No newline at end of file
| JointPlotVisualizer produces different visualizations depending on how the figure and axes objects are defined
**Describe the bug**
JointPlotVisualizer produces different visualizations depending on how the figure and axes objects are defined.
If the figure and axes are defined in the following manner:

```python
_, ax = plt.subplots()
visualizer = JointPlotVisualizer(ax=ax, feature=feature, target=target, joint_plot="hex")
```
The results are as follows:
1) feature and target labels are displayed in the image
2) the tick labels are displayed only on the left and bottom axes
3) there are issues with the tick label values
On the other hand, if the figure and axes are defined as such:

```python
fig = plt.figure()
ax = fig.add_subplot()
```
The results are as follows:
1) the feature and target labels are not displayed
2) the tick labels show up in all the axes
3) the tick label values are displayed with more meaningful values
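A minimal end-to-end sketch of the two construction patterns being compared (hypothetical stand-in data; the `cement`/`strength` names mirror the example used in the test suite above):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from yellowbrick.features import JointPlotVisualizer

# Hypothetical stand-in data; any 1D feature/target pair will do
rng = np.random.RandomState(42)
X = pd.Series(rng.uniform(100, 540, 200), name="cement")
y = pd.Series(0.1 * X + rng.normal(0, 5, 200), name="strength")

# Pattern 1: axes created with plt.subplots()
_, ax = plt.subplots()
viz = JointPlotVisualizer(ax=ax, feature="cement", target="strength", joint_plot="hex")
viz.fit(X, y)
viz.poof()

# Pattern 2: axes created with fig.add_subplot()
fig = plt.figure()
ax = fig.add_subplot()
viz = JointPlotVisualizer(ax=ax, feature="cement", target="strength", joint_plot="hex")
viz.fit(X, y)
viz.poof()
```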
Refer to the following jupyter notebook to see a demonstration of the differences:
https://github.com/pdamodaran/yellowbrick/blob/develop/examples/pdamodaran/JointPlot_Examples.ipynb
NOTE: any fixes to this issue should be considered in conjunction with the following issues:
https://github.com/DistrictDataLabs/yellowbrick/issues/434
https://github.com/DistrictDataLabs/yellowbrick/issues/214
| 2019-02-03T03:43:33 |
|
DistrictDataLabs/yellowbrick | 736 | DistrictDataLabs__yellowbrick-736 | [
"450"
] | ac048c17ea19debd8e3a6f3b653033884bdbd84b | diff --git a/yellowbrick/features/jointplot.py b/yellowbrick/features/jointplot.py
--- a/yellowbrick/features/jointplot.py
+++ b/yellowbrick/features/jointplot.py
@@ -58,7 +58,7 @@ class JointPlot(FeatureVisualizer):
feature-to-target plots. For pairwise feature analysis, the ``columns`` argument can
be used to specify the index of the two desired columns in ``X``. If ``y`` is also
specified, the plot can be colored with a heatmap or by class. For feature-to-target
- plots, the user can provide either ``X`` and ``y` as 1D vectors, or a ``columns``
+ plots, the user can provide either ``X`` and ``y`` as 1D vectors, or a ``columns``
argument with an index to a single feature in ``X`` to be plotted against ``y``.
Histograms can be included by setting the ``hist`` argument to ``True`` for a
| diff --git a/tests/README.md b/tests/README.md
new file mode 100644
--- /dev/null
+++ b/tests/README.md
@@ -0,0 +1,94 @@
+# Yellowbrick Tests
+
+*Welcome to the Yellowbrick tests!*
+
+If you're looking for information about how to use Yellowbrick, for our contributor's guide, for examples and teaching resources, for answers to frequently asked questions, and more, please visit the latest version of our documentation at [www.scikit-yb.org](https://www.scikit-yb.org/).
+
+## Running Yellowbrick Tests
+
+To run the tests locally, first install the tests-specific requirements with `pip` using the `requirements.txt` file in the `tests` directory:
+
+```
+$ pip install -r tests/requirements.txt
+```
+
+The required dependencies for the test suite include testing utilities and libraries such as `pandas` and `nltk` that are not included in the core dependencies.
+
+Tests can then be run as follows from the project `root`:
+
+```bash
+$ make test
+```
+
+The Makefile uses the `pytest` runner and testing suite as well as the coverage library.
+
+## Adding a Test for Your Visualizer
+
+The `tests` package mirrors the yellowbrick package in structure and also contains several helper methods and base functionality. To add a test to your visualizer, find the corresponding file to add the test case, or create a new test file in the same place you added your code.
+
+### Visual Tests
+
+The primary test you should create is simply to test your visualizer from end to end and make sure that no exceptions occur.
+
+Visual tests are notoriously difficult to create --- how do you test a visualization or figure? Moreover, testing scikit-learn models with real data can consume a lot of memory. To assist with this, we have two primary helpers, `VisualTestCase` and the `yellowbrick.datasets` module.
+
+Leverage these helpers to create your unittest as follows:
+
+```python
+import pytest
+
+from tests.base import VisualTestCase
+from yellowbrick.datasets import load_occupancy
+
+
+class MyVisualizerTests(VisualTestCase):
+
+ def test_my_visualizer(self):
+ """
+ Test MyVisualizer on a real dataset
+ """
+ # Load the data using the Yellowbrick datasets module
+ X, y = load_occupancy()
+
+ try:
+ visualizer = MyVisualizer()
+ visualizer.fit(X)
+ visualizer.finalize()
+ except Exception as e:
+ pytest.fail("my visualizer didn't work")
+```
+
+### Image Comparison Tests
+
+Writing an image-based comparison test is only a little more difficult than the simple test case presented above. We have adapted `matplotlib`'s image comparison test utility into an easy to use assert method: `self.assert_images_similar(visualizer)`
+
+The main consideration is that you must specify the “baseline” (i.e. expected) image in the `tests/baseline_images/` folder structure.
+
+For example, let's say you create your unittest in `tests/test_regressor/test_myvisualizer.py` as follows:
+
+```python
+from tests.base import VisualTestCase
+...
+ def test_my_visualizer_output(self):
+ ...
+ visualizer = MyVisualizer()
+ visualizer.fit(X)
+ visualizer.finalize()
+ self.assert_images_similar(visualizer)
+```
+
+The first time this test is run, there will be no baseline image to compare against, so the test will fail. Alternatively, if you are making a correction to the existing test `test_my_visualizer_output`, and the correction modifies the resulting test image, the test may also fail to match the existing baseline image. The solution is to first run the tests, then copy the new output images to the correct subdirectory under source code revision control (with `git add`). When rerunning the tests, they should now pass!
+
+We have a helper script, `tests/images.py` to clean up and manage baseline images automatically. It is run using the ``python -m`` command to execute a module as main, and it takes as an argument the path to **your** test file. To copy the figures as above:
+
+```bash
+$ python -m tests.images tests/test_regressor/test_myvisualizer.py
+```
+
+This will move all related test images from `actual_images` to `baseline_images` on your behalf (note you'll have had to already run the tests at least once to generate the images). You can also clean up images from both actual and baseline as follows:
+
+```bash
+$ python -m tests.images -C tests/test_regressor/test_myvisualizer.py
+```
+
+This is useful particularly if you're stuck trying to get an image comparison to work. For more information on the images helper script, use `python -m tests.images --help`.
| Documentation README
### Proposal
Develop a documentation contribution guideline modeled after [pandas](https://github.com/pandas-dev/pandas/blob/master/doc/README.rst)
Documentation is often seen as a safe place for new contributors to start, but Sphinx and reStructuredText can be intimidating at first. The goal of the readme would be to create an inviting document that gives a high-level overview of how to build the docs, where to find rst documentation, how and when to use the `plot::` directive, and gives some direction towards concrete issues to tackle (i.e. the novice and doc tags in the issue list).
Relates to issues #353 , #354 and PR #446
| Thanks for adding this issue - I think it's especially important after #354
This is an excellent proposal and I think would be very valuable to our contributors. After #720 we think this should work as follows:
- Create a `docs/README.md` that has limited instructions about how to build docs and install requirements, then points to the documentation for further instructions.
- Create a section in the contributor's guide for documentation that discusses all of the above.
- Perhaps also do the same for tests? | 2019-02-10T20:28:52 |
DistrictDataLabs/yellowbrick | 766 | DistrictDataLabs__yellowbrick-766 | [
"765"
] | 140caacd9af6a46b6d07a1e14b410c776e083787 | diff --git a/yellowbrick/datasets/base.py b/yellowbrick/datasets/base.py
--- a/yellowbrick/datasets/base.py
+++ b/yellowbrick/datasets/base.py
@@ -187,7 +187,7 @@ def to_numpy(self):
A numpy array describing the target vector.
"""
path = find_dataset_path(self.name, ext=".npz", data_home=self.data_home)
- with np.load(path) as npf:
+ with np.load(path, allow_pickle=False) as npf:
if "X" not in npf or "y" not in npf:
raise DatasetsError((
"the downloaded dataset was improperly packaged without numpy arrays "
| Critical Vulnerability in np.load with NumPy v1.16 and earlier
There is a critical vulnerability with NumPy v1.16 and earlier that affects the new YB datasets module:
https://www.bleepingcomputer.com/news/security/numpy-is-awaiting-fix-for-critical-remote-code-execution-bug/
This does not affect any Yellowbrick user for version 0.9.1 or earlier and we will not release version 1.0 without a patch for this bug. When NumPy 1.17 is released (if it contains the fix), we will mark our minimum NumPy requirement to that version.
Currently, in the `develop` branch, we do use `np.load` when [loading a numpy dataset](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/datasets/base.py#L195), e.g. if Pandas is not available. We should update this to `np.load(allow_pickle=False)` as per the recommendation of the post above. Note that we do [ensure data downloaded from our repository matches an expected signature](https://github.com/DistrictDataLabs/yellowbrick/blob/708274289d66d9265f7ded03e3445bc2bd70f46e/yellowbrick/datasets/download.py#L106), which minimizes but does not eliminate the risk to Yellowbrick users.
Thanks @theagent for bringing this to our attention!
| It turns out that our `load_game` and `load_mushroom` datasets are affected by this change because they contain string data with `dtype=object`, which results in: `ValueError: Object arrays cannot be loaded when allow_pickle=False` when the patch is applied.
I tested a workaround to this, and that is to save these arrays with `dtype=np.unicode_` -- which does allow save and load with `allow_pickle=False`.
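A small sketch of the round trip being described, under the assumption of a simple string-valued dataset (hypothetical file name and values):

```python
import numpy as np

# Fixed-width unicode arrays, not dtype=object
X = np.array([["x", "s", "n"], ["b", "y", "w"]], dtype=np.unicode_)
y = np.array(["edible", "poisonous"], dtype=np.unicode_)

# Saving works without pickling because the arrays are plain unicode
np.savez("mushroom.npz", X=X, y=y)

# Loading with pickling disabled, as in the patch above
with np.load("mushroom.npz", allow_pickle=False) as npf:
    X, y = npf["X"], npf["y"]

# The same arrays saved with dtype=object would instead raise:
# ValueError: Object arrays cannot be loaded when allow_pickle=False
```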
Having these datasets as strings allows us to give examples with various forms of categorical encoding such as OneHotEncoding; I'm not sure how modifying the dtype will change these examples. @Kautumn06 I may need some help going through our docs to ensure we don't break anything with this change.
See: https://github.com/DistrictDataLabs/yellowbrick-datasets/issues/5 for more on the datasets end of things. | 2019-02-19T22:27:33 |
|
DistrictDataLabs/yellowbrick | 813 | DistrictDataLabs__yellowbrick-813 | [
"764"
] | d223c05a6242c2799dbe942e54b1694af7b71b0e | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -22,9 +22,12 @@
import time
import numpy as np
import scipy.sparse as sp
+import warnings
from .base import ClusteringScoreVisualizer
-from ..exceptions import YellowbrickValueError
+from ..style.palettes import LINE_COLOR
+from ..exceptions import YellowbrickValueError, YellowbrickWarning
+from ..utils import KneeLocator
from sklearn.metrics import silhouette_score
from sklearn.metrics import calinski_harabaz_score
@@ -170,10 +173,31 @@ class KElbowVisualizer(ClusteringScoreVisualizer):
Display the fitting time per k to evaluate the amount of time required
to train the clustering model.
+ locate_elbow : bool, default: True
+ Automatically find the "elbow" or "knee" which likely corresponds to the optimal
+ value of k using the "knee point detection algorithm". The knee point detection
+ algorithm finds the point of maximum curvature, which in a well-behaved clustering
+ problem also represents the pivot of the elbow curve. The point is labeled with a
+ dashed line and annotated with the score and k values.
+
kwargs : dict
Keyword arguments that are passed to the base class and may influence
the visualization as defined in other Visualizers.
+ Attributes
+ ----------
+ k_scores_ : array of shape (n,) where n is no. of k values
+ The silhouette score corresponding to each k value.
+
+ k_timers_ : array of shape (n,) where n is no. of k values
+ The time taken to fit n KMeans model corresponding to each k value.
+
+ elbow_value_ : integer
+ The optimal value of k.
+
+ elbow_score_ : float
+ The silhouette score corresponding to the optimal value of k.
+
Examples
--------
@@ -194,6 +218,8 @@ class KElbowVisualizer(ClusteringScoreVisualizer):
For a discussion on the Elbow method, read more at
`Robert Gove's Block <https://bl.ocks.org/rpgove/0060ff3b656618e9136b>`_.
+ To know about 'Knee Point Detection Algorithm' read at `Finding a "kneedle" in a Haystack
+ <https://raghavan.usc.edu//papers/kneedle-simplex11.pdf>`_.
.. seealso:: The scikit-learn documentation for the `silhouette_score
<https://bit.ly/2LYWjYb>`_ and `calinski_harabaz_score
@@ -206,7 +232,7 @@ class KElbowVisualizer(ClusteringScoreVisualizer):
"""
def __init__(self, model, ax=None, k=10,
- metric="distortion", timings=True, **kwargs):
+ metric="distortion", timings=True, locate_elbow=True, **kwargs):
super(KElbowVisualizer, self).__init__(model, ax=ax, **kwargs)
# Get the scoring method
@@ -218,7 +244,9 @@ def __init__(self, model, ax=None, k=10,
# Store the arguments
self.scoring_metric = KELBOW_SCOREMAP[metric]
+ self.metric = metric
self.timings = timings
+ self.locate_elbow=locate_elbow
# Convert K into a tuple argument if an integer
if isinstance(k, int):
@@ -241,13 +269,19 @@ def __init__(self, model, ax=None, k=10,
def fit(self, X, y=None, **kwargs):
"""
Fits n KMeans models where n is the length of ``self.k_values_``,
- storing the silhoutte scores in the ``self.k_scores_`` attribute.
+ storing the silhouette scores in the ``self.k_scores_`` attribute.
+ The "elbow" and silhouette score corresponding to it are stored in
+ ``self.elbow_value`` and ``self.elbow_score`` respectively.
This method finishes up by calling draw to create the plot.
"""
self.k_scores_ = []
self.k_timers_ = []
+ if self.locate_elbow:
+ self.elbow_value_ = None
+ self.elbow_score_ = None
+
for k in self.k_values_:
# Compute the start time for each model
start = time.time()
@@ -260,7 +294,24 @@ def fit(self, X, y=None, **kwargs):
self.k_timers_.append(time.time() - start)
self.k_scores_.append(
self.scoring_metric(X, self.estimator.labels_)
- )
+ )
+
+ if self.locate_elbow:
+ locator_kwargs = {
+ 'distortion': {'curve_nature': 'convex', 'curve_direction': 'decreasing'},
+ 'silhouette': {'curve_nature': 'concave', 'curve_direction': 'increasing'},
+ 'calinski_harabaz': {'curve_nature': 'concave', 'curve_direction': 'increasing'},
+ }.get(self.metric, {})
+ elbow_locator = KneeLocator(self.k_values_,self.k_scores_,**locator_kwargs)
+ self.elbow_value_ = elbow_locator.knee
+ if self.elbow_value_ == None:
+ warning_message=\
+ "No 'knee' or 'elbow' point detected, " \
+ "pass `locate_elbow=False` to remove the warning"
+ warnings.warn(warning_message,YellowbrickWarning)
+ else:
+ self.elbow_score_ = self.k_scores_[self.k_values_.index(self.elbow_value_)]
+
self.draw()
@@ -271,8 +322,11 @@ def draw(self):
Draw the elbow curve for the specified scores and values of K.
"""
# Plot the silhouette score against k
- self.ax.plot(self.k_values_, self.k_scores_, marker="D", label="score")
-
+ self.ax.plot(self.k_values_, self.k_scores_, marker="D")
+ if self.locate_elbow and self.elbow_value_!=None:
+ elbow_label = "$elbow\ at\ k={}, score={:0.3f}$".format(self.elbow_value_, self.elbow_score_)
+ self.ax.axvline(self.elbow_value_, c=LINE_COLOR, linestyle="--", label=elbow_label)
+
# If we're going to plot the timings, create a twinx axis
if self.timings:
self.axes = [self.ax, self.ax.twinx()]
@@ -281,12 +335,14 @@ def draw(self):
c='g', marker="o", linestyle="--", alpha=0.75,
)
+
return self.ax
def finalize(self):
"""
Prepare the figure for rendering by setting the title as well as the
X and Y axis labels and adding the legend.
+
"""
# Get the metric name
metric = self.scoring_metric.__name__.replace("_", " ").title()
@@ -299,6 +355,10 @@ def finalize(self):
# Set the x and y labels
self.ax.set_xlabel('k')
self.ax.set_ylabel(metric.lower())
+
+ #set the legend if locate_elbow=True
+ if self.locate_elbow and self.elbow_value_!=None:
+ self.ax.legend(loc='best', fontsize='medium')
# Set the second y axis labels
if self.timings:
diff --git a/yellowbrick/utils/__init__.py b/yellowbrick/utils/__init__.py
--- a/yellowbrick/utils/__init__.py
+++ b/yellowbrick/utils/__init__.py
@@ -22,3 +22,4 @@
from .helpers import *
from .types import *
+from .kneed import *
diff --git a/yellowbrick/utils/kneed.py b/yellowbrick/utils/kneed.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/utils/kneed.py
@@ -0,0 +1,242 @@
+# yellowbrick.utils.kneed
+# A port of the knee-point detection package, kneed.
+#
+# Author: Kevin Arvai
+# Author: Pradeep Singh
+# Created: Mon Apr 15 09:43:18 2019 -0400
+#
+# Copyright (C) 2017 Kevin Arvai
+# All rights reserved.
+# Redistribution and use in source and binary forms, with or without modification,
+# are permitted provided that the following conditions are met:
+#
+# 1. Redistributions of source code must retain the above copyright notice, this list
+# of conditions and the following disclaimer.
+#
+# 2. Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or other
+# materials provided with the distribution.
+#
+# 3. Neither the name of the copyright holder nor the names of its contributors may
+# be used to endorse or promote products derived from this software without specific
+# prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
+# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
+# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
+# IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+#
+# ID: kneed.py [] [email protected] $
+
+"""
+This package contains a port of the knee-point detection package, kneed, by
+Kevin Arvai and hosted at https://github.com/arvkevi/kneed. This port is maintained
+with permission by the Yellowbrick contributors.
+"""
+import numpy as np
+from scipy import interpolate
+from scipy.signal import argrelextrema
+import warnings
+
+from yellowbrick.exceptions import YellowbrickWarning
+
+
+class KneeLocator(object):
+ """
+ Finds the "elbow" or "knee" which is a value corresponding to the point of maximum curvature
+ in an elbow curve, using knee point detection algorithm. This point is accessible via the
+ `knee` attribute.
+
+ Parameters
+ ----------
+ x : list
+ A list of k values representing the no. of clusters in KMeans Clustering algorithm.
+
+ y : list
+ A list of silhouette score corresponding to each value of k.
+
+ S : float, default: 1.0
+ Sensitivity parameter that allows us to adjust how aggressive we want KneeLocator to
+ be when detecting "knees" or "elbows".
+
+ curve_nature : string, default: 'convace'
+ A string that determines the nature of the elbow curve in which "knee" or "elbow" is
+ to be found.
+
+ curve_direction : string, default: 'increasing'
+ A string that determines tha increasing or decreasing nature of the elbow curve in
+ which "knee" or "elbow" is to be found.
+
+ Notes
+ -----
+ The KneeLocator is implemented using the "knee point detection algorithm" which can be read at
+ `<https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf>`
+
+ """
+
+ def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increasing'):
+
+ # Raw Input
+ self.x = x
+ self.y = y
+ self.curve_nature = curve_nature
+ self.curve_direction = curve_direction
+ self.N = len(self.x)
+ self.S = S
+
+ # Step 1: fit a smooth line
+ uspline = interpolate.interp1d(self.x, self.y)
+ self.Ds_x = np.linspace(np.min(self.x), np.max(self.x), self.N)
+ self.Ds_y = uspline(self.Ds_x)
+
+ # Step 2: normalize values
+ self.xsn = self.__normalize(self.Ds_x)
+ self.ysn = self.__normalize(self.Ds_y)
+
+ # Step 3: Calculate difference curve
+ self.xd = self.xsn
+ if self.curve_nature == 'convex' and curve_direction == 'decreasing':
+ self.yd = self.ysn + self.xsn
+ self.yd = 1 - self.yd
+ elif self.curve_nature == 'concave' and curve_direction == 'decreasing':
+ self.yd = self.ysn + self.xsn
+ elif self.curve_nature == 'concave' and curve_direction == 'increasing':
+ self.yd = self.ysn - self.xsn
+ if self.curve_nature == 'convex' and curve_direction == 'increasing':
+ self.yd = abs(self.ysn - self.xsn)
+
+ # Step 4: Identify local maxima/minima
+ # local maxima
+ self.xmx_idx = argrelextrema(self.yd, np.greater)[0]
+ self.xmx = self.xd[self.xmx_idx]
+ self.ymx = self.yd[self.xmx_idx]
+
+ # local minima
+ self.xmn_idx = argrelextrema(self.yd, np.less)[0]
+ self.xmn = self.xd[self.xmn_idx]
+ self.ymn = self.yd[self.xmn_idx]
+
+ # Step 5: Calculate thresholds
+ self.Tmx = self.__threshold(self.ymx)
+
+ # Step 6: find knee
+ self.knee, self.norm_knee, self.knee_x = self.find_knee()
+
+ @staticmethod
+ def __normalize(a):
+ """
+ Normalizes an array.
+
+ Parameters
+ -----------
+ a : list
+ The array to normalize
+ """
+ return (a - min(a)) / (max(a) - min(a))
+
+ def __threshold(self, ymx_i):
+ """
+ Calculates the difference threshold for a
+ given difference local maximum.
+
+ Parameters
+ -----------
+ ymx_i : float
+ The normalized y value of a local maximum.
+ """
+ return ymx_i - (self.S * np.diff(self.xsn).mean())
+
+ def find_knee(self, ):
+ """
+ Finds and returns the "knee"or "elbow" value, the normalized knee
+ value, and the x value where the knee is located.
+
+ """
+ if not self.xmx_idx.size:
+ warning_message = \
+ 'No "knee" or "elbow point" detected ' \
+ 'This could be due to bad clustering, no '\
+ 'actual clusters being formed etc.'
+ warnings.warn(warning_message,YellowbrickWarning)
+ return None, None, None
+
+ mxmx_iter = np.arange(self.xmx_idx[0], len(self.xsn))
+ xmx_idx_iter = np.append(self.xmx_idx, len(self.xsn))
+
+ knee_, norm_knee_, knee_x = 0.0, 0.0, None
+ for mxmx_i, mxmx in enumerate(xmx_idx_iter):
+ # stopping criteria for exhasuting array
+ if mxmx_i == len(xmx_idx_iter) - 1:
+ break
+ # indices between maxima/minima
+ idxs = (mxmx_iter > xmx_idx_iter[mxmx_i]) * \
+ (mxmx_iter < xmx_idx_iter[mxmx_i + 1])
+ between_local_mx = mxmx_iter[np.where(idxs)]
+
+ for j in between_local_mx:
+ if j in self.xmn_idx:
+ # reached a minima, x indices are unique
+ # only need to check if j is a min
+ if self.yd[j + 1] > self.yd[j]:
+ self.Tmx[mxmx_i] = 0
+ knee_x = None # reset x where yd crossed Tmx
+ elif self.yd[j + 1] <= self.yd[j]:
+ warning_message="If this is a minima, " \
+ "how would you ever get here."
+ warnings.warn(warning_message, YellowbrickWarning)
+ if self.yd[j] < self.Tmx[mxmx_i] or self.Tmx[mxmx_i] < 0:
+ # declare a knee
+ if not knee_x:
+ knee_x = j
+ knee_ = self.x[self.xmx_idx[mxmx_i]]
+ norm_knee_ = self.xsn[self.xmx_idx[mxmx_i]]
+ return knee_, norm_knee_, knee_x
+
+ def plot_knee_normalized(self, ):
+ """
+ Plots the normalized curve, the distance curve (xd, ysn) and the
+ knee, if it exists.
+
+ """
+ import matplotlib.pyplot as plt
+
+ plt.figure(figsize=(8, 8))
+ plt.plot(self.xsn, self.ysn)
+ plt.plot(self.xd, self.yd, 'r')
+ plt.xticks(np.arange(min(self.xsn), max(self.xsn) + 0.1, 0.1))
+ plt.yticks(np.arange(min(self.xd), max(self.ysn) + 0.1, 0.1))
+
+ plt.vlines(self.norm_knee, plt.ylim()[0], plt.ylim()[1])
+
+ def plot_knee(self, ):
+ """
+ Plot the curve and the knee, if it exists
+
+ """
+ import matplotlib.pyplot as plt
+
+ plt.figure(figsize=(8, 8))
+ plt.plot(self.x, self.y)
+ plt.vlines(self.knee, plt.ylim()[0], plt.ylim()[1])
+
+ # Niceties for users working with elbows rather than knees
+
+ @property
+ def elbow(self):
+ return self.knee
+
+ @property
+ def norm_elbow(self):
+ return self.norm_knee
+
+ @property
+ def elbow_x(self):
+ return self.knee_x
+
+
| diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -217,7 +217,7 @@ def test_distortion_metric(self):
Test the distortion metric of the k-elbow visualizer
"""
visualizer = KElbowVisualizer(
- KMeans(random_state=0), k=5, metric="distortion", timings=False
+ KMeans(random_state=0), k=5, metric="distortion", timings=False, locate_elbow=False
)
visualizer.fit(X)
@@ -236,7 +236,7 @@ def test_silhouette_metric(self):
Test the silhouette metric of the k-elbow visualizer
"""
visualizer = KElbowVisualizer(
- KMeans(random_state=0), k=5, metric="silhouette", timings=False
+ KMeans(random_state=0), k=5, metric="silhouette", timings=False, locate_elbow=False
)
visualizer.fit(X)
@@ -256,7 +256,7 @@ def test_calinski_harabaz_metric(self):
"""
visualizer = KElbowVisualizer(
KMeans(random_state=0), k=5,
- metric="calinski_harabaz", timings=False
+ metric="calinski_harabaz", timings=False, locate_elbow=False
)
visualizer.fit(X)
assert len(visualizer.k_scores_) == 4
@@ -286,7 +286,7 @@ def test_timings(self):
Test the twinx double axes with k-elbow timings
"""
visualizer = KElbowVisualizer(
- KMeans(random_state=0), k=5, timings=True
+ KMeans(random_state=0), k=5, timings=True, locate_elbow=False
)
visualizer.fit(X)
| Add elbow detection using the "kneedle" method to Elbow Visualizer
It would be nice if we could (optionally) annotate the "best" k in the elbow method by detecting the "knee" of the curve using the "kneedle" method described in: https://github.com/arvkevi/kneed. This feature could be added either by adding `kneed` as an _optional_ dependency or by reimplementing part of the kneedle code in Yellowbrick.
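A short usage sketch of the feature as implemented in the patch above (synthetic blobs data; parameter values are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

from yellowbrick.cluster import KElbowVisualizer

# Synthetic data with a known number of clusters
X, _ = make_blobs(n_samples=1000, n_features=12, centers=8, random_state=0)

# locate_elbow=True (the new default) annotates the detected k with a dashed line
visualizer = KElbowVisualizer(KMeans(random_state=0), k=(4, 12), locate_elbow=True)
visualizer.fit(X)
visualizer.poof()

print(visualizer.elbow_value_, visualizer.elbow_score_)
```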
| 2019-04-12T16:46:54 |
|
DistrictDataLabs/yellowbrick | 818 | DistrictDataLabs__yellowbrick-818 | [
"500"
] | 4e70b07992a3ee457a7e069f8053fd88adde061b | diff --git a/docs/tutorial.py b/docs/tutorial.py
--- a/docs/tutorial.py
+++ b/docs/tutorial.py
@@ -1,111 +1,87 @@
#!/usr/bin/env python
# Generate the classification report images for the tutorial
-import os
-import pandas as pd
import matplotlib.pyplot as plt
-from yellowbrick.classifier import ClassificationReport
-
from sklearn.pipeline import Pipeline
-from sklearn.base import BaseEstimator, TransformerMixin
-from sklearn.preprocessing import LabelEncoder, OneHotEncoder
-
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.neighbors import KNeighborsClassifier
+from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
-
-
-DATA = os.path.join(
- os.path.dirname(__file__), "..", "examples", "data", "mushroom", "mushroom.csv"
-)
+
+from yellowbrick.datasets import load_mushroom
+from yellowbrick.classifier import ClassificationReport
ESTIMATORS = {
- LinearSVC: "images/tutorial/modelselect_linear_svc.png",
- NuSVC: "images/tutorial/modelselect_nu_svc.png",
- SVC: "images/tutorial/modelselect_svc.png",
- SGDClassifier: "images/tutorial/modelselect_sgd_classifier.png",
- KNeighborsClassifier: "images/tutorial/modelselect_kneighbors_classifier.png",
- LogisticRegressionCV: "images/tutorial/modelselect_logistic_regression_cv.png",
- LogisticRegression: "images/tutorial/modelselect_logistic_regression.png",
- BaggingClassifier: "images/tutorial/modelselect_bagging_classifier.png",
- ExtraTreesClassifier: "images/tutorial/modelselect_extra_trees_classifier.png",
- RandomForestClassifier: "images/tutorial/modelselect_random_forest_classifier.png",
+ 'SVC': {
+ 'model': SVC(gamma='auto'),
+ 'path': 'images/tutorial/modelselect_svc.png'
+ },
+ 'NuSVC': {
+ 'model': NuSVC(gamma='auto'),
+ 'path': 'images/tutorial/modelselect_nu_svc.png'
+ },
+ 'LinearSVC': {
+ 'model': LinearSVC(),
+ 'path': 'images/tutorial/modelselect_linear_svc.png'
+ },
+ 'SGD': {
+ 'model': SGDClassifier(max_iter=100, tol=1e-3),
+ 'path': 'images/tutorial/modelselect_sgd_classifier.png'
+ },
+ 'KNN': {
+ 'model': KNeighborsClassifier(),
+ 'path': 'images/tutorial/modelselect_kneighbors_classifier.png'
+ },
+ 'LR': {
+ 'model': LogisticRegression(solver='lbfgs'),
+ 'path': 'images/tutorial/modelselect_logistic_regression.png'
+ },
+ 'LRCV': {
+ 'model': LogisticRegressionCV(cv=3),
+ 'path': 'images/tutorial/modelselect_logistic_regression_cv.png'
+ },
+ 'Bags': {
+ 'model': BaggingClassifier(),
+ 'path': 'images/tutorial/modelselect_bagging_classifier.png'
+ },
+ 'XTrees': {
+ 'model': ExtraTreesClassifier(n_estimators=100),
+ 'path': 'images/tutorial/modelselect_extra_trees_classifier.png'
+ },
+ 'RF': {
+ 'model': RandomForestClassifier(n_estimators=100),
+ 'path': 'images/tutorial/modelselect_random_forest_classifier.png'
+ },
}
-
-class EncodeCategorical(BaseEstimator, TransformerMixin):
- """
- Encodes a specified list of columns or all columns if None.
- """
-
- def __init__(self, columns=None):
- self.columns = [col for col in columns]
- self.encoders = None
-
- def fit(self, data, target=None):
- """
- Expects a data frame with named columns to encode.
- """
- # Encode all columns if columns is None
- if self.columns is None:
- self.columns = data.columns
-
- # Fit a label encoder for each column in the data frame
- self.encoders = {
- column: LabelEncoder().fit(data[column])
- for column in self.columns
- }
- return self
-
- def transform(self, data):
- """
- Uses the encoders to transform a data frame.
- """
- output = data.copy()
- for column, encoder in self.encoders.items():
- output[column] = encoder.transform(data[column])
-
- return output
-
-
-def load_data(path=DATA):
- dataset = pd.read_csv(path)
- features = ['shape', 'surface', 'color']
- target = ['target']
-
- X = dataset[features]
- y = dataset[target]
-
- y = LabelEncoder().fit_transform(y.values.ravel())
-
- return X, y
-
-
-def visual_model_selection(X, y, estimator, path):
+def visualize_model(X, y, estimator, path, **kwargs):
"""
Test various estimators.
- """
+ """
+ y = LabelEncoder().fit_transform(y)
model = Pipeline([
- ('label_encoding', EncodeCategorical(X.keys())),
- ('one_hot_encoder', OneHotEncoder()),
+ ('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
_, ax = plt.subplots()
# Instantiate the classification model and visualizer
- visualizer = ClassificationReport(model, ax=ax, classes=['edible', 'poisonous'])
- visualizer.fit(X, y)
+ visualizer = ClassificationReport(
+ model, classes=['edible', 'poisonous'],
+ cmap="YlGn", size=(600, 360), ax=ax, **kwargs
+ )
+ visualizer.fit(X, y)
visualizer.score(X, y)
- visualizer.poof(outpath=path)
+ visualizer.poof(outpath=path)
if __name__ == '__main__':
- X, y = load_data()
+ X, y = load_mushroom()
- for clf, path in ESTIMATORS.items():
- visual_model_selection(X, y, clf(), path)
+ for clf in ESTIMATORS.values():
+ visualize_model(X, y, clf['model'], clf['path'])
| Update Model Selection Tutorial example to use Yellowbrick datasets/download module
The model selection tutorial does not currently use the Yellowbrick hosted and modified datasets. For consistency, examples should leverage the core datasets as per the 'example datasets' documentation http://www.scikit-yb.org/en/latest/api/datasets.html.
- [x] Modify docs on model selection tutorial to align with the example datasets documentation
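A condensed sketch of what the patched tutorial now does — pulling the bundled dataset from the datasets module rather than a local CSV (single estimator shown for brevity):

```python
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder, LabelEncoder

from yellowbrick.datasets import load_mushroom
from yellowbrick.classifier import ClassificationReport

X, y = load_mushroom()
y = LabelEncoder().fit_transform(y)

model = Pipeline([
    ("one_hot_encoder", OneHotEncoder()),
    ("estimator", LogisticRegression(solver="lbfgs")),
])

visualizer = ClassificationReport(model, classes=["edible", "poisonous"], cmap="YlGn")
visualizer.fit(X, y)
visualizer.score(X, y)
visualizer.poof()
```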
| related to this documentation issue: https://github.com/DistrictDataLabs/yellowbrick/issues/441
This is a duplicate issue of https://github.com/DistrictDataLabs/yellowbrick/issues/424
Not going to close either issue, but @joelvanveluwen please note that I reworked the datasets api in https://github.com/DistrictDataLabs/yellowbrick/pull/501
| 2019-04-15T21:39:19 |
|
DistrictDataLabs/yellowbrick | 886 | DistrictDataLabs__yellowbrick-886 | [
"855"
] | 00e1c9211f61d1d60ba9e1d5e23cb8c284259759 | diff --git a/yellowbrick/cluster/silhouette.py b/yellowbrick/cluster/silhouette.py
--- a/yellowbrick/cluster/silhouette.py
+++ b/yellowbrick/cluster/silhouette.py
@@ -66,12 +66,10 @@ class SilhouetteVisualizer(ClusteringScoreVisualizer):
The axes to plot the figure on. If None is passed in the current axes
will be used (or generated if required).
-
colors : iterable or string, default: None
A collection of colors to use for each cluster group. If there are
fewer colors than cluster groups, colors will repeat. May also be a
- matplotlib colormap string.
-
+ Yellowbrick or matplotlib colormap string.
kwargs : dict
Keyword arguments that are passed to the base class and may influence
diff --git a/yellowbrick/style/colors.py b/yellowbrick/style/colors.py
--- a/yellowbrick/style/colors.py
+++ b/yellowbrick/style/colors.py
@@ -68,7 +68,7 @@ def resolve_colors(n_colors=None, colormap=None, colors=None):
truncate or multiple the colors available. If None the length of the
colors will not be modified.
- colormap : str, default: None
+ colormap : str, yellowbrick.style.palettes.ColorPalette, matplotlib.cm, default: None
The name of the matplotlib color map with which to generate colors.
colors : iterable, default: None
@@ -87,15 +87,45 @@ def resolve_colors(n_colors=None, colormap=None, colors=None):
# Work with the colormap if specified and colors is not
if colormap is not None and colors is None:
+ # Must import here to avoid recursive import
+ from .palettes import PALETTES, ColorPalette
if isinstance(colormap, str):
try:
- colormap = cm.get_cmap(colormap)
+
+ # try to get colormap from PALETTES first
+ _colormap = PALETTES.get(colormap, None)
+
+ if _colormap is None:
+
+ colormap = cm.get_cmap(colormap)
+ n_colors = n_colors or len(get_color_cycle())
+ _colors = list(map(colormap, np.linspace(0, 1, num=n_colors)))
+
+ else:
+
+ _colors = ColorPalette(_colormap).as_rgb()
+ n_colors = n_colors or len(_colors)
+
except ValueError as e:
+
raise YellowbrickValueError(e)
+ # if yellowbrick color palette is provided as colormap
+ elif isinstance(colormap, ColorPalette):
- n_colors = n_colors or len(get_color_cycle())
- _colors = list(map(colormap, np.linspace(0, 1, num=n_colors)))
+ _colors = colormap.as_rgb()
+ n_colors = n_colors or len(_colors)
+
+ # if matplotlib color palette is provided as colormap
+ elif isinstance(colormap, mpl.colors.Colormap):
+ n_colors = n_colors or len(get_color_cycle())
+ _colors = list(map(colormap, np.linspace(0, 1, num=n_colors)))
+ else:
+ raise YellowbrickValueError(
+ "Colormap type {} is not recognized. Possible types are: {}"
+ .format(type(colormap), ', '.join(['yellowbrick.style.ColorPalette,',
+ 'matplotlib.cm,',
+ 'str'])))
# Work with the color list
elif colors is not None:
| diff --git a/tests/baseline_images/test_style/test_colors/test_integrated_yb_colormap.png b/tests/baseline_images/test_style/test_colors/test_integrated_yb_colormap.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_style/test_colors/test_integrated_yb_colormap.png differ
diff --git a/tests/test_style/test_colors.py b/tests/test_style/test_colors.py
--- a/tests/test_style/test_colors.py
+++ b/tests/test_style/test_colors.py
@@ -17,13 +17,18 @@
## Imports
##########################################################################
+import sys
import pytest
from matplotlib import cm
from cycler import Cycler
+from sklearn.cluster import KMeans
+from sklearn.datasets import make_blobs
+
from yellowbrick.style.colors import *
from yellowbrick.style.palettes import ColorPalette, PALETTES
+from yellowbrick.cluster.silhouette import SilhouetteVisualizer
from tests.base import VisualTestCase
@@ -205,6 +210,90 @@ def test_colormap_cmap(self):
(0.8, 0.8, 0.8, 1.0)
]
+ def test_colormap_palette_mpl(self):
+ """
+ Assert that supplying a maptlotlib palette as colormap works
+ """
+ colormap = cm.get_cmap('nipy_spectral')
+ colors = resolve_colors(colormap=colormap)
+ assert colors == [
+ (0.0, 0.0, 0.0, 1.0),
+ (0.0, 0.0, 0.8667, 1.0),
+ (0.0, 0.6667, 0.5333, 1.0),
+ (0.0, 1.0, 0.0, 1.0),
+ (1.0, 0.6, 0.0, 1.0),
+ (0.8, 0.8, 0.8, 1.0)
+ ]
+
+ def test_integrated_yb_colormap(self):
+ """
+ Assert silhouette plot colormap can be set with a yellowbrick palette
+ """
+ # Generate a blobs data set
+ X, y = make_blobs(
+ n_samples=1000, n_features=12, centers=8, shuffle=False, random_state=0
+ )
+ visualizer = SilhouetteVisualizer(KMeans(random_state=0), colormap='neural_paint')
+ visualizer.fit(X)
+ visualizer.poof()
+
+ tol = 3.2 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.143
+ self.assert_images_similar(visualizer, remove_legend=True, tol=tol)
+
+ def test_colormap_palette_yb(self):
+ """
+ Assert that supplying a yellowbrick palette as colormap works
+ """
+ colormap = ColorPalette('neural_paint')
+ assert resolve_colors(colormap=colormap) == [
+ (0.08627450980392157, 0.44313725490196076, 0.5725490196078431),
+ (0.43137254901960786, 0.4588235294117647, 0.2823529411764706),
+ (0.7725490196078432, 0.6352941176470588, 0.6705882352941176),
+ (0.0, 0.8, 1.0),
+ (0.8705882352941177, 0.47058823529411764, 0.6823529411764706),
+ (1.0, 0.8, 0.6),
+ (0.23921568627450981, 0.24705882352941178, 0.25882352941176473),
+ (1.0, 1.0, 0.8)
+ ]
+
+ def test_colormap_cmap_with_colors(self):
+ """
+ Assert that colors overrides a mpl colormap if both are provided
+ """
+ colormap = cm.get_cmap('nipy_spectral')
+ overriding_colors = [
+ (0.0, 0.0, 0.0, 1.0),
+ (0.0, 0.6444666666666666, 0.7333666666666667, 1.0),
+ (0.7999666666666666, 0.9777666666666667, 0.0, 1.0),
+ (0.8, 0.8, 0.8, 1.0)
+ ]
+ with pytest.warns(Warning, match="both colormap and colors specified"):
+ colors = resolve_colors(colormap=colormap, colors=overriding_colors)
+ assert colors == overriding_colors
+
+ def test_colormap_palette_yb_colors(self):
+ """
+ Assert that colors overrides a yellowbrick colormap if both are provided
+ """
+ colormap = ColorPalette('neural_paint')
+ overriding_colors = [
+ (0.0, 0.0, 0.0, 1.0),
+ (0.0, 0.6444666666666666, 0.7333666666666667, 1.0),
+ (0.7999666666666666, 0.9777666666666667, 0.0, 1.0),
+ (0.8, 0.8, 0.8, 1.0)
+ ]
+ with pytest.warns(Warning, match="both colormap and colors specified"):
+ colors = resolve_colors(colormap=colormap, colors=overriding_colors)
+ assert colors == overriding_colors
+
+ def test_colormap_invalid_type(self):
+ """
+ Exception raised when invalid colormap type is supplied
+ """
+ with pytest.raises(YellowbrickValueError):
+ a = lambda x: x + 1
+ resolve_colors(colormap=a)
+
def test_colors(self):
"""
Test passing in a list of colors
| resolve_colors() needs a way to be able to pull colormaps from PALETTES
The func `resolve_colors()` in https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/style/colors.py does not have any way to use colors defined in https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/style/palettes.py
See conversation here for some background https://github.com/DistrictDataLabs/yellowbrick/pull/837#discussion_r283103118
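Based on the patch and tests above, a short sketch of the behavior this change enables — a Yellowbrick palette name (or a `ColorPalette` instance) can now be passed where previously only a matplotlib colormap was accepted:

```python
from yellowbrick.style.colors import resolve_colors
from yellowbrick.style.palettes import ColorPalette

# A Yellowbrick palette name is looked up in PALETTES before matplotlib's cm
colors = resolve_colors(n_colors=4, colormap="neural_paint")

# A ColorPalette instance works as well
colors = resolve_colors(colormap=ColorPalette("neural_paint"))

# e.g. SilhouetteVisualizer(KMeans(random_state=0), colormap="neural_paint")
```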
| Hi @mgarod, I was assigned to this issue by @lwgray.
Is this how the issue was supposed to be resolved?
@thisHermit did you open a PR?
@lwgray, just pulled up a PR. | 2019-06-16T10:52:17 |
DistrictDataLabs/yellowbrick | 904 | DistrictDataLabs__yellowbrick-904 | [
"318"
] | 1bb171cafb468ea6867668099c1b4a0f23db2b79 | diff --git a/yellowbrick/features/rfecv.py b/yellowbrick/features/rfecv.py
--- a/yellowbrick/features/rfecv.py
+++ b/yellowbrick/features/rfecv.py
@@ -171,7 +171,7 @@ def fit(self, X, y=None):
else:
step = int(self.step)
- if step < 0:
+ if step <= 0:
raise YellowbrickValueError("step must be >0")
# Create the RFE model
| diff --git a/tests/README.md b/tests/README.md
--- a/tests/README.md
+++ b/tests/README.md
@@ -2,7 +2,7 @@
*Welcome to the Yellowbrick tests!*
-If you're looking for information about how to use Yellowbrick, for our contributor's guide, for examples and teaching resources, for answers to frequently asked questions, and more, please visit the latest version of our documentation at [www.scikit-yb.org](https://www.scikit-yb.org/).
+If you're looking for information about how to use Yellowbrick, for our contributor's guide, for examples and teaching resources, for answers to frequently asked questions, and more, please visit the latest version of our documentation at [www.scikit-yb.org](https://www.scikit-yb.org/).
## Running Yellowbrick Tests
@@ -20,7 +20,7 @@ Tests can then be run as follows from the project `root`:
$ make test
```
-The Makefile uses the `pytest` runner and testing suite as well as the coverage library.
+The Makefile uses the `pytest` runner and testing suite as well as the coverage library.
## Adding a Test for Your Visualizer
@@ -28,11 +28,11 @@ The `tests` package mirrors the yellowbrick package in structure and also contai
### Visual Tests
-The primary test you should create is simply to test your visualizer from end to end and make sure that no exceptions occur.
+The primary test you should create is simply to test your visualizer from end to end and make sure that no exceptions occur.
-Visual tests are notoriously difficult to create --- how do you test a visualization or figure? Moreover, testing scikit-learn models with real data can consume a lot of memory. To assist with this, we have two primary helpers, `VisualTestCase` and the `yellowbrick.datasets` module.
+Visual tests are notoriously difficult to create --- how do you test a visualization or figure? Moreover, testing scikit-learn models with real data can consume a lot of memory. To assist with this, we have two primary helpers, `VisualTestCase` and the `yellowbrick.datasets` module.
-Leverage these helpers to create your unittest as follows:
+Leverage these helpers to create your tests as follows:
```python
import pytest
@@ -64,7 +64,7 @@ Writing an image-based comparison test is only a little more difficult than the
The main consideration is that you must specify the “baseline” (i.e. expected) image in the `tests/baseline_images/` folder structure.
-For example, let's say you create your unittest in `tests/test_regressor/test_myvisualizer.py` as follows:
+For example, let's say you create your tests in `tests/test_regressor/test_myvisualizer.py` as follows:
```python
from tests.base import VisualTestCase
diff --git a/tests/__init__.py b/tests/__init__.py
--- a/tests/__init__.py
+++ b/tests/__init__.py
@@ -17,7 +17,6 @@
## Imports
##########################################################################
-import unittest
import matplotlib
## IMPORTANT! Set matplotlib to use the Agg backend before imported anywhere!
@@ -35,13 +34,13 @@
## Initialization Tests
##########################################################################
-class InitializationTests(unittest.TestCase):
+class TestInitialization(object):
def test_sanity(self):
"""
Test that tests work by confirming 7-3 = 4
"""
- self.assertEqual(7-3, 4, "The world went wrong!!")
+ assert 7 - 3 == 4, "The world went wrong!!"
def test_import(self):
"""
@@ -58,6 +57,6 @@ def test_version(self):
"""
try:
import yellowbrick as yb
- self.assertEqual(yb.__version__, EXPECTED_VERSION)
+ assert yb.__version__ == EXPECTED_VERSION
except ImportError:
self.fail("Could not import the yellowbrick library!")
diff --git a/tests/base.py b/tests/base.py
--- a/tests/base.py
+++ b/tests/base.py
@@ -15,24 +15,31 @@
##########################################################################
import os
-import inspect
import sys
+import inspect
-import unittest
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import ticker
-from matplotlib import rcParams
-
from matplotlib.testing.compare import compare_images
from yellowbrick.exceptions import ImageComparisonFailure
+
+##########################################################################
+## Environment
+##########################################################################
+
def is_windows_or_conda():
+ """
+ Simple detection mechanism to determine if the tests are running in a
+ win32 or Anaconda/Miniconda environment.
+ """
is_windows = sys.platform == 'win32'
is_conda = os.path.exists(os.path.join(sys.prefix, 'conda-meta'))
return is_windows or is_conda
+
##########################################################################
## Module Constants
##########################################################################
@@ -43,45 +50,45 @@ def is_windows_or_conda():
BASELINE_IMAGES = os.path.join(TESTS, "baseline_images")
IS_WINDOWS_OR_CONDA = is_windows_or_conda()
+
##########################################################################
## Visual Test Case
##########################################################################
-class VisualTestCase(unittest.TestCase):
-
- @classmethod
- def setUpClass(klass):
- """
- This setup function is available to ensure that all CI tests
- that do visual work are set up correctly.
+class VisualTestCase(object):
+ """
+ The visual test case class ensures that all tests inside of the class
+ can execute image similarity tests inside of a clean matplotlib global
+ figure.
+ """
- Note:
+ def setup_method(self):
"""
- super(VisualTestCase, klass).setUpClass()
+ Before a visual test case method is run, ensure that the previous
+ figure is closed and the current axes are cleared.
- def setUp(self):
- """
- Close all previous plots
+ See: https://docs.pytest.org/en/latest/xunit_setup.html
"""
# Reset the matplotlib environment
- plt.cla() # clear current axis
- plt.clf() # clear current figure
- plt.close("all") # close all existing plots
+ plt.cla() # clear current axis
+ plt.clf() # clear current figure
+ plt.close("all") # close all existing plots
- # Travis-CI does not have san-serif
- rcParams['font.family'] = 'DejaVu Sans'
+ # Travis-CI does not have san-serif so ensure standard fonts are used.
+ # Note that this must be set before each test otherwise it will be reset by
+ # the Yellowbrick styles.
+ mpl.rcParams['font.family'] = 'DejaVu Sans'
- super(VisualTestCase, self).setUp()
-
- def assert_images_similar(self, visualizer=None, ax=None, tol=0.01, windows_tol=None, **kwargs):
+ def assert_images_similar(self, visualizer=None, ax=None,
+ tol=0.01, windows_tol=None, **kwargs):
"""Accessible testing method for testing generation of a Visualizer.
Requires the placement of a baseline image for comparison in the
tests/baseline_images folder that corresponds to the module path of the
VisualTestCase being evaluated. The name of the image corresponds to
- the unittest function where "self.assert_images_similar" is called.
+ the test function where "self.assert_images_similar" is called.
- For example, calling "assert_images_similar" in the unittest
+ For example, calling "assert_images_similar" in the test function
"test_class_report" in tests.test_classifier.test_class_balance would
require placement a baseline image at:
@@ -93,20 +100,21 @@ def assert_images_similar(self, visualizer=None, ax=None, tol=0.01, windows_tol=
actual_images/
- visualizer : yellowbrick visualizer
+ visualizer : yellowbrick visualizer, default: None
An instantiated yellowbrick visualizer that has been fitted,
transformed and had all operations except for poof called on it.
ax : matplotlib Axes, default: None
The axis to plot the figure on.
- tol : float
+ tol : float, default: 0.01
The tolerance (a color value difference, where 255 is the
maximal difference). The test fails if the average pixel
difference is greater than this value.
windows_tol: float, default: None
- Similar to the tol parameter, but targeted for testing on a windows environment.
+ Similar to the tol parameter, but targeted for testing on a
+ windows environment.
kwargs : dict
Options to pass to the ImageComparison class.
@@ -116,7 +124,12 @@ def assert_images_similar(self, visualizer=None, ax=None, tol=0.01, windows_tol=
# Build and execute the image comparison
compare = ImageComparison(
- inspect.stack(), visualizer=visualizer, ax=ax, tol=tol, windows_tol=windows_tol, **kwargs
+ inspect.stack(),
+ visualizer=visualizer,
+ ax=ax,
+ tol=tol,
+ windows_tol=windows_tol,
+ **kwargs
)
compare()
@@ -182,8 +195,8 @@ class ImageComparison(object):
ValueError : at least one of visualizer or ax must be specified.
"""
- def __init__(self, stack, visualizer=None, ax=None, tol=0.01,
- windows_tol=0.01, ext=".png", remove_ticks=True,
+ def __init__(self, stack, visualizer=None, ax=None, tol=0.01,
+ windows_tol=0.01, ext=".png", remove_ticks=True,
remove_title=True, remove_labels=True, remove_legend=True):
# Ensure we have something to draw on
@@ -216,10 +229,9 @@ def __init__(self, stack, visualizer=None, ax=None, tol=0.01,
# Set the error tolerance depending on the os
if os.name == "nt" and windows_tol is not None:
self.tol = windows_tol
- else:
+ else:
self.tol = tol
-
# Save other image comparison properties
self.ext = ext
self.remove_ticks = remove_ticks
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -18,9 +18,34 @@
##########################################################################
import os
+import matplotlib as mpl
from pytest_flakes import FlakesItem
+
+##########################################################################
+## Configure tests
+##########################################################################
+
+def pytest_configure(config):
+ """
+ This function is called by pytest for every plugin and conftest file
+ after the command line arguments have been passed but before the
+ session object is created and all of the tests are created. It is used
+ to set a global configuration before all tests are run.
+
+ Yellowbrick uses this function primarily to ensure that the matplotlib
+ environment is setup correctly for all tests.
+ """
+ # This is redundant with the line in tests/__init__.py but ensures that
+ # the backend is correctly set across all tests and plugins.
+ mpl.use('Agg')
+
+ # Travis-CI does not have san-serif so ensure standard fonts are used.
+ # TODO: this is currently being reset before each test; needs fixing.
+ mpl.rcParams['font.family'] = 'DejaVu Sans'
+
+
##########################################################################
## PyTest Hooks
##########################################################################
diff --git a/tests/dataset.py b/tests/dataset.py
--- a/tests/dataset.py
+++ b/tests/dataset.py
@@ -105,7 +105,7 @@
class DatasetMixin(object):
"""
- Mixin for unittest.TestCase class to download datasets from S3 for
+ Mixin for VisualTestCase class to download datasets from S3 for
testing real world machine learning visual diagnostics.
"""
diff --git a/tests/requirements.txt b/tests/requirements.txt
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
@@ -15,7 +15,7 @@ requests>=2.18.3
# Optional Testing Dependencies
nltk>=3.2
-# spacy>=2.0.18
+# spacy>=2.0.18
pandas>=0.20
umap-learn==0.3
numba==0.42
diff --git a/tests/test_base.py b/tests/test_base.py
--- a/tests/test_base.py
+++ b/tests/test_base.py
@@ -31,6 +31,7 @@
from sklearn.datasets import make_classification
+
##########################################################################
## Base Cases
##########################################################################
diff --git a/tests/test_bestfit.py b/tests/test_bestfit.py
--- a/tests/test_bestfit.py
+++ b/tests/test_bestfit.py
@@ -35,49 +35,49 @@
## Best fit tests
##########################################################################
-class BestFitTests(VisualTestCase):
+class TestBestFit(VisualTestCase):
def test_bad_estimator(self):
"""
Test that a bad estimator name raises a value error.
"""
- fig, axe = plt.subplots()
+ fig, ax = plt.subplots()
X, y = ANSCOMBE[1]
- with self.assertRaises(YellowbrickValueError):
- draw_best_fit(X, y, axe, 'pepper')
+ with pytest.raises(YellowbrickValueError):
+ draw_best_fit(X, y, ax, 'pepper')
def test_ensure_same_length(self):
"""
Ensure that vectors of different lengths raise
"""
- fig, axe = plt.subplots()
+ fig, ax = plt.subplots()
X = np.array([1, 2, 3, 5, 8, 10, 2])
y = np.array([1, 3, 6, 2])
- with self.assertRaises(YellowbrickValueError):
- draw_best_fit(X, y, axe, 'linear')
+ with pytest.raises(YellowbrickValueError):
+ draw_best_fit(X, y, ax, 'linear')
- with self.assertRaises(YellowbrickValueError):
- draw_best_fit(X[:,np.newaxis], y, axe, 'linear')
+ with pytest.raises(YellowbrickValueError):
+ draw_best_fit(X[:,np.newaxis], y, ax, 'linear')
@pytest.mark.filterwarnings('ignore')
- def testdraw_best_fit(self):
+ def test_draw_best_fit(self):
"""
Test that drawing a best fit line works.
"""
- fig, axe = plt.subplots()
+ fig, ax = plt.subplots()
X, y = ANSCOMBE[0]
- self.assertEqual(axe, draw_best_fit(X, y, axe, 'linear'))
- self.assertEqual(axe, draw_best_fit(X, y, axe, 'quadratic'))
+ assert ax == draw_best_fit(X, y, ax, 'linear')
+ assert ax == draw_best_fit(X, y, ax, 'quadratic')
##########################################################################
## Estimator tests
##########################################################################
-class EstimatorTests(VisualTestCase):
+class TestEstimator(VisualTestCase):
"""
Test the estimator functions for best fit lines.
"""
@@ -92,9 +92,8 @@ def test_linear(self):
X = X[:,np.newaxis]
model = fit_linear(X, y)
- self.assertIsNotNone(model)
- self.assertIsInstance(model, LinearRegression)
-
+ assert model is not None
+ assert isinstance(model, LinearRegression)
def test_quadratic(self):
"""
@@ -106,8 +105,8 @@ def test_quadratic(self):
X = X[:,np.newaxis]
model = fit_quadratic(X, y)
- self.assertIsNotNone(model)
- self.assertIsInstance(model, Pipeline)
+ assert model is not None
+ assert isinstance(model, Pipeline)
def test_select_best(self):
"""
@@ -119,8 +118,8 @@ def test_select_best(self):
X = X[:,np.newaxis]
model = fit_select_best(X, y)
- self.assertIsNotNone(model)
- self.assertIsInstance(model, Pipeline)
+ assert model is not None
+ assert isinstance(model, Pipeline)
X, y = ANSCOMBE[3]
X = np.array(X)
@@ -128,5 +127,5 @@ def test_select_best(self):
X = X[:,np.newaxis]
model = fit_select_best(X, y)
- self.assertIsNotNone(model)
- self.assertIsInstance(model, LinearRegression)
+ assert model is not None
+ assert isinstance(model, LinearRegression)
diff --git a/tests/test_classifier/__init__.py b/tests/test_classifier/__init__.py
--- a/tests/test_classifier/__init__.py
+++ b/tests/test_classifier/__init__.py
@@ -1,6 +1,13 @@
-#Backend must be set before first use.
-# Setting backend here allows us to run tests just in this folder, without running the whole yellowbrick.tests folder
-# This command will have no effect if backend has already been set previously.
-import matplotlib
-matplotlib.use('Agg')
+# tests.test_classifier
+# Tests for the classifier visualizers
+#
+# ID: __init__.py [] [email protected] $
+
+"""
+Tests for the classifier visualizers
+"""
+
+##########################################################################
+## Imports
+##########################################################################
diff --git a/tests/test_classifier/test_class_prediction_error.py b/tests/test_classifier/test_class_prediction_error.py
--- a/tests/test_classifier/test_class_prediction_error.py
+++ b/tests/test_classifier/test_class_prediction_error.py
@@ -45,7 +45,7 @@
##########################################################################
-class ClassPredictionErrorTests(VisualTestCase, DatasetMixin):
+class TestClassPredictionError(VisualTestCase, DatasetMixin):
def test_integration_class_prediction_error(self):
"""
@@ -79,7 +79,7 @@ def test_classes_greater_than_indices(self):
"""
model = LinearSVC()
model.fit(X, y)
- with self.assertRaises(ModelError):
+ with pytest.raises(ModelError):
visualizer = ClassPredictionError(
model, classes=["A", "B", "C", "D", "E"]
)
@@ -91,7 +91,7 @@ def test_classes_less_than_indices(self):
"""
model = LinearSVC()
model.fit(X, y)
- with self.assertRaises(NotImplementedError):
+ with pytest.raises(NotImplementedError):
visualizer = ClassPredictionError(model, classes=["A"])
visualizer.score(X, y)
@@ -109,7 +109,7 @@ def test_class_type(self):
X, y = make_multilabel_classification()
model = RandomForestClassifier()
model.fit(X, y)
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
visualizer = ClassPredictionError(model)
visualizer.score(X, y)
diff --git a/tests/test_classifier/test_classification_report.py b/tests/test_classifier/test_classification_report.py
--- a/tests/test_classifier/test_classification_report.py
+++ b/tests/test_classifier/test_classification_report.py
@@ -44,7 +44,7 @@
##########################################################################
@pytest.mark.usefixtures("binary", "multiclass")
-class ClassificationReportTests(VisualTestCase, DatasetMixin):
+class TestClassificationReport(VisualTestCase, DatasetMixin):
"""
ClassificationReport visualizer tests
"""
diff --git a/tests/test_classifier/test_confusion_matrix.py b/tests/test_classifier/test_confusion_matrix.py
--- a/tests/test_classifier/test_confusion_matrix.py
+++ b/tests/test_classifier/test_confusion_matrix.py
@@ -68,7 +68,7 @@ def digits(request):
##########################################################################
@pytest.mark.usefixtures("digits")
-class ConfusionMatrixTests(VisualTestCase):
+class TestConfusionMatrix(VisualTestCase):
"""
Test ConfusionMatrix visualizer
"""
diff --git a/tests/test_classifier/test_rocauc.py b/tests/test_classifier/test_rocauc.py
--- a/tests/test_classifier/test_rocauc.py
+++ b/tests/test_classifier/test_rocauc.py
@@ -52,12 +52,29 @@ class FakeClassifier(BaseEstimator, ClassifierMixin):
pass
+def assert_valid_rocauc_scores(visualizer, nscores=4):
+ """
+ Assertion helper to ensure scores are correctly computed
+ """
+ __tracebackhide__ = True
+ assert len(visualizer.fpr.keys()) == nscores
+ assert len(visualizer.tpr.keys()) == nscores
+ assert len(visualizer.roc_auc.keys()) == nscores
+
+ for k in (0, 1, "micro", "macro"):
+ assert k in visualizer.fpr
+ assert k in visualizer.tpr
+ assert k in visualizer.roc_auc
+ assert len(visualizer.fpr[k]) == len(visualizer.tpr[k])
+ assert 0.0 < visualizer.roc_auc[k] < 1.0
+
+
##########################################################################
## Tests
##########################################################################
@pytest.mark.usefixtures("binary", "multiclass")
-class ROCAUCTests(VisualTestCase, DatasetMixin):
+class TestROCAUC(VisualTestCase, DatasetMixin):
def test_binary_probability(self):
"""
@@ -74,20 +91,10 @@ def test_binary_probability(self):
assert 0 <= s <= 1
# Check the scores
- self.assertEqual(len(visualizer.fpr.keys()), 4)
- self.assertEqual(len(visualizer.tpr.keys()), 4)
- self.assertEqual(len(visualizer.roc_auc.keys()), 4)
-
- for k in (0, 1, "micro", "macro"):
- self.assertIn(k, visualizer.fpr)
- self.assertIn(k, visualizer.tpr)
- self.assertIn(k, visualizer.roc_auc)
- self.assertEqual(len(visualizer.fpr[k]), len(visualizer.tpr[k]))
- self.assertGreater(visualizer.roc_auc[k], 0.0)
- self.assertLess(visualizer.roc_auc[k], 1.0)
+ assert_valid_rocauc_scores(visualizer)
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_binary_probability_decision(self):
@@ -105,20 +112,10 @@ def test_binary_probability_decision(self):
assert 0 <= s <= 1
# Check the scores
- self.assertEqual(len(visualizer.fpr.keys()), 4)
- self.assertEqual(len(visualizer.tpr.keys()), 4)
- self.assertEqual(len(visualizer.roc_auc.keys()), 4)
-
- for k in (0, 1, "micro", "macro"):
- self.assertIn(k, visualizer.fpr)
- self.assertIn(k, visualizer.tpr)
- self.assertIn(k, visualizer.roc_auc)
- self.assertEqual(len(visualizer.fpr[k]), len(visualizer.tpr[k]))
- self.assertGreater(visualizer.roc_auc[k], 0.0)
- self.assertLess(visualizer.roc_auc[k], 1.0)
+ assert_valid_rocauc_scores(visualizer)
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_binary_decision(self):
@@ -136,13 +133,13 @@ def test_binary_decision(self):
assert 0 <= s <= 1
# Check the scores
- self.assertEqual(len(visualizer.fpr.keys()), 1)
- self.assertEqual(len(visualizer.tpr.keys()), 1)
- self.assertEqual(len(visualizer.roc_auc.keys()), 1)
+ assert len(visualizer.fpr.keys()) == 1
+ assert len(visualizer.tpr.keys()) == 1
+ assert len(visualizer.roc_auc.keys()) == 1
# Compare the images
# NOTE: increased tolerance for both AppVeyor and Travis CI tests
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=10)
def test_binary_micro_error(self):
@@ -154,7 +151,7 @@ def test_binary_micro_error(self):
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Ensure score raises error (micro curves aren't defined for binary decisions)
- with self.assertRaises(ModelError):
+ with pytest.raises(ModelError):
visualizer.score(self.binary.X.test, self.binary.y.test)
def test_binary_macro_error(self):
@@ -166,7 +163,7 @@ def test_binary_macro_error(self):
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Ensure score raises error (macro curves aren't defined for binary decisions)
- with self.assertRaises(ModelError):
+ with pytest.raises(ModelError):
visualizer.score(self.binary.X.test, self.binary.y.test)
def test_binary_per_class_error(self):
@@ -178,7 +175,7 @@ def test_binary_per_class_error(self):
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Ensure score raises error (per_class curves not defined for binary decisions)
- with self.assertRaises(ModelError):
+ with pytest.raises(ModelError):
visualizer.score(self.binary.X.test, self.binary.y.test)
def test_multiclass_rocauc(self):
@@ -196,20 +193,10 @@ def test_multiclass_rocauc(self):
assert 0 <= s <= 1
# Check the scores
- self.assertEqual(len(visualizer.fpr.keys()), 8)
- self.assertEqual(len(visualizer.tpr.keys()), 8)
- self.assertEqual(len(visualizer.roc_auc.keys()), 8)
-
- for k in (0, 1, "micro", "macro"):
- self.assertIn(k, visualizer.fpr)
- self.assertIn(k, visualizer.tpr)
- self.assertIn(k, visualizer.roc_auc)
- self.assertEqual(len(visualizer.fpr[k]), len(visualizer.tpr[k]))
- self.assertGreater(visualizer.roc_auc[k], 0.0)
- self.assertLess(visualizer.roc_auc[k], 1.0)
+ assert_valid_rocauc_scores(visualizer, nscores=8)
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_rocauc_quickmethod(self):
@@ -232,15 +219,15 @@ def test_rocauc_no_micro(self):
# Score the visualizer (should be the macro average)
s = visualizer.score(self.binary.X.test, self.binary.y.test)
- self.assertAlmostEqual(s, 0.8)
+ assert s == pytest.approx(0.8)
# Assert that there is no micro score
- self.assertNotIn("micro", visualizer.fpr)
- self.assertNotIn("micro", visualizer.tpr)
- self.assertNotIn("micro", visualizer.roc_auc)
+ assert "micro" not in visualizer.fpr
+ assert "micro" not in visualizer.tpr
+ assert "micro" not in visualizer.roc_auc
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_rocauc_no_macro(self):
@@ -253,15 +240,15 @@ def test_rocauc_no_macro(self):
# Score the visualizer (should be the micro average)
s = visualizer.score(self.binary.X.test, self.binary.y.test)
- self.assertAlmostEqual(s, 0.8)
+ assert s == pytest.approx(0.8)
# Assert that there is no macro score
- self.assertNotIn("macro", visualizer.fpr)
- self.assertNotIn("macro", visualizer.tpr)
- self.assertNotIn("macro", visualizer.roc_auc)
+ assert "macro" not in visualizer.fpr
+ assert "macro" not in visualizer.tpr
+ assert "macro" not in visualizer.roc_auc
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_rocauc_no_macro_no_micro(self):
@@ -274,20 +261,20 @@ def test_rocauc_no_macro_no_micro(self):
# Score the visualizer (should be the F1 score)
s = visualizer.score(self.binary.X.test, self.binary.y.test)
- self.assertAlmostEqual(s, 0.8)
+ assert s == pytest.approx(0.8)
# Assert that there is no macro score
- self.assertNotIn("macro", visualizer.fpr)
- self.assertNotIn("macro", visualizer.tpr)
- self.assertNotIn("macro", visualizer.roc_auc)
+ assert "macro" not in visualizer.fpr
+ assert "macro" not in visualizer.tpr
+ assert "macro" not in visualizer.roc_auc
# Assert that there is no micro score
- self.assertNotIn("micro", visualizer.fpr)
- self.assertNotIn("micro", visualizer.tpr)
- self.assertNotIn("micro", visualizer.roc_auc)
+ assert "micro" not in visualizer.fpr
+ assert "micro" not in visualizer.tpr
+ assert "micro" not in visualizer.roc_auc
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_rocauc_no_classes(self):
@@ -300,16 +287,16 @@ def test_rocauc_no_classes(self):
# Score the visualizer (should be the micro average)
s = visualizer.score(self.binary.X.test, self.binary.y.test)
- self.assertAlmostEqual(s, 0.8)
+ assert s == pytest.approx(0.8)
# Assert that there still are per-class scores
for c in (0, 1):
- self.assertIn(c, visualizer.fpr)
- self.assertIn(c, visualizer.tpr)
- self.assertIn(c, visualizer.roc_auc)
+ assert c in visualizer.fpr
+ assert c in visualizer.tpr
+ assert c in visualizer.roc_auc
# Compare the images
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=TOL)
def test_rocauc_no_curves(self):
@@ -336,7 +323,7 @@ def test_rocauc_label_encoded(self):
# Score the visualizer
visualizer.score(self.multiclass.X.test, self.multiclass.y.test)
- self.assertEqual(list(visualizer.classes_), class_labels)
+ assert list(visualizer.classes_) == class_labels
def test_rocauc_not_label_encoded(self):
"""
@@ -352,7 +339,7 @@ def test_rocauc_not_label_encoded(self):
visualizer.fit(self.multiclass.X.train, y_train)
# Confirm that y_train and y_test have the same targets before calling score
- self.assertEqual(set(y_train), set(y_test))
+ assert set(y_train) == set(y_test)
def test_binary_decision_function_rocauc(self):
"""
@@ -360,7 +347,7 @@ def test_binary_decision_function_rocauc(self):
"""
# Load the model and assert there is no predict_proba method.
model = LinearSVC()
- with self.assertRaises(AttributeError):
+ with pytest.raises(AttributeError):
model.predict_proba
# Fit model and visualizer
@@ -384,7 +371,7 @@ def test_multi_decision_function_rocauc(self):
"""
# Load the model and assert there is no predict_proba method.
model = LinearSVC()
- with self.assertRaises(AttributeError):
+ with pytest.raises(AttributeError):
model.predict_proba
# Fit model and visualizer
@@ -412,7 +399,7 @@ def test_predict_proba_rocauc(self):
"""
# Load the model and assert there is no decision_function method.
model = GaussianNB()
- with self.assertRaises(AttributeError):
+ with pytest.raises(AttributeError):
model.decision_function
# Fit model and visualizer
@@ -444,5 +431,5 @@ def test_no_scoring_function(self):
Test ROCAUC with classifiers that have no scoring method
"""
visualizer = ROCAUC(FakeClassifier())
- with self.assertRaises(ModelError):
+ with pytest.raises(ModelError):
visualizer._get_y_scores(self.binary.X.train)
diff --git a/tests/test_classifier/test_threshold.py b/tests/test_classifier/test_threshold.py
--- a/tests/test_classifier/test_threshold.py
+++ b/tests/test_classifier/test_threshold.py
@@ -72,7 +72,7 @@ def test_binary_discrimination_threshold(self):
visualizer = DiscriminationThreshold(model, ax=ax, random_state=23)
visualizer.fit(X, y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
@@ -119,7 +119,7 @@ def test_pandas_integration(self):
LogisticRegression(), ax=ax, classes=classes, random_state=193
)
viz.fit(X, y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=0.1)
@@ -184,7 +184,7 @@ def test_binary_discrimination_threshold_alt_args(self):
)
visualizer.fit(X, y)
- visualizer.poof()
+ visualizer.finalize()
for metric in exclude:
assert metric not in visualizer.cv_scores_
diff --git a/tests/test_cluster/test_base.py b/tests/test_cluster/test_base.py
--- a/tests/test_cluster/test_base.py
+++ b/tests/test_cluster/test_base.py
@@ -17,7 +17,7 @@
## Imports
##########################################################################
-import unittest
+import pytest
from yellowbrick.exceptions import YellowbrickTypeError
from yellowbrick.cluster.base import ClusteringScoreVisualizer
@@ -28,30 +28,31 @@
from sklearn.cluster import KMeans, MiniBatchKMeans, AffinityPropagation
from sklearn.cluster import MeanShift, DBSCAN, Birch
+
##########################################################################
## Clustering Base Test Cases
##########################################################################
-class ClusterBaseTests(unittest.TestCase):
+class TestClusterBase(object):
+
+ @pytest.mark.parametrize("model", [
+ SVC, SVR, Ridge, RidgeCV, LinearRegression, RandomForestClassifier
+ ])
+ def test_clusterer_enforcement_raises(self, model):
+ """
+ Assert that non-cluster models raise a TypeError for cluster visualizers
+ """
+ with pytest.raises(YellowbrickTypeError):
+ ClusteringScoreVisualizer(model())
- def test_clusterer_enforcement(self):
+ @pytest.mark.parametrize("model", [
+ KMeans, MiniBatchKMeans, AffinityPropagation, MeanShift, DBSCAN, Birch
+ ])
+ def test_clusterer_enforcement(self, model):
"""
Assert that only clustering estimators can be passed to cluster viz
"""
- nomodels = [
- SVC, SVR, Ridge, RidgeCV, LinearRegression, RandomForestClassifier
- ]
-
- for nomodel in nomodels:
- with self.assertRaises(YellowbrickTypeError):
- ClusteringScoreVisualizer(nomodel())
-
- models = [
- KMeans, MiniBatchKMeans, AffinityPropagation, MeanShift, DBSCAN, Birch
- ]
-
- for model in models:
- try:
- ClusteringScoreVisualizer(model())
- except YellowbrickTypeError:
- self.fail("could not pass clustering estimator to visualizer")
+ try:
+ ClusteringScoreVisualizer(model())
+ except YellowbrickTypeError:
+ pytest.fail("could not pass clustering estimator to visualizer")
diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -143,7 +143,7 @@ def test_integrated_kmeans_elbow(self):
visualizer = KElbowVisualizer(KMeans(random_state=42), k=4, ax=ax)
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
except Exception as e:
@@ -157,7 +157,7 @@ def test_integrated_mini_batch_kmeans_elbow(self):
# NOTE #182: cannot use occupancy dataset because of memory usage
# Generate a blobs data set
- X,y = make_blobs(
+ X, y = make_blobs(
n_samples=1000, n_features=12, centers=6, shuffle=True, random_state=42
)
@@ -168,7 +168,7 @@ def test_integrated_mini_batch_kmeans_elbow(self):
MiniBatchKMeans(random_state=42), k=4, ax=ax
)
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
except Exception as e:
@@ -186,7 +186,7 @@ def test_topic_modeling_k_means(self):
visualizer = KElbowVisualizer(KMeans(), k=(4, 8))
visualizer.fit(docs)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
@@ -236,7 +236,7 @@ def test_distortion_metric(self):
expected = np.array([ 69.100065, 54.081571, 43.146921, 34.978487])
assert len(visualizer.k_scores_) == 4
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -255,7 +255,7 @@ def test_silhouette_metric(self):
expected = np.array([ 0.691636, 0.456646, 0.255174, 0.239842])
assert len(visualizer.k_scores_) == 4
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -279,13 +279,13 @@ def test_calinski_harabasz_metric(self):
40.952179227847012, 35.939494
])
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
def test_locate_elbow(self):
"""
- Test the addition of locate_elbow to an image
+ Test the addition of locate_elbow to an image
"""
X,y = make_blobs(
n_samples=1000, n_features=5, centers=3, shuffle=True, random_state=42
@@ -303,7 +303,7 @@ def test_locate_elbow(self):
4286.479848, 12463.383743, 8766.999551, 6950.08391, 5865.79722
])
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, windows_tol=2.2)
assert_array_almost_equal(visualizer.k_scores_, expected)
@@ -347,6 +347,6 @@ def test_timings(self):
# call draw again which is normally called in fit
visualizer.draw()
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
diff --git a/tests/test_cluster/test_silhouette.py b/tests/test_cluster/test_silhouette.py
--- a/tests/test_cluster/test_silhouette.py
+++ b/tests/test_cluster/test_silhouette.py
@@ -34,9 +34,9 @@
## SilhouetteVisualizer Test Cases
##########################################################################
-class SilhouetteVisualizerTests(VisualTestCase):
+class TestSilhouetteVisualizer(VisualTestCase):
"""
- Silhouette Visualizer
+ Silhouette Visualizer Tests
"""
@pytest.mark.xfail(
@@ -59,7 +59,7 @@ def test_integrated_kmeans_silhouette(self):
visualizer = SilhouetteVisualizer(KMeans(random_state=0), ax=ax)
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, remove_legend=True)
except Exception as e:
@@ -85,7 +85,7 @@ def test_integrated_mini_batch_kmeans_silhouette(self):
visualizer = SilhouetteVisualizer(MiniBatchKMeans(random_state=0), ax=ax)
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, remove_legend=True)
except Exception as e:
@@ -118,7 +118,7 @@ def test_colormap_silhouette(self):
visualizer = SilhouetteVisualizer(MiniBatchKMeans(random_state=0), ax=ax, colormap='gnuplot')
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, remove_legend=True)
except Exception as e:
@@ -145,7 +145,7 @@ def test_colors_silhouette(self):
colors=['red', 'green', 'blue', 'indigo', 'cyan', 'lavender']
)
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, remove_legend=True)
except Exception as e:
@@ -167,7 +167,7 @@ def test_colormap_as_colors_silhouette(self):
visualizer = SilhouetteVisualizer(MiniBatchKMeans(random_state=0), ax=ax, colors='cool')
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
tol = 3.2 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.143
self.assert_images_similar(visualizer, remove_legend=True, tol=tol)
diff --git a/tests/test_contrib/test_classifier/test_boundaries.py b/tests/test_contrib/test_classifier/test_boundaries.py
--- a/tests/test_contrib/test_classifier/test_boundaries.py
+++ b/tests/test_contrib/test_classifier/test_boundaries.py
@@ -66,7 +66,7 @@
@pytest.mark.filterwarnings('ignore')
-class DecisionBoundariesVisualizerTest(VisualTestCase):
+class TestDecisionBoundariesVisualizer(VisualTestCase):
"""
Test DecisionBoundariesVisualizer
"""
@@ -102,24 +102,23 @@ def test_init(self):
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(model)
- self.assertEquals(viz.step_size, 0.0025)
- self.assertEqual(viz.name, 'KNeighborsClassifier')
- self.assertEqual(viz.estimator, model)
+ assert viz.step_size == 0.0025
+ assert viz.name == 'KNeighborsClassifier'
+ assert viz.estimator is model
- self.assertIsNone(viz.classes_)
- self.assertIsNone(viz.features_)
- self.assertIsNotNone(viz.markers)
- self.assertIsNotNone(viz.scatter_alpha)
- self.assertTrue(viz.show_scatter)
-
- self.assertIsNone(viz.Z)
- self.assertIsNone(viz.xx)
- self.assertIsNone(viz.yy)
- self.assertIsNone(viz.class_labels)
- self.assertIsNone(viz.title)
- self.assertIsNone(viz.x)
- self.assertIsNone(viz.y)
+ assert viz.classes_ is None
+ assert viz.features_ is None
+ assert viz.markers is not None
+ assert viz.scatter_alpha is not None
+ assert viz.show_scatter is True
+ assert viz.Z is None
+ assert viz.xx is None
+ assert viz.yy is None
+ assert viz.class_labels is None
+ assert viz.title is None
+ assert viz.x is None
+ assert viz.y is None
def test_scatter_xy_and_features_raise_error(self):
"""
@@ -128,7 +127,7 @@ def test_scatter_xy_and_features_raise_error(self):
model = neighbors.KNeighborsClassifier(3)
features = ["temperature", "relative_humidity", "light"]
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
DecisionBoundariesVisualizer(
model, features=features, x='one', y='two'
)
@@ -139,8 +138,7 @@ def test_scatter_xy_changes_to_features(self):
"""
model = neighbors.KNeighborsClassifier(3)
visualizer = DecisionBoundariesVisualizer(model, x='one', y='two')
- self.assertEquals(visualizer.features_, ['one', 'two'])
-
+ assert visualizer.features_ == ['one', 'two']
def test_fit(self):
"""
@@ -154,17 +152,17 @@ def test_fit(self):
fitted_viz = viz.fit(X_two_cols, y=y)
# assert that classes and labels are established
- self.assertEqual(fitted_viz.classes_, {0: '0', 1: '1', 2: '2', 3: '3'})
- self.assertEqual(fitted_viz.features_, ['Feature One', 'Feature Two'])
+ assert fitted_viz.classes_ == {0: '0', 1: '1', 2: '2', 3: '3'}
+ assert fitted_viz.features_ == ['Feature One', 'Feature Two']
# assert that the fit method is called
model.fit.assert_called_once_with(X_two_cols, y)
# mock object is called twice in predict and reshape
- self.assertEqual(len(model.predict.mock_calls), 2)
+ assert len(model.predict.mock_calls) == 2
# test that attrs are set
- self.assertIsNotNone(fitted_viz.ax)
- self.assertIsNotNone(fitted_viz.Z_shape)
+ assert fitted_viz.ax is not None
+ assert fitted_viz.Z_shape is not None
def test_fit_class_labels(self):
"""
@@ -174,11 +172,7 @@ def test_fit_class_labels(self):
viz = DecisionBoundariesVisualizer(
model, classes=['one', 'two', 'three', 'four'])
fitted_viz = viz.fit(X_two_cols, y=y)
- self.assertEquals(fitted_viz.classes_,
- {'three': '2',
- 'four': '3',
- 'two': '1',
- 'one': '0'})
+ assert fitted_viz.classes_ == {'three': '2', 'four': '3', 'two': '1', 'one': '0'}
def test_fit_class_labels_class_names_edge_case(self):
"""
@@ -187,7 +181,9 @@ def test_fit_class_labels_class_names_edge_case(self):
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(
model, classes=['one', 'two', 'three', 'four', 'five'])
- self.assertRaises(YellowbrickTypeError, viz.fit, X_two_cols, y=y)
+
+ with pytest.raises(YellowbrickTypeError):
+ viz.fit(X_two_cols, y=y)
def test_fit_features_assignment_None(self):
"""
@@ -195,9 +191,9 @@ def test_fit_features_assignment_None(self):
"""
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(model)
- self.assertIsNone(viz.features_)
+ assert viz.features_ is None
fitted_viz = viz.fit(X_two_cols, y=y)
- self.assertEquals(fitted_viz.features_, ['Feature One', 'Feature Two'])
+ assert fitted_viz.features_ == ['Feature One', 'Feature Two']
def test_fit_features_assignment(self):
"""
@@ -206,7 +202,7 @@ def test_fit_features_assignment(self):
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(model, features=['one', 'two'])
fitted_viz = viz.fit(X_two_cols, y=y)
- self.assertEquals(fitted_viz.features_, ['one', 'two'])
+ assert fitted_viz.features_ == ['one', 'two']
@mock.patch("yellowbrick.contrib.classifier.boundaries.OrderedDict")
def test_draw_ordereddict_calls(self, mock_odict):
@@ -216,8 +212,11 @@ def test_draw_ordereddict_calls(self, mock_odict):
mock_odict.return_value = {}
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(model, features=['one', 'two'])
- self.assertRaises(KeyError, viz.fit_draw, X_two_cols, y=y)
- self.assertEquals(len(mock_odict.mock_calls), 2)
+
+ with pytest.raises(KeyError):
+ viz.fit_draw(X_two_cols, y=y)
+
+ assert len(mock_odict.mock_calls) == 2
@mock.patch("yellowbrick.contrib.classifier.boundaries.resolve_colors")
def test_draw_ordereddict_calls_one(self, mock_resolve_colors):
@@ -227,8 +226,11 @@ def test_draw_ordereddict_calls_one(self, mock_resolve_colors):
mock_resolve_colors.return_value = []
model = neighbors.KNeighborsClassifier(3)
viz = DecisionBoundariesVisualizer(model, features=['one', 'two'])
- self.assertRaises(StopIteration, viz.fit_draw, X_two_cols, y=y)
- self.assertEquals(len(mock_resolve_colors.mock_calls), 1)
+
+ with pytest.raises(StopIteration):
+ viz.fit_draw(X_two_cols, y=y)
+
+ assert len(mock_resolve_colors.mock_calls) == 1
def test_draw_ax_show_scatter_true(self):
"""
@@ -243,9 +245,9 @@ def test_draw_ax_show_scatter_true(self):
fitted_viz.ax.legend = mock.MagicMock()
fitted_viz.draw(X_two_cols, y=y)
- self.assertEquals(len(fitted_viz.ax.pcolormesh.mock_calls), 1)
- self.assertEquals(len(fitted_viz.ax.scatter.mock_calls), 4)
- self.assertEquals(len(fitted_viz.ax.legend.mock_calls), 0)
+ assert len(fitted_viz.ax.pcolormesh.mock_calls) == 1
+ assert len(fitted_viz.ax.scatter.mock_calls) == 4
+ assert len(fitted_viz.ax.legend.mock_calls) == 0
def test_draw_ax_show_scatter_False(self):
"""
@@ -261,9 +263,9 @@ def test_draw_ax_show_scatter_False(self):
fitted_viz.ax.legend = mock.MagicMock()
fitted_viz.draw(X_two_cols, y=y)
- self.assertEquals(len(fitted_viz.ax.pcolormesh.mock_calls), 1)
- self.assertEquals(len(fitted_viz.ax.scatter.mock_calls), 0)
- self.assertEquals(len(fitted_viz.ax.legend.mock_calls), 1)
+ assert len(fitted_viz.ax.pcolormesh.mock_calls) == 1
+ assert len(fitted_viz.ax.scatter.mock_calls) == 0
+ assert len(fitted_viz.ax.legend.mock_calls) == 1
def test_finalize(self):
"""
@@ -280,7 +282,7 @@ def test_finalize(self):
fitted_viz.ax.set_xlabel = mock.MagicMock()
fitted_viz.ax.set_ylabel = mock.MagicMock()
- fitted_viz.poof()
+ fitted_viz.finalize()
fitted_viz.ax.legend.assert_called_once_with(loc='best', frameon=True)
fitted_viz.ax.set_xlabel.assert_called_once_with('one')
@@ -345,7 +347,7 @@ def test_integrated_plot_numpy_named_arrays(self):
visualizer = DecisionBoundariesVisualizer(model, features=['a', 'f'])
visualizer.fit_draw_poof(X, y=y)
- self.assertEquals(visualizer.features_, ['a', 'f'])
+ assert visualizer.features_ == ['a', 'f']
self.assert_images_similar(visualizer)
def test_integrated_scatter_numpy_arrays_no_names(self):
@@ -356,7 +358,7 @@ def test_integrated_scatter_numpy_arrays_no_names(self):
visualizer = DecisionBoundariesVisualizer(model, features=[1, 2])
visualizer.fit_draw_poof(X, y)
- self.assertEquals(visualizer.features_, [1, 2])
+ assert visualizer.features_ == [1, 2]
@pytest.mark.xfail(
sys.platform == 'win32', reason="images not close on windows"
diff --git a/tests/test_contrib/test_missing/test_bar.py b/tests/test_contrib/test_missing/test_bar.py
--- a/tests/test_contrib/test_missing/test_bar.py
+++ b/tests/test_contrib/test_missing/test_bar.py
@@ -18,6 +18,9 @@
##########################################################################
import os
+import pytest
+import numpy as np
+
from tests.base import VisualTestCase
from sklearn.datasets import make_classification
from yellowbrick.contrib.missing.bar import *
@@ -27,21 +30,22 @@
except ImportError:
pd = None
+
+@pytest.fixture(scope="class")
+def missing_bar_tolerance(request):
+ request.cls.tol = 0.5 if os.name == 'nt' else 0.01
+
+
##########################################################################
## Feature Importances Tests
##########################################################################
[email protected]("missing_bar_tolerance")
class TestMissingBarVisualizer(VisualTestCase):
"""
FeatureImportances visualizer
"""
- def setUp(self):
- super(TestMissingBarVisualizer, self).setUp()
- self.tol = 0.01
- if os.name == 'nt': # Windows
- self.tol = 0.5
-
def test_missingvaluesbar_pandas(self):
"""
Integration test of visualizer with pandas
@@ -58,11 +62,10 @@ def test_missingvaluesbar_pandas(self):
features = [str(n) for n in range(20)]
viz = MissingValuesBar(features=features)
viz.fit(X_)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
-
def test_missingvaluesbar_numpy(self):
"""
Integration test of visualizer with numpy without target y passed in
@@ -78,7 +81,7 @@ def test_missingvaluesbar_numpy(self):
features = [str(n) for n in range(20)]
viz = MissingValuesBar(features=features)
viz.fit(X)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
@@ -98,7 +101,7 @@ def test_missingvaluesbar_numpy_with_y_target(self):
features = [str(n) for n in range(20)]
viz = MissingValuesBar(features=features)
viz.fit(X, y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
@@ -118,6 +121,6 @@ def test_missingvaluesbar_numpy_with_y_target_with_labels(self):
features = [str(n) for n in range(20)]
viz = MissingValuesBar(features=features, classes=['class A', 'class B'])
viz.fit(X, y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
diff --git a/tests/test_contrib/test_missing/test_dispersion.py b/tests/test_contrib/test_missing/test_dispersion.py
--- a/tests/test_contrib/test_missing/test_dispersion.py
+++ b/tests/test_contrib/test_missing/test_dispersion.py
@@ -16,7 +16,10 @@
##########################################################################
## Imports
##########################################################################
+
import os
+import pytest
+
from sklearn.datasets import make_classification
from tests.base import VisualTestCase
@@ -27,20 +30,21 @@
except ImportError:
pd = None
+
+@pytest.fixture(scope="class")
+def missing_dispersion_tolerance(request):
+ request.cls.tol = 0.5 if os.name == 'nt' else 0.01
+
+
##########################################################################
## Feature Importances Tests
##########################################################################
-class MissingValuesDispersionTestCase(VisualTestCase):
[email protected]("missing_dispersion_tolerance")
+class TestMissingValuesDispersion(VisualTestCase):
"""
MissingValuesDispersion visualizer
"""
- def setUp(self):
- super(MissingValuesDispersionTestCase, self).setUp()
- self.tol = 0.01
- if os.name == 'nt': # Windows
- self.tol = 5.0
-
def test_missingvaluesdispersion_with_pandas(self):
"""
@@ -58,7 +62,7 @@ def test_missingvaluesdispersion_with_pandas(self):
features = [str(n) for n in range(20)]
viz = MissingValuesDispersion(features=features)
viz.fit(X_)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
@@ -79,11 +83,10 @@ def test_missingvaluesdispersion_with_pandas_with_y_targets(self):
classes = ['Class A', 'Class B']
viz = MissingValuesDispersion(features=features, classes=classes)
viz.fit(X_, y=y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
-
def test_missingvaluesdispersion_with_numpy(self):
"""
Integration test of visualizer with numpy
@@ -99,7 +102,7 @@ def test_missingvaluesdispersion_with_numpy(self):
features = [str(n) for n in range(20)]
viz = MissingValuesDispersion(features=features)
viz.fit(X)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
@@ -119,6 +122,6 @@ def test_missingvaluesdispersion_with_numpy_with_y_targets(self):
classes = ['Class A', 'Class B']
viz = MissingValuesDispersion(features=features, classes=classes)
viz.fit(X, y=y)
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=self.tol)
diff --git a/tests/test_contrib/test_scatter.py b/tests/test_contrib/test_scatter.py
--- a/tests/test_contrib/test_scatter.py
+++ b/tests/test_contrib/test_scatter.py
@@ -16,11 +16,10 @@
# Imports
##########################################################################
-from unittest import mock
-
-import matplotlib as mpl
import pytest
+import matplotlib as mpl
+from unittest import mock
from tests.base import VisualTestCase
from yellowbrick.contrib.scatter import *
from yellowbrick.datasets import load_occupancy
@@ -38,7 +37,7 @@
##########################################################################
@pytest.mark.filterwarnings('ignore')
-class ScatterVizTests(VisualTestCase):
+class TestScatterViz(VisualTestCase):
"""
Test ScatterViz
"""
@@ -61,7 +60,7 @@ def test_init_alias(self):
"""
features = ["temperature", "relative humidity"]
visualizer = ScatterVisualizer(features=features, markers=['*'])
- self.assertIsNotNone(visualizer.markers)
+ assert visualizer.markers is not None
def test_scatter(self):
"""
@@ -89,7 +88,7 @@ def test_scatter_no_features(self):
X_two_cols = self.X[:, :2]
visualizer = ScatterViz()
visualizer.fit_transform_poof(X_two_cols, self.y)
- self.assertEquals(visualizer.features_, ['Feature One', 'Feature Two'])
+ assert visualizer.features_ == ['Feature One', 'Feature Two']
def test_scatter_only_two_features_allowed_init(self):
"""
@@ -97,7 +96,7 @@ def test_scatter_only_two_features_allowed_init(self):
"""
features = ["temperature", "relative humidity", "light"]
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
ScatterViz(features=features)
def test_scatter_xy_and_features_raise_error(self):
@@ -106,7 +105,7 @@ def test_scatter_xy_and_features_raise_error(self):
"""
features = ["temperature", "relative humidity", "light"]
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
ScatterViz(features=features, x='one', y='two')
def test_scatter_xy_changes_to_features(self):
@@ -114,17 +113,15 @@ def test_scatter_xy_changes_to_features(self):
Assert that x,y with no features will not raise scatterviz error
"""
visualizer = ScatterViz(x='one', y='two')
- self.assertEquals(visualizer.features_, ['one', 'two'])
+ assert visualizer.features_ == ['one', 'two']
def test_scatter_requires_two_features_in_numpy_matrix(self):
"""
Assert only two features allowed for scatter visualizer if not in init
"""
visualizer = ScatterViz()
- with self.assertRaises(YellowbrickValueError) as context:
+ with pytest.raises(YellowbrickValueError, match='only accepts two features'):
visualizer.fit_transform(self.X, self.y)
- self.assertTrue(
- 'only accepts two features' in str(context.exception))
def test_integrated_scatter(self):
"""
@@ -170,7 +167,7 @@ def test_scatter_quick_method(self):
ax = scatterviz(X[:, :2], y=y, ax=None, features=features)
# test that is returns a matplotlib obj with axes
- self.assertIsInstance(ax, mpl.axes.Axes)
+ assert isinstance(ax, mpl.axes.Axes)
@pytest.mark.skipif(pd is None, reason="pandas is required for this test")
def test_integrated_scatter_with_pandas(self):
@@ -205,7 +202,7 @@ def test_integrated_scatter_numpy_named_arrays(self):
X_named = self.X.astype(dt, casting='unsafe')
visualizer = ScatterViz(features=['one', 'two'])
visualizer.fit_transform_poof(X_named, self.y)
- self.assertEquals(visualizer.features_, ['one', 'two'])
+ assert visualizer.features_ == ['one', 'two']
def test_integrated_scatter_numpy_arrays_no_names(self):
"""
@@ -213,7 +210,7 @@ def test_integrated_scatter_numpy_arrays_no_names(self):
"""
visualizer = ScatterViz(features=[1, 2])
visualizer.fit_transform_poof(self.X, self.y)
- self.assertEquals(visualizer.features_, [1, 2])
+ assert visualizer.features_ == [1, 2]
def test_scatter_image(self):
"""
diff --git a/tests/test_draw.py b/tests/test_draw.py
--- a/tests/test_draw.py
+++ b/tests/test_draw.py
@@ -36,14 +36,14 @@ def test_manual_legend_uneven_colors():
@pytest.fixture(scope="class")
def data(request):
-
+
data = np.array(
[[4, 8, 7, 6, 5, 2, 1],
[6, 7, 9, 6, 9, 3, 6],
[5, 1, 6, 8, 4, 7, 8],
[6, 8, 1, 5, 6, 7, 4]]
)
-
+
request.cls.data = data
##########################################################################
@@ -83,80 +83,79 @@ def test_manual_legend(self):
def test_vertical_bar_stack(self):
"""
- Test bar_stack for vertical orientation
+ Test bar_stack for vertical orientation
"""
_, ax = plt.subplots()
-
+
# Plots stacked bar charts
bar_stack(self.data, ax=ax, orientation='v')
-
+
# Assert image similarity
self.assert_images_similar(ax=ax, tol=0.1)
-
+
def test_horizontal_bar_stack(self):
"""
- Test bar_stack for horizontal orientation
+ Test bar_stack for horizontal orientation
"""
_, ax = plt.subplots()
# Plots stacked bar charts
bar_stack(self.data, ax=ax, orientation='h')
-
+
# Assert image similarity
self.assert_images_similar(ax=ax, tol=0.1)
-
+
def test_single_row_bar_stack(self):
"""
- Test bar_stack for single row
- """
+ Test bar_stack for single row
+ """
data = np.array([[4, 8, 7, 6, 5, 2, 1]])
-
+
_, ax = plt.subplots()
-
+
# Plots stacked bar charts
bar_stack(data, ax=ax)
-
+
# Assert image similarity
self.assert_images_similar(ax=ax, tol=0.1)
-
+
def test_labels_vertical(self):
"""
Test labels and ticks for vertical barcharts
- """
+ """
labels = ['books', 'cinema', 'cooking', 'gaming']
- ticks = ['noun', 'verb', 'adverb', 'pronoun', 'preposition',
+ ticks = ['noun', 'verb', 'adverb', 'pronoun', 'preposition',
'digit', 'other']
_, ax = plt.subplots()
-
+
# Plots stacked bar charts
- bar_stack(self.data, labels = labels, ticks=ticks,
+ bar_stack(self.data, labels = labels, ticks=ticks,
colors=['r','b','g','y'])
-
+
# Extract tick labels from the plot
ticks_ax = [tick.get_text() for tick in ax.xaxis.get_ticklabels()]
#Assert that ticks are set properly
assert ticks_ax==ticks
-
+
# Assert image similarity
self.assert_images_similar(ax=ax, tol=0.05)
-
+
def test_labels_horizontal(self):
"""
Test labels and ticks with horizontal barcharts
- """
+ """
labels = ['books', 'cinema', 'cooking', 'gaming']
- ticks = ['noun', 'verb', 'adverb', 'pronoun', 'preposition',
+ ticks = ['noun', 'verb', 'adverb', 'pronoun', 'preposition',
'digit', 'other']
_, ax = plt.subplots()
-
+
# Plots stacked bar charts
- bar_stack(self.data, labels = labels, ticks=ticks, orientation='h',
+ bar_stack(self.data, labels = labels, ticks=ticks, orientation='h',
colormap='cool')
-
+
# Extract tick labels from the plot
ticks_ax = [tick.get_text() for tick in ax.yaxis.get_ticklabels()]
#Assert that ticks are set properly
assert ticks_ax==ticks
-
+
# Assert image similarity
self.assert_images_similar(ax=ax, tol=0.05)
-
\ No newline at end of file
diff --git a/tests/test_features/test_base.py b/tests/test_features/test_base.py
--- a/tests/test_features/test_base.py
+++ b/tests/test_features/test_base.py
@@ -28,22 +28,13 @@
## FeatureVisualizer Base Tests
##########################################################################
-class FeatureVisualizerBaseTests(VisualTestCase):
+class TestFeatureVisualizerBase(VisualTestCase):
def test_subclass(self):
"""
Assert the feature visualizer is in its rightful place
"""
visualizer = FeatureVisualizer()
- self.assertIsInstance(visualizer, TransformerMixin)
- self.assertIsInstance(visualizer, BaseEstimator)
- self.assertIsInstance(visualizer, Visualizer)
-
- # def test_interface(self):
- # """
- # Test the feature visualizer interface
- # """
- #
- # visualizer = FeatureVisualizer()
- # with self.assertRaises(NotImplementedError):
- # visualizer.poof()
+ assert isinstance(visualizer, TransformerMixin)
+ assert isinstance(visualizer, BaseEstimator)
+ assert isinstance(visualizer, Visualizer)
diff --git a/tests/test_features/test_jointplot.py b/tests/test_features/test_jointplot.py
--- a/tests/test_features/test_jointplot.py
+++ b/tests/test_features/test_jointplot.py
@@ -23,11 +23,12 @@
##########################################################################
import sys
+import pytest
+import numpy as np
+
from functools import partial
from unittest.mock import patch, MagicMock
-import numpy as np
-import pytest
from sklearn.datasets import make_classification, make_regression
from tests.base import IS_WINDOWS_OR_CONDA, VisualTestCase
@@ -46,6 +47,7 @@
except ImportError:
pd = None
+
##########################################################################
## Fixtures
##########################################################################
diff --git a/tests/test_features/test_manifold.py b/tests/test_features/test_manifold.py
--- a/tests/test_features/test_manifold.py
+++ b/tests/test_features/test_manifold.py
@@ -44,60 +44,50 @@ class TestManifold(VisualTestCase):
Test Manifold visualizer
"""
- def test_manifold_construction(self):
+ @pytest.mark.parametrize("algorithm", [
+ "lle", "ltsa", "hessian", "modified", "isomap", "mds", "spectral", "tsne",
+ ])
+ def test_manifold_construction(self, algorithm):
"""
Should be able to construct a manifold estimator from a string
"""
- # TODO: parametrize this once unittest.TestCase dependency removed.
- algorithms = [
- "lle", "ltsa", "hessian", "modified",
- "isomap", "mds", "spectral", "tsne",
- ]
-
- for algorithm in algorithms:
- message = "case failed for {}".format(algorithm)
- params = {
- "n_neighbors": 18,
- "random_state": 53,
- }
- oz = Manifold(manifold=algorithm, **params)
- assert is_estimator(oz.manifold), message
- assert oz.manifold.get_params()["n_components"] == 2, message
-
- manifold_params = oz.manifold.get_params()
- for param, value in params.items():
- if param in manifold_params:
- assert value == manifold_params[param], message
-
- def test_manifold_warning(self):
+ message = "case failed for {}".format(algorithm)
+ params = {
+ "n_neighbors": 18,
+ "random_state": 53,
+ }
+ oz = Manifold(manifold=algorithm, **params)
+ assert is_estimator(oz.manifold), message
+ assert oz.manifold.get_params()["n_components"] == 2, message
+
+ manifold_params = oz.manifold.get_params()
+ for param, value in params.items():
+ if param in manifold_params:
+ assert value == manifold_params[param], message
+
+ @pytest.mark.parametrize("algorithm", [
+ "lle", "ltsa", "hessian", "modified", "isomap", "spectral",
+ ])
+ def test_manifold_warning(self, algorithm):
"""
Should raise a warning if n_neighbors not specified
"""
- # TODO: parametrize this once unittest.TestCase dependency removed.
- algorithms = [
- "lle", "ltsa", "hessian", "modified", "isomap", "spectral",
- ]
+ message = "case failed for {}".format(algorithm)
+ n_neighbors = 6 if algorithm == "hessian" else 5
- for algorithm in algorithms:
- message = "case failed for {}".format(algorithm)
- n_neighbors = 6 if algorithm == "hessian" else 5
+ with pytest.warns(YellowbrickWarning):
+ oz = Manifold(manifold=algorithm)
+ assert oz.n_neighbors == n_neighbors, message
- with pytest.warns(YellowbrickWarning):
- oz = Manifold(manifold=algorithm)
- assert oz.n_neighbors == n_neighbors, message
-
- def test_manifold_no_warning(self):
+ @pytest.mark.parametrize("algorithm", ["mds", "tsne"])
+ def test_manifold_no_warning(self, algorithm):
"""
Should not raise a warning if n_neighbors not specified
"""
- # TODO: parametrize this once unittest.TestCase dependency removed.
- algorithms = ["mds", "tsne"]
-
- for algorithm in algorithms:
- message = "case failed for {}".format(algorithm)
+ message = "case failed for {}".format(algorithm)
- with pytest.warns(None) as record:
- assert not record.list, message
+ with pytest.warns(None) as record:
+ assert not record.list, message
def test_bad_manifold_exception(self):
"""
@@ -216,21 +206,16 @@ def test_manifold_pandas(self):
self.assert_images_similar(oz, tol=35)
@pytest.mark.filterwarnings("ignore:Conversion of the second argument")
- def test_manifold_algorithm_fit(self):
+ @pytest.mark.parametrize("algorithm", [
+ "lle", "ltsa", "hessian", "modified", "isomap", "mds", "spectral", "tsne",
+ ])
+ def test_manifold_algorithm_fit(self, algorithm):
"""
Test that all algorithms can be fitted correctly
"""
- # TODO: parametrize this once unittest.TestCase dependency removed.
- algorithms = [
- "lle", "ltsa", "hessian", "modified",
- "isomap", "mds", "spectral", "tsne",
- ]
-
X, y = make_s_curve(200, random_state=888)
-
- for algorithm in algorithms:
- oz = Manifold(manifold=algorithm, n_neighbors=10, random_state=223)
- oz.fit(X, y)
+ oz = Manifold(manifold=algorithm, n_neighbors=10, random_state=223)
+ oz.fit(X, y)
def test_determine_target_color_type(self):
"""
diff --git a/tests/test_features/test_pca.py b/tests/test_features/test_pca.py
--- a/tests/test_features/test_pca.py
+++ b/tests/test_features/test_pca.py
@@ -56,7 +56,7 @@ def binary(request):
##########################################################################
@pytest.mark.usefixtures("binary")
-class PCADecompositionTests(VisualTestCase):
+class TestPCADecomposition(VisualTestCase):
"""
Test the PCADecomposition visualizer
"""
@@ -191,7 +191,7 @@ def test_scale_true_3d_execption(self):
with pytest.raises(ValueError, match=e):
pca = PCADecomposition(**params)
pca.fit(X)
-
+
@mock.patch('yellowbrick.features.pca.plt.sca', autospec=True)
def test_alpha_param(self, mock_sca):
"""
@@ -202,7 +202,7 @@ def test_alpha_param(self, mock_sca):
visualizer = PCADecomposition(**params).fit(self.dataset.X)
pca_array = visualizer.transform(self.dataset.X)
assert visualizer.alpha == 0.3
-
+
visualizer.ax = mock.MagicMock()
visualizer.fit(self.dataset.X)
visualizer.transform(self.dataset.X)
@@ -212,4 +212,4 @@ def test_alpha_param(self, mock_sca):
assert "alpha" in scatter_kwargs
assert scatter_kwargs["alpha"] == 0.3
assert pca_array.shape == (self.dataset.X.shape[0], 2)
-
+
diff --git a/tests/test_features/test_pcoords.py b/tests/test_features/test_pcoords.py
--- a/tests/test_features/test_pcoords.py
+++ b/tests/test_features/test_pcoords.py
@@ -70,7 +70,7 @@ def test_parallel_coords(self):
"""
visualizer = ParallelCoordinates()
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_parallel_coords_fast(self):
@@ -79,7 +79,7 @@ def test_parallel_coords_fast(self):
"""
visualizer = ParallelCoordinates(fast=True)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_alpha(self):
@@ -88,7 +88,7 @@ def test_alpha(self):
"""
visualizer = ParallelCoordinates(alpha=1.0)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_alpha_fast(self):
@@ -97,7 +97,7 @@ def test_alpha_fast(self):
"""
visualizer = ParallelCoordinates(alpha=1.0, fast=True)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_labels(self):
@@ -108,7 +108,7 @@ def test_labels(self):
classes=['a', 'b', 'c'], features=['f1', 'f2', 'f3', 'f4', 'f5']
)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
def test_labels_fast(self):
@@ -119,7 +119,7 @@ def test_labels_fast(self):
classes=['a', 'b', 'c'], features=['f1', 'f2', 'f3', 'f4', 'f5'], fast=True
)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
def test_normalized_l2(self):
@@ -128,7 +128,7 @@ def test_normalized_l2(self):
"""
visualizer = ParallelCoordinates(normalize='l2')
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_normalized_l2_fast(self):
@@ -137,7 +137,7 @@ def test_normalized_l2_fast(self):
"""
visualizer = ParallelCoordinates(normalize='l2', fast=True)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_normalized_minmax(self):
@@ -146,7 +146,7 @@ def test_normalized_minmax(self):
"""
visualizer = ParallelCoordinates(normalize='minmax')
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_normalized_minmax_fast(self):
@@ -155,7 +155,7 @@ def test_normalized_minmax_fast(self):
"""
visualizer = ParallelCoordinates(normalize='minmax', fast=True)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
@pytest.mark.skipif(pd is None, reason="test requires pandas")
@@ -174,7 +174,7 @@ def test_pandas_integration_sampled(self):
sample=0.05, shuffle=True, random_state=4291, classes=classes
)
oz.fit_transform(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=0.1)
@@ -193,7 +193,7 @@ def test_numpy_integration_sampled(self):
sample=0.05, shuffle=True, random_state=4291, classes=classes
)
oz.fit_transform(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=0.1)
@@ -211,7 +211,7 @@ def test_pandas_integration_fast(self):
oz = ParallelCoordinates(fast=True, classes=classes)
oz.fit_transform(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=0.1)
@@ -228,7 +228,7 @@ def test_numpy_integration_fast(self):
oz = ParallelCoordinates(fast=True, classes=classes)
oz.fit_transform(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=0.1)
@@ -236,7 +236,7 @@ def test_normalized_invalid_arg(self):
"""
Invalid argument to 'normalize' should raise
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
ParallelCoordinates(normalize='foo')
def test_sample_int(self):
@@ -276,7 +276,7 @@ def test_sample_int_invalid(self):
"""
Negative int values should raise exception
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
ParallelCoordinates(sample=-1)
def test_sample_float(self):
@@ -316,16 +316,17 @@ def test_sample_float_invalid(self):
"""
Float values for 'sample' argument outside [0,1] should raise.
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
ParallelCoordinates(sample=-0.2)
- with self.assertRaises(YellowbrickValueError):
+
+ with pytest.raises(YellowbrickValueError):
ParallelCoordinates(sample=1.1)
def test_sample_invalid_type(self):
"""
Non-numeric values for 'sample' argument should raise.
"""
- with self.assertRaises(YellowbrickTypeError):
+ with pytest.raises(YellowbrickTypeError):
ParallelCoordinates(sample='foo')
@staticmethod
diff --git a/tests/test_features/test_radviz.py b/tests/test_features/test_radviz.py
--- a/tests/test_features/test_radviz.py
+++ b/tests/test_features/test_radviz.py
@@ -96,7 +96,7 @@ def test_radviz(self):
"""
visualizer = RadViz()
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
def test_radviz_alpha(self):
@@ -105,7 +105,7 @@ def test_radviz_alpha(self):
"""
visualizer = RadViz(alpha=0.5)
visualizer.fit_transform(self.dataset.X, self.dataset.y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.25)
@pytest.mark.xfail(
diff --git a/tests/test_features/test_rfecv.py b/tests/test_features/test_rfecv.py
--- a/tests/test_features/test_rfecv.py
+++ b/tests/test_features/test_rfecv.py
@@ -107,7 +107,7 @@ def test_rfecv_classification(self):
cv = ShuffleSplit(3, random_state=21)
oz = RFECV(SVC(kernel="linear", C=1), cv=cv)
oz.fit(self.dataset.X, self.dataset.y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, remove_legend=True)
@@ -144,7 +144,7 @@ def test_pandas_integration(self):
cv = StratifiedKFold(n_splits=4, random_state=32)
oz = RFECV(RandomForestClassifier(random_state=83), cv=cv)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, remove_legend=True)
@@ -164,17 +164,17 @@ def test_numpy_integration(self):
cv = StratifiedKFold(n_splits=4, random_state=32)
oz = RFECV(RandomForestClassifier(random_state=83), cv=cv)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, remove_legend=True)
- def test_invalid_step(self):
+ @pytest.mark.parametrize("step", [0, -1, -5])
+ def test_invalid_step(self, step):
"""
Test step hyperparam validation
"""
- # TODO: parametrize when unittest is removed
with pytest.raises(YellowbrickValueError, match="step must be >0"):
- oz = RFECV(SVC(kernel="linear"), step=-1)
+ oz = RFECV(SVC(kernel="linear"), step=step)
oz.fit(self.dataset.X, self.dataset.y)
def test_rfecv_step(self):
diff --git a/tests/test_meta.py b/tests/test_meta.py
--- a/tests/test_meta.py
+++ b/tests/test_meta.py
@@ -113,7 +113,7 @@ def test_missing_baseline_image(self):
Test that a missing baseline image raises an exception
"""
viz = RandomVisualizer(random_state=14).fit()
- viz.poof()
+ viz.finalize()
# Assert the baseline image does not exist
assert_path_not_exists(
@@ -133,7 +133,7 @@ def test_random_visualizer(self):
Test that a random visualization is correctly compared to a baseline
"""
viz = RandomVisualizer(random_state=111).fit()
- viz.poof()
+ viz.finalize()
assert mpl.get_backend() == 'agg'
@@ -147,7 +147,7 @@ def test_random_visualizer_not_close(self):
"""
# Baseline image random_state=225
viz = RandomVisualizer(random_state=224).fit()
- viz.poof()
+ viz.finalize()
with pytest.raises(ImageComparisonFailure, match="images not close"):
self.assert_images_similar(viz)
@@ -162,6 +162,6 @@ def test_random_visualizer_increased_tolerance(self):
Test that not close visualizers pass with increased tolerance
"""
viz = RandomVisualizer(random_state=224).fit()
- viz.poof()
+ viz.finalize()
self.assert_images_similar(viz, tol=30)
diff --git a/tests/test_model_selection/test_cross_validation.py b/tests/test_model_selection/test_cross_validation.py
--- a/tests/test_model_selection/test_cross_validation.py
+++ b/tests/test_model_selection/test_cross_validation.py
@@ -80,7 +80,7 @@ def test_classifier(self):
)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=2.0)
@@ -120,7 +120,7 @@ def test_regression(self):
)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=36.0)
@@ -178,6 +178,6 @@ def test_pandas_integration(self):
oz = CVScores(BernoulliNB(), cv=cv)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=2.0)
diff --git a/tests/test_model_selection/test_learning_curve.py b/tests/test_model_selection/test_learning_curve.py
--- a/tests/test_model_selection/test_learning_curve.py
+++ b/tests/test_model_selection/test_learning_curve.py
@@ -83,7 +83,7 @@ def test_classifier(self):
oz = LearningCurve(
RandomForestClassifier(random_state=21), random_state=12
).fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz)
@@ -98,7 +98,7 @@ def test_regressor(self):
oz = LearningCurve(Ridge(), random_state=18)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz)
@@ -111,7 +111,7 @@ def test_clusters(self):
oz = LearningCurve(
MiniBatchKMeans(random_state=281), random_state=182
).fit(X)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=10)
@@ -153,7 +153,7 @@ def test_pandas_integration(self):
cv = StratifiedKFold(n_splits=4, random_state=32)
oz = LearningCurve(GaussianNB(), cv=cv, random_state=23)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz)
diff --git a/tests/test_model_selection/test_validation_curve.py b/tests/test_model_selection/test_validation_curve.py
--- a/tests/test_model_selection/test_validation_curve.py
+++ b/tests/test_model_selection/test_validation_curve.py
@@ -90,7 +90,7 @@ def test_classifier(self):
)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz)
@@ -109,7 +109,7 @@ def test_regression(self):
)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz, tol=12.0)
@@ -155,7 +155,7 @@ def test_pandas_integration(self):
BernoulliNB(), cv=cv, param_range=pr, param_name='alpha'
)
oz.fit(X, y)
- oz.poof()
+ oz.finalize()
self.assert_images_similar(oz)
diff --git a/tests/test_pipeline.py b/tests/test_pipeline.py
--- a/tests/test_pipeline.py
+++ b/tests/test_pipeline.py
@@ -18,7 +18,7 @@
##########################################################################
import os
-import unittest
+import pytest
from unittest import mock
from yellowbrick.base import Visualizer
@@ -40,6 +40,7 @@ class MockEstimator(BaseEstimator):
def fit(self, X, y=None, **kwargs):
return self
+
class MockVisualEstimator(Visualizer):
def fit(self, X, y=None, **kwargs):
@@ -76,7 +77,7 @@ def draw(self, **kwargs):
## VisualPipeline Tests
##########################################################################
-class VisualPipelineTests(unittest.TestCase):
+class TestVisualPipeline(object):
def test_validate_steps(self):
"""
@@ -87,7 +88,7 @@ def test_validate_steps(self):
# TypeError if the steps don't match transforms --> estimator.
# validate a bad intermediate transformer on the Pipeline
- with self.assertRaises(TypeError):
+ with pytest.raises(TypeError):
Pipeline([
('real', MockTransformer()),
('bad', Thing()),
@@ -95,7 +96,7 @@ def test_validate_steps(self):
])
# validate a bad intermediate transformer on the VisualPipeline
- with self.assertRaises(TypeError):
+ with pytest.raises(TypeError):
VisualPipeline([
('real', MockTransformer()),
('bad', Thing()),
@@ -103,14 +104,14 @@ def test_validate_steps(self):
])
# validate a bad final estimator on the Pipeline
- with self.assertRaises(TypeError):
+ with pytest.raises(TypeError):
Pipeline([
('real', MockTransformer()),
('bad', Thing()),
])
# validate a bad final estimator on the VisualPipeline
- with self.assertRaises(TypeError):
+ with pytest.raises(TypeError):
VisualPipeline([
('real', MockTransformer()),
('bad', Thing()),
@@ -149,11 +150,11 @@ def test_visual_steps_property(self):
('e', MockEstimator()),
])
- self.assertNotIn('a', pipeline.visual_steps)
- self.assertIn('b', pipeline.visual_steps)
- self.assertNotIn('c', pipeline.visual_steps)
- self.assertIn('d', pipeline.visual_steps)
- self.assertNotIn('e', pipeline.visual_steps)
+ assert 'a' not in pipeline.visual_steps
+ assert 'b' in pipeline.visual_steps
+ assert 'c' not in pipeline.visual_steps
+ assert 'd' in pipeline.visual_steps
+ assert 'e' not in pipeline.visual_steps
def test_pipeline_poof(self):
"""
@@ -192,7 +193,7 @@ def test_pipeline_savefig_poof(self):
pipeline.steps[3][1].poof.assert_called_once_with(outpath=os.path.join(tmpdir, "d.pdf"))
pipeline.steps[4][1].poof.assert_called_once_with(outpath=os.path.join(tmpdir, "e.pdf"))
- @unittest.skip("need to find a way for fit to return self in mocks")
+ @pytest.mark.skip(reason="need to find a way for fit to return self in mocks")
def test_fit_transform_poof_and_draw_calls(self):
"""
Test calling fit, transform, and poof on the pipeline
diff --git a/tests/test_regressor/test_alphas.py b/tests/test_regressor/test_alphas.py
--- a/tests/test_regressor/test_alphas.py
+++ b/tests/test_regressor/test_alphas.py
@@ -29,6 +29,8 @@
from yellowbrick.exceptions import YellowbrickValueError
from sklearn.svm import SVR, SVC
+from sklearn.cluster import KMeans
+from sklearn.decomposition import PCA
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, RidgeCV
from sklearn.linear_model import Lasso, LassoCV
@@ -57,33 +59,35 @@ def test_similar_image(self):
X, y = make_regression(random_state=0)
visualizer.fit(X, y)
- visualizer.poof()
+ visualizer.finalize()
self.assert_images_similar(visualizer)
- def test_regressor_cv(self):
+ @pytest.mark.parametrize("model", [SVR, Ridge, Lasso, LassoLars, ElasticNet])
+ def test_regressor_nocv(self, model):
"""
Ensure only "CV" regressors are allowed
"""
- # TODO: parametrize with models when unittest dependency removed
- for model in (SVR, Ridge, Lasso, LassoLars, ElasticNet):
- with pytest.raises(YellowbrickTypeError):
- AlphaSelection(model())
+ with pytest.raises(YellowbrickTypeError):
+ AlphaSelection(model())
- # TODO: parametrize with models when unittest dependency removed (new test case)
- for model in (RidgeCV, LassoCV, LassoLarsCV, ElasticNetCV):
- try:
- AlphaSelection(model())
- except YellowbrickTypeError:
- pytest.fail("could not instantiate RegressorCV on alpha selection")
+ @pytest.mark.parametrize("model", [RidgeCV, LassoCV, LassoLarsCV, ElasticNetCV])
+ def test_regressor_cv(self, model):
+ """
+ Ensure "CV" regressors are allowed
+ """
+ try:
+ AlphaSelection(model())
+ except YellowbrickTypeError:
+ pytest.fail("could not instantiate RegressorCV on alpha selection")
- def test_only_regressors(self):
+ @pytest.mark.parametrize("model", [SVC, KMeans, PCA])
+ def test_only_regressors(self, model):
"""
Assert AlphaSelection only works with regressors
"""
- # TODO: parameterize with classifier, clusterer, decomposition
with pytest.raises(YellowbrickTypeError):
- AlphaSelection(SVC())
+ AlphaSelection(model())
def test_store_cv_values(self):
"""
@@ -99,21 +103,19 @@ def test_store_cv_values(self):
model = AlphaSelection(RidgeCV(store_cv_values=False))
assert model.estimator.store_cv_values
- def test_get_alphas_param(self):
+ @pytest.mark.parametrize("model", [RidgeCV, LassoCV, ElasticNetCV])
+ def test_get_alphas_param(self, model):
"""
- Assert that we can get the alphas from ridge, lasso, and elasticnet
+ Assert that we can get the alphas from original CV models
"""
alphas = np.logspace(-10, -2, 100)
- # Test original CV models
- # TODO: parametrize this test with different models
- for model in (RidgeCV, LassoCV, ElasticNetCV):
- try:
- model = AlphaSelection(model(alphas=alphas))
- malphas = model._find_alphas_param()
- assert_array_equal(alphas, malphas)
- except YellowbrickValueError:
- pytest.fail("could not find alphas on {}".format(model.name))
+ try:
+ model = AlphaSelection(model(alphas=alphas))
+ malphas = model._find_alphas_param()
+ assert_array_equal(alphas, malphas)
+ except YellowbrickValueError:
+ pytest.fail("could not find alphas on {}".format(model.name))
def test_get_alphas_param_lassolars(self):
"""
@@ -128,24 +130,21 @@ def test_get_alphas_param_lassolars(self):
except YellowbrickValueError:
pytest.fail("could not find alphas on {}".format(model.name))
- def test_get_errors_param(self):
+ @pytest.mark.parametrize("model", [RidgeCV, LassoCV, LassoLarsCV, ElasticNetCV])
+ def test_get_errors_param(self, model):
"""
Test known models we can get the cv errors for alpha selection
"""
+ try:
+ model = AlphaSelection(model())
- # Test original CV models
- # TODO: parametrize this test with different models
- for model in (RidgeCV, LassoCV, LassoLarsCV, ElasticNetCV):
- try:
- model = AlphaSelection(model())
-
- X, y = make_regression()
- model.fit(X, y)
+ X, y = make_regression()
+ model.fit(X, y)
- errors = model._find_errors_param()
- assert len(errors) > 0
- except YellowbrickValueError:
- pytest.fail("could not find errors on {}".format(model.name))
+ errors = model._find_errors_param()
+ assert len(errors) > 0
+ except YellowbrickValueError:
+ pytest.fail("could not find errors on {}".format(model.name))
def test_score(self):
"""
diff --git a/tests/test_style/test_colors.py b/tests/test_style/test_colors.py
--- a/tests/test_style/test_colors.py
+++ b/tests/test_style/test_colors.py
@@ -235,7 +235,7 @@ def test_integrated_yb_colormap(self):
)
visualizer = SilhouetteVisualizer(KMeans(random_state=0), colormap='neural_paint')
visualizer.fit(X)
- visualizer.poof()
+ visualizer.finalize()
tol = 3.2 if sys.platform == "win32" else 0.01 # Fails on AppVeyor with RMS 3.143
self.assert_images_similar(visualizer, remove_legend=True, tol=tol)
diff --git a/tests/test_style/test_palettes.py b/tests/test_style/test_palettes.py
--- a/tests/test_style/test_palettes.py
+++ b/tests/test_style/test_palettes.py
@@ -17,7 +17,7 @@
## Imports
##########################################################################
-import unittest
+import pytest
import numpy as np
import matplotlib as mpl
@@ -35,7 +35,7 @@
## Color Palette Tests
##########################################################################
-class ColorPaletteObjectTests(VisualTestCase):
+class TestColorPaletteObject(VisualTestCase):
"""
Tests the ColorPalette object
"""
@@ -54,11 +54,11 @@ def test_init_palette_by_name(self):
"Could not instantiate {} color palette by name".format(name)
)
- self.assertEqual(value, palette)
+ assert value == palette
# Try a name not in PALETTES
- with self.assertRaises(YellowbrickValueError):
- self.assertNotIn('foo', PALETTES, "Cannot test bad name 'foo' it is in PALETTES!")
+ with pytest.raises(YellowbrickValueError):
+ assert 'foo' not in PALETTES, "Cannot test bad name 'foo' it is in PALETTES!"
palette = ColorPalette('foo')
def test_init_palette_by_list(self):
@@ -69,12 +69,12 @@ def test_init_palette_by_list(self):
# Try all the values in the palettes (HEX)
for value in PALETTES.values():
palette = ColorPalette(value)
- self.assertEqual(len(value), len(palette))
+ assert len(value) == len(palette)
# Try all the values converted to RGB
for value in PALETTES.values():
palette = ColorPalette(map(mpl.colors.colorConverter.to_rgb, value))
- self.assertEqual(len(value), len(palette))
+ assert len(value) == len(palette)
def test_color_palette_context(self):
"""
@@ -84,10 +84,10 @@ def test_color_palette_context(self):
context = color_palette('dark')
with ColorPalette('dark') as palette:
- self.assertIsInstance(palette, ColorPalette)
- self.assertEqual(get_color_cycle(), context)
+ assert isinstance(palette, ColorPalette)
+ assert get_color_cycle() == context
- self.assertEqual(get_color_cycle(), default)
+ assert get_color_cycle() == default
def test_as_hex_as_rgb(self):
"""
@@ -97,16 +97,16 @@ def test_as_hex_as_rgb(self):
expected = PALETTES['flatui']
morgified = palette.as_hex()
- self.assertIsNot(morgified, palette)
- self.assertIsInstance(morgified, ColorPalette)
- self.assertEqual(morgified, expected)
+ assert morgified is not palette
+ assert isinstance(morgified, ColorPalette)
+ assert morgified == expected
remorgified = morgified.as_rgb()
- self.assertIsNot(remorgified, morgified)
- self.assertIsNot(remorgified, palette)
- self.assertEqual(remorgified, palette)
+ assert remorgified is not morgified
+ assert remorgified is not palette
+ assert remorgified == palette
- @unittest.skip("not implemented yet")
+ @pytest.mark.skip(reason="not implemented yet")
def test_plot_color_palette(self):
"""
Test the plotting of a color palette for color visualization
@@ -116,7 +116,7 @@ def test_plot_color_palette(self):
)
-class ColorPaletteFunctionTests(VisualTestCase):
+class TestColorPaletteFunction(VisualTestCase):
"""
Tests the color_palette function.
"""
@@ -127,7 +127,7 @@ def test_current_palette(self):
"""
pal = color_palette(["red", "blue", "green"], 3)
set_palette(pal, 3)
- self.assertEqual(pal, get_color_cycle())
+ assert pal == get_color_cycle()
# Reset the palette
set_aesthetic()
@@ -141,9 +141,9 @@ def test_palette_context(self):
context_pal = color_palette("muted")
with color_palette(context_pal):
- self.assertEqual(get_color_cycle(), context_pal)
+ assert get_color_cycle() == context_pal
- self.assertEqual(get_color_cycle(), default_pal)
+ assert get_color_cycle() == default_pal
def test_big_palette_context(self):
"""
@@ -155,9 +155,9 @@ def test_big_palette_context(self):
set_palette(original_pal)
with color_palette(context_pal, 10):
- self.assertEqual(get_color_cycle(), context_pal)
+ assert get_color_cycle() == context_pal
- self.assertEqual(get_color_cycle(), original_pal)
+ assert get_color_cycle() == original_pal
# Reset default
set_aesthetic()
@@ -169,7 +169,7 @@ def test_yellowbrick_palettes(self):
pals = ["accent", "dark", "pastel", "bold", "muted"]
for name in pals:
pal_out = color_palette(name)
- self.assertEqual(len(pal_out), 6, "{} is not of len 6".format(name))
+ assert len(pal_out) == 6, "{} is not of len 6".format(name)
def test_seaborn_palettes(self):
"""
@@ -179,7 +179,7 @@ def test_seaborn_palettes(self):
"sns_bright", "sns_dark", "sns_colorblind"]
for name in pals:
pal_out = color_palette(name)
- self.assertEqual(len(pal_out), 6)
+ assert len(pal_out) == 6
def test_other_palettes(self):
"""
@@ -188,7 +188,8 @@ def test_other_palettes(self):
pals = ["flatui", "paired", "neural_paint", "set1"]
for name in pals:
pal_out = color_palette(name)
- self.assertTrue(pal_out)
+ assert pal_out is not None
+ assert len(pal_out) > 0
def test_bad_palette_name(self):
@@ -196,10 +197,10 @@ def test_bad_palette_name(self):
Test that a bad palette name raises an exception
"""
- with self.assertRaises(ValueError):
+ with pytest.raises(ValueError):
color_palette("IAmNotAPalette")
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
color_palette("IAmNotAPalette")
def test_bad_palette_colors(self):
@@ -208,10 +209,10 @@ def test_bad_palette_colors(self):
"""
pal = ["red", "blue", "iamnotacolor"]
- with self.assertRaises(ValueError):
+ with pytest.raises(ValueError):
color_palette(pal)
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
color_palette(pal)
def test_palette_is_list_of_tuples(self):
@@ -222,10 +223,10 @@ def test_palette_is_list_of_tuples(self):
pal_in = np.array(["red", "blue", "green"])
pal_out = color_palette(pal_in, 3)
- self.assertIsInstance(pal_out, list)
- self.assertIsInstance(pal_out[0], tuple)
- self.assertIsInstance(pal_out[0][0], float)
- self.assertEqual(len(pal_out[0]), 3)
+ assert isinstance(pal_out, list)
+ assert isinstance(pal_out[0], tuple)
+ assert isinstance(pal_out[0][0], float)
+ assert len(pal_out[0]) == 3
def test_palette_cycles(self):
"""
@@ -233,20 +234,20 @@ def test_palette_cycles(self):
"""
accent = color_palette("accent")
double_accent = color_palette("accent", 12)
- self.assertEqual(double_accent, accent + accent)
+ assert double_accent == accent + accent
- @unittest.skip("Discovered this commented out, don't know why")
+ @pytest.mark.skip(reason="discovered this commented out, don't know why")
def test_cbrewer_qual(self):
"""
Test colorbrewer qualitative palettes
"""
pal_short = mpl_palette("Set1", 4)
pal_long = mpl_palette("Set1", 6)
- self.assertEqual(pal_short, pal_long[:4])
+ assert pal_short == pal_long[:4]
pal_full = palettes.mpl_palette("Set2", 8)
pal_long = palettes.mpl_palette("Set2", 10)
- self.assertEqual(pal_full, pal_long[:8])
+ assert pal_full == pal_long[:8]
def test_color_codes(self):
"""
@@ -257,7 +258,7 @@ def test_color_codes(self):
for code, color in zip("bgrmyck", colors):
rgb_want = mpl.colors.colorConverter.to_rgb(color)
rgb_got = mpl.colors.colorConverter.to_rgb(code)
- self.assertEqual(rgb_want, rgb_got)
+ assert rgb_want == rgb_got
set_color_codes("reset")
def test_as_hex(self):
@@ -266,10 +267,10 @@ def test_as_hex(self):
"""
pal = color_palette("accent")
for rgb, hex in zip(pal, pal.as_hex()):
- self.assertEqual(mpl.colors.rgb2hex(rgb), hex)
+ assert mpl.colors.rgb2hex(rgb) == hex
for rgb_e, rgb_v in zip(pal, pal.as_hex().as_rgb()):
- self.assertEqual(rgb_e, rgb_v)
+ assert rgb_e == rgb_v
def test_preserved_palette_length(self):
"""
@@ -277,7 +278,7 @@ def test_preserved_palette_length(self):
"""
pal_in = color_palette("Set1", 10)
pal_out = color_palette(pal_in)
- self.assertEqual(pal_in, pal_out)
+ assert pal_in == pal_out
def test_color_sequence(self):
"""
@@ -286,33 +287,30 @@ def test_color_sequence(self):
for name, ncols in SEQUENCES.items():
for n in ncols.keys():
cmap = color_sequence(name, n)
- self.assertEqual(name, cmap.name)
- self.assertEqual(n, cmap.N)
+ assert name == cmap.name
+ assert n == cmap.N
def test_color_sequence_default(self):
"""
Assert the default color sequence is RdBu
"""
cmap = color_sequence()
- self.assertEqual(cmap.name, "RdBu")
- self.assertEqual(cmap.N, 11)
+ assert cmap.name == "RdBu"
+ assert cmap.N == 11
def test_color_sequence_unrecocognized(self):
"""
Test value errors for unrecognized sequences
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
color_sequence('PepperBucks', 3)
def test_color_sequence_bounds(self):
"""
Test color sequence out of bounds value error
"""
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
color_sequence('RdBu', 18)
- with self.assertRaises(YellowbrickValueError):
+ with pytest.raises(YellowbrickValueError):
color_sequence('RdBu', 2)
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/tests/test_style/test_rcmod.py b/tests/test_style/test_rcmod.py
--- a/tests/test_style/test_rcmod.py
+++ b/tests/test_style/test_rcmod.py
@@ -18,13 +18,12 @@
## Imports
##########################################################################
-import unittest
+import pytest
import numpy as np
import matplotlib as mpl
import numpy.testing as npt
import yellowbrick.style.rcmod as yb_rcmod
-from distutils.version import LooseVersion
from tests.base import VisualTestCase
@@ -38,9 +37,9 @@ class RCParamTester(VisualTestCase):
"""
excluded_params = {
- "backend", # This cannot be changed by manipulating rc
- "svg.embed_char_paths", # This param causes test issues and is deprecated anyway
- "font.family", # breaks the visualtest case
+ "backend", # This cannot be changed by manipulating rc
+ "svg.embed_char_paths", # This param causes test issues and is deprecated
+ "font.family", # breaks the visualtest case
}
def flatten_list(self, orig_list):
@@ -57,7 +56,7 @@ def assert_rc_params(self, params):
elif isinstance(v, np.ndarray):
npt.assert_array_equal(mpl.rcParams[k], v)
else:
- self.assertEqual((k, mpl.rcParams[k]), (k, v))
+ assert (k, mpl.rcParams[k]) == (k, v)
##########################################################################
@@ -80,8 +79,8 @@ def test_rc_override(self):
rc = {"axes.facecolor": "blue", "foo.notaparam": "bar"}
out = yb_rcmod._axes_style("darkgrid", rc)
- self.assertEqual(out["axes.facecolor"], "blue")
- self.assertNotIn("foo.notaparam", out)
+ assert out["axes.facecolor"] == "blue"
+ assert "foo.notaparam" not in out
def test_set_style(self):
"""
@@ -91,7 +90,7 @@ def test_set_style(self):
yb_rcmod.set_style()
self.assert_rc_params(style_dict)
- @unittest.skip("This test doesn't make sense without multiple styles")
+ @pytest.mark.skip(reason="this test doesn't make sense without multiple styles")
def test_style_context_manager(self):
yb_rcmod.set_style("darkgrid")
@@ -112,25 +111,20 @@ def test_style_context_independence(self):
"""
Assert context and style independence
"""
- self.assertTrue(set(yb_rcmod._style_keys) ^ set(yb_rcmod._context_keys))
+ assert len(set(yb_rcmod._style_keys) ^ set(yb_rcmod._context_keys)) > 0
def test_set_rc(self):
"""
Test the ability to set the mpl configuration rc dict
"""
yb_rcmod.set_aesthetic(rc={"lines.linewidth": 4})
- self.assertEqual(mpl.rcParams["lines.linewidth"], 4)
+ assert mpl.rcParams["lines.linewidth"] == 4
yb_rcmod.set_aesthetic()
def test_reset_defaults(self):
"""
Test the ability to reset to the mpl defaults
"""
- # Changes to the rc parameters make this test hard to manage
- # on older versions of matplotlib, so we'll skip it
- if LooseVersion(mpl.__version__) < LooseVersion("1.3"):
- raise self.SkipTest
-
yb_rcmod.reset_defaults()
self.assert_rc_params(mpl.rcParamsDefault)
yb_rcmod.set_aesthetic()
@@ -139,12 +133,6 @@ def test_reset_orig(self):
"""
Test the ability to reset to the original (respecting custom styles)
"""
-
- # Changes to the rc parameters make this test hard to manage
- # on older versions of matplotlib, so we'll skip it
- if LooseVersion(mpl.__version__) < LooseVersion("1.3"):
- raise self.SkipTest
-
yb_rcmod.reset_orig()
self.assert_rc_params(mpl.rcParamsOrig)
yb_rcmod.set_aesthetic()
@@ -171,7 +159,7 @@ def test_font_scale(self):
"xtick.labelsize", "ytick.labelsize", "font.size"]
for k in font_keys:
- self.assertEqual(notebook_ref[k] * 2, notebook_big[k])
+ assert notebook_ref[k] * 2 == notebook_big[k]
def test_rc_override(self):
"""
@@ -180,8 +168,8 @@ def test_rc_override(self):
key, val = "grid.linewidth", 5
rc = {key: val, "foo": "bar"}
out = yb_rcmod._plotting_context("talk", rc=rc)
- self.assertEqual(out[key], val)
- self.assertNotIn("foo", out)
+ assert out[key] == val
+ assert "foo" not in out
def test__set_context(self):
"""
@@ -191,7 +179,7 @@ def test__set_context(self):
yb_rcmod._set_context()
self.assert_rc_params(context_dict)
- @unittest.skip("This test doesn't make sense without multiple contexts")
+ @pytest.mark.skip(reason="this test doesn't make sense without multiple contexts")
def test_context_context_manager(self):
yb_rcmod._set_context("notebook")
@@ -208,6 +196,3 @@ def func():
func()
self.assert_rc_params(orig_params)
-
-if __name__ == "__main__":
- unittest.main()
diff --git a/tests/test_target/test_binning.py b/tests/test_target/test_binning.py
--- a/tests/test_target/test_binning.py
+++ b/tests/test_target/test_binning.py
@@ -1,7 +1,7 @@
# tests.test_target.test_binning
# Tests for the BalancedBinningReference visualizer
#
-# Author: Juan L. Kehoe ([email protected])
+# Author: Juan L. Kehoe ([email protected])
# Author: Prema Damodaran Roman ([email protected])
# Created: Thu Jul 20 10:21:49 2018 -0400
#
@@ -11,29 +11,28 @@
from tests.dataset import DatasetMixin
from yellowbrick.target.binning import *
+
##########################################################################
## BalancedBinningReference Tests
##########################################################################
class TestBalancedBinningReference(VisualTestCase, DatasetMixin):
- """
- Test the BalancedBinningReference visualizer
- """
-
- def test_balancedbinningreference(self):
- """
- Test Histogram on a real dataset
- """
- # Load the data from the fixture
- dataset = self.load_data('occupancy')
-
- # Get the data
- y = dataset["temperature"]
-
-
- visualizer = BalancedBinningReference()
- visualizer.fit(y)
- visualizer.finalize()
- self.assert_images_similar(visualizer, tol=0.5)
-
-
\ No newline at end of file
+ """
+ Test the BalancedBinningReference visualizer
+ """
+
+ def test_balancedbinningreference(self):
+ """
+ Test Histogram on a real dataset
+ """
+ # Load the data from the fixture
+ dataset = self.load_data('occupancy')
+
+ # Get the data
+ y = dataset["temperature"]
+
+ visualizer = BalancedBinningReference()
+ visualizer.fit(y)
+ visualizer.finalize()
+ self.assert_images_similar(visualizer, tol=0.5)
+
diff --git a/tests/test_target/test_class_balance.py b/tests/test_target/test_class_balance.py
--- a/tests/test_target/test_class_balance.py
+++ b/tests/test_target/test_class_balance.py
@@ -66,7 +66,7 @@ def make_fixture(binary=False, balanced=False, split=False):
## Tests
##########################################################################
-class ClassBalanceTests(VisualTestCase, DatasetMixin):
+class TestClassBalance(VisualTestCase, DatasetMixin):
"""
Test ClassBalance visualizer
"""
diff --git a/tests/test_target/test_feature_correlation.py b/tests/test_target/test_feature_correlation.py
--- a/tests/test_target/test_feature_correlation.py
+++ b/tests/test_target/test_feature_correlation.py
@@ -20,10 +20,6 @@
import sys
import pytest
import numpy as np
-try:
- import pandas as pd
-except ImportError:
- pd = None
import numpy.testing as npt
import matplotlib.pyplot as plt
@@ -31,9 +27,13 @@
from yellowbrick.exceptions import YellowbrickValueError, YellowbrickWarning
from sklearn import datasets
-
from tests.base import VisualTestCase
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
##########################################################################
## Feature Correlation Tests
diff --git a/tests/test_text/test_base.py b/tests/test_text/test_base.py
--- a/tests/test_text/test_base.py
+++ b/tests/test_text/test_base.py
@@ -17,8 +17,6 @@
## Imports
##########################################################################
-import unittest
-
from yellowbrick.base import *
from yellowbrick.text.base import *
from sklearn.base import BaseEstimator, TransformerMixin
@@ -28,22 +26,13 @@
## TextVisualizer Base Tests
##########################################################################
-class TextVisualizerBaseTests(unittest.TestCase):
+class TestTextVisualizerBase(object):
def test_subclass(self):
"""
- Assert the text visualizer is subclassed correctly
+ Assert the text visualizer is subclassed correctly
"""
visualizer = TextVisualizer()
- self.assertIsInstance(visualizer, TransformerMixin)
- self.assertIsInstance(visualizer, BaseEstimator)
- self.assertIsInstance(visualizer, Visualizer)
-
- # def test_interface(self):
- # """
- # Test the feature visualizer interface
- # """
- #
- # visualizer = TextVisualizer()
- # with self.assertRaises(NotImplementedError):
- # visualizer.poof()
+ assert isinstance(visualizer, TransformerMixin)
+ assert isinstance(visualizer, BaseEstimator)
+ assert isinstance(visualizer, Visualizer)
diff --git a/tests/test_text/test_dispersion.py b/tests/test_text/test_dispersion.py
--- a/tests/test_text/test_dispersion.py
+++ b/tests/test_text/test_dispersion.py
@@ -19,12 +19,13 @@
##########################################################################
import pytest
+import matplotlib.pyplot as plt
from yellowbrick.exceptions import YellowbrickValueError
from yellowbrick.datasets import load_hobbies
from yellowbrick.text.dispersion import *
from tests.base import VisualTestCase
-import matplotlib.pyplot as plt
+
##########################################################################
## Data
@@ -36,7 +37,7 @@
## DispersionPlot Tests
##########################################################################
-class DispersionPlotTests(VisualTestCase):
+class TestDispersionPlot(VisualTestCase):
def test_quick_method(self):
"""
@@ -107,7 +108,6 @@ def test_dispersion_plot_annotate_docs(self):
self.assert_images_similar(visualizer, tol=25.5)
-
def test_dispersion_plot_color_by_class(self):
"""
Assert no errors occur during DispersionPlot integration
diff --git a/tests/test_text/test_freqdist.py b/tests/test_text/test_freqdist.py
--- a/tests/test_text/test_freqdist.py
+++ b/tests/test_text/test_freqdist.py
@@ -35,7 +35,7 @@
## FreqDist Tests
##########################################################################
-class FreqDistTests(VisualTestCase):
+class TestFreqDist(VisualTestCase):
@pytest.mark.xfail(
IS_WINDOWS_OR_CONDA,
diff --git a/tests/test_text/test_postag.py b/tests/test_text/test_postag.py
--- a/tests/test_text/test_postag.py
+++ b/tests/test_text/test_postag.py
@@ -37,6 +37,7 @@
except ImportError:
spacy = None
+
##########################################################################
## Data
##########################################################################
@@ -201,11 +202,11 @@ def test_frequency_mode(self):
viz = postag(tagged_docs, ax=ax, frequency=True)
viz.finalize()
ax.grid(False)
-
+
# Sorted tags i.e predetermined order
sorted_tags = ['noun', 'adjective', 'punctuation', 'verb', 'preposition',
- 'determiner', 'adverb', 'conjunction', 'pronoun', 'wh- word',
- 'modal', 'infinitive', 'possessive', 'other', 'symbol',
+ 'determiner', 'adverb', 'conjunction', 'pronoun', 'wh- word',
+ 'modal', 'infinitive', 'possessive', 'other', 'symbol',
'existential', 'digit', 'non-English', 'interjection', 'list']
# Extract tick labels from the plot
ticks_ax = [tick.get_text() for tick in ax.xaxis.get_ticklabels()]
@@ -284,10 +285,10 @@ def test_stack_mode(self):
visualizer.ax.grid(False)
self.assert_images_similar(ax=ax)
-
+
def test_stack_frequency_mode(self):
"""
- Assert no errors occur when the visualizer is run on both stack and
+ Assert no errors occur when the visualizer is run on both stack and
frequency mode
"""
check_nltk_data()
@@ -298,11 +299,11 @@ def test_stack_frequency_mode(self):
visualizer = PosTagVisualizer(stack=True, frequency=True, ax=ax)
visualizer.fit(tagged_docs, y=['a','b','c'])
visualizer.ax.grid(False)
-
+
# Sorted tags i.e predetermined order
sorted_tags = ['noun', 'adjective', 'punctuation', 'verb', 'preposition',
- 'determiner', 'adverb', 'conjunction', 'pronoun', 'wh- word',
- 'modal', 'infinitive', 'possessive', 'other', 'symbol',
+ 'determiner', 'adverb', 'conjunction', 'pronoun', 'wh- word',
+ 'modal', 'infinitive', 'possessive', 'other', 'symbol',
'existential', 'digit', 'non-English', 'interjection', 'list']
# Extract tick labels from the plot
ticks_ax = [tick.get_text() for tick in ax.xaxis.get_ticklabels()]
diff --git a/tests/test_text/test_tsne.py b/tests/test_text/test_tsne.py
--- a/tests/test_text/test_tsne.py
+++ b/tests/test_text/test_tsne.py
@@ -41,6 +41,7 @@
corpus = load_hobbies()
+
##########################################################################
## TSNE Tests
##########################################################################
@@ -132,16 +133,16 @@ def test_custom_colors_tsne(self):
"""
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
- n_informative=20, n_redundant=10,
+ n_informative=20, n_redundant=10,
n_classes=5, random_state=42)
-
+
## specify a list of custom colors >= n_classes
purple_blues = ["indigo", "orchid", "plum", "navy", "purple", "blue"]
-
+
## instantiate the visualizer and check that self.colors is correct
purple_tsne = TSNEVisualizer(colors=purple_blues, random_state=87)
assert purple_tsne.colors == purple_blues
-
+
## fit the visualizer and check that self.color_values is as long as
## n_classes and is the first n_classes items in self.colors
purple_tsne.fit(X,y)
diff --git a/tests/test_text/test_umap.py b/tests/test_text/test_umap.py
--- a/tests/test_text/test_umap.py
+++ b/tests/test_text/test_umap.py
@@ -51,6 +51,7 @@
corpus = load_hobbies()
+
##########################################################################
## UMAP Tests
##########################################################################
@@ -143,16 +144,16 @@ def test_custom_colors_umap(self):
"""
## produce random data
X, y = make_classification(n_samples=200, n_features=100,
- n_informative=20, n_redundant=10,
+ n_informative=20, n_redundant=10,
n_classes=5, random_state=42)
-
+
## specify a list of custom colors >= n_classes
purple_blues = ["indigo", "orchid", "plum", "navy", "purple", "blue"]
-
+
## instantiate the visualizer and check that self.colors is correct
purple_umap = UMAPVisualizer(colors=purple_blues, random_state=87)
assert purple_umap.colors == purple_blues
-
+
## fit the visualizer and check that self.color_values is as long as
## n_classes and is the first n_classes items in self.colors
purple_umap.fit(X,y)
diff --git a/tests/test_utils/test_decorators.py b/tests/test_utils/test_decorators.py
--- a/tests/test_utils/test_decorators.py
+++ b/tests/test_utils/test_decorators.py
@@ -17,8 +17,6 @@
## Imports
##########################################################################
-import unittest
-
from yellowbrick.utils.decorators import *
@@ -26,7 +24,7 @@
## Decorator Tests
##########################################################################
-class DecoratorTests(unittest.TestCase):
+class TestDecorators(object):
"""
Tests for the decorator utilities.
"""
@@ -43,10 +41,9 @@ def foo(self):
return "bar"
viz = Visualizer()
- self.assertFalse(hasattr(viz, "_foo"))
- self.assertEqual(viz.foo, "bar")
- self.assertEqual(viz._foo, "bar")
-
+ assert not hasattr(viz, "_foo")
+ assert viz.foo == "bar"
+ assert viz._foo == "bar"
def test_docutil(self):
"""
@@ -69,34 +66,18 @@ def undecorated(*args, **kwargs):
pass
# Test the undecorated string to protect from magic
- self.assertEqual(
- undecorated.__doc__.strip(), "This is an undecorated function string."
- )
+ assert undecorated.__doc__.strip() == "This is an undecorated function string."
# Decorate manually and test the newly decorated return function.
decorated = docutil(Visualizer.__init__)(undecorated)
- self.assertEqual(
- decorated.__doc__.strip(), "This is the correct docstring."
- )
+ assert decorated.__doc__.strip() == "This is the correct docstring."
# Assert that decoration modifies the original function.
- self.assertEqual(
- undecorated.__doc__.strip(), "This is the correct docstring."
- )
+ assert undecorated.__doc__.strip() == "This is the correct docstring."
@docutil(Visualizer.__init__)
def sugar(*args, **kwargs):
pass
# Assert that syntactic sugar works as expected.
- self.assertEqual(
- sugar.__doc__.strip(), "This is the correct docstring."
- )
-
-
-##########################################################################
-## Execute Tests
-##########################################################################
-
-if __name__ == "__main__":
- unittest.main()
+ assert sugar.__doc__.strip() == "This is the correct docstring."
diff --git a/tests/test_utils/test_helpers.py b/tests/test_utils/test_helpers.py
--- a/tests/test_utils/test_helpers.py
+++ b/tests/test_utils/test_helpers.py
@@ -81,7 +81,6 @@ def test_str_input(self):
## Numeric Function Tests
##########################################################################
-
class TestNumericFunctions(object):
"""
Numeric helper functions
diff --git a/tests/test_utils/test_kneed.py b/tests/test_utils/test_kneed.py
--- a/tests/test_utils/test_kneed.py
+++ b/tests/test_utils/test_kneed.py
@@ -2,22 +2,22 @@
# A port of the tests for knee-point detection package, kneed.
#
# Author: Kevin Arvai
-# Author: Pradeep Singh
+# Author: Pradeep Singh
# Created: Mon Apr 23 01:29:18 2019 -0400
#
# Copyright (C) 2017 Kevin Arvai
# All rights reserved.
-# Redistribution and use in source and binary forms, with or without modification,
+# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
-#
+#
# 1. Redistributions of source code must retain the above copyright notice, this list
# of conditions and the following disclaimer.
#
-# 2. Redistributions in binary form must reproduce the above copyright notice, this
-# list of conditions and the following disclaimer in the documentation and/or other
+# 2. Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
-# 3. Neither the name of the copyright holder nor the names of its contributors may
+# 3. Neither the name of the copyright holder nor the names of its contributors may
# be used to endorse or promote products derived from this software without specific
# prior written permission.
#
@@ -26,7 +26,7 @@
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
@@ -35,10 +35,11 @@
# ID: test_kneed.py [] [email protected] $
"""
-This package contains a port of the tests for knee-point detection package, kneed, by
-Kevin Arvai and hosted at https://github.com/arvkevi/kneed. This port is maintained
+This package contains a port of the tests for knee-point detection package, kneed, by
+Kevin Arvai and hosted at https://github.com/arvkevi/kneed. This port is maintained
with permission by the Yellowbrick contributors.
"""
+
import numpy as np
from yellowbrick.utils.kneed import KneeLocator
@@ -50,28 +51,28 @@
def test_concave_increasing():
- """Tests that a correct knee point is detected in
+ """Tests that a correct knee point is detected in
curve having concave and increasing nature."""
kn = KneeLocator(x, y_concave_inc, curve_nature='concave', curve_direction='increasing')
assert kn.knee == 2
def test_concave_decreasing():
- """Tests that a correct knee point is detected in
+ """Tests that a correct knee point is detected in
curve having concave and decreasing nature."""
kn = KneeLocator(x, y_concave_dec, curve_nature='concave', curve_direction='decreasing')
assert kn.knee == 7
def test_convex_increasing():
- """Tests that a correct knee point is detected in
+ """Tests that a correct knee point is detected in
curve having convex and increasing nature."""
kn = KneeLocator(x, y_convex_inc, curve_nature='convex', curve_direction='increasing')
assert kn.knee == 7
def test_convex_decreasing():
- """Tests that a correct knee point is detected in
+ """Tests that a correct knee point is detected in
curve having convex and decreasing nature."""
kn = KneeLocator(x, y_convex_dec, curve_nature='convex', curve_direction='decreasing')
assert kn.knee == 2
diff --git a/tests/test_utils/test_timer.py b/tests/test_utils/test_timer.py
--- a/tests/test_utils/test_timer.py
+++ b/tests/test_utils/test_timer.py
@@ -19,6 +19,7 @@
from unittest import mock
from yellowbrick.utils.timer import *
+
##########################################################################
## Helper Function Tests
##########################################################################
| Overhaul unittest and fixtures
Remove dependency on `unittest.TestCase` by replacing assertions with `assert` statements. This also involves using `pytest.raises` and `pytest.skip` decorators.
Use PyTest fixtures for datasets and other fixtures.
~Add PEP checking to pytest (alongside pyflakes)~
- [x] rename `ThingTests` to `TestThing`
- [x] assertions
- [x] pytest.skip
- [x] dataset fixtures
- [ ] ~PEP8~
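For illustration, a minimal before/after sketch of the conversions in the checklist above (the `frob`/`WidgetTests`/`TestWidget` names are hypothetical, not actual Yellowbrick tests):
```python
import unittest
import pytest


def frob(x):
    # toy function under test, only here to keep the sketch self-contained
    if x < 0:
        raise ValueError("x must be non-negative")
    return x * 2


class WidgetTests(unittest.TestCase):
    # unittest/nose style: *Tests suffix, self.assert* helpers, unittest.skip
    def test_frob(self):
        self.assertEqual(frob(2), 4)
        with self.assertRaises(ValueError):
            frob(-1)

    @unittest.skip("not implemented yet")
    def test_todo(self):
        pass


class TestWidget(object):
    # pytest style: Test* prefix, bare assert, pytest.raises, pytest.mark.skip
    def test_frob(self):
        assert frob(2) == 4
        with pytest.raises(ValueError):
            frob(-1)

    @pytest.mark.skip(reason="not implemented yet")
    def test_todo(self):
        pass
```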
Alongside this we should also do a PY2 removal check, namely anywhere:
```python
try:
from unittest import mock
except ImportError:
import mock
```
should be replaced with just `from unittest import mock`
| @bbengfort what's the motivation for removing the unittest dependency?
@NealHumphrey the short answer is that weird things happen when you mix and match pytest and unittest - my primary concern is test discovery. PyTest expects `TestThing` vs `ThingTests` (although I guess that is more of a nose thing than a unittest thing). Plus pragmas and marks (e.g. skip, xfail) behave slightly differently. I think it's better to standardize on one or the other.
Because our feeling is that people are more comfortable with pytest and because of features like pytest-flakes, pytest-spec, etc. we've chosen to move to pytest. Pytest does play well with unittest so there is no rush on this, but the sooner it's standard the easier it will be to diagnose issues in the tests (of which we have a few).
Yeah @NealHumphrey, this is sort of a follow on from #291 -- after we discovered last year that [nose wasn't being maintained anymore](https://twitter.com/llanga/status/826144101833732097) and started converting all the tests to pytest.
Adding a TODO:
- [x] Update the [documentation](http://www.scikit-yb.org/en/latest/contributing.html#testing) for contributors
- [x] add pytest to requirements.txt
As noted in my pull request, I had trouble running `pytest` locally, including pyflakes and coverage. This was on Windows; I haven't tried it on Mac yet.
What I tried:
1) Create a new virtual environment (using conda)
2) activate it `activate yb`
3) pip install pytest (b/c it's not in the requirements.txt, see TODO in previous comment), which installed 3.4.2 plus some dependencies, and `pip install --upgrade pyflakes` (installed v 1.6.0)
4) in root `yellowbrick` folder (i.e. not in `yellowbrick/tests` and not in `yellowbrick/yellowbrick`), run `python -m pytest`
Output:
```
usage: pytest.py [options] [file_or_dir] [file_or_dir] [...]
pytest.py: error: unrecognized arguments: --cov=yellowbrick --flakes --spec
inifile: C:\Users\humph\Documents\Github\yellowbrick\setup.cfg
rootdir: C:\Users\humph\Documents\Github\yellowbrick
```
If I run without the virtual environment and use `python -m pytest`, it runs all the tests but lots of them fail (44). There are various causes, though a lot of them seem to trace back to this:
```
def _actual_img_path(self, extension='.png'):
"""Determines the correct outpath for drawing a matplotlib image that
corresponds to the unittest module path.
"""
module_path, test_func_name = self._setup_imagetest()
> module_path = os.path.join(*module_path)
E TypeError: join() missing 1 required positional argument: 'path'
```
@bbengfort see above
@NealHumphrey, thank you for the detailed notes. This is a very good point; the documentation has been updated with pytest, but is currently in develop and will be moved to latest once we go to 0.6 (very shortly). If you could review the develop documents we can try to adapt them to make it a bit friendlier.
In [Forking the Repository](http://www.scikit-yb.org/en/develop/contributing.html#forking-the-repository) in the contributors guide, step 3 mentions `pip install -r tests/requirements.txt` and `pip install -r docs/requirements.txt`, this isn't in the testing section though which simply mentions:
> The Makefile uses the pytest runner and testing suite as well as the coverage library, so make sure you have those dependencies installed! The DatasetMixin also requires requests.py to fetch data from our Amazon S3 account.
We could adapt this section to reinforce installing the dependencies with `pip install -r tests/requirements.txt` and also mention using `pytest` directly rather than `make test`.
I believe the error you are observing will be solved when you install pyspec and coverage using that requirements file.
A note on the various requirements files: the main `requirements.txt` file is used in `setup.py` to determine dependencies, as such we have only the dependencies required for using yellowbrick as a library in it (nor do we manage our dependencies' subdependencies). The requirements for testing and documentation are actually in that file, just commented out and they're uncommented in the respective docs and test directories.
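To make that layering concrete, a rough sketch of the layout being described (the exact pins and package names here are illustrative, not the real file contents):
```
# requirements.txt -- library dependencies, consumed by setup.py
matplotlib>=1.5.1
scipy>=1.0.0
scikit-learn>=0.20
numpy>=1.13.0

## Testing dependencies (commented out here, uncommented in tests/requirements.txt)
# pytest
# pytest-cov
# pytest-flakes
# pytest-spec

## Documentation dependencies (commented out here, uncommented in docs/requirements.txt)
# Sphinx
```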
@bbengfort potentially we could close this in favor of #456 and #682?
@rebeccabilbro thanks for grooming the backlog of issues! Although #456 and #682 are closely related to this issue, I think this is an independent task from those and there are several gotchas that this issue records (e.g. TestThing vs ThingTests). The good news is that in the PRs since this issue we've added no new unittest dependencies and have replaced some of the old dependencies with PyTest. I think that this should be (relatively) easily sorted when we do the v1.0 release.
Gotcha @bbengfort, I’ll add to the v1.0 milestone in that case.
Recommend that we remove PEP8 from this as we are going to add Black in another V1.0 issue:
https://github.com/DistrictDataLabs/yellowbrick/issues/456 | 2019-07-01T21:46:37 |
DistrictDataLabs/yellowbrick | 905 | DistrictDataLabs__yellowbrick-905 | [
"902"
] | cead4c2448dcb3a2b0851087aaa65647493a8dc8 | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -30,10 +30,14 @@
from ..utils import KneeLocator
from sklearn.metrics import silhouette_score
-from sklearn.metrics import calinski_harabasz_score
from sklearn.metrics.pairwise import pairwise_distances
from sklearn.preprocessing import LabelEncoder
+try:
+ from sklearn.metrics import calinski_harabasz_score as chs
+except ImportError:
+ from sklearn.metrics import calinski_harabaz_score as chs
+
## Packages for export
__all__ = [
@@ -114,7 +118,7 @@ def distortion_score(X, labels, metric='euclidean'):
KELBOW_SCOREMAP = {
"distortion": distortion_score,
"silhouette": silhouette_score,
- "calinski_harabasz": calinski_harabasz_score,
+ "calinski_harabasz": chs,
}
| ImportError: cannot import name 'calinski_harabasz_score' with sklearn 0.20.2
**Describe the bug**
After #887 we unintentionally injected an `ImportError` for our current set of dependencies; namely `calinski_harabasz_score` doesn't exist in scikit-learn 0.20. This does not show up in our current CI/test suite because we always install the latest deps. There are two options to fix:
1. Update our `requirements.txt` to a later version of scikit-learn
2. Wrap the import in `try/except`, e.g.
```python
try:
from sklearn.metrics import calinski_harabasz_score
except ImportError:
from sklearn.metrics import calinski_harabaz_score
```
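For option 1, the change would presumably just be raising the floor in `requirements.txt`, something like the following (assuming the corrected `calinski_harabasz_score` spelling first ships in scikit-learn 0.21; the exact pin is illustrative):
```
scikit-learn>=0.21
```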
**To Reproduce**
```
>>> from yellowbrick.cluster import ElbowVisualizer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/benjamin/Workspace/git/yellowbrick/yellowbrick/cluster/__init__.py", line 23, in <module>
from .elbow import *
File "/Users/benjamin/Workspace/git/yellowbrick/yellowbrick/cluster/elbow.py", line 33, in <module>
from sklearn.metrics import calinski_harabasz_score
ImportError: cannot import name 'calinski_harabasz_score'
```
**Expected behavior**
If we use the minimum dependencies as set in `requirements.txt` we should not have import errors.
**Traceback**
From the tests:
```
________________________________ ERROR collecting tests/test_cluster/test_silhouette.py ________________________________
ImportError while importing test module '/Users/benjamin/Workspace/git/yellowbrick/tests/test_cluster/test_silhouette.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
tests/test_cluster/test_silhouette.py:30: in <module>
from yellowbrick.cluster.silhouette import SilhouetteVisualizer
yellowbrick/cluster/__init__.py:23: in <module>
from .elbow import *
yellowbrick/cluster/elbow.py:33: in <module>
from sklearn.metrics import calinski_harabasz_score
E ImportError: cannot import name 'calinski_harabasz_score'
```
**Desktop (please complete the following information):**
- OS: macOS
- Python Version: 3.6.2
- Yellowbrick Version: develop
- Scikit-Learn Version: 0.20.2
| 2019-07-02T15:01:16 |
||
DistrictDataLabs/yellowbrick | 914 | DistrictDataLabs__yellowbrick-914 | [
"912"
] | 06401deaaf5f151f030edc9b378ad1959137034d | diff --git a/yellowbrick/features/rankd.py b/yellowbrick/features/rankd.py
--- a/yellowbrick/features/rankd.py
+++ b/yellowbrick/features/rankd.py
@@ -17,14 +17,17 @@
## Imports
##########################################################################
+import warnings
import numpy as np
+import matplotlib as mpl
+
from scipy.stats import shapiro
from scipy.stats import spearmanr
from scipy.stats import kendalltau as sp_kendalltau
from yellowbrick.utils import is_dataframe
from yellowbrick.features.base import MultiFeatureVisualizer
-from yellowbrick.exceptions import YellowbrickValueError
+from yellowbrick.exceptions import YellowbrickValueError, YellowbrickWarning
__all__ = ["rank1d", "rank2d", "Rank1D", "Rank2D"]
@@ -189,7 +192,16 @@ def finalize(self, **kwargs):
generic keyword arguments
"""
- # Set the title
+ # There is a known bug in matplotlib 3.1.1 that affects RankD plots
+ # See #912 and #914 for details.
+ if mpl.__version__ == "3.1.1":
+ msg = (
+ "RankD plots may be clipped when using matplotlib v3.1.1, "
+ "upgrade to matplotlib v3.1.2 or later to fix the plots."
+ )
+ warnings.warn(msg, YellowbrickWarning)
+
+ # Set the title for all RankD visualizations.
self.set_title(
"{} Ranking of {} Features".format(
self.ranking_.title(), len(self.features_)
| diff --git a/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_numpy.png b/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_numpy.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_numpy.png and b/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_numpy.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_pandas.png b/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_pandas.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_pandas.png and b/tests/baseline_images/test_features/test_rankd/test_rank1d_integrated_pandas.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank1d_orientation.png b/tests/baseline_images/test_features/test_rankd/test_rank1d_orientation.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank1d_orientation.png and b/tests/baseline_images/test_features/test_rankd/test_rank1d_orientation.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank1d_shapiro.png b/tests/baseline_images/test_features/test_rankd/test_rank1d_shapiro.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank1d_shapiro.png and b/tests/baseline_images/test_features/test_rankd/test_rank1d_shapiro.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_covariance.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_covariance.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_covariance.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_covariance.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_numpy.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_numpy.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_numpy.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_numpy.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_pandas.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_pandas.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_pandas.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_integrated_pandas.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_kendalltau.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_kendalltau.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_kendalltau.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_kendalltau.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_pearson.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_pearson.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_pearson.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_pearson.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_random.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_random.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_features/test_rankd/test_rank2d_random.png differ
diff --git a/tests/baseline_images/test_features/test_rankd/test_rank2d_spearman.png b/tests/baseline_images/test_features/test_rankd/test_rank2d_spearman.png
Binary files a/tests/baseline_images/test_features/test_rankd/test_rank2d_spearman.png and b/tests/baseline_images/test_features/test_rankd/test_rank2d_spearman.png differ
diff --git a/tests/requirements.txt b/tests/requirements.txt
--- a/tests/requirements.txt
+++ b/tests/requirements.txt
@@ -1,5 +1,5 @@
# Library Dependencies
-matplotlib>=1.5.1,!=3.0.0 # Note: rank2d image tests require mpl==3.1.1
+matplotlib>=1.5.1,!=3.0.0,!=3.1.1
scipy>=1.0.0
scikit-learn>=0.20
numpy>=1.13.0
| Matplotlib 3.1.1 impacts some visual tests
Matplotlib 3.1.1 (released on July 1, 2019) broke some of our tests, particularly the Rank2D tests with very big RMSE (~40) and very different diff images. For now, in #907, I have regenerated the baseline images for Rank2D so that our tests will pass on Travis and Appveyor, but I have not changed the `tests/requirements.txt` (except to note this strange behavior).
| I looked at plots on Windows conda under 3.1.0 and 3.0.3 - both are different from 3.1.1. baseline in the same way (~40).
The difference is most likely due to do to a recent fix for the conversions between types for stacked bar charts (bar and barh). It had a 3.1 milestone:
https://github.com/matplotlib/matplotlib/issues/10788#issuecomment-373177134
The heights of rows are no longer consistent within 3.1.1, as compared to 3.1.0 and 3.0.3. Exactly where this would need to be corrected, if it is a bug in matplotlib, or a needed change in yb, is not clear to me as yet.
This is across the 6 Rank2D visualizers, but here is the rank2d_integrated_numpy visualizer:
**Integrated NumPy 3.0.3**

**Integrated NumPy 3.1.0**

**Integrated NumPy 3.1.1**

Even when reverting to the yb commit https://github.com/DistrictDataLabs/yellowbrick/commit/1bb171cafb468ea6867668099c1b4a0f23db2b79 from prior to the release of 3.1.1, but using 3.1.1, the heights of the rows are inconsistent.
I would recommend that matplotlib!=3.1.1 be added to requirements.txt and tests/requirements.txt to avoid users and contributors installing with 3.1.1. I can file a PR for this, include the previous Rank2D images in it, and create a matplotlib issue, pointing to this yb issue for illustration. Does this sound reasonable?
>I would recommend that matplotlib!=3.1.1 be added to requirements.txt and tests/requirements.txt to avoid users and contributors installing with 3.1.1. I can file a PR for this, include the previous Rank2D images in it, and create a matplotlib issue, pointing to this yb issue for illustration. Does this sound reasonable?
Thanks for all this great research you've been doing into this issue @nickpowersys! If you wouldn't mind holding off on the PR for just a bit longer as the maintainers are still powering through [some of those contagious issues](https://github.com/DistrictDataLabs/yellowbrick/pull/894#issuecomment-507370010). In the meantime though, If you could do a bit more digging into this, that would be great! A few open questions seem to be:
1. Does the same behavior occur between Matplotlib 3.1.0 and 3.1.1 using a different dataset? E.g. credit or bikeshare? Perhaps there is some data type checking we should be doing from inside the visualizer to ensure the proper dimensions?
2. Specifically what changed in Matplotlib between 3.1.0 and 3.1.1 that is causing this discrepancy in the Rank2D plots? Maybe @tacaswell or @jklymak can help point us in the right direction?
3. Why is the behavior manifesting in Rank2D but not in any of the other visualizers?
This was a bug and should be fixed in 3.1.2. Sorry about that. https://github.com/matplotlib/matplotlib/pull/14677
Thanks for the speedy response @jklymak; btw the mpl team are total badasses — thank you, thank you, thank you for all you do! And thanks @nickpowersys for offering to do the PR, but I'll go ahead and open one now since I already have a branch on my local with the repairs to the images 😉. Stay tuned! | 2019-07-06T19:55:51 |
DistrictDataLabs/yellowbrick | 935 | DistrictDataLabs__yellowbrick-935 | [
"931"
] | e24661a1aa2a86a6e69877c91a3c6803d78b3c22 | diff --git a/yellowbrick/utils/kneed.py b/yellowbrick/utils/kneed.py
--- a/yellowbrick/utils/kneed.py
+++ b/yellowbrick/utils/kneed.py
@@ -65,7 +65,7 @@ class KneeLocator(object):
Sensitivity parameter that allows us to adjust how aggressive we want KneeLocator to
be when detecting "knees" or "elbows".
- curve_nature : string, default: 'convace'
+ curve_nature : string, default: 'concave'
A string that determines the nature of the elbow curve in which "knee" or "elbow" is
to be found.
@@ -77,9 +77,7 @@ class KneeLocator(object):
-----
The KneeLocator is implemented using the "knee point detection algorithm" which can be read at
`<https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf>`
-
"""
-
def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increasing'):
# Raw Input
@@ -89,128 +87,145 @@ def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increas
self.curve_direction = curve_direction
self.N = len(self.x)
self.S = S
+ self.all_knees = set()
+ self.all_norm_knees = set()
# Step 1: fit a smooth line
uspline = interpolate.interp1d(self.x, self.y)
- self.Ds_x = np.linspace(np.min(self.x), np.max(self.x), self.N)
- self.Ds_y = uspline(self.Ds_x)
+ self.x = np.array(x)
+ self.Ds_y = uspline(self.x)
# Step 2: normalize values
- self.xsn = self.__normalize(self.Ds_x)
- self.ysn = self.__normalize(self.Ds_y)
-
- # Step 3: Calculate difference curve
- self.xd = self.xsn
- if self.curve_nature == 'convex' and curve_direction == 'decreasing':
- self.yd = self.ysn + self.xsn
- self.yd = 1 - self.yd
- elif self.curve_nature == 'concave' and curve_direction == 'decreasing':
- self.yd = self.ysn + self.xsn
- elif self.curve_nature == 'concave' and curve_direction == 'increasing':
- self.yd = self.ysn - self.xsn
- if self.curve_nature == 'convex' and curve_direction == 'increasing':
- self.yd = abs(self.ysn - self.xsn)
+ self.x_normalized = self.__normalize(self.x)
+ self.y_normalized = self.__normalize(self.Ds_y)
+
+ # Step 3: Calculate the Difference curve
+ self.x_normalized, self.y_normalized = self.transform_xy(
+ self.x_normalized, self.y_normalized, self.curve_direction, self.curve_nature
+ )
+ # normalized difference curve
+ self.y_distance = self.y_normalized - self.x_normalized
+ self.x_distance = self.x_normalized.copy()
# Step 4: Identify local maxima/minima
# local maxima
- self.xmx_idx = argrelextrema(self.yd, np.greater)[0]
- self.xmx = self.xd[self.xmx_idx]
- self.ymx = self.yd[self.xmx_idx]
+ self.maxima_inidices = argrelextrema(self.y_distance, np.greater)[0]
+ self.x_distance_maxima = self.x_distance[self.maxima_inidices]
+ self.y_distance_maxima = self.y_distance[self.maxima_inidices]
# local minima
- self.xmn_idx = argrelextrema(self.yd, np.less)[0]
- self.xmn = self.xd[self.xmn_idx]
- self.ymn = self.yd[self.xmn_idx]
+ self.minima_indices = argrelextrema(self.y_distance, np.less)[0]
+ self.x_distance_minima = self.x_distance[self.minima_indices]
+ self.y_distance_minima = self.y_distance[self.minima_indices]
# Step 5: Calculate thresholds
- self.Tmx = self.__threshold(self.ymx)
+ self.Tmx = self.y_distance_maxima - (self.S * np.abs(np.diff(self.x_normalized).mean()))
# Step 6: find knee
- self.knee, self.norm_knee, self.knee_x = self.find_knee()
+ self.find_knee()
+ self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
@staticmethod
def __normalize(a):
"""
Normalizes an array.
-
Parameters
-----------
- a : list
+ a : list
The array to normalize
"""
return (a - min(a)) / (max(a) - min(a))
- def __threshold(self, ymx_i):
- """
- Calculates the difference threshold for a
- given difference local maximum.
-
- Parameters
- -----------
- ymx_i : float
- The normalized y value of a local maximum.
- """
- return ymx_i - (self.S * np.diff(self.xsn).mean())
+ @staticmethod
+ def transform_xy(x, y, direction, curve):
+ """transform x and y to concave, increasing based on curve_direction and curve_nature"""
+ # convert elbows to knees
+ if curve == 'convex':
+ x = x.max() - x
+ y = y.max() - y
+ # flip decreasing functions to increasing
+ if direction == 'decreasing':
+ y = np.flip(y)
+
+ if curve == 'convex':
+ x = np.flip(x)
+ y = np.flip(y)
+
+ return x, y
def find_knee(self, ):
- """
- Finds and returns the "knee"or "elbow" value, the normalized knee
- value, and the x value where the knee is located.
-
- """
- if not self.xmx_idx.size:
+ """This function finds and sets the knee value and the normalized knee value. """
+ if not self.maxima_inidices.size:
warning_message = \
'No "knee" or "elbow point" detected ' \
'This could be due to bad clustering, no '\
- 'actual clusters being formed etc.'
- warnings.warn(warning_message,YellowbrickWarning)
- return None, None, None
-
- mxmx_iter = np.arange(self.xmx_idx[0], len(self.xsn))
- xmx_idx_iter = np.append(self.xmx_idx, len(self.xsn))
-
- knee_, norm_knee_, knee_x = 0.0, 0.0, None
- for mxmx_i, mxmx in enumerate(xmx_idx_iter):
- # stopping criteria for exhasuting array
- if mxmx_i == len(xmx_idx_iter) - 1:
+ 'actual clusters being formed etc.'
+ warnings.warn(warning_message, YellowbrickWarning)
+ return None, None
+
+ # artificially place a local max at the last item in the x_distance array
+ self.maxima_inidices = np.append(self.maxima_inidices, len(self.x_distance) - 1)
+ self.minima_indices = np.append(self.minima_indices, len(self.x_distance) - 1)
+
+ # placeholder for which threshold region i is located in.
+ maxima_threshold_index = 0
+ minima_threshold_index = 0
+ # traverse the distance curve
+ for idx, i in enumerate(self.x_distance):
+ # reached the end of the curve
+ if i == 1.0:
break
- # indices between maxima/minima
- idxs = (mxmx_iter > xmx_idx_iter[mxmx_i]) * \
- (mxmx_iter < xmx_idx_iter[mxmx_i + 1])
- between_local_mx = mxmx_iter[np.where(idxs)]
-
- for j in between_local_mx:
- if j in self.xmn_idx:
- # reached a minima, x indices are unique
- # only need to check if j is a min
- if self.yd[j + 1] > self.yd[j]:
- self.Tmx[mxmx_i] = 0
- knee_x = None # reset x where yd crossed Tmx
- elif self.yd[j + 1] <= self.yd[j]:
- warning_message="If this is a minima, " \
- "how would you ever get here."
- warnings.warn(warning_message, YellowbrickWarning)
- if self.yd[j] < self.Tmx[mxmx_i] or self.Tmx[mxmx_i] < 0:
- # declare a knee
- if not knee_x:
- knee_x = j
- knee_ = self.x[self.xmx_idx[mxmx_i]]
- norm_knee_ = self.xsn[self.xmx_idx[mxmx_i]]
- return knee_, norm_knee_, knee_x
+ # values in distance curve are at or after a local maximum
+ if idx >= self.maxima_inidices[maxima_threshold_index]:
+ threshold = self.Tmx[maxima_threshold_index]
+ threshold_index = idx
+ maxima_threshold_index += 1
+ # values in distance curve are at or after a local minimum
+ if idx >= self.minima_indices[minima_threshold_index]:
+ threshold = 0.0
+ minima_threshold_index += 1
+ # Do not evaluate values in the distance curve before the first local maximum.
+ if idx < self.maxima_inidices[0]:
+ continue
+
+ # evaluate the threshold
+ if self.y_distance[idx] < threshold:
+ if self.curve_nature == 'convex':
+ if self.curve_direction == 'decreasing':
+ knee = self.x[threshold_index]
+ self.all_knees.add(knee)
+ norm_knee = self.x_normalized[threshold_index]
+ self.all_norm_knees.add(norm_knee)
+ else:
+ knee = self.x[-(threshold_index + 1)]
+ self.all_knees.add(knee)
+ norm_knee = self.x_normalized[-(threshold_index + 1)]
+ self.all_norm_knees.add(norm_knee)
+
+ elif self.curve_nature == 'concave':
+ if self.curve_direction == 'decreasing':
+ knee = self.x[-(threshold_index + 1)]
+ self.all_knees.add(knee)
+ norm_knee = self.x_normalized[-(threshold_index + 1)]
+ self.all_norm_knees.add(norm_knee)
+ else:
+ knee = self.x[threshold_index]
+ self.all_knees.add(knee)
+ norm_knee = self.x_normalized[threshold_index]
+ self.all_norm_knees.add(norm_knee)
def plot_knee_normalized(self, ):
"""
- Plots the normalized curve, the distance curve (xd, ysn) and the
+ Plots the normalized curve, the distance curve (x_distance, y_normalized) and the
knee, if it exists.
-
"""
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 8))
- plt.plot(self.xsn, self.ysn)
- plt.plot(self.xd, self.yd, 'r')
- plt.xticks(np.arange(min(self.xsn), max(self.xsn) + 0.1, 0.1))
- plt.yticks(np.arange(min(self.xd), max(self.ysn) + 0.1, 0.1))
+ plt.plot(self.x_normalized, self.y_normalized,)
+ plt.plot(self.x_distance, self.y_distance, 'r')
+ plt.xticks(np.arange(self.x_normalized.min(), self.x_normalized.max() + 0.1, 0.1))
+ plt.yticks(np.arange(self.y_distance.min(), self.y_normalized.max() + 0.1, 0.1))
plt.vlines(self.norm_knee, plt.ylim()[0], plt.ylim()[1])
@@ -226,7 +241,6 @@ def plot_knee(self, ):
plt.vlines(self.knee, plt.ylim()[0], plt.ylim()[1])
# Niceties for users working with elbows rather than knees
-
@property
def elbow(self):
return self.knee
@@ -236,7 +250,9 @@ def norm_elbow(self):
return self.norm_knee
@property
- def elbow_x(self):
- return self.knee_x
-
+ def all_elbows(self):
+ return self.all_knees
+ @property
+ def all_norm_elbows(self):
+ return self.all_norm_knees
| Incorporate kneed refactor in utils/kneed.py
yellowbrick has a port of some source code from the [kneed](https://github.com/arvkevi/kneed) package in `utils/kneed.py`. The `kneed` package was recently refactored with a [new release](https://github.com/arvkevi/kneed/releases/tag/v0.4.0).
I will create a PR to update `utils/kneed.py` to reflect the changes in `kneed`.
This [issue](https://github.com/DistrictDataLabs/yellowbrick/pull/813#issuecomment-513845068) raised by @Kautumn06 shows the elbow is "incorrectly" identified for the [example in the elbow documentation](https://www.scikit-yb.org/en/latest/api/cluster/elbow.html#).
The image shows knees identified in the example by incrementing `random_state` by 1 for 0-9999 in `make_blobs` for both the current and proposed (refactored) method. Could these `k<=7` runs be blobs with high variance (noisy clusters)? Should yb consider this separately, and should I create a new issue for this?
<img width="759" alt="Screen Shot 2019-07-22 at 10 08 37 PM" src="https://user-images.githubusercontent.com/9151717/61677416-8fa8fa80-accd-11e9-8a86-b873bcf3226f.png">
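A rough sketch of the kind of sweep behind that figure (hypothetical script; the `make_blobs` parameters are assumptions loosely modeled on the elbow documentation example, not values taken from the issue):

```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer

elbows = []
for seed in range(100):  # the comparison above swept random_state over 0-9999
    X, _ = make_blobs(n_samples=1000, n_features=12, centers=8, random_state=seed)
    oz = KElbowVisualizer(KMeans(random_state=seed), k=(4, 12))
    oz.fit(X)
    elbows.append(oz.elbow_value_)  # k chosen by the knee locator (None if no knee found)
    plt.close("all")  # discard the figure, only the detected k is of interest

print(sorted(e for e in set(elbows) if e is not None))
```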
| 2019-07-27T19:29:34 |
||
DistrictDataLabs/yellowbrick | 944 | DistrictDataLabs__yellowbrick-944 | [
"943",
"943"
] | 8e9132104add45e560b9dbab05100d65850e10f3 | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -333,8 +333,8 @@ def fit(self, X, y=None, **kwargs):
elbow_locator = KneeLocator(
self.k_values_, self.k_scores_, **locator_kwargs
)
- self.elbow_value_ = elbow_locator.knee
- if self.elbow_value_ is not None:
+ if elbow_locator.knee is None:
+ self.elbow_value_ = None
self.elbow_score_ = 0
warning_message = (
"No 'knee' or 'elbow' point detected, "
@@ -342,6 +342,7 @@ def fit(self, X, y=None, **kwargs):
)
warnings.warn(warning_message, YellowbrickWarning)
else:
+ self.elbow_value_ = elbow_locator.knee
self.elbow_score_ = self.k_scores_[
self.k_values_.index(self.elbow_value_)
]
@@ -420,8 +421,8 @@ def kelbow_visualizer(
X,
y=None,
k=10,
- ax=None,
- timings=True,
+ ax=None,
+ timings=True,
locate_elbow=True,
metric="distortion",
**kwargs
diff --git a/yellowbrick/utils/kneed.py b/yellowbrick/utils/kneed.py
--- a/yellowbrick/utils/kneed.py
+++ b/yellowbrick/utils/kneed.py
@@ -2,22 +2,22 @@
# A port of the knee-point detection package, kneed.
#
# Author: Kevin Arvai
-# Author: Pradeep Singh
+# Author: Pradeep Singh
# Created: Mon Apr 15 09:43:18 2019 -0400
#
# Copyright (C) 2017 Kevin Arvai
# All rights reserved.
-# Redistribution and use in source and binary forms, with or without modification,
+# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
-#
+#
# 1. Redistributions of source code must retain the above copyright notice, this list
# of conditions and the following disclaimer.
#
-# 2. Redistributions in binary form must reproduce the above copyright notice, this
-# list of conditions and the following disclaimer in the documentation and/or other
+# 2. Redistributions in binary form must reproduce the above copyright notice, this
+# list of conditions and the following disclaimer in the documentation and/or other
# materials provided with the distribution.
#
-# 3. Neither the name of the copyright holder nor the names of its contributors may
+# 3. Neither the name of the copyright holder nor the names of its contributors may
# be used to endorse or promote products derived from this software without specific
# prior written permission.
#
@@ -26,7 +26,7 @@
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
-# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
+# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
# OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
@@ -78,7 +78,10 @@ class KneeLocator(object):
The KneeLocator is implemented using the "knee point detection algorithm" which can be read at
`<https://www1.icsi.berkeley.edu/~barath/papers/kneedle-simplex11.pdf>`
"""
- def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increasing'):
+
+ def __init__(
+ self, x, y, S=1.0, curve_nature="concave", curve_direction="increasing"
+ ):
# Raw Input
self.x = x
@@ -101,7 +104,10 @@ def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increas
# Step 3: Calculate the Difference curve
self.x_normalized, self.y_normalized = self.transform_xy(
- self.x_normalized, self.y_normalized, self.curve_direction, self.curve_nature
+ self.x_normalized,
+ self.y_normalized,
+ self.curve_direction,
+ self.curve_nature,
)
# normalized difference curve
self.y_distance = self.y_normalized - self.x_normalized
@@ -119,11 +125,23 @@ def __init__(self, x, y, S=1.0, curve_nature='concave', curve_direction='increas
self.y_distance_minima = self.y_distance[self.minima_indices]
# Step 5: Calculate thresholds
- self.Tmx = self.y_distance_maxima - (self.S * np.abs(np.diff(self.x_normalized).mean()))
+ self.Tmx = self.y_distance_maxima - (
+ self.S * np.abs(np.diff(self.x_normalized).mean())
+ )
# Step 6: find knee
self.find_knee()
- self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
+ if (self.all_knees or self.all_norm_knees) == set():
+ warning_message = (
+ "No 'knee' or 'elbow point' detected "
+ "This could be due to bad clustering, no "
+ "actual clusters being formed etc."
+ )
+ warnings.warn(warning_message, YellowbrickWarning)
+ self.knee = None
+ self.norm_knee = None
+ else:
+ self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
@staticmethod
def __normalize(a):
@@ -140,26 +158,27 @@ def __normalize(a):
def transform_xy(x, y, direction, curve):
"""transform x and y to concave, increasing based on curve_direction and curve_nature"""
# convert elbows to knees
- if curve == 'convex':
+ if curve == "convex":
x = x.max() - x
y = y.max() - y
# flip decreasing functions to increasing
- if direction == 'decreasing':
+ if direction == "decreasing":
y = np.flip(y)
- if curve == 'convex':
+ if curve == "convex":
x = np.flip(x)
y = np.flip(y)
return x, y
- def find_knee(self, ):
+ def find_knee(self,):
"""This function finds and sets the knee value and the normalized knee value. """
if not self.maxima_inidices.size:
- warning_message = \
- 'No "knee" or "elbow point" detected ' \
- 'This could be due to bad clustering, no '\
- 'actual clusters being formed etc.'
+ warning_message = (
+ 'No "knee" or "elbow point" detected '
+ "This could be due to bad clustering, no "
+ "actual clusters being formed etc."
+ )
warnings.warn(warning_message, YellowbrickWarning)
return None, None
@@ -190,8 +209,8 @@ def find_knee(self, ):
# evaluate the threshold
if self.y_distance[idx] < threshold:
- if self.curve_nature == 'convex':
- if self.curve_direction == 'decreasing':
+ if self.curve_nature == "convex":
+ if self.curve_direction == "decreasing":
knee = self.x[threshold_index]
self.all_knees.add(knee)
norm_knee = self.x_normalized[threshold_index]
@@ -202,8 +221,8 @@ def find_knee(self, ):
norm_knee = self.x_normalized[-(threshold_index + 1)]
self.all_norm_knees.add(norm_knee)
- elif self.curve_nature == 'concave':
- if self.curve_direction == 'decreasing':
+ elif self.curve_nature == "concave":
+ if self.curve_direction == "decreasing":
knee = self.x[-(threshold_index + 1)]
self.all_knees.add(knee)
norm_knee = self.x_normalized[-(threshold_index + 1)]
@@ -214,7 +233,7 @@ def find_knee(self, ):
norm_knee = self.x_normalized[threshold_index]
self.all_norm_knees.add(norm_knee)
- def plot_knee_normalized(self, ):
+ def plot_knee_normalized(self,):
"""
Plots the normalized curve, the distance curve (x_distance, y_normalized) and the
knee, if it exists.
@@ -222,14 +241,16 @@ def plot_knee_normalized(self, ):
import matplotlib.pyplot as plt
plt.figure(figsize=(8, 8))
- plt.plot(self.x_normalized, self.y_normalized,)
- plt.plot(self.x_distance, self.y_distance, 'r')
- plt.xticks(np.arange(self.x_normalized.min(), self.x_normalized.max() + 0.1, 0.1))
+ plt.plot(self.x_normalized, self.y_normalized)
+ plt.plot(self.x_distance, self.y_distance, "r")
+ plt.xticks(
+ np.arange(self.x_normalized.min(), self.x_normalized.max() + 0.1, 0.1)
+ )
plt.yticks(np.arange(self.y_distance.min(), self.y_normalized.max() + 0.1, 0.1))
plt.vlines(self.norm_knee, plt.ylim()[0], plt.ylim()[1])
- def plot_knee(self, ):
+ def plot_knee(self,):
"""
Plot the curve and the knee, if it exists
| diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -33,8 +33,8 @@
from tests.base import VisualTestCase
from yellowbrick.datasets import load_hobbies
from yellowbrick.cluster.elbow import distortion_score
-from yellowbrick.exceptions import YellowbrickValueError
from yellowbrick.cluster.elbow import KElbowVisualizer, kelbow_visualizer
+from yellowbrick.exceptions import YellowbrickValueError, YellowbrickWarning
from tests.base import IS_WINDOWS_OR_CONDA
@@ -313,6 +313,22 @@ def test_locate_elbow(self):
self.assert_images_similar(visualizer, windows_tol=2.2)
assert_array_almost_equal(visualizer.k_scores_, expected)
+ def test_no_knee(self):
+ """
+ Assert that a warning is issued if there is no knee detected
+ """
+ X, y = make_blobs(n_samples=1000, centers=3, n_features=12, random_state=12)
+ message = (
+ "No 'knee' or 'elbow point' detected "
+ "This could be due to bad clustering, no "
+ "actual clusters being formed etc."
+ )
+ with pytest.warns(YellowbrickWarning, match=message):
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=12), k=(4, 12), locate_elbow=True
+ )
+ visualizer.fit(X)
+
def test_bad_metric(self):
"""
Assert KElbow raises an exception when a bad metric is supplied
| KElbow raises confusing `ValueError` when optimal k is outside provided k-range
**Describe the bug**
We seem to have a bug in our updated `KElbow` visualizer following the updates in #813 and #935. When the `locate_elbow` param is set to `True` (which it is by default), we get a `ValueError` when calling `fit` on the visualizer.
**To Reproduce**
```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer
X, y = make_blobs(centers=12, n_samples=1000, n_features=16, shuffle=True)
viz = KElbowVisualizer(KMeans(), k=(4,12))
viz.fit(X)
```
```bash
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/rbilbro/pyjects/my_yb/yellowbrick/cluster/elbow.py", line 334, in fit
self.k_values_, self.k_scores_, **locator_kwargs
File "/Users/rbilbro/pyjects/my_yb/yellowbrick/utils/kneed.py", line 126, in __init__
self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
ValueError: min() arg is an empty sequence
```
**Dataset**
`make_blobs`
**Expected behavior**
`fit` should fit the `KElbow` visualizer without raising an error.
**Desktop (please complete the following information):**
- OS: macOS
- Python Version 3.7
- Yellowbrick Version v1.0 dev
| 2019-08-08T18:32:46 |
|
DistrictDataLabs/yellowbrick | 953 | DistrictDataLabs__yellowbrick-953 | [
"375"
] | da729dab14194dba84e75571f08f927efbc19865 | diff --git a/yellowbrick/base.py b/yellowbrick/base.py
--- a/yellowbrick/base.py
+++ b/yellowbrick/base.py
@@ -248,6 +248,9 @@ def poof(self, outpath=None, clear_figure=False, **kwargs):
if clear_figure:
self.fig.clear()
+ # Return ax to ensure display in notebooks
+ return self.ax
+
## ////////////////////////////////////////////////////////////////////
## Helper Functions
## ////////////////////////////////////////////////////////////////////
@@ -558,3 +561,6 @@ def poof(self, outpath=None, clear_figure=False, **kwargs):
if clear_figure:
plt.gcf().clear()
+
+ # Return Axes array to ensure poof works in notebooks
+ return self.axarr
diff --git a/yellowbrick/pipeline.py b/yellowbrick/pipeline.py
--- a/yellowbrick/pipeline.py
+++ b/yellowbrick/pipeline.py
@@ -90,13 +90,18 @@ def poof(self, outdir=None, ext=".pdf", **kwargs):
kwargs : dict
Keyword arguments to pass to the ``poof()`` method of all steps.
"""
+ axes = []
for name, step in self.visual_steps.items():
if outdir is not None:
outpath = path.join(outdir, slugify(name) + ext)
else:
outpath = None
- step.poof(outpath=outpath, **kwargs)
+ ax = step.poof(outpath=outpath, **kwargs)
+ axes.append(ax)
+
+ # Return axes array to ensure figures are shown in notebook
+ return axes
def fit_transform_poof(self, X, y=None, outpath=None, **kwargs):
"""
| diff --git a/tests/test_base.py b/tests/test_base.py
--- a/tests/test_base.py
+++ b/tests/test_base.py
@@ -124,7 +124,7 @@ class CustomVisualizer(Visualizer):
_, ax = plt.subplots()
viz = CustomVisualizer(ax=ax)
viz.finalize = MagicMock()
- viz.poof()
+ assert viz.poof() is ax
viz.finalize.assert_called_once_with()
mock_plt.show.assert_called_once_with()
@@ -142,7 +142,7 @@ class CustomVisualizer(Visualizer):
_, ax = plt.subplots()
viz = CustomVisualizer(ax=ax)
viz.finalize = MagicMock()
- viz.poof(outpath="test.png")
+ assert viz.poof(outpath="test.png") is ax
viz.finalize.assert_called_once_with()
mock_plt.show.assert_not_called()
@@ -159,7 +159,7 @@ class CustomVisualizer(Visualizer):
with pytest.warns(YellowbrickWarning):
viz = CustomVisualizer()
- viz.poof()
+ assert viz.poof() is not None
##########################################################################
## ScoreVisualizer Cases
@@ -228,7 +228,8 @@ def test_draw_visualizer_grid(self):
grid = VisualizerGrid(visualizers)
grid.fit(X, y)
- grid.poof() # poof is required here (do not replace with finalize)!
+ # poof is required here (do not replace with finalize)!
+ assert grid.poof() is not None
self.assert_images_similar(grid)
@@ -249,7 +250,8 @@ def test_draw_with_rows(self):
grid = VisualizerGrid(visualizers, nrows=2)
grid.fit(X, y)
- grid.poof() # poof is required here (do not replace with finalize)!
+ # poof is required here (do not replace with finalize)!
+ assert grid.poof() is not None
self.assert_images_similar(grid)
@@ -270,7 +272,8 @@ def test_draw_with_cols(self):
grid = VisualizerGrid(visualizers, ncols=2)
grid.fit(X, y)
- grid.poof() # poof is required here (do not replace with finalize)!
+ # poof is required here (do not replace with finalize)!
+ assert grid.poof() is not None
self.assert_images_similar(grid)
| Poof() should return ax
Make sure that all visualizers return the `self.ax` when calling poof(). This is so that users can always get access to the ax, for example when working inside a function in a notebook. In addition it will encourage the behavior pattern of tweaking the plot if desired after poof(). We will be adjusting our documentation to use this form to also encourage this behavior:
```
ax = viz.poof()
```
This is why this behavior pattern is useful:
https://stackoverflow.com/questions/47450804/yellowbrick-increasing-font-size-on-yellowbrick-generated-charts
| As a workaround, once poof is called, you can access ax through:
`ax = viz.ax`
and fig:
`fig = viz.ax.get_figure()`
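Put together, the workaround looks roughly like this (a minimal sketch with synthetic data; `Rank2D` and the font-size tweak are only illustrative, echoing the Stack Overflow question linked above):

```python
import numpy as np
from yellowbrick.features import Rank2D

X = np.random.RandomState(0).uniform(size=(100, 6))  # any feature matrix will do

viz = Rank2D(algorithm="pearson")
viz.fit(X)
viz.transform(X)
viz.poof()

ax = viz.ax                 # grab the axes after poof()
ax.set_title("Pearson Ranking of Features", fontsize=18)

fig = viz.ax.get_figure()   # and the figure, e.g. to save it to disk
fig.savefig("rank2d.png")
```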
@fdion out of curiosity, we have a handle to the current ax via the visualizer, do you think we should also add one to the figure? I have to say, it's very rare that I'm doing anything with the figure, though we do have the ability to modify the size of the figure on the visualizer.
@bbengfort good question, Ben. The primary use is to save to file (pdf for latex), but since that's already covered by yellowbrick, and you currently don't have multi ax figures, probably not a high demand item :)
I return the figure in stemgraphic because I have grids of heatmaps or other plots and stuff like that. If you think faceting / multivariate might be a thing soon in Yellow Brick, then I'd say it would be useful to provide it.
Francois
Turns out we just included a VisualizerGrid that draws each visualizer on different axes; we're also thinking about trying to do a "flip book" style visualization with multiple figures. Guess I'll have to keep this in mind! Thanks!
Just a quick implementation note on the original ticket. To implement this:
Add the line `return self.ax` at the end of the base visualizer's poof() method in `yellowbrick/base.py`, and then search through all the visualizers in the code to make sure no one else overrides poof.
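In other words, something along these lines (a simplified sketch of the idea, not the actual `yellowbrick/base.py` source):

```python
import matplotlib.pyplot as plt

class Visualizer:
    """Simplified stand-in for the yellowbrick base visualizer."""

    def __init__(self, ax=None):
        self.ax = ax if ax is not None else plt.gca()

    def finalize(self, **kwargs):
        pass  # subclasses add titles, legends, limits, etc.

    def poof(self, outpath=None, **kwargs):
        self.finalize()
        if outpath is not None:
            plt.savefig(outpath, **kwargs)
        else:
            plt.show()
        # the proposed addition: hand the Axes back to the caller
        return self.ax
```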
Stumbled upon this issue when looking at #380.
Started the implementation in https://github.com/DistrictDataLabs/yellowbrick/pull/422, still work in progress at the moment.
fdion's way of getting `fig` happens to currently be the only way for me to display `Rank2D` correlation plots, as `poof()` does not work anymore, even within the same cell. Not sure if this qualifies as a new bug, or is covered here somehow.
@michael-ziedalski - I'm a bit concerned that `poof` isn't working for you - what environment are you in? Are you still at PyData? Stop by the District Data Labs table and I'd be happy to take a look at it with you. | 2019-08-20T14:58:37 |
DistrictDataLabs/yellowbrick | 1,007 | DistrictDataLabs__yellowbrick-1007 | [
"996"
] | 425d9573e1d6e8027349b37ae5e9714290414ac4 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -77,14 +77,14 @@
## Directories to ignore in find_packages
EXCLUDES = (
- "tests",
+ "tests", "tests.*",
"bin",
- "docs",
+ "docs", "docs.*",
"fixtures",
"register",
- "notebooks",
- "examples",
- "binder",
+ "notebooks", "notebooks.*",
+ "examples", "examples.*",
+ "binder", "binder.*",
"paper",
)
| Installs tests package
**Describe the bug**
Installing yellowbrick also installs a package "tests" into the environment.
**To Reproduce**
```shell
PS> virtualenv env
PS> .\env\Scripts\activate
PS> python -c "import tests; print(tests.__path__)"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'tests'
PS> pip install yellowbrick
PS> python -c "import tests; print(tests.__path__)"
_NamespacePath(['<PATH_FROM_C:>\\env\\lib\\site-packages\\tests'])
```
I dug into the files and found the scikit-yb developer copyright notice in the installed source files.
**Expected behavior**
I would guess it is not the expected nor intended behavior to install the tests package. Also, looking at the setup.py it seems like it should be excluded, so I do not understand why this isn't the case. Mainly, this is an issue as it causes python to import the yb tests package instead of my local tests package when running pytest.
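For what it's worth, the likely culprit is that setuptools' `find_packages` only excludes names that match an exclude pattern exactly, so excluding `"tests"` still leaves sub-packages such as `tests.test_utils` in the distribution; a glob for the sub-packages is needed as well (this mirrors what the patch above does). Roughly:

```python
from setuptools import find_packages

# only the top-level "tests" package is excluded; its sub-packages still match
find_packages(exclude=("tests",))

# the extra glob pattern excludes the sub-packages too
find_packages(exclude=("tests", "tests.*"))
```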
**Desktop (please complete the following information):**
- OS: Windows
- Python Version 3.7.4
- Yellowbrick Version 1.0.1
| 2020-01-07T14:55:08 |
||
DistrictDataLabs/yellowbrick | 1,042 | DistrictDataLabs__yellowbrick-1042 | [
"853"
] | 19a83455b0ebbf4f5129a7ff45359667dd8fa29c | diff --git a/yellowbrick/regressor/residuals.py b/yellowbrick/regressor/residuals.py
--- a/yellowbrick/regressor/residuals.py
+++ b/yellowbrick/regressor/residuals.py
@@ -21,6 +21,8 @@
import matplotlib.pyplot as plt
+from scipy.stats import probplot
+
try:
# Only available in Matplotlib >= 2.0.2
from mpl_toolkits.axes_grid1 import make_axes_locatable
@@ -434,6 +436,12 @@ class ResidualsPlot(RegressionScoreVisualizer):
If set to 'density', the probability density function will be plotted.
If set to True or 'frequency' then the frequency will be plotted.
+ qqplot : {True, False}, default: False
+ Draw a Q-Q plot on the right side of the figure, comparing the quantiles
+ of the residuals against quantiles of a standard normal distribution.
+ Q-Q plot and histogram of residuals can not be plotted simultaneously,
+ either `hist` or `qqplot` has to be set to False.
+
train_color : color, default: 'b'
Residuals for training data are ploted with this color but also
given an opacity of 0.5 to ensure that the test data residuals
@@ -502,6 +510,7 @@ def __init__(
model,
ax=None,
hist=True,
+ qqplot=False,
train_color="b",
test_color="g",
line_color=LINE_COLOR,
@@ -531,9 +540,25 @@ def __init__(
"False, 'density', or 'frequency'".format(hist)
)
+ self.qqplot = qqplot
+ if self.qqplot not in {True, False}:
+ raise YellowbrickValueError(
+ "'{}' is an invalid argument for qqplot, use True, "
+ " or False".format(hist)
+ )
+
+ if self.hist in {True, "density", "frequency"} and self.qqplot in {True}:
+ raise YellowbrickValueError(
+ "Set either hist or qqplot to False, can not plot "
+ "both of them simultaneously."
+ )
+
if self.hist in {True, "density", "frequency"}:
self.hax # If hist is True, test the version availability
+ if self.qqplot in {True}:
+ self.qqax # If qqplot is True, test the version availability
+
# Store labels and colors for the legend ordered by call
self._labels, self._colors = [], []
@@ -560,6 +585,26 @@ def hax(self):
return hax
+ @memoized
+ def qqax(self):
+ """
+ Returns the Q-Q plot axes, creating it only on demand.
+ """
+ if make_axes_locatable is None:
+ raise YellowbrickValueError(
+ (
+ "residuals histogram requires matplotlib 2.0.2 or greater "
+ "please upgrade matplotlib or set qqplot=False on the visualizer"
+ )
+ )
+
+ divider = make_axes_locatable(self.ax)
+
+ qqax = divider.append_axes("right", size=2, pad=0.25, sharey=self.ax)
+ qqax.yaxis.tick_right()
+
+ return qqax
+
def fit(self, X, y, **kwargs):
"""
Parameters
@@ -670,6 +715,12 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
residuals, bins=50, orientation="horizontal", density=True, color=color
)
+ # Add residuals histogram
+ if self.qqplot in {True}:
+ osm, osr = probplot(residuals, dist='norm', fit=False)
+
+ self.qqax.scatter(osm, osr, c=color, alpha=alpha, label=label)
+
# Ensure the current axes is always the main residuals axes
plt.sca(self.ax)
return self.ax
@@ -705,6 +756,12 @@ def finalize(self, **kwargs):
self.hax.axhline(y=0, c=self.colors["line"])
self.hax.set_xlabel("Distribution")
+ # Finalize the histogram axes
+ if self.qqplot:
+ self.qqax.set_title("Q-Q plot")
+ self.qqax.set_xlabel("Theoretical quantiles")
+ self.qqax.set_ylabel("Observed quantiles")
+
##########################################################################
## Quick Method
@@ -719,6 +776,7 @@ def residuals_plot(
y_test=None,
ax=None,
hist=True,
+ qqplot=False,
train_color="b",
test_color="g",
line_color=LINE_COLOR,
@@ -772,6 +830,12 @@ def residuals_plot(
If set to 'density', the probability density function will be plotted.
If set to True or 'frequency' then the frequency will be plotted.
+ qqplot : {True, False}, default: False
+ Draw a Q-Q plot on the right side of the figure, comparing the quantiles
+ of the residuals against quantiles of a standard normal distribution.
+ Q-Q plot and histogram of residuals can not be plotted simultaneously,
+ either `hist` or `qqplot` has to be set to False.
+
train_color : color, default: 'b'
Residuals for training data are ploted with this color but also
given an opacity of 0.5 to ensure that the test data residuals
@@ -822,6 +886,7 @@ def residuals_plot(
model=model,
ax=ax,
hist=hist,
+ qqplot=qqplot,
train_color=train_color,
test_color=test_color,
line_color=line_color,
| diff --git a/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_QQ_plot.png b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_QQ_plot.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_residuals/test_residuals_plot_QQ_plot.png differ
diff --git a/tests/test_regressor/test_residuals.py b/tests/test_regressor/test_residuals.py
--- a/tests/test_regressor/test_residuals.py
+++ b/tests/test_regressor/test_residuals.py
@@ -280,6 +280,32 @@ def test_residuals_plot(self):
self.assert_images_similar(visualizer)
+ @pytest.mark.xfail(
+ IS_WINDOWS_OR_CONDA,
+ reason="font rendering different in OS and/or Python; see #892",
+ )
+ def test_residuals_plot_QQ_plot(self):
+ """
+ Image similarity of residuals and Q-Q plot on random data with OLS
+ """
+ _, ax = plt.subplots()
+
+ visualizer = ResidualsPlot(LinearRegression(), hist=False,
+ qqplot=True, ax=ax)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+
+ self.assert_images_similar(visualizer)
+
+ def test_either_hist_or_QQ_plot(self):
+ """
+ Setting both hist=True and qqplot=True raises exception.
+ """
+ with pytest.raises(YellowbrickValueError,
+ match="Set either hist or qqplot to False"):
+ ResidualsPlot(LinearRegression(), hist=True, qqplot=True)
+
@pytest.mark.xfail(
sys.platform == "win32", reason="images not close on windows (RMSE=32)"
)
| Add Q-Q plot to the yellowbrick.regressor.residuals class
You already have a histogram feature for the `ResidualsPlot` method.
It will be immensely helpful to add a standard normality check method, such as a Q-Q plot, to the parent `yellowbrick.regressor.residuals` class.
Your functional interface is gearing up to be similar to statistical languages like **R** where you can throw the fitted model inside a function to generate more insight - mostly a visualization or a statistical score.
[Q-Q plot](https://en.wikipedia.org/wiki/Q%E2%80%93Q_plot) ability would be one such basic statistical insight that can add value.
| @tirthajyoti Thanks for opening this issue. This is an excellent suggestion. If this is something you would like to work on then you could signup as a core-contributor during our fall semester and begin working on it then. https://forms.gle/NT3inRQhaV278kf96
I will see what I can do. Is it OK if I use a [`SciPy` method](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.probplot.html) for this to keep the code compact and fast?
@tirthajyoti We definitely use scipy methods... for instance https://github.com/DistrictDataLabs/yellowbrick/blob/76b2f8854a44dde534609b9545d8215d43c6555b/yellowbrick/features/rankd.py#L21
I hope to see you this Fall
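For reference, a bare-bones sketch of a residuals Q-Q plot built on `scipy.stats.probplot` (a standalone illustration with synthetic residuals, not the eventual yellowbrick implementation):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import probplot

# stand-in for y_true - y_pred from a fitted regressor
residuals = np.random.default_rng(0).normal(size=200)

# theoretical quantiles (osm) vs ordered residuals (osr); fit=False skips the fitted line
osm, osr = probplot(residuals, dist="norm", fit=False)

fig, ax = plt.subplots()
ax.scatter(osm, osr)
ax.set_xlabel("Theoretical quantiles")
ax.set_ylabel("Observed quantiles")
ax.set_title("Q-Q plot of residuals")
plt.show()
```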
| 2020-02-18T11:37:15 |
DistrictDataLabs/yellowbrick | 1,056 | DistrictDataLabs__yellowbrick-1056 | [
"1040"
] | 4737f0f28086e53f8afb6a203084c57b52f5ff79 | diff --git a/yellowbrick/classifier/rocauc.py b/yellowbrick/classifier/rocauc.py
--- a/yellowbrick/classifier/rocauc.py
+++ b/yellowbrick/classifier/rocauc.py
@@ -24,6 +24,7 @@
from scipy import interp
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize
+from sklearn.utils.multiclass import type_of_target
from yellowbrick.exceptions import ModelError
from yellowbrick.style.palettes import LINE_COLOR
@@ -35,6 +36,10 @@
MACRO = "macro"
MICRO = "micro"
+# Target Type Constants
+BINARY = "binary"
+MULTICLASS = "multiclass"
+
##########################################################################
## ROCAUC Visualizer
@@ -85,9 +90,17 @@ class ROCAUC(ClassificationScoreVisualizer):
per_class : bool, default: True
Plot the ROC curves for each individual class. This should be set
- to false if only the macro or micro average curves are required. Per-
- class classification is not defined for binary classification problems
- with estimators with only a decision_function method.
+ to false if only the macro or micro average curves are required. For true
+ binary classifiers, setting per_class=False will plot the positive class
+ ROC curve, and per_class=True will use ``1-P(1)`` to compute the curve of
+ the negative class if only a decision_function method exists on the estimator.
+
+ binary : bool, default: False
+ This argument quickly resets the visualizer for true binary classification
+ by updating the micro, macro, and per_class arguments to False (do not use
+ in conjunction with those other arguments). Note that this is not a true
+ hyperparameter to the visualizer, it just collects other parameters into
+ a single, simpler argument.
classes : list of str, defult: None
The class labels to use for the legend ordered by the index of the sorted
@@ -131,6 +144,9 @@ class classification is not defined for binary classification problems
generally better. For classifiers, this score is usually accuracy, but
if micro or macro is specified this returns an F1 score.
+ target_type_ : string
+ Specifies if the detected classification target was binary or multiclass.
+
Notes
-----
ROC curves are typically used in binary classification, and in fact the
@@ -173,6 +189,7 @@ def __init__(
micro=True,
macro=True,
per_class=True,
+ binary=False,
classes=None,
encoder=None,
is_fitted="auto",
@@ -190,7 +207,33 @@ def __init__(
)
# Set the visual parameters for ROCAUC
- self.set_params(micro=micro, macro=macro, per_class=per_class)
+ # NOTE: the binary flag breaks our API since it's really just a meta parameter
+ # for micro, macro, and per_class. We knew this going into it, but did it anyway.
+ if binary:
+ self.set_params(micro=False, macro=False, per_class=False)
+ else:
+ self.set_params(micro=micro, macro=macro, per_class=per_class)
+
+ def fit(self, X, y=None):
+ """
+ Fit the classification model.
+ """
+ # The target determines what kind of estimator is fit
+ ttype = type_of_target(y)
+ if ttype.startswith(MULTICLASS):
+ self.target_type_ = MULTICLASS
+ elif ttype.startswith(BINARY):
+ self.target_type_ = BINARY
+ else:
+ raise YellowbrickValueError(
+ (
+ "{} does not support target type '{}', "
+ "please provide a binary or multiclass single-output target"
+ ).format(self.__class__.__name__, ttype)
+ )
+
+ # Fit the model and return self
+ return super(ROCAUC, self).fit(X, y)
def score(self, X, y=None):
"""
@@ -217,27 +260,14 @@ def score(self, X, y=None):
# Compute the predictions for the test data
y_pred = self._get_y_scores(X)
- # Note: In the above, _get_y_scores calls either a decision_function or
- # predict_proba, which should return a 2D array. But in a binary
- # classification using an estimator with only a decision_function, y_pred
- # will instead be 1D, meaning only one curve can be plotted. In this case,
- # we set the _binary_decision attribute to True to ensure only one curve is
- # computed and plotted later on.
- if y_pred.ndim == 1:
- self._binary_decision = True
-
- # Raise an error if it's a binary decision and user has set micro,
- # macro, or per_class to True
- if self.micro or self.macro or self.per_class:
+ if self.target_type_ == BINARY:
+ # If it's binary classification, to draw micro or macro curves per_class must be True
+ if (self.micro or self.macro) and not self.per_class:
raise ModelError(
- "Micro, macro, and per-class scores are not defined for "
- "binary classification for estimators with only "
- "decision_function methods; set micro, macro, and "
- "per-class params to False."
+ "no curves will be drawn; set per_class=True or micro=False and macro=False."
)
- else:
- self._binary_decision = False
- # If it's not a binary decision, at least one of micro, macro, or
+ if self.target_type_ == MULTICLASS:
+ # If it's multiclass classification, at least one of micro, macro, or
# per_class must be True
if not self.micro and not self.macro and not self.per_class:
raise YellowbrickValueError(
@@ -254,16 +284,48 @@ def score(self, X, y=None):
self.tpr = dict()
self.roc_auc = dict()
- # If the decision is binary, compute the ROC curve and ROC area
- if self._binary_decision is True:
- self.fpr[0], self.tpr[0], _ = roc_curve(y, y_pred)
+ # If the decision is binary draw only ROC curve for the postitive class
+ if self.target_type_ is BINARY and not self.per_class:
+ # In this case predict_proba returns an array of shape (n, 2) which
+ # specifies the probabilities of both the negative and positive classes.
+ if len(y_pred.shape) == 2 and y_pred.shape[1] == 2:
+ self.fpr[BINARY], self.tpr[BINARY], _ = roc_curve(y, y_pred[:,1])
+ else:
+ # decision_function returns array of shape (n,), so plot it directly
+ self.fpr[BINARY], self.tpr[BINARY], _ = roc_curve(y, y_pred)
+ self.roc_auc[BINARY] = auc(self.fpr[BINARY], self.tpr[BINARY])
+
+ # Per-class binary decisions may have to have the negative class curve computed
+ elif self.target_type_ is BINARY and self.per_class:
+ # draw a curve for class 1 (the positive class)
+ if len(y_pred.shape) == 2 and y_pred.shape[1] == 2:
+ # predict_proba returns array of shape (n, 2), so use
+ # probability of class 1 to compute ROC
+ self.fpr[1], self.tpr[1], _ = roc_curve(y, y_pred[:,1])
+ else:
+ # decision_function returns array of shape (n,)
+ self.fpr[1], self.tpr[1], _ = roc_curve(y, y_pred)
+ self.roc_auc[1] = auc(self.fpr[1], self.tpr[1])
+
+ # draw a curve for class 0 (the negative class)
+ if len(y_pred.shape) == 2 and y_pred.shape[1] == 2:
+ # predict_proba returns array of shape (n, 2), so use
+ # probability of class 0 to compute ROC
+ self.fpr[0], self.tpr[0], _ = roc_curve(1-y, y_pred[:,0])
+ else:
+ # decision_function returns array of shape (n,).
+ # To draw a ROC curve for class 0 we swap the classes 0 and 1 in y
+ # and reverse classifiers predictions y_pred.
+ self.fpr[0], self.tpr[0], _ = roc_curve(1-y, -y_pred)
self.roc_auc[0] = auc(self.fpr[0], self.tpr[0])
+
else:
# Otherwise compute the ROC curve and ROC area for each class
for i, c in enumerate(classes):
self.fpr[i], self.tpr[i], _ = roc_curve(y, y_pred[:, i], pos_label=c)
self.roc_auc[i] = auc(self.fpr[i], self.tpr[i])
+
# Compute micro average
if self.micro:
self._score_micro_average(y, y_pred, classes, n_classes)
@@ -298,11 +360,11 @@ def draw(self):
n_classes = len(colors)
# If it's a binary decision, plot the single ROC curve
- if self._binary_decision is True:
+ if self.target_type_ == BINARY and not self.per_class:
self.ax.plot(
- self.fpr[0],
- self.tpr[0],
- label="ROC for binary decision, AUC = {:0.2f}".format(self.roc_auc[0]),
+ self.fpr[BINARY],
+ self.tpr[BINARY],
+ label="ROC for binary decision, AUC = {:0.2f}".format(self.roc_auc[BINARY]),
)
# If per-class plotting is requested, plot ROC curves for each class
@@ -459,6 +521,7 @@ def roc_auc(
micro=True,
macro=True,
per_class=True,
+ binary=False,
classes=None,
encoder=None,
is_fitted="auto",
@@ -494,7 +557,7 @@ def roc_auc(
X_train : array-like, 2D
The table of instance data or independent variables that describe the outcome of
- the dependent variable, y. Used to fit the visualizer and also to score the
+ the dependent variable, y. Used to fit the visualizer and also to score the
visualizer if test splits are not specified.
y_train : array-like, 2D
@@ -504,9 +567,9 @@ def roc_auc(
X_test: array-like, 2D, default: None
The table of instance data or independent variables that describe the outcome of
the dependent variable, y. Used to score the visualizer if specified.
-
+
y_test: array-like, 1D, default: None
- The vector of target data or the dependent variable predicted by X.
+ The vector of target data or the dependent variable predicted by X.
Used to score the visualizer if specified.
ax : matplotlib Axes, default: None
@@ -533,9 +596,17 @@ def roc_auc(
per_class : bool, default: True
Plot the ROC curves for each individual class. This should be set
- to false if only the macro or micro average curves are required. Per-
- class classification is not defined for binary classification problems
- with estimators with only a decision_function method.
+ to false if only the macro or micro average curves are required. For true
+ binary classifiers, setting per_class=False will plot the positive class
+ ROC curve, and per_class=True will use ``1-P(1)`` to compute the curve of
+ the negative class if only a decision_function method exists on the estimator.
+
+ binary : bool, default: False
+ This argument quickly resets the visualizer for true binary classification
+ by updating the micro, macro, and per_class arguments to False (do not use
+ in conjunction with those other arguments). Note that this is not a true
+ hyperparameter to the visualizer, it just collects other parameters into
+ a single, simpler argument.
classes : list of str, defult: None
The class labels to use for the legend ordered by the index of the sorted
@@ -611,6 +682,7 @@ class classification is not defined for binary classification problems
micro=micro,
macro=macro,
per_class=per_class,
+ binary=binary,
classes=classes,
encoder=encoder,
is_fitted=is_fitted,
@@ -626,7 +698,7 @@ class classification is not defined for binary classification problems
visualizer.score(X_test, y_test)
else:
visualizer.score(X_train, y_train)
-
+
if show:
visualizer.show()
else:
| diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision.png b/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision.png and b/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision_per_class.png b/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision_per_class.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision_per_class.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability.png b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability.png and b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision.png b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision.png and b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision_single_curve.png b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision_single_curve.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability_decision_single_curve.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_multiclass_rocauc.png b/tests/baseline_images/test_classifier/test_rocauc/test_multiclass_rocauc.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_multiclass_rocauc.png and b/tests/baseline_images/test_classifier/test_rocauc/test_multiclass_rocauc.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_pandas_integration.png b/tests/baseline_images/test_classifier/test_rocauc/test_pandas_integration.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_pandas_integration.png and b/tests/baseline_images/test_classifier/test_rocauc/test_pandas_integration.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_classes.png b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_classes.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_classes.png and b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_classes.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro.png b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro.png and b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro_no_micro.png b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro_no_micro.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro_no_micro.png and b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_macro_no_micro.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_micro.png b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_micro.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_micro.png and b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_no_micro.png differ
diff --git a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_quickmethod.png b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_quickmethod.png
Binary files a/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_quickmethod.png and b/tests/baseline_images/test_classifier/test_rocauc/test_rocauc_quickmethod.png differ
diff --git a/tests/test_classifier/test_rocauc.py b/tests/test_classifier/test_rocauc.py
--- a/tests/test_classifier/test_rocauc.py
+++ b/tests/test_classifier/test_rocauc.py
@@ -26,8 +26,8 @@
from tests.base import VisualTestCase
from yellowbrick.classifier.rocauc import *
+from yellowbrick.exceptions import ModelError
from yellowbrick.datasets import load_occupancy
-from yellowbrick.exceptions import ModelError, YellowbrickValueError
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
@@ -41,11 +41,11 @@
except ImportError:
pd = None
+
##########################################################################
## Fixtures
##########################################################################
-
class FakeClassifier(BaseEstimator, ClassifierMixin):
"""
A fake classifier for testing noops on the visualizer.
@@ -124,6 +124,29 @@ def test_binary_probability_decision(self):
visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
+ def test_binary_probability_decision_single_curve(self):
+ """
+ Test ROCAUC binary classifier with both decision & predict_proba with per_class=False
+ """
+ # Create and fit the visualizer
+ visualizer = ROCAUC(AdaBoostClassifier(), micro=False, macro=False, per_class=False)
+ visualizer.fit(self.binary.X.train, self.binary.y.train)
+
+ # Score the visualizer
+ s = visualizer.score(self.binary.X.test, self.binary.y.test)
+
+ # Test that score method successfully returns a value between 0 and 1
+ assert 0 <= s <= 1
+
+ # Check the scores
+ assert len(visualizer.fpr.keys()) == 1
+ assert len(visualizer.tpr.keys()) == 1
+ assert len(visualizer.roc_auc.keys()) == 1
+
+ # Compare the images
+ visualizer.finalize()
+ self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
+
def test_binary_decision(self):
"""
Test ROCAUC with a binary classifier with a decision_function
@@ -150,12 +173,38 @@ def test_binary_decision(self):
visualizer.finalize()
self.assert_images_similar(visualizer, tol=10)
+ def test_binary_decision_per_class(self):
+ """
+ Test ROCAUC with a binary classifier with a decision_function
+ """
+ # Create and fit the visualizer
+ visualizer = ROCAUC(
+ LinearSVC(random_state=42), micro=False, macro=False, per_class=True
+ )
+ visualizer.fit(self.binary.X.train, self.binary.y.train)
+
+ # Score the visualizer
+ s = visualizer.score(self.binary.X.test, self.binary.y.test)
+
+ # Test that score method successfully returns a value between 0 and 1
+ assert 0 <= s <= 1
+
+ # Check the scores
+ assert len(visualizer.fpr.keys()) == 2
+ assert len(visualizer.tpr.keys()) == 2
+ assert len(visualizer.roc_auc.keys()) == 2
+
+ # Compare the images
+ # NOTE: increased tolerance for both AppVeyor and Travis CI tests
+ visualizer.finalize()
+ self.assert_images_similar(visualizer, tol=10)
+
def test_binary_micro_error(self):
"""
Test ROCAUC to see if _binary_decision with micro = True raises an error
"""
# Create visualizer with a linear model to force a binary decision
- visualizer = ROCAUC(LinearSVC(random_state=42), micro=True)
+ visualizer = ROCAUC(LinearSVC(random_state=42), micro=True, per_class=False)
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Ensure score raises error (micro curves aren't defined for binary decisions)
@@ -167,25 +216,13 @@ def test_binary_macro_error(self):
Test ROCAUC to see if _binary_decision with macro = True raises an error
"""
# Create visualizer with a linear model to force a binary decision
- visualizer = ROCAUC(LinearSVC(random_state=42), macro=True)
+ visualizer = ROCAUC(LinearSVC(random_state=42), macro=True, per_class=False)
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Ensure score raises error (macro curves aren't defined for binary decisions)
with pytest.raises(ModelError):
visualizer.score(self.binary.X.test, self.binary.y.test)
- def test_binary_per_class_error(self):
- """
- Test ROCAUC to see if _binary_decision with per_class = True raises an error
- """
- # Create visualizer with a linear model to force a binary decision
- visualizer = ROCAUC(LinearSVC(random_state=42), per_class=True)
- visualizer.fit(self.binary.X.train, self.binary.y.train)
-
- # Ensure score raises error (per_class curves not defined for binary decisions)
- with pytest.raises(ModelError):
- visualizer.score(self.binary.X.test, self.binary.y.test)
-
def test_multiclass_rocauc(self):
"""
Test ROCAUC with a multiclass classifier
@@ -207,6 +244,42 @@ def test_multiclass_rocauc(self):
visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
+ def test_rocauc_no_classes(self):
+ """
+ Test ROCAUC without per-class curves
+ """
+ # Create and fit the visualizer
+ visualizer = ROCAUC(GaussianNB(), per_class=False)
+ visualizer.fit(self.multiclass.X.train, self.multiclass.y.train)
+
+ # Score the visualizer (should be the micro average)
+ s = visualizer.score(self.multiclass.X.test, self.multiclass.y.test)
+ assert s == pytest.approx(0.77303, abs=1e-4)
+
+ # Assert that there still are per-class scores
+ for c in (0, 1):
+ assert c in visualizer.fpr
+ assert c in visualizer.tpr
+ assert c in visualizer.roc_auc
+
+ # Compare the images
+ visualizer.finalize()
+ self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
+
+ def test_rocauc_no_curves(self):
+ """
+ Test ROCAUC with no curves specified at all
+ """
+ # Create and fit the visualizer
+ visualizer = ROCAUC(
+ GaussianNB(), per_class=False, macro=False, micro=False
+ )
+ visualizer.fit(self.multiclass.X.train, self.multiclass.y.train)
+
+ # Attempt to score the visualizer
+ with pytest.raises(YellowbrickValueError, match="no curves will be drawn"):
+ visualizer.score(self.multiclass.X.test, self.multiclass.y.test)
+
def test_rocauc_quickmethod(self):
"""
Test the ROCAUC quick method
@@ -305,42 +378,6 @@ def test_rocauc_no_macro_no_micro(self):
visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
- def test_rocauc_no_classes(self):
- """
- Test ROCAUC without per-class curves
- """
- # Create and fit the visualizer
- visualizer = ROCAUC(LogisticRegression(), per_class=False)
- visualizer.fit(self.binary.X.train, self.binary.y.train)
-
- # Score the visualizer (should be the micro average)
- s = visualizer.score(self.binary.X.test, self.binary.y.test)
- assert s == pytest.approx(0.8661, abs=1e-4)
-
- # Assert that there still are per-class scores
- for c in (0, 1):
- assert c in visualizer.fpr
- assert c in visualizer.tpr
- assert c in visualizer.roc_auc
-
- # Compare the images
- visualizer.finalize()
- self.assert_images_similar(visualizer, tol=0.1, windows_tol=10)
-
- def test_rocauc_no_curves(self):
- """
- Test ROCAUC with no curves specified at all
- """
- # Create and fit the visualizer
- visualizer = ROCAUC(
- LogisticRegression(), per_class=False, macro=False, micro=False
- )
- visualizer.fit(self.binary.X.train, self.binary.y.train)
-
- # Attempt to score the visualizer
- with pytest.raises(YellowbrickValueError, match="no curves will be drawn"):
- visualizer.score(self.binary.X.test, self.binary.y.test)
-
def test_rocauc_label_encoded(self):
"""
Test ROCAUC with a target specifying a list of classes as strings
@@ -489,3 +526,17 @@ def test_with_fitted(self):
oz = ROCAUC(model, classes=classes, is_fitted=False)
oz.fit(X, y)
mockfit.assert_called_once_with(X, y)
+
+ def test_binary_meta_param(self):
+ """
+ Test the binary meta param with ROCAUC
+ """
+ oz = ROCAUC(GaussianNB(), binary=False)
+ assert oz.micro is True
+ assert oz.macro is True
+ assert oz.per_class is True
+
+ oz = ROCAUC(GaussianNB(), binary=True)
+ assert oz.micro is False
+ assert oz.macro is False
+ assert oz.per_class is False
| ROCAUC treats binary classification as multiclass for estimators with predict_proba available
**Describe the bug**
``ROCAUC`` plots 4 different curves for a binary classification problem when using an estimator with ``predict_proba``, which returns predictions of shape ``nx2``. Only one curve is plotted when the estimator only has ``decision_function``, which returns predictions of shape ``nx1``. I think only one ROC curve should be drawn for a binary classification problem, since in the binary case the two per-class ROC curves before averaging are just reflections of each other.
For comparison, ``PrecisionRecallCurve`` plots one curve for binary classification irrespective of whether the provided estimator uses ``predict_proba`` or ``decision_function``.
**To Reproduce**
To see this behavior, it is enough to look at
``yellowbrick/tests/baseline_images/test_classifier/test_rocauc/test_binary_decision.png`` and ``yellowbrick/tests/baseline_images/test_classifier/test_rocauc/test_binary_probability.png``
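Alternatively, a minimal sketch that reproduces the difference on synthetic data (the dataset and estimators here are illustrative stand-ins, not the ones from the test suite; the per-estimator behaviour is the one described above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB   # exposes predict_proba -> four curves are drawn
from sklearn.svm import LinearSVC            # only decision_function -> a single curve is drawn
from yellowbrick.classifier import ROCAUC

# Hypothetical binary dataset
X, y = make_classification(n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for estimator in (GaussianNB(), LinearSVC()):
    oz = ROCAUC(estimator)
    oz.fit(X_train, y_train)
    oz.score(X_test, y_test)
    oz.show()
```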
**Expected behavior**
I expected to see just one ROC curve for binary classification instead of separate curves for classes 0 and 1 plus their micro and macro averages.
| To draw just one ROC curve, minor changes are needed in ``ROCAUC``; I can work on this.
@VladSkripniuk thank you so much for opening this issue and for the two PRs - I've been away on travel and we've been working to get the v1.1 release out; but I'll get to your PRs soon!
@VladSkripniuk which sklearn estimators do you believe should produce single curves?
@lwgray basically all estimators which solve a binary classification problem, i.e. estimators whose `predict_proba` or `decision_function` returns an array of shape (n_samples, 2) or (n_samples,)
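For illustration, a small sketch (hypothetical binary data; the estimators are stand-ins) of the two prediction shapes being contrasted:

```python
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC

X, y = make_classification(n_classes=2, random_state=0)  # hypothetical binary data

# predict_proba yields one column per class for a binary target ...
print(GaussianNB().fit(X, y).predict_proba(X).shape)      # (n_samples, 2)
# ... while decision_function yields a single score per sample
print(LinearSVC().fit(X, y).decision_function(X).shape)   # (n_samples,)
```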
Relates to #1041 | 2020-04-09T21:37:39 |
DistrictDataLabs/yellowbrick | 1,059 | DistrictDataLabs__yellowbrick-1059 | [
"1057"
] | 779487cb06f89edc1b284146a781598a390f51d0 | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -28,7 +28,7 @@
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics.pairwise import pairwise_distances
-from yellowbrick.utils import KneeLocator
+from yellowbrick.utils import KneeLocator, get_param_names
from yellowbrick.style.palettes import LINE_COLOR
from yellowbrick.cluster.base import ClusteringScoreVisualizer
from yellowbrick.exceptions import YellowbrickValueError, YellowbrickWarning
@@ -309,7 +309,7 @@ def fit(self, X, y=None, **kwargs):
# Set the k value and fit the model
self.estimator.set_params(n_clusters=k)
- self.estimator.fit(X)
+ self.estimator.fit(X, **kwargs)
# Append the time and score to our plottable metrics
self.k_timers_.append(time.time() - start)
@@ -415,6 +415,7 @@ def finalize(self):
## Quick Method
##########################################################################
+
def kelbow_visualizer(
model,
X,
@@ -487,6 +488,13 @@ def kelbow_visualizer(
viz : KElbowVisualizer
The kelbow visualizer, fitted and finalized.
"""
+ klass = type(model)
+
+ # figure out which kwargs correspond to fit method
+ fit_params = get_param_names(klass.fit)
+
+ fit_kwargs = {key: kwargs.pop(key) for key in fit_params if key in kwargs}
+
oz = KElbow(
model,
ax=ax,
@@ -496,7 +504,7 @@ def kelbow_visualizer(
locate_elbow=locate_elbow,
**kwargs
)
- oz.fit(X, y)
+ oz.fit(X, y, **fit_kwargs)
if show:
oz.show()
diff --git a/yellowbrick/utils/helpers.py b/yellowbrick/utils/helpers.py
--- a/yellowbrick/utils/helpers.py
+++ b/yellowbrick/utils/helpers.py
@@ -19,6 +19,7 @@
##########################################################################
import re
+import inspect
import sklearn
import numpy as np
@@ -185,6 +186,35 @@ def is_monotonic(a, increasing=True):
return np.all(a[1:] <= a[:-1], axis=0)
+def get_param_names(method):
+ """
+ Returns a list of keyword-only parameter names that may be
+ passed into method.
+
+ Parameters
+ ----------
+ method : function
+ The method for which to return keyword-only parameters.
+
+ Returns
+ -------
+ parameters : list
+ A list of keyword-only parameter names for method.
+ """
+ try:
+ signature = inspect.signature(method)
+ except (ValueError, TypeError) as e:
+ raise e
+
+ parameters = [
+ p
+ for p in signature.parameters.values()
+ if p.name != "self" and p.kind != p.VAR_KEYWORD
+ ]
+
+ return sorted([p.name for p in parameters])
+
+
##########################################################################
## Numeric Computations
##########################################################################
| diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -306,7 +306,10 @@ def test_calinski_harabasz_metric(self):
self.assert_images_similar(visualizer)
assert_array_almost_equal(visualizer.k_scores_, expected)
- @pytest.mark.xfail(IS_WINDOWS_OR_CONDA, reason="computation of k_scores_ varies by 2.867 max absolute difference")
+ @pytest.mark.xfail(
+ IS_WINDOWS_OR_CONDA,
+ reason="computation of k_scores_ varies by 2.867 max absolute difference",
+ )
def test_locate_elbow(self):
"""
Test the addition of locate_elbow to an image
@@ -325,15 +328,7 @@ def test_locate_elbow(self):
visualizer.fit(X)
assert len(visualizer.k_scores_) == 5
assert visualizer.elbow_value_ == 3
- expected = np.array(
- [
- 4286.5,
- 12463.4,
- 8763.8,
- 6939.3,
- 5858.8,
- ]
- )
+ expected = np.array([4286.5, 12463.4, 8763.8, 6939.3, 5858.8])
visualizer.finalize()
self.assert_images_similar(visualizer, tol=0.5, windows_tol=2.2)
@@ -400,6 +395,32 @@ def test_timings(self):
self.assert_images_similar(visualizer)
+ def test_sample_weights(self):
+ """
+ Test that passing in sample weights correctly influences the clusterer's fit
+ """
+ seed = 1234
+
+ # original data has 5 clusters
+ X, y = make_blobs(
+ n_samples=[5, 30, 30, 30, 30],
+ n_features=5,
+ random_state=seed,
+ shuffle=False,
+ )
+
+ visualizer = KElbowVisualizer(
+ KMeans(random_state=seed), k=(2, 12), timings=False
+ )
+ visualizer.fit(X)
+ assert visualizer.elbow_value_ == 5
+
+ # weights should push elbow down to 4
+ weights = np.concatenate([np.ones(5) * 0.0001, np.ones(120)])
+
+ visualizer.fit(X, sample_weight=weights)
+ assert visualizer.elbow_value_ == 4
+
@pytest.mark.xfail(reason="images not close due to timing lines")
def test_quick_method(self):
"""
@@ -414,3 +435,15 @@ def test_quick_method(self):
assert isinstance(oz, KElbowVisualizer)
self.assert_images_similar(oz)
+
+ def test_quick_method_params(self):
+ """
+ Test the quick method correctly consumes the user-provided parameters
+ """
+ X, y = make_blobs(centers=3)
+ custom_title = "My custom title"
+ model = KMeans(3, random_state=13)
+ oz = kelbow_visualizer(
+ model, X, sample_weight=np.ones(X.shape[0]), title=custom_title
+ )
+ assert oz.title == custom_title
diff --git a/tests/test_utils/test_helpers.py b/tests/test_utils/test_helpers.py
--- a/tests/test_utils/test_helpers.py
+++ b/tests/test_utils/test_helpers.py
@@ -173,6 +173,42 @@ def test_check_fitted(self):
assert check_fitted(model, is_fitted_by=True) is True
assert check_fitted(model, is_fitted_by=False) is False
+ @pytest.mark.parametrize(
+ "estimator",
+ [
+ SVC,
+ SVR,
+ Ridge,
+ KMeans,
+ RidgeCV,
+ GaussianNB,
+ MiniBatchKMeans,
+ LinearRegression,
+ ],
+ ids=[
+ "SVC",
+ "SVR",
+ "Ridge",
+ "KMeans",
+ "RidgeCV",
+ "GaussianNB",
+ "MiniBatchKMeans",
+ "LinearRegression",
+ ],
+ )
+ def test_get_param_names(self, estimator):
+ """
+ Assert we successfully extract the parameters from sklearn estimators
+ """
+ assert "sample_weight" in get_param_names(estimator.fit)
+
+ def test_get_param_names_type(self):
+ """
+ Assert a type error is raised when passing a non-method
+ """
+ with pytest.raises(TypeError):
+ get_param_names("test")
+
##########################################################################
## Numeric Function Tests
| Support for sample_weight parameter in KElbowVisualizer
**Describe the solution you'd like**
KMeans and MiniBatchKMeans both have an optional `sample_weight` parameter in their `fit` methods. We should be able to pass this parameter to KElbowVisualizer.
**Is your feature request related to a problem? Please describe.**
Currently, supplying sample weights to `KElbowVisualizer` has no effect on the result.
**Examples**
N/A
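A minimal sketch of the intended usage, assuming fit keyword arguments are forwarded to the wrapped estimator's ``fit`` (which is what the patch above enables); the data and weights here are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer

X, _ = make_blobs(n_samples=150, centers=5, random_state=42)
weights = np.ones(X.shape[0])  # placeholder per-sample weights

viz = KElbowVisualizer(KMeans(random_state=42), k=(2, 10))
viz.fit(X, sample_weight=weights)  # forwarded to KMeans.fit(X, sample_weight=weights)
viz.show()
```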
| This seems to be just a matter of passing `kwargs` into the fit method. I can open a PR for this. Not sure if it needs any special tests. | 2020-04-12T03:11:35 |
DistrictDataLabs/yellowbrick | 1,061 | DistrictDataLabs__yellowbrick-1061 | [
"1058"
] | 674cad214054c9beaffa85cfaf968322304ccd81 | diff --git a/yellowbrick/cluster/elbow.py b/yellowbrick/cluster/elbow.py
--- a/yellowbrick/cluster/elbow.py
+++ b/yellowbrick/cluster/elbow.py
@@ -358,7 +358,7 @@ def draw(self):
# Plot the silhouette score against k
self.ax.plot(self.k_values_, self.k_scores_, marker="D")
if self.locate_elbow is True and self.elbow_value_ is not None:
- elbow_label = "$elbow at k={}, score={:0.3f}$".format(
+ elbow_label = "elbow at $k={}$, $score={:0.3f}$".format(
self.elbow_value_, self.elbow_score_
)
self.ax.axvline(
@@ -398,7 +398,7 @@ def finalize(self):
# set the legend if locate_elbow=True
if self.locate_elbow is True and self.elbow_value_ is not None:
- self.ax.legend(loc="best", fontsize="medium")
+ self.ax.legend(loc="best", fontsize="medium", frameon=True)
# Set the second y axis labels
if self.timings:
| KelbowVisualizer: The letters in the Legend Label are smushed together (missing spacing)
**Describe the issue**
In the KElbow visualizer I found that the spacing between letters in the legend is absent. I also found this to be the case in the example visualization for KElbow in both the latest and develop versions of the docs. See the image below.
**To Reproduce**
```python
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from yellowbrick.cluster import KElbowVisualizer
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1]]
X, labels_true = make_blobs(n_samples=1500, centers=centers, cluster_std=0.3,
random_state=0)
Z = StandardScaler().fit_transform(X)
viz = KElbowVisualizer(KMeans())
viz.fit(Z)
viz.show()
```
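For context, the missing spaces come from the whole legend label being wrapped in mathtext (`$...$`), where whitespace is ignored; the patch above moves the prose outside the math delimiters. A standalone matplotlib sketch of the difference (not Yellowbrick-specific):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Whitespace inside mathtext is collapsed, so the words run together:
ax.plot([1, 2, 3], label="$elbow at k=3, score=264.727$")
# Keeping the prose outside the $...$ delimiters preserves the spacing:
ax.plot([3, 2, 1], label="elbow at $k=3$, $score=264.727$")
ax.legend(loc="best", frameon=True)
plt.show()
```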
**Dataset**
Did you use a specific dataset to produce the bug? Where can we access it?
yes... generated in the above code
**Expected behavior**
I expect to see "elbow at k = 3, score = 264.727"
**Desktop (please complete the following information):**
- OS: macOS
- Python Version: 3.7
- Yellowbrick Version: 1.1
**Additional context**

@DistrictDataLabs/team-oz-maintainers
| 2020-04-21T22:35:44 |
||
DistrictDataLabs/yellowbrick | 1,067 | DistrictDataLabs__yellowbrick-1067 | [
"1052"
] | 6a004ed9ff4eee2cbb5b47cd7ce363f224625b88 | diff --git a/yellowbrick/regressor/alphas.py b/yellowbrick/regressor/alphas.py
--- a/yellowbrick/regressor/alphas.py
+++ b/yellowbrick/regressor/alphas.py
@@ -311,10 +311,13 @@ def __init__(self, model, ax=None, alphas=None, cv=None, scoring=None, **kwargs)
)
# Call super to initialize the class
- super(ManualAlphaSelection, self).__init__(model, ax=ax, **kwargs)
+ super(AlphaSelection, self).__init__(model, ax=ax, **kwargs)
# Set manual alpha selection parameters
- self.alphas = alphas or np.logspace(-10, -2, 200)
+ if alphas is not None:
+ self.alphas = alphas
+ else:
+ self.alphas = np.logspace(-10, -2, 200)
self.errors = None
self.score_method = partial(cross_val_score, cv=cv, scoring=scoring)
@@ -361,7 +364,7 @@ def draw(self):
##########################################################################
-## Quick Method
+## Quick Methods
##########################################################################
@@ -426,3 +429,84 @@ def alphas(model, X, y=None, ax=None, is_fitted="auto", show=True, **kwargs):
# Return the visualizer
return visualizer
+
+
+def manual_alphas(
+ model,
+ X,
+ y=None,
+ ax=None,
+ alphas=None,
+ cv=None,
+ scoring=None,
+ show=True,
+ **kwargs
+):
+ """Quick Method:
+ The Manual Alpha Selection Visualizer demonstrates how different values of alpha
+ influence model selection during the regularization of linear models.
+ Generally speaking, alpha increases the affect of regularization, e.g. if
+ alpha is zero there is no regularization and the higher the alpha, the
+ more the regularization parameter influences the final model.
+
+ Parameters
+ ----------
+
+ model : an unfitted Scikit-Learn regressor
+ Should be an instance of an unfitted regressor, and specifically one
+ whose name doesn't end with "CV". The regressor must support a call to
+ ``set_params(alpha=alpha)`` and be fit multiple times. If the
+ regressor name ends with "CV" a ``YellowbrickValueError`` is raised.
+
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
+
+ alphas : ndarray or Series, default: np.logspace(-10, 2, 200)
+ An array of alphas to fit each model with
+
+ cv : int, cross-validation generator or an iterable, optional
+ Determines the cross-validation splitting strategy.
+ Possible inputs for cv are:
+
+ - None, to use the default 3-fold cross validation,
+ - integer, to specify the number of folds in a `(Stratified)KFold`,
+ - An object to be used as a cross-validation generator.
+ - An iterable yielding train, test splits.
+
+ This argument is passed to the
+ ``sklearn.model_selection.cross_val_score`` method to produce the
+ cross validated score for each alpha.
+
+ scoring : string, callable or None, optional, default: None
+ A string (see model evaluation documentation) or
+ a scorer callable object / function with signature
+ ``scorer(estimator, X, y)``.
+
+ This argument is passed to the
+ ``sklearn.model_selection.cross_val_score`` method to produce the
+ cross validated score for each alpha.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Returns
+ -------
+ visualizer : AlphaSelection
+ Returns the alpha selection visualizer
+ """
+ # Instantiate the visualizer
+ visualizer = ManualAlphaSelection(
+ model, ax, alphas=alphas, scoring=scoring, cv=cv, **kwargs
+ )
+
+ visualizer.fit(X, y)
+
+ if show:
+ visualizer.show()
+ else:
+ visualizer.finalize()
+
+ # Return the visualizer
+ return visualizer
| diff --git a/tests/baseline_images/test_regressor/test_alphas/test_quick_method_manual.png b/tests/baseline_images/test_regressor/test_alphas/test_quick_method_manual.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_alphas/test_quick_method_manual.png differ
diff --git a/tests/baseline_images/test_regressor/test_alphas/test_similar_image_manual.png b/tests/baseline_images/test_regressor/test_alphas/test_similar_image_manual.png
new file mode 100644
Binary files /dev/null and b/tests/baseline_images/test_regressor/test_alphas/test_similar_image_manual.png differ
diff --git a/tests/test_regressor/test_alphas.py b/tests/test_regressor/test_alphas.py
--- a/tests/test_regressor/test_alphas.py
+++ b/tests/test_regressor/test_alphas.py
@@ -28,6 +28,8 @@
from yellowbrick.exceptions import YellowbrickTypeError
from yellowbrick.exceptions import YellowbrickValueError
from yellowbrick.regressor.alphas import AlphaSelection, alphas
+from yellowbrick.regressor.alphas import ManualAlphaSelection, manual_alphas
+
from sklearn.svm import SVR, SVC
from sklearn.cluster import KMeans
@@ -167,3 +169,52 @@ def test_quick_method(self):
)
assert isinstance(visualizer, AlphaSelection)
self.assert_images_similar(visualizer)
+
+
+class TestManualAlphaSelection(VisualTestCase):
+ """
+ Test the ManualAlphaSelection visualizer
+ """
+ def test_similar_image_manual(self):
+ """
+ Integration test with image similarity comparison
+ """
+
+ visualizer = ManualAlphaSelection(Lasso(random_state=0), cv=5)
+
+ X, y = make_regression(random_state=0)
+ visualizer.fit(X, y)
+ visualizer.finalize()
+
+ # Image comparison fails on Appveyor with RMS 0.024
+ self.assert_images_similar(visualizer, tol=0.1)
+
+ @pytest.mark.parametrize("model", [RidgeCV, LassoCV, LassoLarsCV, ElasticNetCV])
+ def test_manual_with_cv(self, model):
+ """
+ Ensure only non-CV regressors are allowed
+ """
+ with pytest.raises(YellowbrickTypeError):
+ ManualAlphaSelection(model())
+
+ @pytest.mark.parametrize("model", [SVR, Ridge, Lasso, LassoLars, ElasticNet])
+ def test_manual_no_cv(self, model):
+ """
+ Ensure non-CV regressors are allowed
+ """
+ try:
+ ManualAlphaSelection(model())
+ except YellowbrickTypeError:
+ pytest.fail("could not instantiate Regressor on alpha selection")
+
+ def test_quick_method_manual(self):
+ """
+ Test the manual alphas quick method producing a valid visualization
+ """
+ X, y = load_energy(return_dataset=True).to_numpy()
+
+ visualizer = manual_alphas(
+ ElasticNet(random_state=0), X, y, cv=3, is_fitted=False, show=False
+ )
+ assert isinstance(visualizer, ManualAlphaSelection)
+ self.assert_images_similar(visualizer)
| 'Ridge' is not a CV regularization model; try ManualAlphaSelection instead
**Describe the bug**
I am trying to find the best alpha for a Ridge model without CV using Yellowbrick's ManualAlphaSelection API. My code is pretty basic and was taken from yellowbrick's documentation. Even so, it does not work:
**To Reproduce**
```python
from yellowbrick.regressor import ManualAlphaSelection
from sklearn.linear_model import Ridge

model = ManualAlphaSelection(Ridge(), scoring='neg_mean_squared_error')
model.fit(X_train, y_train)
model.show()
```
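For reference, the patch above fixes the `super()` call so the snippet above runs as intended, and it also adds a `manual_alphas` quick method. A sketch of that alternative (assuming the patch is applied and reusing the same `X_train`, `y_train`):

```python
from sklearn.linear_model import Ridge
from yellowbrick.regressor.alphas import manual_alphas  # added by the patch above

# Fits one Ridge model per alpha and plots the cross-validated error curve
viz = manual_alphas(
    Ridge(), X_train, y_train, cv=3, scoring="neg_mean_squared_error"
)
```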
**Dataset**
The dataset does not matter because the code itself does not work; the failure occurs at instantiation, before any data is involved.
**Expected behavior**
It was expected that ManualAlphaSelection would work, but Python raises the message: 'Ridge' is not a CV regularization model; try ManualAlphaSelection instead. This message is wrong because ManualAlphaSelection is already being used.
**Desktop (please complete the following information):**
- OS: Windows 10
- Python Version: Anaconda3, Python 3
- Yellowbrick Version: the latest one, since I just installed it on March 19th; I don't know how to check it.
**Additional context** Jupyter notebook 6.0.3
| @mmhernandm this is a known bug, but it's slipped past us for a while since we didn't have an issue for it. Thanks for opening it - we will apply a patch before our next release.
@rebeccabilbro it would be nice to know what it would take to make ManualAlphaSelection green and create an issue for it.
Also, is #157 related? If not, could we add it to the above issue. | 2020-06-03T19:12:33 |
DistrictDataLabs/yellowbrick | 1,072 | DistrictDataLabs__yellowbrick-1072 | [
"1001",
"1001"
] | 6a7a9a2d0482c895162da6eb1c537fe15376ffb0 | diff --git a/yellowbrick/classifier/class_prediction_error.py b/yellowbrick/classifier/class_prediction_error.py
--- a/yellowbrick/classifier/class_prediction_error.py
+++ b/yellowbrick/classifier/class_prediction_error.py
@@ -21,7 +21,7 @@
import numpy as np
from sklearn.utils.multiclass import unique_labels
-from sklearn.metrics.classification import _check_targets
+from sklearn.metrics._classification import _check_targets
from yellowbrick.draw import bar_stack
from yellowbrick.classifier.base import ClassificationScoreVisualizer
diff --git a/yellowbrick/classifier/rocauc.py b/yellowbrick/classifier/rocauc.py
--- a/yellowbrick/classifier/rocauc.py
+++ b/yellowbrick/classifier/rocauc.py
@@ -21,7 +21,6 @@
import numpy as np
-from scipy import interp
from sklearn.metrics import auc, roc_curve
from sklearn.preprocessing import label_binarize
from sklearn.utils.multiclass import type_of_target
@@ -497,7 +496,7 @@ def _score_macro_average(self, n_classes):
# Compute the averages per class
for i in range(n_classes):
- avg_tpr += interp(all_fpr, self.fpr[i], self.tpr[i])
+ avg_tpr += np.interp(all_fpr, self.fpr[i], self.tpr[i])
# Finalize the average
avg_tpr /= n_classes
diff --git a/yellowbrick/regressor/influence.py b/yellowbrick/regressor/influence.py
--- a/yellowbrick/regressor/influence.py
+++ b/yellowbrick/regressor/influence.py
@@ -180,7 +180,8 @@ def draw(self):
"""
# Draw a stem plot with the influence for each instance
_, _, baseline = self.ax.stem(
- self.distance_, linefmt=self.linefmt, markerfmt=self.markerfmt
+ self.distance_, linefmt=self.linefmt, markerfmt=self.markerfmt,
+ use_line_collection=True
)
# No padding on either side of the instance index
| diff --git a/tests/baseline_images/test_regressor/test_influence/test_cooks_distance.png b/tests/baseline_images/test_regressor/test_influence/test_cooks_distance.png
Binary files a/tests/baseline_images/test_regressor/test_influence/test_cooks_distance.png and b/tests/baseline_images/test_regressor/test_influence/test_cooks_distance.png differ
diff --git a/tests/baseline_images/test_regressor/test_influence/test_cooks_distance_quickmethod.png b/tests/baseline_images/test_regressor/test_influence/test_cooks_distance_quickmethod.png
Binary files a/tests/baseline_images/test_regressor/test_influence/test_cooks_distance_quickmethod.png and b/tests/baseline_images/test_regressor/test_influence/test_cooks_distance_quickmethod.png differ
diff --git a/tests/baseline_images/test_regressor/test_influence/test_numpy_integration.png b/tests/baseline_images/test_regressor/test_influence/test_numpy_integration.png
Binary files a/tests/baseline_images/test_regressor/test_influence/test_numpy_integration.png and b/tests/baseline_images/test_regressor/test_influence/test_numpy_integration.png differ
diff --git a/tests/baseline_images/test_regressor/test_influence/test_pandas_integration.png b/tests/baseline_images/test_regressor/test_influence/test_pandas_integration.png
Binary files a/tests/baseline_images/test_regressor/test_influence/test_pandas_integration.png and b/tests/baseline_images/test_regressor/test_influence/test_pandas_integration.png differ
diff --git a/tests/test_classifier/test_threshold.py b/tests/test_classifier/test_threshold.py
--- a/tests/test_classifier/test_threshold.py
+++ b/tests/test_classifier/test_threshold.py
@@ -29,7 +29,7 @@
from unittest.mock import patch
from tests.base import VisualTestCase
-from numpy.testing.utils import assert_array_equal
+from numpy.testing import assert_array_equal
from sklearn.base import ClassifierMixin
from sklearn.svm import LinearSVC, NuSVC
diff --git a/tests/test_cluster/test_elbow.py b/tests/test_cluster/test_elbow.py
--- a/tests/test_cluster/test_elbow.py
+++ b/tests/test_cluster/test_elbow.py
@@ -23,7 +23,7 @@
import matplotlib.pyplot as plt
from scipy.sparse import csc_matrix, csr_matrix
-from numpy.testing.utils import assert_array_almost_equal
+from numpy.testing import assert_array_almost_equal
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, MiniBatchKMeans
diff --git a/tests/test_regressor/test_alphas.py b/tests/test_regressor/test_alphas.py
--- a/tests/test_regressor/test_alphas.py
+++ b/tests/test_regressor/test_alphas.py
@@ -22,7 +22,7 @@
import numpy as np
from tests.base import VisualTestCase
-from numpy.testing.utils import assert_array_equal
+from numpy.testing import assert_array_equal
from yellowbrick.datasets import load_energy
from yellowbrick.exceptions import YellowbrickTypeError
| Update classification metrics import to prevent deprecation
> FutureWarning: The sklearn.metrics.classification module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
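In short, the change (mirrored from the patch above) is to import from the renamed private module and to replace the deprecated `scipy.interp` with `numpy.interp`:

```python
# Before: emits the FutureWarning on scikit-learn >= 0.22 and breaks in 0.24
# from sklearn.metrics.classification import _check_targets
# from scipy import interp

# After, as done in the patch above
from sklearn.metrics._classification import _check_targets
import numpy as np  # np.interp replaces the deprecated scipy.interp
```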
| 2020-06-10T17:56:12 |
|
DistrictDataLabs/yellowbrick | 1,074 | DistrictDataLabs__yellowbrick-1074 | [
"1049"
] | a713f478ac52d10bb38c1d900424736eb5f9895f | diff --git a/yellowbrick/classifier/prcurve.py b/yellowbrick/classifier/prcurve.py
--- a/yellowbrick/classifier/prcurve.py
+++ b/yellowbrick/classifier/prcurve.py
@@ -55,12 +55,14 @@ class PrecisionRecallCurve(ClassificationScoreVisualizer):
Precision-Recall curves are a metric used to evaluate a classifier's quality,
particularly when classes are very imbalanced. The precision-recall curve
shows the tradeoff between precision, a measure of result relevancy, and
- recall, a measure of how many relevant results are returned. A large area
- under the curve represents both high recall and precision, the best case
- scenario for a classifier, showing a model that returns accurate results
- for the majority of classes it selects.
+ recall, a measure of completeness. For each class, precision is defined as
+ the ratio of true positives to the sum of true and false positives, and
+ recall is the ratio of true positives to the sum of true positives and false
+ negatives.
- .. todo:: extend docstring
+ A large area under the curve represents both high recall and precision, the
+ best case scenario for a classifier, showing a model that returns accurate
+ results for the majority of classes it selects.
Parameters
----------
@@ -193,6 +195,15 @@ class PrecisionRecallCurve(ClassificationScoreVisualizer):
Notes
-----
+ To support multi-label classification, the estimator is wrapped in a
+ ``OneVsRestClassifier`` to produce binary comparisons for each class
+ (e.g. the positive case is the class and the negative case is any other
+ class). The precision-recall curve can then be computed as the micro-average
+ of the precision and recall for all classes (by setting micro=True), or individual
+ curves can be plotted for each class (by setting per_class=True).
+
+ Note also that some parameters of this visualizer are learned on the ``score``
+ method, not only on ``fit``.
.. seealso:: https://bit.ly/2kOIeCC
"""
@@ -250,8 +261,8 @@ def __init__(
def fit(self, X, y=None):
"""
- Fit the classification model; if y is multi-class, then the estimator
- is adapted with a OneVsRestClassifier strategy, otherwise the estimator
+ Fit the classification model; if ``y`` is multi-class, then the estimator
+ is adapted with a ``OneVsRestClassifier`` strategy, otherwise the estimator
is fit directly.
"""
# The target determines what kind of estimator is fit
@@ -288,6 +299,7 @@ def score(self, X, y):
Average precision, a summary of the plot as a weighted mean of
precision at each threshold, weighted by the increase in recall from
the previous threshold.
+
"""
# If we don't do this check, then it is possible that OneVsRestClassifier
# has not correctly been fitted for multi-class targets.
@@ -501,10 +513,14 @@ def precision_recall_curve(
Precision-Recall curves are a metric used to evaluate a classifier's quality,
particularly when classes are very imbalanced. The precision-recall curve
shows the tradeoff between precision, a measure of result relevancy, and
- recall, a measure of how many relevant results are returned. A large area
- under the curve represents both high recall and precision, the best case
- scenario for a classifier, showing a model that returns accurate results
- for the majority of classes it selects.
+ recall, a measure of completeness. For each class, precision is defined as
+ the ratio of true positives to the sum of true and false positives, and
+ recall is the ratio of true positives to the sum of true positives and false
+ negatives.
+
+ A large area under the curve represents both high recall and precision, the
+ best case scenario for a classifier, showing a model that returns accurate
+ results for the majority of classes it selects.
Parameters
----------
| Definition of “Precision” seems incorrect...
On this page at [ClassificationReport](https://www.scikit-yb.org/en/latest/api/classifier/classification_report.html)

For a binary classifier, the part **"not to label an instance positive that is actually negative"** sounds like "to correctly classify a negative", which is the definition of a **True Negative**, right? I think the first sentence should be removed to avoid confusion.
| @Scoodood I agree that we could make it more clear; however I don't think we can necessarily simply remove the first sentence; do you have an edit to suggest?
Hi @bbengfort,
If we simplify the first sentence, it becomes
"**Precision** is the ability of a classifier to correctly label a negative instance as negative."
That definition is wrong, because it belongs to the **True Negative Rate**, not **Precision**. According to [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall),
Precision = TP / (TP + FP) = [Positive Predictive Value (PPV)](https://en.wikipedia.org/wiki/Positive_and_negative_predictive_values)
The denominator TP + FP is the total number of instances the classifier labeled positive. So the whole equation means **of all cases that the classifier labeled positive, what percent were actually correct**. But your second sentence says this already.
Besides, the formulas for TPR (True Positive Rate) and PPV are very similar, and yet their interpretations are different.
TPR = the ability of the classifier to recognize positive instances
PPV = the ability of the classifier to give relevant positive output
So if you insist on keeping the first sentence, then I would like to suggest something like this:
**Precision measures the ability of a classifier to give relevant positive output.**
But I really think your second sentence is clear enough even without the first sentence.
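A tiny worked example with hypothetical confusion-matrix counts keeps the three quantities apart:

```python
TP, FP, FN, TN = 40, 10, 5, 45  # hypothetical counts

precision = TP / (TP + FP)  # PPV = 40/50 = 0.80: of everything labeled positive, 80% was correct
recall    = TP / (TP + FN)  # TPR = 40/45 ~ 0.89: of all actual positives, 89% were found
tnr       = TN / (TN + FP)  # TNR = 45/55 ~ 0.82: ability to label a negative instance as negative
```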
| 2020-06-10T22:46:00 |
|
DistrictDataLabs/yellowbrick | 1,078 | DistrictDataLabs__yellowbrick-1078 | [
"1031"
] | d13005c57487f4d917ddb76f1425fab74dea3d65 | diff --git a/yellowbrick/regressor/__init__.py b/yellowbrick/regressor/__init__.py
--- a/yellowbrick/regressor/__init__.py
+++ b/yellowbrick/regressor/__init__.py
@@ -21,5 +21,6 @@
## Hoist visualizers into the regressor namespace
from .base import *
from .residuals import *
+from .prediction_error import *
from .alphas import *
from .influence import *
diff --git a/yellowbrick/regressor/prediction_error.py b/yellowbrick/regressor/prediction_error.py
new file mode 100644
--- /dev/null
+++ b/yellowbrick/regressor/prediction_error.py
@@ -0,0 +1,399 @@
+# yellowbrick.regressor.prediction_error
+# Comparison of the predicted vs. actual values for regression problems
+#
+# Author: Rebecca Bilbro
+# Author: Benjamin Bengfort
+# Created: Fri Jun 03 10:30:36 2016 -0700
+#
+# Copyright (C) 2016 The scikit-yb developers
+# For license information, see LICENSE.txt
+#
+# ID: prediction_error.py [] $
+
+"""
+Comparison of the predicted vs. actual values for regression problems
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+from yellowbrick.style.palettes import LINE_COLOR
+from yellowbrick.exceptions import YellowbrickValueError
+from yellowbrick.bestfit import draw_best_fit, draw_identity_line
+from yellowbrick.regressor.base import RegressionScoreVisualizer
+
+
+## Packages for export
+__all__ = ["PredictionError", "prediction_error"]
+
+
+##########################################################################
+## Prediction Error Plots
+##########################################################################
+
+
+class PredictionError(RegressionScoreVisualizer):
+ """
+ The prediction error visualizer plots the actual targets from the dataset
+ against the predicted values generated by our model(s). This visualizer is
+ used to detect noise or heteroscedasticity along a range of the target
+ domain.
+
+ Parameters
+ ----------
+
+ model : a Scikit-Learn regressor
+ Should be an instance of a regressor, otherwise will raise a
+ YellowbrickTypeError exception on instantiation.
+ If the estimator is not fitted, it is fit when the visualizer is fitted,
+ unless otherwise specified by ``is_fitted``.
+
+ ax : matplotlib Axes, default: None
+ The axes to plot the figure on. If None is passed in the current axes
+ will be used (or generated if required).
+
+ shared_limits : bool, default: True
+ If shared_limits is True, the range of the X and Y axis limits will
+ be identical, creating a square graphic with a true 45 degree line.
+ In this form, it is easier to diagnose under- or over- prediction,
+ though the figure will become more sparse. To localize points, set
+ shared_limits to False, but note that this will distort the figure
+ and should be accounted for during analysis.
+
+ bestfit : bool, default: True
+ Draw a linear best fit line to estimate the correlation between the
+ predicted and measured value of the target variable. The color of
+ the bestfit line is determined by the ``line_color`` argument.
+
+ identity : bool, default: True
+ Draw the 45 degree identity line, y=x in order to better show the
+ relationship or pattern of the residuals. E.g. to estimate if the
+ model is over- or under- estimating the given values. The color of the
+ identity line is a muted version of the ``line_color`` argument.
+
+ alpha : float, default: 0.75
+ Specify a transparency where 1 is completely opaque and 0 is completely
+ transparent. This property makes densely clustered points more visible.
+
+ is_fitted : bool or str, default='auto'
+ Specify if the wrapped estimator is already fitted. If False, the estimator
+ will be fit when the visualizer is fit, otherwise, the estimator will not be
+ modified. If 'auto' (default), a helper method will check if the estimator
+ is fitted before fitting it again.
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Attributes
+ ----------
+
+ score_ : float
+ The R^2 score that specifies the goodness of fit of the underlying
+ regression model to the test data.
+
+ Examples
+ --------
+
+ >>> from yellowbrick.regressor import PredictionError
+ >>> from sklearn.linear_model import Lasso
+ >>> model = PredictionError(Lasso())
+ >>> model.fit(X_train, y_train)
+ >>> model.score(X_test, y_test)
+ >>> model.show()
+
+ Notes
+ -----
+
+ PredictionError is a ScoreVisualizer, meaning that it wraps a model and
+ its primary entry point is the `score()` method.
+ """
+
+ def __init__(
+ self,
+ model,
+ ax=None,
+ shared_limits=True,
+ bestfit=True,
+ identity=True,
+ alpha=0.75,
+ is_fitted="auto",
+ **kwargs
+ ):
+ # Whether or not to check if the model is already fitted
+ self.is_fitted = is_fitted
+
+ # Initialize the visualizer
+ super(PredictionError, self).__init__(model, ax=ax, **kwargs)
+
+ # Visual arguments
+ self.colors = {
+ "point": kwargs.pop("point_color", None),
+ "line": kwargs.pop("line_color", LINE_COLOR),
+ }
+
+ # Drawing arguments
+ self.shared_limits = shared_limits
+ self.bestfit = bestfit
+ self.identity = identity
+ self.alpha = alpha
+
+ def score(self, X, y, **kwargs):
+ """
+ The score function is the hook for visual interaction. Pass in test
+ data and the visualizer will create predictions on the data and
+ evaluate them with respect to the test values. The evaluation will
+ then be passed to draw() and the result of the estimator score will
+ be returned.
+
+ Parameters
+ ----------
+ X : array-like
+ X (also X_test) are the dependent variables of test set to predict
+
+ y : array-like
+ y (also y_test) is the independent actual variables to score against
+
+ Returns
+ -------
+ score : float
+ """
+ # super will set score_ on the visualizer
+ super(PredictionError, self).score(X, y, **kwargs)
+
+ y_pred = self.predict(X)
+ self.draw(y, y_pred)
+
+ return self.score_
+
+ def draw(self, y, y_pred):
+ """
+ Parameters
+ ----------
+ y : ndarray or Series of length n
+ An array or series of target or class values
+
+ y_pred : ndarray or Series of length n
+ An array or series of predicted target values
+
+ Returns
+ -------
+ ax : matplotlib Axes
+ The axis with the plotted figure
+ """
+ label = "$R^2 = {:0.3f}$".format(self.score_)
+ self.ax.scatter(
+ y, y_pred, c=self.colors["point"], alpha=self.alpha, label=label
+ )
+
+ # TODO If score happens inside a loop, draw gets called multiple times.
+ # Ideally we'd want the best fit line to be drawn only once
+ if self.bestfit:
+ draw_best_fit(
+ y,
+ y_pred,
+ self.ax,
+ "linear",
+ ls="--",
+ lw=2,
+ c=self.colors["line"],
+ label="best fit",
+ )
+
+ # Set the axes limits based on the range of X and Y data
+ # NOTE: shared_limits will be accounted for in finalize()
+ # TODO: do better than add one for really small residuals
+ self.ax.set_xlim(y.min() - 1, y.max() + 1)
+ self.ax.set_ylim(y_pred.min() - 1, y_pred.max() + 1)
+
+ return self.ax
+
+ def finalize(self, **kwargs):
+ """
+ Finalizes the figure by ensuring the aspect ratio is correct and adding
+ the identity line for comparison. Also adds a title, axis labels, and
+ the legend.
+
+ Parameters
+ ----------
+ kwargs: generic keyword arguments.
+
+ Notes
+ -----
+ Generally this method is called from show and not directly by the user.
+ """
+ # Set the title on the plot
+ self.set_title("Prediction Error for {}".format(self.name))
+
+ # Square the axes to ensure a 45 degree line
+ if self.shared_limits:
+ # Get the current limits
+ ylim = self.ax.get_ylim()
+ xlim = self.ax.get_xlim()
+
+ # Find the range that captures all data
+ bounds = (min(ylim[0], xlim[0]), max(ylim[1], xlim[1]))
+
+ # Reset the limits
+ self.ax.set_xlim(bounds)
+ self.ax.set_ylim(bounds)
+
+ # Ensure the aspect ratio is square
+ self.ax.set_aspect("equal", adjustable="box")
+
+ # Draw the 45 degree line
+ if self.identity:
+ draw_identity_line(
+ ax=self.ax,
+ ls="--",
+ lw=2,
+ c=self.colors["line"],
+ alpha=0.5,
+ label="identity",
+ )
+
+ # Set the axes labels
+ self.ax.set_ylabel(r"$\hat{y}$")
+ self.ax.set_xlabel(r"$y$")
+
+ # Set the legend
+ # Note: it would be nice to be able to use the manual_legend utility
+ # here, since if the user sets a low alpha value, the R2 color in the
+ # legend will also become more translucent. Unfortunately this is a
+ # bit tricky because adding a manual legend here would override the
+ # best fit and 45 degree line legend components. In particular, the
+ # best fit is plotted in draw because it depends on y and y_pred.
+ self.ax.legend(loc="best", frameon=True)
+
+
+##########################################################################
+## Quick Method
+##########################################################################
+
+
+def prediction_error(
+ model,
+ X_train,
+ y_train,
+ X_test=None,
+ y_test=None,
+ ax=None,
+ shared_limits=True,
+ bestfit=True,
+ identity=True,
+ alpha=0.75,
+ is_fitted="auto",
+ show=True,
+ **kwargs
+):
+ """Quickly plot a prediction error visualizer
+
+ Plot the actual targets from the dataset against the
+ predicted values generated by our model(s).
+
+ This helper function is a quick wrapper to utilize the PredictionError
+ ScoreVisualizer for one-off analysis.
+
+ Parameters
+ ----------
+ model : the Scikit-Learn estimator (should be a regressor)
+ Should be an instance of a regressor, otherwise will raise a
+ YellowbrickTypeError exception on instantiation.
+ If the estimator is not fitted, it is fit when the visualizer is fitted,
+ unless otherwise specified by ``is_fitted``.
+
+ X_train : ndarray or DataFrame of shape n x m
+ A feature array of n instances with m features the model is trained on.
+ Used to fit the visualizer and also to score the visualizer if test splits are
+ not directly specified.
+
+ y_train : ndarray or Series of length n
+ An array or series of target or class values. Used to fit the visualizer and
+ also to score the visualizer if test splits are not specified.
+
+ X_test : ndarray or DataFrame of shape n x m, default: None
+ An optional feature array of n instances with m features that the model
+ is scored on if specified, using X_train as the training data.
+
+ y_test : ndarray or Series of length n, default: None
+ An optional array or series of target or class values that serve as actual
+ labels for X_test for scoring purposes.
+
+ ax : matplotlib Axes
+ The axes to plot the figure on.
+
+ shared_limits : bool, default: True
+ If shared_limits is True, the range of the X and Y axis limits will
+ be identical, creating a square graphic with a true 45 degree line.
+ In this form, it is easier to diagnose under- or over- prediction,
+ though the figure will become more sparse. To localize points, set
+ shared_limits to False, but note that this will distort the figure
+ and should be accounted for during analysis.
+
+ bestfit : bool, default: True
+ Draw a linear best fit line to estimate the correlation between the
+ predicted and measured value of the target variable. The color of
+ the bestfit line is determined by the ``line_color`` argument.
+
+ identity: bool, default: True
+ Draw the 45 degree identity line, y=x in order to better show the
+ relationship or pattern of the residuals. E.g. to estimate if the
+ model is over- or under- estimating the given values. The color of the
+ identity line is a muted version of the ``line_color`` argument.
+
+ alpha : float, default: 0.75
+ Specify a transparency where 1 is completely opaque and 0 is completely
+ transparent. This property makes densely clustered points more visible.
+
+ is_fitted : bool or str, default='auto'
+ Specify if the wrapped estimator is already fitted. If False, the estimator
+ will be fit when the visualizer is fit, otherwise, the estimator will not be
+ modified. If 'auto' (default), a helper method will check if the estimator
+ is fitted before fitting it again.
+
+ show: bool, default: True
+ If True, calls ``show()``, which in turn calls ``plt.show()`` however you cannot
+ call ``plt.savefig`` from this signature, nor ``clear_figure``. If False, simply
+ calls ``finalize()``
+
+ kwargs : dict
+ Keyword arguments that are passed to the base class and may influence
+ the visualization as defined in other Visualizers.
+
+ Returns
+ -------
+ ax : matplotlib Axes
+ Returns the axes that the prediction error plot was drawn on.
+ """
+ # Instantiate the visualizer
+ visualizer = PredictionError(
+ model,
+ ax,
+ shared_limits=shared_limits,
+ bestfit=bestfit,
+ identity=identity,
+ alpha=alpha,
+ is_fitted=is_fitted,
+ **kwargs
+ )
+
+ visualizer.fit(X_train, y_train)
+
+ # Scores the visualizer with X and y test if provided, X and y train if not
+ if X_test is not None and y_test is not None:
+ visualizer.score(X_test, y_test)
+ elif X_test is not None or y_test is not None:
+ raise YellowbrickValueError(
+ "both X_test and y_test are required if one is specified"
+ )
+ else:
+ visualizer.score(X_train, y_train)
+
+ if show:
+ visualizer.show()
+ else:
+ visualizer.finalize()
+
+ # Return the axes object on the visualizer
+ return visualizer
diff --git a/yellowbrick/regressor/residuals.py b/yellowbrick/regressor/residuals.py
--- a/yellowbrick/regressor/residuals.py
+++ b/yellowbrick/regressor/residuals.py
@@ -1,5 +1,5 @@
# yellowbrick.regressor.residuals
-# Regressor visualizers that score residuals: prediction vs. actual data.
+# Visualize the residuals between predicted and actual data for regression problems
#
# Author: Rebecca Bilbro
# Author: Benjamin Bengfort
@@ -11,7 +11,7 @@
# ID: residuals.py [7d3f5e6] [email protected] $
"""
-Regressor visualizers that score residuals: prediction vs. actual data.
+Visualize the residuals between predicted and actual data for regression problems
"""
##########################################################################
@@ -34,374 +34,9 @@
from yellowbrick.style.palettes import LINE_COLOR
from yellowbrick.exceptions import YellowbrickValueError
from yellowbrick.regressor.base import RegressionScoreVisualizer
-from yellowbrick.bestfit import draw_best_fit, draw_identity_line
## Packages for export
-__all__ = ["PredictionError", "prediction_error", "ResidualsPlot", "residuals_plot"]
-
-
-##########################################################################
-## Prediction Error Plots
-##########################################################################
-
-
-class PredictionError(RegressionScoreVisualizer):
- """
- The prediction error visualizer plots the actual targets from the dataset
- against the predicted values generated by our model(s). This visualizer is
- used to detect noise or heteroscedasticity along a range of the target
- domain.
-
- Parameters
- ----------
-
- model : a Scikit-Learn regressor
- Should be an instance of a regressor, otherwise will raise a
- YellowbrickTypeError exception on instantiation.
- If the estimator is not fitted, it is fit when the visualizer is fitted,
- unless otherwise specified by ``is_fitted``.
-
- ax : matplotlib Axes, default: None
- The axes to plot the figure on. If None is passed in the current axes
- will be used (or generated if required).
-
- shared_limits : bool, default: True
- If shared_limits is True, the range of the X and Y axis limits will
- be identical, creating a square graphic with a true 45 degree line.
- In this form, it is easier to diagnose under- or over- prediction,
- though the figure will become more sparse. To localize points, set
- shared_limits to False, but note that this will distort the figure
- and should be accounted for during analysis.
-
- bestfit : bool, default: True
- Draw a linear best fit line to estimate the correlation between the
- predicted and measured value of the target variable. The color of
- the bestfit line is determined by the ``line_color`` argument.
-
- identity : bool, default: True
- Draw the 45 degree identity line, y=x in order to better show the
- relationship or pattern of the residuals. E.g. to estimate if the
- model is over- or under- estimating the given values. The color of the
- identity line is a muted version of the ``line_color`` argument.
-
- alpha : float, default: 0.75
- Specify a transparency where 1 is completely opaque and 0 is completely
- transparent. This property makes densely clustered points more visible.
-
- is_fitted : bool or str, default='auto'
- Specify if the wrapped estimator is already fitted. If False, the estimator
- will be fit when the visualizer is fit, otherwise, the estimator will not be
- modified. If 'auto' (default), a helper method will check if the estimator
- is fitted before fitting it again.
-
- kwargs : dict
- Keyword arguments that are passed to the base class and may influence
- the visualization as defined in other Visualizers.
-
- Attributes
- ----------
-
- score_ : float
- The R^2 score that specifies the goodness of fit of the underlying
- regression model to the test data.
-
- Examples
- --------
-
- >>> from yellowbrick.regressor import PredictionError
- >>> from sklearn.linear_model import Lasso
- >>> model = PredictionError(Lasso())
- >>> model.fit(X_train, y_train)
- >>> model.score(X_test, y_test)
- >>> model.show()
-
- Notes
- -----
-
- PredictionError is a ScoreVisualizer, meaning that it wraps a model and
- its primary entry point is the `score()` method.
- """
-
- def __init__(
- self,
- model,
- ax=None,
- shared_limits=True,
- bestfit=True,
- identity=True,
- alpha=0.75,
- is_fitted="auto",
- **kwargs
- ):
- # Whether or not to check if the model is already fitted
- self.is_fitted = is_fitted
-
- # Initialize the visualizer
- super(PredictionError, self).__init__(model, ax=ax, **kwargs)
-
- # Visual arguments
- self.colors = {
- "point": kwargs.pop("point_color", None),
- "line": kwargs.pop("line_color", LINE_COLOR),
- }
-
- # Drawing arguments
- self.shared_limits = shared_limits
- self.bestfit = bestfit
- self.identity = identity
- self.alpha = alpha
-
- def score(self, X, y, **kwargs):
- """
- The score function is the hook for visual interaction. Pass in test
- data and the visualizer will create predictions on the data and
- evaluate them with respect to the test values. The evaluation will
- then be passed to draw() and the result of the estimator score will
- be returned.
-
- Parameters
- ----------
- X : array-like
- X (also X_test) are the dependent variables of test set to predict
-
- y : array-like
- y (also y_test) is the independent actual variables to score against
-
- Returns
- -------
- score : float
- """
- # super will set score_ on the visualizer
- super(PredictionError, self).score(X, y, **kwargs)
-
- y_pred = self.predict(X)
- self.draw(y, y_pred)
-
- return self.score_
-
- def draw(self, y, y_pred):
- """
- Parameters
- ----------
- y : ndarray or Series of length n
- An array or series of target or class values
-
- y_pred : ndarray or Series of length n
- An array or series of predicted target values
-
- Returns
- -------
- ax : matplotlib Axes
- The axis with the plotted figure
- """
- label = "$R^2 = {:0.3f}$".format(self.score_)
- self.ax.scatter(
- y, y_pred, c=self.colors["point"], alpha=self.alpha, label=label
- )
-
- # TODO If score happens inside a loop, draw gets called multiple times.
- # Ideally we'd want the best fit line to be drawn only once
- if self.bestfit:
- draw_best_fit(
- y,
- y_pred,
- self.ax,
- "linear",
- ls="--",
- lw=2,
- c=self.colors["line"],
- label="best fit",
- )
-
- # Set the axes limits based on the range of X and Y data
- # NOTE: shared_limits will be accounted for in finalize()
- # TODO: do better than add one for really small residuals
- self.ax.set_xlim(y.min() - 1, y.max() + 1)
- self.ax.set_ylim(y_pred.min() - 1, y_pred.max() + 1)
-
- return self.ax
-
- def finalize(self, **kwargs):
- """
- Finalizes the figure by ensuring the aspect ratio is correct and adding
- the identity line for comparison. Also adds a title, axis labels, and
- the legend.
-
- Parameters
- ----------
- kwargs: generic keyword arguments.
-
- Notes
- -----
- Generally this method is called from show and not directly by the user.
- """
- # Set the title on the plot
- self.set_title("Prediction Error for {}".format(self.name))
-
- # Square the axes to ensure a 45 degree line
- if self.shared_limits:
- # Get the current limits
- ylim = self.ax.get_ylim()
- xlim = self.ax.get_xlim()
-
- # Find the range that captures all data
- bounds = (min(ylim[0], xlim[0]), max(ylim[1], xlim[1]))
-
- # Reset the limits
- self.ax.set_xlim(bounds)
- self.ax.set_ylim(bounds)
-
- # Ensure the aspect ratio is square
- self.ax.set_aspect("equal", adjustable="box")
-
- # Draw the 45 degree line
- if self.identity:
- draw_identity_line(
- ax=self.ax,
- ls="--",
- lw=2,
- c=self.colors["line"],
- alpha=0.5,
- label="identity",
- )
-
- # Set the axes labels
- self.ax.set_ylabel(r"$\hat{y}$")
- self.ax.set_xlabel(r"$y$")
-
- # Set the legend
- # Note: it would be nice to be able to use the manual_legend utility
- # here, since if the user sets a low alpha value, the R2 color in the
- # legend will also become more translucent. Unfortunately this is a
- # bit tricky because adding a manual legend here would override the
- # best fit and 45 degree line legend components. In particular, the
- # best fit is plotted in draw because it depends on y and y_pred.
- self.ax.legend(loc="best", frameon=True)
-
-
-def prediction_error(
- model,
- X_train,
- y_train,
- X_test=None,
- y_test=None,
- ax=None,
- shared_limits=True,
- bestfit=True,
- identity=True,
- alpha=0.75,
- is_fitted="auto",
- show=True,
- **kwargs):
- """Quickly plot a prediction error visualizer
-
- Plot the actual targets from the dataset against the
- predicted values generated by our model(s).
-
- This helper function is a quick wrapper to utilize the PredictionError
- ScoreVisualizer for one-off analysis.
-
- Parameters
- ----------
- model : the Scikit-Learn estimator (should be a regressor)
- Should be an instance of a regressor, otherwise will raise a
- YellowbrickTypeError exception on instantiation.
- If the estimator is not fitted, it is fit when the visualizer is fitted,
- unless otherwise specified by ``is_fitted``.
-
- X_train : ndarray or DataFrame of shape n x m
- A feature array of n instances with m features the model is trained on.
- Used to fit the visualizer and also to score the visualizer if test splits are
- not directly specified.
-
- y_train : ndarray or Series of length n
- An array or series of target or class values. Used to fit the visualizer and
- also to score the visualizer if test splits are not specified.
-
- X_test : ndarray or DataFrame of shape n x m, default: None
- An optional feature array of n instances with m features that the model
- is scored on if specified, using X_train as the training data.
-
- y_test : ndarray or Series of length n, default: None
- An optional array or series of target or class values that serve as actual
- labels for X_test for scoring purposes.
-
- ax : matplotlib Axes
- The axes to plot the figure on.
-
- shared_limits : bool, default: True
- If shared_limits is True, the range of the X and Y axis limits will
- be identical, creating a square graphic with a true 45 degree line.
- In this form, it is easier to diagnose under- or over- prediction,
- though the figure will become more sparse. To localize points, set
- shared_limits to False, but note that this will distort the figure
- and should be accounted for during analysis.
-
- bestfit : bool, default: True
- Draw a linear best fit line to estimate the correlation between the
- predicted and measured value of the target variable. The color of
- the bestfit line is determined by the ``line_color`` argument.
-
- identity: bool, default: True
- Draw the 45 degree identity line, y=x in order to better show the
- relationship or pattern of the residuals. E.g. to estimate if the
- model is over- or under- estimating the given values. The color of the
- identity line is a muted version of the ``line_color`` argument.
-
- alpha : float, default: 0.75
- Specify a transparency where 1 is completely opaque and 0 is completely
- transparent. This property makes densely clustered points more visible.
-
- is_fitted : bool or str, default='auto'
- Specify if the wrapped estimator is already fitted. If False, the estimator
- will be fit when the visualizer is fit, otherwise, the estimator will not be
- modified. If 'auto' (default), a helper method will check if the estimator
- is fitted before fitting it again.
-
- show: bool, default: True
- If True, calls ``show()``, which in turn calls ``plt.show()`` however you cannot
- call ``plt.savefig`` from this signature, nor ``clear_figure``. If False, simply
- calls ``finalize()``
-
- kwargs : dict
- Keyword arguments that are passed to the base class and may influence
- the visualization as defined in other Visualizers.
-
- Returns
- -------
- ax : matplotlib Axes
- Returns the axes that the prediction error plot was drawn on.
- """
- # Instantiate the visualizer
- visualizer = PredictionError(
- model,
- ax,
- shared_limits=shared_limits,
- bestfit=bestfit,
- identity=identity,
- alpha=alpha,
- is_fitted=is_fitted,
- **kwargs)
-
- visualizer.fit(X_train, y_train)
-
- # Scores the visualizer with X and y test if provided, X and y train if not
- if X_test is not None and y_test is not None:
- visualizer.score(X_test, y_test)
- elif X_test is not None or y_test is not None:
- raise YellowbrickValueError(
- "both X_test and y_test are required if one is specified"
- )
- else:
- visualizer.score(X_train, y_train)
-
- if show:
- visualizer.show()
- else:
- visualizer.finalize()
-
- # Return the axes object on the visualizer
- return visualizer
+__all__ = ["ResidualsPlot", "residuals_plot"]
##########################################################################
@@ -717,7 +352,7 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
# Add residuals histogram
if self.qqplot in {True}:
- osm, osr = probplot(residuals, dist='norm', fit=False)
+ osm, osr = probplot(residuals, dist="norm", fit=False)
self.qqax.scatter(osm, osr, c=color, alpha=alpha, label=label)
@@ -727,7 +362,7 @@ def draw(self, y_pred, residuals, train=False, **kwargs):
def finalize(self, **kwargs):
"""
- Prepares the plot for renderig by adding a title, legend, and axis labels.
+ Prepares the plot for rendering by adding a title, legend, and axis labels.
Also draws a line at the zero residuals to show the baseline.
Parameters
| diff --git a/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_lines.png b/tests/baseline_images/test_regressor/test_prediction_error/test_peplot_no_lines.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_peplot_no_lines.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_peplot_no_lines.png
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_peplot_no_shared_limits.png b/tests/baseline_images/test_regressor/test_prediction_error/test_peplot_no_shared_limits.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_peplot_no_shared_limits.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_peplot_no_shared_limits.png
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error.png b/tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_prediction_error.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error.png
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_numpy.png b/tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_numpy.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_prediction_error_numpy.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_numpy.png
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_pandas.png b/tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_pandas.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_prediction_error_pandas.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_pandas.png
diff --git a/tests/baseline_images/test_regressor/test_residuals/test_prediction_error_quick_method.png b/tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_quick_method.png
similarity index 100%
rename from tests/baseline_images/test_regressor/test_residuals/test_prediction_error_quick_method.png
rename to tests/baseline_images/test_regressor/test_prediction_error/test_prediction_error_quick_method.png
diff --git a/tests/test_contrib/test_classifier/test_boundaries.py b/tests/test_contrib/test_classifier/test_boundaries.py
--- a/tests/test_contrib/test_classifier/test_boundaries.py
+++ b/tests/test_contrib/test_classifier/test_boundaries.py
@@ -356,7 +356,7 @@ def test_integrated_scatter_numpy_arrays_no_names(self):
"""
Test integration of visualizer with numpy arrays
"""
- model = neighbors.KNeighborsClassifier(3)
+ model = neighbors.KNeighborsClassifier(n_neighbors=3)
visualizer = DecisionBoundariesVisualizer(model, features=[1, 2])
visualizer.fit_draw_show(X, y)
diff --git a/tests/test_regressor/test_prediction_error.py b/tests/test_regressor/test_prediction_error.py
new file mode 100644
--- /dev/null
+++ b/tests/test_regressor/test_prediction_error.py
@@ -0,0 +1,260 @@
+# tests.test_regressor.test_prediction_error
+# Ensure that the regressor prediction error visualization works.
+#
+# Author: Rebecca Bilbro
+# Author: Benjamin Bengfort
+# Created: Sat Oct 8 16:30:39 2016 -0400
+#
+# Copyright (C) 2016 The scikit-yb developers
+# For license information, see LICENSE.txt
+#
+# ID: test_prediction_error.py [] $
+
+"""
+Ensure that the regressor prediction error visualization works.
+"""
+
+##########################################################################
+## Imports
+##########################################################################
+
+import pytest
+import matplotlib.pyplot as plt
+
+from unittest import mock
+from tests.fixtures import Dataset, Split
+from tests.base import IS_WINDOWS_OR_CONDA, VisualTestCase
+
+from yellowbrick.datasets import load_energy
+from yellowbrick.regressor.prediction_error import PredictionError, prediction_error
+
+from sklearn.datasets import make_regression
+from sklearn.linear_model import Ridge, Lasso
+from sklearn.neural_network import MLPRegressor
+from sklearn.linear_model import LinearRegression
+from sklearn.model_selection import train_test_split as tts
+
+try:
+ import pandas as pd
+except ImportError:
+ pd = None
+
+##########################################################################
+## Data
+##########################################################################
+
+
[email protected](scope="class")
+def data(request):
+ """
+ Creates a fixture of train and test splits for the sklearn digits dataset
+ For ease of use returns a Dataset named tuple composed of two Split tuples.
+ """
+ X, y = make_regression(
+ n_samples=500,
+ n_features=22,
+ n_informative=8,
+ random_state=42,
+ noise=0.2,
+ bias=0.2,
+ )
+
+ X_train, X_test, y_train, y_test = tts(X, y, test_size=0.2, random_state=11)
+
+ # Set a class attribute for digits
+ request.cls.data = Dataset(Split(X_train, X_test), Split(y_train, y_test))
+
+
+##########################################################################
+## Prediction Error Test Cases
+##########################################################################
+
+
[email protected]("data")
+class TestPredictionError(VisualTestCase):
+ """
+ Test the PredictionError visualizer
+ """
+
+ @pytest.mark.filterwarnings("ignore:Stochastic Optimizer")
+ @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
+ def test_prediction_error(self):
+ """
+ Test image similarity of prediction error on random data
+ """
+ _, ax = plt.subplots()
+
+ model = MLPRegressor(random_state=229)
+ visualizer = PredictionError(model, ax=ax)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
+
+ @pytest.mark.skipif(pd is None, reason="pandas is required")
+ def test_prediction_error_pandas(self):
+ """
+ Test Pandas real world dataset with image similarity on Ridge
+ """
+ _, ax = plt.subplots()
+
+ # Load the occupancy dataset from fixtures
+ data = load_energy(return_dataset=True)
+ X, y = data.to_pandas()
+
+ # Create train/test splits
+ splits = tts(X, y, test_size=0.2, random_state=8873)
+ X_train, X_test, y_train, y_test = splits
+
+ visualizer = PredictionError(Ridge(random_state=22), ax=ax)
+ visualizer.fit(X_train, y_train)
+ visualizer.score(X_test, y_test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
+
+ def test_prediction_error_numpy(self):
+ """
+ Test NumPy real world dataset with image similarity on Ridge
+ """
+ _, ax = plt.subplots()
+
+ # Load the occupancy dataset from fixtures
+ data = load_energy(return_dataset=True)
+ X, y = data.to_numpy()
+
+ # Create train/test splits
+ splits = tts(X, y, test_size=0.2, random_state=8873)
+ X_train, X_test, y_train, y_test = splits
+
+ visualizer = PredictionError(Ridge(random_state=22), ax=ax)
+ visualizer.fit(X_train, y_train)
+ visualizer.score(X_test, y_test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1, remove_legend=True)
+
+ def test_score(self):
+ """
+ Assert returns R2 score
+ """
+ visualizer = PredictionError(LinearRegression())
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ score = visualizer.score(self.data.X.test, self.data.y.test)
+
+ assert score == pytest.approx(0.9999983124154965)
+ assert visualizer.score_ == score
+
+ def test_peplot_shared_limits(self):
+ """
+ Test shared limits on the peplot
+ """
+ visualizer = PredictionError(LinearRegression(), shared_limits=False)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ xlim = tuple(map(int, visualizer.ax.get_xlim()))
+ ylim = tuple(map(int, visualizer.ax.get_ylim()))
+ assert xlim == ylim
+
+ @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
+ def test_peplot_no_shared_limits(self):
+ """
+ Test image similarity with no shared limits on the peplot
+ """
+ visualizer = PredictionError(Ridge(random_state=43), shared_limits=False)
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ xlim = tuple(map(int, visualizer.ax.get_xlim()))
+ ylim = tuple(map(int, visualizer.ax.get_ylim()))
+ assert not xlim == ylim
+
+ self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
+
+ def test_peplot_no_lines(self):
+ """
+ Test image similarity with no lines drawn on the plot
+ """
+ visualizer = PredictionError(
+ Lasso(random_state=23, alpha=10), bestfit=False, identity=False
+ )
+
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+ visualizer.finalize()
+
+ self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
+
+ def test_alpha_param(self):
+ """
+ Test that the user can supply an alpha param on instantiation
+ """
+ # Instantiate a sklearn regressor
+ model = Lasso(random_state=23, alpha=10)
+ # Instantiate a prediction error plot, provide custom alpha
+ visualizer = PredictionError(model, bestfit=False, identity=False, alpha=0.7)
+
+ # Test param gets set correctly
+ assert visualizer.alpha == 0.7
+
+ # Mock ax and fit the visualizer
+ visualizer.ax = mock.MagicMock(autospec=True)
+ visualizer.fit(self.data.X.train, self.data.y.train)
+ visualizer.score(self.data.X.test, self.data.y.test)
+
+ # Test that alpha was passed to internal matplotlib scatterplot
+ _, scatter_kwargs = visualizer.ax.scatter.call_args
+ assert "alpha" in scatter_kwargs
+ assert scatter_kwargs["alpha"] == 0.7
+
+ @pytest.mark.xfail(
+ reason="""third test fails with AssertionError: Expected fit
+ to be called once. Called 0 times."""
+ )
+ def test_peplot_with_fitted(self):
+ """
+ Test that PredictionError properly handles an already-fitted model
+ """
+ X, y = load_energy(return_dataset=True).to_numpy()
+
+ model = Ridge().fit(X, y)
+
+ with mock.patch.object(model, "fit") as mockfit:
+ oz = PredictionError(model)
+ oz.fit(X, y)
+ mockfit.assert_not_called()
+
+ with mock.patch.object(model, "fit") as mockfit:
+ oz = PredictionError(model, is_fitted=True)
+ oz.fit(X, y)
+ mockfit.assert_not_called()
+
+ with mock.patch.object(model, "fit") as mockfit:
+ oz = PredictionError(model, is_fitted=False)
+ oz.fit(X, y)
+ mockfit.assert_called_once_with(X, y)
+
+ @pytest.mark.xfail(
+ IS_WINDOWS_OR_CONDA,
+ reason="font rendering different in OS and/or Python; see #892",
+ )
+ def test_prediction_error_quick_method(self):
+ """
+ Image similarity test using the residuals plot quick method
+ """
+ _, ax = plt.subplots()
+
+ model = Lasso(random_state=19)
+ oz = prediction_error(
+ model, self.data.X.train, self.data.y.train, ax=ax, show=False
+ )
+ assert isinstance(oz, PredictionError)
+ self.assert_images_similar(oz)
diff --git a/tests/test_regressor/test_residuals.py b/tests/test_regressor/test_residuals.py
--- a/tests/test_regressor/test_residuals.py
+++ b/tests/test_regressor/test_residuals.py
@@ -24,8 +24,8 @@
import matplotlib.pyplot as plt
from yellowbrick.datasets import load_energy
-from yellowbrick.regressor.residuals import *
from yellowbrick.exceptions import YellowbrickValueError
+from yellowbrick.regressor.residuals import ResidualsPlot, residuals_plot
from unittest import mock
from tests.fixtures import Dataset, Split
@@ -72,185 +72,6 @@ def data(request):
# Set a class attribute for digits
request.cls.data = Dataset(Split(X_train, X_test), Split(y_train, y_test))
-
-##########################################################################
-## Prediction Error Test Cases
-##########################################################################
-
-
[email protected]("data")
-class TestPredictionError(VisualTestCase):
- """
- Test the PredictionError visualizer
- """
-
- @pytest.mark.filterwarnings("ignore:Stochastic Optimizer")
- @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
- def test_prediction_error(self):
- """
- Test image similarity of prediction error on random data
- """
- _, ax = plt.subplots()
-
- model = MLPRegressor(random_state=229)
- visualizer = PredictionError(model, ax=ax)
-
- visualizer.fit(self.data.X.train, self.data.y.train)
- visualizer.score(self.data.X.test, self.data.y.test)
- visualizer.finalize()
-
- self.assert_images_similar(visualizer, tol=1, remove_legend=True)
-
- @pytest.mark.skipif(pd is None, reason="pandas is required")
- def test_prediction_error_pandas(self):
- """
- Test Pandas real world dataset with image similarity on Ridge
- """
- _, ax = plt.subplots()
-
- # Load the occupancy dataset from fixtures
- data = load_energy(return_dataset=True)
- X, y = data.to_pandas()
-
- # Create train/test splits
- splits = tts(X, y, test_size=0.2, random_state=8873)
- X_train, X_test, y_train, y_test = splits
-
- visualizer = PredictionError(Ridge(random_state=22), ax=ax)
- visualizer.fit(X_train, y_train)
- visualizer.score(X_test, y_test)
- visualizer.finalize()
-
- self.assert_images_similar(visualizer, tol=1, remove_legend=True)
-
- def test_prediction_error_numpy(self):
- """
- Test NumPy real world dataset with image similarity on Ridge
- """
- _, ax = plt.subplots()
-
- # Load the occupancy dataset from fixtures
- data = load_energy(return_dataset=True)
- X, y = data.to_numpy()
-
- # Create train/test splits
- splits = tts(X, y, test_size=0.2, random_state=8873)
- X_train, X_test, y_train, y_test = splits
-
- visualizer = PredictionError(Ridge(random_state=22), ax=ax)
- visualizer.fit(X_train, y_train)
- visualizer.score(X_test, y_test)
- visualizer.finalize()
-
- self.assert_images_similar(visualizer, tol=1, remove_legend=True)
-
- def test_score(self):
- """
- Assert returns R2 score
- """
- visualizer = PredictionError(LinearRegression())
-
- visualizer.fit(self.data.X.train, self.data.y.train)
- score = visualizer.score(self.data.X.test, self.data.y.test)
-
- assert score == pytest.approx(0.9999983124154965)
- assert visualizer.score_ == score
-
- def test_peplot_shared_limits(self):
- """
- Test shared limits on the peplot
- """
- visualizer = PredictionError(LinearRegression(), shared_limits=False)
-
- visualizer.fit(self.data.X.train, self.data.y.train)
- visualizer.score(self.data.X.test, self.data.y.test)
- visualizer.finalize()
-
- xlim = tuple(map(int, visualizer.ax.get_xlim()))
- ylim = tuple(map(int, visualizer.ax.get_ylim()))
- assert xlim == ylim
-
- @pytest.mark.filterwarnings("ignore:internal gelsd driver lwork query error")
- def test_peplot_no_shared_limits(self):
- """
- Test image similarity with no shared limits on the peplot
- """
- visualizer = PredictionError(Ridge(random_state=43), shared_limits=False)
-
- visualizer.fit(self.data.X.train, self.data.y.train)
- visualizer.score(self.data.X.test, self.data.y.test)
- visualizer.finalize()
-
- xlim = tuple(map(int, visualizer.ax.get_xlim()))
- ylim = tuple(map(int, visualizer.ax.get_ylim()))
- assert not xlim == ylim
-
- self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
-
- def test_peplot_no_lines(self):
- """
- Test image similarity with no lines drawn on the plot
- """
- visualizer = PredictionError(
- Lasso(random_state=23, alpha=10), bestfit=False, identity=False
- )
-
- visualizer.fit(self.data.X.train, self.data.y.train)
- visualizer.score(self.data.X.test, self.data.y.test)
- visualizer.finalize()
-
- self.assert_images_similar(visualizer, tol=1.0, remove_legend=True)
-
- def test_alpha_param(self):
- """
- Test that the user can supply an alpha param on instantiation
- """
- # Instantiate a sklearn regressor
- model = Lasso(random_state=23, alpha=10)
- # Instantiate a prediction error plot, provide custom alpha
- visualizer = PredictionError(model, bestfit=False, identity=False, alpha=0.7)
-
- # Test param gets set correctly
- assert visualizer.alpha == 0.7
-
- # Mock ax and fit the visualizer
- visualizer.ax = mock.MagicMock(autospec=True)
- visualizer.fit(self.data.X.train, self.data.y.train)
- visualizer.score(self.data.X.test, self.data.y.test)
-
- # Test that alpha was passed to internal matplotlib scatterplot
- _, scatter_kwargs = visualizer.ax.scatter.call_args
- assert "alpha" in scatter_kwargs
- assert scatter_kwargs["alpha"] == 0.7
-
- @pytest.mark.xfail(
- reason="""third test fails with AssertionError: Expected fit
- to be called once. Called 0 times."""
- )
- def test_peplot_with_fitted(self):
- """
- Test that PredictionError properly handles an already-fitted model
- """
- X, y = load_energy(return_dataset=True).to_numpy()
-
- model = Ridge().fit(X, y)
-
- with mock.patch.object(model, "fit") as mockfit:
- oz = PredictionError(model)
- oz.fit(X, y)
- mockfit.assert_not_called()
-
- with mock.patch.object(model, "fit") as mockfit:
- oz = PredictionError(model, is_fitted=True)
- oz.fit(X, y)
- mockfit.assert_not_called()
-
- with mock.patch.object(model, "fit") as mockfit:
- oz = PredictionError(model, is_fitted=False)
- oz.fit(X, y)
- mockfit.assert_called_once_with(X, y)
-
-
##########################################################################
## Residuals Plot Test Cases
##########################################################################
@@ -290,8 +111,7 @@ def test_residuals_plot_QQ_plot(self):
"""
_, ax = plt.subplots()
- visualizer = ResidualsPlot(LinearRegression(), hist=False,
- qqplot=True, ax=ax)
+ visualizer = ResidualsPlot(LinearRegression(), hist=False, qqplot=True, ax=ax)
visualizer.fit(self.data.X.train, self.data.y.train)
visualizer.score(self.data.X.test, self.data.y.test)
@@ -302,8 +122,9 @@ def test_either_hist_or_QQ_plot(self):
"""
Setting both hist=True and qqplot=True raises exception.
"""
- with pytest.raises(YellowbrickValueError,
- match="Set either hist or qqplot to False"):
+ with pytest.raises(
+ YellowbrickValueError, match="Set either hist or qqplot to False"
+ ):
ResidualsPlot(LinearRegression(), hist=True, qqplot=True)
@pytest.mark.xfail(
@@ -332,7 +153,7 @@ def test_hist_matplotlib_version(self, mock_toolkit):
"""
ValueError is raised when matplotlib version is incorrect and hist=True
"""
- with pytst.raises(ImportError):
+ with pytest.raises(ImportError):
from mpl_toolkits.axes_grid1 import make_axes_locatable
assert not make_axes_locatable
@@ -392,9 +213,7 @@ def test_residuals_quick_method_train_only(self):
Test the quick method with only train data (simplest args)
"""
oz = residuals_plot(
- Ridge(random_state=19),
- self.data.X.train,
- self.data.y.train,
+ Ridge(random_state=19), self.data.X.train, self.data.y.train
)
assert isinstance(oz, ResidualsPlot)
@@ -407,10 +226,7 @@ def test_residuals_quick_method_missing_data(self):
msg = "both X_test and y_test are required if one is specified"
with pytest.raises(YellowbrickValueError, match=msg):
residuals_plot(
- Lasso(),
- self.data.X.train,
- self.data.y.train,
- self.data.X.test,
+ Lasso(), self.data.X.train, self.data.y.train, self.data.X.test
)
@pytest.mark.xfail(
@@ -521,20 +337,3 @@ def test_residuals_with_fitted(self):
oz = ResidualsPlot(model, is_fitted=False)
oz.fit(X, y)
mockfit.assert_called_once_with(X, y)
-
- @pytest.mark.xfail(
- IS_WINDOWS_OR_CONDA,
- reason="font rendering different in OS and/or Python; see #892",
- )
- def test_prediction_error_quick_method(self):
- """
- Image similarity test using the residuals plot quick method
- """
- _, ax = plt.subplots()
-
- model = Lasso(random_state=19)
- oz = prediction_error(
- model, self.data.X.train, self.data.y.train, ax=ax, show=False
- )
- assert isinstance(oz, PredictionError)
- self.assert_images_similar(oz)
| API reference conflict for residuals.py in documentation
We have a minor documentation conflict where the `:class:~yellowbrick.regressor.residuals.PredictionError` directive in our documentation is not able to resolve the link to the API documentation. The issue is that we have the `yellowbrick.regressor.residuals` `automodule` in both `peplot.rst` and in `residuals.rst` so the documentation doesn't know which one to link to. The solutions that I can see to fix this are:
1. Move PredictionError to its own module `yellowbrick.regressor.peplot` (and associated tests)
2. Combine the residuals documentation into one page (probably not the best)
3. Research a way to get Sphinx to recognize two separate `automodule` directives (no idea if this is possible)
This issue was discovered in #1022
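For illustration, this is how downstream imports would look under option 1 once `PredictionError` lives in its own module; the module and symbol names are taken from the accompanying patch and test changes, so treat this as the intended layout rather than a released API:

```python
# Each visualizer now resolves to exactly one module, so the automodule
# directives in peplot.rst and residuals.rst no longer point at the same file.
from yellowbrick.regressor.prediction_error import PredictionError, prediction_error
from yellowbrick.regressor.residuals import ResidualsPlot, residuals_plot
```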
| Thank you @bbengfort for opening this so that we don't forget it in the midst of the quick method extravaganza! My vote is for option # 1! | 2020-06-14T16:58:43 |
DistrictDataLabs/yellowbrick | 1,121 | DistrictDataLabs__yellowbrick-1121 | [
"1120"
] | a46e70c94e0ed96cc3160d370df35d3bc0bc7888 | diff --git a/yellowbrick/utils/kneed.py b/yellowbrick/utils/kneed.py
--- a/yellowbrick/utils/kneed.py
+++ b/yellowbrick/utils/kneed.py
@@ -49,7 +49,7 @@
class KneeLocator(object):
"""
- Finds the "elbow" or "knee" which is a value corresponding to the point of maximum curvature
+ Finds the "elbow" or "knee" which is a value corresponding to the point of maximum curvature
in an elbow curve, using knee point detection algorithm. This point is accessible via the
`knee` attribute.
@@ -60,19 +60,22 @@ class KneeLocator(object):
y : list
A list of silhouette score corresponding to each value of k.
-
+
S : float, default: 1.0
- Sensitivity parameter that allows us to adjust how aggressive we want KneeLocator to
+ Sensitivity parameter that allows us to adjust how aggressive we want KneeLocator to
be when detecting "knees" or "elbows".
curve_nature : string, default: 'concave'
- A string that determines the nature of the elbow curve in which "knee" or "elbow" is
+ A string that determines the nature of the elbow curve in which "knee" or "elbow" is
to be found.
curve_direction : string, default: 'increasing'
- A string that determines tha increasing or decreasing nature of the elbow curve in
+ A string that determines tha increasing or decreasing nature of the elbow curve in
which "knee" or "elbow" is to be found.
-
+
+ online : bool, default: False
+ kneed will correct old knee points if True, will return first knee if False
+
Notes
-----
The KneeLocator is implemented using the "knee point detection algorithm" which can be read at
@@ -80,22 +83,30 @@ class KneeLocator(object):
"""
def __init__(
- self, x, y, S=1.0, curve_nature="concave", curve_direction="increasing"
+ self,
+ x,
+ y,
+ S=1.0,
+ curve_nature="concave",
+ curve_direction="increasing",
+ online=False,
):
# Raw Input
- self.x = x
- self.y = y
+ self.x = np.array(x)
+ self.y = np.array(y)
self.curve_nature = curve_nature
self.curve_direction = curve_direction
self.N = len(self.x)
self.S = S
self.all_knees = set()
self.all_norm_knees = set()
+ self.all_knees_y = []
+ self.all_norm_knees_y = []
+ self.online = online
# Step 1: fit a smooth line
uspline = interpolate.interp1d(self.x, self.y)
- self.x = np.array(x)
self.Ds_y = uspline(self.x)
# Step 2: normalize values
@@ -103,34 +114,38 @@ def __init__(
self.y_normalized = self.__normalize(self.Ds_y)
# Step 3: Calculate the Difference curve
- self.x_normalized, self.y_normalized = self.transform_xy(
- self.x_normalized,
- self.y_normalized,
- self.curve_direction,
- self.curve_nature,
+ self.y_normalized = self.transform_y(
+ self.y_normalized, self.curve_direction, self.curve_nature
)
# normalized difference curve
- self.y_distance = self.y_normalized - self.x_normalized
- self.x_distance = self.x_normalized.copy()
+ self.y_difference = self.y_normalized - self.x_normalized
+ self.x_difference = self.x_normalized.copy()
# Step 4: Identify local maxima/minima
# local maxima
- self.maxima_inidices = argrelextrema(self.y_distance, np.greater)[0]
- self.x_distance_maxima = self.x_distance[self.maxima_inidices]
- self.y_distance_maxima = self.y_distance[self.maxima_inidices]
+ self.maxima_indices = argrelextrema(self.y_difference, np.greater_equal)[0]
+ self.x_difference_maxima = self.x_difference[self.maxima_indices]
+ self.y_difference_maxima = self.y_difference[self.maxima_indices]
# local minima
- self.minima_indices = argrelextrema(self.y_distance, np.less)[0]
- self.x_distance_minima = self.x_distance[self.minima_indices]
- self.y_distance_minima = self.y_distance[self.minima_indices]
+ self.minima_indices = argrelextrema(self.y_difference, np.less_equal)[0]
+ self.x_difference_minima = self.x_difference[self.minima_indices]
+ self.y_difference_minima = self.y_difference[self.minima_indices]
# Step 5: Calculate thresholds
- self.Tmx = self.y_distance_maxima - (
+ self.Tmx = self.y_difference_maxima - (
self.S * np.abs(np.diff(self.x_normalized).mean())
)
# Step 6: find knee
- self.find_knee()
+ self.knee, self.norm_knee = self.find_knee()
+
+ # Step 7: If we have a knee, extract data about it
+ self.knee_y = self.norm_knee_y = None
+ if self.knee:
+ self.knee_y = self.y[self.x == self.knee][0]
+ self.norm_knee_y = self.y_normalized[self.x_normalized == self.norm_knee][0]
+
if (self.all_knees or self.all_norm_knees) == set():
warning_message = (
"No 'knee' or 'elbow point' detected "
@@ -140,8 +155,8 @@ def __init__(
warnings.warn(warning_message, YellowbrickWarning)
self.knee = None
self.norm_knee = None
- else:
- self.knee, self.norm_knee = min(self.all_knees), min(self.all_norm_knees)
+ self.knee_y = None
+ self.norm_knee_y = None
@staticmethod
def __normalize(a):
@@ -155,25 +170,24 @@ def __normalize(a):
return (a - min(a)) / (max(a) - min(a))
@staticmethod
- def transform_xy(x, y, direction, curve):
- """transform x and y to concave, increasing based on curve_direction and curve_nature"""
+ def transform_y(y, direction, curve):
+ """transform y to concave, increasing based on given direction and curve"""
# convert elbows to knees
- if curve == "convex":
- x = x.max() - x
- y = y.max() - y
- # flip decreasing functions to increasing
if direction == "decreasing":
- y = np.flip(y)
+ if curve == "concave":
+ y = np.flip(y)
+ elif curve == "convex":
+ y = y.max() - y
+ elif direction == "increasing" and curve == "convex":
+ y = np.flip(y.max() - y)
- if curve == "convex":
- x = np.flip(x)
- y = np.flip(y)
+ return y
- return x, y
-
- def find_knee(self,):
+ def find_knee(
+ self,
+ ):
"""This function finds and sets the knee value and the normalized knee value. """
- if not self.maxima_inidices.size:
+ if not self.maxima_indices.size:
warning_message = (
'No "knee" or "elbow point" detected '
"This could be due to bad clustering, no "
@@ -182,58 +196,71 @@ def find_knee(self,):
warnings.warn(warning_message, YellowbrickWarning)
return None, None
- # artificially place a local max at the last item in the x_distance array
- self.maxima_inidices = np.append(self.maxima_inidices, len(self.x_distance) - 1)
- self.minima_indices = np.append(self.minima_indices, len(self.x_distance) - 1)
-
# placeholder for which threshold region i is located in.
maxima_threshold_index = 0
minima_threshold_index = 0
- # traverse the distance curve
- for idx, i in enumerate(self.x_distance):
+ # traverse the difference curve
+ for i, x in enumerate(self.x_difference):
+ # skip points on the curve before the the first local maxima
+ if i < self.maxima_indices[0]:
+ continue
+
+ j = i + 1
+
# reached the end of the curve
- if i == 1.0:
+ if x == 1.0:
break
- # values in distance curve are at or after a local maximum
- if idx >= self.maxima_inidices[maxima_threshold_index]:
+
+ # if we're at a local max, increment the maxima threshold index and continue
+ if (self.maxima_indices == i).any():
threshold = self.Tmx[maxima_threshold_index]
- threshold_index = idx
+ threshold_index = i
maxima_threshold_index += 1
- # values in distance curve are at or after a local minimum
- if idx >= self.minima_indices[minima_threshold_index]:
+ # values in difference curve are at or after a local minimum
+ if (self.minima_indices == i).any():
threshold = 0.0
minima_threshold_index += 1
- # Do not evaluate values in the distance curve before the first local maximum.
- if idx < self.maxima_inidices[0]:
- continue
- # evaluate the threshold
- if self.y_distance[idx] < threshold:
+ if self.y_difference[j] < threshold:
if self.curve_nature == "convex":
if self.curve_direction == "decreasing":
knee = self.x[threshold_index]
- self.all_knees.add(knee)
norm_knee = self.x_normalized[threshold_index]
- self.all_norm_knees.add(norm_knee)
else:
knee = self.x[-(threshold_index + 1)]
- self.all_knees.add(knee)
- norm_knee = self.x_normalized[-(threshold_index + 1)]
- self.all_norm_knees.add(norm_knee)
+ norm_knee = self.x_normalized[threshold_index]
elif self.curve_nature == "concave":
if self.curve_direction == "decreasing":
knee = self.x[-(threshold_index + 1)]
- self.all_knees.add(knee)
- norm_knee = self.x_normalized[-(threshold_index + 1)]
- self.all_norm_knees.add(norm_knee)
+ norm_knee = self.x_normalized[threshold_index]
else:
knee = self.x[threshold_index]
- self.all_knees.add(knee)
norm_knee = self.x_normalized[threshold_index]
- self.all_norm_knees.add(norm_knee)
- def plot_knee_normalized(self,):
+ # add the y value at the knee
+ y_at_knee = self.y[self.x == knee][0]
+ y_norm_at_knee = self.y_normalized[self.x_normalized == norm_knee][0]
+ if knee not in self.all_knees:
+ self.all_knees_y.append(y_at_knee)
+ self.all_norm_knees_y.append(y_norm_at_knee)
+
+ # now add the knee
+ self.all_knees.add(knee)
+ self.all_norm_knees.add(norm_knee)
+
+ # if detecting in offline mode, return the first knee found
+ if self.online is False:
+ return knee, norm_knee
+
+ if self.all_knees == set():
+ return None, None
+
+ return knee, norm_knee
+
+ def plot_knee_normalized(
+ self,
+ ):
"""
Plots the normalized curve, the distance curve (x_distance, y_normalized) and the
knee, if it exists.
@@ -242,18 +269,22 @@ def plot_knee_normalized(self,):
plt.figure(figsize=(8, 8))
plt.plot(self.x_normalized, self.y_normalized)
- plt.plot(self.x_distance, self.y_distance, "r")
+ plt.plot(self.x_difference, self.y_difference, "r")
plt.xticks(
np.arange(self.x_normalized.min(), self.x_normalized.max() + 0.1, 0.1)
)
- plt.yticks(np.arange(self.y_distance.min(), self.y_normalized.max() + 0.1, 0.1))
+ plt.yticks(
+ np.arange(self.y_difference.min(), self.y_normalized.max() + 0.1, 0.1)
+ )
plt.vlines(self.norm_knee, plt.ylim()[0], plt.ylim()[1])
- def plot_knee(self,):
+ def plot_knee(
+ self,
+ ):
"""
Plot the curve and the knee, if it exists
-
+
"""
import matplotlib.pyplot as plt
@@ -270,6 +301,14 @@ def elbow(self):
def norm_elbow(self):
return self.norm_knee
+ @property
+ def elbow_y(self):
+ return self.knee_y
+
+ @property
+ def norm_elbow_y(self):
+ return self.norm_knee_y
+
@property
def all_elbows(self):
return self.all_knees
@@ -277,3 +316,11 @@ def all_elbows(self):
@property
def all_norm_elbows(self):
return self.all_norm_knees
+
+ @property
+ def all_elbows_y(self):
+ return self.all_knees_y
+
+ @property
+ def all_norm_elbows_y(self):
+ return self.all_norm_knees_y
| diff --git a/tests/test_utils/test_kneed.py b/tests/test_utils/test_kneed.py
--- a/tests/test_utils/test_kneed.py
+++ b/tests/test_utils/test_kneed.py
@@ -40,6 +40,8 @@
with permission by the Yellowbrick contributors.
"""
+import pytest
+import matplotlib.pyplot as plt
import numpy as np
from yellowbrick.utils.kneed import KneeLocator
@@ -132,3 +134,77 @@ def test_convex_decreasing_truncated():
curve_direction="decreasing",
)
assert kn.knee == 0.2
+
+
+def test_x_equals_y():
+ """Test that a runtime warning is raised when no maxima are found"""
+ x = range(10)
+ y = [1] * len(x)
+ with pytest.warns(RuntimeWarning):
+ KneeLocator(x, y)
+
+
[email protected]("online, expected", [(True, 482), (False, 22)])
+def test_gamma_online_offline(online, expected):
+ """Tests online and offline knee detection.
+ Notable that a large number of samples are highly sensitive to S parameter
+ """
+ np.random.seed(23)
+ n = 1000
+ x = range(1, n + 1)
+ y = sorted(np.random.gamma(0.5, 1.0, n), reverse=True)
+ kl = KneeLocator(x, y, curve_nature="convex", curve_direction="decreasing", online=online)
+ assert kl.knee == expected
+
+
+def test_properties():
+ """Tests that elbow and knee can be used interchangeably."""
+ kn = KneeLocator(
+ x, y_concave_inc, curve_nature="concave", curve_direction="increasing"
+ )
+ assert kn.knee == kn.elbow
+ assert kn.norm_knee == kn.norm_elbow
+ # pytest compares all elements in each list.
+ assert kn.all_knees == kn.all_elbows
+ assert kn.all_norm_knees == kn.all_norm_elbows
+
+
+def test_plot_knee_normalized():
+ """Test that plotting is functional"""
+ with np.errstate(divide="ignore"):
+ x = np.linspace(0.0, 1, 10)
+ y = np.true_divide(-1, x + 0.1) + 5
+ kl = KneeLocator(x, y, S=1.0, curve_nature="concave")
+ num_figures_before = plt.gcf().number
+ kl.plot_knee_normalized()
+ num_figures_after = plt.gcf().number
+ assert num_figures_before < num_figures_after
+
+
+def test_plot_knee():
+ """Test that plotting is functional"""
+ with np.errstate(divide="ignore"):
+ x = np.linspace(0.0, 1, 10)
+ y = np.true_divide(-1, x + 0.1) + 5
+ kl = KneeLocator(x, y, S=1.0, curve_nature="concave")
+ num_figures_before = plt.gcf().number
+ kl.plot_knee()
+ num_figures_after = plt.gcf().number
+ assert num_figures_before < num_figures_after
+
+
+def test_y():
+ """Test the y value"""
+ with np.errstate(divide="ignore"):
+ x = np.linspace(0.0, 1, 10)
+ y = np.true_divide(-1, x + 0.1) + 5
+ kl = KneeLocator(x, y, S=1.0, curve_nature="concave")
+ assert kl.knee_y == pytest.approx(1.897, 0.03)
+ assert kl.all_knees_y[0] == pytest.approx(1.897, 0.03)
+ assert kl.norm_knee_y == pytest.approx(0.758, 0.03)
+ assert kl.all_norm_knees_y[0] == pytest.approx(0.758, 0.03)
+
+ assert kl.elbow_y == pytest.approx(1.897, 0.03)
+ assert kl.all_elbows_y[0] == pytest.approx(1.897, 0.03)
+ assert kl.norm_elbow_y == pytest.approx(0.758, 0.03)
+ assert kl.all_norm_elbows_y[0] == pytest.approx(0.758, 0.03)
| Update kneed
The [kneed](https://github.com/arvkevi/kneed) library has been updated a couple of times (with bugfixes!) since it was incorporated into the yellowbrick code. I would like to update the knee finding algorithm to be consistent with the [0.7.0 release](https://github.com/arvkevi/kneed/releases/tag/v0.7.0) of kneed.
Would you welcome a PR to update the code? Thanks!
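For reference, a minimal sketch of exercising the updated detector through Yellowbrick's vendored copy; the constructor signature (including the new `online` flag) and the `knee`/`knee_y` attributes are taken from the patch above, while the data below is synthetic:

```python
import numpy as np
from yellowbrick.utils.kneed import KneeLocator

# Synthetic convex, decreasing curve similar to the gamma example in the new tests.
np.random.seed(23)
x = list(range(1, 101))
y = sorted(np.random.gamma(0.5, 1.0, 100), reverse=True)

kl = KneeLocator(x, y, S=1.0, curve_nature="convex",
                 curve_direction="decreasing", online=False)
print(kl.knee)    # x location of the first detected knee (online=False)
print(kl.knee_y)  # y value at the knee, newly exposed by this update
```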
| 2020-10-16T01:29:40 |
|
DistrictDataLabs/yellowbrick | 1,124 | DistrictDataLabs__yellowbrick-1124 | [
"1122"
] | 1831c29bd83435ff44bf748994b61ae3e2e51aea | diff --git a/yellowbrick/classifier/class_prediction_error.py b/yellowbrick/classifier/class_prediction_error.py
--- a/yellowbrick/classifier/class_prediction_error.py
+++ b/yellowbrick/classifier/class_prediction_error.py
@@ -21,12 +21,17 @@
import numpy as np
from sklearn.utils.multiclass import unique_labels
-from sklearn.metrics._classification import _check_targets
from yellowbrick.draw import bar_stack
from yellowbrick.classifier.base import ClassificationScoreVisualizer
from yellowbrick.exceptions import ModelError, YellowbrickValueError, NotFitted
+try:
+ # See #1124: this allows compatibility for scikit-learn >= 0.20
+ from sklearn.metrics._classification import _check_targets
+except ImportError:
+ from sklearn.metrics.classification import _check_targets
+
##########################################################################
## Class Prediction Error Chart
| Problem with import yellowbrick.model_selection
**Describe the bug**
Importing from `yellowbrick.model_selection` fails with `ModuleNotFoundError: No module named 'sklearn.metrics._classification'` (full traceback below).
**To Reproduce**
```python
from yellowbrick.model_selection import LearningCurve
```
**Dataset**
Did you use a specific dataset to produce the bug? Where can we access it?
**Expected behavior**
A clear and concise description of what you expected to happen.
**Traceback**
```
<ipython-input-35-82e8f0d75030> in <module>
10 from sklearn.tree import DecisionTreeClassifier
11 from sklearn.model_selection import learning_curve
---> 12 from yellowbrick.model_selection import LearningCurve
~\Anaconda3\lib\site-packages\yellowbrick\__init__.py in <module>
37 from .anscombe import anscombe
38 from .datasaurus import datasaurus
---> 39 from .classifier import ROCAUC, ClassBalance, ClassificationScoreVisualizer
40
41 # from .classifier import crplot, rocplot
~\Anaconda3\lib\site-packages\yellowbrick\classifier\__init__.py in <module>
24 from ..base import ScoreVisualizer
25 from .base import ClassificationScoreVisualizer
---> 26 from .class_prediction_error import ClassPredictionError, class_prediction_error
27 from .classification_report import ClassificationReport, classification_report
28 from .confusion_matrix import ConfusionMatrix, confusion_matrix
~\Anaconda3\lib\site-packages\yellowbrick\classifier\class_prediction_error.py in <module>
22
23 from sklearn.utils.multiclass import unique_labels
---> 24 from sklearn.metrics._classification import _check_targets
25
26 from yellowbrick.draw import bar_stack
ModuleNotFoundError: No module named 'sklearn.metrics._classification'
```
**Desktop (please complete the following information):**
- OS: [e.g. macOS]
- Python Version [e.g. 2.7, 3.6, miniconda]
- Yellowbrick Version [e.g. 0.7]
**Additional context**
Add any other context about the problem here.
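For anyone pinned to an older scikit-learn, a hedged compatibility shim; the two import paths below are exactly the ones used in the fix above, and nothing else is implied about the surrounding code:

```python
try:
    # scikit-learn >= 0.22 moved this helper into a private module
    from sklearn.metrics._classification import _check_targets
except ImportError:
    # older scikit-learn releases expose it here instead
    from sklearn.metrics.classification import _check_targets
```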
| Hi @djyerabati - thanks for using Yellowbrick, and I'm sorry you're having trouble with it. Can I ask what version of scikit-learn are you using?
'0.21.3'
@djyerabati it looks like there was a change with scikit-learn 0.22 that forced us to modify our import paths. Please update to scikit-learn 0.22 or later and that should resolve the problem. You can do either:
```
$ pip install -U scikit-learn>=0.22
```
or
```
$ conda install scikit-learn>=0.22
```
We will also update our dependencies as further protection from this issue in the future. | 2020-10-22T11:08:45 |
|
DistrictDataLabs/yellowbrick | 1,151 | DistrictDataLabs__yellowbrick-1151 | [
"1149"
] | 7927d5a2aaa0969ece9fc746c213a26ea0e9c3b0 | diff --git a/yellowbrick/classifier/rocauc.py b/yellowbrick/classifier/rocauc.py
--- a/yellowbrick/classifier/rocauc.py
+++ b/yellowbrick/classifier/rocauc.py
@@ -270,6 +270,14 @@ def score(self, X, y=None):
"no curves will be drawn; ",
"set per_class=True or micro=False and macro=False.",
)
+
+ # For binary, if predictions are returned in shape (n,), micro and macro
+ # curves are not defined
+ if (self.micro or self.macro) and len(y_pred.shape) == 1:
+ raise ModelError(
+ "no curves will be drawn; set binary=True.",
+ )
+
if self.target_type_ == MULTICLASS:
# If it's multiclass classification, at least one of micro, macro, or
# per_class must be True
@@ -359,7 +367,7 @@ def draw(self):
-------
ax : the axis with the plotted figure
"""
- colors = self.class_colors_[0: len(self.classes_)]
+ colors = self.class_colors_[0 : len(self.classes_)]
n_classes = len(colors)
# If it's a binary decision, plot the single ROC curve
| diff --git a/tests/test_classifier/test_rocauc.py b/tests/test_classifier/test_rocauc.py
--- a/tests/test_classifier/test_rocauc.py
+++ b/tests/test_classifier/test_rocauc.py
@@ -46,6 +46,7 @@
## Fixtures
##########################################################################
+
class FakeClassifier(BaseEstimator, ClassifierMixin):
"""
A fake classifier for testing noops on the visualizer.
@@ -129,7 +130,9 @@ def test_binary_probability_decision_single_curve(self):
Test ROCAUC binary classifier with both decision & predict_proba with per_class=False
"""
# Create and fit the visualizer
- visualizer = ROCAUC(AdaBoostClassifier(), micro=False, macro=False, per_class=False)
+ visualizer = ROCAUC(
+ AdaBoostClassifier(), micro=False, macro=False, per_class=False
+ )
visualizer.fit(self.binary.X.train, self.binary.y.train)
# Score the visualizer
@@ -271,9 +274,7 @@ def test_rocauc_no_curves(self):
Test ROCAUC with no curves specified at all
"""
# Create and fit the visualizer
- visualizer = ROCAUC(
- GaussianNB(), per_class=False, macro=False, micro=False
- )
+ visualizer = ROCAUC(GaussianNB(), per_class=False, macro=False, micro=False)
visualizer.fit(self.multiclass.X.train, self.multiclass.y.train)
# Attempt to score the visualizer
@@ -432,6 +433,19 @@ def test_binary_decision_function_rocauc(self):
# Check to see if the first 10 y_scores match the expected
npt.assert_array_almost_equal(y_scores[:10], first_ten_expected, decimal=1)
+ def test_binary_false_decision_function_error(self):
+ """
+ Test binary decision_function model raises error when the binary param is False
+ """
+ # Create and fit the visualizer
+ visualizer = ROCAUC(LinearSVC(random_state=42), binary=False)
+ visualizer.fit(self.binary.X.train, self.binary.y.train)
+
+ # Ensure score raises error
+ # (only binary curve defined for binary decisions with decision_function clf)
+ with pytest.raises(ModelError):
+ visualizer.score(self.binary.X.test, self.binary.y.test)
+
def test_multi_decision_function_rocauc(self):
"""
Test ROCAUC with multiclass classifiers that have a decision function
| ROCAUC.score() raises ValueError: Found input variables with inconsistent numbers of samples
**Describe the bug**
Fitting a LinearSVC model on a binary class dataset and calling the ROCAUC score() function causes an unexpected error to be raised from sklearn. If the binary=True parameter is passed into ROCAUC, the error goes away.
```ValueError: Found input variables with inconsistent numbers of samples: [4, 2]```
**To Reproduce**
```python
import numpy as np
from sklearn.svm import LinearSVC
from yellowbrick.classifier import ROCAUC
X = np.array([1, 2]).reshape(-1, 1)
y = np.array([0, 1])
model = LinearSVC()
visualizer = ROCAUC(model)
visualizer.fit(X, y)
visualizer.score(X, y)
```
**Expected behavior**
Yellowbrick should raise a more meaningful error if the binary=True parameter is required to score ROCAUC on binary class datasets.
**Traceback**
```
Traceback (most recent call last):
File "visualizer.py", line 9, in <module>
visualizer.score(X, y)
File "C:\Users\DezielPa\conda\yellowbrick\yellowbrick\classifier\rocauc.py", line 330, in score
self._score_micro_average(y, y_pred, classes, n_classes)
File "C:\Users\DezielPa\conda\yellowbrick\yellowbrick\classifier\rocauc.py", line 486, in _score_micro_average
self.fpr[MICRO], self.tpr[MICRO], _ = roc_curve(y.ravel(), y_pred.ravel())
File "C:\Users\DezielPa\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 73, in inner_f
return f(**kwargs)
File "C:\Users\DezielPa\Anaconda3\lib\site-packages\sklearn\metrics\_ranking.py", line 776, in roc_curve
y_true, y_score, pos_label=pos_label, sample_weight=sample_weight)
File "C:\Users\DezielPa\Anaconda3\lib\site-packages\sklearn\metrics\_ranking.py", line 541, in _binary_clf_curve
check_consistent_length(y_true, y_score, sample_weight)
File "C:\Users\DezielPa\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 257, in check_consistent_length
" samples: %r" % [int(l) for l in lengths])
ValueError: Found input variables with inconsistent numbers of samples: [4, 2]
```
**Desktop (please complete the following information):**
- OS: Microsoft Windows 10 Enterprise
- Python 3.7.6
- Yellowbrick Version 1.2.1
**Additional context**
sklearn version 0.23.0
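As the report notes, passing `binary=True` avoids the failure; a minimal sketch of that workaround using the same toy data as the reproduction above (illustrative only):

```python
# With binary=True, ROCAUC draws the single binary curve and skips the
# micro/macro averages that cannot be computed from decision_function output.
visualizer = ROCAUC(LinearSVC(), binary=True)
visualizer.fit(X, y)
visualizer.score(X, y)
```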
| Thank you @pdeziel for noting this bug (and for your recent PR #1148!).
My guess is we should take another look at the check we're doing [here](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/classifier/rocauc.py#L262-L268) since we seem to be missing an edge case. Maybe we can take a look together on Wednesday and open a PR? | 2021-02-11T21:44:07 |
DistrictDataLabs/yellowbrick | 1,154 | DistrictDataLabs__yellowbrick-1154 | [
"1153"
] | ac585eaa0785988705f01616b0ef8d75865fa2df | diff --git a/yellowbrick/base.py b/yellowbrick/base.py
--- a/yellowbrick/base.py
+++ b/yellowbrick/base.py
@@ -343,7 +343,6 @@ def get_params(self, deep=True):
for param in list(params.keys()):
if param.startswith("estimator__"):
params[param[len("estimator__"):]] = params.pop(param)
- print(params.keys(), "\n\n")
return params
def set_params(self, **params):
| A dangling print statement in the new 1.3 release while plotting LearningCurve & ValidationCurve.
**Describe the bug**
A dangling print statement in the new 1.3 release while plotting LearningCurve & ValidationCurve.
**To Reproduce**
Any snippet like the following
```python
viz = validation_curve(
DecisionTreeClassifier(), X, y, param_name="max_depth",
param_range=np.arange(2, 50, 2), cv=10, scoring="f1_macro"
)
```
**Dataset**
any
**Expected behavior**
These `dict_keys(...)` lines should not be printed while fitting; they come from a leftover debugging `print(params.keys())` call in `Visualizer.get_params`.
**Traceback**
```
If applicable, add the traceback from the exception.
```
**Desktop (please complete the following information):**
- macOS
- python 3.7
- Yellowbrick Version 1.3
**Additional context**
Add any other context about the problem here.

| 2021-02-13T19:53:45 |
||
DistrictDataLabs/yellowbrick | 1,162 | DistrictDataLabs__yellowbrick-1162 | [
"1132"
] | 7927d5a2aaa0969ece9fc746c213a26ea0e9c3b0 | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -163,9 +163,7 @@ def get_description_type(path=PKG_DESCRIBE):
"zip_safe": False,
"entry_points": {"console_scripts": []},
"install_requires": list(get_requires()),
- "python_requires": ">=3.4, <4",
- "setup_requires": ["pytest-runner"],
- "tests_require": ["pytest"],
+ "python_requires": ">=3.4, <4"
}
| pytest-runner is deprecated
pytest-runner is deprecated: https://github.com/pytest-dev/pytest-runner/#deprecation-notice
If I find time, then I can make a PR, but I thought I'd let you know in the meantime.
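A hedged sketch of the direction such a PR would take, dropping the deprecated keys from the `config` dict passed to `setup()` in Yellowbrick's `setup.py` (key names mirror the patch above; tests are then run directly with `pytest` rather than `python setup.py test`):

```python
config = {
    # ... other setup() metadata unchanged ...
    "python_requires": ">=3.4, <4",
    # removed: "setup_requires": ["pytest-runner"] and "tests_require": ["pytest"],
    # contributors invoke the test suite directly with `pytest` instead.
}
```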
| @jamesmyatt thanks for the note! If you can get a PR together, that would be great!
@bbengfort happy to pick this one up when CI is fixed. | 2021-02-25T18:54:33 |