repo_name (stringlengths 6-112) | path (stringlengths 4-204) | copies (stringlengths 1-3) | size (stringlengths 4-6) | content (stringlengths 714-810k) | license (stringclasses, 15 values)
---|---|---|---|---|---
seckcoder/lang-learn | python/sklearn/examples/linear_model/plot_multi_task_lasso_support.py | 4 | 2176 | #!/usr/bin/env python
"""
=============================================
Joint feature selection with multi-task Lasso
=============================================
The multi-task lasso makes it possible to fit multiple regression problems
jointly while enforcing the selected features to be the same across
tasks. This example simulates sequential measurements: each task
is a time instant, and the relevant features vary in amplitude
over time while staying the same. The multi-task lasso imposes that
features selected at one time point are selected for all time
points. This makes feature selection by the Lasso more stable.
"""
print(__doc__)
# Author: Alexandre Gramfort <[email protected]>
# License: BSD Style.
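# For reference, a sketch of the objective that MultiTaskLasso minimizes, as
# documented by scikit-learn (notation hedged, since it varies between versions):
#
#     (1 / (2 * n_samples)) * ||Y - X W||_Fro^2  +  alpha * sum_over_features ||w_feature||_2
#
# where ||w_feature||_2 is the l2 norm of one feature's coefficients across all
# tasks.  This mixed l2/l1 penalty zeroes out whole features at once, which is
# why the selected support is shared across tasks in the plots below.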
import pylab as pl
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Lasso
rng = np.random.RandomState(42)
# Generate some 2D coefficients with sine waves with random frequency and phase
n_samples, n_features, n_tasks = 100, 30, 40
n_relevant_features = 5
coef = np.zeros((n_tasks, n_features))
times = np.linspace(0, 2 * np.pi, n_tasks)
for k in range(n_relevant_features):
coef[:, k] = np.sin((1. + rng.randn(1)) * times + 3 * rng.randn(1))
X = rng.randn(n_samples, n_features)
Y = np.dot(X, coef.T) + rng.randn(n_samples, n_tasks)
coef_lasso_ = np.array([Lasso(alpha=0.5).fit(X, y).coef_ for y in Y.T])
coef_multi_task_lasso_ = MultiTaskLasso(alpha=1.).fit(X, Y).coef_
###############################################################################
# Plot support and time series
fig = pl.figure(figsize=(8, 5))
pl.subplot(1, 2, 1)
pl.spy(coef_lasso_)
pl.xlabel('Feature')
pl.ylabel('Time (or Task)')
pl.text(10, 5, 'Lasso')
pl.subplot(1, 2, 2)
pl.spy(coef_multi_task_lasso_)
pl.xlabel('Feature')
pl.ylabel('Time (or Task)')
pl.text(10, 5, 'MultiTaskLasso')
fig.suptitle('Coefficient non-zero location')
feature_to_plot = 0
pl.figure()
pl.plot(coef[:, feature_to_plot], 'k', label='Ground truth')
pl.plot(coef_lasso_[:, feature_to_plot], 'g', label='Lasso')
pl.plot(coef_multi_task_lasso_[:, feature_to_plot],
'r', label='MultiTaskLasso')
pl.legend(loc='upper center')
pl.axis('tight')
pl.ylim([-1.1, 1.1])
pl.show()
| unlicense |
cngo-github/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/pylab.py | 70 | 10245 | """
This is a procedural interface to the matplotlib object-oriented
plotting library.
The following plotting commands are provided; the majority have
Matlab(TM) analogs and similar arguments.
_Plotting commands
acorr - plot the autocorrelation function
annotate - annotate something in the figure
arrow - add an arrow to the axes
axes - Create a new axes
axhline - draw a horizontal line across axes
axvline - draw a vertical line across axes
axhspan - draw a horizontal bar across axes
axvspan - draw a vertical bar across axes
axis - Set or return the current axis limits
bar - make a bar chart
barh - a horizontal bar chart
broken_barh - a set of horizontal bars with gaps
box - set the axes frame on/off state
boxplot - make a box and whisker plot
cla - clear current axes
clabel - label a contour plot
clf - clear a figure window
clim - adjust the color limits of the current image
close - close a figure window
colorbar - add a colorbar to the current figure
cohere - make a plot of coherence
contour - make a contour plot
contourf - make a filled contour plot
csd - make a plot of cross spectral density
delaxes - delete an axes from the current figure
draw - Force a redraw of the current figure
errorbar - make an errorbar graph
figlegend - make legend on the figure rather than the axes
figimage - make a figure image
figtext - add text in figure coords
figure - create or change active figure
fill - make filled polygons
findobj - recursively find all objects matching some criteria
gca - return the current axes
gcf - return the current figure
gci - get the current image, or None
getp - get a handle graphics property
grid - set whether gridding is on
hist - make a histogram
hold - set the axes hold state
ioff - turn interaction mode off
ion - turn interaction mode on
isinteractive - return True if interaction mode is on
imread - load image file into array
imshow - plot image data
ishold - return the hold state of the current axes
legend - make an axes legend
loglog - a log log plot
matshow - display a matrix in a new figure preserving aspect
pcolor - make a pseudocolor plot
pcolormesh - make a pseudocolor plot using a quadrilateral mesh
pie - make a pie chart
plot - make a line plot
plot_date - plot dates
plotfile - plot column data from an ASCII tab/space/comma delimited file
pie - pie charts
polar - make a polar plot on a PolarAxes
psd - make a plot of power spectral density
quiver - make a direction field (arrows) plot
rc - control the default params
rgrids - customize the radial grids and labels for polar
savefig - save the current figure
scatter - make a scatter plot
setp - set a handle graphics property
semilogx - log x axis
semilogy - log y axis
show - show the figures
specgram - a spectrogram plot
spy - plot sparsity pattern using markers or image
stem - make a stem plot
subplot - make a subplot (numrows, numcols, axesnum)
subplots_adjust - change the params controlling the subplot positions of current figure
subplot_tool - launch the subplot configuration tool
suptitle - add a figure title
table - add a table to the plot
text - add some text at location x,y to the current axes
thetagrids - customize the radial theta grids and labels for polar
title - add a title to the current axes
xcorr - plot the autocorrelation function of x and y
xlim - set/get the xlimits
ylim - set/get the ylimits
xticks - set/get the xticks
yticks - set/get the yticks
xlabel - add an xlabel to the current axes
ylabel - add a ylabel to the current axes
autumn - set the default colormap to autumn
bone - set the default colormap to bone
cool - set the default colormap to cool
copper - set the default colormap to copper
flag - set the default colormap to flag
gray - set the default colormap to gray
hot - set the default colormap to hot
hsv - set the default colormap to hsv
jet - set the default colormap to jet
pink - set the default colormap to pink
prism - set the default colormap to prism
spring - set the default colormap to spring
summer - set the default colormap to summer
winter - set the default colormap to winter
spectral - set the default colormap to spectral
_Event handling
connect - register an event handler
disconnect - remove a connected event handler
_Matrix commands
cumprod - the cumulative product along a dimension
cumsum - the cumulative sum along a dimension
detrend - remove the mean or best fit line from an array
diag - the k-th diagonal of matrix
diff - the n-th difference of an array
eig - the eigenvalues and eigen vectors of v
eye - a matrix where the k-th diagonal is ones, else zero
find - return the indices where a condition is nonzero
fliplr - flip the columns of a matrix left/right
flipud - flip the rows of a matrix up/down
linspace - a linear spaced vector of N values from min to max inclusive
logspace - a log spaced vector of N values from min to max inclusive
meshgrid - repeat x and y to make regular matrices
ones - an array of ones
rand - an array from the uniform distribution [0,1]
randn - an array from the normal distribution
rot90 - rotate matrix k*90 degrees counterclockwise
squeeze - squeeze an array removing any dimensions of length 1
tri - a triangular matrix
tril - a lower triangular matrix
triu - an upper triangular matrix
vander - the Vandermonde matrix of vector x
svd - singular value decomposition
zeros - a matrix of zeros
_Probability
levypdf - The levy probability density function from the char. func.
normpdf - The Gaussian probability density function
rand - random numbers from the uniform distribution
randn - random numbers from the normal distribution
_Statistics
corrcoef - correlation coefficient
cov - covariance matrix
amax - the maximum along dimension m
mean - the mean along dimension m
median - the median along dimension m
amin - the minimum along dimension m
norm - the norm of vector x
prod - the product along dimension m
ptp - the max-min along dimension m
std - the standard deviation along dimension m
asum - the sum along dimension m
_Time series analysis
bartlett - M-point Bartlett window
blackman - M-point Blackman window
cohere - the coherence using average periodogram
csd - the cross spectral density using average periodogram
fft - the fast Fourier transform of vector x
hamming - M-point Hamming window
hanning - M-point Hanning window
hist - compute the histogram of x
kaiser - M length Kaiser window
psd - the power spectral density using average periodogram
sinc - the sinc function of array x
_Dates
date2num - convert python datetimes to numeric representation
drange - create an array of numbers for date plots
num2date - convert numeric type (float days since 0001) to datetime
_Other
angle - the angle of a complex array
griddata - interpolate irregularly distributed data to a regular grid
load - load ASCII data into array
polyfit - fit x, y to an n-th order polynomial
polyval - evaluate an n-th order polynomial
roots - the roots of the polynomial coefficients in p
save - save an array to an ASCII file
trapz - trapezoidal integration
__end
"""
import sys, warnings
from cbook import flatten, is_string_like, exception_to_str, popd, \
silent_list, iterable, dedent
import numpy as np
from numpy import ma
from matplotlib import mpl # pulls in most modules
from matplotlib.dates import date2num, num2date,\
datestr2num, strpdate2num, drange,\
epoch2num, num2epoch, mx2num,\
DateFormatter, IndexDateFormatter, DateLocator,\
RRuleLocator, YearLocator, MonthLocator, WeekdayLocator,\
DayLocator, HourLocator, MinuteLocator, SecondLocator,\
rrule, MO, TU, WE, TH, FR, SA, SU, YEARLY, MONTHLY,\
WEEKLY, DAILY, HOURLY, MINUTELY, SECONDLY, relativedelta
import matplotlib.dates
# bring all the symbols in so folks can import them from
# pylab in one fell swoop
from matplotlib.mlab import window_hanning, window_none,\
conv, detrend, detrend_mean, detrend_none, detrend_linear,\
polyfit, polyval, entropy, normpdf, griddata,\
levypdf, find, trapz, prepca, rem, norm, orth, rank,\
sqrtm, prctile, center_matrix, rk4, exp_safe, amap,\
sum_flat, mean_flat, rms_flat, l1norm, l2norm, norm, frange,\
diagonal_matrix, base_repr, binary_repr, log2, ispower2,\
bivariate_normal, load, save
from matplotlib.mlab import stineman_interp, slopes, \
stineman_interp, inside_poly, poly_below, poly_between, \
is_closed_polygon, path_length, distances_along_curve, vector_lengths
from numpy import *
from numpy.fft import *
from numpy.random import *
from numpy.linalg import *
from matplotlib.mlab import window_hanning, window_none, conv, detrend, demean, \
detrend_mean, detrend_none, detrend_linear, entropy, normpdf, levypdf, \
find, longest_contiguous_ones, longest_ones, prepca, prctile, prctile_rank, \
center_matrix, rk4, bivariate_normal, get_xyz_where, get_sparse_matrix, dist, \
dist_point_to_segment, segments_intersect, fftsurr, liaupunov, movavg, \
save, load, exp_safe, \
amap, rms_flat, l1norm, l2norm, norm_flat, frange, diagonal_matrix, identity, \
base_repr, binary_repr, log2, ispower2, fromfunction_kw, rem, norm, orth, rank, sqrtm,\
mfuncC, approx_real, rec_append_field, rec_drop_fields, rec_join, csv2rec, rec2csv, isvector
from matplotlib.pyplot import *
# provide the recommended module abbrevs in the pylab namespace
import matplotlib.pyplot as plt
import numpy as np
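# A minimal usage sketch of the procedural interface listed in the module
# docstring (plot, xlabel, ylabel, title, legend, show).  The curve plotted is
# purely illustrative, and the block is guarded so it never runs on import.
if __name__ == "__main__":
    _x = np.linspace(0, 2 * np.pi, 200)
    plot(_x, np.sin(_x), label="sin(x)")
    xlabel("x")
    ylabel("sin(x)")
    title("pylab procedural interface demo")
    legend()
    show()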
| agpl-3.0 |
MAndelkovic/pybinding | docs/advanced/kwant_example.py | 1 | 5660 | #! /usr/bin/env python3
"""Transport through a barrier
The `main()` function builds identical models in pybinding and kwant and then calculates
the transmission using `kwant.smatrix`. The results are plotted to verify that they are
identical.
The `measure_and_plot()` function compares transport calculation time for various system
sizes. Modify the `__name__ == '__main__'` section at the bottom to run this benchmark.
"""
import math
import numpy as np
import matplotlib.pyplot as plt
import kwant
import pybinding as pb
from pybinding.repository import graphene
pb.pltutils.use_style()
pb.pltutils.set_palette("Set1", start=3)
def measure_pybinding(width, length, electron_energy, barrier_heights, plot=False):
def potential_barrier(v0):
@pb.onsite_energy_modifier(is_double=True)
def function(energy, x):
energy[np.logical_and(-length / 4 <= x, x <= length / 4)] = v0
return energy
return function
def make_model(v0=0):
model = pb.Model(
graphene.monolayer().with_min_neighbors(1),
pb.rectangle(length, width),
potential_barrier(v0),
)
model.attach_lead(-1, pb.line([-length/2, -width/2], [-length/2, width/2]))
model.attach_lead(+1, pb.line([ length/2, -width/2], [ length/2, width/2]))
return model
if plot:
make_model().plot()
plt.show()
transmission = []
for v in barrier_heights:
smatrix = kwant.smatrix(make_model(v).tokwant(), energy=electron_energy)
transmission.append(smatrix.transmission(1, 0))
return transmission
def measure_kwant(width, length, electron_energy, barrier_heights, plot=False):
def make_system():
t = 2.8
a_cc = 0.142
a = a_cc * math.sqrt(3)
graphene_lattice = kwant.lattice.general([(a, 0), (a / 2, a / 2 * math.sqrt(3))],
[(0, -a_cc / 2), (0, a_cc / 2)])
def shape(pos):
x, y = pos
return -length / 2 <= x <= length / 2 and -width / 2 <= y <= width / 2
def onsite(site, v0):
x, _ = site.pos
return v0 if -length / 4 <= x <= length / 4 else 0
builder = kwant.Builder()
builder[graphene_lattice.shape(shape, (0, 0))] = onsite
builder[graphene_lattice.neighbors()] = -t
def lead_shape(pos):
x, y = pos
return -width / 2 <= y <= width / 2
lead = kwant.Builder(kwant.TranslationalSymmetry(graphene_lattice.vec((-1, 0))))
lead[graphene_lattice.shape(lead_shape, (0, 0))] = 0
lead[graphene_lattice.neighbors()] = -t
builder.attach_lead(lead)
builder.attach_lead(lead.reversed())
return builder.finalized()
system = make_system()
if plot:
kwant.plot(system)
transmission = []
for v in barrier_heights:
smatrix = kwant.smatrix(system, energy=electron_energy, args=[v])
transmission.append(smatrix.transmission(1, 0))
return transmission
def main():
"""Build the same model using pybinding and kwant and verify that the results are identical"""
width, length = 15, 15
electron_energy = 0.25
barrier_heights = np.linspace(0, 0.5, 100)
with pb.utils.timed("pybinding:"):
pb_transmission = measure_pybinding(width, length, electron_energy, barrier_heights)
with pb.utils.timed("kwant:"):
kwant_transmission = measure_kwant(width, length, electron_energy, barrier_heights)
plt.plot(barrier_heights, pb_transmission, lw=1, label="pybinding")
plt.plot(barrier_heights, kwant_transmission, lw=2.5, ls="--", label="kwant")
plt.ylabel("transmission")
plt.xlabel("barrier height (eV)")
plt.axvline(electron_energy, 0, 0.5, color="gray", ls=":")
plt.annotate("electron energy\n{} eV".format(electron_energy), (electron_energy, 0.52),
xycoords=("data", "axes fraction"), ha="center")
pb.pltutils.despine()
pb.pltutils.legend()
plt.show()
def plot_time(sizes, times, label):
plt.plot(sizes, times, label=label, marker='o', markersize=5, lw=2, zorder=10)
plt.grid(True, which='major', color='gray', ls=':', alpha=0.5)
plt.title("transmission calculation time")
plt.xlabel("system size (nm)")
plt.ylabel("compute time (seconds)")
plt.xlim(0.8 * min(sizes), 1.05 * max(sizes))
pb.pltutils.despine()
pb.pltutils.legend(loc='upper left', reverse=True)
def measure_and_plot(sizes):
"""Measure transport calculation time
The list of `sizes` specifies the dimensions of the scattering region in nanometers.
"""
electron_energy = 0.25
barrier_heights = np.linspace(0, 0.5, 100)
print("pybinding:")
pb_times = []
for size in sizes:
with pb.utils.timed() as time:
measure_pybinding(size, size, electron_energy, barrier_heights)
print(" {:7} <-> size = {} nm".format(str(time), size))
pb_times.append(time.elapsed)
print("\nkwant:")
kwant_times = []
for size in sizes:
with pb.utils.timed() as time:
measure_kwant(size, size, electron_energy, barrier_heights)
print(" {:7} <-> size = {} nm".format(str(time), size))
kwant_times.append(time.elapsed)
plt.figure(figsize=(3, 2.4))
plot_time(sizes, pb_times, label="pybinding")
plot_time(sizes, kwant_times, label="kwant")
filename = "kwant_example_results.png"
plt.savefig(filename)
print("\nDone! Results saved to file: {}".format(filename))
if __name__ == '__main__':
# measure_and_plot(sizes=[5, 10, 15, 20, 25, 30])
main()
| bsd-2-clause |
lukebarnard1/bokeh | bokeh/charts/builder/dot_builder.py | 43 | 6160 | """This is the Bokeh charts interface. It gives you a high level API to build
complex plots in a simple way.
This is the Dot class, which lets you build your Dot charts by just
passing the arguments to the Chart class and calling the proper functions.
"""
#-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2014, Continuum Analytics, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from __future__ import absolute_import
import numpy as np
try:
import pandas as pd
except ImportError:
pd = None
from ..utils import chunk, cycle_colors, make_scatter
from .._builder import Builder, create_and_build
from ...models import ColumnDataSource, FactorRange, GlyphRenderer, Range1d
from ...models.glyphs import Segment
from ...properties import Any, Bool, Either, List
def Dot(values, cat=None, stem=True, xscale="categorical", yscale="linear",
xgrid=False, ygrid=True, **kws):
""" Create a dot chart using :class:`DotBuilder <bokeh.charts.builder.dot_builder.DotBuilder>`
to render the geometry from values and cat.
Args:
values (iterable): iterable 2d representing the data series
values matrix.
cat (list or bool, optional): list of string representing the categories.
Defaults to None.
In addition to the parameters specific to this chart,
:ref:`userguide_charts_generic_arguments` are also accepted as keyword parameters.
Returns:
a new :class:`Chart <bokeh.charts.Chart>`
Examples:
.. bokeh-plot::
:source-position: above
from collections import OrderedDict
from bokeh.charts import Dot, output_file, show
# dict, OrderedDict, lists, arrays and DataFrames are valid inputs
xyvalues = OrderedDict()
xyvalues['python']=[2, 5]
xyvalues['pypy']=[12, 40]
xyvalues['jython']=[22, 30]
dot = Dot(xyvalues, ['cpu1', 'cpu2'], title='dots')
output_file('dot.html')
show(dot)
"""
return create_and_build(
DotBuilder, values, cat=cat, stem=stem, xscale=xscale, yscale=yscale,
xgrid=xgrid, ygrid=ygrid, **kws
)
#-----------------------------------------------------------------------------
# Classes and functions
#-----------------------------------------------------------------------------
class DotBuilder(Builder):
"""This is the Dot class and it is in charge of plotting Dot charts
in an easy and intuitive way.
Essentially, it provides a way to ingest the data, make the proper
calculations and push the references into a source object.
We additionally make calculations for the ranges.
And finally add the needed glyphs (segments and circles) taking
the references from the source.
"""
cat = Either(Bool, List(Any), help="""
List of string representing the categories. (Defaults to None.)
""")
stem = Bool(True, help="""
Whether to draw a stem from each dot to the axis.
""")
def _process_data(self):
"""Take the Dot data from the input **value.
It calculates the chart properties accordingly. Then build a dict
containing references to all the calculated points to be used by
the rect glyph inside the ``_yield_renderers`` method.
"""
if not self.cat:
self.cat = [str(x) for x in self._values.index]
self._data = dict(cat=self.cat, zero=np.zeros(len(self.cat)))
# list to save all the attributes we are going to create
# list to save all the groups available in the incoming input
# Grouping
self._groups.extend(self._values.keys())
step = np.linspace(0, 1.0, len(self._values.keys()) + 1, endpoint=False)
for i, (val, values) in enumerate(self._values.items()):
# original y value
self.set_and_get("", val, values)
# x value
cats = [c + ":" + str(step[i + 1]) for c in self.cat]
self.set_and_get("cat", val, cats)
# zeros
self.set_and_get("z_", val, np.zeros(len(values)))
# segment top y value
self.set_and_get("seg_top_", val, values)
def _set_sources(self):
"""Push the Dot data into the ColumnDataSource and calculate
the proper ranges.
"""
self._source = ColumnDataSource(self._data)
self.x_range = FactorRange(factors=self._source.data["cat"])
cat = [i for i in self._attr if not i.startswith(("cat",))]
end = 1.1 * max(max(self._data[i]) for i in cat)
self.y_range = Range1d(start=0, end=end)
def _yield_renderers(self):
"""Use circle (and segment) glyphs to display the dots.
Takes reference points from data loaded at the source and
renders circle glyphs (and segments) on the related
coordinates.
"""
self._tuples = list(chunk(self._attr, 4))
colors = cycle_colors(self._tuples, self.palette)
# quartet elements are: [data, cat, zeros, segment_top]
for i, quartet in enumerate(self._tuples):
# draw the segment first so that the scatter glyph is placed on top of it
# and the segment does not show through the circle
if self.stem:
glyph = Segment(
x0=quartet[1], y0=quartet[2], x1=quartet[1], y1=quartet[3],
line_color="black", line_width=2
)
yield GlyphRenderer(data_source=self._source, glyph=glyph)
renderer = make_scatter(
self._source, quartet[1], quartet[0], 'circle',
colors[i - 1], line_color='black', size=15, fill_alpha=1.,
)
self._legends.append((self._groups[i], [renderer]))
yield renderer
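# The Dot() docstring above notes that DataFrames (as well as dicts, lists and
# arrays) are valid inputs.  A minimal sketch of the DataFrame case, mirroring
# the OrderedDict example; it only runs when this module is executed directly
# and assumes pandas is installed (``pd`` may be None, see the guarded import
# at the top of this module).
if __name__ == "__main__":
    from bokeh.charts import output_file, show

    if pd is not None:
        xyvalues = pd.DataFrame({'python': [2, 5], 'pypy': [12, 40],
                                 'jython': [22, 30]})
        dot = Dot(xyvalues, cat=['cpu1', 'cpu2'], title="dots from a DataFrame")
        output_file("dot_dataframe.html")
        show(dot)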
| bsd-3-clause |
jniediek/mne-python | tutorials/plot_modifying_data_inplace.py | 4 | 2936 | """
.. _tut_modifying_data_inplace:
Modifying data in-place
=======================
"""
from __future__ import print_function
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
###############################################################################
# It is often necessary to modify data once you have loaded it into memory.
# Common examples of this are signal processing, feature extraction, and data
# cleaning. Some functionality is pre-built into MNE-python, though it is also
# possible to apply an arbitrary function to the data.
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True, add_eeg_ref=False)
raw = raw.crop(0, 10)
print(raw)
###############################################################################
# Signal processing
# -----------------
#
# Most MNE objects have in-built methods for filtering:
filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]
f, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))
_ = ax.plot(raw._data[0])
for fband in filt_bands:
raw_filt = raw.copy()
raw_filt.filter(*fband, h_trans_bandwidth='auto', l_trans_bandwidth='auto',
filter_length='auto', phase='zero')
_ = ax2.plot(raw_filt[0][0][0])
ax2.legend(filt_bands)
ax.set_title('Raw data')
ax2.set_title('Band-pass filtered data')
###############################################################################
# In addition, there are functions for applying the Hilbert transform, which is
# useful to calculate phase / amplitude of your signal.
# Filter signal with a fairly steep filter, then take hilbert transform
raw_band = raw.copy()
raw_band.filter(12, 18, l_trans_bandwidth=2., h_trans_bandwidth=2.,
filter_length='auto', phase='zero')
raw_hilb = raw_band.copy()
hilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)
raw_hilb.apply_hilbert(hilb_picks)
print(raw_hilb._data.dtype)
###############################################################################
# Finally, it is possible to apply arbitrary functions to your data to do
# what you want. Here we will use this to take the amplitude and phase of
# the hilbert transformed data.
#
# .. note:: You can also use ``amplitude=True`` in the call to
# :meth:`mne.io.Raw.apply_hilbert` to do this automatically.
#
# Take the amplitude and phase
raw_amp = raw_hilb.copy()
raw_amp.apply_function(np.abs, hilb_picks, float, 1)
raw_phase = raw_hilb.copy()
raw_phase.apply_function(np.angle, hilb_picks, float, 1)
f, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))
a1.plot(raw_band._data[hilb_picks[0]])
a1.plot(raw_amp._data[hilb_picks[0]])
a2.plot(raw_phase._data[hilb_picks[0]])
a1.set_title('Amplitude of frequency band')
a2.set_title('Phase of frequency band')
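# apply_function is not limited to numpy ufuncs; any callable mapping an array to
# an array of the same shape works.  A minimal sketch, reusing the picks and the
# call pattern from above, that z-scores each selected channel in place:
def zscore(x):
    return (x - np.mean(x)) / np.std(x)

raw_zscored = raw_band.copy()
raw_zscored.apply_function(zscore, hilb_picks, float, 1)
print(raw_zscored._data[hilb_picks[0]].std())  # ~1.0 after z-scoring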
| bsd-3-clause |
0asa/scikit-learn | examples/cluster/plot_ward_structured_vs_unstructured.py | 29 | 3349 | """
===========================================================
Hierarchical clustering: structured vs unstructured ward
===========================================================
This example builds a swiss roll dataset and runs
hierarchical clustering on the points' positions.
For more information, see :ref:`hierarchical_clustering`.
In a first step, the hierarchical clustering is performed without connectivity
constraints on the structure and is solely based on distance, whereas in
a second step the clustering is restricted to the k-Nearest Neighbors
graph: it's a hierarchical clustering with a structure prior.
Some of the clusters learned without connectivity constraints do not
respect the structure of the swiss roll and extend across different folds of
the manifold. By contrast, when connectivity constraints are imposed,
the clusters form a nice parcellation of the swiss roll.
"""
# Authors : Vincent Michel, 2010
# Alexandre Gramfort, 2010
# Gael Varoquaux, 2010
# License: BSD 3 clause
print(__doc__)
import time as time
import numpy as np
import matplotlib.pyplot as plt
import mpl_toolkits.mplot3d.axes3d as p3
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets.samples_generator import make_swiss_roll
###############################################################################
# Generate data (swiss roll dataset)
n_samples = 1500
noise = 0.05
X, _ = make_swiss_roll(n_samples, noise)
# Make it thinner
X[:, 1] *= .5
###############################################################################
# Compute clustering
print("Compute unstructured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
###############################################################################
# Plot result
fig = plt.figure()
ax = p3.Axes3D(fig)
ax.view_init(7, -80)
for l in np.unique(label):
ax.plot3D(X[label == l, 0], X[label == l, 1], X[label == l, 2],
'o', color=plt.cm.jet(np.float(l) / np.max(label + 1)))
plt.title('Without connectivity constraints (time %.2fs)' % elapsed_time)
###############################################################################
# Define the structure A of the data. Here a 10 nearest neighbors
from sklearn.neighbors import kneighbors_graph
connectivity = kneighbors_graph(X, n_neighbors=10)
###############################################################################
# Compute clustering
print("Compute structured hierarchical clustering...")
st = time.time()
ward = AgglomerativeClustering(n_clusters=6, connectivity=connectivity,
linkage='ward').fit(X)
elapsed_time = time.time() - st
label = ward.labels_
print("Elapsed time: %.2fs" % elapsed_time)
print("Number of points: %i" % label.size)
###############################################################################
# Plot result
fig = plt.figure()
ax = p3.Axes3D(fig)
ax.view_init(7, -80)
for l in np.unique(label):
ax.plot3D(X[label == l, 0], X[label == l, 1], X[label == l, 2],
'o', color=plt.cm.jet(float(l) / np.max(label + 1)))
plt.title('With connectivity constraints (time %.2fs)' % elapsed_time)
plt.show()
| bsd-3-clause |
OshynSong/scikit-learn | examples/ensemble/plot_forest_importances.py | 241 | 1761 | """
=========================================
Feature importances with forests of trees
=========================================
This example shows the use of forests of trees to evaluate the importance of
features on an artificial classification task. The red bars are the feature
importances of the forest, along with their inter-trees variability.
As expected, the plot suggests that 3 features are informative, while the
remaining are not.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
# Build a classification task using 3 informative features
X, y = make_classification(n_samples=1000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
n_classes=2,
random_state=0,
shuffle=False)
# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(10):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(10), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(10), indices)
plt.xlim([-1, 10])
plt.show()
| bsd-3-clause |
lbishal/scikit-learn | sklearn/tests/test_calibration.py | 62 | 12288 | # Authors: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from sklearn.utils.testing import (assert_array_almost_equal, assert_equal,
assert_greater, assert_almost_equal,
assert_greater_equal,
assert_array_equal,
assert_raises,
ignore_warnings,
assert_warns_message)
from sklearn.datasets import make_classification, make_blobs
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import LinearSVC
from sklearn.linear_model import Ridge
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
from sklearn.metrics import brier_score_loss, log_loss
from sklearn.calibration import CalibratedClassifierCV
from sklearn.calibration import _sigmoid_calibration, _SigmoidCalibration
from sklearn.calibration import calibration_curve
@ignore_warnings
def test_calibration():
"""Test calibration objects with isotonic and sigmoid"""
n_samples = 100
X, y = make_classification(n_samples=2 * n_samples, n_features=6,
random_state=42)
sample_weight = np.random.RandomState(seed=42).uniform(size=y.size)
X -= X.min() # MultinomialNB only allows positive X
# split train and test
X_train, y_train, sw_train = \
X[:n_samples], y[:n_samples], sample_weight[:n_samples]
X_test, y_test = X[n_samples:], y[n_samples:]
# Naive-Bayes
clf = MultinomialNB().fit(X_train, y_train, sample_weight=sw_train)
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
pc_clf = CalibratedClassifierCV(clf, cv=y.size + 1)
assert_raises(ValueError, pc_clf.fit, X, y)
# Naive Bayes with calibration
for this_X_train, this_X_test in [(X_train, X_test),
(sparse.csr_matrix(X_train),
sparse.csr_matrix(X_test))]:
for method in ['isotonic', 'sigmoid']:
pc_clf = CalibratedClassifierCV(clf, method=method, cv=2)
# Note that this fit overwrites the fit on the entire training
# set
pc_clf.fit(this_X_train, y_train, sample_weight=sw_train)
prob_pos_pc_clf = pc_clf.predict_proba(this_X_test)[:, 1]
# Check that brier score has improved after calibration
assert_greater(brier_score_loss(y_test, prob_pos_clf),
brier_score_loss(y_test, prob_pos_pc_clf))
# Check invariance against relabeling [0, 1] -> [1, 2]
pc_clf.fit(this_X_train, y_train + 1, sample_weight=sw_train)
prob_pos_pc_clf_relabeled = pc_clf.predict_proba(this_X_test)[:, 1]
assert_array_almost_equal(prob_pos_pc_clf,
prob_pos_pc_clf_relabeled)
# Check invariance against relabeling [0, 1] -> [-1, 1]
pc_clf.fit(this_X_train, 2 * y_train - 1, sample_weight=sw_train)
prob_pos_pc_clf_relabeled = pc_clf.predict_proba(this_X_test)[:, 1]
assert_array_almost_equal(prob_pos_pc_clf,
prob_pos_pc_clf_relabeled)
# Check invariance against relabeling [0, 1] -> [1, 0]
pc_clf.fit(this_X_train, (y_train + 1) % 2,
sample_weight=sw_train)
prob_pos_pc_clf_relabeled = \
pc_clf.predict_proba(this_X_test)[:, 1]
if method == "sigmoid":
assert_array_almost_equal(prob_pos_pc_clf,
1 - prob_pos_pc_clf_relabeled)
else:
# Isotonic calibration is not invariant against relabeling
# but should improve in both cases
assert_greater(brier_score_loss(y_test, prob_pos_clf),
brier_score_loss((y_test + 1) % 2,
prob_pos_pc_clf_relabeled))
# check that calibration can also deal with regressors that have
# a decision_function
clf_base_regressor = CalibratedClassifierCV(Ridge())
clf_base_regressor.fit(X_train, y_train)
clf_base_regressor.predict(X_test)
# Check failure cases:
# only "isotonic" and "sigmoid" should be accepted as methods
clf_invalid_method = CalibratedClassifierCV(clf, method="foo")
assert_raises(ValueError, clf_invalid_method.fit, X_train, y_train)
# base-estimators should provide either decision_function or
# predict_proba (most regressors, for instance, should fail)
clf_base_regressor = \
CalibratedClassifierCV(RandomForestRegressor(), method="sigmoid")
assert_raises(RuntimeError, clf_base_regressor.fit, X_train, y_train)
def test_sample_weight_warning():
n_samples = 100
X, y = make_classification(n_samples=2 * n_samples, n_features=6,
random_state=42)
sample_weight = np.random.RandomState(seed=42).uniform(size=len(y))
X_train, y_train, sw_train = \
X[:n_samples], y[:n_samples], sample_weight[:n_samples]
X_test = X[n_samples:]
for method in ['sigmoid', 'isotonic']:
base_estimator = LinearSVC(random_state=42)
calibrated_clf = CalibratedClassifierCV(base_estimator, method=method)
# LinearSVC does not currently support sample weights but they
# can still be used for the calibration step (with a warning)
msg = "LinearSVC does not support sample_weight."
assert_warns_message(
UserWarning, msg,
calibrated_clf.fit, X_train, y_train, sample_weight=sw_train)
probs_with_sw = calibrated_clf.predict_proba(X_test)
# As the weights are used for the calibration, they should still yield
# a different predictions
calibrated_clf.fit(X_train, y_train)
probs_without_sw = calibrated_clf.predict_proba(X_test)
diff = np.linalg.norm(probs_with_sw - probs_without_sw)
assert_greater(diff, 0.1)
def test_calibration_multiclass():
"""Test calibration for multiclass """
# test multi-class setting with classifier that implements
# only decision function
clf = LinearSVC()
X, y_idx = make_blobs(n_samples=100, n_features=2, random_state=42,
centers=3, cluster_std=3.0)
# Use categorical labels to check that CalibratedClassifierCV supports
# them correctly
target_names = np.array(['a', 'b', 'c'])
y = target_names[y_idx]
X_train, y_train = X[::2], y[::2]
X_test, y_test = X[1::2], y[1::2]
clf.fit(X_train, y_train)
for method in ['isotonic', 'sigmoid']:
cal_clf = CalibratedClassifierCV(clf, method=method, cv=2)
cal_clf.fit(X_train, y_train)
probas = cal_clf.predict_proba(X_test)
assert_array_almost_equal(np.sum(probas, axis=1), np.ones(len(X_test)))
# Check that log-loss of calibrated classifier is smaller than
# log-loss of naively turned OvR decision function to probabilities
# via softmax
def softmax(y_pred):
e = np.exp(-y_pred)
return e / e.sum(axis=1).reshape(-1, 1)
uncalibrated_log_loss = \
log_loss(y_test, softmax(clf.decision_function(X_test)))
calibrated_log_loss = log_loss(y_test, probas)
assert_greater_equal(uncalibrated_log_loss, calibrated_log_loss)
# Test that calibration of a multiclass classifier decreases log-loss
# for RandomForestClassifier
X, y = make_blobs(n_samples=100, n_features=2, random_state=42,
cluster_std=3.0)
X_train, y_train = X[::2], y[::2]
X_test, y_test = X[1::2], y[1::2]
clf = RandomForestClassifier(n_estimators=10, random_state=42)
clf.fit(X_train, y_train)
clf_probs = clf.predict_proba(X_test)
loss = log_loss(y_test, clf_probs)
for method in ['isotonic', 'sigmoid']:
cal_clf = CalibratedClassifierCV(clf, method=method, cv=3)
cal_clf.fit(X_train, y_train)
cal_clf_probs = cal_clf.predict_proba(X_test)
cal_loss = log_loss(y_test, cal_clf_probs)
assert_greater(loss, cal_loss)
def test_calibration_prefit():
"""Test calibration for prefitted classifiers"""
n_samples = 50
X, y = make_classification(n_samples=3 * n_samples, n_features=6,
random_state=42)
sample_weight = np.random.RandomState(seed=42).uniform(size=y.size)
X -= X.min() # MultinomialNB only allows positive X
# split train and test
X_train, y_train, sw_train = \
X[:n_samples], y[:n_samples], sample_weight[:n_samples]
X_calib, y_calib, sw_calib = \
X[n_samples:2 * n_samples], y[n_samples:2 * n_samples], \
sample_weight[n_samples:2 * n_samples]
X_test, y_test = X[2 * n_samples:], y[2 * n_samples:]
# Naive-Bayes
clf = MultinomialNB()
clf.fit(X_train, y_train, sw_train)
prob_pos_clf = clf.predict_proba(X_test)[:, 1]
# Naive Bayes with calibration
for this_X_calib, this_X_test in [(X_calib, X_test),
(sparse.csr_matrix(X_calib),
sparse.csr_matrix(X_test))]:
for method in ['isotonic', 'sigmoid']:
pc_clf = CalibratedClassifierCV(clf, method=method, cv="prefit")
for sw in [sw_calib, None]:
pc_clf.fit(this_X_calib, y_calib, sample_weight=sw)
y_prob = pc_clf.predict_proba(this_X_test)
y_pred = pc_clf.predict(this_X_test)
prob_pos_pc_clf = y_prob[:, 1]
assert_array_equal(y_pred,
np.array([0, 1])[np.argmax(y_prob, axis=1)])
assert_greater(brier_score_loss(y_test, prob_pos_clf),
brier_score_loss(y_test, prob_pos_pc_clf))
def test_sigmoid_calibration():
"""Test calibration values with Platt sigmoid model"""
exF = np.array([5, -4, 1.0])
exY = np.array([1, -1, -1])
# computed from my python port of the C++ code in LibSVM
AB_lin_libsvm = np.array([-0.20261354391187855, 0.65236314980010512])
assert_array_almost_equal(AB_lin_libsvm,
_sigmoid_calibration(exF, exY), 3)
lin_prob = 1. / (1. + np.exp(AB_lin_libsvm[0] * exF + AB_lin_libsvm[1]))
sk_prob = _SigmoidCalibration().fit(exF, exY).predict(exF)
assert_array_almost_equal(lin_prob, sk_prob, 6)
# check that _SigmoidCalibration().fit only accepts 1d array or 2d column
# arrays
assert_raises(ValueError, _SigmoidCalibration().fit,
np.vstack((exF, exF)), exY)
def test_calibration_curve():
"""Check calibration_curve function"""
y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([0., 0.1, 0.2, 0.8, 0.9, 1.])
prob_true, prob_pred = calibration_curve(y_true, y_pred, n_bins=2)
prob_true_unnormalized, prob_pred_unnormalized = \
calibration_curve(y_true, y_pred * 2, n_bins=2, normalize=True)
assert_equal(len(prob_true), len(prob_pred))
assert_equal(len(prob_true), 2)
assert_almost_equal(prob_true, [0, 1])
assert_almost_equal(prob_pred, [0.1, 0.9])
assert_almost_equal(prob_true, prob_true_unnormalized)
assert_almost_equal(prob_pred, prob_pred_unnormalized)
# probabilities outside [0, 1] should not be accepted when normalize
# is set to False
assert_raises(ValueError, calibration_curve, [1.1], [-0.1],
normalize=False)
def test_calibration_nan_imputer():
"""Test that calibration can accept nan"""
X, y = make_classification(n_samples=10, n_features=2,
n_informative=2, n_redundant=0,
random_state=42)
X[0, 0] = np.nan
clf = Pipeline(
[('imputer', Imputer()),
('rf', RandomForestClassifier(n_estimators=1))])
clf_c = CalibratedClassifierCV(clf, cv=2, method='isotonic')
clf_c.fit(X, y)
clf_c.predict(X)
| bsd-3-clause |
eragonruan/text-detection-ctpn | utils/dataset/data_provider.py | 1 | 3253 | # encoding:utf-8
import os
import time
import cv2
import matplotlib.pyplot as plt
import numpy as np
from utils.dataset.data_util import GeneratorEnqueuer
DATA_FOLDER = "data/dataset/mlt/"
def get_training_data():
img_files = []
exts = ['jpg', 'png', 'jpeg', 'JPG']
for parent, dirnames, filenames in os.walk(os.path.join(DATA_FOLDER, "image")):
for filename in filenames:
for ext in exts:
if filename.endswith(ext):
img_files.append(os.path.join(parent, filename))
break
print('Find {} images'.format(len(img_files)))
return img_files
def load_annoataion(p):
bbox = []
with open(p, "r") as f:
lines = f.readlines()
for line in lines:
line = line.strip().split(",")
x_min, y_min, x_max, y_max = map(int, line)
bbox.append([x_min, y_min, x_max, y_max, 1])
return bbox
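# Expected label file format (one axis-aligned box per line, comma separated),
# e.g. a "label/xxx.txt" file containing the made-up lines
#
#     48,166,512,190
#     60,220,500,244
#
# is parsed by load_annoataion() into [[48, 166, 512, 190, 1], [60, 220, 500, 244, 1]];
# the trailing 1 is the constant the loader appends for every box.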
def generator(vis=False):
image_list = np.array(get_training_data())
print('{} training images in {}'.format(image_list.shape[0], DATA_FOLDER))
index = np.arange(0, image_list.shape[0])
while True:
np.random.shuffle(index)
for i in index:
try:
im_fn = image_list[i]
im = cv2.imread(im_fn)
h, w, c = im.shape
im_info = np.array([h, w, c]).reshape([1, 3])
_, fn = os.path.split(im_fn)
fn, _ = os.path.splitext(fn)
txt_fn = os.path.join(DATA_FOLDER, "label", fn + '.txt')
if not os.path.exists(txt_fn):
print("Ground truth for image {} does not exist!".format(im_fn))
continue
bbox = load_annoataion(txt_fn)
if len(bbox) == 0:
print("Ground truth for image {} is empty!".format(im_fn))
continue
if vis:
for p in bbox:
cv2.rectangle(im, (p[0], p[1]), (p[2], p[3]), color=(0, 0, 255), thickness=1)
fig, axs = plt.subplots(1, 1, figsize=(30, 30))
axs.imshow(im[:, :, ::-1])
axs.set_xticks([])
axs.set_yticks([])
plt.tight_layout()
plt.show()
plt.close()
yield [im], bbox, im_info
except Exception as e:
print(e)
continue
def get_batch(num_workers, **kwargs):
try:
enqueuer = GeneratorEnqueuer(generator(**kwargs), use_multiprocessing=True)
enqueuer.start(max_queue_size=24, workers=num_workers)
generator_output = None
while True:
while enqueuer.is_running():
if not enqueuer.queue.empty():
generator_output = enqueuer.queue.get()
break
else:
time.sleep(0.01)
yield generator_output
generator_output = None
finally:
if enqueuer is not None:
enqueuer.stop()
if __name__ == '__main__':
gen = get_batch(num_workers=2, vis=True)
while True:
image, bbox, im_info = next(gen)
print('done')
| mit |
kenshay/ImageScript | ProgramData/SystemFiles/Python/Lib/site-packages/pandas/tests/test_msgpack/test_limits.py | 9 | 2700 | #!/usr/bin/env python
# coding: utf-8
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import pandas.util.testing as tm
from pandas.msgpack import packb, unpackb, Packer, Unpacker, ExtType
class TestLimits(tm.TestCase):
def test_integer(self):
x = -(2 ** 63)
assert unpackb(packb(x)) == x
self.assertRaises((OverflowError, ValueError), packb, x - 1)
x = 2 ** 64 - 1
assert unpackb(packb(x)) == x
self.assertRaises((OverflowError, ValueError), packb, x + 1)
def test_array_header(self):
packer = Packer()
packer.pack_array_header(2 ** 32 - 1)
self.assertRaises((OverflowError, ValueError),
packer.pack_array_header, 2 ** 32)
def test_map_header(self):
packer = Packer()
packer.pack_map_header(2 ** 32 - 1)
self.assertRaises((OverflowError, ValueError),
packer.pack_array_header, 2 ** 32)
def test_max_str_len(self):
d = 'x' * 3
packed = packb(d)
unpacker = Unpacker(max_str_len=3, encoding='utf-8')
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_str_len=2, encoding='utf-8')
unpacker.feed(packed)
self.assertRaises(ValueError, unpacker.unpack)
def test_max_bin_len(self):
d = b'x' * 3
packed = packb(d, use_bin_type=True)
unpacker = Unpacker(max_bin_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_bin_len=2)
unpacker.feed(packed)
self.assertRaises(ValueError, unpacker.unpack)
def test_max_array_len(self):
d = [1, 2, 3]
packed = packb(d)
unpacker = Unpacker(max_array_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_array_len=2)
unpacker.feed(packed)
self.assertRaises(ValueError, unpacker.unpack)
def test_max_map_len(self):
d = {1: 2, 3: 4, 5: 6}
packed = packb(d)
unpacker = Unpacker(max_map_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_map_len=2)
unpacker.feed(packed)
self.assertRaises(ValueError, unpacker.unpack)
def test_max_ext_len(self):
d = ExtType(42, b"abc")
packed = packb(d)
unpacker = Unpacker(max_ext_len=3)
unpacker.feed(packed)
assert unpacker.unpack() == d
unpacker = Unpacker(max_ext_len=2)
unpacker.feed(packed)
self.assertRaises(ValueError, unpacker.unpack)
| gpl-3.0 |
cybernet14/scikit-learn | examples/svm/plot_separating_hyperplane_unbalanced.py | 329 | 1850 | """
=================================================
SVM: Separating hyperplane for unbalanced classes
=================================================
Find the optimal separating hyperplane using an SVC for classes that
are unbalanced.
We first find the separating plane with a plain SVC and then plot
(dashed) the separating hyperplane with automatically correction for
unbalanced classes.
.. currentmodule:: sklearn.linear_model
.. note::
This example will also work by replacing ``SVC(kernel="linear")``
with ``SGDClassifier(loss="hinge")``. Setting the ``loss`` parameter
of the :class:`SGDClassifier` equal to ``hinge`` will yield behaviour
such as that of a SVC with a linear kernel.
For example try instead of the ``SVC``::
clf = SGDClassifier(n_iter=100, alpha=0.01)
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
#from sklearn.linear_model import SGDClassifier
# we create 40 separable points
rng = np.random.RandomState(0)
n_samples_1 = 1000
n_samples_2 = 100
X = np.r_[1.5 * rng.randn(n_samples_1, 2),
0.5 * rng.randn(n_samples_2, 2) + [2, 2]]
y = [0] * (n_samples_1) + [1] * (n_samples_2)
# fit the model and get the separating hyperplane
clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, y)
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - clf.intercept_[0] / w[1]
# get the separating hyperplane using weighted classes
wclf = svm.SVC(kernel='linear', class_weight={1: 10})
wclf.fit(X, y)
ww = wclf.coef_[0]
wa = -ww[0] / ww[1]
wyy = wa * xx - wclf.intercept_[0] / ww[1]
# plot separating hyperplanes and samples
h0 = plt.plot(xx, yy, 'k-', label='no weights')
h1 = plt.plot(xx, wyy, 'k--', label='with weights')
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.legend()
plt.axis('tight')
plt.show()
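# The note in the docstring says that SGDClassifier(loss="hinge") behaves like a
# linear-kernel SVC.  A minimal sketch of that variant on the same data; the
# n_iter/alpha values are the ones quoted in the note and are not tuned.
from sklearn.linear_model import SGDClassifier

sgd = SGDClassifier(loss="hinge", n_iter=100, alpha=0.01, class_weight={1: 10})
sgd.fit(X, y)
ws = sgd.coef_[0]
print("SGD weighted hyperplane: slope=%.2f, intercept=%.2f"
      % (-ws[0] / ws[1], -sgd.intercept_[0] / ws[1]))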
| bsd-3-clause |
ddboline/driven_data_predict_restraurant_inspections | plot_data.py | 1 | 2185 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 28 23:15:29 2015
@author: ddboline
"""
import os
import matplotlib
matplotlib.use('Agg')
import pylab as pl
from pandas.tools.plotting import scatter_matrix
def create_html_page_of_plots(list_of_plots, prefix='html'):
"""
create html page with png files
"""
if not os.path.exists(prefix):
os.makedirs(prefix)
os.system('mv *.png %s' % prefix)
#print(list_of_plots)
idx = 0
htmlfile = open('%s/index_0.html' % prefix, 'w')
htmlfile.write('<!DOCTYPE html><html><body><div>\n')
for plot in list_of_plots:
if idx > 0 and idx % 200 == 0:
htmlfile.write('</div></html></html>\n')
htmlfile.close()
htmlfile = open('%s/index_%d.html' % (prefix, (idx//200)), 'w')
htmlfile.write('<!DOCTYPE html><html><body><div>\n')
htmlfile.write('<p><img src="%s"></p>\n' % plot)
idx += 1
htmlfile.write('</div></html></html>\n')
htmlfile.close()
def plot_data(indf, prefix='html'):
"""
create scatter matrix plot, histograms
"""
list_of_plots = []
column_groups = []
column_list = []
for col in indf.columns:
if len(indf[col].unique()) > 5 and 'checkin' not in col:
column_list.append(col)
for idx in range(0, len(column_list), 3):
print len(column_list), idx, (idx+3)
column_groups.append(column_list[idx:(idx+3)])
for idx in range(len(column_groups)):
for idy in range(0, idx):
if idx == idy:
continue
print column_groups[idx]+column_groups[idy]
pl.clf()
scatter_matrix(indf[column_groups[idx]+column_groups[idy]])
pl.savefig('scatter_matrix_%d_%d.png' % (idx, idy))
list_of_plots.append('scatter_matrix_%d_%d.png' % (idx, idy))
pl.close()
for col in indf:
pl.clf()
print col
indf[col].hist(histtype='step', normed=True)
pl.title(col)
pl.savefig('%s_hist.png' % col)
list_of_plots.append('%s_hist.png' % col)
create_html_page_of_plots(list_of_plots, prefix)
return
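if __name__ == '__main__':
    # Minimal usage sketch: read a feature table and render the histograms and
    # scatter matrices into the html/ directory.  The csv filename below is a
    # placeholder for whatever training data this script is pointed at.
    import pandas as pd
    indf = pd.read_csv('train.csv')
    plot_data(indf, prefix='html')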
| mit |
xuleiboy1234/autoTitle | tensorflow/tensorflow/examples/get_started/regression/imports85.py | 8 | 3495 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A dataset loader for imports85.data."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import numpy as np
import pandas as pd
import tensorflow as tf
header = collections.OrderedDict([
("symboling", np.int32),
("normalized-losses", np.float32),
("make", str),
("fuel-type", str),
("aspiration", str),
("num-of-doors", str),
("body-style", str),
("drive-wheels", str),
("engine-location", str),
("wheel-base", np.float32),
("length", np.float32),
("width", np.float32),
("height", np.float32),
("curb-weight", np.float32),
("engine-type", str),
("num-of-cylinders", str),
("engine-size", np.float32),
("fuel-system", str),
("bore", np.float32),
("stroke", np.float32),
("compression-ratio", np.float32),
("horsepower", np.float32),
("peak-rpm", np.float32),
("city-mpg", np.float32),
("highway-mpg", np.float32),
("price", np.float32)
]) # pyformat: disable
def raw():
"""Get the imports85 data and load it as a pd.DataFrame."""
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data" # pylint: disable=line-too-long
# Download and cache the data.
path = tf.contrib.keras.utils.get_file(url.split("/")[-1], url)
# Load the CSV data into a pandas dataframe.
df = pd.read_csv(path, names=header.keys(), dtype=header, na_values="?")
return df
def load_data(y_name="price", train_fraction=0.7, seed=None):
"""Returns the imports85 shuffled and split into train and test subsets.
A description of the data is available at:
https://archive.ics.uci.edu/ml/datasets/automobile
The data itself can be found at:
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
Args:
y_name: the column to return as the label.
train_fraction: the fraction of the dataset to use for training.
seed: The random seed to use when shuffling the data. `None` generates a
unique shuffle every run.
Returns:
a pair of pairs where the first pair is the training data, and the second
is the test data:
`(x_train, y_train), (x_test, y_test) = load_data(...)`
`x` contains a pandas DataFrame of features, while `y` contains the label
array.
"""
# Load the raw data columns.
data = raw()
# Delete rows with unknowns
data = data.dropna()
# Shuffle the data
np.random.seed(seed)
# Split the data into train/test subsets.
x_train = data.sample(frac=train_fraction, random_state=seed)
x_test = data.drop(x_train.index)
# Extract the label from the features dataframe.
y_train = x_train.pop(y_name)
y_test = x_test.pop(y_name)
return (x_train, y_train), (x_test, y_test)
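if __name__ == "__main__":
    # Minimal usage sketch of the loader defined above: download (or reuse the
    # cached copy of) the data, split it, and report the resulting shapes; the
    # fixed seed only makes the shuffle reproducible.
    (x_train, y_train), (x_test, y_test) = load_data(seed=0)
    print("train:", x_train.shape, "test:", x_test.shape)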
| mit |
lin-credible/scikit-learn | examples/datasets/plot_random_multilabel_dataset.py | 93 | 3460 | """
==============================================
Plot randomly generated multilabel dataset
==============================================
This illustrates the `datasets.make_multilabel_classification` dataset
generator. Each sample consists of counts of two features (up to 50 in
total), which are differently distributed in each of two classes.
Points are labeled as follows, where Y means the class is present:
===== ===== ===== ======
1 2 3 Color
===== ===== ===== ======
Y N N Red
N Y N Blue
N N Y Yellow
Y Y N Purple
Y N Y Orange
Y Y N Green
Y Y Y Brown
===== ===== ===== ======
A star marks the expected sample for each class; its size reflects the
probability of selecting that class label.
The left and right examples highlight the ``n_labels`` parameter:
more of the samples in the right plot have 2 or 3 labels.
Note that this two-dimensional example is very degenerate:
generally the number of features would be much greater than the
"document length", while here we have much larger documents than vocabulary.
Similarly, with ``n_classes > n_features``, it is much less likely that a
feature distinguishes a particular class.
"""
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_multilabel_classification as make_ml_clf
print(__doc__)
COLORS = np.array(['!',
'#FF3333', # red
'#0198E1', # blue
'#BF5FFF', # purple
'#FCD116', # yellow
'#FF7216', # orange
'#4DBD33', # green
'#87421F' # brown
])
# Use same random seed for multiple calls to make_multilabel_classification to
# ensure same distributions
RANDOM_SEED = np.random.randint(2 ** 10)
def plot_2d(ax, n_labels=1, n_classes=3, length=50):
X, Y, p_c, p_w_c = make_ml_clf(n_samples=150, n_features=2,
n_classes=n_classes, n_labels=n_labels,
length=length, allow_unlabeled=False,
return_indicator=True,
return_distributions=True,
random_state=RANDOM_SEED)
ax.scatter(X[:, 0], X[:, 1], color=COLORS.take((Y * [1, 2, 4]
).sum(axis=1)),
marker='.')
ax.scatter(p_w_c[0] * length, p_w_c[1] * length,
marker='*', linewidth=.5, edgecolor='black',
s=20 + 1500 * p_c ** 2,
color=COLORS.take([1, 2, 4]))
ax.set_xlabel('Feature 0 count')
return p_c, p_w_c
_, (ax1, ax2) = plt.subplots(1, 2, sharex='row', sharey='row', figsize=(8, 4))
plt.subplots_adjust(bottom=.15)
p_c, p_w_c = plot_2d(ax1, n_labels=1)
ax1.set_title('n_labels=1, length=50')
ax1.set_ylabel('Feature 1 count')
plot_2d(ax2, n_labels=3)
ax2.set_title('n_labels=3, length=50')
ax2.set_xlim(left=0, auto=True)
ax2.set_ylim(bottom=0, auto=True)
plt.show()
print('The data was generated from (random_state=%d):' % RANDOM_SEED)
print('Class', 'P(C)', 'P(w0|C)', 'P(w1|C)', sep='\t')
for k, p, p_w in zip(['red', 'blue', 'yellow'], p_c, p_w_c.T):
print('%s\t%0.2f\t%0.2f\t%0.2f' % (k, p, p_w[0], p_w[1]))
| bsd-3-clause |
phaustin/pyman | Book/chap9/Supporting Materials/specFuncPlots.py | 3 | 2545 | import numpy as np
import scipy.special
import matplotlib.pyplot as plt
# create a figure window
fig = plt.figure(1, figsize=(9,8))
# create arrays for a few Bessel functions and plot them
x = np.linspace(0, 20, 256)
j0 = scipy.special.jn(0, x)
j1 = scipy.special.jn(1, x)
y0 = scipy.special.yn(0, x)
y1 = scipy.special.yn(1, x)
ax1 = fig.add_subplot(321)
ax1.plot(x,j0, x,j1, x,y0, x,y1)
ax1.axhline(color="grey", ls="--", zorder=-1)
ax1.set_ylim(-1,1)
ax1.text(0.5, 0.95,'Bessel', ha='center', va='top',
transform = ax1.transAxes)
# gamma function
x = np.linspace(-3.5, 6., 3601)
g = scipy.special.gamma(x)
g = np.ma.masked_outside(g, -100, 400)
ax2 = fig.add_subplot(322)
ax2.plot(x,g)
ax2.set_xlim(-3.5, 6)
ax2.axhline(color="grey", ls="--", zorder=-1)
ax2.axvline(color="grey", ls="--", zorder=-1)
ax2.set_ylim(-20, 100)
ax2.text(0.5, 0.95,'Gamma', ha='center', va='top',
transform = ax2.transAxes)
# error function
x = np.linspace(0, 2.5, 256)
ef = scipy.special.erf(x)
ax3 = fig.add_subplot(323)
ax3.plot(x,ef)
ax3.set_ylim(0,1.1)
ax3.text(0.5, 0.95,'Error', ha='center', va='top',
transform = ax3.transAxes)
# Airy function
x = np.linspace(-15, 4, 256)
ai, aip, bi, bip = scipy.special.airy(x)
ax4 = fig.add_subplot(324)
ax4.plot(x,ai, x,bi)
ax4.axhline(color="grey", ls="--", zorder=-1)
ax4.axvline(color="grey", ls="--", zorder=-1)
ax4.set_xlim(-15,4)
ax4.set_ylim(-0.5,0.6)
ax4.text(0.5, 0.95,'Airy', ha='center', va='top',
transform = ax4.transAxes)
# Legendre polynomials
x = np.linspace(-1, 1, 256)
lp0 = np.polyval(scipy.special.legendre(0),x)
lp1 = np.polyval(scipy.special.legendre(1),x)
lp2 = np.polyval(scipy.special.legendre(2),x)
lp3 = np.polyval(scipy.special.legendre(3),x)
ax5 = fig.add_subplot(325)
ax5.plot(x,lp0, x,lp1, x,lp2, x,lp3)
ax5.axhline(color="grey", ls="--", zorder=-1)
ax5.axvline(color="grey", ls="--", zorder=-1)
ax5.set_ylim(-1,1.1)
ax5.text(0.5, 0.9,'Legendre', ha='center', va='top',
transform = ax5.transAxes)
# Laguerre polynomials
x = np.linspace(-5, 8, 256)
lg0 = np.polyval(scipy.special.laguerre(0),x)
lg1 = np.polyval(scipy.special.laguerre(1),x)
lg2 = np.polyval(scipy.special.laguerre(2),x)
lg3 = np.polyval(scipy.special.laguerre(3),x)
ax6 = fig.add_subplot(326)
ax6.plot(x,lg0, x,lg1, x,lg2, x,lg3)
ax6.axhline(color="grey", ls="--", zorder=-1)
ax6.axvline(color="grey", ls="--", zorder=-1)
ax6.set_xlim(-5,8)
ax6.set_ylim(-5,10)
ax6.text(0.5, 0.9,'Laguerre', ha='center', va='top',
transform = ax6.transAxes)
plt.savefig("specFuncPlots.pdf")
plt.show() | cc0-1.0 |
JaviMerino/trappy | trappy/plotter/StaticPlot.py | 1 | 9823 | # Copyright 2016-2016 ARM Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""Base matplotlib plotter module"""
from abc import abstractmethod, ABCMeta
from collections import defaultdict as ddict
import matplotlib.pyplot as plt
from trappy.plotter import AttrConf
from trappy.plotter.Constraint import ConstraintManager
from trappy.plotter.PlotLayout import PlotLayout
from trappy.plotter.AbstractDataPlotter import AbstractDataPlotter
from trappy.plotter.ColorMap import ColorMap
class StaticPlot(AbstractDataPlotter):
"""
This class uses :mod:`trappy.plotter.Constraint.Constraint` to
represent different permutations of input parameters. These
constraints are generated by creating an instance of
:mod:`trappy.plotter.Constraint.ConstraintManager`.
:param traces: The input data
:type traces: a list of :mod:`trappy.trace.FTrace` or :mod:`pandas.DataFrame` or a single instance of them
:param column: specifies the name of the column to
be plotted.
:type column: (str, list(str))
:param templates: TRAPpy events
.. note::
This is not required if a :mod:`pandas.DataFrame` is
used
:type templates: :mod:`trappy.base.Base`
:param filters: Filter the column to be plotted as per the
specified criteria. For Example:
::
filters =
{
"pid": [ 3338 ],
"cpu": [0, 2, 4],
}
:type filters: dict
:param per_line: Used to control the number of graphs
in each graph subplot row
:type per_line: int
:param concat: Draw all the pivots on a single graph
:type concat: bool
:param permute: Draw one plot for each of the traces specified
:type permute: bool
:param drawstyle: This argument is forwarded to the matplotlib
corresponding :func:`matplotlib.pyplot.plot` call
drawing style.
.. note::
step plots are not currently supported for filled
graphs
:param xlim: A tuple representing the upper and lower xlimits
:type xlim: tuple
:param ylim: A tuple representing the upper and lower ylimits
:type ylim: tuple
:param title: A title describing all the generated plots
:type title: str
    :param style: Use the pre-defined matplotlib style loaded from
:mod:`trappy.plotter.AttrConf.MPL_STYLE`
:type style: bool
:param signals: A string of the type event_name:column
to indicate the value that needs to be plotted
.. note::
- Only one of `signals` or both `templates` and
`columns` should be specified
- Signals format won't work for :mod:`pandas.DataFrame`
input
:type signals: str
:param legend_ncol: A positive integer that represents the
number of columns in the legend
:type legend_ncol: int
"""
__metaclass__ = ABCMeta
def __init__(self, traces, templates, **kwargs):
self._fig = None
self._layout = None
super(StaticPlot, self).__init__(traces=traces,
templates=templates)
self.set_defaults()
for key in kwargs:
if key in AttrConf.ARGS_TO_FORWARD:
self._attr["args_to_forward"][key] = kwargs[key]
else:
self._attr[key] = kwargs[key]
if "signals" in self._attr:
self._describe_signals()
self._check_data()
if "column" not in self._attr:
raise RuntimeError("Value Column not specified")
zip_constraints = not self._attr["permute"]
self.c_mgr = ConstraintManager(traces, self._attr["column"],
self.templates, self._attr["pivot"],
self._attr["filters"], zip_constraints)
def savefig(self, *args, **kwargs):
"""Save the plot as a PNG fill. This calls into
:mod:`matplotlib.figure.savefig`
"""
if self._fig is None:
self.view()
self._fig.savefig(*args, **kwargs)
@abstractmethod
def set_defaults(self):
"""Sets the default attrs"""
self._attr["width"] = AttrConf.WIDTH
self._attr["length"] = AttrConf.LENGTH
self._attr["per_line"] = AttrConf.PER_LINE
self._attr["concat"] = AttrConf.CONCAT
self._attr["filters"] = {}
self._attr["style"] = True
self._attr["permute"] = False
self._attr["pivot"] = AttrConf.PIVOT
self._attr["xlim"] = AttrConf.XLIM
self._attr["ylim"] = AttrConf.XLIM
self._attr["title"] = AttrConf.TITLE
self._attr["args_to_forward"] = {}
self._attr["map_label"] = {}
self._attr["_legend_handles"] = []
self._attr["_legend_labels"] = []
self._attr["legend_ncol"] = AttrConf.LEGEND_NCOL
def view(self, test=False):
"""Displays the graph"""
if test:
self._attr["style"] = True
AttrConf.MPL_STYLE["interactive"] = False
permute = self._attr["permute"] and not self._attr["concat"]
if self._attr["style"]:
with plt.rc_context(AttrConf.MPL_STYLE):
self._resolve(permute, self._attr["concat"])
else:
self._resolve(permute, self._attr["concat"])
def make_title(self, constraint, pivot, permute, concat):
"""Generates a title string for an axis"""
if concat:
return str(constraint)
if permute:
return constraint.get_data_name()
elif pivot != AttrConf.PIVOT_VAL:
return "{0}: {1}".format(self._attr["pivot"], self._attr["map_label"].get(pivot, pivot))
else:
return ""
def add_to_legend(self, series_index, handle, constraint, pivot, concat, permute):
"""
Add series handles and names to the legend
A handle is returned from a plot on an axis
e.g. Line2D from axis.plot()
"""
self._attr["_legend_handles"][series_index] = handle
legend_labels = self._attr["_legend_labels"]
if concat and pivot == AttrConf.PIVOT_VAL:
legend_labels[series_index] = self._attr["column"]
elif concat:
legend_labels[series_index] = "{0}: {1}".format(
self._attr["pivot"],
self._attr["map_label"].get(pivot, pivot)
)
elif permute:
legend_labels[series_index] = constraint._template.name + ":" + constraint.column
else:
legend_labels[series_index] = str(constraint)
def _resolve(self, permute, concat):
"""Determine what data to plot on which axis"""
pivot_vals, len_pivots = self.c_mgr.generate_pivots(permute)
pivot_vals = list(pivot_vals)
num_of_axes = len(self.c_mgr) if concat else len_pivots
# Create a 2D Layout
self._layout = PlotLayout(
self._attr["per_line"],
num_of_axes,
width=self._attr["width"],
length=self._attr["length"],
title=self._attr['title'])
self._fig = self._layout.get_fig()
# Determine what constraint to plot and the corresponding pivot value
if permute:
legend_len = self.c_mgr._max_len
pivots = [y for _, y in pivot_vals]
c_dict = {c : str(c) for c in self.c_mgr}
c_list = sorted(c_dict.items(), key=lambda x: (x[1].split(":")[-1], x[1].split(":")[0]))
constraints = [c[0] for c in c_list]
cp_pairs = [(c, p) for c in constraints for p in sorted(set(pivots))]
else:
legend_len = len_pivots if concat else len(self.c_mgr)
pivots = pivot_vals
cp_pairs = [(c, p) for c in self.c_mgr for p in pivots if p in c.result]
# Initialise legend data and colormap
self._attr["_legend_handles"] = [None] * legend_len
self._attr["_legend_labels"] = [None] * legend_len
self._cmap = ColorMap(legend_len)
# Group constraints/series with the axis they are to be plotted on
figure_data = ddict(list)
for i, (constraint, pivot) in enumerate(cp_pairs):
axis = self._layout.get_axis(constraint.trace_index if concat else i)
figure_data[axis].append((constraint, pivot))
# Plot each axis
for axis, series_list in figure_data.iteritems():
self.plot_axis(
axis,
series_list,
permute,
self._attr["concat"],
self._attr["args_to_forward"]
)
# Show legend
legend = self._fig.legend(self._attr["_legend_handles"],
self._attr["_legend_labels"],
loc='lower center',
ncol=self._attr["legend_ncol"],
borderaxespad=0.)
legend.get_frame().set_facecolor('#F4F4F4')
self._layout.finish(num_of_axes)
def plot_axis(self, axis, series_list, permute, concat, args_to_forward):
"""Internal Method called to plot data (series_list) on a given axis"""
raise NotImplementedError("Method Not Implemented")
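# Usage note: StaticPlot is abstract (plot_axis() must be implemented by a
# subclass such as trappy.LinePlot).  A minimal sketch, assuming a trace file
# and a signal name that exist in your setup (both are illustrative only):
#
#   import trappy
#   trace = trappy.FTrace("trace.dat")
#   trappy.LinePlot(trace,
#                   signals=["sched_load_avg_cpu:util_avg"],
#                   pivot="cpu", per_line=2).view()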
| apache-2.0 |
raghavrv/scikit-learn | sklearn/decomposition/tests/test_sparse_pca.py | 63 | 6459 | # Author: Vlad Niculae
# License: BSD 3 clause
import sys
import numpy as np
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import if_safe_multiprocessing_with_blas
from sklearn.decomposition import SparsePCA, MiniBatchSparsePCA
from sklearn.utils import check_random_state
def generate_toy_data(n_components, n_samples, image_size, random_state=None):
n_features = image_size[0] * image_size[1]
rng = check_random_state(random_state)
U = rng.randn(n_samples, n_components)
V = rng.randn(n_components, n_features)
centers = [(3, 3), (6, 7), (8, 1)]
sz = [1, 2, 1]
for k in range(n_components):
img = np.zeros(image_size)
xmin, xmax = centers[k][0] - sz[k], centers[k][0] + sz[k]
ymin, ymax = centers[k][1] - sz[k], centers[k][1] + sz[k]
img[xmin:xmax][:, ymin:ymax] = 1.0
V[k, :] = img.ravel()
# Y is defined by : Y = UV + noise
Y = np.dot(U, V)
Y += 0.1 * rng.randn(Y.shape[0], Y.shape[1]) # Add noise
return Y, U, V
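# The toy data above is a rank-3 factorization Y = U.dot(V) + noise in which
# each of the three dictionary atoms in V is an 8x8 image that is non-zero
# only inside a small square patch, so sparse components with localized
# support are recoverable; the tests below exercise SparsePCA and
# MiniBatchSparsePCA on this data.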
# SparsePCA can be a bit slow. To avoid having test times go up, we
# test different aspects of the code in the same test
def test_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
spca = SparsePCA(n_components=8, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
spca = SparsePCA(n_components=13, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_fit_transform():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
# Test that CD gives similar results
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=0,
alpha=alpha)
spca_lasso.fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
# Test that deprecated ridge_alpha parameter throws warning
warning_msg = "The ridge_alpha parameter on transform()"
assert_warns_message(DeprecationWarning, warning_msg, spca_lars.transform,
Y, ridge_alpha=0.01)
assert_warns_message(DeprecationWarning, warning_msg, spca_lars.transform,
Y, ridge_alpha=None)
@if_safe_multiprocessing_with_blas
def test_fit_transform_parallel():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
spca = SparsePCA(n_components=3, n_jobs=2, method='lars', alpha=alpha,
random_state=0).fit(Y)
U2 = spca.transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
def test_transform_nan():
# Test that SparsePCA won't return NaN when there is 0 feature in all
# samples.
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
Y[:, 0] = 0
estimator = SparsePCA(n_components=8)
assert_false(np.any(np.isnan(estimator.fit_transform(Y))))
def test_fit_transform_tall():
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 65, (8, 8), random_state=rng) # tall array
spca_lars = SparsePCA(n_components=3, method='lars',
random_state=rng)
U1 = spca_lars.fit_transform(Y)
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=rng)
U2 = spca_lasso.fit(Y).transform(Y)
assert_array_almost_equal(U1, U2)
def test_initialization():
rng = np.random.RandomState(0)
U_init = rng.randn(5, 3)
V_init = rng.randn(3, 4)
model = SparsePCA(n_components=3, U_init=U_init, V_init=V_init, max_iter=0,
random_state=rng)
model.fit(rng.randn(5, 4))
assert_array_equal(model.components_, V_init)
def test_mini_batch_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
pca = MiniBatchSparsePCA(n_components=8, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
pca = MiniBatchSparsePCA(n_components=13, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_mini_batch_fit_transform():
raise SkipTest("skipping mini_batch_fit_transform.")
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = MiniBatchSparsePCA(n_components=3, random_state=0,
alpha=alpha).fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
if sys.platform == 'win32': # fake parallelism for win32
import sklearn.externals.joblib.parallel as joblib_par
_mp = joblib_par.multiprocessing
joblib_par.multiprocessing = None
try:
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
finally:
joblib_par.multiprocessing = _mp
else: # we can efficiently use parallelism
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
# Test that CD gives similar results
spca_lasso = MiniBatchSparsePCA(n_components=3, method='cd', alpha=alpha,
random_state=0).fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
| bsd-3-clause |
lukasmarshall/embedded-network-model | participant.py | 1 | 2188 | import numpy as np
import pandas as pd
import datetime
import util
class Participant:
# Need to update to have both network and retail tariffs as inputs
def __init__(self, participant_id, participant_type, retail_tariff_type, network_tariff_type,retailer):
self.participant_id = participant_id
self.participant_type = participant_type
self.retail_tariff_type = retail_tariff_type
self.network_tariff_type = network_tariff_type
self.retailer = retailer
def print_attributes(self):
print(self.participant_type, self.retail_tariff_type, self.network_tariff_type, self.retailer)
# TODO - make this work
def calc_net_export(self, date_time, interval_min):
return np.random.uniform(-10,10)
def get_id(self):
return self.participant_id
def get_retail_tariff_type(self):
return self.retail_tariff_type
def get_network_tariff_type(self):
return self.network_tariff_type
class CSV_Participant(Participant):
def __init__(self, participant_id, participant_type, retail_tariff_type, network_tariff_type, retailer, solar_path, load_path, solar_capacity):
Participant.__init__(self, participant_id, participant_type, retail_tariff_type, network_tariff_type, retailer)
self.solar_path = solar_path
self.load_path = load_path
solar_data = pd.read_csv(solar_path,index_col = 'date_time', parse_dates=True, date_parser=util.date_parser)
load_data = pd.read_csv(load_path,index_col = 'date_time', parse_dates=False, date_parser=util.date_parser)
# Delete all cols not relevant to this participant
self.load_data = load_data[participant_id]
self.solar_data = solar_data[participant_id]
# Apply capacity to solar data
self.solar_data = self.solar_data * solar_capacity
# print solar_data
def calc_net_export(self, date_time, interval_min):
solar_data = float(self.solar_data.loc[date_time])
load_data = float(self.load_data.loc[date_time])
net_export = solar_data - load_data
return net_export
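# Minimal usage sketch (file paths, tariff names and the participant id below
# are illustrative only; the CSV files are expected to have a 'date_time'
# index column plus one column per participant id):
#
#   p = CSV_Participant("participant_1", "solar_consumer", "TOU", "TOU",
#                       retailer="Retailer A", solar_path="solar.csv",
#                       load_path="load.csv", solar_capacity=4.0)
#   net_kw = p.calc_net_export(datetime.datetime(2017, 1, 1, 12, 30), 30)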
| mit |
shenzebang/scikit-learn | sklearn/ensemble/weight_boosting.py | 97 | 40773 | """Weight Boosting
This module contains weight boosting estimators for both classification and
regression.
The module structure is the following:
- The ``BaseWeightBoosting`` base class implements a common ``fit`` method
for all the estimators in the module. Regression and classification
only differ from each other in the loss function that is optimized.
- ``AdaBoostClassifier`` implements adaptive boosting (AdaBoost-SAMME) for
classification problems.
- ``AdaBoostRegressor`` implements adaptive boosting (AdaBoost.R2) for
regression problems.
"""
# Authors: Noel Dawe <[email protected]>
# Gilles Louppe <[email protected]>
# Hamzeh Alsalhi <[email protected]>
# Arnaud Joly <[email protected]>
#
# Licence: BSD 3 clause
from abc import ABCMeta, abstractmethod
import numpy as np
from numpy.core.umath_tests import inner1d
from .base import BaseEnsemble
from ..base import ClassifierMixin, RegressorMixin
from ..externals import six
from ..externals.six.moves import zip
from ..externals.six.moves import xrange as range
from .forest import BaseForest
from ..tree import DecisionTreeClassifier, DecisionTreeRegressor
from ..tree.tree import BaseDecisionTree
from ..tree._tree import DTYPE
from ..utils import check_array, check_X_y, check_random_state
from ..metrics import accuracy_score, r2_score
from sklearn.utils.validation import has_fit_parameter, check_is_fitted
__all__ = [
'AdaBoostClassifier',
'AdaBoostRegressor',
]
class BaseWeightBoosting(six.with_metaclass(ABCMeta, BaseEnsemble)):
"""Base class for AdaBoost estimators.
Warning: This class should not be used directly. Use derived classes
instead.
"""
@abstractmethod
def __init__(self,
base_estimator=None,
n_estimators=50,
estimator_params=tuple(),
learning_rate=1.,
random_state=None):
super(BaseWeightBoosting, self).__init__(
base_estimator=base_estimator,
n_estimators=n_estimators,
estimator_params=estimator_params)
self.learning_rate = learning_rate
self.random_state = random_state
def fit(self, X, y, sample_weight=None):
"""Build a boosted classifier/regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are converted to CSR. The dtype is
forced to DTYPE from tree._tree if the base classifier of this
ensemble weighted boosting classifier is a tree or forest.
y : array-like of shape = [n_samples]
The target values (class labels in classification, real numbers in
regression).
sample_weight : array-like of shape = [n_samples], optional
Sample weights. If None, the sample weights are initialized to
1 / n_samples.
Returns
-------
self : object
Returns self.
"""
# Check parameters
if self.learning_rate <= 0:
raise ValueError("learning_rate must be greater than zero")
if (self.base_estimator is None or
isinstance(self.base_estimator, (BaseDecisionTree,
BaseForest))):
dtype = DTYPE
accept_sparse = 'csc'
else:
dtype = None
accept_sparse = ['csr', 'csc']
X, y = check_X_y(X, y, accept_sparse=accept_sparse, dtype=dtype)
if sample_weight is None:
# Initialize weights to 1 / n_samples
sample_weight = np.empty(X.shape[0], dtype=np.float)
sample_weight[:] = 1. / X.shape[0]
else:
# Normalize existing weights
sample_weight = sample_weight / sample_weight.sum(dtype=np.float64)
# Check that the sample weights sum is positive
if sample_weight.sum() <= 0:
raise ValueError(
"Attempting to fit with a non-positive "
"weighted number of samples.")
# Check parameters
self._validate_estimator()
# Clear any previous fit results
self.estimators_ = []
self.estimator_weights_ = np.zeros(self.n_estimators, dtype=np.float)
self.estimator_errors_ = np.ones(self.n_estimators, dtype=np.float)
for iboost in range(self.n_estimators):
# Boosting step
sample_weight, estimator_weight, estimator_error = self._boost(
iboost,
X, y,
sample_weight)
# Early termination
if sample_weight is None:
break
self.estimator_weights_[iboost] = estimator_weight
self.estimator_errors_[iboost] = estimator_error
# Stop if error is zero
if estimator_error == 0:
break
sample_weight_sum = np.sum(sample_weight)
# Stop if the sum of sample weights has become non-positive
if sample_weight_sum <= 0:
break
if iboost < self.n_estimators - 1:
# Normalize
sample_weight /= sample_weight_sum
return self
@abstractmethod
def _boost(self, iboost, X, y, sample_weight):
"""Implement a single boost.
        Warning: This method needs to be overridden by subclasses.
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. COO, DOK, and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (class labels).
sample_weight : array-like of shape = [n_samples]
The current sample weights.
Returns
-------
sample_weight : array-like of shape = [n_samples] or None
The reweighted sample weights.
If None then boosting has terminated early.
estimator_weight : float
The weight for the current boost.
If None then boosting has terminated early.
error : float
The classification error for the current boost.
If None then boosting has terminated early.
"""
pass
def staged_score(self, X, y, sample_weight=None):
"""Return staged scores for X, y.
This generator method yields the ensemble score after each iteration of
boosting and therefore allows monitoring, such as to determine the
score on a test set after each boost.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
y : array-like, shape = [n_samples]
Labels for X.
sample_weight : array-like, shape = [n_samples], optional
Sample weights.
Returns
-------
z : float
"""
for y_pred in self.staged_predict(X):
if isinstance(self, ClassifierMixin):
yield accuracy_score(y, y_pred, sample_weight=sample_weight)
else:
yield r2_score(y, y_pred, sample_weight=sample_weight)
@property
def feature_importances_(self):
"""Return the feature importances (the higher, the more important the
feature).
Returns
-------
feature_importances_ : array, shape = [n_features]
"""
if self.estimators_ is None or len(self.estimators_) == 0:
raise ValueError("Estimator not fitted, "
"call `fit` before `feature_importances_`.")
try:
norm = self.estimator_weights_.sum()
return (sum(weight * clf.feature_importances_ for weight, clf
in zip(self.estimator_weights_, self.estimators_))
/ norm)
except AttributeError:
raise AttributeError(
"Unable to compute feature importances "
"since base_estimator does not have a "
"feature_importances_ attribute")
def _check_sample_weight(self):
if not has_fit_parameter(self.base_estimator_, "sample_weight"):
raise ValueError("%s doesn't support sample_weight."
% self.base_estimator_.__class__.__name__)
def _validate_X_predict(self, X):
"""Ensure that X is in the proper format"""
if (self.base_estimator is None or
isinstance(self.base_estimator,
(BaseDecisionTree, BaseForest))):
X = check_array(X, accept_sparse='csr', dtype=DTYPE)
else:
X = check_array(X, accept_sparse=['csr', 'csc', 'coo'])
return X
def _samme_proba(estimator, n_classes, X):
"""Calculate algorithm 4, step 2, equation c) of Zhu et al [1].
References
----------
.. [1] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
"""
proba = estimator.predict_proba(X)
# Displace zero probabilities so the log is defined.
# Also fix negative elements which may occur with
# negative sample weights.
proba[proba < np.finfo(proba.dtype).eps] = np.finfo(proba.dtype).eps
log_proba = np.log(proba)
return (n_classes - 1) * (log_proba - (1. / n_classes)
* log_proba.sum(axis=1)[:, np.newaxis])
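# A useful property of the mapping above: because the mean log-probability is
# subtracted from each class term, every row of the returned array sums to
# zero, i.e. each estimator contributes a zero-mean "vote" to the SAMME.R
# decision function.  Quick sanity check (hedged sketch, not executed here;
# X, y and n_classes are placeholders):
#
#   est = DecisionTreeClassifier(max_depth=1).fit(X, y)
#   votes = _samme_proba(est, n_classes, X)
#   assert np.allclose(votes.sum(axis=1), 0.0)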
class AdaBoostClassifier(BaseWeightBoosting, ClassifierMixin):
"""An AdaBoost classifier.
An AdaBoost [1] classifier is a meta-estimator that begins by fitting a
classifier on the original dataset and then fits additional copies of the
classifier on the same dataset but where the weights of incorrectly
classified instances are adjusted such that subsequent classifiers focus
more on difficult cases.
This class implements the algorithm known as AdaBoost-SAMME [2].
Read more in the :ref:`User Guide <adaboost>`.
Parameters
----------
base_estimator : object, optional (default=DecisionTreeClassifier)
The base estimator from which the boosted ensemble is built.
Support for sample weighting is required, as well as proper `classes_`
and `n_classes_` attributes.
n_estimators : integer, optional (default=50)
The maximum number of estimators at which boosting is terminated.
In case of perfect fit, the learning procedure is stopped early.
learning_rate : float, optional (default=1.)
Learning rate shrinks the contribution of each classifier by
``learning_rate``. There is a trade-off between ``learning_rate`` and
``n_estimators``.
algorithm : {'SAMME', 'SAMME.R'}, optional (default='SAMME.R')
If 'SAMME.R' then use the SAMME.R real boosting algorithm.
``base_estimator`` must support calculation of class probabilities.
If 'SAMME' then use the SAMME discrete boosting algorithm.
The SAMME.R algorithm typically converges faster than SAMME,
achieving a lower test error with fewer boosting iterations.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Attributes
----------
estimators_ : list of classifiers
The collection of fitted sub-estimators.
classes_ : array of shape = [n_classes]
The classes labels.
n_classes_ : int
The number of classes.
estimator_weights_ : array of floats
Weights for each estimator in the boosted ensemble.
estimator_errors_ : array of floats
Classification error for each estimator in the boosted
ensemble.
feature_importances_ : array of shape = [n_features]
The feature importances if supported by the ``base_estimator``.
See also
--------
AdaBoostRegressor, GradientBoostingClassifier, DecisionTreeClassifier
References
----------
.. [1] Y. Freund, R. Schapire, "A Decision-Theoretic Generalization of
on-Line Learning and an Application to Boosting", 1995.
.. [2] J. Zhu, H. Zou, S. Rosset, T. Hastie, "Multi-class AdaBoost", 2009.
"""
def __init__(self,
base_estimator=None,
n_estimators=50,
learning_rate=1.,
algorithm='SAMME.R',
random_state=None):
super(AdaBoostClassifier, self).__init__(
base_estimator=base_estimator,
n_estimators=n_estimators,
learning_rate=learning_rate,
random_state=random_state)
self.algorithm = algorithm
def fit(self, X, y, sample_weight=None):
"""Build a boosted classifier from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (class labels).
sample_weight : array-like of shape = [n_samples], optional
Sample weights. If None, the sample weights are initialized to
``1 / n_samples``.
Returns
-------
self : object
Returns self.
"""
# Check that algorithm is supported
if self.algorithm not in ('SAMME', 'SAMME.R'):
raise ValueError("algorithm %s is not supported" % self.algorithm)
# Fit
return super(AdaBoostClassifier, self).fit(X, y, sample_weight)
def _validate_estimator(self):
"""Check the estimator and set the base_estimator_ attribute."""
super(AdaBoostClassifier, self)._validate_estimator(
default=DecisionTreeClassifier(max_depth=1))
# SAMME-R requires predict_proba-enabled base estimators
if self.algorithm == 'SAMME.R':
if not hasattr(self.base_estimator_, 'predict_proba'):
raise TypeError(
"AdaBoostClassifier with algorithm='SAMME.R' requires "
"that the weak learner supports the calculation of class "
"probabilities with a predict_proba method.\n"
"Please change the base estimator or set "
"algorithm='SAMME' instead.")
self._check_sample_weight()
def _boost(self, iboost, X, y, sample_weight):
"""Implement a single boost.
Perform a single boost according to the real multi-class SAMME.R
algorithm or to the discrete SAMME algorithm and return the updated
sample weights.
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (class labels).
sample_weight : array-like of shape = [n_samples]
The current sample weights.
Returns
-------
sample_weight : array-like of shape = [n_samples] or None
The reweighted sample weights.
If None then boosting has terminated early.
estimator_weight : float
The weight for the current boost.
If None then boosting has terminated early.
estimator_error : float
The classification error for the current boost.
If None then boosting has terminated early.
"""
if self.algorithm == 'SAMME.R':
return self._boost_real(iboost, X, y, sample_weight)
else: # elif self.algorithm == "SAMME":
return self._boost_discrete(iboost, X, y, sample_weight)
def _boost_real(self, iboost, X, y, sample_weight):
"""Implement a single boost using the SAMME.R real algorithm."""
estimator = self._make_estimator()
try:
estimator.set_params(random_state=self.random_state)
except ValueError:
pass
estimator.fit(X, y, sample_weight=sample_weight)
y_predict_proba = estimator.predict_proba(X)
if iboost == 0:
self.classes_ = getattr(estimator, 'classes_', None)
self.n_classes_ = len(self.classes_)
y_predict = self.classes_.take(np.argmax(y_predict_proba, axis=1),
axis=0)
# Instances incorrectly classified
incorrect = y_predict != y
# Error fraction
estimator_error = np.mean(
np.average(incorrect, weights=sample_weight, axis=0))
# Stop if classification is perfect
if estimator_error <= 0:
return sample_weight, 1., 0.
# Construct y coding as described in Zhu et al [2]:
#
# y_k = 1 if c == k else -1 / (K - 1)
#
# where K == n_classes_ and c, k in [0, K) are indices along the second
# axis of the y coding with c being the index corresponding to the true
# class label.
n_classes = self.n_classes_
classes = self.classes_
y_codes = np.array([-1. / (n_classes - 1), 1.])
y_coding = y_codes.take(classes == y[:, np.newaxis])
# Displace zero probabilities so the log is defined.
# Also fix negative elements which may occur with
# negative sample weights.
proba = y_predict_proba # alias for readability
proba[proba < np.finfo(proba.dtype).eps] = np.finfo(proba.dtype).eps
# Boost weight using multi-class AdaBoost SAMME.R alg
estimator_weight = (-1. * self.learning_rate
* (((n_classes - 1.) / n_classes) *
inner1d(y_coding, np.log(y_predict_proba))))
# Only boost the weights if it will fit again
if not iboost == self.n_estimators - 1:
# Only boost positive weights
sample_weight *= np.exp(estimator_weight *
((sample_weight > 0) |
(estimator_weight < 0)))
return sample_weight, 1., estimator_error
def _boost_discrete(self, iboost, X, y, sample_weight):
"""Implement a single boost using the SAMME discrete algorithm."""
estimator = self._make_estimator()
try:
estimator.set_params(random_state=self.random_state)
except ValueError:
pass
estimator.fit(X, y, sample_weight=sample_weight)
y_predict = estimator.predict(X)
if iboost == 0:
self.classes_ = getattr(estimator, 'classes_', None)
self.n_classes_ = len(self.classes_)
# Instances incorrectly classified
incorrect = y_predict != y
# Error fraction
estimator_error = np.mean(
np.average(incorrect, weights=sample_weight, axis=0))
# Stop if classification is perfect
if estimator_error <= 0:
return sample_weight, 1., 0.
n_classes = self.n_classes_
# Stop if the error is at least as bad as random guessing
if estimator_error >= 1. - (1. / n_classes):
self.estimators_.pop(-1)
if len(self.estimators_) == 0:
raise ValueError('BaseClassifier in AdaBoostClassifier '
'ensemble is worse than random, ensemble '
'can not be fit.')
return None, None, None
# Boost weight using multi-class AdaBoost SAMME alg
estimator_weight = self.learning_rate * (
np.log((1. - estimator_error) / estimator_error) +
np.log(n_classes - 1.))
        # Only boost the weights if it will fit again
if not iboost == self.n_estimators - 1:
# Only boost positive weights
sample_weight *= np.exp(estimator_weight * incorrect *
((sample_weight > 0) |
(estimator_weight < 0)))
return sample_weight, estimator_weight, estimator_error
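    # Worked example of the SAMME estimator weight above (learning_rate=1):
    # with K=2 classes and a weighted error of 0.25, estimator_weight =
    # log((1 - 0.25) / 0.25) + log(2 - 1) = log(3) ~= 1.10, so misclassified
    # samples get their weights multiplied by exp(1.10) ~= 3 while correctly
    # classified samples are left unchanged before renormalization.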
def predict(self, X):
"""Predict classes for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
y : array of shape = [n_samples]
The predicted classes.
"""
pred = self.decision_function(X)
if self.n_classes_ == 2:
return self.classes_.take(pred > 0, axis=0)
return self.classes_.take(np.argmax(pred, axis=1), axis=0)
def staged_predict(self, X):
"""Return staged predictions for X.
The predicted class of an input sample is computed as the weighted mean
prediction of the classifiers in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boosting and therefore allows monitoring, such as to
determine the prediction on a test set after each boost.
Parameters
----------
X : array-like of shape = [n_samples, n_features]
The input samples.
Returns
-------
y : generator of array, shape = [n_samples]
The predicted classes.
"""
n_classes = self.n_classes_
classes = self.classes_
if n_classes == 2:
for pred in self.staged_decision_function(X):
yield np.array(classes.take(pred > 0, axis=0))
else:
for pred in self.staged_decision_function(X):
yield np.array(classes.take(
np.argmax(pred, axis=1), axis=0))
def decision_function(self, X):
"""Compute the decision function of ``X``.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
score : array, shape = [n_samples, k]
The decision function of the input samples. The order of
            outputs is the same as that of the `classes_` attribute.
            Binary classification is a special case with ``k == 1``,
            otherwise ``k == n_classes``. For binary classification,
values closer to -1 or 1 mean more like the first or second
class in ``classes_``, respectively.
"""
check_is_fitted(self, "n_classes_")
X = self._validate_X_predict(X)
n_classes = self.n_classes_
classes = self.classes_[:, np.newaxis]
pred = None
if self.algorithm == 'SAMME.R':
# The weights are all 1. for SAMME.R
pred = sum(_samme_proba(estimator, n_classes, X)
for estimator in self.estimators_)
else: # self.algorithm == "SAMME"
pred = sum((estimator.predict(X) == classes).T * w
for estimator, w in zip(self.estimators_,
self.estimator_weights_))
pred /= self.estimator_weights_.sum()
if n_classes == 2:
pred[:, 0] *= -1
return pred.sum(axis=1)
return pred
def staged_decision_function(self, X):
"""Compute decision function of ``X`` for each boosting iteration.
This method allows monitoring (i.e. determine error on testing set)
after each boosting iteration.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
score : generator of array, shape = [n_samples, k]
The decision function of the input samples. The order of
            outputs is the same as that of the `classes_` attribute.
            Binary classification is a special case with ``k == 1``,
            otherwise ``k == n_classes``. For binary classification,
values closer to -1 or 1 mean more like the first or second
class in ``classes_``, respectively.
"""
check_is_fitted(self, "n_classes_")
X = self._validate_X_predict(X)
n_classes = self.n_classes_
classes = self.classes_[:, np.newaxis]
pred = None
norm = 0.
for weight, estimator in zip(self.estimator_weights_,
self.estimators_):
norm += weight
if self.algorithm == 'SAMME.R':
# The weights are all 1. for SAMME.R
current_pred = _samme_proba(estimator, n_classes, X)
else: # elif self.algorithm == "SAMME":
current_pred = estimator.predict(X)
current_pred = (current_pred == classes).T * weight
if pred is None:
pred = current_pred
else:
pred += current_pred
if n_classes == 2:
tmp_pred = np.copy(pred)
tmp_pred[:, 0] *= -1
yield (tmp_pred / norm).sum(axis=1)
else:
yield pred / norm
def predict_proba(self, X):
"""Predict class probabilities for X.
        The predicted class probabilities of an input sample are computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
        p : array of shape = [n_samples, n_classes]
            The class probabilities of the input samples. The order of
            outputs is the same as that of the `classes_` attribute.
"""
check_is_fitted(self, "n_classes_")
n_classes = self.n_classes_
X = self._validate_X_predict(X)
if self.algorithm == 'SAMME.R':
# The weights are all 1. for SAMME.R
proba = sum(_samme_proba(estimator, n_classes, X)
for estimator in self.estimators_)
else: # self.algorithm == "SAMME"
proba = sum(estimator.predict_proba(X) * w
for estimator, w in zip(self.estimators_,
self.estimator_weights_))
proba /= self.estimator_weights_.sum()
proba = np.exp((1. / (n_classes - 1)) * proba)
normalizer = proba.sum(axis=1)[:, np.newaxis]
normalizer[normalizer == 0.0] = 1.0
proba /= normalizer
return proba
def staged_predict_proba(self, X):
"""Predict class probabilities for X.
        The predicted class probabilities of an input sample are computed as
the weighted mean predicted class probabilities of the classifiers
in the ensemble.
This generator method yields the ensemble predicted class probabilities
after each iteration of boosting and therefore allows monitoring, such
as to determine the predicted class probabilities on a test set after
each boost.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
        p : generator of array, shape = [n_samples, n_classes]
            The class probabilities of the input samples. The order of
            outputs is the same as that of the `classes_` attribute.
"""
X = self._validate_X_predict(X)
n_classes = self.n_classes_
proba = None
norm = 0.
for weight, estimator in zip(self.estimator_weights_,
self.estimators_):
norm += weight
if self.algorithm == 'SAMME.R':
# The weights are all 1. for SAMME.R
current_proba = _samme_proba(estimator, n_classes, X)
else: # elif self.algorithm == "SAMME":
current_proba = estimator.predict_proba(X) * weight
if proba is None:
proba = current_proba
else:
proba += current_proba
real_proba = np.exp((1. / (n_classes - 1)) * (proba / norm))
normalizer = real_proba.sum(axis=1)[:, np.newaxis]
normalizer[normalizer == 0.0] = 1.0
real_proba /= normalizer
yield real_proba
def predict_log_proba(self, X):
"""Predict class log-probabilities for X.
        The predicted class log-probabilities of an input sample are computed as
the weighted mean predicted class log-probabilities of the classifiers
in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
        p : array of shape = [n_samples, n_classes]
            The class log-probabilities of the input samples. The order of
            outputs is the same as that of the `classes_` attribute.
"""
return np.log(self.predict_proba(X))
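# Minimal usage sketch for AdaBoostClassifier (synthetic data, illustrative
# only -- no particular scores are implied):
#
#   from sklearn.ensemble import AdaBoostClassifier
#   from sklearn.datasets import make_classification
#   X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
#   clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, y)
#   proba = clf.predict_proba(X[:5])
#   staged = list(clf.staged_score(X, y))  # one accuracy per boosting round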
class AdaBoostRegressor(BaseWeightBoosting, RegressorMixin):
"""An AdaBoost regressor.
An AdaBoost [1] regressor is a meta-estimator that begins by fitting a
regressor on the original dataset and then fits additional copies of the
regressor on the same dataset but where the weights of instances are
adjusted according to the error of the current prediction. As such,
subsequent regressors focus more on difficult cases.
This class implements the algorithm known as AdaBoost.R2 [2].
Read more in the :ref:`User Guide <adaboost>`.
Parameters
----------
base_estimator : object, optional (default=DecisionTreeRegressor)
The base estimator from which the boosted ensemble is built.
Support for sample weighting is required.
n_estimators : integer, optional (default=50)
The maximum number of estimators at which boosting is terminated.
In case of perfect fit, the learning procedure is stopped early.
learning_rate : float, optional (default=1.)
Learning rate shrinks the contribution of each regressor by
``learning_rate``. There is a trade-off between ``learning_rate`` and
``n_estimators``.
loss : {'linear', 'square', 'exponential'}, optional (default='linear')
The loss function to use when updating the weights after each
boosting iteration.
random_state : int, RandomState instance or None, optional (default=None)
If int, random_state is the seed used by the random number generator;
If RandomState instance, random_state is the random number generator;
If None, the random number generator is the RandomState instance used
by `np.random`.
Attributes
----------
    estimators_ : list of regressors
The collection of fitted sub-estimators.
estimator_weights_ : array of floats
Weights for each estimator in the boosted ensemble.
estimator_errors_ : array of floats
Regression error for each estimator in the boosted ensemble.
feature_importances_ : array of shape = [n_features]
The feature importances if supported by the ``base_estimator``.
See also
--------
AdaBoostClassifier, GradientBoostingRegressor, DecisionTreeRegressor
References
----------
.. [1] Y. Freund, R. Schapire, "A Decision-Theoretic Generalization of
on-Line Learning and an Application to Boosting", 1995.
.. [2] H. Drucker, "Improving Regressors using Boosting Techniques", 1997.
"""
def __init__(self,
base_estimator=None,
n_estimators=50,
learning_rate=1.,
loss='linear',
random_state=None):
super(AdaBoostRegressor, self).__init__(
base_estimator=base_estimator,
n_estimators=n_estimators,
learning_rate=learning_rate,
random_state=random_state)
self.loss = loss
self.random_state = random_state
def fit(self, X, y, sample_weight=None):
"""Build a boosted regressor from the training set (X, y).
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (real numbers).
sample_weight : array-like of shape = [n_samples], optional
Sample weights. If None, the sample weights are initialized to
1 / n_samples.
Returns
-------
self : object
Returns self.
"""
# Check loss
if self.loss not in ('linear', 'square', 'exponential'):
raise ValueError(
"loss must be 'linear', 'square', or 'exponential'")
# Fit
return super(AdaBoostRegressor, self).fit(X, y, sample_weight)
def _validate_estimator(self):
"""Check the estimator and set the base_estimator_ attribute."""
super(AdaBoostRegressor, self)._validate_estimator(
default=DecisionTreeRegressor(max_depth=3))
self._check_sample_weight()
def _boost(self, iboost, X, y, sample_weight):
"""Implement a single boost for regression
Perform a single boost according to the AdaBoost.R2 algorithm and
return the updated sample weights.
Parameters
----------
iboost : int
The index of the current boost iteration.
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
y : array-like of shape = [n_samples]
The target values (class labels in classification, real numbers in
regression).
sample_weight : array-like of shape = [n_samples]
The current sample weights.
Returns
-------
sample_weight : array-like of shape = [n_samples] or None
The reweighted sample weights.
If None then boosting has terminated early.
estimator_weight : float
The weight for the current boost.
If None then boosting has terminated early.
estimator_error : float
The regression error for the current boost.
If None then boosting has terminated early.
"""
estimator = self._make_estimator()
try:
estimator.set_params(random_state=self.random_state)
except ValueError:
pass
generator = check_random_state(self.random_state)
# Weighted sampling of the training set with replacement
# For NumPy >= 1.7.0 use np.random.choice
cdf = sample_weight.cumsum()
cdf /= cdf[-1]
uniform_samples = generator.random_sample(X.shape[0])
bootstrap_idx = cdf.searchsorted(uniform_samples, side='right')
        # ensure bootstrap_idx is an ndarray (searchsorted can return a scalar)
bootstrap_idx = np.array(bootstrap_idx, copy=False)
# Fit on the bootstrapped sample and obtain a prediction
# for all samples in the training set
estimator.fit(X[bootstrap_idx], y[bootstrap_idx])
y_predict = estimator.predict(X)
error_vect = np.abs(y_predict - y)
error_max = error_vect.max()
if error_max != 0.:
error_vect /= error_max
if self.loss == 'square':
error_vect **= 2
elif self.loss == 'exponential':
error_vect = 1. - np.exp(- error_vect)
# Calculate the average loss
estimator_error = (sample_weight * error_vect).sum()
if estimator_error <= 0:
# Stop if fit is perfect
return sample_weight, 1., 0.
elif estimator_error >= 0.5:
# Discard current estimator only if it isn't the only one
if len(self.estimators_) > 1:
self.estimators_.pop(-1)
return None, None, None
beta = estimator_error / (1. - estimator_error)
# Boost weight using AdaBoost.R2 alg
estimator_weight = self.learning_rate * np.log(1. / beta)
if not iboost == self.n_estimators - 1:
sample_weight *= np.power(
beta,
(1. - error_vect) * self.learning_rate)
return sample_weight, estimator_weight, estimator_error
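    # Worked example of the AdaBoost.R2 update above (learning_rate=1): an
    # average loss of 0.2 gives beta = 0.2 / 0.8 = 0.25 and an estimator
    # weight of log(1 / 0.25) = log(4) ~= 1.39.  Easy samples (error_vect
    # near 0) are multiplied by beta ** 1 = 0.25 and down-weighted, while the
    # hardest samples (error_vect near 1) keep weights ~beta ** 0 = 1 before
    # renormalization on the next iteration.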
def _get_median_predict(self, X, limit):
# Evaluate predictions of all estimators
predictions = np.array([
est.predict(X) for est in self.estimators_[:limit]]).T
# Sort the predictions
sorted_idx = np.argsort(predictions, axis=1)
# Find index of median prediction for each sample
weight_cdf = self.estimator_weights_[sorted_idx].cumsum(axis=1)
median_or_above = weight_cdf >= 0.5 * weight_cdf[:, -1][:, np.newaxis]
median_idx = median_or_above.argmax(axis=1)
median_estimators = sorted_idx[np.arange(X.shape[0]), median_idx]
# Return median predictions
return predictions[np.arange(X.shape[0]), median_estimators]
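    # The weighted median above works per sample: predictions are sorted, the
    # matching estimator weights are accumulated, and the first estimator
    # whose cumulative weight reaches half of the total weight is picked, so
    # a single extreme base regressor cannot dominate the ensemble output.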
def predict(self, X):
"""Predict regression value for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the classifiers in the ensemble.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
y : array of shape = [n_samples]
The predicted regression values.
"""
check_is_fitted(self, "estimator_weights_")
X = self._validate_X_predict(X)
return self._get_median_predict(X, len(self.estimators_))
def staged_predict(self, X):
"""Return staged predictions for X.
The predicted regression value of an input sample is computed
as the weighted median prediction of the classifiers in the ensemble.
This generator method yields the ensemble prediction after each
iteration of boosting and therefore allows monitoring, such as to
determine the prediction on a test set after each boost.
Parameters
----------
X : {array-like, sparse matrix} of shape = [n_samples, n_features]
The training input samples. Sparse matrix can be CSC, CSR, COO,
DOK, or LIL. DOK and LIL are converted to CSR.
Returns
-------
y : generator of array, shape = [n_samples]
The predicted regression values.
"""
check_is_fitted(self, "estimator_weights_")
X = self._validate_X_predict(X)
for i, _ in enumerate(self.estimators_, 1):
yield self._get_median_predict(X, limit=i)
| bsd-3-clause |
toobaz/pandas | pandas/tests/reshape/test_qcut.py | 2 | 6328 | import os
import numpy as np
import pytest
from pandas import (
Categorical,
DatetimeIndex,
Interval,
IntervalIndex,
NaT,
Series,
TimedeltaIndex,
Timestamp,
cut,
date_range,
isna,
qcut,
timedelta_range,
)
from pandas.api.types import CategoricalDtype as CDT
from pandas.core.algorithms import quantile
import pandas.util.testing as tm
from pandas.tseries.offsets import Day, Nano
def test_qcut():
arr = np.random.randn(1000)
    # We store the bins as an Index that has been rounded,
    # so comparisons are a bit tricky.
labels, bins = qcut(arr, 4, retbins=True)
ex_bins = quantile(arr, [0, 0.25, 0.5, 0.75, 1.0])
result = labels.categories.left.values
assert np.allclose(result, ex_bins[:-1], atol=1e-2)
result = labels.categories.right.values
assert np.allclose(result, ex_bins[1:], atol=1e-2)
ex_levels = cut(arr, ex_bins, include_lowest=True)
tm.assert_categorical_equal(labels, ex_levels)
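# A small additional check of the core qcut contract exercised above:
# quantile-based binning yields (nearly) equal-sized groups, unlike cut's
# equal-width bins.  The test below is an illustrative sketch in the same
# style as the surrounding tests.
def test_qcut_equal_sized_groups():
    values = np.arange(100)
    labels = qcut(values, 4, labels=False)
    # 100 evenly spaced values split into quartiles of 25 observations each.
    assert np.bincount(labels).tolist() == [25, 25, 25, 25]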
def test_qcut_bounds():
arr = np.random.randn(1000)
factor = qcut(arr, 10, labels=False)
assert len(np.unique(factor)) == 10
def test_qcut_specify_quantiles():
arr = np.random.randn(100)
factor = qcut(arr, [0, 0.25, 0.5, 0.75, 1.0])
expected = qcut(arr, 4)
tm.assert_categorical_equal(factor, expected)
def test_qcut_all_bins_same():
with pytest.raises(ValueError, match="edges.*unique"):
qcut([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 3)
def test_qcut_include_lowest():
values = np.arange(10)
ii = qcut(values, 4)
ex_levels = IntervalIndex(
[
Interval(-0.001, 2.25),
Interval(2.25, 4.5),
Interval(4.5, 6.75),
Interval(6.75, 9),
]
)
tm.assert_index_equal(ii.categories, ex_levels)
def test_qcut_nas():
arr = np.random.randn(100)
arr[:20] = np.nan
result = qcut(arr, 4)
assert isna(result[:20]).all()
def test_qcut_index():
result = qcut([0, 2], 2)
intervals = [Interval(-0.001, 1), Interval(1, 2)]
expected = Categorical(intervals, ordered=True)
tm.assert_categorical_equal(result, expected)
def test_qcut_binning_issues(datapath):
# see gh-1978, gh-1979
cut_file = datapath(os.path.join("reshape", "data", "cut_data.csv"))
arr = np.loadtxt(cut_file)
result = qcut(arr, 20)
starts = []
ends = []
for lev in np.unique(result):
s = lev.left
e = lev.right
assert s != e
starts.append(float(s))
ends.append(float(e))
for (sp, sn), (ep, en) in zip(
zip(starts[:-1], starts[1:]), zip(ends[:-1], ends[1:])
):
assert sp < sn
assert ep < en
assert ep <= sn
def test_qcut_return_intervals():
ser = Series([0, 1, 2, 3, 4, 5, 6, 7, 8])
res = qcut(ser, [0, 0.333, 0.666, 1])
exp_levels = np.array(
[Interval(-0.001, 2.664), Interval(2.664, 5.328), Interval(5.328, 8)]
)
exp = Series(exp_levels.take([0, 0, 0, 1, 1, 1, 2, 2, 2])).astype(CDT(ordered=True))
tm.assert_series_equal(res, exp)
@pytest.mark.parametrize(
"kwargs,msg",
[
(dict(duplicates="drop"), None),
(dict(), "Bin edges must be unique"),
(dict(duplicates="raise"), "Bin edges must be unique"),
(dict(duplicates="foo"), "invalid value for 'duplicates' parameter"),
],
)
def test_qcut_duplicates_bin(kwargs, msg):
# see gh-7751
values = [0, 0, 0, 0, 1, 2, 3]
if msg is not None:
with pytest.raises(ValueError, match=msg):
qcut(values, 3, **kwargs)
else:
result = qcut(values, 3, **kwargs)
expected = IntervalIndex([Interval(-0.001, 1), Interval(1, 3)])
tm.assert_index_equal(result.categories, expected)
@pytest.mark.parametrize(
"data,start,end", [(9.0, 8.999, 9.0), (0.0, -0.001, 0.0), (-9.0, -9.001, -9.0)]
)
@pytest.mark.parametrize("length", [1, 2])
@pytest.mark.parametrize("labels", [None, False])
def test_single_quantile(data, start, end, length, labels):
# see gh-15431
ser = Series([data] * length)
result = qcut(ser, 1, labels=labels)
if labels is None:
intervals = IntervalIndex([Interval(start, end)] * length, closed="right")
expected = Series(intervals).astype(CDT(ordered=True))
else:
expected = Series([0] * length)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"ser",
[
Series(DatetimeIndex(["20180101", NaT, "20180103"])),
Series(TimedeltaIndex(["0 days", NaT, "2 days"])),
],
ids=lambda x: str(x.dtype),
)
def test_qcut_nat(ser):
# see gh-19768
intervals = IntervalIndex.from_tuples(
[(ser[0] - Nano(), ser[2] - Day()), np.nan, (ser[2] - Day(), ser[2])]
)
expected = Series(Categorical(intervals, ordered=True))
result = qcut(ser, 2)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("bins", [3, np.linspace(0, 1, 4)])
def test_datetime_tz_qcut(bins):
# see gh-19872
tz = "US/Eastern"
ser = Series(date_range("20130101", periods=3, tz=tz))
result = qcut(ser, bins)
expected = Series(
IntervalIndex(
[
Interval(
Timestamp("2012-12-31 23:59:59.999999999", tz=tz),
Timestamp("2013-01-01 16:00:00", tz=tz),
),
Interval(
Timestamp("2013-01-01 16:00:00", tz=tz),
Timestamp("2013-01-02 08:00:00", tz=tz),
),
Interval(
Timestamp("2013-01-02 08:00:00", tz=tz),
Timestamp("2013-01-03 00:00:00", tz=tz),
),
]
)
).astype(CDT(ordered=True))
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"arg,expected_bins",
[
[
timedelta_range("1day", periods=3),
TimedeltaIndex(["1 days", "2 days", "3 days"]),
],
[
date_range("20180101", periods=3),
DatetimeIndex(["2018-01-01", "2018-01-02", "2018-01-03"]),
],
],
)
def test_date_like_qcut_bins(arg, expected_bins):
# see gh-19891
ser = Series(arg)
result, result_bins = qcut(ser, 2, retbins=True)
tm.assert_index_equal(result_bins, expected_bins)
| bsd-3-clause |
benoitsteiner/tensorflow | tensorflow/examples/learn/boston.py | 33 | 1981 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Example of DNNRegressor for Housing dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from sklearn import datasets
from sklearn import metrics
from sklearn import model_selection
from sklearn import preprocessing
import tensorflow as tf
def main(unused_argv):
# Load dataset
boston = datasets.load_boston()
x, y = boston.data, boston.target
# Split dataset into train / test
x_train, x_test, y_train, y_test = model_selection.train_test_split(
x, y, test_size=0.2, random_state=42)
# Scale data (training set) to 0 mean and unit standard deviation.
scaler = preprocessing.StandardScaler()
x_train = scaler.fit_transform(x_train)
# Build 2 layer fully connected DNN with 10, 10 units respectively.
feature_columns = tf.contrib.learn.infer_real_valued_columns_from_input(
x_train)
regressor = tf.contrib.learn.DNNRegressor(
feature_columns=feature_columns, hidden_units=[10, 10])
# Fit
regressor.fit(x_train, y_train, steps=5000, batch_size=1)
  # Apply the same scaling (fitted on the training set) to the test set
x_transformed = scaler.transform(x_test)
# Predict and score
y_predicted = list(regressor.predict(x_transformed, as_iterable=True))
score = metrics.mean_squared_error(y_predicted, y_test)
print('MSE: {0:f}'.format(score))
if __name__ == '__main__':
tf.app.run()
| apache-2.0 |
cwu2011/scikit-learn | sklearn/datasets/lfw.py | 38 | 19042 | """Loader for the Labeled Faces in the Wild (LFW) dataset
This dataset is a collection of JPEG pictures of famous people collected
over the internet, all details are available on the official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. The typical task is called
Face Verification: given a pair of two pictures, a binary classifier
must predict whether the two images are from the same person.
An alternative task, Face Recognition or Face Identification is:
given the picture of the face of an unknown person, identify the name
of the person by referring to a gallery of previously seen pictures of
identified persons.
Both Face Verification and Face Recognition are tasks that are typically
performed on the output of a model trained to perform Face Detection. The
most popular model for Face Detection is called Viola-Johns and is
implemented in the OpenCV library. The LFW faces were extracted by this face
detector from various online websites.
"""
# Copyright (c) 2011 Olivier Grisel <[email protected]>
# License: BSD 3 clause
from os import listdir, makedirs, remove
from os.path import join, exists, isdir
from sklearn.utils import deprecated
import logging
import numpy as np
try:
import urllib.request as urllib # for backwards compatibility
except ImportError:
import urllib
from .base import get_data_home, Bunch
from ..externals.joblib import Memory
from ..externals.six import b
logger = logging.getLogger(__name__)
BASE_URL = "http://vis-www.cs.umass.edu/lfw/"
ARCHIVE_NAME = "lfw.tgz"
FUNNELED_ARCHIVE_NAME = "lfw-funneled.tgz"
TARGET_FILENAMES = [
'pairsDevTrain.txt',
'pairsDevTest.txt',
'pairs.txt',
]
def scale_face(face):
"""Scale back to 0-1 range in case of normalization for plotting"""
scaled = face - face.min()
scaled /= scaled.max()
return scaled
#
# Common private utilities for data fetching from the original LFW website
# local disk caching, and image decoding.
#
def check_fetch_lfw(data_home=None, funneled=True, download_if_missing=True):
"""Helper function to download any missing LFW data"""
data_home = get_data_home(data_home=data_home)
lfw_home = join(data_home, "lfw_home")
if funneled:
archive_path = join(lfw_home, FUNNELED_ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw_funneled")
archive_url = BASE_URL + FUNNELED_ARCHIVE_NAME
else:
archive_path = join(lfw_home, ARCHIVE_NAME)
data_folder_path = join(lfw_home, "lfw")
archive_url = BASE_URL + ARCHIVE_NAME
if not exists(lfw_home):
makedirs(lfw_home)
for target_filename in TARGET_FILENAMES:
target_filepath = join(lfw_home, target_filename)
if not exists(target_filepath):
if download_if_missing:
url = BASE_URL + target_filename
logger.warn("Downloading LFW metadata: %s", url)
urllib.urlretrieve(url, target_filepath)
else:
raise IOError("%s is missing" % target_filepath)
if not exists(data_folder_path):
if not exists(archive_path):
if download_if_missing:
logger.warn("Downloading LFW data (~200MB): %s", archive_url)
urllib.urlretrieve(archive_url, archive_path)
else:
                raise IOError("%s is missing" % archive_path)
import tarfile
logger.info("Decompressing the data archive to %s", data_folder_path)
tarfile.open(archive_path, "r:gz").extractall(path=lfw_home)
remove(archive_path)
return lfw_home, data_folder_path
def _load_imgs(file_paths, slice_, color, resize):
"""Internally used to load images"""
# Try to import imread and imresize from PIL. We do this here to prevent
# the whole sklearn.datasets module from depending on PIL.
try:
try:
from scipy.misc import imread
except ImportError:
from scipy.misc.pilutil import imread
from scipy.misc import imresize
except ImportError:
raise ImportError("The Python Imaging Library (PIL)"
" is required to load data from jpeg files")
# compute the portion of the images to load to respect the slice_ parameter
# given by the caller
default_slice = (slice(0, 250), slice(0, 250))
if slice_ is None:
slice_ = default_slice
else:
slice_ = tuple(s or ds for s, ds in zip(slice_, default_slice))
h_slice, w_slice = slice_
h = (h_slice.stop - h_slice.start) // (h_slice.step or 1)
w = (w_slice.stop - w_slice.start) // (w_slice.step or 1)
if resize is not None:
resize = float(resize)
h = int(resize * h)
w = int(resize * w)
# allocate some contiguous memory to host the decoded image slices
n_faces = len(file_paths)
if not color:
faces = np.zeros((n_faces, h, w), dtype=np.float32)
else:
faces = np.zeros((n_faces, h, w, 3), dtype=np.float32)
    # iterate over the collected file paths to load the jpeg files as numpy
# arrays
for i, file_path in enumerate(file_paths):
if i % 1000 == 0:
logger.info("Loading face #%05d / %05d", i + 1, n_faces)
face = np.asarray(imread(file_path)[slice_], dtype=np.float32)
face /= 255.0 # scale uint8 coded colors to the [0.0, 1.0] floats
if resize is not None:
face = imresize(face, resize)
if not color:
            # average the color channels to compute a gray-level
            # representation
face = face.mean(axis=2)
faces[i, ...] = face
return faces
#
# Task #1: Face Identification on picture with names
#
def _fetch_lfw_people(data_folder_path, slice_=None, color=False, resize=None,
min_faces_per_person=0):
"""Perform the actual data loading for the lfw people dataset
This operation is meant to be cached by a joblib wrapper.
"""
    # scan the data folder content to retain people with more than
# `min_faces_per_person` face pictures
person_names, file_paths = [], []
for person_name in sorted(listdir(data_folder_path)):
folder_path = join(data_folder_path, person_name)
if not isdir(folder_path):
continue
paths = [join(folder_path, f) for f in listdir(folder_path)]
n_pictures = len(paths)
if n_pictures >= min_faces_per_person:
person_name = person_name.replace('_', ' ')
person_names.extend([person_name] * n_pictures)
file_paths.extend(paths)
n_faces = len(file_paths)
if n_faces == 0:
raise ValueError("min_faces_per_person=%d is too restrictive" %
min_faces_per_person)
target_names = np.unique(person_names)
target = np.searchsorted(target_names, person_names)
faces = _load_imgs(file_paths, slice_, color, resize)
# shuffle the faces with a deterministic RNG scheme to avoid having
# all faces of the same person in a row, as it would break some
# cross validation and learning algorithms such as SGD and online
# k-means that make an IID assumption
indices = np.arange(n_faces)
np.random.RandomState(42).shuffle(indices)
faces, target = faces[indices], target[indices]
return faces, target, target_names
def fetch_lfw_people(data_home=None, funneled=True, resize=0.5,
min_faces_per_person=0, color=False,
slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) people dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Recognition (or Identification): given the
picture of a face, find the name of the person given a training set
(gallery).
The original images are 250 x 250 pixels, but the default slice and resize
    arguments reduce them to 62 x 47.
Parameters
----------
data_home : optional, default: None
Specify another download and cache folder for the datasets. By default
all scikit learn data is stored in '~/scikit_learn_data' subfolders.
funneled : boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize : float, optional, default 0.5
Ratio used to resize the each face picture.
    min_faces_per_person : int, optional, default 0
The extracted dataset will only retain pictures of people that have at
least `min_faces_per_person` different pictures.
color : boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
        one more dimension than the shape with color = False.
slice_ : optional
Provide a custom 2D slice (height, width) to extract the
        'interesting' part of the jpeg files and avoid statistical
        correlation with the background.
download_if_missing : optional, True by default
        If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
Returns
-------
dataset : dict-like object with the following attributes:
dataset.data : numpy array of shape (13233, 2914)
Each row corresponds to a ravelled face image of original size 62 x 47
pixels. Changing the ``slice_`` or resize parameters will change the shape
of the output.
dataset.images : numpy array of shape (13233, 62, 47)
Each row is a face image corresponding to one of the 5749 people in
the dataset. Changing the ``slice_`` or resize parameters will change the shape
of the output.
dataset.target : numpy array of shape (13233,)
Labels associated to each face image. Those labels range from 0-5748
and correspond to the person IDs.
dataset.DESCR : string
Description of the Labeled Faces in the Wild (LFW) dataset.
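    Examples
    --------
    A minimal usage sketch (illustrative only; the parameter values are
    arbitrary and the first call downloads the image archive, which can
    take a while)::
        from sklearn.datasets import fetch_lfw_people
        lfw_people = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
        X = lfw_people.data             # flattened gray-level face images
        y = lfw_people.target           # integer person ids
        names = lfw_people.target_names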
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading LFW people faces from %s', lfw_home)
    # wrap the loader in a memoizing function that will return memmapped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_people)
    # load and memoize the faces as np arrays
faces, target, target_names = load_func(
data_folder_path, resize=resize,
min_faces_per_person=min_faces_per_person, color=color, slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=faces.reshape(len(faces), -1), images=faces,
target=target, target_names=target_names,
DESCR="LFW faces dataset")
#
# Task #2: Face Verification on pairs of face pictures
#
def _fetch_lfw_pairs(index_file_path, data_folder_path, slice_=None,
color=False, resize=None):
"""Perform the actual data loading for the LFW pairs dataset
This operation is meant to be cached by a joblib wrapper.
"""
# parse the index file to find the number of pairs to be able to allocate
# the right amount of memory before starting to decode the jpeg files
with open(index_file_path, 'rb') as index_file:
split_lines = [ln.strip().split(b('\t')) for ln in index_file]
pair_specs = [sl for sl in split_lines if len(sl) > 2]
n_pairs = len(pair_specs)
    # iterating over the metadata lines for each pair to find the filename to
# decode and load in memory
target = np.zeros(n_pairs, dtype=np.int)
file_paths = list()
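    # each tab-separated line encodes one pair: 3 fields (name, idx1, idx2)
    # reference two pictures of the same person, while 4 fields
    # (name1, idx1, name2, idx2) reference pictures of two different people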
for i, components in enumerate(pair_specs):
if len(components) == 3:
target[i] = 1
pair = (
(components[0], int(components[1]) - 1),
(components[0], int(components[2]) - 1),
)
elif len(components) == 4:
target[i] = 0
pair = (
(components[0], int(components[1]) - 1),
(components[2], int(components[3]) - 1),
)
else:
raise ValueError("invalid line %d: %r" % (i + 1, components))
for j, (name, idx) in enumerate(pair):
try:
person_folder = join(data_folder_path, name)
except TypeError:
person_folder = join(data_folder_path, str(name, 'UTF-8'))
filenames = list(sorted(listdir(person_folder)))
file_path = join(person_folder, filenames[idx])
file_paths.append(file_path)
pairs = _load_imgs(file_paths, slice_, color, resize)
shape = list(pairs.shape)
n_faces = shape.pop(0)
shape.insert(0, 2)
shape.insert(0, n_faces // 2)
pairs.shape = shape
return pairs, target, np.array(['Different persons', 'Same person'])
@deprecated("Function 'load_lfw_people' has been deprecated in 0.17 and will be "
"removed in 0.19."
"Use fetch_lfw_people(download_if_missing=False) instead.")
def load_lfw_people(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_people(download_if_missing=False)
Check fetch_lfw_people.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_people(download_if_missing=download_if_missing, **kwargs)
def fetch_lfw_pairs(subset='train', data_home=None, funneled=True, resize=0.5,
color=False, slice_=(slice(70, 195), slice(78, 172)),
download_if_missing=True):
"""Loader for the Labeled Faces in the Wild (LFW) pairs dataset
This dataset is a collection of JPEG pictures of famous people
collected on the internet, all details are available on the
official website:
http://vis-www.cs.umass.edu/lfw/
Each picture is centered on a single face. Each pixel of each channel
(color in RGB) is encoded by a float in range 0.0 - 1.0.
The task is called Face Verification: given a pair of two pictures,
a binary classifier must predict whether the two images are from
the same person.
In the official `README.txt`_ this task is described as the
"Restricted" task. As I am not sure as to implement the
"Unrestricted" variant correctly, I left it as unsupported for now.
.. _`README.txt`: http://vis-www.cs.umass.edu/lfw/README.txt
The original images are 250 x 250 pixels, but the default slice and resize
    arguments reduce them to 62 x 47.
Read more in the :ref:`User Guide <labeled_faces_in_the_wild>`.
Parameters
----------
subset : optional, default: 'train'
Select the dataset to load: 'train' for the development training
set, 'test' for the development test set, and '10_folds' for the
official evaluation set that is meant to be used with a 10-folds
cross validation.
data_home : optional, default: None
Specify another download and cache folder for the datasets. By
default all scikit learn data is stored in '~/scikit_learn_data'
subfolders.
funneled : boolean, optional, default: True
Download and use the funneled variant of the dataset.
resize : float, optional, default 0.5
Ratio used to resize the each face picture.
color : boolean, optional, default False
Keep the 3 RGB channels instead of averaging them to a single
gray level channel. If color is True the shape of the data has
        one more dimension than the shape with color = False.
slice_ : optional
Provide a custom 2D slice (height, width) to extract the
        'interesting' part of the jpeg files and avoid statistical
        correlation with the background.
download_if_missing : optional, True by default
        If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
Returns
-------
The data is returned as a Bunch object with the following attributes:
data : numpy array of shape (2200, 5828)
Each row corresponds to 2 ravel'd face images of original size 62 x 47
pixels. Changing the ``slice_`` or resize parameters will change the shape
of the output.
pairs : numpy array of shape (2200, 2, 62, 47)
Each row has 2 face images corresponding to same or different person
from the dataset containing 5749 people. Changing the ``slice_`` or resize
parameters will change the shape of the output.
    target : numpy array of shape (2200,)
        Labels associated to each pair of images: 1 for pairs showing the
        same person, 0 for pairs showing two different persons.
DESCR : string
Description of the Labeled Faces in the Wild (LFW) dataset.
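    Examples
    --------
    A minimal usage sketch (illustrative only; the first call downloads the
    image archive)::
        from sklearn.datasets import fetch_lfw_pairs
        lfw_pairs_train = fetch_lfw_pairs(subset='train')
        X = lfw_pairs_train.data      # one row per pair of face images
        y = lfw_pairs_train.target    # 1 for 'Same person', 0 otherwise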
"""
lfw_home, data_folder_path = check_fetch_lfw(
data_home=data_home, funneled=funneled,
download_if_missing=download_if_missing)
logger.info('Loading %s LFW pairs from %s', subset, lfw_home)
    # wrap the loader in a memoizing function that will return memmapped data
# arrays for optimal memory usage
m = Memory(cachedir=lfw_home, compress=6, verbose=0)
load_func = m.cache(_fetch_lfw_pairs)
# select the right metadata file according to the requested subset
label_filenames = {
'train': 'pairsDevTrain.txt',
'test': 'pairsDevTest.txt',
'10_folds': 'pairs.txt',
}
if subset not in label_filenames:
raise ValueError("subset='%s' is invalid: should be one of %r" % (
subset, list(sorted(label_filenames.keys()))))
index_file_path = join(lfw_home, label_filenames[subset])
# load and memoize the pairs as np arrays
pairs, target, target_names = load_func(
index_file_path, data_folder_path, resize=resize, color=color,
slice_=slice_)
# pack the results as a Bunch instance
return Bunch(data=pairs.reshape(len(pairs), -1), pairs=pairs,
target=target, target_names=target_names,
DESCR="'%s' segment of the LFW pairs dataset" % subset)
@deprecated("Function 'load_lfw_pairs' has been deprecated in 0.17 and will be "
"removed in 0.19."
"Use fetch_lfw_pairs(download_if_missing=False) instead.")
def load_lfw_pairs(download_if_missing=False, **kwargs):
"""Alias for fetch_lfw_pairs(download_if_missing=False)
Check fetch_lfw_pairs.__doc__ for the documentation and parameter list.
"""
return fetch_lfw_pairs(download_if_missing=download_if_missing, **kwargs)
| bsd-3-clause |
raghavrv/scikit-learn | examples/decomposition/plot_pca_3d.py | 10 | 2432 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Principal components analysis (PCA)
=========================================================
These figures aid in illustrating how a point cloud
can be very flat in one direction--which is where PCA
comes in to choose a direction that is not flat.
"""
print(__doc__)
# Authors: Gael Varoquaux
# Jaques Grobler
# Kevin Hughes
# License: BSD 3 clause
from sklearn.decomposition import PCA
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# #############################################################################
# Create the data
e = np.exp(1)
np.random.seed(4)
def pdf(x):
return 0.5 * (stats.norm(scale=0.25 / e).pdf(x)
+ stats.norm(scale=4 / e).pdf(x))
y = np.random.normal(scale=0.5, size=(30000))
x = np.random.normal(scale=0.5, size=(30000))
z = np.random.normal(scale=0.1, size=len(x))
density = pdf(x) * pdf(y)
pdf_z = pdf(5 * z)
density *= pdf_z
a = x + y
b = 2 * y
c = a - b + z
norm = np.sqrt(a.var() + b.var())
a /= norm
b /= norm
# #############################################################################
# Plot the figures
def plot_figs(fig_num, elev, azim):
fig = plt.figure(fig_num, figsize=(4, 3))
plt.clf()
ax = Axes3D(fig, rect=[0, 0, .95, 1], elev=elev, azim=azim)
ax.scatter(a[::10], b[::10], c[::10], c=density[::10], marker='+', alpha=.4)
Y = np.c_[a, b, c]
# Using SciPy's SVD, this would be:
# _, pca_score, V = scipy.linalg.svd(Y, full_matrices=False)
pca = PCA(n_components=3)
pca.fit(Y)
pca_score = pca.explained_variance_ratio_
V = pca.components_
x_pca_axis, y_pca_axis, z_pca_axis = V.T * pca_score / pca_score.min()
x_pca_axis, y_pca_axis, z_pca_axis = 3 * V.T
x_pca_plane = np.r_[x_pca_axis[:2], - x_pca_axis[1::-1]]
y_pca_plane = np.r_[y_pca_axis[:2], - y_pca_axis[1::-1]]
z_pca_plane = np.r_[z_pca_axis[:2], - z_pca_axis[1::-1]]
x_pca_plane.shape = (2, 2)
y_pca_plane.shape = (2, 2)
z_pca_plane.shape = (2, 2)
ax.plot_surface(x_pca_plane, y_pca_plane, z_pca_plane)
ax.w_xaxis.set_ticklabels([])
ax.w_yaxis.set_ticklabels([])
ax.w_zaxis.set_ticklabels([])
elev = -40
azim = -80
plot_figs(1, elev, azim)
elev = 30
azim = 20
plot_figs(2, elev, azim)
plt.show()
| bsd-3-clause |
appapantula/scikit-learn | sklearn/feature_extraction/tests/test_dict_vectorizer.py | 276 | 3790 | # Authors: Lars Buitinck <[email protected]>
# Dan Blanchard <[email protected]>
# License: BSD 3 clause
from random import Random
import numpy as np
import scipy.sparse as sp
from numpy.testing import assert_array_equal
from sklearn.utils.testing import (assert_equal, assert_in,
assert_false, assert_true)
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, chi2
def test_dictvectorizer():
D = [{"foo": 1, "bar": 3},
{"bar": 4, "baz": 2},
{"bar": 1, "quux": 1, "quuux": 2}]
for sparse in (True, False):
for dtype in (int, np.float32, np.int16):
for sort in (True, False):
for iterable in (True, False):
v = DictVectorizer(sparse=sparse, dtype=dtype, sort=sort)
X = v.fit_transform(iter(D) if iterable else D)
assert_equal(sp.issparse(X), sparse)
assert_equal(X.shape, (3, 5))
assert_equal(X.sum(), 14)
assert_equal(v.inverse_transform(X), D)
if sparse:
# CSR matrices can't be compared for equality
assert_array_equal(X.A, v.transform(iter(D) if iterable
else D).A)
else:
assert_array_equal(X, v.transform(iter(D) if iterable
else D))
if sort:
assert_equal(v.feature_names_,
sorted(v.feature_names_))
def test_feature_selection():
# make two feature dicts with two useful features and a bunch of useless
# ones, in terms of chi2
d1 = dict([("useless%d" % i, 10) for i in range(20)],
useful1=1, useful2=20)
d2 = dict([("useless%d" % i, 10) for i in range(20)],
useful1=20, useful2=1)
for indices in (True, False):
v = DictVectorizer().fit([d1, d2])
X = v.transform([d1, d2])
sel = SelectKBest(chi2, k=2).fit(X, [0, 1])
v.restrict(sel.get_support(indices=indices), indices=indices)
assert_equal(v.get_feature_names(), ["useful1", "useful2"])
def test_one_of_k():
D_in = [{"version": "1", "ham": 2},
{"version": "2", "spam": .3},
{"version=3": True, "spam": -1}]
v = DictVectorizer()
X = v.fit_transform(D_in)
assert_equal(X.shape, (3, 5))
D_out = v.inverse_transform(X)
assert_equal(D_out[0], {"version=1": 1, "ham": 2})
names = v.get_feature_names()
assert_true("version=2" in names)
assert_false("version" in names)
def test_unseen_or_no_features():
D = [{"camelot": 0, "spamalot": 1}]
for sparse in [True, False]:
v = DictVectorizer(sparse=sparse).fit(D)
X = v.transform({"push the pram a lot": 2})
if sparse:
X = X.toarray()
assert_array_equal(X, np.zeros((1, 2)))
X = v.transform({})
if sparse:
X = X.toarray()
assert_array_equal(X, np.zeros((1, 2)))
try:
v.transform([])
except ValueError as e:
assert_in("empty", str(e))
def test_deterministic_vocabulary():
# Generate equal dictionaries with different memory layouts
items = [("%03d" % i, i) for i in range(1000)]
rng = Random(42)
d_sorted = dict(items)
rng.shuffle(items)
d_shuffled = dict(items)
# check that the memory layout does not impact the resulting vocabulary
v_1 = DictVectorizer().fit([d_sorted])
v_2 = DictVectorizer().fit([d_shuffled])
assert_equal(v_1.vocabulary_, v_2.vocabulary_)
| bsd-3-clause |
markmuetz/stormtracks | stormtracks/analysis/plotting.py | 1 | 28762 | from __future__ import print_function
import copy
import os
import datetime as dt
from glob import glob
import simplejson
import pylab as plt
import numpy as np
from mpl_toolkits.basemap import Basemap
from ..utils.c_wrapper import cvort, cvort4
from ..load_settings import settings
SAVE_FILE_TPL = 'plot_{0}.json'
DEFAULT_PLOTTER_LAYOUT = {
'version': 0.1,
'date': None,
'name': None,
'figure': 1,
'plot_settings': [],
'ensemble_member': 0,
'ensemble_mode': 'member',
}
DEFAULT_PLOT_SETTINGS = {
'subplot': '111',
'field': 'prmsl',
'points': 'pmins',
'loc': 'wa',
'vmax': None,
'vmin': None,
'best_track_index': -1,
'vort_max_track_index': -1,
'pressure_min_track_index': -1,
'match_index': -1,
}
def plot_pandas_matched_unmatched(df_year, bt_matches, f1='vort', f2='t850s', flip=False, alpha=0.5):
plt.clf()
comb = df_year.join(bt_matches)
matched = comb[~comb.bt_name.isnull()]
unmatched = comb[comb.bt_name.isnull()]
if not flip:
zo1 = 1
zo2 = 2
alpha1=1
alpha2=alpha
else:
zo1 = 2
zo2 = 1
alpha1=alpha
alpha2=1
uf1 = getattr(unmatched, f1)
uf2 = getattr(unmatched, f2)
mf1 = getattr(matched, f1)
mf2 = getattr(matched, f2)
plt.scatter(uf1, uf2, c='k', marker='s', zorder=zo1, alpha=alpha1)
plt.scatter(mf1, mf2, c=matched.bt_wind, marker='s', zorder=zo2, alpha=alpha2)
plt.xlim((uf1.min(), uf1.max()))
plt.ylim((uf2.min(), uf2.max()))
def plot_pandas_tracks(ib, matched, mode='all'):
for bt in ib.best_tracks:
plt.plot(bt.lons, bt.lats, 'r-', zorder=1)
plt.scatter(x=bt.lons, y=bt.lats, c=bt.winds, marker='o', zorder=2)
if mode == 'all':
lon_lat = matched[matched.bt_name == bt.name]
plt.scatter(lon_lat.lon, lon_lat.lat, marker='x', zorder=0)
elif mode == 'mean':
lon_lat = matched[matched.bt_name == bt.name].groupby('date')['lon', 'lat'].mean()
lon_lat_std = matched[matched.bt_name == bt.name].groupby('date')['lon', 'lat'].std()
plt.errorbar(lon_lat.lon, lon_lat.lat, xerr=lon_lat_std.lon, yerr=lon_lat_std.lat, marker='x', zorder=0)
plt.clim(0, 150)
elif mode == 'median':
lon_lat = matched[matched.bt_name == bt.name].groupby('date')['lon', 'lat'].median()
lon_lat_std = matched[matched.bt_name == bt.name].groupby('date')['lon', 'lat'].std()
plt.errorbar(lon_lat.lon, lon_lat.lat, xerr=lon_lat_std.lon, yerr=lon_lat_std.lat, marker='x', zorder=0)
plt.clim(0, 150)
plt.title('Best track {}: {} - {}'.format(bt.name, bt.dates[0], bt.dates[-1]))
plt.pause(.1)
raw_input()
plt.clf()
def datetime_encode_handler(obj):
if hasattr(obj, 'isoformat'):
return {'__isotime__': obj.isoformat()}
else:
        raise TypeError('Object of type {0} with value {1} is not JSON serializable'.format(
            type(obj), repr(obj)))
def datetime_decode_hook(dct):
if '__isotime__' in dct:
return dt.datetime.strptime(dct['__isotime__'], '%Y-%m-%dT%H:%M:%S')
return dct
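# These two hooks let simplejson round-trip datetime objects: Plotter.save
# passes datetime_encode_handler as the ``default`` argument to
# simplejson.dump, and Plotter.load passes datetime_decode_hook as the
# ``object_hook`` to simplejson.load, so layout dates survive serialization.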
class Plotter(object):
def __init__(self, title, best_tracks, c20data, all_matches):
self.title = title
self.best_tracks = best_tracks
self.c20data = c20data
self.all_matches = all_matches
self.date = self.c20data.first_date()
self.layout = copy.copy(DEFAULT_PLOTTER_LAYOUT)
self.layout['date'] = self.date
self.layout['plot_settings'].append(DEFAULT_PLOT_SETTINGS)
self.date = None
self.field_names = ['prmsl', 'vort', 'vort4', 'windspeed']
self.track_names = ['best', 'vort_max', 'pressure_min']
self.point_names = ['vort_max', 'pressure_min']
def plot_match_from_best_track(self, best_track):
for matches in self.all_matches:
for match in matches:
if best_track.name == match.best_track.name:
for settings in self.layout['plot_settings']:
settings['match_index'] = matches.index(match)
self.plot()
return
print('Match not found for track')
def plot(self):
ensemble_member = self.layout['ensemble_member']
date = self.layout['date']
plt.figure(self.layout['figure'])
plt.clf()
for plot_settings in self.layout['plot_settings']:
plt.subplot(plot_settings['subplot'])
title = [self.title, str(date), str(ensemble_member)]
if plot_settings['field']:
title.append(plot_settings['field'])
field = getattr(self.c20data, plot_settings['field'])
raster_on_earth(self.c20data.lons, self.c20data.lats, field,
plot_settings['vmin'], plot_settings['vmax'], plot_settings['loc'])
else:
raster_on_earth(self.c20data.lons, self.c20data.lats,
None, None, None, plot_settings['loc'])
if plot_settings['points']:
title.append(plot_settings['points'])
points = getattr(self.c20data, plot_settings['points'])
for p_val, p_loc in points:
plot_point_on_earth(p_loc[0], p_loc[1], 'kx')
if plot_settings['best_track_index'] != -1:
plot_ibtrack_with_date(self.best_tracks[plot_settings['best_track_index']], date)
if plot_settings['vort_max_track_index'] != -1:
pass
if plot_settings['pressure_min_track_index'] != -1:
pass
if plot_settings['match_index'] != -1:
plot_match_with_date(
self.all_matches[ensemble_member][plot_settings['match_index']], date)
plt.title(' '.join(title))
def save(self, name):
self.layout['name'] = name
f = open(os.path.join(settings.SETTINGS_DIR, 'plots',
SAVE_FILE_TPL.format(self.layout['name'])), 'w')
simplejson.dump(self.layout, f, default=datetime_encode_handler, indent=4)
def load(self, name, is_plot=True):
try:
f = open(os.path.join(settings.SETTINGS_DIR, 'plots', SAVE_FILE_TPL.format(name)), 'r')
layout = simplejson.load(f, object_hook=datetime_decode_hook)
print(layout)
if layout['version'] != self.layout['version']:
r = raw_input('version mismatch, may not work! Press c to continue anyway: ')
if r != 'c':
raise Exception('Version mismatch (user cancelled)')
self.layout['name'] = name
self.layout = layout
self.set_date(self.layout['date'], is_plot)
except Exception, e:
print('Settings {0} could not be loaded'.format(name))
print('{0}'.format(e.message))
raise
def print_list(self):
for plot_settings_name in self.list():
print(plot_settings_name)
def delete(self, name):
file_name = os.path.join(settings.SETTINGS_DIR, 'plots', SAVE_FILE_TPL.format(name))
os.remove(file_name)
def list(self):
plot_settings = []
for fn in glob(os.path.join(settings.SETTINGS_DIR, 'plots', SAVE_FILE_TPL.format('*'))):
plot_settings.append('_'.join(os.path.basename(fn).split('.')[0].split('_')[1:]))
return plot_settings
def set_date(self, date, is_plot=True):
self.layout['date'] = self.c20data.set_date(
date, self.layout['ensemble_member'], self.layout['ensemble_mode'])
if is_plot:
self.plot()
def next_date(self):
self.layout['date'] = self.c20data.next_date(
self.layout['ensemble_member'], self.layout['ensemble_mode'])
self.plot()
def prev_date(self):
self.layout['date'] = self.c20data.prev_date(
self.layout['ensemble_member'], self.layout['ensemble_mode'])
self.plot()
def set_ensemble_member(self, ensemble_member):
self.layout['ensemble_member'] = ensemble_member
self.c20data.set_date(
self.layout['date'], self.layout['ensemble_member'], self.layout['ensemble_mode'])
self.plot()
def next_ensemble_member(self):
self.set_ensemble_member(self.layout['ensemble_member'] + 1)
def prev_ensemble_member(self):
self.set_ensemble_member(self.layout['ensemble_member'] - 1)
def set_match(self, match_index):
self.match_index = match_index
for plot_settings in self.layout['plot_settings']:
plot_settings.match_index = match_index
self.plot()
def next_match(self):
self.set_match(self.match_index + 1)
def prev_match(self):
self.set_match(self.match_index - 1)
def interactive_plot(self):
cmd = ''
args = []
while True:
try:
prev_cmd = cmd
prev_args = args
if cmd not in ['c', 'cr']:
r = raw_input('# ')
if r == '':
cmd = prev_cmd
args = prev_args
else:
try:
cmd = r.split(' ')[0]
args = r.split(' ')[1:]
except:
cmd = r
args = None
if cmd == 'q':
break
elif cmd == 'pl':
print('plot')
self.next_date()
self.plot()
elif cmd == 'n':
print('next')
self.next_date()
self.plot()
elif cmd == 'p':
print('prev')
self.prev_date()
self.plot()
elif cmd == 'c':
print('continuing')
self.c20data.next_date()
plt.pause(0.01)
self.plot()
elif cmd == 'cr':
print('continuing backwards')
self.c20data.prev_date()
plt.pause(0.01)
self.plot()
elif cmd == 's':
self.save(args[0])
elif cmd == 'l':
self.load(args[0])
elif cmd == 'ls':
self.list()
elif cmd == 'em':
if args[0] == 'n':
self.next_ensemble_member()
elif args[0] == 'p':
self.prev_ensemble_member()
else:
try:
self.set_ensemble_member(int(args[0]))
except:
pass
except KeyboardInterrupt, ki:
# Handle ctrl+c
# deletes ^C from terminal:
print('\r', end='')
cmd = ''
print('ctrl+c pressed')
def plot_3d_scatter(cyclone_matches, unmatched_cyclones):
fig = plt.figure(4)
ax = fig.add_subplot(111, projection='3d')
ps = {'hu': {'xs': [], 'ys': [], 'zs': []},
'ts': {'xs': [], 'ys': [], 'zs': []},
'no': {'xs': [], 'ys': [], 'zs': []}}
for match in cyclone_matches:
best_track = match.best_track
cyclone = match.cyclone
plotted_dates = []
for date, cls in zip(best_track.dates, best_track.cls):
if date in cyclone.dates and cyclone.pmins[date]:
plotted_dates.append(date)
if cls == 'HU':
ps['hu']['xs'].append(cyclone.pmins[date])
ps['hu']['ys'].append(cyclone.p_ambient_diffs[date])
ps['hu']['zs'].append(cyclone.vortmax_track.vortmax_by_date[date].vort)
else:
ps['ts']['xs'].append(cyclone.pmins[date])
ps['ts']['ys'].append(cyclone.p_ambient_diffs[date])
ps['ts']['zs'].append(cyclone.vortmax_track.vortmax_by_date[date].vort)
for date in cyclone.dates:
if date not in plotted_dates and cyclone.pmins[date]:
ps['no']['xs'].append(cyclone.pmins[date])
ps['no']['ys'].append(cyclone.p_ambient_diffs[date])
ps['no']['zs'].append(cyclone.vortmax_track.vortmax_by_date[date].vort)
ax.scatter(c='r', marker='o', **ps['hu'])
ax.scatter(c='b', marker='o', **ps['ts'])
ax.scatter(c='b', marker='x', **ps['no'])
SCATTER_ATTRS = {
# Normally x.
'vort': {'range': (0, 0.0012)},
# 1st row.
'pmin': {'range': (96000, 104000)},
'pambdiff': {'range': (-1000, 5000)},
'mindist': {'range': (0, 1000)},
'max_windspeed': {'range': (0, 100)},
# 2nd row.
't995': {'range': (260, 320)},
't850': {'range': (250, 310)},
't_anom': {'range': (-15, 10)},
'max_windspeed_dist': {'range': (0, 1000)},
'max_windspeed_dir': {'range': (-np.pi, np.pi)},
'lon': {'range': (200, 360)},
'lat': {'range': (0, 70)},
}
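# In the scatter helpers below, the point-set dict ``ps`` groups points by
# classification: 'hu' holds best-track points classed 'HU' (hurricane),
# 'ts' other matched best-track points, 'no' cyclone dates with no matching
# best-track entry (see plot_3d_scatter above), and 'unmatched' presumably
# holds cyclones that were never matched to a best track.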
def plot_2d_scatter(ps, var1, var2, unmatched_lim=None):
attr1 = SCATTER_ATTRS[var1]
attr2 = SCATTER_ATTRS[var2]
plt.xlim(attr1['range'])
plt.ylim(attr2['range'])
plt.xlabel(var1)
plt.ylabel(var2)
if not unmatched_lim:
unmatched_lim = len(ps['unmatched']['xs'])
plt.plot(ps['unmatched']['xs'][:unmatched_lim],
ps['unmatched']['ys'][:unmatched_lim], 'g+', zorder=0)
plt.plot(ps['no']['xs'], ps['no']['ys'], 'bx', zorder=1)
plt.plot(ps['ts']['xs'], ps['ts']['ys'], 'bo', zorder=2)
plt.plot(ps['hu']['xs'], ps['hu']['ys'], 'ro', zorder=3)
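# For the error-scatter variant the ``ps`` keys appear to denote prediction
# outcomes: 'tp'/'fp' true/false positives, 'tn'/'fn' true/false negatives,
# and 'un' for points left unclassified/unmatched.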
def plot_2d_error_scatter(ps, var1, var2, unmatched_lim=None):
attr1 = SCATTER_ATTRS[var1]
attr2 = SCATTER_ATTRS[var2]
plt.xlim(attr1['range'])
plt.ylim(attr2['range'])
plt.xlabel(var1)
plt.ylabel(var2)
if not unmatched_lim:
unmatched_lim = len(ps['un']['xs'])
plt.plot(ps['un']['xs'][:unmatched_lim], ps['un']['ys'][:unmatched_lim], 'kx', zorder=0)
plt.plot(ps['fp']['xs'], ps['fp']['ys'], 'bo', zorder=1)
plt.plot(ps['fn']['xs'], ps['fn']['ys'], 'r^', zorder=2)
plt.plot(ps['tp']['xs'], ps['tp']['ys'], 'ro', zorder=3)
plt.plot(ps['tn']['xs'], ps['tn']['ys'], 'b^', zorder=4)
def plot_ensemble_matches(c20data, matches):
plt.clf()
raster_on_earth(c20data.lons, c20data.lats, None, None, None, 'wa')
for match in matches:
plot_track(match.av_vort_track)
if match.store_all_tracks:
for vort_track in match.vort_tracks:
plot_track(vort_track, 'b+')
# if raw_input() == 'q':
# return
def plot_vortmax_track_with_date(vortmax_track, date=None):
plot_track(vortmax_track, plt_fmt='b--')
if date:
try:
index = np.where(vortmax_track.dates == date)[0][0]
plot_point_on_earth(vortmax_track.lons[index], vortmax_track.lats[index], 'ko')
except:
pass
# print("Couldn't plot track at date {0} (start {1}, end {2})".format(
# date, vortmax_track.dates[0], vortmax_track.dates[-1]))
def plot_ibtrack_with_date(best_track, date=None):
plot_track(best_track)
if date:
try:
index = np.where(best_track.dates == date)[0][0]
if best_track.cls[index] == 'HU':
plot_point_on_earth(best_track.lons[index], best_track.lats[index], 'ro')
else:
plot_point_on_earth(best_track.lons[index], best_track.lats[index], 'ko')
except:
pass
# print("Couldn't plot best_track at date {0} (start {1}, end {2})".format(
# date, best_track.dates[0], best_track.dates[-1]))
# TODO: integrate with Plotting class.
def time_plot_match(c20data, match):
best_track = match.best_track
vort_track = match.vort_track
for date in best_track.dates:
c20data.set_date(date)
plt.clf()
plt.title(date)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort, -6e-5, 5e-4, 'wa')
plot_vortmax_track_with_date(vort_track, date)
plot_ibtrack_with_date(best_track, date)
plot_match_dist_with_date(match, date)
raw_input()
# TODO: integrate with Plotting class.
def time_plot_ibtrack(c20data, best_track):
for date in best_track.dates:
c20data.set_date(date)
plt.clf()
plt.subplot(2, 1, 1)
plt.title(date)
plot_ibtrack_with_date(best_track, date)
raster_on_earth(c20data.lons, c20data.lats, c20data.prmsl, vmin=99500, vmax=103000, loc='wa')
plt.subplot(2, 1, 2)
plot_ibtrack_with_date(best_track, date)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort, -6e-5, 5e-4, loc='wa')
raw_input()
# TODO: integrate with Plotting class.
def time_plot_vmax(gdata):
for date in gdata.vortmax_time_series.keys():
vortmaxes = gdata.vortmax_time_series[date]
plt.clf()
plt.title(date)
raster_on_earth(gdata.c20data.lons, gdata.c20data.lats, None, 0, 0, 'wa')
for vmax in vortmaxes:
plot_point_on_earth(vmax.pos[0], vmax.pos[1], 'ko')
raw_input()
# TODO: integrate with Plotting class.
def plot_matches(c20data, matches, clear=False):
for match in matches:
if clear:
plt.clf()
raster_on_earth(c20data.lons, c20data.lats, None, 0, 0, 'wa')
plot_match_with_date(match)
raw_input()
def cvorticity(u, v, dx, dy):
'''Calculates the (2nd order) vorticity by calling into a c function'''
vort = np.zeros_like(u)
cvort(u, v, u.shape[0], u.shape[1], dx, dy, vort)
return vort
def plot_grib_vorticity_at_level(c20gribdata, level_key):
level = c20gribdata.levels[level_key]
raster_on_earth(level.lons, level.lats, level.vort, loc='wa')
def plot_match_with_date(match, date=None):
plot_vortmax_track_with_date(match.vort_track, date)
plot_ibtrack_with_date(match.best_track, date)
plot_match_dist_with_date(match, date)
print('Overlap: {0}, cum dist: {1}, av dist: {2}'.format(
match.overlap, match.cum_dist, match.av_dist()))
def plot_match_dist_with_date(match, date):
best_track = match.best_track
vortmax_track = match.vort_track
track_index = np.where(best_track.dates == match.overlap_start)[0][0]
vortmax_track_index = np.where(vortmax_track.dates == match.overlap_start)[0][0]
while True:
vortmax = vortmax_track.vortmaxes[vortmax_track_index]
if date and date == best_track.dates[track_index]:
plot_path_on_earth(np.array((best_track.lons[track_index], vortmax.pos[0])),
np.array((best_track.lats[track_index], vortmax.pos[1])), 'r-')
else:
plot_path_on_earth(np.array((best_track.lons[track_index], vortmax.pos[0])),
np.array((best_track.lats[track_index], vortmax.pos[1])), 'y-')
track_index += 1
vortmax_track_index += 1
if track_index >= len(best_track.lons) or vortmax_track_index >= len(vortmax_track.lons):
break
def plot_vort_vort4(c20data, date):
c20data.set_date(date)
plt.clf()
plt.subplot(2, 2, 1)
plt.title(date)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort, vmin=-5e-6, vmax=2e-4)
plt.subplot(2, 2, 3)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort4, vmin=-5e-6, vmax=2e-4)
plt.subplot(2, 2, 2)
raster_on_earth(c20data.lons, c20data.lats, None)
print(len(c20data.vmaxs))
for v, vmax in c20data.vmaxs:
if v > 1e-4:
plot_point_on_earth(vmax[0], vmax[1], 'go')
else:
plot_point_on_earth(vmax[0], vmax[1], 'kx')
plt.subplot(2, 2, 4)
raster_on_earth(c20data.lons, c20data.lats, None)
print(len(c20data.v4maxs))
for v, vmax in c20data.v4maxs:
if v > 5e-5:
plot_point_on_earth(vmax[0], vmax[1], 'go')
else:
plot_point_on_earth(vmax[0], vmax[1], 'kx')
def time_plot_ibtracks_pressure_vort(c20data, best_tracks, dates):
for i, date in enumerate(dates):
c20data.set_date(date)
plt.clf()
plt.subplot(2, 2, 1)
plt.title(date)
raster_on_earth(c20data.lons, c20data.lats, c20data.prmsl, vmin=99500, vmax=103000)
plt.subplot(2, 2, 3)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort, vmin=-5e-6, vmax=2e-4)
plt.subplot(2, 2, 2)
raster_on_earth(c20data.lons, c20data.lats, None)
for p, pmin in c20data.pmins:
plot_point_on_earth(pmin[0], pmin[1], 'kx')
plt.subplot(2, 2, 4)
raster_on_earth(c20data.lons, c20data.lats, None)
for v, vmax in c20data.vmaxs:
if v > 10:
plot_point_on_earth(vmax[0], vmax[1], 'go')
else:
plot_point_on_earth(vmax[0], vmax[1], 'kx')
for best_track in best_tracks:
if best_track.dates[0] > date or best_track.dates[-1] < date:
continue
index = 0
for j, track_date in enumerate(best_track.dates):
if track_date == date:
index = j
break
for i in range(4):
plt.subplot(2, 2, i + 1)
plot_ibtrack_with_date(best_track, date)
plt.pause(0.1)
print(date)
raw_input()
def time_plot_ibtracks_vort_smoothedvort(c20data, best_tracks, dates):
if not c20data.smoothing:
print('Turning on smoothing for c20 data')
c20data.smoothing = True
for i, date in enumerate(dates):
c20data.set_date(date)
plt.clf()
plt.subplot(2, 2, 1)
plt.title(date)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort)
plt.subplot(2, 2, 3)
raster_on_earth(c20data.lons, c20data.lats, c20data.smoothed_vort)
plt.subplot(2, 2, 2)
raster_on_earth(c20data.lons, c20data.lats, None)
for v, vmax in c20data.vmaxs:
if v > 10:
plot_point_on_earth(vmax[0], vmax[1], 'go')
elif v > 4:
plot_point_on_earth(vmax[0], vmax[1], 'kx')
else:
plot_point_on_earth(vmax[0], vmax[1], 'y+')
plt.subplot(2, 2, 4)
raster_on_earth(c20data.lons, c20data.lats, None)
for v, vmax in c20data.smoothed_vmaxs:
if v > 10:
plot_point_on_earth(vmax[0], vmax[1], 'go')
elif v > 4:
plot_point_on_earth(vmax[0], vmax[1], 'kx')
else:
plot_point_on_earth(vmax[0], vmax[1], 'y+')
for best_track in best_tracks:
if best_track.dates[0] > date or best_track.dates[-1] < date:
continue
index = 0
for j, track_date in enumerate(best_track.dates):
if track_date == date:
index = j
break
for i in range(4):
plt.subplot(2, 2, i + 1)
plot_ibtrack_with_date(best_track, date)
plt.pause(0.1)
print(date)
raw_input()
def plot_ibtracks(best_tracks, start_date, end_date):
for best_track in best_tracks:
if best_track.dates[0] >= start_date and best_track.dates[0] <= end_date:
plot_track(best_track)
def plot_track(track, plt_fmt=None, zorder=1):
if plt_fmt:
plot_path_on_earth(track.lons, track.lats, plt_fmt, zorder=zorder)
else:
plot_path_on_earth(track.lons, track.lats, 'r-', zorder=zorder)
# TODO: integrate with Plotting class.
def plot_wilma(c20data):
plot_between_dates(c20data, dt.datetime(2005, 10, 18), dt.datetime(2005, 10, 28))
# TODO: integrate with Plotting class.
def plot_between_dates(c20data, start_date, end_date):
date = c20data.set_date(start_date)
while date < end_date:
plt.clf()
plt.subplot(2, 2, 1)
plt.title(date)
# raster_on_earth(c20data.lons, c20data.lats, c20data.prmsl, 97000, 106000, 'wa')
raster_on_earth(c20data.lons, c20data.lats, c20data.vort, -6e-5, 5e-4, 'wa')
plt.subplot(2, 2, 3)
raster_on_earth(c20data.lons, c20data.lats, c20data.vort4, -6e-5, 5e-4, 'wa')
plt.subplot(2, 2, 2)
raster_on_earth(c20data.lons, c20data.lats, None, 0, 0, 'wa')
for v, vmax in c20data.vmaxs:
if v > 3e-4:
plot_point_on_earth(vmax[0], vmax[1], 'ro')
plt.annotate('{0:2.1f}'.format(v * 1e4), (vmax[0], vmax[1] + 0.2))
elif v > 2e-4:
plot_point_on_earth(vmax[0], vmax[1], 'yo')
plt.annotate('{0:2.1f}'.format(v * 1e4), (vmax[0], vmax[1] + 0.2))
elif v > 1e-4:
plot_point_on_earth(vmax[0], vmax[1], 'go')
else:
# plot_point_on_earth(vmax[0], vmax[1], 'kx')
pass
plt.subplot(2, 2, 4)
raster_on_earth(c20data.lons, c20data.lats, None, 0, 0, 'wa')
for v, vmax in c20data.v4maxs:
if v > 3e-4:
plot_point_on_earth(vmax[0], vmax[1], 'ro')
plt.annotate('{0:2.1f}'.format(v * 1e4), (vmax[0], vmax[1] + 0.2))
elif v > 2e-4:
plot_point_on_earth(vmax[0], vmax[1], 'yo')
plt.annotate('{0:2.1f}'.format(v * 1e4), (vmax[0], vmax[1] + 0.2))
elif v > 1e-4:
plot_point_on_earth(vmax[0], vmax[1], 'go')
else:
# plot_point_on_earth(vmax[0], vmax[1], 'kx')
pass
raw_input()
date = c20data.next_date()
# plt.pause(0.1)
def lons_convert(lons):
new_lons = lons.copy()
m = new_lons > 180
new_lons[m] = new_lons[m] - 360
return new_lons
def lon_convert(lon):
return lon if lon <= 180 else lon - 360
def plot_path_on_earth(lons, lats, plot_fmt=None, zorder=1):
if plot_fmt:
plt.plot(lons_convert(lons), lats, plot_fmt, zorder=zorder)
else:
plt.plot(lons_convert(lons), lats, zorder=zorder)
def plot_point_on_earth(lon, lat, plot_fmt=None):
if plot_fmt:
plt.plot(lon_convert(lon), lat, plot_fmt)
else:
plt.plot(lon_convert(lon), lat)
def raster_on_earth(lons, lats, data, vmin=None, vmax=None, loc='earth'):
if loc == 'earth':
m = Basemap(projection='cyl', resolution='c',
llcrnrlat=-90, urcrnrlat=90, llcrnrlon=-180, urcrnrlon=180)
elif loc in ['wa', 'west_atlantic']:
m = Basemap(projection='cyl', resolution='c',
llcrnrlat=0, urcrnrlat=60, llcrnrlon=-120, urcrnrlon=-30)
elif loc in ['nwp', 'northwest_pacific']:
m = Basemap(projection='cyl', resolution='c',
llcrnrlat=0, urcrnrlat=50, llcrnrlon=100, urcrnrlon=180)
if data is not None:
plot_lons, plot_data = extend_data(lons, lats, data)
lons, lats = np.meshgrid(plot_lons, lats)
x, y = m(lons, lats)
if vmin:
m.pcolormesh(x, y, plot_data, vmin=vmin, vmax=vmax)
else:
m.pcolormesh(x, y, plot_data)
m.drawcoastlines()
p_labels = [0, 1, 0, 0]
m.drawparallels(np.arange(-90., 90.1, 45.), labels=p_labels, fontsize=10)
m.drawmeridians(np.arange(-180., 180., 60.), labels=[0, 0, 0, 1], fontsize=10)
def extend_data(lons, lats, data):
if False:
# TODO: probably doesn't work!
# Adds extra data at the end.
plot_offset = 2
plot_lons = np.zeros((lons.shape[0] + plot_offset,))
plot_lons[:-plot_offset] = lons
plot_lons[-plot_offset:] = lons[-plot_offset:] + 3.75 * plot_offset
plot_data = np.zeros((data.shape[0], data.shape[1] + plot_offset))
plot_data[:, :-plot_offset] = data
plot_data[:, -plot_offset:] = data[:, :plot_offset]
else:
# Adds extra data before the start.
delta = lons[1] - lons[0]
plot_offset = 180
plot_lons = np.ma.zeros((lons.shape[0] + plot_offset,))
plot_lons[plot_offset:] = lons
plot_lons[:plot_offset] = lons[-plot_offset:] - delta * (lons.shape[0])
plot_data = np.ma.zeros((data.shape[0], data.shape[1] + plot_offset))
plot_data[:, plot_offset:] = data
plot_data[:, :plot_offset] = data[:, -plot_offset:]
return plot_lons, plot_data
| mit |
wavelets/machine-learning | code/abalone.py | 2 | 1908 | # abalone
# Classification and Clustering of Abalone Dataset
#
# Author: Benjamin Bengfort <[email protected]>
# Created: Thu Feb 26 17:56:52 2015 -0500
#
# Copyright (C) 2015 District Data Labs
# For license information, see LICENSE.txt
#
# ID: abalone.py [] [email protected] $
"""
Classification and Clustering of Abalone Dataset
"""
##########################################################################
## Imports
##########################################################################
import time
import pickle
from abaloneUtils import *
from sklearn import cross_validation
##########################################################################
## Logistic Regression Classification
##########################################################################
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
start_time = time.time()
# Load the dataset
dataset = load_abalone()
data = dataset.data
target = dataset.target
# Get training and testing splits
splits = cross_validation.train_test_split(data, target, test_size=0.2)
data_train, data_test, target_train, target_test = splits
load_time = time.time()
# Fit the training data to the model
model = LogisticRegression()
model.fit(data_train, target_train)
build_time = time.time()
print model
# Make predictions
expected = target_test
predicted = model.predict(data_test)
# Evaluate the predictions
print metrics.classification_report(expected, predicted)
print metrics.confusion_matrix(expected, predicted)
eval_time = time.time()
print "Times: %0.3f sec loading, %0.3f sec building, %0.3f sec evaluation" % (load_time-start_time, build_time-load_time, eval_time-build_time,)
print "Total time: %0.3f seconds" % (eval_time-start_time)
# Save the model to disk
with open('abaloneModel.pickle', 'w') as f:
pickle.dump(model, f)
| mit |
jayflo/scikit-learn | examples/manifold/plot_compare_methods.py | 259 | 4031 | """
=========================================
Comparison of Manifold Learning methods
=========================================
An illustration of dimensionality reduction on the S-curve dataset
with various manifold learning methods.
For a discussion and comparison of these algorithms, see the
:ref:`manifold module page <manifold>`
For a similar example, where the methods are applied to a
sphere dataset, see :ref:`example_manifold_plot_manifold_sphere.py`
Note that the purpose of the MDS is to find a low-dimensional
representation of the data (here 2D) in which the distances respect well
the distances in the original high-dimensional space. Unlike other
manifold-learning algorithms, it does not seek an isotropic
representation of the data in the low-dimensional space.
"""
# Author: Jake Vanderplas -- <[email protected]>
print(__doc__)
from time import time
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import NullFormatter
from sklearn import manifold, datasets
# Next line to silence pyflakes. This import is needed.
Axes3D
n_points = 1000
X, color = datasets.samples_generator.make_s_curve(n_points, random_state=0)
n_neighbors = 10
n_components = 2
fig = plt.figure(figsize=(15, 8))
plt.suptitle("Manifold Learning with %i points, %i neighbors"
% (1000, n_neighbors), fontsize=14)
try:
# compatibility matplotlib < 1.0
ax = fig.add_subplot(251, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.Spectral)
ax.view_init(4, -72)
except:
ax = fig.add_subplot(251, projection='3d')
plt.scatter(X[:, 0], X[:, 2], c=color, cmap=plt.cm.Spectral)
methods = ['standard', 'ltsa', 'hessian', 'modified']
labels = ['LLE', 'LTSA', 'Hessian LLE', 'Modified LLE']
for i, method in enumerate(methods):
t0 = time()
Y = manifold.LocallyLinearEmbedding(n_neighbors, n_components,
eigen_solver='auto',
method=method).fit_transform(X)
t1 = time()
print("%s: %.2g sec" % (methods[i], t1 - t0))
ax = fig.add_subplot(252 + i)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("%s (%.2g sec)" % (labels[i], t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
Y = manifold.Isomap(n_neighbors, n_components).fit_transform(X)
t1 = time()
print("Isomap: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(257)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("Isomap (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
mds = manifold.MDS(n_components, max_iter=100, n_init=1)
Y = mds.fit_transform(X)
t1 = time()
print("MDS: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(258)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("MDS (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
se = manifold.SpectralEmbedding(n_components=n_components,
n_neighbors=n_neighbors)
Y = se.fit_transform(X)
t1 = time()
print("SpectralEmbedding: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(259)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("SpectralEmbedding (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
tsne = manifold.TSNE(n_components=n_components, init='pca', random_state=0)
Y = tsne.fit_transform(X)
t1 = time()
print("t-SNE: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(2, 5, 10)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("t-SNE (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
plt.show()
| bsd-3-clause |
scw/scipy-devsummit-2016-talk | examples/pandas-filter.py | 3 | 1218 | import pandas
data = pandas.read_csv('data/season-ratings.csv')
data.columns
#Index([u'season', u'households', u'rank', u'tv_households', \
# u'net_indep', u'primetime_pct'], dtype='object')
majority_simpsons = data[data.primetime_pct > 50]
""" season households tv_households net_indep primetime_pct
0 1 13.4m[41] 92.1 51.6 80.751174
1 2 12.2m[n2] 92.1 50.4 78.504673
2 3 12.0m[n3] 92.1 48.4 76.582278
3 4 12.1m[48] 93.1 46.2 72.755906
4 5 10.5m[n4] 93.1 46.5 72.093023
5 6 9.0m[50] 95.4 46.1 71.032357
6 7 8.0m[51] 95.9 46.6 70.713202
7 8 8.6m[52] 97.0 44.2 67.584098
8 9 9.1m[53] 98.0 42.3 64.383562
9 10 7.9m[54] 99.4 39.9 60.916031
10 11 8.2m[55] 100.8 38.1 57.466063
11 12 14.7m[56] 102.2 36.8 53.958944
12 13 12.4m[57] 105.5 35.0 51.094891
"""
print(majority_simpsons)
| apache-2.0 |
joernhees/scikit-learn | examples/cluster/plot_feature_agglomeration_vs_univariate_selection.py | 87 | 3903 | """
==============================================
Feature agglomeration vs. univariate selection
==============================================
This example compares 2 dimensionality reduction strategies:
- univariate feature selection with Anova
- feature agglomeration with Ward hierarchical clustering
Both methods are compared in a regression problem using
a BayesianRidge as supervised estimator.
"""
# Author: Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
print(__doc__)
import shutil
import tempfile
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg, ndimage
from sklearn.feature_extraction.image import grid_to_graph
from sklearn import feature_selection
from sklearn.cluster import FeatureAgglomeration
from sklearn.linear_model import BayesianRidge
from sklearn.pipeline import Pipeline
from sklearn.externals.joblib import Memory
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import KFold
###############################################################################
# Generate data
n_samples = 200
size = 40 # image size
roi_size = 15
snr = 5.
np.random.seed(0)
mask = np.ones([size, size], dtype=np.bool)
coef = np.zeros((size, size))
coef[0:roi_size, 0:roi_size] = -1.
coef[-roi_size:, -roi_size:] = 1.
X = np.random.randn(n_samples, size ** 2)
for x in X: # smooth data
x[:] = ndimage.gaussian_filter(x.reshape(size, size), sigma=1.0).ravel()
X -= X.mean(axis=0)
X /= X.std(axis=0)
y = np.dot(X, coef.ravel())
noise = np.random.randn(y.shape[0])
noise_coef = (linalg.norm(y, 2) / np.exp(snr / 20.)) / linalg.norm(noise, 2)
y += noise_coef * noise # add noise
###############################################################################
# Compute the coefs of a Bayesian Ridge with GridSearch
cv = KFold(2) # cross-validation generator for model selection
ridge = BayesianRidge()
cachedir = tempfile.mkdtemp()
mem = Memory(cachedir=cachedir, verbose=1)
# Ward agglomeration followed by BayesianRidge
connectivity = grid_to_graph(n_x=size, n_y=size)
ward = FeatureAgglomeration(n_clusters=10, connectivity=connectivity,
memory=mem)
clf = Pipeline([('ward', ward), ('ridge', ridge)])
# Select the optimal number of parcels with grid search
clf = GridSearchCV(clf, {'ward__n_clusters': [10, 20, 30]}, n_jobs=1, cv=cv)
clf.fit(X, y) # set the best parameters
coef_ = clf.best_estimator_.steps[-1][1].coef_
coef_ = clf.best_estimator_.steps[0][1].inverse_transform(coef_)
coef_agglomeration_ = coef_.reshape(size, size)
# Anova univariate feature selection followed by BayesianRidge
f_regression = mem.cache(feature_selection.f_regression) # caching function
anova = feature_selection.SelectPercentile(f_regression)
clf = Pipeline([('anova', anova), ('ridge', ridge)])
# Select the optimal percentage of features with grid search
clf = GridSearchCV(clf, {'anova__percentile': [5, 10, 20]}, cv=cv)
clf.fit(X, y) # set the best parameters
coef_ = clf.best_estimator_.steps[-1][1].coef_
coef_ = clf.best_estimator_.steps[0][1].inverse_transform(coef_.reshape(1, -1))
coef_selection_ = coef_.reshape(size, size)
###############################################################################
# Inverse the transformation to plot the results on an image
plt.close('all')
plt.figure(figsize=(7.3, 2.7))
plt.subplot(1, 3, 1)
plt.imshow(coef, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("True weights")
plt.subplot(1, 3, 2)
plt.imshow(coef_selection_, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("Feature Selection")
plt.subplot(1, 3, 3)
plt.imshow(coef_agglomeration_, interpolation="nearest", cmap=plt.cm.RdBu_r)
plt.title("Feature Agglomeration")
plt.subplots_adjust(0.04, 0.0, 0.98, 0.94, 0.16, 0.26)
plt.show()
# Attempt to remove the temporary cachedir, but don't worry if it fails
shutil.rmtree(cachedir, ignore_errors=True)
| bsd-3-clause |
maheshakya/scikit-learn | examples/manifold/plot_compare_methods.py | 259 | 4031 | """
=========================================
Comparison of Manifold Learning methods
=========================================
An illustration of dimensionality reduction on the S-curve dataset
with various manifold learning methods.
For a discussion and comparison of these algorithms, see the
:ref:`manifold module page <manifold>`
For a similar example, where the methods are applied to a
sphere dataset, see :ref:`example_manifold_plot_manifold_sphere.py`
Note that the purpose of the MDS is to find a low-dimensional
representation of the data (here 2D) in which the distances respect well
the distances in the original high-dimensional space. Unlike other
manifold-learning algorithms, it does not seek an isotropic
representation of the data in the low-dimensional space.
"""
# Author: Jake Vanderplas -- <[email protected]>
print(__doc__)
from time import time
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.ticker import NullFormatter
from sklearn import manifold, datasets
# Next line to silence pyflakes. This import is needed.
Axes3D
n_points = 1000
X, color = datasets.samples_generator.make_s_curve(n_points, random_state=0)
n_neighbors = 10
n_components = 2
fig = plt.figure(figsize=(15, 8))
plt.suptitle("Manifold Learning with %i points, %i neighbors"
% (1000, n_neighbors), fontsize=14)
try:
# compatibility matplotlib < 1.0
ax = fig.add_subplot(251, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.Spectral)
ax.view_init(4, -72)
except:
ax = fig.add_subplot(251, projection='3d')
plt.scatter(X[:, 0], X[:, 2], c=color, cmap=plt.cm.Spectral)
methods = ['standard', 'ltsa', 'hessian', 'modified']
labels = ['LLE', 'LTSA', 'Hessian LLE', 'Modified LLE']
for i, method in enumerate(methods):
t0 = time()
Y = manifold.LocallyLinearEmbedding(n_neighbors, n_components,
eigen_solver='auto',
method=method).fit_transform(X)
t1 = time()
print("%s: %.2g sec" % (methods[i], t1 - t0))
ax = fig.add_subplot(252 + i)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("%s (%.2g sec)" % (labels[i], t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
Y = manifold.Isomap(n_neighbors, n_components).fit_transform(X)
t1 = time()
print("Isomap: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(257)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("Isomap (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
mds = manifold.MDS(n_components, max_iter=100, n_init=1)
Y = mds.fit_transform(X)
t1 = time()
print("MDS: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(258)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("MDS (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
se = manifold.SpectralEmbedding(n_components=n_components,
n_neighbors=n_neighbors)
Y = se.fit_transform(X)
t1 = time()
print("SpectralEmbedding: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(259)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("SpectralEmbedding (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
t0 = time()
tsne = manifold.TSNE(n_components=n_components, init='pca', random_state=0)
Y = tsne.fit_transform(X)
t1 = time()
print("t-SNE: %.2g sec" % (t1 - t0))
ax = fig.add_subplot(2, 5, 10)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("t-SNE (%.2g sec)" % (t1 - t0))
ax.xaxis.set_major_formatter(NullFormatter())
ax.yaxis.set_major_formatter(NullFormatter())
plt.axis('tight')
plt.show()
| bsd-3-clause |
crpurcell/RM-tools | RMutils/corner.py | 2 | 22955 | # -*- coding: utf-8 -*-
from __future__ import print_function, absolute_import
import logging
import numpy as np
import matplotlib.pyplot as pl
from matplotlib.ticker import MaxNLocator, NullLocator
from matplotlib.colors import LinearSegmentedColormap, colorConverter
from matplotlib.ticker import ScalarFormatter
try:
from scipy.ndimage import gaussian_filter
except ImportError:
gaussian_filter = None
__all__ = ["corner", "hist2d", "quantile"]
def corner(xs, bins=20, range=None, weights=None, color="k",
smooth=None, smooth1d=None,
labels=None, label_kwargs=None,
show_titles=False, title_fmt=".2f", title_kwargs=None,
truths=None, truth_color="#4682b4",
scale_hist=False, quantiles=None, verbose=False, fig=None,
max_n_ticks=5, top_ticks=False, use_math_text=False, reverse=False,
hist_kwargs=None, **hist2d_kwargs):
"""
Make a *sick* corner plot showing the projections of a data set in a
multi-dimensional space. kwargs are passed to hist2d() or used for
`matplotlib` styling.
Parameters
----------
xs : array_like[nsamples, ndim]
The samples. This should be a 1- or 2-dimensional array. For a 1-D
array this results in a simple histogram. For a 2-D array, the zeroth
axis is the list of samples and the next axis are the dimensions of
the space.
bins : int or array_like[ndim,]
The number of bins to use in histograms, either as a fixed value for
all dimensions, or as a list of integers for each dimension.
weights : array_like[nsamples,]
The weight of each sample. If `None` (default), samples are given
equal weight.
color : str
A ``matplotlib`` style color for all histograms.
smooth, smooth1d : float
The standard deviation for Gaussian kernel passed to
`scipy.ndimage.gaussian_filter` to smooth the 2-D and 1-D histograms
respectively. If `None` (default), no smoothing is applied.
labels : iterable (ndim,)
        A list of names for the dimensions. If ``xs`` is a
``pandas.DataFrame``, labels will default to column names.
label_kwargs : dict
Any extra keyword arguments to send to the `set_xlabel` and
`set_ylabel` methods.
show_titles : bool
Displays a title above each 1-D histogram showing the 0.5 quantile
with the upper and lower errors supplied by the quantiles argument.
title_fmt : string
The format string for the quantiles given in titles. If you explicitly
set ``show_titles=True`` and ``title_fmt=None``, the labels will be
shown as the titles. (default: ``.2f``)
title_kwargs : dict
Any extra keyword arguments to send to the `set_title` command.
range : iterable (ndim,)
A list where each element is either a length 2 tuple containing
lower and upper bounds or a float in range (0., 1.)
giving the fraction of samples to include in bounds, e.g.,
[(0.,10.), (1.,5), 0.999, etc.].
If a fraction, the bounds are chosen to be equal-tailed.
truths : iterable (ndim,)
A list of reference values to indicate on the plots. Individual
values can be omitted by using ``None``.
truth_color : str
        A ``matplotlib`` style color for the ``truths`` markers.
scale_hist : bool
Should the 1-D histograms be scaled in such a way that the zero line
is visible?
quantiles : iterable
A list of fractional quantiles to show on the 1-D histograms as
vertical dashed lines.
verbose : bool
If true, print the values of the computed quantiles.
plot_contours : bool
Draw contours for dense regions of the plot.
use_math_text : bool
If true, then axis tick labels for very large or small exponents will
be displayed as powers of 10 rather than using `e`.
reverse : bool
If true, plot the corner plot starting in the upper-right corner instead
of the usual bottom-left corner
max_n_ticks: int
Maximum number of ticks to try to use
top_ticks : bool
If true, label the top ticks of each axis
fig : matplotlib.Figure
Overplot onto the provided figure object.
hist_kwargs : dict
Any extra keyword arguments to send to the 1-D histogram plots.
**hist2d_kwargs
Any remaining keyword arguments are sent to `corner.hist2d` to generate
the 2-D histogram plots.
"""
if quantiles is None:
quantiles = []
if title_kwargs is None:
title_kwargs = dict()
if label_kwargs is None:
label_kwargs = dict()
# Try filling in labels from pandas.DataFrame columns.
if labels is None:
try:
labels = xs.columns
except AttributeError:
pass
# Deal with 1D sample lists.
xs = np.atleast_1d(xs)
if len(xs.shape) == 1:
xs = np.atleast_2d(xs)
else:
assert len(xs.shape) == 2, "The input sample array must be 1- or 2-D."
xs = xs.T
assert xs.shape[0] <= xs.shape[1], "I don't believe that you want more " \
"dimensions than samples!"
# Parse the weight array.
if weights is not None:
weights = np.asarray(weights)
if weights.ndim != 1:
raise ValueError("Weights must be 1-D")
if xs.shape[1] != weights.shape[0]:
raise ValueError("Lengths of weights must match number of samples")
# Parse the parameter ranges.
if range is None:
if "extents" in hist2d_kwargs:
logging.warn("Deprecated keyword argument 'extents'. "
"Use 'range' instead.")
range = hist2d_kwargs.pop("extents")
else:
range = [[x.min(), x.max()] for x in xs]
# Check for parameters that never change.
m = np.array([e[0] == e[1] for e in range], dtype=bool)
if np.any(m):
raise ValueError(("It looks like the parameter(s) in "
"column(s) {0} have no dynamic range. "
"Please provide a `range` argument.")
.format(", ".join(map(
"{0}".format, np.arange(len(m))[m]))))
else:
# If any of the extents are percentiles, convert them to ranges.
# Also make sure it's a normal list.
range = list(range)
for i, _ in enumerate(range):
try:
emin, emax = range[i]
except TypeError:
q = [0.5 - 0.5*range[i], 0.5 + 0.5*range[i]]
range[i] = quantile(xs[i], q, weights=weights)
if len(range) != xs.shape[0]:
raise ValueError("Dimension mismatch between samples and range")
# Parse the bin specifications.
try:
bins = [int(bins) for _ in range]
except TypeError:
if len(bins) != len(range):
raise ValueError("Dimension mismatch between bins and range")
# Some magic numbers for pretty axis layout.
K = len(xs)
factor = 2.0 # size of one side of one panel
if reverse:
lbdim = 0.2 * factor # size of left/bottom margin
trdim = 0.5 * factor # size of top/right margin
else:
lbdim = 0.5 * factor # size of left/bottom margin
trdim = 0.2 * factor # size of top/right margin
whspace = 0.05 # w/hspace size
plotdim = factor * K + factor * (K - 1.) * whspace
dim = lbdim + plotdim + trdim
# Create a new figure if one wasn't provided.
if fig is None:
fig, axes = pl.subplots(K, K, figsize=(dim, dim))
else:
try:
axes = np.array(fig.axes).reshape((K, K))
except:
raise ValueError("Provided figure has {0} axes, but data has "
"dimensions K={1}".format(len(fig.axes), K))
# Format the figure.
lb = lbdim / dim
tr = (lbdim + plotdim) / dim
fig.subplots_adjust(left=lb, bottom=lb, right=tr, top=tr,
wspace=whspace, hspace=whspace)
# Set up the default histogram keywords.
if hist_kwargs is None:
hist_kwargs = dict()
hist_kwargs["color"] = hist_kwargs.get("color", color)
if smooth1d is None:
hist_kwargs["histtype"] = hist_kwargs.get("histtype", "step")
for i, x in enumerate(xs):
# Deal with masked arrays.
if hasattr(x, "compressed"):
x = x.compressed()
if np.shape(xs)[0] == 1:
ax = axes
else:
if reverse:
ax = axes[K-i-1, K-i-1]
else:
ax = axes[i, i]
# Plot the histograms.
if smooth1d is None:
n, _, _ = ax.hist(x, bins=bins[i], weights=weights,
range=np.sort(range[i]), **hist_kwargs)
else:
if gaussian_filter is None:
raise ImportError("Please install scipy for smoothing")
n, b = np.histogram(x, bins=bins[i], weights=weights,
range=np.sort(range[i]))
n = gaussian_filter(n, smooth1d)
x0 = np.array(list(zip(b[:-1], b[1:]))).flatten()
y0 = np.array(list(zip(n, n))).flatten()
ax.plot(x0, y0, **hist_kwargs)
if truths is not None and truths[i] is not None:
ax.axvline(truths[i], color=truth_color)
# Plot quantiles if wanted.
if len(quantiles) > 0:
qvalues = quantile(x, quantiles, weights=weights)
for q in qvalues:
ax.axvline(q, ls="dashed", color=color)
if verbose:
print("Quantiles:")
print([item for item in zip(quantiles, qvalues)])
if show_titles:
title = None
if title_fmt is not None:
# Compute the quantiles for the title. This might redo
# unneeded computation but who cares.
q_16, q_50, q_84 = quantile(x, [0.16, 0.5, 0.84],
weights=weights)
q_m, q_p = q_50-q_16, q_84-q_50
# Format the quantile display.
fmt = "{{0:{0}}}".format(title_fmt).format
title = r"${{{0}}}_{{-{1}}}^{{+{2}}}$"
title = title.format(fmt(q_50), fmt(q_m), fmt(q_p))
# Add in the column name if it's given.
if labels is not None:
title = "{0} = {1}".format(labels[i], title)
elif labels is not None:
title = "{0}".format(labels[i])
if title is not None:
if reverse:
ax.set_xlabel(title, **title_kwargs)
else:
ax.set_title(title, **title_kwargs)
# Set up the axes.
ax.set_xlim(range[i])
if scale_hist:
maxn = np.max(n)
ax.set_ylim(-0.1 * maxn, 1.1 * maxn)
else:
ax.set_ylim(0, 1.1 * np.max(n))
ax.set_yticklabels([])
if max_n_ticks == 0:
ax.xaxis.set_major_locator(NullLocator())
ax.yaxis.set_major_locator(NullLocator())
else:
ax.xaxis.set_major_locator(MaxNLocator(max_n_ticks, prune="lower"))
ax.yaxis.set_major_locator(NullLocator())
if i < K - 1:
if top_ticks:
ax.xaxis.set_ticks_position("top")
[l.set_rotation(45) for l in ax.get_xticklabels()]
else:
ax.set_xticklabels([])
else:
if reverse:
ax.xaxis.tick_top()
[l.set_rotation(45) for l in ax.get_xticklabels()]
if labels is not None:
if reverse:
ax.set_title(labels[i], y=1.25, **label_kwargs)
else:
ax.set_xlabel(labels[i], **label_kwargs)
# use MathText for axes ticks
ax.xaxis.set_major_formatter(
ScalarFormatter(useMathText=use_math_text))
for j, y in enumerate(xs):
if np.shape(xs)[0] == 1:
ax = axes
else:
if reverse:
ax = axes[K-i-1, K-j-1]
else:
ax = axes[i, j]
if j > i:
ax.set_frame_on(False)
ax.set_xticks([])
ax.set_yticks([])
continue
elif j == i:
continue
# Deal with masked arrays.
if hasattr(y, "compressed"):
y = y.compressed()
hist2d(y, x, ax=ax, range=[range[j], range[i]], weights=weights,
color=color, smooth=smooth, bins=[bins[j], bins[i]],
**hist2d_kwargs)
if truths is not None:
if truths[i] is not None and truths[j] is not None:
ax.plot(truths[j], truths[i], "s", color=truth_color)
if truths[j] is not None:
ax.axvline(truths[j], color=truth_color)
if truths[i] is not None:
ax.axhline(truths[i], color=truth_color)
if max_n_ticks == 0:
ax.xaxis.set_major_locator(NullLocator())
ax.yaxis.set_major_locator(NullLocator())
else:
ax.xaxis.set_major_locator(MaxNLocator(max_n_ticks,
prune="lower"))
ax.yaxis.set_major_locator(MaxNLocator(max_n_ticks,
prune="lower"))
if i < K - 1:
ax.set_xticklabels([])
else:
if reverse:
ax.xaxis.tick_top()
[l.set_rotation(45) for l in ax.get_xticklabels()]
if labels is not None:
ax.set_xlabel(labels[j], **label_kwargs)
if reverse:
ax.xaxis.set_label_coords(0.5, 1.4)
else:
ax.xaxis.set_label_coords(0.5, -0.3)
# use MathText for axes ticks
ax.xaxis.set_major_formatter(
ScalarFormatter(useMathText=use_math_text))
if j > 0:
ax.set_yticklabels([])
else:
if reverse:
ax.yaxis.tick_right()
[l.set_rotation(45) for l in ax.get_yticklabels()]
if labels is not None:
if reverse:
ax.set_ylabel(labels[i], rotation=-90, **label_kwargs)
ax.yaxis.set_label_coords(1.3, 0.5)
else:
ax.set_ylabel(labels[i], **label_kwargs)
ax.yaxis.set_label_coords(-0.3, 0.5)
# use MathText for axes ticks
ax.yaxis.set_major_formatter(
ScalarFormatter(useMathText=use_math_text))
return fig
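# Illustrative usage sketch, not part of the original module: the call below
# assumes a hypothetical (nsamples, ndim) array of posterior samples and made-up
# label names, and shows a minimal way `corner` is typically invoked.
#
#     import numpy as np
#     samples = np.random.randn(10000, 3)
#     fig = corner(samples, labels=["a", "b", "c"],
#                  quantiles=[0.16, 0.5, 0.84], show_titles=True)
#     fig.savefig("corner_demo.png")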
def quantile(x, q, weights=None):
"""
Compute sample quantiles with support for weighted samples.
Note
----
When ``weights`` is ``None``, this method simply calls numpy's percentile
function with the values of ``q`` multiplied by 100.
Parameters
----------
x : array_like[nsamples,]
The samples.
q : array_like[nquantiles,]
The list of quantiles to compute. These should all be in the range
``[0, 1]``.
weights : Optional[array_like[nsamples,]]
        An optional weight corresponding to each sample. These must have the
        same length as ``x``.
Returns
-------
quantiles : array_like[nquantiles,]
The sample quantiles computed at ``q``.
Raises
------
ValueError
For invalid quantiles; ``q`` not in ``[0, 1]`` or dimension mismatch
between ``x`` and ``weights``.
"""
x = np.atleast_1d(x)
q = np.atleast_1d(q)
if np.any(q < 0.0) or np.any(q > 1.0):
raise ValueError("Quantiles must be between 0 and 1")
if weights is None:
return np.percentile(x, list(100.0 * q))
else:
weights = np.atleast_1d(weights)
if len(x) != len(weights):
raise ValueError("Dimension mismatch: len(weights) != len(x)")
idx = np.argsort(x)
sw = weights[idx]
cdf = np.cumsum(sw)[:-1]
cdf /= cdf[-1]
cdf = np.append(0, cdf)
return np.interp(q, cdf, x[idx]).tolist()
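# Illustrative sketch (assumes equal-length sample and weight arrays):
#
#     x = np.random.randn(1000)
#     w = np.random.rand(1000)
#     q16, q50, q84 = quantile(x, [0.16, 0.5, 0.84], weights=w)
#
# With weights=None this reduces to np.percentile(x, [16., 50., 84.]).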
def hist2d(x, y, bins=20, range=None, weights=None, levels=None, smooth=None,
ax=None, color=None, plot_datapoints=True, plot_density=True,
plot_contours=True, no_fill_contours=False, fill_contours=False,
contour_kwargs=None, contourf_kwargs=None, data_kwargs=None,
**kwargs):
"""
Plot a 2-D histogram of samples.
Parameters
----------
x : array_like[nsamples,]
The samples.
y : array_like[nsamples,]
The samples.
levels : array_like
The contour levels to draw.
ax : matplotlib.Axes
A axes instance on which to add the 2-D histogram.
plot_datapoints : bool
Draw the individual data points.
plot_density : bool
Draw the density colormap.
plot_contours : bool
Draw the contours.
no_fill_contours : bool
Add no filling at all to the contours (unlike setting
``fill_contours=False``, which still adds a white fill at the densest
points).
fill_contours : bool
Fill the contours.
contour_kwargs : dict
Any additional keyword arguments to pass to the `contour` method.
contourf_kwargs : dict
Any additional keyword arguments to pass to the `contourf` method.
data_kwargs : dict
Any additional keyword arguments to pass to the `plot` method when
adding the individual data points.
"""
if ax is None:
ax = pl.gca()
# Set the default range based on the data range if not provided.
if range is None:
if "extent" in kwargs:
logging.warn("Deprecated keyword argument 'extent'. "
"Use 'range' instead.")
range = kwargs["extent"]
else:
range = [[x.min(), x.max()], [y.min(), y.max()]]
# Set up the default plotting arguments.
if color is None:
color = "k"
# Choose the default "sigma" contour levels.
if levels is None:
levels = 1.0 - np.exp(-0.5 * np.arange(0.5, 2.1, 0.5) ** 2)
# This is the color map for the density plot, over-plotted to indicate the
# density of the points near the center.
density_cmap = LinearSegmentedColormap.from_list(
"density_cmap", [color, (1, 1, 1, 0)])
# This color map is used to hide the points at the high density areas.
white_cmap = LinearSegmentedColormap.from_list(
"white_cmap", [(1, 1, 1), (1, 1, 1)], N=2)
# This "color map" is the list of colors for the contour levels if the
# contours are filled.
rgba_color = colorConverter.to_rgba(color)
contour_cmap = [list(rgba_color) for l in levels] + [rgba_color]
for i, l in enumerate(levels):
contour_cmap[i][-1] *= float(i) / (len(levels)+1)
# We'll make the 2D histogram to directly estimate the density.
try:
H, X, Y = np.histogram2d(x.flatten(), y.flatten(), bins=bins,
range=list(map(np.sort, range)),
weights=weights)
except ValueError:
raise ValueError("It looks like at least one of your sample columns "
"have no dynamic range. You could try using the "
"'range' argument.")
if smooth is not None:
if gaussian_filter is None:
raise ImportError("Please install scipy for smoothing")
H = gaussian_filter(H, smooth)
# Compute the density levels.
Hflat = H.flatten()
inds = np.argsort(Hflat)[::-1]
Hflat = Hflat[inds]
sm = np.cumsum(Hflat)
sm /= sm[-1]
V = np.empty(len(levels))
for i, v0 in enumerate(levels):
try:
V[i] = Hflat[sm <= v0][-1]
except:
V[i] = Hflat[0]
V.sort()
m = np.diff(V) == 0
if np.any(m):
logging.warning("Too few points to create valid contours")
while np.any(m):
V[np.where(m)[0][0]] *= 1.0 - 1e-4
m = np.diff(V) == 0
V.sort()
# Compute the bin centers.
X1, Y1 = 0.5 * (X[1:] + X[:-1]), 0.5 * (Y[1:] + Y[:-1])
# Extend the array for the sake of the contours at the plot edges.
H2 = H.min() + np.zeros((H.shape[0] + 4, H.shape[1] + 4))
H2[2:-2, 2:-2] = H
H2[2:-2, 1] = H[:, 0]
H2[2:-2, -2] = H[:, -1]
H2[1, 2:-2] = H[0]
H2[-2, 2:-2] = H[-1]
H2[1, 1] = H[0, 0]
H2[1, -2] = H[0, -1]
H2[-2, 1] = H[-1, 0]
H2[-2, -2] = H[-1, -1]
X2 = np.concatenate([
X1[0] + np.array([-2, -1]) * np.diff(X1[:2]),
X1,
X1[-1] + np.array([1, 2]) * np.diff(X1[-2:]),
])
Y2 = np.concatenate([
Y1[0] + np.array([-2, -1]) * np.diff(Y1[:2]),
Y1,
Y1[-1] + np.array([1, 2]) * np.diff(Y1[-2:]),
])
if plot_datapoints:
if data_kwargs is None:
data_kwargs = dict()
data_kwargs["color"] = data_kwargs.get("color", color)
data_kwargs["ms"] = data_kwargs.get("ms", 2.0)
data_kwargs["mec"] = data_kwargs.get("mec", "none")
data_kwargs["alpha"] = data_kwargs.get("alpha", 0.1)
ax.plot(x, y, "o", zorder=-1, rasterized=True, **data_kwargs)
# Plot the base fill to hide the densest data points.
if (plot_contours or plot_density) and not no_fill_contours:
ax.contourf(X2, Y2, H2.T, [V.min(), H.max()],
cmap=white_cmap, antialiased=False)
if plot_contours and fill_contours:
if contourf_kwargs is None:
contourf_kwargs = dict()
contourf_kwargs["colors"] = contourf_kwargs.get("colors", contour_cmap)
contourf_kwargs["antialiased"] = contourf_kwargs.get("antialiased",
False)
ax.contourf(X2, Y2, H2.T, np.concatenate([[0], V, [H.max()*(1+1e-4)]]),
**contourf_kwargs)
# Plot the density map. This can't be plotted at the same time as the
# contour fills.
elif plot_density:
ax.pcolor(X, Y, H.max() - H.T, cmap=density_cmap)
# Plot the contour edge colors.
if plot_contours:
if contour_kwargs is None:
contour_kwargs = dict()
contour_kwargs["colors"] = contour_kwargs.get("colors", color)
ax.contour(X2, Y2, H2.T, V, **contour_kwargs)
ax.set_xlim(range[0])
ax.set_ylim(range[1])
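# Illustrative sketch (hypothetical data, drawn on a fresh matplotlib Axes):
#
#     import matplotlib.pyplot as plt
#     x, y = np.random.randn(2, 5000)
#     fig, ax = plt.subplots()
#     hist2d(x, y, ax=ax, bins=30, fill_contours=True)
#     plt.show()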
| mit |
murphy214/berrl | build/lib/berrl/postgis_interface.py | 2 | 7979 | import psycopg2
import pandas as pd
import sys
import itertools
from sqlalchemy import create_engine
'''
Purpose: This module provides a simple PostGIS integration layer. Its purpose is to
bring a database into memory in its entirety; in the future it may support more robust
queries, but for now it only supports reading an entire database into memory.
Currently it only supports input databases with no passwords. It is currently meant to be
used for PostGIS polygons and linestrings; in the future it may also support point data.
Created by: Bennett Murphy
'''
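# Illustrative usage sketch (assumes a local passwordless postgres server and a
# hypothetical database named 'parcels'); see get_database()/db_buffer() below:
#
#     data = get_database('parcels', SID=4326)
#     for block in db_buffer('parcels', size=50000):
#         print block.columns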
# initially connects to the database
def connect_to_db(dbname):
string = "dbname=%s user=postgres password=secret" % (dbname)
try:
conn = psycopg2.connect(string)
except Exception:
print 'failed connection'
return conn
def retrieve(conn,dbname,SID,geomcolumn):
DEC2FLOAT = psycopg2.extensions.new_type(
psycopg2.extensions.DECIMAL.values,
'DEC2FLOAT',
lambda value, curs: float(value) if value is not None else None)
psycopg2.extensions.register_type(DEC2FLOAT)
if SID==0:
string = "SELECT *,ST_AsEWKT(%s) FROM %s;" % (geomcolumn,dbname)
'''
elif SID==10000:
string = """""SELECT gid, ST_AsEWKT(ST_Collect(ST_MakePolygon(geom))) As geom FROM(SELECT gid, ST_ExteriorRing((ST_Dump(geom)).geom) As geom FROM %s)s GROUP BY gid; """"" % (dbname)
'''
else:
string = "SELECT *,ST_AsEWKT(ST_Transform(%s,%s)) FROM %s;" % (geomcolumn,SID,dbname)
cur = conn.cursor()
try:
cur.execute(string)
except psycopg2.Error as e:
print 'failed'
data = cur.fetchall()
return data
# returns an engine and an SQL query
# to be passed into pandas.read_sql_query
# regular_db is a bool kwarg indicating whether
# the query against dbname is expected to return geometries
def make_query(dbname,**kwargs):
regular_db = False
tablename = False
for key,value in kwargs.iteritems():
if key == 'regular_db':
regular_db = value
if key == 'tablename':
tablename = value
# getting appropriate sql querry
if regular_db == False:
if tablename == False:
try:
sqlquerry = "SELECT *,ST_AsEWKT(ST_Transform(geom,4326)) FROM %s" % dbname
except:
sqlquerry = "SELECT * FROM %s" % dbname
else:
sqlquerry = "SELECT * FROM %s" % tablename
else:
if tablename == False:
sqlquerry = "SELECT * FROM %s" % dbname
else:
sqlquerry = "SELECT * FROM %s" % tablename
# creating engine using sqlalchemy
engine = create_engine('postgresql://postgres:pass@localhost/%s' % dbname)
return sqlquerry,engine
# creates an SQLAlchemy engine for a given database name
def create_sql_engine(dbname):
	return create_engine('postgresql://postgres:pass@localhost/%s' % dbname)
def retrieve_buffer(conn,dbname,args,geomcolumn):
#unpacking arguments
SID,size,normal_db,tablename = args
if not tablename == False:
dbname = tablename
cur = conn.cursor('cursor-name')
cur.itersize = 1000
if size == False:
size = 100000
DEC2FLOAT = psycopg2.extensions.new_type(
psycopg2.extensions.DECIMAL.values,
'DEC2FLOAT',
lambda value, curs: float(value) if value is not None else None)
psycopg2.extensions.register_type(DEC2FLOAT)
if SID==0:
string = "SELECT *,ST_AsEWKT(%s) FROM %s;" % (geomcolumn,dbname)
'''
elif SID==10000:
string = """""SELECT gid, ST_AsEWKT(ST_Collect(ST_MakePolygon(geom))) As geom FROM(SELECT gid, ST_ExteriorRing((ST_Dump(geom)).geom) As geom FROM %s)s GROUP BY gid; """"" % (dbname)
'''
else:
if normal_db == False:
string = "SELECT *,ST_AsEWKT(ST_Transform(%s,%s)) FROM %s LIMIT %s;" % (geomcolumn,SID,dbname,size)
else:
string = "SELECT * FROM %s LIMIT %s;" % (dbname,size)
cur.execute(string)
data = cur.fetchall()
cur.close()
return data,conn
def get_header(conn,dbname,normal_db):
cur = conn.cursor()
string = "SELECT a.attname as column_name, format_type(a.atttypid, a.atttypmod) AS data_type FROM pg_attribute a JOIN pg_class b ON (a.attrelid = b.relfilenode) WHERE b.relname = '%s' and a.attstattarget = -1;" % (dbname)
try:
cur.execute(string)
except psycopg2.Error as e:
print 'failed'
header = cur.fetchall()
newheader = []
for row in header:
newheader.append(row[0])
if normal_db == False:
newheader.append('st_asewkt')
return newheader
# takes a list and turns it into a dataframe
def list2df(df):
df = pd.DataFrame(df[1:], columns=df[0])
return df
# takes a dataframe and turns it into a list
def df2list(df):
df = [df.columns.values.tolist()]+df.values.tolist()
return df
# gets both column header and data
def get_both(conn,dbname,SID):
header = get_header(conn,dbname)
for row in header:
if 'geom' in str(row):
geometryheader = row
data = retrieve(conn,dbname,SID,geometryheader)
data = pd.DataFrame(data,columns=header)
return data
# gets both column header and data
def get_both2(conn,dbname,args):
a,b,normal_db,tablename = args
if not tablename == False:
header = get_header(conn,tablename,normal_db)
else:
header = get_header(conn,dbname,normal_db)
geometryheader = False
for row in header:
if 'geom' in str(row):
geometryheader = row
data,conn = retrieve_buffer(conn,dbname,args,geometryheader)
data = pd.DataFrame(data,columns=header)
return data,conn
# gets database assuming you have postgres sql server running, returns dataframe
def get_database(dbname,**kwargs):
SID=4326
# dbname is the database name
# SID is the spatial identifier you wish to output your table as usually 4326
if kwargs is not None:
for key,value in kwargs.iteritems():
if key == 'SID':
SID = int(value)
conn = connect_to_db(dbname)
data = get_both(conn,dbname,SID)
return data
# gets database assuming you have postgres sql server running, returns dataframe
def get_database_buffer(dbname,**kwargs):
conn = False
size = False
normal_db = False
tablename = False
for key,value in kwargs.iteritems():
if key == 'conn':
conn = value
if key == 'size':
size = value
if key == 'normal_db':
normal_db = value
if key == 'tablename':
tablename = value
SID=4326
print size
# dbname is the database name
# SID is the spatial identifier you wish to output your table as usually 4326
if kwargs is not None:
for key,value in kwargs.iteritems():
if key == 'SID':
SID = int(value)
if conn == False:
conn = connect_to_db(dbname)
# putting args in list so i dont have to carry through for no reason
args = [SID,size,normal_db,tablename]
data,conn = get_both2(conn,dbname,args)
return data,conn
def db_buffer(dbname,**kwargs):
conn = False
size = False
normal_db = False
tablename = False
size = 100000
for key,value in kwargs.iteritems():
if key == 'conn':
conn = value
if key == 'size':
size = value
if key == 'normal_db':
normal_db = value
if key == 'tablename':
tablename = value
count = 0
total = 0
while size == size:
if count == 0:
data,conn = get_database_buffer(dbname,tablename=tablename,normal_db=normal_db,size=size)
else:
data,conn = get_database_buffer(dbname,tablename=tablename,normal_db=normal_db,size=size,conn=conn)
itersize = len(data)
total += itersize
print 'Blocks Generated: %s,Total Rows Generated: %s' % (count,total)
count += 1
yield data
# generates a querry for a given lists of indexs
# reads the sql into pandas and returns result
# indexs are expected to be in string format
def select_fromindexs(dbname,field,indexs,**kwargs):
normal_db = False
tablename = False
# handling even if indexs arent in str format
if type(indexs[0]) == int:
indexs = [str(row) for row in indexs]
for key,value in kwargs.iteritems():
if key == 'size':
size = value
if key == 'normal_db':
normal_db = value
if key == 'tablename':
tablename = value
a,engine = make_query(dbname,tablename=tablename,normal_db=normal_db)
stringindexs = ','.join(indexs)
if not tablename == False:
dbname = tablename
# now making querry
query = '''SELECT * FROM %s WHERE %s IN (%s);''' % (dbname,field,stringindexs)
return pd.read_sql_query(query,engine)
| apache-2.0 |
YinongLong/scikit-learn | sklearn/covariance/tests/test_robust_covariance.py | 77 | 3825 | # Author: Alexandre Gramfort <[email protected]>
# Gael Varoquaux <[email protected]>
# Virgile Fritsch <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.exceptions import NotFittedError
from sklearn import datasets
from sklearn.covariance import empirical_covariance, MinCovDet, \
EllipticEnvelope
from sklearn.covariance import fast_mcd
X = datasets.load_iris().data
X_1d = X[:, 0]
n_samples, n_features = X.shape
def test_mcd():
# Tests the FastMCD algorithm implementation
# Small data set
# test without outliers (random independent normal data)
launch_mcd_on_dataset(100, 5, 0, 0.01, 0.1, 80)
# test with a contaminated data set (medium contamination)
launch_mcd_on_dataset(100, 5, 20, 0.01, 0.01, 70)
# test with a contaminated data set (strong contamination)
launch_mcd_on_dataset(100, 5, 40, 0.1, 0.1, 50)
# Medium data set
launch_mcd_on_dataset(1000, 5, 450, 0.1, 0.1, 540)
# Large data set
launch_mcd_on_dataset(1700, 5, 800, 0.1, 0.1, 870)
# 1D data set
launch_mcd_on_dataset(500, 1, 100, 0.001, 0.001, 350)
def test_fast_mcd_on_invalid_input():
X = np.arange(100)
assert_raise_message(ValueError, 'fast_mcd expects at least 2 samples',
fast_mcd, X)
def test_mcd_class_on_invalid_input():
X = np.arange(100)
mcd = MinCovDet()
assert_raise_message(ValueError, 'MinCovDet expects at least 2 samples',
mcd.fit, X)
def launch_mcd_on_dataset(n_samples, n_features, n_outliers, tol_loc, tol_cov,
tol_support):
rand_gen = np.random.RandomState(0)
data = rand_gen.randn(n_samples, n_features)
# add some outliers
outliers_index = rand_gen.permutation(n_samples)[:n_outliers]
outliers_offset = 10. * \
(rand_gen.randint(2, size=(n_outliers, n_features)) - 0.5)
data[outliers_index] += outliers_offset
inliers_mask = np.ones(n_samples).astype(bool)
inliers_mask[outliers_index] = False
pure_data = data[inliers_mask]
# compute MCD by fitting an object
mcd_fit = MinCovDet(random_state=rand_gen).fit(data)
T = mcd_fit.location_
S = mcd_fit.covariance_
H = mcd_fit.support_
# compare with the estimates learnt from the inliers
error_location = np.mean((pure_data.mean(0) - T) ** 2)
assert(error_location < tol_loc)
error_cov = np.mean((empirical_covariance(pure_data) - S) ** 2)
assert(error_cov < tol_cov)
assert(np.sum(H) >= tol_support)
assert_array_almost_equal(mcd_fit.mahalanobis(data), mcd_fit.dist_)
def test_mcd_issue1127():
# Check that the code does not break with X.shape = (3, 1)
# (i.e. n_support = n_samples)
rnd = np.random.RandomState(0)
X = rnd.normal(size=(3, 1))
mcd = MinCovDet()
mcd.fit(X)
def test_outlier_detection():
rnd = np.random.RandomState(0)
X = rnd.randn(100, 10)
clf = EllipticEnvelope(contamination=0.1)
assert_raises(NotFittedError, clf.predict, X)
assert_raises(NotFittedError, clf.decision_function, X)
clf.fit(X)
y_pred = clf.predict(X)
decision = clf.decision_function(X, raw_values=True)
decision_transformed = clf.decision_function(X, raw_values=False)
assert_array_almost_equal(
decision, clf.mahalanobis(X))
assert_array_almost_equal(clf.mahalanobis(X), clf.dist_)
assert_almost_equal(clf.score(X, np.ones(100)),
(100 - y_pred[y_pred == -1].size) / 100.)
assert(sum(y_pred == -1) == sum(decision_transformed < 0))
| bsd-3-clause |
tmrowco/electricitymap | parsers/US_SVERI.py | 1 | 4545 | #!/usr/bin/env python3
"""Parser for the SVERI area of the USA."""
from datetime import datetime, timedelta
from io import StringIO
import logging
import pandas as pd
import pytz
import requests
# SVERI = Southwest Variable Energy Resource Initiative
# https://sveri.energy.arizona.edu/#howto
# SVERI participants include Arizona's G&T Cooperatives, Arizona Public Service,
# El Paso Electric, Imperial Irrigation District, Public Service Company of New Mexico,
# Salt River Project, Tucson Electric Power and the Western Area Power Administration’s Desert Southwest Region.
#TODO geothermal is negative, 15 to add to request
GENERATION_URL = 'https://sveri.energy.arizona.edu/api?ids=1,2,4,5,6,7,8,16&saveData=true'
GENERATION_MAPPING = {'Solar Aggregate (MW)': 'solar',
'Wind Aggregate (MW)': 'wind',
'Hydro Aggregate (MW)': 'hydro',
'Coal Aggregate (MW)': 'coal',
'Gas Aggregate (MW)': 'gas',
'Other Fossil Fuels Aggregate (MW)': 'unknown',
'Nuclear Aggregate (MW)': 'nuclear',
#'Geothermal Aggregate (MW)': 'geothermal',
'Biomass/gas Aggregate (MW)': 'biomass'}
def query_api(limits, session=None):
"""Makes a request to the SVERI api and returns a dataframe."""
s = session or requests.Session()
url = GENERATION_URL + '&startDate={}&endDate={}'.format(limits[0], limits[1])
data_req = s.get(url)
df = pd.read_csv(StringIO(data_req.text))
return df
def timestamp_converter(timestamp):
"""Turns string representation of timestamp into an aware datetime object."""
dt_naive = datetime.strptime(timestamp, '%Y-%m-%d %H:%M:%S')
mountain = pytz.timezone('America/Dawson_Creek')
dt_aware = mountain.localize(dt_naive)
return dt_aware
def data_processor(raw_data, logger):
"""
Maps generation data to type, logging and removing unknown types.
Returns a list of tuples in the form (datetime, production).
"""
mapped_df = raw_data.rename(columns=lambda x: GENERATION_MAPPING.get(x,x))
actual_keys = set(mapped_df.columns)
expected_keys = set(GENERATION_MAPPING.values()) | {'Time (MST)'}
unknown_keys = actual_keys - expected_keys
for k in unknown_keys:
logger.warning('New type {} seen in US-SVERI data source'.format(k),
extra={'key': 'US-SVERI'})
mapped_df.drop(k, axis=1, inplace=True)
processed_data = []
for index, row in mapped_df.iterrows():
production = row.to_dict()
dt = production.pop('Time (MST)')
dt = timestamp_converter(dt)
processed_data.append((dt, production))
return processed_data
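# Illustrative sketch of the mapping above (hypothetical one-row frame built with
# the source column names from GENERATION_MAPPING):
#
#     sample = pd.DataFrame({'Time (MST)': ['2018-07-01 13:00:00'],
#                            'Solar Aggregate (MW)': [1200.0],
#                            'Wind Aggregate (MW)': [300.0]})
#     data_processor(sample, logging.getLogger(__name__))
#     # -> [(timezone-aware MST datetime, {'solar': 1200.0, 'wind': 300.0})]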
def fetch_production(zone_key='US-SVERI', session=None, target_datetime=None, logger=logging.getLogger(__name__)):
"""
Requests the last known production mix (in MW) of a given zone
Arguments:
zone_key (optional): used in case a parser is able to fetch multiple zones
session: request session passed in order to re-use an existing session
target_datetime: datetime object that allows historical data to be fetched
Return:
A list of dictionaries in the form:
{
'zoneKey': 'FR',
'datetime': '2017-01-01T00:00:00Z',
'production': {
'biomass': 0.0,
'coal': 0.0,
'gas': 0.0,
'hydro': 0.0,
'nuclear': None,
'oil': 0.0,
'solar': 0.0,
'wind': 0.0,
'geothermal': 0.0,
'unknown': 0.0
},
'storage': {
'hydro': -10.0,
},
'source': 'mysource.com'
}
"""
if target_datetime is None:
target_datetime = datetime.now()
start_date = target_datetime.strftime('%Y-%m-%d')
shift_date = target_datetime + timedelta(days=1)
end_date = shift_date.strftime('%Y-%m-%d')
limits = (start_date, end_date)
raw_data = query_api(limits, session=session)
processed_data = data_processor(raw_data, logger)
data = []
for item in processed_data:
datapoint = {
'zoneKey': zone_key,
'datetime': item[0],
'production': item[1],
'storage': {},
'source': 'sveri.energy.arizona.edu'
}
data.append(datapoint)
return data
if __name__ == '__main__':
"Main method, never used by the Electricity Map backend, but handy for testing."
print('fetch_production() ->')
print(fetch_production())
| gpl-3.0 |
DonBeo/scikit-learn | examples/linear_model/plot_lasso_and_elasticnet.py | 249 | 1982 | """
========================================
Lasso and Elastic Net for Sparse Signals
========================================
Estimates Lasso and Elastic-Net regression models on a manually generated
sparse signal corrupted with an additive noise. Estimated coefficients are
compared with the ground-truth.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
###############################################################################
# generate some sparse data to play with
np.random.seed(42)
n_samples, n_features = 50, 200
X = np.random.randn(n_samples, n_features)
coef = 3 * np.random.randn(n_features)
inds = np.arange(n_features)
np.random.shuffle(inds)
coef[inds[10:]] = 0 # sparsify coef
y = np.dot(X, coef)
# add noise
y += 0.01 * np.random.normal(size=(n_samples,))
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[:n_samples // 2], y[:n_samples // 2]
X_test, y_test = X[n_samples // 2:], y[n_samples // 2:]
###############################################################################
# Lasso
from sklearn.linear_model import Lasso
alpha = 0.1
lasso = Lasso(alpha=alpha)
y_pred_lasso = lasso.fit(X_train, y_train).predict(X_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
###############################################################################
# ElasticNet
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7)
y_pred_enet = enet.fit(X_train, y_train).predict(X_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
plt.plot(enet.coef_, label='Elastic net coefficients')
plt.plot(lasso.coef_, label='Lasso coefficients')
plt.plot(coef, '--', label='original coefficients')
plt.legend(loc='best')
plt.title("Lasso R^2: %f, Elastic Net R^2: %f"
% (r2_score_lasso, r2_score_enet))
plt.show()
| bsd-3-clause |
moutai/scikit-learn | examples/semi_supervised/plot_label_propagation_structure.py | 45 | 2433 | """
==============================================
Label Propagation learning a complex structure
==============================================
Example of LabelPropagation learning a complex internal structure
to demonstrate "manifold learning". The outer circle should be
labeled "red" and the inner circle "blue". Because both label groups
lie inside their own distinct shape, we can see that the labels
propagate correctly around the circle.
"""
print(__doc__)
# Authors: Clay Woolam <[email protected]>
# Andreas Mueller <[email protected]>
# Licence: BSD
import numpy as np
import matplotlib.pyplot as plt
from sklearn.semi_supervised import label_propagation
from sklearn.datasets import make_circles
# generate ring with inner box
n_samples = 200
X, y = make_circles(n_samples=n_samples, shuffle=False)
outer, inner = 0, 1
labels = -np.ones(n_samples)
labels[0] = outer
labels[-1] = inner
###############################################################################
# Learn with LabelSpreading
label_spread = label_propagation.LabelSpreading(kernel='knn', alpha=1.0)
label_spread.fit(X, labels)
###############################################################################
# Plot output labels
output_labels = label_spread.transduction_
plt.figure(figsize=(8.5, 4))
plt.subplot(1, 2, 1)
plt.scatter(X[labels == outer, 0], X[labels == outer, 1], color='navy',
marker='s', lw=0, label="outer labeled", s=10)
plt.scatter(X[labels == inner, 0], X[labels == inner, 1], color='c',
marker='s', lw=0, label='inner labeled', s=10)
plt.scatter(X[labels == -1, 0], X[labels == -1, 1], color='darkorange',
marker='.', label='unlabeled')
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Raw data (2 classes=outer and inner)")
plt.subplot(1, 2, 2)
output_label_array = np.asarray(output_labels)
outer_numbers = np.where(output_label_array == outer)[0]
inner_numbers = np.where(output_label_array == inner)[0]
plt.scatter(X[outer_numbers, 0], X[outer_numbers, 1], color='navy',
marker='s', lw=0, s=10, label="outer learned")
plt.scatter(X[inner_numbers, 0], X[inner_numbers, 1], color='c',
marker='s', lw=0, s=10, label="inner learned")
plt.legend(scatterpoints=1, shadow=False, loc='upper right')
plt.title("Labels learned with Label Spreading (KNN)")
plt.subplots_adjust(left=0.07, bottom=0.07, right=0.93, top=0.92)
plt.show()
| bsd-3-clause |
WarrenWeckesser/scikits-image | doc/examples/plot_tinting_grayscale_images.py | 9 | 5593 | """
=========================
Tinting gray-scale images
=========================
It can be useful to artificially tint an image with some color, either to
highlight particular regions of an image or maybe just to liven up a grayscale
image. This example demonstrates image-tinting by scaling RGB values and by
adjusting colors in the HSV color-space.
In 2D, color images are often represented in RGB---3 layers of 2D arrays, where
the 3 layers represent (R)ed, (G)reen and (B)lue channels of the image. The
simplest way of getting a tinted image is to set each RGB channel to the
grayscale image scaled by a different multiplier for each channel. For example,
multiplying the green and blue channels by 0 leaves only the red channel and
produces a bright red image. Similarly, zeroing-out the blue channel leaves
only the red and green channels, which combine to form yellow.
"""
import matplotlib.pyplot as plt
from skimage import data
from skimage import color
from skimage import img_as_float
grayscale_image = img_as_float(data.camera()[::2, ::2])
image = color.gray2rgb(grayscale_image)
red_multiplier = [1, 0, 0]
yellow_multiplier = [1, 1, 0]
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(8, 4), sharex=True, sharey=True)
ax1.imshow(red_multiplier * image)
ax2.imshow(yellow_multiplier * image)
ax1.set_adjustable('box-forced')
ax2.set_adjustable('box-forced')
"""
.. image:: PLOT2RST.current_figure
In many cases, dealing with RGB values may not be ideal. Because of that, there
are many other `color spaces`_ in which you can represent a color image. One
popular color space is called HSV, which represents hue (~the color),
saturation (~colorfulness), and value (~brightness). For example, a color
(hue) might be green, but its saturation is how intense that green is---where
olive is on the low end and neon on the high end.
In some implementations, the hue in HSV goes from 0 to 360, since hues wrap
around in a circle. In scikit-image, however, hues are float values from 0 to
1, so that hue, saturation, and value all share the same scale.
.. _color spaces:
http://en.wikipedia.org/wiki/List_of_color_spaces_and_their_uses
Below, we plot a linear gradient in the hue, with the saturation and value
turned all the way up:
"""
import numpy as np
hue_gradient = np.linspace(0, 1)
hsv = np.ones(shape=(1, len(hue_gradient), 3), dtype=float)
hsv[:, :, 0] = hue_gradient
all_hues = color.hsv2rgb(hsv)
fig, ax = plt.subplots(figsize=(5, 2))
# Set image extent so hues go from 0 to 1 and the image is a nice aspect ratio.
ax.imshow(all_hues, extent=(0, 1, 0, 0.2))
ax.set_axis_off()
"""
.. image:: PLOT2RST.current_figure
Notice how the colors at the far left and far right are the same. That reflects
the fact that the hues wrap around like the color wheel (see HSV_ for more
info).
.. _HSV: http://en.wikipedia.org/wiki/HSL_and_HSV
Now, let's create a little utility function to take an RGB image and:
1. Transform the RGB image to HSV
2. Set the hue and saturation
3. Transform the HSV image back to RGB
"""
def colorize(image, hue, saturation=1):
""" Add color of the given hue to an RGB image.
By default, set the saturation to 1 so that the colors pop!
"""
hsv = color.rgb2hsv(image)
hsv[:, :, 1] = saturation
hsv[:, :, 0] = hue
return color.hsv2rgb(hsv)
"""
Notice that we need to bump up the saturation; images with zero saturation are
grayscale, so we need a non-zero value to actually see the color we've set.
Using the function above, we plot six images with a linear gradient in the hue
and a non-zero saturation:
"""
hue_rotations = np.linspace(0, 1, 6)
fig, axes = plt.subplots(nrows=2, ncols=3, sharex=True, sharey=True)
for ax, hue in zip(axes.flat, hue_rotations):
# Turn down the saturation to give it that vintage look.
tinted_image = colorize(image, hue, saturation=0.3)
ax.imshow(tinted_image, vmin=0, vmax=1)
ax.set_axis_off()
ax.set_adjustable('box-forced')
fig.tight_layout()
"""
.. image:: PLOT2RST.current_figure
You can combine this tinting effect with numpy slicing and fancy-indexing to
selectively tint your images. In the example below, we set the hue of some
rectangles using slicing and scale the RGB values of some pixels found by
thresholding. In practice, you might want to define a region for tinting based
on segmentation results or blob detection methods.
"""
from skimage.filters import rank
# Square regions defined as slices over the first two dimensions.
top_left = (slice(100),) * 2
bottom_right = (slice(-100, None),) * 2
sliced_image = image.copy()
sliced_image[top_left] = colorize(image[top_left], 0.82, saturation=0.5)
sliced_image[bottom_right] = colorize(image[bottom_right], 0.5, saturation=0.5)
# Create a mask selecting regions with interesting texture.
noisy = rank.entropy(grayscale_image, np.ones((9, 9)))
textured_regions = noisy > 4
# Note that using `colorize` here is a bit more difficult, since `rgb2hsv`
# expects an RGB image (height x width x channel), but fancy-indexing returns
# a set of RGB pixels (# pixels x channel).
masked_image = image.copy()
masked_image[textured_regions, :] *= red_multiplier
fig, (ax1, ax2) = plt.subplots(ncols=2, nrows=1, figsize=(8, 4), sharex=True, sharey=True)
ax1.imshow(sliced_image)
ax2.imshow(masked_image)
ax1.set_adjustable('box-forced')
ax2.set_adjustable('box-forced')
plt.show()
"""
.. image:: PLOT2RST.current_figure
For coloring multiple regions, you may also be interested in
`skimage.color.label2rgb
<http://scikit-image.org/docs/0.9.x/api/skimage.color.html#label2rgb>`_.
"""
| bsd-3-clause |
artwr/airflow | airflow/contrib/hooks/bigquery_hook.py | 2 | 86753 | # -*- coding: utf-8 -*-
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
"""
This module contains a BigQuery Hook, as well as a very basic PEP 249
implementation for BigQuery.
"""
import time
import six
from builtins import range
from copy import deepcopy
from six import iteritems
from past.builtins import basestring
from airflow import AirflowException
from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook
from airflow.hooks.dbapi_hook import DbApiHook
from airflow.utils.log.logging_mixin import LoggingMixin
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from pandas_gbq.gbq import \
_check_google_client_version as gbq_check_google_client_version
from pandas_gbq import read_gbq
from pandas_gbq.gbq import \
_test_google_api_imports as gbq_test_google_api_imports
from pandas_gbq.gbq import GbqConnector
class BigQueryHook(GoogleCloudBaseHook, DbApiHook):
"""
Interact with BigQuery. This hook uses the Google Cloud Platform
connection.
"""
conn_name_attr = 'bigquery_conn_id'
def __init__(self,
bigquery_conn_id='bigquery_default',
delegate_to=None,
use_legacy_sql=True,
location=None):
super(BigQueryHook, self).__init__(
gcp_conn_id=bigquery_conn_id, delegate_to=delegate_to)
self.use_legacy_sql = use_legacy_sql
self.location = location
def get_conn(self):
"""
Returns a BigQuery PEP 249 connection object.
"""
service = self.get_service()
project = self._get_field('project')
return BigQueryConnection(
service=service,
project_id=project,
use_legacy_sql=self.use_legacy_sql,
location=self.location,
)
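    # Illustrative usage sketch (assumes an Airflow connection named
    # 'bigquery_default' pointing at a GCP project; the query is arbitrary):
    #
    #     hook = BigQueryHook(bigquery_conn_id='bigquery_default',
    #                         use_legacy_sql=False)
    #     conn = hook.get_conn()
    #     cursor = conn.cursor()
    #     cursor.execute('SELECT 1')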
def get_service(self):
"""
Returns a BigQuery service object.
"""
http_authorized = self._authorize()
return build(
'bigquery', 'v2', http=http_authorized, cache_discovery=False)
def insert_rows(self, table, rows, target_fields=None, commit_every=1000):
"""
Insertion is currently unsupported. Theoretically, you could use
BigQuery's streaming API to insert rows into a table, but this hasn't
been implemented.
"""
raise NotImplementedError()
def get_pandas_df(self, sql, parameters=None, dialect=None):
"""
Returns a Pandas DataFrame for the results produced by a BigQuery
query. The DbApiHook method must be overridden because Pandas
doesn't support PEP 249 connections, except for SQLite. See:
https://github.com/pydata/pandas/blob/master/pandas/io/sql.py#L447
https://github.com/pydata/pandas/issues/6900
:param sql: The BigQuery SQL to execute.
:type sql: str
:param parameters: The parameters to render the SQL query with (not
used, leave to override superclass method)
:type parameters: mapping or iterable
:param dialect: Dialect of BigQuery SQL – legacy SQL or standard SQL
defaults to use `self.use_legacy_sql` if not specified
:type dialect: str in {'legacy', 'standard'}
"""
private_key = self._get_field('key_path', None) or self._get_field('keyfile_dict', None)
if dialect is None:
dialect = 'legacy' if self.use_legacy_sql else 'standard'
return read_gbq(sql,
project_id=self._get_field('project'),
dialect=dialect,
verbose=False,
private_key=private_key)
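    # Illustrative sketch (hypothetical dataset/table identifiers):
    #
    #     df = hook.get_pandas_df(
    #         'SELECT name, value FROM `my_project.my_dataset.my_table`',
    #         dialect='standard')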
def table_exists(self, project_id, dataset_id, table_id):
"""
Checks for the existence of a table in Google BigQuery.
:param project_id: The Google cloud project in which to look for the
table. The connection supplied to the hook must provide access to
the specified project.
:type project_id: str
:param dataset_id: The name of the dataset in which to look for the
table.
:type dataset_id: str
:param table_id: The name of the table to check the existence of.
:type table_id: str
"""
service = self.get_service()
try:
service.tables().get(
projectId=project_id, datasetId=dataset_id,
tableId=table_id).execute()
return True
except HttpError as e:
if e.resp['status'] == '404':
return False
raise
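    # Illustrative sketch (hypothetical identifiers):
    #
    #     if hook.table_exists('my-project', 'my_dataset', 'my_table'):
    #         print('table already exists')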
class BigQueryPandasConnector(GbqConnector):
"""
This connector behaves identically to GbqConnector (from Pandas), except
that it allows the service to be injected, and disables a call to
self.get_credentials(). This allows Airflow to use BigQuery with Pandas
without forcing a three legged OAuth connection. Instead, we can inject
service account credentials into the binding.
"""
def __init__(self,
project_id,
service,
reauth=False,
verbose=False,
dialect='legacy'):
super(BigQueryPandasConnector, self).__init__(project_id)
gbq_check_google_client_version()
gbq_test_google_api_imports()
self.project_id = project_id
self.reauth = reauth
self.service = service
self.verbose = verbose
self.dialect = dialect
class BigQueryConnection(object):
"""
BigQuery does not have a notion of a persistent connection. Thus, these
objects are small stateless factories for cursors, which do all the real
work.
"""
def __init__(self, *args, **kwargs):
self._args = args
self._kwargs = kwargs
def close(self):
""" BigQueryConnection does not have anything to close. """
pass
def commit(self):
""" BigQueryConnection does not support transactions. """
pass
def cursor(self):
""" Return a new :py:class:`Cursor` object using the connection. """
return BigQueryCursor(*self._args, **self._kwargs)
def rollback(self):
raise NotImplementedError(
"BigQueryConnection does not have transactions")
class BigQueryBaseCursor(LoggingMixin):
"""
The BigQuery base cursor contains helper methods to execute queries against
BigQuery. The methods can be used directly by operators, in cases where a
PEP 249 cursor isn't needed.
"""
def __init__(self,
service,
project_id,
use_legacy_sql=True,
api_resource_configs=None,
location=None):
self.service = service
self.project_id = project_id
self.use_legacy_sql = use_legacy_sql
if api_resource_configs:
_validate_value("api_resource_configs", api_resource_configs, dict)
self.api_resource_configs = api_resource_configs \
if api_resource_configs else {}
self.running_job_id = None
self.location = location
def create_empty_table(self,
project_id,
dataset_id,
table_id,
schema_fields=None,
time_partitioning=None,
cluster_fields=None,
labels=None,
view=None,
num_retries=5):
"""
Creates a new, empty table in the dataset.
        To create a view, which is defined by a SQL query, pass a dictionary to the 'view' kwarg.
:param project_id: The project to create the table into.
:type project_id: str
:param dataset_id: The dataset to create the table into.
:type dataset_id: str
:param table_id: The Name of the table to be created.
:type table_id: str
:param schema_fields: If set, the schema field list as defined here:
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema
:type schema_fields: list
:param labels: a dictionary containing labels for the table, passed to BigQuery
:type labels: dict
**Example**: ::
schema_fields=[{"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
{"name": "salary", "type": "INTEGER", "mode": "NULLABLE"}]
:param time_partitioning: configure optional time partitioning fields i.e.
partition by field, type and expiration as per API specifications.
.. seealso::
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#timePartitioning
:type time_partitioning: dict
:param cluster_fields: [Optional] The fields used for clustering.
Must be specified with time_partitioning, data in the table will be first
partitioned and subsequently clustered.
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#clustering.fields
:type cluster_fields: list
:param view: [Optional] A dictionary containing definition for the view.
If set, it will create a view instead of a table:
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#view
:type view: dict
**Example**: ::
view = {
"query": "SELECT * FROM `test-project-id.test_dataset_id.test_table_prefix*` LIMIT 1000",
"useLegacySql": False
}
:return: None
"""
project_id = project_id if project_id is not None else self.project_id
table_resource = {
'tableReference': {
'tableId': table_id
}
}
if schema_fields:
table_resource['schema'] = {'fields': schema_fields}
if time_partitioning:
table_resource['timePartitioning'] = time_partitioning
if cluster_fields:
table_resource['clustering'] = {
'fields': cluster_fields
}
if labels:
table_resource['labels'] = labels
if view:
table_resource['view'] = view
self.log.info('Creating Table %s:%s.%s',
project_id, dataset_id, table_id)
try:
self.service.tables().insert(
projectId=project_id,
datasetId=dataset_id,
body=table_resource).execute(num_retries=num_retries)
self.log.info('Table created successfully: %s:%s.%s',
project_id, dataset_id, table_id)
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content)
)
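    # Illustrative sketch (hypothetical project/dataset/table names; the schema
    # entry mirrors the docstring example above):
    #
    #     cursor.create_empty_table(
    #         project_id='my-project', dataset_id='my_dataset', table_id='my_table',
    #         schema_fields=[{"name": "emp_name", "type": "STRING",
    #                         "mode": "REQUIRED"}])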
def create_external_table(self,
external_project_dataset_table,
schema_fields,
source_uris,
source_format='CSV',
autodetect=False,
compression='NONE',
ignore_unknown_values=False,
max_bad_records=0,
skip_leading_rows=0,
field_delimiter=',',
quote_character=None,
allow_quoted_newlines=False,
allow_jagged_rows=False,
src_fmt_configs=None,
labels=None
):
"""
Creates a new external table in the dataset with the data in Google
Cloud Storage. See here:
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
for more details about these parameters.
:param external_project_dataset_table:
The dotted (<project>.|<project>:)<dataset>.<table>($<partition>) BigQuery
table name to create external table.
If <project> is not included, project will be the
project defined in the connection json.
:type external_project_dataset_table: str
:param schema_fields: The schema field list as defined here:
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#resource
:type schema_fields: list
:param source_uris: The source Google Cloud
Storage URI (e.g. gs://some-bucket/some-file.txt). A single wild
per-object name can be used.
:type source_uris: list
:param source_format: File format to export.
:type source_format: str
:param autodetect: Try to detect schema and format options automatically.
Any option specified explicitly will be honored.
:type autodetect: bool
:param compression: [Optional] The compression type of the data source.
Possible values include GZIP and NONE.
The default value is NONE.
This setting is ignored for Google Cloud Bigtable,
Google Cloud Datastore backups and Avro formats.
:type compression: str
:param ignore_unknown_values: [Optional] Indicates if BigQuery should allow
extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns
are treated as bad records, and if there are too many bad records, an
invalid error is returned in the job result.
:type ignore_unknown_values: bool
:param max_bad_records: The maximum number of bad records that BigQuery can
ignore when running the job.
:type max_bad_records: int
:param skip_leading_rows: Number of rows to skip when loading from a CSV.
:type skip_leading_rows: int
:param field_delimiter: The delimiter to use when loading from a CSV.
:type field_delimiter: str
:param quote_character: The value that is used to quote data sections in a CSV
file.
:type quote_character: str
:param allow_quoted_newlines: Whether to allow quoted newlines (true) or not
(false).
:type allow_quoted_newlines: bool
:param allow_jagged_rows: Accept rows that are missing trailing optional columns.
The missing values are treated as nulls. If false, records with missing
trailing columns are treated as bad records, and if there are too many bad
records, an invalid error is returned in the job result. Only applicable when
            source_format is CSV.
:type allow_jagged_rows: bool
:param src_fmt_configs: configure optional fields specific to the source format
:type src_fmt_configs: dict
:param labels: a dictionary containing labels for the table, passed to BigQuery
:type labels: dict
"""
if src_fmt_configs is None:
src_fmt_configs = {}
project_id, dataset_id, external_table_id = \
_split_tablename(table_input=external_project_dataset_table,
default_project_id=self.project_id,
var_name='external_project_dataset_table')
# bigquery only allows certain source formats
# we check to make sure the passed source format is valid
# if it's not, we raise a ValueError
# Refer to this link for more details:
# https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#externalDataConfiguration.sourceFormat
source_format = source_format.upper()
allowed_formats = [
"CSV", "NEWLINE_DELIMITED_JSON", "AVRO", "GOOGLE_SHEETS",
"DATASTORE_BACKUP", "PARQUET"
]
if source_format not in allowed_formats:
raise ValueError("{0} is not a valid source format. "
"Please use one of the following types: {1}"
.format(source_format, allowed_formats))
compression = compression.upper()
allowed_compressions = ['NONE', 'GZIP']
if compression not in allowed_compressions:
raise ValueError("{0} is not a valid compression format. "
"Please use one of the following types: {1}"
.format(compression, allowed_compressions))
table_resource = {
'externalDataConfiguration': {
'autodetect': autodetect,
'sourceFormat': source_format,
'sourceUris': source_uris,
'compression': compression,
'ignoreUnknownValues': ignore_unknown_values
},
'tableReference': {
'projectId': project_id,
'datasetId': dataset_id,
'tableId': external_table_id,
}
}
if schema_fields:
table_resource['externalDataConfiguration'].update({
'schema': {
'fields': schema_fields
}
})
self.log.info('Creating external table: %s', external_project_dataset_table)
if max_bad_records:
table_resource['externalDataConfiguration']['maxBadRecords'] = max_bad_records
# if following fields are not specified in src_fmt_configs,
# honor the top-level params for backward-compatibility
if 'skipLeadingRows' not in src_fmt_configs:
src_fmt_configs['skipLeadingRows'] = skip_leading_rows
if 'fieldDelimiter' not in src_fmt_configs:
src_fmt_configs['fieldDelimiter'] = field_delimiter
        if 'quote' not in src_fmt_configs:
src_fmt_configs['quote'] = quote_character
if 'allowQuotedNewlines' not in src_fmt_configs:
src_fmt_configs['allowQuotedNewlines'] = allow_quoted_newlines
if 'allowJaggedRows' not in src_fmt_configs:
src_fmt_configs['allowJaggedRows'] = allow_jagged_rows
src_fmt_to_param_mapping = {
'CSV': 'csvOptions',
'GOOGLE_SHEETS': 'googleSheetsOptions'
}
src_fmt_to_configs_mapping = {
'csvOptions': [
'allowJaggedRows', 'allowQuotedNewlines',
'fieldDelimiter', 'skipLeadingRows',
'quote'
],
'googleSheetsOptions': ['skipLeadingRows']
}
if source_format in src_fmt_to_param_mapping.keys():
valid_configs = src_fmt_to_configs_mapping[
src_fmt_to_param_mapping[source_format]
]
src_fmt_configs = {
k: v
for k, v in src_fmt_configs.items() if k in valid_configs
}
table_resource['externalDataConfiguration'][src_fmt_to_param_mapping[
source_format]] = src_fmt_configs
if labels:
table_resource['labels'] = labels
try:
self.service.tables().insert(
projectId=project_id,
datasetId=dataset_id,
body=table_resource
).execute()
self.log.info('External table created successfully: %s',
external_project_dataset_table)
except HttpError as err:
raise Exception(
'BigQuery job failed. Error was: {}'.format(err.content)
)
def patch_table(self,
dataset_id,
table_id,
project_id=None,
description=None,
expiration_time=None,
external_data_configuration=None,
friendly_name=None,
labels=None,
schema=None,
time_partitioning=None,
view=None,
require_partition_filter=None):
"""
Patch information in an existing table.
        It only updates fields that are provided in the request object.
Reference: https://cloud.google.com/bigquery/docs/reference/rest/v2/tables/patch
:param dataset_id: The dataset containing the table to be patched.
:type dataset_id: str
:param table_id: The Name of the table to be patched.
:type table_id: str
:param project_id: The project containing the table to be patched.
:type project_id: str
:param description: [Optional] A user-friendly description of this table.
:type description: str
:param expiration_time: [Optional] The time when this table expires,
in milliseconds since the epoch.
:type expiration_time: int
:param external_data_configuration: [Optional] A dictionary containing
properties of a table stored outside of BigQuery.
:type external_data_configuration: dict
:param friendly_name: [Optional] A descriptive name for this table.
:type friendly_name: str
:param labels: [Optional] A dictionary containing labels associated with this table.
:type labels: dict
:param schema: [Optional] If set, the schema field list as defined here:
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schema
The supported schema modifications and unsupported schema modification are listed here:
https://cloud.google.com/bigquery/docs/managing-table-schemas
**Example**: ::
schema=[{"name": "emp_name", "type": "STRING", "mode": "REQUIRED"},
{"name": "salary", "type": "INTEGER", "mode": "NULLABLE"}]
:type schema: list
:param time_partitioning: [Optional] A dictionary containing time-based partitioning
definition for the table.
:type time_partitioning: dict
:param view: [Optional] A dictionary containing definition for the view.
If set, it will patch a view instead of a table:
https://cloud.google.com/bigquery/docs/reference/rest/v2/tables#view
**Example**: ::
view = {
"query": "SELECT * FROM `test-project-id.test_dataset_id.test_table_prefix*` LIMIT 500",
"useLegacySql": False
}
:type view: dict
        :param require_partition_filter: [Optional] If true, queries over this table require a
            partition filter. If false, queries over the table do not require a partition filter.
:type require_partition_filter: bool
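        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and identifiers are placeholders): ::
            cursor.patch_table(
                dataset_id='my_dataset',
                table_id='my_table',
                description='Patched description',
                labels={'env': 'dev'})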
"""
project_id = project_id if project_id is not None else self.project_id
table_resource = {}
if description is not None:
table_resource['description'] = description
if expiration_time is not None:
table_resource['expirationTime'] = expiration_time
if external_data_configuration:
table_resource['externalDataConfiguration'] = external_data_configuration
if friendly_name is not None:
table_resource['friendlyName'] = friendly_name
if labels:
table_resource['labels'] = labels
if schema:
table_resource['schema'] = {'fields': schema}
if time_partitioning:
table_resource['timePartitioning'] = time_partitioning
if view:
table_resource['view'] = view
if require_partition_filter is not None:
table_resource['requirePartitionFilter'] = require_partition_filter
self.log.info('Patching Table %s:%s.%s',
project_id, dataset_id, table_id)
try:
self.service.tables().patch(
projectId=project_id,
datasetId=dataset_id,
tableId=table_id,
body=table_resource).execute()
self.log.info('Table patched successfully: %s:%s.%s',
project_id, dataset_id, table_id)
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content)
)
def run_query(self,
sql,
destination_dataset_table=None,
write_disposition='WRITE_EMPTY',
allow_large_results=False,
flatten_results=None,
udf_config=None,
use_legacy_sql=None,
maximum_billing_tier=None,
maximum_bytes_billed=None,
create_disposition='CREATE_IF_NEEDED',
query_params=None,
labels=None,
schema_update_options=(),
priority='INTERACTIVE',
time_partitioning=None,
api_resource_configs=None,
cluster_fields=None,
location=None):
"""
Executes a BigQuery SQL query. Optionally persists results in a BigQuery
table. See here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs
For more details about these parameters.
:param sql: The BigQuery SQL to execute.
:type sql: str
:param destination_dataset_table: The dotted <dataset>.<table>
BigQuery table to save the query results.
:type destination_dataset_table: str
:param write_disposition: What to do if the table already exists in
BigQuery.
:type write_disposition: str
:param allow_large_results: Whether to allow large results.
:type allow_large_results: bool
:param flatten_results: If true and query uses legacy SQL dialect, flattens
all nested and repeated fields in the query results. ``allowLargeResults``
must be true if this is set to false. For standard SQL queries, this
flag is ignored and results are never flattened.
:type flatten_results: bool
:param udf_config: The User Defined Function configuration for the query.
See https://cloud.google.com/bigquery/user-defined-functions for details.
:type udf_config: list
:param use_legacy_sql: Whether to use legacy SQL (true) or standard SQL (false).
If `None`, defaults to `self.use_legacy_sql`.
:type use_legacy_sql: bool
:param api_resource_configs: a dictionary that contain params
'configuration' applied for Google BigQuery Jobs API:
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs
for example, {'query': {'useQueryCache': False}}. You could use it
            if you need to provide some params that are not exposed as
            arguments by the BigQueryHook.
:type api_resource_configs: dict
:param maximum_billing_tier: Positive integer that serves as a
multiplier of the basic price.
:type maximum_billing_tier: int
:param maximum_bytes_billed: Limits the bytes billed for this job.
Queries that will have bytes billed beyond this limit will fail
(without incurring a charge). If unspecified, this will be
set to your project default.
:type maximum_bytes_billed: float
:param create_disposition: Specifies whether the job is allowed to
create new tables.
:type create_disposition: str
:param query_params: a list of dictionary containing query parameter types and
values, passed to BigQuery
:type query_params: list
:param labels: a dictionary containing labels for the job/query,
passed to BigQuery
:type labels: dict
:param schema_update_options: Allows the schema of the destination
table to be updated as a side effect of the query job.
:type schema_update_options: tuple
:param priority: Specifies a priority for the query.
Possible values include INTERACTIVE and BATCH.
The default value is INTERACTIVE.
:type priority: str
:param time_partitioning: configure optional time partitioning fields i.e.
partition by field, type and expiration as per API specifications.
:type time_partitioning: dict
:param cluster_fields: Request that the result of this query be stored sorted
by one or more columns. This is only available in combination with
time_partitioning. The order of columns given determines the sort order.
:type cluster_fields: list[str]
:param location: The geographic location of the job. Required except for
US and EU. See details at
https://cloud.google.com/bigquery/docs/locations#specifying_your_location
:type location: str
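        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and the table and label values
        are placeholders): ::
            job_id = cursor.run_query(
                sql='SELECT owner, COUNT(*) AS n FROM my_dataset.my_table GROUP BY owner',
                destination_dataset_table='my_dataset.my_summary_table',
                write_disposition='WRITE_TRUNCATE',
                use_legacy_sql=False,
                labels={'pipeline': 'daily'})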
"""
if time_partitioning is None:
time_partitioning = {}
if location:
self.location = location
if not api_resource_configs:
api_resource_configs = self.api_resource_configs
else:
_validate_value('api_resource_configs',
api_resource_configs, dict)
configuration = deepcopy(api_resource_configs)
if 'query' not in configuration:
configuration['query'] = {}
else:
_validate_value("api_resource_configs['query']",
configuration['query'], dict)
if sql is None and not configuration['query'].get('query', None):
raise TypeError('`BigQueryBaseCursor.run_query` '
'missing 1 required positional argument: `sql`')
# BigQuery also allows you to define how you want a table's schema to change
# as a side effect of a query job
# for more details:
# https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.schemaUpdateOptions
allowed_schema_update_options = [
'ALLOW_FIELD_ADDITION', "ALLOW_FIELD_RELAXATION"
]
if not set(allowed_schema_update_options
).issuperset(set(schema_update_options)):
raise ValueError("{0} contains invalid schema update options. "
"Please only use one or more of the following "
"options: {1}"
.format(schema_update_options,
allowed_schema_update_options))
if schema_update_options:
if write_disposition not in ["WRITE_APPEND", "WRITE_TRUNCATE"]:
raise ValueError("schema_update_options is only "
"allowed if write_disposition is "
"'WRITE_APPEND' or 'WRITE_TRUNCATE'.")
if destination_dataset_table:
destination_project, destination_dataset, destination_table = \
_split_tablename(table_input=destination_dataset_table,
default_project_id=self.project_id)
destination_dataset_table = {
'projectId': destination_project,
'datasetId': destination_dataset,
'tableId': destination_table,
}
if cluster_fields:
cluster_fields = {'fields': cluster_fields}
query_param_list = [
(sql, 'query', None, six.string_types),
(priority, 'priority', 'INTERACTIVE', six.string_types),
(use_legacy_sql, 'useLegacySql', self.use_legacy_sql, bool),
(query_params, 'queryParameters', None, list),
(udf_config, 'userDefinedFunctionResources', None, list),
(maximum_billing_tier, 'maximumBillingTier', None, int),
(maximum_bytes_billed, 'maximumBytesBilled', None, float),
(time_partitioning, 'timePartitioning', {}, dict),
(schema_update_options, 'schemaUpdateOptions', None, tuple),
(destination_dataset_table, 'destinationTable', None, dict),
(cluster_fields, 'clustering', None, dict),
]
for param_tuple in query_param_list:
param, param_name, param_default, param_type = param_tuple
if param_name not in configuration['query'] and param in [None, {}, ()]:
if param_name == 'timePartitioning':
param_default = _cleanse_time_partitioning(
destination_dataset_table, time_partitioning)
param = param_default
if param not in [None, {}, ()]:
_api_resource_configs_duplication_check(
param_name, param, configuration['query'])
configuration['query'][param_name] = param
# check valid type of provided param,
# it last step because we can get param from 2 sources,
# and first of all need to find it
_validate_value(param_name, configuration['query'][param_name],
param_type)
if param_name == 'schemaUpdateOptions' and param:
self.log.info("Adding experimental 'schemaUpdateOptions': "
"{0}".format(schema_update_options))
if param_name == 'destinationTable':
for key in ['projectId', 'datasetId', 'tableId']:
if key not in configuration['query']['destinationTable']:
raise ValueError(
"Not correct 'destinationTable' in "
"api_resource_configs. 'destinationTable' "
"must be a dict with {'projectId':'', "
"'datasetId':'', 'tableId':''}")
configuration['query'].update({
'allowLargeResults': allow_large_results,
'flattenResults': flatten_results,
'writeDisposition': write_disposition,
'createDisposition': create_disposition,
})
if 'useLegacySql' in configuration['query'] and configuration['query']['useLegacySql'] and\
'queryParameters' in configuration['query']:
raise ValueError("Query parameters are not allowed "
"when using legacy SQL")
if labels:
_api_resource_configs_duplication_check(
'labels', labels, configuration)
configuration['labels'] = labels
return self.run_with_configuration(configuration)
def run_extract( # noqa
self,
source_project_dataset_table,
destination_cloud_storage_uris,
compression='NONE',
export_format='CSV',
field_delimiter=',',
print_header=True,
labels=None):
"""
Executes a BigQuery extract command to copy data from BigQuery to
Google Cloud Storage. See here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs
For more details about these parameters.
:param source_project_dataset_table: The dotted <dataset>.<table>
BigQuery table to use as the source data.
:type source_project_dataset_table: str
:param destination_cloud_storage_uris: The destination Google Cloud
Storage URI (e.g. gs://some-bucket/some-file.txt). Follows
convention defined here:
https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple
:type destination_cloud_storage_uris: list
:param compression: Type of compression to use.
:type compression: str
:param export_format: File format to export.
:type export_format: str
:param field_delimiter: The delimiter to use when extracting to a CSV.
:type field_delimiter: str
:param print_header: Whether to print a header for a CSV file extract.
:type print_header: bool
:param labels: a dictionary containing labels for the job/query,
passed to BigQuery
:type labels: dict
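        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and names are placeholders): ::
            cursor.run_extract(
                source_project_dataset_table='my_dataset.my_table',
                destination_cloud_storage_uris=['gs://my-bucket/exports/my_table_*.csv'],
                export_format='CSV',
                print_header=True)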
"""
source_project, source_dataset, source_table = \
_split_tablename(table_input=source_project_dataset_table,
default_project_id=self.project_id,
var_name='source_project_dataset_table')
configuration = {
'extract': {
'sourceTable': {
'projectId': source_project,
'datasetId': source_dataset,
'tableId': source_table,
},
'compression': compression,
'destinationUris': destination_cloud_storage_uris,
'destinationFormat': export_format,
}
}
if labels:
configuration['labels'] = labels
if export_format == 'CSV':
# Only set fieldDelimiter and printHeader fields if using CSV.
# Google does not like it if you set these fields for other export
# formats.
configuration['extract']['fieldDelimiter'] = field_delimiter
configuration['extract']['printHeader'] = print_header
return self.run_with_configuration(configuration)
def run_copy(self,
source_project_dataset_tables,
destination_project_dataset_table,
write_disposition='WRITE_EMPTY',
create_disposition='CREATE_IF_NEEDED',
labels=None):
"""
Executes a BigQuery copy command to copy data from one BigQuery table
to another. See here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.copy
For more details about these parameters.
:param source_project_dataset_tables: One or more dotted
``(project:|project.)<dataset>.<table>``
BigQuery tables to use as the source data. Use a list if there are
multiple source tables.
If <project> is not included, project will be the project defined
in the connection json.
:type source_project_dataset_tables: list|string
:param destination_project_dataset_table: The destination BigQuery
table. Format is: ``(project:|project.)<dataset>.<table>``
:type destination_project_dataset_table: str
:param write_disposition: The write disposition if the table already exists.
:type write_disposition: str
:param create_disposition: The create disposition if the table doesn't exist.
:type create_disposition: str
:param labels: a dictionary containing labels for the job/query,
passed to BigQuery
:type labels: dict
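        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and table names are placeholders): ::
            cursor.run_copy(
                source_project_dataset_tables=['my_dataset.table_a', 'my_dataset.table_b'],
                destination_project_dataset_table='my_dataset.table_merged',
                write_disposition='WRITE_TRUNCATE')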
"""
source_project_dataset_tables = ([
source_project_dataset_tables
] if not isinstance(source_project_dataset_tables, list) else
source_project_dataset_tables)
source_project_dataset_tables_fixup = []
for source_project_dataset_table in source_project_dataset_tables:
source_project, source_dataset, source_table = \
_split_tablename(table_input=source_project_dataset_table,
default_project_id=self.project_id,
var_name='source_project_dataset_table')
source_project_dataset_tables_fixup.append({
'projectId':
source_project,
'datasetId':
source_dataset,
'tableId':
source_table
})
destination_project, destination_dataset, destination_table = \
_split_tablename(table_input=destination_project_dataset_table,
default_project_id=self.project_id)
configuration = {
'copy': {
'createDisposition': create_disposition,
'writeDisposition': write_disposition,
'sourceTables': source_project_dataset_tables_fixup,
'destinationTable': {
'projectId': destination_project,
'datasetId': destination_dataset,
'tableId': destination_table
}
}
}
if labels:
configuration['labels'] = labels
return self.run_with_configuration(configuration)
def run_load(self,
destination_project_dataset_table,
source_uris,
schema_fields=None,
source_format='CSV',
create_disposition='CREATE_IF_NEEDED',
skip_leading_rows=0,
write_disposition='WRITE_EMPTY',
field_delimiter=',',
max_bad_records=0,
quote_character=None,
ignore_unknown_values=False,
allow_quoted_newlines=False,
allow_jagged_rows=False,
schema_update_options=(),
src_fmt_configs=None,
time_partitioning=None,
cluster_fields=None,
autodetect=False):
"""
Executes a BigQuery load command to load data from Google Cloud Storage
to BigQuery. See here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs
For more details about these parameters.
:param destination_project_dataset_table:
The dotted (<project>.|<project>:)<dataset>.<table>($<partition>) BigQuery
table to load data into. If <project> is not included, project will be the
project defined in the connection json. If a partition is specified the
operator will automatically append the data, create a new partition or create
a new DAY partitioned table.
:type destination_project_dataset_table: str
:param schema_fields: The schema field list as defined here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load
Required if autodetect=False; optional if autodetect=True.
:type schema_fields: list
:param autodetect: Attempt to autodetect the schema for CSV and JSON
source files.
:type autodetect: bool
:param source_uris: The source Google Cloud
            Storage URI (e.g. gs://some-bucket/some-file.txt). A single wildcard
            per object name can be used.
:type source_uris: list
:param source_format: File format to export.
:type source_format: str
:param create_disposition: The create disposition if the table doesn't exist.
:type create_disposition: str
:param skip_leading_rows: Number of rows to skip when loading from a CSV.
:type skip_leading_rows: int
:param write_disposition: The write disposition if the table already exists.
:type write_disposition: str
:param field_delimiter: The delimiter to use when loading from a CSV.
:type field_delimiter: str
:param max_bad_records: The maximum number of bad records that BigQuery can
ignore when running the job.
:type max_bad_records: int
:param quote_character: The value that is used to quote data sections in a CSV
file.
:type quote_character: str
:param ignore_unknown_values: [Optional] Indicates if BigQuery should allow
extra values that are not represented in the table schema.
If true, the extra values are ignored. If false, records with extra columns
are treated as bad records, and if there are too many bad records, an
invalid error is returned in the job result.
:type ignore_unknown_values: bool
:param allow_quoted_newlines: Whether to allow quoted newlines (true) or not
(false).
:type allow_quoted_newlines: bool
:param allow_jagged_rows: Accept rows that are missing trailing optional columns.
The missing values are treated as nulls. If false, records with missing
trailing columns are treated as bad records, and if there are too many bad
records, an invalid error is returned in the job result. Only applicable when
            source_format is CSV.
:type allow_jagged_rows: bool
:param schema_update_options: Allows the schema of the destination
table to be updated as a side effect of the load job.
:type schema_update_options: tuple
:param src_fmt_configs: configure optional fields specific to the source format
:type src_fmt_configs: dict
:param time_partitioning: configure optional time partitioning fields i.e.
partition by field, type and expiration as per API specifications.
:type time_partitioning: dict
:param cluster_fields: Request that the result of this load be stored sorted
by one or more columns. This is only available in combination with
time_partitioning. The order of columns given determines the sort order.
:type cluster_fields: list[str]
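        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and bucket/table names are
        placeholders): ::
            cursor.run_load(
                destination_project_dataset_table='my_dataset.my_table',
                source_uris=['gs://my-bucket/incoming/*.csv'],
                schema_fields=[{'name': 'id', 'type': 'INTEGER', 'mode': 'REQUIRED'},
                               {'name': 'name', 'type': 'STRING', 'mode': 'NULLABLE'}],
                source_format='CSV',
                skip_leading_rows=1,
                write_disposition='WRITE_APPEND')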
"""
# bigquery only allows certain source formats
# we check to make sure the passed source format is valid
# if it's not, we raise a ValueError
# Refer to this link for more details:
# https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.query.tableDefinitions.(key).sourceFormat
if schema_fields is None and not autodetect:
raise ValueError(
'You must either pass a schema or autodetect=True.')
if src_fmt_configs is None:
src_fmt_configs = {}
source_format = source_format.upper()
allowed_formats = [
"CSV", "NEWLINE_DELIMITED_JSON", "AVRO", "GOOGLE_SHEETS",
"DATASTORE_BACKUP", "PARQUET"
]
if source_format not in allowed_formats:
raise ValueError("{0} is not a valid source format. "
"Please use one of the following types: {1}"
.format(source_format, allowed_formats))
# bigquery also allows you to define how you want a table's schema to change
# as a side effect of a load
# for more details:
# https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs#configuration.load.schemaUpdateOptions
allowed_schema_update_options = [
'ALLOW_FIELD_ADDITION', "ALLOW_FIELD_RELAXATION"
]
if not set(allowed_schema_update_options).issuperset(
set(schema_update_options)):
raise ValueError(
"{0} contains invalid schema update options."
"Please only use one or more of the following options: {1}"
.format(schema_update_options, allowed_schema_update_options))
destination_project, destination_dataset, destination_table = \
_split_tablename(table_input=destination_project_dataset_table,
default_project_id=self.project_id,
var_name='destination_project_dataset_table')
configuration = {
'load': {
'autodetect': autodetect,
'createDisposition': create_disposition,
'destinationTable': {
'projectId': destination_project,
'datasetId': destination_dataset,
'tableId': destination_table,
},
'sourceFormat': source_format,
'sourceUris': source_uris,
'writeDisposition': write_disposition,
'ignoreUnknownValues': ignore_unknown_values
}
}
time_partitioning = _cleanse_time_partitioning(
destination_project_dataset_table,
time_partitioning
)
if time_partitioning:
configuration['load'].update({
'timePartitioning': time_partitioning
})
if cluster_fields:
configuration['load'].update({'clustering': {'fields': cluster_fields}})
if schema_fields:
configuration['load']['schema'] = {'fields': schema_fields}
if schema_update_options:
if write_disposition not in ["WRITE_APPEND", "WRITE_TRUNCATE"]:
raise ValueError("schema_update_options is only "
"allowed if write_disposition is "
"'WRITE_APPEND' or 'WRITE_TRUNCATE'.")
else:
self.log.info(
"Adding experimental "
"'schemaUpdateOptions': {0}".format(schema_update_options))
configuration['load'][
'schemaUpdateOptions'] = schema_update_options
if max_bad_records:
configuration['load']['maxBadRecords'] = max_bad_records
# if following fields are not specified in src_fmt_configs,
# honor the top-level params for backward-compatibility
if 'skipLeadingRows' not in src_fmt_configs:
src_fmt_configs['skipLeadingRows'] = skip_leading_rows
if 'fieldDelimiter' not in src_fmt_configs:
src_fmt_configs['fieldDelimiter'] = field_delimiter
if 'ignoreUnknownValues' not in src_fmt_configs:
src_fmt_configs['ignoreUnknownValues'] = ignore_unknown_values
if quote_character is not None:
src_fmt_configs['quote'] = quote_character
if allow_quoted_newlines:
src_fmt_configs['allowQuotedNewlines'] = allow_quoted_newlines
src_fmt_to_configs_mapping = {
'CSV': [
'allowJaggedRows', 'allowQuotedNewlines', 'autodetect',
'fieldDelimiter', 'skipLeadingRows', 'ignoreUnknownValues',
'nullMarker', 'quote'
],
'DATASTORE_BACKUP': ['projectionFields'],
'NEWLINE_DELIMITED_JSON': ['autodetect', 'ignoreUnknownValues'],
'PARQUET': ['autodetect', 'ignoreUnknownValues'],
'AVRO': [],
}
valid_configs = src_fmt_to_configs_mapping[source_format]
src_fmt_configs = {
k: v
for k, v in src_fmt_configs.items() if k in valid_configs
}
configuration['load'].update(src_fmt_configs)
if allow_jagged_rows:
configuration['load']['allowJaggedRows'] = allow_jagged_rows
return self.run_with_configuration(configuration)
def run_with_configuration(self, configuration):
"""
Executes a BigQuery SQL query. See here:
https://cloud.google.com/bigquery/docs/reference/v2/jobs
For more details about the configuration parameter.
:param configuration: The configuration parameter maps directly to
BigQuery's configuration field in the job object. See
https://cloud.google.com/bigquery/docs/reference/v2/jobs for
details.
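        **Example** (a minimal sketch of a raw query configuration, not part of the
        original docs; ``cursor`` is assumed to be an instance of this class): ::
            job_id = cursor.run_with_configuration({
                'query': {
                    'query': 'SELECT 1',
                    'useLegacySql': False,
                }
            })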
"""
jobs = self.service.jobs()
job_data = {'configuration': configuration}
# Send query and wait for reply.
query_reply = jobs \
.insert(projectId=self.project_id, body=job_data) \
.execute()
self.running_job_id = query_reply['jobReference']['jobId']
if 'location' in query_reply['jobReference']:
location = query_reply['jobReference']['location']
else:
location = self.location
# Wait for query to finish.
keep_polling_job = True
while keep_polling_job:
try:
if location:
job = jobs.get(
projectId=self.project_id,
jobId=self.running_job_id,
location=location).execute()
else:
job = jobs.get(
projectId=self.project_id,
jobId=self.running_job_id).execute()
if job['status']['state'] == 'DONE':
keep_polling_job = False
# Check if job had errors.
if 'errorResult' in job['status']:
raise Exception(
'BigQuery job failed. Final error was: {}. The job was: {}'.
format(job['status']['errorResult'], job))
else:
self.log.info('Waiting for job to complete : %s, %s',
self.project_id, self.running_job_id)
time.sleep(5)
except HttpError as err:
if err.resp.status in [500, 503]:
self.log.info(
'%s: Retryable error, waiting for job to complete: %s',
err.resp.status, self.running_job_id)
time.sleep(5)
else:
raise Exception(
                        'BigQuery job status check failed. Final error was: {}'.format(
                            err.resp.status))
return self.running_job_id
def poll_job_complete(self, job_id):
jobs = self.service.jobs()
try:
if self.location:
job = jobs.get(projectId=self.project_id,
jobId=job_id,
location=self.location).execute()
else:
job = jobs.get(projectId=self.project_id,
jobId=job_id).execute()
if job['status']['state'] == 'DONE':
return True
except HttpError as err:
if err.resp.status in [500, 503]:
self.log.info(
'%s: Retryable error while polling job with id %s',
err.resp.status, job_id)
else:
raise Exception(
                    'BigQuery job status check failed. Final error was: {}'.format(
                        err.resp.status))
return False
def cancel_query(self):
"""
Cancel all started queries that have not yet completed
"""
jobs = self.service.jobs()
if (self.running_job_id and
not self.poll_job_complete(self.running_job_id)):
self.log.info('Attempting to cancel job : %s, %s', self.project_id,
self.running_job_id)
if self.location:
jobs.cancel(
projectId=self.project_id,
jobId=self.running_job_id,
location=self.location).execute()
else:
jobs.cancel(
projectId=self.project_id,
jobId=self.running_job_id).execute()
else:
self.log.info('No running BigQuery jobs to cancel.')
return
# Wait for all the calls to cancel to finish
max_polling_attempts = 12
polling_attempts = 0
job_complete = False
while polling_attempts < max_polling_attempts and not job_complete:
polling_attempts = polling_attempts + 1
job_complete = self.poll_job_complete(self.running_job_id)
if job_complete:
self.log.info('Job successfully canceled: %s, %s',
self.project_id, self.running_job_id)
elif polling_attempts == max_polling_attempts:
self.log.info(
"Stopping polling due to timeout. Job with id %s "
"has not completed cancel and may or may not finish.",
self.running_job_id)
else:
self.log.info('Waiting for canceled job with id %s to finish.',
self.running_job_id)
time.sleep(5)
def get_schema(self, dataset_id, table_id):
"""
        Get the schema for a given dataset.table.
see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
:param dataset_id: the dataset ID of the requested table
:param table_id: the table ID of the requested table
:return: a table schema
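        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and identifiers are placeholders): ::
            schema = cursor.get_schema(dataset_id='my_dataset', table_id='my_table')
            field_names = [field['name'] for field in schema['fields']]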
"""
tables_resource = self.service.tables() \
.get(projectId=self.project_id, datasetId=dataset_id, tableId=table_id) \
.execute()
return tables_resource['schema']
def get_tabledata(self, dataset_id, table_id,
max_results=None, selected_fields=None, page_token=None,
start_index=None):
"""
Get the data of a given dataset.table and optionally with selected columns.
see https://cloud.google.com/bigquery/docs/reference/v2/tabledata/list
:param dataset_id: the dataset ID of the requested table.
:param table_id: the table ID of the requested table.
:param max_results: the maximum results to return.
:param selected_fields: List of fields to return (comma-separated). If
unspecified, all fields are returned.
:param page_token: page token, returned from a previous call,
identifying the result set.
:param start_index: zero based index of the starting row to read.
:return: map containing the requested rows.
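        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and identifiers are placeholders): ::
            rows = cursor.get_tabledata(
                dataset_id='my_dataset',
                table_id='my_table',
                max_results=100,
                selected_fields='id,name')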
"""
optional_params = {}
if max_results:
optional_params['maxResults'] = max_results
if selected_fields:
optional_params['selectedFields'] = selected_fields
if page_token:
optional_params['pageToken'] = page_token
if start_index:
optional_params['startIndex'] = start_index
return (self.service.tabledata().list(
projectId=self.project_id,
datasetId=dataset_id,
tableId=table_id,
**optional_params).execute())
def run_table_delete(self, deletion_dataset_table,
ignore_if_missing=False):
"""
Delete an existing table from the dataset;
If the table does not exist, return an error unless ignore_if_missing
is set to True.
:param deletion_dataset_table: A dotted
``(<project>.|<project>:)<dataset>.<table>`` that indicates which table
will be deleted.
:type deletion_dataset_table: str
:param ignore_if_missing: if True, then return success even if the
requested table does not exist.
:type ignore_if_missing: bool
:return:
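        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and the table name is a
        placeholder): ::
            cursor.run_table_delete(
                deletion_dataset_table='my_dataset.obsolete_table',
                ignore_if_missing=True)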
"""
deletion_project, deletion_dataset, deletion_table = \
_split_tablename(table_input=deletion_dataset_table,
default_project_id=self.project_id)
try:
self.service.tables() \
.delete(projectId=deletion_project,
datasetId=deletion_dataset,
tableId=deletion_table) \
.execute()
self.log.info('Deleted table %s:%s.%s.', deletion_project,
deletion_dataset, deletion_table)
except HttpError:
if not ignore_if_missing:
raise Exception('Table deletion failed. Table does not exist.')
else:
self.log.info('Table does not exist. Skipping.')
def run_table_upsert(self, dataset_id, table_resource, project_id=None):
"""
        Creates a new, empty table in the dataset;
If the table already exists, update the existing table.
Since BigQuery does not natively allow table upserts, this is not an
atomic operation.
:param dataset_id: the dataset to upsert the table into.
:type dataset_id: str
:param table_resource: a table resource. see
https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
:type table_resource: dict
:param project_id: the project to upsert the table into. If None,
project will be self.project_id.
:return:
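        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and the table resource below is
        a minimal placeholder): ::
            cursor.run_table_upsert(
                dataset_id='my_dataset',
                table_resource={
                    'tableReference': {'tableId': 'my_table'},
                    'schema': {'fields': [{'name': 'id', 'type': 'INTEGER'}]},
                })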
"""
# check to see if the table exists
table_id = table_resource['tableReference']['tableId']
project_id = project_id if project_id is not None else self.project_id
tables_list_resp = self.service.tables().list(
projectId=project_id, datasetId=dataset_id).execute()
while True:
for table in tables_list_resp.get('tables', []):
if table['tableReference']['tableId'] == table_id:
# found the table, do update
self.log.info('Table %s:%s.%s exists, updating.',
project_id, dataset_id, table_id)
return self.service.tables().update(
projectId=project_id,
datasetId=dataset_id,
tableId=table_id,
body=table_resource).execute()
# If there is a next page, we need to check the next page.
if 'nextPageToken' in tables_list_resp:
tables_list_resp = self.service.tables()\
.list(projectId=project_id,
datasetId=dataset_id,
pageToken=tables_list_resp['nextPageToken'])\
.execute()
# If there is no next page, then the table doesn't exist.
else:
# do insert
                self.log.info('Table %s:%s.%s does not exist. Creating.',
project_id, dataset_id, table_id)
return self.service.tables().insert(
projectId=project_id,
datasetId=dataset_id,
body=table_resource).execute()
def run_grant_dataset_view_access(self,
source_dataset,
view_dataset,
view_table,
source_project=None,
view_project=None):
"""
Grant authorized view access of a dataset to a view table.
If this view has already been granted access to the dataset, do nothing.
This method is not atomic. Running it may clobber a simultaneous update.
:param source_dataset: the source dataset
:type source_dataset: str
:param view_dataset: the dataset that the view is in
:type view_dataset: str
:param view_table: the table of the view
:type view_table: str
:param source_project: the project of the source dataset. If None,
self.project_id will be used.
:type source_project: str
:param view_project: the project that the view is in. If None,
self.project_id will be used.
:type view_project: str
:return: the datasets resource of the source dataset.
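        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and dataset/view names are
        placeholders): ::
            cursor.run_grant_dataset_view_access(
                source_dataset='source_dataset',
                view_dataset='reporting_dataset',
                view_table='my_authorized_view')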
"""
# Apply default values to projects
source_project = source_project if source_project else self.project_id
view_project = view_project if view_project else self.project_id
# we don't want to clobber any existing accesses, so we have to get
# info on the dataset before we can add view access
source_dataset_resource = self.service.datasets().get(
projectId=source_project, datasetId=source_dataset).execute()
access = source_dataset_resource[
'access'] if 'access' in source_dataset_resource else []
view_access = {
'view': {
'projectId': view_project,
'datasetId': view_dataset,
'tableId': view_table
}
}
# check to see if the view we want to add already exists.
if view_access not in access:
self.log.info(
'Granting table %s:%s.%s authorized view access to %s:%s dataset.',
view_project, view_dataset, view_table, source_project,
source_dataset)
access.append(view_access)
return self.service.datasets().patch(
projectId=source_project,
datasetId=source_dataset,
body={
'access': access
}).execute()
else:
# if view is already in access, do nothing.
self.log.info(
'Table %s:%s.%s already has authorized view access to %s:%s dataset.',
view_project, view_dataset, view_table, source_project, source_dataset)
return source_dataset_resource
def create_empty_dataset(self, dataset_id="", project_id="",
dataset_reference=None):
"""
Create a new empty dataset:
https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/insert
:param project_id: The name of the project where we want to create
            an empty dataset. Not needed if projectId is provided in dataset_reference.
        :type project_id: str
        :param dataset_id: The id of the dataset. Not needed if datasetId is
            provided in dataset_reference.
:type dataset_id: str
:param dataset_reference: Dataset reference that could be provided
with request body. More info:
https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#resource
:type dataset_reference: dict
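        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and names are placeholders;
        either form below works): ::
            cursor.create_empty_dataset(dataset_id='my_new_dataset')
            # or, equivalently, via a dataset reference body:
            cursor.create_empty_dataset(dataset_reference={
                'datasetReference': {'datasetId': 'my_new_dataset',
                                     'projectId': 'my-project'}})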
"""
if dataset_reference:
_validate_value('dataset_reference', dataset_reference, dict)
else:
dataset_reference = {}
if "datasetReference" not in dataset_reference:
dataset_reference["datasetReference"] = {}
if not dataset_reference["datasetReference"].get("datasetId") and not dataset_id:
raise ValueError(
"{} not provided datasetId. Impossible to create dataset")
dataset_required_params = [(dataset_id, "datasetId", ""),
(project_id, "projectId", self.project_id)]
for param_tuple in dataset_required_params:
param, param_name, param_default = param_tuple
if param_name not in dataset_reference['datasetReference']:
if param_default and not param:
self.log.info("{} was not specified. Will be used default "
"value {}.".format(param_name,
param_default))
param = param_default
dataset_reference['datasetReference'].update(
{param_name: param})
elif param:
_api_resource_configs_duplication_check(
param_name, param,
dataset_reference['datasetReference'], 'dataset_reference')
dataset_id = dataset_reference.get("datasetReference").get("datasetId")
dataset_project_id = dataset_reference.get("datasetReference").get(
"projectId")
self.log.info('Creating Dataset: %s in project: %s ', dataset_id,
dataset_project_id)
try:
self.service.datasets().insert(
projectId=dataset_project_id,
body=dataset_reference).execute()
self.log.info('Dataset created successfully: In project %s '
'Dataset %s', dataset_project_id, dataset_id)
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content)
)
def delete_dataset(self, project_id, dataset_id):
"""
Delete a dataset of Big query in your project.
:param project_id: The name of the project where we have the dataset .
:type project_id: str
:param dataset_id: The dataset to be delete.
:type dataset_id: str
:return:
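        **Example** (an illustrative call, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and identifiers are placeholders): ::
            cursor.delete_dataset(project_id='my-project', dataset_id='obsolete_dataset')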
"""
project_id = project_id if project_id is not None else self.project_id
self.log.info('Deleting from project: %s Dataset:%s',
project_id, dataset_id)
try:
self.service.datasets().delete(
projectId=project_id,
datasetId=dataset_id).execute()
self.log.info('Dataset deleted successfully: In project %s '
'Dataset %s', project_id, dataset_id)
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content)
)
def get_dataset(self, dataset_id, project_id=None):
"""
        Method returns dataset_resource if the dataset exists
        and raises a 404 error if the dataset does not exist
:param dataset_id: The BigQuery Dataset ID
:type dataset_id: str
:param project_id: The GCP Project ID
:type project_id: str
:return: dataset_resource
.. seealso::
For more information, see Dataset Resource content:
https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#resource
"""
if not dataset_id or not isinstance(dataset_id, str):
raise ValueError("dataset_id argument must be provided and has "
"a type 'str'. You provided: {}".format(dataset_id))
dataset_project_id = project_id if project_id else self.project_id
try:
dataset_resource = self.service.datasets().get(
datasetId=dataset_id, projectId=dataset_project_id).execute()
self.log.info("Dataset Resource: {}".format(dataset_resource))
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content))
return dataset_resource
def get_datasets_list(self, project_id=None):
"""
Method returns full list of BigQuery datasets in the current project
.. seealso::
For more information, see:
https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets/list
:param project_id: Google Cloud Project for which you
try to get all datasets
:type project_id: str
:return: datasets_list
Example of returned datasets_list: ::
            [
            {
"kind":"bigquery#dataset",
"location":"US",
"id":"your-project:dataset_2_test",
"datasetReference":{
"projectId":"your-project",
"datasetId":"dataset_2_test"
}
},
{
"kind":"bigquery#dataset",
"location":"US",
"id":"your-project:dataset_1_test",
"datasetReference":{
"projectId":"your-project",
"datasetId":"dataset_1_test"
}
}
]
"""
dataset_project_id = project_id if project_id else self.project_id
try:
datasets_list = self.service.datasets().list(
projectId=dataset_project_id).execute()['datasets']
self.log.info("Datasets List: {}".format(datasets_list))
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content))
return datasets_list
def insert_all(self, project_id, dataset_id, table_id,
rows, ignore_unknown_values=False,
skip_invalid_rows=False, fail_on_error=False):
"""
Method to stream data into BigQuery one record at a time without needing
to run a load job
.. seealso::
For more information, see:
https://cloud.google.com/bigquery/docs/reference/rest/v2/tabledata/insertAll
:param project_id: The name of the project where we have the table
:type project_id: str
:param dataset_id: The name of the dataset where we have the table
:type dataset_id: str
:param table_id: The name of the table
:type table_id: str
:param rows: the rows to insert
:type rows: list
            **Example of rows**: ::
rows=[{"json": {"a_key": "a_value_0"}}, {"json": {"a_key": "a_value_1"}}]
:param ignore_unknown_values: [Optional] Accept rows that contain values
that do not match the schema. The unknown values are ignored.
The default value is false, which treats unknown values as errors.
:type ignore_unknown_values: bool
:param skip_invalid_rows: [Optional] Insert all valid rows of a request,
even if invalid rows exist. The default value is false, which causes
the entire request to fail if any invalid rows exist.
:type skip_invalid_rows: bool
:param fail_on_error: [Optional] Force the task to fail if any errors occur.
The default value is false, which indicates the task should not fail
even if any insertion errors occur.
:type fail_on_error: bool
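        **Example** (a minimal usage sketch, not part of the original docs; ``cursor``
        is assumed to be an instance of this class and identifiers are placeholders): ::
            cursor.insert_all(
                project_id='my-project',
                dataset_id='my_dataset',
                table_id='my_table',
                rows=[{'json': {'a_key': 'a_value_0'}},
                      {'json': {'a_key': 'a_value_1'}}],
                fail_on_error=True)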
"""
dataset_project_id = project_id if project_id else self.project_id
body = {
"rows": rows,
"ignoreUnknownValues": ignore_unknown_values,
"kind": "bigquery#tableDataInsertAllRequest",
"skipInvalidRows": skip_invalid_rows,
}
try:
self.log.info('Inserting {} row(s) into Table {}:{}.{}'.format(
len(rows), dataset_project_id,
dataset_id, table_id))
resp = self.service.tabledata().insertAll(
projectId=dataset_project_id, datasetId=dataset_id,
tableId=table_id, body=body
).execute()
if 'insertErrors' not in resp:
self.log.info('All row(s) inserted successfully: {}:{}.{}'.format(
dataset_project_id, dataset_id, table_id))
else:
error_msg = '{} insert error(s) occurred: {}:{}.{}. Details: {}'.format(
len(resp['insertErrors']),
dataset_project_id, dataset_id, table_id, resp['insertErrors'])
if fail_on_error:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(error_msg)
)
self.log.info(error_msg)
except HttpError as err:
raise AirflowException(
'BigQuery job failed. Error was: {}'.format(err.content)
)
class BigQueryCursor(BigQueryBaseCursor):
"""
A very basic BigQuery PEP 249 cursor implementation. The PyHive PEP 249
implementation was used as a reference:
https://github.com/dropbox/PyHive/blob/master/pyhive/presto.py
https://github.com/dropbox/PyHive/blob/master/pyhive/common.py
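    **Example** (a minimal PEP 249-style usage sketch, not part of the original
    docs; ``service`` and the project id are placeholders obtained elsewhere,
    e.g. from the hook's authorized connection): ::
        cursor = BigQueryCursor(service=service, project_id='my-project',
                                use_legacy_sql=False)
        cursor.execute('SELECT name, value FROM my_dataset.my_table')
        for row in cursor.fetchall():
            print(row)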
"""
def __init__(self, service, project_id, use_legacy_sql=True, location=None):
super(BigQueryCursor, self).__init__(
service=service,
project_id=project_id,
use_legacy_sql=use_legacy_sql,
location=location,
)
self.buffersize = None
self.page_token = None
self.job_id = None
self.buffer = []
self.all_pages_loaded = False
@property
def description(self):
""" The schema description method is not currently implemented. """
raise NotImplementedError
def close(self):
""" By default, do nothing """
pass
@property
def rowcount(self):
""" By default, return -1 to indicate that this is not supported. """
return -1
def execute(self, operation, parameters=None):
"""
Executes a BigQuery query, and returns the job ID.
:param operation: The query to execute.
:type operation: str
:param parameters: Parameters to substitute into the query.
:type parameters: dict
"""
sql = _bind_parameters(operation,
parameters) if parameters else operation
self.job_id = self.run_query(sql)
def executemany(self, operation, seq_of_parameters):
"""
Execute a BigQuery query multiple times with different parameters.
:param operation: The query to execute.
:type operation: str
:param seq_of_parameters: List of dictionary parameters to substitute into the
query.
:type seq_of_parameters: list
"""
for parameters in seq_of_parameters:
self.execute(operation, parameters)
def fetchone(self):
""" Fetch the next row of a query result set. """
return self.next()
def next(self):
"""
Helper method for fetchone, which returns the next row from a buffer.
If the buffer is empty, attempts to paginate through the result set for
the next page, and load it into the buffer.
"""
if not self.job_id:
return None
if len(self.buffer) == 0:
if self.all_pages_loaded:
return None
query_results = (self.service.jobs().getQueryResults(
projectId=self.project_id,
jobId=self.job_id,
pageToken=self.page_token).execute())
if 'rows' in query_results and query_results['rows']:
self.page_token = query_results.get('pageToken')
fields = query_results['schema']['fields']
col_types = [field['type'] for field in fields]
rows = query_results['rows']
for dict_row in rows:
typed_row = ([
_bq_cast(vs['v'], col_types[idx])
for idx, vs in enumerate(dict_row['f'])
])
self.buffer.append(typed_row)
if not self.page_token:
self.all_pages_loaded = True
else:
# Reset all state since we've exhausted the results.
self.page_token = None
self.job_id = None
self.page_token = None
return None
return self.buffer.pop(0)
def fetchmany(self, size=None):
"""
Fetch the next set of rows of a query result, returning a sequence of sequences
(e.g. a list of tuples). An empty sequence is returned when no more rows are
available. The number of rows to fetch per call is specified by the parameter.
If it is not given, the cursor's arraysize determines the number of rows to be
fetched. The method should try to fetch as many rows as indicated by the size
parameter. If this is not possible due to the specified number of rows not being
available, fewer rows may be returned. An :py:class:`~pyhive.exc.Error`
(or subclass) exception is raised if the previous call to
:py:meth:`execute` did not produce any result set or no call was issued yet.
"""
if size is None:
size = self.arraysize
result = []
for _ in range(size):
one = self.fetchone()
if one is None:
break
else:
result.append(one)
return result
def fetchall(self):
"""
Fetch all (remaining) rows of a query result, returning them as a sequence of
sequences (e.g. a list of tuples).
"""
result = []
while True:
one = self.fetchone()
if one is None:
break
else:
result.append(one)
return result
def get_arraysize(self):
""" Specifies the number of rows to fetch at a time with .fetchmany() """
return self._buffersize if self.buffersize else 1
def set_arraysize(self, arraysize):
""" Specifies the number of rows to fetch at a time with .fetchmany() """
self.buffersize = arraysize
arraysize = property(get_arraysize, set_arraysize)
def setinputsizes(self, sizes):
""" Does nothing by default """
pass
def setoutputsize(self, size, column=None):
""" Does nothing by default """
pass
def _bind_parameters(operation, parameters):
""" Helper method that binds parameters to a SQL query. """
# inspired by MySQL Python Connector (conversion.py)
string_parameters = {}
for (name, value) in iteritems(parameters):
if value is None:
string_parameters[name] = 'NULL'
elif isinstance(value, basestring):
string_parameters[name] = "'" + _escape(value) + "'"
else:
string_parameters[name] = str(value)
return operation % string_parameters
def _escape(s):
""" Helper method that escapes parameters to a SQL query. """
e = s
e = e.replace('\\', '\\\\')
e = e.replace('\n', '\\n')
e = e.replace('\r', '\\r')
e = e.replace("'", "\\'")
e = e.replace('"', '\\"')
return e
def _bq_cast(string_field, bq_type):
"""
Helper method that casts a BigQuery row to the appropriate data types.
This is useful because BigQuery returns all fields as strings.
"""
if string_field is None:
return None
elif bq_type == 'INTEGER':
return int(string_field)
elif bq_type == 'FLOAT' or bq_type == 'TIMESTAMP':
return float(string_field)
elif bq_type == 'BOOLEAN':
if string_field not in ['true', 'false']:
raise ValueError("{} must have value 'true' or 'false'".format(
string_field))
return string_field == 'true'
else:
return string_field
def _split_tablename(table_input, default_project_id, var_name=None):
if '.' not in table_input:
raise ValueError(
'Expected target table name in the format of '
'<dataset>.<table>. Got: {}'.format(table_input))
if not default_project_id:
raise ValueError("INTERNAL: No default project is specified")
def var_print(var_name):
if var_name is None:
return ""
else:
return "Format exception for {var}: ".format(var=var_name)
if table_input.count('.') + table_input.count(':') > 3:
raise Exception(('{var}Use either : or . to specify project '
'got {input}').format(
var=var_print(var_name), input=table_input))
cmpt = table_input.rsplit(':', 1)
project_id = None
rest = table_input
if len(cmpt) == 1:
project_id = None
rest = cmpt[0]
elif len(cmpt) == 2 and cmpt[0].count(':') <= 1:
if cmpt[-1].count('.') != 2:
project_id = cmpt[0]
rest = cmpt[1]
else:
            raise Exception(('{var}Expected format of (<project>:)<dataset>.<table>, '
'got {input}').format(
var=var_print(var_name), input=table_input))
cmpt = rest.split('.')
if len(cmpt) == 3:
if project_id:
raise ValueError(
"{var}Use either : or . to specify project".format(
var=var_print(var_name)))
project_id = cmpt[0]
dataset_id = cmpt[1]
table_id = cmpt[2]
elif len(cmpt) == 2:
dataset_id = cmpt[0]
table_id = cmpt[1]
else:
raise Exception(
            ('{var}Expected format of (<project>.|<project>:)<dataset>.<table>, '
'got {input}').format(var=var_print(var_name), input=table_input))
if project_id is None:
if var_name is not None:
log = LoggingMixin().log
log.info('Project not included in {var}: {input}; '
'using project "{project}"'.format(
var=var_name,
input=table_input,
project=default_project_id))
project_id = default_project_id
return project_id, dataset_id, table_id
def _cleanse_time_partitioning(destination_dataset_table, time_partitioning_in):
# if it is a partitioned table ($ is in the table name) add partition load option
if time_partitioning_in is None:
time_partitioning_in = {}
time_partitioning_out = {}
if destination_dataset_table and '$' in destination_dataset_table:
time_partitioning_out['type'] = 'DAY'
time_partitioning_out.update(time_partitioning_in)
return time_partitioning_out
def _validate_value(key, value, expected_type):
""" function to check expected type and raise
error if type is not correct """
if not isinstance(value, expected_type):
raise TypeError("{} argument must have a type {} not {}".format(
key, expected_type, type(value)))
def _api_resource_configs_duplication_check(key, value, config_dict,
config_dict_name='api_resource_configs'):
if key in config_dict and value != config_dict[key]:
raise ValueError("Values of {param_name} param are duplicated. "
"{dict_name} contained {param_name} param "
"in `query` config and {param_name} was also provided "
"with arg to run_query() method. Please remove duplicates."
.format(param_name=key, dict_name=config_dict_name))
| apache-2.0 |
OshynSong/scikit-learn | examples/linear_model/plot_lasso_and_elasticnet.py | 249 | 1982 | """
========================================
Lasso and Elastic Net for Sparse Signals
========================================
Estimates Lasso and Elastic-Net regression models on a manually generated
sparse signal corrupted with an additive noise. Estimated coefficients are
compared with the ground-truth.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
###############################################################################
# generate some sparse data to play with
np.random.seed(42)
n_samples, n_features = 50, 200
X = np.random.randn(n_samples, n_features)
coef = 3 * np.random.randn(n_features)
inds = np.arange(n_features)
np.random.shuffle(inds)
coef[inds[10:]] = 0 # sparsify coef
y = np.dot(X, coef)
# add noise
y += 0.01 * np.random.normal(size=(n_samples,))
# Split data in train set and test set
n_samples = X.shape[0]
X_train, y_train = X[:n_samples // 2], y[:n_samples // 2]
X_test, y_test = X[n_samples // 2:], y[n_samples // 2:]
###############################################################################
# Lasso
from sklearn.linear_model import Lasso
alpha = 0.1
lasso = Lasso(alpha=alpha)
y_pred_lasso = lasso.fit(X_train, y_train).predict(X_test)
r2_score_lasso = r2_score(y_test, y_pred_lasso)
print(lasso)
print("r^2 on test data : %f" % r2_score_lasso)
###############################################################################
# ElasticNet
from sklearn.linear_model import ElasticNet
enet = ElasticNet(alpha=alpha, l1_ratio=0.7)
y_pred_enet = enet.fit(X_train, y_train).predict(X_test)
r2_score_enet = r2_score(y_test, y_pred_enet)
print(enet)
print("r^2 on test data : %f" % r2_score_enet)
plt.plot(enet.coef_, label='Elastic net coefficients')
plt.plot(lasso.coef_, label='Lasso coefficients')
plt.plot(coef, '--', label='original coefficients')
plt.legend(loc='best')
plt.title("Lasso R^2: %f, Elastic Net R^2: %f"
% (r2_score_lasso, r2_score_enet))
plt.show()
| bsd-3-clause |
ggirelli/gpseq-img-py | pygpseq/fish/nucleus.py | 1 | 15768 | # -*- coding: utf-8 -*-
'''
@author: Gabriele Girelli
@contact: [email protected]
@description: methods for nuclear channels manipulation.
'''
# DEPENDENCIES =================================================================
import matplotlib
import matplotlib.pyplot as plt
import math
import numpy as np
import os
import pandas as pd
from scipy.ndimage.measurements import center_of_mass
from skimage import draw
from skimage.morphology import dilation
from .. import const
from ..anim import Nucleus
from ..tools import image as imt
from ..tools import plot
from ..tools import stat as stt
# FUNCTIONS ====================================================================
def annotate_compartments(msg, t, nuclei, outdir, pole_fraction, aspect):
'''
Add compartment status to dots table (by DOTTER).
    For each nucleus: the three major axes are identified, the nucleus is
    centered and rotated. Then, the dots are also centered and rotated,
    and assigned to different compartments based on the fitted ellipsoid.
    Information on the goodness of ellipsoid fit is added to the main log and
    can be extracted by grepping lines starting with " >>>> GoF_ellipse:".
    Args:
        msg (string): log message, to be continued.
        t (pd.DataFrame): DOTTER output table.
        nuclei (dict): dictionary of Nucleus objects, indexed by cell ID.
        outdir (string): path to the output directory.
        pole_fraction (float): fraction of the major axis, at each pole,
            assigned to the "pole" compartment.
        aspect (tuple): Z,Y,X voxel sides in real units.
    Returns:
        tuple: (t, vcomp_table, msg), i.e. the updated dots table, the
        compartment volume table and the extended log message.
'''
# Temporarily remove dots outside cells
nan_cond = np.isnan(t.loc[:, 'cell_ID'])
vcomp_table = pd.DataFrame()
subt = t.loc[np.logical_not(nan_cond), :].copy()
if 0 == subt.shape[0]:
print("!WARNING! All dots in FoV#%d are outside cells." % (
t['File'].values[0],))
return((t, vcomp_table, msg))
fid = subt['File'].values[0]
# Create empty table to host compartment volume data
vcomp_table = pd.DataFrame(index = range(1, int(subt['cell_ID'].max()) + 1))
vcomp_table['File'] = fid
vcomp_table['cell_ID'] = range(1, int(subt['cell_ID'].max()) + 1)
vcomp_table['center_bot'] = np.nan
vcomp_table['center_top'] = np.nan
vcomp_table['poles'] = np.nan
vcomp_table['ndots_center_bot'] = np.nan
vcomp_table['ndots_center_top'] = np.nan
vcomp_table['ndots_poles'] = np.nan
vcomp_table['a'] = np.nan
vcomp_table['b'] = np.nan
vcomp_table['c'] = np.nan
for cid in range(int(subt['cell_ID'].max()) + 1):
if cid in nuclei.keys():
msg += " >>> Working on cell #%d...\n" % (cid,)
cell_cond = cid == subt['cell_ID']
# Extract dots coordinates -----------------------------------------
dot_coords = np.vstack([
subt.loc[cell_cond, 'z'] - nuclei[cid].box_origin[0],
subt.loc[cell_cond, 'x'] - nuclei[cid].box_origin[1],
subt.loc[cell_cond, 'y'] - nuclei[cid].box_origin[2]
])
# Center coordinates -----------------------------------------------
x, y, z, xd, yd, zd = stt.centered_coords_3d(
nuclei[cid].mask, dot_coords)
coords = np.vstack([x, y, z])
dot_coords = np.vstack([xd, yd, zd])
# Rotate data ------------------------------------------------------
# First axis
xv, yv, zv = stt.extract_3ev(coords)
theta1 = stt.calc_theta(xv[0], yv[0])
xt, yt, zt = stt.rotate3d(coords, theta1, 2)
tcoords = np.vstack([xt, yt, zt])
# # Third axis
# xv, yv, zv = stt.extract_3ev(tcoords)
# theta3 = stt.calc_theta(xv[2], zv[2])
# if np.abs(theta3) > np.pi / 2.:
# if theta3 > 0:
# theta3 = -np.abs(theta3 - np.pi / 2.)
# else:
# theta3 = np.abs(theta3 + np.pi / 2.)
# else:
# theta3 = -np.abs(theta3 + np.pi / 2.)
# xt, yt, zt = stt.rotate3d(tcoords, theta3, 1)
# tcoords = np.vstack([xt, yt, zt])
# # Second axis
# xv, yv, zv = stt.extract_3ev(tcoords)
# theta2 = stt.calc_theta(yv[1], zv[1])
# xt, yt, zt = stt.rotate3d(tcoords, theta2, 0)
# tcoords = np.vstack([xt, yt, zt])
# Fit ellipsoid ----------------------------------------------------
# Round up rotated coordinates
trcoords = tcoords.astype('i')
# Convert to rotated image
icoords = np.transpose(trcoords) + abs(trcoords.min(1))
trbin = np.zeros((icoords.max(0) + 1).tolist()[::-1])
trbin[icoords[:, 2], icoords[:, 1], icoords[:, 0]] = 1
# Calculate axes size
zax_size, yax_size, xax_size = trbin.shape
el = draw.ellipsoid(zax_size / 2., yax_size / 2., xax_size / 2.)
el = el[2:(zax_size + 2), 1:(yax_size + 1), 1:(xax_size + 1)]
# Calculate intersection with fitting ellipsoid
inter_size = np.logical_and(trbin, el).sum()
# Log intersection
comments = []
comments.append("%s%%%s [%s.%s]." % (
round(inter_size / float(trbin.sum()) * 100, 2,),
" of the nucleus is in the ellipsoid", fid, cid,))
comments.append("%s%%%s [%s.%s]." % (
round(inter_size / float(el.sum()) * 100, 2,),
" of the ellipsoid is in the nucleus", fid, cid,))
msg += "".join([" >>>> GoF_ellipse: %s\n" % (s,)
for s in comments])
# Rotate dots ------------------------------------------------------
dot_coords_t = np.vstack(stt.rotate3d(dot_coords, theta1, 2))
#dot_coords_t = np.vstack(stt.rotate3d(dot_coords_t, theta2, 0))
#dot_coords_t = np.vstack(stt.rotate3d(dot_coords_t, theta3, 1))
# Assign compartments ----------------------------------------------
# Compartment code:
# 0 = center-top
# 1 = center-bottom
# 2 = pole
c = zax_size / 2.
b = yax_size / 2.
a = xax_size / 2.
cf = 1 - 2 * pole_fraction
status = np.zeros(dot_coords.shape[1])
status[dot_coords_t[2] < 0] = 1
status[dot_coords_t[0] > cf * a] = 2
status[dot_coords_t[0] < -(cf * a)] = 2
subt.loc[cell_cond, 'compartment'] = status
# Rescale coords ---------------------------------------------------
dot_coords_t2 = dot_coords_t.copy()
dot_coords_t2[0] = dot_coords_t[0] / a
dot_coords_t2[1] = dot_coords_t[1] / b
dot_coords_t2[2] = dot_coords_t[2] / zax_size * 2
# Store rescaled coords --------------------------------------------
subt.loc[cell_cond, 'znorm'] = dot_coords_t2[2]
subt.loc[cell_cond, 'xnorm'] = dot_coords_t2[0]
subt.loc[cell_cond, 'ynorm'] = dot_coords_t2[1]
# Calculate compartment volume -------------------------------------
# Round up coordinates
xt = xt.astype('i')
zt = zt.astype('i')
# Count voxels in compartments
vpole = sum(xt > cf * a) + sum(xt < -(cf * a))
centr_cond = np.logical_and(xt < (cf * a), xt > -(cf * a))
vctop = np.logical_and(centr_cond, zt >= 0).sum()
vcbot = np.logical_and(centr_cond, zt < 0).sum()
vcomp_table.loc[cid, 'center_top'] = vctop
vcomp_table.loc[cid, 'center_bot'] = vcbot
vcomp_table.loc[cid, 'poles'] = vpole
# Count dots
vcomp_table.loc[cid, 'ndots_center_top'] = (status == 0).sum()
vcomp_table.loc[cid, 'ndots_center_bot'] = (status == 1).sum()
vcomp_table.loc[cid, 'ndots_poles'] = (status == 2).sum()
# Store nucleus dimensions
vcomp_table.loc[cid, 'a'] = xax_size / 2.
vcomp_table.loc[cid, 'b'] = yax_size / 2.
vcomp_table.loc[cid, 'c'] = zax_size / 2.
# Assign volume information
volume = np.zeros(dot_coords.shape[1])
volume[:] = vctop
volume[dot_coords_t[2] < 0] = vcbot
            volume[dot_coords_t[0] > cf * a] = vpole
            volume[dot_coords_t[0] < -(cf * a)] = vpole
subt.loc[cell_cond, 'compartment_volume'] = volume
# Generate compartment plot with dots ------------------------------
if not type(None) == type(outdir):
outpng = open(os.path.join(outdir,
"%s.%s.png" % (fid, cid,)), "wb")
plt.close("all")
plot.ortho_3d(tcoords, dot_coords = dot_coords_t,
aspect = aspect, c = a * cf,
channels = subt.loc[cell_cond, 'Channel'].values)
plt.suptitle("\n".join(comments))
plt.savefig(outpng, format = "png")
plt.close("all")
outpng.close()
t.loc[np.logical_not(nan_cond), :] = subt
# Make aggregated visualization --------------------------------------------
# Calculate median a/b/c
a = np.median(vcomp_table['a'].values)
b = np.median(vcomp_table['b'].values)
c = np.median(vcomp_table['c'].values)
coords = np.vstack([t['xnorm'].values * a, t['ynorm'].values * b,
t['znorm'].values * c])
# Plot
if not type(None) == type(outdir):
# All-channels visualization
plot.dots_in_ellipsoid(a, b, c, coords, aspect = aspect,
channels = t['Channel'].values,
title = "[f%d] Aggregated FISH visualization" % fid,
outpng = os.path.join(outdir, "%d.aggregated.all.png" % fid))
# Single channel visualization
for channel in set(t['Channel'].values.tolist()):
title = "[f%d] Aggregated FISH visualization" % fid
title += " for channel '%s'" % channel
plot.dots_in_ellipsoid(a, b, c,
coords[:, t['Channel'].values == channel], aspect = aspect,
channels = t['Channel'].values[t['Channel'].values == channel],
title = title, outpng = os.path.join(outdir,
"%d.aggregated.%s.png" % (fid, channel)))
# All-channels folded visualization
plot.dots_in_ellipsoid(a, b, c, coords, aspect = aspect,
channels = t['Channel'].values, fold = True,
title = "[f%d] Aggregated-folded FISH visualization" % fid,
outpng = os.path.join(outdir,
"%d.aggregated.all.folded.png" % fid))
# Single channel folded visualization
for channel in set(t['Channel'].values.tolist()):
title = "[f%d] Aggregated-folded FISH visualization" % fid
title += " for channel '%s'" % channel
plot.dots_in_ellipsoid(a, b, c,
coords[:, t['Channel'].values == channel],
channels = t['Channel'].values[t['Channel'].values == channel],
aspect = aspect, fold = True, title = title,
outpng = os.path.join(outdir,
"%d.aggregated.%s.folded.png" % (fid, channel)))
return((t, vcomp_table, msg))
def build_nuclei(msg, L, dilate_factor, series_id, thr, dna_bg, sig_bg,
aspect, offset, logpath, i, istruct):
'''
Build nuclei objects
Args:
msg (string): log message, to be continued.
L (np.ndarray): labeled mask.
dilate_factor (int): dilation factor.
series_id (int): series ID.
thr (float): global threshold value.
dna_bg (float): DNA channel background.
sig_bg (float): signal channel background.
aspect (tuple): Z,Y,X voxel sides in real units.
offset (tuple): tuple with pixel offset for bounding box.
logpath (string): path to log file.
        i (np.array): image.
        istruct (np.ndarray): structuring element used for mask dilation.
Returns:
        (string, dict): log message and dictionary of Nucleus objects keyed by label.
'''
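    # Illustrative call (hedged): values are placeholders, the real ones come
    # from the segmentation step, e.g.:
    #   msg, curnuclei = build_nuclei(msg, L, dilate_factor = 10,
    #       series_id = 1, thr = 0.5, dna_bg = 0., sig_bg = 0.,
    #       aspect = (300., 130., 130.), offset = (1, 1, 1),
    #       logpath = "log.txt", i = img, istruct = istruct)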
# Prepare input for Nucleus class
kwargs = {
'series_id' : series_id, 'thr' : thr,
'dna_bg' : dna_bg, 'sig_bg' : sig_bg,
'aspect' : aspect, 'offset' : offset,
'logpath' : logpath, 'i' : i
}
# Default nuclear ID list and empty dictionary
seq = range(1, L.max() + 1)
curnuclei = {}
# Log operation
if 0 != dilate_factor:
msg += " - Saving %d nuclei with dilation [%d]...\n" % (
L.max(), dilate_factor)
else:
msg += " - Saving %d nuclei...\n" % (L.max(),)
# Iterate through nuclei
for n in seq:
# Make nucleus
if 0 != dilate_factor:
# With dilated mask
mask = dilation(L == n, istruct)
nucleus = Nucleus(n = n, mask = mask, **kwargs)
else:
mask = L == n
nucleus = Nucleus(n = n, mask = mask, **kwargs)
# Apply box
msg += " > Applying nuclear box [%d]...\n" % (n,)
mask = imt.apply_box(mask, nucleus.box)
# Store nucleus
nucleus.mask = mask
nucleus.box_origin = np.array([c[0] + 1 for c in nucleus.box])
nucleus.box_sides = np.array([np.diff(c) for c in nucleus.box])
nucleus.box_mass_center = center_of_mass(mask)
nucleus.dilate_factor = dilate_factor
curnuclei[n] = nucleus
return((msg, curnuclei))
def flag_G1_cells(t, nuclei, outdir, dilate_factor, dot_file_name):
'''
    Assign a binary flag identifying the predominant (G1) cell population,
    based on flattened size and intensity sum.
Args:
t (pd.DataFrame): DOTTER output table.
nuclei (list(gp.Nucleus)): identified nuclei.
outdir (string): path to output folder.
dilate_factor (int): number of dilation operations.
dot_file_name (string): output file name.
Returns:
        pd.DataFrame: DOTTER table with an additional binary 'G1' flag column.
'''
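    # Selection sketch: for each feature (flattened size, intensity sum) a
    # density estimate of its distribution is computed and the FWHM range of
    # that density is taken as the predominant-population interval; only
    # nuclei falling inside every FWHM range are flagged with G1 = 1.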
print("> Flagging G1 cells...")
# Retrieve nuclei summaries ------------------------------------------------
print(' > Retrieving nuclear summary...')
summary = np.zeros(len(nuclei),
dtype = const.DTYPE_NUCLEAR_SUMMARY)
for i in range(len(nuclei)):
summary[i] = nuclei[i].get_summary()
# Filter nuclei ------------------------------------------------------------
print(' > Filtering nuclei based on flatten size and intensity...')
cond_name = 'none'
sigma = .1
nsf = (const.NSEL_FLAT_SIZE, const.NSEL_SUMI)
out_dir = '.'
# Filter features
sel_data = {}
ranges = {}
plot_counter = 1
for nsfi in nsf:
# Identify Nuclear Selection Feature
nsf_field = const.NSEL_FIELDS[nsfi]
nsf_name = const.NSEL_NAMES[nsfi]
print(' >> Filtering %s...' % (nsf_name,))
# Start building output
d = {'data' : summary[nsf_field]}
# Calculate density
d['density'] = stt.calc_density(d['data'], sigma = sigma)
# Identify range
args = [d['density']['x'], d['density']['y']]
d['fwhm_range'] = stt.get_fwhm(*args)
ranges[nsf_name] = d['fwhm_range']
# Plot
sel_data[nsf_field] = d
# Select based on range
f = lambda x, r: x >= r[0] and x <= r[1]
for nsfi in nsf:
nsf_field = const.NSEL_FIELDS[nsfi]
nsf_name = const.NSEL_NAMES[nsfi]
print(" > Selecting range for %s ..." % (nsf_name,))
# Identify nuclei in the FWHM range
nsf_data = sel_data[nsf_field]
nsf_data['sel'] = [f(i, nsf_data['fwhm_range'])
for i in nsf_data['data']]
sel_data[nsf_field] = nsf_data
# Select those in every FWHM range
print(" > Applying selection criteria")
nsfields = [const.NSEL_FIELDS[nsfi] for nsfi in nsf]
selected = [sel_data[f]['sel'] for f in nsfields]
g = lambda i: all([sel[i] for sel in selected])
selected = [i for i in range(len(selected[0])) if g(i)]
sub_data = np.array(summary[selected])
# Identify selected nuclei objects
sel_nuclei_labels = ["_%d.%d_" % (n, s)
for (n, s) in sub_data[['s', 'n']]]
sel_nucl = [n for n in nuclei
if "_%d.%d_" % (n.s, n.n) in sel_nuclei_labels]
# Check which dots are in which nucleus and update flag --------------------
print(" > Matching DOTTER cells with GPSeq cells...")
t['G1'] = 0
t.loc[np.where(np.isnan(t['cell_ID']))[0], 'G1'] = np.nan
t['universalID'] = ["_%s.%s_" % x for x in zip(
t['File'].values, t['cell_ID'].values
)]
g1ids = [i for i in range(t.shape[0])
if t.loc[i, 'universalID'] in sel_nuclei_labels]
t.loc[g1ids, 'G1'] = 1
t = t.drop('universalID', 1)
# Add G1 status to summary -------------------------------------------------
summary = pd.DataFrame(summary)
summary['G1'] = np.zeros((summary.shape[0],))
summary['universalID'] = ["_%s.%s_" % x
for x in zip(summary['s'].values, summary['n'].astype("f").values)]
g1ids = [i for i in range(summary.shape[0])
if summary.loc[i, 'universalID'] in sel_nuclei_labels]
summary.loc[g1ids, 'G1'] = 1
summary = summary.drop('universalID', 1)
# Estimate radius ----------------------------------------------------------
summary['sphere_radius'] = summary['size'].values * 3 / (4 * math.pi)
summary['sphere_radius'] = (summary['sphere_radius'])**(1/3.)
# Export -------------------------------------------------------------------
# Export feature ranges
s = ""
for (k, v) in ranges.items():
s += "%s\t%f\t%f\n" % (k, v[0], v[1])
f = open("%s/feature_ranges.txt" % (outdir,), "w+")
f.write(s)
f.close()
# Export summary
outname = "%s/nuclei.out.dilate%d.%s" % (
outdir, dilate_factor, dot_file_name)
summary.to_csv(outname, sep = '\t', index = False)
# Output -------------------------------------------------------------------
print("> Flagged G1 cells...")
return(t)
# END ==========================================================================
################################################################################
| mit |
ifuding/Kaggle | TalkingDataFraudDetect/Code/BarisKanber.py | 3 | 14070 | """
A non-blending lightGBM model that incorporates portions and ideas from various public kernels
A non-blending lightGBM model: this kernel gives LB: 0.977 when the parameter 'debug' below is set to 0, but this implementation requires a machine with ~32 GB of memory
"""
import pandas as pd
import time
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer releases
import lightgbm as lgb
import gc
import matplotlib.pyplot as plt
import os
debug=1
if debug:
print('*** debug parameter set: this is a test run for debugging purposes ***')
def lgb_modelfit_nocv(params, dtrain, dvalid, predictors, target='target', objective='binary', metrics='auc',
feval=None, early_stopping_rounds=20, num_boost_round=3000, verbose_eval=10, categorical_features=None):
lgb_params = {
'boosting_type': 'gbdt',
'objective': objective,
'metric':metrics,
'learning_rate': 0.2,
#'is_unbalance': 'true', #because training data is unbalance (replaced with scale_pos_weight)
'num_leaves': 31, # we should let it be smaller than 2^(max_depth)
'max_depth': -1, # -1 means no limit
'min_child_samples': 20, # Minimum number of data need in a child(min_data_in_leaf)
'max_bin': 255, # Number of bucketed bin for feature values
'subsample': 0.6, # Subsample ratio of the training instance.
'subsample_freq': 0, # frequence of subsample, <=0 means no enable
'colsample_bytree': 0.3, # Subsample ratio of columns when constructing each tree.
'min_child_weight': 5, # Minimum sum of instance weight(hessian) needed in a child(leaf)
'subsample_for_bin': 200000, # Number of samples for constructing bin
'min_split_gain': 0, # lambda_l1, lambda_l2 and min_gain_to_split to regularization
'reg_alpha': 0, # L1 regularization term on weights
'reg_lambda': 0, # L2 regularization term on weights
'nthread': 4,
'verbose': 0,
'metric':metrics
}
lgb_params.update(params)
print("preparing validation datasets")
xgtrain = lgb.Dataset(dtrain[predictors].values, label=dtrain[target].values,
feature_name=predictors,
categorical_feature=categorical_features
)
xgvalid = lgb.Dataset(dvalid[predictors].values, label=dvalid[target].values,
feature_name=predictors,
categorical_feature=categorical_features
)
evals_results = {}
bst1 = lgb.train(lgb_params,
xgtrain,
valid_sets=[xgtrain, xgvalid],
valid_names=['train','valid'],
evals_result=evals_results,
num_boost_round=num_boost_round,
early_stopping_rounds=early_stopping_rounds,
verbose_eval=10,
feval=feval)
print("\nModel Report")
print("bst1.best_iteration: ", bst1.best_iteration)
print(metrics+":", evals_results['valid'][metrics][bst1.best_iteration-1])
return (bst1,bst1.best_iteration)
def DO(frm,to,fileno):
dtypes = {
'ip' : 'uint32',
'app' : 'uint16',
'device' : 'uint16',
'os' : 'uint16',
'channel' : 'uint16',
'is_attributed' : 'uint8',
'click_id' : 'uint32',
}
print('loading train data...',frm,to)
train_df = pd.read_csv("../input/train.csv", parse_dates=['click_time'], skiprows=range(1,frm), nrows=to-frm, dtype=dtypes, usecols=['ip','app','device','os', 'channel', 'click_time', 'is_attributed'])
print('loading test data...')
if debug:
test_df = pd.read_csv("../input/test.csv", nrows=100000, parse_dates=['click_time'], dtype=dtypes, usecols=['ip','app','device','os', 'channel', 'click_time', 'click_id'])
else:
test_df = pd.read_csv("../input/test.csv", parse_dates=['click_time'], dtype=dtypes, usecols=['ip','app','device','os', 'channel', 'click_time', 'click_id'])
len_train = len(train_df)
train_df=train_df.append(test_df)
del test_df
gc.collect()
print('Extracting new features...')
train_df['hour'] = pd.to_datetime(train_df.click_time).dt.hour.astype('uint8')
train_df['day'] = pd.to_datetime(train_df.click_time).dt.day.astype('uint8')
gc.collect()
naddfeat=9
for i in range(0,naddfeat):
if i==0: selcols=['ip', 'channel']; QQ=4;
if i==1: selcols=['ip', 'device', 'os', 'app']; QQ=5;
if i==2: selcols=['ip', 'day', 'hour']; QQ=4;
if i==3: selcols=['ip', 'app']; QQ=4;
if i==4: selcols=['ip', 'app', 'os']; QQ=4;
if i==5: selcols=['ip', 'device']; QQ=4;
if i==6: selcols=['app', 'channel']; QQ=4;
if i==7: selcols=['ip', 'os']; QQ=5;
if i==8: selcols=['ip', 'device', 'os', 'app']; QQ=4;
print('selcols',selcols,'QQ',QQ)
filename='X%d_%d_%d.csv'%(i,frm,to)
if os.path.exists(filename):
if QQ==5:
gp=pd.read_csv(filename,header=None)
train_df['X'+str(i)]=gp
else:
gp=pd.read_csv(filename)
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
else:
if QQ==0:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].count().reset_index().\
rename(index=str, columns={selcols[len(selcols)-1]: 'X'+str(i)})
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
if QQ==1:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].mean().reset_index().\
rename(index=str, columns={selcols[len(selcols)-1]: 'X'+str(i)})
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
if QQ==2:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].var().reset_index().\
rename(index=str, columns={selcols[len(selcols)-1]: 'X'+str(i)})
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
if QQ==3:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].skew().reset_index().\
rename(index=str, columns={selcols[len(selcols)-1]: 'X'+str(i)})
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
if QQ==4:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].nunique().reset_index().\
rename(index=str, columns={selcols[len(selcols)-1]: 'X'+str(i)})
train_df = train_df.merge(gp, on=selcols[0:len(selcols)-1], how='left')
if QQ==5:
gp = train_df[selcols].groupby(by=selcols[0:len(selcols)-1])[selcols[len(selcols)-1]].cumcount()
train_df['X'+str(i)]=gp.values
if not debug:
gp.to_csv(filename,index=False)
del gp
gc.collect()
print('doing nextClick')
predictors=[]
new_feature = 'nextClick'
filename='nextClick_%d_%d.csv'%(frm,to)
if os.path.exists(filename):
print('loading from save file')
QQ=pd.read_csv(filename).values
else:
D=2**26
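        # next-click trick: hash each (ip, app, device, os) combination into
        # one of D buckets, then walk the clicks in reverse chronological order
        # so that click_buffer always holds the timestamp of the *next* click
        # seen for that bucket; 3000000000 acts as a "no later click" sentinel.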
train_df['category'] = (train_df['ip'].astype(str) + "_" + train_df['app'].astype(str) + "_" + train_df['device'].astype(str) \
+ "_" + train_df['os'].astype(str)).apply(hash) % D
click_buffer= np.full(D, 3000000000, dtype=np.uint32)
train_df['epochtime']= train_df['click_time'].astype(np.int64) // 10 ** 9
next_clicks= []
for category, t in zip(reversed(train_df['category'].values), reversed(train_df['epochtime'].values)):
next_clicks.append(click_buffer[category]-t)
click_buffer[category]= t
del(click_buffer)
QQ= list(reversed(next_clicks))
if not debug:
print('saving')
pd.DataFrame(QQ).to_csv(filename,index=False)
train_df[new_feature] = QQ
predictors.append(new_feature)
train_df[new_feature+'_shift'] = pd.DataFrame(QQ).shift(+1).values
predictors.append(new_feature+'_shift')
del QQ
gc.collect()
print('grouping by ip-day-hour combination...')
gp = train_df[['ip','day','hour','channel']].groupby(by=['ip','day','hour'])[['channel']].count().reset_index().rename(index=str, columns={'channel': 'ip_tcount'})
train_df = train_df.merge(gp, on=['ip','day','hour'], how='left')
del gp
gc.collect()
print('grouping by ip-app combination...')
gp = train_df[['ip', 'app', 'channel']].groupby(by=['ip', 'app'])[['channel']].count().reset_index().rename(index=str, columns={'channel': 'ip_app_count'})
train_df = train_df.merge(gp, on=['ip','app'], how='left')
del gp
gc.collect()
print('grouping by ip-app-os combination...')
gp = train_df[['ip','app', 'os', 'channel']].groupby(by=['ip', 'app', 'os'])[['channel']].count().reset_index().rename(index=str, columns={'channel': 'ip_app_os_count'})
train_df = train_df.merge(gp, on=['ip','app', 'os'], how='left')
del gp
gc.collect()
# Adding features with var and mean hour (inspired from nuhsikander's script)
print('grouping by : ip_day_chl_var_hour')
gp = train_df[['ip','day','hour','channel']].groupby(by=['ip','day','channel'])[['hour']].var().reset_index().rename(index=str, columns={'hour': 'ip_tchan_count'})
train_df = train_df.merge(gp, on=['ip','day','channel'], how='left')
del gp
gc.collect()
print('grouping by : ip_app_os_var_hour')
gp = train_df[['ip','app', 'os', 'hour']].groupby(by=['ip', 'app', 'os'])[['hour']].var().reset_index().rename(index=str, columns={'hour': 'ip_app_os_var'})
train_df = train_df.merge(gp, on=['ip','app', 'os'], how='left')
del gp
gc.collect()
print('grouping by : ip_app_channel_var_day')
gp = train_df[['ip','app', 'channel', 'day']].groupby(by=['ip', 'app', 'channel'])[['day']].var().reset_index().rename(index=str, columns={'day': 'ip_app_channel_var_day'})
train_df = train_df.merge(gp, on=['ip','app', 'channel'], how='left')
del gp
gc.collect()
print('grouping by : ip_app_chl_mean_hour')
gp = train_df[['ip','app', 'channel','hour']].groupby(by=['ip', 'app', 'channel'])[['hour']].mean().reset_index().rename(index=str, columns={'hour': 'ip_app_channel_mean_hour'})
print("merging...")
train_df = train_df.merge(gp, on=['ip','app', 'channel'], how='left')
del gp
gc.collect()
print("vars and data type: ")
train_df.info()
train_df['ip_tcount'] = train_df['ip_tcount'].astype('uint16')
train_df['ip_app_count'] = train_df['ip_app_count'].astype('uint16')
train_df['ip_app_os_count'] = train_df['ip_app_os_count'].astype('uint16')
target = 'is_attributed'
predictors.extend(['app','device','os', 'channel', 'hour', 'day',
'ip_tcount', 'ip_tchan_count', 'ip_app_count',
'ip_app_os_count', 'ip_app_os_var',
'ip_app_channel_var_day','ip_app_channel_mean_hour'])
categorical = ['app', 'device', 'os', 'channel', 'hour', 'day']
for i in range(0,naddfeat):
predictors.append('X'+str(i))
print('predictors',predictors)
test_df = train_df[len_train:]
val_df = train_df[(len_train-val_size):len_train]
train_df = train_df[:(len_train-val_size)]
print("train size: ", len(train_df))
print("valid size: ", len(val_df))
print("test size : ", len(test_df))
sub = pd.DataFrame()
sub['click_id'] = test_df['click_id'].astype('int')
gc.collect()
print("Training...")
start_time = time.time()
params = {
'learning_rate': 0.20,
#'is_unbalance': 'true', # replaced with scale_pos_weight argument
'num_leaves': 7, # 2^max_depth - 1
'max_depth': 3, # -1 means no limit
'min_child_samples': 100, # Minimum number of data need in a child(min_data_in_leaf)
'max_bin': 100, # Number of bucketed bin for feature values
'subsample': 0.7, # Subsample ratio of the training instance.
'subsample_freq': 1, # frequence of subsample, <=0 means no enable
'colsample_bytree': 0.9, # Subsample ratio of columns when constructing each tree.
'min_child_weight': 0, # Minimum sum of instance weight(hessian) needed in a child(leaf)
'scale_pos_weight':200 # because training data is extremely unbalanced
}
(bst,best_iteration) = lgb_modelfit_nocv(params,
train_df,
val_df,
predictors,
target,
objective='binary',
metrics='auc',
early_stopping_rounds=30,
verbose_eval=True,
num_boost_round=1000,
categorical_features=categorical)
print('[{}]: model training time'.format(time.time() - start_time))
del train_df
del val_df
gc.collect()
print('Plot feature importances...')
ax = lgb.plot_importance(bst, max_num_features=100)
plt.show()
print("Predicting...")
sub['is_attributed'] = bst.predict(test_df[predictors],num_iteration=best_iteration)
if not debug:
print("writing...")
sub.to_csv('sub_it%d.csv.gz'%(fileno),index=False,compression='gzip')
print("done...")
return sub
nrows=184903891-1
nchunk=40000000
val_size=2500000
frm=nrows-75000000
if debug:
frm=0
nchunk=100000
val_size=10000
to=frm+nchunk
sub=DO(frm,to,0)
| apache-2.0 |
Scapogo/zipline | tests/pipeline/test_frameload.py | 5 | 7703 | """
Tests for zipline.pipeline.loaders.frame.DataFrameLoader.
"""
from unittest import TestCase
from mock import patch
from numpy import arange, ones
from numpy.testing import assert_array_equal
from pandas import (
DataFrame,
DatetimeIndex,
Int64Index,
)
from zipline.lib.adjustment import (
ADD,
Float64Add,
Float64Multiply,
Float64Overwrite,
MULTIPLY,
OVERWRITE,
)
from zipline.pipeline.data import USEquityPricing
from zipline.pipeline.loaders.frame import (
DataFrameLoader,
)
from zipline.utils.calendars import get_calendar
class DataFrameLoaderTestCase(TestCase):
def setUp(self):
self.trading_day = get_calendar("NYSE").day
self.nsids = 5
self.ndates = 20
self.sids = Int64Index(range(self.nsids))
self.dates = DatetimeIndex(
start='2014-01-02',
freq=self.trading_day,
periods=self.ndates,
)
self.mask = ones((len(self.dates), len(self.sids)), dtype=bool)
def tearDown(self):
pass
def test_bad_input(self):
data = arange(100).reshape(self.ndates, self.nsids)
baseline = DataFrame(data, index=self.dates, columns=self.sids)
loader = DataFrameLoader(
USEquityPricing.close,
baseline,
)
with self.assertRaises(ValueError):
# Wrong column.
loader.load_adjusted_array(
[USEquityPricing.open], self.dates, self.sids, self.mask
)
with self.assertRaises(ValueError):
# Too many columns.
loader.load_adjusted_array(
[USEquityPricing.open, USEquityPricing.close],
self.dates,
self.sids,
self.mask,
)
def test_baseline(self):
data = arange(100).reshape(self.ndates, self.nsids)
baseline = DataFrame(data, index=self.dates, columns=self.sids)
loader = DataFrameLoader(USEquityPricing.close, baseline)
dates_slice = slice(None, 10, None)
sids_slice = slice(1, 3, None)
[adj_array] = loader.load_adjusted_array(
[USEquityPricing.close],
self.dates[dates_slice],
self.sids[sids_slice],
self.mask[dates_slice, sids_slice],
).values()
for idx, window in enumerate(adj_array.traverse(window_length=3)):
expected = baseline.values[dates_slice, sids_slice][idx:idx + 3]
assert_array_equal(window, expected)
def test_adjustments(self):
data = arange(100).reshape(self.ndates, self.nsids)
baseline = DataFrame(data, index=self.dates, columns=self.sids)
# Use the dates from index 10 on and sids 1-3.
dates_slice = slice(10, None, None)
sids_slice = slice(1, 4, None)
# Adjustments that should actually affect the output.
relevant_adjustments = [
{
'sid': 1,
'start_date': None,
'end_date': self.dates[15],
'apply_date': self.dates[16],
'value': 0.5,
'kind': MULTIPLY,
},
{
'sid': 2,
'start_date': self.dates[5],
'end_date': self.dates[15],
'apply_date': self.dates[16],
'value': 1.0,
'kind': ADD,
},
{
'sid': 2,
'start_date': self.dates[15],
'end_date': self.dates[16],
'apply_date': self.dates[17],
'value': 1.0,
'kind': ADD,
},
{
'sid': 3,
'start_date': self.dates[16],
'end_date': self.dates[17],
'apply_date': self.dates[18],
'value': 99.0,
'kind': OVERWRITE,
},
]
# These adjustments shouldn't affect the output.
irrelevant_adjustments = [
{ # Sid Not Requested
'sid': 0,
'start_date': self.dates[16],
'end_date': self.dates[17],
'apply_date': self.dates[18],
'value': -9999.0,
'kind': OVERWRITE,
},
{ # Sid Unknown
'sid': 9999,
'start_date': self.dates[16],
'end_date': self.dates[17],
'apply_date': self.dates[18],
'value': -9999.0,
'kind': OVERWRITE,
},
{ # Date Not Requested
'sid': 2,
'start_date': self.dates[1],
'end_date': self.dates[2],
'apply_date': self.dates[3],
'value': -9999.0,
'kind': OVERWRITE,
},
{ # Date Before Known Data
'sid': 2,
'start_date': self.dates[0] - (2 * self.trading_day),
'end_date': self.dates[0] - self.trading_day,
'apply_date': self.dates[0] - self.trading_day,
'value': -9999.0,
'kind': OVERWRITE,
},
{ # Date After Known Data
'sid': 2,
'start_date': self.dates[-1] + self.trading_day,
'end_date': self.dates[-1] + (2 * self.trading_day),
'apply_date': self.dates[-1] + (3 * self.trading_day),
'value': -9999.0,
'kind': OVERWRITE,
},
]
adjustments = DataFrame(relevant_adjustments + irrelevant_adjustments)
loader = DataFrameLoader(
USEquityPricing.close,
baseline,
adjustments=adjustments,
)
expected_baseline = baseline.iloc[dates_slice, sids_slice]
formatted_adjustments = loader.format_adjustments(
self.dates[dates_slice],
self.sids[sids_slice],
)
expected_formatted_adjustments = {
6: [
Float64Multiply(
first_row=0,
last_row=5,
first_col=0,
last_col=0,
value=0.5,
),
Float64Add(
first_row=0,
last_row=5,
first_col=1,
last_col=1,
value=1.0,
),
],
7: [
Float64Add(
first_row=5,
last_row=6,
first_col=1,
last_col=1,
value=1.0,
),
],
8: [
Float64Overwrite(
first_row=6,
last_row=7,
first_col=2,
last_col=2,
value=99.0,
)
],
}
self.assertEqual(formatted_adjustments, expected_formatted_adjustments)
mask = self.mask[dates_slice, sids_slice]
with patch('zipline.pipeline.loaders.frame.AdjustedArray') as m:
loader.load_adjusted_array(
columns=[USEquityPricing.close],
dates=self.dates[dates_slice],
assets=self.sids[sids_slice],
mask=mask,
)
self.assertEqual(m.call_count, 1)
args, kwargs = m.call_args
assert_array_equal(kwargs['data'], expected_baseline.values)
assert_array_equal(kwargs['mask'], mask)
self.assertEqual(kwargs['adjustments'], expected_formatted_adjustments)
| apache-2.0 |
credp/lisa | lisa/wa_results_collector.py | 2 | 53959 | #! /usr/bin/env python3
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2018, ARM Limited and contributors.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from collections import namedtuple, defaultdict
import json
import numpy as np
import re
import os
import pandas as pd
import subprocess
import sqlite3
from scipy.stats import ttest_ind
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from matplotlib.colors import to_hex
from devlib.target import KernelVersion
from IPython.display import display
from lisa.trace import Trace
from lisa.git import find_shortest_symref, get_commit_message
from lisa.utils import Loggable, memoized
from lisa.datautils import series_integrate, series_mean
class WaResultsCollector(Loggable):
"""
Collects, analyses and visualises results from multiple WA3 directories
Takes a list of output directories from Workload Automation 3 and parses
them. Finds metrics reported by WA itself, and extends those metrics with
extra detail extracted from ftrace files, energy instrumentation output, and
workload-specific artifacts that are found in the output.
Results can be grouped according to the following terms:
- 'metric' is a specific measurable quantity such as a single frame's
rendering time or the average energy consumed during a workload run.
- 'workload' is the general name of a workload such as 'jankbench' or
'youtube'.
- 'test' is a more specific identification for workload - for example this
might identify one of Jankbench's sub-benchmarks, or specifically playing
a certain video on Youtube for 30s.
WaResultsCollector ultimately derives 'test' names from the
'classifiers'::'test' field of the WA3 agenda file's 'workloads' entries.
- 'tag' is an identifier for a set of run-time target configurations that
the target was run under. For example there might exist one 'tag'
identifying running under the schedutil governor and another for the
performance governor.
WaResultsCollector ultimately derives 'tag' names from the 'classifiers'
field of the WA3 agenda file's 'sections' entries.
- 'kernel' identifies the kernel that was running when the metric was
collected. This may be a SHA1 or a symbolic ref (branch/tag) derived from
a provided Git repository. To try to keep identifiers readable, common
prefixes of refs are removed: if the raw refs are 'test/foo/bar' and
'test/foo/baz', they will be referred to just as 'bar' and 'baz'.
Aside from the provided helper attributes, all metrics are exposed in a
DataFrame as the ``results_df`` attribute.
:param wa_dirs: List of paths to WA3 output directories or a regexp of WA3
output directories names to consider starting from the
specified base_path
:type wa_dirs: str
:param base_dir: The path of a directory containing a collection of WA3
output directories
:type base_dir: str
:param plat_info: Optional LISA platform description. If provided, used to
enrich extra metrics gleaned from trace analysis.
:type plat_info: lisa.platforms.platinfo.PlatformInfo
:param kernel_repo_path: Optional path to kernel repository. WA3 reports the
SHA1 of the kernel that workloads were run against. If this
            param is provided, the repository is searched for symbolic
references to replace SHA1s in data representation. This is
purely to make the output more manageable for humans.
:param parse_traces: This class uses LISA to parse and analyse ftrace files
for extra metrics. With multiple/large traces this
can take some time. Set this param to False to disable
trace parsing.
:param use_cached_trace_metrics: This class uses LISA to parse and analyse
ftrace files for extra metrics. With multiple/large traces
this can take some time, so the extracted metrics are
cached in the provided output directories. Set this param
to False to disable this caching.
:param display_charts: This class uses IPython.display module to render some
charts of workloads' results. But we also want to use
this class without rendering any charts when we are
only interested in table of figures. Set this param
to False if you only want table of results but not
display them.
"""
RE_WLTEST_DIR = re.compile(r"wa\.(?P<sha1>\w+)_(?P<name>.+)")
def __init__(self, base_dir=None, wa_dirs=".*", plat_info=None,
kernel_repo_path=None, parse_traces=True,
use_cached_trace_metrics=True, display_charts=True):
logger = self.get_logger()
if base_dir:
base_dir = os.path.expanduser(base_dir)
if not isinstance(wa_dirs, str):
raise ValueError(
'If base_dir is provided, wa_dirs should be a regexp')
regex = wa_dirs
wa_dirs = self._list_wa_dirs(base_dir, regex)
if not wa_dirs:
raise ValueError(f"Couldn't find any WA results matching '{regex}' in {base_dir}")
else:
if not hasattr(wa_dirs, '__iter__'):
raise ValueError(
'if base_dir is not provided, wa_dirs should be a list of paths')
wa_dirs = [os.path.expanduser(p) for p in wa_dirs]
self.plat_info = plat_info
self.parse_traces = parse_traces
if not self.parse_traces:
logger.warning("Trace parsing disabled")
self.use_cached_trace_metrics = use_cached_trace_metrics
self.display_charts = display_charts
df = pd.DataFrame()
df_list = []
for wa_dir in wa_dirs:
logger.info("Reading wa_dir %s", wa_dir)
df_list.append(self._read_wa_dir(wa_dir))
df = df.append(df_list)
kernel_refs = {}
for sha1 in df['kernel_sha1'].unique():
if sha1 is None:
continue
if kernel_repo_path:
try:
symref = find_shortest_symref(kernel_repo_path, sha1)
except ValueError:
try:
symref = get_commit_message(kernel_repo_path, sha1)
except subprocess.CalledProcessError:
symref = sha1
kernel_refs[sha1] = symref
else:
kernel_refs[sha1] = sha1
common_prefix = os.path.commonprefix(list(kernel_refs.values()))
for sha1, ref in kernel_refs.items():
kernel_refs[sha1] = ref[len(common_prefix):]
# The kernel identifier is the sha1 to ref mapping if it exists or the
# original kernel version string.
df['kernel'] = df['kernel_sha1'].map(kernel_refs).fillna(
df['kernel_name'])
self.results_df = df
def _list_wa_dirs(self, base_dir, wa_dirs_re):
dirs = []
logger = self.get_logger()
logger.info("Processing WA3 dirs matching [%s], rooted at %s",
wa_dirs_re, base_dir)
wa_dirs_re = re.compile(wa_dirs_re)
for subdir in os.listdir(base_dir):
dir = os.path.join(base_dir, subdir)
if not os.path.isdir(dir) or not wa_dirs_re.search(subdir):
continue
# WA3 results dirs contains a __meta directory at the top level.
if '__meta' not in os.listdir(dir):
                logger.warning(f'Ignoring {dir}, does not contain __meta directory')
continue
dirs.append(dir)
return dirs
def _read_wa_dir(self, wa_dir):
"""
Get a DataFrame of metrics from a single WA3 output directory.
Includes the extra metrics derived from workload-specific artifacts and
ftrace files.
Columns returned:
kernel_name,kernel_sha1,kernel,id,workload,tag,test,iteration,metric,value,units
"""
# A WA output directory looks something like:
#
# wa_output/
# |- __meta/
# | | - jobs.json
# | | (some other bits)
# |- results.csv
# |- pelt-wk1-jankbench-1/
# | | - result.json
# | | (other results from iteration 1 of pelt-wk1, which is a
# | | jankbench job)
# |- pelt-wk1-jankbench-2/
# [etc]
logger = self.get_logger()
# results.csv contains all the metrics reported by WA for all jobs.
df = pd.read_csv(os.path.join(wa_dir, 'results.csv'))
# When using Monsoon, the device is a single channel which reports
# two metrics. This means that devlib's DerivedEnergymeasurements class
# cannot see the output. Due to the way that the monsoon.py script
# works, it looks difficult to change Monsoon over to the Acme way of
# operating. As a workaround, let's mangle the results here instead.
unique_metrics = df['metric'].unique()
if 'device_total_energy' not in unique_metrics:
# potentially, we need to assemble a device_total_energy from
# other energy values we can add together.
if 'output_total_energy' in unique_metrics and 'USB_total_energy' in unique_metrics:
new_rows = []
output_df = df[df['metric'] == 'output_total_energy']
usb_df = df[df['metric'] == 'USB_total_energy']
# for each 'output_total_energy' metric, we will find
# the matching 'USB_total_energy' metric and assemble
# a 'device_total_energy' metric by adding them.
for row in output_df.iterrows():
vals = row[1]
_id = vals['id']
_workload = vals['workload']
_iteration = vals['iteration']
_value = vals['value']
usb_row = usb_df[(usb_df['workload'] == _workload) & (usb_df['id'] == _id) & (usb_df['iteration'] == _iteration)]
new_val = float(_value) + float(usb_row['value'])
# instead of creating a new row, just change the name
# and value of this one
vals['metric'] = 'device_total_energy'
vals['value'] = new_val
new_rows.append(vals)
# add all the new rows in one go at the end
df = df.append(new_rows, ignore_index=True)
# __meta/jobs.json describes the jobs that were run - we can use this to
# find extra artifacts (like traces and detailed energy measurement
# data) from the jobs, which we'll use to add additional metrics that WA
# didn't report itself.
with open(os.path.join(wa_dir, '__meta', 'jobs.json')) as f:
jobs = json.load(f)['jobs']
subdirs_done = []
# Keep track of how many times we've seen each job id so we know which
# iteration to look at (If we use the proper WA3 API this awkwardness
# isn't necessary).
next_iteration = defaultdict(lambda: 1)
# Keep track of which jobs we skipped for each iteration
skipped_jobs = defaultdict(lambda: [])
# Dicts mapping job IDs to things determined about the job - this will
# be used to add extra columns to the DataFrame (that aren't reported
# directly in WA's results.csv)
tag_map = {}
test_map = {}
job_dir_map = {}
extra_dfs = []
for job in jobs:
workload = job['workload_name']
job_id = job['id']
# If there's a 'tag' in the 'classifiers' object, use that to
# identify the runtime configuration. If not, use a representation
# of the full key=value pairs.
classifiers = job['classifiers'] or {}
if 'test' in classifiers:
# If the workload spec has a 'test' classifier, use that to
# identify it.
test = classifiers.pop('test')
elif 'test' in job['workload_parameters']:
# If not, some workloads have a 'test' workload_parameter, try
# using that
test = job['workload_parameters']['test']
elif 'test_ids' in job['workload_parameters']:
# If not, some workloads have a 'test_ids' workload_parameter, try
# using that
test = job['workload_parameters']['test_ids']
else:
# Otherwise just use the workload name.
# This isn't ideal because it means the results from jobs with
# different workload parameters will be amalgamated.
test = workload
rich_tag = ';'.join(f'{k}={v}' for k, v in iter(classifiers.items()))
tag = classifiers.get('tag', rich_tag)
if job_id in tag_map:
# Double check I didn't do a stupid
if tag_map[job_id] != tag:
raise RuntimeError(f'Multiple tags ({tag}, {tag_map[job_id]}) found for job ID {job_id}')
tag_map[job_id] = tag
if job_id in test_map:
# Double check I didn't do a stupid
if test_map[job_id] != test:
raise RuntimeError(f'Multiple tests ({test}, {test_map[job_id]}) found for job ID {job_id}')
test_map[job_id] = test
iteration = next_iteration[job_id]
next_iteration[job_id] += 1
job_dir = os.path.join(wa_dir,
'-'.join([job_id, workload, str(iteration)]))
job_dir_map[job_id] = job_dir
# Jobs can fail due to target misconfiguration or other problems,
# without preventing us from collecting the results for the jobs
# that ran OK.
my_file = os.path.join(job_dir, 'result.json')
if not os.path.isfile(my_file):
skipped_jobs[iteration].append(job_id)
continue
with open(my_file) as f:
job_result = json.load(f)
if job_result['status'] == 'FAILED':
skipped_jobs[iteration].append(job_id)
continue
extra_df = self._get_extra_job_metrics(job_dir, workload)
if extra_df.empty:
continue
extra_df['workload'] = workload
extra_df['iteration'] = iteration
extra_df['id'] = job_id
extra_df['tag'] = tag
extra_df['test'] = test
# Collect all these DFs to merge them in one go at the end.
extra_dfs.append(extra_df)
# Append all extra DFs to the results WA's results DF
if extra_dfs:
df = df.append(extra_dfs)
for iteration, job_ids in skipped_jobs.items():
logger.warning("Skipped failed iteration %d for jobs:", iteration)
logger.warning(" %s", ', '.join(job_ids))
df['tag'] = df['id'].replace(tag_map)
df['test'] = df['id'].replace(test_map)
# TODO: This is a bit lazy: we're storing the directory that every
# single metric came from in a DataFrame column. That's redundant really
# - instead, to get from a row in results_df to a job output directory,
# we should just store a mapping from kernel identifiers to wa_output
# directories, then derive at the job dir from that mapping plus the
# job_id+workload+iteration in the results_df row. This works fine for
# now, though - that refactoring would probably belong alongside a
# refactoring to use WA's own API for reading output directories.
df['_job_dir'] = df['id'].replace(job_dir_map)
df['kernel_name'] = self._wa_get_kernel_name(wa_dir)
df['kernel_sha1'] = self._wa_get_kernel_sha1(wa_dir)
return df
def _get_trace_metrics(self, trace_path):
"""
        Parse a trace (or use cached results) and extract extra metrics from it
Returns a DataFrame with columns:
metric,value,units
"""
logger = self.get_logger()
cache_path = os.path.join(os.path.dirname(trace_path), 'lisa_trace_metrics.csv')
if self.use_cached_trace_metrics and os.path.exists(cache_path):
return pd.read_csv(cache_path)
# I wonder if this should go in LISA itself? Probably.
metrics = []
events = ['irq_handler_entry', 'cpu_frequency', 'nohz_kick', 'sched_switch',
'sched_load_cfs_rq', 'sched_load_avg_task', 'thermal_temperature',
'cpu_idle']
trace = Trace(trace_path, plat_info=self.plat_info, events=events)
metrics.append(('cpu_wakeup_count', len(trace.analysis.idle.df_cpus_wakeups()), None))
# Helper to get area under curve of multiple CPU active signals
def get_cpu_time(trace, cpus):
df = pd.DataFrame([trace.analysis.idle.signal_cpu_active(cpu) for cpu in cpus])
return df.sum(axis=1).sum(axis=0)
domains = trace.plat_info.get('freq-domains', [])
for domain in domains:
name = '-'.join(str(c) for c in domain)
df = trace.analysis.frequency.df_domain_frequency_residency(domain)
if df is None or df.empty:
logger.warning("Can't get cluster freq residency from %s",
trace.trace_path)
else:
df = df.reset_index()
avg_freq = (df.frequency * df.time).sum() / df.time.sum()
metric = f'avg_freq_cluster_{name}'
metrics.append((metric, avg_freq, 'MHz'))
df = trace.analysis.frequency.df_cpu_frequency(domain[0])
metrics.append((f'freq_transition_count_{name}', len(df), None))
active_time = series_integrate(trace.analysis.idle.signal_cluster_active(domain))
metrics.append((f'active_time_cluster_{name}',
active_time, 'seconds'))
metrics.append((f'cpu_time_cluster_{name}',
get_cpu_time(trace, domain), 'cpu-seconds'))
metrics.append(('cpu_time_total',
get_cpu_time(trace, list(range(trace.cpus_count))),
'cpu-seconds'))
event = None
if trace.has_events('sched_load_cfs_rq'):
event = 'sched_load_cfs_rq'
def row_filter(r): return r.path == '/'
column = 'util'
elif trace.has_events('sched_load_avg_cpu'):
event = 'sched_load_avg_cpu'
def row_filter(r): return True
column = 'util_avg'
if event:
df = trace.df_event(event)
util_sum = (df[row_filter]
.pivot(columns='cpu')[column].ffill().sum(axis=1))
avg_util_sum = series_mean(util_sum)
metrics.append(('avg_util_sum', avg_util_sum, None))
if trace.has_events('thermal_temperature'):
df = trace.df_event('thermal_temperature')
for zone, zone_df in df.groupby('thermal_zone'):
metrics.append((f'tz_{zone}_start_temp',
zone_df.iloc[0]['temp_prev'],
'milliCelcius'))
                if len(zone_df) == 1:  # Avoid division by 0
avg_tmp = zone_df['temp'].iloc[0]
else:
avg_tmp = series_mean(zone_df['temp'])
metrics.append((f'tz_{zone}_avg_temp',
avg_tmp,
'milliCelcius'))
ret = pd.DataFrame(metrics, columns=['metric', 'value', 'units'])
ret.to_csv(cache_path, index=False)
return ret
    def _read_energy_instrument_metrics(self, path, artifact_name):
"""
Parse the WA energy_measurement output. Use or create a Parquet cache
version of the original CSV.
Warning: If found, the cached version of the original CSV will _always_
be used.
"""
path_cache = path + ".parquet"
try:
df = pd.read_parquet(path_cache)
except OSError:
df = pd.read_csv(path)
df.to_parquet(path_cache)
if 'device_power' in df.columns:
# Looks like this is from an ACME
df = pd.DataFrame({'value': df['device_power']})
# Figure out what to call the sample metrics. If the
# artifact name has something extra, that will be the
# channel (IIO device) name. Use that to differentiate where
# the samples came from. If not just call it
# 'device_power_sample'.
device_name = artifact_name[len('energy_instrument_output') + 1:]
name_extra = device_name or 'device'
df['metric'] = f'{name_extra}_power_sample'
df['units'] = 'watts'
elif 'output_power' in df.columns and 'USB_power' in df.columns:
# Looks like this is from a Monsoon
# For monsoon the USB and device power are collected
# together with the same timestamps, so we can just add them
# up.
power_samples = df['output_power'] + df['USB_power']
df = pd.DataFrame({'value': power_samples})
df['metric'] = 'device_power_sample'
df['units'] = 'watts'
return df
def _get_extra_job_metrics(self, job_dir, workload):
"""
Get extra metrics (not reported directly by WA) from a WA job output dir
Returns a DataFrame with columns:
metric,value,units
"""
logger = self.get_logger()
# return
# value,metric,units
extra_metric_list = []
artifacts = self._read_artifacts(job_dir)
if self.parse_traces and 'trace-cmd-bin' in artifacts:
extra_metric_list.append(
self._get_trace_metrics(artifacts['trace-cmd-bin']))
if 'jankbench_results_csv' in artifacts:
df = pd.read_csv(artifacts['jankbench_results_csv'])
df = pd.DataFrame({'value': df['total_duration']})
df['metric'] = 'frame_total_duration'
df['units'] = 'ms'
extra_metric_list.append(df)
elif 'jankbench-results' in artifacts:
con = sqlite3.connect(artifacts['jankbench-results'])
df = pd.read_sql_query("SELECT _id, name, run_id, iteration, total_duration, jank_frame from ui_results", con)
df = pd.DataFrame({'value': df['total_duration']})
df['metric'] = 'frame_total_duration'
df['units'] = 'ms'
extra_metric_list.append(df)
# WA's metrics model just exports overall energy metrics, not individual
# samples. We're going to extend that with individual samples so if you
# want to you can see how much variation there was in energy usage.
# So we'll look for the actual CSV files and parse that by hand.
# The parsing necessary is specific to the energy measurement backend
# that was used, which WA doesn't currently report directly.
# TODO: once WA's reporting of this data has been cleaned up a bit I
# think we can simplify this.
for artifact_name, path in artifacts.items():
if os.stat(path).st_size == 0:
logger.info(" no data for %s", path)
continue
if artifact_name.startswith('energy_instrument_output'):
try:
                    df = self._read_energy_instrument_metrics(path, artifact_name)
                except pd.errors.ParserError:
logger.info(" no data for %s", path)
continue
extra_metric_list.append(df)
if len(extra_metric_list) > 0:
return pd.DataFrame().append(extra_metric_list)
else:
return pd.DataFrame()
@memoized
def _wa_get_kernel_version(self, wa_dir):
with open(os.path.join(wa_dir, '__meta', 'target_info.json')) as f:
target_info = json.load(f)
return KernelVersion(target_info['kernel_release'])
@memoized
def _wa_get_kernel_name(self, wa_dir):
return self._wa_get_kernel_version(wa_dir).release
@memoized
def _wa_get_kernel_sha1(self, wa_dir):
"""
Find the SHA1 of the kernel that a WA3 run was run against
"""
kver = self._wa_get_kernel_version(wa_dir)
if kver.sha1 is not None:
return kver.sha1
# Couldn't get the release sha1, default to reading it from the
# directory name built by test_series
res_dir = os.path.basename(wa_dir)
match = re.search(WaResultsCollector.RE_WLTEST_DIR, res_dir)
if match:
return match.group("sha1")
# Git describe doesn't always produce a sha1
return None
@memoized
def _select(self, tag='.*', kernel='.*', test='.*'):
_df = self.results_df
_df = _df[_df.tag.str.contains(tag)]
_df = _df[_df.kernel.str.contains(kernel)]
_df = _df[_df.test.str.contains(test)]
return _df
@property
def workloads(self):
return self.results_df['workload'].unique()
@property
def tags(self):
return self.results_df['tag'].unique()
@memoized
def tests(self, workload=None):
df = self.results_df
if workload:
df = df[df['workload'] == workload]
return df['test'].unique()
def workload_available_metrics(self, workload):
return (self.results_df
.groupby('workload').get_group(workload)
['metric'].unique())
@memoized
def _get_metric_df(self, workload, metric, tag, kernel, test):
"""
Common helper for getting results to plot for a given metric
"""
logger = self.get_logger()
df = self._select(tag, kernel, test)
if df.empty:
logger.warning("No data to plot for (tag: %s, kernel: %s, test: %s)",
tag, kernel, test)
return None
valid_workloads = df.workload.unique()
if workload not in valid_workloads:
logger.warning("No data for [%s] workload", workload)
logger.info("Workloads with data, for the specified filters, are:")
logger.info(" %s", ','.join(valid_workloads))
return None
df = df[df['workload'] == workload]
valid_metrics = df.metric.unique()
if metric not in valid_metrics:
            logger.warning("No metric [%s] collected for workload [%s]",
                           metric, workload)
            logger.info("Metrics with data, for the specified filters, are:")
logger.info(" %s", ', '.join(valid_metrics))
return None
df = df[df['metric'] == metric]
units = df['units'].unique()
if len(units) > 1:
raise RuntimeError(f'Found different units for workload "{workload}" metric "{metric}": {units}')
return df
SortBy = namedtuple('SortBy', ['key', 'params', 'column'])
def _get_sort_params(self, sort_on):
"""
Validate a sort criteria and return the parameters required by the
boxplot and report methods.
"""
valid_sort = ['count', 'mean', 'std', 'min', 'max']
# Verify if valid percentile string has been required
match = re.match(r'^(?P<quantile>\d{1,3})\%$', sort_on)
if match:
quantile = int(match.group('quantile'))
if quantile < 1 or quantile > 100:
raise ValueError("Error sorting data: Quantile value out of range [1..100]")
return self.SortBy('quantile', {'q': quantile / 100.}, sort_on)
# Otherwise, verify if it's a valid Pandas::describe()'s column name
if sort_on in valid_sort:
return self.SortBy(sort_on, {}, sort_on)
raise ValueError(
f"sort_on={sort_on} not supported, allowed values are percentile or {valid_sort}")
def boxplot(self, workload, metric,
tag='.*', kernel='.*', test='.*',
by=['test', 'tag', 'kernel'],
sort_on='mean', ascending=False,
xlim=None):
"""
Display boxplots of a certain metric
Creates horizontal boxplots of metrics in the results. Check
``workloads`` and ``workload_available_metrics`` to find the available
workloads and metrics. Check ``tags``, ``tests`` and ``kernels``
to find the names that results can be filtered against.
By default, the box with the lowest mean value is plotted at the top of
the graph, this can be customized with ``sort_on`` and ``ascending``.
:param workload: Name of workload to display metrics for
:param metric: Name of metric to display
:param tag: regular expression to filter tags that should be plotted
:param kernel: regular expression to filter kernels that should be plotted
        :param test: regular expression to filter tests that should be plotted
:param by: List of identifiers to group output as in DataFrame.groupby.
:param sort_on: Name of the statistic to order data for.
Supported values are: count, mean, std, min, max.
You may alternatively specify a percentile to sort on,
this should be an integer in the range [1..100]
formatted as a percentage, e.g. 95% is the 95th
percentile.
:param ascending: When True, boxplots are plotted by increasing values
(lowest-valued boxplot at the top of the graph) of the
specified `sort_on` statistic.
"""
if not self.display_charts:
return
sp = self._get_sort_params(sort_on)
df = self._get_metric_df(workload, metric, tag, kernel, test)
if df is None:
return
gb = df.groupby(by)
# Convert the groupby into a DataFrame with a column for each group
max_group_size = max(len(group) for group in iter(gb.groups.values()))
_df = pd.DataFrame()
for group_name, group in gb:
# Need to pad the group's column so that they all have the same
# length
padding_length = max_group_size - len(group)
padding = pd.Series(np.nan, index=np.arange(padding_length))
col = group['value'].append(padding)
col.index = np.arange(max_group_size)
_df[group_name] = col
# Sort the columns
# With default params this puts the box with the lowest mean at the
# bottom.
# NOTE: the not(ascending) condition is required to keep these plots
# aligned with the way describe() reports the stats corresponding to
# each boxplot
sorted_df = getattr(_df, sp.key)(**sp.params)
sorted_df = sorted_df.sort_values(ascending=not(ascending))
_df = _df[sorted_df.index]
# Plot boxes sorted by mean
fig, axes = plt.subplots(figsize=(16, 8))
_df.boxplot(ax=axes, vert=False, showmeans=True)
fig.suptitle('')
if xlim:
axes.set_xlim(xlim)
[units] = df['units'].unique()
axes.set_xlabel(f'{metric} [{units}]')
axes.set_title(f'{workload}:{metric}')
plt.show()
return axes
def describe(self, workload, metric,
tag='.*', kernel='.*', test='.*',
by=['test', 'tag', 'kernel'],
sort_on='mean', ascending=False):
"""
Return a DataFrame of statistics for a certain metric
Compute mean, std, min, max and [50, 75, 95, 99] percentiles for
the values collected on each iteration of the specified metric.
Check ``workloads`` and ``workload_available_metrics`` to find the
available workloads and metrics.
Check ``tags``, ``tests`` and ``kernels`` to find the names that
results can be filtered against.
:param workload: Name of workload to display metrics for
:param metric: Name of metric to display
:param tag: regular expression to filter tags that should be plotted
:param kernel: regular expression to filter kernels that should be plotted
        :param test: regular expression to filter tests that should be plotted
:param by: List of identifiers to group output as in DataFrame.groupby.
:param sort_on: Name of the statistic to order data for.
Supported values are: count, mean, std, min, max.
                        You may alternatively specify a percentile to sort on;
                        this should be an integer in the range [1..100],
                        formatted as a percentage,
                        e.g. 95% is the 95th percentile.
:param ascending: When True, the statistics are reported by increasing values
of the specified `sort_on` column
"""
sp = self._get_sort_params(sort_on)
df = self._get_metric_df(workload, metric, tag, kernel, test)
if df is None:
return
# Add the eventually required additional percentile
percentiles = [0.75, 0.95, 0.99]
if sp.params and 'q' in sp.params:
percentiles.append(sp.params['q'])
percentiles = sorted(list(set(percentiles)))
grouped = df.groupby(by)['value']
stats_df = pd.DataFrame(
grouped.describe(percentiles=percentiles))
# Use a consistent formatting independently from the PANDAs version
if 'value' in stats_df.columns:
# We must be running on a pre-0.20.0 version of pandas.
# unstack will convert the old output format to the new.
# http://pandas.pydata.org/pandas-docs/version/0.20/whatsnew.html#groupby-describe-formatting
# Main difference is that here we have a top-level column
# named 'value'
stats_df = stats_df.unstack()
else:
# Let's add a top-level column named 'value' which will be replaced
# by the actual metric name by the following code
stats_df.columns = pd.MultiIndex.from_product(
[['value'], stats_df.columns])
# Sort entries by the required metric and order value
stats_df.sort_values(by=[('value', sp.column)],
ascending=ascending, inplace=True)
stats_df.rename(columns={'value': metric}, inplace=True)
return stats_df
def report(self, workload, metric,
tag='.*', kernel='.*', test='.*',
by=['test', 'tag', 'kernel'],
sort_on='mean', ascending=False,
xlim=None):
"""
Report a boxplot and a set of statistics for a certain metric
This is a convenience method to call both ``boxplot`` and ``describe``
at the same time to get a consistent graphical and numerical
representation of the values for the specified metric.
Check ``workloads`` and ``workload_available_metrics`` to find the
available workloads and metrics.
Check ``tags``, ``tests`` and ``kernels`` to find the names that
results can be filtered against.
:param workload: Name of workload to display metrics for
:param metric: Name of metric to display
:param tag: regular expression to filter tags that should be plotted
:param kernel: regular expression to filter kernels that should be plotted
        :param test: regular expression to filter tests that should be plotted
:param by: List of identifiers to group output as in DataFrame.groupby.
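        Example (``collector`` is a hypothetical instance of this class and
        the names shown are illustrative)::
            axes, stats = collector.report('jankbench', 'frame_total_duration',
                                           sort_on='95%')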
"""
axes = self.boxplot(workload, metric, tag, kernel, test,
by, sort_on, ascending, xlim)
stats_df = self.describe(workload, metric, tag, kernel, test,
by, sort_on, ascending)
if self.display_charts:
display(stats_df)
return (axes, stats_df)
CDF = namedtuple('CDF', ['df', 'threshold', 'above', 'below'])
def _get_cdf(self, data, threshold):
"""
Build the "Cumulative Distribution Function" (CDF) for the given data
"""
# Build the series of sorted values
ser = data.sort_values()
if len(ser) < 1000:
# Append again the last (and largest) value.
# This step is important especially for small sample sizes
# in order to get an unbiased CDF
ser = ser.append(pd.Series(ser.iloc[-1]))
df = pd.Series(np.linspace(0., 1., len(ser)), index=ser)
# Compute percentage of samples above/below the specified threshold
below = float(max(df[:threshold]))
above = 1 - below
return self.CDF(df, threshold, above, below)
def plot_cdf(self, workload='jankbench', metric='frame_total_duration',
threshold=16, top_most=None, ncol=1,
tag='.*', kernel='.*', test='.*'):
"""
Display cumulative distribution functions of a certain metric
Draws CDFs of metrics in the results. Check ``workloads`` and
``workload_available_metrics`` to find the available workloads and
metrics. Check ``tags``, ``tests`` and ``kernels`` to find the
names that results can be filtered against.
The most likely use-case for this is plotting frame rendering times
under Jankbench, so default parameters are provided to make this easy.
:param workload: Name of workload to display metrics for
:type workload: str
:param metric: Name of metric to display
:type metric: str
:param threshold: Value to highlight in the plot - the likely use for
this is highlighting the maximum acceptable
frame-rendering time in order to see at a glance the
rough proportion of frames that were rendered in time.
:type threshold: int
:param top_most: Maximum number of CDFs to plot, all available plots
if not specified
:type top_most: int
:param ncol: Number of columns in the legend, default: 1. If more than
one column is requested the legend will be force placed
below the plot to avoid covering the data.
:type ncol: int
        :param tag: regular expression to filter tags that should be plotted
        :type tag: str
        :param kernel: regular expression to filter kernels that should be plotted
        :type kernel: str
        :param test: regular expression to filter tests that should be plotted
        :type test: str
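        Example (the tag names here are made up, shown only to illustrate the
        call)::
            collector.plot_cdf('jankbench', 'frame_total_duration',
                               threshold=16, tag='baseline|candidate')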
"""
logger = self.get_logger()
if not self.display_charts:
return
df = self._get_metric_df(workload, metric, tag, kernel, test)
if df is None:
return
test_cnt = len(df.groupby(['test', 'tag', 'kernel']))
colors = iter(cm.rainbow(np.linspace(0, 1, test_cnt + 1)))
fig, axes = plt.subplots()
axes.axvspan(0, threshold, facecolor='g', alpha=0.1)
# Pre-compute CDFs to support sorted plotting
data = []
for keys, df in df.groupby(['test', 'tag', 'kernel']):
cdf = self._get_cdf(df['value'], threshold)
data.append((keys, df, cdf))
labels = []
lines = []
if top_most is None:
top_most = len(data)
for (keys, df, cdf) in sorted(data, key=lambda x: x[2].below,
reverse=True)[:top_most]:
color = next(colors)
cdf.df.plot(legend=False, xlim=(0, None), figsize=(16, 6),
label=test, color=to_hex(color))
lines.append(axes.lines[-1])
axes.axhline(y=cdf.below, linewidth=1,
linestyle='--', color=to_hex(color))
labels.append("{:16s}: {:32s} ({:4.1f}%)".format(
keys[2], keys[1], 100. * cdf.below))
logger.debug("%-32s: %-32s: %.1f", keys[2], keys[1], 100. * cdf.below)
[units] = df['units'].unique()
axes.set_title(f'Total duration CDFs (% within {threshold} [{units}] threshold)')
axes.grid(True)
if ncol < 2:
axes.legend(lines, labels, loc='best')
else:
axes.legend(lines, labels, loc='upper left',
ncol=ncol, bbox_to_anchor=(0, -.15))
plt.show()
def find_comparisons(self, base_id=None, by='kernel'):
"""
Find metrics that changed between a baseline and variants
The notion of 'variant' and 'baseline' is defined by the `by` param. If
by='kernel', then `base_id` should be a kernel SHA (or whatever key the
'kernel' column in the results_df uses). If by='tag' then `base_id`
should be a WA 'tag id' (as named in the WA agenda).
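        Example (``collector`` and the kernel SHA are hypothetical)::
            comparisons_df = collector.find_comparisons(base_id='abc123',
                                                        by='kernel')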
"""
logger = self.get_logger()
comparisons = []
        # A namedtuple keeps the comparison fields named and ordered, which
        # maps directly onto the columns of the DataFrame built at the end.
Comparison = namedtuple('Comparison', ['metric', 'test', 'inv_id',
'base_id', 'base_mean', 'base_std',
'new_id', 'new_mean', 'new_std',
'diff', 'diff_pct', 'pvalue'])
# If comparing by kernel, only check comparisons where the 'tag' is the same
# If comparing by tag, only check where kernel is same
if by == 'kernel':
invariant = 'tag'
elif by == 'tag':
invariant = 'kernel'
else:
raise ValueError('`by` must be "kernel" or "tag"')
available_baselines = self.results_df[by].unique()
if base_id is None:
base_id = available_baselines[0]
if base_id not in available_baselines:
raise ValueError(f'base_id "{base_id}" not a valid "{by}" (available: {available_baselines}). Did you mean to set by="{invariant}"?')
for metric, metric_results in self.results_df.groupby('metric'):
# inv_id will either be the id of the kernel or of the tag,
# depending on the `by` param.
# So wl_inv_results will be the results entries for that workload on
# that kernel/tag
for (test, inv_id), wl_inv_results in metric_results.groupby(['test', invariant]):
gb = wl_inv_results.groupby(by)['value']
if base_id not in gb.groups:
logger.warning('Skipping - No baseline results for test '
'[%s] %s [%s] metric [%s]',
test, invariant, inv_id, metric)
continue
base_results = gb.get_group(base_id)
base_mean = base_results.mean()
for group_id, group_results in gb:
if group_id == base_id:
continue
                    # group_id is now a kernel id or a tag (depending on
                    # `by`). group_results is the slice of self.results_df rows
                    # for that metric, test and kernel/tag combination. We
                    # create a Comparison object to show how the metric changed
                    # with respect to the baseline.
group_mean = group_results.mean()
mean_diff = group_mean - base_mean
# Calculate percentage difference in mean metric value
if base_mean != 0:
mean_diff_pct = mean_diff * 100. / base_mean
else:
# base mean is 0, can't divide by that.
if group_mean == 0:
# Both are 0 so diff_pct is 0
mean_diff_pct = 0
else:
# Tricky one - base value was 0, new value isn't.
# Let's just call it a 100% difference.
mean_diff_pct = 100
if len(group_results) <= 1 or len(base_results) <= 1:
# Can't do ttest_ind if we only have one sample. There
# are proper t-tests for this, but let's just assume the
# worst.
pvalue = 1.0
elif mean_diff == 0:
# ttest_ind also gives a warning if the two data sets
# are the same and have no variance. I don't know why
# that is to be honest, but anyway if there's no
# difference in the mean, we don't care about the
# p-value.
pvalue = 1.0
else:
# Find a p-value which hopefully represents the
# (complement of the) certainty that any difference in
# the mean represents something real.
_, pvalue = ttest_ind(group_results, base_results, equal_var=False)
comparisons.append(Comparison(
metric, test, inv_id,
base_id, base_mean, base_results.std(),
group_id, group_mean, group_results.std(),
mean_diff, mean_diff_pct, pvalue))
return pd.DataFrame(comparisons)
def plot_comparisons(self, base_id=None, by='kernel'):
"""
Visualise metrics that changed between a baseline and variants
The notion of 'variant' and 'baseline' is defined by the `by` param. If
by='kernel', then `base_id` should be a kernel SHA (or whatever key the
'kernel' column in the results_df uses). If by='tag' then `base_id`
should be a WA 'tag id' (as named in the WA agenda).
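        Example (the tag name is hypothetical, for illustration only)::
            collector.plot_comparisons(base_id='base', by='tag')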
"""
logger = self.get_logger()
if not self.display_charts:
return
df = self.find_comparisons(base_id=base_id, by=by)
if df.empty:
logger.error('No comparisons by %s found', by)
if len(self.results_df[by].unique()) == 1:
logger.warning('There is only one %s in the results', by)
return
# Separate plot for each test (e.g. one plot for Jankbench list_view)
for (test, inv_id), test_comparisons in df.groupby(['test', 'inv_id']):
# Vertical size of plot depends on how many metrics we're comparing
# and how many things (kernels/tags) we're comparing metrics for.
# a.k.a the total length of the comparisons df.
fig, ax = plt.subplots(figsize=(15, len(test_comparisons) / 2.))
# pos is used as the Y-axis. The y-axis is a discrete axis with a
# point for each of the metrics we're comparing. matplotlib needs
# that in numerical form.
# We also have one more tick on the Y-axis than we actually need -
# this is a terrible hack which is necessary because when we set the
# opacity of the first bar, it sets the opacity of the legend. So we
# introduce a dummy bar with a value of 0 and an opacity of 1.
all_metrics = test_comparisons['metric'].unique()
pos = np.arange(-1, len(all_metrics))
# At each point on the discrete y-axis we'll have one bar for each
# comparison: one per kernel/tag (depending on the `by` param), minus
# one for the baseline.
# If there are more bars we'll need to make them thinner so they
# fit. The sum of the bars' thicknesses should be 60% of a tick on
# the 'y-axis'.
thickness = 0.6 / len(test_comparisons.groupby('new_id'))
# TODO: something is up with the calculations above, because there's
# always a bit of empty space at the bottom of the axes.
gb = test_comparisons.groupby('new_id')
colors = cm.rainbow(np.linspace(0, 1, len(gb)))
for i, (group, gdf) in enumerate(gb):
def get_dummy_row(metric):
return pd.DataFrame({col: 0 for col in gdf.columns}, index=[metric])
missing_metrics = set(all_metrics) - set(gdf['metric'].unique())
gdf = gdf.set_index('metric')
for missing_metric in missing_metrics:
logger.warning(
f"Data missing, can't compare metric [{missing_metric}] for {by} [{group}]")
gdf = gdf.append(get_dummy_row(missing_metric))
# Ensure the comparisons are in the same order for each group
gdf = gdf.reindex(all_metrics)
# Append the dummy row we're using to fix the legend opacity
gdf = get_dummy_row('').append(gdf)
# For each of the things we're comparing we'll plot a bar chart
# but slightly shifted. That's how we get multiple bars on each
# y-axis point.
bars = ax.barh(pos + (i * thickness),
width=gdf['diff_pct'],
height=thickness, label=group,
color=colors[i % len(colors)], align='center')
# Decrease the opacity for comparisons with a high p-value
for bar, pvalue in zip(bars, gdf['pvalue']):
bar.set_alpha(1 - (min(pvalue * 10, 0.95)))
# Add some text for labels, title and axes ticks
ax.set_xlabel('Percent difference')
[baseline] = test_comparisons['base_id'].unique()
ax.set_title(f'{test} ({inv_id}): Percent difference compared to {baseline} \nopacity depicts p-value')
ax.set_yticklabels(gdf.index.tolist())
ax.set_yticks(pos + thickness / 2)
# ax.set_xlim((-50, 50))
ax.legend(loc='best')
ax.grid(True)
plt.show()
def _read_artifacts(self, job_dir):
with open(os.path.join(job_dir, 'result.json')) as f:
ret = {a['name']: os.path.join(job_dir, a['path'])
for a in json.load(f)['artifacts']}
return ret
def _find_job_dir(self, workload='.*', tag='.*', kernel='.*', test='.*',
iteration=1):
df = self._select(tag, kernel, test)
df = df[df['workload'].str.match(workload)]
job_dirs = df['_job_dir'].unique()
if len(job_dirs) > 1:
raise ValueError("Params for get_artifacts don't uniquely identify a job. "
"for workload='{}' tag='{}' kernel='{}' test='{}' iteration={}, "
"found:\n{}" .format(
workload, tag, kernel, test, iteration, '\n'.join(job_dirs)))
if not job_dirs:
raise ValueError(
f"No job found for workload='{workload}' tag='{tag}' kernel='{kernel}' test='{test}' iteration={iteration}")
[job_dir] = job_dirs
return job_dir
def get_artifacts(self, workload='.*', tag='.*', kernel='.*', test='.*',
iteration=1):
"""
Get a dict mapping artifact names to file paths for a specific job.
        The returned dict maps artifact names (e.g. 'trace_bin' for the ftrace
        file of the run) to their paths. The parameters should be used to
        uniquely identify a run of a job.
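        Example (workload and tag values are made up)::
            artifacts = collector.get_artifacts(workload='jankbench',
                                                tag='baseline')
            trace_path = artifacts['trace_bin']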
"""
job_dir = self._find_job_dir(workload, tag, kernel, test, iteration)
return self._read_artifacts(job_dir)
def get_artifact(self, artifact_name, workload='.*',
tag='.*', kernel='.*', test='.*',
iteration=1):
"""
Get the path of an artifact attached to a job output.
artifact_name specifies the name of an artifact, e.g. 'trace_bin' to
find the ftrace file from the specific job run. The other parameters
should be used to uniquely identify a run of a job.
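        Example (the workload value is illustrative)::
            trace_path = collector.get_artifact('trace_bin',
                                                workload='jankbench')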
"""
job_dir = self._find_job_dir(workload, tag, kernel, test, iteration)
artifacts = self._read_artifacts(job_dir)
        if artifact_name not in artifacts:
raise ValueError(f"No '{artifact_name}' artifact found in {job_dir} (have {list(artifacts.keys())})")
return artifacts[artifact_name]
# vim :set tabstop=4 shiftwidth=4 textwidth=80 expandtab
| apache-2.0 |
wazeerzulfikar/scikit-learn | sklearn/metrics/cluster/supervised.py | 13 | 31406 | """Utilities to evaluate the clustering performance of models.
Functions named as *_score return a scalar value to maximize: the higher the
better.
"""
# Authors: Olivier Grisel <[email protected]>
# Wei LI <[email protected]>
# Diego Molla <[email protected]>
# Arnaud Fouchet <[email protected]>
# Thierry Guillemot <[email protected]>
# Gregory Stupp <[email protected]>
# Joel Nothman <[email protected]>
# License: BSD 3 clause
from __future__ import division
from math import log
import numpy as np
from scipy import sparse as sp
from .expected_mutual_info_fast import expected_mutual_information
from ...utils.validation import check_array
from ...utils.fixes import comb
def comb2(n):
# the exact version is faster for k == 2: use it by default globally in
# this module instead of the float approximate variant
return comb(n, 2, exact=1)
def check_clusterings(labels_true, labels_pred):
"""Check that the two clusterings matching 1D integer arrays."""
labels_true = np.asarray(labels_true)
labels_pred = np.asarray(labels_pred)
# input checks
if labels_true.ndim != 1:
raise ValueError(
"labels_true must be 1D: shape is %r" % (labels_true.shape,))
if labels_pred.ndim != 1:
raise ValueError(
"labels_pred must be 1D: shape is %r" % (labels_pred.shape,))
if labels_true.shape != labels_pred.shape:
raise ValueError(
"labels_true and labels_pred must have same size, got %d and %d"
% (labels_true.shape[0], labels_pred.shape[0]))
return labels_true, labels_pred
def contingency_matrix(labels_true, labels_pred, eps=None, sparse=False):
"""Build a contingency matrix describing the relationship between labels.
Parameters
----------
labels_true : int array, shape = [n_samples]
Ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
Cluster labels to evaluate
eps : None or float, optional.
If a float, that value is added to all values in the contingency
matrix. This helps to stop NaN propagation.
If ``None``, nothing is adjusted.
sparse : boolean, optional.
        If True, return a sparse CSR contingency matrix. If ``eps is not None``
        and ``sparse is True``, a ValueError is raised.
.. versionadded:: 0.18
Returns
-------
contingency : {array-like, sparse}, shape=[n_classes_true, n_classes_pred]
Matrix :math:`C` such that :math:`C_{i, j}` is the number of samples in
true class :math:`i` and in predicted class :math:`j`. If
``eps is None``, the dtype of this array will be integer. If ``eps`` is
given, the dtype will be float.
Will be a ``scipy.sparse.csr_matrix`` if ``sparse=True``.
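    Examples
    --------
    A small hand-checked illustration (marked to be skipped by doctest)
    >>> from sklearn.metrics.cluster import contingency_matrix
    >>> contingency_matrix([0, 0, 1, 2], [0, 0, 1, 1])  # doctest: +SKIP
    array([[2, 0],
           [0, 1],
           [0, 1]])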
"""
if eps is not None and sparse:
raise ValueError("Cannot set 'eps' when sparse=True")
classes, class_idx = np.unique(labels_true, return_inverse=True)
clusters, cluster_idx = np.unique(labels_pred, return_inverse=True)
n_classes = classes.shape[0]
n_clusters = clusters.shape[0]
# Using coo_matrix to accelerate simple histogram calculation,
# i.e. bins are consecutive integers
# Currently, coo_matrix is faster than histogram2d for simple cases
contingency = sp.coo_matrix((np.ones(class_idx.shape[0]),
(class_idx, cluster_idx)),
shape=(n_classes, n_clusters),
dtype=np.int)
if sparse:
contingency = contingency.tocsr()
contingency.sum_duplicates()
else:
contingency = contingency.toarray()
if eps is not None:
# don't use += as contingency is integer
contingency = contingency + eps
return contingency
# clustering measures
def adjusted_rand_score(labels_true, labels_pred):
"""Rand index adjusted for chance.
The Rand Index computes a similarity measure between two clusterings
by considering all pairs of samples and counting pairs that are
assigned in the same or different clusters in the predicted and
true clusterings.
The raw RI score is then "adjusted for chance" into the ARI score
using the following scheme::
ARI = (RI - Expected_RI) / (max(RI) - Expected_RI)
The adjusted Rand index is thus ensured to have a value close to
0.0 for random labeling independently of the number of clusters and
samples and exactly 1.0 when the clusterings are identical (up to
a permutation).
ARI is a symmetric measure::
adjusted_rand_score(a, b) == adjusted_rand_score(b, a)
Read more in the :ref:`User Guide <adjusted_rand_score>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
Ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
Cluster labels to evaluate
Returns
-------
ari : float
Similarity score between -1.0 and 1.0. Random labelings have an ARI
close to 0.0. 1.0 stands for perfect match.
Examples
--------
Perfectly matching labelings have a score of 1 even
>>> from sklearn.metrics.cluster import adjusted_rand_score
>>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
Labelings that assign all classes members to the same clusters
    are complete but not always pure, hence penalized::
>>> adjusted_rand_score([0, 0, 1, 2], [0, 0, 1, 1]) # doctest: +ELLIPSIS
0.57...
ARI is symmetric, so labelings that have pure clusters with members
coming from the same classes but unnecessary splits are penalized::
>>> adjusted_rand_score([0, 0, 1, 1], [0, 0, 1, 2]) # doctest: +ELLIPSIS
0.57...
If classes members are completely split across different clusters, the
assignment is totally incomplete, hence the ARI is very low::
>>> adjusted_rand_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0
References
----------
.. [Hubert1985] `L. Hubert and P. Arabie, Comparing Partitions,
Journal of Classification 1985`
http://link.springer.com/article/10.1007%2FBF01908075
.. [wk] https://en.wikipedia.org/wiki/Rand_index#Adjusted_Rand_index
See also
--------
adjusted_mutual_info_score: Adjusted Mutual Information
"""
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
n_samples = labels_true.shape[0]
n_classes = np.unique(labels_true).shape[0]
n_clusters = np.unique(labels_pred).shape[0]
# Special limit cases: no clustering since the data is not split;
# or trivial clustering where each document is assigned a unique cluster.
# These are perfect matches hence return 1.0.
if (n_classes == n_clusters == 1 or
n_classes == n_clusters == 0 or
n_classes == n_clusters == n_samples):
return 1.0
# Compute the ARI using the contingency data
contingency = contingency_matrix(labels_true, labels_pred, sparse=True)
sum_comb_c = sum(comb2(n_c) for n_c in np.ravel(contingency.sum(axis=1)))
sum_comb_k = sum(comb2(n_k) for n_k in np.ravel(contingency.sum(axis=0)))
sum_comb = sum(comb2(n_ij) for n_ij in contingency.data)
prod_comb = (sum_comb_c * sum_comb_k) / comb(n_samples, 2)
mean_comb = (sum_comb_k + sum_comb_c) / 2.
return (sum_comb - prod_comb) / (mean_comb - prod_comb)
def homogeneity_completeness_v_measure(labels_true, labels_pred):
"""Compute the homogeneity and completeness and V-Measure scores at once.
Those metrics are based on normalized conditional entropy measures of
the clustering labeling to evaluate given the knowledge of a Ground
Truth class labels of the same samples.
A clustering result satisfies homogeneity if all of its clusters
contain only data points which are members of a single class.
A clustering result satisfies completeness if all the data points
that are members of a given class are elements of the same cluster.
Both scores have positive values between 0.0 and 1.0, larger values
being desirable.
Those 3 metrics are independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score values in any way.
V-Measure is furthermore symmetric: swapping ``labels_true`` and
``label_pred`` will give the same score. This does not hold for
homogeneity and completeness.
Read more in the :ref:`User Guide <homogeneity_completeness>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
cluster labels to evaluate
Returns
-------
homogeneity : float
score between 0.0 and 1.0. 1.0 stands for perfectly homogeneous labeling
completeness : float
score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling
v_measure : float
harmonic mean of the first two
See also
--------
homogeneity_score
completeness_score
v_measure_score
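    Examples
    --------
    A rough, hand-checked illustration (values match the per-score examples
    further below): identical labelings such as ``[0, 0, 1, 1]`` vs
    ``[0, 0, 1, 1]`` give ``(1.0, 1.0, 1.0)``, while ``[0, 0, 1, 1]`` vs
    ``[0, 0, 1, 2]`` gives roughly ``(1.0, 0.67, 0.8)``: the extra split keeps
    homogeneity perfect but hurts completeness and therefore the V-measure.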
"""
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
if len(labels_true) == 0:
return 1.0, 1.0, 1.0
entropy_C = entropy(labels_true)
entropy_K = entropy(labels_pred)
contingency = contingency_matrix(labels_true, labels_pred, sparse=True)
MI = mutual_info_score(None, None, contingency=contingency)
homogeneity = MI / (entropy_C) if entropy_C else 1.0
completeness = MI / (entropy_K) if entropy_K else 1.0
if homogeneity + completeness == 0.0:
v_measure_score = 0.0
else:
v_measure_score = (2.0 * homogeneity * completeness /
(homogeneity + completeness))
return homogeneity, completeness, v_measure_score
def homogeneity_score(labels_true, labels_pred):
"""Homogeneity metric of a cluster labeling given a ground truth.
A clustering result satisfies homogeneity if all of its clusters
contain only data points which are members of a single class.
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is not symmetric: switching ``label_true`` with ``label_pred``
will return the :func:`completeness_score` which will be different in
general.
Read more in the :ref:`User Guide <homogeneity_completeness>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
cluster labels to evaluate
Returns
-------
homogeneity : float
score between 0.0 and 1.0. 1.0 stands for perfectly homogeneous labeling
References
----------
.. [1] `Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A
conditional entropy-based external cluster evaluation measure
<http://aclweb.org/anthology/D/D07/D07-1043.pdf>`_
See also
--------
completeness_score
v_measure_score
Examples
--------
Perfect labelings are homogeneous::
>>> from sklearn.metrics.cluster import homogeneity_score
>>> homogeneity_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
Non-perfect labelings that further split classes into more clusters can be
perfectly homogeneous::
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 1, 2]))
... # doctest: +ELLIPSIS
1.0...
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 2, 3]))
... # doctest: +ELLIPSIS
1.0...
Clusters that include samples from different classes do not make for an
homogeneous labeling::
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 1, 0, 1]))
... # doctest: +ELLIPSIS
0.0...
>>> print("%.6f" % homogeneity_score([0, 0, 1, 1], [0, 0, 0, 0]))
... # doctest: +ELLIPSIS
0.0...
"""
return homogeneity_completeness_v_measure(labels_true, labels_pred)[0]
def completeness_score(labels_true, labels_pred):
"""Completeness metric of a cluster labeling given a ground truth.
A clustering result satisfies completeness if all the data points
that are members of a given class are elements of the same cluster.
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is not symmetric: switching ``label_true`` with ``label_pred``
will return the :func:`homogeneity_score` which will be different in
general.
Read more in the :ref:`User Guide <homogeneity_completeness>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
cluster labels to evaluate
Returns
-------
completeness : float
score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling
References
----------
.. [1] `Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A
conditional entropy-based external cluster evaluation measure
<http://aclweb.org/anthology/D/D07/D07-1043.pdf>`_
See also
--------
homogeneity_score
v_measure_score
Examples
--------
Perfect labelings are complete::
>>> from sklearn.metrics.cluster import completeness_score
>>> completeness_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
Non-perfect labelings that assign all classes members to the same clusters
are still complete::
>>> print(completeness_score([0, 0, 1, 1], [0, 0, 0, 0]))
1.0
>>> print(completeness_score([0, 1, 2, 3], [0, 0, 1, 1]))
1.0
If classes members are split across different clusters, the
assignment cannot be complete::
>>> print(completeness_score([0, 0, 1, 1], [0, 1, 0, 1]))
0.0
>>> print(completeness_score([0, 0, 0, 0], [0, 1, 2, 3]))
0.0
"""
return homogeneity_completeness_v_measure(labels_true, labels_pred)[1]
def v_measure_score(labels_true, labels_pred):
"""V-measure cluster labeling given a ground truth.
This score is identical to :func:`normalized_mutual_info_score`.
The V-measure is the harmonic mean between homogeneity and completeness::
v = 2 * (homogeneity * completeness) / (homogeneity + completeness)
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is furthermore symmetric: switching ``label_true`` with
``label_pred`` will return the same score value. This can be useful to
measure the agreement of two independent label assignments strategies
on the same dataset when the real ground truth is not known.
Read more in the :ref:`User Guide <homogeneity_completeness>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
ground truth class labels to be used as a reference
labels_pred : array, shape = [n_samples]
cluster labels to evaluate
Returns
-------
v_measure : float
score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling
References
----------
.. [1] `Andrew Rosenberg and Julia Hirschberg, 2007. V-Measure: A
conditional entropy-based external cluster evaluation measure
<http://aclweb.org/anthology/D/D07/D07-1043.pdf>`_
See also
--------
homogeneity_score
completeness_score
Examples
--------
Perfect labelings are both homogeneous and complete, hence have score 1.0::
>>> from sklearn.metrics.cluster import v_measure_score
>>> v_measure_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> v_measure_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
Labelings that assign all classes members to the same clusters
    are complete but not homogeneous, hence penalized::
>>> print("%.6f" % v_measure_score([0, 0, 1, 2], [0, 0, 1, 1]))
... # doctest: +ELLIPSIS
0.8...
>>> print("%.6f" % v_measure_score([0, 1, 2, 3], [0, 0, 1, 1]))
... # doctest: +ELLIPSIS
0.66...
Labelings that have pure clusters with members coming from the same
    classes are homogeneous, but unnecessary splits harm completeness
and thus penalize V-measure as well::
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 1, 2]))
... # doctest: +ELLIPSIS
0.8...
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 1, 2, 3]))
... # doctest: +ELLIPSIS
0.66...
If classes members are completely split across different clusters,
the assignment is totally incomplete, hence the V-Measure is null::
>>> print("%.6f" % v_measure_score([0, 0, 0, 0], [0, 1, 2, 3]))
... # doctest: +ELLIPSIS
0.0...
Clusters that include samples from totally different classes totally
destroy the homogeneity of the labeling, hence::
>>> print("%.6f" % v_measure_score([0, 0, 1, 1], [0, 0, 0, 0]))
... # doctest: +ELLIPSIS
0.0...
"""
return homogeneity_completeness_v_measure(labels_true, labels_pred)[2]
def mutual_info_score(labels_true, labels_pred, contingency=None):
"""Mutual Information between two clusterings.
The Mutual Information is a measure of the similarity between two labels of
the same data. Where :math:`|U_i|` is the number of the samples
in cluster :math:`U_i` and :math:`|V_j|` is the number of the
samples in cluster :math:`V_j`, the Mutual Information
between clusterings :math:`U` and :math:`V` is given as:
.. math::
        MI(U,V)=\\sum_{i=1}^{|U|} \\sum_{j=1}^{|V|} \\frac{|U_i\\cap V_j|}{N}
        \\log\\frac{N|U_i \\cap V_j|}{|U_i||V_j|}
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is furthermore symmetric: switching ``label_true`` with
``label_pred`` will return the same score value. This can be useful to
measure the agreement of two independent label assignments strategies
on the same dataset when the real ground truth is not known.
Read more in the :ref:`User Guide <mutual_info_score>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
A clustering of the data into disjoint subsets.
labels_pred : array, shape = [n_samples]
A clustering of the data into disjoint subsets.
contingency : {None, array, sparse matrix},
shape = [n_classes_true, n_classes_pred]
A contingency matrix given by the :func:`contingency_matrix` function.
If value is ``None``, it will be computed, otherwise the given value is
used, with ``labels_true`` and ``labels_pred`` ignored.
Returns
-------
mi : float
Mutual information, a non-negative value
See also
--------
adjusted_mutual_info_score: Adjusted against chance Mutual Information
normalized_mutual_info_score: Normalized Mutual Information
"""
if contingency is None:
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
contingency = contingency_matrix(labels_true, labels_pred, sparse=True)
else:
contingency = check_array(contingency,
accept_sparse=['csr', 'csc', 'coo'],
dtype=[int, np.int32, np.int64])
if isinstance(contingency, np.ndarray):
# For an array
nzx, nzy = np.nonzero(contingency)
nz_val = contingency[nzx, nzy]
elif sp.issparse(contingency):
# For a sparse matrix
nzx, nzy, nz_val = sp.find(contingency)
else:
raise ValueError("Unsupported type for 'contingency': %s" %
type(contingency))
contingency_sum = contingency.sum()
pi = np.ravel(contingency.sum(axis=1))
pj = np.ravel(contingency.sum(axis=0))
log_contingency_nm = np.log(nz_val)
contingency_nm = nz_val / contingency_sum
# Don't need to calculate the full outer product, just for non-zeroes
outer = pi.take(nzx) * pj.take(nzy)
log_outer = -np.log(outer) + log(pi.sum()) + log(pj.sum())
mi = (contingency_nm * (log_contingency_nm - log(contingency_sum)) +
contingency_nm * log_outer)
return mi.sum()
def adjusted_mutual_info_score(labels_true, labels_pred):
"""Adjusted Mutual Information between two clusterings.
Adjusted Mutual Information (AMI) is an adjustment of the Mutual
Information (MI) score to account for chance. It accounts for the fact that
the MI is generally higher for two clusterings with a larger number of
clusters, regardless of whether there is actually more information shared.
For two clusterings :math:`U` and :math:`V`, the AMI is given as::
AMI(U, V) = [MI(U, V) - E(MI(U, V))] / [max(H(U), H(V)) - E(MI(U, V))]
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is furthermore symmetric: switching ``label_true`` with
``label_pred`` will return the same score value. This can be useful to
measure the agreement of two independent label assignments strategies
on the same dataset when the real ground truth is not known.
Be mindful that this function is an order of magnitude slower than other
metrics, such as the Adjusted Rand Index.
Read more in the :ref:`User Guide <mutual_info_score>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
A clustering of the data into disjoint subsets.
labels_pred : array, shape = [n_samples]
A clustering of the data into disjoint subsets.
Returns
-------
    ami : float (upper bounded by 1.0)
        The AMI returns a value of 1 when the two partitions are identical
        (i.e. perfectly matched). Random partitions (independent labellings)
        have an expected AMI around 0 on average and hence can be negative.
See also
--------
adjusted_rand_score: Adjusted Rand Index
mutual_information_score: Mutual Information (not adjusted for chance)
Examples
--------
Perfect labelings are both homogeneous and complete, hence have
score 1.0::
>>> from sklearn.metrics.cluster import adjusted_mutual_info_score
>>> adjusted_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> adjusted_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
If classes members are completely split across different clusters,
the assignment is totally in-complete, hence the AMI is null::
>>> adjusted_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0
References
----------
.. [1] `Vinh, Epps, and Bailey, (2010). Information Theoretic Measures for
Clusterings Comparison: Variants, Properties, Normalization and
Correction for Chance, JMLR
<http://jmlr.csail.mit.edu/papers/volume11/vinh10a/vinh10a.pdf>`_
.. [2] `Wikipedia entry for the Adjusted Mutual Information
<https://en.wikipedia.org/wiki/Adjusted_Mutual_Information>`_
"""
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
n_samples = labels_true.shape[0]
classes = np.unique(labels_true)
clusters = np.unique(labels_pred)
# Special limit cases: no clustering since the data is not split.
# This is a perfect match hence return 1.0.
if (classes.shape[0] == clusters.shape[0] == 1 or
classes.shape[0] == clusters.shape[0] == 0):
return 1.0
contingency = contingency_matrix(labels_true, labels_pred, sparse=True)
contingency = contingency.astype(np.float64)
# Calculate the MI for the two clusterings
mi = mutual_info_score(labels_true, labels_pred,
contingency=contingency)
# Calculate the expected value for the mutual information
emi = expected_mutual_information(contingency, n_samples)
# Calculate entropy for each labeling
h_true, h_pred = entropy(labels_true), entropy(labels_pred)
ami = (mi - emi) / (max(h_true, h_pred) - emi)
return ami
def normalized_mutual_info_score(labels_true, labels_pred):
"""Normalized Mutual Information between two clusterings.
    Normalized Mutual Information (NMI) is a normalization of the Mutual
Information (MI) score to scale the results between 0 (no mutual
information) and 1 (perfect correlation). In this function, mutual
information is normalized by ``sqrt(H(labels_true) * H(labels_pred))``
This measure is not adjusted for chance. Therefore
    :func:`adjusted_mutual_info_score` might be preferred.
This metric is independent of the absolute values of the labels:
a permutation of the class or cluster label values won't change the
score value in any way.
This metric is furthermore symmetric: switching ``label_true`` with
``label_pred`` will return the same score value. This can be useful to
measure the agreement of two independent label assignments strategies
on the same dataset when the real ground truth is not known.
Read more in the :ref:`User Guide <mutual_info_score>`.
Parameters
----------
labels_true : int array, shape = [n_samples]
A clustering of the data into disjoint subsets.
labels_pred : array, shape = [n_samples]
A clustering of the data into disjoint subsets.
Returns
-------
nmi : float
score between 0.0 and 1.0. 1.0 stands for perfectly complete labeling
See also
--------
adjusted_rand_score: Adjusted Rand Index
adjusted_mutual_info_score: Adjusted Mutual Information (adjusted
against chance)
Examples
--------
Perfect labelings are both homogeneous and complete, hence have
score 1.0::
>>> from sklearn.metrics.cluster import normalized_mutual_info_score
>>> normalized_mutual_info_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> normalized_mutual_info_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
If classes members are completely split across different clusters,
the assignment is totally in-complete, hence the NMI is null::
>>> normalized_mutual_info_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0
"""
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
classes = np.unique(labels_true)
clusters = np.unique(labels_pred)
# Special limit cases: no clustering since the data is not split.
# This is a perfect match hence return 1.0.
if (classes.shape[0] == clusters.shape[0] == 1 or
classes.shape[0] == clusters.shape[0] == 0):
return 1.0
contingency = contingency_matrix(labels_true, labels_pred, sparse=True)
contingency = contingency.astype(np.float64)
# Calculate the MI for the two clusterings
mi = mutual_info_score(labels_true, labels_pred,
contingency=contingency)
# Calculate the expected value for the mutual information
# Calculate entropy for each labeling
h_true, h_pred = entropy(labels_true), entropy(labels_pred)
nmi = mi / max(np.sqrt(h_true * h_pred), 1e-10)
return nmi
def fowlkes_mallows_score(labels_true, labels_pred, sparse=False):
"""Measure the similarity of two clusterings of a set of points.
    The Fowlkes-Mallows index (FMI) is defined as the geometric mean of
the precision and recall::
FMI = TP / sqrt((TP + FP) * (TP + FN))
    Where ``TP`` is the number of **True Positives** (i.e. the number of pairs
    of points that belong to the same cluster in both ``labels_true`` and
    ``labels_pred``), ``FP`` is the number of **False Positives** (i.e. the
    number of pairs of points that belong to the same cluster in
    ``labels_true`` but not in ``labels_pred``) and ``FN`` is the number of
    **False Negatives** (i.e. the number of pairs of points that belong to the
    same cluster in ``labels_pred`` but not in ``labels_true``).
The score ranges from 0 to 1. A high value indicates a good similarity
between two clusters.
Read more in the :ref:`User Guide <fowlkes_mallows_scores>`.
Parameters
----------
labels_true : int array, shape = (``n_samples``,)
A clustering of the data into disjoint subsets.
labels_pred : array, shape = (``n_samples``, )
A clustering of the data into disjoint subsets.
sparse : bool
Compute contingency matrix internally with sparse matrix.
Returns
-------
score : float
The resulting Fowlkes-Mallows score.
Examples
--------
Perfect labelings are both homogeneous and complete, hence have
score 1.0::
>>> from sklearn.metrics.cluster import fowlkes_mallows_score
>>> fowlkes_mallows_score([0, 0, 1, 1], [0, 0, 1, 1])
1.0
>>> fowlkes_mallows_score([0, 0, 1, 1], [1, 1, 0, 0])
1.0
If classes members are completely split across different clusters,
the assignment is totally random, hence the FMI is null::
>>> fowlkes_mallows_score([0, 0, 0, 0], [0, 1, 2, 3])
0.0
References
----------
    .. [1] `E. B. Fowlkes and C. L. Mallows, 1983. "A method for comparing two
hierarchical clusterings". Journal of the American Statistical
Association
<http://wildfire.stat.ucla.edu/pdflibrary/fowlkes.pdf>`_
.. [2] `Wikipedia entry for the Fowlkes-Mallows Index
<https://en.wikipedia.org/wiki/Fowlkes-Mallows_index>`_
"""
labels_true, labels_pred = check_clusterings(labels_true, labels_pred)
n_samples, = labels_true.shape
c = contingency_matrix(labels_true, labels_pred, sparse=True)
tk = np.dot(c.data, c.data) - n_samples
pk = np.sum(np.asarray(c.sum(axis=0)).ravel() ** 2) - n_samples
qk = np.sum(np.asarray(c.sum(axis=1)).ravel() ** 2) - n_samples
return tk / np.sqrt(pk * qk) if tk != 0. else 0.
def entropy(labels):
"""Calculates the entropy for a labeling."""
if len(labels) == 0:
return 1.0
label_idx = np.unique(labels, return_inverse=True)[1]
pi = np.bincount(label_idx).astype(np.float64)
pi = pi[pi > 0]
pi_sum = np.sum(pi)
    # log(a / b) should be calculated as log(a) - log(b) to avoid
    # possible loss of precision
return -np.sum((pi / pi_sum) * (np.log(pi) - log(pi_sum)))
| bsd-3-clause |
laszlocsomor/tensorflow | tensorflow/contrib/learn/python/learn/estimators/estimator_test.py | 21 | 53471 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Estimator."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import itertools
import json
import os
import tempfile
import numpy as np
import six
from six.moves import xrange # pylint: disable=redefined-builtin
from google.protobuf import text_format
from tensorflow.contrib import learn
from tensorflow.contrib import lookup
from tensorflow.contrib.framework.python.ops import variables
from tensorflow.contrib.layers.python.layers import feature_column as feature_column_lib
from tensorflow.contrib.layers.python.layers import optimizers
from tensorflow.contrib.learn.python.learn import experiment
from tensorflow.contrib.learn.python.learn import models
from tensorflow.contrib.learn.python.learn import monitors as monitors_lib
from tensorflow.contrib.learn.python.learn.datasets import base
from tensorflow.contrib.learn.python.learn.estimators import _sklearn
from tensorflow.contrib.learn.python.learn.estimators import constants
from tensorflow.contrib.learn.python.learn.estimators import estimator
from tensorflow.contrib.learn.python.learn.estimators import linear
from tensorflow.contrib.learn.python.learn.estimators import model_fn
from tensorflow.contrib.learn.python.learn.estimators import run_config
from tensorflow.contrib.learn.python.learn.utils import input_fn_utils
from tensorflow.contrib.metrics.python.ops import metric_ops
from tensorflow.contrib.testing.python.framework import util_test
from tensorflow.python.client import session as session_lib
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.lib.io import file_io
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import parsing_ops
from tensorflow.python.ops import variables as variables_lib
from tensorflow.python.platform import gfile
from tensorflow.python.platform import test
from tensorflow.python.saved_model import loader
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.summary import summary
from tensorflow.python.training import basic_session_run_hooks
from tensorflow.python.training import checkpoint_state_pb2
from tensorflow.python.training import input as input_lib
from tensorflow.python.training import monitored_session
from tensorflow.python.training import saver as saver_lib
from tensorflow.python.training import session_run_hook
from tensorflow.python.util import compat
_BOSTON_INPUT_DIM = 13
_IRIS_INPUT_DIM = 4
def boston_input_fn(num_epochs=None):
boston = base.load_boston()
features = input_lib.limit_epochs(
array_ops.reshape(
constant_op.constant(boston.data), [-1, _BOSTON_INPUT_DIM]),
num_epochs=num_epochs)
labels = array_ops.reshape(constant_op.constant(boston.target), [-1, 1])
return features, labels
def iris_input_fn():
iris = base.load_iris()
features = array_ops.reshape(
constant_op.constant(iris.data), [-1, _IRIS_INPUT_DIM])
labels = array_ops.reshape(constant_op.constant(iris.target), [-1])
return features, labels
def iris_input_fn_labels_dict():
iris = base.load_iris()
features = array_ops.reshape(
constant_op.constant(iris.data), [-1, _IRIS_INPUT_DIM])
labels = {
'labels': array_ops.reshape(constant_op.constant(iris.target), [-1])
}
return features, labels
def boston_eval_fn():
boston = base.load_boston()
n_examples = len(boston.target)
features = array_ops.reshape(
constant_op.constant(boston.data), [n_examples, _BOSTON_INPUT_DIM])
labels = array_ops.reshape(
constant_op.constant(boston.target), [n_examples, 1])
return array_ops.concat([features, features], 0), array_ops.concat(
[labels, labels], 0)
def extract(data, key):
if isinstance(data, dict):
assert key in data
return data[key]
else:
return data
def linear_model_params_fn(features, labels, mode, params):
features = extract(features, 'input')
labels = extract(labels, 'labels')
assert mode in (model_fn.ModeKeys.TRAIN, model_fn.ModeKeys.EVAL,
model_fn.ModeKeys.INFER)
prediction, loss = (models.linear_regression_zero_init(features, labels))
train_op = optimizers.optimize_loss(
loss,
variables.get_global_step(),
optimizer='Adagrad',
learning_rate=params['learning_rate'])
return prediction, loss, train_op
def linear_model_fn(features, labels, mode):
features = extract(features, 'input')
labels = extract(labels, 'labels')
assert mode in (model_fn.ModeKeys.TRAIN, model_fn.ModeKeys.EVAL,
model_fn.ModeKeys.INFER)
if isinstance(features, dict):
(_, features), = features.items()
prediction, loss = (models.linear_regression_zero_init(features, labels))
train_op = optimizers.optimize_loss(
loss, variables.get_global_step(), optimizer='Adagrad', learning_rate=0.1)
return prediction, loss, train_op
def linear_model_fn_with_model_fn_ops(features, labels, mode):
"""Same as linear_model_fn, but returns `ModelFnOps`."""
assert mode in (model_fn.ModeKeys.TRAIN, model_fn.ModeKeys.EVAL,
model_fn.ModeKeys.INFER)
prediction, loss = (models.linear_regression_zero_init(features, labels))
train_op = optimizers.optimize_loss(
loss, variables.get_global_step(), optimizer='Adagrad', learning_rate=0.1)
return model_fn.ModelFnOps(
mode=mode, predictions=prediction, loss=loss, train_op=train_op)
def logistic_model_no_mode_fn(features, labels):
features = extract(features, 'input')
labels = extract(labels, 'labels')
labels = array_ops.one_hot(labels, 3, 1, 0)
prediction, loss = (models.logistic_regression_zero_init(features, labels))
train_op = optimizers.optimize_loss(
loss, variables.get_global_step(), optimizer='Adagrad', learning_rate=0.1)
return {
'class': math_ops.argmax(prediction, 1),
'prob': prediction
}, loss, train_op
VOCAB_FILE_CONTENT = 'emerson\nlake\npalmer\n'
EXTRA_FILE_CONTENT = 'kermit\npiggy\nralph\n'
def _build_estimator_for_export_tests(tmpdir):
def _input_fn():
iris = base.load_iris()
return {
'feature': constant_op.constant(
iris.data, dtype=dtypes.float32)
}, constant_op.constant(
iris.target, shape=[150], dtype=dtypes.int32)
feature_columns = [
feature_column_lib.real_valued_column(
'feature', dimension=4)
]
est = linear.LinearRegressor(feature_columns)
est.fit(input_fn=_input_fn, steps=20)
feature_spec = feature_column_lib.create_feature_spec_for_parsing(
feature_columns)
serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
# hack in an op that uses an asset, in order to test asset export.
# this is not actually valid, of course.
def serving_input_fn_with_asset():
features, labels, inputs = serving_input_fn()
vocab_file_name = os.path.join(tmpdir, 'my_vocab_file')
vocab_file = gfile.GFile(vocab_file_name, mode='w')
vocab_file.write(VOCAB_FILE_CONTENT)
vocab_file.close()
hashtable = lookup.HashTable(
lookup.TextFileStringTableInitializer(vocab_file_name), 'x')
features['bogus_lookup'] = hashtable.lookup(
math_ops.to_int64(features['feature']))
return input_fn_utils.InputFnOps(features, labels, inputs)
return est, serving_input_fn_with_asset
def _build_estimator_for_resource_export_test():
def _input_fn():
iris = base.load_iris()
return {
'feature': constant_op.constant(iris.data, dtype=dtypes.float32)
}, constant_op.constant(
iris.target, shape=[150], dtype=dtypes.int32)
feature_columns = [
feature_column_lib.real_valued_column('feature', dimension=4)
]
def resource_constant_model_fn(unused_features, unused_labels, mode):
"""A model_fn that loads a constant from a resource and serves it."""
assert mode in (model_fn.ModeKeys.TRAIN, model_fn.ModeKeys.EVAL,
model_fn.ModeKeys.INFER)
const = constant_op.constant(-1, dtype=dtypes.int64)
table = lookup.MutableHashTable(
dtypes.string, dtypes.int64, const, name='LookupTableModel')
update_global_step = variables.get_global_step().assign_add(1)
if mode in (model_fn.ModeKeys.TRAIN, model_fn.ModeKeys.EVAL):
key = constant_op.constant(['key'])
value = constant_op.constant([42], dtype=dtypes.int64)
train_op_1 = table.insert(key, value)
training_state = lookup.MutableHashTable(
dtypes.string, dtypes.int64, const, name='LookupTableTrainingState')
training_op_2 = training_state.insert(key, value)
return (const, const,
control_flow_ops.group(train_op_1, training_op_2,
update_global_step))
if mode == model_fn.ModeKeys.INFER:
key = constant_op.constant(['key'])
prediction = table.lookup(key)
return prediction, const, update_global_step
est = estimator.Estimator(model_fn=resource_constant_model_fn)
est.fit(input_fn=_input_fn, steps=1)
feature_spec = feature_column_lib.create_feature_spec_for_parsing(
feature_columns)
serving_input_fn = input_fn_utils.build_parsing_serving_input_fn(feature_spec)
return est, serving_input_fn
class CheckCallsMonitor(monitors_lib.BaseMonitor):
def __init__(self, expect_calls):
super(CheckCallsMonitor, self).__init__()
self.begin_calls = None
self.end_calls = None
self.expect_calls = expect_calls
def begin(self, max_steps):
self.begin_calls = 0
self.end_calls = 0
def step_begin(self, step):
self.begin_calls += 1
return {}
def step_end(self, step, outputs):
self.end_calls += 1
return False
def end(self):
assert (self.end_calls == self.expect_calls and
self.begin_calls == self.expect_calls)
def _model_fn_ops(
expected_features, expected_labels, actual_features, actual_labels, mode):
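  # Builds trivial ModelFnOps whose ops are gated on assertions that the
  # features/labels actually received by the Estimator match the expected
  # values supplied by the test.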
assert_ops = tuple([
check_ops.assert_equal(
expected_features[k], actual_features[k], name='assert_%s' % k)
for k in expected_features
] + [
check_ops.assert_equal(
expected_labels, actual_labels, name='assert_labels')
])
with ops.control_dependencies(assert_ops):
return model_fn.ModelFnOps(
mode=mode,
predictions=constant_op.constant(0.),
loss=constant_op.constant(0.),
train_op=variables.get_global_step().assign_add(1))
def _make_input_fn(features, labels):
def _input_fn():
return {
k: constant_op.constant(v)
for k, v in six.iteritems(features)
}, constant_op.constant(labels)
return _input_fn
class EstimatorModelFnTest(test.TestCase):
def testModelFnArgs(self):
features = {'x': 42., 'y': 43.}
labels = 44.
expected_params = {'some_param': 'some_value'}
expected_config = run_config.RunConfig()
expected_config.i_am_test = True
# TODO(ptucker): We have to roll our own mock since Estimator._get_arguments
# doesn't work with mock fns.
model_fn_call_count = [0]
# `features` and `labels` are passed by position, `arg0` and `arg1` here.
def _model_fn(arg0, arg1, mode, params, config):
model_fn_call_count[0] += 1
self.assertItemsEqual(features.keys(), arg0.keys())
self.assertEqual(model_fn.ModeKeys.TRAIN, mode)
self.assertEqual(expected_params, params)
self.assertTrue(config.i_am_test)
return _model_fn_ops(features, labels, arg0, arg1, mode)
est = estimator.Estimator(
model_fn=_model_fn, params=expected_params, config=expected_config)
self.assertEqual(0, model_fn_call_count[0])
est.fit(input_fn=_make_input_fn(features, labels), steps=1)
self.assertEqual(1, model_fn_call_count[0])
def testPartialModelFnArgs(self):
features = {'x': 42., 'y': 43.}
labels = 44.
expected_params = {'some_param': 'some_value'}
expected_config = run_config.RunConfig()
expected_config.i_am_test = True
expected_foo = 45.
expected_bar = 46.
# TODO(ptucker): We have to roll our own mock since Estimator._get_arguments
# doesn't work with mock fns.
model_fn_call_count = [0]
# `features` and `labels` are passed by position, `arg0` and `arg1` here.
def _model_fn(arg0, arg1, foo, mode, params, config, bar):
model_fn_call_count[0] += 1
self.assertEqual(expected_foo, foo)
self.assertEqual(expected_bar, bar)
self.assertItemsEqual(features.keys(), arg0.keys())
self.assertEqual(model_fn.ModeKeys.TRAIN, mode)
self.assertEqual(expected_params, params)
self.assertTrue(config.i_am_test)
return _model_fn_ops(features, labels, arg0, arg1, mode)
partial_model_fn = functools.partial(
_model_fn, foo=expected_foo, bar=expected_bar)
est = estimator.Estimator(
model_fn=partial_model_fn, params=expected_params,
config=expected_config)
self.assertEqual(0, model_fn_call_count[0])
est.fit(input_fn=_make_input_fn(features, labels), steps=1)
self.assertEqual(1, model_fn_call_count[0])
def testModelFnWithModelDir(self):
expected_param = {'some_param': 'some_value'}
expected_model_dir = tempfile.mkdtemp()
def _argument_checker(features, labels, mode, params, config=None,
model_dir=None):
_, _, _ = features, labels, config
self.assertEqual(model_fn.ModeKeys.TRAIN, mode)
self.assertEqual(expected_param, params)
self.assertEqual(model_dir, expected_model_dir)
return (constant_op.constant(0.), constant_op.constant(0.),
variables.get_global_step().assign_add(1))
est = estimator.Estimator(model_fn=_argument_checker,
params=expected_param,
model_dir=expected_model_dir)
est.fit(input_fn=boston_input_fn, steps=1)
def testInvalidModelFn_no_train_op(self):
def _invalid_model_fn(features, labels):
# pylint: disable=unused-argument
w = variables_lib.Variable(42.0, 'weight')
update_global_step = variables.get_global_step().assign_add(1)
with ops.control_dependencies([update_global_step]):
loss = 100.0 - w
return None, loss, None
est = estimator.Estimator(model_fn=_invalid_model_fn)
with self.assertRaisesRegexp(ValueError, 'Missing train_op'):
est.fit(input_fn=boston_input_fn, steps=1)
def testInvalidModelFn_no_loss(self):
def _invalid_model_fn(features, labels, mode):
# pylint: disable=unused-argument
w = variables_lib.Variable(42.0, 'weight')
loss = 100.0 - w
update_global_step = variables.get_global_step().assign_add(1)
with ops.control_dependencies([update_global_step]):
train_op = w.assign_add(loss / 100.0)
predictions = loss
if mode == model_fn.ModeKeys.EVAL:
loss = None
return predictions, loss, train_op
est = estimator.Estimator(model_fn=_invalid_model_fn)
est.fit(input_fn=boston_input_fn, steps=1)
with self.assertRaisesRegexp(ValueError, 'Missing loss'):
est.evaluate(input_fn=boston_eval_fn, steps=1)
def testInvalidModelFn_no_prediction(self):
def _invalid_model_fn(features, labels):
# pylint: disable=unused-argument
w = variables_lib.Variable(42.0, 'weight')
loss = 100.0 - w
update_global_step = variables.get_global_step().assign_add(1)
with ops.control_dependencies([update_global_step]):
train_op = w.assign_add(loss / 100.0)
return None, loss, train_op
est = estimator.Estimator(model_fn=_invalid_model_fn)
est.fit(input_fn=boston_input_fn, steps=1)
with self.assertRaisesRegexp(ValueError, 'Missing prediction'):
est.evaluate(input_fn=boston_eval_fn, steps=1)
with self.assertRaisesRegexp(ValueError, 'Missing prediction'):
est.predict(input_fn=boston_input_fn)
with self.assertRaisesRegexp(ValueError, 'Missing prediction'):
est.predict(
input_fn=functools.partial(
boston_input_fn, num_epochs=1),
as_iterable=True)
def testModelFnScaffoldInTraining(self):
self.is_init_fn_called = False
def _init_fn(scaffold, session):
_, _ = scaffold, session
self.is_init_fn_called = True
def _model_fn_scaffold(features, labels, mode):
_, _ = features, labels
return model_fn.ModelFnOps(
mode=mode,
predictions=constant_op.constant(0.),
loss=constant_op.constant(0.),
train_op=variables.get_global_step().assign_add(1),
scaffold=monitored_session.Scaffold(init_fn=_init_fn))
est = estimator.Estimator(model_fn=_model_fn_scaffold)
est.fit(input_fn=boston_input_fn, steps=1)
self.assertTrue(self.is_init_fn_called)
def testModelFnScaffoldSaverUsage(self):
def _model_fn_scaffold(features, labels, mode):
_, _ = features, labels
variables_lib.Variable(1., 'weight')
real_saver = saver_lib.Saver()
self.mock_saver = test.mock.Mock(
wraps=real_saver, saver_def=real_saver.saver_def)
return model_fn.ModelFnOps(
mode=mode,
predictions=constant_op.constant([[1.]]),
loss=constant_op.constant(0.),
train_op=variables.get_global_step().assign_add(1),
scaffold=monitored_session.Scaffold(saver=self.mock_saver))
def input_fn():
return {
'x': constant_op.constant([[1.]]),
}, constant_op.constant([[1.]])
est = estimator.Estimator(model_fn=_model_fn_scaffold)
est.fit(input_fn=input_fn, steps=1)
self.assertTrue(self.mock_saver.save.called)
est.evaluate(input_fn=input_fn, steps=1)
self.assertTrue(self.mock_saver.restore.called)
est.predict(input_fn=input_fn)
self.assertTrue(self.mock_saver.restore.called)
def serving_input_fn():
serialized_tf_example = array_ops.placeholder(dtype=dtypes.string,
shape=[None],
name='input_example_tensor')
features, labels = input_fn()
return input_fn_utils.InputFnOps(
features, labels, {'examples': serialized_tf_example})
est.export_savedmodel(os.path.join(est.model_dir, 'export'), serving_input_fn)
self.assertTrue(self.mock_saver.restore.called)
class EstimatorTest(test.TestCase):
def testExperimentIntegration(self):
exp = experiment.Experiment(
estimator=estimator.Estimator(model_fn=linear_model_fn),
train_input_fn=boston_input_fn,
eval_input_fn=boston_input_fn)
exp.test()
def testCheckpointSaverHookSuppressesTheDefaultOne(self):
saver_hook = test.mock.Mock(
spec=basic_session_run_hooks.CheckpointSaverHook)
saver_hook.before_run.return_value = None
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=1, monitors=[saver_hook])
    # Test that nothing is saved, because the default saver was suppressed.
with self.assertRaises(learn.NotFittedError):
est.evaluate(input_fn=boston_input_fn, steps=1)
def testCustomConfig(self):
test_random_seed = 5783452
class TestInput(object):
def __init__(self):
self.random_seed = 0
def config_test_input_fn(self):
self.random_seed = ops.get_default_graph().seed
return constant_op.constant([[1.]]), constant_op.constant([1.])
config = run_config.RunConfig(tf_random_seed=test_random_seed)
test_input = TestInput()
est = estimator.Estimator(model_fn=linear_model_fn, config=config)
est.fit(input_fn=test_input.config_test_input_fn, steps=1)
# If input_fn ran, it will have given us the random seed set on the graph.
self.assertEquals(test_random_seed, test_input.random_seed)
def testRunConfigModelDir(self):
config = run_config.RunConfig(model_dir='test_dir')
est = estimator.Estimator(model_fn=linear_model_fn,
config=config)
self.assertEqual('test_dir', est.config.model_dir)
self.assertEqual('test_dir', est.model_dir)
def testModelDirAndRunConfigModelDir(self):
config = run_config.RunConfig(model_dir='test_dir')
est = estimator.Estimator(model_fn=linear_model_fn,
config=config,
model_dir='test_dir')
self.assertEqual('test_dir', est.config.model_dir)
with self.assertRaisesRegexp(
ValueError,
'model_dir are set both in constructor and RunConfig, '
'but with different'):
estimator.Estimator(model_fn=linear_model_fn,
config=config,
model_dir='different_dir')
def testModelDirIsCopiedToRunConfig(self):
config = run_config.RunConfig()
self.assertIsNone(config.model_dir)
est = estimator.Estimator(model_fn=linear_model_fn,
model_dir='test_dir',
config=config)
self.assertEqual('test_dir', est.config.model_dir)
self.assertEqual('test_dir', est.model_dir)
def testModelDirAsTempDir(self):
with test.mock.patch.object(tempfile, 'mkdtemp', return_value='temp_dir'):
est = estimator.Estimator(model_fn=linear_model_fn)
self.assertEqual('temp_dir', est.config.model_dir)
self.assertEqual('temp_dir', est.model_dir)
def testCheckInputs(self):
est = estimator.SKCompat(estimator.Estimator(model_fn=linear_model_fn))
    # Lambdas so we have two different objects to compare.
right_features = lambda: np.ones(shape=[7, 8], dtype=np.float32)
right_labels = lambda: np.ones(shape=[7, 10], dtype=np.int32)
est.fit(right_features(), right_labels(), steps=1)
# TODO(wicke): This does not fail for np.int32 because of data_feeder magic.
wrong_type_features = np.ones(shape=[7, 8], dtype=np.int64)
wrong_size_features = np.ones(shape=[7, 10])
wrong_type_labels = np.ones(shape=[7, 10], dtype=np.float32)
wrong_size_labels = np.ones(shape=[7, 11])
est.fit(x=right_features(), y=right_labels(), steps=1)
with self.assertRaises(ValueError):
est.fit(x=wrong_type_features, y=right_labels(), steps=1)
with self.assertRaises(ValueError):
est.fit(x=wrong_size_features, y=right_labels(), steps=1)
with self.assertRaises(ValueError):
est.fit(x=right_features(), y=wrong_type_labels, steps=1)
with self.assertRaises(ValueError):
est.fit(x=right_features(), y=wrong_size_labels, steps=1)
def testBadInput(self):
est = estimator.Estimator(model_fn=linear_model_fn)
self.assertRaisesRegexp(
ValueError,
'Either x or input_fn must be provided.',
est.fit,
x=None,
input_fn=None,
steps=1)
self.assertRaisesRegexp(
ValueError,
'Can not provide both input_fn and x or y',
est.fit,
x='X',
input_fn=iris_input_fn,
steps=1)
self.assertRaisesRegexp(
ValueError,
'Can not provide both input_fn and x or y',
est.fit,
y='Y',
input_fn=iris_input_fn,
steps=1)
self.assertRaisesRegexp(
ValueError,
'Can not provide both input_fn and batch_size',
est.fit,
input_fn=iris_input_fn,
batch_size=100,
steps=1)
self.assertRaisesRegexp(
ValueError,
'Inputs cannot be tensors. Please provide input_fn.',
est.fit,
x=constant_op.constant(1.),
steps=1)
def testUntrained(self):
boston = base.load_boston()
est = estimator.SKCompat(estimator.Estimator(model_fn=linear_model_fn))
with self.assertRaises(learn.NotFittedError):
_ = est.score(x=boston.data, y=boston.target.astype(np.float64))
with self.assertRaises(learn.NotFittedError):
est.predict(x=boston.data)
def testContinueTraining(self):
boston = base.load_boston()
output_dir = tempfile.mkdtemp()
est = estimator.SKCompat(
estimator.Estimator(
model_fn=linear_model_fn, model_dir=output_dir))
float64_labels = boston.target.astype(np.float64)
est.fit(x=boston.data, y=float64_labels, steps=50)
scores = est.score(
x=boston.data,
y=float64_labels,
metrics={'MSE': metric_ops.streaming_mean_squared_error})
del est
# Create another estimator object with the same output dir.
est2 = estimator.SKCompat(
estimator.Estimator(
model_fn=linear_model_fn, model_dir=output_dir))
# Check we can evaluate and predict.
scores2 = est2.score(
x=boston.data,
y=float64_labels,
metrics={'MSE': metric_ops.streaming_mean_squared_error})
self.assertAllClose(scores['MSE'], scores2['MSE'])
predictions = np.array(list(est2.predict(x=boston.data)))
other_score = _sklearn.mean_squared_error(predictions, float64_labels)
self.assertAllClose(scores['MSE'], other_score)
# Check we can keep training.
est2.fit(x=boston.data, y=float64_labels, steps=100)
scores3 = est2.score(
x=boston.data,
y=float64_labels,
metrics={'MSE': metric_ops.streaming_mean_squared_error})
self.assertLess(scores3['MSE'], scores['MSE'])
def test_checkpoint_contains_relative_paths(self):
tmpdir = tempfile.mkdtemp()
est = estimator.Estimator(
model_dir=tmpdir,
model_fn=linear_model_fn_with_model_fn_ops)
est.fit(input_fn=boston_input_fn, steps=5)
checkpoint_file_content = file_io.read_file_to_string(
os.path.join(tmpdir, 'checkpoint'))
ckpt = checkpoint_state_pb2.CheckpointState()
text_format.Merge(checkpoint_file_content, ckpt)
self.assertEqual(ckpt.model_checkpoint_path, 'model.ckpt-5')
self.assertAllEqual(
['model.ckpt-1', 'model.ckpt-5'], ckpt.all_model_checkpoint_paths)
def test_train_save_copy_reload(self):
tmpdir = tempfile.mkdtemp()
model_dir1 = os.path.join(tmpdir, 'model_dir1')
est1 = estimator.Estimator(
model_dir=model_dir1,
model_fn=linear_model_fn_with_model_fn_ops)
est1.fit(input_fn=boston_input_fn, steps=5)
model_dir2 = os.path.join(tmpdir, 'model_dir2')
os.renames(model_dir1, model_dir2)
est2 = estimator.Estimator(
model_dir=model_dir2,
model_fn=linear_model_fn_with_model_fn_ops)
self.assertEqual(5, est2.get_variable_value('global_step'))
est2.fit(input_fn=boston_input_fn, steps=5)
self.assertEqual(10, est2.get_variable_value('global_step'))
def testEstimatorParams(self):
boston = base.load_boston()
est = estimator.SKCompat(
estimator.Estimator(
model_fn=linear_model_params_fn, params={'learning_rate': 0.01}))
est.fit(x=boston.data, y=boston.target, steps=100)
def testHooksNotChanged(self):
est = estimator.Estimator(model_fn=logistic_model_no_mode_fn)
    # We pass an empty list and expect it to remain empty after calling
    # fit and evaluate. This requires the estimator to copy the list internally
    # if it adds any hooks.
my_array = []
est.fit(input_fn=iris_input_fn, steps=100, monitors=my_array)
_ = est.evaluate(input_fn=iris_input_fn, steps=1, hooks=my_array)
self.assertEqual(my_array, [])
def testIrisIterator(self):
iris = base.load_iris()
est = estimator.Estimator(model_fn=logistic_model_no_mode_fn)
x_iter = itertools.islice(iris.data, 100)
y_iter = itertools.islice(iris.target, 100)
estimator.SKCompat(est).fit(x_iter, y_iter, steps=20)
eval_result = est.evaluate(input_fn=iris_input_fn, steps=1)
x_iter_eval = itertools.islice(iris.data, 100)
y_iter_eval = itertools.islice(iris.target, 100)
score_result = estimator.SKCompat(est).score(x_iter_eval, y_iter_eval)
print(score_result)
self.assertItemsEqual(eval_result.keys(), score_result.keys())
self.assertItemsEqual(['global_step', 'loss'], score_result.keys())
predictions = estimator.SKCompat(est).predict(x=iris.data)['class']
self.assertEqual(len(predictions), iris.target.shape[0])
def testIrisIteratorArray(self):
iris = base.load_iris()
est = estimator.Estimator(model_fn=logistic_model_no_mode_fn)
x_iter = itertools.islice(iris.data, 100)
y_iter = (np.array(x) for x in iris.target)
est.fit(x_iter, y_iter, steps=100)
_ = est.evaluate(input_fn=iris_input_fn, steps=1)
_ = six.next(est.predict(x=iris.data))['class']
def testIrisIteratorPlainInt(self):
iris = base.load_iris()
est = estimator.Estimator(model_fn=logistic_model_no_mode_fn)
x_iter = itertools.islice(iris.data, 100)
y_iter = (v for v in iris.target)
est.fit(x_iter, y_iter, steps=100)
_ = est.evaluate(input_fn=iris_input_fn, steps=1)
_ = six.next(est.predict(x=iris.data))['class']
def testIrisTruncatedIterator(self):
iris = base.load_iris()
est = estimator.Estimator(model_fn=logistic_model_no_mode_fn)
x_iter = itertools.islice(iris.data, 50)
y_iter = ([np.int32(v)] for v in iris.target)
est.fit(x_iter, y_iter, steps=100)
def testTrainStepsIsIncremental(self):
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=10)
self.assertEqual(10, est.get_variable_value('global_step'))
est.fit(input_fn=boston_input_fn, steps=15)
self.assertEqual(25, est.get_variable_value('global_step'))
def testTrainMaxStepsIsNotIncremental(self):
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, max_steps=10)
self.assertEqual(10, est.get_variable_value('global_step'))
est.fit(input_fn=boston_input_fn, max_steps=15)
self.assertEqual(15, est.get_variable_value('global_step'))
def testPredict(self):
est = estimator.Estimator(model_fn=linear_model_fn)
boston = base.load_boston()
est.fit(input_fn=boston_input_fn, steps=1)
output = list(est.predict(x=boston.data, batch_size=10))
self.assertEqual(len(output), boston.target.shape[0])
def testWithModelFnOps(self):
"""Test for model_fn that returns `ModelFnOps`."""
est = estimator.Estimator(model_fn=linear_model_fn_with_model_fn_ops)
boston = base.load_boston()
est.fit(input_fn=boston_input_fn, steps=1)
input_fn = functools.partial(boston_input_fn, num_epochs=1)
scores = est.evaluate(input_fn=input_fn, steps=1)
self.assertIn('loss', scores.keys())
output = list(est.predict(input_fn=input_fn))
self.assertEqual(len(output), boston.target.shape[0])
def testWrongInput(self):
def other_input_fn():
return {
'other': constant_op.constant([0, 0, 0])
}, constant_op.constant([0, 0, 0])
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=1)
with self.assertRaises(ValueError):
est.fit(input_fn=other_input_fn, steps=1)
def testMonitorsForFit(self):
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn,
steps=21,
monitors=[CheckCallsMonitor(expect_calls=21)])
def testHooksForEvaluate(self):
class CheckCallHook(session_run_hook.SessionRunHook):
def __init__(self):
self.run_count = 0
def after_run(self, run_context, run_values):
self.run_count += 1
est = learn.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=1)
hook = CheckCallHook()
est.evaluate(input_fn=boston_eval_fn, steps=3, hooks=[hook])
self.assertEqual(3, hook.run_count)
def testSummaryWriting(self):
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=200)
est.evaluate(input_fn=boston_input_fn, steps=200)
loss_summary = util_test.simple_values_from_events(
util_test.latest_events(est.model_dir), ['OptimizeLoss/loss'])
self.assertEqual(1, len(loss_summary))
def testSummaryWritingWithSummaryProto(self):
def _streaming_mean_squared_error_histogram(predictions,
labels,
weights=None,
metrics_collections=None,
updates_collections=None,
name=None):
metrics, update_ops = metric_ops.streaming_mean_squared_error(
predictions,
labels,
weights=weights,
metrics_collections=metrics_collections,
updates_collections=updates_collections,
name=name)
return summary.histogram('histogram', metrics), update_ops
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=200)
est.evaluate(
input_fn=boston_input_fn,
steps=200,
metrics={'MSE': _streaming_mean_squared_error_histogram})
events = util_test.latest_events(est.model_dir + '/eval')
output_values = {}
for e in events:
if e.HasField('summary'):
for v in e.summary.value:
output_values[v.tag] = v
self.assertTrue('MSE' in output_values)
self.assertTrue(output_values['MSE'].HasField('histo'))
def testLossInGraphCollection(self):
class _LossCheckerHook(session_run_hook.SessionRunHook):
def begin(self):
self.loss_collection = ops.get_collection(ops.GraphKeys.LOSSES)
hook = _LossCheckerHook()
est = estimator.Estimator(model_fn=linear_model_fn)
est.fit(input_fn=boston_input_fn, steps=200, monitors=[hook])
self.assertTrue(hook.loss_collection)
def test_export_returns_exported_dirname(self):
expected = '/path/to/some_dir'
with test.mock.patch.object(estimator, 'export') as mock_export_module:
mock_export_module._export_estimator.return_value = expected
est = estimator.Estimator(model_fn=linear_model_fn)
actual = est.export('/path/to')
self.assertEquals(expected, actual)
def test_export_savedmodel(self):
tmpdir = tempfile.mkdtemp()
est, serving_input_fn = _build_estimator_for_export_tests(tmpdir)
extra_file_name = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('my_extra_file'))
extra_file = gfile.GFile(extra_file_name, mode='w')
extra_file.write(EXTRA_FILE_CONTENT)
extra_file.close()
assets_extra = {'some/sub/directory/my_extra_file': extra_file_name}
export_dir_base = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('export'))
export_dir = est.export_savedmodel(
export_dir_base, serving_input_fn, assets_extra=assets_extra)
self.assertTrue(gfile.Exists(export_dir_base))
self.assertTrue(gfile.Exists(export_dir))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes(
'saved_model.pb'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('variables'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.index'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.data-00000-of-00001'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('assets'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets/my_vocab_file'))))
self.assertEqual(
compat.as_bytes(VOCAB_FILE_CONTENT),
compat.as_bytes(
gfile.GFile(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets/my_vocab_file'))).read()))
expected_extra_path = os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets.extra/some/sub/directory/my_extra_file'))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('assets.extra'))))
self.assertTrue(gfile.Exists(expected_extra_path))
self.assertEqual(
compat.as_bytes(EXTRA_FILE_CONTENT),
compat.as_bytes(gfile.GFile(expected_extra_path).read()))
expected_vocab_file = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('my_vocab_file'))
# Restore, to validate that the export was well-formed.
with ops.Graph().as_default() as graph:
with session_lib.Session(graph=graph) as sess:
loader.load(sess, [tag_constants.SERVING], export_dir)
assets = [
x.eval()
for x in graph.get_collection(ops.GraphKeys.ASSET_FILEPATHS)
]
self.assertItemsEqual([expected_vocab_file], assets)
graph_ops = [x.name for x in graph.get_operations()]
self.assertTrue('input_example_tensor' in graph_ops)
self.assertTrue('ParseExample/ParseExample' in graph_ops)
self.assertTrue('linear/linear/feature/matmul' in graph_ops)
self.assertItemsEqual(
['bogus_lookup', 'feature'],
[compat.as_str_any(x) for x in graph.get_collection(
constants.COLLECTION_DEF_KEY_FOR_INPUT_FEATURE_KEYS)])
# cleanup
gfile.DeleteRecursively(tmpdir)
def test_export_savedmodel_with_resource(self):
tmpdir = tempfile.mkdtemp()
est, serving_input_fn = _build_estimator_for_resource_export_test()
export_dir_base = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('export'))
export_dir = est.export_savedmodel(export_dir_base, serving_input_fn)
self.assertTrue(gfile.Exists(export_dir_base))
self.assertTrue(gfile.Exists(export_dir))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes(
'saved_model.pb'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('variables'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.index'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.data-00000-of-00001'))))
# Restore, to validate that the export was well-formed.
with ops.Graph().as_default() as graph:
with session_lib.Session(graph=graph) as sess:
loader.load(sess, [tag_constants.SERVING], export_dir)
graph_ops = [x.name for x in graph.get_operations()]
self.assertTrue('input_example_tensor' in graph_ops)
self.assertTrue('ParseExample/ParseExample' in graph_ops)
self.assertTrue('LookupTableModel' in graph_ops)
self.assertFalse('LookupTableTrainingState' in graph_ops)
# cleanup
gfile.DeleteRecursively(tmpdir)
def test_export_savedmodel_with_graph_transforms(self):
tmpdir = tempfile.mkdtemp()
est, serving_input_fn = _build_estimator_for_export_tests(tmpdir)
extra_file_name = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('my_extra_file'))
extra_file = gfile.GFile(extra_file_name, mode='w')
extra_file.write(EXTRA_FILE_CONTENT)
extra_file.close()
assets_extra = {'some/sub/directory/my_extra_file': extra_file_name}
export_dir_base = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('export'))
export_dir = est.export_savedmodel(
export_dir_base, serving_input_fn, assets_extra=assets_extra,
graph_rewrite_specs=[
estimator.GraphRewriteSpec(['tag_1'], []),
estimator.GraphRewriteSpec(['tag_2', 'tag_3'],
['strip_unused_nodes'])])
self.assertTrue(gfile.Exists(export_dir_base))
self.assertTrue(gfile.Exists(export_dir))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes(
'saved_model.pb'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('variables'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.index'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('variables/variables.data-00000-of-00001'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('assets'))))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets/my_vocab_file'))))
self.assertEqual(
compat.as_bytes(VOCAB_FILE_CONTENT),
compat.as_bytes(
gfile.GFile(
os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets/my_vocab_file'))).read()))
expected_extra_path = os.path.join(
compat.as_bytes(export_dir),
compat.as_bytes('assets.extra/some/sub/directory/my_extra_file'))
self.assertTrue(
gfile.Exists(
os.path.join(
compat.as_bytes(export_dir), compat.as_bytes('assets.extra'))))
self.assertTrue(gfile.Exists(expected_extra_path))
self.assertEqual(
compat.as_bytes(EXTRA_FILE_CONTENT),
compat.as_bytes(gfile.GFile(expected_extra_path).read()))
expected_vocab_file = os.path.join(
compat.as_bytes(tmpdir), compat.as_bytes('my_vocab_file'))
# Restore, to validate that the export was well-formed.
# tag_1 is untransformed.
tags = ['tag_1']
with ops.Graph().as_default() as graph:
with session_lib.Session(graph=graph) as sess:
loader.load(sess, tags, export_dir)
assets = [
x.eval()
for x in graph.get_collection(ops.GraphKeys.ASSET_FILEPATHS)
]
self.assertItemsEqual([expected_vocab_file], assets)
graph_ops = [x.name for x in graph.get_operations()]
self.assertTrue('input_example_tensor' in graph_ops)
self.assertTrue('ParseExample/ParseExample' in graph_ops)
self.assertTrue('linear/linear/feature/matmul' in graph_ops)
# Since there were no transforms, both save ops are still present.
self.assertTrue('save/SaveV2/tensor_names' in graph_ops)
self.assertTrue('save_1/SaveV2/tensor_names' in graph_ops)
# Since there were no transforms, the hash table lookup is still there.
self.assertTrue('hash_table_Lookup' in graph_ops)
# Restore, to validate that the export was well-formed.
# tag_2, tag_3 was subjected to strip_unused_nodes.
tags = ['tag_2', 'tag_3']
with ops.Graph().as_default() as graph:
with session_lib.Session(graph=graph) as sess:
loader.load(sess, tags, export_dir)
assets = [
x.eval()
for x in graph.get_collection(ops.GraphKeys.ASSET_FILEPATHS)
]
self.assertItemsEqual([expected_vocab_file], assets)
graph_ops = [x.name for x in graph.get_operations()]
self.assertTrue('input_example_tensor' in graph_ops)
self.assertTrue('ParseExample/ParseExample' in graph_ops)
self.assertTrue('linear/linear/feature/matmul' in graph_ops)
# The Saver used to restore the checkpoint into the export Session
# was not added to the SAVERS collection, so strip_unused_nodes removes
# it. The one explicitly created in export_savedmodel is tracked in
# the MetaGraphDef saver_def field, so that one is retained.
# TODO(soergel): Make Savers sane again. I understand this is all a bit
# nuts but for now the test demonstrates what actually happens.
self.assertFalse('save/SaveV2/tensor_names' in graph_ops)
self.assertTrue('save_1/SaveV2/tensor_names' in graph_ops)
# The fake hash table lookup wasn't connected to anything; stripped.
self.assertFalse('hash_table_Lookup' in graph_ops)
# cleanup
gfile.DeleteRecursively(tmpdir)
class InferRealValuedColumnsTest(test.TestCase):
def testInvalidArgs(self):
with self.assertRaisesRegexp(ValueError, 'x or input_fn must be provided'):
estimator.infer_real_valued_columns_from_input(None)
with self.assertRaisesRegexp(ValueError, 'cannot be tensors'):
estimator.infer_real_valued_columns_from_input(constant_op.constant(1.0))
def _assert_single_feature_column(self, expected_shape, expected_dtype,
feature_columns):
self.assertEqual(1, len(feature_columns))
feature_column = feature_columns[0]
self.assertEqual('', feature_column.name)
self.assertEqual(
{
'':
parsing_ops.FixedLenFeature(
shape=expected_shape, dtype=expected_dtype)
},
feature_column.config)
def testInt32Input(self):
feature_columns = estimator.infer_real_valued_columns_from_input(
np.ones(
shape=[7, 8], dtype=np.int32))
self._assert_single_feature_column([8], dtypes.int32, feature_columns)
def testInt32InputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
lambda: (array_ops.ones(shape=[7, 8], dtype=dtypes.int32), None))
self._assert_single_feature_column([8], dtypes.int32, feature_columns)
def testInt64Input(self):
feature_columns = estimator.infer_real_valued_columns_from_input(
np.ones(
shape=[7, 8], dtype=np.int64))
self._assert_single_feature_column([8], dtypes.int64, feature_columns)
def testInt64InputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
lambda: (array_ops.ones(shape=[7, 8], dtype=dtypes.int64), None))
self._assert_single_feature_column([8], dtypes.int64, feature_columns)
def testFloat32Input(self):
feature_columns = estimator.infer_real_valued_columns_from_input(
np.ones(
shape=[7, 8], dtype=np.float32))
self._assert_single_feature_column([8], dtypes.float32, feature_columns)
def testFloat32InputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
lambda: (array_ops.ones(shape=[7, 8], dtype=dtypes.float32), None))
self._assert_single_feature_column([8], dtypes.float32, feature_columns)
def testFloat64Input(self):
feature_columns = estimator.infer_real_valued_columns_from_input(
np.ones(
shape=[7, 8], dtype=np.float64))
self._assert_single_feature_column([8], dtypes.float64, feature_columns)
def testFloat64InputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
lambda: (array_ops.ones(shape=[7, 8], dtype=dtypes.float64), None))
self._assert_single_feature_column([8], dtypes.float64, feature_columns)
def testBoolInput(self):
with self.assertRaisesRegexp(
ValueError, 'on integer or non floating types are not supported'):
estimator.infer_real_valued_columns_from_input(
np.array([[False for _ in xrange(8)] for _ in xrange(7)]))
def testBoolInputFn(self):
with self.assertRaisesRegexp(
ValueError, 'on integer or non floating types are not supported'):
# pylint: disable=g-long-lambda
estimator.infer_real_valued_columns_from_input_fn(
lambda: (constant_op.constant(False, shape=[7, 8], dtype=dtypes.bool),
None))
def testStringInput(self):
with self.assertRaisesRegexp(
ValueError, 'on integer or non floating types are not supported'):
# pylint: disable=g-long-lambda
estimator.infer_real_valued_columns_from_input(
np.array([['%d.0' % i for i in xrange(8)] for _ in xrange(7)]))
def testStringInputFn(self):
with self.assertRaisesRegexp(
ValueError, 'on integer or non floating types are not supported'):
# pylint: disable=g-long-lambda
estimator.infer_real_valued_columns_from_input_fn(
lambda: (
constant_op.constant([['%d.0' % i
for i in xrange(8)]
for _ in xrange(7)]),
None))
def testBostonInputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
boston_input_fn)
self._assert_single_feature_column([_BOSTON_INPUT_DIM], dtypes.float64,
feature_columns)
def testIrisInputFn(self):
feature_columns = estimator.infer_real_valued_columns_from_input_fn(
iris_input_fn)
self._assert_single_feature_column([_IRIS_INPUT_DIM], dtypes.float64,
feature_columns)
class ReplicaDeviceSetterTest(test.TestCase):
def testVariablesAreOnPs(self):
tf_config = {'cluster': {run_config.TaskType.PS: ['fake_ps_0']}}
with test.mock.patch.dict('os.environ',
{'TF_CONFIG': json.dumps(tf_config)}):
config = run_config.RunConfig()
with ops.device(estimator._get_replica_device_setter(config)):
v = variables_lib.Variable([1, 2])
w = variables_lib.Variable([2, 1])
a = v + w
self.assertDeviceEqual('/job:ps/task:0', v.device)
self.assertDeviceEqual('/job:ps/task:0', v.initializer.device)
self.assertDeviceEqual('/job:ps/task:0', w.device)
self.assertDeviceEqual('/job:ps/task:0', w.initializer.device)
self.assertDeviceEqual('/job:worker', a.device)
def testVariablesAreLocal(self):
with ops.device(
estimator._get_replica_device_setter(run_config.RunConfig())):
v = variables_lib.Variable([1, 2])
w = variables_lib.Variable([2, 1])
a = v + w
self.assertDeviceEqual('', v.device)
self.assertDeviceEqual('', v.initializer.device)
self.assertDeviceEqual('', w.device)
self.assertDeviceEqual('', w.initializer.device)
self.assertDeviceEqual('', a.device)
def testMutableHashTableIsOnPs(self):
tf_config = {'cluster': {run_config.TaskType.PS: ['fake_ps_0']}}
with test.mock.patch.dict('os.environ',
{'TF_CONFIG': json.dumps(tf_config)}):
config = run_config.RunConfig()
with ops.device(estimator._get_replica_device_setter(config)):
default_val = constant_op.constant([-1, -1], dtypes.int64)
table = lookup.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
input_string = constant_op.constant(['brain', 'salad', 'tank'])
output = table.lookup(input_string)
self.assertDeviceEqual('/job:ps/task:0', table._table_ref.device)
self.assertDeviceEqual('/job:ps/task:0', output.device)
def testMutableHashTableIsLocal(self):
with ops.device(
estimator._get_replica_device_setter(run_config.RunConfig())):
default_val = constant_op.constant([-1, -1], dtypes.int64)
table = lookup.MutableHashTable(dtypes.string, dtypes.int64,
default_val)
input_string = constant_op.constant(['brain', 'salad', 'tank'])
output = table.lookup(input_string)
self.assertDeviceEqual('', table._table_ref.device)
self.assertDeviceEqual('', output.device)
def testTaskIsSetOnWorkerWhenJobNameIsSet(self):
tf_config = {
'cluster': {
run_config.TaskType.PS: ['fake_ps_0']
},
'task': {
'type': run_config.TaskType.WORKER,
'index': 3
}
}
with test.mock.patch.dict('os.environ',
{'TF_CONFIG': json.dumps(tf_config)}):
config = run_config.RunConfig()
with ops.device(estimator._get_replica_device_setter(config)):
v = variables_lib.Variable([1, 2])
w = variables_lib.Variable([2, 1])
a = v + w
self.assertDeviceEqual('/job:ps/task:0', v.device)
self.assertDeviceEqual('/job:ps/task:0', v.initializer.device)
self.assertDeviceEqual('/job:ps/task:0', w.device)
self.assertDeviceEqual('/job:ps/task:0', w.initializer.device)
self.assertDeviceEqual('/job:worker/task:3', a.device)
if __name__ == '__main__':
test.main()
| apache-2.0 |
mne-tools/mne-python | tutorials/epochs/50_epochs_to_data_frame.py | 10 | 6955 | """
.. _tut-epochs-dataframe:
Exporting Epochs to Pandas DataFrames
=====================================
This tutorial shows how to export the data in :class:`~mne.Epochs` objects to a
:class:`Pandas DataFrame <pandas.DataFrame>`, and applies a typical Pandas
:doc:`split-apply-combine <pandas:user_guide/groupby>` workflow to examine the
latencies of the response maxima across epochs and conditions.
We'll use the :ref:`sample-dataset` dataset, but load a version of the raw file
that has already been filtered and downsampled, and has an average reference
applied to its EEG channels. As usual we'll start by importing the modules we
need and loading the data:
"""
import os
import seaborn as sns
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
###############################################################################
# Next we'll load a list of events from file, map them to condition names with
# an event dictionary, set some signal rejection thresholds (cf.
# :ref:`tut-reject-epochs-section`), and segment the continuous data into
# epochs:
sample_data_events_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_filt-0-40_raw-eve.fif')
events = mne.read_events(sample_data_events_file)
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
reject_criteria = dict(mag=3000e-15, # 3000 fT
grad=3000e-13, # 3000 fT/cm
eeg=100e-6, # 100 µV
eog=200e-6) # 200 µV
tmin, tmax = (-0.2, 0.5) # epoch from 200 ms before event to 500 ms after it
baseline = (None, 0) # baseline period from start of epoch to time=0
epochs = mne.Epochs(raw, events, event_dict, tmin, tmax, proj=True,
baseline=baseline, reject=reject_criteria, preload=True)
del raw
###############################################################################
# Converting an ``Epochs`` object to a ``DataFrame``
# ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#
# Once we have our :class:`~mne.Epochs` object, converting it to a
# :class:`~pandas.DataFrame` is simple: just call :meth:`epochs.to_data_frame()
# <mne.Epochs.to_data_frame>`. Each channel's data will be a column of the new
# :class:`~pandas.DataFrame`, alongside three additional columns of event name,
# epoch number, and sample time. Here we'll just show the first few rows and
# columns:
df = epochs.to_data_frame()
df.iloc[:5, :10]
###############################################################################
# Scaling time and channel values
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# By default, time values are converted from seconds to milliseconds and
# then rounded to the nearest integer; if you don't want this, you can pass
# ``time_format=None`` to keep time as a :class:`float` value in seconds, or
# convert it to a :class:`~pandas.Timedelta` value via
# ``time_format='timedelta'``.
#
# Note also that, by default, channel measurement values are scaled so that EEG
# data are converted to µV, magnetometer data are converted to fT, and
# gradiometer data are converted to fT/cm. These scalings can be customized
# through the ``scalings`` parameter, or suppressed by passing
# ``scalings=dict(eeg=1, mag=1, grad=1)``.
df = epochs.to_data_frame(time_format=None,
scalings=dict(eeg=1, mag=1, grad=1))
df.iloc[:5, :10]
###############################################################################
# Notice that the time values are no longer integers, and the channel values
# have changed by several orders of magnitude compared to the earlier
# DataFrame.
#
#
# Setting the ``index``
# ~~~~~~~~~~~~~~~~~~~~~
#
# It is also possible to move one or more of the indicator columns (event name,
# epoch number, and sample time) into the :ref:`index <pandas:indexing>`, by
# passing a string or list of strings as the ``index`` parameter. We'll also
# demonstrate here the effect of ``time_format='timedelta'``, yielding
# :class:`~pandas.Timedelta` values in the "time" column.
df = epochs.to_data_frame(index=['condition', 'epoch'],
time_format='timedelta')
df.iloc[:5, :10]
###############################################################################
# Wide- versus long-format DataFrames
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Another parameter, ``long_format``, determines whether each channel's data is
# in a separate column of the :class:`~pandas.DataFrame`
# (``long_format=False``), or whether the measured values are pivoted into a
# single ``'value'`` column with an extra indicator column for the channel name
# (``long_format=True``). Passing ``long_format=True`` will also create an
# extra column ``ch_type`` indicating the channel type.
long_df = epochs.to_data_frame(time_format=None, index='condition',
long_format=True)
long_df.head()
###############################################################################
# Generating the :class:`~pandas.DataFrame` in long format can be helpful when
# using other Python modules for subsequent analysis or plotting. For example,
# here we'll take data from the "auditory/left" condition, pick a couple MEG
# channels, and use :func:`seaborn.lineplot` to automatically plot the mean and
# confidence band for each channel, with confidence computed across the epochs
# in the chosen condition:
channels = ['MEG 1332', 'MEG 1342']
data = long_df.loc['auditory/left'].query('channel in @channels')
# convert channel column (CategoryDtype → string; for a nicer-looking legend)
data['channel'] = data['channel'].astype(str)
sns.lineplot(x='time', y='value', hue='channel', data=data)
###############################################################################
# We can also now use all the power of Pandas for grouping and transforming our
# data. Here, we find the latency of peak activation of 2 gradiometers (one
# near auditory cortex and one near visual cortex), and plot the distribution
# of the timing of the peak in each channel as a :func:`~seaborn.violinplot`:
# sphinx_gallery_thumbnail_number = 2
df = epochs.to_data_frame(time_format=None)
peak_latency = (df.filter(regex=r'condition|epoch|MEG 1332|MEG 2123')
.groupby(['condition', 'epoch'])
.aggregate(lambda x: df['time'].iloc[x.idxmax()])
.reset_index()
.melt(id_vars=['condition', 'epoch'],
var_name='channel',
value_name='latency of peak')
)
ax = sns.violinplot(x='channel', y='latency of peak', hue='condition',
data=peak_latency, palette='deep', saturation=1)
| bsd-3-clause |
AndreasMadsen/tensorflow | tensorflow/contrib/learn/python/learn/estimators/__init__.py | 3 | 11622 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""An estimator is a rule for calculating an estimate of a given quantity.
# Estimators
* **Estimators** are used to train and evaluate TensorFlow models.
They support regression and classification problems.
* **Classifiers** are functions that have discrete outcomes.
* **Regressors** are functions that predict continuous values.
## Choosing the correct estimator
* For **Regression** problems use one of the following:
* `LinearRegressor`: Uses linear model.
* `DNNRegressor`: Uses DNN.
* `DNNLinearCombinedRegressor`: Uses Wide & Deep.
* `TensorForestEstimator`: Uses RandomForest. Use `.predict()` for
regression problems.
* `Estimator`: Use when you need a custom model.
* For **Classification** problems use one of the following:
* `LinearClassifier`: Multiclass classifier using Linear model.
* `DNNClassifier`: Multiclass classifier using DNN.
* `DNNLinearCombinedClassifier`: Multiclass classifier using Wide & Deep.
* `TensorForestEstimator`: Uses RandomForest. Use `.predict_proba()` when
using for binary classification problems.
* `SVM`: Binary classifier using linear SVMs.
* `LogisticRegressor`: Use when you need custom model for binary
classification.
* `Estimator`: Use when you need custom model for N class classification.
## Pre-canned Estimators
Pre-canned estimators are machine learning estimators premade for general
purpose problems. If you need more customization, you can always write your
own custom estimator as described in the section below.
Pre-canned estimators are tested and optimized for speed and quality.
### Define the feature columns
Here are some possible types of feature columns used as inputs to a pre-canned
estimator.
Feature columns may vary based on the estimator used, so the sections below
show which feature columns are fed to each estimator.
```python
sparse_feature_a = sparse_column_with_keys(
column_name="sparse_feature_a", keys=["AB", "CD", ...])
embedding_feature_a = embedding_column(
sparse_id_column=sparse_feature_a, dimension=3, combiner="sum")
sparse_feature_b = sparse_column_with_hash_bucket(
column_name="sparse_feature_b", hash_bucket_size=1000)
embedding_feature_b = embedding_column(
sparse_id_column=sparse_feature_b, dimension=16, combiner="sum")
crossed_feature_a_x_b = crossed_column(
columns=[sparse_feature_a, sparse_feature_b], hash_bucket_size=10000)
real_feature = real_valued_column("real_feature")
real_feature_buckets = bucketized_column(
source_column=real_feature,
boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
```
### Create the pre-canned estimator
DNNClassifier, DNNRegressor, and DNNLinearCombinedClassifier are all pretty
similar to each other in how you use them. You can easily plug in an
optimizer and/or regularization to those estimators.
#### DNNClassifier
A classifier for TensorFlow DNN models.
```python
my_features = [embedding_feature_a, embedding_feature_b]
estimator = DNNClassifier(
feature_columns=my_features,
hidden_units=[1024, 512, 256],
optimizer=tf.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
```
#### DNNRegressor
A regressor for TensorFlow DNN models.
```python
my_features = [embedding_feature_a, embedding_feature_b]
estimator = DNNRegressor(
feature_columns=my_features,
hidden_units=[1024, 512, 256])
# Or estimator using the ProximalAdagradOptimizer optimizer with
# regularization.
estimator = DNNRegressor(
feature_columns=my_features,
hidden_units=[1024, 512, 256],
optimizer=tf.train.ProximalAdagradOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
```
#### DNNLinearCombinedClassifier
A classifier for TensorFlow Linear and DNN joined training models.
* Wide and deep model
* Multi class (2 by default)
```python
my_linear_features = [crossed_feature_a_x_b]
my_deep_features = [embedding_feature_a, embedding_feature_b]
estimator = DNNLinearCombinedClassifier(
# Common settings
n_classes=n_classes,
weight_column_name=weight_column_name,
# Wide settings
linear_feature_columns=my_linear_features,
linear_optimizer=tf.train.FtrlOptimizer(...),
# Deep settings
dnn_feature_columns=my_deep_features,
dnn_hidden_units=[1000, 500, 100],
dnn_optimizer=tf.train.AdagradOptimizer(...))
```
#### LinearClassifier
Train a linear model to classify instances into one of multiple possible
classes. When the number of possible classes is 2, this is binary
classification.
```python
my_features = [sparse_feature_b, crossed_feature_a_x_b]
estimator = LinearClassifier(
feature_columns=my_features,
optimizer=tf.train.FtrlOptimizer(
learning_rate=0.1,
l1_regularization_strength=0.001
))
```
#### LinearRegressor
Train a linear regression model to predict a label value given observations of
feature values.
```python
my_features = [sparse_feature_b, crossed_feature_a_x_b]
estimator = LinearRegressor(
feature_columns=my_features)
```
### LogisticRegressor
Logistic regression estimator for binary classification.
```python
# See tf.contrib.learn.Estimator(...) for details on model_fn structure
def my_model_fn(...):
pass
estimator = LogisticRegressor(model_fn=my_model_fn)
# Input builders
def input_fn_train():
pass
estimator.fit(input_fn=input_fn_train)
estimator.predict(x=x)
```
#### SVM - Support Vector Machine
Support Vector Machine (SVM) model for binary classification.
Currently only linear SVMs are supported.
```python
my_features = [real_feature, sparse_feature_a]
estimator = SVM(
example_id_column='example_id',
feature_columns=my_features,
l2_regularization=10.0)
```
#### TensorForestEstimator
Supports regression and binary classification.
```python
params = tf.contrib.tensor_forest.python.tensor_forest.ForestHParams(
num_classes=2, num_features=40, num_trees=10, max_nodes=1000)
# Estimator using the default graph builder.
estimator = TensorForestEstimator(params, model_dir=model_dir)
# Or estimator using TrainingLossForest as the graph builder.
estimator = TensorForestEstimator(
params, graph_builder_class=tensor_forest.TrainingLossForest,
model_dir=model_dir)
# Input builders
def input_fn_train(): # returns x, y
...
def input_fn_eval(): # returns x, y
...
estimator.fit(input_fn=input_fn_train)
estimator.evaluate(input_fn=input_fn_eval)
estimator.predict(x=x)
```
### Use the estimator
There are two main functions for using estimators: one for training and one
for evaluation.
You can specify different data sources for each one in order to use different
datasets for train and eval.
```python
# Input builders
def input_fn_train(): # returns x, Y
...
estimator.fit(input_fn=input_fn_train)
def input_fn_eval(): # returns x, Y
...
estimator.evaluate(input_fn=input_fn_eval)
estimator.predict(x=x)
```
## Creating Custom Estimator
To create a custom `Estimator`, provide a function to `Estimator`'s
constructor that builds your model (`model_fn`, below):
```python
estimator = tf.contrib.learn.Estimator(
model_fn=model_fn,
model_dir=model_dir) # Where the model's data (e.g., checkpoints)
# are saved.
```
Here is a skeleton of this function, with descriptions of its arguments and
return values in the accompanying tables:
```python
def model_fn(features, targets, mode, params):
# Logic to do the following:
# 1. Configure the model via TensorFlow operations
# 2. Define the loss function for training/evaluation
# 3. Define the training operation/optimizer
# 4. Generate predictions
return predictions, loss, train_op
```
You may use `mode` and check against
`tf.contrib.learn.ModeKeys.{TRAIN, EVAL, INFER}` to parameterize `model_fn`.
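For example, here is a minimal hedged sketch of such a branch; the toy linear
model, the `features["x"]` key, and the squared-error loss are illustrative
placeholders rather than part of any canned estimator:
```python
def my_model_fn(features, targets, mode, params):
  # Toy linear model; "x" is an assumed feature key used only for illustration.
  w = tf.Variable(0.0, name="weight")
  predictions = w * features["x"]
  loss, train_op = None, None
  if mode in (tf.contrib.learn.ModeKeys.TRAIN, tf.contrib.learn.ModeKeys.EVAL):
    # Targets are only available for training and evaluation, not inference.
    loss = tf.reduce_mean(tf.square(predictions - targets))
  if mode == tf.contrib.learn.ModeKeys.TRAIN:
    optimizer = tf.train.GradientDescentOptimizer(params["learning_rate"])
    train_op = optimizer.minimize(
        loss, global_step=tf.contrib.framework.get_global_step())
  return predictions, loss, train_op
estimator = tf.contrib.learn.Estimator(
    model_fn=my_model_fn, params={"learning_rate": 0.01})
```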
In the Further Reading section below, there is an end-to-end TensorFlow
tutorial for building a custom estimator.
## Additional Estimators
There is an additional estimator under
`tensorflow.contrib.factorization.python.ops`:
* Gaussian mixture model (GMM) clustering
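A heavily hedged construction sketch follows; the import path and the
`num_clusters` argument are assumptions here, so check the
`tensorflow.contrib.factorization` documentation for the exact signature:
```python
# Assumed import path and constructor argument (verify against the docs).
from tensorflow.contrib.factorization.python.ops import gmm as gmm_lib
gmm_estimator = gmm_lib.GMM(num_clusters=3, model_dir=model_dir)
# fit() follows the usual Estimator interface shown above.
gmm_estimator.fit(input_fn=input_fn_train, steps=100)
```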
## Further reading
For further reading, there are several tutorials with relevant topics,
including:
* [Overview of linear models](../../../tutorials/linear/overview.md)
* [Linear model tutorial](../../../tutorials/wide/index.md)
* [Wide and deep learning tutorial](../../../tutorials/wide_and_deep/index.md)
* [Custom estimator tutorial](../../../tutorials/estimators/index.md)
* [Building input functions](../../../tutorials/input_fn/index.md)
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.learn.python.learn.estimators._sklearn import NotFittedError
from tensorflow.contrib.learn.python.learn.estimators.classifier import Classifier
from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNClassifier
from tensorflow.contrib.learn.python.learn.estimators.dnn import DNNRegressor
from tensorflow.contrib.learn.python.learn.estimators.dnn_linear_combined import DNNLinearCombinedClassifier
from tensorflow.contrib.learn.python.learn.estimators.dnn_linear_combined import DNNLinearCombinedRegressor
from tensorflow.contrib.learn.python.learn.estimators.estimator import BaseEstimator
from tensorflow.contrib.learn.python.learn.estimators.estimator import Estimator
from tensorflow.contrib.learn.python.learn.estimators.estimator import infer_real_valued_columns_from_input
from tensorflow.contrib.learn.python.learn.estimators.estimator import infer_real_valued_columns_from_input_fn
from tensorflow.contrib.learn.python.learn.estimators.estimator import SKCompat
from tensorflow.contrib.learn.python.learn.estimators.kmeans import KMeansClustering
from tensorflow.contrib.learn.python.learn.estimators.linear import LinearClassifier
from tensorflow.contrib.learn.python.learn.estimators.linear import LinearRegressor
from tensorflow.contrib.learn.python.learn.estimators.logistic_regressor import LogisticRegressor
from tensorflow.contrib.learn.python.learn.estimators.metric_key import MetricKey
from tensorflow.contrib.learn.python.learn.estimators.model_fn import ModeKeys
from tensorflow.contrib.learn.python.learn.estimators.prediction_key import PredictionKey
from tensorflow.contrib.learn.python.learn.estimators.random_forest import TensorForestEstimator
from tensorflow.contrib.learn.python.learn.estimators.random_forest import TensorForestLossHook
from tensorflow.contrib.learn.python.learn.estimators.run_config import ClusterConfig
from tensorflow.contrib.learn.python.learn.estimators.run_config import Environment
from tensorflow.contrib.learn.python.learn.estimators.run_config import RunConfig
from tensorflow.contrib.learn.python.learn.estimators.run_config import TaskType
from tensorflow.contrib.learn.python.learn.estimators.svm import SVM
| apache-2.0 |
runt18/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/backends/backend_wx.py | 1 | 77246 | from __future__ import division
"""
backend_wx.py
A wxPython backend for matplotlib, based (very heavily) on
backend_template.py and backend_gtk.py
Author: Jeremy O'Donoghue ([email protected])
Derived from original copyright work by John Hunter
([email protected])
Copyright (C) Jeremy O'Donoghue & John Hunter, 2003-4
License: This work is licensed under a PSF compatible license. A copy
should be included with this source code.
"""
"""
KNOWN BUGS -
- Mousewheel (on Windows) only works after menu button has been pressed
at least once
- Mousewheel on Linux (wxGTK linked against GTK 1.2) does not work at all
- Vertical text renders horizontally if you use a non TrueType font
on Windows. This is a known wxPython issue. Work-around is to ensure
that you use a TrueType font.
- Pcolor demo puts chart slightly outside bounding box (approx 1-2 pixels
to the bottom left)
- Outputting to bitmap more than 300dpi results in some text being incorrectly
scaled. Seems to be a wxPython bug on Windows or font point sizes > 60, as
font size is correctly calculated.
- Performance poorer than for previous direct rendering version
- TIFF output not supported on wxGTK. This is a wxGTK issue
- Text is not anti-aliased on wxGTK. This is probably a platform
configuration issue.
- If a second call is made to show(), no figure is generated (#866965)
Not implemented:
- Printing
Fixed this release:
- Bug #866967: Interactive operation issues fixed [JDH]
- Bug #866969: Dynamic update does not function with backend_wx [JOD]
Examples which work on this release:
---------------------------------------------------------------
| Windows 2000 | Linux |
| wxPython 2.3.3 | wxPython 2.4.2.4 |
--------------------------------------------------------------|
- alignment_test.py | TBE | OK |
- arctest.py | TBE | (3) |
- axes_demo.py | OK | OK |
- axes_props.py | OK | OK |
- bar_stacked.py | TBE | OK |
- barchart_demo.py | OK | OK |
- color_demo.py | OK | OK |
- csd_demo.py | OK | OK |
- dynamic_demo.py | N/A | N/A |
- dynamic_demo_wx.py | TBE | OK |
- embedding_in_gtk.py | N/A | N/A |
- embedding_in_wx.py | OK | OK |
- errorbar_demo.py | OK | OK |
- figtext.py | OK | OK |
- histogram_demo.py | OK | OK |
- interactive.py | N/A (2) | N/A (2) |
- interactive2.py | N/A (2) | N/A (2) |
- legend_demo.py | OK | OK |
- legend_demo2.py | OK | OK |
- line_styles.py | OK | OK |
- log_demo.py | OK | OK |
- logo.py | OK | OK |
- mpl_with_glade.py | N/A (2) | N/A (2) |
- mri_demo.py | OK | OK |
- mri_demo_with_eeg.py | OK | OK |
- multiple_figs_demo.py | OK | OK |
- pcolor_demo.py | OK | OK |
- psd_demo.py | OK | OK |
- scatter_demo.py | OK | OK |
- scatter_demo2.py | OK | OK |
- simple_plot.py | OK | OK |
- stock_demo.py | OK | OK |
- subplot_demo.py | OK | OK |
- system_monitor.py | N/A (2) | N/A (2) |
- text_handles.py | OK | OK |
- text_themes.py | OK | OK |
- vline_demo.py | OK | OK |
---------------------------------------------------------------
 (2) - Script uses GTK-specific features - cannot run,
       but a wxPython equivalent should be written.
(3) - Clipping seems to be broken.
"""
cvs_id = '$Id: backend_wx.py 6484 2008-12-03 18:38:03Z jdh2358 $'
import sys, os, os.path, math, StringIO, weakref, warnings
import numpy as npy
# Debugging settings here...
# Debug level set here. If the debug level is less than 5, information
# messages (progressively more info for lower value) are printed. In addition,
# traceback is performed, and pdb activated, for all uncaught exceptions in
# this case
_DEBUG = 5
if _DEBUG < 5:
import traceback, pdb
_DEBUG_lvls = {1 : 'Low ', 2 : 'Med ', 3 : 'High', 4 : 'Error' }
try:
import wx
backend_version = wx.VERSION_STRING
except:
raise ImportError("Matplotlib backend_wx requires wxPython be installed")
#!!! this is the call that is causing the exception swallowing !!!
#wx.InitAllImageHandlers()
def DEBUG_MSG(string, lvl=3, o=None):
if lvl >= _DEBUG:
cls = o.__class__
# Jeremy, often times the commented line won't print but the
# one below does. I think WX is redefining stderr, damned
# beast
#print >>sys.stderr, "%s- %s in %s" % (_DEBUG_lvls[lvl], string, cls)
print "{0!s}- {1!s} in {2!s}".format(_DEBUG_lvls[lvl], string, cls)
def debug_on_error(type, value, tb):
"""Code due to Thomas Heller - published in Python Cookbook (O'Reilley)"""
    traceback.print_exception(type, value, tb)
print
pdb.pm() # jdh uncomment
class fake_stderr:
"""Wx does strange things with stderr, as it makes the assumption that there
is probably no console. This redirects stderr to the console, since we know
that there is one!"""
def write(self, msg):
print "Stderr: {0!s}\n\r".format(msg)
#if _DEBUG < 5:
# sys.excepthook = debug_on_error
# WxLogger =wx.LogStderr()
# sys.stderr = fake_stderr
# Event binding code changed after version 2.5
if wx.VERSION_STRING >= '2.5':
def bind(actor,event,action,**kw):
actor.Bind(event,action,**kw)
else:
def bind(actor,event,action,id=None):
if id is not None:
event(actor, id, action)
else:
event(actor,action)
import matplotlib
from matplotlib import verbose
from matplotlib.backend_bases import RendererBase, GraphicsContextBase,\
FigureCanvasBase, FigureManagerBase, NavigationToolbar2, \
cursors
from matplotlib._pylab_helpers import Gcf
from matplotlib.artist import Artist
from matplotlib.cbook import exception_to_str, is_string_like, is_writable_file_like
from matplotlib.figure import Figure
from matplotlib.path import Path
from matplotlib.text import _process_text_args, Text
from matplotlib.transforms import Affine2D
from matplotlib.widgets import SubplotTool
from matplotlib import rcParams
# the True dots per inch on the screen; should be display dependent
# see http://groups.google.com/groups?q=screen+dpi+x11&hl=en&lr=&ie=UTF-8&oe=UTF-8&safe=off&selm=7077.26e81ad5%40swift.cs.tcd.ie&rnum=5 for some info about screen dpi
PIXELS_PER_INCH = 75
# Delay time for idle checks
IDLE_DELAY = 5
def error_msg_wx(msg, parent=None):
"""
Signal an error condition -- in a GUI, popup a error dialog
"""
dialog =wx.MessageDialog(parent = parent,
message = msg,
caption = 'Matplotlib backend_wx error',
style=wx.OK | wx.CENTRE)
dialog.ShowModal()
dialog.Destroy()
return None
def raise_msg_to_str(msg):
"""msg is a return arg from a raise. Join with new lines"""
if not is_string_like(msg):
msg = '\n'.join(map(str, msg))
return msg
class RendererWx(RendererBase):
"""
The renderer handles all the drawing primitives using a graphics
context instance that controls the colors/styles. It acts as the
'renderer' instance used by many classes in the hierarchy.
"""
#In wxPython, drawing is performed on a wxDC instance, which will
    #generally be mapped to the client area of the window displaying
#the plot. Under wxPython, the wxDC instance has a wx.Pen which
#describes the colour and weight of any lines drawn, and a wxBrush
#which describes the fill colour of any closed polygon.
fontweights = {
100 : wx.LIGHT,
200 : wx.LIGHT,
300 : wx.LIGHT,
400 : wx.NORMAL,
500 : wx.NORMAL,
600 : wx.NORMAL,
700 : wx.BOLD,
800 : wx.BOLD,
900 : wx.BOLD,
'ultralight' : wx.LIGHT,
'light' : wx.LIGHT,
'normal' : wx.NORMAL,
'medium' : wx.NORMAL,
'semibold' : wx.NORMAL,
'bold' : wx.BOLD,
'heavy' : wx.BOLD,
'ultrabold' : wx.BOLD,
'black' : wx.BOLD
}
fontangles = {
'italic' : wx.ITALIC,
'normal' : wx.NORMAL,
'oblique' : wx.SLANT }
# wxPython allows for portable font styles, choosing them appropriately
# for the target platform. Map some standard font names to the portable
# styles
    # QUESTION: Would it be wise to agree on standard fontnames across all backends?
fontnames = { 'Sans' : wx.SWISS,
'Roman' : wx.ROMAN,
'Script' : wx.SCRIPT,
'Decorative' : wx.DECORATIVE,
'Modern' : wx.MODERN,
'Courier' : wx.MODERN,
'courier' : wx.MODERN }
def __init__(self, bitmap, dpi):
"""
Initialise a wxWindows renderer instance.
"""
DEBUG_MSG("__init__()", 1, self)
if wx.VERSION_STRING < "2.8":
raise RuntimeError("matplotlib no longer supports wxPython < 2.8 for the Wx backend.\nYou may, however, use the WxAgg backend.")
self.width = bitmap.GetWidth()
self.height = bitmap.GetHeight()
self.bitmap = bitmap
self.fontd = {}
self.dpi = dpi
self.gc = None
def flipy(self):
return True
def offset_text_height(self):
return True
def get_text_width_height_descent(self, s, prop, ismath):
"""
get the width and height in display coords of the string s
with FontProperty prop
"""
#return 1, 1
if ismath: s = self.strip_math(s)
if self.gc is None:
gc = self.new_gc()
else:
gc = self.gc
gfx_ctx = gc.gfx_ctx
font = self.get_wx_font(s, prop)
gfx_ctx.SetFont(font, wx.BLACK)
w, h, descent, leading = gfx_ctx.GetFullTextExtent(s)
return w, h, descent
def get_canvas_width_height(self):
'return the canvas width and height in display coords'
return self.width, self.height
def handle_clip_rectangle(self, gc):
new_bounds = gc.get_clip_rectangle()
if new_bounds is not None:
new_bounds = new_bounds.bounds
gfx_ctx = gc.gfx_ctx
if gfx_ctx._lastcliprect != new_bounds:
gfx_ctx._lastcliprect = new_bounds
if new_bounds is None:
gfx_ctx.ResetClip()
else:
gfx_ctx.Clip(new_bounds[0], self.height - new_bounds[1] - new_bounds[3],
new_bounds[2], new_bounds[3])
#@staticmethod
def convert_path(gfx_ctx, tpath):
wxpath = gfx_ctx.CreatePath()
for points, code in tpath.iter_segments():
if code == Path.MOVETO:
wxpath.MoveToPoint(*points)
elif code == Path.LINETO:
wxpath.AddLineToPoint(*points)
elif code == Path.CURVE3:
wxpath.AddQuadCurveToPoint(*points)
elif code == Path.CURVE4:
wxpath.AddCurveToPoint(*points)
elif code == Path.CLOSEPOLY:
wxpath.CloseSubpath()
return wxpath
convert_path = staticmethod(convert_path)
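# Illustrative sketch of convert_path() above (the path is hypothetical): a
# matplotlib Path with vertices [(0, 0), (1, 0), (1, 1)] and codes
# [MOVETO, LINETO, LINETO] is translated into
#   wxpath.MoveToPoint(0, 0); wxpath.AddLineToPoint(1, 0); wxpath.AddLineToPoint(1, 1)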
def draw_path(self, gc, path, transform, rgbFace=None):
gc.select()
self.handle_clip_rectangle(gc)
gfx_ctx = gc.gfx_ctx
transform = transform + Affine2D().scale(1.0, -1.0).translate(0.0, self.height)
tpath = transform.transform_path(path)
wxpath = self.convert_path(gfx_ctx, tpath)
if rgbFace is not None:
gfx_ctx.SetBrush(wx.Brush(gc.get_wxcolour(rgbFace)))
gfx_ctx.DrawPath(wxpath)
else:
gfx_ctx.StrokePath(wxpath)
gc.unselect()
def draw_image(self, x, y, im, bbox, clippath=None, clippath_trans=None):
if bbox is not None:
l,b,w,h = bbox.bounds
else:
l=0
b=0
w=self.width
h=self.height
rows, cols, image_str = im.as_rgba_str()
image_array = npy.fromstring(image_str, npy.uint8)
image_array.shape = rows, cols, 4
bitmap = wx.BitmapFromBufferRGBA(cols,rows,image_array)
gc = self.get_gc()
gc.select()
gc.gfx_ctx.DrawBitmap(bitmap,int(l),int(b),int(w),int(h))
gc.unselect()
def draw_text(self, gc, x, y, s, prop, angle, ismath):
"""
Render the matplotlib.text.Text instance
"""
if ismath: s = self.strip_math(s)
DEBUG_MSG("draw_text()", 1, self)
gc.select()
self.handle_clip_rectangle(gc)
gfx_ctx = gc.gfx_ctx
font = self.get_wx_font(s, prop)
color = gc.get_wxcolour(gc.get_rgb())
gfx_ctx.SetFont(font, color)
w, h, d = self.get_text_width_height_descent(s, prop, ismath)
x = int(x)
y = int(y-h)
if angle == 0.0:
gfx_ctx.DrawText(s, x, y)
else:
rads = angle / 180.0 * math.pi
xo = h * math.sin(rads)
yo = h * math.cos(rads)
gfx_ctx.DrawRotatedText(s, x - xo, y - yo, rads)
gc.unselect()
def new_gc(self):
"""
Return an instance of a GraphicsContextWx, and set it as the cached current gc
"""
DEBUG_MSG('new_gc()', 2, self)
self.gc = GraphicsContextWx(self.bitmap, self)
self.gc.select()
self.gc.unselect()
return self.gc
def get_gc(self):
"""
Fetch the locally cached gc.
"""
# This is a dirty hack to allow anything with access to a renderer to
# access the current graphics context
assert self.gc is not None, "gc must be defined"
return self.gc
def get_wx_font(self, s, prop):
"""
Return a wx font. Cache instances in a font dictionary for
efficiency
"""
DEBUG_MSG("get_wx_font()", 1, self)
key = hash(prop)
fontprop = prop
fontname = fontprop.get_name()
font = self.fontd.get(key)
if font is not None:
return font
# Allow use of platform independent and dependent font names
wxFontname = self.fontnames.get(fontname, wx.ROMAN)
wxFacename = '' # Empty => wxPython chooses based on wx_fontname
# Font colour is determined by the active wx.Pen
# TODO: It may be wise to cache font information
size = self.points_to_pixels(fontprop.get_size_in_points())
font =wx.Font(int(size+0.5), # Size
wxFontname, # 'Generic' name
self.fontangles[fontprop.get_style()], # Angle
self.fontweights[fontprop.get_weight()], # Weight
False, # Underline
wxFacename) # Platform font name
# cache the font and gc and return it
self.fontd[key] = font
return font
def points_to_pixels(self, points):
"""
convert point measures to pixels using dpi and the pixels per
inch of the display
"""
return points*(PIXELS_PER_INCH/72.0*self.dpi/72.0)
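# Worked example for points_to_pixels() (the dpi value is illustrative): with the
# hard-coded PIXELS_PER_INCH = 75 and a figure dpi of 80, a 12 pt font maps to
#   12 * (75/72.0 * 80/72.0) ~= 13.9 pixels
# before the int(size + 0.5) rounding applied in get_wx_font().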
class GraphicsContextWx(GraphicsContextBase):
"""
The graphics context provides the color, line styles, etc...
This class stores a reference to a wxMemoryDC, and a
wxGraphicsContext that draws to it. Creating a wxGraphicsContext
seems to be fairly heavy, so these objects are cached based on the
bitmap object that is passed in.
The base GraphicsContext stores colors as an RGB tuple on the unit
interval, e.g. (0.5, 0.0, 1.0), whereas wxPython expects 8-bit integer
channels in the range 0-255 (see get_wxcolour below). Since wxPython
colour management is rather simple, a separate colour manager class has
not been implemented.
"""
_capd = { 'butt': wx.CAP_BUTT,
'projecting': wx.CAP_PROJECTING,
'round': wx.CAP_ROUND }
_joind = { 'bevel': wx.JOIN_BEVEL,
'miter': wx.JOIN_MITER,
'round': wx.JOIN_ROUND }
_dashd_wx = { 'solid': wx.SOLID,
'dashed': wx.SHORT_DASH,
'dashdot': wx.DOT_DASH,
'dotted': wx.DOT }
_cache = weakref.WeakKeyDictionary()
def __init__(self, bitmap, renderer):
GraphicsContextBase.__init__(self)
#assert self.Ok(), "wxMemoryDC not OK to use"
DEBUG_MSG("__init__()", 1, self)
dc, gfx_ctx = self._cache.get(bitmap, (None, None))
if dc is None:
dc = wx.MemoryDC()
dc.SelectObject(bitmap)
gfx_ctx = wx.GraphicsContext.Create(dc)
gfx_ctx._lastcliprect = None
self._cache[bitmap] = dc, gfx_ctx
self.bitmap = bitmap
self.dc = dc
self.gfx_ctx = gfx_ctx
self._pen = wx.Pen('BLACK', 1, wx.SOLID)
gfx_ctx.SetPen(self._pen)
self._style = wx.SOLID
self.renderer = renderer
def select(self):
"""
Select the current bitmap into this wxDC instance
"""
if sys.platform=='win32':
self.dc.SelectObject(self.bitmap)
self.IsSelected = True
def unselect(self):
"""
Select a null bitmap into this wxDC instance
"""
if sys.platform=='win32':
self.dc.SelectObject(wx.NullBitmap)
self.IsSelected = False
def set_foreground(self, fg, isRGB=None):
"""
Set the foreground color. fg can be a matlab format string, a
html hex color string, an rgb unit tuple, or a float between 0
and 1. In the latter case, grayscale is used.
"""
# Implementation note: wxPython has a separate concept of pen and
# brush - the brush fills any outline trace left by the pen.
# Here we set both to the same colour - if a figure is not to be
# filled, the renderer will set the brush to be transparent
# Same goes for text foreground...
DEBUG_MSG("set_foreground()", 1, self)
self.select()
GraphicsContextBase.set_foreground(self, fg, isRGB)
self._pen.SetColour(self.get_wxcolour(self.get_rgb()))
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def set_graylevel(self, frac):
"""
Set the foreground color. fg can be a matlab format string, a
html hex color string, an rgb unit tuple, or a float between 0
and 1. In the latter case, grayscale is used.
"""
DEBUG_MSG("set_graylevel()", 1, self)
self.select()
GraphicsContextBase.set_graylevel(self, frac)
self._pen.SetColour(self.get_wxcolour(self.get_rgb()))
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def set_linewidth(self, w):
"""
Set the line width.
"""
DEBUG_MSG("set_linewidth()", 1, self)
self.select()
if w>0 and w<1: w = 1
GraphicsContextBase.set_linewidth(self, w)
lw = int(self.renderer.points_to_pixels(self._linewidth))
if lw==0: lw = 1
self._pen.SetWidth(lw)
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def set_capstyle(self, cs):
"""
Set the capstyle as a string in ('butt', 'round', 'projecting')
"""
DEBUG_MSG("set_capstyle()", 1, self)
self.select()
GraphicsContextBase.set_capstyle(self, cs)
self._pen.SetCap(GraphicsContextWx._capd[self._capstyle])
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def set_joinstyle(self, js):
"""
Set the join style to be one of ('miter', 'round', 'bevel')
"""
DEBUG_MSG("set_joinstyle()", 1, self)
self.select()
GraphicsContextBase.set_joinstyle(self, js)
self._pen.SetJoin(GraphicsContextWx._joind[self._joinstyle])
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def set_linestyle(self, ls):
"""
Set the line style to be one of
"""
DEBUG_MSG("set_linestyle()", 1, self)
self.select()
GraphicsContextBase.set_linestyle(self, ls)
try:
self._style = GraphicsContextWx._dashd_wx[ls]
except KeyError:
self._style = wx.LONG_DASH  # Style not used elsewhere...
# On MS Windows platform, only line width of 1 allowed for dash lines
if wx.Platform == '__WXMSW__':
self.set_linewidth(1)
self._pen.SetStyle(self._style)
self.gfx_ctx.SetPen(self._pen)
self.unselect()
def get_wxcolour(self, color):
"""return a wx.Colour from RGB format"""
DEBUG_MSG("get_wx_color()", 1, self)
if len(color) == 3:
r, g, b = color
r *= 255
g *= 255
b *= 255
return wx.Colour(red=int(r), green=int(g), blue=int(b))
else:
r, g, b, a = color
r *= 255
g *= 255
b *= 255
a *= 255
return wx.Colour(red=int(r), green=int(g), blue=int(b), alpha=int(a))
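# Illustrative conversions performed by get_wxcolour() above (values chosen for
# example only):
#   (0.5, 0.0, 1.0)      -> wx.Colour(red=127, green=0, blue=255)
#   (1.0, 1.0, 1.0, 0.5) -> wx.Colour(red=255, green=255, blue=255, alpha=127)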
class FigureCanvasWx(FigureCanvasBase, wx.Panel):
"""
The FigureCanvas contains the figure and does event handling.
In the wxPython backend, it is derived from wxPanel, and (usually) lives
inside a frame instantiated by a FigureManagerWx. The parent window probably
implements a wx.Sizer to control the displayed control size - but we give a
hint as to our preferred minimum size.
"""
keyvald = {
wx.WXK_CONTROL : 'control',
wx.WXK_SHIFT : 'shift',
wx.WXK_ALT : 'alt',
wx.WXK_LEFT : 'left',
wx.WXK_UP : 'up',
wx.WXK_RIGHT : 'right',
wx.WXK_DOWN : 'down',
wx.WXK_ESCAPE : 'escape',
wx.WXK_F1 : 'f1',
wx.WXK_F2 : 'f2',
wx.WXK_F3 : 'f3',
wx.WXK_F4 : 'f4',
wx.WXK_F5 : 'f5',
wx.WXK_F6 : 'f6',
wx.WXK_F7 : 'f7',
wx.WXK_F8 : 'f8',
wx.WXK_F9 : 'f9',
wx.WXK_F10 : 'f10',
wx.WXK_F11 : 'f11',
wx.WXK_F12 : 'f12',
wx.WXK_SCROLL : 'scroll_lock',
wx.WXK_PAUSE : 'break',
wx.WXK_BACK : 'backspace',
wx.WXK_RETURN : 'enter',
wx.WXK_INSERT : 'insert',
wx.WXK_DELETE : 'delete',
wx.WXK_HOME : 'home',
wx.WXK_END : 'end',
wx.WXK_PRIOR : 'pageup',
wx.WXK_NEXT : 'pagedown',
wx.WXK_PAGEUP : 'pageup',
wx.WXK_PAGEDOWN : 'pagedown',
wx.WXK_NUMPAD0 : '0',
wx.WXK_NUMPAD1 : '1',
wx.WXK_NUMPAD2 : '2',
wx.WXK_NUMPAD3 : '3',
wx.WXK_NUMPAD4 : '4',
wx.WXK_NUMPAD5 : '5',
wx.WXK_NUMPAD6 : '6',
wx.WXK_NUMPAD7 : '7',
wx.WXK_NUMPAD8 : '8',
wx.WXK_NUMPAD9 : '9',
wx.WXK_NUMPAD_ADD : '+',
wx.WXK_NUMPAD_SUBTRACT : '-',
wx.WXK_NUMPAD_MULTIPLY : '*',
wx.WXK_NUMPAD_DIVIDE : '/',
wx.WXK_NUMPAD_DECIMAL : 'dec',
wx.WXK_NUMPAD_ENTER : 'enter',
wx.WXK_NUMPAD_UP : 'up',
wx.WXK_NUMPAD_RIGHT : 'right',
wx.WXK_NUMPAD_DOWN : 'down',
wx.WXK_NUMPAD_LEFT : 'left',
wx.WXK_NUMPAD_PRIOR : 'pageup',
wx.WXK_NUMPAD_NEXT : 'pagedown',
wx.WXK_NUMPAD_PAGEUP : 'pageup',
wx.WXK_NUMPAD_PAGEDOWN : 'pagedown',
wx.WXK_NUMPAD_HOME : 'home',
wx.WXK_NUMPAD_END : 'end',
wx.WXK_NUMPAD_INSERT : 'insert',
wx.WXK_NUMPAD_DELETE : 'delete',
}
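# Illustrative key translations (keys chosen as examples): _get_key() below first
# consults the table above, so wx.WXK_F1 -> 'f1' and wx.WXK_NUMPAD5 -> '5';
# printable key codes < 256 fall through to chr(keyval) and are lower-cased,
# e.g. ord('A') == 65 -> 'a'.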
def __init__(self, parent, id, figure):
"""
Initialise a FigureCanvasWx instance.
- Initialise the FigureCanvasBase and wxPanel parents.
- Set event handlers for:
EVT_SIZE (Resize event)
EVT_PAINT (Paint event)
"""
FigureCanvasBase.__init__(self, figure)
# Set preferred window size hint - helps the sizer (if one is
# connected)
l,b,w,h = figure.bbox.bounds
w = int(math.ceil(w))
h = int(math.ceil(h))
wx.Panel.__init__(self, parent, id, size=wx.Size(w, h))
def do_nothing(*args, **kwargs):
warnings.warn('could not find a setinitialsize function for backend_wx; please report your wxpython version={0!s} to the matplotlib developers list'.format(backend_version))
pass
# try to find the set size func across wx versions
try:
getattr(self, 'SetInitialSize')
except AttributeError:
self.SetInitialSize = getattr(self, 'SetBestFittingSize', do_nothing)
if not hasattr(self,'IsShownOnScreen'):
self.IsShownOnScreen = getattr(self, 'IsVisible', lambda *args: True)
# Create the drawing bitmap
self.bitmap =wx.EmptyBitmap(w, h)
DEBUG_MSG("__init__() - bitmap w:{0:d} h:{1:d}".format(w, h), 2, self)
# TODO: Add support for 'point' inspection and plot navigation.
self._isDrawn = False
bind(self, wx.EVT_SIZE, self._onSize)
bind(self, wx.EVT_PAINT, self._onPaint)
bind(self, wx.EVT_ERASE_BACKGROUND, self._onEraseBackground)
bind(self, wx.EVT_KEY_DOWN, self._onKeyDown)
bind(self, wx.EVT_KEY_UP, self._onKeyUp)
bind(self, wx.EVT_RIGHT_DOWN, self._onRightButtonDown)
bind(self, wx.EVT_RIGHT_DCLICK, self._onRightButtonDown)
bind(self, wx.EVT_RIGHT_UP, self._onRightButtonUp)
bind(self, wx.EVT_MOUSEWHEEL, self._onMouseWheel)
bind(self, wx.EVT_LEFT_DOWN, self._onLeftButtonDown)
bind(self, wx.EVT_LEFT_DCLICK, self._onLeftButtonDown)
bind(self, wx.EVT_LEFT_UP, self._onLeftButtonUp)
bind(self, wx.EVT_MOTION, self._onMotion)
bind(self, wx.EVT_LEAVE_WINDOW, self._onLeave)
bind(self, wx.EVT_ENTER_WINDOW, self._onEnter)
bind(self, wx.EVT_IDLE, self._onIdle)
self.SetBackgroundStyle(wx.BG_STYLE_CUSTOM)
self.macros = {} # dict from wx id to seq of macros
self.Printer_Init()
def Destroy(self, *args, **kwargs):
wx.Panel.Destroy(self, *args, **kwargs)
def Copy_to_Clipboard(self, event=None):
"copy bitmap of canvas to system clipboard"
bmp_obj = wx.BitmapDataObject()
bmp_obj.SetBitmap(self.bitmap)
wx.TheClipboard.Open()
wx.TheClipboard.SetData(bmp_obj)
wx.TheClipboard.Close()
def Printer_Init(self):
"""initialize printer settings using wx methods"""
self.printerData = wx.PrintData()
self.printerData.SetPaperId(wx.PAPER_LETTER)
self.printerData.SetPrintMode(wx.PRINT_MODE_PRINTER)
self.printerPageData= wx.PageSetupDialogData()
self.printerPageData.SetMarginBottomRight((25,25))
self.printerPageData.SetMarginTopLeft((25,25))
self.printerPageData.SetPrintData(self.printerData)
self.printer_width = 5.5
self.printer_margin= 0.5
def Printer_Setup(self, event=None):
"""set up figure for printing. The standard wx Printer
Setup Dialog seems to die easily. Therefore, this setup
simply asks for image width and margin for printing. """
dmsg = """Width of output figure in inches.
The current aspect ratio will be kept."""
dlg = wx.Dialog(self, -1, 'Page Setup for Printing' , (-1,-1))
df = dlg.GetFont()
df.SetWeight(wx.NORMAL)
df.SetPointSize(11)
dlg.SetFont(df)
x_wid = wx.TextCtrl(dlg,-1,value="{0:.2f}".format(self.printer_width), size=(70,-1))
x_mrg = wx.TextCtrl(dlg,-1,value="{0:.2f}".format(self.printer_margin),size=(70,-1))
sizerAll = wx.BoxSizer(wx.VERTICAL)
sizerAll.Add(wx.StaticText(dlg,-1,dmsg),
0, wx.ALL | wx.EXPAND, 5)
sizer = wx.FlexGridSizer(0,3)
sizerAll.Add(sizer, 0, wx.ALL | wx.EXPAND, 5)
sizer.Add(wx.StaticText(dlg,-1,'Figure Width'),
1, wx.ALIGN_LEFT|wx.ALL, 2)
sizer.Add(x_wid,
1, wx.ALIGN_LEFT|wx.ALL, 2)
sizer.Add(wx.StaticText(dlg,-1,'in'),
1, wx.ALIGN_LEFT|wx.ALL, 2)
sizer.Add(wx.StaticText(dlg,-1,'Margin'),
1, wx.ALIGN_LEFT|wx.ALL, 2)
sizer.Add(x_mrg,
1, wx.ALIGN_LEFT|wx.ALL, 2)
sizer.Add(wx.StaticText(dlg,-1,'in'),
1, wx.ALIGN_LEFT|wx.ALL, 2)
btn = wx.Button(dlg,wx.ID_OK, " OK ")
btn.SetDefault()
sizer.Add(btn, 1, wx.ALIGN_LEFT, 5)
btn = wx.Button(dlg,wx.ID_CANCEL, " CANCEL ")
sizer.Add(btn, 1, wx.ALIGN_LEFT, 5)
dlg.SetSizer(sizerAll)
dlg.SetAutoLayout(True)
sizerAll.Fit(dlg)
if dlg.ShowModal() == wx.ID_OK:
try:
self.printer_width = float(x_wid.GetValue())
self.printer_margin = float(x_mrg.GetValue())
except ValueError:
# ignore non-numeric input and keep the previous width/margin values
pass
if ((self.printer_width + self.printer_margin) > 7.5):
self.printerData.SetOrientation(wx.LANDSCAPE)
else:
self.printerData.SetOrientation(wx.PORTRAIT)
dlg.Destroy()
return
def Printer_Setup2(self, event=None):
"""set up figure for printing. Using the standard wx Printer
Setup Dialog. """
if hasattr(self, 'printerData'):
data = wx.PageSetupDialogData()
data.SetPrintData(self.printerData)
else:
data = wx.PageSetupDialogData()
data.SetMarginTopLeft( (15, 15) )
data.SetMarginBottomRight( (15, 15) )
dlg = wx.PageSetupDialog(self, data)
if dlg.ShowModal() == wx.ID_OK:
data = dlg.GetPageSetupData()
tl = data.GetMarginTopLeft()
br = data.GetMarginBottomRight()
self.printerData = wx.PrintData(data.GetPrintData())
dlg.Destroy()
def Printer_Preview(self, event=None):
""" generate Print Preview with wx Print mechanism"""
po1 = PrintoutWx(self, width=self.printer_width,
margin=self.printer_margin)
po2 = PrintoutWx(self, width=self.printer_width,
margin=self.printer_margin)
self.preview = wx.PrintPreview(po1,po2,self.printerData)
if not self.preview.Ok(): print "error with preview"
self.preview.SetZoom(50)
frameInst= self
while not isinstance(frameInst, wx.Frame):
frameInst= frameInst.GetParent()
frame = wx.PreviewFrame(self.preview, frameInst, "Preview")
frame.Initialize()
frame.SetPosition(self.GetPosition())
frame.SetSize((850,650))
frame.Centre(wx.BOTH)
frame.Show(True)
self.gui_repaint()
def Printer_Print(self, event=None):
""" Print figure using wx Print mechanism"""
pdd = wx.PrintDialogData()
# SetPrintData for wxPython 2.4 compatibility
pdd.SetPrintData(self.printerData)
pdd.SetToPage(1)
printer = wx.Printer(pdd)
printout = PrintoutWx(self, width=int(self.printer_width),
margin=int(self.printer_margin))
print_ok = printer.Print(self, printout, True)
if wx.VERSION_STRING >= '2.5':
if not print_ok and not printer.GetLastError() == wx.PRINTER_CANCELLED:
wx.MessageBox("""There was a problem printing.
Perhaps your current printer is not set correctly?""",
"Printing", wx.OK)
else:
if not print_ok:
wx.MessageBox("""There was a problem printing.
Perhaps your current printer is not set correctly?""",
"Printing", wx.OK)
printout.Destroy()
self.gui_repaint()
def draw_idle(self):
"""
Delay rendering until the GUI is idle.
"""
DEBUG_MSG("draw_idle()", 1, self)
self._isDrawn = False # Force redraw
# Create a timer for handling draw_idle requests
# If there are events pending when the timer is
# complete, reset the timer and continue. The
# alternative approach, binding to wx.EVT_IDLE,
# doesn't behave as nicely.
if hasattr(self,'_idletimer'):
self._idletimer.Restart(IDLE_DELAY)
else:
self._idletimer = wx.FutureCall(IDLE_DELAY,self._onDrawIdle)
# FutureCall is a backwards-compatible alias;
# CallLater became available in 2.7.1.1.
def _onDrawIdle(self, *args, **kwargs):
if wx.GetApp().Pending():
self._idletimer.Restart(IDLE_DELAY, *args, **kwargs)
else:
del self._idletimer
# GUI event or explicit draw call may already
# have caused the draw to take place
if not self._isDrawn:
self.draw(*args, **kwargs)
def draw(self, drawDC=None):
"""
Render the figure using RendererWx instance renderer, or using a
previously defined renderer if none is specified.
"""
DEBUG_MSG("draw()", 1, self)
self.renderer = RendererWx(self.bitmap, self.figure.dpi)
self.figure.draw(self.renderer)
self._isDrawn = True
self.gui_repaint(drawDC=drawDC)
def flush_events(self):
wx.Yield()
def start_event_loop(self, timeout=0):
"""
Start an event loop. This is used to start a blocking event
loop so that interactive functions, such as ginput and
waitforbuttonpress, can wait for events. This should not be
confused with the main GUI event loop, which is always running
and has nothing to do with this.
Call signature::
start_event_loop(self,timeout=0)
This call blocks until a callback function triggers
stop_event_loop() or *timeout* is reached. If *timeout* is
<=0, never timeout.
Raises RuntimeError if event loop is already running.
"""
if hasattr(self, '_event_loop'):
raise RuntimeError("Event loop already running")
id = wx.NewId()
timer = wx.Timer(self, id=id)
if timeout > 0:
timer.Start(timeout*1000, oneShot=True)
bind(self, wx.EVT_TIMER, self.stop_event_loop, id=id)
# Event loop handler for start/stop event loop
self._event_loop = wx.EventLoop()
self._event_loop.Run()
timer.Stop()
def stop_event_loop(self, event=None):
"""
Stop an event loop. This is used to stop a blocking event
loop so that interactive functions, such as ginput and
waitforbuttonpress, can wait for events.
Call signature::
stop_event_loop_default(self)
"""
if hasattr(self,'_event_loop'):
if self._event_loop.IsRunning():
self._event_loop.Exit()
del self._event_loop
def _get_imagesave_wildcards(self):
'return the wildcard string for the filesave dialog'
default_filetype = self.get_default_filetype()
filetypes = self.get_supported_filetypes_grouped()
sorted_filetypes = filetypes.items()
sorted_filetypes.sort()
wildcards = []
extensions = []
filter_index = 0
for i, (name, exts) in enumerate(sorted_filetypes):
ext_list = ';'.join(['*.{0!s}'.format(ext) for ext in exts])
extensions.append(exts[0])
wildcard = '{0!s} ({1!s})|{2!s}'.format(name, ext_list, ext_list)
if default_filetype in exts:
filter_index = i
wildcards.append(wildcard)
wildcards = '|'.join(wildcards)
return wildcards, extensions, filter_index
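# Illustrative result (file types shown are examples from the class-level
# `filetypes` dict defined further below): the wildcard string takes the usual
# wx form, e.g.
#   'JPEG (*.jpeg;*.jpg)|*.jpeg;*.jpg|Portable Network Graphics (*.png)|*.png'
# with `extensions` holding the first extension of each group and `filter_index`
# pointing at the group that contains the default file type.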
def gui_repaint(self, drawDC=None):
"""
Performs update of the displayed image on the GUI canvas, using the
supplied device context. If drawDC is None, a ClientDC will be used to
redraw the image.
"""
DEBUG_MSG("gui_repaint()", 1, self)
if self.IsShownOnScreen():
if drawDC is None:
drawDC=wx.ClientDC(self)
drawDC.BeginDrawing()
drawDC.DrawBitmap(self.bitmap, 0, 0)
drawDC.EndDrawing()
#wx.GetApp().Yield()
else:
pass
filetypes = FigureCanvasBase.filetypes.copy()
filetypes['bmp'] = 'Windows bitmap'
filetypes['jpeg'] = 'JPEG'
filetypes['jpg'] = 'JPEG'
filetypes['pcx'] = 'PCX'
filetypes['png'] = 'Portable Network Graphics'
filetypes['tif'] = 'Tagged Image Format File'
filetypes['tiff'] = 'Tagged Image Format File'
filetypes['xpm'] = 'X pixmap'
def print_figure(self, filename, *args, **kwargs):
# Use pure Agg renderer to draw
FigureCanvasBase.print_figure(self, filename, *args, **kwargs)
# Restore the current view; this is needed because the
# artist contains() methods rely on particular attributes
# of the rendered figure for determining things like
# bounding boxes.
if self._isDrawn:
self.draw()
def print_bmp(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_BMP, *args, **kwargs)
def print_jpeg(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_JPEG, *args, **kwargs)
print_jpg = print_jpeg
def print_pcx(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_PCX, *args, **kwargs)
def print_png(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_PNG, *args, **kwargs)
def print_tiff(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_TIF, *args, **kwargs)
print_tif = print_tiff
def print_xpm(self, filename, *args, **kwargs):
return self._print_image(filename, wx.BITMAP_TYPE_XPM, *args, **kwargs)
def _print_image(self, filename, filetype, *args, **kwargs):
origBitmap = self.bitmap
l,b,width,height = self.figure.bbox.bounds
width = int(math.ceil(width))
height = int(math.ceil(height))
self.bitmap = wx.EmptyBitmap(width, height)
renderer = RendererWx(self.bitmap, self.figure.dpi)
gc = renderer.new_gc()
self.figure.draw(renderer)
# Now that we have rendered into the bitmap, save it
# to the appropriate file type and clean up
if is_string_like(filename):
if not self.bitmap.SaveFile(filename, filetype):
DEBUG_MSG('print_figure() file save error', 4, self)
raise RuntimeError('Could not save figure to {0!s}\n'.format((filename)))
elif is_writable_file_like(filename):
if not self.bitmap.ConvertToImage().SaveStream(filename, filetype):
DEBUG_MSG('print_figure() file save error', 4, self)
raise RuntimeError('Could not save figure to {0!s}\n'.format((filename)))
# Restore everything to normal
self.bitmap = origBitmap
# Note: draw is required here since bits of state about the
# last renderer are strewn about the artist draw methods. Do
# not remove the draw without first verifying that these have
# been cleaned up. The artist contains() methods will fail
# otherwise.
if self._isDrawn:
self.draw()
self.Refresh()
def get_default_filetype(self):
return 'png'
def _onPaint(self, evt):
"""
Called when wxPaintEvt is generated
"""
DEBUG_MSG("_onPaint()", 1, self)
drawDC = wx.PaintDC(self)
if not self._isDrawn:
self.draw(drawDC=drawDC)
else:
self.gui_repaint(drawDC=drawDC)
evt.Skip()
def _onEraseBackground(self, evt):
"""
Called when window is redrawn; since we are blitting the entire
image, we can leave this blank to suppress flicker.
"""
pass
def _onSize(self, evt):
"""
Called when wxEventSize is generated.
In this application we attempt to resize to fit the window, so it
is better to take the performance hit and redraw the whole window.
"""
DEBUG_MSG("_onSize()", 2, self)
# Create a new, correctly sized bitmap
self._width, self._height = self.GetClientSize()
self.bitmap =wx.EmptyBitmap(self._width, self._height)
self._isDrawn = False
if self._width <= 1 or self._height <= 1: return # Empty figure
dpival = self.figure.dpi
winch = self._width/dpival
hinch = self._height/dpival
self.figure.set_size_inches(winch, hinch)
# Rendering will happen on the associated paint event
# so no need to do anything here except to make sure
# the whole background is repainted.
self.Refresh(eraseBackground=False)
def _get_key(self, evt):
keyval = evt.m_keyCode
if keyval in self.keyvald:
key = self.keyvald[keyval]
elif keyval <256:
key = chr(keyval)
else:
key = None
# why is wx upcasing this?
if key is not None: key = key.lower()
return key
def _onIdle(self, evt):
'a GUI idle event'
evt.Skip()
FigureCanvasBase.idle_event(self, guiEvent=evt)
def _onKeyDown(self, evt):
"""Capture key press."""
key = self._get_key(evt)
evt.Skip()
FigureCanvasBase.key_press_event(self, key, guiEvent=evt)
def _onKeyUp(self, evt):
"""Release key."""
key = self._get_key(evt)
#print 'release key', key
evt.Skip()
FigureCanvasBase.key_release_event(self, key, guiEvent=evt)
def _onRightButtonDown(self, evt):
"""Start measuring on an axis."""
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
evt.Skip()
self.CaptureMouse()
FigureCanvasBase.button_press_event(self, x, y, 3, guiEvent=evt)
def _onRightButtonUp(self, evt):
"""End measuring on an axis."""
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
evt.Skip()
if self.HasCapture(): self.ReleaseMouse()
FigureCanvasBase.button_release_event(self, x, y, 3, guiEvent=evt)
def _onLeftButtonDown(self, evt):
"""Start measuring on an axis."""
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
evt.Skip()
self.CaptureMouse()
FigureCanvasBase.button_press_event(self, x, y, 1, guiEvent=evt)
def _onLeftButtonUp(self, evt):
"""End measuring on an axis."""
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
#print 'release button', 1
evt.Skip()
if self.HasCapture(): self.ReleaseMouse()
FigureCanvasBase.button_release_event(self, x, y, 1, guiEvent=evt)
def _onMouseWheel(self, evt):
"""Translate mouse wheel events into matplotlib events"""
# Determine mouse location
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
# Convert delta/rotation/rate into a floating point step size
delta = evt.GetWheelDelta()
rotation = evt.GetWheelRotation()
rate = evt.GetLinesPerAction()
#print "delta,rotation,rate",delta,rotation,rate
step = rate*float(rotation)/delta
# Done handling event
evt.Skip()
# Mac is giving two events for every wheel event
# Need to skip every second one
if wx.Platform == '__WXMAC__':
if not hasattr(self,'_skipwheelevent'):
self._skipwheelevent = True
elif self._skipwheelevent:
self._skipwheelevent = False
return # Return without processing event
else:
self._skipwheelevent = True
# Convert to mpl event
FigureCanvasBase.scroll_event(self, x, y, step, guiEvent=evt)
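# Worked example (typical values, illustrative only): one wheel notch usually
# reports GetWheelDelta() == 120 and GetWheelRotation() == +/-120; with
# GetLinesPerAction() == 3 the step computed above is 3 * 120/120 = +/-3.0.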
def _onMotion(self, evt):
"""Start measuring on an axis."""
x = evt.GetX()
y = self.figure.bbox.height - evt.GetY()
evt.Skip()
FigureCanvasBase.motion_notify_event(self, x, y, guiEvent=evt)
def _onLeave(self, evt):
"""Mouse has left the window."""
evt.Skip()
FigureCanvasBase.leave_notify_event(self, guiEvent = evt)
def _onEnter(self, evt):
"""Mouse has entered the window."""
FigureCanvasBase.enter_notify_event(self, guiEvent = evt)
########################################################################
#
# The following functions and classes are for pylab compatibility
# mode (matplotlib.pylab) and implement figure managers, etc...
#
########################################################################
def _create_wx_app():
"""
Creates a wx.PySimpleApp instance if a wx.App has not been created.
"""
wxapp = wx.GetApp()
if wxapp is None:
wxapp = wx.PySimpleApp()
wxapp.SetExitOnFrameDelete(True)
# retain a reference to the app object so it does not get garbage
# collected and cause segmentation faults
_create_wx_app.theWxApp = wxapp
def draw_if_interactive():
"""
This should be overridden in a windowing environment if drawing
should be done in interactive python mode
"""
DEBUG_MSG("draw_if_interactive()", 1, None)
if matplotlib.is_interactive():
figManager = Gcf.get_active()
if figManager is not None:
figManager.canvas.draw()
def show():
"""
Current implementation assumes that matplotlib is executed in a PyCrust
shell. It appears to be possible to execute wxPython applications from
within a PyCrust without having to ensure that wxPython has been created
in a secondary thread (e.g. SciPy gui_thread).
Unfortunately, gui_thread seems to introduce a number of further
dependencies on SciPy modules, which I do not wish to introduce
into the backend at this point. If there is a need I will look
into this in a later release.
"""
DEBUG_MSG("show()", 3, None)
for figwin in Gcf.get_all_fig_managers():
figwin.frame.Show()
if show._needmain and not matplotlib.is_interactive():
# start the wxPython gui event if there is not already one running
wxapp = wx.GetApp()
if wxapp is not None:
# wxPython 2.4 has no wx.App.IsMainLoopRunning() method
imlr = getattr(wxapp, 'IsMainLoopRunning', lambda: False)
if not imlr():
wxapp.MainLoop()
show._needmain = False
show._needmain = True
def new_figure_manager(num, *args, **kwargs):
"""
Create a new figure manager instance
"""
# in order to expose the Figure constructor to the pylab
# interface we need to create the figure here
DEBUG_MSG("new_figure_manager()", 3, None)
_create_wx_app()
FigureClass = kwargs.pop('FigureClass', Figure)
fig = FigureClass(*args, **kwargs)
frame = FigureFrameWx(num, fig)
figmgr = frame.get_figure_manager()
if matplotlib.is_interactive():
figmgr.frame.Show()
return figmgr
class FigureFrameWx(wx.Frame):
def __init__(self, num, fig):
# On non-Windows platform, explicitly set the position - fix
# positioning bug on some Linux platforms
if wx.Platform == '__WXMSW__':
pos = wx.DefaultPosition
else:
pos =wx.Point(20,20)
l,b,w,h = fig.bbox.bounds
wx.Frame.__init__(self, parent=None, id=-1, pos=pos,
title="Figure {0:d}".format(num))
# Frame will be sized later by the Fit method
DEBUG_MSG("__init__()", 1, self)
self.num = num
statbar = StatusBarWx(self)
self.SetStatusBar(statbar)
self.canvas = self.get_canvas(fig)
self.canvas.SetInitialSize(wx.Size(fig.bbox.width, fig.bbox.height))
self.sizer =wx.BoxSizer(wx.VERTICAL)
self.sizer.Add(self.canvas, 1, wx.TOP | wx.LEFT | wx.EXPAND)
# By adding toolbar in sizer, we are able to put it at the bottom
# of the frame - so appearance is closer to GTK version
self.toolbar = self._get_toolbar(statbar)
if self.toolbar is not None:
self.toolbar.Realize()
if wx.Platform == '__WXMAC__':
# Mac platform (OSX 10.3, MacPython) does not seem to cope with
# having a toolbar in a sizer. This work-around gets the buttons
# back, but at the expense of having the toolbar at the top
self.SetToolBar(self.toolbar)
else:
# On Windows platform, default window size is incorrect, so set
# toolbar width to figure width.
tw, th = self.toolbar.GetSizeTuple()
fw, fh = self.canvas.GetSizeTuple()
# By adding toolbar in sizer, we are able to put it at the bottom
# of the frame - so appearance is closer to GTK version.
# As noted above, doesn't work for Mac.
self.toolbar.SetSize(wx.Size(fw, th))
self.sizer.Add(self.toolbar, 0, wx.LEFT | wx.EXPAND)
self.SetSizer(self.sizer)
self.Fit()
self.figmgr = FigureManagerWx(self.canvas, num, self)
bind(self, wx.EVT_CLOSE, self._onClose)
def _get_toolbar(self, statbar):
if matplotlib.rcParams['toolbar']=='classic':
toolbar = NavigationToolbarWx(self.canvas, True)
elif matplotlib.rcParams['toolbar']=='toolbar2':
toolbar = NavigationToolbar2Wx(self.canvas)
toolbar.set_status_bar(statbar)
else:
toolbar = None
return toolbar
def get_canvas(self, fig):
return FigureCanvasWx(self, -1, fig)
def get_figure_manager(self):
DEBUG_MSG("get_figure_manager()", 1, self)
return self.figmgr
def _onClose(self, evt):
DEBUG_MSG("onClose()", 1, self)
self.canvas.stop_event_loop()
Gcf.destroy(self.num)
#self.Destroy()
def GetToolBar(self):
"""Override wxFrame::GetToolBar as we don't have managed toolbar"""
return self.toolbar
def Destroy(self, *args, **kwargs):
wx.Frame.Destroy(self, *args, **kwargs)
if self.toolbar is not None:
self.toolbar.Destroy()
wxapp = wx.GetApp()
if wxapp:
wxapp.Yield()
return True
class FigureManagerWx(FigureManagerBase):
"""
This class contains the FigureCanvas and GUI frame
It is instantiated by GcfWx whenever a new figure is created. GcfWx is
responsible for managing multiple instances of FigureManagerWx.
NB: FigureManagerBase is defined in backend_bases; Gcf, which manages these instances, lives in _pylab_helpers
public attrs
canvas - a FigureCanvasWx(wx.Panel) instance
window - a wxFrame instance - http://www.lpthe.jussieu.fr/~zeitlin/wxWindows/docs/wxwin_wxframe.html#wxframe
"""
def __init__(self, canvas, num, frame):
DEBUG_MSG("__init__()", 1, self)
FigureManagerBase.__init__(self, canvas, num)
self.frame = frame
self.window = frame
self.tb = frame.GetToolBar()
self.toolbar = self.tb # consistent with other backends
def notify_axes_change(fig):
'this will be called whenever the current axes is changed'
if self.tb is not None: self.tb.update()
self.canvas.figure.add_axobserver(notify_axes_change)
def showfig(*args):
frame.Show()
# attach a show method to the figure
self.canvas.figure.show = showfig
def destroy(self, *args):
DEBUG_MSG("destroy()", 1, self)
self.frame.Destroy()
#if self.tb is not None: self.tb.Destroy()
import wx
#wx.GetApp().ProcessIdle()
wx.WakeUpIdle()
def set_window_title(self, title):
self.window.SetTitle(title)
def resize(self, width, height):
'Set the canvas size in pixels'
self.canvas.SetInitialSize(wx.Size(width, height))
self.window.GetSizer().Fit(self.window)
# Identifiers for toolbar controls - images_wx contains bitmaps for the images
# used in the controls. wxWindows does not provide any stock images, so I've
# 'stolen' those from GTK2, and transformed them into the appropriate format.
#import images_wx
_NTB_AXISMENU =wx.NewId()
_NTB_AXISMENU_BUTTON =wx.NewId()
_NTB_X_PAN_LEFT =wx.NewId()
_NTB_X_PAN_RIGHT =wx.NewId()
_NTB_X_ZOOMIN =wx.NewId()
_NTB_X_ZOOMOUT =wx.NewId()
_NTB_Y_PAN_UP =wx.NewId()
_NTB_Y_PAN_DOWN =wx.NewId()
_NTB_Y_ZOOMIN =wx.NewId()
_NTB_Y_ZOOMOUT =wx.NewId()
#_NTB_SUBPLOT =wx.NewId()
_NTB_SAVE =wx.NewId()
_NTB_CLOSE =wx.NewId()
def _load_bitmap(filename):
"""
Load a bitmap file from the backends/images subdirectory in which the
matplotlib library is installed. The filename parameter should not
contain any path information as this is determined automatically.
Returns a wx.Bitmap object
"""
basedir = os.path.join(rcParams['datapath'],'images')
bmpFilename = os.path.normpath(os.path.join(basedir, filename))
if not os.path.exists(bmpFilename):
raise IOError('Could not find bitmap file "{0!s}"; dying'.format(bmpFilename))
bmp = wx.Bitmap(bmpFilename)
return bmp
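# Usage sketch (the filename is one of the toolbar images referenced later in
# this module):
#   bmp = _load_bitmap('home.png')   # wx.Bitmap for the toolbar 'Home' button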
class MenuButtonWx(wx.Button):
"""
wxPython does not permit a menu to be incorporated directly into a toolbar.
This class simulates the effect by associating a pop-up menu with a button
in the toolbar, and managing this as though it were a menu.
"""
def __init__(self, parent):
wx.Button.__init__(self, parent, _NTB_AXISMENU_BUTTON, "Axes: ",
style=wx.BU_EXACTFIT)
self._toolbar = parent
self._menu =wx.Menu()
self._axisId = []
# First two menu items never change...
self._allId =wx.NewId()
self._invertId =wx.NewId()
self._menu.Append(self._allId, "All", "Select all axes", False)
self._menu.Append(self._invertId, "Invert", "Invert axes selected", False)
self._menu.AppendSeparator()
bind(self, wx.EVT_BUTTON, self._onMenuButton, id=_NTB_AXISMENU_BUTTON)
bind(self, wx.EVT_MENU, self._handleSelectAllAxes, id=self._allId)
bind(self, wx.EVT_MENU, self._handleInvertAxesSelected, id=self._invertId)
def Destroy(self):
self._menu.Destroy()
wx.Button.Destroy(self)  # call the base class; calling self.Destroy() here would recurse forever
def _onMenuButton(self, evt):
"""Handle menu button pressed."""
x, y = self.GetPositionTuple()
w, h = self.GetSizeTuple()
self.PopupMenuXY(self._menu, x, y+h-4)
# When menu returned, indicate selection in button
evt.Skip()
def _handleSelectAllAxes(self, evt):
"""Called when the 'select all axes' menu item is selected."""
if len(self._axisId) == 0:
return
for i in range(len(self._axisId)):
self._menu.Check(self._axisId[i], True)
self._toolbar.set_active(self.getActiveAxes())
evt.Skip()
def _handleInvertAxesSelected(self, evt):
"""Called when the invert all menu item is selected"""
if len(self._axisId) == 0: return
for i in range(len(self._axisId)):
if self._menu.IsChecked(self._axisId[i]):
self._menu.Check(self._axisId[i], False)
else:
self._menu.Check(self._axisId[i], True)
self._toolbar.set_active(self.getActiveAxes())
evt.Skip()
def _onMenuItemSelected(self, evt):
"""Called whenever one of the specific axis menu items is selected"""
current = self._menu.IsChecked(evt.GetId())
if current:
new = False
else:
new = True
self._menu.Check(evt.GetId(), new)
self._toolbar.set_active(self.getActiveAxes())
evt.Skip()
def updateAxes(self, maxAxis):
"""Ensures that there are entries for max_axis axes in the menu
(selected by default)."""
if maxAxis > len(self._axisId):
for i in range(len(self._axisId) + 1, maxAxis + 1, 1):
menuId =wx.NewId()
self._axisId.append(menuId)
self._menu.Append(menuId, "Axis {0:d}".format(i), "Select axis {0:d}".format(i), True)
self._menu.Check(menuId, True)
bind(self, wx.EVT_MENU, self._onMenuItemSelected, id=menuId)
self._toolbar.set_active(range(len(self._axisId)))
def getActiveAxes(self):
"""Return a list of the selected axes."""
active = []
for i in range(len(self._axisId)):
if self._menu.IsChecked(self._axisId[i]):
active.append(i)
return active
def updateButtonText(self, lst):
"""Update the list of selected axes in the menu button"""
axis_txt = ''
for e in lst:
axis_txt += '{0:d},'.format((e+1))
# remove trailing ',' and add to button string
self.SetLabel("Axes: {0!s}".format(axis_txt[:-1]))
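# Illustrative call: updateButtonText([0, 2]) relabels the button as "Axes: 1,3"
# (indices are converted to 1-based axis numbers and joined with commas).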
cursord = {
cursors.MOVE : wx.CURSOR_HAND,
cursors.HAND : wx.CURSOR_HAND,
cursors.POINTER : wx.CURSOR_ARROW,
cursors.SELECT_REGION : wx.CURSOR_CROSS,
}
class SubplotToolWX(wx.Frame):
def __init__(self, targetfig):
wx.Frame.__init__(self, None, -1, "Configure subplots")
toolfig = Figure((6,3))
canvas = FigureCanvasWx(self, -1, toolfig)
# Create a figure manager to manage things
figmgr = FigureManager(canvas, 1, self)
# Now put all into a sizer
sizer = wx.BoxSizer(wx.VERTICAL)
# This way of adding to sizer allows resizing
sizer.Add(canvas, 1, wx.LEFT|wx.TOP|wx.GROW)
self.SetSizer(sizer)
self.Fit()
tool = SubplotTool(targetfig, toolfig)
class NavigationToolbar2Wx(NavigationToolbar2, wx.ToolBar):
def __init__(self, canvas):
wx.ToolBar.__init__(self, canvas.GetParent(), -1)
NavigationToolbar2.__init__(self, canvas)
self.canvas = canvas
self._idle = True
self.statbar = None
def get_canvas(self, frame, fig):
return FigureCanvasWx(frame, -1, fig)
def _init_toolbar(self):
DEBUG_MSG("_init_toolbar", 1, self)
self._parent = self.canvas.GetParent()
_NTB2_HOME =wx.NewId()
self._NTB2_BACK =wx.NewId()
self._NTB2_FORWARD =wx.NewId()
self._NTB2_PAN =wx.NewId()
self._NTB2_ZOOM =wx.NewId()
_NTB2_SAVE = wx.NewId()
_NTB2_SUBPLOT =wx.NewId()
self.SetToolBitmapSize(wx.Size(24,24))
self.AddSimpleTool(_NTB2_HOME, _load_bitmap('home.png'),
'Home', 'Reset original view')
self.AddSimpleTool(self._NTB2_BACK, _load_bitmap('back.png'),
'Back', 'Back navigation view')
self.AddSimpleTool(self._NTB2_FORWARD, _load_bitmap('forward.png'),
'Forward', 'Forward navigation view')
# todo: get new bitmap
self.AddCheckTool(self._NTB2_PAN, _load_bitmap('move.png'),
shortHelp='Pan',
longHelp='Pan with left, zoom with right')
self.AddCheckTool(self._NTB2_ZOOM, _load_bitmap('zoom_to_rect.png'),
shortHelp='Zoom', longHelp='Zoom to rectangle')
self.AddSeparator()
self.AddSimpleTool(_NTB2_SUBPLOT, _load_bitmap('subplots.png'),
'Configure subplots', 'Configure subplot parameters')
self.AddSimpleTool(_NTB2_SAVE, _load_bitmap('filesave.png'),
'Save', 'Save plot contents to file')
bind(self, wx.EVT_TOOL, self.home, id=_NTB2_HOME)
bind(self, wx.EVT_TOOL, self.forward, id=self._NTB2_FORWARD)
bind(self, wx.EVT_TOOL, self.back, id=self._NTB2_BACK)
bind(self, wx.EVT_TOOL, self.zoom, id=self._NTB2_ZOOM)
bind(self, wx.EVT_TOOL, self.pan, id=self._NTB2_PAN)
bind(self, wx.EVT_TOOL, self.configure_subplot, id=_NTB2_SUBPLOT)
bind(self, wx.EVT_TOOL, self.save, id=_NTB2_SAVE)
self.Realize()
def zoom(self, *args):
self.ToggleTool(self._NTB2_PAN, False)
NavigationToolbar2.zoom(self, *args)
def pan(self, *args):
self.ToggleTool(self._NTB2_ZOOM, False)
NavigationToolbar2.pan(self, *args)
def configure_subplot(self, evt):
frame = wx.Frame(None, -1, "Configure subplots")
toolfig = Figure((6,3))
canvas = self.get_canvas(frame, toolfig)
# Create a figure manager to manage things
figmgr = FigureManager(canvas, 1, frame)
# Now put all into a sizer
sizer = wx.BoxSizer(wx.VERTICAL)
# This way of adding to sizer allows resizing
sizer.Add(canvas, 1, wx.LEFT|wx.TOP|wx.GROW)
frame.SetSizer(sizer)
frame.Fit()
tool = SubplotTool(self.canvas.figure, toolfig)
frame.Show()
def save(self, evt):
# Fetch the required filename and file type.
filetypes, exts, filter_index = self.canvas._get_imagesave_wildcards()
default_file = "image." + self.canvas.get_default_filetype()
dlg = wx.FileDialog(self._parent, "Save to file", "", default_file,
filetypes,
wx.SAVE|wx.OVERWRITE_PROMPT|wx.CHANGE_DIR)
dlg.SetFilterIndex(filter_index)
if dlg.ShowModal() == wx.ID_OK:
dirname = dlg.GetDirectory()
filename = dlg.GetFilename()
DEBUG_MSG('Save file dir:{0!s} name:{1!s}'.format(dirname, filename), 3, self)
format = exts[dlg.GetFilterIndex()]
basename, ext = os.path.splitext(filename)
if ext.startswith('.'):
ext = ext[1:]
if ext in ('svg', 'pdf', 'ps', 'eps', 'png') and format!=ext:
#looks like they forgot to set the image type drop
#down, going with the extension.
warnings.warn('extension {0!s} did not match the selected image type {1!s}; going with {2!s}'.format(ext, format, ext), stacklevel=0)
format = ext
try:
self.canvas.print_figure(
os.path.join(dirname, filename), format=format)
except Exception, e:
error_msg_wx(str(e))
def set_cursor(self, cursor):
cursor =wx.StockCursor(cursord[cursor])
self.canvas.SetCursor( cursor )
def release(self, event):
try: del self.lastrect
except AttributeError: pass
def dynamic_update(self):
d = self._idle
self._idle = False
if d:
self.canvas.draw()
self._idle = True
def draw_rubberband(self, event, x0, y0, x1, y1):
'adapted from http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/189744'
canvas = self.canvas
dc =wx.ClientDC(canvas)
# Set logical function to XOR for rubberbanding
dc.SetLogicalFunction(wx.XOR)
# Set dc brush and pen
# Here I set brush and pen to white and grey respectively
# You can set it to your own choices
# The brush setting is not really needed since we
# don't do any filling of the dc. It is set just for
# the sake of completeness.
wbrush =wx.Brush(wx.Colour(255,255,255), wx.TRANSPARENT)
wpen =wx.Pen(wx.Colour(200, 200, 200), 1, wx.SOLID)
dc.SetBrush(wbrush)
dc.SetPen(wpen)
dc.ResetBoundingBox()
dc.BeginDrawing()
height = self.canvas.figure.bbox.height
y1 = height - y1
y0 = height - y0
if y1<y0: y0, y1 = y1, y0
if x1<x0: x0, x1 = x1, x0
w = x1 - x0
h = y1 - y0
rect = int(x0), int(y0), int(w), int(h)
try: lastrect = self.lastrect
except AttributeError: pass
else: dc.DrawRectangle(*lastrect) #erase last
self.lastrect = rect
dc.DrawRectangle(*rect)
dc.EndDrawing()
def set_status_bar(self, statbar):
self.statbar = statbar
def set_message(self, s):
if self.statbar is not None: self.statbar.set_function(s)
def set_history_buttons(self):
can_backward = (self._views._pos > 0)
can_forward = (self._views._pos < len(self._views._elements) - 1)
self.EnableTool(self._NTB2_BACK, can_backward)
self.EnableTool(self._NTB2_FORWARD, can_forward)
class NavigationToolbarWx(wx.ToolBar):
def __init__(self, canvas, can_kill=False):
"""
figure is the Figure instance that the toolbar controls
win, if not None, is the wxWindow the Figure is embedded in
"""
wx.ToolBar.__init__(self, canvas.GetParent(), -1)
DEBUG_MSG("__init__()", 1, self)
self.canvas = canvas
self._lastControl = None
self._mouseOnButton = None
self._parent = canvas.GetParent()
self._NTB_BUTTON_HANDLER = {
_NTB_X_PAN_LEFT : self.panx,
_NTB_X_PAN_RIGHT : self.panx,
_NTB_X_ZOOMIN : self.zoomx,
_NTB_X_ZOOMOUT : self.zoomx,
_NTB_Y_PAN_UP : self.pany,
_NTB_Y_PAN_DOWN : self.pany,
_NTB_Y_ZOOMIN : self.zoomy,
_NTB_Y_ZOOMOUT : self.zoomy }
self._create_menu()
self._create_controls(can_kill)
self.Realize()
def _create_menu(self):
"""
Creates the 'menu' - implemented as a button which opens a
pop-up menu since wxPython does not allow a menu as a control
"""
DEBUG_MSG("_create_menu()", 1, self)
self._menu = MenuButtonWx(self)
self.AddControl(self._menu)
self.AddSeparator()
def _create_controls(self, can_kill):
"""
Creates the button controls, and links them to event handlers
"""
DEBUG_MSG("_create_controls()", 1, self)
# Need the following line as Windows toolbars default to 15x16
self.SetToolBitmapSize(wx.Size(16,16))
self.AddSimpleTool(_NTB_X_PAN_LEFT, _load_bitmap('stock_left.xpm'),
'Left', 'Scroll left')
self.AddSimpleTool(_NTB_X_PAN_RIGHT, _load_bitmap('stock_right.xpm'),
'Right', 'Scroll right')
self.AddSimpleTool(_NTB_X_ZOOMIN, _load_bitmap('stock_zoom-in.xpm'),
'Zoom in', 'Increase X axis magnification')
self.AddSimpleTool(_NTB_X_ZOOMOUT, _load_bitmap('stock_zoom-out.xpm'),
'Zoom out', 'Decrease X axis magnification')
self.AddSeparator()
self.AddSimpleTool(_NTB_Y_PAN_UP,_load_bitmap('stock_up.xpm'),
'Up', 'Scroll up')
self.AddSimpleTool(_NTB_Y_PAN_DOWN, _load_bitmap('stock_down.xpm'),
'Down', 'Scroll down')
self.AddSimpleTool(_NTB_Y_ZOOMIN, _load_bitmap('stock_zoom-in.xpm'),
'Zoom in', 'Increase Y axis magnification')
self.AddSimpleTool(_NTB_Y_ZOOMOUT, _load_bitmap('stock_zoom-out.xpm'),
'Zoom out', 'Decrease Y axis magnification')
self.AddSeparator()
self.AddSimpleTool(_NTB_SAVE, _load_bitmap('stock_save_as.xpm'),
'Save', 'Save plot contents as images')
self.AddSeparator()
bind(self, wx.EVT_TOOL, self._onLeftScroll, id=_NTB_X_PAN_LEFT)
bind(self, wx.EVT_TOOL, self._onRightScroll, id=_NTB_X_PAN_RIGHT)
bind(self, wx.EVT_TOOL, self._onXZoomIn, id=_NTB_X_ZOOMIN)
bind(self, wx.EVT_TOOL, self._onXZoomOut, id=_NTB_X_ZOOMOUT)
bind(self, wx.EVT_TOOL, self._onUpScroll, id=_NTB_Y_PAN_UP)
bind(self, wx.EVT_TOOL, self._onDownScroll, id=_NTB_Y_PAN_DOWN)
bind(self, wx.EVT_TOOL, self._onYZoomIn, id=_NTB_Y_ZOOMIN)
bind(self, wx.EVT_TOOL, self._onYZoomOut, id=_NTB_Y_ZOOMOUT)
bind(self, wx.EVT_TOOL, self._onSave, id=_NTB_SAVE)
bind(self, wx.EVT_TOOL_ENTER, self._onEnterTool, id=self.GetId())
if can_kill:
bind(self, wx.EVT_TOOL, self._onClose, id=_NTB_CLOSE)
bind(self, wx.EVT_MOUSEWHEEL, self._onMouseWheel)
def set_active(self, ind):
"""
ind is a list of index numbers for the axes which are to be made active
"""
DEBUG_MSG("set_active()", 1, self)
self._ind = ind
if ind is not None:
self._active = [ self._axes[i] for i in self._ind ]
else:
self._active = []
# Now update button text with the active axes
self._menu.updateButtonText(ind)
def get_last_control(self):
"""Returns the identity of the last toolbar button pressed."""
return self._lastControl
def panx(self, direction):
DEBUG_MSG("panx()", 1, self)
for a in self._active:
a.xaxis.pan(direction)
self.canvas.draw()
self.canvas.Refresh(eraseBackground=False)
def pany(self, direction):
DEBUG_MSG("pany()", 1, self)
for a in self._active:
a.yaxis.pan(direction)
self.canvas.draw()
self.canvas.Refresh(eraseBackground=False)
def zoomx(self, in_out):
DEBUG_MSG("zoomx()", 1, self)
for a in self._active:
a.xaxis.zoom(in_out)
self.canvas.draw()
self.canvas.Refresh(eraseBackground=False)
def zoomy(self, in_out):
DEBUG_MSG("zoomy()", 1, self)
for a in self._active:
a.yaxis.zoom(in_out)
self.canvas.draw()
self.canvas.Refresh(eraseBackground=False)
def update(self):
"""
Update the toolbar menu - called when (e.g.) a new subplot or axes are added
"""
DEBUG_MSG("update()", 1, self)
self._axes = self.canvas.figure.get_axes()
self._menu.updateAxes(len(self._axes))
def _do_nothing(self, d):
"""A NULL event handler - does nothing whatsoever"""
pass
# Local event handlers - mainly supply parameters to pan/scroll functions
def _onEnterTool(self, evt):
toolId = evt.GetSelection()
try:
self.button_fn = self._NTB_BUTTON_HANDLER[toolId]
except KeyError:
self.button_fn = self._do_nothing
evt.Skip()
def _onLeftScroll(self, evt):
self.panx(-1)
evt.Skip()
def _onRightScroll(self, evt):
self.panx(1)
evt.Skip()
def _onXZoomIn(self, evt):
self.zoomx(1)
evt.Skip()
def _onXZoomOut(self, evt):
self.zoomx(-1)
evt.Skip()
def _onUpScroll(self, evt):
self.pany(1)
evt.Skip()
def _onDownScroll(self, evt):
self.pany(-1)
evt.Skip()
def _onYZoomIn(self, evt):
self.zoomy(1)
evt.Skip()
def _onYZoomOut(self, evt):
self.zoomy(-1)
evt.Skip()
def _onMouseEnterButton(self, button):
self._mouseOnButton = button
def _onMouseLeaveButton(self, button):
if self._mouseOnButton == button:
self._mouseOnButton = None
def _onMouseWheel(self, evt):
if evt.GetWheelRotation() > 0:
direction = 1
else:
direction = -1
self.button_fn(direction)
_onSave = NavigationToolbar2Wx.save
def _onClose(self, evt):
self.GetParent().Destroy()
class StatusBarWx(wx.StatusBar):
"""
A status bar is added to _FigureFrame to allow measurements and the
previously selected scroll function to be displayed as a user
convenience.
"""
def __init__(self, parent):
wx.StatusBar.__init__(self, parent, -1)
self.SetFieldsCount(2)
self.SetStatusText("None", 1)
#self.SetStatusText("Measurement: None", 2)
#self.Reposition()
def set_function(self, string):
self.SetStatusText("{0!s}".format(string), 1)
#def set_measurement(self, string):
# self.SetStatusText("Measurement: %s" % string, 2)
#< Additions for printing support: Matt Newville
class PrintoutWx(wx.Printout):
"""Simple wrapper around wx Printout class -- all the real work
here is scaling the matplotlib canvas bitmap to the current
printer's definition.
"""
def __init__(self, canvas, width=5.5,margin=0.5, title='matplotlib'):
wx.Printout.__init__(self,title=title)
self.canvas = canvas
# width, in inches of output figure (approximate)
self.width = width
self.margin = margin
def HasPage(self, page):
# currently only single-page printing is supported
return page == 1
def GetPageInfo(self):
return (1, 1, 1, 1)
def OnPrintPage(self, page):
self.canvas.draw()
dc = self.GetDC()
(ppw,pph) = self.GetPPIPrinter() # printer's pixels per in
(pgw,pgh) = self.GetPageSizePixels() # page size in pixels
(dcw,dch) = dc.GetSize()
(grw,grh) = self.canvas.GetSizeTuple()
# save current figure dpi resolution and bg color,
# so that we can temporarily set them to the dpi of
# the printer, and the bg color to white
bgcolor = self.canvas.figure.get_facecolor()
fig_dpi = self.canvas.figure.dpi
# draw the bitmap, scaled appropriately
vscale = float(ppw) / fig_dpi
# set figure resolution,bg color for printer
self.canvas.figure.dpi = ppw
self.canvas.figure.set_facecolor('#FFFFFF')
renderer = RendererWx(self.canvas.bitmap, self.canvas.figure.dpi)
self.canvas.figure.draw(renderer)
self.canvas.bitmap.SetWidth( int(self.canvas.bitmap.GetWidth() * vscale))
self.canvas.bitmap.SetHeight( int(self.canvas.bitmap.GetHeight()* vscale))
self.canvas.draw()
# page may need additional scaling on preview
page_scale = 1.0
if self.IsPreview(): page_scale = float(dcw)/pgw
# get margin in pixels = (margin in in) * (pixels/in)
top_margin = int(self.margin * pph * page_scale)
left_margin = int(self.margin * ppw * page_scale)
# set scale so that width of output is self.width inches
# (grw is the on-screen canvas width in pixels, so width*fig_dpi/grw is the scale factor)
user_scale = (self.width * fig_dpi * page_scale)/float(grw)
dc.SetDeviceOrigin(left_margin,top_margin)
dc.SetUserScale(user_scale,user_scale)
# this cute little number avoids API inconsistencies in wx
try:
dc.DrawBitmap(self.canvas.bitmap, 0, 0)
except:
try:
dc.DrawBitmap(self.canvas.bitmap, (0, 0))
except:
pass
# restore original figure resolution
self.canvas.figure.set_facecolor(bgcolor)
self.canvas.figure.dpi = fig_dpi
self.canvas.draw()
return True
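# Worked example of the scaling above (printer and figure numbers are
# illustrative): with ppw = 300 printer pixels/inch and fig_dpi = 100,
# vscale = 3.0, so the canvas bitmap is re-rendered at 300 dpi. For
# width = 5.5 in, an on-screen canvas width grw = 550 px and a direct print
# (page_scale = 1.0):
#   user_scale = (5.5 * 100 * 1.0) / 550 = 1.0
# and a 0.5 in margin becomes 0.5 * 300 = 150 device pixels.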
#>
########################################################################
#
# Now just provide the standard names that backend.__init__ is expecting
#
########################################################################
Toolbar = NavigationToolbarWx
FigureManager = FigureManagerWx
| agpl-3.0 |
kagayakidan/scikit-learn | sklearn/svm/tests/test_svm.py | 70 | 31674 | """
Testing for Support Vector Machine module (sklearn.svm)
TODO: remove hard coded numerical results when possible
"""
import numpy as np
import itertools
from numpy.testing import assert_array_equal, assert_array_almost_equal
from numpy.testing import assert_almost_equal
from scipy import sparse
from nose.tools import assert_raises, assert_true, assert_equal, assert_false
from sklearn.base import ChangedBehaviorWarning
from sklearn import svm, linear_model, datasets, metrics, base
from sklearn.cross_validation import train_test_split
from sklearn.datasets import make_classification, make_blobs
from sklearn.metrics import f1_score
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.utils import check_random_state
from sklearn.utils import ConvergenceWarning
from sklearn.utils.validation import NotFittedError
from sklearn.utils.testing import assert_greater, assert_in, assert_less
from sklearn.utils.testing import assert_raises_regexp, assert_warns
from sklearn.utils.testing import assert_warns_message, assert_raise_message
from sklearn.utils.testing import ignore_warnings
# toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
Y = [1, 1, 1, 2, 2, 2]
T = [[-1, -1], [2, 2], [3, 2]]
true_result = [1, 2, 2]
# also load the iris dataset
iris = datasets.load_iris()
rng = check_random_state(42)
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
def test_libsvm_parameters():
# Test parameters on classes that make use of libsvm.
clf = svm.SVC(kernel='linear').fit(X, Y)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.support_, [1, 3])
assert_array_equal(clf.support_vectors_, (X[1], X[3]))
assert_array_equal(clf.intercept_, [0.])
assert_array_equal(clf.predict(X), Y)
def test_libsvm_iris():
# Check consistency on dataset iris.
# shuffle the dataset so that labels are not ordered
for k in ('linear', 'rbf'):
clf = svm.SVC(kernel=k).fit(iris.data, iris.target)
assert_greater(np.mean(clf.predict(iris.data) == iris.target), 0.9)
assert_array_equal(clf.classes_, np.sort(clf.classes_))
# check also the low-level API
model = svm.libsvm.fit(iris.data, iris.target.astype(np.float64))
pred = svm.libsvm.predict(iris.data, *model)
assert_greater(np.mean(pred == iris.target), .95)
model = svm.libsvm.fit(iris.data, iris.target.astype(np.float64),
kernel='linear')
pred = svm.libsvm.predict(iris.data, *model, kernel='linear')
assert_greater(np.mean(pred == iris.target), .95)
pred = svm.libsvm.cross_validation(iris.data,
iris.target.astype(np.float64), 5,
kernel='linear',
random_seed=0)
assert_greater(np.mean(pred == iris.target), .95)
# If random_seed >= 0, the libsvm rng is seeded (by calling `srand`), hence
# we should get deterministic results (assuming that no other thread is
# calling this wrapper, and hence `srand`, concurrently).
pred2 = svm.libsvm.cross_validation(iris.data,
iris.target.astype(np.float64), 5,
kernel='linear',
random_seed=0)
assert_array_equal(pred, pred2)
@ignore_warnings
def test_single_sample_1d():
# Test whether SVCs work on a single sample given as a 1-d array
clf = svm.SVC().fit(X, Y)
clf.predict(X[0])
clf = svm.LinearSVC(random_state=0).fit(X, Y)
clf.predict(X[0])
def test_precomputed():
# SVC with a precomputed kernel.
# We test it with a toy dataset and with iris.
clf = svm.SVC(kernel='precomputed')
# Gram matrix for train data (square matrix)
# (we use just a linear kernel)
K = np.dot(X, np.array(X).T)
clf.fit(K, Y)
# Gram matrix for test data (rectangular matrix)
KT = np.dot(T, np.array(X).T)
pred = clf.predict(KT)
assert_raises(ValueError, clf.predict, KT.T)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.support_, [1, 3])
assert_array_equal(clf.intercept_, [0])
assert_array_almost_equal(clf.support_, [1, 3])
assert_array_equal(pred, true_result)
# Gram matrix for test data but compute KT[i,j]
# for support vectors j only.
KT = np.zeros_like(KT)
for i in range(len(T)):
for j in clf.support_:
KT[i, j] = np.dot(T[i], X[j])
pred = clf.predict(KT)
assert_array_equal(pred, true_result)
# same as before, but using a callable function instead of the kernel
# matrix. kernel is just a linear kernel
kfunc = lambda x, y: np.dot(x, y.T)
clf = svm.SVC(kernel=kfunc)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_equal(clf.dual_coef_, [[-0.25, .25]])
assert_array_equal(clf.intercept_, [0])
assert_array_almost_equal(clf.support_, [1, 3])
assert_array_equal(pred, true_result)
# test a precomputed kernel with the iris dataset
# and check parameters against a linear SVC
clf = svm.SVC(kernel='precomputed')
clf2 = svm.SVC(kernel='linear')
K = np.dot(iris.data, iris.data.T)
clf.fit(K, iris.target)
clf2.fit(iris.data, iris.target)
pred = clf.predict(K)
assert_array_almost_equal(clf.support_, clf2.support_)
assert_array_almost_equal(clf.dual_coef_, clf2.dual_coef_)
assert_array_almost_equal(clf.intercept_, clf2.intercept_)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
# Gram matrix for test data but compute KT[i,j]
# for support vectors j only.
K = np.zeros_like(K)
for i in range(len(iris.data)):
for j in clf.support_:
K[i, j] = np.dot(iris.data[i], iris.data[j])
pred = clf.predict(K)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
clf = svm.SVC(kernel=kfunc)
clf.fit(iris.data, iris.target)
assert_almost_equal(np.mean(pred == iris.target), .99, decimal=2)
def test_svr():
# Test Support Vector Regression
diabetes = datasets.load_diabetes()
for clf in (svm.NuSVR(kernel='linear', nu=.4, C=1.0),
svm.NuSVR(kernel='linear', nu=.4, C=10.),
svm.SVR(kernel='linear', C=10.),
svm.LinearSVR(C=10.),
svm.LinearSVR(C=10.),
):
clf.fit(diabetes.data, diabetes.target)
assert_greater(clf.score(diabetes.data, diabetes.target), 0.02)
# non-regression test; previously, BaseLibSVM would check that
# len(np.unique(y)) < 2, which must only be done for SVC
svm.SVR().fit(diabetes.data, np.ones(len(diabetes.data)))
svm.LinearSVR().fit(diabetes.data, np.ones(len(diabetes.data)))
def test_linearsvr():
# check that SVR(kernel='linear') and LinearSVC() give
# comparable results
diabetes = datasets.load_diabetes()
lsvr = svm.LinearSVR(C=1e3).fit(diabetes.data, diabetes.target)
score1 = lsvr.score(diabetes.data, diabetes.target)
svr = svm.SVR(kernel='linear', C=1e3).fit(diabetes.data, diabetes.target)
score2 = svr.score(diabetes.data, diabetes.target)
assert np.linalg.norm(lsvr.coef_ - svr.coef_) / np.linalg.norm(svr.coef_) < .1
assert np.abs(score1 - score2) < 0.1
def test_svr_errors():
X = [[0.0], [1.0]]
y = [0.0, 0.5]
# Bad kernel
clf = svm.SVR(kernel=lambda x, y: np.array([[1.0]]))
clf.fit(X, y)
assert_raises(ValueError, clf.predict, X)
def test_oneclass():
# Test OneClassSVM
clf = svm.OneClassSVM()
clf.fit(X)
pred = clf.predict(T)
assert_array_almost_equal(pred, [-1, -1, -1])
assert_array_almost_equal(clf.intercept_, [-1.008], decimal=3)
assert_array_almost_equal(clf.dual_coef_,
[[0.632, 0.233, 0.633, 0.234, 0.632, 0.633]],
decimal=3)
assert_raises(ValueError, lambda: clf.coef_)
def test_oneclass_decision_function():
# Test OneClassSVM decision function
clf = svm.OneClassSVM()
rnd = check_random_state(2)
# Generate train data
X = 0.3 * rnd.randn(100, 2)
X_train = np.r_[X + 2, X - 2]
# Generate some regular novel observations
X = 0.3 * rnd.randn(20, 2)
X_test = np.r_[X + 2, X - 2]
# Generate some abnormal novel observations
X_outliers = rnd.uniform(low=-4, high=4, size=(20, 2))
# fit the model
clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
clf.fit(X_train)
# predict things
y_pred_test = clf.predict(X_test)
assert_greater(np.mean(y_pred_test == 1), .9)
y_pred_outliers = clf.predict(X_outliers)
assert_greater(np.mean(y_pred_outliers == -1), .9)
dec_func_test = clf.decision_function(X_test)
assert_array_equal((dec_func_test > 0).ravel(), y_pred_test == 1)
dec_func_outliers = clf.decision_function(X_outliers)
assert_array_equal((dec_func_outliers > 0).ravel(), y_pred_outliers == 1)
def test_tweak_params():
# Make sure some tweaking of parameters works.
# We change clf.dual_coef_ at run time and expect .predict() to change
# accordingly. Notice that this is not trivial since it involves a lot
# of C/Python copying in the libsvm bindings.
# The success of this test ensures that the mapping between libsvm and
# the python classifier is complete.
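# Recall that for a kernel SVC the decision value of a sample x is
# sum_i dual_coef_[0, i] * K(sv_i, x) + intercept_, so overwriting the dual
# coefficients directly changes which class predict() returns below.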
clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, Y)
assert_array_equal(clf.dual_coef_, [[-.25, .25]])
assert_array_equal(clf.predict([[-.1, -.1]]), [1])
clf._dual_coef_ = np.array([[.0, 1.]])
assert_array_equal(clf.predict([[-.1, -.1]]), [2])
def test_probability():
# Predict probabilities using SVC
# This uses cross validation, so we use a slightly bigger testing set.
for clf in (svm.SVC(probability=True, random_state=0, C=1.0),
svm.NuSVC(probability=True, random_state=0)):
clf.fit(iris.data, iris.target)
prob_predict = clf.predict_proba(iris.data)
assert_array_almost_equal(
np.sum(prob_predict, 1), np.ones(iris.data.shape[0]))
assert_true(np.mean(np.argmax(prob_predict, 1)
== clf.predict(iris.data)) > 0.9)
assert_almost_equal(clf.predict_proba(iris.data),
np.exp(clf.predict_log_proba(iris.data)), 8)
def test_decision_function():
# Test decision_function
# Sanity check, test that decision_function implemented in python
# returns the same as the one in libsvm
# multi class:
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovo').fit(iris.data, iris.target)
dec = np.dot(iris.data, clf.coef_.T) + clf.intercept_
assert_array_almost_equal(dec, clf.decision_function(iris.data))
# binary:
clf.fit(X, Y)
dec = np.dot(X, clf.coef_.T) + clf.intercept_
prediction = clf.predict(X)
assert_array_almost_equal(dec.ravel(), clf.decision_function(X))
assert_array_almost_equal(
prediction,
clf.classes_[(clf.decision_function(X) > 0).astype(np.int)])
expected = np.array([-1., -0.66, -1., 0.66, 1., 1.])
assert_array_almost_equal(clf.decision_function(X), expected, 2)
# kernel binary:
clf = svm.SVC(kernel='rbf', gamma=1, decision_function_shape='ovo')
clf.fit(X, Y)
rbfs = rbf_kernel(X, clf.support_vectors_, gamma=clf.gamma)
dec = np.dot(rbfs, clf.dual_coef_.T) + clf.intercept_
assert_array_almost_equal(dec.ravel(), clf.decision_function(X))
def test_decision_function_shape():
# check that decision_function_shape='ovr' gives
# correct shape and is consistent with predict
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovr').fit(iris.data, iris.target)
dec = clf.decision_function(iris.data)
assert_equal(dec.shape, (len(iris.data), 3))
assert_array_equal(clf.predict(iris.data), np.argmax(dec, axis=1))
# with five classes:
X, y = make_blobs(n_samples=80, centers=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovr').fit(X_train, y_train)
dec = clf.decision_function(X_test)
assert_equal(dec.shape, (len(X_test), 5))
assert_array_equal(clf.predict(X_test), np.argmax(dec, axis=1))
# check the shape when decision_function_shape='ovo'
clf = svm.SVC(kernel='linear', C=0.1,
decision_function_shape='ovo').fit(X_train, y_train)
dec = clf.decision_function(X_train)
assert_equal(dec.shape, (len(X_train), 10))
# check deprecation warning
clf.decision_function_shape = None
msg = "change the shape of the decision function"
dec = assert_warns_message(ChangedBehaviorWarning, msg,
clf.decision_function, X_train)
assert_equal(dec.shape, (len(X_train), 10))
def test_svr_decision_function():
# Test SVR's decision_function
# Sanity check, test that decision_function implemented in python
# returns the same as the one in libsvm
X = iris.data
y = iris.target
# linear kernel
reg = svm.SVR(kernel='linear', C=0.1).fit(X, y)
dec = np.dot(X, reg.coef_.T) + reg.intercept_
assert_array_almost_equal(dec.ravel(), reg.decision_function(X).ravel())
# rbf kernel
reg = svm.SVR(kernel='rbf', gamma=1).fit(X, y)
rbfs = rbf_kernel(X, reg.support_vectors_, gamma=reg.gamma)
dec = np.dot(rbfs, reg.dual_coef_.T) + reg.intercept_
assert_array_almost_equal(dec.ravel(), reg.decision_function(X).ravel())
def test_weight():
# Test class weights
clf = svm.SVC(class_weight={1: 0.1})
# we give a small weights to class 1
clf.fit(X, Y)
# so all predicted values belong to class 2
assert_array_almost_equal(clf.predict(X), [2] * 6)
X_, y_ = make_classification(n_samples=200, n_features=10,
weights=[0.833, 0.167], random_state=2)
for clf in (linear_model.LogisticRegression(),
svm.LinearSVC(random_state=0), svm.SVC()):
clf.set_params(class_weight={0: .1, 1: 10})
clf.fit(X_[:100], y_[:100])
y_pred = clf.predict(X_[100:])
assert_true(f1_score(y_[100:], y_pred) > .3)
def test_sample_weights():
# Test weights on individual samples
# TODO: check on NuSVR, OneClass, etc.
clf = svm.SVC()
clf.fit(X, Y)
assert_array_equal(clf.predict([X[2]]), [1.])
sample_weight = [.1] * 3 + [10] * 3
clf.fit(X, Y, sample_weight=sample_weight)
assert_array_equal(clf.predict([X[2]]), [2.])
# test that rescaling all samples is the same as changing C
clf = svm.SVC()
clf.fit(X, Y)
dual_coef_no_weight = clf.dual_coef_
clf.set_params(C=100)
clf.fit(X, Y, sample_weight=np.repeat(0.01, len(X)))
assert_array_almost_equal(dual_coef_no_weight, clf.dual_coef_)
def test_auto_weight():
# Test class weights for imbalanced data
from sklearn.linear_model import LogisticRegression
# We take as dataset the two-dimensional projection of iris so
# that it is not separable and remove half of the samples from one
# class to make it unbalanced.
# We add one to the targets as a non-regression test: class_weight="balanced"
# used to work only when the labels were a range [0..K).
from sklearn.utils import compute_class_weight
X, y = iris.data[:, :2], iris.target + 1
unbalanced = np.delete(np.arange(y.size), np.where(y > 2)[0][::2])
classes = np.unique(y[unbalanced])
class_weights = compute_class_weight('balanced', classes, y[unbalanced])
assert_true(np.argmax(class_weights) == 2)
for clf in (svm.SVC(kernel='linear'), svm.LinearSVC(random_state=0),
LogisticRegression()):
# check that score is better when class='balanced' is set.
y_pred = clf.fit(X[unbalanced], y[unbalanced]).predict(X)
clf.set_params(class_weight='balanced')
y_pred_balanced = clf.fit(X[unbalanced], y[unbalanced]).predict(X)
assert_true(metrics.f1_score(y, y_pred, average='weighted')
<= metrics.f1_score(y, y_pred_balanced,
average='weighted'))
def test_bad_input():
# Test that it gives proper exception on deficient input
# impossible value of C
assert_raises(ValueError, svm.SVC(C=-1).fit, X, Y)
# impossible value of nu
clf = svm.NuSVC(nu=0.0)
assert_raises(ValueError, clf.fit, X, Y)
Y2 = Y[:-1] # wrong dimensions for labels
assert_raises(ValueError, clf.fit, X, Y2)
# Test with arrays that are non-contiguous.
for clf in (svm.SVC(), svm.LinearSVC(random_state=0)):
Xf = np.asfortranarray(X)
assert_false(Xf.flags['C_CONTIGUOUS'])
yf = np.ascontiguousarray(np.tile(Y, (2, 1)).T)
yf = yf[:, -1]
assert_false(yf.flags['F_CONTIGUOUS'])
assert_false(yf.flags['C_CONTIGUOUS'])
clf.fit(Xf, yf)
assert_array_equal(clf.predict(T), true_result)
# error for precomputed kernels
clf = svm.SVC(kernel='precomputed')
assert_raises(ValueError, clf.fit, X, Y)
# sample_weight bad dimensions
clf = svm.SVC()
assert_raises(ValueError, clf.fit, X, Y, sample_weight=range(len(X) - 1))
# predict with sparse input when trained with dense
clf = svm.SVC().fit(X, Y)
assert_raises(ValueError, clf.predict, sparse.lil_matrix(X))
Xt = np.array(X).T
clf.fit(np.dot(X, Xt), Y)
assert_raises(ValueError, clf.predict, X)
clf = svm.SVC()
clf.fit(X, Y)
assert_raises(ValueError, clf.predict, Xt)
def test_sparse_precomputed():
clf = svm.SVC(kernel='precomputed')
sparse_gram = sparse.csr_matrix([[1, 0], [0, 1]])
try:
clf.fit(sparse_gram, [0, 1])
assert not "reached"
except TypeError as e:
assert_in("Sparse precomputed", str(e))
def test_linearsvc_parameters():
# Test possible parameter combinations in LinearSVC
# Generate list of possible parameter combinations
losses = ['hinge', 'squared_hinge', 'logistic_regression', 'foo']
penalties, duals = ['l1', 'l2', 'bar'], [True, False]
X, y = make_classification(n_samples=5, n_features=5)
for loss, penalty, dual in itertools.product(losses, penalties, duals):
clf = svm.LinearSVC(penalty=penalty, loss=loss, dual=dual)
if ((loss, penalty) == ('hinge', 'l1') or
(loss, penalty, dual) == ('hinge', 'l2', False) or
(penalty, dual) == ('l1', True) or
loss == 'foo' or penalty == 'bar'):
assert_raises_regexp(ValueError,
"Unsupported set of arguments.*penalty='%s.*"
"loss='%s.*dual=%s"
% (penalty, loss, dual),
clf.fit, X, y)
else:
clf.fit(X, y)
# Incorrect loss value - test if explicit error message is raised
assert_raises_regexp(ValueError, ".*loss='l3' is not supported.*",
svm.LinearSVC(loss="l3").fit, X, y)
# FIXME remove in 1.0
def test_linearsvx_loss_penalty_deprecations():
X, y = [[0.0], [1.0]], [0, 1]
msg = ("loss='%s' has been deprecated in favor of "
"loss='%s' as of 0.16. Backward compatibility"
" for the %s will be removed in %s")
# LinearSVC
# loss l1/L1 --> hinge
assert_warns_message(DeprecationWarning,
msg % ("l1", "hinge", "loss='l1'", "1.0"),
svm.LinearSVC(loss="l1").fit, X, y)
# loss l2/L2 --> squared_hinge
assert_warns_message(DeprecationWarning,
msg % ("L2", "squared_hinge", "loss='L2'", "1.0"),
svm.LinearSVC(loss="L2").fit, X, y)
# LinearSVR
# loss l1/L1 --> epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("L1", "epsilon_insensitive", "loss='L1'",
"1.0"),
svm.LinearSVR(loss="L1").fit, X, y)
# loss l2/L2 --> squared_epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("l2", "squared_epsilon_insensitive",
"loss='l2'", "1.0"),
svm.LinearSVR(loss="l2").fit, X, y)
# FIXME remove in 0.18
def test_linear_svx_uppercase_loss_penalty():
# Check if Upper case notation is supported by _fit_liblinear
# which is called by fit
X, y = [[0.0], [1.0]], [0, 1]
msg = ("loss='%s' has been deprecated in favor of "
"loss='%s' as of 0.16. Backward compatibility"
" for the uppercase notation will be removed in %s")
# loss SQUARED_hinge --> squared_hinge
assert_warns_message(DeprecationWarning,
msg % ("SQUARED_hinge", "squared_hinge", "0.18"),
svm.LinearSVC(loss="SQUARED_hinge").fit, X, y)
# penalty L2 --> l2
assert_warns_message(DeprecationWarning,
msg.replace("loss", "penalty")
% ("L2", "l2", "0.18"),
svm.LinearSVC(penalty="L2").fit, X, y)
# loss EPSILON_INSENSITIVE --> epsilon_insensitive
assert_warns_message(DeprecationWarning,
msg % ("EPSILON_INSENSITIVE", "epsilon_insensitive",
"0.18"),
svm.LinearSVR(loss="EPSILON_INSENSITIVE").fit, X, y)
def test_linearsvc():
# Test basic routines using LinearSVC
clf = svm.LinearSVC(random_state=0).fit(X, Y)
# by default should have intercept
assert_true(clf.fit_intercept)
assert_array_equal(clf.predict(T), true_result)
assert_array_almost_equal(clf.intercept_, [0], decimal=3)
# the same with l1 penalty
clf = svm.LinearSVC(penalty='l1', loss='squared_hinge', dual=False, random_state=0).fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# l2 penalty with dual formulation
clf = svm.LinearSVC(penalty='l2', dual=True, random_state=0).fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# l2 penalty, l1 loss
clf = svm.LinearSVC(penalty='l2', loss='hinge', dual=True, random_state=0)
clf.fit(X, Y)
assert_array_equal(clf.predict(T), true_result)
# test also decision function
dec = clf.decision_function(T)
res = (dec > 0).astype(np.int) + 1
assert_array_equal(res, true_result)
def test_linearsvc_crammer_singer():
# Test LinearSVC with crammer_singer multi-class svm
ovr_clf = svm.LinearSVC(random_state=0).fit(iris.data, iris.target)
cs_clf = svm.LinearSVC(multi_class='crammer_singer', random_state=0)
cs_clf.fit(iris.data, iris.target)
# similar prediction for ovr and crammer-singer:
assert_true((ovr_clf.predict(iris.data) ==
cs_clf.predict(iris.data)).mean() > .9)
# classifiers shouldn't be the same
assert_true((ovr_clf.coef_ != cs_clf.coef_).all())
# test decision function
assert_array_equal(cs_clf.predict(iris.data),
np.argmax(cs_clf.decision_function(iris.data), axis=1))
dec_func = np.dot(iris.data, cs_clf.coef_.T) + cs_clf.intercept_
assert_array_almost_equal(dec_func, cs_clf.decision_function(iris.data))
def test_crammer_singer_binary():
# Test Crammer-Singer formulation in the binary case
X, y = make_classification(n_classes=2, random_state=0)
for fit_intercept in (True, False):
acc = svm.LinearSVC(fit_intercept=fit_intercept,
multi_class="crammer_singer",
random_state=0).fit(X, y).score(X, y)
assert_greater(acc, 0.9)
def test_linearsvc_iris():
# Test that LinearSVC gives plausible predictions on the iris dataset
# Also, test symbolic class names (classes_).
target = iris.target_names[iris.target]
clf = svm.LinearSVC(random_state=0).fit(iris.data, target)
assert_equal(set(clf.classes_), set(iris.target_names))
assert_greater(np.mean(clf.predict(iris.data) == target), 0.8)
dec = clf.decision_function(iris.data)
pred = iris.target_names[np.argmax(dec, 1)]
assert_array_equal(pred, clf.predict(iris.data))
def test_dense_liblinear_intercept_handling(classifier=svm.LinearSVC):
# Test that dense liblinear honours intercept_scaling param
X = [[2, 1],
[3, 1],
[1, 3],
[2, 3]]
y = [0, 0, 1, 1]
clf = classifier(fit_intercept=True, penalty='l1', loss='squared_hinge',
dual=False, C=4, tol=1e-7, random_state=0)
assert_true(clf.intercept_scaling == 1, clf.intercept_scaling)
assert_true(clf.fit_intercept)
# when intercept_scaling is low the intercept value is highly "penalized"
# by regularization
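# (Context: liblinear implements the intercept by appending a constant
# feature equal to intercept_scaling; the intercept is that feature's learned
# weight times intercept_scaling, and the weight is regularized like any other
# coefficient, hence a small scaling shrinks the intercept towards zero.)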
clf.intercept_scaling = 1
clf.fit(X, y)
assert_almost_equal(clf.intercept_, 0, decimal=5)
# when intercept_scaling is sufficiently high, the intercept value
# is not affected by regularization
clf.intercept_scaling = 100
clf.fit(X, y)
intercept1 = clf.intercept_
assert_less(intercept1, -1)
# when intercept_scaling is sufficiently high, the intercept value
# doesn't depend on intercept_scaling value
clf.intercept_scaling = 1000
clf.fit(X, y)
intercept2 = clf.intercept_
assert_array_almost_equal(intercept1, intercept2, decimal=2)
def test_liblinear_set_coef():
# multi-class case
clf = svm.LinearSVC().fit(iris.data, iris.target)
values = clf.decision_function(iris.data)
clf.coef_ = clf.coef_.copy()
clf.intercept_ = clf.intercept_.copy()
values2 = clf.decision_function(iris.data)
assert_array_almost_equal(values, values2)
# binary-class case
X = [[2, 1],
[3, 1],
[1, 3],
[2, 3]]
y = [0, 0, 1, 1]
clf = svm.LinearSVC().fit(X, y)
values = clf.decision_function(X)
clf.coef_ = clf.coef_.copy()
clf.intercept_ = clf.intercept_.copy()
values2 = clf.decision_function(X)
assert_array_equal(values, values2)
def test_immutable_coef_property():
# Check that primal coef modifications are not silently ignored
svms = [
svm.SVC(kernel='linear').fit(iris.data, iris.target),
svm.NuSVC(kernel='linear').fit(iris.data, iris.target),
svm.SVR(kernel='linear').fit(iris.data, iris.target),
svm.NuSVR(kernel='linear').fit(iris.data, iris.target),
svm.OneClassSVM(kernel='linear').fit(iris.data),
]
for clf in svms:
assert_raises(AttributeError, clf.__setattr__, 'coef_', np.arange(3))
assert_raises((RuntimeError, ValueError),
clf.coef_.__setitem__, (0, 0), 0)
def test_linearsvc_verbose():
# stdout: redirect
import os
stdout = os.dup(1) # save original stdout
os.dup2(os.pipe()[1], 1) # replace it
# actual call
clf = svm.LinearSVC(verbose=1)
clf.fit(X, Y)
# stdout: restore
os.dup2(stdout, 1) # restore original stdout
def test_svc_clone_with_callable_kernel():
# create SVM with callable linear kernel, check that results are the same
# as with built-in linear kernel
svm_callable = svm.SVC(kernel=lambda x, y: np.dot(x, y.T),
probability=True, random_state=0,
decision_function_shape='ovr')
# clone for checking clonability with lambda functions..
svm_cloned = base.clone(svm_callable)
svm_cloned.fit(iris.data, iris.target)
svm_builtin = svm.SVC(kernel='linear', probability=True, random_state=0,
decision_function_shape='ovr')
svm_builtin.fit(iris.data, iris.target)
assert_array_almost_equal(svm_cloned.dual_coef_,
svm_builtin.dual_coef_)
assert_array_almost_equal(svm_cloned.intercept_,
svm_builtin.intercept_)
assert_array_equal(svm_cloned.predict(iris.data),
svm_builtin.predict(iris.data))
assert_array_almost_equal(svm_cloned.predict_proba(iris.data),
svm_builtin.predict_proba(iris.data),
decimal=4)
assert_array_almost_equal(svm_cloned.decision_function(iris.data),
svm_builtin.decision_function(iris.data))
def test_svc_bad_kernel():
svc = svm.SVC(kernel=lambda x, y: x)
assert_raises(ValueError, svc.fit, X, Y)
def test_timeout():
a = svm.SVC(kernel=lambda x, y: np.dot(x, y.T), probability=True,
random_state=0, max_iter=1)
assert_warns(ConvergenceWarning, a.fit, X, Y)
def test_unfitted():
X = "foo!" # input validation not required when SVM not fitted
clf = svm.SVC()
assert_raises_regexp(Exception, r".*\bSVC\b.*\bnot\b.*\bfitted\b",
clf.predict, X)
clf = svm.NuSVR()
assert_raises_regexp(Exception, r".*\bNuSVR\b.*\bnot\b.*\bfitted\b",
clf.predict, X)
# ignore convergence warnings from max_iter=1
@ignore_warnings
def test_consistent_proba():
a = svm.SVC(probability=True, max_iter=1, random_state=0)
proba_1 = a.fit(X, Y).predict_proba(X)
a = svm.SVC(probability=True, max_iter=1, random_state=0)
proba_2 = a.fit(X, Y).predict_proba(X)
assert_array_almost_equal(proba_1, proba_2)
def test_linear_svc_convergence_warnings():
# Test that warnings are raised if model does not converge
lsvc = svm.LinearSVC(max_iter=2, verbose=1)
assert_warns(ConvergenceWarning, lsvc.fit, X, Y)
assert_equal(lsvc.n_iter_, 2)
def test_svr_coef_sign():
# Test that SVR(kernel="linear") has coef_ with the right sign.
# Non-regression test for #2933.
X = np.random.RandomState(21).randn(10, 3)
y = np.random.RandomState(12).randn(10)
for svr in [svm.SVR(kernel='linear'), svm.NuSVR(kernel='linear'),
svm.LinearSVR()]:
svr.fit(X, y)
assert_array_almost_equal(svr.predict(X),
np.dot(X, svr.coef_.ravel()) + svr.intercept_)
def test_linear_svc_intercept_scaling():
# Test that the right error message is thrown when intercept_scaling <= 0
for i in [-1, 0]:
lsvc = svm.LinearSVC(intercept_scaling=i)
msg = ('Intercept scaling is %r but needs to be greater than 0.'
' To disable fitting an intercept,'
' set fit_intercept=False.' % lsvc.intercept_scaling)
assert_raise_message(ValueError, msg, lsvc.fit, X, Y)
def test_lsvc_intercept_scaling_zero():
# Test that intercept_scaling is ignored when fit_intercept is False
lsvc = svm.LinearSVC(fit_intercept=False)
lsvc.fit(X, Y)
assert_equal(lsvc.intercept_, 0.)
def test_hasattr_predict_proba():
# Method must be (un)available before or after fit, switched by
# `probability` param
G = svm.SVC(probability=True)
assert_true(hasattr(G, 'predict_proba'))
G.fit(iris.data, iris.target)
assert_true(hasattr(G, 'predict_proba'))
G = svm.SVC(probability=False)
assert_false(hasattr(G, 'predict_proba'))
G.fit(iris.data, iris.target)
assert_false(hasattr(G, 'predict_proba'))
# Switching to `probability=True` after fitting should make
# predict_proba available, but calling it must not work:
G.probability = True
assert_true(hasattr(G, 'predict_proba'))
msg = "predict_proba is not available when fitted with probability=False"
assert_raise_message(NotFittedError, msg, G.predict_proba, iris.data)
| bsd-3-clause |
hdmetor/scikit-learn | examples/neighbors/plot_kde_1d.py | 347 | 5100 | """
===================================
Simple 1D Kernel Density Estimation
===================================
This example uses the :class:`sklearn.neighbors.KernelDensity` class to
demonstrate the principles of Kernel Density Estimation in one dimension.
The first plot shows one of the problems with using histograms to visualize
the density of points in 1D. Intuitively, a histogram can be thought of as a
scheme in which a unit "block" is stacked above each point on a regular grid.
As the top two panels show, however, the choice of gridding for these blocks
can lead to wildly divergent ideas about the underlying shape of the density
distribution. If we instead center each block on the point it represents, we
get the estimate shown in the bottom left panel. This is a kernel density
estimation with a "top hat" kernel. This idea can be generalized to other
kernel shapes: the bottom-right panel of the first figure shows a Gaussian
kernel density estimate over the same distribution.
Scikit-learn implements efficient kernel density estimation using either
a Ball Tree or KD Tree structure, through the
:class:`sklearn.neighbors.KernelDensity` estimator. The available kernels
are shown in the second figure of this example.
The third figure compares kernel density estimates for a distribution of 100
samples in 1 dimension. Though this example uses 1D distributions, kernel
density estimation is easily and efficiently extensible to higher dimensions
as well.
"""
# Author: Jake Vanderplas <[email protected]>
#
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
from sklearn.neighbors import KernelDensity
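# Illustrative, self-contained sketch (not part of the original example and
# not called below; the helper name is arbitrary): a top-hat KDE is just the
# fraction of samples within one bandwidth of each grid point, normalised by
# the window width, which makes the "block stacking" picture above concrete.
def _manual_tophat_kde(samples, grid, bandwidth):
    """Top-hat kernel density estimate of `samples` evaluated on `grid`."""
    samples = np.asarray(samples, dtype=float).ravel()
    grid = np.asarray(grid, dtype=float).ravel()
    # Count the samples within +/- bandwidth of every grid point, then divide
    # by n * (window width) so the estimate integrates to one.
    within = np.abs(grid[:, None] - samples[None, :]) <= bandwidth
    return within.sum(axis=1) / (samples.size * 2.0 * bandwidth)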
#----------------------------------------------------------------------
# Plot the progression of histograms to kernels
np.random.seed(1)
N = 20
X = np.concatenate((np.random.normal(0, 1, 0.3 * N),
np.random.normal(5, 1, 0.7 * N)))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
bins = np.linspace(-5, 10, 10)
fig, ax = plt.subplots(2, 2, sharex=True, sharey=True)
fig.subplots_adjust(hspace=0.05, wspace=0.05)
# histogram 1
ax[0, 0].hist(X[:, 0], bins=bins, fc='#AAAAFF', normed=True)
ax[0, 0].text(-3.5, 0.31, "Histogram")
# histogram 2
ax[0, 1].hist(X[:, 0], bins=bins + 0.75, fc='#AAAAFF', normed=True)
ax[0, 1].text(-3.5, 0.31, "Histogram, bins shifted")
# tophat KDE
kde = KernelDensity(kernel='tophat', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 0].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 0].text(-3.5, 0.31, "Tophat Kernel Density")
# Gaussian KDE
kde = KernelDensity(kernel='gaussian', bandwidth=0.75).fit(X)
log_dens = kde.score_samples(X_plot)
ax[1, 1].fill(X_plot[:, 0], np.exp(log_dens), fc='#AAAAFF')
ax[1, 1].text(-3.5, 0.31, "Gaussian Kernel Density")
for axi in ax.ravel():
axi.plot(X[:, 0], np.zeros(X.shape[0]) - 0.01, '+k')
axi.set_xlim(-4, 9)
axi.set_ylim(-0.02, 0.34)
for axi in ax[:, 0]:
axi.set_ylabel('Normalized Density')
for axi in ax[1, :]:
axi.set_xlabel('x')
#----------------------------------------------------------------------
# Plot all available kernels
X_plot = np.linspace(-6, 6, 1000)[:, None]
X_src = np.zeros((1, 1))
fig, ax = plt.subplots(2, 3, sharex=True, sharey=True)
fig.subplots_adjust(left=0.05, right=0.95, hspace=0.05, wspace=0.05)
def format_func(x, loc):
if x == 0:
return '0'
elif x == 1:
return 'h'
elif x == -1:
return '-h'
else:
return '%ih' % x
for i, kernel in enumerate(['gaussian', 'tophat', 'epanechnikov',
'exponential', 'linear', 'cosine']):
axi = ax.ravel()[i]
log_dens = KernelDensity(kernel=kernel).fit(X_src).score_samples(X_plot)
axi.fill(X_plot[:, 0], np.exp(log_dens), '-k', fc='#AAAAFF')
axi.text(-2.6, 0.95, kernel)
axi.xaxis.set_major_formatter(plt.FuncFormatter(format_func))
axi.xaxis.set_major_locator(plt.MultipleLocator(1))
axi.yaxis.set_major_locator(plt.NullLocator())
axi.set_ylim(0, 1.05)
axi.set_xlim(-2.9, 2.9)
ax[0, 1].set_title('Available Kernels')
#----------------------------------------------------------------------
# Plot a 1D density example
N = 100
np.random.seed(1)
X = np.concatenate((np.random.normal(0, 1, 0.3 * N),
np.random.normal(5, 1, 0.7 * N)))[:, np.newaxis]
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
true_dens = (0.3 * norm(0, 1).pdf(X_plot[:, 0])
+ 0.7 * norm(5, 1).pdf(X_plot[:, 0]))
fig, ax = plt.subplots()
ax.fill(X_plot[:, 0], true_dens, fc='black', alpha=0.2,
label='input distribution')
for kernel in ['gaussian', 'tophat', 'epanechnikov']:
kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
log_dens = kde.score_samples(X_plot)
ax.plot(X_plot[:, 0], np.exp(log_dens), '-',
label="kernel = '{0}'".format(kernel))
ax.text(6, 0.38, "N={0} points".format(N))
ax.legend(loc='upper left')
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), '+k')
ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
| bsd-3-clause |
mrgloom/h2o-3 | h2o-py/tests/testdir_golden/pyunit_svd_1_golden.py | 3 | 2183 | import sys
sys.path.insert(1, "../../")
import h2o
def svd_1_golden(ip, port):
print "Importing USArrests.csv data..."
arrestsH2O = h2o.upload_file(h2o.locate("smalldata/pca_test/USArrests.csv"))
print "Compare with SVD"
fitH2O = h2o.svd(x=arrestsH2O[0:4], nv=4, transform="NONE", max_iterations=2000)
print "Compare singular values (D)"
h2o_d = fitH2O._model_json['output']['d']
r_d = [1419.06139509772, 194.825846110138, 45.6613376308754, 18.0695566224677]
print "R Singular Values: {0}".format(r_d)
print "H2O Singular Values: {0}".format(h2o_d)
for r, h in zip(r_d, h2o_d): assert abs(r - h) < 1e-6, "H2O got {0}, but R got {1}".format(h, r)
print "Compare right singular vectors (V)"
h2o_v = fitH2O._model_json['output']['v']
r_v = [[-0.04239181, 0.01616262, -0.06588426, 0.99679535],
[-0.94395706, 0.32068580, 0.06655170, -0.04094568],
[-0.30842767, -0.93845891, 0.15496743, 0.01234261],
[-0.10963744, -0.12725666, -0.98347101, -0.06760284]]
print "R Right Singular Vectors: {0}".format(r_v)
print "H2O Right Singular Vectors: {0}".format(h2o_v)
for rl, hl in zip(r_v, h2o_v):
for r, h in zip(rl, hl): assert abs(abs(r) - abs(h)) < 1e-5, "H2O got {0}, but R got {1}".format(h, r)
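# Note: singular vectors are only determined up to a sign flip, which is why
# the comparisons here and below are done on absolute values.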
print "Compare left singular vectors (U)"
h2o_u = h2o.as_list(h2o.get_frame(fitH2O._model_json['output']['u_key']['name']), use_pandas=False)
h2o_u.pop(0)
r_u = [[-0.1716251, 0.096325710, 0.06515480, 0.15369551],
[-0.1891166, 0.173452566, -0.42665785, -0.17801438],
[-0.2155930, 0.078998111, 0.02063740, -0.28070784],
[-0.1390244, 0.059889811, 0.01392269, 0.01610418],
[-0.2067788, -0.009812026, -0.17633244, -0.21867425],
[-0.1558794, -0.064555293, -0.28288280, -0.11797419]]
print "R Left Singular Vectors: {0}".format(r_u)
print "H2O Left Singular Vectors: {0}".format(h2o_u)
for rl, hl in zip(r_u, h2o_u):
for r, h in zip(rl, hl): assert abs(abs(r) - abs(float(h))) < 1e-5, "H2O got {0}, but R got {1}".format(h, r)
if __name__ == "__main__":
h2o.run_test(sys.argv, svd_1_golden)
| apache-2.0 |
jpzk/evopy | evopy/examples/experiments/constraints_dses_dsessvc/latex.py | 1 | 6832 | '''
This file is part of evopy.
Copyright 2012 - 2013, Jendrik Poloczek
evopy is free software: you can redistribute it
and/or modify it under the terms of the GNU General Public License as published
by the Free Software Foundation, either version 3 of the License, or (at your
option) any later version.
evopy is distributed in the hope that it will be
useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
Public License for more details.
You should have received a copy of the GNU General Public License along with
evopy. If not, see <http://www.gnu.org/licenses/>.
'''
from sys import path
path.append("../../../..")
import pdb
from pickle import load
from copy import deepcopy
from numpy import matrix, log10, array
from scipy.stats import wilcoxon, ranksums
from itertools import chain
from evopy.strategies.ori_dses_svc_repair import ORIDSESSVCR
from evopy.strategies.ori_dses_svc import ORIDSESSVC
from evopy.strategies.ori_dses import ORIDSES
from evopy.simulators.simulator import Simulator
from evopy.problems.sphere_problem_origin_r1 import SphereProblemOriginR1
from evopy.problems.sphere_problem_origin_r2 import SphereProblemOriginR2
from evopy.problems.schwefels_problem_26 import SchwefelsProblem26
from evopy.problems.tr_problem import TRProblem
from evopy.metamodel.dses_svc_linear_meta_model import DSESSVCLinearMetaModel
from sklearn.cross_validation import KFold
from evopy.operators.scaling.scaling_standardscore import ScalingStandardscore
from evopy.operators.scaling.scaling_dummy import ScalingDummy
from evopy.metamodel.cv.svc_cv_sklearn_grid_linear import SVCCVSkGridLinear
from evopy.operators.termination.or_combinator import ORCombinator
from evopy.operators.termination.accuracy import Accuracy
from evopy.operators.termination.generations import Generations
from evopy.operators.termination.convergence import Convergence
from setup import *
cfcsf = file("output/cfcs_file.save", "r")
cfcs = load(cfcsf)
# statistics
variable_names = ['min', 'max', 'mean', 'var', 'h']
variables = {}
for variable in variable_names:
variables[variable] = create_problem_optimizer_map(0.0)
for problem in problems:
for optimizer in optimizers[problem]:
cfc = cfcs[problem][optimizer]
cfc = list(chain.from_iterable(cfc))
variables['min'][problem][optimizer] = min(cfc)
variables['max'][problem][optimizer] = max(cfc)
variables['mean'][problem][optimizer] = array(cfc).mean()
variables['var'][problem][optimizer] = array(cfc).var()
variables['h'][problem][optimizer] =\
1.06 * array(cfc).std() * (len(cfc)**(-1.0/5.0))
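# This is Silverman's rule-of-thumb bandwidth, h = 1.06 * sigma * n**(-1/5),
# the standard default choice for a Gaussian kernel density estimate.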
pvalues = {}
for problem in problems:
x = list(chain.from_iterable(cfcs[problem][optimizers[problem][0]]))
y = list(chain.from_iterable(cfcs[problem][optimizers[problem][1]]))
z, pvalues[problem] = ranksums(x,y)
results = file("output/results.tex","w")
lines = [
"\\begin{tabularx}{\\textwidth}{l X X X X X X X }\n",
"\\toprule\n",
"\\textbf{Problem} & p-Wert & SVK & Minimum & Mittel & Maximum & Varianz & h\\\\\n",
"\midrule\n",
"Kugel R. 1 & %1.2f & nein & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (pvalues[SphereProblemOriginR1],\
variables['min'][SphereProblemOriginR1][get_method_SphereProblemR1],\
variables['mean'][SphereProblemOriginR1][get_method_SphereProblemR1],\
variables['max'][SphereProblemOriginR1][get_method_SphereProblemR1],\
variables['var'][SphereProblemOriginR1][get_method_SphereProblemR1],\
variables['h'][SphereProblemOriginR1][get_method_SphereProblemR1]),\
"&& ja & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (variables['min'][SphereProblemOriginR1][get_method_SphereProblemR1_svc],\
variables['mean'][SphereProblemOriginR1][get_method_SphereProblemR1_svc],\
variables['max'][SphereProblemOriginR1][get_method_SphereProblemR1_svc],\
variables['var'][SphereProblemOriginR1][get_method_SphereProblemR1_svc],\
variables['h'][SphereProblemOriginR1][get_method_SphereProblemR1_svc]),\
"Kugel R. 2 & %1.2f & nein & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (pvalues[SphereProblemOriginR2],\
variables['min'][SphereProblemOriginR2][get_method_SphereProblemR2],\
variables['mean'][SphereProblemOriginR2][get_method_SphereProblemR2],\
variables['max'][SphereProblemOriginR2][get_method_SphereProblemR2],\
variables['var'][SphereProblemOriginR2][get_method_SphereProblemR2],\
variables['h'][SphereProblemOriginR2][get_method_SphereProblemR2]),\
"&& ja & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (variables['min'][SphereProblemOriginR2][get_method_SphereProblemR2_svc],\
variables['mean'][SphereProblemOriginR2][get_method_SphereProblemR2_svc],\
variables['max'][SphereProblemOriginR2][get_method_SphereProblemR2_svc],\
variables['var'][SphereProblemOriginR2][get_method_SphereProblemR2_svc],\
variables['h'][SphereProblemOriginR2][get_method_SphereProblemR2_svc]),\
"TR2 & %1.2f & nein & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (pvalues[TRProblem],\
variables['min'][TRProblem][get_method_TR],\
variables['mean'][TRProblem][get_method_TR],\
variables['max'][TRProblem][get_method_TR],\
variables['var'][TRProblem][get_method_TR],\
variables['h'][TRProblem][get_method_TR]),\
"&& ja & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (variables['min'][TRProblem][get_method_TR_svc],\
variables['mean'][TRProblem][get_method_TR_svc],\
variables['max'][TRProblem][get_method_TR_svc],\
variables['var'][TRProblem][get_method_TR_svc],\
variables['h'][TRProblem][get_method_TR_svc]),\
"2.60 mit R. & %1.2f & nein & %i & %1.2f & %i & %1.2f & %1.2f\\\\\n"\
% (pvalues[SchwefelsProblem26],\
variables['min'][SchwefelsProblem26][get_method_Schwefel26],\
variables['mean'][SchwefelsProblem26][get_method_Schwefel26],\
variables['max'][SchwefelsProblem26][get_method_Schwefel26],\
variables['var'][SchwefelsProblem26][get_method_Schwefel26],\
variables['h'][SchwefelsProblem26][get_method_Schwefel26]),\
"&& ja & %i & %1.2f & %i & %1.2f & %1.2f \\\\\n"\
% (variables['min'][SchwefelsProblem26][get_method_Schwefel26_svc],\
variables['mean'][SchwefelsProblem26][get_method_Schwefel26_svc],\
variables['max'][SchwefelsProblem26][get_method_Schwefel26_svc],\
variables['var'][SchwefelsProblem26][get_method_Schwefel26_svc],\
variables['h'][SchwefelsProblem26][get_method_Schwefel26_svc]),\
"\\bottomrule\n",\
"\end{tabularx}\n"]
results.writelines(lines)
results.close()
| gpl-3.0 |
perrygeo/geopandas | setup.py | 7 | 2472 | #!/usr/bin/env/python
"""Installation script
Version handling borrowed from pandas project.
"""
import sys
import os
import warnings
try:
from setuptools import setup
except ImportError:
from distutils.core import setup
LONG_DESCRIPTION = """GeoPandas is a project to add support for geographic data to
`pandas`_ objects.
The goal of GeoPandas is to make working with geospatial data in
python easier. It combines the capabilities of `pandas`_ and `shapely`_,
providing geospatial operations in pandas and a high-level interface
to multiple geometries to shapely. GeoPandas enables you to easily do
operations in python that would otherwise require a spatial database
such as PostGIS.
.. _pandas: http://pandas.pydata.org
.. _shapely: http://toblerity.github.io/shapely
"""
MAJOR = 0
MINOR = 1
MICRO = 0
ISRELEASED = False
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)
QUALIFIER = ''
FULLVERSION = VERSION
if not ISRELEASED:
FULLVERSION += '.dev'
try:
import subprocess
try:
pipe = subprocess.Popen(["git", "rev-parse", "--short", "HEAD"],
stdout=subprocess.PIPE).stdout
except OSError:
# msysgit compatibility
pipe = subprocess.Popen(
["git.cmd", "describe", "HEAD"],
stdout=subprocess.PIPE).stdout
rev = pipe.read().strip()
# makes distutils blow up on Python 2.7
if sys.version_info[0] >= 3:
rev = rev.decode('ascii')
FULLVERSION = '%d.%d.%d.dev-%s' % (MAJOR, MINOR, MICRO, rev)
except:
warnings.warn("WARNING: Couldn't get git revision")
else:
FULLVERSION += QUALIFIER
def write_version_py(filename=None):
cnt = """\
version = '%s'
short_version = '%s'
"""
if not filename:
filename = os.path.join(
os.path.dirname(__file__), 'geopandas', 'version.py')
a = open(filename, 'w')
try:
a.write(cnt % (FULLVERSION, VERSION))
finally:
a.close()
write_version_py()
setup(name='geopandas',
version=FULLVERSION,
description='Geographic pandas extensions',
license='BSD',
author='Kelsey Jordahl',
author_email='[email protected]',
url='http://geopandas.org',
long_description=LONG_DESCRIPTION,
packages=['geopandas', 'geopandas.io', 'geopandas.tools'],
install_requires=[
'pandas', 'shapely', 'fiona', 'descartes', 'pyproj', 'rtree'],
)
| bsd-3-clause |
CuppenResearch/SmallTools | categorizeSV.py | 3 | 7039 | #!/usr/local/bin/python
##################################################################
# Builds a table containing all SVs from each VCF file, categorized by type and length.
# Each VCF file occupies its own column of the table.
# A path to the VCF file(s) to be processed must be provided on the command line.
#
# Author: Floor Dussel
##################################################################
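# Example invocation (illustrative path; quote the glob so it is expanded by
# glob.glob inside the script rather than by the shell):
#   python categorizeSV.py -p "/data/vcfs/*.vcf"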
import os
import re
import numpy as np
import pandas as pd
import glob
import vcf
import argparse
def categorize(path):
list_vcf_names = []
df_all = pd.DataFrame()
def extractInfoINVDELTRADUP (record):
chrom = record.CHROM
pos = record.POS
end = record.INFO["END"]
length_calc = end - pos
length_abs = abs(length_calc)
length = length_abs + 1
output = [chrom, pos, end, length]
return output;
def extractInfoINS (record):
chrom = record.CHROM
pos = record.POS
end = record.INFO["END"]
length = 0
if "INSLEN" in record.INFO:
length_get = record.INFO["INSLEN"]
length = abs(length_get)
#print 'INSLENGTH used as length of insertion'
elif "SVLEN" in record.INFO:
length = record.INFO["SVLEN"]
if not isinstance(length, int):
length = length[0]
if length < 0:
length = length * -1
#print 'SVLEN used as length of insertion'
#else:
#print 'No insertion length found. Note: insertion is not taken into account in the table'
output = [chrom, pos, end, length]
return output;
#BNDs are said to have length = 0 and END is said to be the same as POS
def extractInfoBND (record):
chrom = record.CHROM
pos = record.POS
end = pos
length = 0
output = [chrom, pos, end, length]
return output;
def checkLengthSVs(record):
if record.INFO["SVTYPE"] == "DEL":
del_output = extractInfoINVDELTRADUP(record)
length = del_output[3]
if (length > 1000 and length < 10001): #1000-10001bp, so length of 1-10kb
del_outputlist10.append(del_output)
elif (length > 10000 and length <100001): #10-100kbp
del_outputlist100.append(del_output)
elif (length > 100000 and length <1000001): #100kb-1mbp
del_outputlist1000.append(del_output)
elif (length > 1000000 and length <10000001): #1mbp - 10 mbp
del_outputlist10000.append(del_output)
elif (length > 10000000): # >10Mb
del_outputlist100000.append(del_output)
if record.INFO["SVTYPE"] == "INS":
ins_output = extractInfoINS(record)
length = ins_output[3]
if (length > 1000 and length < 10001): #1000-10001bp, so length of 1-10kb
ins_outputlist10.append(ins_output)
elif (length > 10000 and length <100001): #10-100kbp
ins_outputlist100.append(ins_output)
elif (length > 100000 and length <1000001): #100kb-1mbp
ins_outputlist1000.append(ins_output)
elif (length > 1000000 and length <10000001): #1mbp - 10 mbp
ins_outputlist10000.append(ins_output)
elif (length > 10000000): # >10Mb
ins_outputlist100000.append(ins_output)
if record.INFO["SVTYPE"] == "INV":
inv_output = extractInfoINVDELTRADUP(record)
length = inv_output[3]
if (length > 1000 and length < 10001): #1000-10001bp, so length of 1-10kb
inv_outputlist10.append(inv_output)
elif (length > 10000 and length <100001): #10-100kbp
inv_outputlist100.append(inv_output)
elif (length > 100000 and length <1000001): #100kb-1mbp
inv_outputlist1000.append(inv_output)
elif (length > 1000000 and length <10000001): #1mbp - 10 mbp
inv_outputlist10000.append(inv_output)
elif (length > 10000000): # >10Mb
inv_outputlist100000.append(inv_output)
if record.INFO["SVTYPE"] == "TRA":
tra_output = extractInfoINVDELTRADUP(record)
tra_bnd_outputlist.append(tra_output)
if record.INFO["SVTYPE"] == "DUP":
dup_output = extractInfoINVDELTRADUP(record)
length = dup_output[3]
if (length > 1000 and length < 10001): #1000-10001bp, so length of 1-10kb
dup_outputlist10.append(dup_output)
elif (length > 10000 and length <100001): #10-100kbp
dup_outputlist100.append(dup_output)
elif (length > 100000 and length <1000001): #100kb-1mbp
dup_outputlist1000.append(dup_output)
elif (length > 1000000 and length <10000001): #1mbp - 10 mbp
dup_outputlist10000.append(dup_output)
elif (length > 10000000): # >10Mb
dup_outputlist100000.append(dup_output)
if record.INFO["SVTYPE"] == "BND":
bnd_output = extractInfoBND(record)
tra_bnd_outputlist.append(bnd_output)
path_vcf = path
#path = "/home/cog/fdussel/Documents/*.vcf"
for vcf_filename in glob.glob(path_vcf):
#vcf= open(vcf_filename)
#print vcf_filename
vcf_file = None
df_vcf = pd.DataFrame()
vcf_file_handle = open(vcf_filename, 'r')
if os.path.getsize(vcf_filename) > 0:
vcf_file = vcf.Reader(vcf_file_handle)
else:
continue
del_outputlist10 = []
del_outputlist100 = []
del_outputlist1000 = []
del_outputlist10000 = []
del_outputlist100000 = []
ins_outputlist10 = []
ins_outputlist100 = []
ins_outputlist1000 = []
ins_outputlist10000 = []
ins_outputlist100000 = []
inv_outputlist10 = []
inv_outputlist100 = []
inv_outputlist1000 = []
inv_outputlist10000 = []
inv_outputlist100000 = []
tra_bnd_outputlist = []
dup_outputlist10 = []
dup_outputlist100 = []
dup_outputlist1000 = []
dup_outputlist10000 = []
dup_outputlist100000 = []
all_names = vcf_filename.rsplit('/', 1)
name = all_names[1]
list_vcf_names.append(name)
for record in vcf_file:
checkLengthSVs(record)
dict_numberSV = {"del_1_10kb" : len(del_outputlist10), "del_10_100kb": len(del_outputlist100), "del_100kb_1mb": len(del_outputlist1000),
"del_1mb_10mb" : len(del_outputlist10000), "del_>10mb": len(del_outputlist100000),
"ins_1_10kb" : len(ins_outputlist10), "ins_10_100kb" : len(ins_outputlist100), "ins_100kb_1mb": len(ins_outputlist1000),
"ins_1mb_10mb" : len(ins_outputlist10000), "ins_>10mb" : len(ins_outputlist100000),
"inv_1_10kb" : len(inv_outputlist10), "inv_10_100kb": len(inv_outputlist100), "inv_100kb_1mb" : len(inv_outputlist1000),
"inv_1mb_10mb" : len(inv_outputlist10000), "inv_>10mb" : len(inv_outputlist100000),
"tra/bnd ": len(tra_bnd_outputlist),
"dup_1_10kb" : len(dup_outputlist10), "dup_10_100kb" : len(dup_outputlist100), "dup_100kb_1mb" : len(dup_outputlist1000),
"dup_1mb_10mb" : len(dup_outputlist10000), "dup_>10mb" : len(dup_outputlist100000)}
df_vcf = pd.DataFrame({name: dict_numberSV}, index = dict_numberSV.keys())
vcf_file_handle.close()
# Check whether data frame is empty
if not df_vcf.empty:
df_all = pd.concat([df_all, df_vcf], axis=1)
df_all = df_all.sort_index()
#print list_vcf_names
print df_all
if __name__ == '__main__':
parser = argparse.ArgumentParser(description = 'Categorizing structural variants on type and length')
required_named = parser.add_argument_group('Required arguments')
required_named.add_argument('-p', '--path', help = "input a path for the vcf-file(s) to be processed", required=True)
args = parser.parse_args()
categorize(args.path)
| gpl-3.0 |
dunkenj/smpy | scripts/dblpower_test.py | 1 | 1035 | import numpy as np
import astropy.units as u
from smpy import smpy as S
from smpy import ssp as B
from smpy.sfh import dblpower
import matplotlib.pyplot as plt
BC = B.BC('data/ssp/bc03/chab/lr/')
models = S.CSP(BC)
tau = 1*u.Gyr
age = 3*u.Gyr
sfh = [(1., -1., 0.3), (1., -1., 0.5)]
dust = 0.
metal = 1.
#BC = B.BC('')
def make_dbl_sfhs(nalpha, nbeta, ntau,
lalpha_min=-1., lalpha_max=3.,
lbeta_min=-1., lbeta_max=3.,
tau_min=0.1, tau_max=1.0):
alphas, betas, taus = np.meshgrid(np.logspace(lalpha_min, lalpha_max, nalpha),
-1*np.logspace(lbeta_min, lbeta_max, nbeta),
np.linspace(tau_min, tau_max, ntau))
sfh_params = np.vstack([alphas.flatten(), betas.flatten(), taus.flatten()]).T
return sfh_params
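# Note: the returned array has shape (nalpha * nbeta * ntau, 3) with columns
# (alpha, beta, tau); the 9 x 9 x 7 grid built below yields 567 parameter triples.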
sfh = make_dbl_sfhs(9, 9, 7)
#sfh = [(31.6227766,-31.6227766,0.7)]
#sfh = [(31.62,-10.,0.7)]
models.build(age, sfh, dust, metal, fesc=[0., 0.1, 0.2], sfh_law=dblpower, verbose=True)
| mit |
valexandersaulys/airbnb_kaggle_contest | venv/lib/python3.4/site-packages/sklearn/decomposition/tests/test_kernel_pca.py | 57 | 8062 | import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import (assert_array_almost_equal, assert_less,
assert_equal, assert_not_equal,
assert_raises)
from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import make_circles
from sklearn.linear_model import Perceptron
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
from sklearn.metrics.pairwise import rbf_kernel
def test_kernel_pca():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
def histogram(x, y, **kwargs):
# Histogram kernel implemented as a callable.
assert_equal(kwargs, {}) # no kernel_params that we didn't ask for
return np.minimum(x, y).sum()
for eigen_solver in ("auto", "dense", "arpack"):
for kernel in ("linear", "rbf", "poly", histogram):
# histogram kernel produces singular matrix inside linalg.solve
# XXX use a least-squares approximation?
inv = not callable(kernel)
# transform fit data
kpca = KernelPCA(4, kernel=kernel, eigen_solver=eigen_solver,
fit_inverse_transform=inv)
X_fit_transformed = kpca.fit_transform(X_fit)
X_fit_transformed2 = kpca.fit(X_fit).transform(X_fit)
assert_array_almost_equal(np.abs(X_fit_transformed),
np.abs(X_fit_transformed2))
# non-regression test: previously, gamma would be 0 by default,
# forcing all eigenvalues to 0 under the poly kernel
assert_not_equal(X_fit_transformed.size, 0)
# transform new data
X_pred_transformed = kpca.transform(X_pred)
assert_equal(X_pred_transformed.shape[1],
X_fit_transformed.shape[1])
# inverse transform
if inv:
X_pred2 = kpca.inverse_transform(X_pred_transformed)
assert_equal(X_pred2.shape, X_pred.shape)
def test_invalid_parameters():
assert_raises(ValueError, KernelPCA, 10, fit_inverse_transform=True,
kernel='precomputed')
def test_kernel_pca_sparse():
rng = np.random.RandomState(0)
X_fit = sp.csr_matrix(rng.random_sample((5, 4)))
X_pred = sp.csr_matrix(rng.random_sample((2, 4)))
for eigen_solver in ("auto", "arpack"):
for kernel in ("linear", "rbf", "poly"):
# transform fit data
kpca = KernelPCA(4, kernel=kernel, eigen_solver=eigen_solver,
fit_inverse_transform=False)
X_fit_transformed = kpca.fit_transform(X_fit)
X_fit_transformed2 = kpca.fit(X_fit).transform(X_fit)
assert_array_almost_equal(np.abs(X_fit_transformed),
np.abs(X_fit_transformed2))
# transform new data
X_pred_transformed = kpca.transform(X_pred)
assert_equal(X_pred_transformed.shape[1],
X_fit_transformed.shape[1])
# inverse transform
# X_pred2 = kpca.inverse_transform(X_pred_transformed)
# assert_equal(X_pred2.shape, X_pred.shape)
def test_kernel_pca_linear_kernel():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
# for a linear kernel, kernel PCA should find the same projection as PCA
# modulo the sign (direction)
# fit only the first four components: fifth is near zero eigenvalue, so
# can be trimmed due to roundoff error
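# Why this holds: with a linear kernel the centered Gram matrix X X^T shares
# its non-zero eigenvalues with the covariance matrix X^T X, and its
# eigenvectors yield the same sample projections as PCA up to the sign of
# each component, hence the comparison on absolute values.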
assert_array_almost_equal(
np.abs(KernelPCA(4).fit(X_fit).transform(X_pred)),
np.abs(PCA(4).fit(X_fit).transform(X_pred)))
def test_kernel_pca_n_components():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
for eigen_solver in ("dense", "arpack"):
for c in [1, 2, 4]:
kpca = KernelPCA(n_components=c, eigen_solver=eigen_solver)
shape = kpca.fit(X_fit).transform(X_pred).shape
assert_equal(shape, (2, c))
def test_remove_zero_eig():
X = np.array([[1 - 1e-30, 1], [1, 1], [1, 1 - 1e-20]])
# n_components=None (default) => remove_zero_eig is True
kpca = KernelPCA()
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 0))
kpca = KernelPCA(n_components=2)
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 2))
kpca = KernelPCA(n_components=2, remove_zero_eig=True)
Xt = kpca.fit_transform(X)
assert_equal(Xt.shape, (3, 0))
def test_kernel_pca_precomputed():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((5, 4))
X_pred = rng.random_sample((2, 4))
for eigen_solver in ("dense", "arpack"):
X_kpca = KernelPCA(4, eigen_solver=eigen_solver).\
fit(X_fit).transform(X_pred)
X_kpca2 = KernelPCA(
4, eigen_solver=eigen_solver, kernel='precomputed').fit(
np.dot(X_fit, X_fit.T)).transform(np.dot(X_pred, X_fit.T))
X_kpca_train = KernelPCA(
4, eigen_solver=eigen_solver,
kernel='precomputed').fit_transform(np.dot(X_fit, X_fit.T))
X_kpca_train2 = KernelPCA(
4, eigen_solver=eigen_solver, kernel='precomputed').fit(
np.dot(X_fit, X_fit.T)).transform(np.dot(X_fit, X_fit.T))
assert_array_almost_equal(np.abs(X_kpca),
np.abs(X_kpca2))
assert_array_almost_equal(np.abs(X_kpca_train),
np.abs(X_kpca_train2))
def test_kernel_pca_invalid_kernel():
rng = np.random.RandomState(0)
X_fit = rng.random_sample((2, 4))
kpca = KernelPCA(kernel="tototiti")
assert_raises(ValueError, kpca.fit, X_fit)
def test_gridsearch_pipeline():
# Test if we can do a grid-search to find parameters to separate
# circles with a perceptron model.
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
kpca = KernelPCA(kernel="rbf", n_components=2)
pipeline = Pipeline([("kernel_pca", kpca), ("Perceptron", Perceptron())])
param_grid = dict(kernel_pca__gamma=2. ** np.arange(-2, 2))
grid_search = GridSearchCV(pipeline, cv=3, param_grid=param_grid)
grid_search.fit(X, y)
assert_equal(grid_search.best_score_, 1)
def test_gridsearch_pipeline_precomputed():
# Test if we can do a grid-search to find parameters to separate
# circles with a perceptron model using a precomputed kernel.
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
kpca = KernelPCA(kernel="precomputed", n_components=2)
pipeline = Pipeline([("kernel_pca", kpca), ("Perceptron", Perceptron())])
param_grid = dict(Perceptron__n_iter=np.arange(1, 5))
grid_search = GridSearchCV(pipeline, cv=3, param_grid=param_grid)
X_kernel = rbf_kernel(X, gamma=2.)
grid_search.fit(X_kernel, y)
assert_equal(grid_search.best_score_, 1)
def test_nested_circles():
# Test the linear separability of the first 2D KPCA transform
X, y = make_circles(n_samples=400, factor=.3, noise=.05,
random_state=0)
# 2D nested circles are not linearly separable
train_score = Perceptron().fit(X, y).score(X, y)
assert_less(train_score, 0.8)
# Project the circles data into the first 2 components of a RBF Kernel
# PCA model.
# Note that the gamma value is data dependent. If this test breaks
# and the gamma value has to be updated, the Kernel PCA example will
# have to be updated too.
kpca = KernelPCA(kernel="rbf", n_components=2,
fit_inverse_transform=True, gamma=2.)
X_kpca = kpca.fit_transform(X)
# The data is perfectly linearly separable in that space
train_score = Perceptron().fit(X_kpca, y).score(X_kpca, y)
assert_equal(train_score, 1.0)
| gpl-2.0 |
RobertABT/heightmap | build/matplotlib/examples/animation/strip_chart_demo.py | 7 | 1476 | """
Emulate an oscilloscope. Requires the animation API introduced in
matplotlib 1.0 SVN.
"""
import numpy as np
from matplotlib.lines import Line2D
import matplotlib.pyplot as plt
import matplotlib.animation as animation
class Scope:
def __init__(self, ax, maxt=2, dt=0.02):
self.ax = ax
self.dt = dt
self.maxt = maxt
self.tdata = [0]
self.ydata = [0]
self.line = Line2D(self.tdata, self.ydata)
self.ax.add_line(self.line)
self.ax.set_ylim(-.1, 1.1)
self.ax.set_xlim(0, self.maxt)
def update(self, y):
lastt = self.tdata[-1]
if lastt > self.tdata[0] + self.maxt: # reset the arrays
self.tdata = [self.tdata[-1]]
self.ydata = [self.ydata[-1]]
self.ax.set_xlim(self.tdata[0], self.tdata[0] + self.maxt)
self.ax.figure.canvas.draw()
t = self.tdata[-1] + self.dt
self.tdata.append(t)
self.ydata.append(y)
self.line.set_data(self.tdata, self.ydata)
return self.line,
def emitter(p=0.03):
'return a random value with probability p, else 0'
while True:
v = np.random.rand(1)
if v > p:
yield 0.
else:
yield np.random.rand(1)
fig, ax = plt.subplots()
scope = Scope(ax)
# pass a generator in "emitter" to produce data for the update func
ani = animation.FuncAnimation(fig, scope.update, emitter, interval=10,
blit=True)
plt.show()
| mit |
JohnDMcMaster/uvscada | nuc/berton_plt.py | 1 | 1382 | import argparse
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import json
def load_csv(f):
f = open(f, 'r')
data = []
print 'Loading'
for l in f:
try:
j = json.loads(l)
except:
break
data.append(j['v'])
return data
def load_jl(f):
f = open(f, 'r')
# skip metadata
f.readline()
data = []
print 'Loading'
for l in f:
try:
j = json.loads(l)
except:
break
data.append((j['vin'], j['vout']))
return data
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Help')
parser.add_argument('fn', help='File')
args = parser.parse_args()
print 'Looping'
fn = args.fn
print
print fn
fn_out = fn.replace('.jl', '.png')
if fn_out == fn:
raise Exception()
data = load_jl(fn)
# x mv y V
print 'Plotting (%d samples)' % (len(data),)
#plt.subplot(221)
plt.subplots_adjust(right=0.8)
plt.plot(*zip(*data), label='Real')
# Ideal
plt.plot(*zip(*[(xmv/1000., xmv * -1000.0 / 9000) for xmv in xrange(0, 9000, 10)]), label='Ideal')
#red_patch = mpatches.Patch(color='red', label='Ideal')
#plt.legend(handles=[red_patch])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.savefig(fn_out)
#plt.show()
| bsd-2-clause |
sanketloke/scikit-learn | examples/svm/plot_separating_hyperplane_unbalanced.py | 329 | 1850 | """
=================================================
SVM: Separating hyperplane for unbalanced classes
=================================================
Find the optimal separating hyperplane using an SVC for classes that
are unbalanced.
We first find the separating plane with a plain SVC and then plot
(dashed) the separating hyperplane with automatically correction for
unbalanced classes.
.. currentmodule:: sklearn.linear_model
.. note::
This example will also work by replacing ``SVC(kernel="linear")``
with ``SGDClassifier(loss="hinge")``. Setting the ``loss`` parameter
of the :class:`SGDClassifier` equal to ``hinge`` will yield behaviour
such as that of a SVC with a linear kernel.
For example try instead of the ``SVC``::
clf = SGDClassifier(n_iter=100, alpha=0.01)
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
#from sklearn.linear_model import SGDClassifier
# we create 40 separable points
rng = np.random.RandomState(0)
n_samples_1 = 1000
n_samples_2 = 100
X = np.r_[1.5 * rng.randn(n_samples_1, 2),
0.5 * rng.randn(n_samples_2, 2) + [2, 2]]
y = [0] * (n_samples_1) + [1] * (n_samples_2)
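# Illustrative, self-contained sketch (not used below; the helper name is
# arbitrary): instead of hand-picking class_weight={1: 10}, per-class weights
# can be derived from the class frequencies with the usual "balanced"
# heuristic n_samples / (n_classes * class_count), which is roughly what
# class_weight='balanced' computes.
def _balanced_class_weights(labels):
    labels = np.asarray(labels)
    counts = np.bincount(labels)
    # e.g. for 1000 samples of class 0 and 100 of class 1 this gives
    # weights of about [0.55, 5.5].
    return labels.size / (counts.size * counts.astype(float))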
# fit the model and get the separating hyperplane
clf = svm.SVC(kernel='linear', C=1.0)
clf.fit(X, y)
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - clf.intercept_[0] / w[1]
# get the separating hyperplane using weighted classes
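# class_weight={1: 10} scales the penalty parameter C for class 1 by 10, so
# misclassifying the rare class costs more and the boundary shifts toward the
# majority class.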
wclf = svm.SVC(kernel='linear', class_weight={1: 10})
wclf.fit(X, y)
ww = wclf.coef_[0]
wa = -ww[0] / ww[1]
wyy = wa * xx - wclf.intercept_[0] / ww[1]
# plot separating hyperplanes and samples
h0 = plt.plot(xx, yy, 'k-', label='no weights')
h1 = plt.plot(xx, wyy, 'k--', label='with weights')
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.legend()
plt.axis('tight')
plt.show()
| bsd-3-clause |
jadhavhninad/-CSE_515_MWD_Analytics- | Phase 2/phase2Code/matrix_factorization.py | 1 | 4456 | from mysqlConn import DbConnect
import pandas as pd
#DB connector and curosor
db = DbConnect()
db_conn = db.get_connection()
cur2 = db_conn.cursor();
def get_user_mvrating_DF():
#===========================================================================
#Generate user - movie_rating matrix.
    # For each movie, get the rating the user gave it; if there is no rating, use zero.
#===========================================================================
dd_users_mvrating = {}
dd_av_rating_for_genre = {}
dd_total_movie_for_genre = {}
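    # Working structures (per user):
    #   dd_users_mvrating[user][movie]        -> rating (observed, imputed, or 0)
    #   dd_av_rating_for_genre[user][genre]   -> sum of ratings given to that genre
    #   dd_total_movie_for_genre[user][genre] -> number of rated movies in that genre
    # The last two are combined later to impute an average genre rating for
    # movies the user only tagged.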
    # The LIMIT is there just to check that the algorithm works on a small run.
cur2.execute("SELECT userid FROM `mlusers` limit 1000")
result0 = cur2.fetchall();
for usr in result0:
#print "for user" , usr[0]
dd_users_mvrating[usr[0]] = {}
dd_av_rating_for_genre[usr[0]] = {}
dd_total_movie_for_genre[usr[0]] = {}
        # Get all movies watched (and hence rated) by this user.
cur2.execute("SELECT movieid, rating FROM `mlratings` where userid = %s",usr)
result1 = cur2.fetchall()
for data1 in result1:
user_movie_id = data1[0]
user_movie_rating = data1[1]
if user_movie_id in dd_users_mvrating[usr[0]]:
continue
else:
#print user_movie_id,user_movie_rating
dd_users_mvrating[usr[0]][user_movie_id] = user_movie_rating
#mlmovies_clean maps one movie to a single genre.
#Get the genre of the movie and add the movie rating to the genre.
cur2.execute("SELECT genres FROM `mlmovies_clean` where movieid = %s", {user_movie_id,})
result_gen = cur2.fetchone()
for data in result_gen:
if data[0] in dd_av_rating_for_genre[usr[0]]:
dd_av_rating_for_genre[usr[0]][data[0]] += user_movie_rating
dd_total_movie_for_genre[usr[0]][data[0]] += 1
else:
dd_av_rating_for_genre[usr[0]][data[0]] = user_movie_rating;
dd_total_movie_for_genre[usr[0]][data[0]] = 1
        # Now, for every genre in which this user has rated a movie, we have the
        # total rating for that genre, so we can compute the user's average
        # rating per genre.
        # Movies that appear only in mltags have no rating, so we assign them
        # the user's average rating for the movie's genre.
cur2.execute("SELECT movieid FROM `mltags` where userid = %s", usr)
result2 = cur2.fetchall()
for data in result2:
#print data1
user_movie_id = data[0]
cur2.execute("SELECT genres FROM `mlmovies_clean` where movieid = %s", {user_movie_id, })
mv_genre = cur2.fetchall()
if user_movie_id in dd_users_mvrating[usr[0]]:
continue
else:
#print user_movie_id
val = 0.0
for gen in mv_genre:
if gen in dd_av_rating_for_genre[usr[0]]:
val += float(dd_av_rating_for_genre[usr[0]][gen])/float(dd_total_movie_for_genre[usr[0]][gen])
else:
val = 1.0
dd_users_mvrating[usr[0]][user_movie_id] = val/float(len(mv_genre))
        # Set the rating of every remaining movie to zero.
cur2.execute("SELECT DISTINCT movieid FROM `mlmovies`")
genreNames = cur2.fetchall()
for keyval in genreNames:
key = keyval[0]
#print key
if key in dd_users_mvrating[usr[0]]:
continue
else:
dd_users_mvrating[usr[0]][key] = 0
#pprint.pprint(dd_users_genre)
usr_mvrating_matrix = pd.DataFrame(dd_users_mvrating)
#print list(usr_mvrating_matrix.columns.values)
#print list(usr_mvrating_matrix.index)
user_ids_df = pd.DataFrame(usr_mvrating_matrix.columns.values, columns=["user_ids"] )
movie_ids_df = pd.DataFrame(usr_mvrating_matrix.index, columns=["movie_ids"] )
user_ids_df.to_csv("user_ids.csv",sep="\t")
movie_ids_df.to_csv("movie_ids.csv", sep="\t")
return usr_mvrating_matrix
if __name__ == "__main__":
usr_mvrating_matrix = get_user_mvrating_DF()
# usr_genre_matrix = usr_genre_matrix.T
# pprint.pprint(usr_genre_matrix)
usr_mvrating_matrix.to_csv("factorization_1_user_mvrating.csv", sep='\t')
| gpl-3.0 |
Haleyo/spark-tk | regression-tests/sparktkregtests/testcases/graph/graph_weight_degree_test.py | 11 | 4742 | # vim: set encoding=utf-8
# Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Tests the weighted_degree on a graph"""
import unittest
from sparktkregtests.lib import sparktk_test
class WeightedDegreeTest(sparktk_test.SparkTKTestCase):
def setUp(self):
"""Build frames and graphs to be tested"""
super(WeightedDegreeTest, self).setUp()
graph_data = self.get_file("clique_10.csv")
schema = [('src', str),
('dst', str)]
# set up the vertex frame, which is the union of the src and
# the dst columns of the edges
self.frame = self.context.frame.import_csv(graph_data, schema=schema)
self.vertices = self.frame.copy()
self.vertices2 = self.frame.copy()
self.vertices.rename_columns({"src": "id"})
self.vertices.drop_columns(["dst"])
self.vertices2.rename_columns({"dst": "id"})
self.vertices2.drop_columns(["src"])
self.vertices.append(self.vertices2)
self.vertices.drop_duplicates()
self.frame.add_columns(lambda x: 2, ("value", int))
self.graph = self.context.graph.create(self.vertices, self.frame)
def test_weighted_degree_isolated(self):
"""Test weighted degree with an isolated vertex"""
vertex_frame = self.context.frame.create(
[[1],
[2],
[3],
[4],
[5]],
[("id", int)])
edge_frame = self.context.frame.create(
[[2, 3, 1],
[2, 1, 1],
[2, 5, 2]],
[("src", int),
("dst", int),
("weight", int)])
graph = self.context.graph.create(vertex_frame, edge_frame)
degree = graph.weighted_degrees("weight")
known_vals = {1: 1,
2: 4,
3: 1,
4: 0,
5: 2}
degree_pandas = degree.to_pandas(degree.count())
for _, row in degree_pandas.iterrows():
            self.assertAlmostEqual(known_vals[row["id"]], row['degree'])
def test_annotate_weight_degree_out(self):
"""Test degree count weighted on out edges"""
degree_weighted = self.graph.weighted_degrees("value", "out")
res = degree_weighted.to_pandas(degree_weighted.count())
for _, row in res.iterrows():
row_val = row['id'].split('_')
self.assertEqual(2*(int(row_val[2])-1), row['degree'])
def test_weight_degree_in(self):
"""Test degree count weighted on in edges"""
degree_weighted = self.graph.weighted_degrees("value", "in")
res = degree_weighted.to_pandas(degree_weighted.count())
for _, row in res.iterrows():
row_val = row['id'].split('_')
self.assertEqual(
2*(int(row_val[1])-int(row_val[2])), row['degree'])
def test_weight_degree_undirected(self):
"""Test degree count weighted on undirected edges"""
degree_weighted = self.graph.weighted_degrees("value", "undirected")
res = degree_weighted.to_pandas(degree_weighted.count())
for _, row in res.iterrows():
row_val = row['id'].split('_')
self.assertEqual(2*(int(row_val[1])-1), row['degree'])
def test_weight_type_error(self):
"""Test degree count weighted with type error."""
with self.assertRaisesRegexp(TypeError, "unexpected keyword"):
self.graph.weighted_degrees(edge_weight_property="badvalue")
def test_weight_non_value(self):
"""Test degree count weighted with type error"""
with self.assertRaisesRegexp(TypeError, "unexpected keyword"):
self.graph.weighted_degrees(edge_weight_property="nonvalue")
if __name__ == "__main__":
unittest.main()
| apache-2.0 |
shaochengcheng/hoaxy-network | hnetwork/degree.py | 1 | 8145 | import logging
import networkx as nx
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.ticker import LogLocator
from matplotlib import gridspec
from .data_process import get_data_file, get_out_file, ccdf
from .data_process import nplog
logger = logging.getLogger(__name__)
def build_network(fn):
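    # Builds a weighted, directed graph from an edge-list CSV
    # (from_raw_id, from_screen_name, to_raw_id, to_screen_name, weight),
    # used for both the mention and the retweet networks, and attaches each
    # node's screen_name as a node attribute.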
fn = get_data_file(fn)
df = pd.read_csv(fn)
from_names = df[['from_raw_id', 'from_screen_name']].copy()
from_names.columns = ['raw_id', 'screen_name']
to_names = df[['to_raw_id', 'to_screen_name']].copy()
to_names.columns = ['raw_id', 'screen_name']
names = pd.concat([from_names, to_names], ignore_index=True)
names = names.drop_duplicates()
names = names.set_index('raw_id')['screen_name']
g = nx.from_pandas_dataframe(
df,
source='from_raw_id',
target='to_raw_id',
edge_attr='weight',
create_using=nx.DiGraph())
nx.set_node_attributes(g, name='screen_name', values=names.to_dict())
return g
def all_degrees(g):
return dict(
ki=pd.Series(g.in_degree(), name='ki'),
ko=pd.Series(g.out_degree(), name='ko'),
si=pd.Series(g.in_degree(weight='weight'), name='si'),
so=pd.Series(g.out_degree(weight='weight'), name='so'))
def deg_hub_stat(g, deg, fn, top=10):
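    # For each degree measure in `deg`, sort nodes in descending order, keep the
    # top `top` hubs, attach their screen names, and write all measures side by
    # side to a single CSV.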
fn = get_data_file(fn)
names = pd.Series(nx.get_node_attributes(g, name='screen_name'))
dfs = []
for k, v in deg.items():
deg[k] = v.sort_values(ascending=False)
hub = deg[k].iloc[:top].copy()
hub_sn = names.loc[hub.index]
hub_df = pd.concat([hub, hub_sn], axis=1)
hub_df = hub_df.reset_index(drop=False)
k_raw_id = k + '_raw_id'
k_value = k + '_value'
k_screen_name = k + '_screen_name'
hub_df.columns = [k_raw_id, k_value, k_screen_name]
dfs.append(hub_df)
df = pd.concat(dfs, axis=1)
df.to_csv(fn, index=False)
print(df)
def plot_deg_dist(deg1, deg2, figsize=(8, 6)):
ccdf_deg1 = dict()
ccdf_deg2 = dict()
for k, v in deg1.items():
ccdf_deg1[k] = ccdf(v)
for k, v in deg2.items():
ccdf_deg2[k] = ccdf(v)
titles = (('ki', 'In Degree'), ('ko', 'Out Degree'),
('si', 'Weighted In Degree'), ('so', 'Weighted Out Degree'))
fig, axarr = plt.subplots(2, 2, figsize=figsize)
axarr = axarr.flatten()
for i, tt in enumerate(titles):
k, t = tt
axarr[i].set_title(t)
axarr[i].loglog(
ccdf_deg1[k].index + 1, ccdf_deg1[k].values, label='Claim')
axarr[i].loglog(
ccdf_deg2[k].index + 1, ccdf_deg2[k].values, label='Fact Checking')
fig.tight_layout()
def prepare_deg_heatmap(df, base=2):
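    # Bins the two degree columns of `df` into a 2D histogram with
    # logarithmically spaced bin edges (powers of `base`); degrees are shifted
    # by +1 so that zero-degree nodes fall into the first bin.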
c = df.columns
X = df[c[0]].values + 1
Y = df[c[1]].values + 1
ximax = int(np.ceil(nplog(X.max(), base)))
yimax = int(np.ceil(nplog(Y.max(), base)))
xbins = [np.power(base, i) for i in range(ximax + 1)]
ybins = [np.power(base, i) for i in range(yimax + 1)]
H, xedges, yedges = np.histogram2d(X, Y, bins=[xbins, ybins])
return H, xedges, yedges
def ax_deg_heatmap(ax, X, Y, H, vmin, vmax):
# heatmap
return ax.pcolormesh(
X,
Y,
H.T,
norm=mpl.colors.LogNorm(vmin=vmin, vmax=vmax),
cmap='gnuplot2_r')
def plot_deg_heatmap(deg1, deg2, base=2, figsize=(6, 5)):
fig = plt.figure(figsize=figsize)
fs = 9
gs = gridspec.GridSpec(
2,
3,
wspace=0.15,
hspace=0.25,
width_ratios=[8, 8, 1],
height_ratios=[0.5, 0.5])
axarr = []
axarr.append(fig.add_subplot(gs[0, 0]))
axarr.append(fig.add_subplot(gs[0, 1]))
axarr.append(fig.add_subplot(gs[1, 0]))
axarr.append(fig.add_subplot(gs[1, 1]))
axarr.append(fig.add_subplot(gs[:, 2]))
axarr[0].set_title('Degree (Claim)', fontsize=fs)
    axarr[1].set_title('Weighted Degree (Claim)', fontsize=fs)
axarr[2].set_title('Degree (Fact Checking)', fontsize=fs)
    axarr[3].set_title('Weighted Degree (Fact Checking)', fontsize=fs)
df = []
df.append(pd.concat((deg1['ki'], deg1['ko']), axis=1))
df.append(pd.concat((deg1['si'], deg1['so']), axis=1))
df.append(pd.concat((deg2['ki'], deg2['ko']), axis=1))
df.append(pd.concat((deg2['si'], deg2['so']), axis=1))
X = []
Y = []
XE = []
YE = []
H = []
for d in df:
c = d.columns
X.append(d[c[0]].values + 1)
Y.append(d[c[1]].values + 1)
ximax = int(np.ceil(nplog(max(x.max() for x in X), base)))
yimax = int(np.ceil(nplog(max(y.max() for y in Y), base)))
xbins = [np.power(base, i) for i in range(ximax + 1)]
ybins = [np.power(base, i) for i in range(yimax + 1)]
for i in range(4):
h, xedges, yedges = np.histogram2d(X[i], Y[i], bins=[xbins, ybins])
H.append(h)
XE.append(xedges)
YE.append(yedges)
vmin = min(h.min() for h in H) + 1
vmax = max(h.max() for h in H)
for i in range(4):
xm, ym = np.meshgrid(XE[i], YE[i])
im = axarr[i].pcolormesh(
xm,
ym,
H[i].T,
norm=mpl.colors.LogNorm(vmin=vmin, vmax=vmax),
cmap='gnuplot2_r')
axarr[i].set_xscale('log')
axarr[i].set_yscale('log')
axarr[0].set_xticklabels([])
axarr[0].set_ylabel('Out')
axarr[1].set_xticklabels([])
axarr[1].set_yticklabels([])
axarr[2].set_ylabel('Out')
axarr[2].set_xlabel('In')
axarr[3].set_yticklabels([])
axarr[3].set_xlabel('In')
axarr[4].axis('off')
# axarr[3].xaxis.set_major_locator(LogLocator(base=10))
# axarr[3].tick_params(axis='x', which='minor', length=4, color='r')
plt.colorbar(im, ax=axarr[4], orientation='vertical', fraction=0.8)
# fig.tight_layout()
def mention_deg_dist(fn1='mention.20170921.fn.csv',
fn2='mention.20170921.fc.csv',
ofn='mention-degree-dist.pdf',
ofn1='mention.20170921.hub.fn.csv',
ofn2='mention.20170921.hub.fc.csv',
top=10,
figsize=(8, 6)):
ofn = get_out_file(ofn)
g1 = build_network(fn1)
g2 = build_network(fn2)
deg1 = all_degrees(g1)
deg2 = all_degrees(g2)
print('Mention of fake news:\n')
deg_hub_stat(g1, deg1, ofn1, top=top)
print('Mention of fact checking\n')
deg_hub_stat(g2, deg2, ofn2, top=top)
plot_deg_dist(deg1, deg2, figsize)
plt.savefig(ofn)
def retweet_deg_dist(fn1='retweet.20170921.fn.csv',
fn2='retweet.20170921.fc.csv',
ofn='retweet-degree-dist.pdf',
ofn1='retweet.20170921.hub.fn.csv',
ofn2='retweet.20170921.hub.fc.csv',
top=10,
figsize=(8, 6)):
ofn = get_out_file(ofn)
g1 = build_network(fn1)
g2 = build_network(fn2)
deg1 = all_degrees(g1)
deg2 = all_degrees(g2)
print('Retweet of fake news:\n')
deg_hub_stat(g1, deg1, ofn1, top=top)
print('Retweet of fact checking\n')
deg_hub_stat(g2, deg2, ofn2, top=top)
plot_deg_dist(deg1, deg2, figsize)
plt.savefig(ofn)
def mention_deg_heatmap(fn1='mention.20170921.fn.csv',
fn2='mention.20170921.fc.csv',
ofn='mention-degree-heatmap.pdf',
base=2,
figsize=(6, 5)):
ofn = get_out_file(ofn)
g1 = build_network(fn1)
g2 = build_network(fn2)
deg1 = all_degrees(g1)
deg2 = all_degrees(g2)
plot_deg_heatmap(deg1, deg2, base=base, figsize=figsize)
plt.savefig(ofn)
def retweet_deg_heatmap(fn1='retweet.20170921.fn.csv',
fn2='retweet.20170921.fc.csv',
ofn='retweet-degree-heatmap.pdf',
base=2,
figsize=(6, 5)):
ofn = get_out_file(ofn)
g1 = build_network(fn1)
g2 = build_network(fn2)
deg1 = all_degrees(g1)
deg2 = all_degrees(g2)
plot_deg_heatmap(deg1, deg2, base=base, figsize=figsize)
plt.savefig(ofn)
| gpl-3.0 |
penguinscontrol/Spinal-Cord-Modeling | ClarkesNetwork/test1_plot.py | 1 | 5699 | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 07 11:08:41 2016
@author: Radu
"""
import numpy
import simrun
import time
from ballandstick_clarke_new import ClarkeRelay
#from neuron import h,gui
from neuron import h
h.load_file('stdgui.hoc')
#h.load_file("stdrun.hoc")
from math import sin, cos, pi
from matplotlib import pyplot
from itertools import izip
from neuronpy.graphics import spikeplot
from neuronpy.util import spiketrain
def tweak_leak(cells, Ncells):
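    # Rebalances the leak reversal potentials at initialization: el_clarke (soma)
    # and e_pas (dendrite) are set so that the net membrane current is zero at
    # the current voltage, which presumably holds each cell at its resting
    # potential when the simulation starts.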
#h.t = -1e10
#dtsav = h.dt
#h.dt = 1e9
#if cvode is on, turn it off to do large fixed step
#temp = h.cvode.active()
#if temp!=0:
# h.cvode.active(0)
#while t<-1e9:
# h.fadvance()
# restore cvode if necessary
#if temp!=0:
# h.cvode.active(1)
#h.dt = dtsav
#h.t = 0
for a in range(Ncells):
#cells[a].soma.e_pas =\
#(cells[a].soma.ina + cells[a].soma.ik\
#+ cells[a].soma.icaN_clarke + cells[a].soma.icaL_clarke\
#+ cells[a].soma.ikca_clarke + cells[a].soma.il_clarke\
#+ cells[a].soma.ikrect_clarke\
#+ cells[a].soma.g_pas*cells[a].soma.v) / cells[a].soma.g_pas
#
#cells[a].dend.e_pas =\
#(cells[a].dend.g_pas * cells[a].dend.v) / cells[a].dend.g_pas
cells[a].soma.el_clarke =\
(cells[a].soma.ina_clarke + cells[a].soma.ikrect_clarke\
+ cells[a].soma.icaN_clarke + cells[a].soma.icaL_clarke\
+ cells[a].soma.ikca_clarke + cells[a].soma.inap_clarke\
+ cells[a].soma.gl_clarke*cells[a].soma.v) / cells[a].soma.gl_clarke
cells[a].dend.e_pas =\
(cells[a].dend.g_pas * cells[a].dend.v) / cells[a].dend.g_pas
print 'hello from inside init'
#h('t = -1e6')
#h('dtsav = dt')
#h('dt = 1')
#h('temp = cvode.active()')
#h('if (temp!=0) { cvode.active(0) }')
#h('while (t<-0.5e6) { fadvance() }')
#h('if (temp!=0) { cvode.active(1) }')
#h('dt = dtsav')
#h('t = 0')
cells = []
N = 3
r = 50 # Radius of cell locations from origin (0,0,0) in microns
for i in range(N):
cell = ClarkeRelay()
# When cells are created, the soma location is at (0,0,0) and
# the dendrite extends along the X-axis.
# First, at the origin, rotate about Z.
cell.rotateZ(i*2*pi/N)
# Then reposition
x_loc = sin(i * 2 * pi / N) * r
y_loc = cos(i * 2 * pi / N) * r
cell.set_position(x_loc, y_loc, 0)
cells.append(cell)
cellSurface = h.area(0.5, sec = cells[0].soma)
h.celsius = 37
#shape_window = h.PlotShape()
#shape_window.exec_menu('Show Diam')
#stim = h.NetStim() # Make a new stimulator
# Attach it to a synapse in the middle of the dendrite
# of the first cell in the network. (Named 'syn_' to avoid
# being overwritten with the 'syn' var assigned later.)
#syn_ = h.ExpSyn(cells[0].dend(0.5))
#syn_.tau = 10
#stim.number = 1
#stim.start = 9
#ncstim = h.NetCon(stim, syn_)
#ncstim.delay = 1
#ncstim.weight[0] = 0.0075 # NetCon weight is a vector.
#Stims and clamps
stim = h.IClamp(cells[0].soma(0.5))
stim.delay = 200
stim.dur = 2
#clamp = h.SEClamp(cells[0].soma(0.5))
#clamp.dur1 = 1e9
#clamp.amp1 = -65
#clamp.rs = 1e2
stim2 = h.IClamp(cells[0].soma(0.5))
stim2.delay = 500
stim2.dur = 300
stim2.amp = 0
stim.amp = 5e-1-stim2.amp
#stim.amp = 0
soma_v_vec, soma_m_vec, soma_h_vec, soma_n_vec,\
soma_inap_vec, soma_idap_vec, soma_ical_vec,\
soma_ican_vec, soma_ikca_vec, soma_ina_vec, soma_ikrect_vec,\
dend_v_vec, t_vec\
= simrun.set_recording_vectors(cells[0])
# Set recording vectors
syn_i_vec = h.Vector()
syn_i_vec.record(stim._ref_i)
#h('v_init = 89')
h.v_init = -65
fih = h.FInitializeHandler(2,(tweak_leak,(cells,N)))
# Draw
fig1 = pyplot.figure(figsize=(4,4))
ax1a = fig1.add_subplot(1,1,1)
#ax1b = ax1a.twinx()
#step = 2.5e-2 #CaN
#step = 1e-5 #CaL
#step = 5e-2 #KCa
#step = 4e-5 #napbar
#step = 5 #tau_mc
#step = 1 #tau_hc
#step = 1e-3 #dap weight
#step = 5e-2 # gkrect
#step = 0.01 #Na
#step = 5 #tau_mp_bar
#step = 1 # tau_n_bar
step = 1e-2 #stim2
num_steps = 1
h.dt = 0.01
for i in numpy.linspace(0, step*num_steps, num_steps):
#stim2.amp += step
for a in range(N):
pass
#cells[a].soma.gcaN_clarke = cells[a].soma.gcaN_clarke + step
#cells[a].soma.gcaL_clarke = cells[a].soma.gcaL_clarke + step
#cells[a].soma.gcak_clarke = cells[a].soma.gcak_clarke + step
#cells[a].soma.gnapbar_clarke = cells[a].soma.gnapbar_clarke + step
#cells[a].soma.tau_mc_clarke = cells[a].soma.tau_mc_clarke + step
#cells[a].soma.tau_hc_clarke = cells[a].soma.tau_hc_clarke + step
#cells[a].dap_nc_.weight[0] = cells[a].dap_nc_.weight[0] +step
#cells[a].soma.gkrect_clarke = cells[a].soma.gkrect_clarke + step
#cells[a].soma.tau_mp_bar_clarke = cells[a].soma.tau_mp_bar_clarke + step
#cells[a].soma.tau_n_bar_clarke = cells[a].soma.tau_n_bar_clarke + step
#cells[a].soma.gnabar_clarke = cells[a].soma.gnabar_clarke + step
simrun.simulate()
time.sleep(1)
lWid = 1
#kca_plot = ax1b.plot(t_vec, soma_ikca_vec, color='red', lw=lWid)
soma_idap_mAcm2 = numpy.array(soma_idap_vec.to_python())
soma_idap_mAcm2 = 100*soma_idap_mAcm2/(cellSurface)
#dap_plot = ax1b.plot(numpy.array(t_vec.to_python()),\
# soma_idap_mAcm2, color='orange', lw=lWid)
lWid = 3
soma_plot = ax1a.plot(t_vec, soma_v_vec, color='black', lw=lWid)
#ax1a.legend(soma_plot + kca_plot + dap_plot,
# ['Membrane Voltage', 'I_KCa', 'I_DAP'])
#ax1a.legend(soma_plot,
# ['Membrane Voltage'])
ax1a.set_ylabel('mV')
#ax1b.set_ylabel('mA/cm2')
ax1a.set_xlim([150,300])
ax1a.set_ylim([-72,-55])
pyplot.show()
#h.quit() | gpl-2.0 |
TimDettmers/dlearndb | python_util/gpumodel.py | 2 | 17749 | # Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ----------------------------------------------------------------------------
# Copyright 2014 Nervana Systems Inc. All rights reserved.
#
# * added whitening code
# * ability to save weights to disk, dump timing information
# ----------------------------------------------------------------------------
import numpy as n
import os
from time import time, asctime, localtime, strftime
from util import *
from data import *
from options import *
from math import ceil, floor, sqrt
from data import DataProvider, dp_types
import sys
import shutil
import platform
from os import linesep as NL
from threading import Thread
import tempfile as tf
import numpy as np
from ipdb import set_trace as trace
class ModelStateException(Exception):
pass
class CheckpointWriter(Thread):
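    # Background thread that pickles a checkpoint dictionary atomically
    # (write to a temporary file, then rename into place) and, if requested,
    # deletes older checkpoints from the same directory.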
def __init__(self, path, dic, remove_old=True):
Thread.__init__(self)
self.path = path
self.dic = dic
self.remove_old = remove_old
def run(self):
save_dir = os.path.dirname(self.path)
save_file = os.path.basename(self.path)
# Write checkpoint to temporary filename
tmpfile = tf.NamedTemporaryFile(dir=os.path.dirname(save_dir), delete=False)
pickle(tmpfile, self.dic) # Also closes tf
# Move it to final filename
os.rename(tmpfile.name, self.path)
# Delete old checkpoints
if (self.remove_old):
for f in os.listdir(save_dir):
if f != save_file:
os.remove(os.path.join(save_dir, f))
# GPU Model interface
class IGPUModel:
def __init__(self, model_name, op, load_dic, filename_options=[], dp_params={}):
# these are input parameters
self.model_name = model_name
self.op = op
self.options = op.options
self.load_dic = load_dic
self.filename_options = filename_options
self.dp_params = dp_params
self.device_ids = self.op.get_value('gpu')
self.fill_excused_options()
self.checkpoint_writer = None
self.hackWhiten = False
if self.hackWhiten: print "WATCH OUT -- HackWhitening !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!"
#assert self.op.all_values_given()
for o in op.get_options_list():
setattr(self, o.name, o.value)
self.loaded_from_checkpoint = load_dic is not None
# these are things that the model must remember but they're not input parameters
if self.loaded_from_checkpoint:
self.model_state = load_dic["model_state"]
self.save_file = self.options["save_file_override"].value if self.options["save_file_override"].value_given else self.options['load_file'].value
if not os.path.isdir(self.save_file) and os.path.exists(self.save_file):
self.save_file = os.path.dirname(self.save_file)
# print self.options["save_file_override"].value, self.save_file
else:
self.model_state = {}
self.save_file = self.options["save_file_override"].value if self.options["save_file_override"].value_given else os.path.join(self.options['save_path'].value, model_name + "_" + '_'.join(['%s_%s' % (char, self.options[opt].get_str_value()) for opt, char in filename_options]) + '_' + strftime('%Y-%m-%d_%H.%M.%S'))
self.model_state["train_outputs"] = []
self.model_state["test_outputs"] = []
self.model_state["epoch"] = 1
self.model_state["batchnum"] = self.train_batch_range[0]
# print self.save_file
self.init_data_providers()
if load_dic:
self.train_data_provider.advance_batch()
        # model state often requires knowledge of the data provider, so it's initialized after
try:
self.init_model_state()
except ModelStateException, e:
print e
sys.exit(1)
for var, val in self.model_state.iteritems():
setattr(self, var, val)
self.import_model()
self.init_model_lib()
def import_model(self):
print "========================="
print "Importing %s C++ module" % ('_' + self.model_name)
self.libmodel = __import__('_' + self.model_name)
def fill_excused_options(self):
pass
def init_data_providers(self):
self.dp_params['convnet'] = self
try:
self.test_data_provider = DataProvider.get_instance(self.data_path, self.test_batch_range,
type=self.dp_type, dp_params=self.dp_params, test=True)
self.train_data_provider = DataProvider.get_instance(self.data_path, self.train_batch_range,
self.model_state["epoch"], self.model_state["batchnum"],
type=self.dp_type, dp_params=self.dp_params, test=False)
except DataProviderException, e:
print "Unable to create data provider: %s" % e
self.print_data_providers()
sys.exit()
def init_model_state(self):
pass
def init_model_lib(self):
pass
def start(self):
if self.test_only:
self.test_outputs += [self.get_test_error()]
self.print_test_results()
else:
self.train()
self.cleanup()
if self.force_save:
self.save_state().join()
sys.exit(0)
def train(self):
print "========================="
print "Training %s" % self.model_name
self.op.print_values()
print "========================="
self.print_model_state()
print "Running on CUDA device(s) %s" % ", ".join("%d" % d for d in self.device_ids)
print "Current time: %s" % asctime(localtime())
print "Saving checkpoints to %s" % self.save_file
next_data = self.get_next_batch()
print "data range %2.2f to %2.2f and shape %d %d" % (
n.min(next_data[2][0].flatten()), n.max(next_data[2][0].flatten()),
next_data[2][0].shape[0], next_data[2][0].shape[1])
print "========================="
# woot woot, some hacky whitening code:
if self.hackWhiten:
from ipdb import set_trace as trace
import matplotlib.pyplot as plt
import numpy as np
raw = next_data[2][0] # (1728, 10000) whiten on just 10k, whatever.
raw = raw - raw.mean(1)[:,np.newaxis]
E,D = np.linalg.eigh(np.cov(raw))
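            # ZCA-style whitening matrix: W = D * diag((E + 1)^{-1/2}) * D^T,
            # where E, D are the eigenvalues/eigenvectors of the data covariance;
            # the +1 acts as a regularizer for small eigenvalues.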
self.wM = np.dot(D, np.dot(np.diag((E+1)**-.5), D.T)).astype(np.float32)
print "whitening the data"
#trace()
white = np.dot(self.wM, raw)
next_data[2][0] = 10*white
print "white data range %2.2f to %2.2f and shape %d %d" % (
n.min(next_data[2][0].flatten()), n.max(next_data[2][0].flatten()),
next_data[2][0].shape[0], next_data[2][0].shape[1])
# whitened color images are weird to visualize, do /10+.5 to go to 0-1 space
while self.epoch <= self.num_epochs:
data = next_data
self.epoch, self.batchnum = data[0], data[1]
self.print_iteration()
sys.stdout.flush()
compute_time_py = time()
self.start_batch(data)
# load the next batch while the current one is computing
next_data = self.get_next_batch()
#trace()
if self.hackWhiten: next_data[2][0] = 10*np.dot(self.wM, next_data[2][0]) # WOOOOOOOOOOOT HACK!
batch_output = self.finish_batch()
self.train_outputs += [batch_output]
self.print_train_results()
            # leads to a lot of dumping if done every batch
# self.sync_with_host()
# self.save_weights()
if self.get_num_batches_done() % self.testing_freq == 0:
self.sync_with_host()
self.test_outputs += [self.get_test_error()]
self.print_test_results()
self.print_test_status()
self.conditional_save()
self.print_elapsed_time(time() - compute_time_py)
def cleanup(self):
if self.checkpoint_writer is not None:
self.checkpoint_writer.join()
self.checkpoint_writer = None
def print_model_state(self):
pass
def get_num_batches_done(self):
return len(self.train_batch_range) * (self.epoch - 1) + self.batchnum - self.train_batch_range[0] + 1
def get_next_batch(self, train=True):
dp = self.train_data_provider
if not train:
dp = self.test_data_provider
return self.parse_batch_data(dp.get_next_batch(), train=train)
def parse_batch_data(self, batch_data, train=True):
return batch_data[0], batch_data[1], batch_data[2]['data']
def start_batch(self, batch_data, train=True):
self.libmodel.startBatch(batch_data[2], not train)
def finish_batch(self):
return self.libmodel.finishBatch()
def print_iteration(self):
print "\t%d.%d..." % (self.epoch, self.batchnum),
def print_elapsed_time(self, compute_time_py):
print "(%.3f sec)" % (compute_time_py)
def print_train_results(self):
batch_error = self.train_outputs[-1][0]
if not (batch_error > 0 and batch_error < 2e20):
print "Crazy train error: %.6f" % batch_error
self.cleanup()
print "Train error: %.6f " % (batch_error),
def print_test_results(self):
batch_error = self.test_outputs[-1][0]
print "%s\t\tTest error: %.6f" % (NL, batch_error),
def print_test_status(self):
status = (len(self.test_outputs) == 1 or self.test_outputs[-1][0] < self.test_outputs[-2][0]) and "ok" or "WORSE"
print status,
def sync_with_host(self):
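        # Waits for any in-flight checkpoint write to finish, then asks the C++
        # module to copy model parameters from the GPU back into host memory so
        # the latest weights are available for saving.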
if self.checkpoint_writer is not None:
self.checkpoint_writer.join()
self.checkpoint_writer = None
self.libmodel.syncWithHost()
def conditional_save(self):
batch_error = self.test_outputs[-1][0]
if batch_error > 0 and batch_error < self.max_test_err:
self.save_state()
else:
print "\tTest error > %g, not saving." % self.max_test_err,
def aggregate_test_outputs(self, test_outputs):
test_error = tuple([sum(t[r] for t in test_outputs) / (1 if self.test_one else len(self.test_batch_range)) for r in range(len(test_outputs[-1]))])
return test_error
def get_test_error(self):
next_data = self.get_next_batch(train=False)
if self.hackWhiten: next_data[2][0] = 10*np.dot(self.wM, next_data[2][0])
test_outputs = []
while True:
data = next_data
start_time_test = time()
self.start_batch(data, train=False)
load_next = (not self.test_one or self.test_only) and data[1] < self.test_batch_range[-1]
if load_next: # load next batch
next_data = self.get_next_batch(train=False)
if self.hackWhiten: next_data[2][0] = 10*np.dot(self.wM, next_data[2][0])
test_outputs += [self.finish_batch()]
if self.test_only: # Print the individual batch results for safety
print "batch %d: %s" % (data[1], str(test_outputs[-1])),
self.print_elapsed_time(time() - start_time_test)
if not load_next:
break
sys.stdout.flush()
return self.aggregate_test_outputs(test_outputs)
def set_var(self, var_name, var_val):
setattr(self, var_name, var_val)
self.model_state[var_name] = var_val
return var_val
def get_var(self, var_name):
return self.model_state[var_name]
def has_var(self, var_name):
return var_name in self.model_state
def save_state(self):
for att in self.model_state:
if hasattr(self, att):
self.model_state[att] = getattr(self, att)
dic = {"model_state": self.model_state,
"op": self.op}
checkpoint_file = "%d.%d" % (self.epoch, self.batchnum)
checkpoint_file_full_path = os.path.join(self.save_file, checkpoint_file)
if not os.path.exists(self.save_file):
os.makedirs(self.save_file)
assert self.checkpoint_writer is None
self.checkpoint_writer = CheckpointWriter(checkpoint_file_full_path, dic)
self.checkpoint_writer.start()
print "-------------------------------------------------------"
print "Saved checkpoint to %s" % self.save_file
print "=======================================================",
return self.checkpoint_writer
def save_weights(self):
checkpoint_file = "wts.%d.%d" % (self.epoch, self.batchnum)
checkpoint_file_full_path = os.path.join(self.save_file, checkpoint_file)
dic = {}
        for ln in ['conv1', 'conv2', 'fc10']:
            # index by the loop variable so each layer's own weights are saved
            dic[ln] = [self.model_state['layers'][ln]['weights'][0], self.model_state['layers'][ln]['weightsInc'][0]]
if not os.path.exists(self.save_file):
os.makedirs(self.save_file)
tmpfile = tf.NamedTemporaryFile(dir=self.save_file, delete=False)
pickle(tmpfile, dic) # Also closes tf
# Move it to final filename
os.rename(tmpfile.name, checkpoint_file_full_path)
def get_progress(self):
num_batches_total = self.num_epochs * len(self.train_batch_range)
return min(1.0, max(0.0, float(self.get_num_batches_done()-1) / num_batches_total))
@staticmethod
def load_checkpoint(load_dir):
if os.path.isdir(load_dir):
return unpickle(os.path.join(load_dir, sorted(os.listdir(load_dir), key=alphanum_key)[-1]))
return unpickle(load_dir)
@staticmethod
def get_options_parser():
op = OptionsParser()
op.add_option("load-file", "load_file", StringOptionParser, "Load file", default="", excuses=OptionsParser.EXCUSE_ALL)
op.add_option("save-path", "save_path", StringOptionParser, "Save path", excuses=['save_file_override'])
op.add_option("save-file", "save_file_override", StringOptionParser, "Save file override", excuses=['save_path'])
op.add_option("train-range", "train_batch_range", RangeOptionParser, "Data batch range: training")
op.add_option("test-range", "test_batch_range", RangeOptionParser, "Data batch range: testing")
op.add_option("data-provider", "dp_type", StringOptionParser, "Data provider", default="default")
op.add_option("test-freq", "testing_freq", IntegerOptionParser, "Testing frequency", default=25)
op.add_option("epochs", "num_epochs", IntegerOptionParser, "Number of epochs", default=500)
op.add_option("data-path", "data_path", StringOptionParser, "Data path")
op.add_option("max-test-err", "max_test_err", FloatOptionParser, "Maximum test error for saving")
op.add_option("test-only", "test_only", BooleanOptionParser, "Test and quit?", default=0)
op.add_option("test-one", "test_one", BooleanOptionParser, "Test on one batch at a time?", default=1)
op.add_option("force-save", "force_save", BooleanOptionParser, "Force save before quitting", default=0)
op.add_option("gpu", "gpu", ListOptionParser(IntegerOptionParser), "GPU override")
return op
@staticmethod
def print_data_providers():
print "Available data providers:"
for dp, desc in dp_types.iteritems():
print " %s: %s" % (dp, desc)
@staticmethod
def parse_options(op, argslist=sys.argv[1:]):
try:
load_dic = None
options = op.parse(opargslist=argslist)
load_location = None
# print options['load_file'].value_given, options['save_file_override'].value_given
# print options['save_file_override'].value
if options['load_file'].value_given:
load_location = options['load_file'].value
elif options['save_file_override'].value_given and os.path.exists(options['save_file_override'].value):
load_location = options['save_file_override'].value
if load_location is not None:
load_dic = IGPUModel.load_checkpoint(load_location)
old_op = load_dic["op"]
old_op.merge_from(op)
op = old_op
op.eval_expr_defaults()
return op, load_dic
except OptionMissingException, e:
print e
op.print_usage()
except OptionException, e:
print e
except UnpickleError, e:
print "Error loading checkpoint:"
print e
sys.exit()
| apache-2.0 |
bigdataelephants/scikit-learn | examples/decomposition/plot_ica_blind_source_separation.py | 349 | 2228 | """
=====================================
Blind source separation using FastICA
=====================================
An example of estimating sources from noisy data.
:ref:`ICA` is used to estimate sources given noisy measurements.
Imagine 3 instruments playing simultaneously and 3 microphones
recording the mixed signals. ICA is used to recover the sources
ie. what is played by each instrument. Importantly, PCA fails
at recovering our `instruments` since the related signals reflect
non-Gaussian processes.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from sklearn.decomposition import FastICA, PCA
###############################################################################
# Generate sample data
np.random.seed(0)
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: saw tooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
S /= S.std(axis=0) # Standardize data
# Mix data
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
# Compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Reconstruct signals
A_ = ica.mixing_ # Get estimated mixing matrix
# We can `prove` that the ICA model applies by reverting the unmixing.
assert np.allclose(X, np.dot(S_, A_.T) + ica.mean_)
# For comparison, compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # Reconstruct signals based on orthogonal components
###############################################################################
# Plot results
plt.figure()
models = [X, S, S_, H]
names = ['Observations (mixed signal)',
'True Sources',
'ICA recovered signals',
'PCA recovered signals']
colors = ['red', 'steelblue', 'orange']
for ii, (model, name) in enumerate(zip(models, names), 1):
plt.subplot(4, 1, ii)
plt.title(name)
for sig, color in zip(model.T, colors):
plt.plot(sig, color=color)
plt.subplots_adjust(0.09, 0.04, 0.94, 0.94, 0.26, 0.46)
plt.show()
| bsd-3-clause |
ngoix/OCRF | sklearn/tree/tests/test_tree.py | 1 | 52385 | """
Testing for the tree module (sklearn.tree).
"""
import pickle
from functools import partial
from itertools import product
import platform
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse import csr_matrix
from scipy.sparse import coo_matrix
from sklearn.random_projection import sparse_random_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_in
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_less_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import raises
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.validation import check_random_state
from sklearn.exceptions import NotFittedError
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import DecisionTreeRegressor
from sklearn.tree import ExtraTreeClassifier
from sklearn.tree import ExtraTreeRegressor
from sklearn import tree
from sklearn.tree.tree import SPARSE_SPLITTERS
from sklearn.tree._tree import TREE_LEAF
from sklearn import datasets
from sklearn.utils import compute_sample_weight
CLF_CRITERIONS = ("oneclassgini", "gini", "entropy")
REG_CRITERIONS = ("mse", )
CLF_TREES = {
"DecisionTreeClassifier": DecisionTreeClassifier,
"Presort-DecisionTreeClassifier": partial(DecisionTreeClassifier,
presort=True),
"ExtraTreeClassifier": ExtraTreeClassifier,
}
REG_TREES = {
"DecisionTreeRegressor": DecisionTreeRegressor,
"Presort-DecisionTreeRegressor": partial(DecisionTreeRegressor,
presort=True),
"ExtraTreeRegressor": ExtraTreeRegressor,
}
ALL_TREES = dict()
ALL_TREES.update(CLF_TREES)
ALL_TREES.update(REG_TREES)
SPARSE_TREES = ["DecisionTreeClassifier", "DecisionTreeRegressor",
"ExtraTreeClassifier", "ExtraTreeRegressor"]
X_small = np.array([
[0, 0, 4, 0, 0, 0, 1, -14, 0, -4, 0, 0, 0, 0, ],
[0, 0, 5, 3, 0, -4, 0, 0, 1, -5, 0.2, 0, 4, 1, ],
[-1, -1, 0, 0, -4.5, 0, 0, 2.1, 1, 0, 0, -4.5, 0, 1, ],
[-1, -1, 0, -1.2, 0, 0, 0, 0, 0, 0, 0.2, 0, 0, 1, ],
[-1, -1, 0, 0, 0, 0, 0, 3, 0, 0, 0, 0, 0, 1, ],
[-1, -2, 0, 4, -3, 10, 4, 0, -3.2, 0, 4, 3, -4, 1, ],
[2.11, 0, -6, -0.5, 0, 11, 0, 0, -3.2, 6, 0.5, 0, -3, 1, ],
[2.11, 0, -6, -0.5, 0, 11, 0, 0, -3.2, 6, 0, 0, -2, 1, ],
[2.11, 8, -6, -0.5, 0, 11, 0, 0, -3.2, 6, 0, 0, -2, 1, ],
[2.11, 8, -6, -0.5, 0, 11, 0, 0, -3.2, 6, 0.5, 0, -1, 0, ],
[2, 8, 5, 1, 0.5, -4, 10, 0, 1, -5, 3, 0, 2, 0, ],
[2, 0, 1, 1, 1, -1, 1, 0, 0, -2, 3, 0, 1, 0, ],
[2, 0, 1, 2, 3, -1, 10, 2, 0, -1, 1, 2, 2, 0, ],
[1, 1, 0, 2, 2, -1, 1, 2, 0, -5, 1, 2, 3, 0, ],
[3, 1, 0, 3, 0, -4, 10, 0, 1, -5, 3, 0, 3, 1, ],
[2.11, 8, -6, -0.5, 0, 1, 0, 0, -3.2, 6, 0.5, 0, -3, 1, ],
[2.11, 8, -6, -0.5, 0, 1, 0, 0, -3.2, 6, 1.5, 1, -1, -1, ],
[2.11, 8, -6, -0.5, 0, 10, 0, 0, -3.2, 6, 0.5, 0, -1, -1, ],
[2, 0, 5, 1, 0.5, -2, 10, 0, 1, -5, 3, 1, 0, -1, ],
[2, 0, 1, 1, 1, -2, 1, 0, 0, -2, 0, 0, 0, 1, ],
[2, 1, 1, 1, 2, -1, 10, 2, 0, -1, 0, 2, 1, 1, ],
[1, 1, 0, 0, 1, -3, 1, 2, 0, -5, 1, 2, 1, 1, ],
[3, 1, 0, 1, 0, -4, 1, 0, 1, -2, 0, 0, 1, 0, ]])
y_small = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0,
0, 0]
y_small_reg = [1.0, 2.1, 1.2, 0.05, 10, 2.4, 3.1, 1.01, 0.01, 2.98, 3.1, 1.1,
0.0, 1.2, 2, 11, 0, 0, 4.5, 0.201, 1.06, 0.9, 0]
# toy sample
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y = [-1, -1, -1, 1, 1, 1]
T = [[-1, -1], [2, 2], [3, 2]]
true_result = [-1, 1, 1]
# also load the iris dataset
# and randomly permute it
iris = datasets.load_iris()
rng = np.random.RandomState(1)
perm = rng.permutation(iris.target.size)
iris.data = iris.data[perm]
iris.target = iris.target[perm]
# also load the boston dataset
# and randomly permute it
boston = datasets.load_boston()
perm = rng.permutation(boston.target.size)
boston.data = boston.data[perm]
boston.target = boston.target[perm]
digits = datasets.load_digits()
perm = rng.permutation(digits.target.size)
digits.data = digits.data[perm]
digits.target = digits.target[perm]
random_state = check_random_state(0)
X_multilabel, y_multilabel = datasets.make_multilabel_classification(
random_state=0, n_samples=30, n_features=10)
X_sparse_pos = random_state.uniform(size=(20, 5))
X_sparse_pos[X_sparse_pos <= 0.8] = 0.
y_random = random_state.randint(0, 4, size=(20, ))
X_sparse_mix = sparse_random_matrix(20, 10, density=0.25, random_state=0)
DATASETS = {
"iris": {"X": iris.data, "y": iris.target},
"boston": {"X": boston.data, "y": boston.target},
"digits": {"X": digits.data, "y": digits.target},
"toy": {"X": X, "y": y},
"clf_small": {"X": X_small, "y": y_small},
"reg_small": {"X": X_small, "y": y_small_reg},
"multilabel": {"X": X_multilabel, "y": y_multilabel},
"sparse-pos": {"X": X_sparse_pos, "y": y_random},
"sparse-neg": {"X": - X_sparse_pos, "y": y_random},
"sparse-mix": {"X": X_sparse_mix, "y": y_random},
"zeros": {"X": np.zeros((20, 3)), "y": y_random}
}
for name in DATASETS:
DATASETS[name]["X_sparse"] = csc_matrix(DATASETS[name]["X"])
def assert_tree_equal(d, s, message):
assert_equal(s.node_count, d.node_count,
"{0}: inequal number of node ({1} != {2})"
"".format(message, s.node_count, d.node_count))
assert_array_equal(d.children_right, s.children_right,
message + ": inequal children_right")
assert_array_equal(d.children_left, s.children_left,
message + ": inequal children_left")
external = d.children_right == TREE_LEAF
internal = np.logical_not(external)
assert_array_equal(d.feature[internal], s.feature[internal],
message + ": inequal features")
assert_array_equal(d.threshold[internal], s.threshold[internal],
message + ": inequal threshold")
assert_array_equal(d.n_node_samples.sum(), s.n_node_samples.sum(),
message + ": inequal sum(n_node_samples)")
assert_array_equal(d.n_node_samples, s.n_node_samples,
message + ": inequal n_node_samples")
assert_almost_equal(d.impurity, s.impurity,
err_msg=message + ": inequal impurity")
assert_array_almost_equal(d.value[external], s.value[external],
err_msg=message + ": inequal value")
def test_classification_toy():
# Check classification on a toy dataset.
for name, Tree in CLF_TREES.items():
clf = Tree(random_state=0)
clf.fit(X, y)
assert_array_equal(clf.predict(T), true_result,
"Failed with {0}".format(name))
clf = Tree(max_features=1, random_state=1)
clf.fit(X, y)
assert_array_equal(clf.predict(T), true_result,
"Failed with {0}".format(name))
def test_weighted_classification_toy():
# Check classification on a weighted toy dataset.
for name, Tree in CLF_TREES.items():
clf = Tree(random_state=0)
clf.fit(X, y, sample_weight=np.ones(len(X)))
assert_array_equal(clf.predict(T), true_result,
"Failed with {0}".format(name))
clf.fit(X, y, sample_weight=np.ones(len(X)) * 0.5)
assert_array_equal(clf.predict(T), true_result,
"Failed with {0}".format(name))
def test_regression_toy():
# Check regression on a toy dataset.
for name, Tree in REG_TREES.items():
reg = Tree(random_state=1)
reg.fit(X, y)
assert_almost_equal(reg.predict(T), true_result,
err_msg="Failed with {0}".format(name))
clf = Tree(max_features=1, random_state=1)
clf.fit(X, y)
        assert_almost_equal(clf.predict(T), true_result,
err_msg="Failed with {0}".format(name))
def test_xor():
# Check on a XOR problem
y = np.zeros((10, 10))
y[:5, :5] = 1
y[5:, 5:] = 1
gridx, gridy = np.indices(y.shape)
X = np.vstack([gridx.ravel(), gridy.ravel()]).T
y = y.ravel()
for name, Tree in CLF_TREES.items():
clf = Tree(random_state=0)
clf.fit(X, y)
assert_equal(clf.score(X, y), 1.0,
"Failed with {0}".format(name))
clf = Tree(random_state=0, max_features=1)
clf.fit(X, y)
assert_equal(clf.score(X, y), 1.0,
"Failed with {0}".format(name))
def test_iris():
# Check consistency on dataset iris.
for (name, Tree), criterion in product(CLF_TREES.items(), CLF_CRITERIONS):
clf = Tree(criterion=criterion, random_state=0)
clf.fit(iris.data, iris.target)
score = accuracy_score(clf.predict(iris.data), iris.target)
assert_greater(score, 0.9,
"Failed with {0}, criterion = {1} and score = {2}"
"".format(name, criterion, score))
clf = Tree(criterion=criterion, max_features=2, random_state=0)
clf.fit(iris.data, iris.target)
score = accuracy_score(clf.predict(iris.data), iris.target)
assert_greater(score, 0.5,
"Failed with {0}, criterion = {1} and score = {2}"
"".format(name, criterion, score))
def test_boston():
# Check consistency on dataset boston house prices.
for (name, Tree), criterion in product(REG_TREES.items(), REG_CRITERIONS):
reg = Tree(criterion=criterion, random_state=0)
reg.fit(boston.data, boston.target)
score = mean_squared_error(boston.target, reg.predict(boston.data))
assert_less(score, 1,
"Failed with {0}, criterion = {1} and score = {2}"
"".format(name, criterion, score))
# using fewer features reduces the learning ability of this tree,
# but reduces training time.
reg = Tree(criterion=criterion, max_features=6, random_state=0)
reg.fit(boston.data, boston.target)
score = mean_squared_error(boston.target, reg.predict(boston.data))
assert_less(score, 2,
"Failed with {0}, criterion = {1} and score = {2}"
"".format(name, criterion, score))
def test_probability():
# Predict probabilities using DecisionTreeClassifier.
for name, Tree in CLF_TREES.items():
clf = Tree(max_depth=1, max_features=1, random_state=42)
clf.fit(iris.data, iris.target)
prob_predict = clf.predict_proba(iris.data)
assert_array_almost_equal(np.sum(prob_predict, 1),
np.ones(iris.data.shape[0]),
err_msg="Failed with {0}".format(name))
assert_array_equal(np.argmax(prob_predict, 1),
clf.predict(iris.data),
err_msg="Failed with {0}".format(name))
assert_almost_equal(clf.predict_proba(iris.data),
np.exp(clf.predict_log_proba(iris.data)), 8,
err_msg="Failed with {0}".format(name))
def test_arrayrepr():
# Check the array representation.
# Check resize
X = np.arange(10000)[:, np.newaxis]
y = np.arange(10000)
for name, Tree in REG_TREES.items():
reg = Tree(max_depth=None, random_state=0)
reg.fit(X, y)
def test_pure_set():
# Check when y is pure.
X = [[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]]
y = [1, 1, 1, 1, 1, 1]
for name, TreeClassifier in CLF_TREES.items():
clf = TreeClassifier(random_state=0)
clf.fit(X, y)
assert_array_equal(clf.predict(X), y,
err_msg="Failed with {0}".format(name))
for name, TreeRegressor in REG_TREES.items():
reg = TreeRegressor(random_state=0)
reg.fit(X, y)
        assert_almost_equal(reg.predict(X), y,
err_msg="Failed with {0}".format(name))
def test_numerical_stability():
# Check numerical stability.
X = np.array([
[152.08097839, 140.40744019, 129.75102234, 159.90493774],
[142.50700378, 135.81935120, 117.82884979, 162.75781250],
[127.28772736, 140.40744019, 129.75102234, 159.90493774],
[132.37025452, 143.71923828, 138.35694885, 157.84558105],
[103.10237122, 143.71928406, 138.35696411, 157.84559631],
[127.71276855, 143.71923828, 138.35694885, 157.84558105],
[120.91514587, 140.40744019, 129.75102234, 159.90493774]])
y = np.array(
[1., 0.70209277, 0.53896582, 0., 0.90914464, 0.48026916, 0.49622521])
with np.errstate(all="raise"):
for name, Tree in REG_TREES.items():
reg = Tree(random_state=0)
reg.fit(X, y)
reg.fit(X, -y)
reg.fit(-X, y)
reg.fit(-X, -y)
def test_importances():
# Check variable importances.
X, y = datasets.make_classification(n_samples=2000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
shuffle=False,
random_state=0)
for name, Tree in CLF_TREES.items():
clf = Tree(random_state=0)
clf.fit(X, y)
importances = clf.feature_importances_
n_important = np.sum(importances > 0.1)
assert_equal(importances.shape[0], 10, "Failed with {0}".format(name))
assert_equal(n_important, 3, "Failed with {0}".format(name))
X_new = assert_warns(
DeprecationWarning, clf.transform, X, threshold="mean")
assert_less(0, X_new.shape[1], "Failed with {0}".format(name))
assert_less(X_new.shape[1], X.shape[1], "Failed with {0}".format(name))
# Check on iris that importances are the same for all builders
clf = DecisionTreeClassifier(random_state=0)
clf.fit(iris.data, iris.target)
clf2 = DecisionTreeClassifier(random_state=0,
max_leaf_nodes=len(iris.data))
clf2.fit(iris.data, iris.target)
assert_array_equal(clf.feature_importances_,
clf2.feature_importances_)
@raises(ValueError)
def test_importances_raises():
# Check if variable importance before fit raises ValueError.
clf = DecisionTreeClassifier()
clf.feature_importances_
def test_importances_gini_equal_mse():
# Check that gini is equivalent to mse for binary output variable
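    # (for a 0/1 target, the Gini impurity 2*p*(1-p) is exactly twice the
    # Bernoulli variance p*(1-p), so both criteria rank candidate splits
    # identically)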
X, y = datasets.make_classification(n_samples=2000,
n_features=10,
n_informative=3,
n_redundant=0,
n_repeated=0,
shuffle=False,
random_state=0)
# The gini index and the mean square error (variance) might differ due
    # to numerical instability. Since those instabilities mainly occur at
# high tree depth, we restrict this maximal depth.
clf = DecisionTreeClassifier(criterion="gini", max_depth=5,
random_state=0).fit(X, y)
reg = DecisionTreeRegressor(criterion="mse", max_depth=5,
random_state=0).fit(X, y)
assert_almost_equal(clf.feature_importances_, reg.feature_importances_)
assert_array_equal(clf.tree_.feature, reg.tree_.feature)
assert_array_equal(clf.tree_.children_left, reg.tree_.children_left)
assert_array_equal(clf.tree_.children_right, reg.tree_.children_right)
assert_array_equal(clf.tree_.n_node_samples, reg.tree_.n_node_samples)
def test_max_features():
# Check max_features.
for name, TreeRegressor in REG_TREES.items():
reg = TreeRegressor(max_features="auto")
reg.fit(boston.data, boston.target)
assert_equal(reg.max_features_, boston.data.shape[1])
for name, TreeClassifier in CLF_TREES.items():
clf = TreeClassifier(max_features="auto")
clf.fit(iris.data, iris.target)
assert_equal(clf.max_features_, 2)
for name, TreeEstimator in ALL_TREES.items():
est = TreeEstimator(max_features="sqrt")
est.fit(iris.data, iris.target)
assert_equal(est.max_features_,
int(np.sqrt(iris.data.shape[1])))
est = TreeEstimator(max_features="log2")
est.fit(iris.data, iris.target)
assert_equal(est.max_features_,
int(np.log2(iris.data.shape[1])))
est = TreeEstimator(max_features=1)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_, 1)
est = TreeEstimator(max_features=3)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_, 3)
est = TreeEstimator(max_features=0.01)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_, 1)
est = TreeEstimator(max_features=0.5)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_,
int(0.5 * iris.data.shape[1]))
est = TreeEstimator(max_features=1.0)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_, iris.data.shape[1])
est = TreeEstimator(max_features=None)
est.fit(iris.data, iris.target)
assert_equal(est.max_features_, iris.data.shape[1])
# use values of max_features that are invalid
est = TreeEstimator(max_features=10)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_features=-1)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_features=0.0)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_features=1.5)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_features="foobar")
assert_raises(ValueError, est.fit, X, y)
def test_error():
# Test that it gives proper exception on deficient input.
for name, TreeEstimator in CLF_TREES.items():
# predict before fit
est = TreeEstimator()
assert_raises(NotFittedError, est.predict_proba, X)
est.fit(X, y)
X2 = [[-2, -1, 1]] # wrong feature shape for sample
assert_raises(ValueError, est.predict_proba, X2)
for name, TreeEstimator in ALL_TREES.items():
# Invalid values for parameters
assert_raises(ValueError, TreeEstimator(min_samples_leaf=-1).fit, X, y)
assert_raises(ValueError, TreeEstimator(min_samples_leaf=.6).fit, X, y)
assert_raises(ValueError, TreeEstimator(min_samples_leaf=0.).fit, X, y)
assert_raises(ValueError,
TreeEstimator(min_weight_fraction_leaf=-1).fit,
X, y)
assert_raises(ValueError,
TreeEstimator(min_weight_fraction_leaf=0.51).fit,
X, y)
assert_raises(ValueError, TreeEstimator(min_samples_split=-1).fit,
X, y)
assert_raises(ValueError, TreeEstimator(min_samples_split=0.0).fit,
X, y)
assert_raises(ValueError, TreeEstimator(min_samples_split=1.1).fit,
X, y)
assert_raises(ValueError, TreeEstimator(max_depth=-1).fit, X, y)
assert_raises(ValueError, TreeEstimator(max_features=42).fit, X, y)
# Wrong dimensions
est = TreeEstimator()
y2 = y[:-1]
assert_raises(ValueError, est.fit, X, y2)
# Test with arrays that are non-contiguous.
Xf = np.asfortranarray(X)
est = TreeEstimator()
est.fit(Xf, y)
assert_almost_equal(est.predict(T), true_result)
# predict before fitting
est = TreeEstimator()
assert_raises(NotFittedError, est.predict, T)
# predict on vector with different dims
est.fit(X, y)
t = np.asarray(T)
assert_raises(ValueError, est.predict, t[:, 1:])
# wrong sample shape
Xt = np.array(X).T
est = TreeEstimator()
est.fit(np.dot(X, Xt), y)
assert_raises(ValueError, est.predict, X)
assert_raises(ValueError, est.apply, X)
clf = TreeEstimator()
clf.fit(X, y)
assert_raises(ValueError, clf.predict, Xt)
assert_raises(ValueError, clf.apply, Xt)
# apply before fitting
est = TreeEstimator()
assert_raises(NotFittedError, est.apply, T)
def test_min_samples_split():
"""Test min_samples_split parameter"""
X = np.asfortranarray(iris.data.astype(tree._tree.DTYPE))
y = iris.target
# test both DepthFirstTreeBuilder and BestFirstTreeBuilder
# by setting max_leaf_nodes
for max_leaf_nodes, name in product((None, 1000), ALL_TREES.keys()):
TreeEstimator = ALL_TREES[name]
# test for integer parameter
est = TreeEstimator(min_samples_split=10,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y)
# count samples on nodes, -1 means it is a leaf
node_samples = est.tree_.n_node_samples[est.tree_.children_left != -1]
assert_greater(np.min(node_samples), 9,
"Failed with {0}".format(name))
# test for float parameter
est = TreeEstimator(min_samples_split=0.2,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y)
# count samples on nodes, -1 means it is a leaf
node_samples = est.tree_.n_node_samples[est.tree_.children_left != -1]
assert_greater(np.min(node_samples), 9,
"Failed with {0}".format(name))
def test_min_samples_leaf():
# Test if leaves contain more than leaf_count training examples
X = np.asfortranarray(iris.data.astype(tree._tree.DTYPE))
y = iris.target
# test both DepthFirstTreeBuilder and BestFirstTreeBuilder
# by setting max_leaf_nodes
for max_leaf_nodes, name in product((None, 1000), ALL_TREES.keys()):
TreeEstimator = ALL_TREES[name]
# test integer parameter
est = TreeEstimator(min_samples_leaf=5,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y)
out = est.tree_.apply(X)
node_counts = np.bincount(out)
# drop inner nodes
leaf_count = node_counts[node_counts != 0]
assert_greater(np.min(leaf_count), 4,
"Failed with {0}".format(name))
# test float parameter
est = TreeEstimator(min_samples_leaf=0.1,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y)
out = est.tree_.apply(X)
node_counts = np.bincount(out)
# drop inner nodes
leaf_count = node_counts[node_counts != 0]
assert_greater(np.min(leaf_count), 4,
"Failed with {0}".format(name))
def check_min_weight_fraction_leaf(name, datasets, sparse=False):
"""Test if leaves contain at least min_weight_fraction_leaf of the
training set"""
if sparse:
X = DATASETS[datasets]["X_sparse"].astype(np.float32)
else:
X = DATASETS[datasets]["X"].astype(np.float32)
y = DATASETS[datasets]["y"]
weights = rng.rand(X.shape[0])
total_weight = np.sum(weights)
TreeEstimator = ALL_TREES[name]
# test both DepthFirstTreeBuilder and BestFirstTreeBuilder
# by setting max_leaf_nodes
for max_leaf_nodes, frac in product((None, 1000), np.linspace(0, 0.5, 6)):
est = TreeEstimator(min_weight_fraction_leaf=frac,
max_leaf_nodes=max_leaf_nodes,
random_state=0)
est.fit(X, y, sample_weight=weights)
if sparse:
out = est.tree_.apply(X.tocsr())
else:
out = est.tree_.apply(X)
node_weights = np.bincount(out, weights=weights)
# drop inner nodes
leaf_weights = node_weights[node_weights != 0]
assert_greater_equal(
np.min(leaf_weights),
total_weight * est.min_weight_fraction_leaf,
"Failed with {0} "
"min_weight_fraction_leaf={1}".format(
name, est.min_weight_fraction_leaf))
def test_min_weight_fraction_leaf():
# Check on dense input
for name in ALL_TREES:
yield check_min_weight_fraction_leaf, name, "iris"
# Check on sparse input
for name in SPARSE_TREES:
yield check_min_weight_fraction_leaf, name, "multilabel", True
def test_pickle():
for name, TreeEstimator in ALL_TREES.items():
if "Classifier" in name:
X, y = iris.data, iris.target
else:
X, y = boston.data, boston.target
est = TreeEstimator(random_state=0)
est.fit(X, y)
score = est.score(X, y)
fitted_attribute = dict()
for attribute in ["max_depth", "node_count", "capacity"]:
fitted_attribute[attribute] = getattr(est.tree_, attribute)
serialized_object = pickle.dumps(est)
est2 = pickle.loads(serialized_object)
assert_equal(type(est2), est.__class__)
score2 = est2.score(X, y)
assert_equal(score, score2,
"Failed to generate same score after pickling "
"with {0}".format(name))
for attribute in fitted_attribute:
assert_equal(getattr(est2.tree_, attribute),
fitted_attribute[attribute],
"Failed to generate same attribute {0} after "
"pickling with {1}".format(attribute, name))
def test_multioutput():
# Check estimators on multi-output problems.
X = [[-2, -1],
[-1, -1],
[-1, -2],
[1, 1],
[1, 2],
[2, 1],
[-2, 1],
[-1, 1],
[-1, 2],
[2, -1],
[1, -1],
[1, -2]]
y = [[-1, 0],
[-1, 0],
[-1, 0],
[1, 1],
[1, 1],
[1, 1],
[-1, 2],
[-1, 2],
[-1, 2],
[1, 3],
[1, 3],
[1, 3]]
T = [[-1, -1], [1, 1], [-1, 1], [1, -1]]
y_true = [[-1, 0], [1, 1], [-1, 2], [1, 3]]
# toy classification problem
for name, TreeClassifier in CLF_TREES.items():
clf = TreeClassifier(random_state=0)
y_hat = clf.fit(X, y).predict(T)
assert_array_equal(y_hat, y_true)
assert_equal(y_hat.shape, (4, 2))
proba = clf.predict_proba(T)
assert_equal(len(proba), 2)
assert_equal(proba[0].shape, (4, 2))
assert_equal(proba[1].shape, (4, 4))
log_proba = clf.predict_log_proba(T)
assert_equal(len(log_proba), 2)
assert_equal(log_proba[0].shape, (4, 2))
assert_equal(log_proba[1].shape, (4, 4))
# toy regression problem
for name, TreeRegressor in REG_TREES.items():
reg = TreeRegressor(random_state=0)
y_hat = reg.fit(X, y).predict(T)
assert_almost_equal(y_hat, y_true)
assert_equal(y_hat.shape, (4, 2))
def test_classes_shape():
# Test that n_classes_ and classes_ have proper shape.
for name, TreeClassifier in CLF_TREES.items():
# Classification, single output
clf = TreeClassifier(random_state=0)
clf.fit(X, y)
assert_equal(clf.n_classes_, 2)
assert_array_equal(clf.classes_, [-1, 1])
# Classification, multi-output
_y = np.vstack((y, np.array(y) * 2)).T
clf = TreeClassifier(random_state=0)
clf.fit(X, _y)
assert_equal(len(clf.n_classes_), 2)
assert_equal(len(clf.classes_), 2)
assert_array_equal(clf.n_classes_, [2, 2])
assert_array_equal(clf.classes_, [[-1, 1], [-2, 2]])
def test_unbalanced_iris():
# Check class rebalancing.
unbalanced_X = iris.data[:125]
unbalanced_y = iris.target[:125]
sample_weight = compute_sample_weight("balanced", unbalanced_y)
for name, TreeClassifier in CLF_TREES.items():
clf = TreeClassifier(random_state=0)
clf.fit(unbalanced_X, unbalanced_y, sample_weight=sample_weight)
assert_almost_equal(clf.predict(unbalanced_X), unbalanced_y)
def test_memory_layout():
# Check that it works no matter the memory layout
for (name, TreeEstimator), dtype in product(ALL_TREES.items(),
[np.float64, np.float32]):
est = TreeEstimator(random_state=0)
# Nothing
X = np.asarray(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# C-order
X = np.asarray(iris.data, order="C", dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# F-order
X = np.asarray(iris.data, order="F", dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# Contiguous
X = np.ascontiguousarray(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
if not est.presort:
# csr matrix
X = csr_matrix(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# csc_matrix
X = csc_matrix(iris.data, dtype=dtype)
y = iris.target
assert_array_equal(est.fit(X, y).predict(X), y)
# Strided
X = np.asarray(iris.data[::3], dtype=dtype)
y = iris.target[::3]
assert_array_equal(est.fit(X, y).predict(X), y)
def test_sample_weight():
# Check sample weighting.
# Test that zero-weighted samples are not taken into account
X = np.arange(100)[:, np.newaxis]
y = np.ones(100)
y[:50] = 0.0
sample_weight = np.ones(100)
sample_weight[y == 0] = 0.0
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
assert_array_equal(clf.predict(X), np.ones(100))
# Test that low weighted samples are not taken into account at low depth
X = np.arange(200)[:, np.newaxis]
y = np.zeros(200)
y[50:100] = 1
y[100:200] = 2
X[100:200, 0] = 200
sample_weight = np.ones(200)
sample_weight[y == 2] = .51 # Samples of class '2' are still weightier
clf = DecisionTreeClassifier(max_depth=1, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
assert_equal(clf.tree_.threshold[0], 149.5)
sample_weight[y == 2] = .5 # Samples of class '2' are no longer weightier
clf = DecisionTreeClassifier(max_depth=1, random_state=0)
clf.fit(X, y, sample_weight=sample_weight)
assert_equal(clf.tree_.threshold[0], 49.5) # Threshold should have moved
# Test that sample weighting is the same as having duplicates
X = iris.data
y = iris.target
duplicates = rng.randint(0, X.shape[0], 100)
clf = DecisionTreeClassifier(random_state=1)
clf.fit(X[duplicates], y[duplicates])
sample_weight = np.bincount(duplicates, minlength=X.shape[0])
clf2 = DecisionTreeClassifier(random_state=1)
clf2.fit(X, y, sample_weight=sample_weight)
internal = clf.tree_.children_left != tree._tree.TREE_LEAF
assert_array_almost_equal(clf.tree_.threshold[internal],
clf2.tree_.threshold[internal])
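# A minimal sketch of the equivalence exercised just above, assuming only the
# public fit(X, y, sample_weight=...) signature: giving a row an integer weight k
# is expected to behave like repeating that row k times in the training set.
def _sketch_sample_weight_vs_duplicates():
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    data = load_iris()
    rows = np.random.RandomState(0).randint(0, data.data.shape[0], 100)
    weights = np.bincount(rows, minlength=data.data.shape[0])
    by_duplication = DecisionTreeClassifier(random_state=1)
    by_duplication.fit(data.data[rows], data.target[rows])
    by_weighting = DecisionTreeClassifier(random_state=1)
    by_weighting.fit(data.data, data.target, sample_weight=weights)
    # Internal split thresholds of the two trees should (numerically) coincide.
    return by_duplication.tree_.node_count, by_weighting.tree_.node_count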
def test_sample_weight_invalid():
# Check sample weighting raises errors.
X = np.arange(100)[:, np.newaxis]
y = np.ones(100)
y[:50] = 0.0
clf = DecisionTreeClassifier(random_state=0)
sample_weight = np.random.rand(100, 1)
assert_raises(ValueError, clf.fit, X, y, sample_weight=sample_weight)
sample_weight = np.array(0)
assert_raises(ValueError, clf.fit, X, y, sample_weight=sample_weight)
sample_weight = np.ones(101)
assert_raises(ValueError, clf.fit, X, y, sample_weight=sample_weight)
sample_weight = np.ones(99)
assert_raises(ValueError, clf.fit, X, y, sample_weight=sample_weight)
def check_class_weights(name):
"""Check class_weights resemble sample_weights behavior."""
TreeClassifier = CLF_TREES[name]
# Iris is balanced, so no effect expected for using 'balanced' weights
clf1 = TreeClassifier(random_state=0)
clf1.fit(iris.data, iris.target)
clf2 = TreeClassifier(class_weight='balanced', random_state=0)
clf2.fit(iris.data, iris.target)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
# Make a multi-output problem with three copies of Iris
iris_multi = np.vstack((iris.target, iris.target, iris.target)).T
# Create user-defined weights that should balance over the outputs
clf3 = TreeClassifier(class_weight=[{0: 2., 1: 2., 2: 1.},
{0: 2., 1: 1., 2: 2.},
{0: 1., 1: 2., 2: 2.}],
random_state=0)
clf3.fit(iris.data, iris_multi)
assert_almost_equal(clf2.feature_importances_, clf3.feature_importances_)
    # Check against multi-output "balanced" which should also have no effect
clf4 = TreeClassifier(class_weight='balanced', random_state=0)
clf4.fit(iris.data, iris_multi)
assert_almost_equal(clf3.feature_importances_, clf4.feature_importances_)
# Inflate importance of class 1, check against user-defined weights
sample_weight = np.ones(iris.target.shape)
sample_weight[iris.target == 1] *= 100
class_weight = {0: 1., 1: 100., 2: 1.}
clf1 = TreeClassifier(random_state=0)
clf1.fit(iris.data, iris.target, sample_weight)
clf2 = TreeClassifier(class_weight=class_weight, random_state=0)
clf2.fit(iris.data, iris.target)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
# Check that sample_weight and class_weight are multiplicative
clf1 = TreeClassifier(random_state=0)
clf1.fit(iris.data, iris.target, sample_weight ** 2)
clf2 = TreeClassifier(class_weight=class_weight, random_state=0)
clf2.fit(iris.data, iris.target, sample_weight)
assert_almost_equal(clf1.feature_importances_, clf2.feature_importances_)
def test_class_weights():
for name in CLF_TREES:
yield check_class_weights, name
def check_class_weight_errors(name):
# Test if class_weight raises errors and warnings when expected.
TreeClassifier = CLF_TREES[name]
_y = np.vstack((y, np.array(y) * 2)).T
# Invalid preset string
clf = TreeClassifier(class_weight='the larch', random_state=0)
assert_raises(ValueError, clf.fit, X, y)
assert_raises(ValueError, clf.fit, X, _y)
# Not a list or preset for multi-output
clf = TreeClassifier(class_weight=1, random_state=0)
assert_raises(ValueError, clf.fit, X, _y)
# Incorrect length list for multi-output
clf = TreeClassifier(class_weight=[{-1: 0.5, 1: 1.}], random_state=0)
assert_raises(ValueError, clf.fit, X, _y)
def test_class_weight_errors():
for name in CLF_TREES:
yield check_class_weight_errors, name
def test_max_leaf_nodes():
    # Test greedy trees constrained by max_leaf_nodes to k + 1 leaves.
from sklearn.tree._tree import TREE_LEAF
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
k = 4
for name, TreeEstimator in ALL_TREES.items():
est = TreeEstimator(max_depth=None, max_leaf_nodes=k + 1).fit(X, y)
tree = est.tree_
assert_equal((tree.children_left == TREE_LEAF).sum(), k + 1)
# max_leaf_nodes in (0, 1) should raise ValueError
est = TreeEstimator(max_depth=None, max_leaf_nodes=0)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_depth=None, max_leaf_nodes=1)
assert_raises(ValueError, est.fit, X, y)
est = TreeEstimator(max_depth=None, max_leaf_nodes=0.1)
assert_raises(ValueError, est.fit, X, y)
def test_max_leaf_nodes_max_depth():
# Test precedence of max_leaf_nodes over max_depth.
X, y = datasets.make_hastie_10_2(n_samples=100, random_state=1)
k = 4
for name, TreeEstimator in ALL_TREES.items():
est = TreeEstimator(max_depth=1, max_leaf_nodes=k).fit(X, y)
tree = est.tree_
assert_greater(tree.max_depth, 1)
def test_arrays_persist():
# Ensure property arrays' memory stays alive when tree disappears
# non-regression for #2726
for attr in ['n_classes', 'value', 'children_left', 'children_right',
'threshold', 'impurity', 'feature', 'n_node_samples']:
value = getattr(DecisionTreeClassifier().fit([[0], [1]], [0, 1]).tree_, attr)
# if pointing to freed memory, contents may be arbitrary
assert_true(-3 <= value.flat[0] < 3,
'Array points to arbitrary memory')
def test_only_constant_features():
random_state = check_random_state(0)
X = np.zeros((10, 20))
y = random_state.randint(0, 2, (10, ))
for name, TreeEstimator in ALL_TREES.items():
est = TreeEstimator(random_state=0)
est.fit(X, y)
assert_equal(est.tree_.max_depth, 0)
def test_with_only_one_non_constant_features():
X = np.hstack([np.array([[1.], [1.], [0.], [0.]]),
np.zeros((4, 1000))])
y = np.array([0., 1., 0., 1.0])
for name, TreeEstimator in CLF_TREES.items():
est = TreeEstimator(random_state=0, max_features=1)
est.fit(X, y)
assert_equal(est.tree_.max_depth, 1)
assert_array_equal(est.predict_proba(X), 0.5 * np.ones((4, 2)))
for name, TreeEstimator in REG_TREES.items():
est = TreeEstimator(random_state=0, max_features=1)
est.fit(X, y)
assert_equal(est.tree_.max_depth, 1)
assert_array_equal(est.predict(X), 0.5 * np.ones((4, )))
def test_big_input():
    # Test that the error message for over-large inputs mentions the float32 limit.
X = np.repeat(10 ** 40., 4).astype(np.float64).reshape(-1, 1)
clf = DecisionTreeClassifier()
try:
clf.fit(X, [0, 1, 0, 1])
except ValueError as e:
assert_in("float32", str(e))
def test_realloc():
from sklearn.tree._utils import _realloc_test
assert_raises(MemoryError, _realloc_test)
def test_huge_allocations():
n_bits = int(platform.architecture()[0].rstrip('bit'))
X = np.random.randn(10, 2)
y = np.random.randint(0, 2, 10)
# Sanity check: we cannot request more memory than the size of the address
# space. Currently raises OverflowError.
huge = 2 ** (n_bits + 1)
clf = DecisionTreeClassifier(splitter='best', max_leaf_nodes=huge)
assert_raises(Exception, clf.fit, X, y)
# Non-regression test: MemoryError used to be dropped by Cython
# because of missing "except *".
huge = 2 ** (n_bits - 1) - 1
clf = DecisionTreeClassifier(splitter='best', max_leaf_nodes=huge)
assert_raises(MemoryError, clf.fit, X, y)
def check_sparse_input(tree, dataset, max_depth=None):
TreeEstimator = ALL_TREES[tree]
X = DATASETS[dataset]["X"]
X_sparse = DATASETS[dataset]["X_sparse"]
y = DATASETS[dataset]["y"]
    # Subsample the larger datasets to save testing time
if dataset in ["digits", "boston"]:
n_samples = X.shape[0] // 5
X = X[:n_samples]
X_sparse = X_sparse[:n_samples]
y = y[:n_samples]
for sparse_format in (csr_matrix, csc_matrix, coo_matrix):
X_sparse = sparse_format(X_sparse)
# Check the default (depth first search)
d = TreeEstimator(random_state=0, max_depth=max_depth).fit(X, y)
s = TreeEstimator(random_state=0, max_depth=max_depth).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
y_pred = d.predict(X)
if tree in CLF_TREES:
y_proba = d.predict_proba(X)
y_log_proba = d.predict_log_proba(X)
for sparse_matrix in (csr_matrix, csc_matrix, coo_matrix):
X_sparse_test = sparse_matrix(X_sparse, dtype=np.float32)
assert_array_almost_equal(s.predict(X_sparse_test), y_pred)
if tree in CLF_TREES:
assert_array_almost_equal(s.predict_proba(X_sparse_test),
y_proba)
assert_array_almost_equal(s.predict_log_proba(X_sparse_test),
y_log_proba)
def test_sparse_input():
for tree, dataset in product(SPARSE_TREES,
("clf_small", "toy", "digits", "multilabel",
"sparse-pos", "sparse-neg", "sparse-mix",
"zeros")):
max_depth = 3 if dataset == "digits" else None
yield (check_sparse_input, tree, dataset, max_depth)
# Due to numerical instability of MSE and too strict test, we limit the
# maximal depth
for tree, dataset in product(REG_TREES, ["boston", "reg_small"]):
if tree in SPARSE_TREES:
yield (check_sparse_input, tree, dataset, 2)
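# A minimal sketch of the dense/sparse equivalence tested above (illustrative
# only, public API assumed): fitting on a CSR matrix is supported and should
# reproduce the predictions of the tree learned from the equivalent dense array.
def _sketch_sparse_input_equivalence():
    import numpy as np
    from scipy.sparse import csr_matrix
    from sklearn.tree import DecisionTreeClassifier
    rng = np.random.RandomState(0)
    X_dense = rng.binomial(1, 0.1, size=(60, 20)).astype(np.float64)
    y = rng.randint(0, 2, size=60)
    dense_tree = DecisionTreeClassifier(random_state=0).fit(X_dense, y)
    sparse_tree = DecisionTreeClassifier(random_state=0).fit(csr_matrix(X_dense), y)
    return np.array_equal(dense_tree.predict(X_dense), sparse_tree.predict(X_dense))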
def check_sparse_parameters(tree, dataset):
TreeEstimator = ALL_TREES[tree]
X = DATASETS[dataset]["X"]
X_sparse = DATASETS[dataset]["X_sparse"]
y = DATASETS[dataset]["y"]
# Check max_features
d = TreeEstimator(random_state=0, max_features=1, max_depth=2).fit(X, y)
s = TreeEstimator(random_state=0, max_features=1,
max_depth=2).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
assert_array_almost_equal(s.predict(X), d.predict(X))
# Check min_samples_split
d = TreeEstimator(random_state=0, max_features=1,
min_samples_split=10).fit(X, y)
s = TreeEstimator(random_state=0, max_features=1,
min_samples_split=10).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
assert_array_almost_equal(s.predict(X), d.predict(X))
# Check min_samples_leaf
d = TreeEstimator(random_state=0,
min_samples_leaf=X_sparse.shape[0] // 2).fit(X, y)
s = TreeEstimator(random_state=0,
min_samples_leaf=X_sparse.shape[0] // 2).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
assert_array_almost_equal(s.predict(X), d.predict(X))
# Check best-first search
d = TreeEstimator(random_state=0, max_leaf_nodes=3).fit(X, y)
s = TreeEstimator(random_state=0, max_leaf_nodes=3).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
assert_array_almost_equal(s.predict(X), d.predict(X))
def test_sparse_parameters():
for tree, dataset in product(SPARSE_TREES,
["sparse-pos", "sparse-neg", "sparse-mix",
"zeros"]):
yield (check_sparse_parameters, tree, dataset)
def check_sparse_criterion(tree, dataset):
TreeEstimator = ALL_TREES[tree]
X = DATASETS[dataset]["X"]
X_sparse = DATASETS[dataset]["X_sparse"]
y = DATASETS[dataset]["y"]
# Check various criterion
CRITERIONS = REG_CRITERIONS if tree in REG_TREES else CLF_CRITERIONS
for criterion in CRITERIONS:
d = TreeEstimator(random_state=0, max_depth=3,
criterion=criterion).fit(X, y)
s = TreeEstimator(random_state=0, max_depth=3,
criterion=criterion).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
assert_array_almost_equal(s.predict(X), d.predict(X))
def test_sparse_criterion():
for tree, dataset in product(SPARSE_TREES,
["sparse-pos", "sparse-neg", "sparse-mix",
"zeros"]):
yield (check_sparse_criterion, tree, dataset)
def check_explicit_sparse_zeros(tree, max_depth=3,
n_features=10):
TreeEstimator = ALL_TREES[tree]
    # Set n_samples equal to n_features to ease the simultaneous
    # construction of a csr and csc matrix
n_samples = n_features
samples = np.arange(n_samples)
# Generate X, y
random_state = check_random_state(0)
indices = []
data = []
offset = 0
indptr = [offset]
for i in range(n_features):
n_nonzero_i = random_state.binomial(n_samples, 0.5)
indices_i = random_state.permutation(samples)[:n_nonzero_i]
indices.append(indices_i)
data_i = random_state.binomial(3, 0.5, size=(n_nonzero_i, )) - 1
data.append(data_i)
offset += n_nonzero_i
indptr.append(offset)
indices = np.concatenate(indices)
data = np.array(np.concatenate(data), dtype=np.float32)
X_sparse = csc_matrix((data, indices, indptr),
shape=(n_samples, n_features))
X = X_sparse.toarray()
X_sparse_test = csr_matrix((data, indices, indptr),
shape=(n_samples, n_features))
X_test = X_sparse_test.toarray()
y = random_state.randint(0, 3, size=(n_samples, ))
# Ensure that X_sparse_test owns its data, indices and indptr array
X_sparse_test = X_sparse_test.copy()
# Ensure that we have explicit zeros
assert_greater((X_sparse.data == 0.).sum(), 0)
assert_greater((X_sparse_test.data == 0.).sum(), 0)
# Perform the comparison
d = TreeEstimator(random_state=0, max_depth=max_depth).fit(X, y)
s = TreeEstimator(random_state=0, max_depth=max_depth).fit(X_sparse, y)
assert_tree_equal(d.tree_, s.tree_,
"{0} with dense and sparse format gave different "
"trees".format(tree))
Xs = (X_test, X_sparse_test)
for X1, X2 in product(Xs, Xs):
assert_array_almost_equal(s.tree_.apply(X1), d.tree_.apply(X2))
assert_array_almost_equal(s.apply(X1), d.apply(X2))
assert_array_almost_equal(s.apply(X1), s.tree_.apply(X1))
assert_array_almost_equal(s.tree_.decision_path(X1).toarray(),
d.tree_.decision_path(X2).toarray())
assert_array_almost_equal(s.decision_path(X1).toarray(),
d.decision_path(X2).toarray())
assert_array_almost_equal(s.decision_path(X1).toarray(),
s.tree_.decision_path(X1).toarray())
assert_array_almost_equal(s.predict(X1), d.predict(X2))
if tree in CLF_TREES:
assert_array_almost_equal(s.predict_proba(X1),
d.predict_proba(X2))
def test_explicit_sparse_zeros():
for tree in SPARSE_TREES:
yield (check_explicit_sparse_zeros, tree)
@ignore_warnings
def check_raise_error_on_1d_input(name):
TreeEstimator = ALL_TREES[name]
X = iris.data[:, 0].ravel()
X_2d = iris.data[:, 0].reshape((-1, 1))
y = iris.target
assert_raises(ValueError, TreeEstimator(random_state=0).fit, X, y)
est = TreeEstimator(random_state=0)
est.fit(X_2d, y)
assert_raises(ValueError, est.predict, [X])
@ignore_warnings
def test_1d_input():
for name in ALL_TREES:
yield check_raise_error_on_1d_input, name
def _check_min_weight_leaf_split_level(TreeEstimator, X, y, sample_weight):
# Private function to keep pretty printing in nose yielded tests
est = TreeEstimator(random_state=0)
est.fit(X, y, sample_weight=sample_weight)
assert_equal(est.tree_.max_depth, 1)
est = TreeEstimator(random_state=0, min_weight_fraction_leaf=0.4)
est.fit(X, y, sample_weight=sample_weight)
assert_equal(est.tree_.max_depth, 0)
def check_min_weight_leaf_split_level(name):
TreeEstimator = ALL_TREES[name]
X = np.array([[0], [0], [0], [0], [1]])
y = [0, 0, 0, 0, 1]
sample_weight = [0.2, 0.2, 0.2, 0.2, 0.2]
_check_min_weight_leaf_split_level(TreeEstimator, X, y, sample_weight)
if not TreeEstimator().presort:
_check_min_weight_leaf_split_level(TreeEstimator, csc_matrix(X), y,
sample_weight)
def test_min_weight_leaf_split_level():
for name in ALL_TREES:
yield check_min_weight_leaf_split_level, name
def check_public_apply(name):
X_small32 = X_small.astype(tree._tree.DTYPE)
est = ALL_TREES[name]()
est.fit(X_small, y_small)
assert_array_equal(est.apply(X_small),
est.tree_.apply(X_small32))
def check_public_apply_sparse(name):
X_small32 = csr_matrix(X_small.astype(tree._tree.DTYPE))
est = ALL_TREES[name]()
est.fit(X_small, y_small)
assert_array_equal(est.apply(X_small),
est.tree_.apply(X_small32))
def test_public_apply():
for name in ALL_TREES:
yield (check_public_apply, name)
for name in SPARSE_TREES:
yield (check_public_apply_sparse, name)
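# A brief sketch of what the public apply() checked above returns (illustrative
# only): apply(X) maps each sample to the index of the leaf it reaches, which is
# the basis for tree-based feature encodings.
def _sketch_public_apply():
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    data = load_iris()
    est = DecisionTreeClassifier(max_depth=2, random_state=0)
    est.fit(data.data, data.target)
    leaf_indices = est.apply(data.data)  # shape (n_samples,), one leaf id per sample
    return leaf_indices[:5]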
def check_presort_sparse(est, X, y):
assert_raises(ValueError, est.fit, X, y)
def test_presort_sparse():
ests = (DecisionTreeClassifier(presort=True),
DecisionTreeRegressor(presort=True))
sparse_matrices = (csr_matrix, csc_matrix, coo_matrix)
y, X = datasets.make_multilabel_classification(random_state=0,
n_samples=50,
n_features=1,
n_classes=20)
y = y[:, 0]
for est, sparse_matrix in product(ests, sparse_matrices):
yield check_presort_sparse, est, sparse_matrix(X), y
def test_decision_path_hardcoded():
X = iris.data
y = iris.target
est = DecisionTreeClassifier(random_state=0, max_depth=1).fit(X, y)
node_indicator = est.decision_path(X[:2]).toarray()
assert_array_equal(node_indicator, [[1, 1, 0], [1, 0, 1]])
def check_decision_path(name):
X = iris.data
y = iris.target
n_samples = X.shape[0]
TreeEstimator = ALL_TREES[name]
est = TreeEstimator(random_state=0, max_depth=2)
est.fit(X, y)
node_indicator_csr = est.decision_path(X)
node_indicator = node_indicator_csr.toarray()
assert_equal(node_indicator.shape, (n_samples, est.tree_.node_count))
    # Assert that the leaf indices are correct
leaves = est.apply(X)
leave_indicator = [node_indicator[i, j] for i, j in enumerate(leaves)]
assert_array_almost_equal(leave_indicator, np.ones(shape=n_samples))
    # Ensure only one leaf node per sample
all_leaves = est.tree_.children_left == TREE_LEAF
assert_array_almost_equal(np.dot(node_indicator, all_leaves),
np.ones(shape=n_samples))
# Ensure max depth is consistent with sum of indicator
max_depth = node_indicator.sum(axis=1).max()
assert_less_equal(est.tree_.max_depth, max_depth)
def test_decision_path():
for name in ALL_TREES:
yield (check_decision_path, name)
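# A brief sketch of the decision_path() semantics verified above (illustrative
# only): the returned CSR indicator has one row per sample and one column per tree
# node, with a 1 wherever the sample passes through that node, so each row touches
# exactly one leaf and its sum equals the depth of that leaf plus one.
def _sketch_decision_path():
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    data = load_iris()
    est = DecisionTreeClassifier(max_depth=2, random_state=0)
    est.fit(data.data, data.target)
    indicator = est.decision_path(data.data)  # sparse (n_samples, n_nodes)
    return indicator[0].indices  # node ids visited by the first sample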
def check_no_sparse_y_support(name):
X, y = X_multilabel, csr_matrix(y_multilabel)
TreeEstimator = ALL_TREES[name]
assert_raises(TypeError, TreeEstimator(random_state=0).fit, X, y)
def test_no_sparse_y_support():
# Currently we don't support sparse y
for name in ALL_TREES:
yield (check_no_sparse_y_support, name)
| bsd-3-clause |
econ-ark/HARK | HARK/interpolation.py | 1 | 194285 | """
Custom interpolation methods for representing approximations to functions.
It also includes wrapper classes to enforce standard methods across classes.
Each interpolation class must have a distance() method that compares itself to
another instance; this is used in HARK.core's solve() method to check for solution
convergence. The interpolator classes currently in this module inherit their
distance method from MetricObject.
"""
import numpy as np
from .core import MetricObject
from copy import deepcopy
from HARK.utilities import CRRAutility, CRRAutilityP, CRRAutilityPP
import warnings
def _isscalar(x):
"""
    Check whether x is a scalar type or 0-dimensional.
Parameters
----------
x : anything
An input to be checked for scalar-ness.
Returns
-------
is_scalar : boolean
True if the input is a scalar, False otherwise.
"""
return np.isscalar(x) or hasattr(x, "shape") and x.shape == ()
def _check_grid_dimensions(dimension, *args):
if dimension == 1:
if len(args[0]) != len(args[1]):
raise ValueError("Grid dimensions of x and f(x) do not match")
elif dimension == 2:
if args[0].shape != (args[1].size, args[2].size):
raise ValueError("Grid dimensions of x, y and f(x, y) do not match")
elif dimension == 3:
if args[0].shape != (args[1].size, args[2].size, args[3].size):
raise ValueError("Grid dimensions of x, y, z and f(x, y, z) do not match")
elif dimension == 4:
if args[0].shape != (args[1].size, args[2].size, args[3].size, args[4].size):
raise ValueError("Grid dimensions of x, y, z and f(x, y, z) do not match")
else:
raise ValueError("Dimension should be between 1 and 4 inclusive.")
def _check_flatten(dimension, *args):
if dimension == 1:
if isinstance(args[0], np.ndarray) and args[0].shape != args[0].flatten().shape:
warnings.warn("input not of the size (n, ), attempting to flatten")
return False
else:
return True
class HARKinterpolator1D(MetricObject):
"""
A wrapper class for 1D interpolation methods in HARK.
"""
distance_criteria = []
def __call__(self, x):
"""
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
"""
z = np.asarray(x)
return (self._evaluate(z.flatten())).reshape(z.shape)
def derivative(self, x):
"""
Evaluates the derivative of the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
"""
z = np.asarray(x)
return (self._der(z.flatten())).reshape(z.shape)
def eval_with_derivative(self, x):
"""
Evaluates the interpolated function and its derivative at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
Returns
-------
y : np.array or float
The interpolated function evaluated at x: y = f(x), with the same
shape as x.
dydx : np.array or float
The interpolated function's first derivative evaluated at x:
dydx = f'(x), with the same shape as x.
"""
z = np.asarray(x)
y, dydx = self._evalAndDer(z.flatten())
return y.reshape(z.shape), dydx.reshape(z.shape)
def _evaluate(self, x):
"""
Interpolated function evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _der(self, x):
"""
Interpolated function derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _evalAndDer(self, x):
"""
Interpolated function and derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
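# A minimal sketch of the subclassing contract described above (illustrative only,
# not shipped by HARK): a concrete 1D interpolant only needs to implement
# _evaluate, _der and _evalAndDer on flat arrays; __call__, derivative and
# eval_with_derivative then handle flattening and reshaping of the inputs.
class _SketchAffine(HARKinterpolator1D):
    """The affine function f(x) = 2x + 1, used only to illustrate the wrapper."""
    distance_criteria = []
    def _evaluate(self, x):
        return 2.0 * x + 1.0
    def _der(self, x):
        return 2.0 * np.ones_like(x)
    def _evalAndDer(self, x):
        return self._evaluate(x), self._der(x)
# Example: _SketchAffine()(np.array([[0.0, 1.0]])) returns array([[1., 3.]]),
# preserving the (1, 2) shape of the input.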
class HARKinterpolator2D(MetricObject):
"""
A wrapper class for 2D interpolation methods in HARK.
"""
distance_criteria = []
def __call__(self, x, y):
"""
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxy : np.array or float
The interpolated function evaluated at x,y: fxy = f(x,y), with the
same shape as x and y.
"""
xa = np.asarray(x)
ya = np.asarray(y)
return (self._evaluate(xa.flatten(), ya.flatten())).reshape(xa.shape)
def derivativeX(self, x, y):
"""
Evaluates the partial derivative of interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative of the interpolated function with respect to x, eval-
uated at x,y: dfdx = f_x(x,y), with the same shape as x and y.
"""
xa = np.asarray(x)
ya = np.asarray(y)
return (self._derX(xa.flatten(), ya.flatten())).reshape(xa.shape)
def derivativeY(self, x, y):
"""
Evaluates the partial derivative of interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative of the interpolated function with respect to y, eval-
            uated at x,y: dfdy = f_y(x,y), with the same shape as x and y.
"""
xa = np.asarray(x)
ya = np.asarray(y)
return (self._derY(xa.flatten(), ya.flatten())).reshape(xa.shape)
def _evaluate(self, x, y):
"""
Interpolated function evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derX(self, x, y):
"""
Interpolated function x-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derY(self, x, y):
"""
Interpolated function y-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
class HARKinterpolator3D(MetricObject):
"""
A wrapper class for 3D interpolation methods in HARK.
"""
distance_criteria = []
def __call__(self, x, y, z):
"""
Evaluates the interpolated function at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
fxyz : np.array or float
The interpolated function evaluated at x,y,z: fxyz = f(x,y,z), with
the same shape as x, y, and z.
"""
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._evaluate(xa.flatten(), ya.flatten(), za.flatten())).reshape(
xa.shape
)
def derivativeX(self, x, y, z):
"""
Evaluates the partial derivative of the interpolated function with respect
to x (the first argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function evaluated
at x,y,z: dfdx = f_x(x,y,z), with the same shape as x, y, and z.
"""
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derX(xa.flatten(), ya.flatten(), za.flatten())).reshape(xa.shape)
def derivativeY(self, x, y, z):
"""
Evaluates the partial derivative of the interpolated function with respect
to y (the second argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function evaluated
at x,y,z: dfdy = f_y(x,y,z), with the same shape as x, y, and z.
"""
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derY(xa.flatten(), ya.flatten(), za.flatten())).reshape(xa.shape)
def derivativeZ(self, x, y, z):
"""
Evaluates the partial derivative of the interpolated function with respect
to z (the third argument) at the given input.
Parameters
----------
x : np.array or float
Real values to be evaluated in the interpolated function.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as x.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function evaluated
at x,y,z: dfdz = f_z(x,y,z), with the same shape as x, y, and z.
"""
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (self._derZ(xa.flatten(), ya.flatten(), za.flatten())).reshape(xa.shape)
def _evaluate(self, x, y, z):
"""
Interpolated function evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derX(self, x, y, z):
"""
Interpolated function x-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derY(self, x, y, z):
"""
Interpolated function y-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derZ(self, x, y, z):
"""
        Interpolated function z-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
class HARKinterpolator4D(MetricObject):
"""
A wrapper class for 4D interpolation methods in HARK.
"""
distance_criteria = []
def __call__(self, w, x, y, z):
"""
Evaluates the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
fwxyz : np.array or float
The interpolated function evaluated at w,x,y,z: fwxyz = f(w,x,y,z),
with the same shape as w, x, y, and z.
"""
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (
self._evaluate(wa.flatten(), xa.flatten(), ya.flatten(), za.flatten())
).reshape(wa.shape)
def derivativeW(self, w, x, y, z):
"""
Evaluates the partial derivative with respect to w (the first argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdw : np.array or float
The derivative with respect to w of the interpolated function eval-
uated at w,x,y,z: dfdw = f_w(w,x,y,z), with the same shape as inputs.
"""
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (
self._derW(wa.flatten(), xa.flatten(), ya.flatten(), za.flatten())
).reshape(wa.shape)
def derivativeX(self, w, x, y, z):
"""
Evaluates the partial derivative with respect to x (the second argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdx : np.array or float
The derivative with respect to x of the interpolated function eval-
uated at w,x,y,z: dfdx = f_x(w,x,y,z), with the same shape as inputs.
"""
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (
self._derX(wa.flatten(), xa.flatten(), ya.flatten(), za.flatten())
).reshape(wa.shape)
def derivativeY(self, w, x, y, z):
"""
Evaluates the partial derivative with respect to y (the third argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdy : np.array or float
The derivative with respect to y of the interpolated function eval-
uated at w,x,y,z: dfdy = f_y(w,x,y,z), with the same shape as inputs.
"""
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (
self._derY(wa.flatten(), xa.flatten(), ya.flatten(), za.flatten())
).reshape(wa.shape)
def derivativeZ(self, w, x, y, z):
"""
Evaluates the partial derivative with respect to z (the fourth argument)
of the interpolated function at the given input.
Parameters
----------
w : np.array or float
Real values to be evaluated in the interpolated function.
x : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
y : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
z : np.array or float
Real values to be evaluated in the interpolated function; must be
the same size as w.
Returns
-------
dfdz : np.array or float
The derivative with respect to z of the interpolated function eval-
uated at w,x,y,z: dfdz = f_z(w,x,y,z), with the same shape as inputs.
"""
wa = np.asarray(w)
xa = np.asarray(x)
ya = np.asarray(y)
za = np.asarray(z)
return (
self._derZ(wa.flatten(), xa.flatten(), ya.flatten(), za.flatten())
).reshape(wa.shape)
def _evaluate(self, w, x, y, z):
"""
Interpolated function evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derW(self, w, x, y, z):
"""
Interpolated function w-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derX(self, w, x, y, z):
"""
        Interpolated function x-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derY(self, w, x, y, z):
"""
        Interpolated function y-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
def _derZ(self, w, x, y, z):
"""
        Interpolated function z-derivative evaluator, to be defined in subclasses.
"""
raise NotImplementedError()
class IdentityFunction(MetricObject):
"""
A fairly trivial interpolator that simply returns one of its arguments. Useful for avoiding
numeric error in extreme cases.
Parameters
----------
i_dim : int
Index of the dimension on which the identity is defined. f(*x) = x[i]
n_dims : int
Total number of input dimensions for this function.
"""
distance_criteria = ["i_dim"]
def __init__(self, i_dim=0, n_dims=1):
self.i_dim = i_dim
self.n_dims = n_dims
def __call__(self, *args):
"""
Evaluate the identity function.
"""
return args[self.i_dim]
def derivative(self, *args):
"""
Returns the derivative of the function with respect to the first dimension.
"""
if self.i_dim == 0:
return np.ones_like(*args[0])
else:
return np.zeros_like(*args[0])
def derivativeX(self, *args):
"""
Returns the derivative of the function with respect to the X dimension.
This is the first input whenever n_dims < 4 and the second input otherwise.
"""
if self.n_dims >= 4:
j = 1
else:
j = 0
if self.i_dim == j:
return np.ones_like(*args[0])
else:
return np.zeros_like(*args[0])
def derivativeY(self, *args):
"""
Returns the derivative of the function with respect to the Y dimension.
This is the second input whenever n_dims < 4 and the third input otherwise.
"""
if self.n_dims >= 4:
j = 2
else:
j = 1
if self.i_dim == j:
return np.ones_like(*args[0])
else:
return np.zeros_like(*args[0])
def derivativeZ(self, *args):
"""
Returns the derivative of the function with respect to the Z dimension.
This is the third input whenever n_dims < 4 and the fourth input otherwise.
"""
if self.n_dims >= 4:
j = 3
else:
j = 2
if self.i_dim == j:
return np.ones_like(*args[0])
else:
return np.zeros_like(*args[0])
def derivativeW(self, *args):
"""
Returns the derivative of the function with respect to the W dimension.
This should only exist when n_dims >= 4.
"""
if self.n_dims >= 4:
j = 0
else:
assert (
False
), "Derivative with respect to W can't be called when n_dims < 4!"
if self.i_dim == j:
return np.ones_like(*args[0])
else:
return np.zeros_like(*args[0])
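# A brief usage sketch (illustrative only, not part of HARK): with i_dim=1 and
# n_dims=2 the object behaves like f(x, y) = y, so it can stand in for a policy
# function that simply hands back its second argument.
def _sketch_identity_function():
    f = IdentityFunction(i_dim=1, n_dims=2)
    x = np.array([1.0, 2.0, 3.0])
    y = np.array([10.0, 20.0, 30.0])
    return f(x, y)  # array([10., 20., 30.])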
class ConstantFunction(MetricObject):
"""
A class for representing trivial functions that return the same real output for any input. This
is convenient for models where an object might be a (non-trivial) function, but in some variations
that object is just a constant number. Rather than needing to make a (Bi/Tri/Quad)-
LinearInterpolation with trivial state grids and the same f_value in every entry, ConstantFunction
allows the user to quickly make a constant/trivial function. This comes up, e.g., in models
with endogenous pricing of insurance contracts; a contract's premium might depend on some state
variables of the individual, but in some variations the premium of a contract is just a number.
Parameters
----------
value : float
The constant value that the function returns.
"""
convergence_criteria = ["value"]
def __init__(self, value):
self.value = float(value)
def __call__(self, *args):
"""
Evaluate the constant function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists).
"""
if (
len(args) > 0
): # If there is at least one argument, return appropriately sized array
if _isscalar(args[0]):
return self.value
else:
shape = args[0].shape
return self.value * np.ones(shape)
else: # Otherwise, return a single instance of the constant value
return self.value
def _der(self, *args):
"""
Evaluate the derivative of the function. The first input must exist and should be an array.
Returns an array of identical shape to args[0] (if it exists). This is an array of zeros.
"""
if len(args) > 0:
if _isscalar(args[0]):
return 0.0
else:
shape = args[0].shape
return np.zeros(shape)
else:
return 0.0
# All other derivatives are also zero everywhere, so these methods just point to derivative
derivative = _der
derivativeX = derivative
derivativeY = derivative
derivativeZ = derivative
derivativeW = derivative
derivativeXX = derivative
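# A brief usage sketch (illustrative only, not part of HARK): a ConstantFunction
# accepts any number of state arguments, broadcasts its value to the shape of the
# first one, and reports a zero derivative everywhere; this matches the docstring's
# flat insurance-premium case.
def _sketch_constant_premium():
    premium = ConstantFunction(0.3)
    m = np.linspace(0.0, 10.0, 5)  # e.g. market resources
    h = np.linspace(0.5, 1.0, 5)   # e.g. a health state
    return premium(m, h), premium.derivativeX(m, h)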
class LinearInterp(HARKinterpolator1D):
"""
A "from scratch" 1D linear interpolation class. Allows for linear or decay
extrapolation (approaching a limiting linear function from below).
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
"""
distance_criteria = ["x_list", "y_list"]
def __init__(
self, x_list, y_list, intercept_limit=None, slope_limit=None, lower_extrap=False
):
# Make the basic linear spline interpolation
self.x_list = (
np.array(x_list)
if _check_flatten(1, x_list)
else np.array(x_list).flatten()
)
self.y_list = (
np.array(y_list)
if _check_flatten(1, y_list)
else np.array(y_list).flatten()
)
_check_grid_dimensions(1, self.y_list, self.x_list)
self.lower_extrap = lower_extrap
self.x_n = self.x_list.size
# Make a decay extrapolation
if intercept_limit is not None and slope_limit is not None:
slope_at_top = (y_list[-1] - y_list[-2]) / (x_list[-1] - x_list[-2])
level_diff = intercept_limit + slope_limit * x_list[-1] - y_list[-1]
slope_diff = slope_limit - slope_at_top
            # If the model that can handle uncertainty has been calibrated
            # with uncertainty set to zero, the 'extrapolation' will blow up
# Guard against that and nearby problems by testing slope equality
if not np.isclose(slope_limit, slope_at_top, atol=1e-15):
self.decay_extrap_A = level_diff
self.decay_extrap_B = -slope_diff / level_diff
self.intercept_limit = intercept_limit
self.slope_limit = slope_limit
self.decay_extrap = True
else:
self.decay_extrap = False
else:
self.decay_extrap = False
def _evalOrDer(self, x, _eval, _Der):
"""
Returns the level and/or first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative (etc).
Parameters
----------
        x : scalar or np.array
            Set of points where we want to evaluate the interpolated function
            and/or its derivative.
        _eval : boolean
            Indicator for whether to evaluate the level of the interpolated function.
_Der : boolean
Indicator for whether to evaluate the derivative of the interpolated function.
Returns
-------
A list including the level and/or derivative of the interpolated function where requested.
"""
i = np.maximum(np.searchsorted(self.x_list[:-1], x), 1)
alpha = (x - self.x_list[i - 1]) / (self.x_list[i] - self.x_list[i - 1])
if _eval:
y = (1.0 - alpha) * self.y_list[i - 1] + alpha * self.y_list[i]
if _Der:
dydx = (self.y_list[i] - self.y_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
if not self.lower_extrap:
below_lower_bound = x < self.x_list[0]
if _eval:
y[below_lower_bound] = np.nan
if _Der:
dydx[below_lower_bound] = np.nan
if self.decay_extrap:
above_upper_bound = x > self.x_list[-1]
x_temp = x[above_upper_bound] - self.x_list[-1]
if _eval:
y[above_upper_bound] = (
self.intercept_limit
+ self.slope_limit * x[above_upper_bound]
- self.decay_extrap_A * np.exp(-self.decay_extrap_B * x_temp)
)
if _Der:
dydx[above_upper_bound] = (
self.slope_limit
+ self.decay_extrap_B
* self.decay_extrap_A
* np.exp(-self.decay_extrap_B * x_temp)
)
output = []
if _eval:
output += [
y,
]
if _Der:
output += [
dydx,
]
return output
def _evaluate(self, x, return_indices=False):
"""
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
"""
return self._evalOrDer(x, True, False)[0]
def _der(self, x):
"""
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
"""
return self._evalOrDer(x, False, True)[0]
def _evalAndDer(self, x):
"""
Returns the level and first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative (etc).
"""
y, dydx = self._evalOrDer(x, True, True)
return y, dydx
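# A minimal sketch of the decay extrapolation described in the class docstring
# (illustrative only; the grid and limits below are made up): above the last
# gridpoint x_n the interpolant follows
#     f(x) = intercept_limit + slope_limit * x - A * exp(-B * (x - x_n)),
# with A and B chosen so that the level and slope are continuous at x_n, so f
# approaches the limiting line from below rather than extrapolating linearly.
def _sketch_linear_interp_decay_extrap():
    x_grid = np.array([0.0, 1.0, 2.0, 3.0])
    y_grid = np.array([0.0, 0.9, 1.6, 2.2])
    f = LinearInterp(x_grid, y_grid, intercept_limit=1.0, slope_limit=0.5)
    x_far = np.array([3.0, 5.0, 10.0, 50.0])
    # f(x_far) approaches the limiting line 1.0 + 0.5 * x from below as x grows.
    return f(x_far), 1.0 + 0.5 * x_far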
class CubicInterp(HARKinterpolator1D):
"""
An interpolating function using piecewise cubic splines. Matches level and
slope of 1D function at gridpoints, smoothly interpolating in between.
Extrapolation above highest gridpoint approaches a limiting linear function
    if desired (otherwise linear extrapolation is used).
NOTE: When no input is given for the limiting linear function, linear
extrapolation is used above the highest gridpoint.
Parameters
----------
x_list : np.array
List of x values composing the grid.
y_list : np.array
List of y values, representing f(x) at the points in x_list.
dydx_list : np.array
List of dydx values, representing f'(x) at the points in x_list
intercept_limit : float
Intercept of limiting linear function.
slope_limit : float
Slope of limiting linear function.
lower_extrap : boolean
Indicator for whether lower extrapolation is allowed. False means
f(x) = NaN for x < min(x_list); True means linear extrapolation.
"""
distance_criteria = ["x_list", "y_list", "dydx_list"]
def __init__(
self,
x_list,
y_list,
dydx_list,
intercept_limit=None,
slope_limit=None,
lower_extrap=False,
):
self.x_list = (
np.asarray(x_list)
if _check_flatten(1, x_list)
else np.array(x_list).flatten()
)
self.y_list = (
np.asarray(y_list)
if _check_flatten(1, y_list)
else np.array(y_list).flatten()
)
self.dydx_list = (
np.asarray(dydx_list)
if _check_flatten(1, dydx_list)
else np.array(dydx_list).flatten()
)
_check_grid_dimensions(1, self.y_list, self.x_list)
_check_grid_dimensions(1, self.dydx_list, self.x_list)
self.n = len(x_list)
# Define lower extrapolation as linear function (or just NaN)
if lower_extrap:
self.coeffs = [[y_list[0], dydx_list[0], 0, 0]]
else:
self.coeffs = [[np.nan, np.nan, np.nan, np.nan]]
# Calculate interpolation coefficients on segments mapped to [0,1]
for i in range(self.n - 1):
x0 = x_list[i]
y0 = y_list[i]
x1 = x_list[i + 1]
y1 = y_list[i + 1]
Span = x1 - x0
dydx0 = dydx_list[i] * Span
dydx1 = dydx_list[i + 1] * Span
temp = [
y0,
dydx0,
3 * (y1 - y0) - 2 * dydx0 - dydx1,
2 * (y0 - y1) + dydx0 + dydx1,
]
self.coeffs.append(temp)
# Calculate extrapolation coefficients as a decay toward limiting function y = mx+b
if slope_limit is None and intercept_limit is None:
slope_limit = dydx_list[-1]
intercept_limit = y_list[-1] - slope_limit * x_list[-1]
gap = slope_limit * x1 + intercept_limit - y1
slope = slope_limit - dydx_list[self.n - 1]
if (gap != 0) and (slope <= 0):
temp = [intercept_limit, slope_limit, gap, slope / gap]
elif slope > 0:
temp = [
intercept_limit,
slope_limit,
0,
0,
] # fixing a problem when slope is positive
else:
temp = [intercept_limit, slope_limit, gap, 0]
self.coeffs.append(temp)
self.coeffs = np.array(self.coeffs)
def _evaluate(self, x):
"""
Returns the level of the interpolated function at each value in x. Only
called internally by HARKinterpolator1D.__call__ (etc).
"""
if _isscalar(x):
pos = np.searchsorted(self.x_list, x)
if pos == 0:
y = self.coeffs[0, 0] + self.coeffs[0, 1] * (x - self.x_list[0])
elif pos < self.n:
alpha = (x - self.x_list[pos - 1]) / (
self.x_list[pos] - self.x_list[pos - 1]
)
y = self.coeffs[pos, 0] + alpha * (
self.coeffs[pos, 1]
+ alpha * (self.coeffs[pos, 2] + alpha * self.coeffs[pos, 3])
)
else:
alpha = x - self.x_list[self.n - 1]
y = (
self.coeffs[pos, 0]
+ x * self.coeffs[pos, 1]
- self.coeffs[pos, 2] * np.exp(alpha * self.coeffs[pos, 3])
)
else:
m = len(x)
pos = np.searchsorted(self.x_list, x)
y = np.zeros(m)
if y.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i, :]
alpha = (x[in_bnds] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
y[in_bnds] = coeffs_in[:, 0] + alpha * (
coeffs_in[:, 1]
+ alpha * (coeffs_in[:, 2] + alpha * coeffs_in[:, 3])
)
# Do the "out of bounds" evaluation points
y[out_bot] = self.coeffs[0, 0] + self.coeffs[0, 1] * (
x[out_bot] - self.x_list[0]
)
alpha = x[out_top] - self.x_list[self.n - 1]
y[out_top] = (
self.coeffs[self.n, 0]
+ x[out_top] * self.coeffs[self.n, 1]
- self.coeffs[self.n, 2] * np.exp(alpha * self.coeffs[self.n, 3])
)
y[x == self.x_list[0]] = self.y_list[0]
return y
def _der(self, x):
"""
Returns the first derivative of the interpolated function at each value
in x. Only called internally by HARKinterpolator1D.derivative (etc).
"""
if _isscalar(x):
pos = np.searchsorted(self.x_list, x)
if pos == 0:
dydx = self.coeffs[0, 1]
elif pos < self.n:
alpha = (x - self.x_list[pos - 1]) / (
self.x_list[pos] - self.x_list[pos - 1]
)
dydx = (
self.coeffs[pos, 1]
+ alpha
* (2 * self.coeffs[pos, 2] + alpha * 3 * self.coeffs[pos, 3])
) / (self.x_list[pos] - self.x_list[pos - 1])
else:
alpha = x - self.x_list[self.n - 1]
dydx = self.coeffs[pos, 1] - self.coeffs[pos, 2] * self.coeffs[
pos, 3
] * np.exp(alpha * self.coeffs[pos, 3])
else:
m = len(x)
pos = np.searchsorted(self.x_list, x)
dydx = np.zeros(m)
if dydx.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i, :]
alpha = (x[in_bnds] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
dydx[in_bnds] = (
coeffs_in[:, 1]
+ alpha * (2 * coeffs_in[:, 2] + alpha * 3 * coeffs_in[:, 3])
) / (self.x_list[i] - self.x_list[i - 1])
# Do the "out of bounds" evaluation points
dydx[out_bot] = self.coeffs[0, 1]
alpha = x[out_top] - self.x_list[self.n - 1]
dydx[out_top] = self.coeffs[self.n, 1] - self.coeffs[
self.n, 2
] * self.coeffs[self.n, 3] * np.exp(alpha * self.coeffs[self.n, 3])
return dydx
def _evalAndDer(self, x):
"""
Returns the level and first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative (etc).
"""
if _isscalar(x):
pos = np.searchsorted(self.x_list, x)
if pos == 0:
y = self.coeffs[0, 0] + self.coeffs[0, 1] * (x - self.x_list[0])
dydx = self.coeffs[0, 1]
elif pos < self.n:
alpha = (x - self.x_list[pos - 1]) / (
self.x_list[pos] - self.x_list[pos - 1]
)
y = self.coeffs[pos, 0] + alpha * (
self.coeffs[pos, 1]
+ alpha * (self.coeffs[pos, 2] + alpha * self.coeffs[pos, 3])
)
dydx = (
self.coeffs[pos, 1]
+ alpha
* (2 * self.coeffs[pos, 2] + alpha * 3 * self.coeffs[pos, 3])
) / (self.x_list[pos] - self.x_list[pos - 1])
else:
alpha = x - self.x_list[self.n - 1]
y = (
self.coeffs[pos, 0]
+ x * self.coeffs[pos, 1]
- self.coeffs[pos, 2] * np.exp(alpha * self.coeffs[pos, 3])
)
dydx = self.coeffs[pos, 1] - self.coeffs[pos, 2] * self.coeffs[
pos, 3
] * np.exp(alpha * self.coeffs[pos, 3])
else:
m = len(x)
pos = np.searchsorted(self.x_list, x)
y = np.zeros(m)
dydx = np.zeros(m)
if y.size > 0:
out_bot = pos == 0
out_top = pos == self.n
in_bnds = np.logical_not(np.logical_or(out_bot, out_top))
# Do the "in bounds" evaluation points
i = pos[in_bnds]
coeffs_in = self.coeffs[i, :]
alpha = (x[in_bnds] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
y[in_bnds] = coeffs_in[:, 0] + alpha * (
coeffs_in[:, 1]
+ alpha * (coeffs_in[:, 2] + alpha * coeffs_in[:, 3])
)
dydx[in_bnds] = (
coeffs_in[:, 1]
+ alpha * (2 * coeffs_in[:, 2] + alpha * 3 * coeffs_in[:, 3])
) / (self.x_list[i] - self.x_list[i - 1])
# Do the "out of bounds" evaluation points
y[out_bot] = self.coeffs[0, 0] + self.coeffs[0, 1] * (
x[out_bot] - self.x_list[0]
)
dydx[out_bot] = self.coeffs[0, 1]
alpha = x[out_top] - self.x_list[self.n - 1]
y[out_top] = (
self.coeffs[self.n, 0]
+ x[out_top] * self.coeffs[self.n, 1]
- self.coeffs[self.n, 2] * np.exp(alpha * self.coeffs[self.n, 3])
)
dydx[out_top] = self.coeffs[self.n, 1] - self.coeffs[
self.n, 2
] * self.coeffs[self.n, 3] * np.exp(alpha * self.coeffs[self.n, 3])
return y, dydx
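# A brief usage sketch (illustrative only): CubicInterp matches both the level and
# the user-supplied derivative at every gridpoint, so feeding it f(x) = x**2 along
# with dydx = 2x reproduces the exact slope at the nodes.
def _sketch_cubic_interp():
    x_grid = np.linspace(0.0, 4.0, 9)
    f = CubicInterp(x_grid, x_grid ** 2, 2.0 * x_grid)
    x_test = np.array([0.5, 1.25, 3.3])
    return f(x_test), f.derivative(x_test)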
class BilinearInterp(HARKinterpolator2D):
"""
Bilinear full (or tensor) grid interpolation of a function f(x,y).
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n) such that f_values[i,j] = f(x_list[i],y_list[j])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
"""
distance_criteria = ["x_list", "y_list", "f_values"]
def __init__(self, f_values, x_list, y_list, xSearchFunc=None, ySearchFunc=None):
self.f_values = f_values
self.x_list = (
np.array(x_list)
if _check_flatten(1, x_list)
else np.array(x_list).flatten()
)
self.y_list = (
np.array(y_list)
if _check_flatten(1, y_list)
else np.array(y_list).flatten()
)
_check_grid_dimensions(2, self.f_values, self.x_list, self.y_list)
self.x_n = x_list.size
self.y_n = y_list.size
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
def _evaluate(self, x, y):
"""
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
f = (
(1 - alpha) * (1 - beta) * self.f_values[x_pos - 1, y_pos - 1]
+ (1 - alpha) * beta * self.f_values[x_pos - 1, y_pos]
+ alpha * (1 - beta) * self.f_values[x_pos, y_pos - 1]
+ alpha * beta * self.f_values[x_pos, y_pos]
)
return f
def _derX(self, x, y):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdx = (
(
(1 - beta) * self.f_values[x_pos, y_pos - 1]
+ beta * self.f_values[x_pos, y_pos]
)
- (
(1 - beta) * self.f_values[x_pos - 1, y_pos - 1]
+ beta * self.f_values[x_pos - 1, y_pos]
)
) / (self.x_list[x_pos] - self.x_list[x_pos - 1])
return dfdx
def _derY(self, x, y):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
dfdy = (
(
(1 - alpha) * self.f_values[x_pos - 1, y_pos]
+ alpha * self.f_values[x_pos, y_pos]
)
- (
(1 - alpha) * self.f_values[x_pos - 1, y_pos - 1]
+ alpha * self.f_values[x_pos, y_pos - 1]
)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
return dfdy
class TrilinearInterp(HARKinterpolator3D):
"""
Trilinear full (or tensor) grid interpolation of a function f(x,y,z).
Parameters
----------
f_values : numpy.array
An array of size (x_n,y_n,z_n) such that f_values[i,j,k] =
f(x_list[i],y_list[j],z_list[k])
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
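
    Examples
    --------
    A minimal usage sketch on a small made-up tensor grid, using the __call__
    and derivativeZ methods supplied by HARKinterpolator3D::

        x_grid = np.linspace(0.0, 1.0, 4)
        y_grid = np.linspace(0.0, 1.0, 5)
        z_grid = np.linspace(0.0, 1.0, 6)
        # f(x,y,z) = x + 2*y + 3*z sampled on the tensor grid
        f_vals = (x_grid[:, None, None] + 2.0 * y_grid[None, :, None]
                  + 3.0 * z_grid[None, None, :])
        f = TrilinearInterp(f_vals, x_grid, y_grid, z_grid)
        f(0.25, 0.5, 0.1)              # ~1.55
        f.derivativeZ(0.25, 0.5, 0.1)  # ~3.0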
"""
distance_criteria = ["f_values", "x_list", "y_list", "z_list"]
def __init__(
self,
f_values,
x_list,
y_list,
z_list,
xSearchFunc=None,
ySearchFunc=None,
zSearchFunc=None,
):
self.f_values = f_values
self.x_list = (
np.array(x_list)
if _check_flatten(1, x_list)
else np.array(x_list).flatten()
)
self.y_list = (
np.array(y_list)
if _check_flatten(1, y_list)
else np.array(y_list).flatten()
)
self.z_list = (
np.array(z_list)
if _check_flatten(1, z_list)
else np.array(z_list).flatten()
)
_check_grid_dimensions(3, self.f_values, self.x_list, self.y_list, self.z_list)
self.x_n = x_list.size
self.y_n = y_list.size
self.z_n = z_list.size
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
if zSearchFunc is None:
zSearchFunc = np.searchsorted
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
self.zSearchFunc = zSearchFunc
def _evaluate(self, x, y, z):
"""
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
f = (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.f_values[x_pos - 1, y_pos - 1, z_pos - 1]
+ (1 - alpha)
* (1 - beta)
* gamma
* self.f_values[x_pos - 1, y_pos - 1, z_pos]
+ (1 - alpha)
* beta
* (1 - gamma)
* self.f_values[x_pos - 1, y_pos, z_pos - 1]
+ (1 - alpha) * beta * gamma * self.f_values[x_pos - 1, y_pos, z_pos]
+ alpha
* (1 - beta)
* (1 - gamma)
* self.f_values[x_pos, y_pos - 1, z_pos - 1]
+ alpha * (1 - beta) * gamma * self.f_values[x_pos, y_pos - 1, z_pos]
+ alpha * beta * (1 - gamma) * self.f_values[x_pos, y_pos, z_pos - 1]
+ alpha * beta * gamma * self.f_values[x_pos, y_pos, z_pos]
)
return f
def _derX(self, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdx = (
(
(1 - beta) * (1 - gamma) * self.f_values[x_pos, y_pos - 1, z_pos - 1]
+ (1 - beta) * gamma * self.f_values[x_pos, y_pos - 1, z_pos]
+ beta * (1 - gamma) * self.f_values[x_pos, y_pos, z_pos - 1]
+ beta * gamma * self.f_values[x_pos, y_pos, z_pos]
)
- (
(1 - beta)
* (1 - gamma)
* self.f_values[x_pos - 1, y_pos - 1, z_pos - 1]
+ (1 - beta) * gamma * self.f_values[x_pos - 1, y_pos - 1, z_pos]
+ beta * (1 - gamma) * self.f_values[x_pos - 1, y_pos, z_pos - 1]
+ beta * gamma * self.f_values[x_pos - 1, y_pos, z_pos]
)
) / (self.x_list[x_pos] - self.x_list[x_pos - 1])
return dfdx
def _derY(self, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdy = (
(
(1 - alpha) * (1 - gamma) * self.f_values[x_pos - 1, y_pos, z_pos - 1]
+ (1 - alpha) * gamma * self.f_values[x_pos - 1, y_pos, z_pos]
+ alpha * (1 - gamma) * self.f_values[x_pos, y_pos, z_pos - 1]
+ alpha * gamma * self.f_values[x_pos, y_pos, z_pos]
)
- (
(1 - alpha)
* (1 - gamma)
* self.f_values[x_pos - 1, y_pos - 1, z_pos - 1]
+ (1 - alpha) * gamma * self.f_values[x_pos - 1, y_pos - 1, z_pos]
+ alpha * (1 - gamma) * self.f_values[x_pos, y_pos - 1, z_pos - 1]
+ alpha * gamma * self.f_values[x_pos, y_pos - 1, z_pos]
)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
return dfdy
def _derZ(self, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
"""
if _isscalar(x):
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdz = (
(
(1 - alpha) * (1 - beta) * self.f_values[x_pos - 1, y_pos - 1, z_pos]
+ (1 - alpha) * beta * self.f_values[x_pos - 1, y_pos, z_pos]
+ alpha * (1 - beta) * self.f_values[x_pos, y_pos - 1, z_pos]
+ alpha * beta * self.f_values[x_pos, y_pos, z_pos]
)
- (
(1 - alpha)
* (1 - beta)
* self.f_values[x_pos - 1, y_pos - 1, z_pos - 1]
+ (1 - alpha) * beta * self.f_values[x_pos - 1, y_pos, z_pos - 1]
+ alpha * (1 - beta) * self.f_values[x_pos, y_pos - 1, z_pos - 1]
+ alpha * beta * self.f_values[x_pos, y_pos, z_pos - 1]
)
) / (self.z_list[z_pos] - self.z_list[z_pos - 1])
return dfdz
class QuadlinearInterp(HARKinterpolator4D):
"""
Quadlinear full (or tensor) grid interpolation of a function f(w,x,y,z).
Parameters
----------
f_values : numpy.array
An array of size (w_n,x_n,y_n,z_n) such that f_values[i,j,k,l] =
f(w_list[i],x_list[j],y_list[k],z_list[l])
w_list : numpy.array
An array of x values, with length designated w_n.
x_list : numpy.array
An array of x values, with length designated x_n.
y_list : numpy.array
An array of y values, with length designated y_n.
z_list : numpy.array
An array of z values, with length designated z_n.
wSearchFunc : function
An optional function that returns the reference location for w values:
indices = wSearchFunc(w_list,w). Default is np.searchsorted
xSearchFunc : function
An optional function that returns the reference location for x values:
indices = xSearchFunc(x_list,x). Default is np.searchsorted
ySearchFunc : function
An optional function that returns the reference location for y values:
indices = ySearchFunc(y_list,y). Default is np.searchsorted
zSearchFunc : function
An optional function that returns the reference location for z values:
indices = zSearchFunc(z_list,z). Default is np.searchsorted
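
    Examples
    --------
    A minimal usage sketch on a small made-up tensor grid, using the __call__
    and derivativeW methods supplied by HARKinterpolator4D::

        grid = np.linspace(0.0, 1.0, 3)
        # f(w,x,y,z) = w + x + y + z sampled on the tensor grid
        f_vals = (grid[:, None, None, None] + grid[None, :, None, None]
                  + grid[None, None, :, None] + grid[None, None, None, :])
        f = QuadlinearInterp(f_vals, grid, grid, grid, grid)
        f(0.1, 0.2, 0.3, 0.4)              # ~1.0
        f.derivativeW(0.1, 0.2, 0.3, 0.4)  # ~1.0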
"""
distance_criteria = ["f_values", "w_list", "x_list", "y_list", "z_list"]
def __init__(
self,
f_values,
w_list,
x_list,
y_list,
z_list,
wSearchFunc=None,
xSearchFunc=None,
ySearchFunc=None,
zSearchFunc=None,
):
self.f_values = f_values
self.w_list = (
np.array(w_list)
if _check_flatten(1, w_list)
else np.array(w_list).flatten()
)
self.x_list = (
np.array(x_list)
if _check_flatten(1, x_list)
else np.array(x_list).flatten()
)
self.y_list = (
np.array(y_list)
if _check_flatten(1, y_list)
else np.array(y_list).flatten()
)
self.z_list = (
np.array(z_list)
if _check_flatten(1, z_list)
else np.array(z_list).flatten()
)
_check_grid_dimensions(
4, self.f_values, self.w_list, self.x_list, self.y_list, self.z_list
)
self.w_n = w_list.size
self.x_n = x_list.size
self.y_n = y_list.size
self.z_n = z_list.size
if wSearchFunc is None:
wSearchFunc = np.searchsorted
if xSearchFunc is None:
xSearchFunc = np.searchsorted
if ySearchFunc is None:
ySearchFunc = np.searchsorted
if zSearchFunc is None:
zSearchFunc = np.searchsorted
self.wSearchFunc = wSearchFunc
self.xSearchFunc = xSearchFunc
self.ySearchFunc = ySearchFunc
self.zSearchFunc = zSearchFunc
def _evaluate(self, w, x, y, z):
"""
        Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
"""
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list, w), self.w_n - 1), 1)
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
w_pos = self.wSearchFunc(self.w_list, w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n - 1] = self.w_n - 1
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i - 1]) / (self.w_list[i] - self.w_list[i - 1])
beta = (x - self.x_list[j - 1]) / (self.x_list[j] - self.x_list[j - 1])
gamma = (y - self.y_list[k - 1]) / (self.y_list[k] - self.y_list[k - 1])
delta = (z - self.z_list[l - 1]) / (self.z_list[l] - self.z_list[l - 1])
f = (1 - alpha) * (
(1 - beta)
* (
(1 - gamma) * (1 - delta) * self.f_values[i - 1, j - 1, k - 1, l - 1]
+ (1 - gamma) * delta * self.f_values[i - 1, j - 1, k - 1, l]
+ gamma * (1 - delta) * self.f_values[i - 1, j - 1, k, l - 1]
+ gamma * delta * self.f_values[i - 1, j - 1, k, l]
)
+ beta
* (
(1 - gamma) * (1 - delta) * self.f_values[i - 1, j, k - 1, l - 1]
+ (1 - gamma) * delta * self.f_values[i - 1, j, k - 1, l]
+ gamma * (1 - delta) * self.f_values[i - 1, j, k, l - 1]
+ gamma * delta * self.f_values[i - 1, j, k, l]
)
) + alpha * (
(1 - beta)
* (
(1 - gamma) * (1 - delta) * self.f_values[i, j - 1, k - 1, l - 1]
+ (1 - gamma) * delta * self.f_values[i, j - 1, k - 1, l]
+ gamma * (1 - delta) * self.f_values[i, j - 1, k, l - 1]
+ gamma * delta * self.f_values[i, j - 1, k, l]
)
+ beta
* (
(1 - gamma) * (1 - delta) * self.f_values[i, j, k - 1, l - 1]
+ (1 - gamma) * delta * self.f_values[i, j, k - 1, l]
+ gamma * (1 - delta) * self.f_values[i, j, k, l - 1]
+ gamma * delta * self.f_values[i, j, k, l]
)
)
return f
def _derW(self, w, x, y, z):
"""
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
"""
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list, w), self.w_n - 1), 1)
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
w_pos = self.wSearchFunc(self.w_list, w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n - 1] = self.w_n - 1
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
beta = (x - self.x_list[j - 1]) / (self.x_list[j] - self.x_list[j - 1])
gamma = (y - self.y_list[k - 1]) / (self.y_list[k] - self.y_list[k - 1])
delta = (z - self.z_list[l - 1]) / (self.z_list[l] - self.z_list[l - 1])
dfdw = (
(
(1 - beta)
* (1 - gamma)
* (1 - delta)
* self.f_values[i, j - 1, k - 1, l - 1]
+ (1 - beta) * (1 - gamma) * delta * self.f_values[i, j - 1, k - 1, l]
+ (1 - beta) * gamma * (1 - delta) * self.f_values[i, j - 1, k, l - 1]
+ (1 - beta) * gamma * delta * self.f_values[i, j - 1, k, l]
+ beta * (1 - gamma) * (1 - delta) * self.f_values[i, j, k - 1, l - 1]
+ beta * (1 - gamma) * delta * self.f_values[i, j, k - 1, l]
+ beta * gamma * (1 - delta) * self.f_values[i, j, k, l - 1]
+ beta * gamma * delta * self.f_values[i, j, k, l]
)
- (
(1 - beta)
* (1 - gamma)
* (1 - delta)
* self.f_values[i - 1, j - 1, k - 1, l - 1]
+ (1 - beta)
* (1 - gamma)
* delta
* self.f_values[i - 1, j - 1, k - 1, l]
+ (1 - beta)
* gamma
* (1 - delta)
* self.f_values[i - 1, j - 1, k, l - 1]
+ (1 - beta) * gamma * delta * self.f_values[i - 1, j - 1, k, l]
+ beta
* (1 - gamma)
* (1 - delta)
* self.f_values[i - 1, j, k - 1, l - 1]
+ beta * (1 - gamma) * delta * self.f_values[i - 1, j, k - 1, l]
+ beta * gamma * (1 - delta) * self.f_values[i - 1, j, k, l - 1]
+ beta * gamma * delta * self.f_values[i - 1, j, k, l]
)
) / (self.w_list[i] - self.w_list[i - 1])
return dfdw
def _derX(self, w, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
"""
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list, w), self.w_n - 1), 1)
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
w_pos = self.wSearchFunc(self.w_list, w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n - 1] = self.w_n - 1
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i - 1]) / (self.w_list[i] - self.w_list[i - 1])
gamma = (y - self.y_list[k - 1]) / (self.y_list[k] - self.y_list[k - 1])
delta = (z - self.z_list[l - 1]) / (self.z_list[l] - self.z_list[l - 1])
dfdx = (
(
(1 - alpha)
* (1 - gamma)
* (1 - delta)
* self.f_values[i - 1, j, k - 1, l - 1]
+ (1 - alpha) * (1 - gamma) * delta * self.f_values[i - 1, j, k - 1, l]
+ (1 - alpha) * gamma * (1 - delta) * self.f_values[i - 1, j, k, l - 1]
+ (1 - alpha) * gamma * delta * self.f_values[i - 1, j, k, l]
+ alpha * (1 - gamma) * (1 - delta) * self.f_values[i, j, k - 1, l - 1]
+ alpha * (1 - gamma) * delta * self.f_values[i, j, k - 1, l]
+ alpha * gamma * (1 - delta) * self.f_values[i, j, k, l - 1]
+ alpha * gamma * delta * self.f_values[i, j, k, l]
)
- (
(1 - alpha)
* (1 - gamma)
* (1 - delta)
* self.f_values[i - 1, j - 1, k - 1, l - 1]
+ (1 - alpha)
* (1 - gamma)
* delta
* self.f_values[i - 1, j - 1, k - 1, l]
+ (1 - alpha)
* gamma
* (1 - delta)
* self.f_values[i - 1, j - 1, k, l - 1]
+ (1 - alpha) * gamma * delta * self.f_values[i - 1, j - 1, k, l]
+ alpha
* (1 - gamma)
* (1 - delta)
* self.f_values[i, j - 1, k - 1, l - 1]
+ alpha * (1 - gamma) * delta * self.f_values[i, j - 1, k - 1, l]
+ alpha * gamma * (1 - delta) * self.f_values[i, j - 1, k, l - 1]
+ alpha * gamma * delta * self.f_values[i, j - 1, k, l]
)
) / (self.x_list[j] - self.x_list[j - 1])
return dfdx
def _derY(self, w, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
"""
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list, w), self.w_n - 1), 1)
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
w_pos = self.wSearchFunc(self.w_list, w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n - 1] = self.w_n - 1
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i - 1]) / (self.w_list[i] - self.w_list[i - 1])
beta = (x - self.x_list[j - 1]) / (self.x_list[j] - self.x_list[j - 1])
delta = (z - self.z_list[l - 1]) / (self.z_list[l] - self.z_list[l - 1])
dfdy = (
(
(1 - alpha)
* (1 - beta)
* (1 - delta)
* self.f_values[i - 1, j - 1, k, l - 1]
+ (1 - alpha) * (1 - beta) * delta * self.f_values[i - 1, j - 1, k, l]
+ (1 - alpha) * beta * (1 - delta) * self.f_values[i - 1, j, k, l - 1]
+ (1 - alpha) * beta * delta * self.f_values[i - 1, j, k, l]
+ alpha * (1 - beta) * (1 - delta) * self.f_values[i, j - 1, k, l - 1]
+ alpha * (1 - beta) * delta * self.f_values[i, j - 1, k, l]
+ alpha * beta * (1 - delta) * self.f_values[i, j, k, l - 1]
+ alpha * beta * delta * self.f_values[i, j, k, l]
)
- (
(1 - alpha)
* (1 - beta)
* (1 - delta)
* self.f_values[i - 1, j - 1, k - 1, l - 1]
+ (1 - alpha)
* (1 - beta)
* delta
* self.f_values[i - 1, j - 1, k - 1, l]
+ (1 - alpha)
* beta
* (1 - delta)
* self.f_values[i - 1, j, k - 1, l - 1]
+ (1 - alpha) * beta * delta * self.f_values[i - 1, j, k - 1, l]
+ alpha
* (1 - beta)
* (1 - delta)
* self.f_values[i, j - 1, k - 1, l - 1]
+ alpha * (1 - beta) * delta * self.f_values[i, j - 1, k - 1, l]
+ alpha * beta * (1 - delta) * self.f_values[i, j, k - 1, l - 1]
+ alpha * beta * delta * self.f_values[i, j, k - 1, l]
)
) / (self.y_list[k] - self.y_list[k - 1])
return dfdy
def _derZ(self, w, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
"""
if _isscalar(w):
w_pos = max(min(self.wSearchFunc(self.w_list, w), self.w_n - 1), 1)
x_pos = max(min(self.xSearchFunc(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(self.ySearchFunc(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(self.zSearchFunc(self.z_list, z), self.z_n - 1), 1)
else:
w_pos = self.wSearchFunc(self.w_list, w)
w_pos[w_pos < 1] = 1
w_pos[w_pos > self.w_n - 1] = self.w_n - 1
x_pos = self.xSearchFunc(self.x_list, x)
x_pos[x_pos < 1] = 1
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = self.ySearchFunc(self.y_list, y)
y_pos[y_pos < 1] = 1
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
z_pos = self.zSearchFunc(self.z_list, z)
z_pos[z_pos < 1] = 1
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
i = w_pos # for convenience
j = x_pos
k = y_pos
l = z_pos
alpha = (w - self.w_list[i - 1]) / (self.w_list[i] - self.w_list[i - 1])
beta = (x - self.x_list[j - 1]) / (self.x_list[j] - self.x_list[j - 1])
gamma = (y - self.y_list[k - 1]) / (self.y_list[k] - self.y_list[k - 1])
dfdz = (
(
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.f_values[i - 1, j - 1, k - 1, l]
+ (1 - alpha) * (1 - beta) * gamma * self.f_values[i - 1, j - 1, k, l]
+ (1 - alpha) * beta * (1 - gamma) * self.f_values[i - 1, j, k - 1, l]
+ (1 - alpha) * beta * gamma * self.f_values[i - 1, j, k, l]
+ alpha * (1 - beta) * (1 - gamma) * self.f_values[i, j - 1, k - 1, l]
+ alpha * (1 - beta) * gamma * self.f_values[i, j - 1, k, l]
+ alpha * beta * (1 - gamma) * self.f_values[i, j, k - 1, l]
+ alpha * beta * gamma * self.f_values[i, j, k, l]
)
- (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.f_values[i - 1, j - 1, k - 1, l - 1]
+ (1 - alpha)
* (1 - beta)
* gamma
* self.f_values[i - 1, j - 1, k, l - 1]
+ (1 - alpha)
* beta
* (1 - gamma)
* self.f_values[i - 1, j, k - 1, l - 1]
+ (1 - alpha) * beta * gamma * self.f_values[i - 1, j, k, l - 1]
+ alpha
* (1 - beta)
* (1 - gamma)
* self.f_values[i, j - 1, k - 1, l - 1]
+ alpha * (1 - beta) * gamma * self.f_values[i, j - 1, k, l - 1]
+ alpha * beta * (1 - gamma) * self.f_values[i, j, k - 1, l - 1]
+ alpha * beta * gamma * self.f_values[i, j, k, l - 1]
)
) / (self.z_list[l] - self.z_list[l - 1])
return dfdz
class LowerEnvelope(HARKinterpolator1D):
"""
The lower envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
nan_bool : boolean
An indicator for whether the solver should exclude NA's when
        forming the lower envelope.
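
    Examples
    --------
    A minimal usage sketch taking the pointwise minimum of two 1D interpolants;
    it assumes LinearInterp (defined earlier in this module) is available with
    signature LinearInterp(x_list, y_list)::

        x_grid = np.linspace(0.0, 1.0, 11)
        f1 = LinearInterp(x_grid, 2.0 * x_grid)   # f1(x) = 2x
        f2 = LinearInterp(x_grid, 1.0 - x_grid)   # f2(x) = 1 - x
        env = LowerEnvelope(f1, f2)
        env(np.array([0.1, 0.9]))             # ~[0.2, 0.1]
        env.derivative(np.array([0.1, 0.9]))  # ~[2.0, -1.0]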
"""
distance_criteria = ["functions"]
def __init__(self, *functions, nan_bool=True):
if nan_bool:
self.compare = np.nanmin
self.argcompare = np.nanargmin
else:
self.compare = np.min
self.argcompare = np.argmin
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self, x):
"""
Returns the level of the function at each value in x as the minimum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
"""
if _isscalar(x):
y = self.compare([f(x) for f in self.functions])
else:
m = len(x)
fx = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
fx[:, j] = self.functions[j](x)
y = self.compare(fx, axis=1)
return y
def _der(self, x):
"""
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
"""
y, dydx = self._evalAndDer(x)
return dydx # Sadly, this is the fastest / most convenient way...
def _evalAndDer(self, x):
"""
Returns the level and first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative.
"""
m = len(x)
fx = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
fx[:, j] = self.functions[j](x)
i = self.argcompare(fx, axis=1)
y = fx[np.arange(m), i]
dydx = np.zeros_like(y)
for j in range(self.funcCount):
c = i == j
dydx[c] = self.functions[j].derivative(x[c])
return y, dydx
class UpperEnvelope(HARKinterpolator1D):
"""
The upper envelope of a finite set of 1D functions, each of which can be of
any class that has the methods __call__, derivative, and eval_with_derivative.
Generally: it combines HARKinterpolator1Ds.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator1D
nan_bool : boolean
An indicator for whether the solver should exclude NA's when forming
        the upper envelope.
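
    Examples
    --------
    A minimal usage sketch, mirroring LowerEnvelope but keeping the pointwise
    maximum; it assumes LinearInterp (defined earlier in this module) is
    available with signature LinearInterp(x_list, y_list)::

        x_grid = np.linspace(0.0, 1.0, 11)
        f1 = LinearInterp(x_grid, 2.0 * x_grid)   # f1(x) = 2x
        f2 = LinearInterp(x_grid, 1.0 - x_grid)   # f2(x) = 1 - x
        env = UpperEnvelope(f1, f2)
        env(np.array([0.1, 0.9]))   # ~[0.9, 1.8], the larger branch at each point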
"""
distance_criteria = ["functions"]
def __init__(self, *functions, nan_bool=True):
if nan_bool:
self.compare = np.nanmax
self.argcompare = np.nanargmax
else:
self.compare = np.max
self.argcompare = np.argmax
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self, x):
"""
Returns the level of the function at each value in x as the maximum among
all of the functions. Only called internally by HARKinterpolator1D.__call__.
"""
if _isscalar(x):
y = self.compare([f(x) for f in self.functions])
else:
m = len(x)
fx = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
fx[:, j] = self.functions[j](x)
y = self.compare(fx, axis=1)
return y
def _der(self, x):
"""
Returns the first derivative of the function at each value in x. Only
called internally by HARKinterpolator1D.derivative.
"""
y, dydx = self._evalAndDer(x)
return dydx # Sadly, this is the fastest / most convenient way...
def _evalAndDer(self, x):
"""
Returns the level and first derivative of the function at each value in
        x. Only called internally by HARKinterpolator1D.eval_with_derivative.
"""
m = len(x)
fx = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
fx[:, j] = self.functions[j](x)
i = self.argcompare(fx, axis=1)
y = fx[np.arange(m), i]
dydx = np.zeros_like(y)
for j in range(self.funcCount):
c = i == j
dydx[c] = self.functions[j].derivative(x[c])
return y, dydx
class LowerEnvelope2D(HARKinterpolator2D):
"""
The lower envelope of a finite set of 2D functions, each of which can be of
any class that has the methods __call__, derivativeX, and derivativeY.
Generally: it combines HARKinterpolator2Ds.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator2D
nan_bool : boolean
An indicator for whether the solver should exclude NA's when forming
the lower envelope.
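
    Examples
    --------
    A minimal usage sketch taking the pointwise minimum of two BilinearInterp
    surfaces built on the same made-up grid::

        x_grid = np.linspace(0.0, 1.0, 5)
        y_grid = np.linspace(0.0, 1.0, 5)
        f1 = BilinearInterp(np.outer(x_grid, np.ones(5)), x_grid, y_grid)  # f1 = x
        f2 = BilinearInterp(np.outer(np.ones(5), y_grid), x_grid, y_grid)  # f2 = y
        env = LowerEnvelope2D(f1, f2)
        env(np.array([0.2, 0.8]), np.array([0.5, 0.5]))   # ~[0.2, 0.5]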
"""
distance_criteria = ["functions"]
def __init__(self, *functions, nan_bool=True):
if nan_bool:
self.compare = np.nanmin
self.argcompare = np.nanargmin
else:
self.compare = np.min
self.argcompare = np.argmin
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self, x, y):
"""
Returns the level of the function at each value in (x,y) as the minimum
among all of the functions. Only called internally by
HARKinterpolator2D.__call__.
"""
if _isscalar(x):
f = self.compare([f(x, y) for f in self.functions])
else:
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y)
f = self.compare(temp, axis=1)
return f
def _derX(self, x, y):
"""
Returns the first derivative of the function with respect to X at each
        value in (x,y). Only called internally by HARKinterpolator2D.derivativeX.
"""
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y)
i = self.argcompare(temp, axis=1)
dfdx = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdx[c] = self.functions[j].derivativeX(x[c], y[c])
return dfdx
def _derY(self, x, y):
"""
Returns the first derivative of the function with respect to Y at each
        value in (x,y). Only called internally by HARKinterpolator2D.derivativeY.
"""
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y)
i = self.argcompare(temp, axis=1)
dfdy = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdy[c] = self.functions[j].derivativeY(x[c], y[c])
return dfdy
class LowerEnvelope3D(HARKinterpolator3D):
"""
The lower envelope of a finite set of 3D functions, each of which can be of
any class that has the methods __call__, derivativeX, derivativeY, and
    derivativeZ. Generally: it combines HARKinterpolator3Ds.
Parameters
----------
*functions : function
Any number of real functions; often instances of HARKinterpolator3D
nan_bool : boolean
An indicator for whether the solver should exclude NA's when forming
the lower envelope.
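
    Examples
    --------
    A minimal usage sketch taking the pointwise minimum of two TrilinearInterp
    functions built on the same made-up grid::

        grid = np.linspace(0.0, 1.0, 4)
        ones = np.ones((4, 4, 4))
        f1 = TrilinearInterp(grid[:, None, None] * ones, grid, grid, grid)  # f1 = x
        f2 = TrilinearInterp(grid[None, None, :] * ones, grid, grid, grid)  # f2 = z
        env = LowerEnvelope3D(f1, f2)
        env(np.array([0.3]), np.array([0.5]), np.array([0.9]))   # ~[0.3]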
"""
distance_criteria = ["functions"]
def __init__(self, *functions, nan_bool=True):
if nan_bool:
self.compare = np.nanmin
self.argcompare = np.nanargmin
else:
self.compare = np.min
self.argcompare = np.argmin
self.functions = []
for function in functions:
self.functions.append(function)
self.funcCount = len(self.functions)
def _evaluate(self, x, y, z):
"""
Returns the level of the function at each value in (x,y,z) as the minimum
among all of the functions. Only called internally by
HARKinterpolator3D.__call__.
"""
if _isscalar(x):
f = self.compare([f(x, y, z) for f in self.functions])
else:
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y, z)
f = self.compare(temp, axis=1)
return f
def _derX(self, x, y, z):
"""
Returns the first derivative of the function with respect to X at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeX.
"""
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y, z)
i = self.argcompare(temp, axis=1)
dfdx = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdx[c] = self.functions[j].derivativeX(x[c], y[c], z[c])
return dfdx
def _derY(self, x, y, z):
"""
Returns the first derivative of the function with respect to Y at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeY.
"""
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y, z)
i = self.argcompare(temp, axis=1)
dfdy = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdy[c] = self.functions[j].derivativeY(x[c], y[c], z[c])
return dfdy
def _derZ(self, x, y, z):
"""
Returns the first derivative of the function with respect to Z at each
        value in (x,y,z). Only called internally by HARKinterpolator3D.derivativeZ.
"""
m = len(x)
temp = np.zeros((m, self.funcCount))
for j in range(self.funcCount):
temp[:, j] = self.functions[j](x, y, z)
i = self.argcompare(temp, axis=1)
dfdz = np.zeros_like(x)
for j in range(self.funcCount):
c = i == j
dfdz[c] = self.functions[j].derivativeZ(x[c], y[c], z[c])
return dfdz
class VariableLowerBoundFunc2D(MetricObject):
"""
A class for representing a function with two real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints, as occurs in the persistent shocks model.
Parameters
----------
func : function
A function f: (R_+ x R) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
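
    Examples
    --------
    A minimal usage sketch: the function of interest is g(x,y) = x - lowerBound(y),
    represented by a BilinearInterp over the shifted first argument and a
    LinearInterp lower bound (LinearInterp is assumed from earlier in this
    module, with signature LinearInterp(x_list, y_list))::

        y_grid = np.linspace(0.0, 1.0, 5)
        xshift_grid = np.linspace(0.0, 2.0, 9)            # grid in x - lowerBound(y)
        shifted_vals = np.outer(xshift_grid, np.ones(5))  # value = shifted x
        shifted_func = BilinearInterp(shifted_vals, xshift_grid, y_grid)
        lower_bound = LinearInterp(y_grid, -0.5 * y_grid)  # bound falls with y
        g = VariableLowerBoundFunc2D(shifted_func, lower_bound)
        g(np.array([0.3]), np.array([0.4]))   # ~[0.5] = 0.3 - (-0.2)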
"""
distance_criteria = ["func", "lowerBound"]
def __init__(self, func, lowerBound):
self.func = func
self.lowerBound = lowerBound
def __call__(self, x, y):
"""
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y), of same shape as inputs.
"""
xShift = self.lowerBound(y)
f_out = self.func(x - xShift, y)
return f_out
def derivativeX(self, x, y):
"""
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y), of same shape as inputs.
"""
xShift = self.lowerBound(y)
dfdx_out = self.func.derivativeX(x - xShift, y)
return dfdx_out
def derivativeY(self, x, y):
"""
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y), of same shape as inputs.
"""
xShift, xShiftDer = self.lowerBound.eval_with_derivative(y)
dfdy_out = self.func.derivativeY(
x - xShift, y
) - xShiftDer * self.func.derivativeX(x - xShift, y)
return dfdy_out
class VariableLowerBoundFunc3D(MetricObject):
"""
A class for representing a function with three real inputs whose lower bound
in the first input depends on the second input. Useful for managing curved
natural borrowing constraints.
Parameters
----------
func : function
A function f: (R_+ x R^2) --> R representing the function of interest
shifted by its lower bound in the first input.
lowerBound : function
The lower bound in the first input of the function of interest, as
a function of the second input.
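
    Examples
    --------
    A minimal usage sketch, analogous to VariableLowerBoundFunc2D, with a
    TrilinearInterp over the shifted first argument (LinearInterp is assumed
    from earlier in this module)::

        grid = np.linspace(0.0, 1.0, 4)
        shifted_vals = grid[:, None, None] * np.ones((4, 4, 4))  # value = shifted x
        shifted_func = TrilinearInterp(shifted_vals, grid, grid, grid)
        lower_bound = LinearInterp(grid, -0.5 * grid)
        g = VariableLowerBoundFunc3D(shifted_func, lower_bound)
        g(np.array([0.3]), np.array([0.4]), np.array([0.7]))   # ~[0.5]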
"""
distance_criteria = ["func", "lowerBound"]
def __init__(self, func, lowerBound):
self.func = func
self.lowerBound = lowerBound
def __call__(self, x, y, z):
"""
Evaluate the function at given state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
f_out : np.array
Function evaluated at (x,y,z), of same shape as inputs.
"""
xShift = self.lowerBound(y)
f_out = self.func(x - xShift, y, z)
return f_out
def derivativeX(self, x, y, z):
"""
Evaluate the first derivative with respect to x of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdx_out : np.array
First derivative of function with respect to the first input,
evaluated at (x,y,z), of same shape as inputs.
"""
xShift = self.lowerBound(y)
dfdx_out = self.func.derivativeX(x - xShift, y, z)
return dfdx_out
def derivativeY(self, x, y, z):
"""
Evaluate the first derivative with respect to y of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdy_out : np.array
First derivative of function with respect to the second input,
evaluated at (x,y,z), of same shape as inputs.
"""
xShift, xShiftDer = self.lowerBound.eval_with_derivative(y)
dfdy_out = self.func.derivativeY(
x - xShift, y, z
) - xShiftDer * self.func.derivativeX(x - xShift, y, z)
return dfdy_out
def derivativeZ(self, x, y, z):
"""
Evaluate the first derivative with respect to z of the function at given
state space points.
Parameters
----------
x : np.array
First input values.
y : np.array
Second input values; should be of same shape as x.
z : np.array
Third input values; should be of same shape as x.
Returns
-------
dfdz_out : np.array
First derivative of function with respect to the third input,
evaluated at (x,y,z), of same shape as inputs.
"""
xShift = self.lowerBound(y)
dfdz_out = self.func.derivativeZ(x - xShift, y, z)
return dfdz_out
class LinearInterpOnInterp1D(HARKinterpolator2D):
"""
A 2D interpolator that linearly interpolates among a list of 1D interpolators.
Parameters
----------
xInterpolators : [HARKinterpolator1D]
A list of 1D interpolations over the x variable. The nth element of
xInterpolators represents f(x,y_values[n]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
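
    Examples
    --------
    A minimal usage sketch: one 1D interpolant in x for each y gridpoint, with
    linear interpolation across y (LinearInterp is assumed from earlier in
    this module)::

        x_grid = np.linspace(0.0, 1.0, 11)
        y_grid = np.array([0.0, 1.0, 2.0])
        # The n-th interpolant represents f(x, y_grid[n]) = x * y_grid[n]
        xInterps = [LinearInterp(x_grid, x_grid * y0) for y0 in y_grid]
        f = LinearInterpOnInterp1D(xInterps, y_grid)
        f(np.array([0.5]), np.array([1.5]))   # ~[0.75]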
"""
distance_criteria = ["xInterpolators", "y_list"]
def __init__(self, xInterpolators, y_values):
self.xInterpolators = xInterpolators
self.y_list = y_values
self.y_n = y_values.size
def _evaluate(self, x, y):
"""
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
f = (1 - alpha) * self.xInterpolators[y_pos - 1](
x
) + alpha * self.xInterpolators[y_pos](x)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
f = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1, self.y_n):
c = y_pos == i
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
f[c] = (1 - alpha) * self.xInterpolators[i - 1](
x[c]
) + alpha * self.xInterpolators[i](x[c])
return f
def _derX(self, x, y):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdx = (1 - alpha) * self.xInterpolators[y_pos - 1]._der(
x
) + alpha * self.xInterpolators[y_pos]._der(x)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1, self.y_n):
c = y_pos == i
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
dfdx[c] = (1 - alpha) * self.xInterpolators[i - 1]._der(
x[c]
) + alpha * self.xInterpolators[i]._der(x[c])
return dfdx
def _derY(self, x, y):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
dfdy = (
self.xInterpolators[y_pos](x) - self.xInterpolators[y_pos - 1](x)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
if y.size > 0:
for i in range(1, self.y_n):
c = y_pos == i
if np.any(c):
dfdy[c] = (
self.xInterpolators[i](x[c])
- self.xInterpolators[i - 1](x[c])
) / (self.y_list[i] - self.y_list[i - 1])
return dfdy
class BilinearInterpOnInterp1D(HARKinterpolator3D):
"""
A 3D interpolator that bilinearly interpolates among a list of lists of 1D
interpolators.
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y_0,z_0) for a fixed grid
of y_0 and z_0 values.
Parameters
----------
xInterpolators : [[HARKinterpolator1D]]
A list of lists of 1D interpolations over the x variable. The i,j-th
element of xInterpolators represents f(x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to xInterpolators.
z_values: numpy.array
An array of z values equal in length to xInterpolators[0].
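
    Examples
    --------
    A minimal usage sketch: a list of lists of 1D interpolants in x, one for
    each (y, z) gridpoint pair (LinearInterp is assumed from earlier in this
    module)::

        x_grid = np.linspace(0.0, 1.0, 11)
        y_grid = np.array([0.0, 1.0])
        z_grid = np.array([0.0, 1.0, 2.0])
        # The (i,j)-th interpolant represents f(x, y_grid[i], z_grid[j]) = x + y + z
        xInterps = [[LinearInterp(x_grid, x_grid + y0 + z0) for z0 in z_grid]
                    for y0 in y_grid]
        f = BilinearInterpOnInterp1D(xInterps, y_grid, z_grid)
        f(np.array([0.5]), np.array([0.5]), np.array([1.5]))   # ~[2.5]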
"""
distance_criteria = ["xInterpolators", "y_list", "z_list"]
def __init__(self, xInterpolators, y_values, z_values):
self.xInterpolators = xInterpolators
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self, x, y, z):
"""
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
f = (
(1 - alpha) * (1 - beta) * self.xInterpolators[y_pos - 1][z_pos - 1](x)
+ (1 - alpha) * beta * self.xInterpolators[y_pos - 1][z_pos](x)
+ alpha * (1 - beta) * self.xInterpolators[y_pos][z_pos - 1](x)
+ alpha * beta * self.xInterpolators[y_pos][z_pos](x)
)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
f[c] = (
(1 - alpha)
* (1 - beta)
* self.xInterpolators[i - 1][j - 1](x[c])
+ (1 - alpha) * beta * self.xInterpolators[i - 1][j](x[c])
+ alpha * (1 - beta) * self.xInterpolators[i][j - 1](x[c])
+ alpha * beta * self.xInterpolators[i][j](x[c])
)
return f
def _derX(self, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdx = (
(1 - alpha)
* (1 - beta)
* self.xInterpolators[y_pos - 1][z_pos - 1]._der(x)
+ (1 - alpha) * beta * self.xInterpolators[y_pos - 1][z_pos]._der(x)
+ alpha * (1 - beta) * self.xInterpolators[y_pos][z_pos - 1]._der(x)
+ alpha * beta * self.xInterpolators[y_pos][z_pos]._der(x)
)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
dfdx[c] = (
(1 - alpha)
* (1 - beta)
* self.xInterpolators[i - 1][j - 1]._der(x[c])
+ (1 - alpha)
* beta
* self.xInterpolators[i - 1][j]._der(x[c])
+ alpha
* (1 - beta)
* self.xInterpolators[i][j - 1]._der(x[c])
+ alpha * beta * self.xInterpolators[i][j]._der(x[c])
)
return dfdx
def _derY(self, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdy = (
(
(1 - beta) * self.xInterpolators[y_pos][z_pos - 1](x)
+ beta * self.xInterpolators[y_pos][z_pos](x)
)
- (
(1 - beta) * self.xInterpolators[y_pos - 1][z_pos - 1](x)
+ beta * self.xInterpolators[y_pos - 1][z_pos](x)
)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
dfdy[c] = (
(
(1 - beta) * self.xInterpolators[i][j - 1](x[c])
+ beta * self.xInterpolators[i][j](x[c])
)
- (
(1 - beta) * self.xInterpolators[i - 1][j - 1](x[c])
+ beta * self.xInterpolators[i - 1][j](x[c])
)
) / (self.y_list[i] - self.y_list[i - 1])
return dfdy
def _derZ(self, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdz = (
(
(1 - alpha) * self.xInterpolators[y_pos - 1][z_pos](x)
+ alpha * self.xInterpolators[y_pos][z_pos](x)
)
- (
(1 - alpha) * self.xInterpolators[y_pos - 1][z_pos - 1](x)
+ alpha * self.xInterpolators[y_pos][z_pos - 1](x)
)
) / (self.z_list[z_pos] - self.z_list[z_pos - 1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
dfdz[c] = (
(
(1 - alpha) * self.xInterpolators[i - 1][j](x[c])
+ alpha * self.xInterpolators[i][j](x[c])
)
- (
(1 - alpha) * self.xInterpolators[i - 1][j - 1](x[c])
+ alpha * self.xInterpolators[i][j - 1](x[c])
)
) / (self.z_list[j] - self.z_list[j - 1])
return dfdz
class TrilinearInterpOnInterp1D(HARKinterpolator4D):
"""
A 4D interpolator that trilinearly interpolates among a list of lists of 1D interpolators.
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x_0,y_0,z_0) for a fixed
    grid of x_0, y_0, and z_0 values.
Parameters
----------
wInterpolators : [[[HARKinterpolator1D]]]
        A list of lists of lists of 1D interpolations over the w variable.
The i,j,k-th element of wInterpolators represents f(w,x_values[i],y_values[j],z_values[k]).
x_values: numpy.array
An array of x values equal in length to wInterpolators.
y_values: numpy.array
An array of y values equal in length to wInterpolators[0].
z_values: numpy.array
An array of z values equal in length to wInterpolators[0][0]
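
    Examples
    --------
    A minimal usage sketch: a triple-nested list of 1D interpolants in w, one
    for each (x, y, z) gridpoint combination (LinearInterp is assumed from
    earlier in this module)::

        w_grid = np.linspace(0.0, 1.0, 11)
        xyz_grid = np.array([0.0, 1.0])
        # The (i,j,k)-th interpolant represents f(w, x_i, y_j, z_k) = w + x + y + z
        wInterps = [[[LinearInterp(w_grid, w_grid + x0 + y0 + z0)
                      for z0 in xyz_grid] for y0 in xyz_grid] for x0 in xyz_grid]
        f = TrilinearInterpOnInterp1D(wInterps, xyz_grid, xyz_grid, xyz_grid)
        f(np.array([0.5]), np.array([0.5]), np.array([0.5]), np.array([0.5]))  # ~[2.0]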
"""
distance_criteria = ["wInterpolators", "x_list", "y_list", "z_list"]
def __init__(self, wInterpolators, x_values, y_values, z_values):
self.wInterpolators = wInterpolators
self.x_list = x_values
self.x_n = x_values.size
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self, w, x, y, z):
"""
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
"""
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
f = (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos - 1](w)
+ (1 - alpha)
* (1 - beta)
* gamma
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos](w)
+ (1 - alpha)
* beta
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos][z_pos - 1](w)
+ (1 - alpha)
* beta
* gamma
* self.wInterpolators[x_pos - 1][y_pos][z_pos](w)
+ alpha
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos - 1][z_pos - 1](w)
+ alpha
* (1 - beta)
* gamma
* self.wInterpolators[x_pos][y_pos - 1][z_pos](w)
+ alpha
* beta
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos][z_pos - 1](w)
+ alpha * beta * gamma * self.wInterpolators[x_pos][y_pos][z_pos](w)
)
else:
m = len(x)
x_pos = np.searchsorted(self.x_list, x)
            x_pos[x_pos > self.x_n - 1] = self.x_n - 1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1, self.x_n):
for j in range(1, self.y_n):
for k in range(1, self.z_n):
c = np.logical_and(
np.logical_and(i == x_pos, j == y_pos), k == z_pos
)
if np.any(c):
alpha = (x[c] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
beta = (y[c] - self.y_list[j - 1]) / (
self.y_list[j] - self.y_list[j - 1]
)
gamma = (z[c] - self.z_list[k - 1]) / (
self.z_list[k] - self.z_list[k - 1]
)
f[c] = (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[i - 1][j - 1][k - 1](w[c])
+ (1 - alpha)
* (1 - beta)
* gamma
* self.wInterpolators[i - 1][j - 1][k](w[c])
+ (1 - alpha)
* beta
* (1 - gamma)
* self.wInterpolators[i - 1][j][k - 1](w[c])
+ (1 - alpha)
* beta
* gamma
* self.wInterpolators[i - 1][j][k](w[c])
+ alpha
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[i][j - 1][k - 1](w[c])
+ alpha
* (1 - beta)
* gamma
* self.wInterpolators[i][j - 1][k](w[c])
+ alpha
* beta
* (1 - gamma)
* self.wInterpolators[i][j][k - 1](w[c])
+ alpha
* beta
* gamma
* self.wInterpolators[i][j][k](w[c])
)
return f
def _derW(self, w, x, y, z):
"""
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
"""
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdw = (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos - 1]._der(w)
+ (1 - alpha)
* (1 - beta)
* gamma
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos]._der(w)
+ (1 - alpha)
* beta
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos][z_pos - 1]._der(w)
+ (1 - alpha)
* beta
* gamma
* self.wInterpolators[x_pos - 1][y_pos][z_pos]._der(w)
+ alpha
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos - 1][z_pos - 1]._der(w)
+ alpha
* (1 - beta)
* gamma
* self.wInterpolators[x_pos][y_pos - 1][z_pos]._der(w)
+ alpha
* beta
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos][z_pos - 1]._der(w)
+ alpha
* beta
* gamma
* self.wInterpolators[x_pos][y_pos][z_pos]._der(w)
)
else:
m = len(x)
x_pos = np.searchsorted(self.x_list, x)
            x_pos[x_pos > self.x_n - 1] = self.x_n - 1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdw = np.zeros(m) + np.nan
for i in range(1, self.x_n):
for j in range(1, self.y_n):
for k in range(1, self.z_n):
c = np.logical_and(
np.logical_and(i == x_pos, j == y_pos), k == z_pos
)
if np.any(c):
alpha = (x[c] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
beta = (y[c] - self.y_list[j - 1]) / (
self.y_list[j] - self.y_list[j - 1]
)
gamma = (z[c] - self.z_list[k - 1]) / (
self.z_list[k] - self.z_list[k - 1]
)
dfdw[c] = (
(1 - alpha)
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[i - 1][j - 1][k - 1]._der(w[c])
+ (1 - alpha)
* (1 - beta)
* gamma
* self.wInterpolators[i - 1][j - 1][k]._der(w[c])
+ (1 - alpha)
* beta
* (1 - gamma)
* self.wInterpolators[i - 1][j][k - 1]._der(w[c])
+ (1 - alpha)
* beta
* gamma
* self.wInterpolators[i - 1][j][k]._der(w[c])
+ alpha
* (1 - beta)
* (1 - gamma)
* self.wInterpolators[i][j - 1][k - 1]._der(w[c])
+ alpha
* (1 - beta)
* gamma
* self.wInterpolators[i][j - 1][k]._der(w[c])
+ alpha
* beta
* (1 - gamma)
* self.wInterpolators[i][j][k - 1]._der(w[c])
+ alpha
* beta
* gamma
* self.wInterpolators[i][j][k]._der(w[c])
)
return dfdw
def _derX(self, w, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
"""
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdx = (
(
(1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos - 1][z_pos - 1](w)
+ (1 - beta)
* gamma
* self.wInterpolators[x_pos][y_pos - 1][z_pos](w)
+ beta
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos][z_pos - 1](w)
+ beta * gamma * self.wInterpolators[x_pos][y_pos][z_pos](w)
)
- (
(1 - beta)
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos - 1](w)
+ (1 - beta)
* gamma
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos](w)
+ beta
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos][z_pos - 1](w)
+ beta * gamma * self.wInterpolators[x_pos - 1][y_pos][z_pos](w)
)
) / (self.x_list[x_pos] - self.x_list[x_pos - 1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list, x)
            x_pos[x_pos > self.x_n - 1] = self.x_n - 1
            x_pos[x_pos < 1] = 1
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1, self.x_n):
for j in range(1, self.y_n):
for k in range(1, self.z_n):
c = np.logical_and(
np.logical_and(i == x_pos, j == y_pos), k == z_pos
)
if np.any(c):
beta = (y[c] - self.y_list[j - 1]) / (
self.y_list[j] - self.y_list[j - 1]
)
gamma = (z[c] - self.z_list[k - 1]) / (
self.z_list[k] - self.z_list[k - 1]
)
dfdx[c] = (
(
(1 - beta)
* (1 - gamma)
* self.wInterpolators[i][j - 1][k - 1](w[c])
+ (1 - beta)
* gamma
* self.wInterpolators[i][j - 1][k](w[c])
+ beta
* (1 - gamma)
* self.wInterpolators[i][j][k - 1](w[c])
+ beta * gamma * self.wInterpolators[i][j][k](w[c])
)
- (
(1 - beta)
* (1 - gamma)
* self.wInterpolators[i - 1][j - 1][k - 1](w[c])
+ (1 - beta)
* gamma
* self.wInterpolators[i - 1][j - 1][k](w[c])
+ beta
* (1 - gamma)
* self.wInterpolators[i - 1][j][k - 1](w[c])
+ beta
* gamma
* self.wInterpolators[i - 1][j][k](w[c])
)
) / (self.x_list[i] - self.x_list[i - 1])
return dfdx
def _derY(self, w, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
"""
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (x - self.x_list[x_pos - 1]) / (
                self.x_list[x_pos] - self.x_list[x_pos - 1]
)
gamma = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdy = (
(
(1 - alpha)
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos][z_pos - 1](w)
+ (1 - alpha)
* gamma
* self.wInterpolators[x_pos - 1][y_pos][z_pos](w)
+ alpha
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos][z_pos - 1](w)
+ alpha * gamma * self.wInterpolators[x_pos][y_pos][z_pos](w)
)
- (
(1 - alpha)
* (1 - gamma)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos - 1](w)
+ (1 - alpha)
* gamma
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos](w)
+ alpha
* (1 - gamma)
* self.wInterpolators[x_pos][y_pos - 1][z_pos - 1](w)
+ alpha * gamma * self.wInterpolators[x_pos][y_pos - 1][z_pos](w)
)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list, x)
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1, self.x_n):
for j in range(1, self.y_n):
for k in range(1, self.z_n):
c = np.logical_and(
np.logical_and(i == x_pos, j == y_pos), k == z_pos
)
if np.any(c):
alpha = (x[c] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
gamma = (z[c] - self.z_list[k - 1]) / (
self.z_list[k] - self.z_list[k - 1]
)
dfdy[c] = (
(
(1 - alpha)
* (1 - gamma)
* self.wInterpolators[i - 1][j][k - 1](w[c])
+ (1 - alpha)
* gamma
* self.wInterpolators[i - 1][j][k](w[c])
+ alpha
* (1 - gamma)
* self.wInterpolators[i][j][k - 1](w[c])
+ alpha * gamma * self.wInterpolators[i][j][k](w[c])
)
- (
(1 - alpha)
* (1 - gamma)
* self.wInterpolators[i - 1][j - 1][k - 1](w[c])
+ (1 - alpha)
* gamma
* self.wInterpolators[i - 1][j - 1][k](w[c])
+ alpha
* (1 - gamma)
* self.wInterpolators[i][j - 1][k - 1](w[c])
+ alpha
* gamma
* self.wInterpolators[i][j - 1][k](w[c])
)
) / (self.y_list[j] - self.y_list[j - 1])
return dfdy
def _derZ(self, w, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
"""
if _isscalar(w):
x_pos = max(min(np.searchsorted(self.x_list, x), self.x_n - 1), 1)
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (x - self.x_list[x_pos - 1]) / (
self.x_list[x_pos] - self.x_list[x_pos - 1]
)
beta = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdz = (
(
(1 - alpha)
* (1 - beta)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos](w)
+ (1 - alpha)
* beta
* self.wInterpolators[x_pos - 1][y_pos][z_pos](w)
+ alpha
* (1 - beta)
* self.wInterpolators[x_pos][y_pos - 1][z_pos](w)
+ alpha * beta * self.wInterpolators[x_pos][y_pos][z_pos](w)
)
- (
(1 - alpha)
* (1 - beta)
* self.wInterpolators[x_pos - 1][y_pos - 1][z_pos - 1](w)
+ (1 - alpha)
* beta
* self.wInterpolators[x_pos - 1][y_pos][z_pos - 1](w)
+ alpha
* (1 - beta)
* self.wInterpolators[x_pos][y_pos - 1][z_pos - 1](w)
+ alpha * beta * self.wInterpolators[x_pos][y_pos][z_pos - 1](w)
)
) / (self.z_list[z_pos] - self.z_list[z_pos - 1])
else:
m = len(x)
x_pos = np.searchsorted(self.x_list, x)
x_pos[x_pos > self.x_n - 1] = self.x_n - 1
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1, self.x_n):
for j in range(1, self.y_n):
for k in range(1, self.z_n):
c = np.logical_and(
np.logical_and(i == x_pos, j == y_pos), k == z_pos
)
if np.any(c):
alpha = (x[c] - self.x_list[i - 1]) / (
self.x_list[i] - self.x_list[i - 1]
)
beta = (y[c] - self.y_list[j - 1]) / (
self.y_list[j] - self.y_list[j - 1]
)
dfdz[c] = (
(
(1 - alpha)
* (1 - beta)
* self.wInterpolators[i - 1][j - 1][k](w[c])
+ (1 - alpha)
* beta
* self.wInterpolators[i - 1][j][k](w[c])
+ alpha
* (1 - beta)
* self.wInterpolators[i][j - 1][k](w[c])
+ alpha * beta * self.wInterpolators[i][j][k](w[c])
)
- (
(1 - alpha)
* (1 - beta)
* self.wInterpolators[i - 1][j - 1][k - 1](w[c])
+ (1 - alpha)
* beta
* self.wInterpolators[i - 1][j][k - 1](w[c])
+ alpha
* (1 - beta)
* self.wInterpolators[i][j - 1][k - 1](w[c])
+ alpha
* beta
* self.wInterpolators[i][j][k - 1](w[c])
)
) / (self.z_list[k] - self.z_list[k - 1])
return dfdz
class LinearInterpOnInterp2D(HARKinterpolator3D):
"""
A 3D interpolation method that linearly interpolates between "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and one exogenous state variable when solving with the endogenous
grid method. NOTE: should not be used if an exogenous 3D grid is used, as it
will be significantly slower than TrilinearInterp.
Constructor for the class, generating an approximation to a function of
the form f(x,y,z) using interpolations over f(x,y,z_0) for a fixed grid
of z_0 values.
Parameters
----------
xyInterpolators : [HARKinterpolator2D]
A list of 2D interpolations over the x and y variables. The nth
element of xyInterpolators represents f(x,y,z_values[n]).
z_values: numpy.array
An array of z values equal in length to xyInterpolators.
"""
distance_criteria = ["xyInterpolators", "z_list"]
def __init__(self, xyInterpolators, z_values):
self.xyInterpolators = xyInterpolators
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self, x, y, z):
"""
Returns the level of the interpolated function at each value in x,y,z.
Only called internally by HARKinterpolator3D.__call__ (etc).
"""
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
f = (1 - alpha) * self.xyInterpolators[z_pos - 1](
x, y
) + alpha * self.xyInterpolators[z_pos](x, y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1, self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i - 1]) / (
self.z_list[i] - self.z_list[i - 1]
)
f[c] = (1 - alpha) * self.xyInterpolators[i - 1](
x[c], y[c]
) + alpha * self.xyInterpolators[i](x[c], y[c])
return f
def _derX(self, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeX.
"""
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdx = (1 - alpha) * self.xyInterpolators[z_pos - 1].derivativeX(
x, y
) + alpha * self.xyInterpolators[z_pos].derivativeX(x, y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1, self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i - 1]) / (
self.z_list[i] - self.z_list[i - 1]
)
dfdx[c] = (1 - alpha) * self.xyInterpolators[i - 1].derivativeX(
x[c], y[c]
) + alpha * self.xyInterpolators[i].derivativeX(x[c], y[c])
return dfdx
def _derY(self, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeY.
"""
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdy = (1 - alpha) * self.xyInterpolators[z_pos - 1].derivativeY(
x, y
) + alpha * self.xyInterpolators[z_pos].derivativeY(x, y)
else:
m = len(x)
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1, self.z_n):
c = z_pos == i
if np.any(c):
alpha = (z[c] - self.z_list[i - 1]) / (
self.z_list[i] - self.z_list[i - 1]
)
dfdy[c] = (1 - alpha) * self.xyInterpolators[i - 1].derivativeY(
x[c], y[c]
) + alpha * self.xyInterpolators[i].derivativeY(x[c], y[c])
return dfdy
def _derZ(self, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in x,y,z. Only called internally by HARKinterpolator3D.derivativeZ.
"""
if _isscalar(x):
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
dfdz = (
self.xyInterpolators[z_pos](x, y)
- self.xyInterpolators[z_pos - 1](x, y)
) / (self.z_list[z_pos] - self.z_list[z_pos - 1])
else:
m = len(x)
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
if x.size > 0:
for i in range(1, self.z_n):
c = z_pos == i
if np.any(c):
dfdz[c] = (
self.xyInterpolators[i](x[c], y[c])
- self.xyInterpolators[i - 1](x[c], y[c])
) / (self.z_list[i] - self.z_list[i - 1])
return dfdz
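# Illustrative sketch (added, not part of the original module): a minimal use
# of LinearInterpOnInterp2D. It builds one BilinearInterp layer (defined
# earlier in this module) per z node for f(x,y,z) = x*y + z and interpolates
# linearly between layers; the grids and evaluation point are invented for the
# example only.
def _example_linear_interp_on_interp2d():
    x_grid = np.linspace(0.0, 1.0, 11)
    y_grid = np.linspace(0.0, 1.0, 11)
    z_grid = np.linspace(0.0, 1.0, 5)
    x_mesh, y_mesh = np.meshgrid(x_grid, y_grid, indexing="ij")
    # One 2D layer per z value: f(x, y, z0) = x*y + z0
    layers = [BilinearInterp(x_mesh * y_mesh + z0, x_grid, y_grid) for z0 in z_grid]
    g = LinearInterpOnInterp2D(layers, z_grid)
    # Evaluate the level and the z-derivative at an interior point
    pt = np.array([0.5]), np.array([0.5]), np.array([0.3])
    return g(*pt), g.derivativeZ(*pt)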
class BilinearInterpOnInterp2D(HARKinterpolator4D):
"""
A 4D interpolation method that bilinearly interpolates among "layers" of
arbitrary 2D interpolations. Useful for models with two endogenous state
variables and two exogenous state variables when solving with the endogenous
grid method. NOTE: should not be used if an exogenous 4D grid is used, as it
will be significantly slower than QuadlinearInterp.
Constructor for the class, generating an approximation to a function of
the form f(w,x,y,z) using interpolations over f(w,x,y_0,z_0) for a fixed
grid of y_0 and z_0 values.
Parameters
----------
wxInterpolators : [[HARKinterpolator2D]]
A list of lists of 2D interpolations over the w and x variables.
The i,j-th element of wxInterpolators represents
f(w,x,y_values[i],z_values[j]).
y_values: numpy.array
An array of y values equal in length to wxInterpolators.
z_values: numpy.array
An array of z values equal in length to wxInterpolators[0].
"""
distance_criteria = ["wxInterpolators", "y_list", "z_list"]
def __init__(self, wxInterpolators, y_values, z_values):
self.wxInterpolators = wxInterpolators
self.y_list = y_values
self.y_n = y_values.size
self.z_list = z_values
self.z_n = z_values.size
def _evaluate(self, w, x, y, z):
"""
Returns the level of the interpolated function at each value in w,x,y,z.
Only called internally by HARKinterpolator4D.__call__ (etc).
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
f = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[y_pos - 1][z_pos - 1](w, x)
+ (1 - alpha) * beta * self.wxInterpolators[y_pos - 1][z_pos](w, x)
+ alpha * (1 - beta) * self.wxInterpolators[y_pos][z_pos - 1](w, x)
+ alpha * beta * self.wxInterpolators[y_pos][z_pos](w, x)
)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
f = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
f[c] = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[i - 1][j - 1](w[c], x[c])
+ (1 - alpha)
* beta
* self.wxInterpolators[i - 1][j](w[c], x[c])
+ alpha
* (1 - beta)
* self.wxInterpolators[i][j - 1](w[c], x[c])
+ alpha * beta * self.wxInterpolators[i][j](w[c], x[c])
)
return f
def _derW(self, w, x, y, z):
"""
Returns the derivative with respect to w of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeW.
"""
# This may look strange, as we call the derivativeX() method to get the
# derivative with respect to w, but that's just a quirk of 4D interpolations
# beginning with w rather than x. The derivative wrt the first dimension
# of an element of wxInterpolators is the w-derivative of the main function.
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdw = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[y_pos - 1][z_pos - 1].derivativeX(w, x)
+ (1 - alpha)
* beta
* self.wxInterpolators[y_pos - 1][z_pos].derivativeX(w, x)
+ alpha
* (1 - beta)
* self.wxInterpolators[y_pos][z_pos - 1].derivativeX(w, x)
+ alpha * beta * self.wxInterpolators[y_pos][z_pos].derivativeX(w, x)
)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdw = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
dfdw[c] = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[i - 1][j - 1].derivativeX(w[c], x[c])
+ (1 - alpha)
* beta
* self.wxInterpolators[i - 1][j].derivativeX(w[c], x[c])
+ alpha
* (1 - beta)
* self.wxInterpolators[i][j - 1].derivativeX(w[c], x[c])
+ alpha
* beta
* self.wxInterpolators[i][j].derivativeX(w[c], x[c])
)
return dfdw
def _derX(self, w, x, y, z):
"""
Returns the derivative with respect to x of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeX.
"""
# This may look strange, as we call the derivativeY() method to get the
# derivative with respect to x, but that's just a quirk of 4D interpolations
# beginning with w rather than x. The derivative wrt the second dimension
# of an element of wxInterpolators is the x-derivative of the main function.
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdx = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[y_pos - 1][z_pos - 1].derivativeY(w, x)
+ (1 - alpha)
* beta
* self.wxInterpolators[y_pos - 1][z_pos].derivativeY(w, x)
+ alpha
* (1 - beta)
* self.wxInterpolators[y_pos][z_pos - 1].derivativeY(w, x)
+ alpha * beta * self.wxInterpolators[y_pos][z_pos].derivativeY(w, x)
)
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdx = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
dfdx[c] = (
(1 - alpha)
* (1 - beta)
* self.wxInterpolators[i - 1][j - 1].derivativeY(w[c], x[c])
+ (1 - alpha)
* beta
* self.wxInterpolators[i - 1][j].derivativeY(w[c], x[c])
+ alpha
* (1 - beta)
* self.wxInterpolators[i][j - 1].derivativeY(w[c], x[c])
+ alpha
* beta
* self.wxInterpolators[i][j].derivativeY(w[c], x[c])
)
return dfdx
def _derY(self, w, x, y, z):
"""
Returns the derivative with respect to y of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeY.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
beta = (z - self.z_list[z_pos - 1]) / (
self.z_list[z_pos] - self.z_list[z_pos - 1]
)
dfdy = (
(
(1 - beta) * self.wxInterpolators[y_pos][z_pos - 1](w, x)
+ beta * self.wxInterpolators[y_pos][z_pos](w, x)
)
- (
(1 - beta) * self.wxInterpolators[y_pos - 1][z_pos - 1](w, x)
+ beta * self.wxInterpolators[y_pos - 1][z_pos](w, x)
)
) / (self.y_list[y_pos] - self.y_list[y_pos - 1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdy = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
beta = (z[c] - self.z_list[j - 1]) / (
self.z_list[j] - self.z_list[j - 1]
)
dfdy[c] = (
(
(1 - beta) * self.wxInterpolators[i][j - 1](w[c], x[c])
+ beta * self.wxInterpolators[i][j](w[c], x[c])
)
- (
(1 - beta)
* self.wxInterpolators[i - 1][j - 1](w[c], x[c])
+ beta * self.wxInterpolators[i - 1][j](w[c], x[c])
)
) / (self.y_list[i] - self.y_list[i - 1])
return dfdy
def _derZ(self, w, x, y, z):
"""
Returns the derivative with respect to z of the interpolated function
at each value in w,x,y,z. Only called internally by HARKinterpolator4D.derivativeZ.
"""
if _isscalar(x):
y_pos = max(min(np.searchsorted(self.y_list, y), self.y_n - 1), 1)
z_pos = max(min(np.searchsorted(self.z_list, z), self.z_n - 1), 1)
alpha = (y - self.y_list[y_pos - 1]) / (
self.y_list[y_pos] - self.y_list[y_pos - 1]
)
dfdz = (
(
(1 - alpha) * self.wxInterpolators[y_pos - 1][z_pos](w, x)
+ alpha * self.wxInterpolators[y_pos][z_pos](w, x)
)
- (
(1 - alpha) * self.wxInterpolators[y_pos - 1][z_pos - 1](w, x)
+ alpha * self.wxInterpolators[y_pos][z_pos - 1](w, x)
)
) / (self.z_list[z_pos] - self.z_list[z_pos - 1])
else:
m = len(x)
y_pos = np.searchsorted(self.y_list, y)
y_pos[y_pos > self.y_n - 1] = self.y_n - 1
y_pos[y_pos < 1] = 1
z_pos = np.searchsorted(self.z_list, z)
z_pos[z_pos > self.z_n - 1] = self.z_n - 1
z_pos[z_pos < 1] = 1
dfdz = np.zeros(m) + np.nan
for i in range(1, self.y_n):
for j in range(1, self.z_n):
c = np.logical_and(i == y_pos, j == z_pos)
if np.any(c):
alpha = (y[c] - self.y_list[i - 1]) / (
self.y_list[i] - self.y_list[i - 1]
)
dfdz[c] = (
(
(1 - alpha) * self.wxInterpolators[i - 1][j](w[c], x[c])
+ alpha * self.wxInterpolators[i][j](w[c], x[c])
)
- (
(1 - alpha)
* self.wxInterpolators[i - 1][j - 1](w[c], x[c])
+ alpha * self.wxInterpolators[i][j - 1](w[c], x[c])
)
) / (self.z_list[j] - self.z_list[j - 1])
return dfdz
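# Illustrative sketch (added, not part of the original module): a minimal use
# of BilinearInterpOnInterp2D. One BilinearInterp layer over (w,x) is built for
# each (y,z) node of a coarse exogenous grid; the grids, the test function
# f(w,x,y,z) = w + 2x + y*z, and the evaluation point are invented for the
# example only.
def _example_bilinear_interp_on_interp2d():
    w_grid = np.linspace(0.0, 1.0, 6)
    x_grid = np.linspace(0.0, 1.0, 6)
    y_grid = np.linspace(0.0, 1.0, 4)
    z_grid = np.linspace(0.0, 1.0, 4)
    w_mesh, x_mesh = np.meshgrid(w_grid, x_grid, indexing="ij")
    # layers[i][j] approximates f(w, x, y_grid[i], z_grid[j])
    layers = [
        [
            BilinearInterp(w_mesh + 2.0 * x_mesh + y0 * z0, w_grid, x_grid)
            for z0 in z_grid
        ]
        for y0 in y_grid
    ]
    g = BilinearInterpOnInterp2D(layers, y_grid, z_grid)
    pt = np.array([0.4]), np.array([0.6]), np.array([0.5]), np.array([0.5])
    return g(*pt), g.derivativeW(*pt)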
class Curvilinear2DInterp(HARKinterpolator2D):
"""
A 2D interpolation method for curvilinear or "warped grid" interpolation, as
in White (2015). Used for models with two endogenous states that are solved
with the endogenous grid method.
Parameters
----------
f_values: numpy.array
A 2D array of function values such that f_values[i,j] =
f(x_values[i,j],y_values[i,j]).
x_values: numpy.array
A 2D array of x values of the same size as f_values.
y_values: numpy.array
A 2D array of y values of the same size as f_values.
"""
distance_criteria = ["f_values", "x_values", "y_values"]
def __init__(self, f_values, x_values, y_values):
self.f_values = f_values
self.x_values = x_values
self.y_values = y_values
my_shape = f_values.shape
self.x_n = my_shape[0]
self.y_n = my_shape[1]
self.update_polarity()
def update_polarity(self):
"""
Fills in the polarity attribute of the interpolation, determining whether
the "plus" (True) or "minus" (False) solution of the system of equations
should be used for each sector. Needs to be called in __init__.
Parameters
----------
none
Returns
-------
none
"""
# Grab a point known to be inside each sector: the midway point between
# the lower left and upper right vertex of each sector
x_temp = 0.5 * (
self.x_values[0: (self.x_n - 1), 0: (self.y_n - 1)]
+ self.x_values[1: self.x_n, 1: self.y_n]
)
y_temp = 0.5 * (
self.y_values[0: (self.x_n - 1), 0: (self.y_n - 1)]
+ self.y_values[1: self.x_n, 1: self.y_n]
)
size = (self.x_n - 1) * (self.y_n - 1)
x_temp = np.reshape(x_temp, size)
y_temp = np.reshape(y_temp, size)
y_pos = np.tile(np.arange(0, self.y_n - 1), self.x_n - 1)
x_pos = np.reshape(
np.tile(np.arange(0, self.x_n - 1), (self.y_n - 1, 1)).transpose(), size
)
# Set the polarity of all sectors to "plus", then test each sector
self.polarity = np.ones((self.x_n - 1, self.y_n - 1), dtype=bool)
alpha, beta = self.find_coords(x_temp, y_temp, x_pos, y_pos)
polarity = np.logical_and(
np.logical_and(alpha > 0, alpha < 1), np.logical_and(beta > 0, beta < 1)
)
# Update polarity: if (alpha,beta) not in the unit square, then that
# sector must use the "minus" solution instead
self.polarity = np.reshape(polarity, (self.x_n - 1, self.y_n - 1))
def find_sector(self, x, y):
"""
Finds the quadrilateral "sector" for each (x,y) point in the input.
Only called as a subroutine of _evaluate().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
Returns
-------
x_pos : np.array
Sector x-coordinates for each point of the input, of the same size.
y_pos : np.array
Sector y-coordinates for each point of the input, of the same size.
"""
# Initialize the sector guess
m = x.size
x_pos_guess = (np.ones(m) * self.x_n / 2).astype(int)
y_pos_guess = (np.ones(m) * self.y_n / 2).astype(int)
# Define a function that checks whether a set of points violates a linear
# boundary defined by (x_bound_1,y_bound_1) and (x_bound_2,y_bound_2),
# where the latter is *COUNTER CLOCKWISE* from the former. Returns
# 1 if the point is outside the boundary and 0 otherwise.
violation_check = (
lambda x_check, y_check, x_bound_1, y_bound_1, x_bound_2, y_bound_2: (
(y_bound_2 - y_bound_1) * x_check - (x_bound_2 - x_bound_1) * y_check
> x_bound_1 * y_bound_2 - y_bound_1 * x_bound_2
)
+ 0
)
# Identify the correct sector for each point to be evaluated
these = np.ones(m, dtype=bool)
max_loops = self.x_n + self.y_n
loops = 0
while np.any(these) and loops < max_loops:
# Get coordinates for the four vertices: (xA,yA),...,(xD,yD)
x_temp = x[these]
y_temp = y[these]
xA = self.x_values[x_pos_guess[these], y_pos_guess[these]]
xB = self.x_values[x_pos_guess[these] + 1, y_pos_guess[these]]
xC = self.x_values[x_pos_guess[these], y_pos_guess[these] + 1]
xD = self.x_values[x_pos_guess[these] + 1, y_pos_guess[these] + 1]
yA = self.y_values[x_pos_guess[these], y_pos_guess[these]]
yB = self.y_values[x_pos_guess[these] + 1, y_pos_guess[these]]
yC = self.y_values[x_pos_guess[these], y_pos_guess[these] + 1]
yD = self.y_values[x_pos_guess[these] + 1, y_pos_guess[these] + 1]
# Check the "bounding box" for the sector: is this guess plausible?
move_down = (y_temp < np.minimum(yA, yB)) + 0
move_right = (x_temp > np.maximum(xB, xD)) + 0
move_up = (y_temp > np.maximum(yC, yD)) + 0
move_left = (x_temp < np.minimum(xA, xC)) + 0
# Check which boundaries are violated (and thus where to look next)
c = (move_down + move_right + move_up + move_left) == 0
move_down[c] = violation_check(
x_temp[c], y_temp[c], xA[c], yA[c], xB[c], yB[c]
)
move_right[c] = violation_check(
x_temp[c], y_temp[c], xB[c], yB[c], xD[c], yD[c]
)
move_up[c] = violation_check(
x_temp[c], y_temp[c], xD[c], yD[c], xC[c], yC[c]
)
move_left[c] = violation_check(
x_temp[c], y_temp[c], xC[c], yC[c], xA[c], yA[c]
)
# Update the sector guess based on the violations
x_pos_next = x_pos_guess[these] - move_left + move_right
x_pos_next[x_pos_next < 0] = 0
x_pos_next[x_pos_next > (self.x_n - 2)] = self.x_n - 2
y_pos_next = y_pos_guess[these] - move_down + move_up
y_pos_next[y_pos_next < 0] = 0
y_pos_next[y_pos_next > (self.y_n - 2)] = self.y_n - 2
# Check which sectors have not changed, and mark them as complete
no_move = np.array(
np.logical_and(
x_pos_guess[these] == x_pos_next, y_pos_guess[these] == y_pos_next
)
)
x_pos_guess[these] = x_pos_next
y_pos_guess[these] = y_pos_next
temp = these.nonzero()
these[temp[0][no_move]] = False
# Move to the next iteration of the search
loops += 1
# Return the output
x_pos = x_pos_guess
y_pos = y_pos_guess
return x_pos, y_pos
def find_coords(self, x, y, x_pos, y_pos):
"""
Calculates the relative coordinates (alpha,beta) for each point (x,y),
given the sectors (x_pos,y_pos) in which they reside. Only called as
a subroutine of __call__().
Parameters
----------
x : np.array
Values whose sector should be found.
y : np.array
Values whose sector should be found. Should be same size as x.
x_pos : np.array
Sector x-coordinates for each point in (x,y), of the same size.
y_pos : np.array
Sector y-coordinates for each point in (x,y), of the same size.
Returns
-------
alpha : np.array
Relative "horizontal" position of the input in their respective sectors.
beta : np.array
Relative "vertical" position of the input in their respective sectors.
"""
# Calculate relative coordinates in the sector for each point
xA = self.x_values[x_pos, y_pos]
xB = self.x_values[x_pos + 1, y_pos]
xC = self.x_values[x_pos, y_pos + 1]
xD = self.x_values[x_pos + 1, y_pos + 1]
yA = self.y_values[x_pos, y_pos]
yB = self.y_values[x_pos + 1, y_pos]
yC = self.y_values[x_pos, y_pos + 1]
yD = self.y_values[x_pos + 1, y_pos + 1]
polarity = 2.0 * self.polarity[x_pos, y_pos] - 1.0
a = xA
b = xB - xA
c = xC - xA
d = xA - xB - xC + xD
e = yA
f = yB - yA
g = yC - yA
h = yA - yB - yC + yD
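# Derivation note (added comment): within a sector the bilinear map is
# x = a + b*alpha + c*beta + d*alpha*beta and
# y = e + f*alpha + g*beta + h*alpha*beta. Eliminating the alpha*beta terms
# gives beta = mu*alpha + tau with denom = d*g - h*c; substituting that back
# into the x equation yields the quadratic theta*alpha**2 + eta*alpha + zeta = 0
# solved below, with the sector's polarity selecting which root lies in the
# unit square.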
denom = d * g - h * c
mu = (h * b - d * f) / denom
tau = (h * (a - x) - d * (e - y)) / denom
zeta = a - x + c * tau
eta = b + c * mu + d * tau
theta = d * mu
alpha = (-eta + polarity * np.sqrt(eta ** 2.0 - 4.0 * zeta * theta)) / (
2.0 * theta
)
beta = mu * alpha + tau
# Alternate method if there are sectors that are "too regular"
z = np.logical_or(
np.isnan(alpha), np.isnan(beta)
) # These points weren't able to identify coordinates
if np.any(z):
these = np.isclose(
f / b, (yD - yC) / (xD - xC)
) # iso-beta lines have equal slope
if np.any(these):
kappa = f[these] / b[these]
int_bot = yA[these] - kappa * xA[these]
int_top = yC[these] - kappa * xC[these]
int_these = y[these] - kappa * x[these]
beta_temp = (int_these - int_bot) / (int_top - int_bot)
x_left = beta_temp * xC[these] + (1.0 - beta_temp) * xA[these]
x_right = beta_temp * xD[these] + (1.0 - beta_temp) * xB[these]
alpha_temp = (x[these] - x_left) / (x_right - x_left)
beta[these] = beta_temp
alpha[these] = alpha_temp
# print(np.sum(np.isclose(g/c,(yD-yB)/(xD-xB))))
return alpha, beta
def _evaluate(self, x, y):
"""
Returns the level of the interpolated function at each value in x,y.
Only called internally by HARKinterpolator2D.__call__ (etc).
"""
x_pos, y_pos = self.find_sector(x, y)
alpha, beta = self.find_coords(x, y, x_pos, y_pos)
# Calculate the function at each point using bilinear interpolation
f = (
(1 - alpha) * (1 - beta) * self.f_values[x_pos, y_pos]
+ (1 - alpha) * beta * self.f_values[x_pos, y_pos + 1]
+ alpha * (1 - beta) * self.f_values[x_pos + 1, y_pos]
+ alpha * beta * self.f_values[x_pos + 1, y_pos + 1]
)
return f
def _derX(self, x, y):
"""
Returns the derivative with respect to x of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeX.
"""
x_pos, y_pos = self.find_sector(x, y)
alpha, beta = self.find_coords(x, y, x_pos, y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos, y_pos]
xB = self.x_values[x_pos + 1, y_pos]
xC = self.x_values[x_pos, y_pos + 1]
xD = self.x_values[x_pos + 1, y_pos + 1]
yA = self.y_values[x_pos, y_pos]
yB = self.y_values[x_pos + 1, y_pos]
yC = self.y_values[x_pos, y_pos + 1]
yD = self.y_values[x_pos + 1, y_pos + 1]
fA = self.f_values[x_pos, y_pos]
fB = self.f_values[x_pos + 1, y_pos]
fC = self.f_values[x_pos, y_pos + 1]
fD = self.f_values[x_pos + 1, y_pos + 1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1 - beta) * (xB - xA) + beta * (xD - xC)
alpha_y = (1 - beta) * (yB - yA) + beta * (yD - yC)
beta_x = (1 - alpha) * (xC - xA) + alpha * (xD - xB)
beta_y = (1 - alpha) * (yC - yA) + alpha * (yD - yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x * beta_y - beta_x * alpha_y
x_alpha = beta_y / det
x_beta = -alpha_y / det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1 - beta) * (fB - fA) + beta * (fD - fC)
dfdb = (1 - alpha) * (fC - fA) + alpha * (fD - fB)
# Calculate the derivative with respect to x (and return it)
dfdx = x_alpha * dfda + x_beta * dfdb
return dfdx
def _derY(self, x, y):
"""
Returns the derivative with respect to y of the interpolated function
at each value in x,y. Only called internally by HARKinterpolator2D.derivativeY.
"""
x_pos, y_pos = self.find_sector(x, y)
alpha, beta = self.find_coords(x, y, x_pos, y_pos)
# Get four corners data for each point
xA = self.x_values[x_pos, y_pos]
xB = self.x_values[x_pos + 1, y_pos]
xC = self.x_values[x_pos, y_pos + 1]
xD = self.x_values[x_pos + 1, y_pos + 1]
yA = self.y_values[x_pos, y_pos]
yB = self.y_values[x_pos + 1, y_pos]
yC = self.y_values[x_pos, y_pos + 1]
yD = self.y_values[x_pos + 1, y_pos + 1]
fA = self.f_values[x_pos, y_pos]
fB = self.f_values[x_pos + 1, y_pos]
fC = self.f_values[x_pos, y_pos + 1]
fD = self.f_values[x_pos + 1, y_pos + 1]
# Calculate components of the alpha,beta --> x,y delta translation matrix
alpha_x = (1 - beta) * (xB - xA) + beta * (xD - xC)
alpha_y = (1 - beta) * (yB - yA) + beta * (yD - yC)
beta_x = (1 - alpha) * (xC - xA) + alpha * (xD - xB)
beta_y = (1 - alpha) * (yC - yA) + alpha * (yD - yB)
# Invert the delta translation matrix into x,y --> alpha,beta
det = alpha_x * beta_y - beta_x * alpha_y
y_alpha = -beta_x / det
y_beta = alpha_x / det
# Calculate the derivative of f w.r.t. alpha and beta
dfda = (1 - beta) * (fB - fA) + beta * (fD - fC)
dfdb = (1 - alpha) * (fC - fA) + alpha * (fD - fB)
# Calculate the derivative with respect to y (and return it)
dfdy = y_alpha * dfda + y_beta * dfdb
return dfdy
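# Illustrative sketch (added, not part of the original module): a small
# Curvilinear2DInterp example in the spirit of the demonstrations in main()
# below. The warp factor, grids, and test function are invented for the
# example only.
def _example_curvilinear_2d_interp():
    rng = np.random.RandomState(0)
    x_grid = np.linspace(0.0, 5.0, 21)
    y_grid = np.linspace(0.0, 5.0, 21)
    x_mesh, y_mesh = np.meshgrid(x_grid, y_grid, indexing="ij")
    # Perturb the grid slightly so it is genuinely curvilinear
    x_warp = x_mesh + 0.01 * (rng.rand(*x_mesh.shape) - 0.5)
    y_warp = y_mesh + 0.01 * (rng.rand(*y_mesh.shape) - 0.5)
    f_vals = 3.0 * x_warp ** 2.0 + x_warp * y_warp + 4.0 * y_warp ** 2.0
    g = Curvilinear2DInterp(f_vals, x_warp, y_warp)
    pts_x = np.array([1.0, 2.5, 4.0])
    pts_y = np.array([0.5, 3.0, 4.5])
    return g(pts_x, pts_y), g.derivativeX(pts_x, pts_y), g.derivativeY(pts_x, pts_y)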
class DiscreteInterp(MetricObject):
"""
An interpolator for variables that can only take a discrete set of values.
If the function we wish to interpolate, f(args), can take on the list of
values discrete_vals, this class expects an interpolator for the index of
f's value in discrete_vals.
E.g., if f(a,b,c) = discrete_vals[5], then index_interp(a,b,c) = 5.
Parameters
----------
index_interp: HARKInterpolator
An interpolator giving an approximation to the index of the value in
discrete_vals that corresponds to a given set of arguments.
discrete_vals: numpy.array
A 1D array containing the values in the range of the discrete function
to be interpolated.
"""
distance_criteria = ["index_interp"]
def __init__(self, index_interp, discrete_vals):
self.index_interp = index_interp
self.discrete_vals = discrete_vals
self.n_vals = len(self.discrete_vals)
def __call__(self, *args):
# Interpolate indices and round to integers
inds = np.rint(self.index_interp(*args)).astype(int)
if type(inds) is not np.ndarray:
inds = np.array(inds)
# Deal with out-of range indices
inds[inds < 0] = 0
inds[inds >= self.n_vals] = self.n_vals - 1
# Get values from grid
return self.discrete_vals[inds]
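# Illustrative sketch (added, not part of the original module): DiscreteInterp
# wrapping a 1D LinearInterp (defined earlier in this module) that maps a
# state to an approximate index; the wrapper rounds the index, clips it into
# range, and returns the corresponding discrete value. Grids and values are
# invented for the example only.
def _example_discrete_interp():
    states = np.array([0.0, 1.0, 2.0, 3.0])
    indices = np.array([0.0, 0.0, 1.0, 2.0])
    index_interp = LinearInterp(states, indices)
    discrete_vals = np.array([10.0, 20.0, 30.0])
    f = DiscreteInterp(index_interp, discrete_vals)
    # The last query extrapolates past the grid and is clipped to the top value
    return f(np.array([0.2, 1.7, 2.9, 5.0]))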
###############################################################################
## Functions used in discrete choice models with T1EV taste shocks ############
###############################################################################
def calc_log_sum_choice_probs(Vals, sigma):
"""
Returns the final optimal value and choice probabilities given the choice
specific value functions `Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
V : [numpy.array]
A numpy.array that holds the integrated value function.
P : [numpy.array]
A numpy.array that holds the discrete choice probabilities
"""
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
Pflat = np.argmax(Vals, axis=0)
V = np.zeros(Vals[0].shape)
Probs = np.zeros(Vals.shape)
for i in range(Vals.shape[0]):
optimalIndices = Pflat == i
V[optimalIndices] = Vals[i][optimalIndices]
Probs[i][optimalIndices] = 1
return V, Probs
# else we have a taste shock
maxV = np.max(Vals, axis=0)
# calculate maxV + sigma*log(sum_{i=1}^J exp((V[i]-maxV)/sigma))
sumexp = np.sum(np.exp((Vals - maxV) / sigma), axis=0)
LogSumV = np.log(sumexp)
LogSumV = maxV + sigma * LogSumV
Probs = np.exp((Vals - LogSumV) / sigma)
return LogSumV, Probs
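# Illustrative sketch (added, not part of the original module): a tiny check of
# calc_log_sum_choice_probs with two alternatives on a common grid. With a
# positive taste-shock scale the probabilities are smooth in the values; with
# sigma == 0.0 they collapse onto the argmax. The numbers are invented.
def _example_calc_log_sum_choice_probs():
    Vals = np.array([[1.0, 2.0, 3.0], [1.5, 1.5, 1.5]])
    V_smooth, P_smooth = calc_log_sum_choice_probs(Vals, sigma=0.1)
    V_hard, P_hard = calc_log_sum_choice_probs(Vals, sigma=0.0)
    # Choice probabilities sum to one across alternatives in both cases
    assert np.allclose(P_smooth.sum(axis=0), 1.0)
    assert np.allclose(P_hard.sum(axis=0), 1.0)
    return V_smooth, V_hard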
def calc_choice_probs(Vals, sigma):
"""
Returns the choice probabilities given the choice specific value functions
`Vals`. Probabilities are degenerate if sigma == 0.0.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
Probs : [numpy.array]
A numpy.array that holds the discrete choice probabilities
"""
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
Pflat = np.argmax(Vals, axis=0)
Probs = np.zeros(Vals.shape)
for i in range(Vals.shape[0]):
Probs[i][Pflat == i] = 1
return Probs
maxV = np.max(Vals, axis=0)
Probs = np.divide(
np.exp((Vals - maxV) / sigma), np.sum(np.exp((Vals - maxV) / sigma), axis=0)
)
return Probs
def calc_log_sum(Vals, sigma):
"""
Returns the optimal value given the choice specific value functions Vals.
Parameters
----------
Vals : [numpy.array]
A numpy.array that holds choice specific values at common grid points.
sigma : float
A number that controls the variance of the taste shocks
Returns
-------
V : [numpy.array]
A numpy.array that holds the integrated value function.
"""
# Assumes that NaNs have been replaced by -numpy.inf or similar
if sigma == 0.0:
# We could construct a linear index here and use unravel_index.
V = np.amax(Vals, axis=0)
return V
# else we have a taste shock
maxV = np.max(Vals, axis=0)
# calculate maxV + sigma*log(sum_{i=1}^J exp((V[i]-maxV)/sigma))
sumexp = np.sum(np.exp((Vals - maxV) / sigma), axis=0)
LogSumV = np.log(sumexp)
LogSumV = maxV + sigma * LogSumV
return LogSumV
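# Consistency note (added comment): calc_log_sum returns the same integrated
# value as the first output of calc_log_sum_choice_probs, and calc_choice_probs
# returns the same probabilities as its second output; the three functions are
# kept separate so callers can compute only the piece they need.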
###############################################################################
# Tools for value and marginal-value functions in models where #
# - dvdm = u'(c). #
# - u is of the CRRA family. #
###############################################################################
class ValueFuncCRRA(MetricObject):
"""
A class for representing a value function. The underlying interpolation is
in the space of (state,u_inv(v)); this class "re-curves" to the value function.
Parameters
----------
vFuncNvrs : function
A real function representing the value function composed with the
inverse utility function, defined on the state: u_inv(vFunc(state))
CRRA : float
Coefficient of relative risk aversion.
"""
distance_criteria = ["func", "CRRA"]
def __init__(self, vFuncNvrs, CRRA):
self.func = deepcopy(vFuncNvrs)
self.CRRA = CRRA
def __call__(self, *vFuncArgs):
"""
Evaluate the value function at given levels of market resources m.
Parameters
----------
vFuncArgs : floats or np.arrays, all of the same dimensions.
Values for the state variables. These usually start with 'm',
market resources normalized by the level of permanent income.
Returns
-------
v : float or np.array
Lifetime value of beginning this period with the given states; has
same size as the state inputs.
"""
return CRRAutility(self.func(*vFuncArgs), gam=self.CRRA)
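# Illustrative sketch (added, not part of the original module): a ValueFuncCRRA
# built from a LinearInterp over pseudo-inverse-utility values. The grid and
# the 0.5*m schedule are invented; any increasing schedule of u_inv(v) values
# would do for the illustration.
def _example_value_func_crra(CRRA=2.0):
    m_grid = np.array([0.1, 1.0, 2.0, 5.0, 10.0])
    vNvrs_grid = 0.5 * m_grid  # made-up u_inv(v(m)) values
    vFunc = ValueFuncCRRA(LinearInterp(m_grid, vNvrs_grid), CRRA)
    # Because 0.5*m is linear, this equals CRRAutility(0.5*m, gam=CRRA) exactly
    return vFunc(np.array([1.5, 3.0]))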
class MargValueFuncCRRA(MetricObject):
"""
A class for representing a marginal value function in models where the
standard envelope condition of dvdm(state) = u'(c(state)) holds (with CRRA utility).
Parameters
----------
cFunc : function.
Its first argument must be normalized market resources m.
A real function representing the marginal value function composed
with the inverse marginal utility function, defined on the state
variables: uP_inv(dvdmFunc(state)). Called cFunc because when standard
envelope condition applies, uP_inv(dvdm(state)) = cFunc(state).
CRRA : float
Coefficient of relative risk aversion.
"""
distance_criteria = ["cFunc", "CRRA"]
def __init__(self, cFunc, CRRA):
self.cFunc = deepcopy(cFunc)
self.CRRA = CRRA
def __call__(self, *cFuncArgs):
"""
Evaluate the marginal value function at given levels of market resources m.
Parameters
----------
cFuncArgs : floats or np.arrays
Values of the state variables at which to evaluate the marginal
value function.
Returns
-------
vP : float or np.array
Marginal lifetime value of beginning this period with state
cFuncArgs
"""
return CRRAutilityP(self.cFunc(*cFuncArgs), gam=self.CRRA)
def derivativeX(self, *cFuncArgs):
"""
Evaluate the derivative of the marginal value function with respect to
market resources at given state; this is the marginal marginal value
function.
Parameters
----------
cFuncArgs : floats or np.arrays
State variables.
Returns
-------
vPP : float or np.array
Marginal marginal lifetime value of beginning this period with
state cFuncArgs; has same size as inputs.
"""
# The derivative method depends on the dimension of the function
if isinstance(self.cFunc, (HARKinterpolator1D)):
c, MPC = self.cFunc.eval_with_derivative(*cFuncArgs)
elif hasattr(self.cFunc, 'derivativeX'):
c = self.cFunc(*cFuncArgs)
MPC = self.cFunc.derivativeX(*cFuncArgs)
else:
raise Exception(
"cFunc does not have a 'derivativeX' attribute. Can't compute"
+ "marginal marginal value."
)
return MPC * CRRAutilityPP(c, gam=self.CRRA)
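# Illustrative sketch (added, not part of the original module): MargValueFuncCRRA
# built from a made-up linear consumption rule, so that dvdm(m) = u'(c(m)) under
# CRRA utility and derivativeX returns MPC * u''(c(m)).
def _example_marg_value_func_crra(CRRA=2.0):
    m_grid = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
    cFunc = LinearInterp(m_grid, 0.8 * m_grid)  # invented consumption rule
    vPfunc = MargValueFuncCRRA(cFunc, CRRA)
    m_eval = np.array([0.5, 2.5])
    return vPfunc(m_eval), vPfunc.derivativeX(m_eval)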
class MargMargValueFuncCRRA(MetricObject):
"""
A class for representing a marginal marginal value function in models where
the standard envelope condition of dvdm = u'(c(state)) holds (with CRRA utility).
Parameters
----------
cFunc : function.
Its first argument must be normalized market resources m.
A real function representing the marginal value function composed
with the inverse marginal utility function, defined on the state
variables: uP_inv(dvdmFunc(state)). Called cFunc because when standard
envelope condition applies, uP_inv(dvdm(state)) = cFunc(state).
CRRA : float
Coefficient of relative risk aversion.
"""
distance_criteria = ["cFunc", "CRRA"]
def __init__(self, cFunc, CRRA):
self.cFunc = deepcopy(cFunc)
self.CRRA = CRRA
def __call__(self, *cFuncArgs):
"""
Evaluate the marginal marginal value function at given levels of market
resources m.
Parameters
----------
cFuncArgs : floats or np.arrays
Values of the state variables (usually starting with market resources m,
normalized by permanent income) whose marginal marginal value is to be found.
Returns
-------
vPP : float or np.array
Marginal marginal lifetime value of beginning this period with market
resources m; has same size as input m.
"""
# The derivative method depends on the dimension of the function
if isinstance(self.cFunc, (HARKinterpolator1D)):
c, MPC = self.cFunc.eval_with_derivative(*cFuncArgs)
elif hasattr(self.cFunc, 'derivativeX'):
c = self.cFunc(*cFuncArgs)
MPC = self.cFunc.derivativeX(*cFuncArgs)
else:
raise Exception(
"cFunc does not have a 'derivativeX' attribute. Can't compute"
+ "marginal marginal value."
)
return MPC * CRRAutilityPP(c, gam=self.CRRA)
##############################################################################
# Examples and tests
##############################################################################
def main():
print("Sorry, HARK.interpolation doesn't actually do much on its own.")
print("To see some examples of its interpolation methods in action, look at any")
print("of the model modules in /ConsumptionSavingModel. In the future, running")
print("this module will show examples of each interpolation class.")
from time import time
import matplotlib.pyplot as plt
RNG = np.random.RandomState(123)
if False:
x = np.linspace(1, 20, 39)
y = np.log(x)
dydx = 1.0 / x
f = CubicInterp(x, y, dydx)
x_test = np.linspace(0, 30, 200)
y_test = f(x_test)
plt.plot(x_test, y_test)
plt.show()
if False:
def f(x, y): return 3.0 * x ** 2.0 + x * y + 4.0 * y ** 2.0
def dfdx(x, y): return 6.0 * x + y
def dfdy(x, y): return x + 8.0 * y
y_list = np.linspace(0, 5, 100, dtype=float)
xInterpolators = []
xInterpolators_alt = []
for y in y_list:
this_x_list = np.sort((RNG.rand(100) * 5.0))
this_interpolation = LinearInterp(
this_x_list, f(this_x_list, y * np.ones(this_x_list.size))
)
that_interpolation = CubicInterp(
this_x_list,
f(this_x_list, y * np.ones(this_x_list.size)),
dfdx(this_x_list, y * np.ones(this_x_list.size)),
)
xInterpolators.append(this_interpolation)
xInterpolators_alt.append(that_interpolation)
g = LinearInterpOnInterp1D(xInterpolators, y_list)
h = LinearInterpOnInterp1D(xInterpolators_alt, y_list)
rand_x = RNG.rand(100) * 5.0
rand_y = RNG.rand(100) * 5.0
z = (f(rand_x, rand_y) - g(rand_x, rand_y)) / f(rand_x, rand_y)
q = (dfdx(rand_x, rand_y) - g.derivativeX(rand_x, rand_y)) / dfdx(
rand_x, rand_y
)
r = (dfdy(rand_x, rand_y) - g.derivativeY(rand_x, rand_y)) / dfdy(
rand_x, rand_y
)
# print(z)
# print(q)
# print(r)
z = (f(rand_x, rand_y) - g(rand_x, rand_y)) / f(rand_x, rand_y)
q = (dfdx(rand_x, rand_y) - g.derivativeX(rand_x, rand_y)) / dfdx(
rand_x, rand_y
)
r = (dfdy(rand_x, rand_y) - g.derivativeY(rand_x, rand_y)) / dfdy(
rand_x, rand_y
)
print(z)
# print(q)
# print(r)
if False:
f = (
lambda x, y, z: 3.0 * x ** 2.0
+ x * y
+ 4.0 * y ** 2.0
- 5 * z ** 2.0
+ 1.5 * x * z
)
def dfdx(x, y, z): return 6.0 * x + y + 1.5 * z
def dfdy(x, y, z): return x + 8.0 * y
def dfdz(x, y, z): return -10.0 * z + 1.5 * x
y_list = np.linspace(0, 5, 51, dtype=float)
z_list = np.linspace(0, 5, 51, dtype=float)
xInterpolators = []
for y in y_list:
temp = []
for z in z_list:
this_x_list = np.sort((RNG.rand(100) * 5.0))
this_interpolation = LinearInterp(
this_x_list,
f(
this_x_list,
y * np.ones(this_x_list.size),
z * np.ones(this_x_list.size),
),
)
temp.append(this_interpolation)
xInterpolators.append(deepcopy(temp))
g = BilinearInterpOnInterp1D(xInterpolators, y_list, z_list)
rand_x = RNG.rand(1000) * 5.0
rand_y = RNG.rand(1000) * 5.0
rand_z = RNG.rand(1000) * 5.0
z = (f(rand_x, rand_y, rand_z) - g(rand_x, rand_y, rand_z)) / f(
rand_x, rand_y, rand_z
)
q = (
dfdx(rand_x, rand_y, rand_z) - g.derivativeX(rand_x, rand_y, rand_z)
) / dfdx(rand_x, rand_y, rand_z)
r = (
dfdy(rand_x, rand_y, rand_z) - g.derivativeY(rand_x, rand_y, rand_z)
) / dfdy(rand_x, rand_y, rand_z)
p = (
dfdz(rand_x, rand_y, rand_z) - g.derivativeZ(rand_x, rand_y, rand_z)
) / dfdz(rand_x, rand_y, rand_z)
z.sort()
if False:
f = (
lambda w, x, y, z: 4.0 * w * z
- 2.5 * w * x
+ w * y
+ 6.0 * x * y
- 10.0 * x * z
+ 3.0 * y * z
- 7.0 * z
+ 4.0 * x
+ 2.0 * y
- 5.0 * w
)
def dfdw(w, x, y, z): return 4.0 * z - 2.5 * x + y - 5.0
def dfdx(w, x, y, z): return -2.5 * w + 6.0 * y - 10.0 * z + 4.0
def dfdy(w, x, y, z): return w + 6.0 * x + 3.0 * z + 2.0
def dfdz(w, x, y, z): return 4.0 * w - 10.0 * x + 3.0 * y - 7
x_list = np.linspace(0, 5, 16, dtype=float)
y_list = np.linspace(0, 5, 16, dtype=float)
z_list = np.linspace(0, 5, 16, dtype=float)
wInterpolators = []
for x in x_list:
temp = []
for y in y_list:
temptemp = []
for z in z_list:
this_w_list = np.sort((RNG.rand(16) * 5.0))
this_interpolation = LinearInterp(
this_w_list,
f(
this_w_list,
x * np.ones(this_w_list.size),
y * np.ones(this_w_list.size),
z * np.ones(this_w_list.size),
),
)
temptemp.append(this_interpolation)
temp.append(deepcopy(temptemp))
wInterpolators.append(deepcopy(temp))
g = TrilinearInterpOnInterp1D(wInterpolators, x_list, y_list, z_list)
N = 20000
rand_w = RNG.rand(N) * 5.0
rand_x = RNG.rand(N) * 5.0
rand_y = RNG.rand(N) * 5.0
rand_z = RNG.rand(N) * 5.0
t_start = time()
z = (f(rand_w, rand_x, rand_y, rand_z) - g(rand_w, rand_x, rand_y, rand_z)) / f(
rand_w, rand_x, rand_y, rand_z
)
q = (
dfdw(rand_w, rand_x, rand_y, rand_z)
- g.derivativeW(rand_w, rand_x, rand_y, rand_z)
) / dfdw(rand_w, rand_x, rand_y, rand_z)
r = (
dfdx(rand_w, rand_x, rand_y, rand_z)
- g.derivativeX(rand_w, rand_x, rand_y, rand_z)
) / dfdx(rand_w, rand_x, rand_y, rand_z)
p = (
dfdy(rand_w, rand_x, rand_y, rand_z)
- g.derivativeY(rand_w, rand_x, rand_y, rand_z)
) / dfdy(rand_w, rand_x, rand_y, rand_z)
s = (
dfdz(rand_w, rand_x, rand_y, rand_z)
- g.derivativeZ(rand_w, rand_x, rand_y, rand_z)
) / dfdz(rand_w, rand_x, rand_y, rand_z)
t_end = time()
z.sort()
print(z)
print(t_end - t_start)
if False:
def f(x, y): return 3.0 * x ** 2.0 + x * y + 4.0 * y ** 2.0
def dfdx(x, y): return 6.0 * x + y
def dfdy(x, y): return x + 8.0 * y
x_list = np.linspace(0, 5, 101, dtype=float)
y_list = np.linspace(0, 5, 101, dtype=float)
x_temp, y_temp = np.meshgrid(x_list, y_list, indexing="ij")
g = BilinearInterp(f(x_temp, y_temp), x_list, y_list)
rand_x = RNG.rand(100) * 5.0
rand_y = RNG.rand(100) * 5.0
z = (f(rand_x, rand_y) - g(rand_x, rand_y)) / f(rand_x, rand_y)
q = (f(x_temp, y_temp) - g(x_temp, y_temp)) / f(x_temp, y_temp)
# print(z)
# print(q)
if False:
f = (
lambda x, y, z: 3.0 * x ** 2.0
+ x * y
+ 4.0 * y ** 2.0
- 5 * z ** 2.0
+ 1.5 * x * z
)
def dfdx(x, y, z): return 6.0 * x + y + 1.5 * z
def dfdy(x, y, z): return x + 8.0 * y
def dfdz(x, y, z): return -10.0 * z + 1.5 * x
x_list = np.linspace(0, 5, 11, dtype=float)
y_list = np.linspace(0, 5, 11, dtype=float)
z_list = np.linspace(0, 5, 101, dtype=float)
x_temp, y_temp, z_temp = np.meshgrid(x_list, y_list, z_list, indexing="ij")
g = TrilinearInterp(f(x_temp, y_temp, z_temp), x_list, y_list, z_list)
rand_x = RNG.rand(1000) * 5.0
rand_y = RNG.rand(1000) * 5.0
rand_z = RNG.rand(1000) * 5.0
z = (f(rand_x, rand_y, rand_z) - g(rand_x, rand_y, rand_z)) / f(
rand_x, rand_y, rand_z
)
q = (
dfdx(rand_x, rand_y, rand_z) - g.derivativeX(rand_x, rand_y, rand_z)
) / dfdx(rand_x, rand_y, rand_z)
r = (
dfdy(rand_x, rand_y, rand_z) - g.derivativeY(rand_x, rand_y, rand_z)
) / dfdy(rand_x, rand_y, rand_z)
p = (
dfdz(rand_x, rand_y, rand_z) - g.derivativeZ(rand_x, rand_y, rand_z)
) / dfdz(rand_x, rand_y, rand_z)
p.sort()
plt.plot(p)
if False:
f = (
lambda w, x, y, z: 4.0 * w * z
- 2.5 * w * x
+ w * y
+ 6.0 * x * y
- 10.0 * x * z
+ 3.0 * y * z
- 7.0 * z
+ 4.0 * x
+ 2.0 * y
- 5.0 * w
)
def dfdw(w, x, y, z): return 4.0 * z - 2.5 * x + y - 5.0
def dfdx(w, x, y, z): return -2.5 * w + 6.0 * y - 10.0 * z + 4.0
def dfdy(w, x, y, z): return w + 6.0 * x + 3.0 * z + 2.0
def dfdz(w, x, y, z): return 4.0 * w - 10.0 * x + 3.0 * y - 7
w_list = np.linspace(0, 5, 16, dtype=float)
x_list = np.linspace(0, 5, 16, dtype=float)
y_list = np.linspace(0, 5, 16, dtype=float)
z_list = np.linspace(0, 5, 16, dtype=float)
w_temp, x_temp, y_temp, z_temp = np.meshgrid(
w_list, x_list, y_list, z_list, indexing="ij"
)
def mySearch(trash, x): return np.floor(x / 5 * 32).astype(int)
g = QuadlinearInterp(
f(w_temp, x_temp, y_temp, z_temp), w_list, x_list, y_list, z_list
)
N = 1000000
rand_w = RNG.rand(N) * 5.0
rand_x = RNG.rand(N) * 5.0
rand_y = RNG.rand(N) * 5.0
rand_z = RNG.rand(N) * 5.0
t_start = time()
z = (f(rand_w, rand_x, rand_y, rand_z) - g(rand_w, rand_x, rand_y, rand_z)) / f(
rand_w, rand_x, rand_y, rand_z
)
t_end = time()
# print(z)
print(t_end - t_start)
if False:
def f(x, y): return 3.0 * x ** 2.0 + x * y + 4.0 * y ** 2.0
def dfdx(x, y): return 6.0 * x + y
def dfdy(x, y): return x + 8.0 * y
warp_factor = 0.01
x_list = np.linspace(0, 5, 71, dtype=float)
y_list = np.linspace(0, 5, 51, dtype=float)
x_temp, y_temp = np.meshgrid(x_list, y_list, indexing="ij")
x_adj = x_temp + warp_factor * (RNG.rand(x_list.size, y_list.size) - 0.5)
y_adj = y_temp + warp_factor * (RNG.rand(x_list.size, y_list.size) - 0.5)
g = Curvilinear2DInterp(f(x_adj, y_adj), x_adj, y_adj)
rand_x = RNG.rand(1000) * 5.0
rand_y = RNG.rand(1000) * 5.0
t_start = time()
z = (f(rand_x, rand_y) - g(rand_x, rand_y)) / f(rand_x, rand_y)
q = (dfdx(rand_x, rand_y) - g.derivativeX(rand_x, rand_y)) / dfdx(
rand_x, rand_y
)
r = (dfdy(rand_x, rand_y) - g.derivativeY(rand_x, rand_y)) / dfdy(
rand_x, rand_y
)
t_end = time()
z.sort()
q.sort()
r.sort()
# print(z)
print(t_end - t_start)
if False:
f = (
lambda x, y, z: 3.0 * x ** 2.0
+ x * y
+ 4.0 * y ** 2.0
- 5 * z ** 2.0
+ 1.5 * x * z
)
def dfdx(x, y, z): return 6.0 * x + y + 1.5 * z
def dfdy(x, y, z): return x + 8.0 * y
def dfdz(x, y, z): return -10.0 * z + 1.5 * x
warp_factor = 0.01
x_list = np.linspace(0, 5, 11, dtype=float)
y_list = np.linspace(0, 5, 11, dtype=float)
z_list = np.linspace(0, 5, 101, dtype=float)
x_temp, y_temp = np.meshgrid(x_list, y_list, indexing="ij")
xyInterpolators = []
for j in range(z_list.size):
x_adj = x_temp + warp_factor * (RNG.rand(x_list.size, y_list.size) - 0.5)
y_adj = y_temp + warp_factor * (RNG.rand(x_list.size, y_list.size) - 0.5)
z_temp = z_list[j] * np.ones(x_adj.shape)
thisInterp = Curvilinear2DInterp(f(x_adj, y_adj, z_temp), x_adj, y_adj)
xyInterpolators.append(thisInterp)
g = LinearInterpOnInterp2D(xyInterpolators, z_list)
N = 1000
rand_x = RNG.rand(N) * 5.0
rand_y = RNG.rand(N) * 5.0
rand_z = RNG.rand(N) * 5.0
z = (f(rand_x, rand_y, rand_z) - g(rand_x, rand_y, rand_z)) / f(
rand_x, rand_y, rand_z
)
p = (
dfdz(rand_x, rand_y, rand_z) - g.derivativeZ(rand_x, rand_y, rand_z)
) / dfdz(rand_x, rand_y, rand_z)
p.sort()
plt.plot(p)
if False:
f = (
lambda w, x, y, z: 4.0 * w * z
- 2.5 * w * x
+ w * y
+ 6.0 * x * y
- 10.0 * x * z
+ 3.0 * y * z
- 7.0 * z
+ 4.0 * x
+ 2.0 * y
- 5.0 * w
)
def dfdw(w, x, y, z): return 4.0 * z - 2.5 * x + y - 5.0
def dfdx(w, x, y, z): return -2.5 * w + 6.0 * y - 10.0 * z + 4.0
def dfdy(w, x, y, z): return w + 6.0 * x + 3.0 * z + 2.0
def dfdz(w, x, y, z): return 4.0 * w - 10.0 * x + 3.0 * y - 7
warp_factor = 0.1
w_list = np.linspace(0, 5, 16, dtype=float)
x_list = np.linspace(0, 5, 16, dtype=float)
y_list = np.linspace(0, 5, 16, dtype=float)
z_list = np.linspace(0, 5, 16, dtype=float)
w_temp, x_temp = np.meshgrid(w_list, x_list, indexing="ij")
wxInterpolators = []
for i in range(y_list.size):
temp = []
for j in range(z_list.size):
w_adj = w_temp + warp_factor * (
RNG.rand(w_list.size, x_list.size) - 0.5
)
x_adj = x_temp + warp_factor * (
RNG.rand(w_list.size, x_list.size) - 0.5
)
y_temp = y_list[i] * np.ones(w_adj.shape)
z_temp = z_list[j] * np.ones(w_adj.shape)
thisInterp = Curvilinear2DInterp(
f(w_adj, x_adj, y_temp, z_temp), w_adj, x_adj
)
temp.append(thisInterp)
wxInterpolators.append(temp)
g = BilinearInterpOnInterp2D(wxInterpolators, y_list, z_list)
N = 1000000
rand_w = RNG.rand(N) * 5.0
rand_x = RNG.rand(N) * 5.0
rand_y = RNG.rand(N) * 5.0
rand_z = RNG.rand(N) * 5.0
t_start = time()
z = (f(rand_w, rand_x, rand_y, rand_z) - g(rand_w, rand_x, rand_y, rand_z)) / f(
rand_w, rand_x, rand_y, rand_z
)
t_end = time()
z.sort()
print(z)
print(t_end - t_start)
| apache-2.0 |
elijah513/scikit-learn | sklearn/qda.py | 140 | 7682 | """
Quadratic Discriminant Analysis
"""
# Author: Matthieu Perrot <[email protected]>
#
# License: BSD 3 clause
import warnings
import numpy as np
from .base import BaseEstimator, ClassifierMixin
from .externals.six.moves import xrange
from .utils import check_array, check_X_y
from .utils.validation import check_is_fitted
from .utils.fixes import bincount
__all__ = ['QDA']
class QDA(BaseEstimator, ClassifierMixin):
"""
Quadratic Discriminant Analysis (QDA)
A classifier with a quadratic decision boundary, generated
by fitting class conditional densities to the data
and using Bayes' rule.
The model fits a Gaussian density to each class.
Read more in the :ref:`User Guide <lda_qda>`.
Parameters
----------
priors : array, optional, shape = [n_classes]
Priors on classes
reg_param : float, optional
Regularizes the covariance estimate as
``(1-reg_param)*Sigma + reg_param*np.eye(n_features)``
Attributes
----------
covariances_ : list of array-like, shape = [n_features, n_features]
Covariance matrices of each class.
means_ : array-like, shape = [n_classes, n_features]
Class means.
priors_ : array-like, shape = [n_classes]
Class priors (sum to 1).
rotations_ : list of arrays
For each class k an array of shape [n_features, n_k], with
``n_k = min(n_features, number of elements in class k)``
It is the rotation of the Gaussian distribution, i.e. its
principal axis.
scalings_ : list of arrays
For each class k an array of shape [n_k]. It contains the scaling
of the Gaussian distributions along its principal axes, i.e. the
variance in the rotated coordinate system.
Examples
--------
>>> from sklearn.qda import QDA
>>> import numpy as np
>>> X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
>>> y = np.array([1, 1, 1, 2, 2, 2])
>>> clf = QDA()
>>> clf.fit(X, y)
QDA(priors=None, reg_param=0.0)
>>> print(clf.predict([[-0.8, -1]]))
[1]
See also
--------
sklearn.lda.LDA: Linear discriminant analysis
"""
def __init__(self, priors=None, reg_param=0.):
self.priors = np.asarray(priors) if priors is not None else None
self.reg_param = reg_param
def fit(self, X, y, store_covariances=False, tol=1.0e-4):
"""
Fit the QDA model according to the given training data and parameters.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array, shape = [n_samples]
Target values (integers)
store_covariances : boolean
If True the covariance matrices are computed and stored in the
`self.covariances_` attribute.
tol : float, optional, default 1.0e-4
Threshold used for rank estimation.
"""
X, y = check_X_y(X, y)
self.classes_, y = np.unique(y, return_inverse=True)
n_samples, n_features = X.shape
n_classes = len(self.classes_)
if n_classes < 2:
raise ValueError('y has less than 2 classes')
if self.priors is None:
self.priors_ = bincount(y) / float(n_samples)
else:
self.priors_ = self.priors
cov = None
if store_covariances:
cov = []
means = []
scalings = []
rotations = []
for ind in xrange(n_classes):
Xg = X[y == ind, :]
meang = Xg.mean(0)
means.append(meang)
if len(Xg) == 1:
raise ValueError('y has only 1 sample in class %s, covariance '
'is ill defined.' % str(self.classes_[ind]))
Xgc = Xg - meang
# Xgc = U * S * V.T
U, S, Vt = np.linalg.svd(Xgc, full_matrices=False)
rank = np.sum(S > tol)
if rank < n_features:
warnings.warn("Variables are collinear")
S2 = (S ** 2) / (len(Xg) - 1)
S2 = ((1 - self.reg_param) * S2) + self.reg_param
if store_covariances:
# cov = V * (S^2 / (n-1)) * V.T
cov.append(np.dot(S2 * Vt.T, Vt))
scalings.append(S2)
rotations.append(Vt.T)
if store_covariances:
self.covariances_ = cov
self.means_ = np.asarray(means)
self.scalings_ = scalings
self.rotations_ = rotations
return self
def _decision_function(self, X):
check_is_fitted(self, 'classes_')
X = check_array(X)
norm2 = []
for i in range(len(self.classes_)):
R = self.rotations_[i]
S = self.scalings_[i]
Xm = X - self.means_[i]
X2 = np.dot(Xm, R * (S ** (-0.5)))
norm2.append(np.sum(X2 ** 2, 1))
norm2 = np.array(norm2).T # shape = [len(X), n_classes]
u = np.asarray([np.sum(np.log(s)) for s in self.scalings_])
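# Note (added comment): the value returned below is the class-conditional
# Gaussian log-density at X up to an additive constant shared by all classes:
# norm2 is the squared Mahalanobis distance computed in the rotated and scaled
# coordinates, u is log|Sigma_k| from the same decomposition, and adding
# log(priors_) turns these into unnormalized log posteriors.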
return (-0.5 * (norm2 + u) + np.log(self.priors_))
def decision_function(self, X):
"""Apply decision function to an array of samples.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Array of samples (test vectors).
Returns
-------
C : array, shape = [n_samples, n_classes] or [n_samples,]
Decision function values related to each class, per sample.
In the two-class case, the shape is [n_samples,], giving the
log likelihood ratio of the positive class.
"""
dec_func = self._decision_function(X)
# handle special case of two classes
if len(self.classes_) == 2:
return dec_func[:, 1] - dec_func[:, 0]
return dec_func
def predict(self, X):
"""Perform classification on an array of test vectors X.
The predicted class C for each sample in X is returned.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Returns
-------
C : array, shape = [n_samples]
"""
d = self._decision_function(X)
y_pred = self.classes_.take(d.argmax(1))
return y_pred
def predict_proba(self, X):
"""Return posterior probabilities of classification.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.
Returns
-------
C : array, shape = [n_samples, n_classes]
Posterior probabilities of classification per class.
"""
values = self._decision_function(X)
# compute the likelihood of the underlying gaussian models
# up to a multiplicative constant.
likelihood = np.exp(values - values.max(axis=1)[:, np.newaxis])
# compute posterior probabilities
return likelihood / likelihood.sum(axis=1)[:, np.newaxis]
def predict_log_proba(self, X):
"""Return posterior probabilities of classification.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Array of samples/test vectors.
Returns
-------
C : array, shape = [n_samples, n_classes]
Posterior log-probabilities of classification per class.
"""
# XXX : can do better to avoid precision overflows
probas_ = self.predict_proba(X)
return np.log(probas_)
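# Illustrative sketch (added, not part of the original module): fitting QDA on
# a tiny synthetic two-class problem and querying the prediction interfaces.
# The data and the query point are invented for the example only.
def _qda_example():
    rng = np.random.RandomState(0)
    X_demo = np.vstack([rng.randn(20, 2) - 2.0, rng.randn(20, 2) + 2.0])
    y_demo = np.array([0] * 20 + [1] * 20)
    clf = QDA().fit(X_demo, y_demo)
    query = [[0.5, 0.5]]
    return clf.predict(query), clf.predict_proba(query), clf.decision_function(query)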
| bsd-3-clause |
anjalisood/spark-tk | regression-tests/sparktkregtests/testcases/frames/unflatten_test.py | 12 | 6371 | # vim: set encoding=utf-8
# Copyright (c) 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
""" Tests unflatten functionality"""
import unittest
from sparktkregtests.lib import sparktk_test
class Unflatten(sparktk_test.SparkTKTestCase):
def setUp(self):
super(Unflatten, self).setUp()
self.datafile_unflatten = self.get_file("unflatten_data_no_spaces.csv")
self.schema_unflatten = [("user", str),
("day", str),
("time", str),
("reading", int)]
def test_unflatten_one_column(self):
""" test for unflatten comma-separated rows """
frame = self.context.frame.import_csv(self.datafile_unflatten,
schema=self.schema_unflatten)
# get as a pandas frame to access data
pandas_frame = frame.to_pandas(frame.count())
pandas_data = []
# use this dictionary to store expected results by name
name_lookup = {}
# generate our expected results by unflattening
# each row ourselves and storing it by name
for index, row in pandas_frame.iterrows():
# if the name is not already in name lookup
# index 0 refers to user name (see schema above)
if row['user'] not in name_lookup:
name_lookup[row['user']] = row
else:
row_copy = name_lookup[row['user']]
                # append each item in the row to a
                # comma-delimited string
for index in range(1, len(row_copy)):
row_copy[index] = str(row_copy[index]) + "," + str(row[index])
name_lookup[row['user']] = row_copy
# now we unflatten the columns using sparktk
frame.unflatten_columns(['user'])
# finally we iterate through what we got
# from sparktk unflatten and compare
# it to the expected results we created
unflatten_pandas = frame.to_pandas()
for index, row in unflatten_pandas.iterrows():
self.assertEqual(row['user'], name_lookup[row['user']]['user'])
self.assertEqual(row['day'], name_lookup[row['user']]['day'])
self.assertItemsEqual(row['time'].split(','), name_lookup[row['user']]['time'].split(','))
self.assertItemsEqual(row['reading'].split(','), name_lookup[row['user']]['reading'].split(','))
self.assertEqual(frame.count(), 5)
def test_unflatten_multiple_cols(self):
frame = self.context.frame.import_csv(self.datafile_unflatten,
schema=self.schema_unflatten)
# get a pandas frame of the data
pandas_frame = frame.to_pandas(frame.count())
name_lookup = {}
# same logic as for unflatten_one_column,
# generate our expected results by unflattening
for index, row in pandas_frame.iterrows():
if row['user'] not in name_lookup:
name_lookup[row['user']] = row
else:
row_copy = name_lookup[row['user']]
for index in range(1, len(row_copy)):
                    # the only difference between the multi-column and
                    # single-column cases is that we expect the data in the
                    # column at index 1 (which is day, see the schema above)
                    # to also be unflattened, so we only append data if it
                    # isn't already there
                    if index != 1 or str(row[index]) not in str(row_copy[index]):
row_copy[index] = str(row_copy[index]) + "," + str(row[index])
name_lookup[row['user']] = row_copy
# now we unflatten using sparktk
frame.unflatten_columns(['user', 'day'])
# and compare our expected data with sparktk results
# which we have taken as a pandas frame
unflatten_pandas = frame.to_pandas()
for index, row in unflatten_pandas.iterrows():
self.assertEqual(row['user'], name_lookup[row['user']]['user'])
self.assertEqual(row['day'], name_lookup[row['user']]['day'])
self.assertItemsEqual(row['time'].split(','), name_lookup[row['user']]['time'].split(','))
self.assertItemsEqual(row['reading'].split(','), name_lookup[row['user']]['reading'].split(','))
    # same logic as the single-column test, but with sparse data.
    # because the datafile contains many thousands of lines, we do not
    # compare the full contents of the unflattened frame; doing so would
    # require iterating over it multiple times
def test_unflatten_sparse_data(self):
datafile_unflatten_sparse = self.get_file("unflatten_data_sparse.csv")
schema_unflatten_sparse = [("user", int),
("day", str),
("time", str),
("reading", str)]
frame_sparse = self.context.frame.import_csv(
datafile_unflatten_sparse, schema=schema_unflatten_sparse)
frame_sparse.unflatten_columns(['user'])
pandas_frame_sparse = frame_sparse.to_pandas()
# since this data set is huge we will just iterate once
# to make sure the data has been appended for time and
# reading for each row and that there are the correct
# number of items that we would expect for the
# unflattened frame
for index, row in pandas_frame_sparse.iterrows():
self.assertEqual(
len(str(row['time']).split(",")),
len(str(row['reading']).split(",")))
self.assertEqual(frame_sparse.count(), 100)
if __name__ == "__main__":
unittest.main()
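# --- Illustrative aside (not part of the original test module) ---
# A rough pandas-only sketch of the behaviour these tests expect from
# unflatten_columns: group on the key column(s) and comma-join the remaining
# columns.  The helper name is hypothetical and only mirrors the schema used
# in the tests above.
def _unflatten_sketch(df, keys):
    """df: pandas DataFrame; keys: list of column names to group on."""
    return (df.groupby(keys, as_index=False)
              .agg(lambda col: ','.join(col.astype(str))))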
| apache-2.0 |
hitszxp/scikit-learn | examples/mixture/plot_gmm_pdf.py | 284 | 1528 | """
=============================================
Density Estimation for a mixture of Gaussians
=============================================
Plot the density estimation of a mixture of two Gaussians. Data is
generated from two Gaussians with different centers and covariance
matrices.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
from sklearn import mixture
n_samples = 300
# generate random sample, two components
np.random.seed(0)
# generate spherical data centered on (20, 20)
shifted_gaussian = np.random.randn(n_samples, 2) + np.array([20, 20])
# generate zero centered stretched Gaussian data
C = np.array([[0., -0.7], [3.5, .7]])
stretched_gaussian = np.dot(np.random.randn(n_samples, 2), C)
# concatenate the two datasets into the final training set
X_train = np.vstack([shifted_gaussian, stretched_gaussian])
# fit a Gaussian Mixture Model with two components
clf = mixture.GMM(n_components=2, covariance_type='full')
clf.fit(X_train)
# display predicted scores by the model as a contour plot
x = np.linspace(-20.0, 30.0)
y = np.linspace(-20.0, 40.0)
X, Y = np.meshgrid(x, y)
XX = np.array([X.ravel(), Y.ravel()]).T
Z = -clf.score_samples(XX)[0]
Z = Z.reshape(X.shape)
CS = plt.contour(X, Y, Z, norm=LogNorm(vmin=1.0, vmax=1000.0),
levels=np.logspace(0, 3, 10))
CB = plt.colorbar(CS, shrink=0.8, extend='both')
plt.scatter(X_train[:, 0], X_train[:, 1], .8)
plt.title('Negative log-likelihood predicted by a GMM')
plt.axis('tight')
plt.show()
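# --- Illustrative aside (not part of the original example) ---
# The "stretched" cloud above is produced by the linear map x = z.dot(C)
# with z ~ N(0, I), so its true covariance is C.T.dot(C); a reasonable sanity
# check is that one fitted component recovers roughly that matrix.
def _stretched_covariance_sketch(C):
    return np.dot(C.T, C)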
| bsd-3-clause |
stulp/dmpbbo | demos_cpp/functionapproximators/demoFunctionApproximatorTrainingWrapper.py | 1 | 2740 | # This file is part of DmpBbo, a set of libraries and programs for the
# black-box optimization of dynamical movement primitives.
# Copyright (C) 2014 Freek Stulp, ENSTA-ParisTech
#
# DmpBbo is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 2 of the License, or
# (at your option) any later version.
#
# DmpBbo is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with DmpBbo. If not, see <http://www.gnu.org/licenses/>.
from mpl_toolkits.mplot3d.axes3d import Axes3D
import numpy
import matplotlib.pyplot as plt
import os, sys
lib_path = os.path.abspath('../')
sys.path.append(lib_path)
from executeBinary import executeBinary
lib_path = os.path.abspath('../../python')
sys.path.append(lib_path)
from functionapproximators.functionapproximators_plotting import *
if __name__=='__main__':
"""Run some training sessions and plot results."""
fig_number = 1;
executable = "./demoFunctionApproximatorTraining"
directory = "./demoFunctionApproximatorTrainingDataTmp/"
fa_names = ["RBFN","GPR","RRRFF","LWR", "LWPR", "GMR"]
for fa_name in fa_names:
# Call the executable with the directory to which results should be written
arguments = directory+" "+fa_name
executeBinary(executable, arguments)
for fa_name in fa_names:
        cur_directory = directory+fa_name+"_1D"
if not os.path.exists(cur_directory):
break
print("Plotting "+fa_name+" results")
fig = plt.figure(fig_number,figsize=(15,5))
fig_number = fig_number+1
for dim in [1, 2]:
            cur_directory = directory+fa_name+"_"+str(dim)+"D"
if (getDataDimFromDirectory(cur_directory)==1):
ax = fig.add_subplot(1, 3, 1)
ax2 = None
else:
ax = fig.add_subplot(1, 3, 2, projection='3d')
ax2 = fig.add_subplot(1, 3, 3, projection='3d')
plotFunctionApproximatorTrainingFromDirectory(cur_directory,ax,ax2)
ax.set_title(fa_name+" ("+str(dim)+"D data)")
            if ax2 is not None:
ax2.set_title(fa_name+" ("+str(dim)+"D basis functions)")
plt.show()
| lgpl-2.1 |
paalge/scikit-image | doc/examples/transform/plot_ssim.py | 3 | 2376 | """
===========================
Structural similarity index
===========================
When comparing images, the mean squared error (MSE)--while simple to
implement--is not highly indicative of perceived similarity. Structural
similarity aims to address this shortcoming by taking texture into account
[1]_, [2]_.
The example shows two modifications of the input image, each with the same MSE,
but with very different mean structural similarity indices.
.. [1] Zhou Wang; Bovik, A.C.; ,"Mean squared error: Love it or leave it? A new
look at Signal Fidelity Measures," Signal Processing Magazine, IEEE,
vol. 26, no. 1, pp. 98-117, Jan. 2009.
.. [2] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality
assessment: From error visibility to structural similarity," IEEE
Transactions on Image Processing, vol. 13, no. 4, pp. 600-612,
Apr. 2004.
"""
import numpy as np
import matplotlib.pyplot as plt
from skimage import data, img_as_float
from skimage.measure import compare_ssim as ssim
img = img_as_float(data.camera())
rows, cols = img.shape
noise = np.ones_like(img) * 0.2 * (img.max() - img.min())
noise[np.random.random(size=noise.shape) > 0.5] *= -1
def mse(x, y):
return np.linalg.norm(x - y)
img_noise = img + noise
img_const = img + abs(noise)
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(10, 4),
sharex=True, sharey=True,
subplot_kw={'adjustable': 'box-forced'})
ax = axes.ravel()
mse_none = mse(img, img)
ssim_none = ssim(img, img, data_range=img.max() - img.min())
mse_noise = mse(img, img_noise)
ssim_noise = ssim(img, img_noise,
data_range=img_noise.max() - img_noise.min())
mse_const = mse(img, img_const)
ssim_const = ssim(img, img_const,
data_range=img_const.max() - img_const.min())
label = 'MSE: {:.2f}, SSIM: {:.2f}'
ax[0].imshow(img, cmap=plt.cm.gray, vmin=0, vmax=1)
ax[0].set_xlabel(label.format(mse_none, ssim_none))
ax[0].set_title('Original image')
ax[1].imshow(img_noise, cmap=plt.cm.gray, vmin=0, vmax=1)
ax[1].set_xlabel(label.format(mse_noise, ssim_noise))
ax[1].set_title('Image with noise')
ax[2].imshow(img_const, cmap=plt.cm.gray, vmin=0, vmax=1)
ax[2].set_xlabel(label.format(mse_const, ssim_const))
ax[2].set_title('Image plus constant')
plt.tight_layout()
plt.show()
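# --- Illustrative aside (not part of the original example) ---
# The mse() helper above returns the Frobenius norm of the difference image,
# i.e. sqrt(N * MSE) for an image with N pixels; since all images here have
# the same size, it orders the two distortions the same way a literal
# mean-squared error would.  For reference:
def _mean_squared_error_sketch(x, y):
    return np.mean((x - y) ** 2)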
| bsd-3-clause |
jamielapointe/PyPassiveRangingFilters | analysis/plotMqukf.py | 1 | 11680 | '''
Created on May 9, 2017
@author: jamie
'''
from analysis.plotter import Plotter
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
class PlotMqukf(Plotter):
'''
    Plot results of the modified quaternion unscented Kalman filter (UKF)
'''
def __init__(self, inputData):
'''
Constructor
'''
super().__init__(inputData)
def plotAll(self):
self.plotRhoError()
self.plotRangeError()
self.plotLosError()
self.plotNuError()
self.plotRangeRateError()
self.plotAngularRateError()
self.plotResiduals()
self.plotLosErrors()
self.plotLos()
self.plotTrajectory()
plt.show()
def plotRhoError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.estRho1Sigma],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.rhoError,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='rho error')
ax1.plot(self.data.simTime,
self.data.estRho1Sigma,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('Rho error')
ax1.set_ylabel('rhoError (1/m)')
self.setupAxis(ax1, fig1)
def plotRangeError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.estRange1Sigma],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.rangeError,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='range error')
ax1.plot(self.data.simTime,
self.data.estRange1Sigma,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('Range error')
ax1.set_ylabel('rhoError (m)')
self.setupAxis(ax1, fig1)
def plotLosError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.estLos1Sigma],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.losError_I,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='los error')
ax1.plot(self.data.simTime,
self.data.estLos1Sigma,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('LOS error')
ax1.set_ylabel('losError_I (rad)')
self.setupAxis(ax1, fig1)
def plotNuError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.estNu1Sigma],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.nuError,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='nu error')
ax1.plot(self.data.simTime,
self.data.estNu1Sigma,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('Nu error')
ax1.set_ylabel('nuError (1/s)')
self.setupAxis(ax1, fig1)
def plotRangeRateError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
self.data.rangeRateError,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='rDot error')
ax1.set_title('rDot error')
ax1.set_ylabel('rDot (m/s)')
self.setupAxis(ax1, fig1)
def plotAngularRateError(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.estAngularRate1Sigma],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.angularRateError_L,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='omega error')
ax1.plot(self.data.simTime,
self.data.estAngularRate1Sigma,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('Angular Rate error')
ax1.set_ylabel('omegaError (rad/sec)')
self.setupAxis(ax1, fig1)
def plotResiduals(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
[-x for x in self.data.Pyy],
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.plot(self.data.simTime,
self.data.res,
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='residual')
ax1.plot(self.data.simTime,
self.data.Pyy,
color=self.colors[0],
linestyle=self.lines[0],
marker=self.markers[0],
linewidth=self.linewidth,
label='1-sigma error')
ax1.legend()
ax1.set_title('Residuals')
ax1.set_ylabel('residuals (rad)')
self.setupAxis(ax1, fig1)
def plotLosErrors(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
self.data.measError[:,0],
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS x error')
ax1.plot(self.data.simTime,
self.data.measError[:,1],
color=self.colors[2],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS y error')
ax1.plot(self.data.simTime,
self.data.measError[:,2],
color=self.colors[3],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS z error')
ax1.legend()
ax1.set_title('LOS error')
ax1.set_ylabel('residuals (rad)')
self.setupAxis(ax1, fig1)
def plotLos(self):
fig1, ax1 = plt.subplots()
ax1.plot(self.data.simTime,
self.data.trueLos_I[:,0],
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS x')
ax1.plot(self.data.simTime,
self.data.trueLos_I[:,1],
color=self.colors[2],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS y')
ax1.plot(self.data.simTime,
self.data.trueLos_I[:,2],
color=self.colors[3],
linestyle=self.lines[4],
marker=self.markers[3],
linewidth=self.linewidth,
label='LOS z')
ax1.legend()
ax1.set_title('LOS')
self.setupAxis(ax1, fig1)
def plotTrajectory(self):
fig1 = plt.figure()
ax1 = fig1.add_subplot(111, projection='3d')
ax1.plot(self.data.true_obsPos_I[:,0],
self.data.true_obsPos_I[:,1],
self.data.true_obsPos_I[:,2],
color=self.colors[1],
linestyle=self.lines[4],
marker=self.markers[1],
linewidth=self.linewidth,
markersize=1,
label='Obs')
ax1.plot([self.data.true_obsPos_I[0,0]],
[self.data.true_obsPos_I[0,1]],
[self.data.true_obsPos_I[0,2]],
marker=self.markers[2],
color=self.colors[1],
markersize=5)
ax1.plot([self.data.true_obsPos_I[-1,0]],
[self.data.true_obsPos_I[-1,1]],
[self.data.true_obsPos_I[-1,2]],
marker=self.markers[3],
color=self.colors[1],
markersize=5)
ax1.plot(self.data.true_tgtPos_I[:,0],
self.data.true_tgtPos_I[:,1],
self.data.true_tgtPos_I[:,2],
color=self.colors[2],
linestyle=self.lines[4],
marker=self.markers[1],
linewidth=self.linewidth,
markersize=1,
label='Tgt')
ax1.plot([self.data.true_tgtPos_I[0,0]],
[self.data.true_tgtPos_I[0,1]],
[self.data.true_tgtPos_I[0,2]],
marker=self.markers[2],
color=self.colors[2],
markersize=5)
ax1.plot([self.data.true_tgtPos_I[-1,0]],
[self.data.true_tgtPos_I[-1,1]],
[self.data.true_tgtPos_I[-1,2]],
marker=self.markers[3],
color=self.colors[2],
markersize=5)
ax1.legend()
ax1.set_aspect('equal')
ax1.axis('equal')
ax1.set_title('Trajectory')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
| mit |
Achuth17/scikit-learn | sklearn/utils/tests/test_utils.py | 215 | 8100 | import warnings
import numpy as np
import scipy.sparse as sp
from scipy.linalg import pinv2
from itertools import chain
from sklearn.utils.testing import (assert_equal, assert_raises, assert_true,
assert_almost_equal, assert_array_equal,
SkipTest, assert_raises_regex)
from sklearn.utils import check_random_state
from sklearn.utils import deprecated
from sklearn.utils import resample
from sklearn.utils import safe_mask
from sklearn.utils import column_or_1d
from sklearn.utils import safe_indexing
from sklearn.utils import shuffle
from sklearn.utils import gen_even_slices
from sklearn.utils.extmath import pinvh
from sklearn.utils.mocking import MockDataFrame
def test_make_rng():
# Check the check_random_state utility function behavior
assert_true(check_random_state(None) is np.random.mtrand._rand)
assert_true(check_random_state(np.random) is np.random.mtrand._rand)
rng_42 = np.random.RandomState(42)
assert_true(check_random_state(42).randint(100) == rng_42.randint(100))
rng_42 = np.random.RandomState(42)
assert_true(check_random_state(rng_42) is rng_42)
rng_42 = np.random.RandomState(42)
assert_true(check_random_state(43).randint(100) != rng_42.randint(100))
assert_raises(ValueError, check_random_state, "some invalid seed")
def test_resample_noarg():
# Border case not worth mentioning in doctests
assert_true(resample() is None)
def test_deprecated():
# Test whether the deprecated decorator issues appropriate warnings
# Copied almost verbatim from http://docs.python.org/library/warnings.html
# First a function...
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
@deprecated()
def ham():
return "spam"
spam = ham()
assert_equal(spam, "spam") # function must remain usable
assert_equal(len(w), 1)
assert_true(issubclass(w[0].category, DeprecationWarning))
assert_true("deprecated" in str(w[0].message).lower())
# ... then a class.
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
@deprecated("don't use this")
class Ham(object):
SPAM = 1
ham = Ham()
assert_true(hasattr(ham, "SPAM"))
assert_equal(len(w), 1)
assert_true(issubclass(w[0].category, DeprecationWarning))
assert_true("deprecated" in str(w[0].message).lower())
def test_resample_value_errors():
# Check that invalid arguments yield ValueError
assert_raises(ValueError, resample, [0], [0, 1])
assert_raises(ValueError, resample, [0, 1], [0, 1], n_samples=3)
assert_raises(ValueError, resample, [0, 1], [0, 1], meaning_of_life=42)
def test_safe_mask():
random_state = check_random_state(0)
X = random_state.rand(5, 4)
X_csr = sp.csr_matrix(X)
mask = [False, False, True, True, True]
mask = safe_mask(X, mask)
assert_equal(X[mask].shape[0], 3)
mask = safe_mask(X_csr, mask)
assert_equal(X_csr[mask].shape[0], 3)
def test_pinvh_simple_real():
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]], dtype=np.float64)
a = np.dot(a, a.T)
a_pinv = pinvh(a)
assert_almost_equal(np.dot(a, a_pinv), np.eye(3))
def test_pinvh_nonpositive():
a = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float64)
a = np.dot(a, a.T)
u, s, vt = np.linalg.svd(a)
s[0] *= -1
a = np.dot(u * s, vt) # a is now symmetric non-positive and singular
a_pinv = pinv2(a)
a_pinvh = pinvh(a)
assert_almost_equal(a_pinv, a_pinvh)
def test_pinvh_simple_complex():
a = (np.array([[1, 2, 3], [4, 5, 6], [7, 8, 10]])
+ 1j * np.array([[10, 8, 7], [6, 5, 4], [3, 2, 1]]))
a = np.dot(a, a.conj().T)
a_pinv = pinvh(a)
assert_almost_equal(np.dot(a, a_pinv), np.eye(3))
def test_column_or_1d():
EXAMPLES = [
("binary", ["spam", "egg", "spam"]),
("binary", [0, 1, 0, 1]),
("continuous", np.arange(10) / 20.),
("multiclass", [1, 2, 3]),
("multiclass", [0, 1, 2, 2, 0]),
("multiclass", [[1], [2], [3]]),
("multilabel-indicator", [[0, 1, 0], [0, 0, 1]]),
("multiclass-multioutput", [[1, 2, 3]]),
("multiclass-multioutput", [[1, 1], [2, 2], [3, 1]]),
("multiclass-multioutput", [[5, 1], [4, 2], [3, 1]]),
("multiclass-multioutput", [[1, 2, 3]]),
("continuous-multioutput", np.arange(30).reshape((-1, 3))),
]
for y_type, y in EXAMPLES:
if y_type in ["binary", 'multiclass', "continuous"]:
assert_array_equal(column_or_1d(y), np.ravel(y))
else:
assert_raises(ValueError, column_or_1d, y)
def test_safe_indexing():
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
inds = np.array([1, 2])
X_inds = safe_indexing(X, inds)
X_arrays = safe_indexing(np.array(X), inds)
assert_array_equal(np.array(X_inds), X_arrays)
assert_array_equal(np.array(X_inds), np.array(X)[inds])
def test_safe_indexing_pandas():
try:
import pandas as pd
except ImportError:
raise SkipTest("Pandas not found")
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
X_df = pd.DataFrame(X)
inds = np.array([1, 2])
X_df_indexed = safe_indexing(X_df, inds)
X_indexed = safe_indexing(X_df, inds)
assert_array_equal(np.array(X_df_indexed), X_indexed)
# fun with read-only data in dataframes
# this happens in joblib memmapping
X.setflags(write=False)
X_df_readonly = pd.DataFrame(X)
with warnings.catch_warnings(record=True):
X_df_ro_indexed = safe_indexing(X_df_readonly, inds)
assert_array_equal(np.array(X_df_ro_indexed), X_indexed)
def test_safe_indexing_mock_pandas():
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
X_df = MockDataFrame(X)
inds = np.array([1, 2])
X_df_indexed = safe_indexing(X_df, inds)
X_indexed = safe_indexing(X_df, inds)
assert_array_equal(np.array(X_df_indexed), X_indexed)
def test_shuffle_on_ndim_equals_three():
def to_tuple(A): # to make the inner arrays hashable
return tuple(tuple(tuple(C) for C in B) for B in A)
A = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]]) # A.shape = (2,2,2)
S = set(to_tuple(A))
shuffle(A) # shouldn't raise a ValueError for dim = 3
assert_equal(set(to_tuple(A)), S)
def test_shuffle_dont_convert_to_array():
# Check that shuffle does not try to convert to numpy arrays with float
# dtypes can let any indexable datastructure pass-through.
a = ['a', 'b', 'c']
b = np.array(['a', 'b', 'c'], dtype=object)
c = [1, 2, 3]
d = MockDataFrame(np.array([['a', 0],
['b', 1],
['c', 2]],
dtype=object))
e = sp.csc_matrix(np.arange(6).reshape(3, 2))
a_s, b_s, c_s, d_s, e_s = shuffle(a, b, c, d, e, random_state=0)
assert_equal(a_s, ['c', 'b', 'a'])
assert_equal(type(a_s), list)
assert_array_equal(b_s, ['c', 'b', 'a'])
assert_equal(b_s.dtype, object)
assert_equal(c_s, [3, 2, 1])
assert_equal(type(c_s), list)
assert_array_equal(d_s, np.array([['c', 2],
['b', 1],
['a', 0]],
dtype=object))
assert_equal(type(d_s), MockDataFrame)
assert_array_equal(e_s.toarray(), np.array([[4, 5],
[2, 3],
[0, 1]]))
def test_gen_even_slices():
# check that gen_even_slices contains all samples
some_range = range(10)
joined_range = list(chain(*[some_range[slice] for slice in gen_even_slices(10, 3)]))
assert_array_equal(some_range, joined_range)
# check that passing negative n_chunks raises an error
slices = gen_even_slices(10, -1)
assert_raises_regex(ValueError, "gen_even_slices got n_packs=-1, must be"
" >=1", next, slices)
| bsd-3-clause |
iankuoli/final_rnn | rnn_minibatch.py | 1 | 39689 | """ Vanilla RNN
Parallelizes scan over sequences by using mini-batches.
@author Graham Taylor
"""
import numpy as np
import theano
import theano.tensor as T
from sklearn.base import BaseEstimator
import logging
import time
import os
import datetime
import _pickle as pickle
logger = logging.getLogger(__name__)
import matplotlib.pyplot as plt
plt.ion()
mode = theano.Mode(linker='cvm')
#mode = 'DEBUG_MODE'
class RNN(object):
""" Recurrent neural network class
Supported output types:
real : linear output units, use mean-squared error
binary : binary output units, use cross-entropy error
softmax : single softmax out, use cross-entropy error
"""
def __init__(self, input, n_in, n_hidden, n_out, activation=T.tanh,
output_type='real'):
self.input = input
self.activation = activation
self.output_type = output_type
self.batch_size = T.iscalar()
# theta is a vector of all trainable parameters
# it represents the value of W, W_in, W_out, h0, bh, by
theta_shape = n_hidden ** 2 + n_in * n_hidden + n_hidden * n_out + \
n_hidden + n_hidden + n_out
self.theta = theano.shared(value=np.zeros(theta_shape,
dtype=theano.config.floatX))
# Parameters are reshaped views of theta
param_idx = 0 # pointer to somewhere along parameter vector
# recurrent weights as a shared variable
self.W = self.theta[param_idx:(param_idx + n_hidden ** 2)].reshape(
(n_hidden, n_hidden))
self.W.name = 'W'
W_init = np.asarray(np.random.uniform(size=(n_hidden, n_hidden),
low=-0.01, high=0.01),
dtype=theano.config.floatX)
param_idx += n_hidden ** 2
# input to hidden layer weights
self.W_in = self.theta[param_idx:(param_idx + n_in * \
n_hidden)].reshape((n_in, n_hidden))
self.W_in.name = 'W_in'
W_in_init = np.asarray(np.random.uniform(size=(n_in, n_hidden),
low=-0.01, high=0.01),
dtype=theano.config.floatX)
param_idx += n_in * n_hidden
# hidden to output layer weights
self.W_out = self.theta[param_idx:(param_idx + n_hidden * \
n_out)].reshape((n_hidden, n_out))
self.W_out.name = 'W_out'
W_out_init = np.asarray(np.random.uniform(size=(n_hidden, n_out),
low=-0.01, high=0.01),
dtype=theano.config.floatX)
param_idx += n_hidden * n_out
self.h0 = self.theta[param_idx:(param_idx + n_hidden)]
self.h0.name = 'h0'
h0_init = np.zeros((n_hidden,), dtype=theano.config.floatX)
param_idx += n_hidden
self.bh = self.theta[param_idx:(param_idx + n_hidden)]
self.bh.name = 'bh'
bh_init = np.zeros((n_hidden,), dtype=theano.config.floatX)
param_idx += n_hidden
self.by = self.theta[param_idx:(param_idx + n_out)]
self.by.name = 'by'
by_init = np.zeros((n_out,), dtype=theano.config.floatX)
param_idx += n_out
assert(param_idx == theta_shape)
# for convenience
self.params = [self.W, self.W_in, self.W_out, self.h0, self.bh,
self.by]
# shortcut to norms (for monitoring)
self.l2_norms = {}
for param in self.params:
self.l2_norms[param] = T.sqrt(T.sum(param ** 2))
# initialize parameters
# DEBUG_MODE gives division by zero error when we leave parameters
# as zeros
self.theta.set_value(np.concatenate([x.ravel() for x in
(W_init, W_in_init, W_out_init, h0_init, bh_init, by_init)]))
self.theta_update = theano.shared(
value=np.zeros(theta_shape, dtype=theano.config.floatX))
# recurrent function (using tanh activation function) and arbitrary output
# activation function
def step(x_t, h_tm1):
h_t = self.activation(T.dot(x_t, self.W_in) + \
T.dot(h_tm1, self.W) + self.bh)
y_t = T.dot(h_t, self.W_out) + self.by
return h_t, y_t
# the hidden state `h` for the entire sequence, and the output for the
# entire sequence `y` (first dimension is always time)
# Note the implementation of weight-sharing h0 across variable-size
# batches using T.ones multiplying h0
# Alternatively, T.alloc approach is more robust
[self.h, self.y_pred], _ = theano.scan(step,
sequences=self.input,
outputs_info=[T.alloc(self.h0, self.input.shape[1],
n_hidden), None])
# outputs_info=[T.ones(shape=(self.input.shape[1],
# self.h0.shape[0])) * self.h0, None])
# L1 norm ; one regularization option is to enforce L1 norm to
# be small
self.L1 = 0
self.L1 += abs(self.W.sum())
self.L1 += abs(self.W_in.sum())
self.L1 += abs(self.W_out.sum())
# square of L2 norm ; one regularization option is to enforce
# square of L2 norm to be small
self.L2_sqr = 0
self.L2_sqr += (self.W ** 2).sum()
self.L2_sqr += (self.W_in ** 2).sum()
self.L2_sqr += (self.W_out ** 2).sum()
if self.output_type == 'real':
self.loss = lambda y: self.mse(y)
elif self.output_type == 'binary':
# push through sigmoid
self.p_y_given_x = T.nnet.sigmoid(self.y_pred) # apply sigmoid
self.y_out = T.round(self.p_y_given_x) # round to {0,1}
self.loss = lambda y: self.nll_binary(y)
elif self.output_type == 'softmax':
# push through softmax, computing vector of class-membership
# probabilities in symbolic form
#
# T.nnet.softmax will not operate on T.tensor3 types, only matrices
# We take our n_steps x n_seq x n_classes output from the net
# and reshape it into a (n_steps * n_seq) x n_classes matrix
# apply softmax, then reshape back
y_p = self.y_pred
y_p_m = T.reshape(y_p, (y_p.shape[0] * y_p.shape[1], -1))
y_p_s = T.nnet.softmax(y_p_m)
self.p_y_given_x = T.reshape(y_p_s, y_p.shape)
# compute prediction as class whose probability is maximal
self.y_out = T.argmax(self.p_y_given_x, axis=-1)
self.loss = lambda y: self.nll_multiclass(y)
else:
raise NotImplementedError
def mse(self, y):
# error between output and target
return T.mean((self.y_pred - y) ** 2)
def nll_binary(self, y):
# negative log likelihood based on binary cross entropy error
return T.mean(T.nnet.binary_crossentropy(self.p_y_given_x, y))
def nll_multiclass(self, y):
# negative log likelihood based on multiclass cross entropy error
#
# Theano's advanced indexing is limited
# therefore we reshape our n_steps x n_seq x n_classes tensor3 of probs
# to a (n_steps * n_seq) x n_classes matrix of probs
# so that we can use advanced indexing (i.e. get the probs which
# correspond to the true class)
# the labels y also must be flattened when we do this to use the
# advanced indexing
p_y = self.p_y_given_x
p_y_m = T.reshape(p_y, (p_y.shape[0] * p_y.shape[1], -1))
y_f = y.flatten(ndim=1)
return -T.mean(T.log(p_y_m)[T.arange(p_y_m.shape[0]), y_f])
def errors(self, y):
"""Return a float representing the number of errors in the minibatch
over the total number of examples of the minibatch ; zero one
loss over the size of the minibatch
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
"""
# check if y has same dimension of y_pred
if y.ndim != self.y_out.ndim:
raise TypeError('y should have the same shape as self.y_out',
('y', y.type, 'y_out', self.y_out.type))
# check if y is of the correct datatype
if y.dtype.startswith('int'):
# the T.neq operator returns a vector of 0s and 1s, where 1
# represents a mistake in prediction
return T.mean(T.neq(self.y_out, y))
else:
raise NotImplementedError()
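# --- Illustrative sketch (not part of the original module) ---
# The softmax and nll_multiclass branches above reshape the
# (n_steps, n_seq, n_classes) output to a 2-D matrix because T.nnet.softmax
# only accepts matrices and Theano's advanced indexing is limited.  The same
# round trip in plain numpy, for reference only:
def _softmax_over_last_axis_sketch(y_pred):
    """y_pred: array of shape (n_steps, n_seq, n_classes)."""
    flat = y_pred.reshape(-1, y_pred.shape[-1])
    flat = flat - flat.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(flat)
    return (e / e.sum(axis=1, keepdims=True)).reshape(y_pred.shape)
def _nll_multiclass_sketch(p_y_given_x, y):
    """p_y_given_x: (n_steps, n_seq, n_classes) probs; y: (n_steps, n_seq) ints."""
    p_flat = p_y_given_x.reshape(-1, p_y_given_x.shape[-1])
    y_flat = y.flatten()
    return -np.mean(np.log(p_flat[np.arange(p_flat.shape[0]), y_flat]))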
class MetaRNN(BaseEstimator):
def __init__(self, n_in=5, n_hidden=50, n_out=5, learning_rate=0.01,
n_epochs=100, batch_size=100, L1_reg=0.00, L2_reg=0.00,
learning_rate_decay=1,
activation='tanh', output_type='real', final_momentum=0.9,
initial_momentum=0.5, momentum_switchover=5,
snapshot_every=None, snapshot_path='/tmp'):
self.n_in = int(n_in)
self.n_hidden = int(n_hidden)
self.n_out = int(n_out)
self.learning_rate = float(learning_rate)
self.learning_rate_decay = float(learning_rate_decay)
self.n_epochs = int(n_epochs)
self.batch_size = int(batch_size)
self.L1_reg = float(L1_reg)
self.L2_reg = float(L2_reg)
self.activation = activation
self.output_type = output_type
self.initial_momentum = float(initial_momentum)
self.final_momentum = float(final_momentum)
self.momentum_switchover = int(momentum_switchover)
if snapshot_every is not None:
self.snapshot_every = int(snapshot_every)
else:
self.snapshot_every = None
self.snapshot_path = snapshot_path
self.ready()
def ready(self):
# input (where first dimension is time)
self.x = T.tensor3(name='x')
# target (where first dimension is time)
if self.output_type == 'real':
self.y = T.tensor3(name='y', dtype=theano.config.floatX)
elif self.output_type == 'binary':
self.y = T.tensor3(name='y', dtype='int32')
elif self.output_type == 'softmax': # now it is a matrix (T x n_seq)
self.y = T.matrix(name='y', dtype='int32')
else:
raise NotImplementedError
# learning rate
self.lr = T.scalar()
if self.activation == 'tanh':
activation = T.tanh
elif self.activation == 'sigmoid':
activation = T.nnet.sigmoid
elif self.activation == 'relu':
activation = lambda x: x * (x > 0)
elif self.activation == 'cappedrelu':
activation = lambda x: T.minimum(x * (x > 0), 6)
else:
raise NotImplementedError
self.rnn = RNN(input=self.x, n_in=self.n_in,
n_hidden=self.n_hidden, n_out=self.n_out,
activation=activation, output_type=self.output_type)
if self.output_type == 'real':
self.predict = theano.function(inputs=[self.x, ],
outputs=self.rnn.y_pred,
mode=mode)
elif self.output_type == 'binary':
self.predict_proba = theano.function(inputs=[self.x, ],
outputs=self.rnn.p_y_given_x, mode=mode)
self.predict = theano.function(inputs=[self.x, ],
outputs=T.round(self.rnn.p_y_given_x),
mode=mode)
elif self.output_type == 'softmax':
self.predict_proba = theano.function(inputs=[self.x, ],
outputs=self.rnn.p_y_given_x, mode=mode)
self.predict = theano.function(inputs=[self.x, ],
outputs=self.rnn.y_out, mode=mode)
else:
raise NotImplementedError
def shared_dataset(self, data_xy, borrow=True):
""" Load the dataset into shared variables """
data_x, data_y = data_xy
shared_x = theano.shared(np.asarray(data_x,
dtype=theano.config.floatX),
borrow=True)
shared_y = theano.shared(np.asarray(data_y,
dtype=theano.config.floatX),
borrow=True)
if self.output_type in ('binary', 'softmax'):
return shared_x, T.cast(shared_y, 'int32')
else:
return shared_x, shared_y
def __getstate__(self):
""" Return state sequence."""
params = self._get_params() # parameters set in constructor
theta = self.rnn.theta.get_value()
state = (params, theta)
return state
def _set_weights(self, theta):
""" Set fittable parameters from weights sequence.
"""
self.rnn.theta.set_value(theta)
def __setstate__(self, state):
""" Set parameters from state sequence.
"""
params, theta = state
self.set_params(**params)
self.ready()
self._set_weights(theta)
def save(self, fpath='.', fname=None):
""" Save a pickled representation of Model state. """
fpathstart, fpathext = os.path.splitext(fpath)
if fpathext == '.pkl':
# User supplied an absolute path to a pickle file
fpath, fname = os.path.split(fpath)
elif fname is None:
# Generate filename based on date
date_obj = datetime.datetime.now()
date_str = date_obj.strftime('%Y-%m-%d-%H:%M:%S')
class_name = self.__class__.__name__
fname = '%s.%s.pkl' % (class_name, date_str)
fabspath = os.path.join(fpath, fname)
logger.info("Saving to %s ..." % fabspath)
file = open(fabspath, 'wb')
state = self.__getstate__()
pickle.dump(state, file, protocol=pickle.HIGHEST_PROTOCOL)
file.close()
def load(self, path):
""" Load model parameters from path. """
logger.info("Loading from %s ..." % path)
file = open(path, 'rb')
state = pickle.load(file)
self.__setstate__(state)
file.close()
def optional_output(self, train_set_x, show_norms=True, show_output=True):
""" Produces some debugging output. """
if show_norms:
norm_output = []
for param in self.rnn.params:
norm_output.append('%s: %6.4f' % (param.name,
self.get_norms[param]()))
logger.info("norms: {" + ', '.join(norm_output) + "}")
if show_output:
# show output for a single case
if self.output_type == 'binary':
output_fn = self.predict_proba
else:
output_fn = self.predict
logger.info("sample output: " + \
str(output_fn(train_set_x.get_value(
borrow=True)[:, 0, :][:, np.newaxis, :]).flatten()))
def fit(self, X_train, Y_train, X_test=None, Y_test=None,
validate_every=100, optimizer='sgd', compute_zero_one=False,
show_norms=True, show_output=True):
""" Fit model
Pass in X_test, Y_test to compute test error and report during
training.
X_train : ndarray (T x n_in)
Y_train : ndarray (T x n_out)
        validate_every : int
            validation frequency, in terms of number of epochs
optimizer : string
Optimizer type.
Possible values:
'sgd' : batch stochastic gradient descent
'cg' : nonlinear conjugate gradient algorithm
(scipy.optimize.fmin_cg)
'bfgs' : quasi-Newton method of Broyden, Fletcher, Goldfarb,
and Shanno (scipy.optimize.fmin_bfgs)
'l_bfgs_b' : Limited-memory BFGS (scipy.optimize.fmin_l_bfgs_b)
compute_zero_one : bool
in the case of binary output, compute zero-one error in addition to
cross-entropy error
show_norms : bool
Show L2 norms of individual parameter groups while training.
show_output : bool
Show the model output on first training case while training.
"""
if X_test is not None:
assert(Y_test is not None)
self.interactive = True
test_set_x, test_set_y = self.shared_dataset((X_test, Y_test))
else:
self.interactive = False
train_set_x, train_set_y = self.shared_dataset((X_train, Y_train))
if compute_zero_one:
assert(self.output_type == 'binary' \
or self.output_type == 'softmax')
# compute number of minibatches for training
# note that cases are the second dimension, not the first
n_train = train_set_x.get_value(borrow=True).shape[1]
n_train_batches = int(np.ceil(1.0 * n_train / self.batch_size))
if self.interactive:
n_test = test_set_x.get_value(borrow=True).shape[1]
n_test_batches = int(np.ceil(1.0 * n_test / self.batch_size))
#validate_every is specified in terms of epochs
validation_frequency = validate_every * n_train_batches
######################
# BUILD ACTUAL MODEL #
######################
logger.info('... building the model')
index = T.lscalar('index') # index to a [mini]batch
n_ex = T.lscalar('n_ex') # total number of examples
# learning rate (may change)
l_r = T.scalar('l_r', dtype=theano.config.floatX)
mom = T.scalar('mom', dtype=theano.config.floatX) # momentum
cost = self.rnn.loss(self.y) \
+ self.L1_reg * self.rnn.L1 \
+ self.L2_reg * self.rnn.L2_sqr
# Proper implementation of variable-batch size evaluation
# Note that classifier.errors() returns the mean error
# But the last batch may be a smaller size
# So we keep around the effective_batch_size (whose last element may
# be smaller than the rest)
# And weight the reported error by the batch_size when we average
# Also, by keeping batch_start and batch_stop as symbolic variables,
# we make the theano function easier to read
batch_start = index * self.batch_size
batch_stop = T.minimum(n_ex, (index + 1) * self.batch_size)
effective_batch_size = batch_stop - batch_start
get_batch_size = theano.function(inputs=[index, n_ex],
outputs=effective_batch_size)
compute_train_error = theano.function(inputs=[index, n_ex],
outputs=self.rnn.loss(self.y),
givens={self.x: train_set_x[:, batch_start:batch_stop],
self.y: train_set_y[:, batch_start:batch_stop]},
mode=mode)
if compute_zero_one:
compute_train_zo = theano.function(inputs=[index, n_ex],
outputs=self.rnn.errors(self.y),
givens={self.x: train_set_x[:, batch_start:batch_stop],
self.y: train_set_y[:, batch_start:batch_stop]},
mode=mode)
if self.interactive:
compute_test_error = theano.function(inputs=[index, n_ex],
outputs=self.rnn.loss(self.y),
givens={self.x: test_set_x[:, batch_start:batch_stop],
self.y: test_set_y[:, batch_start:batch_stop]},
mode=mode)
if compute_zero_one:
compute_test_zo = theano.function(inputs=[index, n_ex],
outputs=self.rnn.errors(self.y),
givens={self.x: test_set_x[:, batch_start:batch_stop],
self.y: test_set_y[:, batch_start:batch_stop]},
mode=mode)
self.get_norms = {}
for param in self.rnn.params:
self.get_norms[param] = theano.function(inputs=[],
outputs=self.rnn.l2_norms[param], mode=mode)
# compute the gradient of cost with respect to theta using BPTT
gtheta = T.grad(cost, self.rnn.theta)
if optimizer == 'sgd':
updates = {}
theta = self.rnn.theta
theta_update = self.rnn.theta_update
# careful here, update to the shared variable
# cannot depend on an updated other shared variable
# since updates happen in parallel
# so we need to be explicit
upd = mom * theta_update - l_r * gtheta
updates[theta_update] = upd
updates[theta] = theta + upd
# compiling a Theano function `train_model` that returns the
# cost, but in the same time updates the parameter of the
# model based on the rules defined in `updates`
train_model = theano.function(inputs=[index, n_ex, l_r, mom],
outputs=cost,
updates=updates,
givens={self.x: train_set_x[:, batch_start:batch_stop],
self.y: train_set_y[:, batch_start:batch_stop]},
mode=mode)
###############
# TRAIN MODEL #
###############
logger.info('... training')
epoch = 0
while (epoch < self.n_epochs):
epoch = epoch + 1
effective_momentum = self.final_momentum \
if epoch > self.momentum_switchover \
else self.initial_momentum
for minibatch_idx in range(n_train_batches):
minibatch_avg_cost = train_model(minibatch_idx, n_train,
self.learning_rate,
effective_momentum)
# iteration number (how many weight updates have we made?)
# epoch is 1-based, index is 0 based
iter = (epoch - 1) * n_train_batches + minibatch_idx + 1
if iter % validation_frequency == 0:
# compute loss on training set
train_losses = [compute_train_error(i, n_train)
for i in range(n_train_batches)]
train_batch_sizes = [get_batch_size(i, n_train)
for i in range(n_train_batches)]
this_train_loss = np.average(train_losses,
weights=train_batch_sizes)
if compute_zero_one:
train_zero_one = [compute_train_zo(i, n_train)
for i in range(n_train_batches)]
this_train_zero_one = np.average(train_zero_one,
weights=train_batch_sizes)
if self.interactive:
test_losses = [compute_test_error(i, n_test)
for i in range(n_test_batches)]
test_batch_sizes = [get_batch_size(i, n_test)
for i in range(n_test_batches)]
this_test_loss = np.average(test_losses,
weights=test_batch_sizes)
if compute_zero_one:
test_zero_one = [compute_test_zo(i, n_test)
for i in range(n_test_batches)]
this_test_zero_one = np.average(test_zero_one,
weights=test_batch_sizes)
if compute_zero_one:
logger.info('epoch %i, mb %i/%i, tr loss %f, '
'tr zo %f, te loss %f '
'te zo %f lr: %f' % \
(epoch, minibatch_idx + 1,
n_train_batches,
this_train_loss, this_train_zero_one,
this_test_loss, this_test_zero_one,
self.learning_rate))
else:
logger.info('epoch %i, mb %i/%i, tr loss %f '
'te loss %f lr: %f' % \
(epoch, minibatch_idx + 1, n_train_batches,
this_train_loss, this_test_loss,
self.learning_rate))
else:
if compute_zero_one:
logger.info('epoch %i, mb %i/%i, train loss %f'
' train zo %f '
'lr: %f' % (epoch,
minibatch_idx + 1,
n_train_batches,
this_train_loss,
this_train_zero_one,
self.learning_rate))
else:
logger.info('epoch %i, mb %i/%i, train loss %f'
' lr: %f' % (epoch,
minibatch_idx + 1,
n_train_batches,
this_train_loss,
self.learning_rate))
self.optional_output(train_set_x, show_norms,
show_output)
self.learning_rate *= self.learning_rate_decay
if self.snapshot_every is not None:
if (epoch + 1) % self.snapshot_every == 0:
date_obj = datetime.datetime.now()
date_str = date_obj.strftime('%Y-%m-%d-%H:%M:%S')
class_name = self.__class__.__name__
fname = '%s.%s-snapshot-%d.pkl' % (class_name,
date_str, epoch + 1)
fabspath = os.path.join(self.snapshot_path, fname)
self.save(fpath=fabspath)
elif optimizer == 'cg' or optimizer == 'bfgs' \
or optimizer == 'l_bfgs_b':
# compile a theano function that returns the cost of a minibatch
batch_cost = theano.function(inputs=[index, n_ex],
outputs=cost,
givens={self.x: train_set_x[:, batch_start:batch_stop],
self.y: train_set_y[:, batch_start:batch_stop]},
mode=mode, name="batch_cost")
# compile a theano function that returns the gradient of the
# minibatch with respect to theta
batch_grad = theano.function(inputs=[index, n_ex],
outputs=T.grad(cost, self.rnn.theta),
givens={self.x: train_set_x[:, batch_start:batch_stop],
self.y: train_set_y[:, batch_start:batch_stop]},
mode=mode, name="batch_grad")
# creates a function that computes the average cost on the training
# set
def train_fn(theta_value):
self.rnn.theta.set_value(theta_value, borrow=True)
train_losses = [batch_cost(i, n_train)
for i in range(n_train_batches)]
train_batch_sizes = [get_batch_size(i, n_train)
for i in range(n_train_batches)]
return np.average(train_losses, weights=train_batch_sizes)
# creates a function that computes the average gradient of cost
# with respect to theta
def train_fn_grad(theta_value):
self.rnn.theta.set_value(theta_value, borrow=True)
train_grads = [batch_grad(i, n_train)
for i in range(n_train_batches)]
train_batch_sizes = [get_batch_size(i, n_train)
for i in range(n_train_batches)]
return np.average(train_grads, weights=train_batch_sizes,
axis=0)
# validation function, prints useful output after each iteration
def callback(theta_value):
self.epoch += 1
if (self.epoch) % validate_every == 0:
self.rnn.theta.set_value(theta_value, borrow=True)
# compute loss on training set
train_losses = [compute_train_error(i, n_train)
for i in range(n_train_batches)]
train_batch_sizes = [get_batch_size(i, n_train)
for i in range(n_train_batches)]
this_train_loss = np.average(train_losses,
weights=train_batch_sizes)
if compute_zero_one:
train_zero_one = [compute_train_zo(i, n_train)
for i in range(n_train_batches)]
this_train_zero_one = np.average(train_zero_one,
weights=train_batch_sizes)
if self.interactive:
test_losses = [compute_test_error(i, n_test)
for i in range(n_test_batches)]
test_batch_sizes = [get_batch_size(i, n_test)
for i in range(n_test_batches)]
this_test_loss = np.average(test_losses,
weights=test_batch_sizes)
if compute_zero_one:
test_zero_one = [compute_test_zo(i, n_test)
for i in range(n_test_batches)]
this_test_zero_one = np.average(test_zero_one,
weights=test_batch_sizes)
if compute_zero_one:
logger.info('epoch %i, tr loss %f, '
'tr zo %f, te loss %f '
'te zo %f' % \
(self.epoch, this_train_loss,
this_train_zero_one, this_test_loss,
this_test_zero_one))
else:
logger.info('epoch %i, tr loss %f, te loss %f' % \
(self.epoch, this_train_loss,
this_test_loss, self.learning_rate))
else:
if compute_zero_one:
logger.info('epoch %i, train loss %f'
', train zo %f ' % \
(self.epoch, this_train_loss,
this_train_zero_one))
else:
logger.info('epoch %i, train loss %f ' % \
(self.epoch, this_train_loss))
self.optional_output(train_set_x, show_norms, show_output)
###############
# TRAIN MODEL #
###############
logger.info('... training')
# using scipy conjugate gradient optimizer
import scipy.optimize
if optimizer == 'cg':
of = scipy.optimize.fmin_cg
elif optimizer == 'bfgs':
of = scipy.optimize.fmin_bfgs
elif optimizer == 'l_bfgs_b':
of = scipy.optimize.fmin_l_bfgs_b
logger.info("Optimizing using %s..." % of.__name__)
start_time = time.clock()
# keep track of epochs externally
# these get updated through callback
self.epoch = 0
# interface to l_bfgs_b is different than that of cg, bfgs
# however, this will be changed in scipy 0.11
# unified under scipy.optimize.minimize
if optimizer == 'cg' or optimizer == 'bfgs':
best_theta = of(
f=train_fn,
x0=self.rnn.theta.get_value(),
# x0=np.zeros(self.rnn.theta.get_value().shape,
# dtype=theano.config.floatX),
fprime=train_fn_grad,
callback=callback,
disp=1,
retall=1,
maxiter=self.n_epochs)
elif optimizer == 'l_bfgs_b':
best_theta, f_best_theta, info = of(
func=train_fn,
x0=self.rnn.theta.get_value(),
fprime=train_fn_grad,
iprint=validate_every,
maxfun=self.n_epochs) # max number of feval
end_time = time.clock()
print("Optimization time: %f" % (end_time - start_time))
else:
raise NotImplementedError
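# --- Illustrative sketch (not part of the original module) ---
# The SGD branch of fit() keeps a separate velocity (theta_update) so that
# both shared-variable updates can be written in terms of the not-yet-updated
# values, as the comments there explain.  The same update rule in plain numpy:
def _momentum_sgd_step_sketch(theta, velocity, grad, lr, mom):
    """Return the updated (theta, velocity) for one momentum SGD step."""
    new_velocity = mom * velocity - lr * grad
    return theta + new_velocity, new_velocity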
def test_real(n_epochs=1000):
""" Test RNN with real-valued outputs. """
n_hidden = 10
n_in = 5
n_out = 3
n_steps = 10
n_seq = 10 # per batch
n_batches = 10
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_steps, n_seq * n_batches, n_in)
targets = np.zeros((n_steps, n_seq * n_batches, n_out))
targets[1:, :, 0] = seq[:-1, :, 3] # delayed 1
targets[1:, :, 1] = seq[:-1, :, 2] # delayed 1
targets[2:, :, 2] = seq[:-2, :, 0] # delayed 2
targets += 0.01 * np.random.standard_normal(targets.shape)
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.01, learning_rate_decay=0.999,
n_epochs=n_epochs, batch_size=n_seq, activation='tanh',
L2_reg=1e-3)
model.fit(seq, targets, validate_every=100, optimizer='bfgs')
plt.close('all')
fig = plt.figure()
ax1 = plt.subplot(211)
plt.plot(seq[:, 0, :])
ax1.set_title('input')
ax2 = plt.subplot(212)
true_targets = plt.plot(targets[:, 0, :])
guess = model.predict(seq[:, 0, :][:, np.newaxis, :])
guessed_targets = plt.plot(guess.squeeze(), linestyle='--')
for i, x in enumerate(guessed_targets):
x.set_color(true_targets[i].get_color())
ax2.set_title('solid: true output, dashed: model output')
def test_binary(multiple_out=False, n_epochs=1000, optimizer='cg'):
""" Test RNN with binary outputs. """
n_hidden = 10
n_in = 5
if multiple_out:
n_out = 2
else:
n_out = 1
n_steps = 10
n_seq = 10 # per batch
n_batches = 50
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_steps, n_seq * n_batches, n_in)
targets = np.zeros((n_steps, n_seq * n_batches, n_out))
# whether lag 1 (dim 3) is greater than lag 2 (dim 0)
targets[2:, :, 0] = np.cast[np.int](seq[1:-1, :, 3] > seq[:-2, :, 0])
if multiple_out:
# whether product of lag 1 (dim 4) and lag 1 (dim 2)
# is less than lag 2 (dim 0)
targets[2:, :, 1] = np.cast[np.int](
(seq[1:-1, :, 4] * seq[1:-1, :, 2]) > seq[:-2, :, 0])
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.005, learning_rate_decay=0.999,
n_epochs=n_epochs, batch_size=n_seq, activation='tanh',
output_type='binary')
model.fit(seq, targets, validate_every=100, compute_zero_one=True,
optimizer=optimizer)
seqs = range(10)
plt.close('all')
for seq_num in seqs:
fig = plt.figure()
ax1 = plt.subplot(211)
plt.plot(seq[:, seq_num, :])
ax1.set_title('input')
ax2 = plt.subplot(212)
true_targets = plt.step(range(n_steps), targets[:, seq_num, :],
marker='o')
guess = model.predict_proba(seq[:, seq_num, :][:, np.newaxis, :])
guessed_targets = plt.step(range(n_steps), guess.squeeze())
plt.setp(guessed_targets, linestyle='--', marker='d')
for i, x in enumerate(guessed_targets):
x.set_color(true_targets[i].get_color())
ax2.set_ylim((-0.1, 1.1))
ax2.set_title('solid: true output, dashed: model output (prob)')
def test_softmax(n_epochs=250, optimizer='cg'):
""" Test RNN with softmax outputs. """
n_hidden = 10
n_in = 5
n_steps = 10
n_seq = 10 # per batch
n_batches = 50
n_classes = 3
n_out = n_classes # restricted to single softmax per time step
np.random.seed(0)
# simple lag test
seq = np.random.randn(n_steps, n_seq * n_batches, n_in)
targets = np.zeros((n_steps, n_seq * n_batches), dtype=np.int)
thresh = 0.5
# if lag 1 (dim 3) is greater than lag 2 (dim 0) + thresh
# class 1
# if lag 1 (dim 3) is less than lag 2 (dim 0) - thresh
# class 2
# if lag 2(dim0) - thresh <= lag 1 (dim 3) <= lag2(dim0) + thresh
# class 0
targets[2:, :][seq[1:-1, :, 3] > seq[:-2, :, 0] + thresh] = 1
targets[2:, :][seq[1:-1, :, 3] < seq[:-2, :, 0] - thresh] = 2
#targets[:, 2:, 0] = np.cast[np.int](seq[:, 1:-1, 3] > seq[:, :-2, 0])
model = MetaRNN(n_in=n_in, n_hidden=n_hidden, n_out=n_out,
learning_rate=0.005, learning_rate_decay=0.999,
n_epochs=n_epochs, batch_size=n_seq, activation='tanh',
output_type='softmax')
model.fit(seq, targets, validate_every=10, compute_zero_one=True,
optimizer=optimizer)
seqs = range(10)
plt.close('all')
for seq_num in seqs:
fig = plt.figure()
ax1 = plt.subplot(211)
plt.plot(seq[:, seq_num])
ax1.set_title('input')
ax2 = plt.subplot(212)
# blue line will represent true classes
true_targets = plt.step(range(n_steps), targets[:, seq_num],
marker='o')
# show probabilities (in b/w) output by model
guess = model.predict_proba(seq[:, seq_num][:, np.newaxis])
guessed_probs = plt.imshow(guess.squeeze().T, interpolation='nearest',
cmap='gray')
ax2.set_title('blue: true class, grayscale: probs assigned by model')
if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
t0 = time.time()
test_real(n_epochs=1000)
#test_binary(optimizer='sgd', n_epochs=1000)
#test_softmax(n_epochs=250, optimizer='sgd')
print("Elapsed time: %f" % (time.time() - t0))
| bsd-3-clause |
MostafaGazar/tensorflow | tensorflow/contrib/learn/python/learn/learn_io/__init__.py | 10 | 1992 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tools to allow different io formats."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_data
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import extract_dask_labels
from tensorflow.contrib.learn.python.learn.learn_io.dask_io import HAS_DASK
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import queue_parsed_features
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import read_batch_examples
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import read_batch_features
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import read_batch_record_features
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import read_keyed_batch_examples
from tensorflow.contrib.learn.python.learn.learn_io.graph_io import read_keyed_batch_features
from tensorflow.contrib.learn.python.learn.learn_io.pandas_io import extract_pandas_data
from tensorflow.contrib.learn.python.learn.learn_io.pandas_io import extract_pandas_labels
from tensorflow.contrib.learn.python.learn.learn_io.pandas_io import extract_pandas_matrix
from tensorflow.contrib.learn.python.learn.learn_io.pandas_io import HAS_PANDAS
| apache-2.0 |
winklerand/pandas | pandas/stats/moments.py | 1 | 31628 | """
Provides rolling statistical moments and related descriptive
statistics implemented in Cython
"""
from __future__ import division
import warnings
import numpy as np
from pandas.core.dtypes.common import is_scalar
from pandas.core.api import DataFrame, Series
from pandas.util._decorators import Substitution, Appender
__all__ = ['rolling_count', 'rolling_max', 'rolling_min',
'rolling_sum', 'rolling_mean', 'rolling_std', 'rolling_cov',
'rolling_corr', 'rolling_var', 'rolling_skew', 'rolling_kurt',
'rolling_quantile', 'rolling_median', 'rolling_apply',
'rolling_window',
'ewma', 'ewmvar', 'ewmstd', 'ewmvol', 'ewmcorr', 'ewmcov',
'expanding_count', 'expanding_max', 'expanding_min',
'expanding_sum', 'expanding_mean', 'expanding_std',
'expanding_cov', 'expanding_corr', 'expanding_var',
'expanding_skew', 'expanding_kurt', 'expanding_quantile',
'expanding_median', 'expanding_apply']
# -----------------------------------------------------------------------------
# Docs
# The order of arguments for the _doc_template is:
# (header, args, kwargs, returns, notes)
_doc_template = """
%s
Parameters
----------
%s%s
Returns
-------
%s
%s
"""
_roll_kw = """window : int
Size of the moving window. This is the number of observations used for
calculating the statistic.
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the statistic. Specified
as a frequency string or DateOffset object.
center : boolean, default False
Set the labels at the center of the window.
how : string, default '%s'
Method for down- or re-sampling
"""
_roll_notes = r"""
Notes
-----
By default, the result is set to the right edge of the window. This can be
changed to the center of the window by setting ``center=True``.
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
"""
_ewm_kw = r"""com : float, optional
Specify decay in terms of center of mass,
:math:`\alpha = 1 / (1 + com),\text{ for } com \geq 0`
span : float, optional
Specify decay in terms of span,
:math:`\alpha = 2 / (span + 1),\text{ for } span \geq 1`
halflife : float, optional
Specify decay in terms of half-life,
:math:`\alpha = 1 - exp(log(0.5) / halflife),\text{ for } halflife > 0`
alpha : float, optional
Specify smoothing factor :math:`\alpha` directly,
:math:`0 < \alpha \leq 1`
.. versionadded:: 0.18.0
min_periods : int, default 0
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : None or string alias / date offset object, default=None
Frequency to conform to before computing statistic
adjust : boolean, default True
Divide by decaying adjustment factor in beginning periods to account for
imbalance in relative weightings (viewing EWMA as a moving average)
how : string, default 'mean'
Method for down- or re-sampling
ignore_na : boolean, default False
Ignore missing values when calculating weights;
specify True to reproduce pre-0.15.0 behavior
"""
_ewm_notes = r"""
Notes
-----
Exactly one of center of mass, span, half-life, and alpha must be provided.
Allowed values and relationship between the parameters are specified in the
parameter descriptions above; see the link at the end of this section for
a detailed explanation.
When adjust is True (default), weighted averages are calculated using weights
(1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1.
When adjust is False, weighted averages are calculated recursively as:
weighted_average[0] = arg[0];
weighted_average[i] = (1-alpha)*weighted_average[i-1] + alpha*arg[i].
When ignore_na is False (default), weights are based on absolute positions.
For example, the weights of x and y used in calculating the final weighted
average of [x, None, y] are (1-alpha)**2 and 1 (if adjust is True), and
(1-alpha)**2 and alpha (if adjust is False).
When ignore_na is True (reproducing pre-0.15.0 behavior), weights are based on
relative positions. For example, the weights of x and y used in calculating
the final weighted average of [x, None, y] are 1-alpha and 1 (if adjust is
True), and 1-alpha and alpha (if adjust is False).
More details can be found at
http://pandas.pydata.org/pandas-docs/stable/computation.html#exponentially-weighted-windows
"""
_expanding_kw = """min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the statistic. Specified
as a frequency string or DateOffset object.
"""
_type_of_input_retval = "y : type of input argument"
_flex_retval = """y : type depends on inputs
DataFrame / DataFrame -> DataFrame (matches on columns) or Panel (pairwise)
DataFrame / Series -> Computes result for each column
Series / Series -> Series"""
_pairwise_retval = "y : Panel whose items are df1.index values"
_unary_arg = "arg : Series, DataFrame\n"
_binary_arg_flex = """arg1 : Series, DataFrame, or ndarray
arg2 : Series, DataFrame, or ndarray, optional
if not supplied then will default to arg1 and produce pairwise output
"""
_binary_arg = """arg1 : Series, DataFrame, or ndarray
arg2 : Series, DataFrame, or ndarray
"""
_pairwise_arg = """df1 : DataFrame
df2 : DataFrame
"""
_pairwise_kw = """pairwise : bool, default False
If False then only matching columns between arg1 and arg2 will be used and
the output will be a DataFrame.
If True then all pairwise combinations will be calculated and the output
will be a Panel in the case of DataFrame inputs. In the case of missing
elements, only complete pairwise observations will be used.
"""
_ddof_kw = """ddof : int, default 1
Delta Degrees of Freedom. The divisor used in calculations
is ``N - ddof``, where ``N`` represents the number of elements.
"""
_bias_kw = r"""bias : boolean, default False
Use a standard estimation bias correction
"""
def ensure_compat(dispatch, name, arg, func_kw=None, *args, **kwargs):
"""
wrapper function to dispatch to the appropriate window functions
wraps/unwraps ndarrays for compat
can be removed when ndarray support is removed
"""
is_ndarray = isinstance(arg, np.ndarray)
if is_ndarray:
if arg.ndim == 1:
arg = Series(arg)
elif arg.ndim == 2:
arg = DataFrame(arg)
else:
raise AssertionError("cannot support ndim > 2 for ndarray compat")
warnings.warn("pd.{dispatch}_{name} is deprecated for ndarrays and "
"will be removed "
"in a future version"
.format(dispatch=dispatch, name=name),
FutureWarning, stacklevel=3)
# get the functional keywords here
if func_kw is None:
func_kw = []
kwds = {}
for k in func_kw:
value = kwargs.pop(k, None)
if value is not None:
kwds[k] = value
# how is a keyword that if not-None should be in kwds
how = kwargs.pop('how', None)
if how is not None:
kwds['how'] = how
r = getattr(arg, dispatch)(**kwargs)
if not is_ndarray:
# give a helpful deprecation message
# with copy-pastable arguments
pargs = ','.join("{a}={b}".format(a=a, b=b)
for a, b in kwargs.items() if b is not None)
aargs = ','.join(args)
if len(aargs):
aargs += ','
def f(a, b):
if is_scalar(b):
return "{a}={b}".format(a=a, b=b)
return "{a}=<{b}>".format(a=a, b=type(b).__name__)
aargs = ','.join(f(a, b) for a, b in kwds.items() if b is not None)
warnings.warn("pd.{dispatch}_{name} is deprecated for {klass} "
"and will be removed in a future version, replace with "
"\n\t{klass}.{dispatch}({pargs}).{name}({aargs})"
.format(klass=type(arg).__name__, pargs=pargs,
aargs=aargs, dispatch=dispatch, name=name),
FutureWarning, stacklevel=3)
result = getattr(r, name)(*args, **kwds)
if is_ndarray:
result = result.values
return result
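# Illustrative sketch (not part of the public pandas API) of the migration that
# the FutureWarning above recommends: the deprecated module-level call and the
# method-based replacement produce the same result.  The data here are made up.
def _rolling_migration_example():
    """Return (old_style, new_style) rolling means for a small random Series."""
    s = Series(np.random.randn(100))
    # deprecated path, dispatched through ensure_compat (emits a FutureWarning)
    old_style = rolling_mean(s, window=5, min_periods=3)
    # replacement suggested by the warning message
    new_style = s.rolling(window=5, min_periods=3).mean()
    return old_style, new_style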
def rolling_count(arg, window, **kwargs):
"""
Rolling count of number of non-NaN observations inside provided window.
Parameters
----------
arg : DataFrame or numpy ndarray-like
window : int
Size of the moving window. This is the number of observations used for
calculating the statistic.
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
center : boolean, default False
Whether the label should correspond with center of window
how : string, default 'mean'
Method for down- or re-sampling
Returns
-------
rolling_count : type of caller
Notes
-----
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('rolling', 'count', arg, window=window, **kwargs)
@Substitution("Unbiased moving covariance.", _binary_arg_flex,
_roll_kw % 'None' + _pairwise_kw + _ddof_kw, _flex_retval,
_roll_notes)
@Appender(_doc_template)
def rolling_cov(arg1, arg2=None, window=None, pairwise=None, **kwargs):
if window is None and isinstance(arg2, (int, float)):
window = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise # only default unset
elif arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise # only default unset
return ensure_compat('rolling',
'cov',
arg1,
other=arg2,
window=window,
pairwise=pairwise,
func_kw=['other', 'pairwise', 'ddof'],
**kwargs)
@Substitution("Moving sample correlation.", _binary_arg_flex,
_roll_kw % 'None' + _pairwise_kw, _flex_retval, _roll_notes)
@Appender(_doc_template)
def rolling_corr(arg1, arg2=None, window=None, pairwise=None, **kwargs):
if window is None and isinstance(arg2, (int, float)):
window = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise # only default unset
elif arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise # only default unset
return ensure_compat('rolling',
'corr',
arg1,
other=arg2,
window=window,
pairwise=pairwise,
func_kw=['other', 'pairwise'],
**kwargs)
# -----------------------------------------------------------------------------
# Exponential moving moments
@Substitution("Exponentially-weighted moving average", _unary_arg, _ewm_kw,
_type_of_input_retval, _ewm_notes)
@Appender(_doc_template)
def ewma(arg, com=None, span=None, halflife=None, alpha=None, min_periods=0,
freq=None, adjust=True, how=None, ignore_na=False):
return ensure_compat('ewm',
'mean',
arg,
com=com,
span=span,
halflife=halflife,
alpha=alpha,
min_periods=min_periods,
freq=freq,
adjust=adjust,
how=how,
ignore_na=ignore_na)
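# Illustrative reference implementation of the weighting scheme that the
# ``_ewm_notes`` block above describes.  This is a documentation sketch only
# (it ignores missing values and is not the code path pandas uses internally),
# kept next to ewma() so the adjust=True / adjust=False formulas can be
# compared side by side.
def _ewma_reference(values, alpha, adjust=True):
    """Exponentially weighted mean of a plain sequence of floats."""
    out = []
    if adjust:
        # weights (1-alpha)**(n-1), (1-alpha)**(n-2), ..., 1-alpha, 1
        num = 0.0
        den = 0.0
        for v in values:
            num = (1.0 - alpha) * num + v
            den = (1.0 - alpha) * den + 1.0
            out.append(num / den)
    else:
        # recursive form: y[0] = x[0]; y[i] = (1-alpha)*y[i-1] + alpha*x[i]
        avg = values[0]
        out.append(avg)
        for v in values[1:]:
            avg = (1.0 - alpha) * avg + alpha * v
            out.append(avg)
    return out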
@Substitution("Exponentially-weighted moving variance", _unary_arg,
_ewm_kw + _bias_kw, _type_of_input_retval, _ewm_notes)
@Appender(_doc_template)
def ewmvar(arg, com=None, span=None, halflife=None, alpha=None, min_periods=0,
bias=False, freq=None, how=None, ignore_na=False, adjust=True):
return ensure_compat('ewm',
'var',
arg,
com=com,
span=span,
halflife=halflife,
alpha=alpha,
min_periods=min_periods,
freq=freq,
adjust=adjust,
how=how,
ignore_na=ignore_na,
bias=bias,
func_kw=['bias'])
@Substitution("Exponentially-weighted moving std", _unary_arg,
_ewm_kw + _bias_kw, _type_of_input_retval, _ewm_notes)
@Appender(_doc_template)
def ewmstd(arg, com=None, span=None, halflife=None, alpha=None, min_periods=0,
bias=False, freq=None, how=None, ignore_na=False, adjust=True):
return ensure_compat('ewm',
'std',
arg,
com=com,
span=span,
halflife=halflife,
alpha=alpha,
min_periods=min_periods,
freq=freq,
adjust=adjust,
how=how,
ignore_na=ignore_na,
bias=bias,
func_kw=['bias'])
ewmvol = ewmstd
@Substitution("Exponentially-weighted moving covariance", _binary_arg_flex,
_ewm_kw + _pairwise_kw, _type_of_input_retval, _ewm_notes)
@Appender(_doc_template)
def ewmcov(arg1, arg2=None, com=None, span=None, halflife=None, alpha=None,
min_periods=0, bias=False, freq=None, pairwise=None, how=None,
ignore_na=False, adjust=True):
if arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise
elif isinstance(arg2, (int, float)) and com is None:
com = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise
return ensure_compat('ewm',
'cov',
arg1,
other=arg2,
com=com,
span=span,
halflife=halflife,
alpha=alpha,
min_periods=min_periods,
bias=bias,
freq=freq,
how=how,
ignore_na=ignore_na,
adjust=adjust,
pairwise=pairwise,
func_kw=['other', 'pairwise', 'bias'])
@Substitution("Exponentially-weighted moving correlation", _binary_arg_flex,
_ewm_kw + _pairwise_kw, _type_of_input_retval, _ewm_notes)
@Appender(_doc_template)
def ewmcorr(arg1, arg2=None, com=None, span=None, halflife=None, alpha=None,
min_periods=0, freq=None, pairwise=None, how=None, ignore_na=False,
adjust=True):
if arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise
elif isinstance(arg2, (int, float)) and com is None:
com = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise
return ensure_compat('ewm',
'corr',
arg1,
other=arg2,
com=com,
span=span,
halflife=halflife,
alpha=alpha,
min_periods=min_periods,
freq=freq,
how=how,
ignore_na=ignore_na,
adjust=adjust,
pairwise=pairwise,
func_kw=['other', 'pairwise'])
# ---------------------------------------------------------------------
# Python interface to Cython functions
def _rolling_func(name, desc, how=None, func_kw=None, additional_kw=''):
if how is None:
how_arg_str = 'None'
else:
        how_arg_str = "'{how}'".format(how=how)
@Substitution(desc, _unary_arg, _roll_kw % how_arg_str + additional_kw,
_type_of_input_retval, _roll_notes)
@Appender(_doc_template)
def f(arg, window, min_periods=None, freq=None, center=False,
**kwargs):
return ensure_compat('rolling',
name,
arg,
window=window,
min_periods=min_periods,
freq=freq,
center=center,
func_kw=func_kw,
**kwargs)
return f
rolling_max = _rolling_func('max', 'Moving maximum.', how='max')
rolling_min = _rolling_func('min', 'Moving minimum.', how='min')
rolling_sum = _rolling_func('sum', 'Moving sum.')
rolling_mean = _rolling_func('mean', 'Moving mean.')
rolling_median = _rolling_func('median', 'Moving median.', how='median')
rolling_std = _rolling_func('std', 'Moving standard deviation.',
func_kw=['ddof'],
additional_kw=_ddof_kw)
rolling_var = _rolling_func('var', 'Moving variance.',
func_kw=['ddof'],
additional_kw=_ddof_kw)
rolling_skew = _rolling_func('skew', 'Unbiased moving skewness.')
rolling_kurt = _rolling_func('kurt', 'Unbiased moving kurtosis.')
def rolling_quantile(arg, window, quantile, min_periods=None, freq=None,
center=False):
"""Moving quantile.
Parameters
----------
arg : Series, DataFrame
window : int
Size of the moving window. This is the number of observations used for
calculating the statistic.
quantile : float
0 <= quantile <= 1
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
center : boolean, default False
Whether the label should correspond with center of window
Returns
-------
y : type of input argument
Notes
-----
By default, the result is set to the right edge of the window. This can be
changed to the center of the window by setting ``center=True``.
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('rolling',
'quantile',
arg,
window=window,
freq=freq,
center=center,
min_periods=min_periods,
func_kw=['quantile'],
quantile=quantile)
def rolling_apply(arg, window, func, min_periods=None, freq=None,
center=False, args=(), kwargs={}):
"""Generic moving function application.
Parameters
----------
arg : Series, DataFrame
window : int
Size of the moving window. This is the number of observations used for
calculating the statistic.
func : function
Must produce a single value from an ndarray input
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
center : boolean, default False
Whether the label should correspond with center of window
args : tuple
Passed on to func
kwargs : dict
Passed on to func
Returns
-------
y : type of input argument
Notes
-----
By default, the result is set to the right edge of the window. This can be
changed to the center of the window by setting ``center=True``.
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('rolling',
'apply',
arg,
window=window,
freq=freq,
center=center,
min_periods=min_periods,
func_kw=['func', 'args', 'kwargs'],
func=func,
args=args,
kwargs=kwargs)
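# Illustrative sketch of rolling_apply with a custom aggregation (the data and
# the peak-to-peak statistic are made up; this helper is not part of the
# public pandas API):
def _rolling_apply_example():
    """Rolling peak-to-peak range over a 10-observation window."""
    s = Series(np.random.randn(250))
    # deprecated module-level spelling
    old_style = rolling_apply(s, 10, np.ptp, min_periods=5)
    # equivalent method-based spelling
    new_style = s.rolling(10, min_periods=5).apply(np.ptp)
    return old_style, new_style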
def rolling_window(arg, window=None, win_type=None, min_periods=None,
freq=None, center=False, mean=True,
axis=0, how=None, **kwargs):
"""
Applies a moving window of type ``window_type`` and size ``window``
on the data.
Parameters
----------
arg : Series, DataFrame
window : int or ndarray
Weighting window specification. If the window is an integer, then it is
treated as the window length and win_type is required
win_type : str, default None
Window type (see Notes)
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
center : boolean, default False
Whether the label should correspond with center of window
mean : boolean, default True
If True computes weighted mean, else weighted sum
axis : {0, 1}, default 0
how : string, default 'mean'
Method for down- or re-sampling
Returns
-------
y : type of input argument
Notes
-----
The recognized window types are:
* ``boxcar``
* ``triang``
* ``blackman``
* ``hamming``
* ``bartlett``
* ``parzen``
* ``bohman``
* ``blackmanharris``
* ``nuttall``
* ``barthann``
* ``kaiser`` (needs beta)
* ``gaussian`` (needs std)
* ``general_gaussian`` (needs power, width)
* ``slepian`` (needs width).
By default, the result is set to the right edge of the window. This can be
changed to the center of the window by setting ``center=True``.
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
func = 'mean' if mean else 'sum'
return ensure_compat('rolling',
func,
arg,
window=window,
win_type=win_type,
freq=freq,
center=center,
min_periods=min_periods,
axis=axis,
func_kw=kwargs.keys(),
**kwargs)
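# Illustrative sketch of a weighted-window call (not part of the public pandas
# API; requires scipy for the window functions).  Window types flagged in the
# docstring as needing extra parameters, e.g. ``gaussian`` needing ``std``,
# receive them as keyword arguments.
def _rolling_window_example():
    """Gaussian-weighted moving mean over a 15-observation window."""
    s = Series(np.random.randn(300))
    # deprecated module-level spelling
    old_style = rolling_window(s, window=15, win_type='gaussian', std=3)
    # equivalent method-based spelling
    new_style = s.rolling(window=15, win_type='gaussian').mean(std=3)
    return old_style, new_style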
def _expanding_func(name, desc, func_kw=None, additional_kw=''):
@Substitution(desc, _unary_arg, _expanding_kw + additional_kw,
_type_of_input_retval, "")
@Appender(_doc_template)
def f(arg, min_periods=1, freq=None, **kwargs):
return ensure_compat('expanding',
name,
arg,
min_periods=min_periods,
freq=freq,
func_kw=func_kw,
**kwargs)
return f
expanding_max = _expanding_func('max', 'Expanding maximum.')
expanding_min = _expanding_func('min', 'Expanding minimum.')
expanding_sum = _expanding_func('sum', 'Expanding sum.')
expanding_mean = _expanding_func('mean', 'Expanding mean.')
expanding_median = _expanding_func('median', 'Expanding median.')
expanding_std = _expanding_func('std', 'Expanding standard deviation.',
func_kw=['ddof'],
additional_kw=_ddof_kw)
expanding_var = _expanding_func('var', 'Expanding variance.',
func_kw=['ddof'],
additional_kw=_ddof_kw)
expanding_skew = _expanding_func('skew', 'Unbiased expanding skewness.')
expanding_kurt = _expanding_func('kurt', 'Unbiased expanding kurtosis.')
def expanding_count(arg, freq=None):
"""
Expanding count of number of non-NaN observations.
Parameters
----------
arg : DataFrame or numpy ndarray-like
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
Returns
-------
expanding_count : type of caller
Notes
-----
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('expanding', 'count', arg, freq=freq)
def expanding_quantile(arg, quantile, min_periods=1, freq=None):
"""Expanding quantile.
Parameters
----------
arg : Series, DataFrame
quantile : float
0 <= quantile <= 1
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
Returns
-------
y : type of input argument
Notes
-----
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('expanding',
'quantile',
arg,
freq=freq,
min_periods=min_periods,
func_kw=['quantile'],
quantile=quantile)
@Substitution("Unbiased expanding covariance.", _binary_arg_flex,
_expanding_kw + _pairwise_kw + _ddof_kw, _flex_retval, "")
@Appender(_doc_template)
def expanding_cov(arg1, arg2=None, min_periods=1, freq=None,
pairwise=None, ddof=1):
if arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise
elif isinstance(arg2, (int, float)) and min_periods is None:
min_periods = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise
return ensure_compat('expanding',
'cov',
arg1,
other=arg2,
min_periods=min_periods,
pairwise=pairwise,
freq=freq,
ddof=ddof,
func_kw=['other', 'pairwise', 'ddof'])
@Substitution("Expanding sample correlation.", _binary_arg_flex,
_expanding_kw + _pairwise_kw, _flex_retval, "")
@Appender(_doc_template)
def expanding_corr(arg1, arg2=None, min_periods=1, freq=None, pairwise=None):
if arg2 is None:
arg2 = arg1
pairwise = True if pairwise is None else pairwise
elif isinstance(arg2, (int, float)) and min_periods is None:
min_periods = arg2
arg2 = arg1
pairwise = True if pairwise is None else pairwise
return ensure_compat('expanding',
'corr',
arg1,
other=arg2,
min_periods=min_periods,
pairwise=pairwise,
freq=freq,
func_kw=['other', 'pairwise', 'ddof'])
def expanding_apply(arg, func, min_periods=1, freq=None,
args=(), kwargs={}):
"""Generic expanding function application.
Parameters
----------
arg : Series, DataFrame
func : function
Must produce a single value from an ndarray input
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA).
freq : string or DateOffset object, optional (default None)
Frequency to conform the data to before computing the
statistic. Specified as a frequency string or DateOffset object.
args : tuple
Passed on to func
kwargs : dict
Passed on to func
Returns
-------
y : type of input argument
Notes
-----
The `freq` keyword is used to conform time series data to a specified
frequency by resampling the data. This is done with the default parameters
of :meth:`~pandas.Series.resample` (i.e. using the `mean`).
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
return ensure_compat('expanding',
'apply',
arg,
freq=freq,
min_periods=min_periods,
func_kw=['func', 'args', 'kwargs'],
func=func,
args=args,
kwargs=kwargs)
| bsd-3-clause |
PythonProgramming/Monte-Carlo-Simulator | montecarlo7.py | 1 | 2718 |
import random
import matplotlib
import matplotlib.pyplot as plt
import time
from matplotlib import style
style.use("ggplot")
def rollDice():
roll = random.randint(1,100)
if roll == 100:
return False
elif roll <= 50:
return False
    elif 100 > roll > 50:
return True
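# Quick empirical check of the house edge (an illustrative helper, not part of
# the original tutorial): rollDice() only wins on rolls 51-99, so the true win
# probability is 49/100, which is the edge the bettor functions below fight.
def estimate_win_probability(trials=100000):
    """Monte Carlo estimate of the probability that rollDice() returns True."""
    wins = sum(1 for _ in range(trials) if rollDice())
    return wins / float(trials)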
def doubler_bettor(funds,initial_wager,wager_count):
global broke_count
value = funds
wager = initial_wager
wX = []
vY = []
currentWager = 1
previousWager = 'win'
previousWagerAmount = initial_wager
while currentWager <= wager_count:
if previousWager == 'win':
if rollDice():
value += wager
wX.append(currentWager)
vY.append(value)
else:
value -= wager
previousWager = 'loss'
previousWagerAmount = wager
wX.append(currentWager)
vY.append(value)
if value < 0:
broke_count += 1
currentWager += 10000000000000000
elif previousWager == 'loss':
if rollDice():
wager = previousWagerAmount * 2
value += wager
wager = initial_wager
previousWager = 'win'
wX.append(currentWager)
vY.append(value)
else:
wager = previousWagerAmount * 2
value -= wager
previousWager = 'loss'
previousWagerAmount = wager
wX.append(currentWager)
vY.append(value)
if value < 0:
currentWager += 10000000000000000
broke_count += 1
currentWager += 1
plt.plot(wX,vY)
def simple_bettor(funds,initial_wager,wager_count):
####
global broke_count
value = funds
wager = initial_wager
wX = []
vY = []
currentWager = 1
while currentWager <= wager_count:
if rollDice():
value += wager
wX.append(currentWager)
vY.append(value)
else:
value -= wager
wX.append(currentWager)
vY.append(value)
###add me
if value < 0:
currentWager += 10000000000000000
broke_count += 1
currentWager += 1
plt.plot(wX,vY)
x = 0
broke_count = 0
while x < 1000:
simple_bettor(10000,100,1000)
x+=1
print('death rate:', (broke_count / float(x)) * 100)
print('survival rate:', 100 - ((broke_count / float(x)) * 100))
plt.axhline(0, color = 'r')
plt.ylabel('Account Value')
plt.xlabel('Wager Count')
plt.show()
| mit |
jseabold/statsmodels | statsmodels/tsa/holtwinters/results.py | 4 | 26366 | import numpy as np
import pandas as pd
from scipy.special import inv_boxcox
from scipy.stats import (
_distn_infrastructure,
boxcox,
rv_continuous,
rv_discrete,
)
from statsmodels.base.data import PandasData
from statsmodels.base.model import Results
from statsmodels.base.wrapper import (
ResultsWrapper,
populate_wrapper,
union_dicts,
)
class HoltWintersResults(Results):
"""
Results from fitting Exponential Smoothing models.
Parameters
----------
model : ExponentialSmoothing instance
The fitted model instance.
params : dict
All the parameters for the Exponential Smoothing model.
sse : float
The sum of squared errors.
aic : float
The Akaike information criterion.
aicc : float
AIC with a correction for finite sample sizes.
bic : float
The Bayesian information criterion.
optimized : bool
Flag indicating whether the model parameters were optimized to fit
the data.
level : ndarray
An array of the levels values that make up the fitted values.
trend : ndarray
An array of the trend values that make up the fitted values.
season : ndarray
An array of the seasonal values that make up the fitted values.
params_formatted : pd.DataFrame
DataFrame containing all parameters, their short names and a flag
indicating whether the parameter's value was optimized to fit the data.
resid : ndarray
An array of the residuals of the fittedvalues and actual values.
k : int
The k parameter used to remove the bias in AIC, BIC etc.
fittedvalues : ndarray
An array of the fitted values. Fitted by the Exponential Smoothing
model.
fittedfcast : ndarray
An array of both the fitted values and forecast values.
fcastvalues : ndarray
An array of the forecast values forecast by the Exponential Smoothing
model.
mle_retvals : {None, scipy.optimize.optimize.OptimizeResult}
Optimization results if the parameters were optimized to fit the data.
"""
def __init__(
self,
model,
params,
sse,
aic,
aicc,
bic,
optimized,
level,
trend,
season,
params_formatted,
resid,
k,
fittedvalues,
fittedfcast,
fcastvalues,
mle_retvals=None,
):
self.data = model.data
super(HoltWintersResults, self).__init__(model, params)
self._model = model
self._sse = sse
self._aic = aic
self._aicc = aicc
self._bic = bic
self._optimized = optimized
self._level = level
self._trend = trend
self._season = season
self._params_formatted = params_formatted
self._fittedvalues = fittedvalues
self._fittedfcast = fittedfcast
self._fcastvalues = fcastvalues
self._resid = resid
self._k = k
self._mle_retvals = mle_retvals
@property
def aic(self):
"""
The Akaike information criterion.
"""
return self._aic
@property
def aicc(self):
"""
AIC with a correction for finite sample sizes.
"""
return self._aicc
@property
def bic(self):
"""
The Bayesian information criterion.
"""
return self._bic
@property
def sse(self):
"""
        The sum of squared errors between the data and the fitted values.
"""
return self._sse
@property
def model(self):
"""
The model used to produce the results instance.
"""
return self._model
@model.setter
def model(self, value):
self._model = value
@property
def level(self):
"""
An array of the levels values that make up the fitted values.
"""
return self._level
@property
def optimized(self):
"""
Flag indicating if model parameters were optimized to fit the data.
"""
return self._optimized
@property
def slope(self):
"""
An array of the slope values that make up the fitted values.
.. deprecated:: 0.12
Use the trend property instead.
"""
import warnings
warnings.warn(
"slope is deprecated and will be removed after 0.13", FutureWarning
)
return self._trend
@property
def trend(self):
"""
An array of the trend values that make up the fitted values.
"""
return self._trend
@property
def season(self):
"""
An array of the seasonal values that make up the fitted values.
"""
return self._season
@property
def params_formatted(self):
"""
DataFrame containing all parameters
Contains short names and a flag indicating whether the parameter's
value was optimized to fit the data.
"""
return self._params_formatted
@property
def fittedvalues(self):
"""
An array of the fitted values
"""
return self._fittedvalues
@property
def fittedfcast(self):
"""
An array of both the fitted values and forecast values.
"""
return self._fittedfcast
@property
def fcastvalues(self):
"""
An array of the forecast values
"""
return self._fcastvalues
@property
def resid(self):
"""
An array of the residuals of the fittedvalues and actual values.
"""
return self._resid
@property
def k(self):
"""
The k parameter used to remove the bias in AIC, BIC etc.
"""
return self._k
@property
def mle_retvals(self):
"""
Optimization results if the parameters were optimized to fit the data.
"""
return self._mle_retvals
@mle_retvals.setter
def mle_retvals(self, value):
self._mle_retvals = value
def predict(self, start=None, end=None):
"""
In-sample prediction and out-of-sample forecasting
Parameters
----------
start : int, str, or datetime, optional
            Zero-indexed observation number at which to start forecasting,
            i.e., the first forecast is start. Can also be a date string to
            parse or a datetime type. Default is the zeroth observation.
end : int, str, or datetime, optional
            Zero-indexed observation number at which to end forecasting, i.e.,
            the last forecast is end. Can also be a date string to
parse or a datetime type. However, if the dates index does not
have a fixed frequency, end must be an integer index if you
want out of sample prediction. Default is the last observation in
the sample.
Returns
-------
forecast : ndarray
Array of out of sample forecasts.
"""
return self.model.predict(self.params, start, end)
def forecast(self, steps=1):
"""
Out-of-sample forecasts
Parameters
----------
steps : int
The number of out of sample forecasts from the end of the
sample.
Returns
-------
forecast : ndarray
Array of out of sample forecasts
"""
try:
freq = getattr(self.model._index, "freq", 1)
if isinstance(freq, int):
start = self.model._index.shape[0]
end = start + steps - 1
else:
start = self.model._index[-1] + freq
end = self.model._index[-1] + steps * freq
return self.model.predict(self.params, start=start, end=end)
except (AttributeError, ValueError):
# May occur when the index does not have a freq
return self.model._predict(h=steps, **self.params).fcastvalues
def summary(self):
"""
Summarize the fitted Model
Returns
-------
smry : Summary instance
This holds the summary table and text, which can be printed or
converted to various output formats.
See Also
--------
statsmodels.iolib.summary.Summary
"""
from statsmodels.iolib.summary import Summary
from statsmodels.iolib.table import SimpleTable
model = self.model
title = model.__class__.__name__ + " Model Results"
dep_variable = "endog"
orig_endog = self.model.data.orig_endog
if isinstance(orig_endog, pd.DataFrame):
dep_variable = orig_endog.columns[0]
elif isinstance(orig_endog, pd.Series):
dep_variable = orig_endog.name
seasonal_periods = (
None
if self.model.seasonal is None
else self.model.seasonal_periods
)
lookup = {
"add": "Additive",
"additive": "Additive",
"mul": "Multiplicative",
"multiplicative": "Multiplicative",
None: "None",
}
transform = self.params["use_boxcox"]
box_cox_transform = True if transform else False
box_cox_coeff = (
transform if isinstance(transform, str) else self.params["lamda"]
)
if isinstance(box_cox_coeff, float):
box_cox_coeff = "{:>10.5f}".format(box_cox_coeff)
top_left = [
("Dep. Variable:", [dep_variable]),
("Model:", [model.__class__.__name__]),
("Optimized:", [str(np.any(self.optimized))]),
("Trend:", [lookup[self.model.trend]]),
("Seasonal:", [lookup[self.model.seasonal]]),
("Seasonal Periods:", [str(seasonal_periods)]),
("Box-Cox:", [str(box_cox_transform)]),
("Box-Cox Coeff.:", [str(box_cox_coeff)]),
]
top_right = [
("No. Observations:", [str(len(self.model.endog))]),
("SSE", ["{:5.3f}".format(self.sse)]),
("AIC", ["{:5.3f}".format(self.aic)]),
("BIC", ["{:5.3f}".format(self.bic)]),
("AICC", ["{:5.3f}".format(self.aicc)]),
("Date:", None),
("Time:", None),
]
smry = Summary()
smry.add_table_2cols(
self, gleft=top_left, gright=top_right, title=title
)
formatted = self.params_formatted # type: pd.DataFrame
def _fmt(x):
abs_x = np.abs(x)
scale = 1
if np.isnan(x):
return f"{str(x):>20}"
if abs_x != 0:
scale = int(np.log10(abs_x))
if scale > 4 or scale < -3:
return "{:>20.5g}".format(x)
dec = min(7 - scale, 7)
fmt = "{{:>20.{0}f}}".format(dec)
return fmt.format(x)
tab = []
for _, vals in formatted.iterrows():
tab.append(
[
_fmt(vals.iloc[1]),
"{0:>20}".format(vals.iloc[0]),
"{0:>20}".format(str(bool(vals.iloc[2]))),
]
)
params_table = SimpleTable(
tab,
headers=["coeff", "code", "optimized"],
title="",
stubs=list(formatted.index),
)
smry.tables.append(params_table)
return smry
def simulate(
self,
nsimulations,
anchor=None,
repetitions=1,
error="add",
random_errors=None,
random_state=None,
):
r"""
Random simulations using the state space formulation.
Parameters
----------
nsimulations : int
The number of simulation steps.
anchor : int, str, or datetime, optional
First period for simulation. The simulation will be conditional on
all existing datapoints prior to the `anchor`. Type depends on the
index of the given `endog` in the model. Two special cases are the
strings 'start' and 'end'. `start` refers to beginning the
simulation at the first period of the sample, and `end` refers to
beginning the simulation at the first period after the sample.
Integer values can run from 0 to `nobs`, or can be negative to
apply negative indexing. Finally, if a date/time index was provided
to the model, then this argument can be a date string to parse or a
datetime type. Default is 'end'.
repetitions : int, optional
Number of simulated paths to generate. Default is 1 simulated path.
error : {"add", "mul", "additive", "multiplicative"}, optional
Error model for state space formulation. Default is ``"add"``.
random_errors : optional
Specifies how the random errors should be obtained. Can be one of
the following:
* ``None``: Random normally distributed values with variance
estimated from the fit errors drawn from numpy's standard
RNG (can be seeded with the `random_state` argument). This is the
default option.
* A distribution function from ``scipy.stats``, e.g.
``scipy.stats.norm``: Fits the distribution function to the fit
errors and draws from the fitted distribution.
Note the difference between ``scipy.stats.norm`` and
              ``scipy.stats.norm()``: the latter is a frozen distribution
function.
* A frozen distribution function from ``scipy.stats``, e.g.
``scipy.stats.norm(scale=2)``: Draws from the frozen distribution
function.
* A ``np.ndarray`` with shape (`nsimulations`, `repetitions`): Uses
the given values as random errors.
* ``"bootstrap"``: Samples the random errors from the fit errors.
random_state : int or np.random.RandomState, optional
A seed for the random number generator or a
``np.random.RandomState`` object. Only used if `random_errors` is
``None``. Default is ``None``.
Returns
-------
sim : pd.Series, pd.DataFrame or np.ndarray
An ``np.ndarray``, ``pd.Series``, or ``pd.DataFrame`` of simulated
values.
If the original data was a ``pd.Series`` or ``pd.DataFrame``, `sim`
will be a ``pd.Series`` if `repetitions` is 1, and a
``pd.DataFrame`` of shape (`nsimulations`, `repetitions`) else.
Otherwise, if `repetitions` is 1, a ``np.ndarray`` of shape
(`nsimulations`,) is returned, and if `repetitions` is not 1 a
``np.ndarray`` of shape (`nsimulations`, `repetitions`) is
returned.
Notes
-----
The simulation is based on the state space model of the Holt-Winter's
methods. The state space model assumes that the true value at time
:math:`t` is randomly distributed around the prediction value.
If using the additive error model, this means:
.. math::
y_t &= \hat{y}_{t|t-1} + e_t\\
e_t &\sim \mathcal{N}(0, \sigma^2)
Using the multiplicative error model:
.. math::
y_t &= \hat{y}_{t|t-1} \cdot (1 + e_t)\\
e_t &\sim \mathcal{N}(0, \sigma^2)
Inserting these equations into the smoothing equation formulation leads
to the state space equations. The notation used here follows
[1]_.
Additionally,
.. math::
B_t &= b_{t-1} \circ_d \phi\\
L_t &= l_{t-1} \circ_b B_t\\
S_t &= s_{t-m}\\
Y_t &= L_t \circ_s S_t,
where :math:`\circ_d` is the operation linking trend and damping
parameter (multiplication if the trend is additive, power if the trend
is multiplicative), :math:`\circ_b` is the operation linking level and
trend (addition if the trend is additive, multiplication if the trend
is multiplicative), and :math:`\circ_s` is the operation linking
seasonality to the rest.
The state space equations can then be formulated as
.. math::
y_t &= Y_t + \eta \cdot e_t\\
l_t &= L_t + \alpha \cdot (M_e \cdot L_t + \kappa_l) \cdot e_t\\
b_t &= B_t + \beta \cdot (M_e \cdot B_t + \kappa_b) \cdot e_t\\
s_t &= S_t + \gamma \cdot (M_e \cdot S_t + \kappa_s) \cdot e_t\\
with
.. math::
\eta &= \begin{cases}
Y_t\quad\text{if error is multiplicative}\\
1\quad\text{else}
\end{cases}\\
M_e &= \begin{cases}
1\quad\text{if error is multiplicative}\\
0\quad\text{else}
\end{cases}\\
and, when using the additive error model,
.. math::
\kappa_l &= \begin{cases}
\frac{1}{S_t}\quad
\text{if seasonality is multiplicative}\\
1\quad\text{else}
\end{cases}\\
\kappa_b &= \begin{cases}
\frac{\kappa_l}{l_{t-1}}\quad
\text{if trend is multiplicative}\\
\kappa_l\quad\text{else}
\end{cases}\\
\kappa_s &= \begin{cases}
\frac{1}{L_t}\quad\text{if seasonality is
multiplicative}\\
1\quad\text{else}
\end{cases}
When using the multiplicative error model
.. math::
\kappa_l &= \begin{cases}
0\quad
\text{if seasonality is multiplicative}\\
S_t\quad\text{else}
\end{cases}\\
\kappa_b &= \begin{cases}
\frac{\kappa_l}{l_{t-1}}\quad
\text{if trend is multiplicative}\\
\kappa_l + l_{t-1}\quad\text{else}
\end{cases}\\
\kappa_s &= \begin{cases}
0\quad\text{if seasonality is multiplicative}\\
L_t\quad\text{else}
\end{cases}
References
----------
.. [1] Hyndman, R.J., & Athanasopoulos, G. (2018) *Forecasting:
principles and practice*, 2nd edition, OTexts: Melbourne,
Australia. OTexts.com/fpp2. Accessed on February 28th 2020.
"""
# check inputs
if error in ["additive", "multiplicative"]:
error = {"additive": "add", "multiplicative": "mul"}[error]
if error not in ["add", "mul"]:
raise ValueError("error must be 'add' or 'mul'!")
# Get the starting location
if anchor is None or anchor == "end":
start_idx = self.model.nobs
elif anchor == "start":
start_idx = 0
else:
start_idx, _, _ = self.model._get_index_loc(anchor)
if isinstance(start_idx, slice):
start_idx = start_idx.start
if start_idx < 0:
start_idx += self.model.nobs
if start_idx > self.model.nobs:
raise ValueError("Cannot anchor simulation outside of the sample.")
# get Holt-Winters settings and parameters
trend = self.model.trend
damped = self.model.damped_trend
seasonal = self.model.seasonal
use_boxcox = self.params["use_boxcox"]
lamda = self.params["lamda"]
alpha = self.params["smoothing_level"]
beta = self.params["smoothing_trend"]
gamma = self.params["smoothing_seasonal"]
phi = self.params["damping_trend"]
# if model has no seasonal component, use 1 as period length
m = max(self.model.seasonal_periods, 1)
n_params = (
2
+ 2 * self.model.has_trend
+ (m + 1) * self.model.has_seasonal
+ damped
)
mul_seasonal = seasonal == "mul"
mul_trend = trend == "mul"
mul_error = error == "mul"
# define trend, damping and seasonality operations
if mul_trend:
op_b = np.multiply
op_d = np.power
neutral_b = 1
else:
op_b = np.add
op_d = np.multiply
neutral_b = 0
if mul_seasonal:
op_s = np.multiply
neutral_s = 1
else:
op_s = np.add
neutral_s = 0
# set initial values
level = self.level
_trend = self.trend
season = self.season
# (notation as in https://otexts.com/fpp2/ets.html)
y = np.empty((nsimulations, repetitions))
# lvl instead of l because of E741
lvl = np.empty((nsimulations + 1, repetitions))
b = np.empty((nsimulations + 1, repetitions))
s = np.empty((nsimulations + m, repetitions))
# the following uses python's index wrapping
if start_idx == 0:
lvl[-1, :] = self.params["initial_level"]
b[-1, :] = self.params["initial_trend"]
else:
lvl[-1, :] = level[start_idx - 1]
b[-1, :] = _trend[start_idx - 1]
if 0 <= start_idx and start_idx <= m:
initial_seasons = self.params["initial_seasons"]
_s = np.concatenate(
(initial_seasons[start_idx:], season[:start_idx])
)
s[-m:, :] = np.tile(_s, (repetitions, 1)).T
else:
s[-m:, :] = np.tile(
season[start_idx - m : start_idx], (repetitions, 1)
).T
# set neutral values for unused features
if trend is None:
b[:, :] = neutral_b
phi = 1
beta = 0
if seasonal is None:
s[:, :] = neutral_s
gamma = 0
if not damped:
phi = 1
# calculate residuals for error covariance estimation
if use_boxcox:
fitted = boxcox(self.fittedvalues, lamda)
else:
fitted = self.fittedvalues
if error == "add":
resid = self.model._y - fitted
else:
resid = (self.model._y - fitted) / fitted
sigma = np.sqrt(np.sum(resid ** 2) / (len(resid) - n_params))
# get random error eps
if isinstance(random_errors, np.ndarray):
if random_errors.shape != (nsimulations, repetitions):
raise ValueError(
"If random_errors is an ndarray, it must have shape "
"(nsimulations, repetitions)"
)
eps = random_errors
elif random_errors == "bootstrap":
eps = np.random.choice(
resid, size=(nsimulations, repetitions), replace=True
)
elif random_errors is None:
if random_state is None:
eps = np.random.randn(nsimulations, repetitions) * sigma
elif isinstance(random_state, int):
rng = np.random.RandomState(random_state)
eps = rng.randn(nsimulations, repetitions) * sigma
elif isinstance(random_state, np.random.RandomState):
eps = random_state.randn(nsimulations, repetitions) * sigma
else:
raise ValueError(
"Argument random_state must be None, an integer, "
"or an instance of np.random.RandomState"
)
elif isinstance(random_errors, (rv_continuous, rv_discrete)):
params = random_errors.fit(resid)
eps = random_errors.rvs(*params, size=(nsimulations, repetitions))
elif isinstance(random_errors, _distn_infrastructure.rv_frozen):
eps = random_errors.rvs(size=(nsimulations, repetitions))
else:
raise ValueError("Argument random_errors has unexpected value!")
for t in range(nsimulations):
b0 = op_d(b[t - 1, :], phi)
l0 = op_b(lvl[t - 1, :], b0)
s0 = s[t - m, :]
y0 = op_s(l0, s0)
if error == "add":
eta = 1
kappa_l = 1 / s0 if mul_seasonal else 1
kappa_b = kappa_l / lvl[t - 1, :] if mul_trend else kappa_l
kappa_s = 1 / l0 if mul_seasonal else 1
else:
eta = y0
kappa_l = 0 if mul_seasonal else s0
kappa_b = (
kappa_l / lvl[t - 1, :]
if mul_trend
else kappa_l + lvl[t - 1, :]
)
kappa_s = 0 if mul_seasonal else l0
y[t, :] = y0 + eta * eps[t, :]
lvl[t, :] = l0 + alpha * (mul_error * l0 + kappa_l) * eps[t, :]
b[t, :] = b0 + beta * (mul_error * b0 + kappa_b) * eps[t, :]
s[t, :] = s0 + gamma * (mul_error * s0 + kappa_s) * eps[t, :]
if use_boxcox:
y = inv_boxcox(y, lamda)
sim = np.atleast_1d(np.squeeze(y))
if y.shape[0] == 1 and y.size > 1:
sim = sim[None, :]
# Wrap data / squeeze where appropriate
if not isinstance(self.model.data, PandasData):
return sim
_, _, _, index = self.model._get_prediction_index(
start_idx, start_idx + nsimulations - 1
)
if repetitions == 1:
sim = pd.Series(sim, index=index, name=self.model.endog_names)
else:
sim = pd.DataFrame(sim, index=index)
return sim
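# Minimal usage sketch for the results class above (illustrative only: the
# seasonal series is synthetic and the model settings are assumptions, not
# recommendations).  Imports are done lazily to avoid circular imports at
# module load time.
def _holt_winters_example(nsimulations=24, repetitions=100):
    """Fit an additive Holt-Winters model and simulate future sample paths."""
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    index = pd.date_range("2000-01-01", periods=120, freq="M")
    t = np.arange(120)
    y = pd.Series(
        10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12)
        + 0.3 * np.random.randn(120),
        index=index,
    )
    res = ExponentialSmoothing(
        y, trend="add", seasonal="add", seasonal_periods=12
    ).fit()
    point_forecast = res.forecast(nsimulations)
    simulated_paths = res.simulate(nsimulations, repetitions=repetitions)
    return point_forecast, simulated_paths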
class HoltWintersResultsWrapper(ResultsWrapper):
_attrs = {
"fittedvalues": "rows",
"level": "rows",
"resid": "rows",
"season": "rows",
"trend": "rows",
"slope": "rows",
}
_wrap_attrs = union_dicts(ResultsWrapper._wrap_attrs, _attrs)
_methods = {"predict": "dates", "forecast": "dates"}
_wrap_methods = union_dicts(ResultsWrapper._wrap_methods, _methods)
populate_wrapper(HoltWintersResultsWrapper, HoltWintersResults)
| bsd-3-clause |
quheng/scikit-learn | examples/svm/plot_iris.py | 225 | 3252 | """
==================================================
Plot different SVM classifiers in the iris dataset
==================================================
Comparison of different linear SVM classifiers on a 2D projection of the iris
dataset. We only consider the first 2 features of this dataset:
- Sepal length
- Sepal width
This example shows how to plot the decision surface for four SVM classifiers
with different kernels.
The linear models ``LinearSVC()`` and ``SVC(kernel='linear')`` yield slightly
different decision boundaries. This can be a consequence of the following
differences:
- ``LinearSVC`` minimizes the squared hinge loss while ``SVC`` minimizes the
regular hinge loss.
- ``LinearSVC`` uses the One-vs-All (also known as One-vs-Rest) multiclass
reduction while ``SVC`` uses the One-vs-One multiclass reduction.
Both linear models have linear decision boundaries (intersecting hyperplanes)
while the non-linear kernel models (polynomial or Gaussian RBF) have more
flexible non-linear decision boundaries with shapes that depend on the kind of
kernel and its parameters.
.. NOTE:: while plotting the decision function of classifiers for toy 2D
datasets can help get an intuitive understanding of their respective
expressive power, be aware that those intuitions don't always generalize to
more realistic high-dimensional problems.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# we create an instance of SVM and fit out data. We do not scale our
# data since we want to plot the support vectors
C = 1.0 # SVM regularization parameter
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)
lin_svc = svm.LinearSVC(C=C).fit(X, y)
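# Additional illustration (not part of the original example): mean accuracy on
# the training data gives a rough single-number comparison of the four
# classifiers before their decision surfaces are plotted below.
for name, clf in (('SVC (linear kernel)', svc),
                  ('LinearSVC', lin_svc),
                  ('SVC (RBF kernel)', rbf_svc),
                  ('SVC (poly kernel)', poly_svc)):
    print("%s: training accuracy %.3f" % (name, clf.score(X, y)))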
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# title for the plots
titles = ['SVC with linear kernel',
'LinearSVC (linear kernel)',
'SVC with RBF kernel',
'SVC with polynomial (degree 3) kernel']
for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
# Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
plt.subplot(2, 2, i + 1)
plt.subplots_adjust(wspace=0.4, hspace=0.4)
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired)
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(titles[i])
plt.show()
| bsd-3-clause |
MathYourLife/sandbox | python/novelty_viz/novelty_detector.py | 1 | 3992 | import numpy as np
import matplotlib.pyplot as plt
import time
from sklearn import svm
plt.ion()
DURATION = 180
def generator():
z = np.random.normal
start = time.time()
center = (2, 2)
while start + DURATION > time.time():
mode = int(time.time() / 15) % 4
x = z() * 0.2 + center[0]
y = z() * 0.2 + center[1]
if mode in (2, 3):
x = -x
if mode in (1, 2):
y = -y
yield (x, y)
time.sleep(0.01)
class Novelty(object):
def __init__(self, variable_count, train_depth=20, threshold=0.3):
self.stage = 0
self.variable_count = variable_count
self.train_depth = train_depth
self.clear()
self.idx = 0
self.clf = None
self.threshold = threshold
self.rate = None
def clear(self):
self.train_data = np.array([np.nan] * self.train_depth * self.variable_count)
self.train_data.shape = (self.train_depth, self.variable_count)
self.normal = np.array([np.nan] * self.train_depth, dtype=np.bool)
def new(self, data):
self.idx += 1
self.idx %= self.train_depth
if self.stage == 0:
self.gather_training(data)
if self.stage == 1:
self.train()
raise NewModel()
if self.stage >= 2:
self.predict(data)
if self.stage == 2:
self.stage += 1
if self.stage == 3:
self.rate = float(np.sum(self.normal)) / self.train_depth
if self.rate < self.threshold:
print("Relearning: hit rate dropped to %s" % self.rate)
self.stage = 0
self.clear()
raise InvalidModel()
def gather_training(self, data):
self.train_data[self.idx] = data
if np.sum(np.isnan(self.train_data)) == 0:
self.stage += 1
def train(self):
# fit the model
self.clf = svm.OneClassSVM(nu=0.1, kernel="rbf", gamma=0.1)
self.clf.fit(self.train_data)
self.stage += 1
def predict(self, data):
v = self.clf.predict(data)
self.normal[self.idx] = v > 0
class NoveltyException(Exception):
pass
class InvalidModel(Exception):
pass
class NewModel(Exception):
pass
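# Minimal sketch of how the Novelty detector is driven (illustrative only; the
# main() function below is the real entry point, this variant just exercises
# the stage/exception protocol without any plotting):
def run_headless(sample_limit=500):
    """Feed generator() samples into Novelty and count forced re-trainings."""
    detector = Novelty(variable_count=2)
    rebuilds = 0
    for i, sample in enumerate(generator()):
        if i >= sample_limit:
            break
        try:
            detector.new(sample)
        except NewModel:
            # a fresh one-class SVM has just been fitted to the training window
            pass
        except InvalidModel:
            # hit rate fell below the threshold; training data is re-gathered
            rebuilds += 1
    return rebuilds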
def update_title(ax, msg):
    if msg != ax.get_title():
        print(msg)
ax.set_title(msg)
def main():
x = []
y = []
lim = 4
xx, yy = np.meshgrid(np.linspace(-lim, lim, 500), np.linspace(-lim, lim, 500))
boundary = None
n = Novelty(variable_count=2)
fig, ax = plt.subplots(1,1)
a, = ax.plot(x, y, "x", color="blue")
ax.set_xlim([-lim, lim])
ax.set_ylim([-lim, lim])
rate = plt.text(0, 0, s='')
for g in generator():
try:
n.new(g)
if n.stage == 0:
update_title(ax, "Gathering Data for a Model")
elif n.stage >= 2:
update_title(ax, "Tracking Expected Behaviour")
except NewModel:
update_title(ax, "New Behaviour Model Created")
Z = n.clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
if boundary:
ax.clear()
a, = ax.plot(x, y, "x", color="blue")
rate = plt.text(0, 0, s='')
boundary = ax.contour(xx, yy, Z, levels=[0, Z.max()], colors='red')
except InvalidModel:
x = []
y = []
ax.clear()
rate = plt.text(0, 0, s='')
update_title(ax, "Behaviour No Longer Matches the Model")
a, = ax.plot(x, y, "x", color="blue")
try:
msg = "Hit Rate: %s%%" % (n.rate * 100)
except TypeError:
msg = "N/A"
rate.set_text(msg)
x.append(g[0])
y.append(g[1])
a.set_data(x, y)
plt.draw()
plt.pause(0.0001)
if __name__ == '__main__':
main()
| mit |
monash-merc/karaage | setup.py | 2 | 3866 | #!/usr/bin/env python
# Copyright 2010, 2014-2015 VPAC
# Copyright 2010, 2014 The University of Melbourne
#
# This file is part of Karaage.
#
# Karaage is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Karaage is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Karaage If not, see <http://www.gnu.org/licenses/>.
"""Karaage setup script."""
from setuptools import setup
import os
def fullsplit(path, result=None):
"""
Split a pathname into components (the opposite of os.path.join) in a
platform-neutral way.
"""
if result is None:
result = []
head, tail = os.path.split(path)
if head == '':
return [tail] + result
if head == path:
return result
return fullsplit(head, [tail] + result)
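# Illustration (hypothetical input, kept as a comment so that running setup.py
# is unaffected): fullsplit(os.path.join('karaage', 'people', 'templates'))
# returns ['karaage', 'people', 'templates'].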
dirs = ['karaage', ]
packages = []
for d in dirs:
for dirpath, dirnames, filenames in os.walk(d):
# Ignore dirnames that start with '.'
for i, dirname in enumerate(dirnames):
if dirname.startswith('.'):
del dirnames[i]
if filenames:
packages.append('.'.join(fullsplit(dirpath)))
tests_require = [
"django-extensions",
"factory_boy",
"mock",
"cracklib",
]
setup(
name="karaage",
version=open('VERSION.txt', 'r').readline().strip(),
url='https://github.com/Karaage-Cluster/karaage',
author='Brian May',
author_email='[email protected]',
    description='Collection of Django apps to manage clusters',
packages=packages,
license="GPL3+",
long_description=open('README.rst').read(),
classifiers=[
"Development Status :: 5 - Production/Stable",
"Framework :: Django",
"Intended Audience :: Developers",
"Intended Audience :: System Administrators",
"License :: OSI Approved :: GNU General Public "
"License v3 or later (GPLv3+)",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2",
"Programming Language :: Python :: 3",
"Topic :: Software Development :: Libraries :: Python Modules",
],
keywords="karaage cluster user administration",
package_data={
'': ['*.css', '*.html', '*.js', '*.png', '*.gif', '*.map', '*.txt',
'*.json'],
},
scripts=[
'sbin/kg_set_secret_key',
'sbin/kg-manage',
'sbin/kg-migrate-south',
],
data_files=[
('/etc/karaage3',
['conf/settings.py', 'conf/karaage.wsgi']),
('/etc/apache2/conf-available',
['conf/karaage3-wsgi.conf']),
],
install_requires=[
"cssmin",
"Django >= 1.7",
"python-alogger >= 2.0",
"django-xmlrpc >= 0.1",
"django-simple-captcha",
"django-ajax-selects >= 1.1.3",
"django_jsonfield >= 0.9.12",
"django-model-utils >= 2.0.0",
"python-tldap >= 0.3.3",
"django-pipeline",
"django-tables2",
"django-filter",
"six",
"slimit>=0.8.1",
],
tests_require=tests_require,
extras_require={
'tests': tests_require,
'applications': [
# no dependencies for kgapplications
],
'software': [
"karaage4[applications]",
"karaage4[usage]",
],
'usage': [
"karaage4[software]",
"django_celery",
"django-filter",
"matplotlib",
],
},
)
| gpl-3.0 |
M4573R/discrete-optimization-coursera-1 | coloring/solver.py | 2 | 1335 | #!/usr/bin/python
# -*- coding: utf-8 -*-
import os
from subprocess import Popen, PIPE
import networkx as nx
import matplotlib.pyplot as plt
def solve_it(input_data):
    # Writes the input data to a temporary file
tmp_file_name = 'tmp.data'
tmp_file = open(tmp_file_name, 'w')
tmp_file.write(input_data)
tmp_file.close()
# Runs the command: java Solver -file=tmp.data
tmp_file = open(tmp_file_name, 'r')
process = Popen(['./Solver'], stdin=tmp_file, stdout=PIPE)
(stdout, stderr) = process.communicate()
# print stdout
    # removes the temporary file
os.remove(tmp_file_name)
# G = nx.Graph()
# lines = input_data.split('\n')
# [N,E] = lines[0].split()
# G.add_nodes_from(range(int(N)))
# for l in lines[1:-1]:
# a = l.split(' ')
# G.add_edge(int(a[0]),int(a[1]))
# nx.draw(G)
# plt.show()
return stdout.strip()
import sys
if __name__ == '__main__':
if len(sys.argv) > 1:
file_location = sys.argv[1].strip()
input_data_file = open(file_location, 'r')
input_data = ''.join(input_data_file.readlines())
input_data_file.close()
print solve_it(input_data)
else:
print 'This test requires an input file. Please select one from the data directory. (i.e. python solver.py ./data/gc_4_1)'
| mit |
nipunbatra/bayespy | bayespy/demos/lssm.py | 2 | 11413 | ######################################################################
# Copyright (C) 2013-2014 Jaakko Luttinen
#
# This file is licensed under Version 3.0 of the GNU General Public
# License. See LICENSE for a text of the license.
######################################################################
######################################################################
# This file is part of BayesPy.
#
# BayesPy is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# BayesPy is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with BayesPy. If not, see <http://www.gnu.org/licenses/>.
######################################################################
"""
Demonstrate linear Gaussian state-space model.
Some of the functions in this module are re-usable:
* ``model`` can be used to construct the classical linear state-space model.
* ``infer`` can be used to apply linear state-space model to given data.
"""
import numpy as np
import scipy
import matplotlib.pyplot as plt
from bayespy.nodes import GaussianMarkovChain
from bayespy.nodes import Gaussian, GaussianARD
from bayespy.nodes import Gamma
from bayespy.nodes import SumMultiply
from bayespy.inference.vmp.nodes.gamma import diagonal
from bayespy.utils import random
from bayespy.inference.vmp.vmp import VB
from bayespy.inference.vmp import transformations
import bayespy.plot.plotting as bpplt
def model(M=10, N=100, D=3):
"""
Construct linear state-space model.
See, for instance, the following publication:
"Fast variational Bayesian linear state-space model"
Luttinen (ECML 2013)
"""
# Dynamics matrix with ARD
alpha = Gamma(1e-5,
1e-5,
plates=(D,),
name='alpha')
A = GaussianARD(0,
alpha,
shape=(D,),
plates=(D,),
plotter=bpplt.GaussianHintonPlotter(rows=0,
cols=1,
scale=0),
name='A')
A.initialize_from_value(np.identity(D))
# Latent states with dynamics
X = GaussianMarkovChain(np.zeros(D), # mean of x0
1e-3*np.identity(D), # prec of x0
A, # dynamics
np.ones(D), # innovation
n=N, # time instances
plotter=bpplt.GaussianMarkovChainPlotter(scale=2),
name='X')
X.initialize_from_value(np.random.randn(N,D))
# Mixing matrix from latent space to observation space using ARD
gamma = Gamma(1e-5,
1e-5,
plates=(D,),
name='gamma')
gamma.initialize_from_value(1e-2*np.ones(D))
C = GaussianARD(0,
gamma,
shape=(D,),
plates=(M,1),
plotter=bpplt.GaussianHintonPlotter(rows=0,
cols=2,
scale=0),
name='C')
C.initialize_from_value(np.random.randn(M,1,D))
# Observation noise
tau = Gamma(1e-5,
1e-5,
name='tau')
tau.initialize_from_value(1e2)
# Underlying noiseless function
F = SumMultiply('i,i',
C,
X,
name='F')
# Noisy observations
Y = GaussianARD(F,
tau,
name='Y')
Q = VB(Y, F, C, gamma, X, A, alpha, tau, C)
return Q
def infer(y, D,
mask=True,
maxiter=100,
rotate=True,
debug=False,
precompute=False,
update_hyper=0,
start_rotating=0,
plot_C=True,
monitor=True,
autosave=None):
"""
Apply linear state-space model for the given data.
"""
(M, N) = np.shape(y)
# Construct the model
Q = model(M, N, D)
if not plot_C:
Q['C'].set_plotter(None)
if autosave is not None:
Q.set_autosave(autosave, iterations=10)
# Observe data
Q['Y'].observe(y, mask=mask)
# Set up rotation speed-up
if rotate:
# Initial rotate the D-dimensional state space (X, A, C)
# Does not update hyperparameters
rotA_init = transformations.RotateGaussianARD(Q['A'],
axis=0,
precompute=precompute)
rotX_init = transformations.RotateGaussianMarkovChain(Q['X'],
rotA_init)
rotC_init = transformations.RotateGaussianARD(Q['C'],
axis=0,
precompute=precompute)
R_X_init = transformations.RotationOptimizer(rotX_init, rotC_init, D)
# Rotate the D-dimensional state space (X, A, C)
rotA = transformations.RotateGaussianARD(Q['A'],
Q['alpha'],
axis=0,
precompute=precompute)
rotX = transformations.RotateGaussianMarkovChain(Q['X'],
rotA)
rotC = transformations.RotateGaussianARD(Q['C'],
Q['gamma'],
axis=0,
precompute=precompute)
R_X = transformations.RotationOptimizer(rotX, rotC, D)
# Keyword arguments for the rotation
if debug:
rotate_kwargs = {'maxiter': 10,
'check_bound': True,
'check_gradient': True}
else:
rotate_kwargs = {'maxiter': 10}
# Plot initial distributions
if monitor:
Q.plot()
# Run inference using rotations
for ind in range(maxiter):
if ind < update_hyper:
# It might be a good idea to learn the lower level nodes a bit
# before starting to learn the upper level nodes.
Q.update('X', 'C', 'A', 'tau', plot=monitor)
if rotate and ind >= start_rotating:
# Use the rotation which does not update alpha nor beta
R_X_init.rotate(**rotate_kwargs)
else:
Q.update(plot=monitor)
if rotate and ind >= start_rotating:
# It might be a good idea to not rotate immediately because it
# might lead to pruning out components too efficiently before
# even estimating them roughly
R_X.rotate(**rotate_kwargs)
# Return the posterior approximation
return Q
def simulate_data(M, N):
"""
Generate a dataset using linear state-space model.
The process has two latent oscillation components and one random walk
component.
"""
# Simulate some data
D = 3
c = np.random.randn(M, D)
w = 0.3
a = np.array([[np.cos(w), -np.sin(w), 0],
[np.sin(w), np.cos(w), 0],
[0, 0, 1]])
x = np.empty((N,D))
f = np.empty((M,N))
y = np.empty((M,N))
x[0] = 10*np.random.randn(D)
f[:,0] = np.dot(c,x[0])
y[:,0] = f[:,0] + 3*np.random.randn(M)
for n in range(N-1):
x[n+1] = np.dot(a,x[n]) + np.random.randn(D)
f[:,n+1] = np.dot(c,x[n+1])
y[:,n+1] = f[:,n+1] + 3*np.random.randn(M)
return (y, f)
def demo(M=6, N=200, D=3, maxiter=100, debug=False, seed=42, rotate=True,
precompute=False, plot=True, monitor=True):
"""
Run the demo for linear state-space model.
"""
# Use deterministic random numbers
if seed is not None:
np.random.seed(seed)
# Get data
(y, f) = simulate_data(M, N)
# Add missing values randomly
mask = random.mask(M, N, p=0.3)
# Add missing values to a period of time
mask[:,30:80] = False
y[~mask] = np.nan # BayesPy doesn't require this. Just for plotting.
# Run inference
Q = infer(y, D,
mask=mask,
rotate=rotate,
debug=debug,
monitor=monitor,
maxiter=maxiter)
if plot:
# Show results
plt.figure()
bpplt.timeseries_normal(Q['F'], scale=2)
bpplt.timeseries(f, 'b-')
bpplt.timeseries(y, 'r.')
plt.show()
if __name__ == '__main__':
import sys, getopt, os
try:
opts, args = getopt.getopt(sys.argv[1:],
"",
["m=",
"n=",
"d=",
"seed=",
"maxiter=",
"debug",
"precompute",
"no-plot",
"no-monitor",
"no-rotation"])
except getopt.GetoptError:
print('python lssm.py <options>')
print('--m=<INT> Dimensionality of data vectors')
print('--n=<INT> Number of data vectors')
print('--d=<INT> Dimensionality of the latent vectors in the model')
print('--no-rotation Do not apply speed-up rotations')
print('--maxiter=<INT> Maximum number of VB iterations')
print('--seed=<INT> Seed (integer) for the random number generator')
print('--debug Check that the rotations are implemented correctly')
print('--no-plot Do not plot the results')
print('--no-monitor Do not plot distributions during learning')
print('--precompute Precompute some moments when rotating. May '
'speed up or slow down.')
sys.exit(2)
kwargs = {}
for opt, arg in opts:
if opt == "--no-rotation":
kwargs["rotate"] = False
elif opt == "--maxiter":
kwargs["maxiter"] = int(arg)
elif opt == "--debug":
kwargs["debug"] = True
elif opt == "--precompute":
kwargs["precompute"] = True
elif opt == "--seed":
kwargs["seed"] = int(arg)
elif opt in ("--m",):
kwargs["M"] = int(arg)
elif opt in ("--n",):
kwargs["N"] = int(arg)
elif opt in ("--d",):
kwargs["D"] = int(arg)
elif opt in ("--no-plot"):
kwargs["plot"] = False
elif opt in ("--no-monitor"):
kwargs["monitor"] = False
else:
raise ValueError("Unhandled option given")
demo(**kwargs)
| gpl-3.0 |
jepegit/cellpy | cellpy/readers/instruments/biologics_mpr.py | 1 | 22645 | """This file contains methods for importing Bio-Logic mpr-type files"""
# This is based on the work by Chris Kerr
# (https://github.com/chatcannon/galvani/blob/master/galvani/BioLogic.py)
import os
import tempfile
import shutil
import logging
import warnings
import time
from collections import OrderedDict
import datetime
import pandas as pd
import numpy as np
from cellpy.readers.instruments.biologic_file_format import (
bl_dtypes,
hdr_dtype,
mpr_label,
bl_log_pos_dtype,
bl_flags,
)
from cellpy.readers.core import FileID, Cell, check64bit, humanize_bytes
from cellpy.parameters.internal_settings import get_headers_normal
from cellpy.readers.instruments.mixin import Loader
from cellpy.parameters import prms
OLE_TIME_ZERO = datetime.datetime(1899, 12, 30, 0, 0, 0)
SEEK_SET = 0 # from start
SEEK_CUR = 1 # from current position
SEEK_END = 2 # from end of file
def ole2datetime(oledt):
"""converts from ole datetime float to datetime"""
return OLE_TIME_ZERO + datetime.timedelta(days=float(oledt))
def datetime2ole(dt):
"""converts from datetime object to ole datetime float"""
delta = dt - OLE_TIME_ZERO
delta_float = delta / datetime.timedelta(days=1) # trick from SO
return delta_float
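# Round-trip illustration (values chosen for the example): OLE serial 43831.5
# means 43831.5 days after 1899-12-30, i.e. 2020-01-01 12:00:
#     >>> ole2datetime(43831.5)
#     datetime.datetime(2020, 1, 1, 12, 0)
#     >>> datetime2ole(datetime.datetime(2020, 1, 1, 12, 0))
#     43831.5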
# Columns to include when only a minimal selection of columns is requested
MINIMUM_SELECTION = [
"Data_Point",
"Test_Time",
"Step_Time",
"DateTime",
"Step_Index",
"Cycle_Index",
"Current",
"Voltage",
"Charge_Capacity",
"Discharge_Capacity",
"Internal_Resistance",
]
def _read_modules(fileobj):
module_magic = fileobj.read(len(b"MODULE"))
hdr_bytes = fileobj.read(hdr_dtype.itemsize)
    hdr = np.frombuffer(hdr_bytes, dtype=hdr_dtype, count=1)
hdr_dict = dict(((n, hdr[n][0]) for n in hdr_dtype.names))
hdr_dict["offset"] = fileobj.tell()
hdr_dict["data"] = fileobj.read(hdr_dict["length"])
fileobj.seek(hdr_dict["offset"] + hdr_dict["length"], SEEK_SET)
hdr_dict["end"] = fileobj.tell()
return hdr_dict
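# Layout implied by `_read_modules` above: each module in the .mpr file is
# stored as b"MODULE" + a fixed-size header (hdr_dtype) + `length` bytes of
# payload; the returned dict carries the header fields, the payload and its
# file offsets.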
class MprLoader(Loader):
""" Class for loading biologics-data from mpr-files."""
# Note: the class is sub-classing Loader. At the moment, Loader does
# not really contain anything...
def __init__(self):
self.logger = logging.getLogger(__name__)
self.headers_normal = get_headers_normal()
self.current_chunk = 0 # use this to set chunks to load
self.mpr_data = None
self.mpr_log = None
self.mpr_settings = None
self.cellpy_headers = get_headers_normal()
@staticmethod
def get_raw_units():
"""Include the settings for the units used by the instrument.
The units are defined w.r.t. the SI units ('unit-fractions';
currently only units that are multiples of
        SI units can be used). For example, for current defined in mA,
the value for the
current unit-fraction will be 0.001.
Returns: dictionary containing the unit-fractions for current, charge,
and mass
"""
raw_units = dict()
raw_units["current"] = 1.0 # A
raw_units["charge"] = 1.0 # Ah
raw_units["mass"] = 0.001 # g
return raw_units
@staticmethod
def get_raw_limits():
"""Include the settings for how to decide what kind of
step you are examining here.
The raw limits are 'epsilons' used to check if the current
and/or voltage is stable (for example
for galvanostatic steps, one would expect that the current
is stable (constant) and non-zero).
It is expected that different instruments (with different
resolution etc.) have different
'epsilons'.
Returns: the raw limits (dict)
"""
raw_limits = dict()
raw_limits["current_hard"] = 0.0000000000001
raw_limits["current_soft"] = 0.00001
raw_limits["stable_current_hard"] = 2.0
raw_limits["stable_current_soft"] = 4.0
raw_limits["stable_voltage_hard"] = 2.0
raw_limits["stable_voltage_soft"] = 4.0
raw_limits["stable_charge_hard"] = 2.0
raw_limits["stable_charge_soft"] = 5.0
raw_limits["ir_change"] = 0.00001
return raw_limits
def load(self, file_name):
"""Load a raw data-file
Args:
file_name (path)
Returns:
loaded test
"""
raw_file_loader = self.loader
new_rundata = raw_file_loader(file_name)
new_rundata = self.inspect(new_rundata)
return new_rundata
def inspect(self, run_data):
"""inspect the file.
"""
return run_data
def repair(self, file_name):
"""try to repair a broken/corrupted file"""
raise NotImplementedError
def dump(self, file_name, path):
"""Dumps the raw file to an intermediate hdf5 file.
This method can be used if the raw file is too difficult to load and it
is likely that it is more efficient to convert it to an hdf5 format
and then load it using the `from_intermediate_file` function.
Args:
file_name: name of the raw file
path: path to where to store the intermediate hdf5 file (optional)
Returns:
full path to stored intermediate hdf5 file
information about the raw file (needed by the
`from_intermediate_file` function)
"""
raise NotImplementedError
def loader(self, file_name, bad_steps=None, **kwargs):
"""Loads data from biologics .mpr files.
Args:
file_name (str): path to .res file.
bad_steps (list of tuples): (c, s) tuples of steps s
(in cycle c) to skip loading.
Returns:
new_tests (list of data objects)
"""
new_tests = []
if not os.path.isfile(file_name):
self.logger.info("Missing file_\n %s" % file_name)
return None
filesize = os.path.getsize(file_name)
hfilesize = humanize_bytes(filesize)
txt = "Filesize: %i (%s)" % (filesize, hfilesize)
self.logger.debug(txt)
# creating temporary file and connection
temp_dir = tempfile.gettempdir()
temp_filename = os.path.join(temp_dir, os.path.basename(file_name))
shutil.copy2(file_name, temp_dir)
self.logger.debug("tmp file: %s" % temp_filename)
self.logger.debug("HERE WE LOAD THE DATA")
data = Cell()
fid = FileID(file_name)
# div parameters and information (probably load this last)
test_no = 1
data.cell_no = test_no
data.loaded_from = file_name
# some overall prms
data.channel_index = None
data.channel_number = None
data.creator = None
data.item_ID = None
data.schedule_file_name = None
data.start_datetime = None
data.test_ID = None
data.test_name = None
data.raw_data_files.append(fid)
# --------- read raw-data (normal-data) -------------------------
self.logger.debug("reading raw-data")
self.mpr_data = None
self.mpr_log = None
self.mpr_settings = None
self._load_mpr_data(temp_filename, bad_steps)
length_of_test = self.mpr_data.shape[0]
self.logger.debug(f"length of test: {length_of_test}")
self.logger.debug("renaming columns")
self._rename_headers()
# --------- stats-data (summary-data) -------------------------
summary_df = self._create_summary_data()
if summary_df.empty:
txt = "\nCould not find any summary (stats-file)!"
txt += " (summary_df.empty = True)"
txt += "\n -> issue make_summary(use_cellpy_stat_file=False)"
warnings.warn(txt)
data.summary = summary_df
data.raw = self.mpr_data
data.raw_data_files_length.append(length_of_test)
new_tests.append(data)
self._clean_up(temp_filename)
return new_tests
def _parse_mpr_log_data(self):
for value in bl_log_pos_dtype:
key, start, end, dtype = value
self.mpr_log[key] = np.frombuffer( # replaced np.fromstring
self.mpr_log["data"][start:], dtype=dtype, count=1
)[0]
if "a" in dtype:
self.mpr_log[key] = self.mpr_log[key].decode("utf8")
# converting dates
date_datetime = ole2datetime(self.mpr_log["Acquisition started on"])
self.mpr_log["Start"] = date_datetime
def _parse_mpr_settings_data(self, settings_mod):
tm = time.strptime(settings_mod["date"].decode(), "%m.%d.%y")
startdate = datetime.date(tm.tm_year, tm.tm_mon, tm.tm_mday)
mpr_settings = dict()
mpr_settings["start_date"] = startdate
mpr_settings["length"] = settings_mod["length"]
mpr_settings["end"] = settings_mod["end"]
mpr_settings["offset"] = settings_mod["offset"]
mpr_settings["version"] = settings_mod["version"]
mpr_settings["data"] = settings_mod["data"]
self.mpr_settings = mpr_settings
return None
def _get_flag(self, flag_name):
if flag_name in self.flags_dict:
mask, dtype = self.flags_dict[flag_name]
# print(f"flag: {flag_name}, mask: {mask}, dtype: {dtype}")
return np.array(self.mpr_data["flags"] & mask, dtype=dtype)
# elif flag_name in self.flags2_dict:
# mask, dtype = self.flags2_dict[flag_name]
# return np.array(self.mpr_data['flags2'] & mask, dtype=dtype)
else:
raise AttributeError("Flag '%s' not present" % flag_name)
def _load_mpr_data(self, filename, bad_steps):
if bad_steps is not None:
warnings.warn("Exluding bad steps is not implemented")
stats_info = os.stat(filename)
mpr_modules = []
mpr_log = None
mpr_data = None
mpr_settings = None
file_obj = open(filename, mode="rb")
label = file_obj.read(len(mpr_label))
self.logger.debug(f"label: {label}")
counter = 0
while True:
counter += 1
new_module = _read_modules(file_obj)
position = int(new_module["end"])
mpr_modules.append(new_module)
if position >= stats_info.st_size:
txt = "-reached end of file"
if position == stats_info.st_size:
txt += " --exactly at end of file"
self.logger.info(txt)
break
file_obj.close()
# ------------- set -----------------------------------
settings_mod = None
for m in mpr_modules:
if m["shortname"].strip().decode() == "VMP Set":
settings_mod = m
break
if settings_mod is None:
raise IOError("No settings-module found!")
self._parse_mpr_settings_data(settings_mod)
# ------------- data -----------------------------------
data_module = None
for m in mpr_modules:
if m["shortname"].strip().decode() == "VMP data":
data_module = m
if data_module is None:
raise IOError("No data module!")
data_version = data_module["version"]
n_data_points = np.fromstring(data_module["data"][:4], dtype="<u4")[0]
n_columns = np.fromstring(data_module["data"][4:5], dtype="u1")[0]
logging.debug(f"data (points, cols): {n_data_points}, {n_columns}")
if data_version == 0:
logging.debug("data version 0")
column_types = np.fromstring(
data_module["data"][5:], dtype="u1", count=n_columns
)
remaining_headers = data_module["data"][5 + n_columns : 100]
main_data = data_module["data"][100:]
elif data_version == 2:
logging.debug("data version 2")
column_types = np.fromstring(
data_module["data"][5:], dtype="<u2", count=n_columns
)
main_data = data_module["data"][405:]
remaining_headers = data_module["data"][5 + 2 * n_columns : 405]
else:
raise IOError("Unrecognised version for data module: %d" % data_version)
whats_left = remaining_headers.strip(b"\x00").decode("utf8")
if whats_left:
self.logger.debug("UPS! you have some columns left")
self.logger.debug(whats_left)
dtype_dict = OrderedDict()
flags_dict = OrderedDict()
for col in column_types:
if col in bl_flags.keys():
flags_dict[bl_flags[col][0]] = bl_flags[col][1]
dtype_dict[bl_dtypes[col][1]] = bl_dtypes[col][0]
self.dtype_dict = dtype_dict
self.flags_dict = flags_dict
dtype = np.dtype(list(dtype_dict.items()))
p = dtype.itemsize
if not p == (len(main_data) / n_data_points):
self.logger.info(
"WARNING! You have defined %i bytes, "
"but it seems it should be %i" % (p, len(main_data) / n_data_points)
)
bulk = main_data
bulk_data = np.fromstring(bulk, dtype=dtype)
mpr_data = pd.DataFrame(bulk_data)
self.logger.debug(mpr_data.columns)
self.logger.debug(mpr_data.head())
# ------------- log -----------------------------------
log_module = None
for m in mpr_modules:
if m["shortname"].strip().decode() == "VMP LOG":
log_module = m
if log_module is None:
txt = "error - no log module"
raise IOError(txt)
tm = time.strptime(log_module["date"].decode(), "%m.%d.%y")
enddate = datetime.date(tm.tm_year, tm.tm_mon, tm.tm_mday)
mpr_log = dict()
mpr_log["end_date"] = enddate
mpr_log["length2"] = log_module["length"]
mpr_log["end2"] = log_module["end"]
mpr_log["offset2"] = log_module["offset"]
mpr_log["version2"] = log_module["version"]
mpr_log["data"] = log_module[
"data"
] # Not sure if I will ever need it, but just in case....
self.mpr_log = mpr_log
self._parse_mpr_log_data()
self.mpr_data = mpr_data
def _rename_header(self, h_old, h_new):
try:
self.mpr_data.rename(
columns={h_new: self.cellpy_headers[h_old]}, inplace=True
)
except KeyError as e:
# warnings.warn(f"KeyError {e}")
self.logger.info(f"Problem during conversion to cellpy-format ({e})")
def _generate_cycle_index(self):
flag = "Ns changes"
n = self._get_flag(flag)
self.mpr_data[self.cellpy_headers["cycle_index_txt"]] = 1
ns_changes = self.mpr_data[n].index.values
for i in ns_changes:
self.mpr_data.loc[i:, self.cellpy_headers["cycle_index_txt"]] += 1
def _generate_datetime(self):
start_date = self.mpr_settings["start_date"]
start_datetime = self.mpr_log["Start"]
cellpy_header_txt = "datetime_txt"
date_format = "%Y-%m-%d %H:%M:%S" # without microseconds
self.mpr_data[self.cellpy_headers[cellpy_header_txt]] = [
start_datetime + datetime.timedelta(seconds=n)
for n in self.mpr_data["time"].values
]
# self.mpr_data[self.cellpy_headers[cellpy_header_txt]]
# .start_date.strftime(date_format)
# TODO: @jepe - currently storing as datetime object
# (while for arbindata it is stored as str)
def _generate_step_index(self):
# TODO: @jepe - check and optionally fix me
cellpy_header_txt = "step_index_txt"
biologics_header_txt = "flags2"
self._rename_header(cellpy_header_txt, biologics_header_txt)
self.mpr_data[self.cellpy_headers[cellpy_header_txt]] += 1
def _generate_step_time(self):
# TODO: @jepe - fix me
self.mpr_data[self.cellpy_headers["step_time_txt"]] = np.nan
def _generate_sub_step_time(self):
# TODO: @jepe - fix me
self.mpr_data[self.cellpy_headers["sub_step_time_txt"]] = np.nan
def _generate_capacities(self):
cap_col = self.mpr_data["QChargeDischarge"]
self.mpr_data[self.cellpy_headers["discharge_capacity_txt"]] = [
0.0 if x < 0 else x for x in cap_col
]
self.mpr_data[self.cellpy_headers["charge_capacity_txt"]] = [
0.0 if x >= 0 else x for x in cap_col
]
def _rename_headers(self):
# should ideally use the info from bl_dtypes, will do that later
self.mpr_data[self.cellpy_headers["internal_resistance_txt"]] = np.nan
self.mpr_data[self.cellpy_headers["data_point_txt"]] = np.arange(
1, self.mpr_data.shape[0] + 1, 1
)
self._generate_datetime()
self._generate_cycle_index()
self._generate_step_time()
self._generate_sub_step_time()
self._generate_step_index()
self._generate_capacities()
# simple renaming of column headers for the rest
self._rename_header("frequency_txt", "freq")
self._rename_header("voltage_txt", "Ewe")
self._rename_header("current_txt", "I")
self._rename_header("aci_phase_angle_txt", "phaseZ")
self._rename_header("amplitude_txt", "absZ")
self._rename_header("ref_voltage_txt", "Ece")
self._rename_header("ref_aci_phase_angle_txt", "phaseZce")
self._rename_header("test_time_txt", "time")
self.mpr_data[self.cellpy_headers["sub_step_index_txt"]] = self.mpr_data[
self.cellpy_headers["step_index_txt"]
]
def _create_summary_data(self):
# Summary data should contain datapoint-number
# for last point in the cycle. It must also contain
# capacity
df_summary = pd.DataFrame()
mpr_log = self.mpr_log
mpr_settings = self.mpr_settings
# TODO: @jepe - finalise making summary of mpr-files
# after figuring out steps etc
warnings.warn(
"Creating summary data for biologics mpr-files" " is not implemented yet"
)
self.logger.info(mpr_settings)
self.logger.info(mpr_log)
start_date = mpr_settings["start_date"]
self.logger.info(start_date)
return df_summary
def __raw_export(self, filename, df):
filename_out = os.path.splitext(filename)[0] + "_test_out.csv"
print("\n--------EXPORTING----------------------------")
print(filename)
print("->")
print(filename_out)
df.to_csv(filename_out, sep=";")
print("------OK--------------------------------------")
def _clean_up(self, tmp_filename):
if os.path.isfile(tmp_filename):
try:
os.remove(tmp_filename)
            except OSError as e:
self.logger.warning(
"could not remove tmp-file\n%s %s" % (tmp_filename, e)
)
pass
if __name__ == "__main__":
import logging
import sys
import os
from cellpy import log
from cellpy import cellreader
# -------- defining overall path-names etc ----------
current_file_path = os.path.dirname(os.path.realpath(__file__))
# relative_test_data_dir = "../cellpy/data_ex"
relative_test_data_dir = "../../../testdata"
relative_out_data_dir = "../../../dev_data"
test_data_dir = os.path.abspath(
os.path.join(current_file_path, relative_test_data_dir)
)
test_data_dir_out = os.path.abspath(
os.path.join(current_file_path, relative_out_data_dir)
)
test_data_dir_raw = os.path.join(test_data_dir, "data")
if not os.path.isdir(test_data_dir_raw):
print(f"Could not find {test_data_dir_raw}")
sys.exit(-23)
if not os.path.isdir(test_data_dir_out):
sys.exit(-24)
if not os.path.isdir(os.path.join(test_data_dir_out, "out")):
os.mkdir(os.path.join(test_data_dir_out, "out"))
test_data_dir_out = os.path.join(test_data_dir_out, "out")
test_raw_file = "biol.mpr"
test_raw_file_full = os.path.join(test_data_dir_raw, test_raw_file)
test_data_dir_cellpy = os.path.join(test_data_dir, "hdf5")
test_cellpy_file = "geis.h5"
test_cellpy_file_tmp = "tmpfile.h5"
test_cellpy_file_full = os.path.join(test_data_dir_cellpy, test_cellpy_file)
test_cellpy_file_tmp_full = os.path.join(test_data_dir_cellpy, test_cellpy_file_tmp)
raw_file_name = test_raw_file_full
print("\n======================mpr-dev===========================")
print(f"Test-file: {raw_file_name}")
log.setup_logging(default_level="DEBUG")
instrument = "biologics_mpr"
cellpy_data_instance = cellreader.CellpyData()
cellpy_data_instance.set_instrument(instrument=instrument)
print("starting to load the file")
cellpy_data_instance.from_raw(raw_file_name)
print("printing cellpy instance:")
print(cellpy_data_instance)
print("---make step table")
cellpy_data_instance.make_step_table()
print("---make summary")
cellpy_data_instance.make_summary()
print("---saving to csv")
try:
temp_dir = tempfile.mkdtemp()
cellpy_data_instance.to_csv(datadir=temp_dir)
cellpy_data_instance.to_csv(datadir=test_data_dir_out)
print("---saving to hdf5")
print("NOT YET")
finally:
shutil.rmtree(temp_dir)
# dtype = dtype([('flags', 'u1'), ('time/s', '<f8'), ('Ewe/V', '<f4'), ('dQ/mA.h', '<f8'),
# ('I/mA', '<f4'), ('Ece/V', '<f4'), ('(Q-Qo)/mA.h', '<f8'), ('20', '<f8'),
# ('freq/Hz', '<f4'), ('Phase(Z)/deg', '<f4'), ('|Z|/Ohm', '<f4'),
# ('I Range', '<u2'), ('74', '<f8'), ('96', '<f8'), ('98', '<f8'),
# ('99', '<f8'), ('100', '<f8'), ('101', '<f8'), ('123', '<f8'),
# ('124', '<f8'), ('Capacitance charge/µF', '<f8'), ('Capacitance discharge/µF', '<f8'),
# ('Ns', '<u2'), ('430', '<f8'), ('431', '<f8'), ('432', '<f8'), ('433', '<f8'),
# ('Q charge/discharge/mA.h', '<f8'), ('half cycle', '<u4'), ('469', '<f8'),
# ('471', '<f8')])
#
# flags = OrderedDict(
# [
# ('mode', (3, <class 'numpy.uint8'>)),('ox/red', (4, <class 'numpy.bool_'>)),
# ('error', (8, <class 'numpy.bool_'>)), ('control changes', (16, <class 'numpy.bool_'>)),
# ('Ns changes', (32, <class 'numpy.bool_'>)), ('counter inc.', (128, <class 'numpy.bool_'>))]
# )
# flags2 = OrderedDict()
| mit |
Nyker510/scikit-learn | examples/linear_model/plot_lasso_model_selection.py | 311 | 5431 | """
===================================================
Lasso model selection: Cross-Validation / AIC / BIC
===================================================
Use the Akaike information criterion (AIC), the Bayes Information
criterion (BIC) and cross-validation to select an optimal value
of the regularization parameter alpha of the :ref:`lasso` estimator.
Results obtained with LassoLarsIC are based on AIC/BIC criteria.
Information-criterion based model selection is very fast, but it
relies on a proper estimation of degrees of freedom, are
derived for large samples (asymptotic results) and assume the model
is correct, i.e. that the data are actually generated by this model.
They also tend to break when the problem is badly conditioned
(more features than samples).
For cross-validation, we use 20-fold with 2 algorithms to compute the
Lasso path: coordinate descent, as implemented by the LassoCV class, and
Lars (least angle regression) as implemented by the LassoLarsCV class.
Both algorithms give roughly the same results. They differ with regards
to their execution speed and sources of numerical errors.
Lars computes a path solution only for each kink in the path. As a
result, it is very efficient when there are only a few kinks, which is
the case if there are few features or samples. Also, it is able to
compute the full path without setting any meta parameter. In
contrast, coordinate descent computes the path points on a pre-specified
grid (here we use the default). Thus it is more efficient if the number
of grid points is smaller than the number of kinks in the path. Such a
strategy can be interesting if the number of features is really large
and there are enough samples to select a large number of them. In terms of
numerical errors, for heavily correlated variables, Lars will accumulate
more errors, while the coordinate descent algorithm will only sample the
path on a grid.
Note how the optimal value of alpha varies for each fold. This
illustrates why nested-cross validation is necessary when trying to
evaluate the performance of a method for which a parameter is chosen by
cross-validation: this choice of parameter may not be optimal for unseen
data.
"""
print(__doc__)
# Author: Olivier Grisel, Gael Varoquaux, Alexandre Gramfort
# License: BSD 3 clause
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LassoCV, LassoLarsCV, LassoLarsIC
from sklearn import datasets
diabetes = datasets.load_diabetes()
X = diabetes.data
y = diabetes.target
rng = np.random.RandomState(42)
X = np.c_[X, rng.randn(X.shape[0], 14)] # add some bad features
# normalize data as done by Lars to allow for comparison
X /= np.sqrt(np.sum(X ** 2, axis=0))
##############################################################################
# LassoLarsIC: least angle regression with BIC/AIC criterion
model_bic = LassoLarsIC(criterion='bic')
t1 = time.time()
model_bic.fit(X, y)
t_bic = time.time() - t1
alpha_bic_ = model_bic.alpha_
model_aic = LassoLarsIC(criterion='aic')
model_aic.fit(X, y)
alpha_aic_ = model_aic.alpha_
def plot_ic_criterion(model, name, color):
alpha_ = model.alpha_
alphas_ = model.alphas_
criterion_ = model.criterion_
plt.plot(-np.log10(alphas_), criterion_, '--', color=color,
linewidth=3, label='%s criterion' % name)
plt.axvline(-np.log10(alpha_), color=color, linewidth=3,
label='alpha: %s estimate' % name)
plt.xlabel('-log(alpha)')
plt.ylabel('criterion')
plt.figure()
plot_ic_criterion(model_aic, 'AIC', 'b')
plot_ic_criterion(model_bic, 'BIC', 'r')
plt.legend()
plt.title('Information-criterion for model selection (training time %.3fs)'
% t_bic)
##############################################################################
# LassoCV: coordinate descent
# Compute paths
print("Computing regularization path using the coordinate descent lasso...")
t1 = time.time()
model = LassoCV(cv=20).fit(X, y)
t_lasso_cv = time.time() - t1
# Display results
m_log_alphas = -np.log10(model.alphas_)
plt.figure()
ymin, ymax = 2300, 3800
plt.plot(m_log_alphas, model.mse_path_, ':')
plt.plot(m_log_alphas, model.mse_path_.mean(axis=-1), 'k',
label='Average across the folds', linewidth=2)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k',
label='alpha: CV estimate')
plt.legend()
plt.xlabel('-log(alpha)')
plt.ylabel('Mean square error')
plt.title('Mean square error on each fold: coordinate descent '
'(train time: %.2fs)' % t_lasso_cv)
plt.axis('tight')
plt.ylim(ymin, ymax)
##############################################################################
# LassoLarsCV: least angle regression
# Compute paths
print("Computing regularization path using the Lars lasso...")
t1 = time.time()
model = LassoLarsCV(cv=20).fit(X, y)
t_lasso_lars_cv = time.time() - t1
# Display results
m_log_alphas = -np.log10(model.cv_alphas_)
plt.figure()
plt.plot(m_log_alphas, model.cv_mse_path_, ':')
plt.plot(m_log_alphas, model.cv_mse_path_.mean(axis=-1), 'k',
label='Average across the folds', linewidth=2)
plt.axvline(-np.log10(model.alpha_), linestyle='--', color='k',
label='alpha CV')
plt.legend()
plt.xlabel('-log(alpha)')
plt.ylabel('Mean square error')
plt.title('Mean square error on each fold: Lars (train time: %.2fs)'
% t_lasso_lars_cv)
plt.axis('tight')
plt.ylim(ymin, ymax)
plt.show()
| bsd-3-clause |
manashmndl/scikit-learn | examples/cluster/plot_digits_agglomeration.py | 377 | 1694 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Feature agglomeration
=========================================================
These images show how similar features are merged together using
feature agglomeration.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets, cluster
from sklearn.feature_extraction.image import grid_to_graph
digits = datasets.load_digits()
images = digits.images
X = np.reshape(images, (len(images), -1))
connectivity = grid_to_graph(*images[0].shape)
agglo = cluster.FeatureAgglomeration(connectivity=connectivity,
n_clusters=32)
agglo.fit(X)
X_reduced = agglo.transform(X)
X_restored = agglo.inverse_transform(X_reduced)
images_restored = np.reshape(X_restored, images.shape)
plt.figure(1, figsize=(4, 3.5))
plt.clf()
plt.subplots_adjust(left=.01, right=.99, bottom=.01, top=.91)
for i in range(4):
plt.subplot(3, 4, i + 1)
plt.imshow(images[i], cmap=plt.cm.gray, vmax=16, interpolation='nearest')
plt.xticks(())
plt.yticks(())
if i == 1:
plt.title('Original data')
plt.subplot(3, 4, 4 + i + 1)
plt.imshow(images_restored[i], cmap=plt.cm.gray, vmax=16,
interpolation='nearest')
if i == 1:
plt.title('Agglomerated data')
plt.xticks(())
plt.yticks(())
plt.subplot(3, 4, 10)
plt.imshow(np.reshape(agglo.labels_, images[0].shape),
interpolation='nearest', cmap=plt.cm.spectral)
plt.xticks(())
plt.yticks(())
plt.title('Labels')
plt.show()
| bsd-3-clause |
gautam1858/tensorflow | tensorflow/contrib/timeseries/examples/lstm.py | 17 | 13869 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A more advanced example, of building an RNN-based time series model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
from os import path
import tempfile
import numpy
import tensorflow as tf
from tensorflow.contrib.timeseries.python.timeseries import estimators as ts_estimators
from tensorflow.contrib.timeseries.python.timeseries import model as ts_model
from tensorflow.contrib.timeseries.python.timeseries import state_management
try:
import matplotlib # pylint: disable=g-import-not-at-top
matplotlib.use("TkAgg") # Need Tk for interactive plots.
from matplotlib import pyplot # pylint: disable=g-import-not-at-top
HAS_MATPLOTLIB = True
except ImportError:
# Plotting requires matplotlib, but the unit test running this code may
# execute in an environment without it (i.e. matplotlib is not a build
# dependency). We'd still like to test the TensorFlow-dependent parts of this
# example.
HAS_MATPLOTLIB = False
_MODULE_PATH = path.dirname(__file__)
_DATA_FILE = path.join(_MODULE_PATH, "data/multivariate_periods.csv")
class _LSTMModel(ts_model.SequentialTimeSeriesModel):
"""A time series model-building example using an RNNCell."""
def __init__(self, num_units, num_features, exogenous_feature_columns=None,
dtype=tf.float32):
"""Initialize/configure the model object.
Note that we do not start graph building here. Rather, this object is a
configurable factory for TensorFlow graphs which are run by an Estimator.
Args:
num_units: The number of units in the model's LSTMCell.
num_features: The dimensionality of the time series (features per
timestep).
exogenous_feature_columns: A list of `tf.feature_column`s representing
features which are inputs to the model but are not predicted by
it. These must then be present for training, evaluation, and
prediction.
dtype: The floating point data type to use.
"""
super(_LSTMModel, self).__init__(
# Pre-register the metrics we'll be outputting (just a mean here).
train_output_names=["mean"],
predict_output_names=["mean"],
num_features=num_features,
exogenous_feature_columns=exogenous_feature_columns,
dtype=dtype)
self._num_units = num_units
# Filled in by initialize_graph()
self._lstm_cell = None
self._lstm_cell_run = None
self._predict_from_lstm_output = None
def initialize_graph(self, input_statistics=None):
"""Save templates for components, which can then be used repeatedly.
This method is called every time a new graph is created. It's safe to start
adding ops to the current default graph here, but the graph should be
constructed from scratch.
Args:
input_statistics: A math_utils.InputStatistics object.
"""
super(_LSTMModel, self).initialize_graph(input_statistics=input_statistics)
with tf.variable_scope("", use_resource=True):
# Use ResourceVariables to avoid race conditions.
self._lstm_cell = tf.nn.rnn_cell.LSTMCell(num_units=self._num_units)
# Create templates so we don't have to worry about variable reuse.
self._lstm_cell_run = tf.make_template(
name_="lstm_cell",
func_=self._lstm_cell,
create_scope_now_=True)
# Transforms LSTM output into mean predictions.
self._predict_from_lstm_output = tf.make_template(
name_="predict_from_lstm_output",
func_=functools.partial(tf.layers.dense, units=self.num_features),
create_scope_now_=True)
def get_start_state(self):
"""Return initial state for the time series model."""
return (
# Keeps track of the time associated with this state for error checking.
tf.zeros([], dtype=tf.int64),
# The previous observation or prediction.
tf.zeros([self.num_features], dtype=self.dtype),
# The most recently seen exogenous features.
tf.zeros(self._get_exogenous_embedding_shape(), dtype=self.dtype),
# The state of the RNNCell (batch dimension removed since this parent
# class will broadcast).
[tf.squeeze(state_element, axis=0)
for state_element
in self._lstm_cell.zero_state(batch_size=1, dtype=self.dtype)])
def _filtering_step(self, current_times, current_values, state, predictions):
"""Update model state based on observations.
Note that we don't do much here aside from computing a loss. In this case
it's easier to update the RNN state in _prediction_step, since that covers
running the RNN both on observations (from this method) and our own
predictions. This distinction can be important for probabilistic models,
where repeatedly predicting without filtering should lead to low-confidence
predictions.
Args:
current_times: A [batch size] integer Tensor.
current_values: A [batch size, self.num_features] floating point Tensor
with new observations.
state: The model's state tuple.
predictions: The output of the previous `_prediction_step`.
Returns:
A tuple of new state and a predictions dictionary updated to include a
loss (note that we could also return other measures of goodness of fit,
although only "loss" will be optimized).
"""
state_from_time, prediction, exogenous, lstm_state = state
with tf.control_dependencies(
[tf.assert_equal(current_times, state_from_time)]):
# Subtract the mean and divide by the variance of the series. Slightly
# more efficient if done for a whole window (using the normalize_features
# argument to SequentialTimeSeriesModel).
transformed_values = self._scale_data(current_values)
# Use mean squared error across features for the loss.
predictions["loss"] = tf.reduce_mean(
(prediction - transformed_values) ** 2, axis=-1)
# Keep track of the new observation in model state. It won't be run
# through the LSTM until the next _imputation_step.
new_state_tuple = (current_times, transformed_values,
exogenous, lstm_state)
return (new_state_tuple, predictions)
def _prediction_step(self, current_times, state):
"""Advance the RNN state using a previous observation or prediction."""
_, previous_observation_or_prediction, exogenous, lstm_state = state
# Update LSTM state based on the most recent exogenous and endogenous
# features.
inputs = tf.concat([previous_observation_or_prediction, exogenous],
axis=-1)
lstm_output, new_lstm_state = self._lstm_cell_run(
inputs=inputs, state=lstm_state)
next_prediction = self._predict_from_lstm_output(lstm_output)
new_state_tuple = (current_times, next_prediction,
exogenous, new_lstm_state)
return new_state_tuple, {"mean": self._scale_back_data(next_prediction)}
def _imputation_step(self, current_times, state):
"""Advance model state across a gap."""
# Does not do anything special if we're jumping across a gap. More advanced
# models, especially probabilistic ones, would want a special case that
# depends on the gap size.
return state
def _exogenous_input_step(
self, current_times, current_exogenous_regressors, state):
"""Save exogenous regressors in model state for use in _prediction_step."""
state_from_time, prediction, _, lstm_state = state
return (state_from_time, prediction,
current_exogenous_regressors, lstm_state)
def train_and_predict(
csv_file_name=_DATA_FILE, training_steps=200, estimator_config=None,
export_directory=None):
"""Train and predict using a custom time series model."""
# Construct an Estimator from our LSTM model.
categorical_column = tf.feature_column.categorical_column_with_hash_bucket(
key="categorical_exogenous_feature", hash_bucket_size=16)
exogenous_feature_columns = [
# Exogenous features are not part of the loss, but can inform
# predictions. In this example the features have no extra information, but
# are included as an API example.
tf.feature_column.numeric_column(
"2d_exogenous_feature", shape=(2,)),
tf.feature_column.embedding_column(
categorical_column=categorical_column, dimension=10)]
estimator = ts_estimators.TimeSeriesRegressor(
model=_LSTMModel(num_features=5, num_units=128,
exogenous_feature_columns=exogenous_feature_columns),
optimizer=tf.train.AdamOptimizer(0.001), config=estimator_config,
# Set state to be saved across windows.
state_manager=state_management.ChainingStateManager())
reader = tf.contrib.timeseries.CSVReader(
csv_file_name,
column_names=((tf.contrib.timeseries.TrainEvalFeatures.TIMES,)
+ (tf.contrib.timeseries.TrainEvalFeatures.VALUES,) * 5
+ ("2d_exogenous_feature",) * 2
+ ("categorical_exogenous_feature",)),
# Data types other than for `times` need to be specified if they aren't
# float32. In this case one of our exogenous features has string dtype.
column_dtypes=((tf.int64,) + (tf.float32,) * 7 + (tf.string,)))
train_input_fn = tf.contrib.timeseries.RandomWindowInputFn(
reader, batch_size=4, window_size=32)
estimator.train(input_fn=train_input_fn, steps=training_steps)
evaluation_input_fn = tf.contrib.timeseries.WholeDatasetInputFn(reader)
evaluation = estimator.evaluate(input_fn=evaluation_input_fn, steps=1)
# Predict starting after the evaluation
predict_exogenous_features = {
"2d_exogenous_feature": numpy.concatenate(
[numpy.ones([1, 100, 1]), numpy.zeros([1, 100, 1])],
axis=-1),
"categorical_exogenous_feature": numpy.array(
["strkey"] * 100)[None, :, None]}
(predictions,) = tuple(estimator.predict(
input_fn=tf.contrib.timeseries.predict_continuation_input_fn(
evaluation, steps=100,
exogenous_features=predict_exogenous_features)))
times = evaluation["times"][0]
observed = evaluation["observed"][0, :, :]
predicted_mean = numpy.squeeze(numpy.concatenate(
[evaluation["mean"][0], predictions["mean"]], axis=0))
all_times = numpy.concatenate([times, predictions["times"]], axis=0)
# Export the model in SavedModel format. We include a bit of extra boilerplate
# for "cold starting" as if we didn't have any state from the Estimator, which
# is the case when serving from a SavedModel. If Estimator output is
# available, the result of "Estimator.evaluate" can be passed directly to
# `tf.contrib.timeseries.saved_model_utils.predict_continuation` as the
# `continue_from` argument.
with tf.Graph().as_default():
filter_feature_tensors, _ = evaluation_input_fn()
with tf.train.MonitoredSession() as session:
# Fetch the series to "warm up" our state, which will allow us to make
# predictions for its future values. This is just a dictionary of times,
# values, and exogenous features mapping to numpy arrays. The use of an
# input_fn is just a convenience for the example; they can also be
# specified manually.
filter_features = session.run(filter_feature_tensors)
if export_directory is None:
export_directory = tempfile.mkdtemp()
input_receiver_fn = estimator.build_raw_serving_input_receiver_fn()
export_location = estimator.export_saved_model(export_directory,
input_receiver_fn)
# Warm up and predict using the SavedModel
with tf.Graph().as_default():
with tf.Session() as session:
signatures = tf.saved_model.loader.load(
session, [tf.saved_model.tag_constants.SERVING], export_location)
state = tf.contrib.timeseries.saved_model_utils.cold_start_filter(
signatures=signatures, session=session, features=filter_features)
saved_model_output = (
tf.contrib.timeseries.saved_model_utils.predict_continuation(
continue_from=state, signatures=signatures,
session=session, steps=100,
exogenous_features=predict_exogenous_features))
# The exported model gives the same results as the Estimator.predict()
# call above.
numpy.testing.assert_allclose(
predictions["mean"],
numpy.squeeze(saved_model_output["mean"], axis=0))
return times, observed, all_times, predicted_mean
def main(unused_argv):
if not HAS_MATPLOTLIB:
raise ImportError(
"Please install matplotlib to generate a plot from this example.")
(observed_times, observations,
all_times, predictions) = train_and_predict()
pyplot.axvline(99, linestyle="dotted")
observed_lines = pyplot.plot(
observed_times, observations, label="Observed", color="k")
predicted_lines = pyplot.plot(
all_times, predictions, label="Predicted", color="b")
pyplot.legend(handles=[observed_lines[0], predicted_lines[0]],
loc="upper left")
pyplot.show()
if __name__ == "__main__":
tf.app.run(main=main)
| apache-2.0 |
linebp/pandas | pandas/core/strings.py | 7 | 60118 | import numpy as np
from pandas.compat import zip
from pandas.core.dtypes.generic import ABCSeries, ABCIndex
from pandas.core.dtypes.missing import isnull, notnull
from pandas.core.dtypes.common import (
is_bool_dtype,
is_categorical_dtype,
is_object_dtype,
is_string_like,
is_list_like,
is_scalar,
is_integer,
is_re)
from pandas.core.common import _values_from_object
from pandas.core.algorithms import take_1d
import pandas.compat as compat
from pandas.core.base import AccessorProperty, NoNewAttributesMixin
from pandas.util._decorators import Appender
import re
import pandas._libs.lib as lib
import warnings
import textwrap
import codecs
_cpython_optimized_encoders = (
"utf-8", "utf8", "latin-1", "latin1", "iso-8859-1", "mbcs", "ascii"
)
_cpython_optimized_decoders = _cpython_optimized_encoders + (
"utf-16", "utf-32"
)
_shared_docs = dict()
def _get_array_list(arr, others):
from pandas.core.series import Series
if len(others) and isinstance(_values_from_object(others)[0],
(list, np.ndarray, Series)):
arrays = [arr] + list(others)
else:
arrays = [arr, others]
return [np.asarray(x, dtype=object) for x in arrays]
def str_cat(arr, others=None, sep=None, na_rep=None):
"""
Concatenate strings in the Series/Index with given separator.
Parameters
----------
others : list-like, or list of list-likes
If None, returns str concatenating strings of the Series
sep : string or None, default None
na_rep : string or None, default None
If None, NA in the series are ignored.
Returns
-------
concat : Series/Index of objects or str
Examples
--------
When ``na_rep`` is `None` (default behavior), NaN value(s)
in the Series are ignored.
>>> Series(['a','b',np.nan,'c']).str.cat(sep=' ')
'a b c'
>>> Series(['a','b',np.nan,'c']).str.cat(sep=' ', na_rep='?')
'a b ? c'
If ``others`` is specified, corresponding values are
concatenated with the separator. Result will be a Series of strings.
>>> Series(['a', 'b', 'c']).str.cat(['A', 'B', 'C'], sep=',')
0 a,A
1 b,B
2 c,C
dtype: object
Otherwise, strings in the Series are concatenated. Result will be a string.
>>> Series(['a', 'b', 'c']).str.cat(sep=',')
'a,b,c'
Also, you can pass a list of list-likes.
>>> Series(['a', 'b']).str.cat([['x', 'y'], ['1', '2']], sep=',')
0 a,x,1
1 b,y,2
dtype: object
"""
if sep is None:
sep = ''
if others is not None:
arrays = _get_array_list(arr, others)
n = _length_check(arrays)
masks = np.array([isnull(x) for x in arrays])
cats = None
if na_rep is None:
na_mask = np.logical_or.reduce(masks, axis=0)
result = np.empty(n, dtype=object)
np.putmask(result, na_mask, np.nan)
notmask = ~na_mask
tuples = zip(*[x[notmask] for x in arrays])
cats = [sep.join(tup) for tup in tuples]
result[notmask] = cats
else:
for i, x in enumerate(arrays):
x = np.where(masks[i], na_rep, x)
if cats is None:
cats = x
else:
cats = cats + sep + x
result = cats
return result
else:
arr = np.asarray(arr, dtype=object)
mask = isnull(arr)
if na_rep is None and mask.any():
if sep == '':
na_rep = ''
else:
return sep.join(arr[notnull(arr)])
return sep.join(np.where(mask, na_rep, arr))
def _length_check(others):
n = None
for x in others:
try:
if n is None:
n = len(x)
elif len(x) != n:
raise ValueError('All arrays must be same length')
except TypeError:
raise ValueError("Did you mean to supply a `sep` keyword?")
return n
def _na_map(f, arr, na_result=np.nan, dtype=object):
# should really _check_ for NA
return _map(f, arr, na_mask=True, na_value=na_result, dtype=dtype)
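# Illustration (hypothetical input): NA entries are passed through untouched,
# everything else goes through `f`:
#     _na_map(str.upper, np.array(['a', np.nan, 'b'], dtype=object))
#     # -> array(['A', nan, 'B'], dtype=object)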
def _map(f, arr, na_mask=False, na_value=np.nan, dtype=object):
if not len(arr):
return np.ndarray(0, dtype=dtype)
if isinstance(arr, ABCSeries):
arr = arr.values
if not isinstance(arr, np.ndarray):
arr = np.asarray(arr, dtype=object)
if na_mask:
mask = isnull(arr)
try:
convert = not all(mask)
result = lib.map_infer_mask(arr, f, mask.view(np.uint8), convert)
except (TypeError, AttributeError) as e:
# Reraise the exception if callable `f` got wrong number of args.
# The user may want to be warned by this, instead of getting NaN
if compat.PY2:
p_err = r'takes (no|(exactly|at (least|most)) ?\d+) arguments?'
else:
p_err = (r'((takes)|(missing)) (?(2)from \d+ to )?\d+ '
r'(?(3)required )positional arguments?')
if len(e.args) >= 1 and re.search(p_err, e.args[0]):
raise e
def g(x):
try:
return f(x)
except (TypeError, AttributeError):
return na_value
return _map(g, arr, dtype=dtype)
if na_value is not np.nan:
np.putmask(result, mask, na_value)
if result.dtype == object:
result = lib.maybe_convert_objects(result)
return result
else:
return lib.map_infer(arr, f)
def str_count(arr, pat, flags=0):
"""
Count occurrences of pattern in each string of the Series/Index.
Parameters
----------
pat : string, valid regular expression
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Returns
-------
counts : Series/Index of integer values
"""
regex = re.compile(pat, flags=flags)
f = lambda x: len(regex.findall(x))
return _na_map(f, arr, dtype=int)
def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True):
"""
    Return boolean Series/``array`` indicating whether the given
    pattern/regex is contained in each string of the Series/Index.
Parameters
----------
pat : string
Character sequence or regular expression
case : boolean, default True
If True, case sensitive
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
na : default NaN, fill value for missing values.
regex : bool, default True
If True use re.search, otherwise use Python in operator
Returns
-------
contained : Series/array of boolean values
See Also
--------
match : analogous, but stricter, relying on re.match instead of re.search
"""
if regex:
if not case:
flags |= re.IGNORECASE
regex = re.compile(pat, flags=flags)
if regex.groups > 0:
warnings.warn("This pattern has match groups. To actually get the"
" groups, use str.extract.", UserWarning,
stacklevel=3)
f = lambda x: bool(regex.search(x))
else:
if case:
f = lambda x: pat in x
else:
upper_pat = pat.upper()
f = lambda x: upper_pat in x
uppered = _na_map(lambda x: x.upper(), arr)
return _na_map(f, uppered, na, dtype=bool)
return _na_map(f, arr, na, dtype=bool)
def str_startswith(arr, pat, na=np.nan):
"""
Return boolean Series/``array`` indicating whether each string in the
Series/Index starts with passed pattern. Equivalent to
:meth:`str.startswith`.
Parameters
----------
pat : string
Character sequence
na : bool, default NaN
Returns
-------
startswith : Series/array of boolean values
"""
f = lambda x: x.startswith(pat)
return _na_map(f, arr, na, dtype=bool)
def str_endswith(arr, pat, na=np.nan):
"""
Return boolean Series indicating whether each string in the
Series/Index ends with passed pattern. Equivalent to
:meth:`str.endswith`.
Parameters
----------
pat : string
Character sequence
na : bool, default NaN
Returns
-------
endswith : Series/array of boolean values
"""
f = lambda x: x.endswith(pat)
return _na_map(f, arr, na, dtype=bool)
def str_replace(arr, pat, repl, n=-1, case=None, flags=0):
"""
Replace occurrences of pattern/regex in the Series/Index with
some other string. Equivalent to :meth:`str.replace` or
:func:`re.sub`.
Parameters
----------
pat : string or compiled regex
String can be a character sequence or regular expression.
.. versionadded:: 0.20.0
`pat` also accepts a compiled regex.
repl : string or callable
Replacement string or a callable. The callable is passed the regex
match object and must return a replacement string to be used.
See :func:`re.sub`.
.. versionadded:: 0.20.0
`repl` also accepts a callable.
n : int, default -1 (all)
Number of replacements to make from start
case : boolean, default None
- If True, case sensitive (the default if `pat` is a string)
- Set to False for case insensitive
- Cannot be set if `pat` is a compiled regex
flags : int, default 0 (no flags)
- re module flags, e.g. re.IGNORECASE
- Cannot be set if `pat` is a compiled regex
Returns
-------
replaced : Series/Index of objects
Notes
-----
When `pat` is a compiled regex, all flags should be included in the
compiled regex. Use of `case` or `flags` with a compiled regex will
raise an error.
Examples
--------
When `repl` is a string, every `pat` is replaced as with
:meth:`str.replace`. NaN value(s) in the Series are left as is.
>>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', 'b')
0 boo
1 buz
2 NaN
dtype: object
When `repl` is a callable, it is called on every `pat` using
:func:`re.sub`. The callable should expect one positional argument
(a regex object) and return a string.
To get the idea:
>>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', repr)
0 <_sre.SRE_Match object; span=(0, 1), match='f'>oo
1 <_sre.SRE_Match object; span=(0, 1), match='f'>uz
2 NaN
dtype: object
Reverse every lowercase alphabetic word:
>>> repl = lambda m: m.group(0)[::-1]
>>> pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(r'[a-z]+', repl)
0 oof 123
1 rab zab
2 NaN
dtype: object
Using regex groups (extract second group and swap case):
>>> pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
>>> repl = lambda m: m.group('two').swapcase()
>>> pd.Series(['One Two Three', 'Foo Bar Baz']).str.replace(pat, repl)
0 tWO
1 bAR
dtype: object
Using a compiled regex with flags
>>> regex_pat = re.compile(r'FUZ', flags=re.IGNORECASE)
>>> pd.Series(['foo', 'fuz', np.nan]).str.replace(regex_pat, 'bar')
0 foo
1 bar
2 NaN
dtype: object
"""
# Check whether repl is valid (GH 13438, GH 15055)
if not (is_string_like(repl) or callable(repl)):
raise TypeError("repl must be a string or callable")
is_compiled_re = is_re(pat)
if is_compiled_re:
if (case is not None) or (flags != 0):
raise ValueError("case and flags cannot be set"
" when pat is a compiled regex")
else:
# not a compiled regex
# set default case
if case is None:
case = True
# add case flag, if provided
if case is False:
flags |= re.IGNORECASE
use_re = is_compiled_re or len(pat) > 1 or flags or callable(repl)
if use_re:
n = n if n >= 0 else 0
regex = re.compile(pat, flags=flags)
f = lambda x: regex.sub(repl=repl, string=x, count=n)
else:
f = lambda x: x.replace(pat, repl, n)
return _na_map(f, arr)
def str_repeat(arr, repeats):
"""
Duplicate each string in the Series/Index by indicated number
of times.
Parameters
----------
repeats : int or array
Same value for all (int) or different value per (array)
Returns
-------
repeated : Series/Index of objects
"""
if is_scalar(repeats):
def rep(x):
try:
return compat.binary_type.__mul__(x, repeats)
except TypeError:
return compat.text_type.__mul__(x, repeats)
return _na_map(rep, arr)
else:
def rep(x, r):
try:
return compat.binary_type.__mul__(x, r)
except TypeError:
return compat.text_type.__mul__(x, r)
repeats = np.asarray(repeats, dtype=object)
result = lib.vec_binop(_values_from_object(arr), repeats, rep)
return result
def str_match(arr, pat, case=True, flags=0, na=np.nan, as_indexer=None):
"""
Determine if each string matches a regular expression.
Parameters
----------
pat : string
Character sequence or regular expression
case : boolean, default True
If True, case sensitive
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
na : default NaN, fill value for missing values.
as_indexer : DEPRECATED
Returns
-------
Series/array of boolean values
See Also
--------
contains : analogous, but less strict, relying on re.search instead of
re.match
extract : extract matched groups
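    Examples
    --------
    A small illustrative example (the match is anchored at the start of
    each string):
    >>> Series(['a1', 'b2', 'c3']).str.match('a\d')
    0     True
    1    False
    2    False
    dtype: bool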
"""
if not case:
flags |= re.IGNORECASE
regex = re.compile(pat, flags=flags)
if (as_indexer is False) and (regex.groups > 0):
raise ValueError("as_indexer=False with a pattern with groups is no "
"longer supported. Use '.str.extract(pat)' instead")
elif as_indexer is not None:
# Previously, this keyword was used for changing the default but
# deprecated behaviour. This keyword is now no longer needed.
warnings.warn("'as_indexer' keyword was specified but is ignored "
"(match now returns a boolean indexer by default), "
"and will be removed in a future version.",
FutureWarning, stacklevel=3)
dtype = bool
f = lambda x: bool(regex.match(x))
return _na_map(f, arr, na, dtype=dtype)
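# Illustrative sketch (added example, assumes pandas imported as pd): match
# anchors at the beginning of each string, unlike contains, which relies on
# re.search:
# >>> pd.Series(['a1', 'b2', '3c']).str.match(r'[ab]\d')
# 0     True
# 1     True
# 2    False
# dtype: bool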
def _get_single_group_name(rx):
try:
return list(rx.groupindex.keys()).pop()
except IndexError:
return None
def _groups_or_na_fun(regex):
"""Used in both extract_noexpand and extract_frame"""
if regex.groups == 0:
raise ValueError("pattern contains no capture groups")
empty_row = [np.nan] * regex.groups
def f(x):
if not isinstance(x, compat.string_types):
return empty_row
m = regex.search(x)
if m:
return [np.nan if item is None else item for item in m.groups()]
else:
return empty_row
return f
def _str_extract_noexpand(arr, pat, flags=0):
"""
Find groups in each string in the Series using passed regular
expression. This function is called from
str_extract(expand=False), and can return Series, DataFrame, or
Index.
"""
from pandas import DataFrame, Index
regex = re.compile(pat, flags=flags)
groups_or_na = _groups_or_na_fun(regex)
if regex.groups == 1:
result = np.array([groups_or_na(val)[0] for val in arr], dtype=object)
name = _get_single_group_name(regex)
else:
if isinstance(arr, Index):
raise ValueError("only one regex group is supported with Index")
name = None
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
columns = [names.get(1 + i, i) for i in range(regex.groups)]
if arr.empty:
result = DataFrame(columns=columns, dtype=object)
else:
result = DataFrame(
[groups_or_na(val) for val in arr],
columns=columns,
index=arr.index,
dtype=object)
return result, name
def _str_extract_frame(arr, pat, flags=0):
"""
For each subject string in the Series, extract groups from the
first match of regular expression pat. This function is called from
str_extract(expand=True), and always returns a DataFrame.
"""
from pandas import DataFrame
regex = re.compile(pat, flags=flags)
groups_or_na = _groups_or_na_fun(regex)
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
columns = [names.get(1 + i, i) for i in range(regex.groups)]
if len(arr) == 0:
return DataFrame(columns=columns, dtype=object)
try:
result_index = arr.index
except AttributeError:
result_index = None
return DataFrame(
[groups_or_na(val) for val in arr],
columns=columns,
index=result_index,
dtype=object)
def str_extract(arr, pat, flags=0, expand=None):
"""
For each subject string in the Series, extract groups from the
first match of regular expression pat.
.. versionadded:: 0.13.0
Parameters
----------
pat : string
Regular expression pattern with capturing groups
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
.. versionadded:: 0.18.0
expand : bool, default None
``None`` is currently treated as ``False`` (a FutureWarning is issued);
the default will change to ``True`` in a future version.
* If True, return DataFrame.
* If False, return Series/Index/DataFrame.
Returns
-------
DataFrame with one row for each subject string, and one column for
each group. Any capture group names in regular expression pat will
be used for column names; otherwise capture group numbers will be
used. The dtype of each result column is always object, even when
no match is found. If expand=False and pat has only one capture group,
then return a Series (if subject is a Series) or Index (if subject
is an Index).
See Also
--------
extractall : returns all matches (not just the first match)
Examples
--------
A pattern with two groups will return a DataFrame with two columns.
Non-matches will be NaN.
>>> s = Series(['a1', 'b2', 'c3'])
>>> s.str.extract('([ab])(\d)')
0 1
0 a 1
1 b 2
2 NaN NaN
A pattern may contain optional groups.
>>> s.str.extract('([ab])?(\d)')
0 1
0 a 1
1 b 2
2 NaN 3
Named groups will become column names in the result.
>>> s.str.extract('(?P<letter>[ab])(?P<digit>\d)')
letter digit
0 a 1
1 b 2
2 NaN NaN
A pattern with one group will return a DataFrame with one column
if expand=True.
>>> s.str.extract('[ab](\d)', expand=True)
0
0 1
1 2
2 NaN
A pattern with one group will return a Series if expand=False.
>>> s.str.extract('[ab](\d)', expand=False)
0 1
1 2
2 NaN
dtype: object
"""
if expand is None:
warnings.warn(
"currently extract(expand=None) " +
"means expand=False (return Index/Series/DataFrame) " +
"but in a future version of pandas this will be changed " +
"to expand=True (return DataFrame)",
FutureWarning,
stacklevel=3)
expand = False
if not isinstance(expand, bool):
raise ValueError("expand must be True or False")
if expand:
return _str_extract_frame(arr._orig, pat, flags=flags)
else:
result, name = _str_extract_noexpand(arr._data, pat, flags=flags)
return arr._wrap_result(result, name=name, expand=expand)
def str_extractall(arr, pat, flags=0):
"""
For each subject string in the Series, extract groups from all
matches of regular expression pat. When each subject string in the
Series has exactly one match, extractall(pat).xs(0, level='match')
is the same as extract(pat).
.. versionadded:: 0.18.0
Parameters
----------
pat : string
Regular expression pattern with capturing groups
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Returns
-------
A DataFrame with one row for each match, and one column for each
group. Its rows have a MultiIndex with first levels that come from
the subject Series. The last level is named 'match' and indicates
the order in the subject. Any capture group names in regular
expression pat will be used for column names; otherwise capture
group numbers will be used.
See Also
--------
extract : returns first match only (not all matches)
Examples
--------
A pattern with one group will return a DataFrame with one column.
Indices with no matches will not appear in the result.
>>> s = Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])
>>> s.str.extractall("[ab](\d)")
0
match
A 0 1
1 2
B 0 1
Capture group names are used for column names of the result.
>>> s.str.extractall("[ab](?P<digit>\d)")
digit
match
A 0 1
1 2
B 0 1
A pattern with two groups will return a DataFrame with two columns.
>>> s.str.extractall("(?P<letter>[ab])(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
Optional groups that do not match are NaN in the result.
>>> s.str.extractall("(?P<letter>[ab])?(?P<digit>\d)")
letter digit
match
A 0 a 1
1 a 2
B 0 b 1
C 0 NaN 1
"""
regex = re.compile(pat, flags=flags)
# the regex must contain capture groups.
if regex.groups == 0:
raise ValueError("pattern contains no capture groups")
if isinstance(arr, ABCIndex):
arr = arr.to_series().reset_index(drop=True)
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
columns = [names.get(1 + i, i) for i in range(regex.groups)]
match_list = []
index_list = []
is_mi = arr.index.nlevels > 1
for subject_key, subject in arr.iteritems():
if isinstance(subject, compat.string_types):
if not is_mi:
subject_key = (subject_key, )
for match_i, match_tuple in enumerate(regex.findall(subject)):
if isinstance(match_tuple, compat.string_types):
match_tuple = (match_tuple,)
na_tuple = [np.NaN if group == "" else group
for group in match_tuple]
match_list.append(na_tuple)
result_key = tuple(subject_key + (match_i, ))
index_list.append(result_key)
if 0 < len(index_list):
from pandas import MultiIndex
index = MultiIndex.from_tuples(
index_list, names=arr.index.names + ["match"])
else:
index = None
result = arr._constructor_expanddim(match_list, index=index,
columns=columns)
return result
def str_get_dummies(arr, sep='|'):
"""
Split each string in the Series by sep and return a frame of
dummy/indicator variables.
Parameters
----------
sep : string, default "|"
String to split on.
Returns
-------
dummies : DataFrame
Examples
--------
>>> Series(['a|b', 'a', 'a|c']).str.get_dummies()
a b c
0 1 1 0
1 1 0 0
2 1 0 1
>>> Series(['a|b', np.nan, 'a|c']).str.get_dummies()
a b c
0 1 1 0
1 0 0 0
2 1 0 1
See Also
--------
pandas.get_dummies
"""
arr = arr.fillna('')
try:
arr = sep + arr + sep
except TypeError:
arr = sep + arr.astype(str) + sep
tags = set()
for ts in arr.str.split(sep):
tags.update(ts)
tags = sorted(tags - set([""]))
dummies = np.empty((len(arr), len(tags)), dtype=np.int64)
for i, t in enumerate(tags):
pat = sep + t + sep
dummies[:, i] = lib.map_infer(arr.values, lambda x: pat in x)
return dummies, tags
def str_join(arr, sep):
"""
Join lists contained as elements in the Series/Index with
passed delimiter. Equivalent to :meth:`str.join`.
Parameters
----------
sep : string
Delimiter
Returns
-------
joined : Series/Index of objects
"""
return _na_map(sep.join, arr)
def str_findall(arr, pat, flags=0):
"""
Find all occurrences of pattern or regular expression in the
Series/Index. Equivalent to :func:`re.findall`.
Parameters
----------
pat : string
Pattern or regular expression
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
Returns
-------
matches : Series/Index of lists
See Also
--------
extractall : returns DataFrame with one column per capture group
"""
regex = re.compile(pat, flags=flags)
return _na_map(regex.findall, arr)
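# Illustrative sketch (added example, assumes pandas imported as pd):
# findall returns the list of all matches for each element:
# >>> pd.Series(['a1a2', 'b3', 'c']).str.findall(r'\d')
# 0    [1, 2]
# 1       [3]
# 2        []
# dtype: object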
def str_find(arr, sub, start=0, end=None, side='left'):
"""
Return indexes in each string in the Series/Index where the
substring is fully contained between [start:end]. Return -1 on failure.
Parameters
----------
sub : str
Substring being searched
start : int
Left edge index
end : int
Right edge index
side : {'left', 'right'}, default 'left'
Specifies a starting side, equivalent to ``find`` or ``rfind``
Returns
-------
found : Series/Index of integer values
"""
if not isinstance(sub, compat.string_types):
msg = 'expected a string object, not {0}'
raise TypeError(msg.format(type(sub).__name__))
if side == 'left':
method = 'find'
elif side == 'right':
method = 'rfind'
else: # pragma: no cover
raise ValueError('Invalid side')
if end is None:
f = lambda x: getattr(x, method)(sub, start)
else:
f = lambda x: getattr(x, method)(sub, start, end)
return _na_map(f, arr, dtype=int)
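# Illustrative sketch (added example, assumes pandas imported as pd):
# side='left' maps to str.find (lowest index), side='right' to str.rfind:
# >>> pd.Series(['abcab']).str.find('ab')
# 0    0
# dtype: int64
# >>> pd.Series(['abcab']).str.rfind('ab')
# 0    3
# dtype: int64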
def str_index(arr, sub, start=0, end=None, side='left'):
if not isinstance(sub, compat.string_types):
msg = 'expected a string object, not {0}'
raise TypeError(msg.format(type(sub).__name__))
if side == 'left':
method = 'index'
elif side == 'right':
method = 'rindex'
else: # pragma: no cover
raise ValueError('Invalid side')
if end is None:
f = lambda x: getattr(x, method)(sub, start)
else:
f = lambda x: getattr(x, method)(sub, start, end)
return _na_map(f, arr, dtype=int)
def str_pad(arr, width, side='left', fillchar=' '):
"""
Pad strings in the Series/Index with an additional character on the
specified side.
Parameters
----------
width : int
Minimum width of resulting string; additional characters will be filled
with ``fillchar``
side : {'left', 'right', 'both'}, default 'left'
fillchar : str
Additional character for filling, default is whitespace
Returns
-------
padded : Series/Index of objects
"""
if not isinstance(fillchar, compat.string_types):
msg = 'fillchar must be a character, not {0}'
raise TypeError(msg.format(type(fillchar).__name__))
if len(fillchar) != 1:
raise TypeError('fillchar must be a character, not str')
if not is_integer(width):
msg = 'width must be of integer type, not {0}'
raise TypeError(msg.format(type(width).__name__))
if side == 'left':
f = lambda x: x.rjust(width, fillchar)
elif side == 'right':
f = lambda x: x.ljust(width, fillchar)
elif side == 'both':
f = lambda x: x.center(width, fillchar)
else: # pragma: no cover
raise ValueError('Invalid side')
return _na_map(f, arr)
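# Illustrative sketch (added example, assumes pandas imported as pd): note
# that side='left' adds fill characters on the left, i.e. right-justifies:
# >>> pd.Series(['x']).str.pad(5, side='left', fillchar='-')
# 0    ----x
# dtype: object
# >>> pd.Series(['x']).str.pad(5, side='both', fillchar='-')
# 0    --x--
# dtype: object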
def str_split(arr, pat=None, n=None):
"""
Split each string (a la re.split) in the Series/Index by given
pattern, propagating NA values. Equivalent to :meth:`str.split`.
Parameters
----------
pat : string, default None
String or regular expression to split on. If None, splits on whitespace
n : int, default -1 (all)
None, 0 and -1 will be interpreted as return all splits
expand : bool, default False
* If True, return DataFrame/MultiIndex expanding dimensionality.
* If False, return Series/Index.
.. versionadded:: 0.16.1
return_type : deprecated, use `expand`
Returns
-------
split : Series/Index or DataFrame/MultiIndex of objects
"""
if pat is None:
if n is None or n == 0:
n = -1
f = lambda x: x.split(pat, n)
else:
if len(pat) == 1:
if n is None or n == 0:
n = -1
f = lambda x: x.split(pat, n)
else:
if n is None or n == -1:
n = 0
regex = re.compile(pat)
f = lambda x: regex.split(x, maxsplit=n)
res = _na_map(f, arr)
return res
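# Illustrative sketch (added example, assumes pandas imported as pd):
# single-character separators use str.split, longer patterns are treated as
# regular expressions and go through re.split:
# >>> pd.Series(['a_b_c']).str.split('_', n=1)
# 0    [a, b_c]
# dtype: object
# >>> pd.Series(['a1b22c']).str.split(r'\d+')
# 0    [a, b, c]
# dtype: object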
def str_rsplit(arr, pat=None, n=None):
"""
Split each string in the Series/Index by the given delimiter
string, starting at the end of the string and working to the front.
Equivalent to :meth:`str.rsplit`.
.. versionadded:: 0.16.2
Parameters
----------
pat : string, default None
Separator to split on. If None, splits on whitespace
n : int, default -1 (all)
None, 0 and -1 will be interpreted as return all splits
expand : bool, default False
* If True, return DataFrame/MultiIndex expanding dimensionality.
* If False, return Series/Index.
Returns
-------
split : Series/Index or DataFrame/MultiIndex of objects
"""
if n is None or n == 0:
n = -1
f = lambda x: x.rsplit(pat, n)
res = _na_map(f, arr)
return res
def str_slice(arr, start=None, stop=None, step=None):
"""
Slice substrings from each element in the Series/Index
Parameters
----------
start : int or None
stop : int or None
step : int or None
Returns
-------
sliced : Series/Index of objects
"""
obj = slice(start, stop, step)
f = lambda x: x[obj]
return _na_map(f, arr)
def str_slice_replace(arr, start=None, stop=None, repl=None):
"""
Replace a slice of each string in the Series/Index with another
string.
Parameters
----------
start : int or None
stop : int or None
repl : str or None
String for replacement
Returns
-------
replaced : Series/Index of objects
"""
if repl is None:
repl = ''
def f(x):
if x[start:stop] == '':
local_stop = start
else:
local_stop = stop
y = ''
if start is not None:
y += x[:start]
y += repl
if stop is not None:
y += x[local_stop:]
return y
return _na_map(f, arr)
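# Illustrative sketch (added example, assumes pandas imported as pd): when
# the slice is empty (e.g. start is past the end of the string), nothing is
# removed and the replacement is appended at ``start``:
# >>> pd.Series(['abcdef']).str.slice_replace(1, 3, 'X')
# 0    aXdef
# dtype: object
# >>> pd.Series(['ab']).str.slice_replace(5, 6, 'X')
# 0    abX
# dtype: object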
def str_strip(arr, to_strip=None, side='both'):
"""
Strip whitespace (including newlines) from each string in the
Series/Index.
Parameters
----------
to_strip : str or unicode
side : {'left', 'right', 'both'}, default 'both'
Returns
-------
stripped : Series/Index of objects
"""
if side == 'both':
f = lambda x: x.strip(to_strip)
elif side == 'left':
f = lambda x: x.lstrip(to_strip)
elif side == 'right':
f = lambda x: x.rstrip(to_strip)
else: # pragma: no cover
raise ValueError('Invalid side')
return _na_map(f, arr)
def str_wrap(arr, width, **kwargs):
r"""
Wrap long strings in the Series/Index to be formatted in
paragraphs with length less than a given width.
This method has the same keyword parameters and defaults as
:class:`textwrap.TextWrapper`.
Parameters
----------
width : int
Maximum line-width
expand_tabs : bool, optional
If true, tab characters will be expanded to spaces (default: True)
replace_whitespace : bool, optional
If true, each whitespace character (as defined by string.whitespace)
remaining after tab expansion will be replaced by a single space
(default: True)
drop_whitespace : bool, optional
If true, whitespace that, after wrapping, happens to end up at the
beginning or end of a line is dropped (default: True)
break_long_words : bool, optional
If true, then words longer than width will be broken in order to ensure
that no lines are longer than width. If it is false, long words will
not be broken, and some lines may be longer than width. (default: True)
break_on_hyphens : bool, optional
If true, wrapping will occur preferably on whitespace and right after
hyphens in compound words, as it is customary in English. If false,
only whitespaces will be considered as potentially good places for line
breaks, but you need to set break_long_words to false if you want truly
insecable words. (default: True)
Returns
-------
wrapped : Series/Index of objects
Notes
-----
Internally, this method uses a :class:`textwrap.TextWrapper` instance with
default settings. To achieve behavior matching R's stringr library str_wrap
function, use the arguments:
- expand_tabs = False
- replace_whitespace = True
- drop_whitespace = True
- break_long_words = False
- break_on_hyphens = False
Examples
--------
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0 line to be\nwrapped
1 another line\nto be\nwrapped
"""
kwargs['width'] = width
tw = textwrap.TextWrapper(**kwargs)
return _na_map(lambda s: '\n'.join(tw.wrap(s)), arr)
def str_translate(arr, table, deletechars=None):
"""
Map all characters in the string through the given mapping table.
Equivalent to standard :meth:`str.translate`. Note that the optional
argument deletechars is only valid if you are using python 2. For python 3,
character deletion should be specified via the table argument.
Parameters
----------
table : dict (python 3), str or None (python 2)
In python 3, table is a mapping of Unicode ordinals to Unicode
ordinals, strings, or None. Unmapped characters are left untouched.
Characters mapped to None are deleted. :meth:`str.maketrans` is a
helper function for making translation tables.
In python 2, table is either a string of length 256 or None. If the
table argument is None, no translation is applied and the operation
simply removes the characters in deletechars. :func:`string.maketrans`
is a helper function for making translation tables.
deletechars : str, optional (python 2)
A string of characters to delete. This argument is only valid
in python 2.
Returns
-------
translated : Series/Index of objects
"""
if deletechars is None:
f = lambda x: x.translate(table)
else:
from pandas import compat
if compat.PY3:
raise ValueError("deletechars is not a valid argument for "
"str.translate in python 3. You should simply "
"specify character deletions in the table "
"argument")
f = lambda x: x.translate(table, deletechars)
return _na_map(f, arr)
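# Illustrative sketch (added example, python 3, assumes pandas imported as
# pd): characters mapped to None are deleted:
# >>> table = str.maketrans({'a': 'A', 'c': None})
# >>> pd.Series(['abc', 'cab']).str.translate(table)
# 0    Ab
# 1    Ab
# dtype: object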
def str_get(arr, i):
"""
Extract element from lists, tuples, or strings in each element in the
Series/Index.
Parameters
----------
i : int
Integer index (location)
Returns
-------
items : Series/Index of objects
"""
f = lambda x: x[i] if len(x) > i else np.nan
return _na_map(f, arr)
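# Illustrative sketch (added example, assumes pandas imported as pd):
# out-of-range positions yield NaN instead of raising IndexError:
# >>> pd.Series([['a', 'b'], ['c']]).str.get(1)
# 0      b
# 1    NaN
# dtype: object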
def str_decode(arr, encoding, errors="strict"):
"""
Decode character string in the Series/Index using indicated encoding.
Equivalent to :meth:`str.decode` in python2 and :meth:`bytes.decode` in
python3.
Parameters
----------
encoding : str
errors : str, optional
Returns
-------
decoded : Series/Index of objects
"""
if encoding in _cpython_optimized_decoders:
# CPython optimized implementation
f = lambda x: x.decode(encoding, errors)
else:
decoder = codecs.getdecoder(encoding)
f = lambda x: decoder(x, errors)[0]
return _na_map(f, arr)
def str_encode(arr, encoding, errors="strict"):
"""
Encode character string in the Series/Index using indicated encoding.
Equivalent to :meth:`str.encode`.
Parameters
----------
encoding : str
errors : str, optional
Returns
-------
encoded : Series/Index of objects
"""
if encoding in _cpython_optimized_encoders:
# CPython optimized implementation
f = lambda x: x.encode(encoding, errors)
else:
encoder = codecs.getencoder(encoding)
f = lambda x: encoder(x, errors)[0]
return _na_map(f, arr)
def _noarg_wrapper(f, docstring=None, **kargs):
def wrapper(self):
result = _na_map(f, self._data, **kargs)
return self._wrap_result(result)
wrapper.__name__ = f.__name__
if docstring is not None:
wrapper.__doc__ = docstring
else:
raise ValueError('Provide docstring')
return wrapper
def _pat_wrapper(f, flags=False, na=False, **kwargs):
def wrapper1(self, pat):
result = f(self._data, pat)
return self._wrap_result(result)
def wrapper2(self, pat, flags=0, **kwargs):
result = f(self._data, pat, flags=flags, **kwargs)
return self._wrap_result(result)
def wrapper3(self, pat, na=np.nan):
result = f(self._data, pat, na=na)
return self._wrap_result(result)
wrapper = wrapper3 if na else wrapper2 if flags else wrapper1
wrapper.__name__ = f.__name__
if f.__doc__:
wrapper.__doc__ = f.__doc__
return wrapper
def copy(source):
"Copy a docstring from another source function (if present)"
def do_copy(target):
if source.__doc__:
target.__doc__ = source.__doc__
return target
return do_copy
class StringMethods(NoNewAttributesMixin):
"""
Vectorized string functions for Series and Index. NAs stay NA unless
handled otherwise by a particular method. Patterned after Python's string
methods, with some inspiration from R's stringr package.
Examples
--------
>>> s.str.split('_')
>>> s.str.replace('_', '')
"""
def __init__(self, data):
self._is_categorical = is_categorical_dtype(data)
self._data = data.cat.categories if self._is_categorical else data
# save orig to blow up categoricals to the right type
self._orig = data
self._freeze()
def __getitem__(self, key):
if isinstance(key, slice):
return self.slice(start=key.start, stop=key.stop, step=key.step)
else:
return self.get(key)
def __iter__(self):
i = 0
g = self.get(i)
while g.notnull().any():
yield g
i += 1
g = self.get(i)
def _wrap_result(self, result, use_codes=True,
name=None, expand=None):
from pandas.core.index import Index, MultiIndex
# for category, we do the stuff on the categories, so blow it up
# to the full series again
# But for some operations, we have to do the stuff on the full values,
# so make it possible to skip this step as the method already did this
# before the transformation...
if use_codes and self._is_categorical:
result = take_1d(result, self._orig.cat.codes)
if not hasattr(result, 'ndim') or not hasattr(result, 'dtype'):
return result
assert result.ndim < 3
if expand is None:
# infer from ndim if expand is not specified
expand = False if result.ndim == 1 else True
elif expand is True and not isinstance(self._orig, Index):
# required when expand=True is explicitly specified
# not needed when inferred
def cons_row(x):
if is_list_like(x):
return x
else:
return [x]
result = [cons_row(x) for x in result]
if not isinstance(expand, bool):
raise ValueError("expand must be True or False")
if expand is False:
# if expand is False, result should have the same name
# as the original otherwise specified
if name is None:
name = getattr(result, 'name', None)
if name is None:
# do not use logical or, _orig may be a DataFrame
# which has "name" column
name = self._orig.name
# Wait until we are sure result is a Series or Index before
# checking attributes (GH 12180)
if isinstance(self._orig, Index):
# if result is a boolean np.array, return the np.array
# instead of wrapping it into a boolean Index (GH 8875)
if is_bool_dtype(result):
return result
if expand:
result = list(result)
return MultiIndex.from_tuples(result, names=name)
else:
return Index(result, name=name)
else:
index = self._orig.index
if expand:
cons = self._orig._constructor_expanddim
return cons(result, columns=name, index=index)
else:
# Must be a Series
cons = self._orig._constructor
return cons(result, name=name, index=index)
@copy(str_cat)
def cat(self, others=None, sep=None, na_rep=None):
data = self._orig if self._is_categorical else self._data
result = str_cat(data, others=others, sep=sep, na_rep=na_rep)
return self._wrap_result(result, use_codes=(not self._is_categorical))
@copy(str_split)
def split(self, pat=None, n=-1, expand=False):
result = str_split(self._data, pat, n=n)
return self._wrap_result(result, expand=expand)
@copy(str_rsplit)
def rsplit(self, pat=None, n=-1, expand=False):
result = str_rsplit(self._data, pat, n=n)
return self._wrap_result(result, expand=expand)
_shared_docs['str_partition'] = ("""
Split the string at the %(side)s occurrence of `sep`, and return 3 elements
containing the part before the separator, the separator itself,
and the part after the separator.
If the separator is not found, return %(return)s.
Parameters
----------
pat : string, default whitespace
String to split on.
expand : bool, default True
* If True, return DataFrame/MultiIndex expanding dimensionality.
* If False, return Series/Index.
Returns
-------
split : DataFrame/MultiIndex or Series/Index of objects
See Also
--------
%(also)s
Examples
--------
>>> s = Series(['A_B_C', 'D_E_F', 'X'])
>>> s
0 A_B_C
1 D_E_F
2 X
dtype: object
>>> s.str.partition('_')
0 1 2
0 A _ B_C
1 D _ E_F
2 X
>>> s.str.rpartition('_')
0 1 2
0 A_B _ C
1 D_E _ F
2 X
""")
@Appender(_shared_docs['str_partition'] % {
'side': 'first',
'return': '3 elements containing the string itself, followed by two '
'empty strings',
'also': 'rpartition : Split the string at the last occurrence of `sep`'
})
def partition(self, pat=' ', expand=True):
f = lambda x: x.partition(pat)
result = _na_map(f, self._data)
return self._wrap_result(result, expand=expand)
@Appender(_shared_docs['str_partition'] % {
'side': 'last',
'return': '3 elements containing two empty strings, followed by the '
'string itself',
'also': 'partition : Split the string at the first occurrence of `sep`'
})
def rpartition(self, pat=' ', expand=True):
f = lambda x: x.rpartition(pat)
result = _na_map(f, self._data)
return self._wrap_result(result, expand=expand)
@copy(str_get)
def get(self, i):
result = str_get(self._data, i)
return self._wrap_result(result)
@copy(str_join)
def join(self, sep):
result = str_join(self._data, sep)
return self._wrap_result(result)
@copy(str_contains)
def contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
result = str_contains(self._data, pat, case=case, flags=flags, na=na,
regex=regex)
return self._wrap_result(result)
@copy(str_match)
def match(self, pat, case=True, flags=0, na=np.nan, as_indexer=None):
result = str_match(self._data, pat, case=case, flags=flags, na=na,
as_indexer=as_indexer)
return self._wrap_result(result)
@copy(str_replace)
def replace(self, pat, repl, n=-1, case=None, flags=0):
result = str_replace(self._data, pat, repl, n=n, case=case,
flags=flags)
return self._wrap_result(result)
@copy(str_repeat)
def repeat(self, repeats):
result = str_repeat(self._data, repeats)
return self._wrap_result(result)
@copy(str_pad)
def pad(self, width, side='left', fillchar=' '):
result = str_pad(self._data, width, side=side, fillchar=fillchar)
return self._wrap_result(result)
_shared_docs['str_pad'] = ("""
Filling %(side)s side of strings in the Series/Index with an
additional character. Equivalent to :meth:`str.%(method)s`.
Parameters
----------
width : int
Minimum width of resulting string; additional characters will be filled
with ``fillchar``
fillchar : str
Additional character for filling, default is whitespace
Returns
-------
filled : Series/Index of objects
""")
@Appender(_shared_docs['str_pad'] % dict(side='left and right',
method='center'))
def center(self, width, fillchar=' '):
return self.pad(width, side='both', fillchar=fillchar)
@Appender(_shared_docs['str_pad'] % dict(side='right', method='ljust'))
def ljust(self, width, fillchar=' '):
return self.pad(width, side='right', fillchar=fillchar)
@Appender(_shared_docs['str_pad'] % dict(side='left', method='rjust'))
def rjust(self, width, fillchar=' '):
return self.pad(width, side='left', fillchar=fillchar)
def zfill(self, width):
"""
Filling left side of strings in the Series/Index with 0.
Equivalent to :meth:`str.zfill`.
Parameters
----------
width : int
Minimum width of resulting string; additional characters will be
filled with 0
Returns
-------
filled : Series/Index of objects
"""
result = str_pad(self._data, width, side='left', fillchar='0')
return self._wrap_result(result)
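# Illustrative sketch (added example, assumes pandas imported as pd):
# because zfill is implemented via str_pad with fillchar='0', the zeros are
# inserted before any sign character, unlike the built-in str.zfill:
# >>> pd.Series(['5', '-7']).str.zfill(3)
# 0    005
# 1    0-7
# dtype: object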
@copy(str_slice)
def slice(self, start=None, stop=None, step=None):
result = str_slice(self._data, start, stop, step)
return self._wrap_result(result)
@copy(str_slice_replace)
def slice_replace(self, start=None, stop=None, repl=None):
result = str_slice_replace(self._data, start, stop, repl)
return self._wrap_result(result)
@copy(str_decode)
def decode(self, encoding, errors="strict"):
result = str_decode(self._data, encoding, errors)
return self._wrap_result(result)
@copy(str_encode)
def encode(self, encoding, errors="strict"):
result = str_encode(self._data, encoding, errors)
return self._wrap_result(result)
_shared_docs['str_strip'] = ("""
Strip whitespace (including newlines) from each string in the
Series/Index from %(side)s. Equivalent to :meth:`str.%(method)s`.
Returns
-------
stripped : Series/Index of objects
""")
@Appender(_shared_docs['str_strip'] % dict(side='left and right sides',
method='strip'))
def strip(self, to_strip=None):
result = str_strip(self._data, to_strip, side='both')
return self._wrap_result(result)
@Appender(_shared_docs['str_strip'] % dict(side='left side',
method='lstrip'))
def lstrip(self, to_strip=None):
result = str_strip(self._data, to_strip, side='left')
return self._wrap_result(result)
@Appender(_shared_docs['str_strip'] % dict(side='right side',
method='rstrip'))
def rstrip(self, to_strip=None):
result = str_strip(self._data, to_strip, side='right')
return self._wrap_result(result)
@copy(str_wrap)
def wrap(self, width, **kwargs):
result = str_wrap(self._data, width, **kwargs)
return self._wrap_result(result)
@copy(str_get_dummies)
def get_dummies(self, sep='|'):
# we need to cast to Series of strings as only that has all
# methods available for making the dummies...
data = self._orig.astype(str) if self._is_categorical else self._data
result, name = str_get_dummies(data, sep)
return self._wrap_result(result, use_codes=(not self._is_categorical),
name=name, expand=True)
@copy(str_translate)
def translate(self, table, deletechars=None):
result = str_translate(self._data, table, deletechars)
return self._wrap_result(result)
count = _pat_wrapper(str_count, flags=True)
startswith = _pat_wrapper(str_startswith, na=True)
endswith = _pat_wrapper(str_endswith, na=True)
findall = _pat_wrapper(str_findall, flags=True)
@copy(str_extract)
def extract(self, pat, flags=0, expand=None):
return str_extract(self, pat, flags=flags, expand=expand)
@copy(str_extractall)
def extractall(self, pat, flags=0):
return str_extractall(self._orig, pat, flags=flags)
_shared_docs['find'] = ("""
Return %(side)s indexes in each string in the Series/Index
where the substring is fully contained between [start:end].
Return -1 on failure. Equivalent to standard :meth:`str.%(method)s`.
Parameters
----------
sub : str
Substring being searched
start : int
Left edge index
end : int
Right edge index
Returns
-------
found : Series/Index of integer values
See Also
--------
%(also)s
""")
@Appender(_shared_docs['find'] %
dict(side='lowest', method='find',
also='rfind : Return highest indexes in each string'))
def find(self, sub, start=0, end=None):
result = str_find(self._data, sub, start=start, end=end, side='left')
return self._wrap_result(result)
@Appender(_shared_docs['find'] %
dict(side='highest', method='rfind',
also='find : Return lowest indexes in each string'))
def rfind(self, sub, start=0, end=None):
result = str_find(self._data, sub, start=start, end=end, side='right')
return self._wrap_result(result)
def normalize(self, form):
"""Return the Unicode normal form for the strings in the Series/Index.
For more information on the forms, see the
:func:`unicodedata.normalize`.
Parameters
----------
form : {'NFC', 'NFKC', 'NFD', 'NFKD'}
Unicode form
Returns
-------
normalized : Series/Index of objects
"""
import unicodedata
f = lambda x: unicodedata.normalize(form, compat.u_safe(x))
result = _na_map(f, self._data)
return self._wrap_result(result)
_shared_docs['index'] = ("""
Return %(side)s indexes in each string where the substring is
fully contained between [start:end]. This is the same as
``str.%(similar)s`` except instead of returning -1, it raises a ValueError
when the substring is not found. Equivalent to standard ``str.%(method)s``.
Parameters
----------
sub : str
Substring being searched
start : int
Left edge index
end : int
Right edge index
Returns
-------
found : Series/Index of objects
See Also
--------
%(also)s
""")
@Appender(_shared_docs['index'] %
dict(side='lowest', similar='find', method='index',
also='rindex : Return highest indexes in each string'))
def index(self, sub, start=0, end=None):
result = str_index(self._data, sub, start=start, end=end, side='left')
return self._wrap_result(result)
@Appender(_shared_docs['index'] %
dict(side='highest', similar='rfind', method='rindex',
also='index : Return lowest indexes in each string'))
def rindex(self, sub, start=0, end=None):
result = str_index(self._data, sub, start=start, end=end, side='right')
return self._wrap_result(result)
_shared_docs['len'] = ("""
Compute length of each string in the Series/Index.
Returns
-------
lengths : Series/Index of integer values
""")
len = _noarg_wrapper(len, docstring=_shared_docs['len'], dtype=int)
_shared_docs['casemethods'] = ("""
Convert strings in the Series/Index to %(type)s.
Equivalent to :meth:`str.%(method)s`.
Returns
-------
converted : Series/Index of objects
""")
_shared_docs['lower'] = dict(type='lowercase', method='lower')
_shared_docs['upper'] = dict(type='uppercase', method='upper')
_shared_docs['title'] = dict(type='titlecase', method='title')
_shared_docs['capitalize'] = dict(type='be capitalized',
method='capitalize')
_shared_docs['swapcase'] = dict(type='be swapcased', method='swapcase')
lower = _noarg_wrapper(lambda x: x.lower(),
docstring=_shared_docs['casemethods'] %
_shared_docs['lower'])
upper = _noarg_wrapper(lambda x: x.upper(),
docstring=_shared_docs['casemethods'] %
_shared_docs['upper'])
title = _noarg_wrapper(lambda x: x.title(),
docstring=_shared_docs['casemethods'] %
_shared_docs['title'])
capitalize = _noarg_wrapper(lambda x: x.capitalize(),
docstring=_shared_docs['casemethods'] %
_shared_docs['capitalize'])
swapcase = _noarg_wrapper(lambda x: x.swapcase(),
docstring=_shared_docs['casemethods'] %
_shared_docs['swapcase'])
_shared_docs['ismethods'] = ("""
Check whether all characters in each string in the Series/Index
are %(type)s. Equivalent to :meth:`str.%(method)s`.
Returns
-------
is : Series/array of boolean values
""")
_shared_docs['isalnum'] = dict(type='alphanumeric', method='isalnum')
_shared_docs['isalpha'] = dict(type='alphabetic', method='isalpha')
_shared_docs['isdigit'] = dict(type='digits', method='isdigit')
_shared_docs['isspace'] = dict(type='whitespace', method='isspace')
_shared_docs['islower'] = dict(type='lowercase', method='islower')
_shared_docs['isupper'] = dict(type='uppercase', method='isupper')
_shared_docs['istitle'] = dict(type='titlecase', method='istitle')
_shared_docs['isnumeric'] = dict(type='numeric', method='isnumeric')
_shared_docs['isdecimal'] = dict(type='decimal', method='isdecimal')
isalnum = _noarg_wrapper(lambda x: x.isalnum(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isalnum'])
isalpha = _noarg_wrapper(lambda x: x.isalpha(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isalpha'])
isdigit = _noarg_wrapper(lambda x: x.isdigit(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isdigit'])
isspace = _noarg_wrapper(lambda x: x.isspace(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isspace'])
islower = _noarg_wrapper(lambda x: x.islower(),
docstring=_shared_docs['ismethods'] %
_shared_docs['islower'])
isupper = _noarg_wrapper(lambda x: x.isupper(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isupper'])
istitle = _noarg_wrapper(lambda x: x.istitle(),
docstring=_shared_docs['ismethods'] %
_shared_docs['istitle'])
isnumeric = _noarg_wrapper(lambda x: compat.u_safe(x).isnumeric(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isnumeric'])
isdecimal = _noarg_wrapper(lambda x: compat.u_safe(x).isdecimal(),
docstring=_shared_docs['ismethods'] %
_shared_docs['isdecimal'])
class StringAccessorMixin(object):
""" Mixin to add a `.str` acessor to the class."""
# string methods
def _make_str_accessor(self):
from pandas.core.index import Index
if (isinstance(self, ABCSeries) and
not ((is_categorical_dtype(self.dtype) and
is_object_dtype(self.values.categories)) or
(is_object_dtype(self.dtype)))):
# it's neither a string series nor a categorical series with
# strings inside the categories.
# this really should exclude all series with any non-string values
# (instead of test for object dtype), but that isn't practical for
# performance reasons until we have a str dtype (GH 9343)
raise AttributeError("Can only use .str accessor with string "
"values, which use np.object_ dtype in "
"pandas")
elif isinstance(self, Index):
# can't use ABCIndex to exclude non-str
# see src/inference.pyx which can contain string values
allowed_types = ('string', 'unicode', 'mixed', 'mixed-integer')
if self.inferred_type not in allowed_types:
message = ("Can only use .str accessor with string values "
"(i.e. inferred_type is 'string', 'unicode' or "
"'mixed')")
raise AttributeError(message)
if self.nlevels > 1:
message = ("Can only use .str accessor with Index, not "
"MultiIndex")
raise AttributeError(message)
return StringMethods(self)
str = AccessorProperty(StringMethods, _make_str_accessor)
def _dir_additions(self):
return set()
def _dir_deletions(self):
try:
getattr(self, 'str')
except AttributeError:
return set(['str'])
return set()
| bsd-3-clause |
rfinn/LCS | paper1code/LCSanalyzeWise24.py | 1 | 8015 | #!/usr/bin/env python
'''
GOAL:
compare the results from sextractor photometry with the results from wise
'''
import pyfits
from LCScommon import *
from pylab import *
import os
import mystuff as my
from LCSReadmasterBaseNSA import *
import matplotlib.cm as cm
import chary_elbaz_24um as chary
Lsol=3.826e33#normalize by solar luminosity
bellconv=9.8e-11#converts Lir (in L_sun) to SFR/yr
bellconv=4.5e-44#Kenn 98 conversion fro erg/s to SFR/yr
#usefwhm24=1
fieldColor='0.7'
Remin=1.
mypath=os.getcwd()
if mypath.find('Users') > -1:
print "Running on Rose's mac pro"
homedir='/Users/rfinn/'
elif mypath.find('home') > -1:
print "Running on coma"
homedir='/home/rfinn/'
figuredir=homedir+'Dropbox/Research/LocalClusters/SamplePlots/'
figuredir=homedir+'research/LocalClusters/SamplePlots/'
class cluster(baseClusterNSA):
def __init__(self,clustername):
baseClusterNSA.__init__(self,clustername)
mypath=os.getcwd()
if mypath.find('Users') > -1:
print "Running on Rose's mac pro"
infile='/Users/rfinn/research/LocalClusters/MasterTables/'+clustername+'mastertable.WithProfileFits.fits'
elif mypath.find('home') > -1:
print "Running on coma"
infile=homedir+'research/LocalClusters/MasterTables/'+clustername+'mastertable.WithProfileFits.fits'
self.mipssnrflag = self.mipssnr > 6.
try:
self.readsnr24NSA()
except:
print self.prefix,": couldn't read SNR24 file"
try:
self.readGalfitSersicResults()
except:
print self.prefix,": couln't read galfit sersic results"
try:
self.readGalfitResults()
except:
print self.prefix,": couldn't read galfit results"
self.fwhm24=self.sex24.FWHM_DEG*3600.
self.fwhm24=self.sex24.FLUX_RADIUS1*mipspixelscale
self.member=self.membflag
self.member=self.dvflag
self.sample24flag= self.sex24.MATCHFLAG24 & ~self.agnflag & (self.n.SERSIC_TH50 > Remin) & (log10(self.stellarmass) > 9.5) & (log10(self.stellarmass) < 12)
self.blueclustersample=self.member & self.blueflag & self.sample24flag
self.bluefieldsample=~self.member & self.blueflag & self.sample24flag
self.greenclustersample=self.member & self.greenflag & self.sample24flag
self.greenfieldsample=~self.member & self.greenflag & self.sample24flag
self.redclustersample=self.member & self.redflag & self.sample24flag
self.redfieldsample=~self.member & self.redflag & self.sample24flag
self.varlookup={'stellarmass':log10(self.stellarmass),'Re':self.n.SERSIC_TH50,'R24':self.sex24.FLUX_RADIUS1*mipspixelscale,'NUV':self.n.ABSMAG[:,1],'r':self.n.ABSMAG[:,4],'m24':self.sex24.MAG_BEST,'redshift':self.n.ZDIST,'NUVr':(self.n.ABSMAG[:,1]-self.n.ABSMAG[:,4]),'NUV24':(self.n.ABSMAG[:,1]-self.sex24.MAG_BEST),'24mass':(self.sex24.MAG_BEST-log10(self.stellarmass)),'ratioR':self.sex24.FLUX_RADIUS1*mipspixelscale/self.n.SERSIC_TH50}
self.fw4=8.363*10.**(-1.*self.wise.W4MAG_3/2.5)*1.e6 # uJy
self.fw4err=8.363*10.**(-1.*(self.wise.W4MAG_3 - self.wise.W4SIGM_3)/2.5)*1.e6 -self.fw4# uJy
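# Illustrative check (added comment): with the W4 zero-point flux of
# 8.363 Jy assumed above, a source with W4MAG_3 = 8.0 gives
# 8.363*10**(-8.0/2.5)*1e6 ~ 5.3e3 uJy (~5.3 mJy).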
def plotmipswise(self,plotsingle=1):
#zeropoint fluxes from http://wise2.ipac.caltech.edu/docs/release/allsky/expsup/sec4_4h.html#conv2ab
#self.fw1=309.54*10.**(-1.*wtable.w1mpro/2.5)
#self.fw2=171.787*10.**(-1.*wtable.w2mpro/2.5)
#self.fw3=31.674*10.**(-1.*wtable.w3mpro/2.5)
#self.fw4=8.363*10.**(-1.*wtable.w4mpro/2.5)
if plotsingle:
figure()
flag=self.n.On24ImageFlag_1 & self.sex24.MATCHFLAG24
x=self.sex24.FLUX_BEST[flag]*mipsflux2umJyconv/1e3 # convert to mJy
y=self.fw4[flag]/1e3
plot(x,y,'ko',markersize=4)
xl=arange(-1,2.5,.1)
xl=10.**xl
plot(xl,xl,'k--')
axhline(y=5.1,color='k',ls=':')
gca().set_yscale('log')
gca().set_xscale('log')
if plotsingle:
xlabel('MIPS 24um flux (mJy)')
ylabel('WISE 22um flux (mJy)')
ax=gca()
axis([15/1e3,5.e2,.2,5.e2])
text(.1,.85,'$'+self.clustername+'$',fontsize=14,transform=ax.transAxes,horizontalalignment='left')
def plotmipssnrflux(self,plotsingle=1):
if plotsingle:
figure()
flag=self.n.On24ImageFlag_1 & self.sex24.MATCHFLAG24
x=self.sex24.FLUX_BEST[flag]*mipsflux2umJyconv/1e3 # convert to mJy
xerr=self.sex24.FLUXERR_BEST[flag]*mipsflux2umJyconv/1e3 # convert to mJy
y=abs(x/xerr)
plot(x,y,'ro',markersize=4)
y=abs(self.fw4/self.fw4err)
plot(self.fw4/1e3,y,'ko',mfc='None',markersize=4)
xl=arange(-1,2.,.1)
xl=10.**xl
#plot(xl,xl,'k--')
axvline(x=5.1,color='k',ls=':')
axvline(x=2,color='r',ls='--')
axhline(y=5.,color='k',ls='--')
axhline(y=3.,color='k',ls='--')
axhline(y=10.,color='b',ls='--')
gca().set_yscale('log')
gca().set_xscale('log')
if plotsingle:
xlabel('MIPS 24um flux (mJy)')
ylabel('SNR')
ax=gca()
axis([15/1e3,5.e2,.5,90])
text(.1,.85,'$'+self.clustername+'$',fontsize=14,transform=ax.transAxes,horizontalalignment='left')
def plotF24hist(self):
mybins=arange(0,5000,100)
y=hist(self.mipsflux[self.apexflag],bins=mybins,histtype='step')
ngal=y[0]
x=y[1]
xbin=zeros(len(ngal))
for i in range(len(xbin)):
xbin[i]=0.5*(x[i]+x[i+1])
#clf()
self.xbin=xbin
self.ngal=ngal
plot(xbin,ngal,'ro')
errorbar(xbin,ngal,sqrt(ngal),fmt=None)
axis([0,2000,0,20])
title(self.clustername)
ax=gca()
#ax.set_yscale('log')
#ax.set_xscale('log')
#xlabel('24um Flux')
def plotL24hist(self):
#figure(1)
y=hist(log10(self.L24[self.apexflag]),bins=10,histtype='stepfilled')
ax=gca()
#ax.set_yscale('log')
axis([6.5,11.,1,22])
xmin,xmax=xlim()
xticks(arange(round(xmin),xmax+1,1,'i'),fontsize=10)
ymin,ymax=ylim()
title(self.clustername)
#yticks(arange(round(ymin),ymax+1,2,'i'),fontsize=10)
def plotF24histall():
figure(figsize=[10,10])
clf()
subplots_adjust(wspace=.25,hspace=.35)
i=0
for cl in mylocalclusters:
i += 1
subplot(3,3,i)
cl.plotF24hist()
ax=gca()
text(-.75,-.35,'$log_{10}(F_{24} \ (\mu Jy))$',fontsize=22,horizontalalignment='center',transform=ax.transAxes)
subplot(3,3,4)
text(-2.8,1.9,'$N_{gal}$',fontsize=22,verticalalignment='center',rotation=90,transform=ax.transAxes)
savefig(figuredir+'PlotF24histAll.png')
def plotwisemips24all():
figure(figsize=[10,8])
clf()
subplots_adjust(wspace=.02,hspace=.02)
i=0
for cl in mylocalclusters:
i +=1
subplot(3,3,i)
print cl.prefix
cl.plotmipswise(plotsingle=0)
multiplotaxes(i)
multiplotlabels('$MIPS \ F_{24} \ (mJy) $','$ WISE \ F_{22} \ (mJy) $')
savefig(figuredir+'PlotWiseMipsall.eps')
def plotmipssnrfluxall():
figure(figsize=[10,8])
clf()
subplots_adjust(wspace=.02,hspace=.02)
i=0
for cl in mylocalclusters:
i +=1
subplot(3,3,i)
print cl.prefix
cl.plotmipssnrflux(plotsingle=0)
multiplotaxes(i)
multiplotlabels('$MIPS \ F_{24} \ (mJy) $','$ SNR_{24} $')
savefig(figuredir+'PlotWiseMipsall.eps')
mkw11=cluster('MKW11')
coma=cluster('Coma')
mkw8=cluster('MKW8')
awm4=cluster('AWM4')
a2052=cluster('A2052')
a2063=cluster('A2063')
ngc=cluster('NGC6107')
herc=cluster('Hercules')
a1367=cluster('A1367')
clustersbymass=[mkw8,mkw11,ngc,awm4,a2052,a2063,herc,a1367,coma]
clustersbydistance=[a1367,coma,mkw11,mkw8,ngc,awm4,a2063,a2052]
clustersbylx=[mkw11,ngc,mkw8,awm4,herc,a1367,a2063,a2052,coma]
mylocalclusters=clustersbymass
| gpl-3.0 |
vrenaville/ngo-addons-backport | addons/resource/faces/timescale.py | 170 | 3902 | ############################################################################
# Copyright (C) 2005 by Reithinger GmbH
# [email protected]
#
# This file is part of faces.
#
# faces is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# faces is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the
# Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
############################################################################
import faces.pcalendar as pcal
import matplotlib.cbook as cbook
import datetime
import sys
class TimeScale(object):
def __init__(self, calendar):
self.data_calendar = calendar
self._create_chart_calendar()
self.now = self.to_num(self.data_calendar.now)
def to_datetime(self, xval):
return xval.to_datetime()
def to_num(self, date):
return self.chart_calendar.WorkingDate(date)
def is_free_slot(self, value):
dt1 = self.chart_calendar.to_starttime(value)
dt2 = self.data_calendar.to_starttime\
(self.data_calendar.from_datetime(dt1))
return dt1 != dt2
def is_free_day(self, value):
dt1 = self.chart_calendar.to_starttime(value)
dt2 = self.data_calendar.to_starttime\
(self.data_calendar.from_datetime(dt1))
return dt1.date() != dt2.date()
def _create_chart_calendar(self):
dcal = self.data_calendar
ccal = self.chart_calendar = pcal.Calendar()
ccal.minimum_time_unit = 1
#pad worktime slots of calendar (all days should be equally long)
slot_sum = lambda slots: sum(map(lambda slot: slot[1] - slot[0], slots))
day_sum = lambda day: slot_sum(dcal.get_working_times(day))
max_work_time = max(map(day_sum, range(7)))
# working time should fill 2/3 of the padded day, i.e. each chart day is 3/2 of the longest working day
sum_time = 3 * max_work_time / 2
#now create timeslots for ccal
def create_time_slots(day):
src_slots = dcal.get_working_times(day)
slots = [0, src_slots, 24*60]
slots = tuple(cbook.flatten(slots))
slots = zip(slots[:-1], slots[1:])
#balance non working slots
work_time = slot_sum(src_slots)
non_work_time = sum_time - work_time
non_slots = filter(lambda s: s not in src_slots, slots)
non_slots = map(lambda s: (s[1] - s[0], s), non_slots)
non_slots.sort()
slots = []
i = 0
for l, s in non_slots:
delta = non_work_time / (len(non_slots) - i)
delta = min(l, delta)
non_work_time -= delta
slots.append((s[0], s[0] + delta))
i += 1
slots.extend(src_slots)
slots.sort()
return slots
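# Illustrative walk-through (added comment): assuming a single working
# slot of 08:00-17:00 (540 min) on the longest day, sum_time = 810, so
# 270 non-working minutes are spread over the two gaps, giving slots of
# roughly [(0, 135), (480, 1020), (1020, 1155)]; working time then fills
# 2/3 of each padded chart day.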
min_delta = sys.maxint
for i in range(7):
slots = create_time_slots(i)
ccal.working_times[i] = slots
min_delta = min(min_delta, min(map(lambda s: s[1] - s[0], slots)))
ccal._recalc_working_time()
self.slot_delta = min_delta
self.day_delta = sum_time
self.week_delta = ccal.week_time
_default_scale = TimeScale(pcal._default_calendar)
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 |
UNR-AERIAL/scikit-learn | examples/cluster/plot_birch_vs_minibatchkmeans.py | 333 | 3694 | """
=================================
Compare BIRCH and MiniBatchKMeans
=================================
This example compares the timing of Birch (with and without the global
clustering step) and MiniBatchKMeans on a synthetic dataset having
100,000 samples and 2 features generated using make_blobs.
If ``n_clusters`` is set to None, the data is reduced from 100,000
samples to a set of 158 clusters. This can be viewed as a preprocessing
step before the final (global) clustering step that further reduces these
158 clusters to 100 clusters.
"""
# Authors: Manoj Kumar <[email protected]>
# Alexandre Gramfort <[email protected]>
# License: BSD 3 clause
print(__doc__)
from itertools import cycle
from time import time
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import Birch, MiniBatchKMeans
from sklearn.datasets.samples_generator import make_blobs
# Generate centers for the blobs so that it forms a 10 X 10 grid.
xx = np.linspace(-22, 22, 10)
yy = np.linspace(-22, 22, 10)
xx, yy = np.meshgrid(xx, yy)
n_centres = np.hstack((np.ravel(xx)[:, np.newaxis],
np.ravel(yy)[:, np.newaxis]))
# Generate blobs to do a comparison between MiniBatchKMeans and Birch.
X, y = make_blobs(n_samples=100000, centers=n_centres, random_state=0)
# Use all colors that matplotlib provides by default.
colors_ = cycle(colors.cnames.keys())
fig = plt.figure(figsize=(12, 4))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.1, top=0.9)
# Compute clustering with Birch with and without the final clustering step
# and plot.
birch_models = [Birch(threshold=1.7, n_clusters=None),
Birch(threshold=1.7, n_clusters=100)]
final_step = ['without global clustering', 'with global clustering']
for ind, (birch_model, info) in enumerate(zip(birch_models, final_step)):
t = time()
birch_model.fit(X)
time_ = time() - t
print("Birch %s as the final step took %0.2f seconds" % (
info, time_))
# Plot result
labels = birch_model.labels_
centroids = birch_model.subcluster_centers_
n_clusters = np.unique(labels).size
print("n_clusters : %d" % n_clusters)
ax = fig.add_subplot(1, 3, ind + 1)
for this_centroid, k, col in zip(centroids, range(n_clusters), colors_):
mask = labels == k
ax.plot(X[mask, 0], X[mask, 1], 'w',
markerfacecolor=col, marker='.')
if birch_model.n_clusters is None:
ax.plot(this_centroid[0], this_centroid[1], '+', markerfacecolor=col,
markeredgecolor='k', markersize=5)
ax.set_ylim([-25, 25])
ax.set_xlim([-25, 25])
ax.set_autoscaley_on(False)
ax.set_title('Birch %s' % info)
# Compute clustering with MiniBatchKMeans.
mbk = MiniBatchKMeans(init='k-means++', n_clusters=100, batch_size=100,
n_init=10, max_no_improvement=10, verbose=0,
random_state=0)
t0 = time()
mbk.fit(X)
t_mini_batch = time() - t0
print("Time taken to run MiniBatchKMeans %0.2f seconds" % t_mini_batch)
mbk_means_labels_unique = np.unique(mbk.labels_)
ax = fig.add_subplot(1, 3, 3)
for this_centroid, k, col in zip(mbk.cluster_centers_,
range(n_clusters), colors_):
mask = mbk.labels_ == k
ax.plot(X[mask, 0], X[mask, 1], 'w', markerfacecolor=col, marker='.')
ax.plot(this_centroid[0], this_centroid[1], '+', markeredgecolor='k',
markersize=5)
ax.set_xlim([-25, 25])
ax.set_ylim([-25, 25])
ax.set_title("MiniBatchKMeans")
ax.set_autoscaley_on(False)
plt.show()
| bsd-3-clause |
yavalvas/yav_com | build/matplotlib/lib/matplotlib/tests/test_collections.py | 1 | 17190 | """
Tests specific to the collections module.
"""
from __future__ import (absolute_import, division, print_function,
unicode_literals)
import six
from nose.tools import assert_equal
import numpy as np
from numpy.testing import assert_array_equal, assert_array_almost_equal
import matplotlib.pyplot as plt
import matplotlib.collections as mcollections
import matplotlib.transforms as mtransforms
from matplotlib.collections import EventCollection
from matplotlib.testing.decorators import cleanup, image_comparison
def generate_EventCollection_plot():
'''
generate the initial collection and plot it
'''
positions = np.array([0., 1., 2., 3., 5., 8., 13., 21.])
extra_positions = np.array([34., 55., 89.])
orientation = 'horizontal'
lineoffset = 1
linelength = .5
linewidth = 2
color = [1, 0, 0, 1]
linestyle = 'solid'
antialiased = True
coll = EventCollection(positions,
orientation=orientation,
lineoffset=lineoffset,
linelength=linelength,
linewidth=linewidth,
color=color,
linestyle=linestyle,
antialiased=antialiased
)
fig = plt.figure()
splt = fig.add_subplot(1, 1, 1)
splt.add_collection(coll)
splt.set_title('EventCollection: default')
props = {'positions': positions,
'extra_positions': extra_positions,
'orientation': orientation,
'lineoffset': lineoffset,
'linelength': linelength,
'linewidth': linewidth,
'color': color,
'linestyle': linestyle,
'antialiased': antialiased
}
splt.set_xlim(-1, 22)
splt.set_ylim(0, 2)
return splt, coll, props
@image_comparison(baseline_images=['EventCollection_plot__default'])
def test__EventCollection__get_segments():
'''
check to make sure the default segments have the correct coordinates
'''
_, coll, props = generate_EventCollection_plot()
check_segments(coll,
props['positions'],
props['linelength'],
props['lineoffset'],
props['orientation'])
@cleanup
def test__EventCollection__get_positions():
'''
check to make sure the default positions match the input positions
'''
_, coll, props = generate_EventCollection_plot()
np.testing.assert_array_equal(props['positions'], coll.get_positions())
@cleanup
def test__EventCollection__get_orientation():
'''
check to make sure the default orientation matches the input
orientation
'''
_, coll, props = generate_EventCollection_plot()
assert_equal(props['orientation'], coll.get_orientation())
@cleanup
def test__EventCollection__is_horizontal():
'''
check to make sure the default orientation matches the input
orientation
'''
_, coll, _ = generate_EventCollection_plot()
assert_equal(True, coll.is_horizontal())
@cleanup
def test__EventCollection__get_linelength():
'''
check to make sure the default linelength matches the input linelength
'''
_, coll, props = generate_EventCollection_plot()
assert_equal(props['linelength'], coll.get_linelength())
@cleanup
def test__EventCollection__get_lineoffset():
'''
check to make sure the default lineoffset matches the input lineoffset
'''
_, coll, props = generate_EventCollection_plot()
assert_equal(props['lineoffset'], coll.get_lineoffset())
@cleanup
def test__EventCollection__get_linestyle():
'''
check to make sure the default linestyle matches the input linestyle
'''
_, coll, _ = generate_EventCollection_plot()
assert_equal(coll.get_linestyle(), [(None, None)])
@cleanup
def test__EventCollection__get_color():
'''
check to make sure the default color matches the input color
'''
_, coll, props = generate_EventCollection_plot()
np.testing.assert_array_equal(props['color'], coll.get_color())
check_allprop_array(coll.get_colors(), props['color'])
@image_comparison(baseline_images=['EventCollection_plot__set_positions'])
def test__EventCollection__set_positions():
'''
check to make sure set_positions works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_positions = np.hstack([props['positions'], props['extra_positions']])
coll.set_positions(new_positions)
np.testing.assert_array_equal(new_positions, coll.get_positions())
check_segments(coll, new_positions,
props['linelength'],
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: set_positions')
splt.set_xlim(-1, 90)
@image_comparison(baseline_images=['EventCollection_plot__add_positions'])
def test__EventCollection__add_positions():
'''
check to make sure add_positions works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_positions = np.hstack([props['positions'],
props['extra_positions'][0]])
coll.add_positions(props['extra_positions'][0])
np.testing.assert_array_equal(new_positions, coll.get_positions())
check_segments(coll,
new_positions,
props['linelength'],
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: add_positions')
splt.set_xlim(-1, 35)
@image_comparison(baseline_images=['EventCollection_plot__append_positions'])
def test__EventCollection__append_positions():
'''
check to make sure append_positions works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_positions = np.hstack([props['positions'],
props['extra_positions'][2]])
coll.append_positions(props['extra_positions'][2])
np.testing.assert_array_equal(new_positions, coll.get_positions())
check_segments(coll,
new_positions,
props['linelength'],
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: append_positions')
splt.set_xlim(-1, 90)
@image_comparison(baseline_images=['EventCollection_plot__extend_positions'])
def test__EventCollection__extend_positions():
'''
check to make sure extend_positions works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_positions = np.hstack([props['positions'],
props['extra_positions'][1:]])
coll.extend_positions(props['extra_positions'][1:])
np.testing.assert_array_equal(new_positions, coll.get_positions())
check_segments(coll,
new_positions,
props['linelength'],
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: extend_positions')
splt.set_xlim(-1, 90)
@image_comparison(baseline_images=['EventCollection_plot__switch_orientation'])
def test__EventCollection__switch_orientation():
'''
check to make sure switch_orientation works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_orientation = 'vertical'
coll.switch_orientation()
assert_equal(new_orientation, coll.get_orientation())
assert_equal(False, coll.is_horizontal())
new_positions = coll.get_positions()
check_segments(coll,
new_positions,
props['linelength'],
props['lineoffset'], new_orientation)
splt.set_title('EventCollection: switch_orientation')
splt.set_ylim(-1, 22)
splt.set_xlim(0, 2)
@image_comparison(
baseline_images=['EventCollection_plot__switch_orientation__2x'])
def test__EventCollection__switch_orientation_2x():
'''
check to make sure calling switch_orientation twice sets the
orientation back to the default
'''
splt, coll, props = generate_EventCollection_plot()
coll.switch_orientation()
coll.switch_orientation()
new_positions = coll.get_positions()
assert_equal(props['orientation'], coll.get_orientation())
assert_equal(True, coll.is_horizontal())
np.testing.assert_array_equal(props['positions'], new_positions)
check_segments(coll,
new_positions,
props['linelength'],
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: switch_orientation 2x')
@image_comparison(baseline_images=['EventCollection_plot__set_orientation'])
def test__EventCollection__set_orientation():
'''
check to make sure set_orientation works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_orientation = 'vertical'
coll.set_orientation(new_orientation)
assert_equal(new_orientation, coll.get_orientation())
assert_equal(False, coll.is_horizontal())
check_segments(coll,
props['positions'],
props['linelength'],
props['lineoffset'],
new_orientation)
splt.set_title('EventCollection: set_orientation')
splt.set_ylim(-1, 22)
splt.set_xlim(0, 2)
@image_comparison(baseline_images=['EventCollection_plot__set_linelength'])
def test__EventCollection__set_linelength():
'''
check to make sure set_linelength works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_linelength = 15
coll.set_linelength(new_linelength)
assert_equal(new_linelength, coll.get_linelength())
check_segments(coll,
props['positions'],
new_linelength,
props['lineoffset'],
props['orientation'])
splt.set_title('EventCollection: set_linelength')
splt.set_ylim(-20, 20)
@image_comparison(baseline_images=['EventCollection_plot__set_lineoffset'])
def test__EventCollection__set_lineoffset():
'''
check to make sure set_lineoffset works properly
'''
splt, coll, props = generate_EventCollection_plot()
new_lineoffset = -5.
coll.set_lineoffset(new_lineoffset)
assert_equal(new_lineoffset, coll.get_lineoffset())
check_segments(coll,
props['positions'],
props['linelength'],
new_lineoffset,
props['orientation'])
splt.set_title('EventCollection: set_lineoffset')
splt.set_ylim(-6, -4)
@image_comparison(baseline_images=['EventCollection_plot__set_linestyle'])
def test__EventCollection__set_linestyle():
'''
check to make sure set_linestyle works properly
'''
splt, coll, _ = generate_EventCollection_plot()
new_linestyle = 'dashed'
coll.set_linestyle(new_linestyle)
assert_equal(coll.get_linestyle(), [(0, (6.0, 6.0))])
splt.set_title('EventCollection: set_linestyle')
@image_comparison(baseline_images=['EventCollection_plot__set_linewidth'])
def test__EventCollection__set_linewidth():
'''
    check to make sure set_linewidth works properly
'''
splt, coll, _ = generate_EventCollection_plot()
new_linewidth = 5
coll.set_linewidth(new_linewidth)
assert_equal(coll.get_linewidth(), new_linewidth)
splt.set_title('EventCollection: set_linewidth')
@image_comparison(baseline_images=['EventCollection_plot__set_color'])
def test__EventCollection__set_color():
'''
check to make sure set_color works properly
'''
splt, coll, _ = generate_EventCollection_plot()
new_color = np.array([0, 1, 1, 1])
coll.set_color(new_color)
np.testing.assert_array_equal(new_color, coll.get_color())
check_allprop_array(coll.get_colors(), new_color)
splt.set_title('EventCollection: set_color')
def check_segments(coll, positions, linelength, lineoffset, orientation):
'''
check to make sure all values in the segment are correct, given a
particular set of inputs
note: this is not a test, it is used by tests
'''
segments = coll.get_segments()
if (orientation.lower() == 'horizontal'
or orientation.lower() == 'none' or orientation is None):
        # if horizontal, the event positions lie along the x-axis and the
        # segments extend along the y-axis
pos1 = 1
pos2 = 0
elif orientation.lower() == 'vertical':
        # if vertical, the event positions lie along the y-axis and the
        # segments extend along the x-axis
pos1 = 0
pos2 = 1
else:
raise ValueError("orientation must be 'horizontal' or 'vertical'")
# test to make sure each segment is correct
for i, segment in enumerate(segments):
assert_equal(segment[0, pos1], lineoffset + linelength / 2.)
assert_equal(segment[1, pos1], lineoffset - linelength / 2.)
assert_equal(segment[0, pos2], positions[i])
assert_equal(segment[1, pos2], positions[i])
def check_allprop(values, target):
'''
check to make sure all values match the given target
note: this is not a test, it is used by tests
'''
for value in values:
assert_equal(value, target)
def check_allprop_array(values, target):
'''
check to make sure all values match the given target if arrays
note: this is not a test, it is used by tests
'''
for value in values:
np.testing.assert_array_equal(value, target)
def test_null_collection_datalim():
col = mcollections.PathCollection([])
col_data_lim = col.get_datalim(mtransforms.IdentityTransform())
assert_array_equal(col_data_lim.get_points(),
mtransforms.Bbox.null().get_points())
@cleanup
def test_add_collection():
# Test if data limits are unchanged by adding an empty collection.
# Github issue #1490, pull #1497.
ax = plt.axes()
plt.figure()
ax2 = plt.axes()
coll = ax2.scatter([0, 1], [0, 1])
ax.add_collection(coll)
bounds = ax.dataLim.bounds
coll = ax2.scatter([], [])
ax.add_collection(coll)
assert_equal(ax.dataLim.bounds, bounds)
@cleanup
def test_quiver_limits():
ax = plt.axes()
x, y = np.arange(8), np.arange(10)
data = u = v = np.linspace(0, 10, 80).reshape(10, 8)
q = plt.quiver(x, y, u, v)
assert_equal(q.get_datalim(ax.transData).bounds, (0., 0., 7., 9.))
plt.figure()
ax = plt.axes()
x = np.linspace(-5, 10, 20)
y = np.linspace(-2, 4, 10)
y, x = np.meshgrid(y, x)
trans = mtransforms.Affine2D().translate(25, 32) + ax.transData
plt.quiver(x, y, np.sin(x), np.cos(y), transform=trans)
assert_equal(ax.dataLim.bounds, (20.0, 30.0, 15.0, 6.0))
@cleanup
def test_barb_limits():
ax = plt.axes()
x = np.linspace(-5, 10, 20)
y = np.linspace(-2, 4, 10)
y, x = np.meshgrid(y, x)
trans = mtransforms.Affine2D().translate(25, 32) + ax.transData
plt.barbs(x, y, np.sin(x), np.cos(y), transform=trans)
# The calculated bounds are approximately the bounds of the original data,
# this is because the entire path is taken into account when updating the
# datalim.
assert_array_almost_equal(ax.dataLim.bounds, (20, 30, 15, 6),
decimal=1)
@image_comparison(baseline_images=['EllipseCollection_test_image'],
extensions=['png'],
remove_text=True)
def test_EllipseCollection():
# Test basic functionality
fig, ax = plt.subplots()
x = np.arange(4)
y = np.arange(3)
X, Y = np.meshgrid(x, y)
XY = np.vstack((X.ravel(), Y.ravel())).T
ww = X/float(x[-1])
hh = Y/float(y[-1])
aa = np.ones_like(ww) * 20 # first axis is 20 degrees CCW from x axis
ec = mcollections.EllipseCollection(ww, hh, aa,
units='x',
offsets=XY,
transOffset=ax.transData,
facecolors='none')
ax.add_collection(ec)
ax.autoscale_view()
@image_comparison(baseline_images=['polycollection_close'],
extensions=['png'], remove_text=True)
def test_polycollection_close():
from mpl_toolkits.mplot3d import Axes3D
vertsQuad = [
[[0., 0.], [0., 1.], [1., 1.], [1., 0.]],
[[0., 1.], [2., 3.], [2., 2.], [1., 1.]],
[[2., 2.], [2., 3.], [4., 1.], [3., 1.]],
[[3., 0.], [3., 1.], [4., 1.], [4., 0.]]]
fig = plt.figure()
ax = Axes3D(fig)
colors = ['r', 'g', 'b', 'y', 'k']
zpos = list(range(5))
poly = mcollections.PolyCollection(
vertsQuad * len(zpos), linewidth=0.25)
poly.set_alpha(0.7)
## need to have a z-value for *each* polygon = element!
zs = []
cs = []
for z, c in zip(zpos, colors):
zs.extend([z] * len(vertsQuad))
cs.extend([c] * len(vertsQuad))
poly.set_color(cs)
ax.add_collection3d(poly, zs=zs, zdir='y')
## axis limit settings:
ax.set_xlim3d(0, 4)
ax.set_zlim3d(0, 3)
ax.set_ylim3d(0, 4)
if __name__ == '__main__':
import nose
nose.runmodule(argv=['-s', '--with-doctest'], exit=False)
| mit |
adrianschlatter/pyDynamics | tanuna/root.py | 1 | 23905 | # -*- coding: utf-8 -*-
"""
Root module of tanuna package.
@author: Adrian Schlatter
"""
import numpy as np
from scipy.linalg import expm
class ApproximationError(Exception):
pass
class MatrixError(Exception):
pass
class ConnectionError(Exception):
pass
def minor(A, i, j):
"""Returns matrix obtained by deleting row i and column j from matrix A."""
rows, cols = A.shape
rows = set(range(rows))
cols = set(range(cols))
M = A[list(rows - set([i])), :]
M = M[:, list(cols - set([j]))]
return(M)
def determinant(A):
"""Determinant of square matrix A. Can handle matrices of poly1d."""
if A.shape == (1, 1):
return(A[0, 0])
if A.shape == (0, 0):
return(1.)
cofacsum = 0.
for j in range(A.shape[1]):
cofacsum += (-1)**(0 + j) * A[0, j] * determinant(minor(A, 0, j))
return(cofacsum)
def cofactorMat(A):
"""Cofactor matrix of matrix A. Can handle matrices of poly1d."""
C = np.zeros(A.shape, dtype=object)
for i in range(A.shape[0]):
for j in range(A.shape[1]):
C[i, j] = (-1)**(i + j) * determinant(minor(A, i, j))
return(C)
def polyDiag(polyList):
"""Construct diagonal matrix from list of poly1d"""
N = len(polyList)
A = np.matrix(np.zeros((N, N), dtype=object))
for i in range(N):
A[i, i] = polyList[i]
return(A)
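# A minimal usage sketch (added for illustration, not part of the original
# module): polyDiag() and determinant() operate on matrices of numpy.poly1d,
# e.g. for the diagonal matrix diag(s, s + 1):
#
#     P = polyDiag([np.poly1d([1, 0]), np.poly1d([1, 1])])
#     determinant(P)          # -> poly1d([1., 1., 0.]), i.e. s**2 + s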
def connect(H, G, Gout=None, Hin=None):
"""
Connect outputs Gout of G to inputs Hin of H. The outputs and inputs of
the connected system are arranged as follows:
    - remaining outputs of G get the lower, the outputs of H the higher indices
- inputs of G get the lower, remaining inputs of H the higher indices
connect(H, G) is equivalent to H * G.
"""
if issubclass(type(H), type(G)):
try:
connection = H.__connect__(G, Gout, Hin)
except AttributeError:
connection = G.__rconnect__(H, Gout, Hin)
else:
try:
connection = G.__rconnect__(H, Gout, Hin)
except AttributeError:
connection = H.__connect__(G, Gout, Hin)
return(connection)
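# Usage sketch (illustrative, not part of the original module): put two
# systems in series, feeding every output of G into the corresponding input
# of H (equivalent to H * G), or wire up only selected ports by index:
#
#     series = connect(H, G)
#     partial = connect(H, G, Gout=(0,), Hin=(1,))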
def feedback(G, Gout, Gin):
"""Create feedback connection from outputs Gout to inputs Gin"""
return(G.__feedback__(Gout, Gin))
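# Usage sketch (illustrative): close a loop from output 1 of a 2x2 system G
# back onto its input 1; at least one input and one output must stay open:
#
#     closed = feedback(G, Gout=[1], Gin=[1])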
class CT_System(object):
"""
Describes a continuous-time system with dynamics described by ordinary
differential equations.
s: Internal state (vector) of the system
s0: Initial state of the system
u: External input (vector)
f(t, s, u): Dynamics of the system (ds/dt = f(t, s, u))
g(t, s, u): Function that maps state s to output y = g(t, s, u)
It is solved by simply calling it with an argument t. t is either a float
or array-like. In the latter case, the system is solved for all the times
t in the array.
"""
def __init__(self, f, g, s0):
pass
def __call__(self, t):
pass
def steadyStates(self, u0, t):
"""Returns a list of tuples (s_i, stability_i) with:
- s_i: A steady-state at time t, i.e. f(t, s_i, u0) = 0
- stability_i: True if this steady-state is stable, false otherwise
"""
raise NotImplementedError
def observable(self, t):
"""Returns whether the system is observable at time t (i.e. its
internal state is determinable from inputs u and outputs y)."""
raise NotImplementedError
def reachable(self, t):
"""Returns whether the system is reachable at time t (i.e. all states
are reachable by providing an appropriate input u(t))."""
raise NotImplementedError
def tangentLTI(self, s0, u0, t):
"""
Approximates the CT_System at time t near state s0 and input u0
by an LTISystem (linear, time-invariant system).
Raises ApproximationError if the system can not be linearized.
"""
raise NotImplementedError
class CT_LTI_System(CT_System):
"""Continuous-time, Linear, time-invariant system"""
def __init__(self, A, B, C, D, x0=None):
A, B, C, D = map(np.asmatrix, (A, B, C, D))
A, B, C, D = map(lambda M: M.astype('float'), (A, B, C, D))
order = A.shape[1]
if x0 is None:
x0 = np.matrix(np.zeros((order, 1)))
# Verify dimensions:
nout, nin = D.shape
if not (A.shape == (order, order) and B.shape == (order, nin) and
C.shape == (nout, order) and D.shape == (nout, nin)):
raise MatrixError('State matrices do not have proper shapes')
if not x0.shape == (order, 1):
raise MatrixError('Initial state has wrong shape')
self._A, self._B, self._C, self._D = A, B, C, D
self.x = np.matrix(x0)
@property
def ABCD(self):
return([self._A, self._B, self._C, self._D])
@property
def order(self):
"""The order of the system"""
return(self._A.shape[0])
@property
def shape(self):
"""Number of outputs and inputs"""
return(self._D.shape)
@property
def poles(self):
"""Eigenvalues of the state matrix"""
return(self.zpk[1])
@property
def stable(self):
return(np.all(self.poles.real < 0))
@property
def Wo(self):
"""Observability matrix"""
W = np.matrix(np.zeros((0, self._C.shape[1])))
for n in range(self.order):
W = np.vstack((W, self._C * self._A**n))
return(W)
@property
def Wr(self):
"""Reachability matrix"""
W = np.matrix(np.zeros((self._B.shape[0], 0)))
for n in range(self.order):
W = np.hstack((W, self._A**n * self._B))
return(W)
@property
def reachable(self):
"""Returns True if the system is reachable."""
return(np.linalg.matrix_rank(self.Wr) == self.order)
@property
def observable(self):
"""Returns True if the system is observable."""
return(np.linalg.matrix_rank(self.Wo) == self.order)
def _tResponse(self):
"""Automatically determines appropriate time axis for step- and
impulse-response plotting"""
tau = np.abs(1. / self.poles.real)
f = self.poles.imag / (2 * np.pi)
period = np.abs(1. / f[f != 0])
timescales = np.concatenate([tau, period])
dt = timescales.min() / 20.
T = tau.max() * 10.
return(np.arange(0., T, dt))
def stepResponse(self, t=None):
"""
Returns (t, ystep), where
ystep : Step response
t : Corresponding array of times
t is either provided as an argument to this function or determined
automatically.
"""
if t is None:
t = self._tResponse()
A, B, C, D = self.ABCD
steady = D - C * A.I * B
y = [C * A.I * expm(A * ti) * B + steady for ti in t]
return((t, np.array(y).reshape((-1,) + self.shape)))
def impulseResponse(self, t=None):
"""
Returns (t, yimpulse), where
yimpulse : Impulse response (*without* direct term D)
t : Corresponding array of times
t is either provided as an argument to this function or determined
automatically.
"""
if t is None:
t = self._tResponse()
A, B, C = self._A, self._B, self._C
y = [C * expm(A * ti) * B for ti in t]
return((t, np.array(y).reshape((-1,) + self.shape)))
def freqResponse(self, f=None):
"""
Returns (f, r), where
f : Array of frequencies
r : (Complex) frequency response
        f is either provided as an argument to this function or determined
automatically.
"""
# see [astrom_feedback]_, page 153
# Automatically determine frequency axis:
if f is None:
t = self._tResponse()
dt = t[1] - t[0]
T = t[-1] + dt
fmax = 1. / (2 * dt)
fmin = 1. / T
f = np.logspace(np.log10(fmin), np.log10(fmax), 200)
b, a = self.tf
s = 2 * np.pi * 1j * f
resp = np.zeros((len(f),) + b.shape, dtype=complex)
for i in range(b.shape[0]):
for j in range(b.shape[1]):
resp[:, i, j] = b[i, j](s) / a(s)
return(f, resp)
@property
def tf(self):
"""
Transfer-function representation [b, a] of the system. Returns
numerator (b) and denominator (a) coefficients.
.. math::
G(s) = \\frac{b[0] * s^0 + ... + b[m] * s^m}
{a[0] * s^0 + ... + a[n] * s^n}
"""
A, B, C, D = self.ABCD
Aprime = polyDiag([np.poly1d([1, 0])] * self.order) - A
det = determinant(Aprime)
nout = self.shape[0]
        numerator = C * cofactorMat(Aprime).T * B + polyDiag([det] * nout) * D
        denominator = det
        return(numerator, denominator)
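    # A small sketch (illustrative, not part of the original class): a pure
    # integrator has the transfer function G(s) = 1 / s:
    #
    #     G = CT_LTI_System([[0.]], [[1.]], [[1.]], [[0.]])
    #     b, a = G.tf       # b[0, 0] == poly1d([1.]), a == poly1d([1., 0.])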
@property
def zpk(self):
"""
Gain, Pole, Zero representation of the system. Returns a tuple
(z, p, k) with z the zeros, p the poles, and k the gain of the system.
p is an array. The format of z and k depends on the number of inputs
and outputs of the system:
For a SISO system z is an array and k is float. For a system with more
inputs or outputs, z and k are lists of 'shape' (nout, nin) containing
arrays and floats, respectively.
"""
b, a = self.tf
zeros = np.zeros(b.shape, dtype=list)
gains = np.zeros(b.shape)
for i in range(b.shape[0]):
for j in range(b.shape[1]):
zeros[i, j] = np.roots(b[i, j])
gains[i, j] = b[i, j][b[i, j].order]
poles = np.roots(a)
return(zeros, poles, gains)
def __feedback__(self, Gout, Gin):
G = self
Nports = np.min(G.shape)
if len(Gout) >= Nports:
# cannot connect all ports:
raise ConnectionError(
'at least 1 input and at least 1 output must remain unconnected')
        # connect one channel at a time. Start with Gout[0] => Gin[0]
iout = Gout[0]
jin = Gin[0]
        # Re-arrange ports so that iout and jin are the last output
# and the last input, respectively:
outorder = list(range(G.shape[0]))
outorder.pop(iout)
outorder += [iout]
inorder = list(range(G.shape[1]))
inorder.pop(jin)
inorder += [jin]
a, b, c, d = G.ABCD
b = b[:, inorder]
c = c[outorder, :]
d = d[:, inorder]
d = d[outorder, :]
# Connect feedback:
A = a + b[:, -1] * c[-1, :]
B = b[:, :-1] + b[:, -1] * d[-1, :-1]
C = c[:-1, :] + d[:-1, -1] * c[-1, :]
D = d[:-1, :-1] + d[:-1, -1] * d[-1, :-1]
        if len(Gout) == 1:
            # work done => return result
            return(CT_LTI_System(A, B, C, D, G.x))
        else:
            # More ports have to be connected => recurse on the partially
            # closed system. The indices of the remaining ports shift down
            # by one if they lie above the port that was just consumed:
            sys = CT_LTI_System(A, B, C, D, G.x)
            Gout_rest = [o - 1 if o > iout else o for o in Gout[1:]]
            Gin_rest = [j - 1 if j > jin else j for j in Gin[1:]]
            return(sys.__feedback__(Gout_rest, Gin_rest))
def __connect__(self, right, Gout=None, Hin=None):
H = self
G = right
if issubclass(type(G), CT_LTI_System):
# Prepare Gout, Hin:
# ===============================
if Gout is None:
# connect all outputs:
Gout = np.arange(G.shape[0])
if Hin is None:
# connect all inputs
Hin = np.arange(H.shape[1])
if len(Gout) != len(Hin):
raise ConnectionError(
'Number of inputs does not match number of outputs')
Gout = np.asarray(Gout)
Hin = np.asarray(Hin)
# Prepare connection matrices:
# ===============================
# u_h = Sh * y_g:
Sh = np.matrix(np.zeros((H.shape[1], G.shape[0])))
for k in range(len(Hin)):
i = Hin[k]
j = Gout[k]
Sh[i, j] = 1.
# u_h = sh * u_h,unconnected:
sh = np.matrix(np.zeros((H.shape[1], H.shape[1] - len(Hin))))
u_h_unconnected = list(set(range(H.shape[1])) - set(Hin))
sh[u_h_unconnected, :] = np.eye(H.shape[1] - len(Hin))
# y_g,unconnected = sg * y_g:
sg = np.matrix(np.zeros((G.shape[0] - len(Gout), G.shape[0])))
y_g_unconnected = list(set(range(G.shape[0])) - set(Gout))
sg[:, y_g_unconnected] = np.eye(G.shape[0] - len(Gout))
# Setup state matrices:
# ===============================
nH = H.order
nG = G.order
A = np.bmat([[G._A, np.zeros((nG, nH))],
[H._B * Sh * G._C, H._A]])
B = np.bmat([[G._B, np.zeros((nG, len(u_h_unconnected)))],
[H._B * Sh * G._D, H._B * sh]])
C = np.bmat([[sg * G._C, np.zeros((len(y_g_unconnected), nH))],
[H._D * Sh * G._C, H._C]])
D = np.bmat([[sg * G._D, np.zeros((len(y_g_unconnected),
len(u_h_unconnected)))],
[H._D * Sh * G._D, H._D * sh]])
x0 = np.vstack([G.x, H.x])
elif issubclass(type(G), CT_System):
# delegate to super class:
return(G.__rconnect__(H, Gout, Hin))
elif issubclass(type(G), np.matrix):
if H.shape[1] != G.shape[0]:
raise ConnectionError('No. inputs and outputs do not match')
# Multiply u by matrix before feeding into H:
A = np.matrix(H._A)
B = H._B * G
C = np.matrix(H._C)
D = H._D * G
x0 = np.matrix(H.x)
elif type(G) in [float, int]:
# Apply gain G on input side:
A = np.matrix(H._A)
B = np.matrix(H._B)
C = G * H._C
D = G * H._D
x0 = np.matrix(H.x)
else:
return(NotImplemented)
return(CT_LTI_System(A, B, C, D, x0))
def __rconnect__(self, left, Gout=None, Hin=None):
G = self
H = left
if issubclass(type(H), CT_LTI_System):
return(H.__connect__(G, Gout, Hin))
elif issubclass(type(H), CT_System):
# delegate to super class:
return(H.__connect__(G, Gout, Hin))
elif issubclass(type(H), np.matrix):
if H.shape[1] != G.shape[0]:
raise ConnectionError('No. inputs and outputs do not match')
# Multiply output of G by matrix:
A = np.matrix(G._A)
B = np.matrix(G._B)
C = H * G._C
D = H * G._D
x0 = np.matrix(G.x)
elif type(H) in [float, int]:
# Apply gain H on output side:
A = np.matrix(G._A)
B = np.matrix(G._B)
C = H * G._C
D = H * G._D
x0 = np.matrix(G.x)
else:
return(NotImplemented)
return(CT_LTI_System(A, B, C, D, x0))
def __add__(self, right):
G = self
nG = G.order
if issubclass(type(right), CT_LTI_System):
H = right
nH = H.order
if self.shape != H.shape:
raise ConnectionError('System shapes must be equal')
A = np.bmat([[G._A, np.zeros((nG, nH))],
[np.zeros((nH, nG)), H._A]])
B = np.vstack([G._B, H._B])
C = np.hstack([G._C, H._C])
D = G._D + H._D
x0 = np.vstack([G.x, H.x])
return(CT_LTI_System(A, B, C, D, x0))
elif issubclass(type(right), np.matrix):
if right.shape != G._D.shape:
raise MatrixError('Shapes of right and self._D have to match')
A = G._A
B = G._B
C = G._C
D = G._D + right
x0 = G.x
return(CT_LTI_System(A, B, C, D, x0))
elif type(right) in [int, float]:
right = np.matrix(np.ones(self.shape) * right)
return(self + right)
else:
return(NotImplemented)
def __radd__(self, left):
return(self + left)
def __sub__(self, right):
return(self + -right)
def __rsub__(self, left):
return(left + -self)
def __mul__(self, right):
return(self.__connect__(right))
def __rmul__(self, left):
return(self.__rconnect__(left))
def __neg__(self):
return(self * (-1))
def __truediv__(self, right):
if type(right) in [float, int]:
invright = 1. / float(right)
return(self * invright)
else:
return(NotImplemented)
def __or__(self, right):
"""Connect systems in parallel"""
Ag, Bg, Cg, Dg = self.ABCD
gout, gin = self.shape
ng = self.order
Ah, Bh, Ch, Dh = right.ABCD
hout, hin = right.shape
nh = right.order
        # Block-diagonal realization of the parallel connection:
        A = np.bmat([[Ag, np.zeros([ng, nh])],
                     [np.zeros([nh, ng]), Ah]])
        B = np.bmat([[Bg, np.zeros([ng, hin])],
                     [np.zeros([nh, gin]), Bh]])
        C = np.bmat([[Cg, np.zeros([gout, nh])],
                     [np.zeros([hout, ng]), Ch]])
        D = np.bmat([[Dg, np.zeros([gout, hin])],
                     [np.zeros([hout, gin]), Dh]])
x = np.vstack([self.x, right.x])
return(CT_LTI_System(A, B, C, D, x))
def __pow__(self, power):
"""Raise system to integer power"""
if type(power) is not int:
return(NotImplemented)
if power < 1:
return(NotImplemented)
if power == 1:
return(self)
else:
return(self * self**(power - 1))
def Thetaphi(b, a):
"""Translate filter-coefficient arrays b and a to Theta, phi
representation:
phi(B)*y_t = Theta(B)*x_t
    Theta, phi = Thetaphi(b, a) are the coefficients of the back-shift-operator
polynomials (index i belongs to B^i)"""
phi = np.array(a)
if len(phi) > 1:
phi[1:] = -phi[1:]
Theta = np.array(b)
return [Theta, phi]
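# Example (sketch, added for illustration): for y_t = 0.5*y_{t-1} + x_t,
# i.e. b = [1.], a = [1., 0.5] in the convention documented in ba():
#
#     Thetaphi([1.], [1., 0.5])   # -> [array([1.]), array([ 1. , -0.5])]
#
# which reads (1 - 0.5 B) y_t = 1 * x_t in back-shift notation.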
def ba(Theta, phi):
"""Translate backshift-operator polynomials Theta and phi to filter
    coefficient arrays b, a.
a[0]*y[t] = a[1]*y[t-1] + ... + a[n]*y[t-n] + b[0]*x[t] + ... + b[m]*x[t-m]
"""
    # XXX these b and a are not compatible with scipy.lfilter. Apparently,
# scipy.lfilter expects Theta and phi
# Thetaphi() is its own inverse:
return(Thetaphi(Theta, phi))
def differenceEquation(b, a):
"""Takes filter coefficient arrays b and a and returns string with
    difference equation using powers of B, where B is the backshift operator."""
Theta, phi = Thetaphi(b, a)
s = '('
for i in range(len(phi)):
s += '%.2f B^%d+' % (phi[i], i)
s = s[:-1] + ')*y_t = ('
for i in range(len(Theta)):
s += '%.2f B^%d+' % (Theta[i], i)
s = s[:-1]+')*x_t'
return s
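# Example (sketch, added for illustration; the '+-' stems from formatting
# negative coefficients):
#
#     differenceEquation([1.], [1., .5])
#     # -> '(1.00 B^0+-0.50 B^1)*y_t = (1.00 B^0)*x_t'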
class DT_LTV_System():
"""Implements the discrete linear, time-variant system with input vector
u[t], internal state vector x[t], and output vector y[t]:
x[t+1] = A[t]*x[t] + B[t]*u[t]
y[t] = C*x[t] + D*u[t]
where
A[t]: state matrices
B[t]: input matrices
C[t]: output matrices
D[t]: feedthrough matrices
The system is initialized with state vector x[0] = X0.
"""
def __init__(self, At, Bt, Ct, Dt, X0):
self.At = At
self.Bt = Bt
self.Ct = Ct
self.Dt = Dt
self.X = X0
self.t = 0
def update(self, U):
U.shape = (-1, 1)
        t = min(self.t, len(self.At) - 1)  # clamp to the last defined step
self.X = np.dot(self.At[t], self.X) + np.dot(self.Bt[t], U)
self.t += 1
return np.dot(self.Ct[t], self.X) + np.dot(self.Dt[t], U)
def feed(self, Ut):
return np.concatenate([self.update(U) for U in Ut.T]).T
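# Usage sketch (illustrative, not part of the original class): a scalar
# system x[t+1] = 0.5*x[t] + u[t] held constant over two time steps:
#
#     At = [np.array([[0.5]])] * 2
#     Bt = [np.array([[1.]])] * 2
#     Ct = [np.array([[1.]])] * 2
#     Dt = [np.array([[0.]])] * 2
#     sys = DT_LTV_System(At, Bt, Ct, Dt, X0=np.array([[0.]]))
#     y = sys.feed(np.array([[1., 0.]]))  # one output sample per input column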
class DT_LTI_System(object):
"""Implements the discrete-time linear, time-invariant system with input
vector u[t], internal state vector x[t], and output vector y[t]:
x[t+1] = A * x[t] + B * u[t]
y[t] = C * x[t] + D * u[t]
where
A: state matrix
B: input matrix
C: output matrix
D: feedthrough matrix
The system is initialized with state vector x[0] = x0.
"""
def __init__(self, A, B, C, D, x0=np.matrix([0., 0.]).T):
        self.A, self.B, self.C, self.D = A, B, C, D
self.x = x0
@classmethod
    def fromTransferFunction(cls, Theta, phi):
"""Initialize DiscreteLTI instance from transfer-function coefficients
'Theta' and 'phi'."""
raise NotImplementedError
def __repr__(self):
raise NotImplementedError
def stable(self):
"""Returns True if the system is strictly stable"""
raise NotImplementedError
def observable(self):
"""Returns true if the system is observable"""
raise NotImplementedError
def reachable(self):
"""Returns True if the system is observable"""
raise NotImplementedError
def tf(self):
"""Returns the transfer function (b, a) where 'b' are the coefficients
of the nominator polynomial and 'a' are the coefficients of the
denominator polynomial."""
raise NotImplementedError
def proper(self):
"""Returns true if the system's transfer function is strictly proper,
i.e. the degree of the numerator is less than the degree of the
denominator."""
raise NotImplementedError
def __add__(self, right):
raise NotImplementedError
def __radd__(self, left):
raise NotImplementedError
def __rsub__(self, left):
raise NotImplementedError
def __mul__(self, right):
raise NotImplementedError
def __rmul__(self, left):
raise NotImplementedError
def __iadd__(self, right):
raise NotImplementedError
def __isub__(self, right):
raise NotImplementedError
def __imul__(self, right):
raise NotImplementedError
def __idiv__(self, right):
raise NotImplementedError
if __name__ == '__main__':
import matplotlib.pyplot as pl
pl.close('all')
from CT_LTI import LowPass, HighPass
J = connect(LowPass(10.), LowPass(10.), Gout=(), Hin=())
J = np.matrix([[1, 1]]) * J * np.matrix([[1], [1]])
w0 = 2 * np.pi * 10
zeta = 0.5
k = 1.
A = np.matrix([[0, w0], [-w0, -2 * zeta * w0]])
B = np.matrix([0, k * w0]).T
C = np.matrix([k, 0.])
D = np.matrix([0.])
G = CT_LTI_System(A, B, C, D)
G = HighPass(10, 2)
pl.figure()
# STEP RESPONSE
pl.subplot(4, 1, 1)
pl.title('Step-Response')
pl.plot(*G.stepResponse())
pl.xlabel('Time After Step (s)')
pl.ylabel('y')
# IMPULSE RESPONSE
pl.subplot(4, 1, 2)
pl.title('Impulse-Response')
pl.plot(*G.impulseResponse())
pl.xlabel('Time After Impulse (s)')
pl.ylabel('y')
# BODE PLOT
ax1 = pl.subplot(4, 1, 3)
ax1.set_title('Bode Plot')
f, Chi = G.freqResponse()
Chi.shape = (-1)
ax1.semilogx(f, 20 * np.log10(np.abs(Chi)), r'b-')
ax1.set_xlabel('Frequency (Hz)')
ax1.set_ylabel('Magnitude (dB)')
ax2 = ax1.twinx()
ax2.semilogx(f, np.angle(Chi) / np.pi, r'r-')
ax2.set_ylabel('Phase ($\pi$)')
# NYQUIST PLOT
ax = pl.subplot(4, 1, 4)
pl.title('Nyquist Plot')
pl.plot(np.real(Chi), np.imag(Chi))
pl.plot([-1], [0], r'ro')
pl.xlim([-2.5, 2])
pl.ylim([-1.5, 0.5])
ax.set_aspect('equal')
pl.axhline(y=0, color='k')
pl.axvline(x=0, color='k')
pl.xlabel('Real Part')
pl.ylabel('Imaginary Part')
| bsd-3-clause |
mdelorme/MNn | mnn/examples/simple_model.py | 1 | 1497 | import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from mnn.model import MNnModel
# Single disc
model = MNnModel()
model.add_disc('z', 1.0, 10.0, 100.0)
# Evaluating density and potential :
print(model.evaluate_density(1.0, 2.0, -0.5))
print(model.evaluate_potential(1.0, 2.0, -0.5))
print(model.evaluate_force(1.0, 2.0, -0.5))
# Using vectors to evaluate density along an axis :
x = np.linspace(0.0, 30.0, 100)
density = model.evaluate_density(x, 0.0, 0.0)
fig = plt.plot(x, density)
plt.show()
# Plotting density meshgrid
x, y, z, v = model.generate_dataset_meshgrid((0.0, 0.0, -10.0), (30.0, 0.0, 10.0), (300, 1, 200))
fig = plt.imshow(v[:, 0].T)
plt.show()
# Contour plot
x = np.linspace(0.0, 30.0, 300)
z = np.linspace(-10.0, 10.0, 200)
plt.contour(x, z, v[:, 0].T)
plt.show()
# Plotting force meshgrid
plt.close('all')
x, y, z, f = model.generate_dataset_meshgrid((-30.0, -30.0, 0.0), (30.0, 30.0, 0.0), (30, 30, 1), 'force')
x = x[:, :, 0].reshape(-1)
y = y[:, :, 0].reshape(-1)
fx = f[0, :, :, 0].reshape(-1)
fy = f[1, :, :, 0].reshape(-1)
extent = [x.min(), x.max(), y.min(), y.max()]
plt.figure(figsize=(10, 10))
gs = gridspec.GridSpec(2, 2)
ax1 = plt.subplot(gs[1, 0])
pl1 = ax1.imshow(f[1, :, :, 0].T, extent=extent, aspect='auto')
ax2 = plt.subplot(gs[0, 1])
pl2 = ax2.imshow(f[0, :, :, 0].T, extent=extent, aspect='auto')
ax3 = plt.subplot(gs[1, 1])
pl3 = ax3.quiver(x.T, y.T, fx.T, fy.T, units='width', scale=0.045)
plt.show()
| mit |
stephenliu1989/msmbuilder | msmbuilder/cluster/kcenters.py | 3 | 6519 | # Author: Robert McGibbon <[email protected]>
# Contributors:
# Copyright (c) 2014, Stanford University
# All rights reserved.
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
from __future__ import absolute_import, print_function, division
import numpy as np
from sklearn.utils import check_random_state
from sklearn.base import ClusterMixin, TransformerMixin
from .. import libdistance
from . import MultiSequenceClusterMixin
from ..base import BaseEstimator
__all__ = ['KCenters']
#-----------------------------------------------------------------------------
# Code
#-----------------------------------------------------------------------------
class _KCenters(ClusterMixin, TransformerMixin):
"""K-Centers clustering
Cluster a vector or Trajectory dataset using a simple heuristic to minimize
the maximum distance from any data point to its assigned cluster center.
The runtime of this algorithm is O(kN), where k is the number of
clusters and N is the size of the dataset, making it one of the least
expensive clustering algorithms available.
Parameters
----------
n_clusters : int, optional, default: 8
The number of clusters to form as well as the number of
centroids to generate.
metric : {"euclidean", "sqeuclidean", "cityblock", "chebyshev", "canberra",
"braycurtis", "hamming", "jaccard", "cityblock", "rmsd"}
The distance metric to use. metric = "rmsd" requires that sequences
passed to ``fit()`` be ```md.Trajectory```; other distance metrics
require ``np.ndarray``s.
random_state : integer or numpy.RandomState, optional
The generator used to initialize the centers. If an integer is
given, it fixes the seed. Defaults to the global numpy random
number generator.
References
----------
.. [1] Gonzalez, Teofilo F. "Clustering to minimize the maximum
intercluster distance." Theor. Comput. Sci. 38 (1985): 293-306.
.. [2] Beauchamp, Kyle A., et al. "MSMBuilder2: modeling conformational
dynamics on the picosecond to millisecond scale." J. Chem. Theory.
Comput. 7.10 (2011): 3412-3419.
Attributes
----------
cluster_ids_ : array, [n_clusters]
Index of the data point that each cluster label corresponds to.
cluster_centers_ : array, [n_clusters, n_features] or md.Trajectory
Coordinates of cluster centers
labels_ : array, [n_samples,]
The label of each point is an integer in [0, n_clusters).
distances_ : array, [n_samples,]
Distance from each sample to the cluster center it is
assigned to.
inertia_ : float
Sum of distances of samples to their closest cluster center.
"""
def __init__(self, n_clusters=8, metric='euclidean', random_state=None):
self.n_clusters = n_clusters
self.metric = metric
self.random_state = random_state
def fit(self, X, y=None):
n_samples = len(X)
new_center_index = check_random_state(self.random_state).randint(0, n_samples)
self.labels_ = np.zeros(n_samples, dtype=int)
self.distances_ = np.empty(n_samples, dtype=float)
self.distances_.fill(np.inf)
cluster_ids_ = []
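        # Greedy k-centers (Gonzalez, 1985): each pass adds the point that is
        # currently farthest from all chosen centers as the next center, then
        # refreshes the per-sample distances and labels.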
for i in range(self.n_clusters):
d = libdistance.dist(X, X[new_center_index], metric=self.metric)
mask = (d < self.distances_)
self.distances_[mask] = d[mask]
self.labels_[mask] = i
cluster_ids_.append(new_center_index)
new_center_index = np.argmax(self.distances_)
self.cluster_ids_ = cluster_ids_
self.cluster_centers_ = X[cluster_ids_]
self.inertia_ = np.sum(self.distances_)
return self
def predict(self, X):
"""Predict the closest cluster each sample in X belongs to.
In the vector quantization literature, `cluster_centers_` is called
the code book and each value returned by `predict` is the index of
the closest code in the code book.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
New data to predict.
Returns
-------
Y : array, shape [n_samples,]
Index of the closest center each sample belongs to.
"""
labels, inertia = libdistance.assign_nearest(
X, self.cluster_centers_, metric=self.metric)
return labels
def fit_predict(self, X, y=None):
return self.fit(X, y).labels_
class KCenters(MultiSequenceClusterMixin, _KCenters, BaseEstimator):
_allow_trajectory = True
__doc__ = _KCenters.__doc__[: _KCenters.__doc__.find('Attributes')] + \
'''
Attributes
----------
`cluster_centers_` : array, [n_clusters, n_features]
Coordinates of cluster centers
`labels_` : list of arrays, each of shape [sequence_length, ]
`labels_[i]` is an array of the labels of each point in
sequence `i`. The label of each point is an integer in
[0, n_clusters).
`distances_` : list of arrays, each of shape [sequence_length, ]
        `distances_[i]` is an array of the distances from each point in
        sequence `i` to the cluster center it is assigned to.
'''
def fit(self, sequences, y=None):
"""Fit the kcenters clustering on the data
Parameters
----------
sequences : list of array-like, each of shape [sequence_length, n_features]
A list of multivariate timeseries, or ``md.Trajectory``. Each
sequence may have a different length, but they all must have the
same number of features, or the same number of atoms if they are
``md.Trajectory``s.
Returns
-------
self
"""
MultiSequenceClusterMixin.fit(self, sequences)
self.distances_ = self._split(self.distances_)
return self
def summarize(self):
return """KCenters clustering
--------------------
n_clusters : {n_clusters}
metric : {metric}
Inertia : {inertia}
Mean distance : {mean_distance}
Max distance : {max_distance}
""".format(n_clusters=self.n_clusters, metric=self.metric,
inertia=self.inertia_, mean_distance=np.mean(np.concatenate(self.distances_)),
max_distance=np.max(np.concatenate(self.distances_)))
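# Usage sketch (illustrative, not part of the library; synthetic data):
#
#     import numpy as np
#     from msmbuilder.cluster import KCenters
#
#     trajs = [np.random.randn(500, 3), np.random.randn(400, 3)]
#     kcenters = KCenters(n_clusters=10, metric='euclidean', random_state=0)
#     kcenters.fit(trajs)
#     print(kcenters.summarize())
#     # kcenters.labels_ and kcenters.distances_ are lists with one array
#     # per input trajectory.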
| lgpl-2.1 |
Scapogo/zipline | tests/test_restrictions.py | 6 | 17001 | import pandas as pd
from pandas.util.testing import assert_series_equal
from six import iteritems
from functools import partial
from toolz import groupby
from zipline.finance.asset_restrictions import (
RESTRICTION_STATES,
Restriction,
HistoricalRestrictions,
StaticRestrictions,
SecurityListRestrictions,
NoRestrictions,
_UnionRestrictions,
)
from zipline.testing import parameter_space
from zipline.testing.fixtures import (
WithDataPortal,
ZiplineTestCase,
)
def str_to_ts(dt_str):
return pd.Timestamp(dt_str, tz='UTC')
FROZEN = RESTRICTION_STATES.FROZEN
ALLOWED = RESTRICTION_STATES.ALLOWED
MINUTE = pd.Timedelta(minutes=1)
class RestrictionsTestCase(WithDataPortal, ZiplineTestCase):
ASSET_FINDER_EQUITY_SIDS = 1, 2, 3
@classmethod
def init_class_fixtures(cls):
super(RestrictionsTestCase, cls).init_class_fixtures()
cls.ASSET1 = cls.asset_finder.retrieve_asset(1)
cls.ASSET2 = cls.asset_finder.retrieve_asset(2)
cls.ASSET3 = cls.asset_finder.retrieve_asset(3)
cls.ALL_ASSETS = [cls.ASSET1, cls.ASSET2, cls.ASSET3]
def assert_is_restricted(self, rl, asset, dt):
self.assertTrue(rl.is_restricted(asset, dt))
def assert_not_restricted(self, rl, asset, dt):
self.assertFalse(rl.is_restricted(asset, dt))
def assert_all_restrictions(self, rl, expected, dt):
self.assert_many_restrictions(rl, self.ALL_ASSETS, expected, dt)
def assert_many_restrictions(self, rl, assets, expected, dt):
assert_series_equal(
rl.is_restricted(assets, dt),
pd.Series(index=pd.Index(assets), data=expected),
)
@parameter_space(
date_offset=(
pd.Timedelta(0),
pd.Timedelta('1 minute'),
pd.Timedelta('15 hours 5 minutes')
),
restriction_order=(
list(range(6)), # Keep restrictions in order.
[0, 2, 1, 3, 5, 4], # Re-order within asset.
[0, 3, 1, 4, 2, 5], # Scramble assets, maintain per-asset order.
[0, 5, 2, 3, 1, 4], # Scramble assets and per-asset order.
),
__fail_fast=True,
)
def test_historical_restrictions(self, date_offset, restriction_order):
"""
Test historical restrictions for both interday and intraday
restrictions, as well as restrictions defined in/not in order, for both
single- and multi-asset queries
"""
def rdate(s):
"""Convert a date string into a restriction for that date."""
# Add date_offset to check that we handle intraday changes.
return str_to_ts(s) + date_offset
base_restrictions = [
Restriction(self.ASSET1, rdate('2011-01-04'), FROZEN),
Restriction(self.ASSET1, rdate('2011-01-05'), ALLOWED),
Restriction(self.ASSET1, rdate('2011-01-06'), FROZEN),
Restriction(self.ASSET2, rdate('2011-01-05'), FROZEN),
Restriction(self.ASSET2, rdate('2011-01-06'), ALLOWED),
Restriction(self.ASSET2, rdate('2011-01-07'), FROZEN),
]
# Scramble the restrictions based on restriction_order to check that we
# don't depend on the order in which restrictions are provided to us.
all_restrictions = [base_restrictions[i] for i in restriction_order]
restrictions_by_asset = groupby(lambda r: r.asset, all_restrictions)
rl = HistoricalRestrictions(all_restrictions)
assert_not_restricted = partial(self.assert_not_restricted, rl)
assert_is_restricted = partial(self.assert_is_restricted, rl)
assert_all_restrictions = partial(self.assert_all_restrictions, rl)
# Check individual restrictions.
for asset, r_history in iteritems(restrictions_by_asset):
freeze_dt, unfreeze_dt, re_freeze_dt = (
sorted([r.effective_date for r in r_history])
)
# Starts implicitly unrestricted. Restricted on or after the freeze
assert_not_restricted(asset, freeze_dt - MINUTE)
assert_is_restricted(asset, freeze_dt)
assert_is_restricted(asset, freeze_dt + MINUTE)
# Unrestricted on or after the unfreeze
assert_is_restricted(asset, unfreeze_dt - MINUTE)
assert_not_restricted(asset, unfreeze_dt)
assert_not_restricted(asset, unfreeze_dt + MINUTE)
# Restricted again on or after the freeze
assert_not_restricted(asset, re_freeze_dt - MINUTE)
assert_is_restricted(asset, re_freeze_dt)
assert_is_restricted(asset, re_freeze_dt + MINUTE)
# Should stay restricted for the rest of time
assert_is_restricted(asset, re_freeze_dt + MINUTE * 1000000)
# Check vectorized restrictions.
# Expected results for [self.ASSET1, self.ASSET2, self.ASSET3],
# ASSET3 is always False as it has no defined restrictions
# 01/04 XX:00 ASSET1: ALLOWED --> FROZEN; ASSET2: ALLOWED
d0 = rdate('2011-01-04')
assert_all_restrictions([False, False, False], d0 - MINUTE)
assert_all_restrictions([True, False, False], d0)
assert_all_restrictions([True, False, False], d0 + MINUTE)
# 01/05 XX:00 ASSET1: FROZEN --> ALLOWED; ASSET2: ALLOWED --> FROZEN
d1 = rdate('2011-01-05')
assert_all_restrictions([True, False, False], d1 - MINUTE)
assert_all_restrictions([False, True, False], d1)
assert_all_restrictions([False, True, False], d1 + MINUTE)
# 01/06 XX:00 ASSET1: ALLOWED --> FROZEN; ASSET2: FROZEN --> ALLOWED
d2 = rdate('2011-01-06')
assert_all_restrictions([False, True, False], d2 - MINUTE)
assert_all_restrictions([True, False, False], d2)
assert_all_restrictions([True, False, False], d2 + MINUTE)
# 01/07 XX:00 ASSET1: FROZEN; ASSET2: ALLOWED --> FROZEN
d3 = rdate('2011-01-07')
assert_all_restrictions([True, False, False], d3 - MINUTE)
assert_all_restrictions([True, True, False], d3)
assert_all_restrictions([True, True, False], d3 + MINUTE)
# Should stay restricted for the rest of time
assert_all_restrictions(
[True, True, False],
d3 + (MINUTE * 10000000)
)
def test_historical_restrictions_consecutive_states(self):
"""
Test that defining redundant consecutive restrictions still works
"""
rl = HistoricalRestrictions([
Restriction(self.ASSET1, str_to_ts('2011-01-04'), ALLOWED),
Restriction(self.ASSET1, str_to_ts('2011-01-05'), ALLOWED),
Restriction(self.ASSET1, str_to_ts('2011-01-06'), FROZEN),
Restriction(self.ASSET1, str_to_ts('2011-01-07'), FROZEN),
])
assert_not_restricted = partial(self.assert_not_restricted, rl)
assert_is_restricted = partial(self.assert_is_restricted, rl)
# (implicit) ALLOWED --> ALLOWED
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-04') - MINUTE)
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-04'))
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-04') + MINUTE)
# ALLOWED --> ALLOWED
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-05') - MINUTE)
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-05'))
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-05') + MINUTE)
# ALLOWED --> FROZEN
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-06') - MINUTE)
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-06'))
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-06') + MINUTE)
# FROZEN --> FROZEN
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-07') - MINUTE)
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-07'))
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-07') + MINUTE)
def test_static_restrictions(self):
"""
Test single- and multi-asset queries on static restrictions
"""
restricted_a1 = self.ASSET1
restricted_a2 = self.ASSET2
unrestricted_a3 = self.ASSET3
rl = StaticRestrictions([restricted_a1, restricted_a2])
assert_not_restricted = partial(self.assert_not_restricted, rl)
assert_is_restricted = partial(self.assert_is_restricted, rl)
assert_all_restrictions = partial(self.assert_all_restrictions, rl)
for dt in [str_to_ts(dt_str) for dt_str in ('2011-01-03',
'2011-01-04',
'2011-01-04 1:01',
'2020-01-04')]:
assert_is_restricted(restricted_a1, dt)
assert_is_restricted(restricted_a2, dt)
assert_not_restricted(unrestricted_a3, dt)
assert_all_restrictions([True, True, False], dt)
def test_security_list_restrictions(self):
"""
Test single- and multi-asset queries on restrictions defined by
zipline.utils.security_list.SecurityList
"""
# A mock SecurityList object filled with fake data
class SecurityList(object):
def __init__(self, assets_by_dt):
self.assets_by_dt = assets_by_dt
def current_securities(self, dt):
return self.assets_by_dt[dt]
assets_by_dt = {
str_to_ts('2011-01-03'): [self.ASSET1],
str_to_ts('2011-01-04'): [self.ASSET2, self.ASSET3],
str_to_ts('2011-01-05'): [self.ASSET1, self.ASSET2, self.ASSET3],
}
rl = SecurityListRestrictions(SecurityList(assets_by_dt))
assert_not_restricted = partial(self.assert_not_restricted, rl)
assert_is_restricted = partial(self.assert_is_restricted, rl)
assert_all_restrictions = partial(self.assert_all_restrictions, rl)
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-03'))
assert_not_restricted(self.ASSET2, str_to_ts('2011-01-03'))
assert_not_restricted(self.ASSET3, str_to_ts('2011-01-03'))
assert_all_restrictions(
[True, False, False], str_to_ts('2011-01-03')
)
assert_not_restricted(self.ASSET1, str_to_ts('2011-01-04'))
assert_is_restricted(self.ASSET2, str_to_ts('2011-01-04'))
assert_is_restricted(self.ASSET3, str_to_ts('2011-01-04'))
assert_all_restrictions(
[False, True, True], str_to_ts('2011-01-04')
)
assert_is_restricted(self.ASSET1, str_to_ts('2011-01-05'))
assert_is_restricted(self.ASSET2, str_to_ts('2011-01-05'))
assert_is_restricted(self.ASSET3, str_to_ts('2011-01-05'))
assert_all_restrictions(
[True, True, True],
str_to_ts('2011-01-05')
)
def test_noop_restrictions(self):
"""
Test single- and multi-asset queries on no-op restrictions
"""
rl = NoRestrictions()
assert_not_restricted = partial(self.assert_not_restricted, rl)
assert_all_restrictions = partial(self.assert_all_restrictions, rl)
for dt in [str_to_ts(dt_str) for dt_str in ('2011-01-03',
'2011-01-04',
'2020-01-04')]:
assert_not_restricted(self.ASSET1, dt)
assert_not_restricted(self.ASSET2, dt)
assert_not_restricted(self.ASSET3, dt)
assert_all_restrictions([False, False, False], dt)
def test_union_restrictions(self):
"""
Test that we appropriately union restrictions together, including
eliminating redundancy (ignoring NoRestrictions) and flattening out
the underlying sub-restrictions of _UnionRestrictions
"""
no_restrictions_rl = NoRestrictions()
st_restrict_asset1 = StaticRestrictions([self.ASSET1])
st_restrict_asset2 = StaticRestrictions([self.ASSET2])
st_restricted_assets = [self.ASSET1, self.ASSET2]
before_frozen_dt = str_to_ts('2011-01-05')
freeze_dt_1 = str_to_ts('2011-01-06')
unfreeze_dt = str_to_ts('2011-01-06 16:00')
hist_restrict_asset3_1 = HistoricalRestrictions([
Restriction(self.ASSET3, freeze_dt_1, FROZEN),
Restriction(self.ASSET3, unfreeze_dt, ALLOWED)
])
freeze_dt_2 = str_to_ts('2011-01-07')
hist_restrict_asset3_2 = HistoricalRestrictions([
Restriction(self.ASSET3, freeze_dt_2, FROZEN)
])
# A union of a NoRestrictions with a non-trivial restriction should
# yield the original restriction
trivial_union_restrictions = no_restrictions_rl | st_restrict_asset1
self.assertIsInstance(trivial_union_restrictions, StaticRestrictions)
# A union of two non-trivial restrictions should yield a
# UnionRestrictions
st_union_restrictions = st_restrict_asset1 | st_restrict_asset2
self.assertIsInstance(st_union_restrictions, _UnionRestrictions)
arb_dt = str_to_ts('2011-01-04')
self.assert_is_restricted(st_restrict_asset1, self.ASSET1, arb_dt)
self.assert_not_restricted(st_restrict_asset1, self.ASSET2, arb_dt)
self.assert_not_restricted(st_restrict_asset2, self.ASSET1, arb_dt)
self.assert_is_restricted(st_restrict_asset2, self.ASSET2, arb_dt)
self.assert_is_restricted(st_union_restrictions, self.ASSET1, arb_dt)
self.assert_is_restricted(st_union_restrictions, self.ASSET2, arb_dt)
self.assert_many_restrictions(
st_restrict_asset1,
st_restricted_assets,
[True, False],
arb_dt
)
self.assert_many_restrictions(
st_restrict_asset2,
st_restricted_assets,
[False, True],
arb_dt
)
self.assert_many_restrictions(
st_union_restrictions,
st_restricted_assets,
[True, True],
arb_dt
)
# A union of a 2-sub-restriction UnionRestrictions and a
# non-trivial restrictions should yield a UnionRestrictions with
# 3 sub restrictions. Works with UnionRestrictions on both the left
# side or right side
for r1, r2 in [
(st_union_restrictions, hist_restrict_asset3_1),
(hist_restrict_asset3_1, st_union_restrictions)
]:
union_or_hist_restrictions = r1 | r2
self.assertIsInstance(
union_or_hist_restrictions, _UnionRestrictions)
self.assertEqual(
len(union_or_hist_restrictions.sub_restrictions), 3)
# Includes the two static restrictions on ASSET1 and ASSET2,
# and the historical restriction on ASSET3 starting on freeze_dt_1
# and ending on unfreeze_dt
self.assert_all_restrictions(
union_or_hist_restrictions,
[True, True, False],
before_frozen_dt
)
self.assert_all_restrictions(
union_or_hist_restrictions,
[True, True, True],
freeze_dt_1
)
self.assert_all_restrictions(
union_or_hist_restrictions,
[True, True, False],
unfreeze_dt
)
self.assert_all_restrictions(
union_or_hist_restrictions,
[True, True, False],
freeze_dt_2
)
# A union of two 2-sub-restrictions UnionRestrictions should yield a
# UnionRestrictions with 4 sub restrictions.
hist_union_restrictions = \
hist_restrict_asset3_1 | hist_restrict_asset3_2
multi_union_restrictions = \
st_union_restrictions | hist_union_restrictions
self.assertIsInstance(multi_union_restrictions, _UnionRestrictions)
self.assertEqual(len(multi_union_restrictions.sub_restrictions), 4)
# Includes the two static restrictions on ASSET1 and ASSET2, the
# first historical restriction on ASSET3 starting on freeze_dt_1 and
# ending on unfreeze_dt, and the second historical restriction on
# ASSET3 starting on freeze_dt_2
self.assert_all_restrictions(
multi_union_restrictions,
[True, True, False],
before_frozen_dt
)
self.assert_all_restrictions(
multi_union_restrictions,
[True, True, True],
freeze_dt_1
)
self.assert_all_restrictions(
multi_union_restrictions,
[True, True, False],
unfreeze_dt
)
self.assert_all_restrictions(
multi_union_restrictions,
[True, True, True],
freeze_dt_2
)
| apache-2.0 |
phoenixqm/cuda-convnet2 | shownet.py | 180 | 18206 | # Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
from tarfile import TarFile, TarInfo
from matplotlib import pylab as pl
import numpy as n
import getopt as opt
from python_util.util import *
from math import sqrt, ceil, floor
from python_util.gpumodel import IGPUModel
import random as r
import numpy.random as nr
from convnet import ConvNet
from python_util.options import *
from PIL import Image
from time import sleep
class ShowNetError(Exception):
pass
class ShowConvNet(ConvNet):
def __init__(self, op, load_dic):
ConvNet.__init__(self, op, load_dic)
def init_data_providers(self):
self.need_gpu = self.op.get_value('show_preds')
class Dummy:
def advance_batch(self):
pass
if self.need_gpu:
ConvNet.init_data_providers(self)
else:
self.train_data_provider = self.test_data_provider = Dummy()
def import_model(self):
if self.need_gpu:
ConvNet.import_model(self)
def init_model_state(self):
if self.op.get_value('show_preds'):
self.softmax_name = self.op.get_value('show_preds')
def init_model_lib(self):
if self.need_gpu:
ConvNet.init_model_lib(self)
def plot_cost(self):
if self.show_cost not in self.train_outputs[0][0]:
raise ShowNetError("Cost function with name '%s' not defined by given convnet." % self.show_cost)
# print self.test_outputs
train_errors = [eval(self.layers[self.show_cost]['outputFilter'])(o[0][self.show_cost], o[1])[self.cost_idx] for o in self.train_outputs]
test_errors = [eval(self.layers[self.show_cost]['outputFilter'])(o[0][self.show_cost], o[1])[self.cost_idx] for o in self.test_outputs]
if self.smooth_test_errors:
test_errors = [sum(test_errors[max(0,i-len(self.test_batch_range)):i])/(i-max(0,i-len(self.test_batch_range))) for i in xrange(1,len(test_errors)+1)]
numbatches = len(self.train_batch_range)
test_errors = n.row_stack(test_errors)
test_errors = n.tile(test_errors, (1, self.testing_freq))
test_errors = list(test_errors.flatten())
test_errors += [test_errors[-1]] * max(0,len(train_errors) - len(test_errors))
test_errors = test_errors[:len(train_errors)]
numepochs = len(train_errors) / float(numbatches)
pl.figure(1)
x = range(0, len(train_errors))
pl.plot(x, train_errors, 'k-', label='Training set')
pl.plot(x, test_errors, 'r-', label='Test set')
pl.legend()
ticklocs = range(numbatches, len(train_errors) - len(train_errors) % numbatches + 1, numbatches)
epoch_label_gran = int(ceil(numepochs / 20.))
epoch_label_gran = int(ceil(float(epoch_label_gran) / 10) * 10) if numepochs >= 10 else epoch_label_gran
ticklabels = map(lambda x: str((x[1] / numbatches)) if x[0] % epoch_label_gran == epoch_label_gran-1 else '', enumerate(ticklocs))
pl.xticks(ticklocs, ticklabels)
pl.xlabel('Epoch')
# pl.ylabel(self.show_cost)
pl.title('%s[%d]' % (self.show_cost, self.cost_idx))
# print "plotted cost"
def make_filter_fig(self, filters, filter_start, fignum, _title, num_filters, combine_chans, FILTERS_PER_ROW=16):
MAX_ROWS = 24
MAX_FILTERS = FILTERS_PER_ROW * MAX_ROWS
num_colors = filters.shape[0]
f_per_row = int(ceil(FILTERS_PER_ROW / float(1 if combine_chans else num_colors)))
filter_end = min(filter_start+MAX_FILTERS, num_filters)
filter_rows = int(ceil(float(filter_end - filter_start) / f_per_row))
filter_pixels = filters.shape[1]
filter_size = int(sqrt(filters.shape[1]))
fig = pl.figure(fignum)
fig.text(.5, .95, '%s %dx%d filters %d-%d' % (_title, filter_size, filter_size, filter_start, filter_end-1), horizontalalignment='center')
num_filters = filter_end - filter_start
if not combine_chans:
bigpic = n.zeros((filter_size * filter_rows + filter_rows + 1, filter_size*num_colors * f_per_row + f_per_row + 1), dtype=n.single)
else:
bigpic = n.zeros((3, filter_size * filter_rows + filter_rows + 1, filter_size * f_per_row + f_per_row + 1), dtype=n.single)
for m in xrange(filter_start,filter_end ):
filter = filters[:,:,m]
y, x = (m - filter_start) / f_per_row, (m - filter_start) % f_per_row
if not combine_chans:
for c in xrange(num_colors):
filter_pic = filter[c,:].reshape((filter_size,filter_size))
bigpic[1 + (1 + filter_size) * y:1 + (1 + filter_size) * y + filter_size,
1 + (1 + filter_size*num_colors) * x + filter_size*c:1 + (1 + filter_size*num_colors) * x + filter_size*(c+1)] = filter_pic
else:
filter_pic = filter.reshape((3, filter_size,filter_size))
bigpic[:,
1 + (1 + filter_size) * y:1 + (1 + filter_size) * y + filter_size,
1 + (1 + filter_size) * x:1 + (1 + filter_size) * x + filter_size] = filter_pic
pl.xticks([])
pl.yticks([])
if not combine_chans:
pl.imshow(bigpic, cmap=pl.cm.gray, interpolation='nearest')
else:
bigpic = bigpic.swapaxes(0,2).swapaxes(0,1)
pl.imshow(bigpic, interpolation='nearest')
def plot_filters(self):
FILTERS_PER_ROW = 16
filter_start = 0 # First filter to show
if self.show_filters not in self.layers:
raise ShowNetError("Layer with name '%s' not defined by given convnet." % self.show_filters)
layer = self.layers[self.show_filters]
filters = layer['weights'][self.input_idx]
# filters = filters - filters.min()
# filters = filters / filters.max()
if layer['type'] == 'fc': # Fully-connected layer
num_filters = layer['outputs']
channels = self.channels
filters = filters.reshape(channels, filters.shape[0]/channels, filters.shape[1])
elif layer['type'] in ('conv', 'local'): # Conv layer
num_filters = layer['filters']
channels = layer['filterChannels'][self.input_idx]
if layer['type'] == 'local':
filters = filters.reshape((layer['modules'], channels, layer['filterPixels'][self.input_idx], num_filters))
filters = filters[:, :, :, self.local_plane] # first map for now (modules, channels, pixels)
filters = filters.swapaxes(0,2).swapaxes(0,1)
num_filters = layer['modules']
# filters = filters.swapaxes(0,1).reshape(channels * layer['filterPixels'][self.input_idx], num_filters * layer['modules'])
# num_filters *= layer['modules']
FILTERS_PER_ROW = layer['modulesX']
else:
filters = filters.reshape(channels, filters.shape[0]/channels, filters.shape[1])
# Convert YUV filters to RGB
if self.yuv_to_rgb and channels == 3:
R = filters[0,:,:] + 1.28033 * filters[2,:,:]
G = filters[0,:,:] + -0.21482 * filters[1,:,:] + -0.38059 * filters[2,:,:]
B = filters[0,:,:] + 2.12798 * filters[1,:,:]
filters[0,:,:], filters[1,:,:], filters[2,:,:] = R, G, B
combine_chans = not self.no_rgb and channels == 3
# Make sure you don't modify the backing array itself here -- so no -= or /=
if self.norm_filters:
#print filters.shape
filters = filters - n.tile(filters.reshape((filters.shape[0] * filters.shape[1], filters.shape[2])).mean(axis=0).reshape(1, 1, filters.shape[2]), (filters.shape[0], filters.shape[1], 1))
filters = filters / n.sqrt(n.tile(filters.reshape((filters.shape[0] * filters.shape[1], filters.shape[2])).var(axis=0).reshape(1, 1, filters.shape[2]), (filters.shape[0], filters.shape[1], 1)))
#filters = filters - n.tile(filters.min(axis=0).min(axis=0), (3, filters.shape[1], 1))
#filters = filters / n.tile(filters.max(axis=0).max(axis=0), (3, filters.shape[1], 1))
#else:
filters = filters - filters.min()
filters = filters / filters.max()
self.make_filter_fig(filters, filter_start, 2, 'Layer %s' % self.show_filters, num_filters, combine_chans, FILTERS_PER_ROW=FILTERS_PER_ROW)
def plot_predictions(self):
epoch, batch, data = self.get_next_batch(train=False) # get a test batch
num_classes = self.test_data_provider.get_num_classes()
NUM_ROWS = 2
NUM_COLS = 4
NUM_IMGS = NUM_ROWS * NUM_COLS if not self.save_preds else data[0].shape[1]
NUM_TOP_CLASSES = min(num_classes, 5) # show this many top labels
NUM_OUTPUTS = self.model_state['layers'][self.softmax_name]['outputs']
PRED_IDX = 1
label_names = [lab.split(',')[0] for lab in self.test_data_provider.batch_meta['label_names']]
if self.only_errors:
preds = n.zeros((data[0].shape[1], NUM_OUTPUTS), dtype=n.single)
else:
preds = n.zeros((NUM_IMGS, NUM_OUTPUTS), dtype=n.single)
#rand_idx = nr.permutation(n.r_[n.arange(1), n.where(data[1] == 552)[1], n.where(data[1] == 795)[1], n.where(data[1] == 449)[1], n.where(data[1] == 274)[1]])[:NUM_IMGS]
rand_idx = nr.randint(0, data[0].shape[1], NUM_IMGS)
if NUM_IMGS < data[0].shape[1]:
data = [n.require(d[:,rand_idx], requirements='C') for d in data]
# data += [preds]
# Run the model
print [d.shape for d in data], preds.shape
self.libmodel.startFeatureWriter(data, [preds], [self.softmax_name])
IGPUModel.finish_batch(self)
print preds
data[0] = self.test_data_provider.get_plottable_data(data[0])
if self.save_preds:
if not gfile.Exists(self.save_preds):
gfile.MakeDirs(self.save_preds)
preds_thresh = preds > 0.5 # Binarize predictions
data[0] = data[0] * 255.0
data[0][data[0]<0] = 0
data[0][data[0]>255] = 255
data[0] = n.require(data[0], dtype=n.uint8)
dir_name = '%s_predictions_batch_%d' % (os.path.basename(self.save_file), batch)
tar_name = os.path.join(self.save_preds, '%s.tar' % dir_name)
tfo = gfile.GFile(tar_name, "w")
tf = TarFile(fileobj=tfo, mode='w')
for img_idx in xrange(NUM_IMGS):
img = data[0][img_idx,:,:,:]
imsave = Image.fromarray(img)
prefix = "CORRECT" if data[1][0,img_idx] == preds_thresh[img_idx,PRED_IDX] else "FALSE_POS" if preds_thresh[img_idx,PRED_IDX] == 1 else "FALSE_NEG"
file_name = "%s_%.2f_%d_%05d_%d.png" % (prefix, preds[img_idx,PRED_IDX], batch, img_idx, data[1][0,img_idx])
# gf = gfile.GFile(file_name, "w")
file_string = StringIO()
imsave.save(file_string, "PNG")
tarinf = TarInfo(os.path.join(dir_name, file_name))
tarinf.size = file_string.tell()
file_string.seek(0)
tf.addfile(tarinf, file_string)
tf.close()
tfo.close()
# gf.close()
print "Wrote %d prediction PNGs to %s" % (preds.shape[0], tar_name)
else:
fig = pl.figure(3, figsize=(12,9))
fig.text(.4, .95, '%s test samples' % ('Mistaken' if self.only_errors else 'Random'))
if self.only_errors:
# what the net got wrong
if NUM_OUTPUTS > 1:
err_idx = [i for i,p in enumerate(preds.argmax(axis=1)) if p not in n.where(data[2][:,i] > 0)[0]]
else:
err_idx = n.where(data[1][0,:] != preds[:,0].T)[0]
print err_idx
err_idx = r.sample(err_idx, min(len(err_idx), NUM_IMGS))
data[0], data[1], preds = data[0][:,err_idx], data[1][:,err_idx], preds[err_idx,:]
import matplotlib.gridspec as gridspec
import matplotlib.colors as colors
cconv = colors.ColorConverter()
gs = gridspec.GridSpec(NUM_ROWS*2, NUM_COLS,
width_ratios=[1]*NUM_COLS, height_ratios=[2,1]*NUM_ROWS )
#print data[1]
for row in xrange(NUM_ROWS):
for col in xrange(NUM_COLS):
img_idx = row * NUM_COLS + col
if data[0].shape[0] <= img_idx:
break
pl.subplot(gs[(row * 2) * NUM_COLS + col])
#pl.subplot(NUM_ROWS*2, NUM_COLS, row * 2 * NUM_COLS + col + 1)
pl.xticks([])
pl.yticks([])
img = data[0][img_idx,:,:,:]
pl.imshow(img, interpolation='lanczos')
show_title = data[1].shape[0] == 1
true_label = [int(data[1][0,img_idx])] if show_title else n.where(data[1][:,img_idx]==1)[0]
#print true_label
#print preds[img_idx,:].shape
#print preds[img_idx,:].max()
true_label_names = [label_names[i] for i in true_label]
img_labels = sorted(zip(preds[img_idx,:], label_names), key=lambda x: x[0])[-NUM_TOP_CLASSES:]
#print img_labels
axes = pl.subplot(gs[(row * 2 + 1) * NUM_COLS + col])
height = 0.5
ylocs = n.array(range(NUM_TOP_CLASSES))*height
pl.barh(ylocs, [l[0] for l in img_labels], height=height, \
color=['#ffaaaa' if l[1] in true_label_names else '#aaaaff' for l in img_labels])
#pl.title(", ".join(true_labels))
if show_title:
pl.title(", ".join(true_label_names), fontsize=15, fontweight='bold')
else:
print true_label_names
pl.yticks(ylocs + height/2, [l[1] for l in img_labels], x=1, backgroundcolor=cconv.to_rgba('0.65', alpha=0.5), weight='bold')
                    for line in axes.get_yticklines():
                        line.set_visible(False)
#pl.xticks([width], [''])
#pl.yticks([])
pl.xticks([])
pl.ylim(0, ylocs[-1] + height)
pl.xlim(0, 1)
def start(self):
self.op.print_values()
# print self.show_cost
if self.show_cost:
self.plot_cost()
if self.show_filters:
self.plot_filters()
if self.show_preds:
self.plot_predictions()
if pl:
pl.show()
sys.exit(0)
@classmethod
def get_options_parser(cls):
op = ConvNet.get_options_parser()
for option in list(op.options):
if option not in ('gpu', 'load_file', 'inner_size', 'train_batch_range', 'test_batch_range', 'multiview_test', 'data_path', 'pca_noise', 'scalar_mean'):
op.delete_option(option)
op.add_option("show-cost", "show_cost", StringOptionParser, "Show specified objective function", default="")
op.add_option("show-filters", "show_filters", StringOptionParser, "Show learned filters in specified layer", default="")
op.add_option("norm-filters", "norm_filters", BooleanOptionParser, "Individually normalize filters shown with --show-filters", default=0)
op.add_option("input-idx", "input_idx", IntegerOptionParser, "Input index for layer given to --show-filters", default=0)
op.add_option("cost-idx", "cost_idx", IntegerOptionParser, "Cost function return value index for --show-cost", default=0)
op.add_option("no-rgb", "no_rgb", BooleanOptionParser, "Don't combine filter channels into RGB in layer given to --show-filters", default=False)
op.add_option("yuv-to-rgb", "yuv_to_rgb", BooleanOptionParser, "Convert RGB filters to YUV in layer given to --show-filters", default=False)
op.add_option("channels", "channels", IntegerOptionParser, "Number of channels in layer given to --show-filters (fully-connected layers only)", default=0)
op.add_option("show-preds", "show_preds", StringOptionParser, "Show predictions made by given softmax on test set", default="")
op.add_option("save-preds", "save_preds", StringOptionParser, "Save predictions to given path instead of showing them", default="")
op.add_option("only-errors", "only_errors", BooleanOptionParser, "Show only mistaken predictions (to be used with --show-preds)", default=False, requires=['show_preds'])
op.add_option("local-plane", "local_plane", IntegerOptionParser, "Local plane to show", default=0)
op.add_option("smooth-test-errors", "smooth_test_errors", BooleanOptionParser, "Use running average for test error plot?", default=1)
op.options['load_file'].default = None
return op
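# Usage sketch (not part of the original script): typical invocations of the
# option parser defined above.  The script name, the checkpoint flag ("-f",
# inherited from the parent ConvNet parser) and the layer/cost names are all
# assumptions/placeholders:
#
#   python shownet.py -f ./model-checkpoint --show-cost=logprob
#   python shownet.py -f ./model-checkpoint --show-filters=conv1 --norm-filters=1
#   python shownet.py -f ./model-checkpoint --show-preds=probs --only-errors=1
#
# start() above checks --show-cost, --show-filters and --show-preds in turn
# and pops a matplotlib figure for each one that was given.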
if __name__ == "__main__":
#nr.seed(6)
try:
op = ShowConvNet.get_options_parser()
op, load_dic = IGPUModel.parse_options(op)
model = ShowConvNet(op, load_dic)
model.start()
except (UnpickleError, ShowNetError, opt.GetoptError), e:
print "----------------"
print "Error:"
print e
| apache-2.0 |
sekikn/incubator-airflow | tests/test_utils/perf/scheduler_ops_metrics.py | 7 | 7513 | #
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
import sys
import pandas as pd
from airflow import settings
from airflow.configuration import conf
from airflow.jobs.scheduler_job import SchedulerJob
from airflow.models import DagBag, DagModel, DagRun, TaskInstance
from airflow.utils import timezone
from airflow.utils.state import State
SUBDIR = 'scripts/perf/dags'
DAG_IDS = ['perf_dag_1', 'perf_dag_2']
MAX_RUNTIME_SECS = 6
class SchedulerMetricsJob(SchedulerJob):
"""
This class extends SchedulerJob to instrument the execution performance of
task instances contained in each DAG. We want to know if any DAG
is starved of resources, and this will be reflected in the stats printed
out at the end of the test run. The following metrics will be instrumented
for each task instance (dag_id, task_id, execution_date) tuple:
1. Queuing delay - time taken from starting the executor to the task
instance to be added to the executor queue.
2. Start delay - time taken from starting the executor to the task instance
to start execution.
3. Land time - time taken from starting the executor to task instance
completion.
4. Duration - time taken for executing the task instance.
The DAGs implement bash operators that call the system wait command. This
is representative of typical operators run on Airflow - queries that are
run on remote systems and spend the majority of their time on I/O wait.
To Run:
$ python scripts/perf/scheduler_ops_metrics.py [timeout]
You can specify timeout in seconds as an optional parameter.
Its default value is 6 seconds.
"""
__mapper_args__ = {'polymorphic_identity': 'SchedulerMetricsJob'}
def __init__(self, dag_ids, subdir, max_runtime_secs):
self.max_runtime_secs = max_runtime_secs
super().__init__(dag_ids=dag_ids, subdir=subdir)
def print_stats(self):
"""
Print operational metrics for the scheduler test.
"""
session = settings.Session()
TI = TaskInstance
tis = session.query(TI).filter(TI.dag_id.in_(DAG_IDS)).all()
successful_tis = [x for x in tis if x.state == State.SUCCESS]
ti_perf = [
(
ti.dag_id,
ti.task_id,
ti.execution_date,
(ti.queued_dttm - self.start_date).total_seconds(),
(ti.start_date - self.start_date).total_seconds(),
(ti.end_date - self.start_date).total_seconds(),
ti.duration,
)
for ti in successful_tis
]
ti_perf_df = pd.DataFrame(
ti_perf,
columns=[
'dag_id',
'task_id',
'execution_date',
'queue_delay',
'start_delay',
'land_time',
'duration',
],
)
print('Performance Results')
print('###################')
for dag_id in DAG_IDS:
print(f'DAG {dag_id}')
print(ti_perf_df[ti_perf_df['dag_id'] == dag_id])
print('###################')
if len(tis) > len(successful_tis):
print("WARNING!! The following task instances haven't completed")
print(
pd.DataFrame(
[
(ti.dag_id, ti.task_id, ti.execution_date, ti.state)
for ti in filter(lambda x: x.state != State.SUCCESS, tis)
],
columns=['dag_id', 'task_id', 'execution_date', 'state'],
)
)
session.commit()
def heartbeat(self):
"""
Override the scheduler heartbeat to determine when the test is complete
"""
super().heartbeat()
session = settings.Session()
# Get all the relevant task instances
TI = TaskInstance
successful_tis = (
session.query(TI).filter(TI.dag_id.in_(DAG_IDS)).filter(TI.state.in_([State.SUCCESS])).all()
)
session.commit()
dagbag = DagBag(SUBDIR)
dags = [dagbag.dags[dag_id] for dag_id in DAG_IDS]
        # the tasks in perf_dag_1 and perf_dag_2 have a daily schedule interval.
num_task_instances = sum(
[(timezone.utcnow() - task.start_date).days for dag in dags for task in dag.tasks]
)
if (
len(successful_tis) == num_task_instances
or (timezone.utcnow() - self.start_date).total_seconds() > self.max_runtime_secs
):
if len(successful_tis) == num_task_instances:
self.log.info("All tasks processed! Printing stats.")
else:
self.log.info("Test timeout reached. Printing available stats.")
self.print_stats()
set_dags_paused_state(True)
sys.exit()
def clear_dag_runs():
"""
Remove any existing DAG runs for the perf test DAGs.
"""
session = settings.Session()
drs = (
session.query(DagRun)
.filter(
DagRun.dag_id.in_(DAG_IDS),
)
.all()
)
for dr in drs:
logging.info('Deleting DagRun :: %s', dr)
session.delete(dr)
def clear_dag_task_instances():
"""
Remove any existing task instances for the perf test DAGs.
"""
session = settings.Session()
TI = TaskInstance
tis = session.query(TI).filter(TI.dag_id.in_(DAG_IDS)).all()
for ti in tis:
logging.info('Deleting TaskInstance :: %s', ti)
session.delete(ti)
session.commit()
def set_dags_paused_state(is_paused):
"""
Toggle the pause state of the DAGs in the test.
"""
session = settings.Session()
dag_models = session.query(DagModel).filter(DagModel.dag_id.in_(DAG_IDS))
for dag_model in dag_models:
logging.info('Setting DAG :: %s is_paused=%s', dag_model, is_paused)
dag_model.is_paused = is_paused
session.commit()
def main():
"""
Run the scheduler metrics jobs after loading the test configuration and
clearing old instances of dags and tasks
"""
max_runtime_secs = MAX_RUNTIME_SECS
if len(sys.argv) > 1:
try:
max_runtime_secs = int(sys.argv[1])
if max_runtime_secs < 1:
raise ValueError
except ValueError:
logging.error('Specify a positive integer for timeout.')
sys.exit(1)
conf.load_test_config()
set_dags_paused_state(False)
clear_dag_runs()
clear_dag_task_instances()
job = SchedulerMetricsJob(dag_ids=DAG_IDS, subdir=SUBDIR, max_runtime_secs=max_runtime_secs)
job.run()
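# Example invocation (a sketch): run the perf job with a 30 second timeout
# instead of the 6 second MAX_RUNTIME_SECS default:
#
#   python scheduler_ops_metrics.py 30
#
# main() parses the optional positional argument above; anything that is not
# a positive integer aborts with "Specify a positive integer for timeout."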
if __name__ == "__main__":
main()
| apache-2.0 |
lifeinoppo/littlefishlet-scode | RES/REF/python_sourcecode/ipython-master/IPython/core/interactiveshell.py | 6 | 132557 | # -*- coding: utf-8 -*-
"""Main IPython class."""
#-----------------------------------------------------------------------------
# Copyright (C) 2001 Janko Hauser <[email protected]>
# Copyright (C) 2001-2007 Fernando Perez. <[email protected]>
# Copyright (C) 2008-2011 The IPython Development Team
#
# Distributed under the terms of the BSD License. The full license is in
# the file COPYING, distributed as part of this software.
#-----------------------------------------------------------------------------
from __future__ import absolute_import, print_function
import __future__
import abc
import ast
import atexit
import functools
import os
import re
import runpy
import sys
import tempfile
import traceback
import types
import subprocess
import warnings
from io import open as io_open
from pickleshare import PickleShareDB
from traitlets.config.configurable import SingletonConfigurable
from IPython.core import debugger, oinspect
from IPython.core import magic
from IPython.core import page
from IPython.core import prefilter
from IPython.core import shadowns
from IPython.core import ultratb
from IPython.core.alias import Alias, AliasManager
from IPython.core.autocall import ExitAutocall
from IPython.core.builtin_trap import BuiltinTrap
from IPython.core.events import EventManager, available_events
from IPython.core.compilerop import CachingCompiler, check_linecache_ipython
from IPython.core.display_trap import DisplayTrap
from IPython.core.displayhook import DisplayHook
from IPython.core.displaypub import DisplayPublisher
from IPython.core.error import InputRejected, UsageError
from IPython.core.extensions import ExtensionManager
from IPython.core.formatters import DisplayFormatter
from IPython.core.history import HistoryManager
from IPython.core.inputsplitter import IPythonInputSplitter, ESC_MAGIC, ESC_MAGIC2
from IPython.core.logger import Logger
from IPython.core.macro import Macro
from IPython.core.payload import PayloadManager
from IPython.core.prefilter import PrefilterManager
from IPython.core.profiledir import ProfileDir
from IPython.core.prompts import PromptManager
from IPython.core.usage import default_banner
from IPython.testing.skipdoctest import skip_doctest
from IPython.utils import PyColorize
from IPython.utils import io
from IPython.utils import py3compat
from IPython.utils import openpy
from IPython.utils.contexts import NoOpContext
from IPython.utils.decorators import undoc
from IPython.utils.io import ask_yes_no
from IPython.utils.ipstruct import Struct
from IPython.paths import get_ipython_dir
from IPython.utils.path import get_home_dir, get_py_filename, unquote_filename, ensure_dir_exists
from IPython.utils.process import system, getoutput
from IPython.utils.py3compat import (builtin_mod, unicode_type, string_types,
with_metaclass, iteritems)
from IPython.utils.strdispatch import StrDispatch
from IPython.utils.syspathcontext import prepended_to_syspath
from IPython.utils.text import (format_screen, LSString, SList,
DollarFormatter)
from traitlets import (Integer, Bool, CBool, CaselessStrEnum, Enum,
List, Dict, Unicode, Instance, Type)
from IPython.utils.warn import warn, error
import IPython.core.hooks
#-----------------------------------------------------------------------------
# Globals
#-----------------------------------------------------------------------------
# compiled regexps for autoindent management
dedent_re = re.compile(r'^\s+raise|^\s+return|^\s+pass')
#-----------------------------------------------------------------------------
# Utilities
#-----------------------------------------------------------------------------
@undoc
def softspace(file, newvalue):
"""Copied from code.py, to remove the dependency"""
oldvalue = 0
try:
oldvalue = file.softspace
except AttributeError:
pass
try:
file.softspace = newvalue
except (AttributeError, TypeError):
# "attribute-less object" or "read-only attributes"
pass
return oldvalue
@undoc
def no_op(*a, **kw): pass
class SpaceInInput(Exception): pass
@undoc
class Bunch: pass
def get_default_colors():
if sys.platform=='darwin':
return "LightBG"
elif os.name=='nt':
return 'Linux'
else:
return 'Linux'
class SeparateUnicode(Unicode):
r"""A Unicode subclass to validate separate_in, separate_out, etc.
This is a Unicode based trait that converts '0'->'' and ``'\\n'->'\n'``.
"""
def validate(self, obj, value):
if value == '0': value = ''
value = value.replace('\\n','\n')
return super(SeparateUnicode, self).validate(obj, value)
@undoc
class DummyMod(object):
"""A dummy module used for IPython's interactive module when
a namespace must be assigned to the module's __dict__."""
pass
class ExecutionResult(object):
"""The result of a call to :meth:`InteractiveShell.run_cell`
Stores information about what took place.
"""
execution_count = None
error_before_exec = None
error_in_exec = None
result = None
@property
def success(self):
return (self.error_before_exec is None) and (self.error_in_exec is None)
def raise_error(self):
"""Reraises error if `success` is `False`, otherwise does nothing"""
if self.error_before_exec is not None:
raise self.error_before_exec
if self.error_in_exec is not None:
raise self.error_in_exec
class InteractiveShell(SingletonConfigurable):
"""An enhanced, interactive shell for Python."""
_instance = None
ast_transformers = List([], config=True, help=
"""
A list of ast.NodeTransformer subclass instances, which will be applied
to user input before code is run.
"""
)
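    # Minimal sketch (not part of the original file) of something that could be
    # appended to `ast_transformers`: a NodeTransformer that negates every
    # integer literal before execution.  All names below are illustrative.
    #
    #   import ast
    #
    #   class NegateIntegers(ast.NodeTransformer):
    #       def visit_Num(self, node):
    #           # ast.Num is the pre-3.8 literal node this code base targets
    #           return ast.copy_location(ast.Num(n=-node.n), node)
    #
    #   get_ipython().ast_transformers.append(NegateIntegers())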
autocall = Enum((0,1,2), default_value=0, config=True, help=
"""
Make IPython automatically call any callable object even if you didn't
type explicit parentheses. For example, 'str 43' becomes 'str(43)'
automatically. The value can be '0' to disable the feature, '1' for
'smart' autocall, where it is not applied if there are no more
arguments on the line, and '2' for 'full' autocall, where all callable
objects are automatically called (even if no arguments are present).
"""
)
# TODO: remove all autoindent logic and put into frontends.
# We can't do this yet because even runlines uses the autoindent.
autoindent = CBool(True, config=True, help=
"""
Autoindent IPython code entered interactively.
"""
)
automagic = CBool(True, config=True, help=
"""
Enable magic commands to be called without the leading %.
"""
)
banner1 = Unicode(default_banner, config=True,
help="""The part of the banner to be printed before the profile"""
)
banner2 = Unicode('', config=True,
help="""The part of the banner to be printed after the profile"""
)
cache_size = Integer(1000, config=True, help=
"""
Set the size of the output cache. The default is 1000, you can
change it permanently in your config file. Setting it to 0 completely
disables the caching system, and the minimum value accepted is 20 (if
you provide a value less than 20, it is reset to 0 and a warning is
issued). This limit is defined because otherwise you'll spend more
        time re-flushing a cache that is too small than working.
"""
)
color_info = CBool(True, config=True, help=
"""
Use colors for displaying information about objects. Because this
information is passed through a pager (like 'less'), and some pagers
get confused with color codes, this capability can be turned off.
"""
)
colors = CaselessStrEnum(('NoColor','LightBG','Linux'),
default_value=get_default_colors(), config=True,
help="Set the color scheme (NoColor, Linux, or LightBG)."
)
colors_force = CBool(False, help=
"""
Force use of ANSI color codes, regardless of OS and readline
availability.
"""
# FIXME: This is essentially a hack to allow ZMQShell to show colors
# without readline on Win32. When the ZMQ formatting system is
# refactored, this should be removed.
)
debug = CBool(False, config=True)
deep_reload = CBool(False, config=True, help=
"""
**Deprecated**
Will be removed in IPython 6.0
Enable deep (recursive) reloading by default. IPython can use the
deep_reload module which reloads changes in modules recursively (it
replaces the reload() function, so you don't need to change anything to
use it). `deep_reload` forces a full reload of modules whose code may
have changed, which the default reload() function does not. When
deep_reload is off, IPython will use the normal reload(), but
deep_reload will still be available as dreload().
"""
)
disable_failing_post_execute = CBool(False, config=True,
help="Don't call post-execute functions that have failed in the past."
)
display_formatter = Instance(DisplayFormatter, allow_none=True)
displayhook_class = Type(DisplayHook)
display_pub_class = Type(DisplayPublisher)
data_pub_class = None
exit_now = CBool(False)
exiter = Instance(ExitAutocall)
def _exiter_default(self):
return ExitAutocall(self)
# Monotonically increasing execution counter
execution_count = Integer(1)
filename = Unicode("<ipython console>")
ipython_dir= Unicode('', config=True) # Set to get_ipython_dir() in __init__
# Input splitter, to transform input line by line and detect when a block
# is ready to be executed.
input_splitter = Instance('IPython.core.inputsplitter.IPythonInputSplitter',
(), {'line_input_checker': True})
# This InputSplitter instance is used to transform completed cells before
# running them. It allows cell magics to contain blank lines.
input_transformer_manager = Instance('IPython.core.inputsplitter.IPythonInputSplitter',
(), {'line_input_checker': False})
logstart = CBool(False, config=True, help=
"""
Start logging to the default log file in overwrite mode.
Use `logappend` to specify a log file to **append** logs to.
"""
)
logfile = Unicode('', config=True, help=
"""
The name of the logfile to use.
"""
)
logappend = Unicode('', config=True, help=
"""
Start logging to the given file in append mode.
Use `logfile` to specify a log file to **overwrite** logs to.
"""
)
object_info_string_level = Enum((0,1,2), default_value=0,
config=True)
pdb = CBool(False, config=True, help=
"""
Automatically call the pdb debugger after every exception.
"""
)
multiline_history = CBool(sys.platform != 'win32', config=True,
help="Save multi-line entries as one entry in readline history"
)
display_page = Bool(False, config=True,
help="""If True, anything that would be passed to the pager
will be displayed as regular output instead."""
)
# deprecated prompt traits:
prompt_in1 = Unicode('In [\\#]: ', config=True,
help="Deprecated, will be removed in IPython 5.0, use PromptManager.in_template")
prompt_in2 = Unicode(' .\\D.: ', config=True,
help="Deprecated, will be removed in IPython 5.0, use PromptManager.in2_template")
prompt_out = Unicode('Out[\\#]: ', config=True,
help="Deprecated, will be removed in IPython 5.0, use PromptManager.out_template")
prompts_pad_left = CBool(True, config=True,
help="Deprecated, will be removed in IPython 5.0, use PromptManager.justify")
def _prompt_trait_changed(self, name, old, new):
table = {
'prompt_in1' : 'in_template',
'prompt_in2' : 'in2_template',
'prompt_out' : 'out_template',
'prompts_pad_left' : 'justify',
}
warn("InteractiveShell.{name} is deprecated, use PromptManager.{newname}".format(
name=name, newname=table[name])
)
# protect against weird cases where self.config may not exist:
if self.config is not None:
# propagate to corresponding PromptManager trait
setattr(self.config.PromptManager, table[name], new)
_prompt_in1_changed = _prompt_trait_changed
_prompt_in2_changed = _prompt_trait_changed
_prompt_out_changed = _prompt_trait_changed
_prompt_pad_left_changed = _prompt_trait_changed
show_rewritten_input = CBool(True, config=True,
help="Show rewritten input, e.g. for autocall."
)
quiet = CBool(False, config=True)
history_length = Integer(10000, config=True)
history_load_length = Integer(1000, config=True, help=
"""
The number of saved history entries to be loaded
into the readline buffer at startup.
"""
)
# The readline stuff will eventually be moved to the terminal subclass
# but for now, we can't do that as readline is welded in everywhere.
readline_use = CBool(True, config=True)
readline_remove_delims = Unicode('-/~', config=True)
readline_delims = Unicode() # set by init_readline()
# don't use \M- bindings by default, because they
# conflict with 8-bit encodings. See gh-58,gh-88
readline_parse_and_bind = List([
'tab: complete',
'"\C-l": clear-screen',
'set show-all-if-ambiguous on',
'"\C-o": tab-insert',
'"\C-r": reverse-search-history',
'"\C-s": forward-search-history',
'"\C-p": history-search-backward',
'"\C-n": history-search-forward',
'"\e[A": history-search-backward',
'"\e[B": history-search-forward',
'"\C-k": kill-line',
'"\C-u": unix-line-discard',
], config=True)
_custom_readline_config = False
def _readline_parse_and_bind_changed(self, name, old, new):
# notice that readline config is customized
# indicates that it should have higher priority than inputrc
self._custom_readline_config = True
ast_node_interactivity = Enum(['all', 'last', 'last_expr', 'none'],
default_value='last_expr', config=True,
help="""
'all', 'last', 'last_expr' or 'none', specifying which nodes should be
run interactively (displaying output from expressions).""")
# TODO: this part of prompt management should be moved to the frontends.
# Use custom TraitTypes that convert '0'->'' and '\\n'->'\n'
separate_in = SeparateUnicode('\n', config=True)
separate_out = SeparateUnicode('', config=True)
separate_out2 = SeparateUnicode('', config=True)
wildcards_case_sensitive = CBool(True, config=True)
xmode = CaselessStrEnum(('Context','Plain', 'Verbose'),
default_value='Context', config=True)
# Subcomponents of InteractiveShell
alias_manager = Instance('IPython.core.alias.AliasManager', allow_none=True)
prefilter_manager = Instance('IPython.core.prefilter.PrefilterManager', allow_none=True)
builtin_trap = Instance('IPython.core.builtin_trap.BuiltinTrap', allow_none=True)
display_trap = Instance('IPython.core.display_trap.DisplayTrap', allow_none=True)
extension_manager = Instance('IPython.core.extensions.ExtensionManager', allow_none=True)
payload_manager = Instance('IPython.core.payload.PayloadManager', allow_none=True)
history_manager = Instance('IPython.core.history.HistoryAccessorBase', allow_none=True)
magics_manager = Instance('IPython.core.magic.MagicsManager', allow_none=True)
profile_dir = Instance('IPython.core.application.ProfileDir', allow_none=True)
@property
def profile(self):
if self.profile_dir is not None:
name = os.path.basename(self.profile_dir.location)
return name.replace('profile_','')
# Private interface
_post_execute = Dict()
# Tracks any GUI loop loaded for pylab
pylab_gui_select = None
def __init__(self, ipython_dir=None, profile_dir=None,
user_module=None, user_ns=None,
custom_exceptions=((), None), **kwargs):
# This is where traits with a config_key argument are updated
# from the values on config.
super(InteractiveShell, self).__init__(**kwargs)
self.configurables = [self]
# These are relatively independent and stateless
self.init_ipython_dir(ipython_dir)
self.init_profile_dir(profile_dir)
self.init_instance_attrs()
self.init_environment()
# Check if we're in a virtualenv, and set up sys.path.
self.init_virtualenv()
# Create namespaces (user_ns, user_global_ns, etc.)
self.init_create_namespaces(user_module, user_ns)
# This has to be done after init_create_namespaces because it uses
# something in self.user_ns, but before init_sys_modules, which
# is the first thing to modify sys.
# TODO: When we override sys.stdout and sys.stderr before this class
# is created, we are saving the overridden ones here. Not sure if this
# is what we want to do.
self.save_sys_module_state()
self.init_sys_modules()
# While we're trying to have each part of the code directly access what
# it needs without keeping redundant references to objects, we have too
# much legacy code that expects ip.db to exist.
self.db = PickleShareDB(os.path.join(self.profile_dir.location, 'db'))
self.init_history()
self.init_encoding()
self.init_prefilter()
self.init_syntax_highlighting()
self.init_hooks()
self.init_events()
self.init_pushd_popd_magic()
        # self.init_traceback_handlers used to be here, but we moved it below
# because it and init_io have to come after init_readline.
self.init_user_ns()
self.init_logger()
self.init_builtins()
# The following was in post_config_initialization
self.init_inspector()
# init_readline() must come before init_io(), because init_io uses
# readline related things.
self.init_readline()
# We save this here in case user code replaces raw_input, but it needs
# to be after init_readline(), because PyPy's readline works by replacing
# raw_input.
if py3compat.PY3:
self.raw_input_original = input
else:
self.raw_input_original = raw_input
# init_completer must come after init_readline, because it needs to
# know whether readline is present or not system-wide to configure the
# completers, since the completion machinery can now operate
# independently of readline (e.g. over the network)
self.init_completer()
# TODO: init_io() needs to happen before init_traceback handlers
# because the traceback handlers hardcode the stdout/stderr streams.
# This logic in in debugger.Pdb and should eventually be changed.
self.init_io()
self.init_traceback_handlers(custom_exceptions)
self.init_prompts()
self.init_display_formatter()
self.init_display_pub()
self.init_data_pub()
self.init_displayhook()
self.init_magics()
self.init_alias()
self.init_logstart()
self.init_pdb()
self.init_extension_manager()
self.init_payload()
self.init_deprecation_warnings()
self.hooks.late_startup_hook()
self.events.trigger('shell_initialized', self)
atexit.register(self.atexit_operations)
def get_ipython(self):
"""Return the currently running IPython instance."""
return self
#-------------------------------------------------------------------------
# Trait changed handlers
#-------------------------------------------------------------------------
def _ipython_dir_changed(self, name, new):
ensure_dir_exists(new)
def set_autoindent(self,value=None):
"""Set the autoindent flag, checking for readline support.
If called with no arguments, it acts as a toggle."""
if value != 0 and not self.has_readline:
if os.name == 'posix':
warn("The auto-indent feature requires the readline library")
self.autoindent = 0
return
if value is None:
self.autoindent = not self.autoindent
else:
self.autoindent = value
#-------------------------------------------------------------------------
# init_* methods called by __init__
#-------------------------------------------------------------------------
def init_ipython_dir(self, ipython_dir):
if ipython_dir is not None:
self.ipython_dir = ipython_dir
return
self.ipython_dir = get_ipython_dir()
def init_profile_dir(self, profile_dir):
if profile_dir is not None:
self.profile_dir = profile_dir
return
self.profile_dir =\
ProfileDir.create_profile_dir_by_name(self.ipython_dir, 'default')
def init_instance_attrs(self):
self.more = False
# command compiler
self.compile = CachingCompiler()
# Make an empty namespace, which extension writers can rely on both
# existing and NEVER being used by ipython itself. This gives them a
# convenient location for storing additional information and state
# their extensions may require, without fear of collisions with other
# ipython names that may develop later.
self.meta = Struct()
# Temporary files used for various purposes. Deleted at exit.
self.tempfiles = []
self.tempdirs = []
# Keep track of readline usage (later set by init_readline)
self.has_readline = False
# keep track of where we started running (mainly for crash post-mortem)
# This is not being used anywhere currently.
self.starting_dir = py3compat.getcwd()
# Indentation management
self.indent_current_nsp = 0
# Dict to track post-execution functions that have been registered
self._post_execute = {}
def init_environment(self):
"""Any changes we need to make to the user's environment."""
pass
def init_encoding(self):
# Get system encoding at startup time. Certain terminals (like Emacs
        # under Win32) have it set to None, and we need to have a known valid
# encoding to use in the raw_input() method
try:
self.stdin_encoding = sys.stdin.encoding or 'ascii'
except AttributeError:
self.stdin_encoding = 'ascii'
def init_syntax_highlighting(self):
# Python source parser/formatter for syntax highlighting
pyformat = PyColorize.Parser().format
self.pycolorize = lambda src: pyformat(src,'str',self.colors)
def init_pushd_popd_magic(self):
# for pushd/popd management
self.home_dir = get_home_dir()
self.dir_stack = []
def init_logger(self):
self.logger = Logger(self.home_dir, logfname='ipython_log.py',
logmode='rotate')
def init_logstart(self):
"""Initialize logging in case it was requested at the command line.
"""
if self.logappend:
self.magic('logstart %s append' % self.logappend)
elif self.logfile:
self.magic('logstart %s' % self.logfile)
elif self.logstart:
self.magic('logstart')
def init_deprecation_warnings(self):
"""
register default filter for deprecation warning.
This will allow deprecation warning of function used interactively to show
warning to users, and still hide deprecation warning from libraries import.
"""
warnings.filterwarnings("default", category=DeprecationWarning, module=self.user_ns.get("__name__"))
def init_builtins(self):
# A single, static flag that we set to True. Its presence indicates
# that an IPython shell has been created, and we make no attempts at
# removing on exit or representing the existence of more than one
# IPython at a time.
builtin_mod.__dict__['__IPYTHON__'] = True
# In 0.11 we introduced '__IPYTHON__active' as an integer we'd try to
# manage on enter/exit, but with all our shells it's virtually
# impossible to get all the cases right. We're leaving the name in for
        # those who adapted their code to check for this flag, but will
# eventually remove it after a few more releases.
builtin_mod.__dict__['__IPYTHON__active'] = \
'Deprecated, check for __IPYTHON__'
self.builtin_trap = BuiltinTrap(shell=self)
def init_inspector(self):
# Object inspector
self.inspector = oinspect.Inspector(oinspect.InspectColors,
PyColorize.ANSICodeColors,
'NoColor',
self.object_info_string_level)
def init_io(self):
# This will just use sys.stdout and sys.stderr. If you want to
# override sys.stdout and sys.stderr themselves, you need to do that
# *before* instantiating this class, because io holds onto
# references to the underlying streams.
if (sys.platform == 'win32' or sys.platform == 'cli') and self.has_readline:
io.stdout = io.stderr = io.IOStream(self.readline._outputfile)
else:
io.stdout = io.IOStream(sys.stdout)
io.stderr = io.IOStream(sys.stderr)
def init_prompts(self):
self.prompt_manager = PromptManager(shell=self, parent=self)
self.configurables.append(self.prompt_manager)
# Set system prompts, so that scripts can decide if they are running
# interactively.
sys.ps1 = 'In : '
sys.ps2 = '...: '
sys.ps3 = 'Out: '
def init_display_formatter(self):
self.display_formatter = DisplayFormatter(parent=self)
self.configurables.append(self.display_formatter)
def init_display_pub(self):
self.display_pub = self.display_pub_class(parent=self)
self.configurables.append(self.display_pub)
def init_data_pub(self):
if not self.data_pub_class:
self.data_pub = None
return
self.data_pub = self.data_pub_class(parent=self)
self.configurables.append(self.data_pub)
def init_displayhook(self):
# Initialize displayhook, set in/out prompts and printing system
self.displayhook = self.displayhook_class(
parent=self,
shell=self,
cache_size=self.cache_size,
)
self.configurables.append(self.displayhook)
        # This is a context manager that installs/removes the displayhook at
# the appropriate time.
self.display_trap = DisplayTrap(hook=self.displayhook)
def init_virtualenv(self):
"""Add a virtualenv to sys.path so the user can import modules from it.
This isn't perfect: it doesn't use the Python interpreter with which the
virtualenv was built, and it ignores the --no-site-packages option. A
warning will appear suggesting the user installs IPython in the
virtualenv, but for many cases, it probably works well enough.
Adapted from code snippets online.
http://blog.ufsoft.org/2009/1/29/ipython-and-virtualenv
"""
if 'VIRTUAL_ENV' not in os.environ:
# Not in a virtualenv
return
# venv detection:
# stdlib venv may symlink sys.executable, so we can't use realpath.
# but others can symlink *to* the venv Python, so we can't just use sys.executable.
# So we just check every item in the symlink tree (generally <= 3)
p = os.path.normcase(sys.executable)
paths = [p]
while os.path.islink(p):
p = os.path.normcase(os.path.join(os.path.dirname(p), os.readlink(p)))
paths.append(p)
p_venv = os.path.normcase(os.environ['VIRTUAL_ENV'])
if any(p.startswith(p_venv) for p in paths):
# Running properly in the virtualenv, don't need to do anything
return
warn("Attempting to work in a virtualenv. If you encounter problems, please "
"install IPython inside the virtualenv.")
if sys.platform == "win32":
virtual_env = os.path.join(os.environ['VIRTUAL_ENV'], 'Lib', 'site-packages')
else:
virtual_env = os.path.join(os.environ['VIRTUAL_ENV'], 'lib',
'python%d.%d' % sys.version_info[:2], 'site-packages')
import site
sys.path.insert(0, virtual_env)
site.addsitedir(virtual_env)
#-------------------------------------------------------------------------
# Things related to injections into the sys module
#-------------------------------------------------------------------------
def save_sys_module_state(self):
"""Save the state of hooks in the sys module.
This has to be called after self.user_module is created.
"""
self._orig_sys_module_state = {'stdin': sys.stdin,
'stdout': sys.stdout,
'stderr': sys.stderr,
'excepthook': sys.excepthook}
self._orig_sys_modules_main_name = self.user_module.__name__
self._orig_sys_modules_main_mod = sys.modules.get(self.user_module.__name__)
def restore_sys_module_state(self):
"""Restore the state of the sys module."""
try:
for k, v in iteritems(self._orig_sys_module_state):
setattr(sys, k, v)
except AttributeError:
pass
        # Reset what was done in self.init_sys_modules
if self._orig_sys_modules_main_mod is not None:
sys.modules[self._orig_sys_modules_main_name] = self._orig_sys_modules_main_mod
#-------------------------------------------------------------------------
# Things related to the banner
#-------------------------------------------------------------------------
@property
def banner(self):
banner = self.banner1
if self.profile and self.profile != 'default':
banner += '\nIPython profile: %s\n' % self.profile
if self.banner2:
banner += '\n' + self.banner2
return banner
def show_banner(self, banner=None):
if banner is None:
banner = self.banner
self.write(banner)
#-------------------------------------------------------------------------
# Things related to hooks
#-------------------------------------------------------------------------
def init_hooks(self):
# hooks holds pointers used for user-side customizations
self.hooks = Struct()
self.strdispatchers = {}
# Set all default hooks, defined in the IPython.hooks module.
hooks = IPython.core.hooks
for hook_name in hooks.__all__:
# default hooks have priority 100, i.e. low; user hooks should have
# 0-100 priority
self.set_hook(hook_name,getattr(hooks,hook_name), 100, _warn_deprecated=False)
if self.display_page:
self.set_hook('show_in_pager', page.as_hook(page.display_page), 90)
def set_hook(self,name,hook, priority=50, str_key=None, re_key=None,
_warn_deprecated=True):
"""set_hook(name,hook) -> sets an internal IPython hook.
IPython exposes some of its internal API as user-modifiable hooks. By
adding your function to one of these hooks, you can modify IPython's
behavior to call at runtime your own routines."""
# At some point in the future, this should validate the hook before it
# accepts it. Probably at least check that the hook takes the number
# of args it's supposed to.
f = types.MethodType(hook,self)
# check if the hook is for strdispatcher first
if str_key is not None:
sdp = self.strdispatchers.get(name, StrDispatch())
sdp.add_s(str_key, f, priority )
self.strdispatchers[name] = sdp
return
if re_key is not None:
sdp = self.strdispatchers.get(name, StrDispatch())
sdp.add_re(re.compile(re_key), f, priority )
self.strdispatchers[name] = sdp
return
dp = getattr(self.hooks, name, None)
if name not in IPython.core.hooks.__all__:
print("Warning! Hook '%s' is not one of %s" % \
(name, IPython.core.hooks.__all__ ))
if _warn_deprecated and (name in IPython.core.hooks.deprecated):
alternative = IPython.core.hooks.deprecated[name]
warn("Hook {} is deprecated. Use {} instead.".format(name, alternative))
if not dp:
dp = IPython.core.hooks.CommandChainDispatcher()
try:
dp.add(f,priority)
except AttributeError:
# it was not commandchain, plain old func - replace
dp = f
setattr(self.hooks,name, dp)
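    # Usage sketch (illustrative only): install a custom editor hook so %edit
    # opens files with a different command.  The signature mirrors
    # IPython.core.hooks.editor; the editor binary is an assumption.
    #
    #   def my_editor(self, filename, linenum=None, wait=True):
    #       import subprocess
    #       proc = subprocess.Popen(['nano', '+%d' % (linenum or 1), filename])
    #       if wait:
    #           proc.wait()
    #
    #   get_ipython().set_hook('editor', my_editor)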
#-------------------------------------------------------------------------
# Things related to events
#-------------------------------------------------------------------------
def init_events(self):
self.events = EventManager(self, available_events)
self.events.register("pre_execute", self._clear_warning_registry)
def register_post_execute(self, func):
"""DEPRECATED: Use ip.events.register('post_run_cell', func)
Register a function for calling after code execution.
"""
warn("ip.register_post_execute is deprecated, use "
"ip.events.register('post_run_cell', func) instead.")
self.events.register('post_run_cell', func)
def _clear_warning_registry(self):
# clear the warning registry, so that different code blocks with
# overlapping line number ranges don't cause spurious suppression of
# warnings (see gh-6611 for details)
if "__warningregistry__" in self.user_global_ns:
del self.user_global_ns["__warningregistry__"]
#-------------------------------------------------------------------------
# Things related to the "main" module
#-------------------------------------------------------------------------
def new_main_mod(self, filename, modname):
"""Return a new 'main' module object for user code execution.
``filename`` should be the path of the script which will be run in the
module. Requests with the same filename will get the same module, with
its namespace cleared.
``modname`` should be the module name - normally either '__main__' or
the basename of the file without the extension.
When scripts are executed via %run, we must keep a reference to their
__main__ module around so that Python doesn't
clear it, rendering references to module globals useless.
This method keeps said reference in a private dict, keyed by the
absolute path of the script. This way, for multiple executions of the
same script we only keep one copy of the namespace (the last one),
thus preventing memory leaks from old references while allowing the
objects from the last execution to be accessible.
"""
filename = os.path.abspath(filename)
try:
main_mod = self._main_mod_cache[filename]
except KeyError:
main_mod = self._main_mod_cache[filename] = types.ModuleType(
py3compat.cast_bytes_py2(modname),
doc="Module created for script run in IPython")
else:
main_mod.__dict__.clear()
main_mod.__name__ = modname
main_mod.__file__ = filename
# It seems pydoc (and perhaps others) needs any module instance to
# implement a __nonzero__ method
main_mod.__nonzero__ = lambda : True
return main_mod
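    # Sketch of how a %run-style caller could use this cache (the real logic
    # lives in the %run magic; paths and names here are illustrative):
    #
    #   mod = shell.new_main_mod('/path/to/script.py', '__main__')
    #   with open('/path/to/script.py') as f:
    #       exec(compile(f.read(), '/path/to/script.py', 'exec'), mod.__dict__)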
def clear_main_mod_cache(self):
"""Clear the cache of main modules.
Mainly for use by utilities like %reset.
Examples
--------
In [15]: import IPython
In [16]: m = _ip.new_main_mod(IPython.__file__, 'IPython')
In [17]: len(_ip._main_mod_cache) > 0
Out[17]: True
In [18]: _ip.clear_main_mod_cache()
In [19]: len(_ip._main_mod_cache) == 0
Out[19]: True
"""
self._main_mod_cache.clear()
#-------------------------------------------------------------------------
# Things related to debugging
#-------------------------------------------------------------------------
def init_pdb(self):
# Set calling of pdb on exceptions
# self.call_pdb is a property
self.call_pdb = self.pdb
def _get_call_pdb(self):
return self._call_pdb
def _set_call_pdb(self,val):
if val not in (0,1,False,True):
raise ValueError('new call_pdb value must be boolean')
# store value in instance
self._call_pdb = val
# notify the actual exception handlers
self.InteractiveTB.call_pdb = val
call_pdb = property(_get_call_pdb,_set_call_pdb,None,
'Control auto-activation of pdb at exceptions')
def debugger(self,force=False):
"""Call the pydb/pdb debugger.
Keywords:
- force(False): by default, this routine checks the instance call_pdb
flag and does not actually invoke the debugger if the flag is false.
The 'force' option forces the debugger to activate even if the flag
is false.
"""
if not (force or self.call_pdb):
return
if not hasattr(sys,'last_traceback'):
error('No traceback has been produced, nothing to debug.')
return
# use pydb if available
if debugger.has_pydb:
from pydb import pm
else:
# fallback to our internal debugger
pm = lambda : self.InteractiveTB.debugger(force=True)
with self.readline_no_record:
pm()
#-------------------------------------------------------------------------
# Things related to IPython's various namespaces
#-------------------------------------------------------------------------
default_user_namespaces = True
def init_create_namespaces(self, user_module=None, user_ns=None):
# Create the namespace where the user will operate. user_ns is
# normally the only one used, and it is passed to the exec calls as
# the locals argument. But we do carry a user_global_ns namespace
# given as the exec 'globals' argument, This is useful in embedding
# situations where the ipython shell opens in a context where the
# distinction between locals and globals is meaningful. For
# non-embedded contexts, it is just the same object as the user_ns dict.
# FIXME. For some strange reason, __builtins__ is showing up at user
# level as a dict instead of a module. This is a manual fix, but I
# should really track down where the problem is coming from. Alex
# Schmolck reported this problem first.
# A useful post by Alex Martelli on this topic:
# Re: inconsistent value from __builtins__
# Von: Alex Martelli <[email protected]>
# Datum: Freitag 01 Oktober 2004 04:45:34 nachmittags/abends
# Gruppen: comp.lang.python
# Michael Hohn <[email protected]> wrote:
# > >>> print type(builtin_check.get_global_binding('__builtins__'))
# > <type 'dict'>
# > >>> print type(__builtins__)
# > <type 'module'>
# > Is this difference in return value intentional?
# Well, it's documented that '__builtins__' can be either a dictionary
# or a module, and it's been that way for a long time. Whether it's
# intentional (or sensible), I don't know. In any case, the idea is
# that if you need to access the built-in namespace directly, you
# should start with "import __builtin__" (note, no 's') which will
# definitely give you a module. Yeah, it's somewhat confusing:-(.
# These routines return a properly built module and dict as needed by
# the rest of the code, and can also be used by extension writers to
# generate properly initialized namespaces.
if (user_ns is not None) or (user_module is not None):
self.default_user_namespaces = False
self.user_module, self.user_ns = self.prepare_user_module(user_module, user_ns)
# A record of hidden variables we have added to the user namespace, so
# we can list later only variables defined in actual interactive use.
self.user_ns_hidden = {}
# Now that FakeModule produces a real module, we've run into a nasty
# problem: after script execution (via %run), the module where the user
# code ran is deleted. Now that this object is a true module (needed
# so doctest and other tools work correctly), the Python module
# teardown mechanism runs over it, and sets to None every variable
# present in that module. Top-level references to objects from the
# script survive, because the user_ns is updated with them. However,
# calling functions defined in the script that use other things from
# the script will fail, because the function's closure had references
# to the original objects, which are now all None. So we must protect
# these modules from deletion by keeping a cache.
#
# To avoid keeping stale modules around (we only need the one from the
# last run), we use a dict keyed with the full path to the script, so
# only the last version of the module is held in the cache. Note,
# however, that we must cache the module *namespace contents* (their
# __dict__). Because if we try to cache the actual modules, old ones
# (uncached) could be destroyed while still holding references (such as
        # those held by GUI objects that tend to be long-lived).
#
# The %reset command will flush this cache. See the cache_main_mod()
# and clear_main_mod_cache() methods for details on use.
# This is the cache used for 'main' namespaces
self._main_mod_cache = {}
# A table holding all the namespaces IPython deals with, so that
# introspection facilities can search easily.
self.ns_table = {'user_global':self.user_module.__dict__,
'user_local':self.user_ns,
'builtin':builtin_mod.__dict__
}
@property
def user_global_ns(self):
return self.user_module.__dict__
def prepare_user_module(self, user_module=None, user_ns=None):
"""Prepare the module and namespace in which user code will be run.
When IPython is started normally, both parameters are None: a new module
is created automatically, and its __dict__ used as the namespace.
If only user_module is provided, its __dict__ is used as the namespace.
If only user_ns is provided, a dummy module is created, and user_ns
becomes the global namespace. If both are provided (as they may be
when embedding), user_ns is the local namespace, and user_module
provides the global namespace.
Parameters
----------
user_module : module, optional
The current user module in which IPython is being run. If None,
a clean module will be created.
user_ns : dict, optional
A namespace in which to run interactive commands.
Returns
-------
A tuple of user_module and user_ns, each properly initialised.
"""
if user_module is None and user_ns is not None:
user_ns.setdefault("__name__", "__main__")
user_module = DummyMod()
user_module.__dict__ = user_ns
if user_module is None:
user_module = types.ModuleType("__main__",
doc="Automatically created module for IPython interactive environment")
# We must ensure that __builtin__ (without the final 's') is always
# available and pointing to the __builtin__ *module*. For more details:
# http://mail.python.org/pipermail/python-dev/2001-April/014068.html
user_module.__dict__.setdefault('__builtin__', builtin_mod)
user_module.__dict__.setdefault('__builtins__', builtin_mod)
if user_ns is None:
user_ns = user_module.__dict__
return user_module, user_ns
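    # Embedding sketch (illustrative): the three supported call patterns.
    #
    #   mod, ns = shell.prepare_user_module()                   # fresh __main__ module
    #   mod, ns = shell.prepare_user_module(user_ns=locals())   # dict backed by a dummy module
    #   mod, ns = shell.prepare_user_module(user_module=mymod,  # mymod.__dict__ is the globals,
    #                                       user_ns=locals())   # locals() is the locals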
def init_sys_modules(self):
# We need to insert into sys.modules something that looks like a
# module but which accesses the IPython namespace, for shelve and
# pickle to work interactively. Normally they rely on getting
# everything out of __main__, but for embedding purposes each IPython
# instance has its own private namespace, so we can't go shoving
# everything into __main__.
# note, however, that we should only do this for non-embedded
# ipythons, which really mimic the __main__.__dict__ with their own
# namespace. Embedded instances, on the other hand, should not do
# this because they need to manage the user local/global namespaces
# only, but they live within a 'normal' __main__ (meaning, they
# shouldn't overtake the execution environment of the script they're
# embedded in).
# This is overridden in the InteractiveShellEmbed subclass to a no-op.
main_name = self.user_module.__name__
sys.modules[main_name] = self.user_module
def init_user_ns(self):
"""Initialize all user-visible namespaces to their minimum defaults.
Certain history lists are also initialized here, as they effectively
act as user namespaces.
Notes
-----
All data structures here are only filled in, they are NOT reset by this
method. If they were not empty before, data will simply be added to
        them.
"""
# This function works in two parts: first we put a few things in
# user_ns, and we sync that contents into user_ns_hidden so that these
# initial variables aren't shown by %who. After the sync, we add the
# rest of what we *do* want the user to see with %who even on a new
# session (probably nothing, so they really only see their own stuff)
# The user dict must *always* have a __builtin__ reference to the
# Python standard __builtin__ namespace, which must be imported.
# This is so that certain operations in prompt evaluation can be
# reliably executed with builtins. Note that we can NOT use
# __builtins__ (note the 's'), because that can either be a dict or a
# module, and can even mutate at runtime, depending on the context
# (Python makes no guarantees on it). In contrast, __builtin__ is
# always a module object, though it must be explicitly imported.
# For more details:
# http://mail.python.org/pipermail/python-dev/2001-April/014068.html
ns = dict()
# make global variables for user access to the histories
ns['_ih'] = self.history_manager.input_hist_parsed
ns['_oh'] = self.history_manager.output_hist
ns['_dh'] = self.history_manager.dir_hist
ns['_sh'] = shadowns
# user aliases to input and output histories. These shouldn't show up
# in %who, as they can have very large reprs.
ns['In'] = self.history_manager.input_hist_parsed
ns['Out'] = self.history_manager.output_hist
# Store myself as the public api!!!
ns['get_ipython'] = self.get_ipython
ns['exit'] = self.exiter
ns['quit'] = self.exiter
# Sync what we've added so far to user_ns_hidden so these aren't seen
# by %who
self.user_ns_hidden.update(ns)
# Anything put into ns now would show up in %who. Think twice before
# putting anything here, as we really want %who to show the user their
# stuff, not our variables.
# Finally, update the real user's namespace
self.user_ns.update(ns)
@property
def all_ns_refs(self):
"""Get a list of references to all the namespace dictionaries in which
IPython might store a user-created object.
Note that this does not include the displayhook, which also caches
objects from the output."""
return [self.user_ns, self.user_global_ns, self.user_ns_hidden] + \
[m.__dict__ for m in self._main_mod_cache.values()]
def reset(self, new_session=True):
"""Clear all internal namespaces, and attempt to release references to
user objects.
If new_session is True, a new history session will be opened.
"""
# Clear histories
self.history_manager.reset(new_session)
# Reset counter used to index all histories
if new_session:
self.execution_count = 1
# Flush cached output items
if self.displayhook.do_full_cache:
self.displayhook.flush()
# The main execution namespaces must be cleared very carefully,
# skipping the deletion of the builtin-related keys, because doing so
        # would cause errors in many objects' __del__ methods.
if self.user_ns is not self.user_global_ns:
self.user_ns.clear()
ns = self.user_global_ns
drop_keys = set(ns.keys())
drop_keys.discard('__builtin__')
drop_keys.discard('__builtins__')
drop_keys.discard('__name__')
for k in drop_keys:
del ns[k]
self.user_ns_hidden.clear()
# Restore the user namespaces to minimal usability
self.init_user_ns()
# Restore the default and user aliases
self.alias_manager.clear_aliases()
self.alias_manager.init_aliases()
# Flush the private list of module references kept for script
# execution protection
self.clear_main_mod_cache()
def del_var(self, varname, by_name=False):
"""Delete a variable from the various namespaces, so that, as
far as possible, we're not keeping any hidden references to it.
Parameters
----------
varname : str
The name of the variable to delete.
by_name : bool
If True, delete variables with the given name in each
namespace. If False (default), find the variable in the user
namespace, and delete references to it.
"""
if varname in ('__builtin__', '__builtins__'):
raise ValueError("Refusing to delete %s" % varname)
ns_refs = self.all_ns_refs
if by_name: # Delete by name
for ns in ns_refs:
try:
del ns[varname]
except KeyError:
pass
else: # Delete by object
try:
obj = self.user_ns[varname]
except KeyError:
raise NameError("name '%s' is not defined" % varname)
# Also check in output history
ns_refs.append(self.history_manager.output_hist)
for ns in ns_refs:
to_delete = [n for n, o in iteritems(ns) if o is obj]
for name in to_delete:
del ns[name]
# displayhook keeps extra references, but not in a dictionary
for name in ('_', '__', '___'):
if getattr(self.displayhook, name) is obj:
setattr(self.displayhook, name, None)
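    # Illustrative call (the name is hypothetical): drop every user-visible
    # reference to `df`, including the one kept by the output history, so the
    # object can actually be garbage collected:
    #
    #   get_ipython().del_var('df')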
def reset_selective(self, regex=None):
"""Clear selective variables from internal namespaces based on a
specified regular expression.
Parameters
----------
regex : string or compiled pattern, optional
A regular expression pattern that will be used in searching
variable names in the users namespaces.
"""
if regex is not None:
try:
m = re.compile(regex)
except TypeError:
raise TypeError('regex must be a string or compiled pattern')
# Search for keys in each namespace that match the given regex
# If a match is found, delete the key/value pair.
for ns in self.all_ns_refs:
                for var in list(ns):  # copy the keys: we delete from ns while iterating
if m.search(var):
del ns[var]
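    # Illustrative call: remove every name starting with "tmp_" from all the
    # namespaces IPython tracks:
    #
    #   get_ipython().reset_selective(r'^tmp_')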
def push(self, variables, interactive=True):
"""Inject a group of variables into the IPython user namespace.
Parameters
----------
variables : dict, str or list/tuple of str
The variables to inject into the user's namespace. If a dict, a
simple update is done. If a str, the string is assumed to have
variable names separated by spaces. A list/tuple of str can also
be used to give the variable names. If just the variable names are
            given (list/tuple/str) then the variable values are looked up in
            the caller's frame.
interactive : bool
If True (default), the variables will be listed with the ``who``
magic.
"""
vdict = None
# We need a dict of name/value pairs to do namespace updates.
if isinstance(variables, dict):
vdict = variables
elif isinstance(variables, string_types+(list, tuple)):
if isinstance(variables, string_types):
vlist = variables.split()
else:
vlist = variables
vdict = {}
cf = sys._getframe(1)
for name in vlist:
try:
vdict[name] = eval(name, cf.f_globals, cf.f_locals)
except:
print('Could not get variable %s from %s' %
(name,cf.f_code.co_name))
else:
raise ValueError('variables must be a dict/str/list/tuple')
# Propagate variables to user namespace
self.user_ns.update(vdict)
# And configure interactive visibility
user_ns_hidden = self.user_ns_hidden
if interactive:
for name in vdict:
user_ns_hidden.pop(name, None)
else:
user_ns_hidden.update(vdict)
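    # An illustrative sketch of push, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     ip.push({'x': 10, 'y': 20})                 # dict: plain namespace update
    #     a, b = 1, 2
    #     ip.push('a b')                              # names looked up in the caller's frame
    #     ip.push({'hidden': 42}, interactive=False)  # kept out of %who listings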
def drop_by_id(self, variables):
"""Remove a dict of variables from the user namespace, if they are the
same as the values in the dictionary.
This is intended for use by extensions: variables that they've added can
be taken back out if they are unloaded, without removing any that the
user has overwritten.
Parameters
----------
variables : dict
A dictionary mapping object names (as strings) to the objects.
"""
for name, obj in iteritems(variables):
if name in self.user_ns and self.user_ns[name] is obj:
del self.user_ns[name]
self.user_ns_hidden.pop(name, None)
#-------------------------------------------------------------------------
# Things related to object introspection
#-------------------------------------------------------------------------
def _ofind(self, oname, namespaces=None):
"""Find an object in the available namespaces.
self._ofind(oname) -> dict with keys: found,obj,ospace,ismagic
Has special code to detect magic functions.
"""
oname = oname.strip()
#print '1- oname: <%r>' % oname # dbg
if not oname.startswith(ESC_MAGIC) and \
not oname.startswith(ESC_MAGIC2) and \
not py3compat.isidentifier(oname, dotted=True):
return dict(found=False)
if namespaces is None:
# Namespaces to search in:
# Put them in a list. The order is important so that we
# find things in the same order that Python finds them.
namespaces = [ ('Interactive', self.user_ns),
('Interactive (global)', self.user_global_ns),
('Python builtin', builtin_mod.__dict__),
]
# initialize results to 'null'
found = False; obj = None; ospace = None;
ismagic = False; isalias = False; parent = None
# We need to special-case 'print', which as of python2.6 registers as a
# function but should only be treated as one if print_function was
# loaded with a future import. In this case, just bail.
if (oname == 'print' and not py3compat.PY3 and not \
(self.compile.compiler_flags & __future__.CO_FUTURE_PRINT_FUNCTION)):
return {'found':found, 'obj':obj, 'namespace':ospace,
'ismagic':ismagic, 'isalias':isalias, 'parent':parent}
# Look for the given name by splitting it in parts. If the head is
# found, then we look for all the remaining parts as members, and only
# declare success if we can find them all.
oname_parts = oname.split('.')
oname_head, oname_rest = oname_parts[0],oname_parts[1:]
for nsname,ns in namespaces:
try:
obj = ns[oname_head]
except KeyError:
continue
else:
#print 'oname_rest:', oname_rest # dbg
for idx, part in enumerate(oname_rest):
try:
parent = obj
# The last part is looked up in a special way to avoid
# descriptor invocation as it may raise or have side
# effects.
if idx == len(oname_rest) - 1:
obj = self._getattr_property(obj, part)
else:
obj = getattr(obj, part)
except:
# Blanket except b/c some badly implemented objects
# allow __getattr__ to raise exceptions other than
# AttributeError, which then crashes IPython.
break
else:
# If we finish the for loop (no break), we got all members
found = True
ospace = nsname
break # namespace loop
# Try to see if it's magic
if not found:
obj = None
if oname.startswith(ESC_MAGIC2):
oname = oname.lstrip(ESC_MAGIC2)
obj = self.find_cell_magic(oname)
elif oname.startswith(ESC_MAGIC):
oname = oname.lstrip(ESC_MAGIC)
obj = self.find_line_magic(oname)
else:
# search without prefix, so run? will find %run?
obj = self.find_line_magic(oname)
if obj is None:
obj = self.find_cell_magic(oname)
if obj is not None:
found = True
ospace = 'IPython internal'
ismagic = True
isalias = isinstance(obj, Alias)
# Last try: special-case some literals like '', [], {}, etc:
if not found and oname_head in ["''",'""','[]','{}','()']:
obj = eval(oname_head)
found = True
ospace = 'Interactive'
return {'found':found, 'obj':obj, 'namespace':ospace,
'ismagic':ismagic, 'isalias':isalias, 'parent':parent}
@staticmethod
def _getattr_property(obj, attrname):
"""Property-aware getattr to use in object finding.
If attrname represents a property, return it unevaluated (in case it has
        side effects or raises an error).
"""
if not isinstance(obj, type):
try:
# `getattr(type(obj), attrname)` is not guaranteed to return
# `obj`, but does so for property:
#
# property.__get__(self, None, cls) -> self
#
# The universal alternative is to traverse the mro manually
# searching for attrname in class dicts.
attr = getattr(type(obj), attrname)
except AttributeError:
pass
else:
# This relies on the fact that data descriptors (with both
# __get__ & __set__ magic methods) take precedence over
# instance-level attributes:
#
# class A(object):
# @property
# def foobar(self): return 123
# a = A()
# a.__dict__['foobar'] = 345
# a.foobar # == 123
#
# So, a property may be returned right away.
if isinstance(attr, property):
return attr
# Nothing helped, fall back.
return getattr(obj, attrname)
def _object_find(self, oname, namespaces=None):
"""Find an object and return a struct with info about it."""
return Struct(self._ofind(oname, namespaces))
def _inspect(self, meth, oname, namespaces=None, **kw):
"""Generic interface to the inspector system.
This function is meant to be called by pdef, pdoc & friends."""
info = self._object_find(oname, namespaces)
if info.found:
pmethod = getattr(self.inspector, meth)
formatter = format_screen if info.ismagic else None
if meth == 'pdoc':
pmethod(info.obj, oname, formatter)
elif meth == 'pinfo':
pmethod(info.obj, oname, formatter, info, **kw)
else:
pmethod(info.obj, oname)
else:
print('Object `%s` not found.' % oname)
return 'not found' # so callers can take other action
def object_inspect(self, oname, detail_level=0):
"""Get object info about oname"""
with self.builtin_trap:
info = self._object_find(oname)
if info.found:
return self.inspector.info(info.obj, oname, info=info,
detail_level=detail_level
)
else:
return oinspect.object_info(name=oname, found=False)
def object_inspect_text(self, oname, detail_level=0):
"""Get object info as formatted text"""
with self.builtin_trap:
info = self._object_find(oname)
if info.found:
return self.inspector._format_info(info.obj, oname, info=info,
detail_level=detail_level
)
else:
raise KeyError(oname)
#-------------------------------------------------------------------------
# Things related to history management
#-------------------------------------------------------------------------
def init_history(self):
"""Sets up the command history, and starts regular autosaves."""
self.history_manager = HistoryManager(shell=self, parent=self)
self.configurables.append(self.history_manager)
#-------------------------------------------------------------------------
# Things related to exception handling and tracebacks (not debugging)
#-------------------------------------------------------------------------
def init_traceback_handlers(self, custom_exceptions):
# Syntax error handler.
self.SyntaxTB = ultratb.SyntaxTB(color_scheme='NoColor')
# The interactive one is initialized with an offset, meaning we always
# want to remove the topmost item in the traceback, which is our own
# internal code. Valid modes: ['Plain','Context','Verbose']
self.InteractiveTB = ultratb.AutoFormattedTB(mode = 'Plain',
color_scheme='NoColor',
tb_offset = 1,
check_cache=check_linecache_ipython)
# The instance will store a pointer to the system-wide exception hook,
# so that runtime code (such as magics) can access it. This is because
# during the read-eval loop, it may get temporarily overwritten.
self.sys_excepthook = sys.excepthook
# and add any custom exception handlers the user may have specified
self.set_custom_exc(*custom_exceptions)
# Set the exception mode
self.InteractiveTB.set_mode(mode=self.xmode)
def set_custom_exc(self, exc_tuple, handler):
"""set_custom_exc(exc_tuple,handler)
Set a custom exception handler, which will be called if any of the
exceptions in exc_tuple occur in the mainloop (specifically, in the
run_code() method).
Parameters
----------
exc_tuple : tuple of exception classes
A *tuple* of exception classes, for which to call the defined
handler. It is very important that you use a tuple, and NOT A
LIST here, because of the way Python's except statement works. If
you only want to trap a single exception, use a singleton tuple::
exc_tuple == (MyCustomException,)
handler : callable
handler must have the following signature::
def my_handler(self, etype, value, tb, tb_offset=None):
...
return structured_traceback
Your handler must return a structured traceback (a list of strings),
or None.
This will be made into an instance method (via types.MethodType)
of IPython itself, and it will be called if any of the exceptions
listed in the exc_tuple are caught. If the handler is None, an
internal basic one is used, which just prints basic info.
To protect IPython from crashes, if your handler ever raises an
exception or returns an invalid result, it will be immediately
disabled.
WARNING: by putting in your own exception handler into IPython's main
execution loop, you run a very good chance of nasty crashes. This
facility should only be used if you really know what you are doing."""
assert type(exc_tuple)==type(()) , \
"The custom exceptions must be given AS A TUPLE."
def dummy_handler(self,etype,value,tb,tb_offset=None):
print('*** Simple custom exception handler ***')
print('Exception type :',etype)
print('Exception value:',value)
print('Traceback :',tb)
#print 'Source code :','\n'.join(self.buffer)
def validate_stb(stb):
"""validate structured traceback return type
return type of CustomTB *should* be a list of strings, but allow
single strings or None, which are harmless.
This function will *always* return a list of strings,
and will raise a TypeError if stb is inappropriate.
"""
msg = "CustomTB must return list of strings, not %r" % stb
if stb is None:
return []
elif isinstance(stb, string_types):
return [stb]
elif not isinstance(stb, list):
raise TypeError(msg)
# it's a list
for line in stb:
# check every element
if not isinstance(line, string_types):
raise TypeError(msg)
return stb
if handler is None:
wrapped = dummy_handler
else:
def wrapped(self,etype,value,tb,tb_offset=None):
"""wrap CustomTB handler, to protect IPython from user code
This makes it harder (but not impossible) for custom exception
handlers to crash IPython.
"""
try:
stb = handler(self,etype,value,tb,tb_offset=tb_offset)
return validate_stb(stb)
except:
# clear custom handler immediately
self.set_custom_exc((), None)
print("Custom TB Handler failed, unregistering", file=io.stderr)
# show the exception in handler first
stb = self.InteractiveTB.structured_traceback(*sys.exc_info())
print(self.InteractiveTB.stb2text(stb), file=io.stdout)
print("The original exception:", file=io.stdout)
stb = self.InteractiveTB.structured_traceback(
(etype,value,tb), tb_offset=tb_offset
)
return stb
self.CustomTB = types.MethodType(wrapped,self)
self.custom_exceptions = exc_tuple
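    # An illustrative custom handler for set_custom_exc -- a sketch, assuming
    # ``ip`` is the running InteractiveShell instance; the handler is bound as a
    # method, so its first argument is the shell itself:
    #
    #     def my_handler(shell, etype, value, tb, tb_offset=None):
    #         print("Caught:", etype.__name__, value)
    #         return shell.InteractiveTB.structured_traceback(
    #             etype, value, tb, tb_offset=tb_offset)
    #
    #     ip.set_custom_exc((ZeroDivisionError,), my_handler)  # note the tuple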
def excepthook(self, etype, value, tb):
"""One more defense for GUI apps that call sys.excepthook.
GUI frameworks like wxPython trap exceptions and call
sys.excepthook themselves. I guess this is a feature that
enables them to keep running after exceptions that would
otherwise kill their mainloop. This is a bother for IPython
        which expects to catch all of the program exceptions with a try:
except: statement.
Normally, IPython sets sys.excepthook to a CrashHandler instance, so if
any app directly invokes sys.excepthook, it will look to the user like
IPython crashed. In order to work around this, we can disable the
CrashHandler and replace it with this excepthook instead, which prints a
regular traceback using our InteractiveTB. In this fashion, apps which
call sys.excepthook will generate a regular-looking exception from
IPython, and the CrashHandler will only be triggered by real IPython
crashes.
This hook should be used sparingly, only in places which are not likely
to be true IPython errors.
"""
self.showtraceback((etype, value, tb), tb_offset=0)
def _get_exc_info(self, exc_tuple=None):
"""get exc_info from a given tuple, sys.exc_info() or sys.last_type etc.
Ensures sys.last_type,value,traceback hold the exc_info we found,
from whichever source.
raises ValueError if none of these contain any information
"""
if exc_tuple is None:
etype, value, tb = sys.exc_info()
else:
etype, value, tb = exc_tuple
if etype is None:
if hasattr(sys, 'last_type'):
etype, value, tb = sys.last_type, sys.last_value, \
sys.last_traceback
if etype is None:
raise ValueError("No exception to find")
# Now store the exception info in sys.last_type etc.
# WARNING: these variables are somewhat deprecated and not
# necessarily safe to use in a threaded environment, but tools
# like pdb depend on their existence, so let's set them. If we
# find problems in the field, we'll need to revisit their use.
sys.last_type = etype
sys.last_value = value
sys.last_traceback = tb
return etype, value, tb
def show_usage_error(self, exc):
"""Show a short message for UsageErrors
These are special exceptions that shouldn't show a traceback.
"""
self.write_err("UsageError: %s" % exc)
def get_exception_only(self, exc_tuple=None):
"""
Return as a string (ending with a newline) the exception that
just occurred, without any traceback.
"""
etype, value, tb = self._get_exc_info(exc_tuple)
msg = traceback.format_exception_only(etype, value)
return ''.join(msg)
def showtraceback(self, exc_tuple=None, filename=None, tb_offset=None,
exception_only=False):
"""Display the exception that just occurred.
If nothing is known about the exception, this is the method which
should be used throughout the code for presenting user tracebacks,
rather than directly invoking the InteractiveTB object.
A specific showsyntaxerror() also exists, but this method can take
care of calling it if needed, so unless you are explicitly catching a
SyntaxError exception, don't try to analyze the stack manually and
simply call this method."""
try:
try:
etype, value, tb = self._get_exc_info(exc_tuple)
except ValueError:
self.write_err('No traceback available to show.\n')
return
if issubclass(etype, SyntaxError):
# Though this won't be called by syntax errors in the input
# line, there may be SyntaxError cases with imported code.
self.showsyntaxerror(filename)
elif etype is UsageError:
self.show_usage_error(value)
else:
if exception_only:
stb = ['An exception has occurred, use %tb to see '
'the full traceback.\n']
stb.extend(self.InteractiveTB.get_exception_only(etype,
value))
else:
try:
# Exception classes can customise their traceback - we
# use this in IPython.parallel for exceptions occurring
# in the engines. This should return a list of strings.
stb = value._render_traceback_()
except Exception:
stb = self.InteractiveTB.structured_traceback(etype,
value, tb, tb_offset=tb_offset)
self._showtraceback(etype, value, stb)
if self.call_pdb:
# drop into debugger
self.debugger(force=True)
return
# Actually show the traceback
self._showtraceback(etype, value, stb)
except KeyboardInterrupt:
self.write_err('\n' + self.get_exception_only())
def _showtraceback(self, etype, evalue, stb):
"""Actually show a traceback.
Subclasses may override this method to put the traceback on a different
place, like a side channel.
"""
print(self.InteractiveTB.stb2text(stb), file=io.stdout)
def showsyntaxerror(self, filename=None):
"""Display the syntax error that just occurred.
This doesn't display a stack trace because there isn't one.
If a filename is given, it is stuffed in the exception instead
of what was there before (because Python's parser always uses
"<string>" when reading from a string).
"""
etype, value, last_traceback = self._get_exc_info()
if filename and issubclass(etype, SyntaxError):
try:
value.filename = filename
except:
# Not the format we expect; leave it alone
pass
stb = self.SyntaxTB.structured_traceback(etype, value, [])
self._showtraceback(etype, value, stb)
# This is overridden in TerminalInteractiveShell to show a message about
# the %paste magic.
def showindentationerror(self):
"""Called by run_cell when there's an IndentationError in code entered
at the prompt.
This is overridden in TerminalInteractiveShell to show a message about
the %paste magic."""
self.showsyntaxerror()
#-------------------------------------------------------------------------
# Things related to readline
#-------------------------------------------------------------------------
def init_readline(self):
"""Moved to terminal subclass, here only to simplify the init logic."""
self.readline = None
# Set a number of methods that depend on readline to be no-op
self.readline_no_record = NoOpContext()
self.set_readline_completer = no_op
self.set_custom_completer = no_op
@skip_doctest
def set_next_input(self, s, replace=False):
""" Sets the 'default' input string for the next command line.
Example::
In [1]: _ip.set_next_input("Hello Word")
In [2]: Hello Word_ # cursor is here
"""
self.rl_next_input = py3compat.cast_bytes_py2(s)
def _indent_current_str(self):
"""return the current level of indentation as a string"""
return self.input_splitter.indent_spaces * ' '
#-------------------------------------------------------------------------
# Things related to text completion
#-------------------------------------------------------------------------
def init_completer(self):
"""Initialize the completion machinery.
This creates completion machinery that can be used by client code,
either interactively in-process (typically triggered by the readline
library), programmatically (such as in test suites) or out-of-process
(typically over the network by remote frontends).
"""
from IPython.core.completer import IPCompleter
from IPython.core.completerlib import (module_completer,
magic_run_completer, cd_completer, reset_completer)
self.Completer = IPCompleter(shell=self,
namespace=self.user_ns,
global_namespace=self.user_global_ns,
use_readline=self.has_readline,
parent=self,
)
self.configurables.append(self.Completer)
# Add custom completers to the basic ones built into IPCompleter
sdisp = self.strdispatchers.get('complete_command', StrDispatch())
self.strdispatchers['complete_command'] = sdisp
self.Completer.custom_completers = sdisp
self.set_hook('complete_command', module_completer, str_key = 'import')
self.set_hook('complete_command', module_completer, str_key = 'from')
self.set_hook('complete_command', module_completer, str_key = '%aimport')
self.set_hook('complete_command', magic_run_completer, str_key = '%run')
self.set_hook('complete_command', cd_completer, str_key = '%cd')
self.set_hook('complete_command', reset_completer, str_key = '%reset')
def complete(self, text, line=None, cursor_pos=None):
"""Return the completed text and a list of completions.
Parameters
----------
text : string
            A string of text to be completed on. It can be given as empty, and
            a line/position pair given instead. In this case, the
completer itself will split the line like readline does.
line : string, optional
The complete line that text is part of.
cursor_pos : int, optional
The position of the cursor on the input line.
Returns
-------
text : string
The actual text that was completed.
matches : list
A sorted list with all possible completions.
The optional arguments allow the completion to take more context into
account, and are part of the low-level completion API.
This is a wrapper around the completion mechanism, similar to what
readline does at the command line when the TAB key is hit. By
exposing it as a method, it can be used by other non-readline
environments (such as GUIs) for text completion.
Simple usage example:
In [1]: x = 'hello'
In [2]: _ip.complete('x.l')
Out[2]: ('x.l', ['x.ljust', 'x.lower', 'x.lstrip'])
"""
# Inject names into __builtin__ so we can complete on the added names.
with self.builtin_trap:
return self.Completer.complete(text, line, cursor_pos)
def set_custom_completer(self, completer, pos=0):
"""Adds a new custom completer function.
The position argument (defaults to 0) is the index in the completers
list where you want the completer to be inserted."""
newcomp = types.MethodType(completer,self.Completer)
self.Completer.matchers.insert(pos,newcomp)
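    # An illustrative custom completer -- a sketch; ``ip`` is assumed to be the
    # running InteractiveShell instance, and the matcher signature (the bound
    # IPCompleter instance plus the text being completed) is an assumption:
    #
    #     def color_matcher(ipcompleter, text):
    #         return [c for c in ('red', 'green', 'blue') if c.startswith(text)]
    #
    #     ip.set_custom_completer(color_matcher)   # inserted at position 0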
def set_completer_frame(self, frame=None):
"""Set the frame of the completer."""
if frame:
self.Completer.namespace = frame.f_locals
self.Completer.global_namespace = frame.f_globals
else:
self.Completer.namespace = self.user_ns
self.Completer.global_namespace = self.user_global_ns
#-------------------------------------------------------------------------
# Things related to magics
#-------------------------------------------------------------------------
def init_magics(self):
from IPython.core import magics as m
self.magics_manager = magic.MagicsManager(shell=self,
parent=self,
user_magics=m.UserMagics(self))
self.configurables.append(self.magics_manager)
# Expose as public API from the magics manager
self.register_magics = self.magics_manager.register
self.define_magic = self.magics_manager.define_magic
self.register_magics(m.AutoMagics, m.BasicMagics, m.CodeMagics,
m.ConfigMagics, m.DeprecatedMagics, m.DisplayMagics, m.ExecutionMagics,
m.ExtensionMagics, m.HistoryMagics, m.LoggingMagics,
m.NamespaceMagics, m.OSMagics, m.PylabMagics, m.ScriptMagics,
)
# Register Magic Aliases
mman = self.magics_manager
# FIXME: magic aliases should be defined by the Magics classes
# or in MagicsManager, not here
mman.register_alias('ed', 'edit')
mman.register_alias('hist', 'history')
mman.register_alias('rep', 'recall')
mman.register_alias('SVG', 'svg', 'cell')
mman.register_alias('HTML', 'html', 'cell')
mman.register_alias('file', 'writefile', 'cell')
# FIXME: Move the color initialization to the DisplayHook, which
# should be split into a prompt manager and displayhook. We probably
# even need a centralize colors management object.
self.magic('colors %s' % self.colors)
# Defined here so that it's included in the documentation
@functools.wraps(magic.MagicsManager.register_function)
def register_magic_function(self, func, magic_kind='line', magic_name=None):
self.magics_manager.register_function(func,
magic_kind=magic_kind, magic_name=magic_name)
def run_line_magic(self, magic_name, line):
"""Execute the given line magic.
Parameters
----------
magic_name : str
Name of the desired magic function, without '%' prefix.
line : str
The rest of the input line as a single string.
"""
fn = self.find_line_magic(magic_name)
if fn is None:
cm = self.find_cell_magic(magic_name)
etpl = "Line magic function `%%%s` not found%s."
extra = '' if cm is None else (' (But cell magic `%%%%%s` exists, '
'did you mean that instead?)' % magic_name )
error(etpl % (magic_name, extra))
else:
# Note: this is the distance in the stack to the user's frame.
# This will need to be updated if the internal calling logic gets
# refactored, or else we'll be expanding the wrong variables.
stack_depth = 2
magic_arg_s = self.var_expand(line, stack_depth)
# Put magic args in a list so we can call with f(*a) syntax
args = [magic_arg_s]
kwargs = {}
# Grab local namespace if we need it:
if getattr(fn, "needs_local_scope", False):
kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
with self.builtin_trap:
result = fn(*args,**kwargs)
return result
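    # An illustrative sketch of run_line_magic, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     ip.run_line_magic('timeit', 'sum(range(1000))')
    #     ip.run_line_magic('who', '')    # no arguments: pass an empty line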
def run_cell_magic(self, magic_name, line, cell):
"""Execute the given cell magic.
Parameters
----------
magic_name : str
Name of the desired magic function, without '%' prefix.
line : str
The rest of the first input line as a single string.
cell : str
The body of the cell as a (possibly multiline) string.
"""
fn = self.find_cell_magic(magic_name)
if fn is None:
lm = self.find_line_magic(magic_name)
etpl = "Cell magic `%%{0}` not found{1}."
extra = '' if lm is None else (' (But line magic `%{0}` exists, '
'did you mean that instead?)'.format(magic_name))
error(etpl.format(magic_name, extra))
elif cell == '':
message = '%%{0} is a cell magic, but the cell body is empty.'.format(magic_name)
if self.find_line_magic(magic_name) is not None:
message += ' Did you mean the line magic %{0} (single %)?'.format(magic_name)
raise UsageError(message)
else:
# Note: this is the distance in the stack to the user's frame.
# This will need to be updated if the internal calling logic gets
# refactored, or else we'll be expanding the wrong variables.
stack_depth = 2
magic_arg_s = self.var_expand(line, stack_depth)
with self.builtin_trap:
result = fn(magic_arg_s, cell)
return result
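    # An illustrative sketch of run_cell_magic, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     ip.run_cell_magic('timeit', '-n 10', 'x = sum(range(1000))')
    #     ip.run_cell_magic('writefile', 'out.txt', 'hello\n')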
def find_line_magic(self, magic_name):
"""Find and return a line magic by name.
Returns None if the magic isn't found."""
return self.magics_manager.magics['line'].get(magic_name)
def find_cell_magic(self, magic_name):
"""Find and return a cell magic by name.
Returns None if the magic isn't found."""
return self.magics_manager.magics['cell'].get(magic_name)
def find_magic(self, magic_name, magic_kind='line'):
"""Find and return a magic of the given type by name.
Returns None if the magic isn't found."""
return self.magics_manager.magics[magic_kind].get(magic_name)
def magic(self, arg_s):
"""DEPRECATED. Use run_line_magic() instead.
Call a magic function by name.
Input: a string containing the name of the magic function to call and
any additional arguments to be passed to the magic.
magic('name -opt foo bar') is equivalent to typing at the ipython
prompt:
In[1]: %name -opt foo bar
To call a magic without arguments, simply use magic('name').
This provides a proper Python function to call IPython's magics in any
valid Python code you can type at the interpreter, including loops and
compound statements.
"""
# TODO: should we issue a loud deprecation warning here?
magic_name, _, magic_arg_s = arg_s.partition(' ')
magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
return self.run_line_magic(magic_name, magic_arg_s)
#-------------------------------------------------------------------------
# Things related to macros
#-------------------------------------------------------------------------
def define_macro(self, name, themacro):
"""Define a new macro
Parameters
----------
name : str
The name of the macro.
themacro : str or Macro
The action to do upon invoking the macro. If a string, a new
Macro object is created by passing the string to it.
"""
from IPython.core import macro
if isinstance(themacro, string_types):
themacro = macro.Macro(themacro)
if not isinstance(themacro, macro.Macro):
raise ValueError('A macro must be a string or a Macro instance.')
self.user_ns[name] = themacro
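    # An illustrative sketch of define_macro, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     ip.define_macro('setup', 'import os\nimport sys\n')
    #     # entering ``setup`` at the prompt then replays those two lines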
#-------------------------------------------------------------------------
# Things related to the running of system commands
#-------------------------------------------------------------------------
def system_piped(self, cmd):
"""Call the given cmd in a subprocess, piping stdout/err
Parameters
----------
cmd : str
            Command to execute (can not end in '&', as background processes are
            not supported). Should not be a command that expects input
            other than simple text.
"""
if cmd.rstrip().endswith('&'):
# this is *far* from a rigorous test
# We do not support backgrounding processes because we either use
# pexpect or pipes to read from. Users can always just call
# os.system() or use ip.system=ip.system_raw
# if they really want a background process.
raise OSError("Background processes not supported.")
# we explicitly do NOT return the subprocess status code, because
# a non-None value would trigger :func:`sys.displayhook` calls.
# Instead, we store the exit_code in user_ns.
self.user_ns['_exit_code'] = system(self.var_expand(cmd, depth=1))
def system_raw(self, cmd):
"""Call the given cmd in a subprocess using os.system on Windows or
subprocess.call using the system shell on other platforms.
Parameters
----------
cmd : str
Command to execute.
"""
cmd = self.var_expand(cmd, depth=1)
# protect os.system from UNC paths on Windows, which it can't handle:
if sys.platform == 'win32':
from IPython.utils._process_win32 import AvoidUNCPath
with AvoidUNCPath() as path:
if path is not None:
cmd = '"pushd %s &&"%s' % (path, cmd)
cmd = py3compat.unicode_to_str(cmd)
try:
ec = os.system(cmd)
except KeyboardInterrupt:
self.write_err('\n' + self.get_exception_only())
ec = -2
else:
cmd = py3compat.unicode_to_str(cmd)
# For posix the result of the subprocess.call() below is an exit
# code, which by convention is zero for success, positive for
# program failure. Exit codes above 128 are reserved for signals,
# and the formula for converting a signal to an exit code is usually
# signal_number+128. To more easily differentiate between exit
# codes and signals, ipython uses negative numbers. For instance
# since control-c is signal 2 but exit code 130, ipython's
# _exit_code variable will read -2. Note that some shells like
# csh and fish don't follow sh/bash conventions for exit codes.
executable = os.environ.get('SHELL', None)
try:
# Use env shell instead of default /bin/sh
ec = subprocess.call(cmd, shell=True, executable=executable)
except KeyboardInterrupt:
# intercept control-C; a long traceback is not useful here
self.write_err('\n' + self.get_exception_only())
ec = 130
if ec > 128:
ec = -(ec - 128)
# We explicitly do NOT return the subprocess status code, because
# a non-None value would trigger :func:`sys.displayhook` calls.
# Instead, we store the exit_code in user_ns. Note the semantics
        # of _exit_code: for control-c, _exit_code == -signal.SIGINT,
# but raising SystemExit(_exit_code) will give status 254!
self.user_ns['_exit_code'] = ec
# use piped system by default, because it is better behaved
system = system_piped
def getoutput(self, cmd, split=True, depth=0):
"""Get output (possibly including stderr) from a subprocess.
Parameters
----------
cmd : str
            Command to execute (can not end in '&', as background processes are
            not supported).
split : bool, optional
If True, split the output into an IPython SList. Otherwise, an
IPython LSString is returned. These are objects similar to normal
lists and strings, with a few convenience attributes for easier
manipulation of line-based output. You can use '?' on them for
details.
depth : int, optional
How many frames above the caller are the local variables which should
be expanded in the command string? The default (0) assumes that the
expansion variables are in the stack frame calling this function.
"""
if cmd.rstrip().endswith('&'):
# this is *far* from a rigorous test
raise OSError("Background processes not supported.")
out = getoutput(self.var_expand(cmd, depth=depth+1))
if split:
out = SList(out.splitlines())
else:
out = LSString(out)
return out
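    # An illustrative sketch of getoutput, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     files = ip.getoutput('ls')               # SList: files.grep, files.fields, ...
    #     text = ip.getoutput('ls', split=False)   # a single LSString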
#-------------------------------------------------------------------------
# Things related to aliases
#-------------------------------------------------------------------------
def init_alias(self):
self.alias_manager = AliasManager(shell=self, parent=self)
self.configurables.append(self.alias_manager)
#-------------------------------------------------------------------------
# Things related to extensions
#-------------------------------------------------------------------------
def init_extension_manager(self):
self.extension_manager = ExtensionManager(shell=self, parent=self)
self.configurables.append(self.extension_manager)
#-------------------------------------------------------------------------
# Things related to payloads
#-------------------------------------------------------------------------
def init_payload(self):
self.payload_manager = PayloadManager(parent=self)
self.configurables.append(self.payload_manager)
#-------------------------------------------------------------------------
# Things related to the prefilter
#-------------------------------------------------------------------------
def init_prefilter(self):
self.prefilter_manager = PrefilterManager(shell=self, parent=self)
self.configurables.append(self.prefilter_manager)
# Ultimately this will be refactored in the new interpreter code, but
# for now, we should expose the main prefilter method (there's legacy
# code out there that may rely on this).
self.prefilter = self.prefilter_manager.prefilter_lines
def auto_rewrite_input(self, cmd):
"""Print to the screen the rewritten form of the user's command.
This shows visual feedback by rewriting input lines that cause
automatic calling to kick in, like::
/f x
into::
------> f(x)
after the user's input prompt. This helps the user understand that the
input line was transformed automatically by IPython.
"""
if not self.show_rewritten_input:
return
rw = self.prompt_manager.render('rewrite') + cmd
try:
# plain ascii works better w/ pyreadline, on some machines, so
# we use it and only print uncolored rewrite if we have unicode
rw = str(rw)
print(rw, file=io.stdout)
except UnicodeEncodeError:
print("------> " + cmd)
#-------------------------------------------------------------------------
# Things related to extracting values/expressions from kernel and user_ns
#-------------------------------------------------------------------------
def _user_obj_error(self):
"""return simple exception dict
for use in user_expressions
"""
etype, evalue, tb = self._get_exc_info()
stb = self.InteractiveTB.get_exception_only(etype, evalue)
exc_info = {
u'status' : 'error',
u'traceback' : stb,
u'ename' : unicode_type(etype.__name__),
u'evalue' : py3compat.safe_unicode(evalue),
}
return exc_info
def _format_user_obj(self, obj):
"""format a user object to display dict
for use in user_expressions
"""
data, md = self.display_formatter.format(obj)
value = {
'status' : 'ok',
'data' : data,
'metadata' : md,
}
return value
def user_expressions(self, expressions):
"""Evaluate a dict of expressions in the user's namespace.
Parameters
----------
expressions : dict
A dict with string keys and string values. The expression values
should be valid Python expressions, each of which will be evaluated
in the user namespace.
Returns
-------
A dict, keyed like the input expressions dict, with the rich mime-typed
display_data of each value.
"""
out = {}
user_ns = self.user_ns
global_ns = self.user_global_ns
for key, expr in iteritems(expressions):
try:
value = self._format_user_obj(eval(expr, global_ns, user_ns))
except:
value = self._user_obj_error()
out[key] = value
return out
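    # An illustrative sketch of user_expressions, assuming ``ip`` is the running
    # InteractiveShell instance and ``x`` already exists in the user namespace:
    #
    #     out = ip.user_expressions({'double': 'x * 2', 'typ': 'type(x).__name__'})
    #     # out['double'] -> {'status': 'ok', 'data': {...}, 'metadata': {...}}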
#-------------------------------------------------------------------------
# Things related to the running of code
#-------------------------------------------------------------------------
def ex(self, cmd):
"""Execute a normal python statement in user namespace."""
with self.builtin_trap:
exec(cmd, self.user_global_ns, self.user_ns)
def ev(self, expr):
"""Evaluate python expression expr in user namespace.
Returns the result of evaluation
"""
with self.builtin_trap:
return eval(expr, self.user_global_ns, self.user_ns)
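    # An illustrative sketch of ex/ev, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     ip.ex('counter = 0')      # statement: executed, no return value
    #     ip.ev('counter + 1')      # expression: returns 1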
def safe_execfile(self, fname, *where, **kw):
"""A safe version of the builtin execfile().
This version will never throw an exception, but instead print
helpful error messages to the screen. This only works on pure
Python files with the .py extension.
Parameters
----------
fname : string
The name of the file to be executed.
where : tuple
One or two namespaces, passed to execfile() as (globals,locals).
If only one is given, it is passed as both.
exit_ignore : bool (False)
If True, then silence SystemExit for non-zero status (it is always
silenced for zero status, as it is so common).
raise_exceptions : bool (False)
If True raise exceptions everywhere. Meant for testing.
shell_futures : bool (False)
If True, the code will share future statements with the interactive
shell. It will both be affected by previous __future__ imports, and
any __future__ imports in the code will affect the shell. If False,
__future__ imports are not shared in either direction.
"""
kw.setdefault('exit_ignore', False)
kw.setdefault('raise_exceptions', False)
kw.setdefault('shell_futures', False)
fname = os.path.abspath(os.path.expanduser(fname))
# Make sure we can open the file
try:
with open(fname):
pass
except:
warn('Could not open file <%s> for safe execution.' % fname)
return
# Find things also in current directory. This is needed to mimic the
# behavior of running a script from the system command line, where
# Python inserts the script's directory into sys.path
dname = os.path.dirname(fname)
with prepended_to_syspath(dname):
try:
glob, loc = (where + (None, ))[:2]
py3compat.execfile(
fname, glob, loc,
self.compile if kw['shell_futures'] else None)
except SystemExit as status:
# If the call was made with 0 or None exit status (sys.exit(0)
# or sys.exit() ), don't bother showing a traceback, as both of
# these are considered normal by the OS:
# > python -c'import sys;sys.exit(0)'; echo $?
# 0
# > python -c'import sys;sys.exit()'; echo $?
# 0
# For other exit status, we show the exception unless
# explicitly silenced, but only in short form.
if status.code:
if kw['raise_exceptions']:
raise
if not kw['exit_ignore']:
self.showtraceback(exception_only=True)
except:
if kw['raise_exceptions']:
raise
# tb offset is 2 because we wrap execfile
self.showtraceback(tb_offset=2)
def safe_execfile_ipy(self, fname, shell_futures=False, raise_exceptions=False):
"""Like safe_execfile, but for .ipy or .ipynb files with IPython syntax.
Parameters
----------
fname : str
The name of the file to execute. The filename must have a
.ipy or .ipynb extension.
shell_futures : bool (False)
If True, the code will share future statements with the interactive
shell. It will both be affected by previous __future__ imports, and
any __future__ imports in the code will affect the shell. If False,
__future__ imports are not shared in either direction.
raise_exceptions : bool (False)
If True raise exceptions everywhere. Meant for testing.
"""
fname = os.path.abspath(os.path.expanduser(fname))
# Make sure we can open the file
try:
with open(fname):
pass
except:
warn('Could not open file <%s> for safe execution.' % fname)
return
# Find things also in current directory. This is needed to mimic the
# behavior of running a script from the system command line, where
# Python inserts the script's directory into sys.path
dname = os.path.dirname(fname)
def get_cells():
"""generator for sequence of code blocks to run"""
if fname.endswith('.ipynb'):
from nbformat import read
with io_open(fname) as f:
nb = read(f, as_version=4)
if not nb.cells:
return
for cell in nb.cells:
if cell.cell_type == 'code':
yield cell.source
else:
with open(fname) as f:
yield f.read()
with prepended_to_syspath(dname):
try:
for cell in get_cells():
result = self.run_cell(cell, silent=True, shell_futures=shell_futures)
if raise_exceptions:
result.raise_error()
elif not result.success:
break
except:
if raise_exceptions:
raise
self.showtraceback()
warn('Unknown failure executing file: <%s>' % fname)
def safe_run_module(self, mod_name, where):
"""A safe version of runpy.run_module().
This version will never throw an exception, but instead print
helpful error messages to the screen.
`SystemExit` exceptions with status code 0 or None are ignored.
Parameters
----------
mod_name : string
The name of the module to be executed.
where : dict
The globals namespace.
"""
try:
try:
where.update(
runpy.run_module(str(mod_name), run_name="__main__",
alter_sys=True)
)
except SystemExit as status:
if status.code:
raise
except:
self.showtraceback()
warn('Unknown failure executing module: <%s>' % mod_name)
def run_cell(self, raw_cell, store_history=False, silent=False, shell_futures=True):
"""Run a complete IPython cell.
Parameters
----------
raw_cell : str
The code (including IPython code such as %magic functions) to run.
store_history : bool
If True, the raw and translated cell will be stored in IPython's
history. For user code calling back into IPython's machinery, this
should be set to False.
silent : bool
            If True, avoid side-effects, such as implicit displayhooks and
            logging. silent=True forces store_history=False.
shell_futures : bool
If True, the code will share future statements with the interactive
shell. It will both be affected by previous __future__ imports, and
any __future__ imports in the code will affect the shell. If False,
__future__ imports are not shared in either direction.
Returns
-------
result : :class:`ExecutionResult`
"""
result = ExecutionResult()
if (not raw_cell) or raw_cell.isspace():
return result
if silent:
store_history = False
if store_history:
result.execution_count = self.execution_count
def error_before_exec(value):
result.error_before_exec = value
return result
self.events.trigger('pre_execute')
if not silent:
self.events.trigger('pre_run_cell')
# If any of our input transformation (input_transformer_manager or
# prefilter_manager) raises an exception, we store it in this variable
# so that we can display the error after logging the input and storing
# it in the history.
preprocessing_exc_tuple = None
try:
# Static input transformations
cell = self.input_transformer_manager.transform_cell(raw_cell)
except SyntaxError:
preprocessing_exc_tuple = sys.exc_info()
cell = raw_cell # cell has to exist so it can be stored/logged
else:
if len(cell.splitlines()) == 1:
# Dynamic transformations - only applied for single line commands
with self.builtin_trap:
try:
# use prefilter_lines to handle trailing newlines
# restore trailing newline for ast.parse
cell = self.prefilter_manager.prefilter_lines(cell) + '\n'
except Exception:
# don't allow prefilter errors to crash IPython
preprocessing_exc_tuple = sys.exc_info()
# Store raw and processed history
if store_history:
self.history_manager.store_inputs(self.execution_count,
cell, raw_cell)
if not silent:
self.logger.log(cell, raw_cell)
# Display the exception if input processing failed.
if preprocessing_exc_tuple is not None:
self.showtraceback(preprocessing_exc_tuple)
if store_history:
self.execution_count += 1
return error_before_exec(preprocessing_exc_tuple[2])
# Our own compiler remembers the __future__ environment. If we want to
# run code with a separate __future__ environment, use the default
# compiler
compiler = self.compile if shell_futures else CachingCompiler()
with self.builtin_trap:
cell_name = self.compile.cache(cell, self.execution_count)
with self.display_trap:
# Compile to bytecode
try:
code_ast = compiler.ast_parse(cell, filename=cell_name)
except IndentationError as e:
self.showindentationerror()
if store_history:
self.execution_count += 1
return error_before_exec(e)
except (OverflowError, SyntaxError, ValueError, TypeError,
MemoryError) as e:
self.showsyntaxerror()
if store_history:
self.execution_count += 1
return error_before_exec(e)
# Apply AST transformations
try:
code_ast = self.transform_ast(code_ast)
except InputRejected as e:
self.showtraceback()
if store_history:
self.execution_count += 1
return error_before_exec(e)
# Give the displayhook a reference to our ExecutionResult so it
# can fill in the output value.
self.displayhook.exec_result = result
# Execute the user code
interactivity = "none" if silent else self.ast_node_interactivity
self.run_ast_nodes(code_ast.body, cell_name,
interactivity=interactivity, compiler=compiler, result=result)
# Reset this so later displayed values do not modify the
# ExecutionResult
self.displayhook.exec_result = None
self.events.trigger('post_execute')
if not silent:
self.events.trigger('post_run_cell')
if store_history:
# Write output to the database. Does nothing unless
# history output logging is enabled.
self.history_manager.store_output(self.execution_count)
# Each cell is a *single* input, regardless of how many lines it has
self.execution_count += 1
return result
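    # An illustrative sketch of run_cell, assuming ``ip`` is the running
    # InteractiveShell instance:
    #
    #     result = ip.run_cell('y = 2 ** 10\ny')
    #     # result.success is True, result.error_in_exec is None, and the value
    #     # of the final expression has gone through the displayhook.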
def transform_ast(self, node):
"""Apply the AST transformations from self.ast_transformers
Parameters
----------
node : ast.Node
The root node to be transformed. Typically called with the ast.Module
produced by parsing user input.
Returns
-------
An ast.Node corresponding to the node it was called with. Note that it
may also modify the passed object, so don't rely on references to the
original AST.
"""
for transformer in self.ast_transformers:
try:
node = transformer.visit(node)
except InputRejected:
# User-supplied AST transformers can reject an input by raising
# an InputRejected. Short-circuit in this case so that we
# don't unregister the transform.
raise
except Exception:
warn("AST transformer %r threw an error. It will be unregistered." % transformer)
self.ast_transformers.remove(transformer)
if self.ast_transformers:
ast.fix_missing_locations(node)
return node
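    # An illustrative AST transformer -- a sketch; registering a NodeTransformer
    # on ``ip.ast_transformers`` is the supported hook, while the transformer
    # body itself is only an example:
    #
    #     import ast
    #
    #     class IntDoubler(ast.NodeTransformer):
    #         def visit_Num(self, node):   # numeric literal node (pre-3.8 AST)
    #             return ast.copy_location(ast.Num(n=node.n * 2), node)
    #
    #     ip.ast_transformers.append(IntDoubler())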
def run_ast_nodes(self, nodelist, cell_name, interactivity='last_expr',
compiler=compile, result=None):
"""Run a sequence of AST nodes. The execution mode depends on the
interactivity parameter.
Parameters
----------
nodelist : list
A sequence of AST nodes to run.
cell_name : str
Will be passed to the compiler as the filename of the cell. Typically
the value returned by ip.compile.cache(cell).
interactivity : str
'all', 'last', 'last_expr' or 'none', specifying which nodes should be
run interactively (displaying output from expressions). 'last_expr'
            will run the last node interactively only if it is an expression (i.e.
            expressions in loops or other blocks are not displayed). Other values
for this parameter will raise a ValueError.
compiler : callable
A function with the same interface as the built-in compile(), to turn
the AST nodes into code objects. Default is the built-in compile().
result : ExecutionResult, optional
An object to store exceptions that occur during execution.
Returns
-------
True if an exception occurred while running code, False if it finished
running.
"""
if not nodelist:
return
if interactivity == 'last_expr':
if isinstance(nodelist[-1], ast.Expr):
interactivity = "last"
else:
interactivity = "none"
if interactivity == 'none':
to_run_exec, to_run_interactive = nodelist, []
elif interactivity == 'last':
to_run_exec, to_run_interactive = nodelist[:-1], nodelist[-1:]
elif interactivity == 'all':
to_run_exec, to_run_interactive = [], nodelist
else:
raise ValueError("Interactivity was %r" % interactivity)
try:
for i, node in enumerate(to_run_exec):
mod = ast.Module([node])
code = compiler(mod, cell_name, "exec")
if self.run_code(code, result):
return True
for i, node in enumerate(to_run_interactive):
mod = ast.Interactive([node])
code = compiler(mod, cell_name, "single")
if self.run_code(code, result):
return True
# Flush softspace
if softspace(sys.stdout, 0):
print()
except:
# It's possible to have exceptions raised here, typically by
# compilation of odd code (such as a naked 'return' outside a
# function) that did parse but isn't valid. Typically the exception
# is a SyntaxError, but it's safest just to catch anything and show
# the user a traceback.
# We do only one try/except outside the loop to minimize the impact
# on runtime, and also because if any node in the node list is
# broken, we should stop execution completely.
if result:
result.error_before_exec = sys.exc_info()[1]
self.showtraceback()
return True
return False
def run_code(self, code_obj, result=None):
"""Execute a code object.
When an exception occurs, self.showtraceback() is called to display a
traceback.
Parameters
----------
code_obj : code object
A compiled code object, to be executed
result : ExecutionResult, optional
An object to store exceptions that occur during execution.
Returns
-------
False : successful execution.
True : an error occurred.
"""
# Set our own excepthook in case the user code tries to call it
# directly, so that the IPython crash handler doesn't get triggered
old_excepthook, sys.excepthook = sys.excepthook, self.excepthook
# we save the original sys.excepthook in the instance, in case config
# code (such as magics) needs access to it.
self.sys_excepthook = old_excepthook
outflag = 1 # happens in more places, so it's easier as default
try:
try:
self.hooks.pre_run_code_hook()
#rprint('Running code', repr(code_obj)) # dbg
exec(code_obj, self.user_global_ns, self.user_ns)
finally:
# Reset our crash handler in place
sys.excepthook = old_excepthook
except SystemExit as e:
if result is not None:
result.error_in_exec = e
self.showtraceback(exception_only=True)
warn("To exit: use 'exit', 'quit', or Ctrl-D.", level=1)
except self.custom_exceptions:
etype, value, tb = sys.exc_info()
if result is not None:
result.error_in_exec = value
self.CustomTB(etype, value, tb)
except:
if result is not None:
result.error_in_exec = sys.exc_info()[1]
self.showtraceback()
else:
outflag = 0
return outflag
# For backwards compatibility
runcode = run_code
#-------------------------------------------------------------------------
# Things related to GUI support and pylab
#-------------------------------------------------------------------------
def enable_gui(self, gui=None):
raise NotImplementedError('Implement enable_gui in a subclass')
def enable_matplotlib(self, gui=None):
"""Enable interactive matplotlib and inline figure support.
This takes the following steps:
1. select the appropriate eventloop and matplotlib backend
2. set up matplotlib for interactive use with that backend
3. configure formatters for inline figure display
4. enable the selected gui eventloop
Parameters
----------
gui : optional, string
If given, dictates the choice of matplotlib GUI backend to use
(should be one of IPython's supported backends, 'qt', 'osx', 'tk',
'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
matplotlib (as dictated by the matplotlib build-time options plus the
user's matplotlibrc configuration file). Note that not all backends
make sense in all contexts, for example a terminal ipython can't
display figures inline.
"""
from IPython.core import pylabtools as pt
gui, backend = pt.find_gui_and_backend(gui, self.pylab_gui_select)
if gui != 'inline':
# If we have our first gui selection, store it
if self.pylab_gui_select is None:
self.pylab_gui_select = gui
# Otherwise if they are different
elif gui != self.pylab_gui_select:
print ('Warning: Cannot change to a different GUI toolkit: %s.'
' Using %s instead.' % (gui, self.pylab_gui_select))
gui, backend = pt.find_gui_and_backend(self.pylab_gui_select)
pt.activate_matplotlib(backend)
pt.configure_inline_support(self, backend)
# Now we must activate the gui pylab wants to use, and fix %run to take
# plot updates into account
self.enable_gui(gui)
self.magics_manager.registry['ExecutionMagics'].default_runner = \
pt.mpl_runner(self.safe_execfile)
return gui, backend
def enable_pylab(self, gui=None, import_all=True, welcome_message=False):
"""Activate pylab support at runtime.
This turns on support for matplotlib, preloads into the interactive
namespace all of numpy and pylab, and configures IPython to correctly
interact with the GUI event loop. The GUI backend to be used can be
optionally selected with the optional ``gui`` argument.
This method only adds preloading the namespace to InteractiveShell.enable_matplotlib.
Parameters
----------
gui : optional, string
If given, dictates the choice of matplotlib GUI backend to use
(should be one of IPython's supported backends, 'qt', 'osx', 'tk',
'gtk', 'wx' or 'inline'), otherwise we use the default chosen by
matplotlib (as dictated by the matplotlib build-time options plus the
user's matplotlibrc configuration file). Note that not all backends
make sense in all contexts, for example a terminal ipython can't
display figures inline.
import_all : optional, bool, default: True
Whether to do `from numpy import *` and `from pylab import *`
in addition to module imports.
welcome_message : deprecated
This argument is ignored, no welcome message will be displayed.
"""
from IPython.core.pylabtools import import_pylab
gui, backend = self.enable_matplotlib(gui)
# We want to prevent the loading of pylab to pollute the user's
# namespace as shown by the %who* magics, so we execute the activation
# code in an empty namespace, and we update *both* user_ns and
# user_ns_hidden with this information.
ns = {}
import_pylab(ns, import_all)
# warn about clobbered names
ignored = {"__builtins__"}
both = set(ns).intersection(self.user_ns).difference(ignored)
clobbered = [ name for name in both if self.user_ns[name] is not ns[name] ]
self.user_ns.update(ns)
self.user_ns_hidden.update(ns)
return gui, backend, clobbered
#-------------------------------------------------------------------------
# Utilities
#-------------------------------------------------------------------------
def var_expand(self, cmd, depth=0, formatter=DollarFormatter()):
"""Expand python variables in a string.
The depth argument indicates how many frames above the caller should
be walked to look for the local namespace where to expand variables.
The global namespace for expansion is always the user's interactive
namespace.
"""
ns = self.user_ns.copy()
try:
frame = sys._getframe(depth+1)
except ValueError:
# This is thrown if there aren't that many frames on the stack,
# e.g. if a script called run_line_magic() directly.
pass
else:
ns.update(frame.f_locals)
try:
# We have to use .vformat() here, because 'self' is a valid and common
# name, and expanding **ns for .format() would make it collide with
# the 'self' argument of the method.
cmd = formatter.vformat(cmd, args=[], kwargs=ns)
except Exception:
# if formatter couldn't format, just let it go untransformed
pass
return cmd
def mktempfile(self, data=None, prefix='ipython_edit_'):
"""Make a new tempfile and return its filename.
This makes a call to tempfile.mkstemp (created in a tempfile.mkdtemp),
but it registers the created filename internally so ipython cleans it up
at exit time.
Optional inputs:
- data(None): if data is given, it gets written out to the temp file
immediately, and the file is closed again."""
dirname = tempfile.mkdtemp(prefix=prefix)
self.tempdirs.append(dirname)
handle, filename = tempfile.mkstemp('.py', prefix, dir=dirname)
os.close(handle) # On Windows, there can only be one open handle on a file
self.tempfiles.append(filename)
if data:
tmp_file = open(filename,'w')
tmp_file.write(data)
tmp_file.close()
return filename
# TODO: This should be removed when Term is refactored.
def write(self,data):
"""Write a string to the default output"""
io.stdout.write(data)
# TODO: This should be removed when Term is refactored.
def write_err(self,data):
"""Write a string to the default error output"""
io.stderr.write(data)
def ask_yes_no(self, prompt, default=None, interrupt=None):
if self.quiet:
return True
return ask_yes_no(prompt,default,interrupt)
def show_usage(self):
"""Show a usage message"""
page.page(IPython.core.usage.interactive_usage)
def extract_input_lines(self, range_str, raw=False):
"""Return as a string a set of input history slices.
Parameters
----------
range_str : string
The set of slices is given as a string, like "~5/6-~4/2 4:8 9",
since this function is for use by magic functions which get their
arguments as strings. The number before the / is the session
number: ~n goes n back from the current session.
raw : bool, optional
By default, the processed input is used. If this is true, the raw
input history is used instead.
Notes
-----
Slices can be described with two notations:
* ``N:M`` -> standard python form, means including items N...(M-1).
* ``N-M`` -> include items N..M (closed endpoint).
"""
lines = self.history_manager.get_range_by_str(range_str, raw=raw)
return "\n".join(x for _, _, x in lines)
def find_user_code(self, target, raw=True, py_only=False, skip_encoding_cookie=True, search_ns=False):
"""Get a code string from history, file, url, or a string or macro.
This is mainly used by magic functions.
Parameters
----------
target : str
A string specifying code to retrieve. This will be tried respectively
as: ranges of input history (see %history for syntax), url,
corresponding .py file, filename, or an expression evaluating to a
string or Macro in the user namespace.
raw : bool
If true (default), retrieve raw history. Has no effect on the other
retrieval mechanisms.
py_only : bool (default False)
Only try to fetch python code, do not try alternative methods to decode file
if unicode fails.
Returns
-------
A string of code.
ValueError is raised if nothing is found, and TypeError if it evaluates
to an object of another type. In each case, .args[0] is a printable
message.
"""
code = self.extract_input_lines(target, raw=raw) # Grab history
if code:
return code
utarget = unquote_filename(target)
try:
if utarget.startswith(('http://', 'https://')):
return openpy.read_py_url(utarget, skip_encoding_cookie=skip_encoding_cookie)
except UnicodeDecodeError:
if not py_only :
# Deferred import
try:
from urllib.request import urlopen # Py3
except ImportError:
from urllib import urlopen
response = urlopen(target)
return response.read().decode('latin1')
raise ValueError(("'%s' seem to be unreadable.") % utarget)
potential_target = [target]
try :
potential_target.insert(0,get_py_filename(target))
except IOError:
pass
for tgt in potential_target :
if os.path.isfile(tgt): # Read file
try :
return openpy.read_py_file(tgt, skip_encoding_cookie=skip_encoding_cookie)
except UnicodeDecodeError :
if not py_only :
with io_open(tgt,'r', encoding='latin1') as f :
return f.read()
raise ValueError(("'%s' seem to be unreadable.") % target)
elif os.path.isdir(os.path.expanduser(tgt)):
raise ValueError("'%s' is a directory, not a regular file." % target)
if search_ns:
# Inspect namespace to load object source
object_info = self.object_inspect(target, detail_level=1)
if object_info['found'] and object_info['source']:
return object_info['source']
try: # User namespace
codeobj = eval(target, self.user_ns)
except Exception:
raise ValueError(("'%s' was not found in history, as a file, url, "
"nor in the user namespace.") % target)
if isinstance(codeobj, string_types):
return codeobj
elif isinstance(codeobj, Macro):
return codeobj.value
raise TypeError("%s is neither a string nor a macro." % target,
codeobj)
#-------------------------------------------------------------------------
# Things related to IPython exiting
#-------------------------------------------------------------------------
def atexit_operations(self):
"""This will be executed at the time of exit.
Cleanup operations and saving of persistent data that is done
unconditionally by IPython should be performed here.
For things that may depend on startup flags or platform specifics (such
as having readline or not), register a separate atexit function in the
code that has the appropriate information, rather than trying to
        clutter this method.
"""
# Close the history session (this stores the end time and line count)
# this must be *before* the tempfile cleanup, in case of temporary
# history db
self.history_manager.end_session()
# Cleanup all tempfiles and folders left around
for tfile in self.tempfiles:
try:
os.unlink(tfile)
except OSError:
pass
for tdir in self.tempdirs:
try:
os.rmdir(tdir)
except OSError:
pass
# Clear all user namespaces to release all references cleanly.
self.reset(new_session=False)
# Run user hooks
self.hooks.shutdown_hook()
def cleanup(self):
self.restore_sys_module_state()
class InteractiveShellABC(with_metaclass(abc.ABCMeta, object)):
"""An abstract base class for InteractiveShell."""
InteractiveShellABC.register(InteractiveShell)
| gpl-2.0 |