repo_name | path | copies | size | content | license
---|---|---|---|---|---
Garrett-R/scikit-learn | sklearn/metrics/cluster/tests/test_unsupervised.py | 26 | 2870 | import numpy as np
from scipy.sparse import csr_matrix
from .... import datasets
from ..unsupervised import silhouette_score
from ... import pairwise_distances
from sklearn.utils.testing import assert_false, assert_almost_equal
from sklearn.utils.testing import assert_raises_regexp
def test_silhouette():
"""Tests the Silhouette Coefficient. """
dataset = datasets.load_iris()
X = dataset.data
y = dataset.target
D = pairwise_distances(X, metric='euclidean')
# Given that the actual labels are used, we can assume that S would be
# positive.
silhouette = silhouette_score(D, y, metric='precomputed')
assert(silhouette > 0)
# Test without calculating D
silhouette_metric = silhouette_score(X, y, metric='euclidean')
assert_almost_equal(silhouette, silhouette_metric)
# Test with sampling
silhouette = silhouette_score(D, y, metric='precomputed',
sample_size=int(X.shape[0] / 2),
random_state=0)
silhouette_metric = silhouette_score(X, y, metric='euclidean',
sample_size=int(X.shape[0] / 2),
random_state=0)
assert(silhouette > 0)
assert(silhouette_metric > 0)
assert_almost_equal(silhouette_metric, silhouette)
# Test with sparse X
X_sparse = csr_matrix(X)
D = pairwise_distances(X_sparse, metric='euclidean')
silhouette = silhouette_score(D, y, metric='precomputed')
assert(silhouette > 0)
def test_no_nan():
"""Assert Silhouette Coefficient != nan when there is 1 sample in a class.
This tests for the condition that caused issue 960.
"""
# Note that there is only one sample in cluster 0. This used to cause the
# silhouette_score to return nan (see bug #960).
labels = np.array([1, 0, 1, 1, 1])
# The distance matrix doesn't actually matter.
D = np.random.RandomState(0).rand(len(labels), len(labels))
silhouette = silhouette_score(D, labels, metric='precomputed')
assert_false(np.isnan(silhouette))
def test_correct_labelsize():
""" Assert 2 <= n_labels <= nsample -1 """
dataset = datasets.load_iris()
X = dataset.data
# n_labels = n_samples
y = np.arange(X.shape[0])
assert_raises_regexp(ValueError,
"Number of labels is %d "
"but should be more than 2"
"and less than n_samples - 1" % len(np.unique(y)),
silhouette_score, X, y)
# n_labels = 1
y = np.zeros(X.shape[0])
assert_raises_regexp(ValueError,
"Number of labels is %d "
"but should be more than 2"
"and less than n_samples - 1" % len(np.unique(y)),
silhouette_score, X, y)
| bsd-3-clause |
jniediek/mne-python | tutorials/plot_background_filtering.py | 4 | 42317 | # -*- coding: utf-8 -*-
r"""
.. _tut_background_filtering:
===================================
Background information on filtering
===================================
Here we give some background information on filtering in general,
and how it is done in MNE-Python in particular.
Recommended reading for practical applications of digital
filter design can be found in Parks & Burrus [1]_ and
Ifeachor and Jervis [2]_, and for filtering in an
M/EEG context we recommend reading Widmann *et al.* 2015 [7]_.
To see how to use the default filters in MNE-Python on actual data, see
the :ref:`tut_artifacts_filter` tutorial.
.. contents::
:local:
Problem statement
=================
The practical issues with filtering electrophysiological data are covered
well by Widmann *et al.* in [7]_, in a follow-up to an article where they
conclude with this statement:
Filtering can result in considerable distortions of the time course
(and amplitude) of a signal as demonstrated by VanRullen (2011) [[3]_].
Thus, filtering should not be used lightly. However, if effects of
filtering are cautiously considered and filter artifacts are minimized,
a valid interpretation of the temporal dynamics of filtered
electrophysiological data is possible and signals missed otherwise
can be detected with filtering.
In other words, filtering can increase SNR, but if it is not used carefully,
it can distort data. Here we hope to cover some filtering basics so
users can better understand filtering tradeoffs, and why MNE-Python has
chosen particular defaults.
.. _tut_filtering_basics:
Filtering basics
================
Let's get some of the basic math down. In the frequency domain, digital
filters have a transfer function that is given by:
.. math::
    H(z) &= \frac{b_0 + b_1 z^{-1} + b_2 z^{-2} + ... + b_M z^{-M}}
                 {1 + a_1 z^{-1} + a_2 z^{-2} + ... + a_N z^{-N}} \\
         &= \frac{\sum_{k=0}^M b_k z^{-k}}{1 + \sum_{k=1}^N a_k z^{-k}}
In the time domain, the numerator coefficients :math:`b_k` and denominator
coefficients :math:`a_k` can be used to obtain our output data
:math:`y(n)` in terms of our input data :math:`x(n)` as:
.. math::
:label: summations
    y(n) &= b_0 x(n) + b_1 x(n-1) + ... + b_M x(n-M)
            - a_1 y(n-1) - a_2 y(n - 2) - ... - a_N y(n - N)\\
         &= \sum_{k=0}^M b_k x(n-k) - \sum_{k=1}^N a_k y(n-k)
In other words, the output at time :math:`n` is determined by a sum over:
1. The numerator coefficients :math:`b_k`, which get multiplied by
the previous input :math:`x(n-k)` values, and
2. The denominator coefficients :math:`a_k`, which get multiplied by
the previous output :math:`y(n-k)` values.
Note that these summations in :eq:`summations` correspond nicely to
(1) a weighted `moving average`_ and (2) an autoregression_.
Filters are broken into two classes: FIR_ (finite impulse response) and
IIR_ (infinite impulse response) based on these coefficients.
FIR filters use a finite number of numerator
coefficients :math:`b_k` (:math:`a_k = 0` for all :math:`k \ge 1`), and thus
each output value of :math:`y(n)` depends only on the current and :math:`M`
previous input values.
IIR filters depend on the previous input and output values, and thus can have
effectively infinite impulse responses.
As outlined in [1]_, FIR and IIR have different tradeoffs:
* A causal FIR filter can be linear-phase -- i.e., the same time delay
across all frequencies -- whereas a causal IIR filter cannot. The phase
and group delay characteristics are also usually better for FIR filters.
* IIR filters can generally have a steeper cutoff than an FIR filter of
equivalent order.
* IIR filters are generally less numerically stable, in part due to
  error that accumulates in their recursive calculations.
In MNE-Python we default to using FIR filtering. As noted in Widmann *et al.*
2015 [7]_:
Despite IIR filters often being considered as computationally more
efficient, they are recommended only when high throughput and sharp
    cutoffs are required (Ifeachor and Jervis, 2002 [2]_, p. 321),
...FIR filters are easier to control, are always stable, have a
well-defined passband, can be corrected to zero-phase without
additional computations, and can be converted to minimum-phase.
We therefore recommend FIR filters for most purposes in
electrophysiological data analysis.
When designing a filter (FIR or IIR), there are always tradeoffs that
need to be considered, including but not limited to:
1. Ripple in the pass-band
2. Attenuation of the stop-band
3. Steepness of roll-off
4. Filter order (i.e., length for FIR filters)
5. Time-domain ringing
In general, the sharper something is in frequency, the broader it is in time,
and vice-versa. This is a fundamental time-frequency tradeoff, and it will
show up below.
FIR Filters
===========
We will focus first on FIR filters, which are the default filters used by
MNE-Python.
"""
###############################################################################
# Designing FIR filters
# ---------------------
# Here we'll try designing a low-pass filter, and look at trade-offs in terms
# of time- and frequency-domain filter characteristics. Later, in
# :ref:`tut_effect_on_signals`, we'll look at how such filters can affect
# signals when they are used.
#
# First let's import some useful tools for filtering, and set some default
# values for our data that are reasonable for M/EEG data.
import numpy as np
from scipy import signal, fftpack
import matplotlib.pyplot as plt
from mne.time_frequency.tfr import morlet
import mne
sfreq = 1000.
f_p = 40.
ylim = [-60, 10] # for dB plots
xlim = [2, sfreq / 2.]
blue = '#1f77b4'
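###############################################################################
# As a quick sanity check of the difference equation :eq:`summations`, we can
# evaluate it directly with a plain Python loop and compare the result to
# :func:`scipy.signal.lfilter`. The coefficients used here are arbitrary,
# made-up values chosen only for illustration:

b_demo = np.array([0.2, 0.5, 0.3])  # numerator (FIR) coefficients b_k
a_demo = np.array([1.0, -0.4])      # denominator (IIR) coefficients, a_0 = 1
x_demo = np.random.RandomState(0).randn(100)
y_demo = np.zeros_like(x_demo)
for n_samp in range(len(x_demo)):
    for k, b_k in enumerate(b_demo):  # weighted moving average of the input
        if n_samp - k >= 0:
            y_demo[n_samp] += b_k * x_demo[n_samp - k]
    for k, a_k in enumerate(a_demo[1:], 1):  # autoregression on the output
        if n_samp - k >= 0:
            y_demo[n_samp] -= a_k * y_demo[n_samp - k]
print(np.allclose(y_demo, signal.lfilter(b_demo, a_demo, x_demo)))  # True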
###############################################################################
# Take for example an ideal low-pass filter, which would give a value of 1 in
# the pass-band (up to frequency :math:`f_p`) and a value of 0 in the stop-band
# (down to frequency :math:`f_s`) such that :math:`f_p=f_s=40` Hz here
# (shown to a lower limit of -60 dB for simplicity):
nyq = sfreq / 2. # the Nyquist frequency is half our sample rate
freq = [0, f_p, f_p, nyq]
gain = [1, 1, 0, 0]
def box_off(ax):
ax.grid(zorder=0)
for key in ('top', 'right'):
ax.spines[key].set_visible(False)
def plot_ideal(freq, gain, ax):
freq = np.maximum(freq, xlim[0])
xs, ys = list(), list()
my_freq, my_gain = list(), list()
for ii in range(len(freq)):
xs.append(freq[ii])
ys.append(ylim[0])
if ii < len(freq) - 1 and gain[ii] != gain[ii + 1]:
xs += [freq[ii], freq[ii + 1]]
ys += [ylim[1]] * 2
my_freq += np.linspace(freq[ii], freq[ii + 1], 20,
endpoint=False).tolist()
my_gain += np.linspace(gain[ii], gain[ii + 1], 20,
endpoint=False).tolist()
else:
my_freq.append(freq[ii])
my_gain.append(gain[ii])
my_gain = 10 * np.log10(np.maximum(my_gain, 10 ** (ylim[0] / 10.)))
ax.fill_between(xs, ylim[0], ys, color='r', alpha=0.1)
ax.semilogx(my_freq, my_gain, 'r--', alpha=0.5, linewidth=4, zorder=3)
xticks = [1, 2, 4, 10, 20, 40, 100, 200, 400]
ax.set(xlim=xlim, ylim=ylim, xticks=xticks, xlabel='Frequency (Hz)',
ylabel='Amplitude (dB)')
ax.set(xticklabels=xticks)
box_off(ax)
half_height = np.array(plt.rcParams['figure.figsize']) * [1, 0.5]
ax = plt.subplots(1, figsize=half_height)[1]
plot_ideal(freq, gain, ax)
ax.set(title='Ideal %s Hz lowpass' % f_p)
mne.viz.tight_layout()
plt.show()
###############################################################################
# This filter hypothetically achieves zero ripple in the frequency domain,
# perfect attenuation, and perfect steepness. However, due to the discontinuity
# in the frequency response, the filter would require infinite ringing in the
# time domain (i.e., infinite order) to be realized. Another way to think of
# this is that a rectangular window in frequency is actually a sinc_ function
# in time, which requires an infinite number of samples, and thus infinite
# time, to represent. So although this filter has ideal frequency suppression,
# it has poor time-domain characteristics.
#
# Let's try to naïvely make a brick-wall filter of length 0.1 sec, and look
# at the filter itself in the time domain and the frequency domain:
n = int(round(0.1 * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq # center our sinc
h = np.sinc(2 * f_p * t) / (4 * np.pi)
def plot_filter(h, title, freq, gain, show=True):
if h.ndim == 2: # second-order sections
sos = h
n = mne.filter.estimate_ringing_samples(sos)
h = np.zeros(n)
h[0] = 1
h = signal.sosfilt(sos, h)
H = np.ones(512, np.complex128)
for section in sos:
f, this_H = signal.freqz(section[:3], section[3:])
H *= this_H
else:
f, H = signal.freqz(h)
fig, axs = plt.subplots(2)
t = np.arange(len(h)) / sfreq
axs[0].plot(t, h, color=blue)
axs[0].set(xlim=t[[0, -1]], xlabel='Time (sec)',
ylabel='Amplitude h(n)', title=title)
box_off(axs[0])
f *= sfreq / (2 * np.pi)
axs[1].semilogx(f, 10 * np.log10((H * H.conj()).real), color=blue,
linewidth=2, zorder=4)
plot_ideal(freq, gain, axs[1])
mne.viz.tight_layout()
if show:
plt.show()
plot_filter(h, 'Sinc (0.1 sec)', freq, gain)
###############################################################################
# This is not so good! Making the filter 10 times longer (1 sec) gets us a
# bit better stop-band suppression, but the filter still has a lot of ringing
# in
# the time domain. Note the x-axis is an order of magnitude longer here:
n = int(round(1. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, 'Sinc (1.0 sec)', freq, gain)
###############################################################################
# Let's make the stop-band tighter still with a longer filter (10 sec),
# with a resulting larger x-axis:
n = int(round(10. * sfreq)) + 1
t = np.arange(-n // 2, n // 2) / sfreq
h = np.sinc(2 * f_p * t) / (4 * np.pi)
plot_filter(h, 'Sinc (10.0 sec)', freq, gain)
###############################################################################
# Now we have very sharp frequency suppression, but our filter rings for the
# entire 10 seconds. So this naïve method is probably not a good way to build
# our low-pass filter.
#
# Fortunately, there are multiple established methods to design FIR filters
# based on desired response characteristics. These include:
#
# 1. The Remez_ algorithm (:func:`scipy.signal.remez`, `MATLAB firpm`_)
# 2. Windowed FIR design (:func:`scipy.signal.firwin2`, `MATLAB fir2`_)
# 3. Least squares designs (:func:`scipy.signal.firls`, `MATLAB firls`_)
# 4. Frequency-domain design (construct filter in Fourier
# domain and use an :func:`IFFT <scipy.fftpack.ifft>` to invert it)
#
# .. note:: Remez and least squares designs have advantages when there are
# "do not care" regions in our frequency response. However, we want
# well controlled responses in all frequency regions.
# Frequency-domain construction is good when an arbitrary response
# is desired, but generally less clean (due to sampling issues) than
# a windowed approach for more straightforward filter applications.
# Since our filters (low-pass, high-pass, band-pass, band-stop)
# are fairly simple and we require precise control of all frequency
# regions, here we will use and explore primarily windowed FIR
# design.
#
# If we relax our frequency-domain filter requirements a little bit, we can
# use these functions to construct a lowpass filter that instead has a
# *transition band*, or a region between the pass frequency :math:`f_p`
# and stop frequency :math:`f_s`, e.g.:
trans_bandwidth = 10 # 10 Hz transition band
f_s = f_p + trans_bandwidth # = 50 Hz
freq = [0., f_p, f_s, nyq]
gain = [1., 1., 0., 0.]
ax = plt.subplots(1, figsize=half_height)[1]
plot_ideal(freq, gain, ax)
ax.set(title='%s Hz lowpass with a %s Hz transition' % (f_p, trans_bandwidth))
mne.viz.tight_layout()
plt.show()
###############################################################################
# Accepting a shallower roll-off of the filter in the frequency domain makes
# our time-domain response potentially much better. We end up with a
# smoother slope through the transition region, but a *much* cleaner time
# domain signal. Here again for the 1 sec filter:
n = int(round(sfreq * 1.)) + 1  # back to a 1 sec filter, as stated above
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (1.0 sec)', freq, gain)
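###############################################################################
# For comparison, the same specification can also be designed with the
# least-squares method mentioned above. This is only a brief sketch: the band
# edges are normalized by the Nyquist frequency by hand, and
# :func:`scipy.signal.firls` requires an odd number of taps:

h_ls = signal.firls(n, np.array(freq) / nyq, gain)
plot_filter(h_ls, 'Least-squares 10-Hz transition (1.0 sec)', freq, gain)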
###############################################################################
# Since our lowpass is around 40 Hz with a 10 Hz transition, we can actually
# use a shorter filter (5 cycles at 10 Hz = 0.5 sec) and still get okay
# stop-band attenuation:
n = int(round(sfreq * 0.5)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (0.5 sec)', freq, gain)
###############################################################################
# But then if we shorten the filter too much (2 cycles of 10 Hz = 0.2 sec),
# our effective stop frequency gets pushed out past 60 Hz:
n = int(round(sfreq * 0.2)) + 1
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 10-Hz transition (0.2 sec)', freq, gain)
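###############################################################################
# As a rough numerical check of that claim, we can evaluate this short
# filter's response at the nominal stop frequency (50 Hz) and at 60 Hz
# (these two probe frequencies are just illustrative choices):

_, H_short = signal.freqz(h, worN=2 * np.pi * np.array([f_s, 60.]) / sfreq)
for f_check, H_val in zip((f_s, 60.), H_short):
    print('Attenuation at %4.1f Hz: %5.1f dB'
          % (f_check, 20 * np.log10(np.abs(H_val))))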
###############################################################################
# If we want to keep the filter this short (0.2 sec), we should probably use
# something more like a 25 Hz transition band (0.2 sec = 5 cycles @ 25 Hz):
trans_bandwidth = 25
f_s = f_p + trans_bandwidth
freq = [0, f_p, f_s, nyq]
h = signal.firwin2(n, freq, gain, nyq=nyq)
plot_filter(h, 'Windowed 25-Hz transition (0.2 sec)', freq, gain)
###############################################################################
# .. _tut_effect_on_signals:
#
# Applying FIR filters
# --------------------
#
# Now let's look at some practical effects of these filters by applying
# them to some data.
#
# Let's construct a Gaussian-windowed sinusoid (i.e., Morlet imaginary part)
# plus noise (random + line). Note that the original, clean signal contains
# frequency content in both the pass band and transition bands of our
# low-pass filter.
dur = 10.
center = 2.
morlet_freq = f_p
tlim = [center - 0.2, center + 0.2]
tticks = [tlim[0], center, tlim[1]]
flim = [20, 70]
x = np.zeros(int(sfreq * dur))
blip = morlet(sfreq, [morlet_freq], n_cycles=7)[0].imag / 20.
n_onset = int(center * sfreq) - len(blip) // 2
x[n_onset:n_onset + len(blip)] += blip
x_orig = x.copy()
rng = np.random.RandomState(0)
x += rng.randn(len(x)) / 1000.
x += np.sin(2. * np.pi * 60. * np.arange(len(x)) / sfreq) / 2000.
###############################################################################
# Filter it with a shallow cutoff, linear-phase FIR and compensate for
# the delay:
transition_band = 0.25 * f_p
f_s = f_p + transition_band
filter_dur = 6.6 / transition_band # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
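# Compensate for the filter's group delay (half the filter length for a
# linear-phase FIR) by discarding the first len(h) // 2 output samples: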
x_shallow = np.convolve(h, x)[len(h) // 2:]
plot_filter(h, 'MNE-Python 0.14 default', freq, gain)
###############################################################################
# This is actually set to become the default type of filter used in MNE-Python
# in 0.14 (see :ref:`tut_filtering_in_python`).
#
# Let's also filter with the MNE-Python 0.13 default, which is a
# long-duration, steep cutoff FIR that gets applied twice:
transition_band = 0.5 # Hz
f_s = f_p + transition_band
filter_dur = 10. # sec
n = int(sfreq * filter_dur)
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
h = signal.firwin2(n, freq, gain, nyq=sfreq / 2.)
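# Filter, time-reverse, filter again, and reverse back (equivalent to
# forward-backward, i.e. zero-phase filtering), then trim the filter-length
# edge transients: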
x_steep = np.convolve(np.convolve(h, x)[::-1], h)[::-1][len(h) - 1:-len(h) - 1]
plot_filter(h, 'MNE-Python 0.13 default', freq, gain)
###############################################################################
# Finally, let's also filter it with the
# MNE-C default, which is a long-duration steep-slope FIR filter designed
# using frequency-domain techniques:
h = mne.filter.design_mne_c_filter(sfreq, l_freq=None, h_freq=f_p + 2.5)
x_mne_c = np.convolve(h, x)[len(h) // 2:]
transition_band = 5 # Hz (default in MNE-C)
f_s = f_p + transition_band
freq = [0., f_p, f_s, sfreq / 2.]
gain = [1., 1., 0., 0.]
plot_filter(h, 'MNE-C default', freq, gain)
###############################################################################
# Both the MNE-Python 0.13 and MNE-C filters have excellent frequency
# attenuation, but it comes at a cost of potential
# ringing (long-lasting ripples) in the time domain. Ringing can occur with
# steep filters, especially on signals with frequency content around the
# transition band. Our Morlet wavelet signal has power in our transition band,
# and the time-domain ringing is thus more pronounced for the steep-slope,
# long-duration filter than the shorter, shallower-slope filter:
axs = plt.subplots(1, 2)[1]
def plot_signal(x, offset):
t = np.arange(len(x)) / sfreq
axs[0].plot(t, x + offset)
axs[0].set(xlabel='Time (sec)', xlim=t[[0, -1]])
box_off(axs[0])
X = fftpack.fft(x)
freqs = fftpack.fftfreq(len(x), 1. / sfreq)
mask = freqs >= 0
X = X[mask]
freqs = freqs[mask]
axs[1].plot(freqs, 20 * np.log10(np.abs(X)))
axs[1].set(xlim=xlim)
yticks = np.arange(5) / -30.
yticklabels = ['Original', 'Noisy', 'FIR-shallow (0.14)', 'FIR-steep (0.13)',
'FIR-steep (MNE-C)']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
plot_signal(x_mne_c, offset=yticks[4])
axs[0].set(xlim=tlim, title='FIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.150, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axs[0].get_yticklabels():
text.set(rotation=45, size=8)
axs[1].set(xlim=flim, ylim=ylim, xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
box_off(axs[0])
box_off(axs[1])
mne.viz.tight_layout()
plt.show()
###############################################################################
# IIR filters
# ===========
#
# MNE-Python also offers IIR filtering functionality that is based on the
# methods from :mod:`scipy.signal`. Specifically, we use the general-purpose
# functions :func:`scipy.signal.iirfilter` and :func:`scipy.signal.iirdesign`,
# which provide unified interfaces to IIR filter design.
#
# Designing IIR filters
# ---------------------
#
# Let's continue with our design of a 40 Hz low-pass filter, and look at
# some trade-offs of different IIR filters.
#
# Often the default IIR filter is a `Butterworth filter`_, which is designed
# to have a *maximally flat pass-band*. Let's look at a few filter orders,
# i.e., a few different numbers of coefficients used and therefore different
# steepnesses of the filter:
sos = signal.iirfilter(2, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(sos, 'Butterworth order=2', freq, gain)
# Eventually this will just be scipy.signal.sosfiltfilt, but scipy 0.18 is
# not widely adopted yet (as of June 2016), so we use our wrapper...
sosfiltfilt = mne.fixes.get_sosfiltfilt()
x_shallow = sosfiltfilt(sos, x)
###############################################################################
# The falloff of this filter is not very steep.
#
# .. warning:: For brevity, we do not show the phase of these filters here.
# In the FIR case, we can design linear-phase filters, and
# compensate for the delay (making the filter acausal) if
# necessary. This cannot be done
# with IIR filters, as they have a non-linear phase.
# As the filter order increases, the
# phase distortion near and in the transition band worsens.
# However, if acausal (forward-backward) filtering can be used,
# e.g. with :func:`scipy.signal.filtfilt`, these phase issues
# can be mitigated.
#
# .. note:: Here we have made use of second-order sections (SOS)
# by using :func:`scipy.signal.sosfilt` and, under the
# hood, :func:`scipy.signal.zpk2sos` when passing the
# ``output='sos'`` keyword argument to
# :func:`scipy.signal.iirfilter`. The filter definitions
# given in tut_filtering_basics_ use the polynomial
# numerator/denominator (sometimes called "tf") form ``(b, a)``,
# which are theoretically equivalent to the SOS form used here.
# In practice, however, the SOS form can give much better results
# due to issues with numerical precision (see
# :func:`scipy.signal.sosfilt` for an example), so SOS should be
# used when possible to do IIR filtering.
#
# Let's increase the order, and note that now we have better attenuation,
# with a longer impulse response:
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='butter', output='sos')
plot_filter(sos, 'Butterworth order=8', freq, gain)
x_steep = sosfiltfilt(sos, x)
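###############################################################################
# As a rough illustration of the numerical-precision point in the note above
# (the order and cutoff here are arbitrary choices for a sketch), we can
# design a higher-order Butterworth in both forms and filter an impulse.
# Depending on the order and cutoff, the polynomial ``(b, a)`` form can
# become numerically unstable while the SOS form stays well behaved:

b_tf, a_tf = signal.iirfilter(16, f_p / nyq, btype='low', ftype='butter',
                              output='ba')
sos_16 = signal.iirfilter(16, f_p / nyq, btype='low', ftype='butter',
                          output='sos')
imp = np.zeros(1000)
imp[0] = 1.
print('(b, a) impulse response finite: %s'
      % np.all(np.isfinite(signal.lfilter(b_tf, a_tf, imp))))
print('SOS    impulse response finite: %s'
      % np.all(np.isfinite(signal.sosfilt(sos_16, imp))))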
###############################################################################
# There are other types of IIR filters that we can use. For a complete list,
# check out the documentation for :func:`scipy.signal.iirdesign`. Let's
# try a Chebychev (type I) filter, which trades off ripple in the pass-band
# to get better attenuation in the stop-band:
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=1) # dB of acceptable pass-band ripple
plot_filter(sos, 'Chebychev-1 order=8, ripple=1 dB', freq, gain)
###############################################################################
# And if we can live with even more ripple, we can get it slightly steeper,
# but the impulse response begins to ring substantially longer (note the
# different x-axis scale):
sos = signal.iirfilter(8, f_p / nyq, btype='low', ftype='cheby1', output='sos',
rp=6)
plot_filter(sos, 'Chebychev-1 order=8, ripple=6 dB', freq, gain)
###############################################################################
# Applying IIR filters
# --------------------
#
# Now let's look at how our shallow and steep Butterworth IIR filters
# perform on our Morlet signal from before:
axs = plt.subplots(1, 2)[1]
yticks = np.arange(4) / -30.
yticklabels = ['Original', 'Noisy', 'Butterworth-2', 'Butterworth-8']
plot_signal(x_orig, offset=yticks[0])
plot_signal(x, offset=yticks[1])
plot_signal(x_shallow, offset=yticks[2])
plot_signal(x_steep, offset=yticks[3])
axs[0].set(xlim=tlim, title='IIR, Lowpass=%d Hz' % f_p, xticks=tticks,
ylim=[-0.125, 0.025], yticks=yticks, yticklabels=yticklabels,)
for text in axs[0].get_yticklabels():
text.set(rotation=45, size=8)
axs[1].set(xlim=flim, ylim=ylim, xlabel='Frequency (Hz)',
ylabel='Magnitude (dB)')
box_off(axs[0])
box_off(axs[1])
mne.viz.tight_layout()
plt.show()
###############################################################################
# Some pitfalls of filtering
# ==========================
#
# Multiple recent papers have noted potential risks of drawing
# errant inferences due to misapplication of filters.
#
# Low-pass problems
# -----------------
#
# Filters in general, especially those that are acausal (zero-phase), can make
# activity appear to occur earlier or later than it truly did. As
# mentioned in VanRullen 2011 [3]_, investigations of low-pass filters in
# common use (at the time) showed that they created artifacts when applied to
# simulated data. However, such deleterious effects were minimal in many
# real-world examples in Rousselet 2012 [5]_.
#
# Perhaps more revealing, it was noted in Widmann & Schröger 2012 [6]_ that
# the problematic low-pass filters from VanRullen 2011 [3]_:
#
# 1. Used a least-squares design (like :func:`scipy.signal.firls`) that
# included "do-not-care" transition regions, which can lead to
# uncontrolled behavior.
# 2. Had a filter length that was independent of the transition bandwidth,
# which can cause excessive ringing and signal distortion.
#
# .. _tut_filtering_hp_problems:
#
# High-pass problems
# ------------------
#
# When it comes to high-pass filtering, corner frequencies above 0.1 Hz were
# found in Acunzo *et al.* 2012 [4]_ to:
#
# "...generate a systematic bias easily leading to misinterpretations of
# neural activity.”
#
# In a related paper, Widmann *et al.* 2015 [7]_ also came to suggest a 0.1 Hz
# highpass. More evidence of such distortions followed in Tanner *et al.*
# 2015 [8]_: in data from language ERP studies of semantic and syntactic
# processing (i.e., N400 and P600), using a high-pass above 0.3 Hz caused
# significant effects to be introduced implausibly early when compared to the
# unfiltered data. From this, the authors suggested the optimal high-pass
# value for language processing to be 0.1 Hz.
#
# We can recreate a problematic simulation from Tanner *et al.* 2015 [8]_:
#
# "The simulated component is a single-cycle cosine wave with an amplitude
# of 5µV, onset of 500 ms poststimulus, and duration of 800 ms. The
# simulated component was embedded in 20 s of zero values to avoid
# filtering edge effects... Distortions [were] caused by 2 Hz low-pass and
# high-pass filters... No visible distortion to the original waveform
# [occurred] with 30 Hz low-pass and 0.01 Hz high-pass filters...
# Filter frequencies correspond to the half-amplitude (-6 dB) cutoff
# (12 dB/octave roll-off)."
#
# .. note:: This simulated signal contains energy not just within the
# pass-band, but also within the transition and stop-bands -- perhaps
# most easily understood because the signal has a non-zero DC value,
# but also because it is a shifted cosine that has been
# *windowed* (here multiplied by a rectangular window), which
# makes the cosine and DC frequencies spread to other frequencies
# (multiplication in time is convolution in frequency, so multiplying
# by a rectangular window in the time domain means convolving a sinc
# function with the impulses at DC and the cosine frequency in the
# frequency domain).
#
x = np.zeros(int(2 * sfreq))
t = np.arange(0, len(x)) / sfreq - 0.2
onset = np.where(t >= 0.5)[0][0]
cos_t = np.arange(0, int(sfreq * 0.8)) / sfreq
sig = 2.5 - 2.5 * np.cos(2 * np.pi * (1. / 0.8) * cos_t)
x[onset:onset + len(sig)] = sig
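# As a quick check of the note above (the exact numbers are not important):
# most of this signal's energy is at DC and ~1.25 Hz, but the rectangular
# windowing spreads some of it into higher frequencies as well.
X_sig = np.abs(fftpack.fft(x))
freqs_sig = fftpack.fftfreq(len(x), 1. / sfreq)
print('Fraction of spectral energy above 2 Hz: %0.4f'
      % (np.sum(X_sig[np.abs(freqs_sig) > 2.] ** 2) / np.sum(X_sig ** 2)))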
# Note that scipy's Wn is normalized by the Nyquist frequency (sfreq / 2)
iir_lp_30 = signal.iirfilter(2, 30. / nyq, btype='lowpass')
iir_hp_p1 = signal.iirfilter(2, 0.1 / nyq, btype='highpass')
iir_lp_2 = signal.iirfilter(2, 2. / nyq, btype='lowpass')
iir_hp_2 = signal.iirfilter(2, 2. / nyq, btype='highpass')
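# Filter forward and backward with filtfilt for zero-phase output; padlen=0
# is fine here because the signal is already padded with zeros at both ends: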
x_lp_30 = signal.filtfilt(iir_lp_30[0], iir_lp_30[1], x, padlen=0)
x_hp_p1 = signal.filtfilt(iir_hp_p1[0], iir_hp_p1[1], x, padlen=0)
x_lp_2 = signal.filtfilt(iir_lp_2[0], iir_lp_2[1], x, padlen=0)
x_hp_2 = signal.filtfilt(iir_hp_2[0], iir_hp_2[1], x, padlen=0)
xlim = t[[0, -1]]
ylim = [-2, 6]
xlabel = 'Time (sec)'
ylabel = r'Amplitude ($\mu$V)'
tticks = [0, 0.5, 1.3, t[-1]]
axs = plt.subplots(2, 2)[1].ravel()
for ax, x_f, title in zip(axs, [x_lp_2, x_lp_30, x_hp_2, x_hp_p1],
                          ['LP$_2$', 'LP$_{30}$', 'HP$_2$', 'HP$_{0.1}$']):
ax.plot(t, x, color='0.5')
ax.plot(t, x_f, color='k', linestyle='--')
ax.set(ylim=ylim, xlim=xlim, xticks=tticks,
title=title, xlabel=xlabel, ylabel=ylabel)
box_off(ax)
mne.viz.tight_layout()
plt.show()
###############################################################################
# Similarly, in a P300 paradigm reported by Kappenman & Luck 2010 [12]_,
# they found that applying a 1 Hz high-pass decreased the probability of
# finding a significant difference in the N100 response, likely because
# the P300 response was smeared (and inverted) in time by the high-pass
# filter such that it tended to cancel out the increased N100. However,
# they nonetheless note that some high-passing can still be useful to deal
# with drifts in the data.
#
# Even though these papers generally advise a 0.1 Hz or lower frequency for
# a high-pass, it is important to keep in mind (as most authors note) that
# filtering choices should depend on the frequency content of both the
# signal(s) of interest and the noise to be suppressed. For example, in
# some of the MNE-Python examples involving :ref:`ch_sample_data`,
# high-pass values of around 1 Hz are used when looking at auditory
# or visual N100 responses, because we analyze standard (not deviant) trials
# and thus expect that contamination by later or slower components will
# be limited.
#
# Baseline problems (or solutions?)
# ---------------------------------
#
# In an evolving discussion, Tanner *et al.* 2015 [8]_ suggest using baseline
# correction to remove slow drifts in data. However, Maess *et al.* 2016 [9]_
# suggest that baseline correction, which is a form of high-passing, does
# not offer substantial advantages over standard high-pass filtering.
# Tanner *et al.* [10]_ rebutted, arguing that baseline correction can help
# correct for problems with filtering.
#
# To see what they mean, consider again our old simulated signal ``x`` from
# before:
def baseline_plot(x):
all_axs = plt.subplots(3, 2)[1]
for ri, (axs, freq) in enumerate(zip(all_axs, [0.1, 0.3, 0.5])):
for ci, ax in enumerate(axs):
if ci == 0:
                iir_hp = signal.iirfilter(4, freq / nyq, btype='highpass',
                                          output='sos')
x_hp = sosfiltfilt(iir_hp, x, padlen=0)
else:
x_hp -= x_hp[t < 0].mean()
ax.plot(t, x, color='0.5')
ax.plot(t, x_hp, color='k', linestyle='--')
if ri == 0:
ax.set(title=('No ' if ci == 0 else '') +
'Baseline Correction')
box_off(ax)
ax.set(xticks=tticks, ylim=ylim, xlim=xlim, xlabel=xlabel)
ax.set_ylabel('%0.1f Hz' % freq, rotation=0,
horizontalalignment='right')
mne.viz.tight_layout()
    plt.suptitle('High-pass filtering with and without baseline correction')
plt.show()
baseline_plot(x)
###############################################################################
# In response, Maess *et al.* 2016 [11]_ note that these simulations do not
# address cases of pre-stimulus activity that is shared across conditions, as
# applying baseline correction will effectively copy the topology outside the
# baseline period. We can see this if we give our signal ``x`` some
# consistent pre-stimulus activity, which makes everything look bad.
#
# .. note:: An important thing to keep in mind with these plots is that they
# are for a single simulated sensor. In multielectrode recordings
# the topology (i.e., spatial pattern) of the pre-stimulus activity
# will leak into the post-stimulus period. This will likely create a
# spatially varying distortion of the time-domain signals, as the
# averaged pre-stimulus spatial pattern gets subtracted from the
# sensor time courses.
#
# Putting some activity in the baseline period:
n_pre = (t < 0).sum()
sig_pre = 1 - np.cos(2 * np.pi * np.arange(n_pre) / (0.5 * n_pre))
x[:n_pre] += sig_pre
baseline_plot(x)
###############################################################################
# Both groups seem to acknowledge that the choices of filtering cutoffs, and
# perhaps even the application of baseline correction, depend on the
# characteristics of the data being investigated, especially when it comes to:
#
# 1. The frequency content of the underlying evoked activity relative
# to the filtering parameters.
# 2. The validity of the assumption of no consistent evoked activity
# in the baseline period.
#
# We thus recommend carefully applying baseline correction and/or high-pass
# values based on the characteristics of the data to be analyzed.
#
#
# Filtering defaults
# ==================
#
# .. _tut_filtering_in_python:
#
# Defaults in MNE-Python
# ----------------------
#
# Most often, filtering in MNE-Python is done at the :class:`mne.io.Raw` level,
# and thus :func:`mne.io.Raw.filter` is used. This function under the hood
# (among other things) calls :func:`mne.filter.filter_data` to actually
# filter the data, which by default applies a zero-phase FIR filter designed
# using :func:`scipy.signal.firwin2`. In Widmann *et al.* 2015 [7]_, they
# suggest a specific set of parameters to use for high-pass filtering,
# including:
#
# "... providing a transition bandwidth of 25% of the lower passband
# edge but, where possible, not lower than 2 Hz and otherwise the
# distance from the passband edge to the critical frequency.”
#
# In practice, this means that for each high-pass value ``l_freq`` or
# low-pass value ``h_freq`` below, you would get this corresponding
# ``l_trans_bandwidth`` or ``h_trans_bandwidth``, respectively,
# if the sample rate were 100 Hz (i.e., Nyquist frequency of 50 Hz):
#
# +------------------+-------------------+-------------------+
# | l_freq or h_freq | l_trans_bandwidth | h_trans_bandwidth |
# +==================+===================+===================+
# | 0.01 | 0.01 | 2.0 |
# +------------------+-------------------+-------------------+
# | 0.1 | 0.1 | 2.0 |
# +------------------+-------------------+-------------------+
# | 1.0 | 1.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 2.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 4.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 8.0 | 2.0 | 2.0 |
# +------------------+-------------------+-------------------+
# | 10.0 | 2.5 | 2.5 |
# +------------------+-------------------+-------------------+
# | 20.0 | 5.0 | 5.0 |
# +------------------+-------------------+-------------------+
# | 40.0 | 10.0 | 10.0 |
# +------------------+-------------------+-------------------+
# | 45.0 | 11.25 | 5.0 |
# +------------------+-------------------+-------------------+
# | 48.0 | 12.0 | 2.0 |
# +------------------+-------------------+-------------------+
#
# MNE-Python has adopted this definition for its high-pass (and low-pass)
# transition bandwidth choices when using ``l_trans_bandwidth='auto'`` and
# ``h_trans_bandwidth='auto'``.
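###############################################################################
# Expressed as a small sketch (this mirrors the quoted rule and the table
# above, rather than reproducing the MNE-Python source itself), the ``'auto'``
# transition bandwidths can be computed as:

def auto_trans_bandwidth(edge_freq, sample_rate=100.):
    """Sketch of the 'auto' rule: 25% of the edge, at least 2 Hz, limited by
    the distance to the nearest critical frequency (DC or Nyquist)."""
    l_trans = min(max(0.25 * edge_freq, 2.), edge_freq)  # high-pass edge
    h_trans = min(max(0.25 * edge_freq, 2.),
                  sample_rate / 2. - edge_freq)  # low-pass edge
    return l_trans, h_trans

for edge in (0.01, 0.1, 1., 2., 4., 8., 10., 20., 40., 45., 48.):
    print('%5.2f Hz edge: l_trans=%5.2f Hz, h_trans=%5.2f Hz'
          % ((edge,) + auto_trans_bandwidth(edge)))

###############################################################################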
#
# To choose the filter length automatically with ``filter_length='auto'``,
# the reciprocal of the shortest transition bandwidth is used to ensure
# decent attenuation at the stop frequency. Specifically, the reciprocal
# (in samples) is multiplied by 6.2, 6.6, or 11.0 for the Hann, Hamming,
# or Blackman windows, respectively as selected by the ``fir_window``
# argument.
#
# .. note:: These multiplicative factors are double what is given in
# Ifeachor and Jervis [2]_ (p. 357). The window functions have a
# smearing effect on the frequency response; I&J thus take the
# approach of setting the stop frequency as
# :math:`f_s = f_p + f_{trans} / 2.`, but our stated definitions of
# :math:`f_s` and :math:`f_{trans}` do not
# allow us to do this in a nice way. Instead, we increase our filter
# length to achieve acceptable (20+ dB) attenuation by
# :math:`f_s = f_p + f_{trans}`, and excellent (50+ dB)
# attenuation by :math:`f_s + f_{trans}` (and usually earlier).
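###############################################################################
# And, as a sketch of the corresponding ``filter_length='auto'`` computation
# (the actual implementation applies some additional constraints), for a
# hypothetical 8-40 Hz band-pass at the 1000 Hz sample rate used here, the
# shortest transition bandwidth is the 2 Hz one from the high-pass edge,
# giving:

length_factors = dict(hann=6.2, hamming=6.6, blackman=11.0)
shortest_trans = 2.  # Hz
for window_name in ('hann', 'hamming', 'blackman'):
    n_taps = int(np.ceil(length_factors[window_name] * sfreq / shortest_trans))
    print('%8s window: %5d taps (%0.1f sec)'
          % (window_name, n_taps, n_taps / sfreq))

###############################################################################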
#
# In 0.14, we default to using a Hamming window in filter design, as it
# provides up to 53 dB of stop-band attenuation with small pass-band ripple.
#
# .. note:: In band-pass applications, often a low-pass filter can operate
# effectively with fewer samples than the high-pass filter, so
# it is advisable to apply the high-pass and low-pass separately.
#
# For more information on how to use the
# MNE-Python filtering functions with real data, consult the preprocessing
# tutorial on :ref:`tut_artifacts_filter`.
#
# Defaults in MNE-C
# -----------------
# MNE-C by default uses:
#
# 1. 5 Hz transition band for low-pass filters.
# 2. 3-sample transition band for high-pass filters.
# 3. Filter length of 8197 samples.
#
# The filter is designed in the frequency domain, creating a linear-phase
# filter such that the delay is compensated for as is done with the MNE-Python
# ``phase='zero'`` filtering option.
#
# Squared-cosine ramps are used in the transition regions. Because these
# are used in place of more gradual (e.g., linear) transitions,
# a given transition width will result in more temporal ringing but also more
# rapid attenuation than the same transition width in windowed FIR designs.
#
# The default filter length will generally have excellent attenuation
# but long ringing for the sample rates typically encountered in M/EEG data
# (e.g. 500-2000 Hz).
#
# Defaults in other software
# --------------------------
# A good but possibly outdated comparison of filtering in various software
# packages is available in [7]_. Briefly:
#
# * EEGLAB
# MNE-Python in 0.14 defaults to behavior very similar to that of EEGLAB,
# see the `EEGLAB filtering FAQ`_ for more information.
# * FieldTrip
# By default FieldTrip applies a forward-backward Butterworth IIR filter
# of order 4 (band-pass and band-stop filters) or 2 (for low-pass and
# high-pass filters). Similar filters can be achieved in MNE-Python when
# filtering with :meth:`raw.filter(..., method='iir') <mne.io.Raw.filter>`
# (see also :func:`mne.filter.construct_iir_filter` for options).
# For more information, see e.g. `FieldTrip band-pass documentation`_.
#
# Summary
# =======
#
# When filtering, there are always tradeoffs that should be considered.
# One important tradeoff is between time-domain characteristics (like ringing)
# and frequency-domain attenuation characteristics (like effective transition
# bandwidth). Filters with sharp frequency cutoffs can produce outputs that
# ring for a long time when they operate on signals with frequency content
# in the transition band. In general, therefore, the wider a transition band
# that can be tolerated, the better behaved the filter will be in the time
# domain.
#
# References
# ==========
#
# .. [1] Parks TW, Burrus CS (1987). Digital Filter Design.
# New York: Wiley-Interscience.
# .. [2] Ifeachor, E. C., & Jervis, B. W. (2002). Digital Signal Processing:
# A Practical Approach. Prentice Hall.
# .. [3] Vanrullen, R. (2011). Four common conceptual fallacies in mapping
# the time course of recognition. Perception Science, 2, 365.
# .. [4] Acunzo, D. J., MacKenzie, G., & van Rossum, M. C. W. (2012).
# Systematic biases in early ERP and ERF components as a result
# of high-pass filtering. Journal of Neuroscience Methods,
# 209(1), 212–218. http://doi.org/10.1016/j.jneumeth.2012.06.011
# .. [5] Rousselet, G. A. (2012). Does filtering preclude us from studying
# ERP time-courses? Frontiers in Psychology, 3(131)
# .. [6] Widmann, A., & Schröger, E. (2012). Filter effects and filter
# artifacts in the analysis of electrophysiological data.
# Perception Science, 233.
# .. [7] Widmann, A., Schröger, E., & Maess, B. (2015). Digital filter
# design for electrophysiological data – a practical approach.
# Journal of Neuroscience Methods, 250, 34–46.
# .. [8] Tanner, D., Morgan-Short, K., & Luck, S. J. (2015).
# How inappropriate high-pass filters can produce artifactual effects
# and incorrect conclusions in ERP studies of language and cognition.
# Psychophysiology, 52(8), 997–1009. http://doi.org/10.1111/psyp.12437
# .. [9] Maess, B., Schröger, E., & Widmann, A. (2016).
# High-pass filters and baseline correction in M/EEG analysis.
# Commentary on: “How inappropriate high-pass filters can produce
# artefacts and incorrect conclusions in ERP studies of language
# and cognition.” Journal of Neuroscience Methods, 266, 164–165.
# .. [10] Tanner, D., Norton, J. J. S., Morgan-Short, K., & Luck, S. J. (2016).
# On high-pass filter artifacts (they’re real) and baseline correction
# (it’s a good idea) in ERP/ERMF analysis.
# .. [11] Maess, B., Schröger, E., & Widmann, A. (2016).
# High-pass filters and baseline correction in M/EEG analysis-continued
# discussion. Journal of Neuroscience Methods, 266, 171–172.
# Journal of Neuroscience Methods, 266, 166–170.
# .. [12] Kappenman E. & Luck, S. (2010). The effects of impedance on data
# quality and statistical significance in ERP recordings.
# Psychophysiology, 47, 888-904.
#
# .. _FIR: https://en.wikipedia.org/wiki/Finite_impulse_response
# .. _IIR: https://en.wikipedia.org/wiki/Infinite_impulse_response
# .. _sinc: https://en.wikipedia.org/wiki/Sinc_function
# .. _moving average: https://en.wikipedia.org/wiki/Moving_average
# .. _autoregression: https://en.wikipedia.org/wiki/Autoregressive_model
# .. _Remez: https://en.wikipedia.org/wiki/Remez_algorithm
# .. _matlab firpm: http://www.mathworks.com/help/signal/ref/firpm.html
# .. _matlab fir2: http://www.mathworks.com/help/signal/ref/fir2.html
# .. _matlab firls: http://www.mathworks.com/help/signal/ref/firls.html
# .. _Butterworth filter: https://en.wikipedia.org/wiki/Butterworth_filter
# .. _eeglab filtering faq: https://sccn.ucsd.edu/wiki/Firfilt_FAQ
# .. _fieldtrip band-pass documentation: http://www.fieldtriptoolbox.org/reference/ft_preproc_bandpassfilter # noqa
| bsd-3-clause |
DSLituiev/scikit-learn | sklearn/tests/test_discriminant_analysis.py | 22 | 12830 | import sys
import numpy as np
from nose import SkipTest
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import ignore_warnings
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.discriminant_analysis import _cov
# import reload
version = sys.version_info
if version[0] == 3:
# Python 3+ import for reload. Builtin in Python2
if version[1] == 3:
reload = None
else:
from importlib import reload
# Data is just 6 separable points in the plane
X = np.array([[-2, -1], [-1, -1], [-1, -2], [1, 1], [1, 2], [2, 1]], dtype='f')
y = np.array([1, 1, 1, 2, 2, 2])
y3 = np.array([1, 1, 2, 2, 3, 3])
# Degenerate data with only one feature (still should be separable)
X1 = np.array([[-2, ], [-1, ], [-1, ], [1, ], [1, ], [2, ]], dtype='f')
# Data is just 9 separable points in the plane
X6 = np.array([[0, 0], [-2, -2], [-2, -1], [-1, -1], [-1, -2],
[1, 3], [1, 2], [2, 1], [2, 2]])
y6 = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2])
y7 = np.array([1, 2, 3, 2, 3, 1, 2, 3, 1])
# Degenerate data with 1 feature (still should be separable)
X7 = np.array([[-3, ], [-2, ], [-1, ], [-1, ], [0, ], [1, ], [1, ],
[2, ], [3, ]])
# Data that has zero variance in one dimension and needs regularization
X2 = np.array([[-3, 0], [-2, 0], [-1, 0], [-1, 0], [0, 0], [1, 0], [1, 0],
[2, 0], [3, 0]])
# One element class
y4 = np.array([1, 1, 1, 1, 1, 1, 1, 1, 2])
# Data with less samples in a class than n_features
X5 = np.c_[np.arange(8), np.zeros((8, 3))]
y5 = np.array([0, 0, 0, 0, 0, 1, 1, 1])
solver_shrinkage = [('svd', None), ('lsqr', None), ('eigen', None),
('lsqr', 'auto'), ('lsqr', 0), ('lsqr', 0.43),
('eigen', 'auto'), ('eigen', 0), ('eigen', 0.43)]
def test_lda_predict():
# Test LDA classification.
# This checks that LDA implements fit and predict and returns correct
# values for simple toy data.
for test_case in solver_shrinkage:
solver, shrinkage = test_case
clf = LinearDiscriminantAnalysis(solver=solver, shrinkage=shrinkage)
y_pred = clf.fit(X, y).predict(X)
assert_array_equal(y_pred, y, 'solver %s' % solver)
# Assert that it works with 1D data
y_pred1 = clf.fit(X1, y).predict(X1)
assert_array_equal(y_pred1, y, 'solver %s' % solver)
# Test probability estimates
y_proba_pred1 = clf.predict_proba(X1)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y,
'solver %s' % solver)
y_log_proba_pred1 = clf.predict_log_proba(X1)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1,
8, 'solver %s' % solver)
# Primarily test for commit 2f34950 -- "reuse" of priors
y_pred3 = clf.fit(X, y3).predict(X)
# LDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y3), 'solver %s' % solver)
# Test invalid shrinkages
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage=-0.2231)
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="eigen", shrinkage="dummy")
assert_raises(ValueError, clf.fit, X, y)
clf = LinearDiscriminantAnalysis(solver="svd", shrinkage="auto")
assert_raises(NotImplementedError, clf.fit, X, y)
# Test unknown solver
clf = LinearDiscriminantAnalysis(solver="dummy")
assert_raises(ValueError, clf.fit, X, y)
def test_lda_priors():
# Test priors (negative priors)
priors = np.array([0.5, -0.5])
clf = LinearDiscriminantAnalysis(priors=priors)
msg = "priors must be non-negative"
assert_raise_message(ValueError, msg, clf.fit, X, y)
# Test that priors passed as a list are correctly handled (run to see if
# failure)
clf = LinearDiscriminantAnalysis(priors=[0.5, 0.5])
clf.fit(X, y)
# Test that priors always sum to 1
priors = np.array([0.5, 0.6])
prior_norm = np.array([0.45, 0.55])
clf = LinearDiscriminantAnalysis(priors=priors)
assert_warns(UserWarning, clf.fit, X, y)
assert_array_almost_equal(clf.priors_, prior_norm, 2)
def test_lda_coefs():
# Test if the coefficients of the solvers are approximately the same.
n_features = 2
n_classes = 2
n_samples = 1000
X, y = make_blobs(n_samples=n_samples, n_features=n_features,
centers=n_classes, random_state=11)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_lsqr = LinearDiscriminantAnalysis(solver="lsqr")
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_svd.fit(X, y)
clf_lda_lsqr.fit(X, y)
clf_lda_eigen.fit(X, y)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_lsqr.coef_, 1)
assert_array_almost_equal(clf_lda_svd.coef_, clf_lda_eigen.coef_, 1)
assert_array_almost_equal(clf_lda_eigen.coef_, clf_lda_lsqr.coef_, 1)
def test_lda_transform():
# Test LDA transform.
clf = LinearDiscriminantAnalysis(solver="svd", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="eigen", n_components=1)
X_transformed = clf.fit(X, y).transform(X)
assert_equal(X_transformed.shape[1], 1)
clf = LinearDiscriminantAnalysis(solver="lsqr", n_components=1)
clf.fit(X, y)
msg = "transform not implemented for 'lsqr'"
assert_raise_message(NotImplementedError, msg, clf.transform, X)
def test_lda_explained_variance_ratio():
    # Test if the sum of the normalized eigenvalues equals 1.
# Also tests whether the explained_variance_ratio_ formed by the
# eigen solver is the same as the explained_variance_ratio_ formed
# by the svd solver
n_features = 2
n_classes = 2
n_samples = 1000
X, y = make_blobs(n_samples=n_samples, n_features=n_features,
centers=n_classes, random_state=11)
clf_lda_eigen = LinearDiscriminantAnalysis(solver="eigen")
clf_lda_eigen.fit(X, y)
assert_almost_equal(clf_lda_eigen.explained_variance_ratio_.sum(), 1.0, 3)
clf_lda_svd = LinearDiscriminantAnalysis(solver="svd")
clf_lda_svd.fit(X, y)
assert_almost_equal(clf_lda_svd.explained_variance_ratio_.sum(), 1.0, 3)
assert_array_almost_equal(clf_lda_svd.explained_variance_ratio_,
clf_lda_eigen.explained_variance_ratio_)
def test_lda_orthogonality():
# arrange four classes with their means in a kite-shaped pattern
# the longer distance should be transformed to the first component, and
# the shorter distance to the second component.
means = np.array([[0, 0, -1], [0, 2, 0], [0, -2, 0], [0, 0, 5]])
# We construct perfectly symmetric distributions, so the LDA can estimate
# precise means.
scatter = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0, 0.1, 0], [0, -0.1, 0],
[0, 0, 0.1], [0, 0, -0.1]])
X = (means[:, np.newaxis, :] + scatter[np.newaxis, :, :]).reshape((-1, 3))
y = np.repeat(np.arange(means.shape[0]), scatter.shape[0])
# Fit LDA and transform the means
clf = LinearDiscriminantAnalysis(solver="svd").fit(X, y)
means_transformed = clf.transform(means)
d1 = means_transformed[3] - means_transformed[0]
d2 = means_transformed[2] - means_transformed[1]
d1 /= np.sqrt(np.sum(d1 ** 2))
d2 /= np.sqrt(np.sum(d2 ** 2))
# the transformed within-class covariance should be the identity matrix
assert_almost_equal(np.cov(clf.transform(scatter).T), np.eye(2))
# the means of classes 0 and 3 should lie on the first component
assert_almost_equal(np.abs(np.dot(d1[:2], [1, 0])), 1.0)
# the means of classes 1 and 2 should lie on the second component
assert_almost_equal(np.abs(np.dot(d2[:2], [0, 1])), 1.0)
def test_lda_scaling():
# Test if classification works correctly with differently scaled features.
n = 100
rng = np.random.RandomState(1234)
# use uniform distribution of features to make sure there is absolutely no
# overlap between classes.
x1 = rng.uniform(-1, 1, (n, 3)) + [-10, 0, 0]
x2 = rng.uniform(-1, 1, (n, 3)) + [10, 0, 0]
x = np.vstack((x1, x2)) * [1, 100, 10000]
y = [-1] * n + [1] * n
for solver in ('svd', 'lsqr', 'eigen'):
clf = LinearDiscriminantAnalysis(solver=solver)
# should be able to separate the data perfectly
assert_equal(clf.fit(x, y).score(x, y), 1.0,
'using covariance: %s' % solver)
def test_qda():
# QDA classification.
# This checks that QDA implements fit and predict and returns
# correct values for a simple toy dataset.
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
assert_array_equal(y_pred, y6)
# Assure that it works with 1D data
y_pred1 = clf.fit(X7, y6).predict(X7)
assert_array_equal(y_pred1, y6)
# Test probas estimates
y_proba_pred1 = clf.predict_proba(X7)
assert_array_equal((y_proba_pred1[:, 1] > 0.5) + 1, y6)
y_log_proba_pred1 = clf.predict_log_proba(X7)
assert_array_almost_equal(np.exp(y_log_proba_pred1), y_proba_pred1, 8)
y_pred3 = clf.fit(X6, y7).predict(X6)
# QDA shouldn't be able to separate those
assert_true(np.any(y_pred3 != y7))
# Classes should have at least 2 elements
assert_raises(ValueError, clf.fit, X6, y4)
def test_qda_priors():
clf = QuadraticDiscriminantAnalysis()
y_pred = clf.fit(X6, y6).predict(X6)
n_pos = np.sum(y_pred == 2)
neg = 1e-10
clf = QuadraticDiscriminantAnalysis(priors=np.array([neg, 1 - neg]))
y_pred = clf.fit(X6, y6).predict(X6)
n_pos2 = np.sum(y_pred == 2)
assert_greater(n_pos2, n_pos)
def test_qda_store_covariances():
# The default is to not set the covariances_ attribute
clf = QuadraticDiscriminantAnalysis().fit(X6, y6)
assert_true(not hasattr(clf, 'covariances_'))
# Test the actual attribute:
clf = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X6, y6)
assert_true(hasattr(clf, 'covariances_'))
assert_array_almost_equal(
clf.covariances_[0],
np.array([[0.7, 0.45], [0.45, 0.7]])
)
assert_array_almost_equal(
clf.covariances_[1],
np.array([[0.33333333, -0.33333333], [-0.33333333, 0.66666667]])
)
def test_qda_regularization():
# the default is reg_param=0. and will cause issues
# when there is a constant variable
clf = QuadraticDiscriminantAnalysis()
with ignore_warnings():
y_pred = clf.fit(X2, y6).predict(X2)
assert_true(np.any(y_pred != y6))
# adding a little regularization fixes the problem
clf = QuadraticDiscriminantAnalysis(reg_param=0.01)
with ignore_warnings():
clf.fit(X2, y6)
y_pred = clf.predict(X2)
assert_array_equal(y_pred, y6)
# Case n_samples_in_a_class < n_features
clf = QuadraticDiscriminantAnalysis(reg_param=0.1)
with ignore_warnings():
clf.fit(X5, y5)
y_pred5 = clf.predict(X5)
assert_array_equal(y_pred5, y5)
def test_deprecated_lda_qda_deprecation():
if reload is None:
raise SkipTest("Can't reload module on Python3.3")
def import_lda_module():
import sklearn.lda
# ensure that we trigger DeprecationWarning even if the sklearn.lda
# was loaded previously by another test.
reload(sklearn.lda)
return sklearn.lda
lda = assert_warns(DeprecationWarning, import_lda_module)
assert lda.LDA is LinearDiscriminantAnalysis
def import_qda_module():
import sklearn.qda
# ensure that we trigger DeprecationWarning even if the sklearn.qda
# was loaded previously by another test.
reload(sklearn.qda)
return sklearn.qda
qda = assert_warns(DeprecationWarning, import_qda_module)
assert qda.QDA is QuadraticDiscriminantAnalysis
def test_covariance():
x, y = make_blobs(n_samples=100, n_features=5,
centers=1, random_state=42)
# make features correlated
x = np.dot(x, np.arange(x.shape[1] ** 2).reshape(x.shape[1], x.shape[1]))
c_e = _cov(x, 'empirical')
assert_almost_equal(c_e, c_e.T)
c_s = _cov(x, 'auto')
assert_almost_equal(c_s, c_s.T)
| bsd-3-clause |
viveksck/langchangetrack | langchangetrack/cpdetection/detect_changepoints_word_ts.py | 1 | 8247 | from argparse import ArgumentParser
import logging
import pandas as pd
import numpy as np
import itertools
import more_itertools
import os
from functools import partial
from changepoint.mean_shift_model import MeanShiftModel
from changepoint.utils.ts_stats import parallelize_func
__author__ = "Vivek Kulkarni"
__email__ = "[email protected]"
import psutil
from multiprocessing import cpu_count
p = psutil.Process(os.getpid())
p.set_cpu_affinity(list(range(cpu_count())))
# Global variable specifying which column index the time series
# begins in a dataframe
TS_OFFSET = 2
LOGFORMAT = "%(asctime).19s %(levelname)s %(filename)s: %(lineno)s %(message)s"
def normalize_timeseries(df):
"""
Normalize each column of the data frame by its mean and standard
deviation.
"""
dfm = df.copy(deep=True)
dfmean = df.mean()
dfstd = df.std()
for col in df.columns[2:]:
dfm[col] = (df[col] - dfmean[col]) / dfstd[col]
return dfm
def get_filtered_df(df, vocab_file):
""" Return a data frame with only the words present in the vocab file. """
if vocab_file:
vocab = open(vocab_file).readlines()
vocab = [v.strip() for v in vocab]
# Get the set of words.
words = pd.Series(df.word.values.ravel()).unique()
set_words = set(words)
# Find the words common to data frame and vocab
common_set_words = set_words & set(vocab)
# Filter the dataframe
df_filtered = df[df.word.isin(common_set_words)]
return df_filtered
else:
return df
def get_pval_word(df, word, B):
"""
Get the pvalue of a change point at each time point 't' corresponding to
the word. Also return the number of tail successes during boot strap.
Use a mean shift model for this.
"""
# Remove the first TS_OFFSET columns as it is 'index' and 'word' to get the
# time series for that word.
ts = df[df.word == word].values[0][TS_OFFSET:]
# Create a mean shift model
model = MeanShiftModel()
# Detect the change points using a mean shift model
stats_ts, pvals, nums = model.detect_mean_shift(ts, B=B)
# Return the word and pvals associated with each time point.
L = [word]
L.extend(pvals)
H = [word]
H.extend(nums)
return L, H
def get_pval_word_chunk(chunk, df, B):
""" Get the p-values for each time point for a chunk of words. """
results = [get_pval_word(df, w, B) for w in chunk]
return results
def get_minpval_cp(pvalue_df_row):
"""
Get the minimum p-value and the corresponding time point for each word.
"""
# first column is 'word', so ignore it
index_series = pvalue_df_row.index[1:]
row_series = pvalue_df_row.values[1:]
assert(len(index_series) == len(row_series))
# Find the minimum pvalue
min_pval = np.min(row_series)
    # Find the index where the minimum pvalue occurs.
min_idx = np.argmin(row_series)
# Get the timepoint corresponding to that index
min_cp = index_series[min_idx]
return min_pval, min_cp
def get_cp_pval(pvalue_df_row, zscore_df, threshold=0.0):
"""
    Get the minimum p-value, and the corresponding timepoint, among the
    timepoints whose Z-SCORE exceeds the given threshold.
"""
# First column is 'word', so ignore it
row_series = pvalue_df_row.values[1:]
# Corresponding Z-Score series for the exact same set of timepoints.
zscore_series = np.array(zscore_df[zscore_df.word == pvalue_df_row.word][pvalue_df_row.index[1:]])[0]
assert(len(zscore_series) == len(row_series))
# Get all the indices where zscore exceeds a threshold
sel_idx = np.where(zscore_series > threshold)[0]
# If there are no such indices return NAN
if not len(sel_idx):
return 1.0, np.nan
# We have some indices. Select the pvalues for those indices.
pvals_indices = np.take(row_series, sel_idx)
# Find the minimum pvalue among those candidates.
min_pval = np.min(pvals_indices)
# Find the minimum candidate index corresponding to that pvalue
min_idx = np.argmin(pvals_indices)
# Select the actual index that it corresponds to
cp_idx = sel_idx[min_idx]
# Translate that to the actual timepoint and return it.
cp = pvalue_df_row.index[1:][cp_idx]
return min_pval, cp
def main(args):
# Read the arguments
df_f = args.filename
common_vocab_file = args.vocab_file
pval_file = args.pval_file
sample_file = args.sample_file
col_to_drop = args.col
should_normalize = not(args.dont_normalize)
threshold = float(args.threshold)
workers = args.workers
print "Config:"
print "Input data frame file name:", df_f
print "Vocab file", common_vocab_file
print "Output pvalue file", pval_file
print "Output sample file", sample_file
print "Columns to drop", col_to_drop
print "Normalize Time series:", should_normalize
print "Threshold", threshold
# Read the time series data
df = pd.read_csv(df_f)
# Consider only words in the common vocabulary.
df = get_filtered_df(df, common_vocab_file)
# Normalize the data frame
if should_normalize:
norm_df = normalize_timeseries(df)
else:
norm_df = df
# Drop the column if needed. We typically drop the 1st column as it always is 0 by
# default.
if col_to_drop in norm_df.columns:
cols = df.columns.tolist()
if col_to_drop == norm_df.columns[-1]:
time_points = cols[2:]
new_cols = cols[0:2] + time_points[::-1]
norm_df = norm_df[new_cols]
norm_df.drop(col_to_drop, axis=1, inplace=True)
print "Dropped column", col_to_drop
print "Columns of the data frame are", norm_df.columns
cwords = norm_df.word.values
print "Number of words we are analyzing:", len(cwords)
chunksz = np.ceil(len(cwords) / float(workers))
results = parallelize_func(cwords[:], get_pval_word_chunk, chunksz=chunksz, n_jobs=workers, df=norm_df, B=args.B)
pvals, num_samples = zip(*results)
header = ['word'] + list(norm_df.columns[TS_OFFSET:len(pvals[0]) + 1])
pvalue_df = pd.DataFrame().from_records(list(pvals), columns=header)
    # Append additional columns to the final df
pvalue_df_final = pvalue_df.copy(deep=True)
pvalue_df_final['min_pval'], pvalue_df_final['cp'] = zip(*pvalue_df.apply(get_minpval_cp, axis=1))
pvalue_df_final['tpval'], pvalue_df_final['tcp'] = zip(*pvalue_df.apply(get_cp_pval, axis=1, zscore_df=norm_df, threshold=threshold))
    pvalue_df_final.drop(norm_df.columns[TS_OFFSET:len(pvals[0]) + 1], axis=1, inplace=True)
# Write the pvalue output.
num_samples_df = pd.DataFrame().from_records(list(num_samples), columns=header)
num_samples_df.to_csv(sample_file, encoding='utf-8')
# Write the sample output
sdf = pvalue_df_final.sort(columns=['tpval'])
sdf.to_csv(pval_file, encoding='utf-8')
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("-f", "--file", dest="filename", help="Input time series file")
parser.add_argument("-v", "--vfile", dest="vocab_file", help="Common Vocab file")
parser.add_argument("-p", "--pfile", dest="pval_file", help="Output pvalue file")
parser.add_argument("-n", "--nfile", dest="sample_file", help="Output sample file")
parser.add_argument("-c", "--col", dest="col", help="column to drop")
parser.add_argument("-d", "--dont_normalize", dest="dont_normalize", action='store_true', default=False, help="Dont normalize")
parser.add_argument("-t", "--threshold", dest="threshold", default=1.75, type=float, help="Threshold to use for mean shift model.")
parser.add_argument("-b", "--bootstrap", dest="B", default=1000, type=int, help="Number of bootstrapped samples to take(default:1000)")
parser.add_argument("-w", "--workers", dest="workers", default=1, type=int, help="Number of workers to use")
parser.add_argument("-l", "--log", dest="log", help="log verbosity level", default="INFO")
args = parser.parse_args()
if args.log == 'DEBUG':
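        # NOTE: `debug` is not defined or imported in this module, so this
        # branch raises a NameError unless a post-mortem hook named `debug`
        # is provided elsewhere.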
sys.excepthook = debug
numeric_level = getattr(logging, args.log.upper(), None)
logging.basicConfig(level=numeric_level, format=LOGFORMAT)
main(args)
| bsd-3-clause |
aetilley/scikit-learn | sklearn/linear_model/least_angle.py | 57 | 49338 | """
Least Angle Regression algorithm. See the documentation on the
Generalized Linear Model for a complete discussion.
"""
from __future__ import print_function
# Author: Fabian Pedregosa <[email protected]>
# Alexandre Gramfort <[email protected]>
# Gael Varoquaux
#
# License: BSD 3 clause
from math import log
import sys
import warnings
from distutils.version import LooseVersion
import numpy as np
from scipy import linalg, interpolate
from scipy.linalg.lapack import get_lapack_funcs
from .base import LinearModel
from ..base import RegressorMixin
from ..utils import arrayfuncs, as_float_array, check_X_y
from ..cross_validation import check_cv
from ..utils import ConvergenceWarning
from ..externals.joblib import Parallel, delayed
from ..externals.six.moves import xrange
import scipy
solve_triangular_args = {}
if LooseVersion(scipy.__version__) >= LooseVersion('0.12'):
solve_triangular_args = {'check_finite': False}
def lars_path(X, y, Xy=None, Gram=None, max_iter=500,
alpha_min=0, method='lar', copy_X=True,
eps=np.finfo(np.float).eps,
copy_Gram=True, verbose=0, return_path=True,
return_n_iter=False):
"""Compute Least Angle Regression or Lasso path using LARS algorithm [1]
The optimization objective for the case method='lasso' is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
in the case of method='lars', the objective function is only known in
the form of an implicit equation (see discussion in [1])
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
-----------
X : array, shape: (n_samples, n_features)
Input data.
y : array, shape: (n_samples)
Input targets.
max_iter : integer, optional (default=500)
Maximum number of iterations to perform, set to infinity for no limit.
Gram : None, 'auto', array, shape: (n_features, n_features), optional
Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram
matrix is precomputed from the given X, if there are more samples
than features.
alpha_min : float, optional (default=0)
Minimum correlation along the path. It corresponds to the
regularization parameter alpha parameter in the Lasso.
method : {'lar', 'lasso'}, optional (default='lar')
Specifies the returned model. Select ``'lar'`` for Least Angle
Regression, ``'lasso'`` for the Lasso.
eps : float, optional (default=``np.finfo(np.float).eps``)
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
copy_X : bool, optional (default=True)
If ``False``, ``X`` is overwritten.
copy_Gram : bool, optional (default=True)
If ``False``, ``Gram`` is overwritten.
verbose : int (default=0)
Controls output verbosity.
return_path : bool, optional (default=True)
If ``return_path==True`` returns the entire path, else returns only the
last point of the path.
return_n_iter : bool, optional (default=False)
Whether to return the number of iterations.
Returns
--------
alphas : array, shape: [n_alphas + 1]
Maximum of covariances (in absolute value) at each iteration.
``n_alphas`` is either ``max_iter``, ``n_features`` or the
number of nodes in the path with ``alpha >= alpha_min``, whichever
is smaller.
active : array, shape [n_alphas]
Indices of active variables at the end of the path.
coefs : array, shape (n_features, n_alphas + 1)
Coefficients along the path
n_iter : int
Number of iterations run. Returned only if return_n_iter is set
to True.
See also
--------
lasso_path
LassoLars
Lars
LassoLarsCV
LarsCV
sklearn.decomposition.sparse_encode
References
----------
.. [1] "Least Angle Regression", Effron et al.
http://www-stat.stanford.edu/~tibs/ftp/lars.pdf
.. [2] `Wikipedia entry on the Least-angle regression
<http://en.wikipedia.org/wiki/Least-angle_regression>`_
.. [3] `Wikipedia entry on the Lasso
<http://en.wikipedia.org/wiki/Lasso_(statistics)#Lasso_method>`_
"""
n_features = X.shape[1]
n_samples = y.size
max_features = min(max_iter, n_features)
if return_path:
coefs = np.zeros((max_features + 1, n_features))
alphas = np.zeros(max_features + 1)
else:
coef, prev_coef = np.zeros(n_features), np.zeros(n_features)
alpha, prev_alpha = np.array([0.]), np.array([0.]) # better ideas?
n_iter, n_active = 0, 0
active, indices = list(), np.arange(n_features)
# holds the sign of covariance
sign_active = np.empty(max_features, dtype=np.int8)
drop = False
# will hold the cholesky factorization. Only lower part is
# referenced.
# We are initializing this to "zeros" and not empty, because
# it is passed to scipy linalg functions and thus if it has NaNs,
    # even if they are in the upper part that is not used, we
# get errors raised.
# Once we support only scipy > 0.12 we can use check_finite=False and
# go back to "empty"
L = np.zeros((max_features, max_features), dtype=X.dtype)
swap, nrm2 = linalg.get_blas_funcs(('swap', 'nrm2'), (X,))
solve_cholesky, = get_lapack_funcs(('potrs',), (X,))
if Gram is None:
if copy_X:
# force copy. setting the array to be fortran-ordered
# speeds up the calculation of the (partial) Gram matrix
# and allows to easily swap columns
X = X.copy('F')
elif Gram == 'auto':
Gram = None
if X.shape[0] > X.shape[1]:
Gram = np.dot(X.T, X)
elif copy_Gram:
Gram = Gram.copy()
if Xy is None:
Cov = np.dot(X.T, y)
else:
Cov = Xy.copy()
if verbose:
if verbose > 1:
print("Step\t\tAdded\t\tDropped\t\tActive set size\t\tC")
else:
sys.stdout.write('.')
sys.stdout.flush()
tiny = np.finfo(np.float).tiny # to avoid division by 0 warning
tiny32 = np.finfo(np.float32).tiny # to avoid division by 0 warning
equality_tolerance = np.finfo(np.float32).eps
while True:
if Cov.size:
C_idx = np.argmax(np.abs(Cov))
C_ = Cov[C_idx]
C = np.fabs(C_)
else:
C = 0.
if return_path:
alpha = alphas[n_iter, np.newaxis]
coef = coefs[n_iter]
prev_alpha = alphas[n_iter - 1, np.newaxis]
prev_coef = coefs[n_iter - 1]
alpha[0] = C / n_samples
if alpha[0] <= alpha_min + equality_tolerance: # early stopping
if abs(alpha[0] - alpha_min) > equality_tolerance:
# interpolation factor 0 <= ss < 1
if n_iter > 0:
# In the first iteration, all alphas are zero, the formula
# below would make ss a NaN
ss = ((prev_alpha[0] - alpha_min) /
(prev_alpha[0] - alpha[0]))
coef[:] = prev_coef + ss * (coef - prev_coef)
alpha[0] = alpha_min
if return_path:
coefs[n_iter] = coef
break
if n_iter >= max_iter or n_active >= n_features:
break
if not drop:
##########################################################
# Append x_j to the Cholesky factorization of (Xa * Xa') #
# #
# ( L 0 ) #
# L -> ( ) , where L * w = Xa' x_j #
# ( w z ) and z = ||x_j|| #
# #
##########################################################
sign_active[n_active] = np.sign(C_)
m, n = n_active, C_idx + n_active
Cov[C_idx], Cov[0] = swap(Cov[C_idx], Cov[0])
indices[n], indices[m] = indices[m], indices[n]
Cov_not_shortened = Cov
Cov = Cov[1:] # remove Cov[0]
if Gram is None:
X.T[n], X.T[m] = swap(X.T[n], X.T[m])
c = nrm2(X.T[n_active]) ** 2
L[n_active, :n_active] = \
np.dot(X.T[n_active], X.T[:n_active].T)
else:
# swap does only work inplace if matrix is fortran
# contiguous ...
Gram[m], Gram[n] = swap(Gram[m], Gram[n])
Gram[:, m], Gram[:, n] = swap(Gram[:, m], Gram[:, n])
c = Gram[n_active, n_active]
L[n_active, :n_active] = Gram[n_active, :n_active]
# Update the cholesky decomposition for the Gram matrix
if n_active:
linalg.solve_triangular(L[:n_active, :n_active],
L[n_active, :n_active],
trans=0, lower=1,
overwrite_b=True,
**solve_triangular_args)
v = np.dot(L[n_active, :n_active], L[n_active, :n_active])
diag = max(np.sqrt(np.abs(c - v)), eps)
L[n_active, n_active] = diag
if diag < 1e-7:
# The system is becoming too ill-conditioned.
# We have degenerate vectors in our active set.
# We'll 'drop for good' the last regressor added.
# Note: this case is very rare. It is no longer triggered by the
# test suite. The `equality_tolerance` margin added in 0.16.0 to
# get early stopping to work consistently on all versions of
# Python including 32 bit Python under Windows seems to make it
# very difficult to trigger the 'drop for good' strategy.
warnings.warn('Regressors in active set degenerate. '
'Dropping a regressor, after %i iterations, '
'i.e. alpha=%.3e, '
'with an active set of %i regressors, and '
'the smallest cholesky pivot element being %.3e'
% (n_iter, alpha, n_active, diag),
ConvergenceWarning)
# XXX: need to figure a 'drop for good' way
Cov = Cov_not_shortened
Cov[0] = 0
Cov[C_idx], Cov[0] = swap(Cov[C_idx], Cov[0])
continue
active.append(indices[n_active])
n_active += 1
if verbose > 1:
print("%s\t\t%s\t\t%s\t\t%s\t\t%s" % (n_iter, active[-1], '',
n_active, C))
if method == 'lasso' and n_iter > 0 and prev_alpha[0] < alpha[0]:
# alpha is increasing. This is because the updates of Cov are
            # bringing in too much numerical error that is greater
            # than the remaining correlation with the
# regressors. Time to bail out
warnings.warn('Early stopping the lars path, as the residues '
'are small and the current value of alpha is no '
'longer well controlled. %i iterations, alpha=%.3e, '
'previous alpha=%.3e, with an active set of %i '
'regressors.'
% (n_iter, alpha, prev_alpha, n_active),
ConvergenceWarning)
break
# least squares solution
least_squares, info = solve_cholesky(L[:n_active, :n_active],
sign_active[:n_active],
lower=True)
if least_squares.size == 1 and least_squares == 0:
# This happens because sign_active[:n_active] = 0
least_squares[...] = 1
AA = 1.
else:
# is this really needed ?
AA = 1. / np.sqrt(np.sum(least_squares * sign_active[:n_active]))
if not np.isfinite(AA):
# L is too ill-conditioned
i = 0
L_ = L[:n_active, :n_active].copy()
while not np.isfinite(AA):
L_.flat[::n_active + 1] += (2 ** i) * eps
least_squares, info = solve_cholesky(
L_, sign_active[:n_active], lower=True)
tmp = max(np.sum(least_squares * sign_active[:n_active]),
eps)
AA = 1. / np.sqrt(tmp)
i += 1
least_squares *= AA
if Gram is None:
# equiangular direction of variables in the active set
eq_dir = np.dot(X.T[:n_active].T, least_squares)
            # correlation between each inactive variable and
            # the equiangular vector
corr_eq_dir = np.dot(X.T[n_active:], eq_dir)
else:
# if huge number of features, this takes 50% of time, I
# think could be avoided if we just update it using an
# orthogonal (QR) decomposition of X
corr_eq_dir = np.dot(Gram[:n_active, n_active:].T,
least_squares)
g1 = arrayfuncs.min_pos((C - Cov) / (AA - corr_eq_dir + tiny))
g2 = arrayfuncs.min_pos((C + Cov) / (AA + corr_eq_dir + tiny))
gamma_ = min(g1, g2, C / AA)
# TODO: better names for these variables: z
drop = False
z = -coef[active] / (least_squares + tiny32)
z_pos = arrayfuncs.min_pos(z)
if z_pos < gamma_:
# some coefficients have changed sign
idx = np.where(z == z_pos)[0][::-1]
# update the sign, important for LAR
sign_active[idx] = -sign_active[idx]
if method == 'lasso':
gamma_ = z_pos
drop = True
n_iter += 1
if return_path:
if n_iter >= coefs.shape[0]:
del coef, alpha, prev_alpha, prev_coef
# resize the coefs and alphas array
add_features = 2 * max(1, (max_features - n_active))
coefs = np.resize(coefs, (n_iter + add_features, n_features))
alphas = np.resize(alphas, n_iter + add_features)
coef = coefs[n_iter]
prev_coef = coefs[n_iter - 1]
alpha = alphas[n_iter, np.newaxis]
prev_alpha = alphas[n_iter - 1, np.newaxis]
else:
# mimic the effect of incrementing n_iter on the array references
prev_coef = coef
prev_alpha[0] = alpha[0]
coef = np.zeros_like(coef)
coef[active] = prev_coef[active] + gamma_ * least_squares
# update correlations
Cov -= gamma_ * corr_eq_dir
# See if any coefficient has changed sign
if drop and method == 'lasso':
# handle the case when idx is not length of 1
[arrayfuncs.cholesky_delete(L[:n_active, :n_active], ii) for ii in
idx]
n_active -= 1
m, n = idx, n_active
# handle the case when idx is not length of 1
drop_idx = [active.pop(ii) for ii in idx]
if Gram is None:
# propagate dropped variable
for ii in idx:
for i in range(ii, n_active):
X.T[i], X.T[i + 1] = swap(X.T[i], X.T[i + 1])
# yeah this is stupid
indices[i], indices[i + 1] = indices[i + 1], indices[i]
# TODO: this could be updated
residual = y - np.dot(X[:, :n_active], coef[active])
temp = np.dot(X.T[n_active], residual)
Cov = np.r_[temp, Cov]
else:
for ii in idx:
for i in range(ii, n_active):
indices[i], indices[i + 1] = indices[i + 1], indices[i]
Gram[i], Gram[i + 1] = swap(Gram[i], Gram[i + 1])
Gram[:, i], Gram[:, i + 1] = swap(Gram[:, i],
Gram[:, i + 1])
# Cov_n = Cov_j + x_j * X + increment(betas) TODO:
# will this still work with multiple drops ?
# recompute covariance. Probably could be done better
# wrong as Xy is not swapped with the rest of variables
# TODO: this could be updated
residual = y - np.dot(X, coef)
temp = np.dot(X.T[drop_idx], residual)
Cov = np.r_[temp, Cov]
sign_active = np.delete(sign_active, idx)
sign_active = np.append(sign_active, 0.) # just to maintain size
if verbose > 1:
print("%s\t\t%s\t\t%s\t\t%s\t\t%s" % (n_iter, '', drop_idx,
n_active, abs(temp)))
if return_path:
# resize coefs in case of early stop
alphas = alphas[:n_iter + 1]
coefs = coefs[:n_iter + 1]
if return_n_iter:
return alphas, active, coefs.T, n_iter
else:
return alphas, active, coefs.T
else:
if return_n_iter:
return alpha, active, coef, n_iter
else:
return alpha, active, coef
###############################################################################
# Estimator classes
class Lars(LinearModel, RegressorMixin):
"""Least Angle Regression model a.k.a. LAR
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
n_nonzero_coefs : int, optional
Target number of non-zero coefficients. Use ``np.inf`` for no limit.
fit_intercept : boolean
Whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default False
If ``True``, the regressors X will be normalized before regression.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
copy_X : boolean, optional, default True
If ``True``, X will be copied; else, it may be overwritten.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
fit_path : boolean
If True the full path is stored in the ``coef_path_`` attribute.
If you compute the solution for a large problem or many targets,
setting ``fit_path`` to ``False`` will lead to a speedup, especially
with a small alpha.
Attributes
----------
alphas_ : array, shape (n_alphas + 1,) | list of n_targets such arrays
Maximum of covariances (in absolute value) at each iteration. \
``n_alphas`` is either ``n_nonzero_coefs`` or ``n_features``, \
whichever is smaller.
active_ : list, length = n_alphas | list of n_targets such lists
Indices of active variables at the end of the path.
coef_path_ : array, shape (n_features, n_alphas + 1) \
| list of n_targets such arrays
The varying values of the coefficients along the path. It is not
present if the ``fit_path`` parameter is ``False``.
coef_ : array, shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
intercept_ : float | array, shape (n_targets,)
Independent term in decision function.
n_iter_ : array-like or int
The number of iterations taken by lars_path to find the
grid of alphas for each target.
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.Lars(n_nonzero_coefs=1)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
Lars(copy_X=True, eps=..., fit_intercept=True, fit_path=True,
n_nonzero_coefs=1, normalize=True, precompute='auto', verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -1.11...]
See also
--------
lars_path, LarsCV
sklearn.decomposition.sparse_encode
"""
def __init__(self, fit_intercept=True, verbose=False, normalize=True,
precompute='auto', n_nonzero_coefs=500,
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True):
self.fit_intercept = fit_intercept
self.verbose = verbose
self.normalize = normalize
self.method = 'lar'
self.precompute = precompute
self.n_nonzero_coefs = n_nonzero_coefs
self.eps = eps
self.copy_X = copy_X
self.fit_path = fit_path
def _get_gram(self):
# precompute if n_samples > n_features
precompute = self.precompute
if hasattr(precompute, '__array__'):
Gram = precompute
elif precompute == 'auto':
Gram = 'auto'
else:
Gram = None
return Gram
def fit(self, X, y, Xy=None):
"""Fit the model using X, y as training data.
parameters
----------
X : array-like, shape (n_samples, n_features)
Training data.
y : array-like, shape (n_samples,) or (n_samples, n_targets)
Target values.
Xy : array-like, shape (n_samples,) or (n_samples, n_targets), \
optional
Xy = np.dot(X.T, y) that can be precomputed. It is useful
only when the Gram matrix is precomputed.
returns
-------
self : object
returns an instance of self.
"""
X, y = check_X_y(X, y, y_numeric=True, multi_output=True)
n_features = X.shape[1]
X, y, X_mean, y_mean, X_std = self._center_data(X, y,
self.fit_intercept,
self.normalize,
self.copy_X)
if y.ndim == 1:
y = y[:, np.newaxis]
n_targets = y.shape[1]
alpha = getattr(self, 'alpha', 0.)
if hasattr(self, 'n_nonzero_coefs'):
alpha = 0. # n_nonzero_coefs parametrization takes priority
max_iter = self.n_nonzero_coefs
else:
max_iter = self.max_iter
precompute = self.precompute
if not hasattr(precompute, '__array__') and (
precompute is True or
(precompute == 'auto' and X.shape[0] > X.shape[1]) or
(precompute == 'auto' and y.shape[1] > 1)):
Gram = np.dot(X.T, X)
else:
Gram = self._get_gram()
self.alphas_ = []
self.n_iter_ = []
if self.fit_path:
self.coef_ = []
self.active_ = []
self.coef_path_ = []
for k in xrange(n_targets):
this_Xy = None if Xy is None else Xy[:, k]
alphas, active, coef_path, n_iter_ = lars_path(
X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X,
copy_Gram=True, alpha_min=alpha, method=self.method,
verbose=max(0, self.verbose - 1), max_iter=max_iter,
eps=self.eps, return_path=True,
return_n_iter=True)
self.alphas_.append(alphas)
self.active_.append(active)
self.n_iter_.append(n_iter_)
self.coef_path_.append(coef_path)
self.coef_.append(coef_path[:, -1])
if n_targets == 1:
self.alphas_, self.active_, self.coef_path_, self.coef_ = [
a[0] for a in (self.alphas_, self.active_, self.coef_path_,
self.coef_)]
self.n_iter_ = self.n_iter_[0]
else:
self.coef_ = np.empty((n_targets, n_features))
for k in xrange(n_targets):
this_Xy = None if Xy is None else Xy[:, k]
alphas, _, self.coef_[k], n_iter_ = lars_path(
X, y[:, k], Gram=Gram, Xy=this_Xy, copy_X=self.copy_X,
copy_Gram=True, alpha_min=alpha, method=self.method,
verbose=max(0, self.verbose - 1), max_iter=max_iter,
eps=self.eps, return_path=False, return_n_iter=True)
self.alphas_.append(alphas)
self.n_iter_.append(n_iter_)
if n_targets == 1:
self.alphas_ = self.alphas_[0]
self.n_iter_ = self.n_iter_[0]
self._set_intercept(X_mean, y_mean, X_std)
return self
class LassoLars(Lars):
"""Lasso model fit with Least Angle Regression a.k.a. Lars
It is a Linear Model trained with an L1 prior as regularizer.
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
alpha : float
Constant that multiplies the penalty term. Defaults to 1.0.
``alpha = 0`` is equivalent to an ordinary least square, solved
by :class:`LinearRegression`. For numerical reasons, using
``alpha = 0`` with the LassoLars object is not advised and you
should prefer the LinearRegression object.
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter : integer, optional
Maximum number of iterations to perform.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
fit_path : boolean
If ``True`` the full path is stored in the ``coef_path_`` attribute.
If you compute the solution for a large problem or many targets,
setting ``fit_path`` to ``False`` will lead to a speedup, especially
with a small alpha.
Attributes
----------
alphas_ : array, shape (n_alphas + 1,) | list of n_targets such arrays
Maximum of covariances (in absolute value) at each iteration. \
``n_alphas`` is either ``max_iter``, ``n_features``, or the number of \
nodes in the path with correlation greater than ``alpha``, whichever \
is smaller.
active_ : list, length = n_alphas | list of n_targets such lists
Indices of active variables at the end of the path.
coef_path_ : array, shape (n_features, n_alphas + 1) or list
If a list is passed it's expected to be one of n_targets such arrays.
The varying values of the coefficients along the path. It is not
present if the ``fit_path`` parameter is ``False``.
coef_ : array, shape (n_features,) or (n_targets, n_features)
Parameter vector (w in the formulation formula).
intercept_ : float | array, shape (n_targets,)
Independent term in decision function.
n_iter_ : array-like or int.
The number of iterations taken by lars_path to find the
grid of alphas for each target.
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.LassoLars(alpha=0.01)
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1, 0, -1])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
LassoLars(alpha=0.01, copy_X=True, eps=..., fit_intercept=True,
fit_path=True, max_iter=500, normalize=True, precompute='auto',
verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -0.963257...]
See also
--------
lars_path
lasso_path
Lasso
LassoCV
LassoLarsCV
sklearn.decomposition.sparse_encode
"""
def __init__(self, alpha=1.0, fit_intercept=True, verbose=False,
normalize=True, precompute='auto', max_iter=500,
eps=np.finfo(np.float).eps, copy_X=True, fit_path=True):
self.alpha = alpha
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.method = 'lasso'
self.precompute = precompute
self.copy_X = copy_X
self.eps = eps
self.fit_path = fit_path
###############################################################################
# Cross-validated estimator classes
def _check_copy_and_writeable(array, copy=False):
if copy or not array.flags.writeable:
return array.copy()
return array
def _lars_path_residues(X_train, y_train, X_test, y_test, Gram=None,
copy=True, method='lars', verbose=False,
fit_intercept=True, normalize=True, max_iter=500,
eps=np.finfo(np.float).eps):
"""Compute the residues on left-out data for a full LARS path
Parameters
-----------
X_train : array, shape (n_samples, n_features)
The data to fit the LARS on
y_train : array, shape (n_samples)
The target variable to fit LARS on
X_test : array, shape (n_samples, n_features)
The data to compute the residues on
y_test : array, shape (n_samples)
The target variable to compute the residues on
Gram : None, 'auto', array, shape: (n_features, n_features), optional
Precomputed Gram matrix (X' * X), if ``'auto'``, the Gram
matrix is precomputed from the given X, if there are more samples
than features
copy : boolean, optional
Whether X_train, X_test, y_train and y_test should be copied;
if False, they may be overwritten.
method : 'lar' | 'lasso'
Specifies the returned model. Select ``'lar'`` for Least Angle
Regression, ``'lasso'`` for the Lasso.
verbose : integer, optional
Sets the amount of verbosity
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
max_iter : integer, optional
Maximum number of iterations to perform.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
Returns
--------
alphas : array, shape (n_alphas,)
Maximum of covariances (in absolute value) at each iteration.
``n_alphas`` is either ``max_iter`` or ``n_features``, whichever
is smaller.
active : list
Indices of active variables at the end of the path.
coefs : array, shape (n_features, n_alphas)
Coefficients along the path
residues : array, shape (n_alphas, n_samples)
Residues of the prediction on the test data
"""
X_train = _check_copy_and_writeable(X_train, copy)
y_train = _check_copy_and_writeable(y_train, copy)
X_test = _check_copy_and_writeable(X_test, copy)
y_test = _check_copy_and_writeable(y_test, copy)
if fit_intercept:
X_mean = X_train.mean(axis=0)
X_train -= X_mean
X_test -= X_mean
y_mean = y_train.mean(axis=0)
y_train = as_float_array(y_train, copy=False)
y_train -= y_mean
y_test = as_float_array(y_test, copy=False)
y_test -= y_mean
if normalize:
norms = np.sqrt(np.sum(X_train ** 2, axis=0))
nonzeros = np.flatnonzero(norms)
X_train[:, nonzeros] /= norms[nonzeros]
alphas, active, coefs = lars_path(
X_train, y_train, Gram=Gram, copy_X=False, copy_Gram=False,
method=method, verbose=max(0, verbose - 1), max_iter=max_iter, eps=eps)
if normalize:
coefs[nonzeros] /= norms[nonzeros][:, np.newaxis]
residues = np.dot(X_test, coefs) - y_test[:, np.newaxis]
return alphas, active, coefs, residues.T
class LarsCV(Lars):
"""Cross-validated Least Angle Regression model
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If ``True``, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
    max_iter : integer, optional
Maximum number of iterations to perform.
cv : cross-validation generator, optional
see :mod:`sklearn.cross_validation`. If ``None`` is passed, default to
a 5-fold strategy
max_n_alphas : integer, optional
The maximum number of points on the path used to compute the
residuals in the cross-validation
n_jobs : integer, optional
Number of CPUs to use during the cross validation. If ``-1``, use
all the CPUs
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
Attributes
----------
coef_ : array, shape (n_features,)
parameter vector (w in the formulation formula)
intercept_ : float
independent term in decision function
coef_path_ : array, shape (n_features, n_alphas)
the varying values of the coefficients along the path
alpha_ : float
the estimated regularization parameter alpha
alphas_ : array, shape (n_alphas,)
the different values of alpha along the path
cv_alphas_ : array, shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
cv_mse_path_ : array, shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path
(alpha values given by ``cv_alphas``)
n_iter_ : array-like or int
the number of iterations run by Lars with the optimal alpha.
See also
--------
lars_path, LassoLars, LassoLarsCV
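    Examples
    --------
    A minimal illustrative fit (attribute values are data dependent and not
    shown):
    >>> from sklearn import datasets
    >>> from sklearn.linear_model import LarsCV
    >>> diabetes = datasets.load_diabetes()
    >>> reg = LarsCV().fit(diabetes.data, diabetes.target)
    >>> alpha = reg.alpha_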
"""
method = 'lar'
def __init__(self, fit_intercept=True, verbose=False, max_iter=500,
normalize=True, precompute='auto', cv=None,
max_n_alphas=1000, n_jobs=1, eps=np.finfo(np.float).eps,
copy_X=True):
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.precompute = precompute
self.copy_X = copy_X
self.cv = cv
self.max_n_alphas = max_n_alphas
self.n_jobs = n_jobs
self.eps = eps
def fit(self, X, y):
"""Fit the model using X, y as training data.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Training data.
y : array-like, shape (n_samples,)
Target values.
Returns
-------
self : object
returns an instance of self.
"""
self.fit_path = True
X, y = check_X_y(X, y, y_numeric=True)
# init cross-validation generator
cv = check_cv(self.cv, X, y, classifier=False)
Gram = 'auto' if self.precompute else None
cv_paths = Parallel(n_jobs=self.n_jobs, verbose=self.verbose)(
delayed(_lars_path_residues)(
X[train], y[train], X[test], y[test], Gram=Gram, copy=False,
method=self.method, verbose=max(0, self.verbose - 1),
normalize=self.normalize, fit_intercept=self.fit_intercept,
max_iter=self.max_iter, eps=self.eps)
for train, test in cv)
all_alphas = np.concatenate(list(zip(*cv_paths))[0])
# Unique also sorts
all_alphas = np.unique(all_alphas)
# Take at most max_n_alphas values
stride = int(max(1, int(len(all_alphas) / float(self.max_n_alphas))))
all_alphas = all_alphas[::stride]
mse_path = np.empty((len(all_alphas), len(cv_paths)))
for index, (alphas, active, coefs, residues) in enumerate(cv_paths):
alphas = alphas[::-1]
residues = residues[::-1]
if alphas[0] != 0:
alphas = np.r_[0, alphas]
residues = np.r_[residues[0, np.newaxis], residues]
if alphas[-1] != all_alphas[-1]:
alphas = np.r_[alphas, all_alphas[-1]]
residues = np.r_[residues, residues[-1, np.newaxis]]
this_residues = interpolate.interp1d(alphas,
residues,
axis=0)(all_alphas)
this_residues **= 2
mse_path[:, index] = np.mean(this_residues, axis=-1)
mask = np.all(np.isfinite(mse_path), axis=-1)
all_alphas = all_alphas[mask]
mse_path = mse_path[mask]
# Select the alpha that minimizes left-out error
i_best_alpha = np.argmin(mse_path.mean(axis=-1))
best_alpha = all_alphas[i_best_alpha]
# Store our parameters
self.alpha_ = best_alpha
self.cv_alphas_ = all_alphas
self.cv_mse_path_ = mse_path
# Now compute the full model
# it will call a lasso internally when self if LassoLarsCV
# as self.method == 'lasso'
Lars.fit(self, X, y)
return self
@property
def alpha(self):
# impedance matching for the above Lars.fit (should not be documented)
return self.alpha_
class LassoLarsCV(LarsCV):
"""Cross-validated Lasso, using the LARS algorithm
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter : integer, optional
Maximum number of iterations to perform.
cv : cross-validation generator, optional
see sklearn.cross_validation module. If None is passed, default to
a 5-fold strategy
max_n_alphas : integer, optional
The maximum number of points on the path used to compute the
residuals in the cross-validation
n_jobs : integer, optional
Number of CPUs to use during the cross validation. If ``-1``, use
all the CPUs
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
Attributes
----------
coef_ : array, shape (n_features,)
parameter vector (w in the formulation formula)
intercept_ : float
independent term in decision function.
coef_path_ : array, shape (n_features, n_alphas)
the varying values of the coefficients along the path
alpha_ : float
the estimated regularization parameter alpha
alphas_ : array, shape (n_alphas,)
the different values of alpha along the path
cv_alphas_ : array, shape (n_cv_alphas,)
all the values of alpha along the path for the different folds
cv_mse_path_ : array, shape (n_folds, n_cv_alphas)
the mean square error on left-out for each fold along the path
(alpha values given by ``cv_alphas``)
n_iter_ : array-like or int
the number of iterations run by Lars with the optimal alpha.
Notes
-----
The object solves the same problem as the LassoCV object. However,
    unlike the LassoCV, it finds the relevant alpha values by itself.
In general, because of this property, it will be more stable.
However, it is more fragile to heavily multicollinear datasets.
It is more efficient than the LassoCV if only a small number of
features are selected compared to the total number, for instance if
there are very few samples compared to the number of features.
See also
--------
lars_path, LassoLars, LarsCV, LassoCV
"""
method = 'lasso'
class LassoLarsIC(LassoLars):
"""Lasso model fit with Lars using BIC or AIC for model selection
The optimization objective for Lasso is::
(1 / (2 * n_samples)) * ||y - Xw||^2_2 + alpha * ||w||_1
AIC is the Akaike information criterion and BIC is the Bayes
Information criterion. Such criteria are useful to select the value
of the regularization parameter by making a trade-off between the
goodness of fit and the complexity of the model. A good model should
explain well the data while being simple.
Read more in the :ref:`User Guide <least_angle_regression>`.
Parameters
----------
criterion : 'bic' | 'aic'
The type of criterion to use.
fit_intercept : boolean
whether to calculate the intercept for this model. If set
to false, no intercept will be used in calculations
(e.g. data is expected to be already centered).
verbose : boolean or integer, optional
Sets the verbosity amount
normalize : boolean, optional, default False
If True, the regressors X will be normalized before regression.
copy_X : boolean, optional, default True
If True, X will be copied; else, it may be overwritten.
precompute : True | False | 'auto' | array-like
Whether to use a precomputed Gram matrix to speed up
calculations. If set to ``'auto'`` let us decide. The Gram
matrix can also be passed as argument.
max_iter : integer, optional
Maximum number of iterations to perform. Can be used for
early stopping.
eps : float, optional
The machine-precision regularization in the computation of the
Cholesky diagonal factors. Increase this for very ill-conditioned
systems. Unlike the ``tol`` parameter in some iterative
optimization-based algorithms, this parameter does not control
the tolerance of the optimization.
Attributes
----------
coef_ : array, shape (n_features,)
parameter vector (w in the formulation formula)
intercept_ : float
independent term in decision function.
alpha_ : float
the alpha parameter chosen by the information criterion
n_iter_ : int
number of iterations run by lars_path to find the grid of
alphas.
criterion_ : array, shape (n_alphas,)
The value of the information criteria ('aic', 'bic') across all
alphas. The alpha which has the smallest information criteria
is chosen.
Examples
--------
>>> from sklearn import linear_model
>>> clf = linear_model.LassoLarsIC(criterion='bic')
>>> clf.fit([[-1, 1], [0, 0], [1, 1]], [-1.1111, 0, -1.1111])
... # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
LassoLarsIC(copy_X=True, criterion='bic', eps=..., fit_intercept=True,
max_iter=500, normalize=True, precompute='auto',
verbose=False)
>>> print(clf.coef_) # doctest: +ELLIPSIS, +NORMALIZE_WHITESPACE
[ 0. -1.11...]
Notes
-----
The estimation of the number of degrees of freedom is given by:
"On the degrees of freedom of the lasso"
Hui Zou, Trevor Hastie, and Robert Tibshirani
Ann. Statist. Volume 35, Number 5 (2007), 2173-2192.
http://en.wikipedia.org/wiki/Akaike_information_criterion
http://en.wikipedia.org/wiki/Bayesian_information_criterion
See also
--------
lars_path, LassoLars, LassoLarsCV
"""
def __init__(self, criterion='aic', fit_intercept=True, verbose=False,
normalize=True, precompute='auto', max_iter=500,
eps=np.finfo(np.float).eps, copy_X=True):
self.criterion = criterion
self.fit_intercept = fit_intercept
self.max_iter = max_iter
self.verbose = verbose
self.normalize = normalize
self.copy_X = copy_X
self.precompute = precompute
self.eps = eps
def fit(self, X, y, copy_X=True):
"""Fit the model using X, y as training data.
Parameters
----------
X : array-like, shape (n_samples, n_features)
training data.
y : array-like, shape (n_samples,)
target values.
copy_X : boolean, optional, default True
If ``True``, X will be copied; else, it may be overwritten.
Returns
-------
self : object
returns an instance of self.
"""
self.fit_path = True
X, y = check_X_y(X, y, y_numeric=True)
X, y, Xmean, ymean, Xstd = LinearModel._center_data(
X, y, self.fit_intercept, self.normalize, self.copy_X)
max_iter = self.max_iter
Gram = self._get_gram()
alphas_, active_, coef_path_, self.n_iter_ = lars_path(
X, y, Gram=Gram, copy_X=copy_X, copy_Gram=True, alpha_min=0.0,
method='lasso', verbose=self.verbose, max_iter=max_iter,
eps=self.eps, return_n_iter=True)
n_samples = X.shape[0]
if self.criterion == 'aic':
K = 2 # AIC
elif self.criterion == 'bic':
K = log(n_samples) # BIC
else:
raise ValueError('criterion should be either bic or aic')
R = y[:, np.newaxis] - np.dot(X, coef_path_) # residuals
mean_squared_error = np.mean(R ** 2, axis=0)
df = np.zeros(coef_path_.shape[1], dtype=np.int) # Degrees of freedom
for k, coef in enumerate(coef_path_.T):
mask = np.abs(coef) > np.finfo(coef.dtype).eps
if not np.any(mask):
continue
# get the number of degrees of freedom equal to:
# Xc = X[:, mask]
            # Trace(Xc * inv(Xc.T * Xc) * Xc.T), i.e. the number of non-zero coefs
df[k] = np.sum(mask)
self.alphas_ = alphas_
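        # Information criterion evaluated at every alpha on the path:
        # n_samples * log(MSE_k) + K * df_k, with K = 2 for AIC and
        # K = log(n_samples) for BIC; the alpha minimizing it is selected below.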
with np.errstate(divide='ignore'):
self.criterion_ = n_samples * np.log(mean_squared_error) + K * df
n_best = np.argmin(self.criterion_)
self.alpha_ = alphas_[n_best]
self.coef_ = coef_path_[:, n_best]
self._set_intercept(Xmean, ymean, Xstd)
return self
| bsd-3-clause |
calebjordan/PyQLab | analysis/SSRO.py | 4 | 6234 | import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import loadmat
from sklearn import decomposition, preprocessing, cross_validation
from sklearn import svm, grid_search
import pywt
import h5py
import matplotlib.pyplot as plt
def create_fake_data(SNR, dt, maxTime, numShots, T1=1.0):
"""
Helper function to create fake training data.
"""
#Create a random set of decay times
    decayTimes = np.random.exponential(scale=T1, size=numShots/2)
#Create the noiseless decays
timePts = np.arange(dt, maxTime, dt)
fakeData = np.zeros((numShots, timePts.size))
#Put the ground state runs down to -1
fakeData[::2, :] = -1.0
#Put the excited state ones to one before they decay
for ct, decayTime in enumerate(decayTimes):
fakeData[2*ct-1] = 2*(decayTime>timePts)-1
#Now add Gaussian noise
fakeData += np.random.normal(scale=1.0/np.sqrt(dt*SNR), size=fakeData.shape)
return fakeData
def extract_meas_data(fileName, numAvgs):
"""
    Helper function to load, filter and extract pulsed measurement data from single-shot records.
"""
#Load the matlab data
rawData = np.mean(loadmat(fileName)['demodSignal'].reshape((3200, 8000/numAvgs, numAvgs), order='F'), axis=2)
#Decimate by a factor of 8 twice
b,a = butter(5, 0.5/8)
    # The averaged records loaded above (`rawData`) are assumed to be the
    # intended input to the decimation filter here.
    filteredData = lfilter(b, a, rawData, axis=0)[::8, :]
filteredData = lfilter(b,a, filteredData, axis=0)[::8, :]
#Extract the pulse part
pulseData = filteredData[10:40,:]
#Pull out real and imaginary components
return np.hstack((pulseData.real.T, pulseData.imag.T))
def fidelity_est(testSignals):
"""
Estimate the optimal fidelity by estimating the probability distributions.
"""
rangeMin = np.min(testSignals)
rangeMax = np.max(testSignals)
groundProb = np.histogram(testSignals[::2], bins=100, range=(rangeMin, rangeMax), density=True)[0]
excitedProb, binEdges = np.histogram(testSignals[1::2], bins=100, range=(rangeMin, rangeMax), density=True)
return 0.5*(binEdges[1]-binEdges[0])*np.sum(np.abs(groundProb-excitedProb))
def test_fixed(SNRs):
"""
Fixed (infinite T1) qubit.
"""
fidelities = []
numShots = 10000
dt = 1e-3
for SNR in SNRs:
fakeData = create_fake_data(SNR, dt, 1, numShots, T1=1e9)
signal = dt*np.sum(fakeData, axis=1)
fidelities.append(fidelity_est(signal))
return fidelities
def test_boxcar(SNRs, intTimes):
"""
Simple box-car integration with finite T1.
"""
fidelities = []
numShots = 10000
dt = 1e-3
trueStates = np.tile([False, True], numShots/2)
for SNR, intTime in zip(SNRs, intTimes):
fakeData = create_fake_data(SNR, dt, intTime, numShots)
signal = dt*np.sum(fakeData, axis=1)
fidelities.append(fidelity_est(signal))
return fidelities
def test_nn(SNR):
pass
def load_exp_data(fileName):
"""
Helper function to load data dumped from matlab
"""
f = h5py.File(fileName, 'r')
gData = np.fromstring(f['groundData'].value, dtype=np.complex).reshape(f['groundData'].shape)
eData = np.fromstring(f['excitedData'].value, dtype=np.complex).reshape(f['excitedData'].shape)
return gData, eData
def wavelet_transform(measRecords, wavelet):
"""
    Take an array of measurement records, wavelet-transform each one, and return the most significant components.
"""
out = []
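    # Keep only the coarsest components of the 3-level decomposition (the
    # approximation cA3 and detail cD3 coefficients); the finer details cD2
    # and cD1 are discarded.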
for record in measRecords:
cA3, cD3, cD2, cD1 = pywt.wavedec(record, wavelet, level=3)
out.append(np.hstack((cA3, cD3)))
return np.array(out)
def credible_interval(outcomes, c=0.95):
"""
Calculate the credible interval for a fidelity estimate.
"""
from scipy.special import betaincinv
N = outcomes.size
S = np.count_nonzero(outcomes)
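    # With a uniform Beta(1, 1) prior, the posterior over the success
    # probability after S successes in N trials is Beta(S + 1, N - S + 1);
    # betaincinv returns its lower and upper quantiles for credible mass c.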
xlo = betaincinv(S+1,N-S+1,(1-c)/2.)
xup = betaincinv(S+1,N-S+1,(1+c)/2.)
return xlo, xup
# Exploratory sweep over integration lengths; it references names
# (`testSignals`, `weights`, `gUnWound`, `eUnWound`) that are not defined in
# this module, so it is left commented out to keep the module importable.
# for ct in range(90, 120):
#     testSignals[::2] = np.sum((weights * gUnWound.real)[:, :ct], axis=1)
#     testSignals[1::2] = np.sum((weights * eUnWound.real)[:, :ct], axis=1)
#     print(fidelity_est(testSignals))
if __name__ == '__main__':
pass
# SNR = 1e4
# fakeData = create_fake_data(SNR, 1e-3, 1, 4000)
# # trainData = pca.transform(fakeData)
# # validateData = pca.transform(create_fake_dpata(SNR, 1e-2, 1, 2000))
# trueStates = np.tile([0,1], 2000).flatten()
# # trueStates = np.hstack((np.zeros(1000), np.ones(1000)))
# # testData = wavelet_transform(fakeData, 'db4')
# testData = pca.transform(fakeData)
# # scaler = preprocessing.Scaler().fit(testData)
# # testData = scaler.transform(testData)
# # fakeData = create_fake_data(SNR, 1e-2, 1, 5000)
# validateData = scaler.transform(validateData)
gData, eData = load_exp_data('/home/cryan/Desktop/SSData.mat')
# #Use PCA to extract fewer, more useful features
# allData = np.vstack((np.hstack((gData.real, gData.imag)), np.hstack((eData.real, eData.imag))))
# pca = decomposition.PCA()
# pca.n_components = 20
# reducedData = pca.fit_transform(allData)
# #Assing the assumed states
# states = np.repeat([0,1], 10000)
# X_train, X_test, y_train, y_test = cross_validation.train_test_split(reducedData, states, test_size=0.2, random_state=0)
# # searchParams = {'gamma':(1.0/100)*np.logspace(-3, 0, 10), 'nu':np.arange(0.01, 0.2, 0.02)}
# # clf = grid_search.GridSearchCV(svm.NuSVC(cache_size=2000), searchParams, n_jobs=2)
# searchParams = {'C':np.linspace(0.1,4,20)}
# clf = grid_search.GridSearchCV(svm.SVC(cache_size=2000), searchParams)
# # clf = svm.SVC()
# clf.fit(X_train, y_train)
# print clf.score(X_test, y_test)
# gridScores = np.reshape([x[1] for x in clf.grid_scores_], (clf.param_grid['nu'].size, clf.param_grid['gamma'].size))
# x_min, x_max = reducedData[:, 0].min() - 0.1, reducedData[:, 0].max() + 0.1
# y_min, y_max = reducedData[:, 1].min() - 0.1, reducedData[:, 1].max() + 0.1
# xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.005), np.arange(y_min, y_max, 0.005))
# Z = clf.predict(np.c_[xx.ravel(), yy.ravel(), np.zeros(xx.size), np.zeros(xx.size)])
# Z = Z.reshape(xx.shape)
# plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
# plt.scatter(reducedData[:10000,0], reducedData[:10000,1], s=1, c='b', edgecolors='none')
# plt.scatter(reducedData[10000:,0], reducedData[10000:,1], s=1, c='r', edgecolors='none')
# plt.xlim((x_min, x_max))
# plt.ylim((y_min, y_max))
# plt.xlabel('Principal Component 1', fontsize=16)
# plt.ylabel('Principal Component 2', fontsize=16)
# plt.show()
| apache-2.0 |
asadziach/tensorflow | tensorflow/contrib/learn/python/learn/estimators/estimator.py | 7 | 54562 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Base Estimator class."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import abc
import copy
import inspect
import os
import tempfile
import numpy as np
import six
from tensorflow.contrib import framework as contrib_framework
from tensorflow.contrib import layers
from tensorflow.contrib import metrics as metrics_lib
from tensorflow.contrib.framework import deprecated
from tensorflow.contrib.framework import deprecated_args
from tensorflow.contrib.framework import list_variables
from tensorflow.contrib.framework import load_variable
from tensorflow.contrib.framework.python.ops import variables as contrib_variables
from tensorflow.contrib.learn.python.learn import evaluable
from tensorflow.contrib.learn.python.learn import metric_spec
from tensorflow.contrib.learn.python.learn import monitors as monitor_lib
from tensorflow.contrib.learn.python.learn import trainable
from tensorflow.contrib.learn.python.learn.estimators import _sklearn as sklearn
from tensorflow.contrib.learn.python.learn.estimators import metric_key
from tensorflow.contrib.learn.python.learn.estimators import model_fn as model_fn_lib
from tensorflow.contrib.learn.python.learn.estimators import run_config
from tensorflow.contrib.learn.python.learn.estimators import tensor_signature
from tensorflow.contrib.learn.python.learn.estimators._sklearn import NotFittedError
from tensorflow.contrib.learn.python.learn.learn_io import data_feeder
from tensorflow.contrib.learn.python.learn.utils import export
from tensorflow.contrib.learn.python.learn.utils import saved_model_export_utils
from tensorflow.contrib.training.python.training import evaluation
from tensorflow.core.framework import summary_pb2
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.client import session as tf_session
from tensorflow.python.framework import ops
from tensorflow.python.framework import random_seed
from tensorflow.python.framework import sparse_tensor
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import data_flow_ops
from tensorflow.python.ops import resources
from tensorflow.python.ops import variables
from tensorflow.python.platform import gfile
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.training import basic_session_run_hooks
from tensorflow.python.training import device_setter
from tensorflow.python.training import monitored_session
from tensorflow.python.training import saver
from tensorflow.python.training import summary_io
from tensorflow.python.util import compat
AS_ITERABLE_DATE = '2016-09-15'
AS_ITERABLE_INSTRUCTIONS = (
'The default behavior of predict() is changing. The default value for\n'
'as_iterable will change to True, and then the flag will be removed\n'
'altogether. The behavior of this flag is described below.')
SCIKIT_DECOUPLE_DATE = '2016-12-01'
SCIKIT_DECOUPLE_INSTRUCTIONS = (
'Estimator is decoupled from Scikit Learn interface by moving into\n'
'separate class SKCompat. Arguments x, y and batch_size are only\n'
'available in the SKCompat class, Estimator will only accept input_fn.\n'
'Example conversion:\n'
' est = Estimator(...) -> est = SKCompat(Estimator(...))')
def _verify_input_args(x, y, input_fn, feed_fn, batch_size):
"""Verifies validity of co-existance of input arguments."""
if input_fn is None:
if x is None:
raise ValueError('Either x or input_fn must be provided.')
if contrib_framework.is_tensor(x) or (y is not None and
contrib_framework.is_tensor(y)):
raise ValueError('Inputs cannot be tensors. Please provide input_fn.')
if feed_fn is not None:
raise ValueError('Can not provide both feed_fn and x or y.')
else:
if (x is not None) or (y is not None):
raise ValueError('Can not provide both input_fn and x or y.')
if batch_size is not None:
raise ValueError('Can not provide both input_fn and batch_size.')
def _get_input_fn(x, y, input_fn, feed_fn, batch_size, shuffle=False, epochs=1):
"""Make inputs into input and feed functions.
Args:
x: Numpy, Pandas or Dask matrix or iterable.
y: Numpy, Pandas or Dask matrix or iterable.
input_fn: Pre-defined input function for training data.
feed_fn: Pre-defined data feeder function.
batch_size: Size to split data into parts. Must be >= 1.
shuffle: Whether to shuffle the inputs.
epochs: Number of epochs to run.
Returns:
Data input and feeder function based on training data.
Raises:
ValueError: Only one of `(x & y)` or `input_fn` must be provided.
"""
_verify_input_args(x, y, input_fn, feed_fn, batch_size)
if input_fn is not None:
return input_fn, feed_fn
df = data_feeder.setup_train_data_feeder(
x,
y,
n_classes=None,
batch_size=batch_size,
shuffle=shuffle,
epochs=epochs)
return df.input_builder, df.get_feed_dict_fn()
def infer_real_valued_columns_from_input_fn(input_fn):
"""Creates `FeatureColumn` objects for inputs defined by `input_fn`.
This interprets all inputs as dense, fixed-length float values. This creates
a local graph in which it calls `input_fn` to build the tensors, then discards
it.
Args:
input_fn: Input function returning a tuple of:
features - Dictionary of string feature name to `Tensor` or `Tensor`.
labels - `Tensor` of label values.
Returns:
List of `FeatureColumn` objects.
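  Example (an illustrative sketch; the feature name, the values, and the
  `import tensorflow as tf` alias are assumptions):
    def my_input_fn():
      features = {'age': tf.constant([[23.0], [31.0]])}
      labels = tf.constant([[0], [1]])
      return features, labels
    feature_columns = infer_real_valued_columns_from_input_fn(my_input_fn)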
"""
with ops.Graph().as_default():
features, _ = input_fn()
return layers.infer_real_valued_columns(features)
def infer_real_valued_columns_from_input(x):
"""Creates `FeatureColumn` objects for inputs defined by input `x`.
This interprets all inputs as dense, fixed-length float values.
Args:
    x: Real-valued matrix of shape [n_samples, n_features...]. Can be an
      iterator that returns arrays of features.
Returns:
List of `FeatureColumn` objects.
"""
input_fn, _ = _get_input_fn(
x=x, y=None, input_fn=None, feed_fn=None, batch_size=None)
return infer_real_valued_columns_from_input_fn(input_fn)
def _get_arguments(func):
"""Returns list of arguments this function has."""
if hasattr(func, '__code__'):
# Regular function.
return inspect.getargspec(func).args
elif hasattr(func, '__call__'):
# Callable object.
return _get_arguments(func.__call__)
elif hasattr(func, 'func'):
# Partial function.
return _get_arguments(func.func)
def _get_replica_device_setter(config):
"""Creates a replica device setter if required.
Args:
config: A RunConfig instance.
Returns:
A replica device setter, or None.
"""
ps_ops = [
'Variable', 'VariableV2', 'AutoReloadVariable', 'MutableHashTable',
'MutableHashTableOfTensors', 'MutableDenseHashTable'
]
if config.task_type:
worker_device = '/job:%s/task:%d' % (config.task_type, config.task_id)
else:
worker_device = '/job:worker'
if config.num_ps_replicas > 0:
return device_setter.replica_device_setter(
ps_tasks=config.num_ps_replicas, worker_device=worker_device,
merge_devices=True, ps_ops=ps_ops, cluster=config.cluster_spec)
else:
return None
def _make_metrics_ops(metrics, features, labels, predictions):
"""Add metrics based on `features`, `labels`, and `predictions`.
`metrics` contains a specification for how to run metrics. It is a dict
mapping friendly names to either `MetricSpec` objects, or directly to a metric
function (assuming that `predictions` and `labels` are single tensors), or to
`(pred_name, metric)` `tuple`, which passes `predictions[pred_name]` and
`labels` to `metric` (assuming `labels` is a single tensor).
Users are encouraged to use `MetricSpec` objects, which are more flexible and
cleaner. They also lead to clearer errors.
Args:
metrics: A dict mapping names to metrics specification, for example
`MetricSpec` objects.
features: A dict of tensors returned from an input_fn as features/inputs.
labels: A single tensor or a dict of tensors returned from an input_fn as
labels.
predictions: A single tensor or a dict of tensors output from a model as
predictions.
Returns:
    A dict mapping the friendly names given in `metrics` to the result of
    calling the given metric function.
Raises:
ValueError: If metrics specifications do not work with the type of
`features`, `labels`, or `predictions` provided. Mostly, a dict is given
but no pred_name specified.
"""
metrics = metrics or {}
# If labels is a dict with a single key, unpack into a single tensor.
labels_tensor_or_dict = labels
if isinstance(labels, dict) and len(labels) == 1:
labels_tensor_or_dict = labels[list(labels.keys())[0]]
result = {}
# Iterate in lexicographic order, so the graph is identical among runs.
for name, metric in sorted(six.iteritems(metrics)):
if isinstance(metric, metric_spec.MetricSpec):
result[name] = metric.create_metric_ops(features, labels, predictions)
continue
# TODO(b/31229024): Remove the rest of this loop
logging.warning('Please specify metrics using MetricSpec. Using bare '
'functions or (key, fn) tuples is deprecated and support '
'for it will be removed on Oct 1, 2016.')
if isinstance(name, tuple):
# Multi-head metrics.
if len(name) != 2:
raise ValueError('Invalid metric for {}. It returned a tuple with '
'len {}, expected 2.'.format(name, len(name)))
if not isinstance(predictions, dict):
raise ValueError(
'Metrics passed provide (name, prediction), '
'but predictions are not dict. '
'Metrics: %s, Predictions: %s.' % (metrics, predictions))
# Here are two options: labels are single Tensor or a dict.
if isinstance(labels, dict) and name[1] in labels:
# If labels are dict and the prediction name is in it, apply metric.
result[name[0]] = metric(predictions[name[1]], labels[name[1]])
else:
# Otherwise pass the labels to the metric.
result[name[0]] = metric(predictions[name[1]], labels_tensor_or_dict)
else:
# Single head metrics.
if isinstance(predictions, dict):
raise ValueError(
'Metrics passed provide only name, no prediction, '
'but predictions are dict. '
'Metrics: %s, Labels: %s.' % (metrics, labels_tensor_or_dict))
result[name] = metric(predictions, labels_tensor_or_dict)
return result
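# Illustrative sketch (not authoritative): the shape of a `metrics` dict that
# `_make_metrics_ops` accepts. The metric name 'accuracy', the prediction key
# 'classes' and the use of a streaming accuracy metric are assumptions made
# for illustration only; `features`, `labels` and `predictions` are the
# objects described in the docstring above.
#
#   example_metrics = {
#       'accuracy': metric_spec.MetricSpec(
#           metric_fn=metrics_lib.streaming_accuracy,
#           prediction_key='classes'),
#   }
#   eval_ops = _make_metrics_ops(example_metrics, features, labels, predictions)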
def _dict_to_str(dictionary):
"""Get a `str` representation of a `dict`.
Args:
dictionary: The `dict` to be represented as `str`.
Returns:
A `str` representing the `dictionary`.
"""
return ', '.join('%s = %s' % (k, v) for k, v in sorted(dictionary.items()))
def _write_dict_to_summary(output_dir,
dictionary,
current_global_step):
"""Writes a `dict` into summary file in given output directory.
Args:
output_dir: `str`, directory to write the summary file in.
dictionary: the `dict` to be written to summary file.
current_global_step: `int`, the current global step.
"""
logging.info('Saving dict for global step %d: %s', current_global_step,
_dict_to_str(dictionary))
summary_writer = summary_io.SummaryWriterCache.get(output_dir)
summary_proto = summary_pb2.Summary()
for key in dictionary:
if dictionary[key] is None:
continue
value = summary_proto.value.add()
value.tag = key
if (isinstance(dictionary[key], np.float32) or
isinstance(dictionary[key], float)):
value.simple_value = float(dictionary[key])
else:
logging.warn('Skipping summary for %s, must be a float or np.float32.',
key)
summary_writer.add_summary(summary_proto, current_global_step)
summary_writer.flush()
class BaseEstimator(
sklearn.BaseEstimator, evaluable.Evaluable, trainable.Trainable):
"""Abstract BaseEstimator class to train and evaluate TensorFlow models.
Users should not instantiate or subclass this class. Instead, use `Estimator`.
"""
__metaclass__ = abc.ABCMeta
  # Note that for Google users, this is overridden with
# learn_runner.EstimatorConfig.
# TODO(wicke): Remove this once launcher takes over config functionality
_Config = run_config.RunConfig # pylint: disable=invalid-name
def __init__(self, model_dir=None, config=None):
"""Initializes a BaseEstimator instance.
Args:
      model_dir: Directory to save model parameters, graph, etc. This can
        also be used to load checkpoints from the directory into an estimator
        to continue training a previously saved model. If `None`, the
        model_dir in `config` will be used if set. If both are set, they must
        be the same.
config: A RunConfig instance.
"""
# Create a run configuration.
if config is None:
self._config = BaseEstimator._Config()
logging.info('Using default config.')
else:
self._config = config
logging.info('Using config: %s', str(vars(self._config)))
if self._config.session_config is None:
self._session_config = config_pb2.ConfigProto(allow_soft_placement=True)
else:
self._session_config = self._config.session_config
# Model directory.
if (model_dir is not None) and (self._config.model_dir is not None):
if model_dir != self._config.model_dir:
# TODO(b/9965722): remove this suppression after it is no longer
# necessary.
# pylint: disable=g-doc-exception
        raise ValueError(
            "model_dir is set in both the constructor and RunConfig, but with "
"different values. In constructor: '{}', in RunConfig: "
"'{}' ".format(model_dir, self._config.model_dir))
self._model_dir = model_dir or self._config.model_dir
if self._model_dir is None:
self._model_dir = tempfile.mkdtemp()
logging.warning('Using temporary folder as model directory: %s',
self._model_dir)
if self._config.model_dir is None:
self._config = self._config.replace(model_dir=self._model_dir)
# Set device function depending if there are replicas or not.
self._device_fn = _get_replica_device_setter(self._config)
# Features and labels TensorSignature objects.
# TODO(wicke): Rename these to something more descriptive
self._features_info = None
self._labels_info = None
self._graph = None
@property
def config(self):
# TODO(wicke): make RunConfig immutable, and then return it without a copy.
return copy.deepcopy(self._config)
@deprecated_args(
SCIKIT_DECOUPLE_DATE, SCIKIT_DECOUPLE_INSTRUCTIONS, ('x', None),
('y', None), ('batch_size', None)
)
def fit(self, x=None, y=None, input_fn=None, steps=None, batch_size=None,
monitors=None, max_steps=None):
# pylint: disable=g-doc-args,g-doc-return-or-yield
"""See `Trainable`.
Raises:
ValueError: If `x` or `y` are not `None` while `input_fn` is not `None`.
ValueError: If both `steps` and `max_steps` are not `None`.
"""
if (steps is not None) and (max_steps is not None):
raise ValueError('Can not provide both steps and max_steps.')
_verify_input_args(x, y, input_fn, None, batch_size)
if x is not None:
SKCompat(self).fit(x, y, batch_size, steps, max_steps, monitors)
return self
if max_steps is not None:
try:
start_step = load_variable(self._model_dir, ops.GraphKeys.GLOBAL_STEP)
if max_steps <= start_step:
          logging.info('Skipping training since max_steps has already been reached.')
return self
except: # pylint: disable=bare-except
pass
hooks = monitor_lib.replace_monitors_with_hooks(monitors, self)
if steps is not None or max_steps is not None:
hooks.append(basic_session_run_hooks.StopAtStepHook(steps, max_steps))
loss = self._train_model(input_fn=input_fn, hooks=hooks)
logging.info('Loss for final step: %s.', loss)
return self
@deprecated_args(
SCIKIT_DECOUPLE_DATE, SCIKIT_DECOUPLE_INSTRUCTIONS, ('x', None),
('y', None), ('batch_size', None)
)
def partial_fit(
self, x=None, y=None, input_fn=None, steps=1, batch_size=None,
monitors=None):
"""Incremental fit on a batch of samples.
This method is expected to be called several times consecutively
on different or the same chunks of the dataset. This either can
implement iterative training or out-of-core/online training.
    This is especially useful when the whole dataset is too big to
    fit in memory at the same time, or when the model is taking a long time
    to converge and you want to split up training into subparts.
Args:
x: Matrix of shape [n_samples, n_features...]. Can be iterator that
returns arrays of features. The training input samples for fitting the
model. If set, `input_fn` must be `None`.
y: Vector or matrix [n_samples] or [n_samples, n_outputs]. Can be
iterator that returns array of labels. The training label values
(class labels in classification, real numbers in regression). If set,
`input_fn` must be `None`.
input_fn: Input function. If set, `x`, `y`, and `batch_size` must be
`None`.
steps: Number of steps for which to train model. If `None`, train forever.
batch_size: minibatch size to use on the input, defaults to first
dimension of `x`. Must be `None` if `input_fn` is provided.
monitors: List of `BaseMonitor` subclass instances. Used for callbacks
inside the training loop.
Returns:
`self`, for chaining.
Raises:
ValueError: If at least one of `x` and `y` is provided, and `input_fn` is
provided.
"""
logging.warning('The current implementation of partial_fit is not optimized'
' for use in a loop. Consider using fit() instead.')
return self.fit(x=x, y=y, input_fn=input_fn, steps=steps,
batch_size=batch_size, monitors=monitors)
@deprecated_args(
SCIKIT_DECOUPLE_DATE, SCIKIT_DECOUPLE_INSTRUCTIONS, ('x', None),
('y', None), ('batch_size', None)
)
def evaluate(self,
x=None,
y=None,
input_fn=None,
feed_fn=None,
batch_size=None,
steps=None,
metrics=None,
name=None,
checkpoint_path=None,
hooks=None,
log_progress=True):
# pylint: disable=g-doc-args,g-doc-return-or-yield
"""See `Evaluable`.
Raises:
ValueError: If at least one of `x` or `y` is provided, and at least one of
`input_fn` or `feed_fn` is provided.
Or if `metrics` is not `None` or `dict`.
"""
_verify_input_args(x, y, input_fn, feed_fn, batch_size)
if x is not None:
return SKCompat(self).score(x, y, batch_size, steps, metrics)
if metrics is not None and not isinstance(metrics, dict):
raise ValueError('Metrics argument should be None or dict. '
'Got %s.' % metrics)
eval_results, global_step = self._evaluate_model(
input_fn=input_fn,
feed_fn=feed_fn,
steps=steps,
metrics=metrics,
name=name,
checkpoint_path=checkpoint_path,
hooks=hooks,
log_progress=log_progress)
if eval_results is not None:
eval_results.update({'global_step': global_step})
return eval_results
@deprecated_args(
SCIKIT_DECOUPLE_DATE, SCIKIT_DECOUPLE_INSTRUCTIONS, ('x', None),
('batch_size', None), ('as_iterable', True)
)
def predict(
self, x=None, input_fn=None, batch_size=None, outputs=None,
as_iterable=True):
"""Returns predictions for given features.
Args:
x: Matrix of shape [n_samples, n_features...]. Can be iterator that
returns arrays of features. The training input samples for fitting the
model. If set, `input_fn` must be `None`.
input_fn: Input function. If set, `x` and 'batch_size' must be `None`.
batch_size: Override default batch size. If set, 'input_fn' must be
'None'.
outputs: list of `str`, name of the output to predict.
If `None`, returns all.
as_iterable: If True, return an iterable which keeps yielding predictions
for each example until inputs are exhausted. Note: The inputs must
terminate if you want the iterable to terminate (e.g. be sure to pass
num_epochs=1 if you are using something like read_batch_features).
Returns:
A numpy array of predicted classes or regression values if the
constructor's `model_fn` returns a `Tensor` for `predictions` or a `dict`
of numpy arrays if `model_fn` returns a `dict`. Returns an iterable of
predictions if as_iterable is True.
Raises:
ValueError: If x and input_fn are both provided or both `None`.
"""
_verify_input_args(x, None, input_fn, None, batch_size)
if x is not None and not as_iterable:
return SKCompat(self).predict(x, batch_size)
input_fn, feed_fn = _get_input_fn(x, None, input_fn, None, batch_size)
return self._infer_model(
input_fn=input_fn,
feed_fn=feed_fn,
outputs=outputs,
as_iterable=as_iterable)
def get_variable_value(self, name):
"""Returns value of the variable given by name.
Args:
name: string, name of the tensor.
Returns:
Numpy array - value of the tensor.
"""
return load_variable(self.model_dir, name)
def get_variable_names(self):
"""Returns list of all variable names in this model.
Returns:
List of names.
"""
return [name for name, _ in list_variables(self.model_dir)]
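  # Illustrative sketch (not authoritative): inspecting a fitted estimator's
  # checkpoint with the two helpers above. The variable name 'linear/weights'
  # is a hypothetical placeholder.
  #
  #   names = estimator.get_variable_names()
  #   weights = estimator.get_variable_value('linear/weights')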
@property
def model_dir(self):
return self._model_dir
@deprecated('2017-03-25', 'Please use Estimator.export_savedmodel() instead.')
def export(self,
export_dir,
input_fn=export._default_input_fn, # pylint: disable=protected-access
input_feature_key=None,
use_deprecated_input_fn=True,
signature_fn=None,
prediction_key=None,
default_batch_size=1,
exports_to_keep=None,
checkpoint_path=None):
"""Exports inference graph into given dir.
Args:
export_dir: A string containing a directory to write the exported graph
and checkpoints.
input_fn: If `use_deprecated_input_fn` is true, then a function that given
`Tensor` of `Example` strings, parses it into features that are then
passed to the model. Otherwise, a function that takes no argument and
returns a tuple of (features, labels), where features is a dict of
string key to `Tensor` and labels is a `Tensor` that's currently not
used (and so can be `None`).
input_feature_key: Only used if `use_deprecated_input_fn` is false. String
key into the features dict returned by `input_fn` that corresponds to a
the raw `Example` strings `Tensor` that the exported model will take as
input. Can only be `None` if you're using a custom `signature_fn` that
does not use the first arg (examples).
use_deprecated_input_fn: Determines the signature format of `input_fn`.
signature_fn: Function that returns a default signature and a named
signature map, given `Tensor` of `Example` strings, `dict` of `Tensor`s
for features and `Tensor` or `dict` of `Tensor`s for predictions.
prediction_key: The key for a tensor in the `predictions` dict (output
from the `model_fn`) to use as the `predictions` input to the
`signature_fn`. Optional. If `None`, predictions will pass to
`signature_fn` without filtering.
default_batch_size: Default batch size of the `Example` placeholder.
exports_to_keep: Number of exports to keep.
checkpoint_path: the checkpoint path of the model to be exported. If it is
`None` (which is default), will use the latest checkpoint in
export_dir.
Returns:
The string path to the exported directory. NB: this functionality was
added ca. 2016/09/25; clients that depend on the return value may need
to handle the case where this function returns None because subclasses
are not returning a value.
"""
# pylint: disable=protected-access
return export._export_estimator(
estimator=self,
export_dir=export_dir,
signature_fn=signature_fn,
prediction_key=prediction_key,
input_fn=input_fn,
input_feature_key=input_feature_key,
use_deprecated_input_fn=use_deprecated_input_fn,
default_batch_size=default_batch_size,
exports_to_keep=exports_to_keep,
checkpoint_path=checkpoint_path)
@abc.abstractproperty
def _get_train_ops(self, features, labels):
"""Method that builds model graph and returns trainer ops.
Expected to be overridden by sub-classes that require custom support.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
labels: `Tensor` or `dict` of `Tensor` objects.
Returns:
A `ModelFnOps` object.
"""
pass
@abc.abstractproperty
def _get_predict_ops(self, features):
"""Method that builds model graph and returns prediction ops.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
Returns:
A `ModelFnOps` object.
"""
pass
def _get_eval_ops(self, features, labels, metrics):
"""Method that builds model graph and returns evaluation ops.
    Expected to be overridden by sub-classes that require custom support.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
labels: `Tensor` or `dict` of `Tensor` objects.
metrics: Dict of metrics to run. If None, the default metric functions
are used; if {}, no metrics are used. Otherwise, `metrics` should map
friendly names for the metric to a `MetricSpec` object defining which
model outputs to evaluate against which labels with which metric
function. Metric ops should support streaming, e.g., returning
update_op and value tensors. See more details in
`../../../../metrics/python/metrics/ops/streaming_metrics.py` and
`../metric_spec.py`.
Returns:
A `ModelFnOps` object.
"""
raise NotImplementedError('_get_eval_ops not implemented in BaseEstimator')
@deprecated(
'2016-09-23',
'The signature of the input_fn accepted by export is changing to be '
'consistent with what\'s used by tf.Learn Estimator\'s train/evaluate, '
'which makes this function useless. This will be removed after the '
'deprecation date.')
def _get_feature_ops_from_example(self, examples_batch):
"""Returns feature parser for given example batch using features info.
This function requires `fit()` has been called.
Args:
examples_batch: batch of tf.Example
Returns:
features: `Tensor` or `dict` of `Tensor` objects.
Raises:
ValueError: If `_features_info` attribute is not available (usually
because `fit()` has not been called).
"""
if self._features_info is None:
raise ValueError('Features information missing, was fit() ever called?')
return tensor_signature.create_example_parser_from_signatures(
self._features_info, examples_batch)
def _check_inputs(self, features, labels):
if self._features_info is not None:
logging.debug('Given features: %s, required signatures: %s.',
str(features), str(self._features_info))
if not tensor_signature.tensors_compatible(features, self._features_info):
raise ValueError('Features are incompatible with given information. '
'Given features: %s, required signatures: %s.' %
(str(features), str(self._features_info)))
else:
self._features_info = tensor_signature.create_signatures(features)
logging.debug('Setting feature info to %s.', str(self._features_info))
if labels is not None:
if self._labels_info is not None:
logging.debug('Given labels: %s, required signatures: %s.',
str(labels), str(self._labels_info))
if not tensor_signature.tensors_compatible(labels, self._labels_info):
raise ValueError('Labels are incompatible with given information. '
'Given labels: %s, required signatures: %s.' %
(str(labels), str(self._labels_info)))
else:
self._labels_info = tensor_signature.create_signatures(labels)
logging.debug('Setting labels info to %s', str(self._labels_info))
def _extract_metric_update_ops(self, eval_dict):
"""Separate update operations from metric value operations."""
update_ops = []
value_ops = {}
for name, metric_ops in six.iteritems(eval_dict):
if isinstance(metric_ops, (list, tuple)):
if len(metric_ops) == 2:
value_ops[name] = metric_ops[0]
update_ops.append(metric_ops[1])
else:
logging.warning(
'Ignoring metric {}. It returned a list|tuple with len {}, '
'expected 2'.format(name, len(metric_ops)))
value_ops[name] = metric_ops
else:
value_ops[name] = metric_ops
if update_ops:
update_ops = control_flow_ops.group(*update_ops)
else:
update_ops = None
return update_ops, value_ops
def _evaluate_model(self,
input_fn,
steps,
feed_fn=None,
metrics=None,
name='',
checkpoint_path=None,
hooks=None,
log_progress=True):
# TODO(wicke): Remove this once Model and associated code are gone.
if (hasattr(self._config, 'execution_mode') and
self._config.execution_mode not in ('all', 'evaluate', 'eval_evalset')):
return None, None
# Check that model has been trained (if nothing has been set explicitly).
if not checkpoint_path:
latest_path = saver.latest_checkpoint(self._model_dir)
if not latest_path:
raise NotFittedError("Couldn't find trained model at %s."
% self._model_dir)
checkpoint_path = latest_path
# Setup output directory.
eval_dir = os.path.join(self._model_dir, 'eval' if not name else
'eval_' + name)
with ops.Graph().as_default() as g:
random_seed.set_random_seed(self._config.tf_random_seed)
global_step = contrib_framework.create_global_step(g)
features, labels = input_fn()
self._check_inputs(features, labels)
model_fn_results = self._get_eval_ops(features, labels, metrics)
eval_dict = model_fn_results.eval_metric_ops
update_op, eval_dict = self._extract_metric_update_ops(eval_dict)
# We need to copy the hook array as we modify it, thus [:].
hooks = hooks[:] if hooks else []
if feed_fn:
hooks.append(basic_session_run_hooks.FeedFnHook(feed_fn))
if steps:
hooks.append(
evaluation.StopAfterNEvalsHook(
steps, log_progress=log_progress))
global_step_key = 'global_step'
while global_step_key in eval_dict:
global_step_key = '_' + global_step_key
eval_dict[global_step_key] = global_step
eval_results = evaluation.evaluate_once(
checkpoint_path=checkpoint_path,
master=self._config.evaluation_master,
scaffold=model_fn_results.scaffold,
eval_ops=update_op,
final_ops=eval_dict,
hooks=hooks,
config=self._session_config)
current_global_step = eval_results[global_step_key]
_write_dict_to_summary(eval_dir, eval_results, current_global_step)
return eval_results, current_global_step
def _get_features_from_input_fn(self, input_fn):
result = input_fn()
if isinstance(result, (list, tuple)):
return result[0]
return result
def _infer_model(self,
input_fn,
feed_fn=None,
outputs=None,
as_iterable=True,
iterate_batches=False):
# Check that model has been trained.
checkpoint_path = saver.latest_checkpoint(self._model_dir)
if not checkpoint_path:
raise NotFittedError("Couldn't find trained model at %s."
% self._model_dir)
with ops.Graph().as_default() as g:
random_seed.set_random_seed(self._config.tf_random_seed)
contrib_framework.create_global_step(g)
features = self._get_features_from_input_fn(input_fn)
infer_ops = self._get_predict_ops(features)
predictions = self._filter_predictions(infer_ops.predictions, outputs)
mon_sess = monitored_session.MonitoredSession(
session_creator=monitored_session.ChiefSessionCreator(
checkpoint_filename_with_path=checkpoint_path,
scaffold=infer_ops.scaffold,
config=self._session_config))
if not as_iterable:
with mon_sess:
if not mon_sess.should_stop():
return mon_sess.run(predictions, feed_fn() if feed_fn else None)
else:
return self._predict_generator(mon_sess, predictions, feed_fn,
iterate_batches)
def _predict_generator(self, mon_sess, predictions, feed_fn, iterate_batches):
with mon_sess:
while not mon_sess.should_stop():
preds = mon_sess.run(predictions, feed_fn() if feed_fn else None)
if iterate_batches:
yield preds
elif not isinstance(predictions, dict):
for pred in preds:
yield pred
else:
first_tensor = list(preds.values())[0]
if isinstance(first_tensor, sparse_tensor.SparseTensorValue):
batch_length = first_tensor.dense_shape[0]
else:
batch_length = first_tensor.shape[0]
for i in range(batch_length):
yield {key: value[i] for key, value in six.iteritems(preds)}
if self._is_input_constant(feed_fn, mon_sess.graph):
return
def _is_input_constant(self, feed_fn, graph):
# If there are no queue_runners, the input `predictions` is a
# constant, and we should stop after the first epoch. If,
# instead, there are queue_runners, eventually they should throw
# an `OutOfRangeError`.
if graph.get_collection(ops.GraphKeys.QUEUE_RUNNERS):
return False
# data_feeder uses feed_fn to generate `OutOfRangeError`.
if feed_fn is not None:
return False
return True
def _filter_predictions(self, predictions, outputs):
if not outputs:
return predictions
if not isinstance(predictions, dict):
raise ValueError(
'outputs argument is not valid in case of non-dict predictions.')
existing_keys = predictions.keys()
predictions = {
key: value
for key, value in six.iteritems(predictions) if key in outputs
}
if not predictions:
raise ValueError('Expected to run at least one output from %s, '
'provided %s.' % (existing_keys, outputs))
return predictions
def _train_model(self, input_fn, hooks):
all_hooks = []
self._graph = ops.Graph()
with self._graph.as_default() as g, g.device(self._device_fn):
random_seed.set_random_seed(self._config.tf_random_seed)
global_step = contrib_framework.create_global_step(g)
features, labels = input_fn()
self._check_inputs(features, labels)
model_fn_ops = self._get_train_ops(features, labels)
ops.add_to_collection(ops.GraphKeys.LOSSES, model_fn_ops.loss)
all_hooks.extend([
basic_session_run_hooks.NanTensorHook(model_fn_ops.loss),
basic_session_run_hooks.LoggingTensorHook(
{
'loss': model_fn_ops.loss,
'step': global_step
},
every_n_iter=100)
])
all_hooks.extend(hooks)
scaffold = model_fn_ops.scaffold or monitored_session.Scaffold()
if not (scaffold.saver or ops.get_collection(ops.GraphKeys.SAVERS)):
ops.add_to_collection(
ops.GraphKeys.SAVERS,
saver.Saver(
sharded=True,
max_to_keep=self._config.keep_checkpoint_max,
defer_build=True))
chief_hooks = []
if (self._config.save_checkpoints_secs or
self._config.save_checkpoints_steps):
saver_hook_exists = any([
isinstance(h, basic_session_run_hooks.CheckpointSaverHook)
for h in (all_hooks + model_fn_ops.training_hooks + chief_hooks +
model_fn_ops.training_chief_hooks)
])
if not saver_hook_exists:
chief_hooks = [
basic_session_run_hooks.CheckpointSaverHook(
self._model_dir,
save_secs=self._config.save_checkpoints_secs,
save_steps=self._config.save_checkpoints_steps,
scaffold=scaffold)
]
with monitored_session.MonitoredTrainingSession(
master=self._config.master,
is_chief=self._config.is_chief,
checkpoint_dir=self._model_dir,
scaffold=scaffold,
hooks=all_hooks + model_fn_ops.training_hooks,
chief_only_hooks=chief_hooks + model_fn_ops.training_chief_hooks,
save_checkpoint_secs=0, # Saving is handled by a hook.
save_summaries_steps=self._config.save_summary_steps,
config=self._session_config
) as mon_sess:
loss = None
while not mon_sess.should_stop():
_, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss])
summary_io.SummaryWriterCache.clear()
return loss
def _identity_feature_engineering_fn(features, labels):
return features, labels
class Estimator(BaseEstimator):
"""Estimator class is the basic TensorFlow model trainer/evaluator.
"""
def __init__(self,
model_fn=None,
model_dir=None,
config=None,
params=None,
feature_engineering_fn=None):
"""Constructs an `Estimator` instance.
Args:
model_fn: Model function. Follows the signature:
* Args:
* `features`: single `Tensor` or `dict` of `Tensor`s
(depending on data passed to `fit`),
* `labels`: `Tensor` or `dict` of `Tensor`s (for multi-head
models). If mode is `ModeKeys.INFER`, `labels=None` will be
passed. If the `model_fn`'s signature does not accept
`mode`, the `model_fn` must still be able to handle
`labels=None`.
          * `mode`: Optional. Specifies if this is training, evaluation or
prediction. See `ModeKeys`.
* `params`: Optional `dict` of hyperparameters. Will receive what
is passed to Estimator in `params` parameter. This allows
            configuring Estimators from hyper parameter tuning.
* `config`: Optional configuration object. Will receive what is passed
to Estimator in `config` parameter, or the default `config`.
Allows updating things in your model_fn based on configuration
such as `num_ps_replicas`.
* `model_dir`: Optional directory where model parameters, graph etc
are saved. Will receive what is passed to Estimator in
`model_dir` parameter, or the default `model_dir`. Allows
updating things in your model_fn that expect model_dir, such as
training hooks.
* Returns:
`ModelFnOps`
Also supports a legacy signature which returns tuple of:
* predictions: `Tensor`, `SparseTensor` or dictionary of same.
Can also be any type that is convertible to a `Tensor` or
`SparseTensor`, or dictionary of same.
* loss: Scalar loss `Tensor`.
* train_op: Training update `Tensor` or `Operation`.
        Supports the following signatures for the function:
* `(features, labels) -> (predictions, loss, train_op)`
* `(features, labels, mode) -> (predictions, loss, train_op)`
* `(features, labels, mode, params) -> (predictions, loss, train_op)`
* `(features, labels, mode, params, config) ->
(predictions, loss, train_op)`
* `(features, labels, mode, params, config, model_dir) ->
(predictions, loss, train_op)`
      model_dir: Directory to save model parameters, graph, etc. This can
        also be used to load checkpoints from the directory into an estimator
        to continue training a previously saved model.
config: Configuration object.
params: `dict` of hyper parameters that will be passed into `model_fn`.
Keys are names of parameters, values are basic python types.
feature_engineering_fn: Feature engineering function. Takes features and
labels which are the output of `input_fn` and
returns features and labels which will be fed
into `model_fn`. Please check `model_fn` for
a definition of features and labels.
Raises:
ValueError: parameters of `model_fn` don't match `params`.
"""
super(Estimator, self).__init__(model_dir=model_dir, config=config)
if model_fn is not None:
# Check number of arguments of the given function matches requirements.
model_fn_args = _get_arguments(model_fn)
if params is not None and 'params' not in model_fn_args:
        raise ValueError('Estimator\'s model_fn (%s) has fewer than 4 '
                         'arguments, but non-None params (%s) were passed.' %
                         (model_fn, params))
if params is None and 'params' in model_fn_args:
logging.warning('Estimator\'s model_fn (%s) includes params '
'argument, but params are not passed to Estimator.',
model_fn)
self._model_fn = model_fn
self.params = params
self._feature_engineering_fn = (
feature_engineering_fn or _identity_feature_engineering_fn)
  def _call_model_fn(self, features, labels, mode):
    """Calls model function with support for 2 to 6 arguments.
Args:
features: features dict.
labels: labels dict.
mode: ModeKeys
Returns:
A `ModelFnOps` object. If model_fn returns a tuple, wraps them up in a
`ModelFnOps` object.
Raises:
ValueError: if model_fn returns invalid objects.
"""
features, labels = self._feature_engineering_fn(features, labels)
model_fn_args = _get_arguments(self._model_fn)
kwargs = {}
if 'mode' in model_fn_args:
kwargs['mode'] = mode
if 'params' in model_fn_args:
kwargs['params'] = self.params
if 'config' in model_fn_args:
kwargs['config'] = self.config
if 'model_dir' in model_fn_args:
kwargs['model_dir'] = self.model_dir
model_fn_results = self._model_fn(features, labels, **kwargs)
if isinstance(model_fn_results, model_fn_lib.ModelFnOps):
return model_fn_results
# Here model_fn_results should be a tuple with 3 elements.
if len(model_fn_results) != 3:
raise ValueError('Unrecognized value returned by model_fn, '
'please return ModelFnOps.')
return model_fn_lib.ModelFnOps(
mode=mode,
predictions=model_fn_results[0],
loss=model_fn_results[1],
train_op=model_fn_results[2])
def _get_train_ops(self, features, labels):
"""Method that builds model graph and returns trainer ops.
    Expected to be overridden by sub-classes that require custom support.
This implementation uses `model_fn` passed as parameter to constructor to
build model.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
labels: `Tensor` or `dict` of `Tensor` objects.
Returns:
`ModelFnOps` object.
"""
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.TRAIN)
def _get_eval_ops(self, features, labels, metrics):
"""Method that builds model graph and returns evaluation ops.
    Expected to be overridden by sub-classes that require custom support.
This implementation uses `model_fn` passed as parameter to constructor to
build model.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
labels: `Tensor` or `dict` of `Tensor` objects.
metrics: Dict of metrics to run. If None, the default metric functions
are used; if {}, no metrics are used. Otherwise, `metrics` should map
friendly names for the metric to a `MetricSpec` object defining which
model outputs to evaluate against which labels with which metric
function. Metric ops should support streaming, e.g., returning
update_op and value tensors. See more details in
`../../../../metrics/python/metrics/ops/streaming_metrics.py` and
`../metric_spec.py`.
Returns:
`ModelFnOps` object.
Raises:
ValueError: if `metrics` don't match `labels`.
"""
model_fn_ops = self._call_model_fn(
features, labels, model_fn_lib.ModeKeys.EVAL)
features, labels = self._feature_engineering_fn(features, labels)
# Custom metrics should overwrite defaults.
if metrics:
model_fn_ops.eval_metric_ops.update(_make_metrics_ops(
metrics, features, labels, model_fn_ops.predictions))
if metric_key.MetricKey.LOSS not in model_fn_ops.eval_metric_ops:
model_fn_ops.eval_metric_ops[metric_key.MetricKey.LOSS] = (
metrics_lib.streaming_mean(model_fn_ops.loss))
return model_fn_ops
def _get_predict_ops(self, features):
"""Method that builds model graph and returns prediction ops.
    Expected to be overridden by sub-classes that require custom support.
This implementation uses `model_fn` passed as parameter to constructor to
build model.
Args:
features: `Tensor` or `dict` of `Tensor` objects.
Returns:
`ModelFnOps` object.
"""
labels = tensor_signature.create_placeholders_from_signatures(
self._labels_info)
return self._call_model_fn(features, labels, model_fn_lib.ModeKeys.INFER)
def export_savedmodel(
self, export_dir_base, serving_input_fn,
default_output_alternative_key=None,
assets_extra=None,
as_text=False,
checkpoint_path=None):
"""Exports inference graph as a SavedModel into given dir.
Args:
export_dir_base: A string containing a directory to write the exported
graph and checkpoints.
serving_input_fn: A function that takes no argument and
returns an `InputFnOps`.
default_output_alternative_key: the name of the head to serve when none is
specified. Not needed for single-headed models.
assets_extra: A dict specifying how to populate the assets.extra directory
within the exported SavedModel. Each key should give the destination
path (including the filename) relative to the assets.extra directory.
The corresponding value gives the full path of the source file to be
copied. For example, the simple case of copying a single file without
renaming it is specified as
`{'my_asset_file.txt': '/path/to/my_asset_file.txt'}`.
as_text: whether to write the SavedModel proto in text format.
checkpoint_path: The checkpoint path to export. If None (the default),
the most recent checkpoint found within the model directory is chosen.
Returns:
The string path to the exported directory.
Raises:
ValueError: if an unrecognized export_type is requested.
"""
if serving_input_fn is None:
raise ValueError('serving_input_fn must be defined.')
with ops.Graph().as_default() as g:
contrib_variables.create_global_step(g)
# Call the serving_input_fn and collect the input alternatives.
input_ops = serving_input_fn()
input_alternatives, features = (
saved_model_export_utils.get_input_alternatives(input_ops))
# Call the model_fn and collect the output alternatives.
model_fn_ops = self._call_model_fn(features, None,
model_fn_lib.ModeKeys.INFER)
output_alternatives, actual_default_output_alternative_key = (
saved_model_export_utils.get_output_alternatives(
model_fn_ops, default_output_alternative_key))
# Build the SignatureDefs from all pairs of input and output alternatives
signature_def_map = saved_model_export_utils.build_all_signature_defs(
input_alternatives, output_alternatives,
actual_default_output_alternative_key)
if not checkpoint_path:
# Locate the latest checkpoint
checkpoint_path = saver.latest_checkpoint(self._model_dir)
if not checkpoint_path:
raise NotFittedError("Couldn't find trained model at %s."
% self._model_dir)
export_dir = saved_model_export_utils.get_timestamped_export_dir(
export_dir_base)
if (model_fn_ops.scaffold is not None and
model_fn_ops.scaffold.saver is not None):
saver_for_restore = model_fn_ops.scaffold.saver
else:
saver_for_restore = saver.Saver(sharded=True)
with tf_session.Session('') as session:
variables.initialize_local_variables()
data_flow_ops.tables_initializer()
resources.initialize_resources(resources.shared_resources())
saver_for_restore.restore(session, checkpoint_path)
init_op = control_flow_ops.group(
variables.local_variables_initializer(),
resources.initialize_resources(resources.shared_resources()),
data_flow_ops.tables_initializer())
# Perform the export
builder = saved_model_builder.SavedModelBuilder(export_dir)
builder.add_meta_graph_and_variables(
session, [tag_constants.SERVING],
signature_def_map=signature_def_map,
assets_collection=ops.get_collection(
ops.GraphKeys.ASSET_FILEPATHS),
legacy_init_op=init_op)
builder.save(as_text)
# Add the extra assets
if assets_extra:
assets_extra_path = os.path.join(compat.as_bytes(export_dir),
compat.as_bytes('assets.extra'))
for dest_relative, source in assets_extra.items():
dest_absolute = os.path.join(compat.as_bytes(assets_extra_path),
compat.as_bytes(dest_relative))
dest_path = os.path.dirname(dest_absolute)
gfile.MakeDirs(dest_path)
gfile.Copy(source, dest_absolute)
return export_dir
# While x,y are being deprecated in Estimator, allow SKCompat direct access.
# pylint: disable=protected-access
class SKCompat(sklearn.BaseEstimator):
"""Scikit learn wrapper for TensorFlow Learn Estimator."""
def __init__(self, estimator):
self._estimator = estimator
def fit(self, x, y, batch_size=128, steps=None, max_steps=None,
monitors=None):
input_fn, feed_fn = _get_input_fn(x, y, input_fn=None, feed_fn=None,
batch_size=batch_size, shuffle=True,
epochs=None)
all_monitors = []
if feed_fn:
all_monitors = [basic_session_run_hooks.FeedFnHook(feed_fn)]
if monitors:
all_monitors.extend(monitors)
self._estimator.fit(input_fn=input_fn,
steps=steps,
max_steps=max_steps,
monitors=all_monitors)
return self
def score(self, x, y, batch_size=128, steps=None, metrics=None):
input_fn, feed_fn = _get_input_fn(x, y, input_fn=None,
feed_fn=None, batch_size=batch_size,
shuffle=False, epochs=1)
if metrics is not None and not isinstance(metrics, dict):
raise ValueError('Metrics argument should be None or dict. '
'Got %s.' % metrics)
eval_results, global_step = self._estimator._evaluate_model(
input_fn=input_fn,
feed_fn=feed_fn,
steps=steps,
metrics=metrics,
name='score')
if eval_results is not None:
eval_results.update({'global_step': global_step})
return eval_results
def predict(self, x, batch_size=128, outputs=None):
input_fn, feed_fn = _get_input_fn(
x, None, input_fn=None, feed_fn=None, batch_size=batch_size,
shuffle=False, epochs=1)
results = list(
self._estimator._infer_model(
input_fn=input_fn,
feed_fn=feed_fn,
outputs=outputs,
as_iterable=True,
iterate_batches=True))
if not isinstance(results[0], dict):
return np.concatenate([output for output in results], axis=0)
return {
key: np.concatenate(
[output[key] for output in results], axis=0)
for key in results[0]
}
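# Illustrative sketch (not authoritative): typical use of `SKCompat`,
# mirroring the conversion suggested in SCIKIT_DECOUPLE_INSTRUCTIONS. The
# model_fn, arrays and hyper-parameters below are hypothetical placeholders.
#
#   est = SKCompat(Estimator(model_fn=my_model_fn, model_dir='/tmp/my_model'))
#   est.fit(x_train, y_train, batch_size=128, steps=1000)
#   scores = est.score(x_test, y_test, batch_size=128)
#   predictions = est.predict(x_test, batch_size=128)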
| apache-2.0 |
pystruct/pystruct | examples/plot_hidden_short_snakes_typed.py | 1 | 23478 | """
==============================================
Conditional Interactions on the Snakes Dataset
==============================================
This is a variant of plot_snakes.py
Snakes are hiding, so we have two categorisers:
 - determining if a snake is in the picture,
 - identifying its head-to-tail body (at pixel level)
We use the NodeTypeEdgeFeatureGraphCRF class with 2 types of nodes.
This example uses the snake dataset introduced in
Nowozin, Rother, Bagon, Sharp, Yao, Kohli: Decision Tree Fields ICCV 2011
This dataset is specifically designed to require the pairwise interaction terms
to be conditioned on the input, in other words to use non-trivial edge-features.
The task is as follows: a "snake" of length ten wandered over a grid. For
each cell, it had the option to go up, down, left or right (unless it came from
there). The input consists of these decisions, while the desired output is an
annotation of the snake from 0 (head) to 9 (tail). See the plots for an
example.
As input features we use a 3x3 window around each pixel (and pad with background
where necessary). We code the five different input colors (for up, down, left, right,
background) using a one-hot encoding. This is a rather naive approach, not using any
information about the dataset (other than that it is a 2d grid).
The task cannot be solved using the simple DirectionalGridCRF - which can only
infer head and tail (which are also possible to infer just from the unary
features). If we add edge-features that contain the features of the nodes that are
connected by the edge, the CRF can solve the task.
From an inference point of view, this task is very hard. QPBO move-making is
not able to solve it alone, so we use the relaxed AD3 inference for learning.
PS: This example takes a while to run (5 minutes on 12 cores, 20 minutes on one
core for me). But it does work as well as Decision Tree Fields ;)
JL Meunier - January 2017
Developed for the EU project READ. The READ project has received funding
from the European Union's Horizon 2020 research and innovation programme
under grant agreement No 674943
Copyright Xerox
"""
from __future__ import (absolute_import, division, print_function)
import sys, os, time
import random
try:
import cPickle as pickle
except:
import pickle
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.linear_model import LogisticRegression
#from sklearn.grid_search import GridSearchCV
from sklearn.model_selection import GridSearchCV
from pystruct.learners import OneSlackSSVM
from pystruct.datasets import load_snakes
from pystruct.models import NodeTypeEdgeFeatureGraphCRF
from plot_snakes import one_hot_colors, neighborhood_feature, prepare_data
from plot_hidden_snakes import augmentWithNoSnakeImages, shuffle_in_unison, shorten_snakes
#==============================================================================================
bFIXED_RANDOM_SEED = True
NCELL=10
nbSWAP_Pixel_Pict_TYPES = 0 #0,1,2 are useful (this was for DEBUG)
bMAKE_PICT_EASY = False #DEBUG: we add a feature on the picture that tells directly if a snake is present or not
#INFERENCE="ad3+" #ad3+ is required when there are hard logic constraints
INFERENCE="ad3" #ad3 is faster than ad3+
N_JOBS=8
MAXITER=750
sMODELFILE = None
#sMODELFILE = "model.pkl" #we save the model in a file and do not re-trian if the file exists
#==============================================================================================
def printConfig():
print("== NCELL=", NCELL)
print("== FIXED_SEED=", bFIXED_RANDOM_SEED)
print("== INFERENCE =", INFERENCE)
print("== N_JOBS =", N_JOBS)
print("== SWAP=", nbSWAP_Pixel_Pict_TYPES)
print("== EASY=", bMAKE_PICT_EASY)
print("== MAX_ITER=", MAXITER)
print("== MODEL FILE=", sMODELFILE)
if __name__ == '__main__':
printConfig()
def plot_snake(picture):
plt.imshow(picture, interpolation='nearest')
plt.show()
def prepare_picture_data(X):
"""
compute picture features (on 1-hot encoded pictures)
"""
lPictFeat = list()
for a_hot_picture in X:
#count number of cells of each color
#feat = np.zeros((1,5), dtype=np.int8)
feat = np.zeros((1,7), dtype=np.int64)
#Histogram of pixels from 0 to 4
"""
Test accuracy: 0.500
[[45 55]
[45 55]]
"""
for i in range(5):
ai, aj = np.where(a_hot_picture[...,i] == 1)
feat[0,i] = len(ai)
#adding height and width of the snake
"""
Test accuracy: 0.420 Test accuracy: 0.515 Test accuracy: 0.495
[[39 61] [[48 52] [[52 48]
[55 45]] [45 55]] [53 47]]
"""
ai, aj = np.where(a_hot_picture[...,3] != 1)
feat[0,5] = max(ai)-min(ai) #height
feat[0,6] = max(aj)-min(aj) #width
lPictFeat.append(feat)
return lPictFeat
def convertToTwoType(X_train, #list of hot pictures
X_train_directions, # list of node_feat (2D array) , edges (_ x 2 array), edge_feat (2D array) for pixel nodes
Y_train, # list of 2D arrays
X_train_pict_feat, #a list of picture_node_features
Y_train_pict, #a list of integers [0,1]
nCell=10):
"""
return X,Y for NodeTypeEdgeFeatureGraphCRF
X and Y
-------
Node features are given as a list of n_types arrays of shape (n_type_nodes, n_type_features):
- n_type_nodes is the number of nodes of that type
- n_type_features is the number of features for this type of node
Edges are given as a list of n_types x n_types arrays of shape (n_type_edges, 2).
Columns are resp.: node index (in corresponding node type), node index (in corresponding node type)
Edge features are given as a list of n_types x n_types arrays of shape (n_type_type_edge, n_type_type_edge_features)
- n_type_type_edge is the number of edges of type type_type
- n_type_type_edge_features is the number of features for edge of type type_type
An instance ``X`` is represented as a tuple ``([node_features, ..], [edges, ..], [edge_features, ..])``
Labels ``Y`` are given as one array of shape (n_nodes) The meaning of a label depends upon the node type.
"""
lX, lY = list(), list()
for (X,
(aPixelFeat, aPixelPixelEdges, aPixelPixelEdgeFeat),
aPixelLbl,
aPictFeat,
iPictLbl) in zip(X_train, X_train_directions, Y_train, X_train_pict_feat, Y_train_pict ):
aPixelPictEdges = np.zeros( (aPixelFeat.shape[0], 2), np.int64)
aPixelPictEdges[:,0] = np.arange(aPixelFeat.shape[0])
features = neighborhood_feature(X)
aPixelPictEdgeFeat = features
lNodeFeat = [aPixelFeat, aPictFeat]
lEdge = [aPixelPixelEdges,
aPixelPictEdges, #pixel to picture
None, #picture to pixel
None] #picture to picture
lEdgeFeat = [aPixelPixelEdgeFeat,
aPixelPictEdgeFeat,
None,
None]
#Y is flat for each graph
y = np.zeros((aPixelLbl.size+1, ), dtype=np.int64)
y[:-1] = aPixelLbl.ravel()
y[-1] = int(iPictLbl)+nCell+1
x = (lNodeFeat, lEdge, lEdgeFeat)
lX.append(x)
lY.append(y)
return lX,lY
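# Illustrative sketch (not authoritative): the structure of one converted
# sample, assuming a picture with 100 pixel nodes and NCELL=10 (both are
# hypothetical here). The feature sizes follow the l_n_feat / ll_n_feat
# configuration used in the __main__ section below.
#
#   (lNodeFeat, lEdge, lEdgeFeat), y = lX[0], lY[0]
#   lNodeFeat[0].shape    # (100, 45)  pixel node features
#   lNodeFeat[1].shape    # (1, 7)     picture node features
#   lEdge[1].shape        # (100, 2)   one pixel-to-picture edge per pixel
#   y.shape               # (101,)     100 pixel labels + 1 picture label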
def swap_node_types(l_perm, l_n_state, lX, lY, constraints=None):
"""
lX and lY have been produced for a CRF configured with l_n_state
We permute this as indicated by the permutation (typically for the snake: l_perm=[1, 0] )
"""
_lX, _lY = [], []
_constraints = None
n_types = len(l_n_state)
a_perm = np.asarray(l_perm) #e.g. 3 for l_n_state = [2, 3, 4]
a_cumsum_n_state = np.asarray([sum(l_n_state[:i]) for i in range(len(l_n_state))]) # [0, 2, 5]
a_delta_y_by_y = np.asarray([item for i,n in enumerate(l_n_state) for item in n*(a_cumsum_n_state[i:i+1]).tolist()]) # [0, 0, 2, 2, 2, 5, 5, 5, 5]
a_typ_by_y = np.asarray([item for i,n in enumerate(l_n_state) for item in n*[i]]) # [0, 0, 1, 1, 1, 2, 2, 2, 2]
_l_n_state = [l_n_state[i] for i in l_perm]
_a_cumsum_n_state = np.asarray([sum(_l_n_state[:i]) for i in range(len(_l_n_state))])
for (lNF, lE, lEF), Y in zip(lX, lY):
_lNF = [lNF[i] for i in l_perm]
_Y = np.zeros(Y.shape, dtype=Y.dtype)
#we need to re-arrange the Ys accordingly
l_n_nodes = [nf.shape[0] for nf in lNF]
_l_n_nodes = [nf.shape[0] for nf in _lNF]
cumsum_n_nodes = [0] + [sum( l_n_nodes[:i+1]) for i in range(len( l_n_nodes))]
_cumsum_n_nodes = [0] + [sum(_l_n_nodes[:i+1]) for i in range(len(_l_n_nodes))]
for i in range(len(lNF)):
j = l_perm[i]
_Y[_cumsum_n_nodes[j]:_cumsum_n_nodes[j+1]] = Y[cumsum_n_nodes[i]:cumsum_n_nodes[i+1]]
_Y = _Y - a_delta_y_by_y[_Y] + _a_cumsum_n_state[a_perm[a_typ_by_y[_Y]]]
_lE = [lE[i*n_types+j] for i in l_perm for j in l_perm]
_lEF = [lEF[i*n_types+j] for i in l_perm for j in l_perm]
_lX.append( (_lNF, _lE, _lEF) )
_lY.append(_Y)
if constraints:
print("WARNING: some constraints are not properly swapped because the "
"node order has a meaning.")
_constraints = list()
        for _lConstraints in constraints:
            _lConstraintsSwapped = list()
            for (op, l_l_unary, l_l_state, l_lnegated) in _lConstraints:
                #keep the op but permute by types
                _l_l_unary   = [l_l_unary  [i] for i in l_perm]
                _l_l_state   = [l_l_state  [i] for i in l_perm]
                _l_lnegated  = [l_lnegated [i] for i in l_perm]
                #append to a fresh list: appending to _lConstraints while
                #iterating over it would never terminate
                _lConstraintsSwapped.append( (op, _l_l_unary, _l_l_state, _l_lnegated) )
            _constraints.append(_lConstraintsSwapped)
return _lX, _lY, _constraints
def listConstraints(lX, ncell=NCELL):
"""
produce the list of constraints for this list of multi-type graphs
"""
lConstraints = list()
for _lNF, _lE, _lEF in lX:
nf_pixel, nf_pict = _lNF
nb_pixels = len(nf_pixel)
l_l_unary = [ range(nb_pixels), [0]]
l_l_states = [ 0, 0 ] #we pass a scalar for each type instead of a list since the values are the same across each type
l_l_negated = [ False, False ] #same
lConstraint_for_X = [("ANDOUT", l_l_unary, l_l_states, l_l_negated)] #we have a list of constraints per X
for _state in range(1, ncell+1):
lConstraint_for_X.append( ("XOROUT" , l_l_unary
, [ _state, 1 ] #exactly one cell in state _state with picture label being snake
, l_l_negated)
) #we have a list of constraints per X
lConstraints.append( lConstraint_for_X )
return lConstraints
def listConstraints_ATMOSTONE(lX, ncell=NCELL):
"""
produce the list of constraints for this list of multi-type graphs
"""
lConstraints = list()
for _lNF, _lE, _lEF in lX:
nf_pixel, nf_pict = _lNF
nb_pixels = len(nf_pixel)
lConstraint_for_X = list()
for _state in range(1, ncell+1):
lConstraint_for_X.append( ("ATMOSTONE" , [ range(nb_pixels), []]
, [ _state, None ] #atmost one cell in state _state whatever picture label
, [ False, None ])
) #we have a list of constraints per X
lConstraints.append( lConstraint_for_X )
return lConstraints
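# Illustrative note (not authoritative): each constraint tuple built above has
# the form
#   ("ATMOSTONE", [range(nb_pixels), []], [_state, None], [False, None])
# i.e. at most one pixel node may take state _state, whatever the picture
# label; the empty list / None entries mean that no unaries, state or negation
# are given for the picture node type.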
def makeItEasy(lX_pict_feat, lY_pict):
"""
    add the picture label as a feature...
"""
for X,y in zip(lX_pict_feat, lY_pict):
X[0] = y
def appendIntVectorToCsv(fd, name, aV):
saV = np.array_str(aV, max_line_width=99999, precision=0)
saV = saV.strip()[1:-1] #removal of brackets
saV = ','.join(saV.split())
fd.write("%s,%s\n"%(name, saV))
fd.flush()
def REPORT(l_Y_GT, lY_Pred, t=None, ncell=NCELL, filename=None, bHisto=False, name=""):
if t:
print("\t( predict DONE IN %.1fs)"%t)
_flat_GT, _flat_P = (np.hstack([y.ravel() for y in l_Y_GT]),
np.hstack([y.ravel() for y in lY_Pred]))
confmat = confusion_matrix(_flat_GT, _flat_P)
print(confmat)
print("\ttrace =", confmat.trace())
score = accuracy_score(_flat_GT, _flat_P)
print("\tAccuracy= %.3f"%score)
#CSV out?
if filename:
histo = np.histogram(np.hstack(_flat_GT), bins=range(ncell+2))
diag = np.diag(confmat)
with open(filename, "ab") as fdCSV:
if bHisto: appendIntVectorToCsv(fdCSV, name+"_histo,", histo[0])
appendIntVectorToCsv(fdCSV, name+",%.3f"%score, diag)
if __name__ == '__main__':
if bFIXED_RANDOM_SEED:
np.random.seed(1605)
random.seed(98)
else:
np.random.seed()
random.seed()
print("Please be patient...")
snakes = load_snakes()
#--------------------------------------------------------------------------------------------------
X_train, Y_train = snakes['X_train'], snakes['Y_train']
#X_train, Y_train = X_train[:3], Y_train[:3]
print("TRAIN SET ", len(X_train), len(Y_train))
if NCELL <10: X_train, Y_train = shorten_snakes(X_train, Y_train, NCELL)
nb_hidden, X_train, Y_train = augmentWithNoSnakeImages(X_train, Y_train, "train", False, nCell=NCELL)
print("TRAIN SET ",len(X_train), len(Y_train))
Y_train_pict = np.array([1]*(len(X_train)-nb_hidden) + [0]*nb_hidden)
X_train = [one_hot_colors(x) for x in X_train]
X_train, Y_train, Y_train_pict = shuffle_in_unison(X_train, Y_train, Y_train_pict)
X_train_pict_feat = prepare_picture_data(X_train)
if bMAKE_PICT_EASY:
print("Making the train picture task easy")
makeItEasy(X_train_pict_feat, Y_train_pict)
X_train_directions, X_train_edge_features = prepare_data(X_train)
#--------------------------------------------------------------------------------------------------
X_test, Y_test = snakes['X_test'], snakes['Y_test']
if NCELL <10: X_test, Y_test = shorten_snakes(X_test, Y_test, NCELL)
nb_hidden, X_test, Y_test = augmentWithNoSnakeImages(X_test, Y_test, "test", False, nCell=NCELL)
Y_test_pict = np.array([1]*(len(X_test)-nb_hidden) + [0]*nb_hidden)
print("TEST SET ", len(X_test), len(Y_test))
X_test = [one_hot_colors(x) for x in X_test]
#useless X_test, Y_test, Y_test_pict = shuffle_in_unison(X_test, Y_test, Y_test_pict)
X_test_pict_feat = prepare_picture_data(X_test)
if bMAKE_PICT_EASY:
print("Making the test picture task easy")
makeItEasy(X_test_pict_feat, Y_test_pict)
X_test_directions, X_test_edge_features = prepare_data(X_test)
#--------------------------------------------------------------------------------------------------
print("==================================================================="
"===================================")
if True:
from pystruct.models.edge_feature_graph_crf import EdgeFeatureGraphCRF
print("ONE TYPE TRAINING AND TESTING: PIXELS")
# inference = 'ad3+'
# inference = 'qpbo'
inference=INFERENCE
inference = "qpbo"
crf = EdgeFeatureGraphCRF(inference_method=inference)
ssvm = OneSlackSSVM(crf, inference_cache=50, C=.1, tol=0,
max_iter=MAXITER,
n_jobs=N_JOBS
#,verbose=1
, switch_to='ad3'
)
Y_train_flat = [y_.ravel() for y_ in Y_train]
print( "\ttrain label histogram : ",
np.histogram(np.hstack(Y_train_flat), bins=range(NCELL+2)))
t0 = time.time()
ssvm.fit(X_train_edge_features, Y_train_flat)
print("FIT DONE IN %.1fs"%(time.time() - t0))
sys.stdout.flush()
t0 = time.time()
_Y_pred = ssvm.predict( X_test_edge_features )
REPORT(Y_test, _Y_pred, time.time() - t0)
#--------------------------------------------------------------------------------------------------
if True:
print("_"*50)
print("ONE TYPE TRAINING AND TESTING: PICTURES")
print( "\ttrain label histogram : ",
np.histogram(Y_train_pict, bins=range(3)))
lr = LogisticRegression(class_weight='balanced')
mdl = GridSearchCV(lr , {'C':[0.1, 0.5, 1.0, 2.0] })
XX = np.vstack(X_train_pict_feat)
t0 = time.time()
mdl.fit(XX, Y_train_pict)
print("FIT DONE IN %.1fs"%(time.time() - t0))
t0 = time.time()
_Y_pred = mdl.predict( np.vstack(X_test_pict_feat) )
REPORT([Y_test_pict], _Y_pred, time.time() - t0)
#--------------------------------------------------------------------------------------------------
print("==================================================================="
"===================================")
# first, train on X with directions only:
#crf = NodeTypeEdgeFeatureGraphCRF(1, [11], [45], [[2]], inference_method=inference)
# first, train on X with directions only:
# l_weights = [
# [10.0/200] + [10.0/200]*10,
# [10.0/20 , 10.0/20]
# ]
# print("WEIGHTS:", l_weights
if nbSWAP_Pixel_Pict_TYPES %2 == 0:
l_n_states = [NCELL+1, 2] # 11 states for pixel nodes, 2 states for pictures
l_n_feat = [45, 7] # 45 features for pixels, 7 for pictures
ll_n_feat = [[180, 45], # 2 feature between pixel nodes, 1 between pixel and picture
[45 , 0]] # , nothing between picture nodes (no picture_to_picture edge anyway)
else:
l_n_states = [2, NCELL+1]
l_n_feat = [7, 45]
ll_n_feat = [[0, 45], [45 , 180]]
if not sMODELFILE or not os.path.exists(sMODELFILE):
print(" TRAINING MULTI-TYPE MODEL ")
#TRAINING
crf = NodeTypeEdgeFeatureGraphCRF(2, # How many node types?
l_n_states, # How many states per type?
l_n_feat, # How many node features per type?
ll_n_feat, # How many edge features per type x type?
inference_method=INFERENCE
# , l_class_weight = l_weights
)
print(crf)
ssvm = OneSlackSSVM(crf, inference_cache=50, C=.1, tol=0,
max_iter=MAXITER,
n_jobs=N_JOBS
#,verbose=1
#, switch_to='ad3'
)
print("==============================================================="
"=======================================")
print("YY[0].shape", Y_train[0].shape)
XX, YY = convertToTwoType(X_train,
X_train_edge_features, # list of node_feat , edges, edge_feat for pixel nodes
Y_train,
X_train_pict_feat, #a list of picture_node_features
Y_train_pict, #a list of integers [0,1]
nCell=NCELL)
if nbSWAP_Pixel_Pict_TYPES:
if nbSWAP_Pixel_Pict_TYPES % 2 == 0:
XX, YY = swap_node_types([1,0], [NCELL+1, 2], XX, YY)
XX, YY = swap_node_types([1,0], [2 , NCELL+1], XX, YY)
else:
XX, YY = swap_node_types([1,0], [NCELL+1, 2], XX, YY)
print( "\tlabel histogram : ",
np.histogram(np.hstack([y.ravel() for y in YY]),
bins=range(14)))
print("YY[0].shape", YY[0].shape)
crf.initialize(XX, YY)# check if the data is properly built
sys.stdout.flush()
t0 = time.time()
ssvm.fit(XX, YY)
print("FIT DONE IN %.1fs"%(time.time() - t0))
sys.stdout.flush()
ssvm.alphas = None
ssvm.constraints_ = None
ssvm.inference_cache_ = None
if sMODELFILE:
print("Saving model in: ", sMODELFILE)
with open(sMODELFILE, "wb") as fd:
                pickle.dump(ssvm, fd)
else:
#REUSE PREVIOUSLY TRAINED MODEL
        print(" REUSING PREVIOUSLY TRAINED MULTI-TYPE MODEL: ", sMODELFILE)
with open(sMODELFILE, "rb") as fd:
ssvm = pickle.load(fd)
print("INFERENCE WITH ", INFERENCE)
XX_test, YY_test =convertToTwoType(X_test,
X_test_edge_features, # list of node_feat , edges, edge_feat for pixel nodes
Y_test,
X_test_pict_feat, #a list of picture_node_features
Y_test_pict, #a list of integers [0,1]
nCell=NCELL)
print( "\tlabel histogram (PIXELs and PICTUREs): ",
np.histogram(np.hstack([y.ravel() for y in YY_test]),
bins=range(14)))
# l_constraints = listConstraints(XX_test)
l_constraints = listConstraints_ATMOSTONE(XX_test)
if nbSWAP_Pixel_Pict_TYPES %2 == 1:
XX_test, YY_test, l_constraints = swap_node_types([1,0], [NCELL+1, 2], XX_test, YY_test, l_constraints)
print("\t- results without constraints (using %s)"%INFERENCE)
t0 = time.time()
YY_pred = ssvm.predict( XX_test )
REPORT(YY_test, YY_pred, time.time() - t0)
print("_"*50)
print("\t- results exploiting constraints (using ad3+)")
ssvm.model.inference_method = "ad3+"
t0 = time.time()
YY_pred = ssvm.predict( XX_test, l_constraints )
REPORT(YY_test, YY_pred, time.time() - t0)
print("_"*50)
if INFERENCE == "ad3":
ssvm.model.inference_method = "ad3+"
else:
ssvm.model.inference_method = "ad3"
print("\t- results without constraints (using %s)"%ssvm.model.inference_method)
t0 = time.time()
YY_pred = ssvm.predict( XX_test )
REPORT(YY_test, YY_pred, time.time() - t0)
print("DONE")
printConfig()
"""
""" | bsd-2-clause |
rosenbrockc/acorn | acorn/importer.py | 1 | 6198 | """Class for altering the search paths within acorn for new/undecorated packages
(i.e., those that weren't checked explicitly when acorn was first designed).
"""
from acorn import msg
_packages = {}
"""dict: keys are package names; values are `bool` indicating whether `acorn`
should decorate the package with the loggers.
"""
_special = ["scipy", "sklearn", "matplotlib"]
"""list: of packages that have special extensions or import behavior and cannot
be deco-imported automatically by `acorn`.
"""
def _load_package_config(reload_=False):
"""Loads the package configurations from the global `acorn.cfg` file.
"""
global _packages
from acorn.config import settings
packset = settings("acorn", reload_)
if packset.has_section("acorn.packages"):
for package, value in packset.items("acorn.packages"):
_packages[package] = value.strip() == "1"
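
# Illustrative `acorn.cfg` snippet consumed by _load_package_config above; the
# section name and the "1"/"0" convention follow the code, while the specific
# package names are hypothetical:
#
#   [acorn.packages]
#   numpy = 1
#   pandas = 1
#   scipy = 0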
def reload_cache():
"""Reloads the configuration file settings for which packages to decorate.
"""
global _packages
_packages = {}
_load_package_config(True)
hooks = []
"""list: of package names that have *already* been intercepted by our
loader/decorator. This allows us to skip them the next time they are imported
(by our scripts) so that we don't get into an infinite loop.
"""
import sys
class AcornMetaImportFinder(object):
"""Overrides the default `import` behavior of python for packages so that we
can intercept and decorate certain packages, but not others.
Args:
prefix (str): prefix on import full names before they are considered
loadable by acorn. Also available as an attribute.
"""
def __init__(self, prefix="acorn"):
self.prefix = prefix
def find_module(self, fullname, packpath=None):
global hooks
if (fullname[0:len(self.prefix)] != self.prefix):
return None
if fullname.count('.') > 1:
#Acorn isn't setup to work with sub-packages.
return None
already = False
package = fullname.split('.')[-1]
if package in _packages and _packages[package]:
if package not in hooks:
#We can import this one and decorate it using the default or
#custom tools in acorn.
hooks.append(package)
msg.okay("Import override: '{}'.".format(package), 3)
return AcornDecoratingLoader(package)
else:
already = True
if not already:
msg.info("Skipping import override: '{}'.".format(packpath), 3)
def load_decorate(package):
"""Imports and decorates the package with the specified name.
"""
    # We import the decoration logic from acorn and then overwrite the entry in
    # sys.modules for this package with the decorated, original package.
from acorn.logging.decoration import set_decorating, decorating
#Before we do any imports, we need to set that we are decorating so that
#everything works as if `acorn` wasn't even here.
origdecor = decorating
set_decorating(True)
#If we try and import the module directly, we will get stuck in a loop; at
#some point we must invoke the built-in module loader from python. We do
#this by removing our sys.path hook.
import sys
from importlib import import_module
apack = import_module(package)
from acorn.logging.decoration import decorate
decorate(apack)
sys.modules["acorn.{}".format(package)] = apack
#Set the decoration back to what it was.
from acorn.logging.decoration import set_decorating
set_decorating(origdecor)
return apack
class AcornStandardLoader(object): # pragma: no cover
#I am having issues with intercepting all loads. At the least, pytest won't
#import properly, so I am disabling this for now.
"""Loads packages *without* decoration, but with the correct flags enabled
so that if those packages imported acorn decorated ones, they don't run all
the logging machinery during imports.
"""
def __init__(self, package):
self.package = package
def load_module(self, fullname):
from acorn.logging.decoration import set_decorating, decorating
global hooks
odecor = decorating
set_decorating(True)
from importlib import import_module
mod = import_module(fullname, self.package)
hooks.remove(fullname)
set_decorating(odecor)
return mod
class AcornDecoratingLoader(object):
"""Loads packages that need to be decorated for automatic logging by
`acorn`.
Args:
package (str): name of the package being loaded.
"""
def __init__(self, package):
self.package = package
def load_module(self, fullname):
if fullname in sys.modules: # pragma: no cover.
msg.info("Reusing existing import for '{}'".format(fullname), 3)
mod = sys.modules[fullname]
else:
msg.info("Decorating import for '{}'".format(self.package), 3)
#First we import the package, then the specified sub-module that
#they asked for.
if self.package in _special:
from importlib import import_module
mod = import_module(fullname)
else:
mod = load_decorate(self.package)
return mod
_load_package_config()
sys.meta_path.insert(0, AcornMetaImportFinder())
#TODO: we still need to get a package manager going. When we import the modules,
#we should always use the original package items. sklearn dies when numpy has
#already been decorated because our subclass doesn't jive with the c-extension
#modules it uses. But it is okay if it can use the original numpy class. If we
#get our own special __getattr__ method for acorn, we could return the decorated
#versions and still have the other imports work correctly. We lose the ability
#for the user to import directly from the module later (they would always have
#to go through acorn). Or perhaps we could look at the value of the global
#`decorating` and then decide which of the objects to return in getattr.
| mit |
gallantlab/pycortex | examples/surface_analyses/plot_geodesic_path.py | 1 | 1525 | """
=======================
Plotting Geodesic Paths
=======================
This will plot a geodesic path between two vertices on the cortical surface.
This path is based on geodesic distances across the surface. The path starts
at the given endpoint and selects the neighbor of that point in the surface
map that is closest to the other endpoint. This process continues iteratively
until the last vertex in the path is the other endpoint.
All you need to do is supply a surface object and two vertices on that surface
and you can find the geodesic path. This script additionally makes a plot to
show all of the vertices listed in the path.
"""
import cortex
import cortex.polyutils
import numpy as np
import matplotlib.pyplot as plt
subject = "S1"
# First we need to import the surfaces for this subject
surfs = [cortex.polyutils.Surface(*d)
for d in cortex.db.get_surf(subject, "fiducial")]
numl = surfs[0].pts.shape[0]
# Now we need to pick the start and end points of the line we will draw
pt_a = 100
pt_b = 50000
# Then we find the geodesic path between these points
path = surfs[0].geodesic_path(pt_a, pt_b)
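
# A minimal sketch of the greedy selection described in the docstring above.
# It assumes the surface exposes a `geodesic_distance(verts)` method and takes
# a `neighbors(v)` helper returning the vertices adjacent to `v`; both are
# stand-ins used only to illustrate the idea -- the real work is done by
# `geodesic_path` above.
def greedy_geodesic_path(surf, neighbors, start, end):
    dist = surf.geodesic_distance([end])  # distance from every vertex to `end`
    greedy_path = [start]
    while greedy_path[-1] != end:
        # step to the neighbor of the current vertex that is closest to `end`
        greedy_path.append(min(neighbors(greedy_path[-1]), key=lambda v: dist[v]))
    return greedy_path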
# In order to plot this on the cortical surface, we need an array that is the
# same size as the number of vertices in the left hemisphere
path_data = np.zeros(numl)
for v in path:
path_data[v] = 1
# And now plot these distances onto the cortical surface
path_verts = cortex.Vertex(path_data, subject, cmap="Blues_r")
cortex.quickshow(path_verts, with_colorbar=False)
plt.show()
| bsd-2-clause |
ndtrung81/lammps | examples/SPIN/test_problems/validation_nvt/plot_nvt.py | 7 | 1151 | #!/usr/bin/env python3
import numpy as np, pylab, tkinter
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from decimal import *
import sys, string, os
argv = sys.argv
if len(argv) != 3:
print("Syntax: ./plot_precession.py res_nvt_spin.dat res_nvt_lattice.dat")
sys.exit()
dirname = os.path.join(os.getcwd(), "Feb_07")
nvtspin_file = sys.argv[1]
nvtlatt_file = sys.argv[2]
ts,tmags,temps = np.loadtxt(nvtspin_file,skiprows=0, usecols=(1,2,3),unpack=True)
tl,tmagl,templ = np.loadtxt(nvtlatt_file,skiprows=0, usecols=(1,2,3),unpack=True)
fig = plt.figure(figsize=(8,8))
ax1 = plt.subplot(2,1,1)
ax2 = plt.subplot(2,1,2)
ax1.plot(ts, tmags, 'r-', label='Spin temp. (thermostat)')
ax1.plot(ts, temps, 'g-', label='Lattice temp.')
ax1.set_yscale("log")
ax1.set_ylabel("T (K)")
ax1.legend(loc=3)
ax2.plot(tl, tmagl, 'r-', label='Spin temp.')
ax2.plot(tl, templ, 'g-', label='Lattice temp. (thermostat)')
ax2.set_yscale("log")
ax2.set_ylabel("T (K)")
ax2.legend(loc=3)
plt.xlabel('Time (in ps)')
plt.legend()
plt.show()
fig.savefig(os.path.join(os.getcwd(), "nvt_spin_lattice.pdf"), bbox_inches="tight")
plt.close(fig)
| gpl-2.0 |
baojianzhou/DLReadingGroup | keras/examples/neural_doodle.py | 2 | 14079 | '''Neural doodle with Keras
Script Usage:
# Arguments:
```
--nlabels: # of regions (colors) in mask images
--style-image: image to learn style from
--style-mask: semantic labels for style image
--target-mask: semantic labels for target image (your doodle)
--content-image: optional image to learn content from
--target-image-prefix: path prefix for generated target images
```
# Example 1: doodle using a style image, style mask
and target mask.
```
python neural_doodle.py --nlabels 4 --style-image Monet/style.png \
--style-mask Monet/style_mask.png --target-mask Monet/target_mask.png \
--target-image-prefix generated/monet
```
# Example 2: doodle using a style image, style mask,
target mask and an optional content image.
```
python neural_doodle.py --nlabels 4 --style-image Renoir/style.png \
--style-mask Renoir/style_mask.png --target-mask Renoir/target_mask.png \
--content-image Renoir/creek.jpg \
--target-image-prefix generated/renoir
```
References:
[Dmitry Ulyanov's blog on fast-neural-doodle](http://dmitryulyanov.github.io/feed-forward-neural-doodle/)
[Torch code for fast-neural-doodle](https://github.com/DmitryUlyanov/fast-neural-doodle)
[Torch code for online-neural-doodle](https://github.com/DmitryUlyanov/online-neural-doodle)
[Paper Texture Networks: Feed-forward Synthesis of Textures and Stylized Images](http://arxiv.org/abs/1603.03417)
[Discussion on parameter tuning](https://github.com/fchollet/keras/issues/3705)
Resources:
Example images can be downloaded from
https://github.com/DmitryUlyanov/fast-neural-doodle/tree/master/data
'''
from __future__ import print_function
import time
import argparse
import numpy as np
from scipy.optimize import fmin_l_bfgs_b
from scipy.misc import imread, imsave
from keras import backend as K
from keras.layers import Input, AveragePooling2D
from keras.models import Model
from keras.preprocessing.image import load_img, img_to_array
from keras.applications import vgg19
# Command line arguments
parser = argparse.ArgumentParser(description='Keras neural doodle example')
parser.add_argument('--nlabels', type=int,
help='number of semantic labels'
                    ' (regions in different colors)'
' in style_mask/target_mask')
parser.add_argument('--style-image', type=str,
help='path to image to learn style from')
parser.add_argument('--style-mask', type=str,
help='path to semantic mask of style image')
parser.add_argument('--target-mask', type=str,
help='path to semantic mask of target image')
parser.add_argument('--content-image', type=str, default=None,
help='path to optional content image')
parser.add_argument('--target-image-prefix', type=str,
help='path prefix for generated results')
args = parser.parse_args()
style_img_path = args.style_image
style_mask_path = args.style_mask
target_mask_path = args.target_mask
content_img_path = args.content_image
target_img_prefix = args.target_image_prefix
use_content_img = content_img_path is not None
num_labels = args.nlabels
num_colors = 3 # RGB
# determine image sizes based on target_mask
ref_img = imread(target_mask_path)
img_nrows, img_ncols = ref_img.shape[:2]
total_variation_weight = 50.
style_weight = 1.
content_weight = 0.1 if use_content_img else 0
content_feature_layers = ['block5_conv2']
# To get better generation qualities, use more conv layers for style features
style_feature_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
'block4_conv1', 'block5_conv1']
# helper functions for reading/processing images
def preprocess_image(image_path):
img = load_img(image_path, target_size=(img_nrows, img_ncols))
img = img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return img
def deprocess_image(x):
if K.image_data_format() == 'channels_first':
x = x.reshape((3, img_nrows, img_ncols))
x = x.transpose((1, 2, 0))
else:
x = x.reshape((img_nrows, img_ncols, 3))
# Remove zero-center by mean pixel
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
# 'BGR'->'RGB'
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype('uint8')
return x
def kmeans(xs, k):
assert xs.ndim == 2
try:
from sklearn.cluster import k_means
_, labels, _ = k_means(xs.astype('float64'), k)
except ImportError:
from scipy.cluster.vq import kmeans2
_, labels = kmeans2(xs, k, missing='raise')
return labels
def load_mask_labels():
'''Load both target and style masks.
A mask image (nr x nc) with m labels/colors will be loaded
as a 4D boolean tensor: (1, m, nr, nc) for 'channels_first' or (1, nr, nc, m) for 'channels_last'
'''
target_mask_img = load_img(target_mask_path,
target_size=(img_nrows, img_ncols))
target_mask_img = img_to_array(target_mask_img)
style_mask_img = load_img(style_mask_path,
target_size=(img_nrows, img_ncols))
style_mask_img = img_to_array(style_mask_img)
if K.image_data_format() == 'channels_first':
mask_vecs = np.vstack([style_mask_img.reshape((3, -1)).T,
target_mask_img.reshape((3, -1)).T])
else:
mask_vecs = np.vstack([style_mask_img.reshape((-1, 3)),
target_mask_img.reshape((-1, 3))])
labels = kmeans(mask_vecs, num_labels)
style_mask_label = labels[:img_nrows *
img_ncols].reshape((img_nrows, img_ncols))
target_mask_label = labels[img_nrows *
img_ncols:].reshape((img_nrows, img_ncols))
stack_axis = 0 if K.image_data_format() == 'channels_first' else -1
    style_mask = np.stack([style_mask_label == r for r in range(num_labels)],
                          axis=stack_axis)
    target_mask = np.stack([target_mask_label == r for r in range(num_labels)],
                           axis=stack_axis)
return (np.expand_dims(style_mask, axis=0),
np.expand_dims(target_mask, axis=0))
# Create tensor variables for images
if K.image_data_format() == 'channels_first':
shape = (1, num_colors, img_nrows, img_ncols)
else:
shape = (1, img_nrows, img_ncols, num_colors)
style_image = K.variable(preprocess_image(style_img_path))
target_image = K.placeholder(shape=shape)
if use_content_img:
content_image = K.variable(preprocess_image(content_img_path))
else:
content_image = K.zeros(shape=shape)
images = K.concatenate([style_image, target_image, content_image], axis=0)
# Create tensor variables for masks
raw_style_mask, raw_target_mask = load_mask_labels()
style_mask = K.variable(raw_style_mask.astype('float32'))
target_mask = K.variable(raw_target_mask.astype('float32'))
masks = K.concatenate([style_mask, target_mask], axis=0)
# index constants for images and tasks variables
STYLE, TARGET, CONTENT = 0, 1, 2
# Build image model, mask model and use layer outputs as features
# image model as VGG19
image_model = vgg19.VGG19(include_top=False, input_tensor=images)
# mask model as a series of pooling
mask_input = Input(tensor=masks, shape=(None, None, None), name='mask_input')
x = mask_input
for layer in image_model.layers[1:]:
name = 'mask_%s' % layer.name
if 'conv' in layer.name:
        x = AveragePooling2D((3, 3), strides=(1, 1),
                             name=name, border_mode='same')(x)
elif 'pool' in layer.name:
x = AveragePooling2D((2, 2), name=name)(x)
mask_model = Model(mask_input, x)
# Collect features from image_model and task_model
image_features = {}
mask_features = {}
for img_layer, mask_layer in zip(image_model.layers, mask_model.layers):
if 'conv' in img_layer.name:
assert 'mask_' + img_layer.name == mask_layer.name
layer_name = img_layer.name
img_feat, mask_feat = img_layer.output, mask_layer.output
image_features[layer_name] = img_feat
mask_features[layer_name] = mask_feat
# Define loss functions
def gram_matrix(x):
assert K.ndim(x) == 3
features = K.batch_flatten(x)
gram = K.dot(features, K.transpose(features))
return gram
def region_style_loss(style_image, target_image, style_mask, target_mask):
'''Calculate style loss between style_image and target_image,
for one common region specified by their (boolean) masks
'''
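    # In symbols (mirrors the code below): with masked feature maps F_s, F_t of
    # shape (channels x pixels), mask means mu_s, mu_t and C channels,
    #     G(F) = F.F^T,   loss = mean((G(F_s)/(mu_s*C) - G(F_t)/(mu_t*C))^2)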
assert 3 == K.ndim(style_image) == K.ndim(target_image)
assert 2 == K.ndim(style_mask) == K.ndim(target_mask)
if K.image_data_format() == 'channels_first':
masked_style = style_image * style_mask
masked_target = target_image * target_mask
num_channels = K.shape(style_image)[0]
else:
masked_style = K.permute_dimensions(
style_image, (2, 0, 1)) * style_mask
masked_target = K.permute_dimensions(
target_image, (2, 0, 1)) * target_mask
num_channels = K.shape(style_image)[-1]
s = gram_matrix(masked_style) / K.mean(style_mask) / num_channels
c = gram_matrix(masked_target) / K.mean(target_mask) / num_channels
return K.mean(K.square(s - c))
def style_loss(style_image, target_image, style_masks, target_masks):
'''Calculate style loss between style_image and target_image,
in all regions.
'''
assert 3 == K.ndim(style_image) == K.ndim(target_image)
assert 3 == K.ndim(style_masks) == K.ndim(target_masks)
loss = K.variable(0)
    for i in range(num_labels):
if K.image_data_format() == 'channels_first':
style_mask = style_masks[i, :, :]
target_mask = target_masks[i, :, :]
else:
style_mask = style_masks[:, :, i]
target_mask = target_masks[:, :, i]
loss += region_style_loss(style_image,
target_image, style_mask, target_mask)
return loss
def content_loss(content_image, target_image):
return K.sum(K.square(target_image - content_image))
def total_variation_loss(x):
assert 4 == K.ndim(x)
if K.image_data_format() == 'channels_first':
a = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] -
x[:, :, 1:, :img_ncols - 1])
b = K.square(x[:, :, :img_nrows - 1, :img_ncols - 1] -
x[:, :, :img_nrows - 1, 1:])
else:
a = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] -
x[:, 1:, :img_ncols - 1, :])
b = K.square(x[:, :img_nrows - 1, :img_ncols - 1, :] -
x[:, :img_nrows - 1, 1:, :])
return K.sum(K.pow(a + b, 1.25))
# Overall loss is the weighted sum of content_loss, style_loss and tv_loss
# Each individual loss uses features from image/mask models.
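# In symbols (weights as defined near the top of the script):
#   loss =   content_weight * sum over content layers of L_content
#          + (style_weight / len(style_feature_layers)) * sum over style layers of L_style
#          + total_variation_weight * L_tv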
loss = K.variable(0)
for layer in content_feature_layers:
content_feat = image_features[layer][CONTENT, :, :, :]
target_feat = image_features[layer][TARGET, :, :, :]
loss += content_weight * content_loss(content_feat, target_feat)
for layer in style_feature_layers:
style_feat = image_features[layer][STYLE, :, :, :]
target_feat = image_features[layer][TARGET, :, :, :]
style_masks = mask_features[layer][STYLE, :, :, :]
target_masks = mask_features[layer][TARGET, :, :, :]
sl = style_loss(style_feat, target_feat, style_masks, target_masks)
loss += (style_weight / len(style_feature_layers)) * sl
loss += total_variation_weight * total_variation_loss(target_image)
loss_grads = K.gradients(loss, target_image)
# Evaluator class for computing efficiency
outputs = [loss]
if isinstance(loss_grads, (list, tuple)):
outputs += loss_grads
else:
outputs.append(loss_grads)
f_outputs = K.function([target_image], outputs)
def eval_loss_and_grads(x):
if K.image_data_format() == 'channels_first':
x = x.reshape((1, 3, img_nrows, img_ncols))
else:
x = x.reshape((1, img_nrows, img_ncols, 3))
outs = f_outputs([x])
loss_value = outs[0]
if len(outs[1:]) == 1:
grad_values = outs[1].flatten().astype('float64')
else:
grad_values = np.array(outs[1:]).flatten().astype('float64')
return loss_value, grad_values
class Evaluator(object):
def __init__(self):
self.loss_value = None
self.grads_values = None
def loss(self, x):
assert self.loss_value is None
loss_value, grad_values = eval_loss_and_grads(x)
self.loss_value = loss_value
self.grad_values = grad_values
return self.loss_value
def grads(self, x):
assert self.loss_value is not None
grad_values = np.copy(self.grad_values)
self.loss_value = None
self.grad_values = None
return grad_values
evaluator = Evaluator()
# Generate images by iterative optimization
if K.image_data_format() == 'channels_first':
x = np.random.uniform(0, 255, (1, 3, img_nrows, img_ncols)) - 128.
else:
x = np.random.uniform(0, 255, (1, img_nrows, img_ncols, 3)) - 128.
for i in range(50):
print('Start of iteration', i)
start_time = time.time()
x, min_val, info = fmin_l_bfgs_b(evaluator.loss, x.flatten(),
fprime=evaluator.grads, maxfun=20)
print('Current loss value:', min_val)
# save current generated image
img = deprocess_image(x.copy())
fname = target_img_prefix + '_at_iteration_%d.png' % i
imsave(fname, img)
end_time = time.time()
print('Image saved as', fname)
print('Iteration %d completed in %ds' % (i, end_time - start_time))
| apache-2.0 |
boomsbloom/dtm-fmri | DTM/for_gensim/lib/python2.7/site-packages/sklearn/linear_model/tests/test_base.py | 83 | 15089 | # Author: Alexandre Gramfort <[email protected]>
# Fabian Pedregosa <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from scipy import sparse
from scipy import linalg
from itertools import product
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import ignore_warnings
from sklearn.linear_model.base import LinearRegression
from sklearn.linear_model.base import _preprocess_data
from sklearn.linear_model.base import sparse_center_data, center_data
from sklearn.linear_model.base import _rescale_data
from sklearn.utils import check_random_state
from sklearn.utils.testing import assert_greater
from sklearn.datasets.samples_generator import make_sparse_uncorrelated
from sklearn.datasets.samples_generator import make_regression
rng = np.random.RandomState(0)
def test_linear_regression():
# Test LinearRegression on a simple dataset.
# a simple dataset
X = [[1], [2]]
Y = [1, 2]
reg = LinearRegression()
reg.fit(X, Y)
assert_array_almost_equal(reg.coef_, [1])
assert_array_almost_equal(reg.intercept_, [0])
assert_array_almost_equal(reg.predict(X), [1, 2])
# test it also for degenerate input
X = [[1]]
Y = [0]
reg = LinearRegression()
reg.fit(X, Y)
assert_array_almost_equal(reg.coef_, [0])
assert_array_almost_equal(reg.intercept_, [0])
assert_array_almost_equal(reg.predict(X), [0])
def test_linear_regression_sample_weights():
# TODO: loop over sparse data as well
rng = np.random.RandomState(0)
# It would not work with under-determined systems
for n_samples, n_features in ((6, 5), ):
y = rng.randn(n_samples)
X = rng.randn(n_samples, n_features)
sample_weight = 1.0 + rng.rand(n_samples)
for intercept in (True, False):
# LinearRegression with explicit sample_weight
reg = LinearRegression(fit_intercept=intercept)
reg.fit(X, y, sample_weight=sample_weight)
coefs1 = reg.coef_
inter1 = reg.intercept_
assert_equal(reg.coef_.shape, (X.shape[1], )) # sanity checks
assert_greater(reg.score(X, y), 0.5)
# Closed form of the weighted least square
# theta = (X^T W X)^(-1) * X^T W y
W = np.diag(sample_weight)
if intercept is False:
X_aug = X
else:
dummy_column = np.ones(shape=(n_samples, 1))
X_aug = np.concatenate((dummy_column, X), axis=1)
coefs2 = linalg.solve(X_aug.T.dot(W).dot(X_aug),
X_aug.T.dot(W).dot(y))
if intercept is False:
assert_array_almost_equal(coefs1, coefs2)
else:
assert_array_almost_equal(coefs1, coefs2[1:])
assert_almost_equal(inter1, coefs2[0])
def test_raises_value_error_if_sample_weights_greater_than_1d():
# Sample weights must be either scalar or 1D
n_sampless = [2, 3]
n_featuress = [3, 2]
for n_samples, n_features in zip(n_sampless, n_featuress):
X = rng.randn(n_samples, n_features)
y = rng.randn(n_samples)
sample_weights_OK = rng.randn(n_samples) ** 2 + 1
sample_weights_OK_1 = 1.
sample_weights_OK_2 = 2.
reg = LinearRegression()
# make sure the "OK" sample weights actually work
reg.fit(X, y, sample_weights_OK)
reg.fit(X, y, sample_weights_OK_1)
reg.fit(X, y, sample_weights_OK_2)
def test_fit_intercept():
# Test assertions on betas shape.
X2 = np.array([[0.38349978, 0.61650022],
[0.58853682, 0.41146318]])
X3 = np.array([[0.27677969, 0.70693172, 0.01628859],
[0.08385139, 0.20692515, 0.70922346]])
y = np.array([1, 1])
lr2_without_intercept = LinearRegression(fit_intercept=False).fit(X2, y)
lr2_with_intercept = LinearRegression(fit_intercept=True).fit(X2, y)
lr3_without_intercept = LinearRegression(fit_intercept=False).fit(X3, y)
lr3_with_intercept = LinearRegression(fit_intercept=True).fit(X3, y)
assert_equal(lr2_with_intercept.coef_.shape,
lr2_without_intercept.coef_.shape)
assert_equal(lr3_with_intercept.coef_.shape,
lr3_without_intercept.coef_.shape)
assert_equal(lr2_without_intercept.coef_.ndim,
lr3_without_intercept.coef_.ndim)
def test_linear_regression_sparse(random_state=0):
# Test that linear regression also works with sparse data
random_state = check_random_state(random_state)
for i in range(10):
n = 100
X = sparse.eye(n, n)
beta = random_state.rand(n)
y = X * beta[:, np.newaxis]
ols = LinearRegression()
ols.fit(X, y.ravel())
assert_array_almost_equal(beta, ols.coef_ + ols.intercept_)
assert_array_almost_equal(ols.predict(X) - y.ravel(), 0)
def test_linear_regression_multiple_outcome(random_state=0):
# Test multiple-outcome linear regressions
X, y = make_regression(random_state=random_state)
Y = np.vstack((y, y)).T
n_features = X.shape[1]
reg = LinearRegression(fit_intercept=True)
reg.fit((X), Y)
assert_equal(reg.coef_.shape, (2, n_features))
Y_pred = reg.predict(X)
reg.fit(X, y)
y_pred = reg.predict(X)
assert_array_almost_equal(np.vstack((y_pred, y_pred)).T, Y_pred, decimal=3)
def test_linear_regression_sparse_multiple_outcome(random_state=0):
# Test multiple-outcome linear regressions with sparse data
random_state = check_random_state(random_state)
X, y = make_sparse_uncorrelated(random_state=random_state)
X = sparse.coo_matrix(X)
Y = np.vstack((y, y)).T
n_features = X.shape[1]
ols = LinearRegression()
ols.fit(X, Y)
assert_equal(ols.coef_.shape, (2, n_features))
Y_pred = ols.predict(X)
ols.fit(X, y.ravel())
y_pred = ols.predict(X)
assert_array_almost_equal(np.vstack((y_pred, y_pred)).T, Y_pred, decimal=3)
def test_preprocess_data():
n_samples = 200
n_features = 2
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples)
expected_X_mean = np.mean(X, axis=0)
expected_X_norm = np.std(X, axis=0) * np.sqrt(X.shape[0])
expected_y_mean = np.mean(y, axis=0)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=False, normalize=False)
assert_array_almost_equal(X_mean, np.zeros(n_features))
assert_array_almost_equal(y_mean, 0)
assert_array_almost_equal(X_norm, np.ones(n_features))
assert_array_almost_equal(Xt, X)
assert_array_almost_equal(yt, y)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=False)
assert_array_almost_equal(X_mean, expected_X_mean)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(X_norm, np.ones(n_features))
assert_array_almost_equal(Xt, X - expected_X_mean)
assert_array_almost_equal(yt, y - expected_y_mean)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=True)
assert_array_almost_equal(X_mean, expected_X_mean)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(X_norm, expected_X_norm)
assert_array_almost_equal(Xt, (X - expected_X_mean) / expected_X_norm)
assert_array_almost_equal(yt, y - expected_y_mean)
def test_preprocess_data_multioutput():
n_samples = 200
n_features = 3
n_outputs = 2
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples, n_outputs)
expected_y_mean = np.mean(y, axis=0)
args = [X, sparse.csc_matrix(X)]
for X in args:
_, yt, _, y_mean, _ = _preprocess_data(X, y, fit_intercept=False,
normalize=False)
assert_array_almost_equal(y_mean, np.zeros(n_outputs))
assert_array_almost_equal(yt, y)
_, yt, _, y_mean, _ = _preprocess_data(X, y, fit_intercept=True,
normalize=False)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(yt, y - y_mean)
_, yt, _, y_mean, _ = _preprocess_data(X, y, fit_intercept=True,
normalize=True)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(yt, y - y_mean)
def test_preprocess_data_weighted():
n_samples = 200
n_features = 2
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples)
sample_weight = rng.rand(n_samples)
expected_X_mean = np.average(X, axis=0, weights=sample_weight)
expected_y_mean = np.average(y, axis=0, weights=sample_weight)
# XXX: if normalize=True, should we expect a weighted standard deviation?
# Currently not weighted, but calculated with respect to weighted mean
expected_X_norm = (np.sqrt(X.shape[0]) *
np.mean((X - expected_X_mean) ** 2, axis=0) ** .5)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=False,
sample_weight=sample_weight)
assert_array_almost_equal(X_mean, expected_X_mean)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(X_norm, np.ones(n_features))
assert_array_almost_equal(Xt, X - expected_X_mean)
assert_array_almost_equal(yt, y - expected_y_mean)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=True,
sample_weight=sample_weight)
assert_array_almost_equal(X_mean, expected_X_mean)
assert_array_almost_equal(y_mean, expected_y_mean)
assert_array_almost_equal(X_norm, expected_X_norm)
assert_array_almost_equal(Xt, (X - expected_X_mean) / expected_X_norm)
assert_array_almost_equal(yt, y - expected_y_mean)
def test_sparse_preprocess_data_with_return_mean():
n_samples = 200
n_features = 2
# random_state not supported yet in sparse.rand
X = sparse.rand(n_samples, n_features, density=.5) # , random_state=rng
X = X.tolil()
y = rng.rand(n_samples)
XA = X.toarray()
expected_X_norm = np.std(XA, axis=0) * np.sqrt(X.shape[0])
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=False, normalize=False,
return_mean=True)
assert_array_almost_equal(X_mean, np.zeros(n_features))
assert_array_almost_equal(y_mean, 0)
assert_array_almost_equal(X_norm, np.ones(n_features))
assert_array_almost_equal(Xt.A, XA)
assert_array_almost_equal(yt, y)
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=False,
return_mean=True)
assert_array_almost_equal(X_mean, np.mean(XA, axis=0))
assert_array_almost_equal(y_mean, np.mean(y, axis=0))
assert_array_almost_equal(X_norm, np.ones(n_features))
assert_array_almost_equal(Xt.A, XA)
assert_array_almost_equal(yt, y - np.mean(y, axis=0))
Xt, yt, X_mean, y_mean, X_norm = \
_preprocess_data(X, y, fit_intercept=True, normalize=True,
return_mean=True)
assert_array_almost_equal(X_mean, np.mean(XA, axis=0))
assert_array_almost_equal(y_mean, np.mean(y, axis=0))
assert_array_almost_equal(X_norm, expected_X_norm)
assert_array_almost_equal(Xt.A, XA / expected_X_norm)
assert_array_almost_equal(yt, y - np.mean(y, axis=0))
def test_csr_preprocess_data():
# Test output format of _preprocess_data, when input is csr
X, y = make_regression()
X[X < 2.5] = 0.0
csr = sparse.csr_matrix(X)
csr_, y, _, _, _ = _preprocess_data(csr, y, True)
assert_equal(csr_.getformat(), 'csr')
def test_rescale_data():
n_samples = 200
n_features = 2
sample_weight = 1.0 + rng.rand(n_samples)
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples)
rescaled_X, rescaled_y = _rescale_data(X, y, sample_weight)
rescaled_X2 = X * np.sqrt(sample_weight)[:, np.newaxis]
rescaled_y2 = y * np.sqrt(sample_weight)
assert_array_almost_equal(rescaled_X, rescaled_X2)
assert_array_almost_equal(rescaled_y, rescaled_y2)
@ignore_warnings # all deprecation warnings
def test_deprecation_center_data():
n_samples = 200
n_features = 2
w = 1.0 + rng.rand(n_samples)
X = rng.rand(n_samples, n_features)
y = rng.rand(n_samples)
param_grid = product([True, False], [True, False], [True, False],
[None, w])
for (fit_intercept, normalize, copy, sample_weight) in param_grid:
XX = X.copy() # such that we can try copy=False as well
X1, y1, X1_mean, X1_var, y1_mean = \
center_data(XX, y, fit_intercept=fit_intercept,
normalize=normalize, copy=copy,
sample_weight=sample_weight)
XX = X.copy()
X2, y2, X2_mean, X2_var, y2_mean = \
_preprocess_data(XX, y, fit_intercept=fit_intercept,
normalize=normalize, copy=copy,
sample_weight=sample_weight)
assert_array_almost_equal(X1, X2)
assert_array_almost_equal(y1, y2)
assert_array_almost_equal(X1_mean, X2_mean)
assert_array_almost_equal(X1_var, X2_var)
assert_array_almost_equal(y1_mean, y2_mean)
# Sparse cases
X = sparse.csr_matrix(X)
for (fit_intercept, normalize, copy, sample_weight) in param_grid:
X1, y1, X1_mean, X1_var, y1_mean = \
center_data(X, y, fit_intercept=fit_intercept, normalize=normalize,
copy=copy, sample_weight=sample_weight)
X2, y2, X2_mean, X2_var, y2_mean = \
_preprocess_data(X, y, fit_intercept=fit_intercept,
normalize=normalize, copy=copy,
sample_weight=sample_weight, return_mean=False)
assert_array_almost_equal(X1.toarray(), X2.toarray())
assert_array_almost_equal(y1, y2)
assert_array_almost_equal(X1_mean, X2_mean)
assert_array_almost_equal(X1_var, X2_var)
assert_array_almost_equal(y1_mean, y2_mean)
for (fit_intercept, normalize) in product([True, False], [True, False]):
X1, y1, X1_mean, X1_var, y1_mean = \
sparse_center_data(X, y, fit_intercept=fit_intercept,
normalize=normalize)
X2, y2, X2_mean, X2_var, y2_mean = \
_preprocess_data(X, y, fit_intercept=fit_intercept,
normalize=normalize, return_mean=True)
assert_array_almost_equal(X1.toarray(), X2.toarray())
assert_array_almost_equal(y1, y2)
assert_array_almost_equal(X1_mean, X2_mean)
assert_array_almost_equal(X1_var, X2_var)
assert_array_almost_equal(y1_mean, y2_mean)
| mit |
JarronL/pynrc | setup.py | 1 | 6219 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""A setuptools based setup module.
See:
https://packaging.python.org/en/latest/distributing.html
https://github.com/pypa/sampleproject
"""
import glob
import os
import sys
import importlib #imp
import ast
################################################################
# astropy-helpers
################################################################
import ah_bootstrap
#A dirty hack to get around some early import/configurations ambiguities
if sys.version_info[0] >= 3:
import builtins
else:
import __builtin__ as builtins
builtins._ASTROPY_SETUP_ = True
from astropy_helpers.setup_helpers import (
register_commands, get_debug_option, get_package_info)
from astropy_helpers.git_helpers import get_git_devstr
from astropy_helpers.version_helpers import generate_version_py
################################################################
# boiler plate
################################################################
# Always prefer setuptools over distutils
from setuptools import setup, find_packages
# To use a consistent encoding
from codecs import open
from os import path
root = path.abspath(path.dirname(__file__))
from pynrc.version import __version__
version = __version__
RELEASE = 'dev' not in version
if not RELEASE:
version += get_git_devstr(False)
################################################################
# shortcuts for publishing, tagging, testing
################################################################
if sys.argv[-1] == 'publish':
os.system("python setup.py sdist upload")
os.system("python setup.py bdist_wheel upload")
print("You probably want to also tag the version now:")
print(" python setup.py tag")
sys.exit()
# Requires .pypirc
#[pypitest]
#repository: https://test.pypi.org/legacy/
#username: jarron
#password: ****
if sys.argv[-1] == 'pubtest':
os.system("python setup.py sdist upload -r pypitest")
os.system("python setup.py bdist_wheel upload -r pypitest")
sys.exit()
if sys.argv[-1] == 'tag':
os.system("git tag -a v%s -m 'Release %s'" % (version, version))
os.system("git push origin v%s" % (version))
sys.exit()
if sys.argv[-1] == 'test':
test_requirements = [
'pytest',
'coverage'
]
try:
modules = map(__import__, test_requirements)
except ImportError as e:
        err_msg = str(e).replace("No module named ", "")
msg = "%s is not installed. Install your test requirements." % err_msg
raise ImportError(msg)
print('pyNRC testing not yet implemented!!')
os.system('py.test')
sys.exit()
################################################################
# Package Information
################################################################
# Get the long description from the README and HISTORY files
with open('README.rst') as readme_file:
readme = readme_file.read()
with open('HISTORY.rst') as history_file:
history = history_file.read()
requirements = [
'Click>=6.0',
'numpy>=1.15.0',
'matplotlib>=2',
'scipy>=0.16.0',
'pysynphot>=0.9.7',
'poppy>=0.7.0',
'webbpsf>=0.7.0',
]
# RTD cannot handle certain
# if not (os.environ.get('READTHEDOCS') == 'True'):
# # requirements.append('jwst_backgrounds>=1.1')
# requirements.append('astropy>=2.0,<3.0')
# else:
# requirements.append('astropy>=2.0')
setup_requirements = ['pytest-runner', ]
test_requirements = ['pytest', ]
setup(
name='pynrc',
# Versions should comply with PEP440.
version=version,
description="JWST NIRCam ETC and Simulator",
long_description=readme + '\n\n' + history,
# The project's main homepage.
#url='https://github.com/JarronL/pynrc',
url='https://pynrc.readthedocs.io',
# Author details
author='Jarron Leisenring',
author_email='[email protected]',
license='MIT license',
keywords='jwst nircam etc simulator',
# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
# How mature is this project? Common values are
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 4 - Beta',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Topic :: Scientific/Engineering :: Astronomy',
# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
#'Programming Language :: Python :: 2',
#'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
#packages=find_packages(exclude=['contrib', 'docs', 'tests']),
packages=find_packages(include=['pynrc*']),
# List run-time dependencies here. These will be installed by pip when
# your project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=requirements,
# List additional groups of dependencies here (e.g. development
# dependencies). You can install these using the following syntax,
# for example:
# $ pip install -e .[dev,test]
#extras_require={
# 'dev': ['check-manifest>=0.34', 'lxml>=3.6.4', 'pytest>=3.0.2'],
#},
# If there are data files included in your packages that need to be
# installed, specify them here. If using Python 2.6 or less, then these
# have to be included in MANIFEST.in as well.
#package_data={
# 'pynrc': ['package_data.dat'],
#},
setup_requires=setup_requirements,
test_suite='tests',
tests_require=test_requirements,
zip_safe=False,
) | mit |
LohithBlaze/scikit-learn | sklearn/datasets/mlcomp.py | 289 | 3855 | # Copyright (c) 2010 Olivier Grisel <[email protected]>
# License: BSD 3 clause
"""Glue code to load http://mlcomp.org data as a scikit.learn dataset"""
import os
import numbers
from sklearn.datasets.base import load_files
def _load_document_classification(dataset_path, metadata, set_=None, **kwargs):
if set_ is not None:
dataset_path = os.path.join(dataset_path, set_)
return load_files(dataset_path, metadata.get('description'), **kwargs)
LOADERS = {
'DocumentClassification': _load_document_classification,
# TODO: implement the remaining domain formats
}
def load_mlcomp(name_or_id, set_="raw", mlcomp_root=None, **kwargs):
"""Load a datasets as downloaded from http://mlcomp.org
Parameters
----------
name_or_id : the integer id or the string name metadata of the MLComp
dataset to load
set_ : select the portion to load: 'train', 'test' or 'raw'
mlcomp_root : the filesystem path to the root folder where MLComp datasets
are stored, if mlcomp_root is None, the MLCOMP_DATASETS_HOME
environment variable is looked up instead.
**kwargs : domain specific kwargs to be passed to the dataset loader.
Read more in the :ref:`User Guide <datasets>`.
Returns
-------
data : Bunch
Dictionary-like object, the interesting attributes are:
'filenames', the files holding the raw to learn, 'target', the
classification labels (integer index), 'target_names',
the meaning of the labels, and 'DESCR', the full description of the
dataset.
Note on the lookup process: depending on the type of name_or_id,
will choose between integer id lookup or metadata name lookup by
looking at the unzipped archives and metadata file.
TODO: implement zip dataset loading too
"""
if mlcomp_root is None:
try:
mlcomp_root = os.environ['MLCOMP_DATASETS_HOME']
except KeyError:
raise ValueError("MLCOMP_DATASETS_HOME env variable is undefined")
mlcomp_root = os.path.expanduser(mlcomp_root)
mlcomp_root = os.path.abspath(mlcomp_root)
mlcomp_root = os.path.normpath(mlcomp_root)
if not os.path.exists(mlcomp_root):
raise ValueError("Could not find folder: " + mlcomp_root)
# dataset lookup
if isinstance(name_or_id, numbers.Integral):
# id lookup
dataset_path = os.path.join(mlcomp_root, str(name_or_id))
else:
# assume name based lookup
dataset_path = None
expected_name_line = "name: " + name_or_id
for dataset in os.listdir(mlcomp_root):
metadata_file = os.path.join(mlcomp_root, dataset, 'metadata')
if not os.path.exists(metadata_file):
continue
with open(metadata_file) as f:
for line in f:
if line.strip() == expected_name_line:
dataset_path = os.path.join(mlcomp_root, dataset)
break
if dataset_path is None:
raise ValueError("Could not find dataset with metadata line: " +
expected_name_line)
# loading the dataset metadata
metadata = dict()
metadata_file = os.path.join(dataset_path, 'metadata')
if not os.path.exists(metadata_file):
raise ValueError(dataset_path + ' is not a valid MLComp dataset')
with open(metadata_file) as f:
for line in f:
if ":" in line:
key, value = line.split(":", 1)
metadata[key.strip()] = value.strip()
    format = metadata.get('format', 'unknown')
loader = LOADERS.get(format)
if loader is None:
raise ValueError("No loader implemented for format: " + format)
return loader(dataset_path, metadata, set_=set_, **kwargs)
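

# Illustrative usage (not executed here); the dataset name and root folder are
# examples and must correspond to an unzipped MLComp download on disk:
#
#   from sklearn.datasets.mlcomp import load_mlcomp
#   news_train = load_mlcomp('20news-18828', set_='train',
#                            mlcomp_root='~/data/mlcomp')
#   print(news_train.target_names)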
| bsd-3-clause |
lkumar93/Robot-Learning | DDPG/src/Run.py | 1 | 7999 |
#
# THIS IS AN IMPLEMENTATION OF DEEP DETERMINISTIC POLICY GRADIENT ALGORITHM
# TO CONTROL A QUADCOPTER
#
# COPYRIGHT BELONGS TO THE AUTHOR OF THIS CODE
#
# AUTHOR : LAKSHMAN KUMAR
# AFFILIATION : UNIVERSITY OF MARYLAND, MARYLAND ROBOTICS CENTER
# EMAIL : [email protected]
# LINKEDIN : WWW.LINKEDIN.COM/IN/LAKSHMANKUMAR1993
#
# THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THE MIT LICENSE
# THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF
# THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
#
# BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO
# BE BOUND BY THE TERMS OF THIS LICENSE. THE LICENSOR GRANTS YOU THE RIGHTS
# CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND
# CONDITIONS.
###########################################
##
## LIBRARIES
##
###########################################
import pygame, sys, getopt, os
import rospy
import time
import traceback
import matplotlib.pyplot as plotter
import numpy
from DDPG import DDPG
from rospy.exceptions import ROSException

# ROS message types used below: RollPitchYawrateThrust is provided by the
# RotorS/mav_comm "mav_msgs" package and ModelState by "gazebo_msgs".
from mav_msgs.msg import RollPitchYawrateThrust
from gazebo_msgs.msg import ModelState
###########################################
##
## VARIABLES
##
###########################################
EPOCHS = 50000
FPS = 10
STEPS = FPS*10
STEP_RANGE = range(0,STEPS)
EPOCH_RANGE = range(0,EPOCHS)
DRONE = 'ardrone'
TUNEPARAM = 'Thrust'
DQN_FLAG = True
TRAIN_FLAG = True
###########################################
##
## HELPER FUNCTIONS
##
###########################################
def init_controllers() :
thrust_controller = DDPG(DRONE, param = 'Thrust', controller = 'PID' , setpoint = 0.5, epsilon = 1.0)
roll_controller = DDPG(DRONE, param = 'Roll', controller = 'PID', setpoint = 0.5,epsilon = 1.0)
pitch_controller = DDPG(DRONE, param = 'Pitch', controller = 'PID', setpoint = 0.5, epsilon = 1.0)
return [thrust_controller,roll_controller,pitch_controller]
def run(controllers, cmd_publisher) :
cmd = RollPitchYawrateThrust()
cmd.header.stamp = rospy.Time.now()
for controller in controllers:
if controller.param == 'Thrust' :
cmd.thrust.z = 15.0 + controller.run()
print "Thrust = " + str(cmd.thrust.z)
elif controller.param == 'Roll' :
cmd.roll = controller.run()
print "Roll = " + str(cmd.roll)
elif controller.param == 'Pitch' :
cmd.pitch = controller.run()
print "Pitch = " + str(cmd.pitch)
cmd_publisher.publish(cmd)
def epsilon_decay(controllers) :
for controller in controllers :
controller.decrement_epsilon(STEPS*EPOCHS)
def update(controllers) :
reward = 0.0
for controller in controllers:
reward+=controller.update()
return reward
def tune_pid(controllers, param, val, coeff) :
for controller in controllers :
if controller.param == param and controller.controller == 'PID':
if coeff == 'kp' :
controller.kp += val
elif coeff == 'kd' :
controller.kd += val
print param+" Kd Value = " + str(controller.kd)
print param+" Kp Value = " + str(controller.kp)
def check_validity(controllers) :
for controller in controllers :
if controller.check_validity() is False :
return False
return True
def extract_state(controllers,param) :
for controller in controllers :
if controller.param == param :
return controller.state[0]
def simulator_reset(controllers,cmd_publisher,gazebo_publisher) :
cmd = RollPitchYawrateThrust()
cmd.header.stamp = rospy.Time.now()
cmd.roll = 0.0
cmd.pitch = 0.0
cmd.thrust.z = 0.0
cmd_publisher.publish(cmd)
time.sleep(0.1)
reset_cmd = ModelState()
reset_cmd.model_name = DRONE
reset_cmd.pose.position.x = 0.0
reset_cmd.pose.position.y = 0.0
reset_cmd.pose.position.z = 0.07999
reset_cmd.pose.orientation.x = 0.0
reset_cmd.pose.orientation.y = 0.0
reset_cmd.pose.orientation.z = 0.0
reset_cmd.pose.orientation.w = 1.0
reset_cmd.twist.linear.x = 0.0
reset_cmd.twist.linear.y = 0.0
reset_cmd.twist.linear.z = 0.0
reset_cmd.twist.angular.x = 0.0
reset_cmd.twist.angular.y = 0.0
reset_cmd.twist.angular.z = 0.0
reset_cmd.reference_frame= 'world'
gazebo_publisher.publish(reset_cmd)
time.sleep(0.05)
for controller in controllers :
controller.reset()
time.sleep(0.05)
###########################################
##
## MAIN FUNCTION
##
###########################################
if __name__ == '__main__':
#Initialize ros node
rospy.init_node('QLearner', anonymous=True)
	#Parse command line arguments to check whether to train (True) or just test (False) the learned controllers
argv = sys.argv[1:]
try:
opts, args = getopt.getopt(argv,"t:",["train="])
except getopt.GetoptError:
print 'Usage: python Run.py -t <bool>'
sys.exit(2)
for opt, arg in opts:
if opt in ("-t", "--test"):
if arg == 'True' :
TRAIN_FLAG = True
elif arg == 'False' :
TRAIN_FLAG = False
else :
print 'Usage: python Run.py -t <bool> '
sys.exit(2)
cmd_topic = '/'+ DRONE+'/command/roll_pitch_yawrate_thrust'
cmd_publisher = rospy.Publisher(cmd_topic, RollPitchYawrateThrust, queue_size = 1)
gazebo_publisher = rospy.Publisher('/gazebo/set_model_state', ModelState, queue_size = 1)
rate = rospy.Rate(FPS)
pygame.init()
pygame.display.set_mode((20, 20))
clock = pygame.time.Clock()
rewards_per_episode = []
count = 0
#Initialize position controllers as DDPG Agents
controllers = init_controllers()
print "initialized"
#If the user wants to train
if TRAIN_FLAG :
#For n episodes and m steps keep looping
for i in EPOCH_RANGE :
simulator_reset(controllers,cmd_publisher,gazebo_publisher)
total_reward_per_episode = 0.0
for j in STEP_RANGE :
run(controllers,cmd_publisher)
rate.sleep()
total_reward_per_episode += update(controllers)
epsilon_decay(controllers)
#Reset if drone goes out of bounds
if check_validity(controllers) is False :
simulator_reset(controllers, cmd_publisher, gazebo_publisher)
rate.sleep()
break
for event in pygame.event.get():
if event.type == pygame.QUIT:
simulator_reset(controllers, cmd_publisher, gazebo_publisher)
pygame.quit();
sys.exit()
#For tuning PID controllers
if event.type == pygame.KEYDOWN :
if event.key == pygame.K_UP:
tune_pid(controllers, TUNEPARAM , 0.5, 'kp')
if event.key == pygame.K_DOWN:
tune_pid(controllers, TUNEPARAM , -0.5, 'kp')
if event.key == pygame.K_LEFT:
tune_pid(controllers, TUNEPARAM, 0.5, 'kd')
if event.key == pygame.K_RIGHT:
tune_pid(controllers, TUNEPARAM , -0.5, 'kd')
if event.key == pygame.K_q:
simulator_reset(controllers,cmd_publisher,gazebo_publisher)
time.sleep(2)
break;
count += 1
print " Count = " + str(count) +" ,Epsilon = " + str(controllers[0].epsilon)
rewards_per_episode.append(total_reward_per_episode)
print '\n \n \n rewards =' +str(total_reward_per_episode) + " ,epoch = "+str(i)
clock.tick(FPS*2)
plotter.figure()
plotter.plot(EPOCH_RANGE, rewards_per_episode,'g',label='Rewards' )
plotter.xlabel('Episode')
plotter.ylabel('Rewards')
plotter.title('Learning Curve ')
plotter.savefig('../figures/LearningCurve.png')
plotter.show()
else :
simulator_reset(controllers,cmd_publisher,gazebo_publisher)
count = 0
states = []
while not rospy.is_shutdown():
count = count + 1
run(controllers,cmd_publisher)
rate.sleep()
if check_validity(controllers) is False :
simulator_reset(controllers,cmd_publisher,gazebo_publisher)
states.append(extract_state(controllers,'Thrust'))
if count > 500 :
break
plotter.figure()
plotter.plot(range(0,len(states)), states ,'g',label='Error' )
plotter.xlabel('Time')
plotter.ylabel('Error')
plotter.title('Learning Curve - PID ')
plotter.savefig('../figures/LearningCurvePID.png')
plotter.show()
rospy.spin()
| mit |
stimpsonsg/moose | modules/tensor_mechanics/tests/capped_drucker_prager/small_deform3.py | 23 | 3585 | #!/usr/bin/env python
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
def expected(scheme, sqrtj2):
cohesion = 10
friction_degrees = 35
tip_smoother = 8
friction = friction_degrees * np.pi / 180.0
if (scheme == "native"):
aaa = cohesion
bbb = np.tan(friction)
elif (scheme == "outer_tip"):
aaa = 2 * np.sqrt(3) * cohesion * np.cos(friction) / (3.0 - np.sin(friction))
bbb = 2 * np.sin(friction) / np.sqrt(3) / (3.0 - np.sin(friction))
elif (scheme == "inner_tip"):
aaa = 2 * np.sqrt(3) * cohesion * np.cos(friction) / (3.0 + np.sin(friction))
bbb = 2 * np.sin(friction) / np.sqrt(3) / (3.0 + np.sin(friction))
elif (scheme == "lode_zero"):
aaa = cohesion * np.cos(friction)
bbb = np.sin(friction) / 3.0
elif (scheme == "inner_edge"):
aaa = 3 * cohesion * np.cos(friction) / np.sqrt(9.0 + 3.0 * np.power(np.sin(friction), 2))
bbb = np.sin(friction) / np.sqrt(9.0 + 3.0 * np.power(np.sin(friction), 2))
return (aaa - np.sqrt(tip_smoother * tip_smoother + sqrtj2 * sqrtj2)) / bbb
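# The return value above is the trace of stress on the smoothed Drucker-Prager
# yield surface, i.e. the solution of
#     sqrt(tip_smoother^2 + J2) + bbb * Tr(stress) - aaa = 0
# for Tr(stress), with (aaa, bbb) depending on the chosen smoothing scheme.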
def sigma_mean(stress):
return (stress[0] + stress[3] + stress[5])/3.0
def sigma_bar(stress):
mean = sigma_mean(stress)
return np.sqrt(0.5 * (np.power(stress[0] - mean, 2) + 2*stress[1]*stress[1] + 2*stress[2]*stress[2] + np.power(stress[3] - mean, 2) + 2*stress[4]*stress[4] + np.power(stress[5] - mean, 2)))
def third_inv(stress):
mean = sigma_mean(stress)
return (stress[0] - mean)*(stress[3] - mean)*(stress[5] - mean)
def lode_angle(stress):
bar = sigma_bar(stress)
third = third_inv(stress)
return np.arcsin(-1.5 * np.sqrt(3.0) * third / np.power(bar, 3)) / 3.0
def moose_result(fn):
f = open(fn)
x = []
y = []
for line in f:
if not line.strip():
continue
line = line.strip()
if line.startswith("time") or line.startswith("0"):
continue
line = map(float, line.split(","))
if line[1] < -1E-10:
continue # this is an elastic deformation
trace = 3.0 * sigma_mean(line[3:])
bar = sigma_bar(line[3:])
x.append(trace)
y.append(bar)
f.close()
return (x, y)
plt.figure()
sqrtj2 = np.arange(0, 30, 0.25)
plt.plot(expected("native", sqrtj2), sqrtj2, 'k-', label = 'expected (native)')
mr = moose_result("gold/small_deform3_native.csv")
plt.plot(mr[0], mr[1], 'k^', label = 'MOOSE (native)')
plt.plot(expected("outer_tip", sqrtj2), sqrtj2, 'g-', label = 'expected (outer_tip)')
mr = moose_result("gold/small_deform3_outer_tip.csv")
plt.plot(mr[0], mr[1], 'g^', label = 'MOOSE (outer_tip)')
plt.plot(expected("inner_tip", sqrtj2), sqrtj2, 'b-', label = 'expected (inner_tip)')
mr = moose_result("gold/small_deform3_inner_tip.csv")
plt.plot(mr[0], mr[1], 'b^', label = 'MOOSE (inner_tip)')
plt.plot(expected("lode_zero", sqrtj2), sqrtj2, 'c-', label = 'expected (lode_zero)')
mr = moose_result("gold/small_deform3_lode_zero.csv")
plt.plot(mr[0], mr[1], 'c^', label = 'MOOSE (lode_zero)')
plt.plot(expected("inner_edge", sqrtj2), sqrtj2, 'r-', label = 'expected (inner_edge)')
mr = moose_result("gold/small_deform3_inner_edge.csv")
plt.plot(mr[0], mr[1], 'r^', label = 'MOOSE (inner_edge)')
legend = plt.legend(bbox_to_anchor=(1.16, 0.95))
for label in legend.get_texts():
label.set_fontsize('small')
plt.xlabel("Tr(stress)")
plt.ylabel("sqrt(J2)")
plt.title("Drucker-Prager yield function on meridional plane")
plt.axis([-25, 15, 0, 25])
plt.savefig("small_deform3.png")
sys.exit(0)
| lgpl-2.1 |
snipsco/ntm-lasagne | examples/associative-recall-task.py | 1 | 4480 | import theano
import theano.tensor as T
import numpy as np
import matplotlib.pyplot as plt
from lasagne.layers import InputLayer, DenseLayer, ReshapeLayer
import lasagne.layers
import lasagne.nonlinearities
import lasagne.updates
import lasagne.objectives
import lasagne.init
from ntm.layers import NTMLayer
from ntm.memory import Memory
from ntm.controllers import DenseController
from ntm.heads import WriteHead, ReadHead
from ntm.updates import graves_rmsprop
from utils.generators import AssociativeRecallTask
from utils.visualization import Dashboard
def model(input_var, batch_size=1, size=8, num_units=100, memory_shape=(128, 20)):
# Input Layer
l_input = InputLayer((batch_size, None, size + 2), input_var=input_var)
_, seqlen, _ = l_input.input_var.shape
# Neural Turing Machine Layer
memory = Memory(memory_shape, name='memory', memory_init=lasagne.init.Constant(1e-6), learn_init=False)
controller = DenseController(l_input, memory_shape=memory_shape,
num_units=num_units, num_reads=1,
nonlinearity=lasagne.nonlinearities.rectify,
name='controller')
heads = [
WriteHead(controller, num_shifts=3, memory_shape=memory_shape, name='write', learn_init=False,
nonlinearity_key=lasagne.nonlinearities.rectify,
nonlinearity_add=lasagne.nonlinearities.rectify),
ReadHead(controller, num_shifts=3, memory_shape=memory_shape, name='read', learn_init=False,
nonlinearity_key=lasagne.nonlinearities.rectify)
]
l_ntm = NTMLayer(l_input, memory=memory, controller=controller, heads=heads)
# Output Layer
l_output_reshape = ReshapeLayer(l_ntm, (-1, num_units))
l_output_dense = DenseLayer(l_output_reshape, num_units=size + 2, nonlinearity=lasagne.nonlinearities.sigmoid, \
name='dense')
l_output = ReshapeLayer(l_output_dense, (batch_size, seqlen, size + 2))
return l_output, l_ntm
if __name__ == '__main__':
# Define the input and expected output variable
input_var, target_var = T.tensor3s('input', 'target')
# The generator to sample examples from
generator = AssociativeRecallTask(batch_size=1, max_iter=1000000, size=8, max_num_items=6, \
min_item_length=1, max_item_length=3)
# The model (1-layer Neural Turing Machine)
l_output, l_ntm = model(input_var, batch_size=generator.batch_size,
size=generator.size, num_units=100, memory_shape=(128, 20))
# The generated output variable and the loss function
pred_var = T.clip(lasagne.layers.get_output(l_output), 1e-6, 1. - 1e-6)
loss = T.mean(lasagne.objectives.binary_crossentropy(pred_var, target_var))
# Create the update expressions
params = lasagne.layers.get_all_params(l_output, trainable=True)
learning_rate = theano.shared(1e-4)
updates = lasagne.updates.adam(loss, params, learning_rate=learning_rate)
# Compile the function for a training step, as well as the prediction function and
# a utility function to get the inner details of the NTM
train_fn = theano.function([input_var, target_var], loss, updates=updates)
ntm_fn = theano.function([input_var], pred_var)
ntm_layer_fn = theano.function([input_var], lasagne.layers.get_output(l_ntm, get_details=True))
# Training
try:
scores, all_scores = [], []
for i, (example_input, example_output) in generator:
score = train_fn(example_input, example_output)
scores.append(score)
all_scores.append(score)
if i % 500 == 0:
mean_scores = np.mean(scores)
if mean_scores < 0.01:
learning_rate.set_value(1e-5)
print 'Batch #%d: %.6f' % (i, mean_scores)
scores = []
except KeyboardInterrupt:
pass
# Visualization
def marker1(params):
return params['num_items'] * (params['item_length'] + 1)
def marker2(params):
return (params['num_items'] + 1) * (params['item_length'] + 1)
markers = [
{
'location': marker1,
'style': {'color': 'red', 'ls': '-'}
},
{
'location': marker2,
'style': {'color': 'green', 'ls': '-'}
}
]
dashboard = Dashboard(generator=generator, ntm_fn=ntm_fn, ntm_layer_fn=ntm_layer_fn, \
memory_shape=(128, 20), markers=markers, cmap='bone')
# Example
params = generator.sample_params()
dashboard.sample(**params)
| mit |
fpeder/pyXKin | xkin/gesture.py | 1 | 7021 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import cv2
import config
import numpy as np
import pickle
from sklearn.hmm import GaussianHMM
#from sklearn.cluster import KMeans
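# GestureClassifier: resamples an incoming point sequence, extracts the same
# angle-based features used during training, scores the sequence against every
# stored per-gesture GaussianHMM and returns the id of the best-scoring model.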
class GestureClassifier():
def __init__(self):
self._resample = GestureResample(32)
self._features = GestureFeatures()
self._models = None
def run(self, seq, debug=False):
seq = self._resample.run(seq, upsample=True)
f1, f2 = self._features.run(seq)
seq = np.vstack((f1[:, 1], f2)).T
scores = [model['model'].score(seq) for model in self._models]
if debug:
print seq
print scores
print '-------'
gesture = self._models[np.argsort(scores)[-1]]['id']
return gesture
def load(self, src):
data = pickle.load(open(src, 'rb'))
self._models = data['models']
class GestureTraining():
def __init__(self, K=4, covar='diag'):
self._training = GestureTrainingSetGen()
self._feature = GestureFeatures()
self._K = K
self._covar = covar
self._vq = None
        self._models = None
def run(self, protos):
models = []
for nstate, label, seq in protos:
train = self._training.run(seq)
f1, f2 = self._feature.run(train, True)
o = np.vstack((f1[:,1], f2)).T
(start, trans) = self.init_left_right_model(nstate)
clf = GaussianHMM(n_components=nstate, covariance_type=self._covar,
transmat=trans, startprob=start)
clf.fit(np.array([o]))
models.append({'id':label, 'model':clf})
self._models = models
return models
def init_left_right_model(self, nstate, type='left-right'):
norm = lambda x: x/x.sum()
trans = np.zeros((nstate, nstate))
start = norm(np.random.rand(nstate))
for i in range(nstate):
trans[i, i:] = norm(np.random.rand(nstate-i))
return (start, trans)
def save(self, dst):
if self._models:
data = {'models': self._models}
pickle.dump(data, open(dst, 'wb'))
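# GestureFeatures: converts an (x, y) point sequence into per-step polar
# differences, a movement direction quantized into config.NUM_ANGLES bins and a
# cumulative-angle profile; the quantized angle plus the cumulative angle form
# the two-dimensional observation vectors fed to the HMMs.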
class GestureFeatures():
def __init__(self, nangle=config.NUM_ANGLES, N=4):
self._nangles = nangle
self._N = N
def run(self, seq, train=False):
if train:
(p3, p1) = self._get_angles(seq[0])
#p2 = self._get_stft(seq[0])
#p3 = self._get_ang_variation(p3)
p4 = self._get_integral_ang(p1)
for s in seq[1:]:
t3, t1 = self._get_angles(s)
#p1 = np.vstack((p1, t1))
#p3 = np.vstack((t3, self._get_ang_variation(s)))
#p2 = np.vstack((p2, self._get_stft(s)))
p1 = np.vstack((p1, t1))
p4 = np.hstack((p4, self._get_integral_ang(t1)))
else:
(p3, p1) = self._get_angles(seq)
#p2 = self._get_stft(seq)
#p3 = self._get_ang_variation(p3)
p4 = self._get_integral_ang(p1)
return [p1, p4]
def _get_angles(self, seq):
def cart2pol(seq):
seq = np.diff(seq, axis=0)
out = np.zeros(seq.shape, np.float32)
out[:, 0] = np.sqrt(seq[:, 0]**2 + seq[:, 1]**2)
out[:, 1] = np.arctan2(seq[:, 1], seq[:, 0]) * 180./np.pi
return out
def quantiz(angle):
angle = (angle / (360 / self._nangles)).astype(np.int)
return angle
seq = cart2pol(seq)
qseq = quantiz(seq)
return (seq, qseq)
def _get_ang_variation(self, x):
N = 2
ang = x[:, 1]
x = np.hstack((ang[0]*np.ones(N), ang, ang[-1]*np.ones(N)))
y = np.zeros(len(ang))
for i in range(N, len(x) - N):
tmp = np.abs(np.diff(x[i-N: i+N])).sum()/(2*N-1)
y[i-N] = tmp
return y
def _get_integral_ang(self, x):
ang = x[:, 1]
y = np.zeros(len(ang))
for i in range(len(y)):
y[i] = np.diff(ang[0:i]).sum()
y[-1] = y[-2]
return y
def _get_stft(self, x1):
N = self._N
x = np.vstack((x1[0] * np.ones((N, 2)), x1, x1[-1] * np.ones((N, 2))))
y = np.zeros((len(x1) - 1 , 2*N-2))
for i in range(N, len(x) - N - 1):
tmp = x[i-N: i+N]
tmp = tmp[:, 0] + 1j * tmp[:, 1]
tmp = np.abs(np.fft.fft(tmp))
y[i-N, :] = tmp[2:]/tmp[1] #np.abs(np.fft.fft(tmp))[1:]
return y
class GestureTrainingSetGen():
def __init__(self, nseq=50, xvar=config.XVAR, yvar=config.YVAR):
self._nseq = nseq
self._var = (15, 15)
self._resample = GestureResample(32)
def run(self, seq):
train = []
for i in range(self._nseq):
seq = self._resample.upsample(seq)
noisy_seq = self._add_awgn_noise(seq)
noisy_seq = self._resample.run(noisy_seq)
train.append(noisy_seq)
return train
def _add_awgn_noise(self, seq):
noise = np.random.rand(seq.shape[0], seq.shape[1])
noise[:, 0] = self._var[0] * noise[:, 0]
noise[:, 1] = self._var[1] * noise[:, 1]
out = seq + noise
return out.astype(np.float32)
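# GestureResample: stretches a stroke to a fixed number of points and then
# re-spaces them at (approximately) equal arc-length intervals, so every gesture
# is described by the same number of samples regardless of drawing speed.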
class GestureResample():
def __init__(self, n=32):
self._n = n
def run(self, points, upsample=False):
if upsample:
points = self.upsample(points)
I = cv2.arcLength(points.astype(np.float32), False) / (len(points)-1)
new_points = points[0]
D = 0
for i in range(1, len(points)):
d = self._dist(points[i], points[i-1])
if D + d >= I:
q = points[i-1] + ((I - D)/d) * (points[i] - points[i-1])
new_points = np.vstack((new_points, q.astype(np.int32)))
points[i, :] = q
D = 0
else:
D = D + d
return new_points
def upsample(self, seq):
seq = seq.astype(np.uint16)
out = np.zeros((self._n, 2))
out[:, 0] = cv2.resize(seq[:, 0], (1, self._n)).flatten()
out[:, 1] = cv2.resize(seq[:, 1], (1, self._n)).flatten()
return out
def _dist(self, p1, p2):
d = np.sqrt((p1[0]-p2[0])**2 + (p1[1]-p2[1])**2)
return d
if __name__ == '__main__':
pass
#import sys
#from util import draw_points
# gc = GestureClassifier()
# for f in sys.argv[1:]:
# proto = pickle.load(open(f, 'rb'))
# gc.train(proto)
# gc.save('gesture.pck')
# points = pickle.load(open(sys.argv[1], 'rb'))['seq']
# draw_points(points)
# points = gp.upsample(points.astype(np.uint16), 32)
# draw_points(points)
# points = gp.resample(points)
# draw_points(points)
# seq = [pickle.load(open(x,'rb')) for x in sys.argv[1:]]
# seq = [list(x.itervalues()) for x in seq]
# gt = GestureTraining()
# gt.run(seq)
# gt.save('gesture2.pck')
| bsd-2-clause |
lituan/tools | pairwise_align.py | 1 | 6243 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
input seqs in fasta format, output sequence similarity matrix
usage example:
python pairwise_align example.fasta
"""
import os
import sys
import seaborn as sns
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.cluster.hierarchy as sch
import scipy.spatial.distance as spd
# from wdsp import Wdsp
def align_lis_lis(lis_lis):
"""align and nested list to print a table"""
lis_lis = [[str(l) for l in lis]
for lis in lis_lis] # trans every element to str
# make all inner lists of the same length
inner_lis_max_len = max(len(lis) for lis in lis_lis)
lis_lis = [lis + (inner_lis_max_len - len(lis)) * [''] for lis in lis_lis]
# trans list, so that the elements of the same column are in one list
lis_lis = [[lis[i] for lis in lis_lis] for i in range(inner_lis_max_len)]
# make element in the same list have the same length
aligned = []
for lis in lis_lis:
width = max([len(l) for l in lis])
lis = [l + (width - len(l)) * ' ' for l in lis]
aligned.append(lis)
# trans list_list to the original list_list
inner_lis_max_len = max(len(lis) for lis in lis_lis)
lis_lis = [[lis[i] for lis in aligned] for i in range(inner_lis_max_len)]
return lis_lis
def align(seq1, seq2):
from Bio import pairwise2
from Bio.SubsMat import MatrixInfo as matlist
matrix = matlist.blosum62
gap_open = -10 # usual value
gap_extend = -0.5 # usual value
alns = pairwise2.align.globalds(seq1, seq2, matrix, gap_open, gap_extend)
seq1 = alns[0][0]
seq2 = alns[0][1]
identity = [1 for i, s in enumerate(seq1) if s == seq2[i]]
identity = 1.0 * len(identity)/ len(seq1)
return float('{0:<4.2f}'.format(identity))
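# Note: `align` performs a global (Needleman-Wunsch style) alignment with BLOSUM62
# and reports identity as the fraction of identical columns over the full alignment
# length, so gap columns count against the score.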
def readfa(fa_f):
# readin seqs in fasta format
# seqs foramt:[(pro,seq),...]
lines = fa_f.readlines()
lines = [line.rstrip('\r\n') for line in lines]
pro_line_num = [i for i, line in enumerate(
lines) if '>' in line] + [len(lines)]
seqs = [lines[n:pro_line_num[i + 1]]
for i, n in enumerate(pro_line_num[:-1])]
seqs = [(seq[0][1:], ''.join(seq[1:])) for seq in seqs]
return seqs
def read_wdsp(wdsp_f):
w = Wdsp(wdsp_f)
seqs = w.seqs
seqs = [(k, v) for k, v in seqs.iteritems()]
return seqs
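# align_seqs fills a symmetric all-against-all identity matrix; for j < i the
# previously computed scores[j][i] is reused instead of re-running the alignment.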
def align_seqs(seqs):
lens = len(seqs)
scores = []
for i in xrange(lens):
score_i = []
for j in xrange(lens):
# print i, '\t', j
if j < i:
score_i.append(scores[j][i])
elif j >= i:
score = align(seqs[i][1], seqs[j][1])
score_i.append(score)
scores.append(score_i)
return scores
def write_results(seqs, scores, file_path, file_name):
result = [[seq[0]] + score for seq, score in zip(seqs, scores)]
header = [['ID'] + [seq[0] for seq in seqs]]
result = header + result
filename = os.path.join(file_path, file_name + '_scores_tab.txt')
with open(filename, 'w') as w_f:
for r in result:
print >> w_f, '\t'.join([str(ri)for ri in r])
result = align_lis_lis(result)
filename = os.path.join(file_path, file_name + '_scores_align.txt')
with open(filename, 'w') as w_f:
for r in result:
print >> w_f, '\t'.join([str(ri)for ri in r])
pair_scores = []
ids = [seq[0] for seq in seqs]
for i in range(len(ids)):
for j in range(i):
pair_scores.append([scores[i][j], (ids[i], ids[j])])
pair_scores = sorted(pair_scores)
filename = os.path.join(file_path, file_name + '_pair_scores.txt')
with open(filename, 'w') as w_f:
for score, pair in pair_scores:
print >> w_f, score, '\t', '{0:<25}{1:<}'.format(pair[0], pair[1])
# change similarity to dissimilarity
length = len(scores)
distances = []
for i in xrange(length):
distance = []
for j in xrange(length):
distance.append(1.0 - scores[i][j])
distances.append(distance)
result = [[seq[0]] + score for seq, score in zip(seqs, distances)]
header = [['ID'] + [seq[0] for seq in seqs]]
result = header + result
filename = os.path.join(file_path, file_name + '_distances_tab.txt')
with open(filename, 'w') as w_f:
for r in result:
print >> w_f, '\t'.join([str(ri)for ri in r])
result = align_lis_lis(result)
filename = os.path.join(file_path, file_name + '_distances_align.txt')
with open(filename, 'w') as w_f:
for r in result:
print >> w_f, '\t'.join([str(ri)for ri in r])
def plot_heatmap(seqs, scores,file_name):
column_labels = [s[0] for s in seqs]
row_labels = column_labels
scores = [map(lambda x: float(x), row) for row in scores]
scores = np.array(scores)
distances = [map(lambda x: 1-x,row) for row in scores]
linkage = sch.linkage(spd.squareform(distances),method='complete')
df = pd.DataFrame(scores,columns=column_labels, index=row_labels)
if len(df.columns) > 100:
# sns_plot = sns.clustermap(df,row_linkage=linkage,col_linkage=linkage,annot=True,fmt='3.2f',xticklabels='',yticklabels='')
sns_plot = sns.clustermap(df,row_linkage=linkage,col_linkage=linkage,xticklabels='',yticklabels='')
else:
        sns_plot = sns.clustermap(df, row_linkage=linkage, col_linkage=linkage, annot=True, fmt='3.2f')
# sns_plot = sns.clustermap(df,row_linkage=linkage,col_linkage=linkage)
plt.setp(sns_plot.ax_heatmap.yaxis.get_majorticklabels(), rotation=20)
plt.setp(sns_plot.ax_heatmap.xaxis.get_majorticklabels(), rotation=70)
# plt.yticks(rotation=90)
sns_plot.savefig(file_name+'.png')
plt.close('all')
def main():
with open(sys.argv[-1]) as fa_f:
seqs = readfa(fa_f)
# with open(sys.argv[-1]) as wdsp_f:
# seqs = read_wdsp(wdsp_f)
scores = align_seqs(seqs)
file_path, file_name = os.path.split(sys.argv[-1])
file_name, file_extention = os.path.splitext(file_name)
    write_results(seqs, scores, file_path, file_name)
plot_heatmap(seqs, scores,file_name)
if __name__ == "__main__":
main()
| cc0-1.0 |
cmorgan/trading-with-python | lib/csvDatabase.py | 1 | 5984 | # -*- coding: utf-8 -*-
"""
intraday data handlers in csv format.
@author: jev
"""
from __future__ import division
from pandas import *
import datetime as dt
import os
from extra import ProgressBar
dateFormat = "%Y%m%d" # date format for converting filenames to dates
dateTimeFormat = "%Y%m%d %H:%M:%S"
def fileName2date(fName):
'''convert filename to date'''
name = os.path.splitext(fName)[0]
return dt.datetime.strptime(name.split('_')[1],dateFormat).date()
def parseDateTime(dateTimeStr):
return dt.datetime.strptime(dateTimeStr,dateTimeFormat)
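# loadCsv expects files in the format written by HistDataCsv.saveData: the first
# column holds a "%Y%m%d %H:%M:%S" timestamp used as the index, the header row
# names the remaining columns and every other field is parsed as a float.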
def loadCsv(fName):
''' load DataFrame from csv file '''
with open(fName,'r') as f:
lines = f.readlines()
dates= []
header = [h.strip() for h in lines[0].strip().split(',')[1:]]
data = [[] for i in range(len(header))]
for line in lines[1:]:
fields = line.rstrip().split(',')
dates.append(parseDateTime(fields[0]))
for i,field in enumerate(fields[1:]):
data[i].append(float(field))
return DataFrame(data=dict(zip(header,data)),index=Index(dates))
class HistDataCsv(object):
'''class for working with historic database in .csv format'''
def __init__(self,symbol,dbDir):
self.symbol = symbol
self.dbDir = os.path.normpath(os.path.join(dbDir,symbol))
if not os.path.exists(self.dbDir):
print 'Creating data directory ', self.dbDir
os.mkdir(self.dbDir)
self.dates = []
for fName in os.listdir(self.dbDir):
self.dates.append(fileName2date(fName))
def saveData(self,date, df,lowerCaseColumns=True):
''' add data to database'''
if lowerCaseColumns: # this should provide consistency to column names. All lowercase
df.columns = [ c.lower() for c in df.columns]
s = self.symbol+'_'+date.strftime(dateFormat)+'.csv' # file name
dest = os.path.join(self.dbDir,s) # full path destination
print 'Saving data to: ', dest
df.to_csv(dest)
def loadDate(self,date):
''' load data '''
s = self.symbol+'_'+date.strftime(dateFormat)+'.csv' # file name
df = DataFrame.from_csv(os.path.join(self.dbDir,s))
cols = [col.strip() for col in df.columns.tolist()]
df.columns = cols
#df = loadCsv(os.path.join(self.dbDir,s))
return df
def loadDates(self,dates):
        ''' load multiple dates, concatenating to one DataFrame '''
tmp =[]
print 'Loading multiple dates for ' , self.symbol
p = ProgressBar(len(dates))
for i,date in enumerate(dates):
tmp.append(self.loadDate(date))
p.animate(i+1)
print ''
return concat(tmp)
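    # createOHLC builds a daily OHLC frame from the intraday files: open/close come
    # from the first and last rows of each day, while high/low are taken from the
    # 'wap' (weighted average price) column rather than raw trade extremes.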
def createOHLC(self):
''' create ohlc from intraday data'''
ohlc = DataFrame(index=self.dates, columns=['open','high','low','close'])
for date in self.dates:
print 'Processing', date
try:
df = self.loadDate(date)
ohlc.set_value(date,'open',df['open'][0])
ohlc.set_value(date,'high',df['wap'].max())
ohlc.set_value(date,'low', df['wap'].min())
ohlc.set_value(date,'close',df['close'][-1])
except Exception as e:
print 'Could not convert:', e
return ohlc
def __repr__(self):
return '{symbol} dataset with {nrDates} days of data'.format(symbol=self.symbol, nrDates=len(self.dates))
class HistDatabase(object):
''' class working with multiple symbols at once '''
def __init__(self, dataDir):
# get symbols from directory names
symbols = []
for l in os.listdir(dataDir):
if os.path.isdir(os.path.join(dataDir,l)):
symbols.append(l)
#build dataset
        self.csv = {} # dict of HistDataCsv handlers
for symbol in symbols:
self.csv[symbol] = HistDataCsv(symbol,dataDir)
def loadDates(self,dates=None):
'''
get data for all symbols as wide panel
provide a dates list. If no dates list is provided, common dates are used.
'''
if dates is None: dates=self.commonDates
tmp = {}
for k,v in self.csv.iteritems():
tmp[k] = v.loadDates(dates)
return WidePanel(tmp)
def toHDF(self,dataFile,dates=None):
''' write wide panel data to a hdfstore file '''
if dates is None: dates=self.commonDates
store = HDFStore(dataFile)
wp = self.loadDates(dates)
store['data'] = wp
store.close()
@property
def commonDates(self):
''' return dates common for all symbols '''
t = [v.dates for v in self.csv.itervalues()] # get all dates in a list
d = list(set(t[0]).intersection(*t[1:]))
return sorted(d)
def __repr__(self):
s = '-----Hist CSV Database-----\n'
for k,v in self.csv.iteritems():
s+= (str(v)+'\n')
return s
#--------------------
if __name__=='__main__':
dbDir =os.path.normpath('D:/data/30sec')
vxx = HistDataCsv('VXX',dbDir)
spy = HistDataCsv('SPY',dbDir)
#
date = dt.date(2012,8,31)
print date
#
pair = DataFrame({'SPY':spy.loadDate(date)['close'],'VXX':vxx.loadDate(date)['close']})
print pair.tail() | bsd-3-clause |
pgmpy/pgmpy | pgmpy/models/SEM.py | 2 | 53950 | import networkx as nx
import numpy as np
import warnings
import itertools
from networkx.algorithms.dag import descendants
from pyparsing import OneOrMore, Word, Optional, Suppress, alphanums, nums
from pgmpy.base import DAG
from pgmpy.global_vars import HAS_PANDAS
if HAS_PANDAS:
import pandas as pd
class SEMGraph(DAG):
"""
Base class for graphical representation of Structural Equation Models(SEMs).
All variables are by default assumed to have an associated error latent variable, therefore
doesn't need to be specified.
Attributes
----------
latents: list
List of all the latent variables in the model except the error terms.
observed: list
List of all the observed variables in the model.
graph: nx.DirectedGraph
The graphical structure of the latent and observed variables except the error terms.
        The parameters are stored in the `weight` attribute of each edge.
err_graph: nx.Graph
An undirected graph representing the relations between the error terms of the model.
The node of the graph has the same name as the variable but represents the error terms.
The variance is stored in the `weight` attribute of the node and the covariance is stored
in the `weight` attribute of the edge.
full_graph_struct: nx.DiGraph
Represents the full graph structure. The names of error terms starts with `.` and
new nodes are added for each correlation which starts with `..`.
"""
def __init__(self, ebunch=[], latents=[], err_corr=[], err_var={}):
"""
Initializes a `SEMGraph` object.
Parameters
----------
ebunch: list/array-like
List of edges in form of tuples. Each tuple can be of two possible shape:
1. (u, v): This would add an edge from u to v without setting any parameter
for the edge.
2. (u, v, parameter): This would add an edge from u to v and set the edge's
parameter to `parameter`.
latents: list/array-like
List of nodes which are latent. All other variables are considered observed.
err_corr: list/array-like
List of tuples representing edges between error terms. It can be of the following forms:
1. (u, v): Add correlation between error terms of `u` and `v`. Doesn't set any variance or
covariance values.
2. (u, v, covar): Adds correlation between the error terms of `u` and `v` and sets the
parameter to `covar`.
err_var: dict (variable: variance)
Sets variance for the error terms in the model.
Examples
--------
        Defining a model (Union sentiment model[1]) without setting any parameters.
>>> from pgmpy.models import SEMGraph
>>> sem = SEMGraph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'),
... ('yrsmill', 'unionsen'), ('age', 'deferenc'),
... ('age', 'laboract'), ('deferenc', 'laboract')],
... latents=[],
... err_corr=[('yrsmill', 'age')],
... err_var={})
Defining a model (Education [2]) with all the parameters set. For not setting any
parameter `np.NaN` can be explicitly passed.
>>> sem_edu = SEMGraph(ebunch=[('intelligence', 'academic', 0.8), ('intelligence', 'scale_1', 0.7),
... ('intelligence', 'scale_2', 0.64), ('intelligence', 'scale_3', 0.73),
... ('intelligence', 'scale_4', 0.82), ('academic', 'SAT_score', 0.98),
... ('academic', 'High_school_gpa', 0.75), ('academic', 'ACT_score', 0.87)],
... latents=['intelligence', 'academic'],
... err_corr=[],
... err_var={'intelligence': 1})
References
----------
[1] McDonald, A, J., & Clelland, D. A. (1984). Textile Workers and Union Sentiment.
Social Forces, 63(2), 502–521
[2] https://en.wikipedia.org/wiki/Structural_equation_modeling#/
media/File:Example_Structural_equation_model.svg
"""
super(SEMGraph, self).__init__()
# Construct the graph and set the parameters.
self.graph = nx.DiGraph()
for t in ebunch:
if len(t) == 3:
self.graph.add_edge(t[0], t[1], weight=t[2])
elif len(t) == 2:
self.graph.add_edge(t[0], t[1], weight=np.NaN)
else:
raise ValueError(
f"Expected tuple length: 2 or 3. Got {t} of len {len(t)}"
)
self.latents = set(latents)
self.observed = set(self.graph.nodes()) - self.latents
# Construct the error graph and set the parameters.
self.err_graph = nx.Graph()
self.err_graph.add_nodes_from(self.graph.nodes())
for t in err_corr:
if len(t) == 2:
self.err_graph.add_edge(t[0], t[1], weight=np.NaN)
elif len(t) == 3:
self.err_graph.add_edge(t[0], t[1], weight=t[2])
else:
raise ValueError(
f"Expected tuple length: 2 or 3. Got {t} of len {len(t)}"
)
# Set the error variances
for var in self.err_graph.nodes():
self.err_graph.nodes[var]["weight"] = (
err_var[var] if var in err_var.keys() else np.NaN
)
self.full_graph_struct = self._get_full_graph_struct()
def _get_full_graph_struct(self):
"""
Creates a directed graph by joining `self.graph` and `self.err_graph`.
Adds new nodes to replace undirected edges (u <--> v) with two directed
edges (u <-- ..uv) and (..uv --> v).
Returns
-------
        nx.DiGraph: A full directed graph structure with error nodes starting
with `.` and bidirected edges replaced with common cause
nodes starting with `..`.
Examples
--------
>>> from pgmpy.models import SEMGraph
>>> sem = SEMGraph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'),
... ('yrsmill', 'unionsen'), ('age', 'deferenc'),
... ('age', 'laboract'), ('deferenc', 'laboract')],
... latents=[],
... err_corr=[('yrsmill', 'age')])
>>> sem._get_full_graph_struct()
"""
full_graph = self.graph.copy()
mapping_dict = {"." + node: node for node in self.err_graph.nodes}
full_graph.add_edges_from([(u, v) for u, v in mapping_dict.items()])
for u, v in self.err_graph.edges:
cov_node = ".." + "".join(sorted([u, v]))
full_graph.add_edges_from([(cov_node, "." + u), (cov_node, "." + v)])
return full_graph
def get_scaling_indicators(self):
"""
Returns a scaling indicator for each of the latent variables in the model.
The scaling indicator is chosen randomly among the observed measurement
variables of the latent variable.
Examples
--------
>>> from pgmpy.models import SEMGraph
>>> model = SEMGraph(ebunch=[('xi1', 'eta1'), ('xi1', 'x1'), ('xi1', 'x2'),
... ('eta1', 'y1'), ('eta1', 'y2')],
... latents=['xi1', 'eta1'])
>>> model.get_scaling_indicators()
{'xi1': 'x1', 'eta1': 'y1'}
Returns
-------
dict: Returns a dict with latent variables as the key and their value being the
scaling indicator.
"""
scaling_indicators = {}
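        # Note: despite the docstring's "randomly", this simply picks the first
        # observed child of each latent variable in graph iteration order.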
for node in self.latents:
for neighbor in self.graph.neighbors(node):
if neighbor in self.observed:
scaling_indicators[node] = neighbor
break
return scaling_indicators
def active_trail_nodes(self, variables, observed=[], avoid_nodes=[], struct="full"):
"""
Finds all the observed variables which are d-connected to `variables` in the `graph_struct`
when `observed` variables are observed.
Parameters
----------
variables: str or array like
Observed variables whose d-connected variables are to be found.
observed : list/array-like
If given the active trails would be computed assuming these nodes to be observed.
avoid_nodes: list/array-like
            If specified, the algorithm doesn't account for paths that have influence flowing
through the avoid node.
struct: str or nx.DiGraph instance
If "full", considers correlation between error terms for computing d-connection.
If "non_error", doesn't condised error correlations for computing d-connection.
If instance of nx.DiGraph, finds d-connected variables on the given graph.
Examples
--------
>>> from pgmpy.models import SEM
>>> model = SEMGraph(ebunch=[('yrsmill', 'unionsen'), ('age', 'laboract'),
... ('age', 'deferenc'), ('deferenc', 'laboract'),
... ('deferenc', 'unionsen'), ('laboract', 'unionsen')],
... latents=[],
... err_corr=[('yrsmill', 'age')])
>>> model.active_trail_nodes('age')
Returns
-------
dict: {str: list}
Returns a dict with `variables` as the key and a list of d-connected variables as the
value.
References
----------
Details of the algorithm can be found in 'Probabilistic Graphical Model
Principles and Techniques' - Koller and Friedman
Page 75 Algorithm 3.1
"""
if struct == "full":
graph_struct = self.full_graph_struct
elif struct == "non_error":
graph_struct = self.graph
elif isinstance(struct, nx.DiGraph):
graph_struct = struct
else:
raise ValueError(
f"Expected struct to be str or nx.DiGraph. Got {type(struct)}"
)
ancestors_list = set()
for node in observed:
ancestors_list = ancestors_list.union(
nx.algorithms.dag.ancestors(graph_struct, node)
)
# Direction of flow of information
# up -> from parent to child
# down -> from child to parent
active_trails = {}
for start in variables if isinstance(variables, (list, tuple)) else [variables]:
visit_list = set()
visit_list.add((start, "up"))
traversed_list = set()
active_nodes = set()
while visit_list:
node, direction = visit_list.pop()
if node in avoid_nodes:
continue
if (node, direction) not in traversed_list:
if (
(node not in observed)
and (not node.startswith("."))
and (node not in self.latents)
):
active_nodes.add(node)
traversed_list.add((node, direction))
if direction == "up" and node not in observed:
for parent in graph_struct.predecessors(node):
visit_list.add((parent, "up"))
for child in graph_struct.successors(node):
visit_list.add((child, "down"))
elif direction == "down":
if node not in observed:
for child in graph_struct.successors(node):
visit_list.add((child, "down"))
if node in ancestors_list:
for parent in graph_struct.predecessors(node):
visit_list.add((parent, "up"))
active_trails[start] = active_nodes
return active_trails
def _iv_transformations(self, X, Y, scaling_indicators={}):
"""
Transforms the graph structure of SEM so that the d-separation criterion is
applicable for finding IVs. The method transforms the graph for finding MIIV
        for the estimation of X -> Y given the scaling indicator for all the
parent latent variables.
Parameters
----------
X: node
            The explanatory variable.
Y: node
The dependent variable.
scaling_indicators: dict
Scaling indicator for each latent variable in the model.
Returns
-------
nx.DiGraph: The transformed full graph structure.
Examples
--------
>>> from pgmpy.models import SEMGraph
>>> model = SEMGraph(ebunch=[('xi1', 'eta1'), ('xi1', 'x1'), ('xi1', 'x2'),
... ('eta1', 'y1'), ('eta1', 'y2')],
... latents=['xi1', 'eta1'])
>>> model._iv_transformations('xi1', 'eta1',
... scaling_indicators={'xi1': 'x1', 'eta1': 'y1'})
"""
full_graph = self.full_graph_struct.copy()
if not (X, Y) in full_graph.edges():
raise ValueError(f"The edge from {X} -> {Y} doesn't exist in the graph")
if (X in self.observed) and (Y in self.observed):
full_graph.remove_edge(X, Y)
return full_graph, Y
elif Y in self.latents:
full_graph.add_edge("." + Y, scaling_indicators[Y])
dependent_var = scaling_indicators[Y]
else:
dependent_var = Y
for parent_y in self.graph.predecessors(Y):
# Remove edge even when the parent is observed ????
full_graph.remove_edge(parent_y, Y)
if parent_y in self.latents:
full_graph.add_edge("." + scaling_indicators[parent_y], dependent_var)
return full_graph, dependent_var
def get_ivs(self, X, Y, scaling_indicators={}):
"""
Returns the Instrumental variables(IVs) for the relation X -> Y
Parameters
----------
X: node
The variable name (observed or latent)
Y: node
The variable name (observed or latent)
scaling_indicators: dict (optional)
A dict representing which observed variable to use as scaling indicator for
the latent variables.
If not given the method automatically selects one of the measurement variables
at random as the scaling indicator.
Returns
-------
set: {str}
The set of Instrumental Variables for X -> Y.
Examples
--------
>>> from pgmpy.models import SEMGraph
>>> model = SEMGraph(ebunch=[('I', 'X'), ('X', 'Y')],
... latents=[],
... err_corr=[('X', 'Y')])
>>> model.get_ivs('X', 'Y')
{'I'}
"""
if not scaling_indicators:
scaling_indicators = self.get_scaling_indicators()
if (X in scaling_indicators.keys()) and (scaling_indicators[X] == Y):
warnings.warn(
f"{Y} is the scaling indicator of {X}. Please specify `scaling_indicators`"
)
transformed_graph, dependent_var = self._iv_transformations(
X, Y, scaling_indicators=scaling_indicators
)
if X in self.latents:
explanatory_var = scaling_indicators[X]
else:
explanatory_var = X
d_connected_x = self.active_trail_nodes(
[explanatory_var], struct=transformed_graph
)[explanatory_var]
# Condition on X to block any paths going through X.
d_connected_y = self.active_trail_nodes(
[dependent_var], avoid_nodes=[explanatory_var], struct=transformed_graph
)[dependent_var]
# Remove {X, Y} because they can't be IV for X -> Y
return d_connected_x - d_connected_y - {dependent_var, explanatory_var}
def moralize(self, graph="full"):
"""
TODO: This needs to go to a parent class.
Removes all the immoralities in the DirectedGraph and creates a moral
graph (UndirectedGraph).
A v-structure X->Z<-Y is an immorality if there is no directed edge
between X and Y.
Parameters
----------
        graph: "full" or nx.DiGraph instance (default: "full")
            The graph to moralize. "full" moralizes `self.full_graph_struct`; an
            nx.DiGraph instance is moralized directly; any other value falls back to `self.graph`.
"""
if graph == "full":
graph = self.full_graph_struct
elif isinstance(graph, nx.DiGraph):
graph = graph
else:
graph = self.graph
moral_graph = graph.to_undirected()
for node in graph.nodes():
moral_graph.add_edges_from(
itertools.combinations(graph.predecessors(node), 2)
)
return moral_graph
def _nearest_separator(self, G, Y, Z):
"""
Finds the set of nearest separators for `Y` and `Z` in `G`.
Parameters
----------
G: nx.DiGraph instance
The graph in which to the find the nearest separation for `Y` and `Z`.
Y: str
The variable name for which the separators are needed.
Z: str
The other variable for which the separators are needed.
Returns
-------
set or None: If there is a nearest separator returns the set of separators else returns None.
"""
W = set()
ancestral_G = G.subgraph(
nx.ancestors(G, Y).union(nx.ancestors(G, Z)).union({Y, Z})
).copy()
# Optimization: Remove all error nodes which don't have any correlation as it doesn't add any new path. If not removed it can create a lot of
# extra paths resulting in a much higher runtime.
err_nodes_to_remove = set(self.err_graph.nodes()) - set(
[node for edge in self.err_graph.edges() for node in edge]
)
ancestral_G.remove_nodes_from(["." + node for node in err_nodes_to_remove])
M = self.moralize(graph=ancestral_G)
visited = set([Y])
to_visit = list(M.neighbors(Y))
# Another optimization over the original algo. Rather than going through all the paths does
# a DFS search to find a markov blanket of observed variables. This doesn't ensure minimal observed
# set.
while to_visit:
node = to_visit.pop()
if node == Z:
return None
visited.add(node)
if node in self.observed:
W.add(node)
else:
to_visit.extend(
[node for node in M.neighbors(node) if node not in visited]
)
# for path in nx.all_simple_paths(M, Y, Z):
# path_set = set(path)
# if (len(path) >= 3) and not (W & path_set):
# for index in range(1, len(path)-1):
# if path[index] in self.observed:
# W.add(path[index])
# break
if Y not in self.active_trail_nodes([Z], observed=W, struct=ancestral_G)[Z]:
return W
else:
return None
def get_conditional_ivs(self, X, Y, scaling_indicators={}):
"""
Returns the conditional IVs for the relation X -> Y
Parameters
----------
X: node
The observed variable's name
Y: node
            The observed variable's name
scaling_indicators: dict (optional)
A dict representing which observed variable to use as scaling indicator for
the latent variables.
If not provided, automatically finds scaling indicators by randomly selecting
one of the measurement variables of each latent variable.
Returns
-------
set: Set of 2-tuples representing tuple[0] is an IV for X -> Y given tuple[1].
References
----------
.. [1] Van Der Zander, B., Textor, J., & Liskiewicz, M. (2015, June). Efficiently finding
conditional instruments for causal inference. In Twenty-Fourth International Joint
Conference on Artificial Intelligence.
Examples
--------
>>> from pgmpy.models import SEMGraph
>>> model = SEMGraph(ebunch=[('I', 'X'), ('X', 'Y'), ('W', 'I')],
... latents=[],
... err_corr=[('W', 'Y')])
>>> model.get_ivs('X', 'Y')
[('I', {'W'})]
"""
if not scaling_indicators:
scaling_indicators = self.get_scaling_indicators()
if (X in scaling_indicators.keys()) and (scaling_indicators[X] == Y):
warnings.warn(
f"{Y} is the scaling indicator of {X}. Please specify `scaling_indicators`"
)
transformed_graph, dependent_var = self._iv_transformations(
X, Y, scaling_indicators=scaling_indicators
)
        if (X, Y) in transformed_graph.edges:
            transformed_graph.remove_edge(X, Y)
        G_c = transformed_graph
instruments = []
for Z in self.observed - {X, Y}:
W = self._nearest_separator(G_c, Y, Z)
# Condition to check if W d-separates Y from Z
if (not W) or (W.intersection(descendants(G_c, Y))) or (X in W):
continue
# Condition to check if X d-connected to I after conditioning on W.
elif X in self.active_trail_nodes([Z], observed=W, struct=G_c)[Z]:
instruments.append((Z, W))
else:
continue
return instruments
def to_lisrel(self):
"""
Converts the model from a graphical representation to an equivalent algebraic
representation. This converts the model into a Reticular Action Model (RAM) model
representation which is implemented by `pgmpy.models.SEMAlg` class.
Returns
-------
SEMAlg instance: Instance of `SEMAlg` representing the model.
Examples
--------
>>> from pgmpy.models import SEM
>>> sem = SEM.from_graph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'),
... ('yrsmill', 'unionsen'), ('age', 'deferenc'),
... ('age', 'laboract'), ('deferenc', 'laboract')],
... latents=[],
... err_corr=[('yrsmill', 'age')],
... err_var={})
>>> sem.to_lisrel()
# TODO: Complete this.
See Also
--------
to_standard_lisrel: Converts to the standard lisrel format and returns the parameters.
"""
nodelist = list(self.observed) + list(self.latents)
graph_adj = nx.to_numpy_matrix(self.graph, nodelist=nodelist, weight=None)
graph_fixed = nx.to_numpy_matrix(self.graph, nodelist=nodelist, weight="weight")
err_adj = nx.to_numpy_matrix(self.err_graph, nodelist=nodelist, weight=None)
np.fill_diagonal(err_adj, 1.0) # Variance exists for each error term.
err_fixed = nx.to_numpy_matrix(
self.err_graph, nodelist=nodelist, weight="weight"
)
# Add the variance of the error terms.
for index, node in enumerate(nodelist):
try:
err_fixed[index, index] = self.err_graph.nodes[node]["weight"]
except KeyError:
err_fixed[index, index] = 0.0
wedge_y = np.zeros((len(self.observed), len(nodelist)), dtype=int)
for index, obs_var in enumerate(self.observed):
wedge_y[index][nodelist.index(obs_var)] = 1.0
from pgmpy.models import SEMAlg
return SEMAlg(
eta=nodelist,
B=graph_adj.T,
zeta=err_adj.T,
wedge_y=wedge_y,
fixed_values={"B": graph_fixed.T, "zeta": err_fixed.T},
)
@staticmethod
def __standard_lisrel_masks(graph, err_graph, weight, var):
"""
This method is called by `get_fixed_masks` and `get_masks` methods.
Parameters
----------
weight: None | 'weight'
If None: Returns a 1.0 for an edge in the graph else 0.0
If 'weight': Returns the weight if a weight is assigned to an edge
else 0.0
var: dict
Dict with keys eta, xi, y, and x representing the variables in them.
Returns
-------
        np.ndarray: Adjacency matrix of the model's graph structure.
Notes
-----
B: Effect matrix of eta on eta
\gamma: Effect matrix of xi on eta
\wedge_y: Effect matrix of eta on y
\wedge_x: Effect matrix of xi on x
\phi: Covariance matrix of xi
\psi: Covariance matrix of eta errors
\theta_e: Covariance matrix of y errors
\theta_del: Covariance matrix of x errors
Examples
--------
"""
        # Arrange the adjacency matrix in order y, x, eta, xi and then slice masks from it.
# y(p) x(q) eta(m) xi(n)
# y
# x
# eta \wedge_y B
# xi \wedge_x \Gamma
#
# But here we are slicing from the transpose of adjacency because we want incoming
# edges instead of outgoing because parameters come before variables in equations.
#
# y(p) x(q) eta(m) xi(n)
# y \wedge_y
# x \wedge_x
# eta B \Gamma
# xi
y_vars, x_vars, eta_vars, xi_vars = var["y"], var["x"], var["eta"], var["xi"]
p, q, m, n = (len(y_vars), len(x_vars), len(eta_vars), len(xi_vars))
nodelist = y_vars + x_vars + eta_vars + xi_vars
adj_matrix = nx.to_numpy_matrix(graph, nodelist=nodelist, weight=weight).T
B_mask = adj_matrix[p + q : p + q + m, p + q : p + q + m]
gamma_mask = adj_matrix[p + q : p + q + m, p + q + m :]
wedge_y_mask = adj_matrix[0:p, p + q : p + q + m]
wedge_x_mask = adj_matrix[p : p + q, p + q + m :]
err_nodelist = y_vars + x_vars + eta_vars + xi_vars
err_adj_matrix = nx.to_numpy_matrix(
err_graph, nodelist=err_nodelist, weight=weight
)
if not weight == "weight":
np.fill_diagonal(err_adj_matrix, 1.0)
theta_e_mask = err_adj_matrix[:p, :p]
theta_del_mask = err_adj_matrix[p : p + q, p : p + q]
psi_mask = err_adj_matrix[p + q : p + q + m, p + q : p + q + m]
phi_mask = err_adj_matrix[p + q + m :, p + q + m :]
return {
"B": B_mask,
"gamma": gamma_mask,
"wedge_y": wedge_y_mask,
"wedge_x": wedge_x_mask,
"phi": phi_mask,
"theta_e": theta_e_mask,
"theta_del": theta_del_mask,
"psi": psi_mask,
}
def to_standard_lisrel(self):
r"""
Transforms the model to the standard LISREL representation of latent and measurement
equations. The standard LISREL representation is given as:
..math::
\mathbf{\eta} = \mathbf{B \eta} + \mathbf{\Gamma \xi} + \mathbf{\zeta} \\
\mathbf{y} = \mathbf{\wedge_y \eta} + \mathbf{\epsilon} \\
\mathbf{x} = \mathbf{\wedge_x \xi} + \mathbf{\delta} \\
\mathbf{\Theta_e} = COV(\mathbf{\epsilon}) \\
\mathbf{\Theta_\delta} = COV(\mathbf{\delta}) \\
\mathbf{\Psi} = COV(\mathbf{\eta}) \\
\mathbf{\Phi} = COV(\mathbf{\xi}) \\
Since the standard LISREL representation has restrictions on the types of model,
this method adds extra latent variables with fixed loadings of `1` to make the model
consistent with the restrictions.
Returns
-------
var_names: dict (keys: eta, xi, y, x)
Returns the variable names in :math:`\mathbf{\eta}`, :math:`\mathbf{\xi}`,
:math:`\mathbf{y}`, :math:`\mathbf{x}`.
params: dict (keys: B, gamma, wedge_y, wedge_x, theta_e, theta_del, phi, psi)
Returns a boolean matrix for each of the parameters. A 1 in the matrix
represents that there is an edge in the model, 0 represents there is no edge.
fixed_values: dict (keys: B, gamma, wedge_y, wedge_x, theta_e, theta_del, phi, psi)
Returns a matrix for each of the parameters. A value in the matrix represents the
set value for the parameter in the model else it is 0.
See Also
--------
to_lisrel: Converts the model to `pgmpy.models.SEMAlg` instance.
Examples
--------
TODO: Finish this.
"""
lisrel_err_graph = self.err_graph.copy()
lisrel_latents = self.latents.copy()
lisrel_observed = self.observed.copy()
# Add new latent nodes to convert it to LISREL format.
mapping = {}
for u, v in self.graph.edges:
if (u not in self.latents) and (v in self.latents):
mapping[u] = "_l_" + u
elif (u not in self.latents) and (v not in self.latents):
mapping[u] = "_l_" + u
lisrel_latents.update(mapping.values())
lisrel_graph = nx.relabel_nodes(self.graph, mapping, copy=True)
for u, v in mapping.items():
lisrel_graph.add_edge(v, u, weight=1.0)
# Get values of eta, xi, y, x
latent_struct = lisrel_graph.subgraph(lisrel_latents)
latent_indegree = lisrel_graph.in_degree()
eta = []
xi = []
for node in latent_struct.nodes():
if latent_indegree[node]:
eta.append(node)
else:
xi.append(node)
x = set()
y = set()
for exo in xi:
x.update(
[x for x in lisrel_graph.neighbors(exo) if x not in lisrel_latents]
)
for endo in eta:
y.update(
[y for y in lisrel_graph.neighbors(endo) if y not in lisrel_latents]
)
# If some node has edges from both eta and xi, replace it with another latent variable
# otherwise it won't get included in any of the matrices.
# TODO: Patchy work. Find a better solution.
common_elements = set(x).intersection(set(y))
if common_elements:
mapping = {}
for var in common_elements:
mapping[var] = "_l_" + var
lisrel_graph = nx.relabel_nodes(lisrel_graph, mapping, copy=True)
for v, u in mapping.items():
lisrel_graph.add_edge(u, v, weight=1.0)
eta.extend(mapping.values())
x = list(set(x) - common_elements)
y.update(common_elements)
var_names = {"eta": eta, "xi": xi, "y": list(y), "x": list(x)}
edges_masks = self.__standard_lisrel_masks(
graph=lisrel_graph, err_graph=lisrel_err_graph, weight=None, var=var_names
)
fixed_masks = self.__standard_lisrel_masks(
graph=lisrel_graph,
err_graph=lisrel_err_graph,
weight="weight",
var=var_names,
)
return (var_names, edges_masks, fixed_masks)
class SEMAlg:
"""
Base class for algebraic representation of Structural Equation Models(SEMs). The model is
represented using the Reticular Action Model (RAM).
"""
def __init__(self, eta=None, B=None, zeta=None, wedge_y=None, fixed_values=None):
r"""
Initializes SEMAlg model. The model is represented using the Reticular Action Model(RAM)
which is given as:
..math::
\mathbf{\eta} = \mathbf{B \eta} + \mathbf{\zeta}
\mathbf{y} = \mathbf{\wedge_y \eta}
where :math:`\mathbf{\eta}` is the set of all the observed and latent variables in the
model, :math:`\mathbf{y}` are the set of observed variables, :math:`\mathbf{\zeta}` is
the error terms for :math:`\mathbf{\eta}`, and \mathbf{\wedge_y} is a boolean array to
select the observed variables from :math:`\mathbf{\eta}`.
Parameters
----------
The following set of parameters are used to set the learnable parameters in the model.
To specify the values of the parameter use the `fixed_values` parameter. Either `eta`,
`B`, `zeta`, and `wedge_y`, or `fixed_values` need to be specified.
eta: list/array-like
The name of the variables in the model.
B: 2-D array (boolean)
The learnable parameters in the `B` matrix.
zeta: 2-D array (boolean)
The learnable parameters in the covariance matrix of the error terms.
wedge_y: 2-D array
The `wedge_y` matrix.
        fixed_values: dict (default: None)
A dict of fixed values for parameters.
If None all the parameters specified by `B`, and `zeta` are learnable.
Returns
-------
        pgmpy.models.SEMAlg instance: An instance of the object with initialized values.
Examples
--------
>>> from pgmpy.models import SEMAlg
# TODO: Finish this example
"""
self.eta = eta
self.B = np.array(B)
self.zeta = np.array(zeta)
self.wedge_y = wedge_y
# Get the observed variables
self.y = []
for row_i in range(self.wedge_y.shape[0]):
for index, val in enumerate(self.wedge_y[row_i]):
if val:
self.y.append(self.eta[index])
if fixed_values:
self.B_fixed_mask = fixed_values["B"]
self.zeta_fixed_mask = fixed_values["zeta"]
else:
self.B_fixed_mask = np.zeros(self.B.shape)
self.zeta_fixed_mask = np.zeros(self.zeta.shape)
# Masks represent the parameters which need to be learnt while training.
self.B_mask = np.multiply(np.where(self.B_fixed_mask != 0, 0.0, 1.0), self.B)
self.zeta_mask = np.multiply(
np.where(self.zeta_fixed_mask != 0, 0.0, 1.0), self.zeta
)
def to_SEMGraph(self):
"""
Creates a graph structure from the LISREL representation.
Returns
-------
pgmpy.models.SEMGraph instance: A path model of the model.
Examples
--------
>>> from pgmpy.models import SEMAlg
>>> model = SEMAlg()
# TODO: Finish this example
"""
err_var = {var: np.diag(self.zeta)[i] for i, var in enumerate(self.eta)}
graph = nx.relabel_nodes(
nx.from_numpy_matrix(self.B.T, create_using=nx.DiGraph),
mapping={i: self.eta[i] for i in range(self.B.shape[0])},
)
# Fill zeta diagonal with 0's as they represent variance and would add self loops in the graph.
zeta = self.zeta.copy()
np.fill_diagonal(zeta, 0)
err_graph = nx.relabel_nodes(
nx.from_numpy_matrix(zeta.T, create_using=nx.Graph),
mapping={i: self.eta[i] for i in range(self.zeta.shape[0])},
)
latents = set(self.eta) - set(self.y)
from pgmpy.models import SEMGraph
# TODO: Add edge weights
sem_graph = SEMGraph(
ebunch=graph.edges(),
latents=latents,
err_corr=err_graph.edges(),
err_var=err_var,
)
return sem_graph
def set_params(self, B, zeta):
"""
Sets the fixed parameters of the model.
Parameters
----------
B: 2D array
The B matrix.
zeta: 2D array
The covariance matrix.
"""
self.B_fixed_mask = B
self.zeta_fixed_mask = zeta
def generate_samples(self, n_samples=100):
"""
Generates random samples from the model.
Parameters
----------
n_samples: int
The number of samples to generate.
Returns
-------
        pd.DataFrame: The generated samples.
"""
if (self.B_fixed_mask is None) or (self.zeta_fixed_mask is None):
raise ValueError("Parameters for the model has not been specified.")
B_inv = np.linalg.inv(np.eye(self.B_fixed_mask.shape[0]) - self.B_fixed_mask)
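        # Model-implied covariance of the observed variables under the RAM
        # parameterization: Sigma = wedge_y (I - B)^-1 zeta (I - B)^-T wedge_y^T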
implied_cov = (
self.wedge_y @ B_inv @ self.zeta_fixed_mask @ B_inv.T @ self.wedge_y.T
)
# Check if implied covariance matrix is positive definite.
if not np.all(np.linalg.eigvals(implied_cov) > 0):
raise ValueError(
"The implied covariance matrix is not positive definite."
+ "Please check model parameters."
)
# Get the order of observed variables
x_index, y_index = np.nonzero(self.wedge_y)
observed = [self.eta[i] for i in y_index]
# Generate samples and return a dataframe.
samples = np.random.multivariate_normal(
mean=[0 for i in range(implied_cov.shape[0])],
cov=implied_cov,
size=n_samples,
)
return pd.DataFrame(samples, columns=observed)
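# Usage sketch (assumes a fully parameterized model): for a SEMGraph `g` whose edge
# weights and error (co)variances are all set, `g.to_lisrel().generate_samples(1000)`
# returns a DataFrame of the observed variables drawn from the implied Gaussian.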
class SEM(SEMGraph):
"""
Class for representing Structural Equation Models. This class is a wrapper over
`SEMGraph` and `SEMAlg` to provide a consistent API over the different representations.
Attributes
----------
model: SEMGraph instance
A graphical representation of the model.
"""
def __init__(self, syntax, **kwargs):
"""
        Initialize a `SEM` object. The preferred way to initialize the object is to use one of
the `from_lavaan`, `from_graph`, `from_lisrel`, or `from_RAM` methods.
There are three possible ways to initialize the model:
1. Lavaan syntax: `lavaan_str` needs to be specified.
        2. Graph structure: `ebunch`, `latents`, `err_corr`, and `err_var` need to be specified.
3. LISREL syntax: `var_names`, `params`, and `fixed_masks` need to be specified.
4. Reticular Action Model (RAM/all-y) syntax: `var_names`, `B`, `zeta`, and `wedge_y`
need to be specified.
Parameters
----------
syntax: str (lavaan|graph|lisrel|ram)
The syntax used to initialize the model.
kwargs:
For parameter details, check docstrings for `from_lavaan`, `from_graph`, `from_lisrel`,
and `from_RAM` methods.
See Also
--------
from_lavaan: Initialize a model using lavaan syntax.
from_graph: Initialize a model using graph structure.
from_lisrel: Initialize a model using LISREL syntax.
from_RAM: Initialize a model using Reticular Action Model(RAM/all-y) syntax.
"""
if syntax.lower() == "lavaan":
# Create a SEMGraph model using the lavaan str.
# Step 1: Define the grammar for each type of string.
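            # The grammars below correspond to the standard lavaan operators, e.g.:
            #   regression:         y ~ x1 + x2
            #   intercept:          y ~ 1
            #   (co)variance:       y1 ~~ y2
            #   latent definition:  f =~ y1 + y2 + y3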
var = Word(alphanums)
reg_gram = (
OneOrMore(
var.setResultsName("predictors", listAllMatches=True)
+ Optional(Suppress("+"))
)
+ "~"
+ OneOrMore(
var.setResultsName("covariates", listAllMatches=True)
+ Optional(Suppress("+"))
)
)
intercept_gram = var("inter_var") + "~" + Word("1")
covar_gram = (
var("covar_var1")
+ "~~"
+ OneOrMore(
var.setResultsName("covar_var2", listAllMatches=True)
+ Optional(Suppress("+"))
)
)
latent_gram = (
var("latent")
+ "=~"
+ OneOrMore(
var.setResultsName("obs", listAllMatches=True)
+ Optional(Suppress("+"))
)
)
# Step 2: Preprocess string to lines
lines = kwargs["lavaan_str"]
# Step 3: Initialize arguments and fill them by parsing each line.
ebunch = []
latents = []
err_corr = []
err_var = []
for line in lines:
line = line.strip()
if (line != "") and (not line.startswith("#")):
if intercept_gram.matches(line):
continue
elif reg_gram.matches(line):
results = reg_gram.parseString(line, parseAll=True)
for pred in results["predictors"]:
ebunch.extend(
[
(covariate, pred)
for covariate in results["covariates"]
]
)
elif covar_gram.matches(line):
results = covar_gram.parseString(line, parseAll=True)
for var in results["covar_var2"]:
err_corr.append((results["covar_var1"], var))
elif latent_gram.matches(line):
results = latent_gram.parseString(line, parseAll=True)
latents.append(results["latent"])
ebunch.extend(
[(results["latent"], obs) for obs in results["obs"]]
)
# Step 4: Call the parent __init__ with the arguments
super(SEM, self).__init__(ebunch=ebunch, latents=latents, err_corr=err_corr)
elif syntax.lower() == "graph":
super(SEM, self).__init__(
ebunch=kwargs["ebunch"],
latents=kwargs["latents"],
err_corr=kwargs["err_corr"],
err_var=kwargs["err_var"],
)
elif syntax.lower() == "lisrel":
model = SEMAlg(
var_names=var_names, params=params, fixed_masks=fixed_masks
).to_SEMGraph()
# Initialize an empty SEMGraph instance and set the properties.
# TODO: Boilerplate code, find a better way to do this.
super(SEM, self).__init__(ebunch=[], latents=[], err_corr=[], err_var={})
self.graph = model.graph
self.latents = model.latents
            self.observed = model.observed
self.err_graph = model.err_graph
self.full_graph_struct = model.full_graph_struct
elif syntax.lower() == "ram":
model = SEMAlg(
eta=kwargs["var_names"],
B=kwargs["B"],
zeta=kwargs["zeta"],
wedge_y=kwargs["wedge_y"],
fixed_values=fixed_masks,
)
@classmethod
def from_lavaan(cls, string=None, filename=None):
"""
Initializes a `SEM` instance using lavaan syntax.
Parameters
----------
string: str (default: None)
A `lavaan` style multiline set of regression equation representing the model.
Refer http://lavaan.ugent.be/tutorial/syntax1.html for details.
filename: str (default: None)
The filename of the file containing the model in lavaan syntax.
Examples
--------
"""
if filename:
with open(filename, "r") as f:
lavaan_str = f.readlines()
elif string:
lavaan_str = string.split("\n")
else:
raise ValueError("Either `filename` or `string` need to be specified")
return cls(syntax="lavaan", lavaan_str=lavaan_str)
@classmethod
def from_graph(cls, ebunch, latents=[], err_corr=[], err_var={}):
"""
Initializes a `SEM` instance using graphical structure.
Parameters
----------
ebunch: list/array-like
List of edges in form of tuples. Each tuple can be of two possible shape:
1. (u, v): This would add an edge from u to v without setting any parameter
for the edge.
2. (u, v, parameter): This would add an edge from u to v and set the edge's
parameter to `parameter`.
latents: list/array-like
List of nodes which are latent. All other variables are considered observed.
err_corr: list/array-like
List of tuples representing edges between error terms. It can be of the following forms:
1. (u, v): Add correlation between error terms of `u` and `v`. Doesn't set any variance or
covariance values.
2. (u, v, covar): Adds correlation between the error terms of `u` and `v` and sets the
parameter to `covar`.
err_var: dict
Dict of the form (var: variance).
Examples
--------
        Defining a model (Union sentiment model[1]) without setting any parameters.
>>> from pgmpy.models import SEM
>>> sem = SEM.from_graph(ebunch=[('deferenc', 'unionsen'), ('laboract', 'unionsen'),
... ('yrsmill', 'unionsen'), ('age', 'deferenc'),
... ('age', 'laboract'), ('deferenc', 'laboract')],
... latents=[],
... err_corr=[('yrsmill', 'age')],
... err_var={})
Defining a model (Education [2]) with all the parameters set. For not setting any
parameter `np.NaN` can be explicitly passed.
>>> sem_edu = SEM.from_graph(ebunch=[('intelligence', 'academic', 0.8), ('intelligence', 'scale_1', 0.7),
... ('intelligence', 'scale_2', 0.64), ('intelligence', 'scale_3', 0.73),
... ('intelligence', 'scale_4', 0.82), ('academic', 'SAT_score', 0.98),
... ('academic', 'High_school_gpa', 0.75), ('academic', 'ACT_score', 0.87)],
... latents=['intelligence', 'academic'],
... err_corr=[],
... err_var={})
References
----------
[1] McDonald, A, J., & Clelland, D. A. (1984). Textile Workers and Union Sentiment.
Social Forces, 63(2), 502–521
[2] https://en.wikipedia.org/wiki/Structural_equation_modeling#/
media/File:Example_Structural_equation_model.svg
"""
return cls(
syntax="graph",
ebunch=ebunch,
latents=latents,
err_corr=err_corr,
err_var=err_var,
)
@classmethod
def from_lisrel(cls, var_names, params, fixed_masks=None):
r"""
Initializes a `SEM` instance using LISREL notation. The LISREL notation is defined as:
..math::
\mathbf{\eta} = \mathbf{B \eta} + \mathbf{\Gamma \xi} + mathbf{\zeta} \\
\mathbf{y} = \mathbf{\wedge_y \eta} + \mathbf{\epsilon} \\
\mathbf{x} = \mathbf{\wedge_x \xi} + \mathbf{\delta}
where :math:`\mathbf{\eta}` is the set of endogenous variables, :math:`\mathbf{\xi}`
is the set of exogeneous variables, :math:`\mathbf{y}` and :math:`\mathbf{x}` are the
set of measurement variables for :math:`\mathbf{\eta}` and :math:`\mathbf{\xi}`
respectively. :math:`\mathbf{\zeta}`, :math:`\mathbf{\epsilon}`, and :math:`\mathbf{\delta}`
are the error terms for :math:`\mathbf{\eta}`, :math:`\mathbf{y}`, and :math:`\mathbf{x}`
respectively.
Parameters
----------
        var_names: dict
            A dict with the keys: eta, xi, y, and x. Each key should have a list as the value
            with the names of the variables.
        params: dict
            A dict of LISREL representation non-zero parameters. Must contain the following
            keys: B, gamma, wedge_y, wedge_x, phi, theta_e, theta_del, and psi.
        fixed_masks: dict (default: None)
            A dict of fixed values for parameters. The shape of the parameters should be the
            same as in params.
            If None all the parameters are learnable.
Returns
-------
        pgmpy.models.SEM instance: An instance of the object with initialized values.
Examples
--------
>>> from pgmpy.models import SEMAlg
# TODO: Finish this example
"""
eta = var_names["y"] + var_names["x"] + var_names["eta"] + var_names["xi"]
m, n, p, q = (
len(var_names["y"]),
len(var_names["x"]),
len(var_names["eta"]),
len(var_names["xi"]),
)
B = np.block(
[
[np.zeros((m, m + n)), params["wedge_y"], np.zeros((m, q))],
[np.zeros((n, m + n + p)), params["wedge_x"]],
[np.zeros((p, m + n)), params["B"], params["gamma"]],
[np.zeros((q, m + n + p + q))],
]
)
zeta = np.block(
[
[params["theta_e"], np.zeros((m, n + p + q))],
[np.zeros((n, m)), params["theta_del"], np.zeros((n, p + q))],
[np.zeros((p, m + n)), params["psi"], np.zeros((p, q))],
[np.zeros((q, m + n + p)), params["phi"]],
]
)
        B_fixed = np.block(
            [
                [np.zeros((m, m + n)), fixed_masks["wedge_y"], np.zeros((m, q))],
                [np.zeros((n, m + n + p)), fixed_masks["wedge_x"]],
                [np.zeros((p, m + n)), fixed_masks["B"], fixed_masks["gamma"]],
                [np.zeros((q, m + n + p + q))],
            ]
        )
        zeta_fixed = np.block(
            [
                [fixed_masks["theta_e"], np.zeros((m, n + p + q))],
                [np.zeros((n, m)), fixed_masks["theta_del"], np.zeros((n, p + q))],
                [np.zeros((p, m + n)), fixed_masks["psi"], np.zeros((p, q))],
                [np.zeros((q, m + n + p)), fixed_masks["phi"]],
            ]
        )
        observed = var_names["y"] + var_names["x"]
        return cls.from_RAM(
            variables=eta,
            B=B,
            zeta=zeta,
            observed=observed,
            fixed_values={"B": B_fixed, "zeta": zeta_fixed},
        )
@classmethod
def from_RAM(
cls, variables, B, zeta, observed=None, wedge_y=None, fixed_values=None
):
r"""
Initializes a `SEM` instance using Reticular Action Model(RAM) notation. The model
is defined as:
..math::
\mathbf{\eta} = \mathbf{B \eta} + \mathbf{\epsilon} \\
\mathbf{\y} = \wedge_y \mathbf{\eta}
\zeta = COV(\mathbf{\epsilon})
where :math:`\mathbf{\eta}` is the set of variables (both latent and observed),
:math:`\mathbf{\epsilon}` are the error terms, :math:`\mathbf{y}` is the set
of observed variables, :math:`\wedge_y` is a boolean array of the shape (no of
observed variables, no of total variables).
Parameters
----------
variables: list, array-like
List of variables (both latent and observed) in the model.
B: 2-D boolean array (shape: `len(variables)` x `len(variables)`)
The non-zero parameters in :math:`B` matrix. Refer model definition in docstring for details.
zeta: 2-D boolean array (shape: `len(variables)` x `len(variables)`)
The non-zero parameters in :math:`\zeta` (error covariance) matrix. Refer model definition
in docstring for details.
observed: list, array-like (optional: Either `observed` or `wedge_y` needs to be specified)
List of observed variables in the model.
wedge_y: 2-D array (shape: no. observed x total vars) (optional: Either `observed` or `wedge_y`)
The :math:`\wedge_y` matrix. Refer model definition in docstring for details.
fixed_values: dict (optional)
If specified, fixes the parameter values and are not changed during estimation.
A dict with the keys B, zeta.
Returns
-------
pgmpy.models.SEM instance: An instance of the object with initialized values.
Examples
--------
>>> from pgmpy.models import SEM
>>> SEM.from_RAM # TODO: Finish this
"""
if observed:
            wedge_y = np.zeros((len(observed), len(variables)))
obs_dict = {var: index for index, var in enumerate(observed)}
all_dict = {var: index for index, var in enumerate(variables)}
for var in observed:
wedge_y[obs_dict[var], all_dict[var]] = 1
return cls(
syntax="ram",
var_names=variables,
B=B,
zeta=zeta,
wedge_y=wedge_y,
fixed_values=fixed_values,
)
def fit(self):
pass
| mit |
EttusResearch/gnuradio | gr-analog/examples/fmtest.py | 40 | 7941 | #!/usr/bin/env python
#
# Copyright 2009,2012,2013 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr
from gnuradio import blocks
from gnuradio import filter
from gnuradio import analog
from gnuradio import channels
import sys, math, time
try:
import scipy
from scipy import fftpack
except ImportError:
print "Error: Program requires scipy (see: www.scipy.org)."
sys.exit(1)
try:
import pylab
except ImportError:
print "Error: Program requires matplotlib (see: matplotlib.sourceforge.net)."
sys.exit(1)
class fmtx(gr.hier_block2):
def __init__(self, lo_freq, audio_rate, if_rate):
gr.hier_block2.__init__(self, "build_fm",
gr.io_signature(1, 1, gr.sizeof_float),
gr.io_signature(1, 1, gr.sizeof_gr_complex))
fmtx = analog.nbfm_tx(audio_rate, if_rate, max_dev=5e3, tau=75e-6)
# Local oscillator
lo = analog.sig_source_c(if_rate, # sample rate
analog.GR_SIN_WAVE, # waveform type
lo_freq, # frequency
1.0, # amplitude
0) # DC Offset
mixer = blocks.multiply_cc()
self.connect(self, fmtx, (mixer, 0))
self.connect(lo, (mixer, 1))
self.connect(mixer, self)
class fmtest(gr.top_block):
def __init__(self):
gr.top_block.__init__(self)
self._nsamples = 1000000
self._audio_rate = 8000
# Set up N channels with their own baseband and IF frequencies
self._N = 5
chspacing = 16000
freq = [10, 20, 30, 40, 50]
f_lo = [0, 1*chspacing, -1*chspacing, 2*chspacing, -2*chspacing]
self._if_rate = 4*self._N*self._audio_rate
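        # Note: with the 10-channel filterbank built below, each channel runs at
        # if_rate / 10 = 16 kHz, which matches the 16 kHz channel spacing used above.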
# Create a signal source and frequency modulate it
self.sum = blocks.add_cc()
for n in xrange(self._N):
sig = analog.sig_source_f(self._audio_rate, analog.GR_SIN_WAVE, freq[n], 0.5)
fm = fmtx(f_lo[n], self._audio_rate, self._if_rate)
self.connect(sig, fm)
self.connect(fm, (self.sum, n))
self.head = blocks.head(gr.sizeof_gr_complex, self._nsamples)
self.snk_tx = blocks.vector_sink_c()
self.channel = channels.channel_model(0.1)
self.connect(self.sum, self.head, self.channel, self.snk_tx)
        # Design the channelizer
self._M = 10
bw = chspacing/2.0
t_bw = chspacing/10.0
self._chan_rate = self._if_rate / self._M
self._taps = filter.firdes.low_pass_2(1, self._if_rate, bw, t_bw,
attenuation_dB=100,
window=filter.firdes.WIN_BLACKMAN_hARRIS)
tpc = math.ceil(float(len(self._taps)) / float(self._M))
print "Number of taps: ", len(self._taps)
print "Number of channels: ", self._M
print "Taps per channel: ", tpc
self.pfb = filter.pfb.channelizer_ccf(self._M, self._taps)
self.connect(self.channel, self.pfb)
        # Create a detector, squelch and vector sink for each of the M output channels of the filter and connect them
self.fmdet = list()
self.squelch = list()
self.snks = list()
for i in xrange(self._M):
self.fmdet.append(analog.nbfm_rx(self._audio_rate, self._chan_rate))
self.squelch.append(analog.standard_squelch(self._audio_rate*10))
self.snks.append(blocks.vector_sink_f())
self.connect((self.pfb, i), self.fmdet[i], self.squelch[i], self.snks[i])
def num_tx_channels(self):
return self._N
def num_rx_channels(self):
return self._M
def main():
fm = fmtest()
tstart = time.time()
fm.run()
tend = time.time()
if 1:
fig1 = pylab.figure(1, figsize=(12,10), facecolor="w")
fig2 = pylab.figure(2, figsize=(12,10), facecolor="w")
fig3 = pylab.figure(3, figsize=(12,10), facecolor="w")
Ns = 10000
Ne = 100000
fftlen = 8192
winfunc = scipy.blackman
# Plot transmitted signal
fs = fm._if_rate
d = fm.snk_tx.data()[Ns:Ns+Ne]
sp1_f = fig1.add_subplot(2, 1, 1)
X,freq = sp1_f.psd(d, NFFT=fftlen, noverlap=fftlen/4, Fs=fs,
window = lambda d: d*winfunc(fftlen),
visible=False)
X_in = 10.0*scipy.log10(abs(fftpack.fftshift(X)))
f_in = scipy.arange(-fs/2.0, fs/2.0, fs/float(X_in.size))
p1_f = sp1_f.plot(f_in, X_in, "b")
sp1_f.set_xlim([min(f_in), max(f_in)+1])
sp1_f.set_ylim([-120.0, 20.0])
sp1_f.set_title("Input Signal", weight="bold")
sp1_f.set_xlabel("Frequency (Hz)")
sp1_f.set_ylabel("Power (dBW)")
Ts = 1.0/fs
Tmax = len(d)*Ts
t_in = scipy.arange(0, Tmax, Ts)
x_in = scipy.array(d)
sp1_t = fig1.add_subplot(2, 1, 2)
p1_t = sp1_t.plot(t_in, x_in.real, "b-o")
#p1_t = sp1_t.plot(t_in, x_in.imag, "r-o")
sp1_t.set_ylim([-5, 5])
# Set up the number of rows and columns for plotting the subfigures
Ncols = int(scipy.floor(scipy.sqrt(fm.num_rx_channels())))
Nrows = int(scipy.floor(fm.num_rx_channels() / Ncols))
if(fm.num_rx_channels() % Ncols != 0):
Nrows += 1
        # Plot each of the channels' outputs. Frequencies on Figure 2 and
        # time signals on Figure 3
fs_o = fm._audio_rate
for i in xrange(len(fm.snks)):
# remove issues with the transients at the beginning
# also remove some corruption at the end of the stream
# this is a bug, probably due to the corner cases
d = fm.snks[i].data()[Ns:Ne]
sp2_f = fig2.add_subplot(Nrows, Ncols, 1+i)
X,freq = sp2_f.psd(d, NFFT=fftlen, noverlap=fftlen/4, Fs=fs_o,
window = lambda d: d*winfunc(fftlen),
visible=False)
#X_o = 10.0*scipy.log10(abs(fftpack.fftshift(X)))
X_o = 10.0*scipy.log10(abs(X))
#f_o = scipy.arange(-fs_o/2.0, fs_o/2.0, fs_o/float(X_o.size))
f_o = scipy.arange(0, fs_o/2.0, fs_o/2.0/float(X_o.size))
p2_f = sp2_f.plot(f_o, X_o, "b")
sp2_f.set_xlim([min(f_o), max(f_o)+0.1])
sp2_f.set_ylim([-120.0, 20.0])
sp2_f.grid(True)
sp2_f.set_title(("Channel %d" % i), weight="bold")
sp2_f.set_xlabel("Frequency (kHz)")
sp2_f.set_ylabel("Power (dBW)")
Ts = 1.0/fs_o
Tmax = len(d)*Ts
t_o = scipy.arange(0, Tmax, Ts)
x_t = scipy.array(d)
sp2_t = fig3.add_subplot(Nrows, Ncols, 1+i)
p2_t = sp2_t.plot(t_o, x_t.real, "b")
p2_t = sp2_t.plot(t_o, x_t.imag, "r")
sp2_t.set_xlim([min(t_o), max(t_o)+1])
sp2_t.set_ylim([-1, 1])
sp2_t.set_xlabel("Time (s)")
sp2_t.set_ylabel("Amplitude")
pylab.show()
if __name__ == "__main__":
main()
| gpl-3.0 |
KristoferHellman/gimli | python/pygimli/physics/traveltime/fastMarchingTest.py | 1 | 6186 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Solve the particular Hamilton-Jacobi (HJ) equation, known as the Eikonal
equation :cite:`SunFomel2000`
.. math::
|\grad u(x)| & = f(x) \\\\
|\grad t| & = s \\\\
(\grad t)^2 & = s^2
.. error::
check equation and reference!!!
where :math:`t` denotes the traveltime for a spatially distributed slowness :math:`s`
In the special case when :math:`f(x) = 1`, the solution gives the signed
distance from the boundary
"""
import numpy as np
import time
import pygimli as pg
import matplotlib.pyplot as plt
from pygimli.mplviewer import drawMesh, drawField
import heapq
def findSlowness(edge):
if edge.leftCell() is None:
slowness = edge.rightCell().attribute()
elif edge.rightCell() is None:
slowness = edge.leftCell().attribute()
else:
slowness = min(edge.leftCell().attribute(),
edge.rightCell().attribute())
return slowness
# def findSlowness(...)
def fastMarch(mesh, downwind, times, upTags, downTags):
upCandidate = []
for node in downwind:
neighNodes = pg.commonNodes(node.cellSet())
upNodes = []
for n in neighNodes:
if upTags[n.id()]:
upNodes.append(n)
if len(upNodes) == 1:
# this is the dijkstra case
edge = pg.findBoundary(upNodes[0], node)
tt = times[upNodes[0].id()] + \
findSlowness(edge) * edge.shape().domainSize()
heapq.heappush(upCandidate, (tt, node))
else:
cells = node.cellSet()
for c in cells:
for i in range(c.nodeCount()):
edge = pg.findBoundary(c.node(i), c.node((i + 1) % 3))
a = edge.node(0)
b = edge.node(1)
ta = times[a.id()]
tb = times[b.id()]
if upTags[a.id()] and upTags[b.id()]:
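                        # Local Eikonal update: the candidate arrival time at `node` is
                        # the smallest of (i) travelling via node a, (ii) via node b, or
                        # (iii) via the closest point on edge (a, b), each weighted by
                        # the appropriate slowness (edge or cell attribute).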
line = pg.Line(a.pos(), b.pos())
t = min(1., max(0., line.nearest(node.pos())))
ea = pg.findBoundary(a, node)
eb = pg.findBoundary(b, node)
if t == 0:
slowness = findSlowness(ea)
elif t == 1:
slowness = findSlowness(eb)
else:
slowness = c.attribute()
ttimeA = (ta + slowness * a.pos().distance(node.pos()))
ttimeQ = (ta + t * (tb - ta)) + \
slowness * line(t).distance(node.pos())
ttimeB = (tb + slowness * b.pos().distance(node.pos()))
heapq.heappush(upCandidate,
(min(ttimeA, ttimeQ, ttimeB), node))
# for c in upCandidate:
# print c[1].id(), c[0]
candidate = heapq.heappop(upCandidate)
# print candidate
newUpNode = candidate[1]
times[newUpNode.id()] = candidate[0]
upTags[newUpNode.id()] = 1
# print newUpNode
downwind.remove(newUpNode)
newDownNodes = pg.commonNodes(newUpNode.cellSet())
for nn in newDownNodes:
if not upTags[nn.id()] and not downTags[nn.id()]:
downwind.add(nn)
downTags[nn.id()] = 1
# def fastMarch(...)
if __name__ == '__main__':
mesh = pg.Mesh('mesh/test2d')
mesh.createNeighbourInfos()
print(mesh)
source = pg.RVector3(-80, 0.)
times = pg.RVector(mesh.nodeCount(), 0.)
for c in mesh.cells():
if c.marker() == 1:
c.setAttribute(1.)
elif c.marker() == 2:
c.setAttribute(0.5)
# c.setAttribute(abs(1./c.center()[1]))
    fig, a = plt.subplots()
anaTimes = pg.RVector(mesh.nodeCount(), 0.0)
for n in mesh.nodes():
anaTimes[n.id()] = source.distance(n.pos())
#d = pg.DataContainer()
#dijk = pg.TravelTimeDijkstraModelling(mesh, d)
upwind = set()
downwind = set()
upTags = np.zeros(mesh.nodeCount())
downTags = np.zeros(mesh.nodeCount())
# define initial condition
cell = mesh.findCell(source)
for i, n in enumerate(cell.nodes()):
times[n.id()] = cell.attribute() * n.pos().distance(source)
upTags[n.id()] = 1
for i, n in enumerate(cell.nodes()):
tmpNodes = pg.commonNodes(n.cellSet())
for nn in tmpNodes:
if not upTags[nn.id()] and not downTags[nn.id()]:
downwind.add(nn)
downTags[nn.id()] = 1
# start fast marching
tic = time.time()
while len(downwind) > 0:
model = pg.RVector(mesh.cellCount(), 0.0)
#for c in upwind: model.setVal(2, c.id())
#for c in downwind: model.setVal(1, c.id())
# drawMesh(a, mesh)
#a.plot(source[0], source[1], 'x')
# for c in mesh.cells():
#a.text(c.center()[0],c.center()[1], str(c.id()))
# for n in mesh.nodes():
# if upTags[n.id()]:
#a.plot(n.pos()[0], n.pos()[1], 'o', color='black')
#mindist = 9e99
# for n in downwind:
#a.plot(n.pos()[0], n.pos()[1], 'o', color='white')
# if n.pos().distance(source) < mindist:
#mindist = n.pos().distance(source)
#nextPredict = n
# print "next predicted ", n.id(), mindist
#a.plot(nextPredict.pos()[0], nextPredict.pos()[1], 'o', color='green')
#drawField(a, mesh, times)
###drawField(a, mesh, anaTimes, colors = 'white')
# a.figure.canvas.draw()
# a.figure.show()
##raw_input('Press Enter...')
# a.clear()
fastMarch(mesh, downwind, times, upTags, downTags)
print(time.time() - tic, "s")
drawMesh(a, mesh)
drawField(a, mesh, times, filled=True)
# drawStreamCircular(a, mesh, times, source, 30.,
# nLines = 50, step = 0.1, showStartPos = True)
#ax1.streamplot(X, Y, U, V, density=[0.5, 1])
# drawStreamLinear(a, mesh, times,
#pg.RVector3(-100., -10.0),
#pg.RVector3(100., -10.0),
# nLines = 50, step = 0.01,
# showStartPos = True)
plt.show()
| gpl-3.0 |
friedrichromstedt/matplotlayers | matplotlayers/layers/layer_pcolor.py | 1 | 3079 | # Copyright (c) 2008, 2009, 2010 Friedrich Romstedt
# <www.friedrichromstedt.org>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
# Last changed: 2010 Mar 13
# Developed since: Jul 2008
# File version: 0.1.0b
import matplotlayers.layer
import keyconf
class LayerPColor(matplotlayers.layer.Layer):
def __init__(self, **kwargs):
"""CMAP may be an abbreviation as defined by matplotlib, it defaults
to 'gray'.
The layer is specified empty, if X, Y, or C are not specified or
None."""
kwargs.setdefault('cmap', 'gray')
matplotlayers.layer.Layer.__init__(self)
# Unfortunately, mpl pcolor needs X, Y, C positional.
# So we have to hide X, Y and C config away.
self._XYC = keyconf.Configuration()
self.add_components(XYC = self._XYC)
self.set_aliases(X = 'XYC_X', Y = 'XYC_Y', C = 'XYC_C')
        # We forward non-pcolor arguments to a dedicated Configuration,
        # because they should not show up in the call to pcolor().
self._explicits = keyconf.Configuration()
self.add_components(explicits=self._explicits)
self.set_aliases(layer_colorbar='explicits_layer_colorbar')
self.configure(**kwargs)
def to_axes(self, axes):
"""Plot the data to matplotlib.axes.Axes instance AXES."""
# Skip plotting if not all data is specified ...
if not self.is_configured('X') or \
not self.is_configured('Y') or \
not self.is_configured('C'):
return
# Plot ...
# Unfortunately, mpl pcolor needs X, Y, C positional.
X = self['X']
Y = self['Y']
C = self['C']
# X, Y, C are actually stored in ._XYC.
mappable = axes.pcolor(X, Y, C, **self)
# Notify also the LayerColorbar of the new mappable.
if self.is_configured('layer_colorbar'):
self['layer_colorbar'].set_mappable(mappable)
| mit |
benjaminoh1/tensorflowcookbook | Chapter 06/using_a_multiple_layer_network.py | 1 | 6328 | # Using a Multiple Layer Network
#---------------------------------------
#
# We will illustrate how to use a Multiple
# Layer Network in Tensorflow
#
# Low Birthrate data:
#
#Columns Variable Abbreviation
#-----------------------------------------------------------------------------
# Identification Code ID
# Low Birth Weight (0 = Birth Weight >= 2500g, LOW
# 1 = Birth Weight < 2500g)
# Age of the Mother in Years AGE
# Weight in Pounds at the Last Menstrual Period LWT
# Race (1 = White, 2 = Black, 3 = Other) RACE
# Smoking Status During Pregnancy (1 = Yes, 0 = No) SMOKE
# History of Premature Labor (0 = None 1 = One, etc.) PTL
# History of Hypertension (1 = Yes, 0 = No) HT
# Presence of Uterine Irritability (1 = Yes, 0 = No) UI
# Number of Physician Visits During the First Trimester FTV
# (0 = None, 1 = One, 2 = Two, etc.)
# Birth Weight in Grams BWT
#------------------------------
# The multiple-layer network we will create is composed of
# three fully connected hidden layers, with node sizes 25, 10, and 3
import tensorflow as tf
import matplotlib.pyplot as plt
import requests
import numpy as np
from tensorflow.python.framework import ops
ops.reset_default_graph()
# Set Seed
seed = 3
tf.set_random_seed(seed)
np.random.seed(seed)
birthdata_url = 'https://www.umass.edu/statdata/statdata/data/lowbwt.dat'
birth_file = requests.get(birthdata_url)
birth_data = birth_file.text.split('\r\n')[5:]
birth_header = [x for x in birth_data[0].split(' ') if len(x)>=1]
birth_data = [[float(x) for x in y.split(' ') if len(x)>=1] for y in birth_data[1:] if len(y)>=1]
batch_size = 100
# Extract y-target (birth weight)
y_vals = np.array([x[10] for x in birth_data])
# Filter for features of interest
cols_of_interest = ['AGE', 'LWT', 'RACE', 'SMOKE', 'PTL', 'HT', 'UI', 'FTV']
x_vals = np.array([[x[ix] for ix, feature in enumerate(birth_header) if feature in cols_of_interest] for x in birth_data])
# Create graph session
sess = tf.Session()
# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
# Normalize by column (min-max norm to be between 0 and 1)
def normalize_cols(m):
col_max = m.max(axis=0)
col_min = m.min(axis=0)
return (m-col_min) / (col_max - col_min)
x_vals_train = np.nan_to_num(normalize_cols(x_vals_train))
x_vals_test = np.nan_to_num(normalize_cols(x_vals_test))
# Define Variable Functions (weights and bias)
def init_weight(shape, st_dev):
weight = tf.Variable(tf.random_normal(shape, stddev=st_dev))
return(weight)
def init_bias(shape, st_dev):
bias = tf.Variable(tf.random_normal(shape, stddev=st_dev))
return(bias)
# Create Placeholders
x_data = tf.placeholder(shape=[None, 8], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# Create a fully connected layer:
def fully_connected(input_layer, weights, biases):
layer = tf.add(tf.matmul(input_layer, weights), biases)
return(tf.nn.relu(layer))
#--------Create the first layer (25 hidden nodes)--------
weight_1 = init_weight(shape=[8, 25], st_dev=10.0)
bias_1 = init_bias(shape=[25], st_dev=10.0)
layer_1 = fully_connected(x_data, weight_1, bias_1)
#--------Create second layer (10 hidden nodes)--------
weight_2 = init_weight(shape=[25, 10], st_dev=10.0)
bias_2 = init_bias(shape=[10], st_dev=10.0)
layer_2 = fully_connected(layer_1, weight_2, bias_2)
#--------Create third layer (3 hidden nodes)--------
weight_3 = init_weight(shape=[10, 3], st_dev=10.0)
bias_3 = init_bias(shape=[3], st_dev=10.0)
layer_3 = fully_connected(layer_2, weight_3, bias_3)
#--------Create output layer (1 output value)--------
weight_4 = init_weight(shape=[3, 1], st_dev=10.0)
bias_4 = init_bias(shape=[1], st_dev=10.0)
final_output = fully_connected(layer_3, weight_4, bias_4)
# Declare loss function (L1)
loss = tf.reduce_mean(tf.abs(y_target - final_output))
# Declare optimizer
my_opt = tf.train.AdamOptimizer(0.05)
train_step = my_opt.minimize(loss)
# Initialize Variables
init = tf.initialize_all_variables()
sess.run(init)
# Training loop
loss_vec = []
test_loss = []
for i in range(200):
rand_index = np.random.choice(len(x_vals_train), size=batch_size)
rand_x = x_vals_train[rand_index]
rand_y = np.transpose([y_vals_train[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss)
test_temp_loss = sess.run(loss, feed_dict={x_data: x_vals_test, y_target: np.transpose([y_vals_test])})
test_loss.append(test_temp_loss)
if (i+1)%25==0:
print('Generation: ' + str(i+1) + '. Loss = ' + str(temp_loss))
# Plot loss over time
plt.plot(loss_vec, 'k-', label='Train Loss')
plt.plot(test_loss, 'r--', label='Test Loss')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.legend(loc="upper right")
plt.show()
# Find the % classified correctly above/below the cutoff of 2500 g
# >= 2500 g = 0
# < 2500 g = 1
actuals = np.array([x[1] for x in birth_data])
test_actuals = actuals[test_indices]
train_actuals = actuals[train_indices]
test_preds = [x[0] for x in sess.run(final_output, feed_dict={x_data: x_vals_test})]
train_preds = [x[0] for x in sess.run(final_output, feed_dict={x_data: x_vals_train})]
test_preds = np.array([1.0 if x<2500.0 else 0.0 for x in test_preds])
train_preds = np.array([1.0 if x<2500.0 else 0.0 for x in train_preds])
# Print out accuracies
test_acc = np.mean([x==y for x,y in zip(test_preds, test_actuals)])
train_acc = np.mean([x==y for x,y in zip(train_preds, train_actuals)])
print('On predicting the category of low birthweight from regression output (<2500g):')
print('Test Accuracy: {}'.format(test_acc))
print('Train Accuracy: {}'.format(train_acc)) | mit |
jjunell/paparazzi | sw/airborne/test/ahrs/ahrs_utils.py | 86 | 4923 | #! /usr/bin/env python
# Copyright (C) 2011 Antoine Drouin
#
# This file is part of Paparazzi.
#
# Paparazzi is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# Paparazzi is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Paparazzi; see the file COPYING. If not, write to
# the Free Software Foundation, 59 Temple Place - Suite 330,
# Boston, MA 02111-1307, USA.
#
from __future__ import print_function
import subprocess
import numpy as np
import matplotlib.pyplot as plt
def run_simulation(ahrs_type, build_opt, traj_nb):
print("\nBuilding ahrs")
args = ["make", "clean", "run_ahrs_on_synth", "AHRS_TYPE=AHRS_TYPE_" + ahrs_type] + build_opt
#print(args)
p = subprocess.Popen(args=args, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=False)
outputlines = p.stdout.readlines()
p.wait()
for i in outputlines:
print(" # " + i, end=' ')
print()
print("Running simulation")
print(" using traj " + str(traj_nb))
p = subprocess.Popen(args=["./run_ahrs_on_synth", str(traj_nb)], stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
shell=False)
outputlines = p.stdout.readlines()
p.wait()
# for i in outputlines:
# print(" "+i, end=' ')
# print("\n")
ahrs_data_type = [('time', 'float32'),
('phi_true', 'float32'), ('theta_true', 'float32'), ('psi_true', 'float32'),
('p_true', 'float32'), ('q_true', 'float32'), ('r_true', 'float32'),
('bp_true', 'float32'), ('bq_true', 'float32'), ('br_true', 'float32'),
('phi_ahrs', 'float32'), ('theta_ahrs', 'float32'), ('psi_ahrs', 'float32'),
('p_ahrs', 'float32'), ('q_ahrs', 'float32'), ('r_ahrs', 'float32'),
('bp_ahrs', 'float32'), ('bq_ahrs', 'float32'), ('br_ahrs', 'float32')]
mydescr = np.dtype(ahrs_data_type)
data = [[] for dummy in xrange(len(mydescr))]
# import code; code.interact(local=locals())
for line in outputlines:
if line.startswith("#"):
print(" " + line, end=' ')
else:
fields = line.strip().split(' ')
#print(fields)
for i, number in enumerate(fields):
data[i].append(number)
print()
for i in xrange(len(mydescr)):
data[i] = np.cast[mydescr[i]](data[i])
return np.rec.array(data, dtype=mydescr)
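# Typical use (sketch; the AHRS type string and build options are placeholders
# that depend on the Paparazzi build):
#   sim = run_simulation("ICQ", [], traj_nb=1)
#   plot_simulation_results(True, "b-", "icq", sim)
#   show_plot()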
def plot_simulation_results(plot_true_state, lsty, label, sim_res):
print("Plotting Results")
# f, (ax1, ax2, ax3) = plt.subplots(3, sharex=True, sharey=True)
plt.subplot(3, 3, 1)
plt.plot(sim_res.time, sim_res.phi_ahrs, lsty, label=label)
    plt.ylabel('degrees')
plt.title('phi')
plt.legend()
plt.subplot(3, 3, 2)
plt.plot(sim_res.time, sim_res.theta_ahrs, lsty)
plt.title('theta')
plt.subplot(3, 3, 3)
plt.plot(sim_res.time, sim_res.psi_ahrs, lsty)
plt.title('psi')
plt.subplot(3, 3, 4)
plt.plot(sim_res.time, sim_res.p_ahrs, lsty)
    plt.ylabel('degrees/s')
plt.title('p')
plt.subplot(3, 3, 5)
plt.plot(sim_res.time, sim_res.q_ahrs, lsty)
plt.title('q')
plt.subplot(3, 3, 6)
plt.plot(sim_res.time, sim_res.r_ahrs, lsty)
plt.title('r')
plt.subplot(3, 3, 7)
plt.plot(sim_res.time, sim_res.bp_ahrs, lsty)
    plt.ylabel('degrees/s')
plt.xlabel('time in s')
plt.title('bp')
plt.subplot(3, 3, 8)
plt.plot(sim_res.time, sim_res.bq_ahrs, lsty)
plt.xlabel('time in s')
plt.title('bq')
plt.subplot(3, 3, 9)
plt.plot(sim_res.time, sim_res.br_ahrs, lsty)
plt.xlabel('time in s')
plt.title('br')
if plot_true_state:
plt.subplot(3, 3, 1)
plt.plot(sim_res.time, sim_res.phi_true, 'r--')
plt.subplot(3, 3, 2)
plt.plot(sim_res.time, sim_res.theta_true, 'r--')
plt.subplot(3, 3, 3)
plt.plot(sim_res.time, sim_res.psi_true, 'r--')
plt.subplot(3, 3, 4)
plt.plot(sim_res.time, sim_res.p_true, 'r--')
plt.subplot(3, 3, 5)
plt.plot(sim_res.time, sim_res.q_true, 'r--')
plt.subplot(3, 3, 6)
plt.plot(sim_res.time, sim_res.r_true, 'r--')
plt.subplot(3, 3, 7)
plt.plot(sim_res.time, sim_res.bp_true, 'r--')
plt.subplot(3, 3, 8)
plt.plot(sim_res.time, sim_res.bq_true, 'r--')
plt.subplot(3, 3, 9)
plt.plot(sim_res.time, sim_res.br_true, 'r--')
def show_plot():
plt.show()
| gpl-2.0 |
Dannnno/odo | odo/odo.py | 3 | 3472 | from .into import into
def odo(source, target, **kwargs):
""" Push one dataset into another
Parameters
----------
source: object or string
The source of your data. Either an object (e.g. DataFrame),
target: object or string or type
The target for where you want your data to go.
Either an object, (e.g. []), a type, (e.g. list)
        or a string (e.g. 'postgresql://hostname::tablename')
raise_on_errors: bool (optional, defaults to False)
Raise exceptions rather than reroute around them
**kwargs:
keyword arguments to pass through to conversion functions.
Optional Keyword Arguments
--------------------------
Odo passes keyword arguments (like ``sep=';'``) down to the functions
that it uses to perform conversions (like ``pandas.read_csv``). Due to the
quantity of possible optional keyword arguments we can not list them here.
See the following documentation for your format
* AWS - http://odo.pydata.org/en/latest/aws.html
* CSV - http://odo.pydata.org/en/latest/csv.html
* JSON - http://odo.pydata.org/en/latest/json.html
* HDF5 - http://odo.pydata.org/en/latest/hdf5.html
* HDFS - http://odo.pydata.org/en/latest/hdfs.html
* Hive - http://odo.pydata.org/en/latest/hive.html
* SAS - http://odo.pydata.org/en/latest/sas.html
* SQL - http://odo.pydata.org/en/latest/sql.html
* SSH - http://odo.pydata.org/en/latest/ssh.html
* Mongo - http://odo.pydata.org/en/latest/mongo.html
* Spark - http://odo.pydata.org/en/latest/spark.html
Examples
--------
>>> L = odo((1, 2, 3), list) # Convert things into new things
>>> L
[1, 2, 3]
>>> _ = odo((4, 5, 6), L) # Append things onto existing things
>>> L
[1, 2, 3, 4, 5, 6]
>>> odo([('Alice', 1), ('Bob', 2)], 'myfile.csv') # doctest: +SKIP
Explanation
-----------
We can specify data with a Python object like a ``list``, ``DataFrame``,
``sqlalchemy.Table``, ``h5py.Dataset``, etc..
We can specify data with a string URI like ``'myfile.csv'``,
``'myfiles.*.json'`` or ``'sqlite:///data.db::tablename'``. These are
matched by regular expression. See the ``resource`` function for more
details on string URIs.
We can optionally specify datatypes with the ``dshape=`` keyword, providing
a datashape. This allows us to be explicit about types when mismatches
occur or when our data doesn't hold the whole picture. See the
``discover`` function for more information on ``dshape``.
>>> ds = 'var * {name: string, balance: float64}'
    >>> odo([('Alice', 100), ('Bob', 200)], 'accounts.json', dshape=ds)  # doctest: +SKIP
We can optionally specify keyword arguments to pass down to relevant
conversion functions. For example, when converting a CSV file we might
want to specify delimiter
>>> odo('accounts.csv', list, has_header=True, delimiter=';') # doctest: +SKIP
    These keyword arguments trickle down to whatever function ``into`` uses to
    convert this particular format, e.g. functions like ``pandas.read_csv``.
See Also
--------
odo.resource.resource - Specify things with strings
datashape.discover - Get datashape of data
odo.convert.convert - Convert things into new things
odo.append.append - Add things onto existing things
"""
return into(target, source, **kwargs)
| bsd-3-clause |
trankmichael/scikit-learn | sklearn/metrics/cluster/tests/test_supervised.py | 206 | 7643 | import numpy as np
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.cluster import homogeneity_score
from sklearn.metrics.cluster import completeness_score
from sklearn.metrics.cluster import v_measure_score
from sklearn.metrics.cluster import homogeneity_completeness_v_measure
from sklearn.metrics.cluster import adjusted_mutual_info_score
from sklearn.metrics.cluster import normalized_mutual_info_score
from sklearn.metrics.cluster import mutual_info_score
from sklearn.metrics.cluster import expected_mutual_information
from sklearn.metrics.cluster import contingency_matrix
from sklearn.metrics.cluster import entropy
from sklearn.utils.testing import assert_raise_message
from nose.tools import assert_almost_equal
from nose.tools import assert_equal
from numpy.testing import assert_array_almost_equal
score_funcs = [
adjusted_rand_score,
homogeneity_score,
completeness_score,
v_measure_score,
adjusted_mutual_info_score,
normalized_mutual_info_score,
]
def test_error_messages_on_wrong_input():
for score_func in score_funcs:
expected = ('labels_true and labels_pred must have same size,'
' got 2 and 3')
assert_raise_message(ValueError, expected, score_func,
[0, 1], [1, 1, 1])
expected = "labels_true must be 1D: shape is (2"
assert_raise_message(ValueError, expected, score_func,
[[0, 1], [1, 0]], [1, 1, 1])
expected = "labels_pred must be 1D: shape is (2"
assert_raise_message(ValueError, expected, score_func,
[0, 1, 0], [[1, 1], [0, 0]])
def test_perfect_matches():
for score_func in score_funcs:
assert_equal(score_func([], []), 1.0)
assert_equal(score_func([0], [1]), 1.0)
assert_equal(score_func([0, 0, 0], [0, 0, 0]), 1.0)
assert_equal(score_func([0, 1, 0], [42, 7, 42]), 1.0)
assert_equal(score_func([0., 1., 0.], [42., 7., 42.]), 1.0)
assert_equal(score_func([0., 1., 2.], [42., 7., 2.]), 1.0)
assert_equal(score_func([0, 1, 2], [42, 7, 2]), 1.0)
def test_homogeneous_but_not_complete_labeling():
# homogeneous but not complete clustering
h, c, v = homogeneity_completeness_v_measure(
[0, 0, 0, 1, 1, 1],
[0, 0, 0, 1, 2, 2])
assert_almost_equal(h, 1.00, 2)
assert_almost_equal(c, 0.69, 2)
assert_almost_equal(v, 0.81, 2)
def test_complete_but_not_homogeneous_labeling():
# complete but not homogeneous clustering
h, c, v = homogeneity_completeness_v_measure(
[0, 0, 1, 1, 2, 2],
[0, 0, 1, 1, 1, 1])
assert_almost_equal(h, 0.58, 2)
assert_almost_equal(c, 1.00, 2)
assert_almost_equal(v, 0.73, 2)
def test_not_complete_and_not_homogeneous_labeling():
# neither complete nor homogeneous but not so bad either
h, c, v = homogeneity_completeness_v_measure(
[0, 0, 0, 1, 1, 1],
[0, 1, 0, 1, 2, 2])
assert_almost_equal(h, 0.67, 2)
assert_almost_equal(c, 0.42, 2)
assert_almost_equal(v, 0.52, 2)
def test_non_consicutive_labels():
# regression tests for labels with gaps
h, c, v = homogeneity_completeness_v_measure(
[0, 0, 0, 2, 2, 2],
[0, 1, 0, 1, 2, 2])
assert_almost_equal(h, 0.67, 2)
assert_almost_equal(c, 0.42, 2)
assert_almost_equal(v, 0.52, 2)
h, c, v = homogeneity_completeness_v_measure(
[0, 0, 0, 1, 1, 1],
[0, 4, 0, 4, 2, 2])
assert_almost_equal(h, 0.67, 2)
assert_almost_equal(c, 0.42, 2)
assert_almost_equal(v, 0.52, 2)
ari_1 = adjusted_rand_score([0, 0, 0, 1, 1, 1], [0, 1, 0, 1, 2, 2])
ari_2 = adjusted_rand_score([0, 0, 0, 1, 1, 1], [0, 4, 0, 4, 2, 2])
assert_almost_equal(ari_1, 0.24, 2)
assert_almost_equal(ari_2, 0.24, 2)
def uniform_labelings_scores(score_func, n_samples, k_range, n_runs=10,
seed=42):
# Compute score for random uniform cluster labelings
random_labels = np.random.RandomState(seed).random_integers
scores = np.zeros((len(k_range), n_runs))
for i, k in enumerate(k_range):
for j in range(n_runs):
labels_a = random_labels(low=0, high=k - 1, size=n_samples)
labels_b = random_labels(low=0, high=k - 1, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
def test_adjustment_for_chance():
# Check that adjusted scores are almost zero on random labels
n_clusters_range = [2, 10, 50, 90]
n_samples = 100
n_runs = 10
scores = uniform_labelings_scores(
adjusted_rand_score, n_samples, n_clusters_range, n_runs)
max_abs_scores = np.abs(scores).max(axis=1)
assert_array_almost_equal(max_abs_scores, [0.02, 0.03, 0.03, 0.02], 2)
def test_adjusted_mutual_info_score():
# Compute the Adjusted Mutual Information and test against known values
labels_a = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])
labels_b = np.array([1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 3, 1, 3, 3, 3, 2, 2])
# Mutual information
mi = mutual_info_score(labels_a, labels_b)
assert_almost_equal(mi, 0.41022, 5)
# Expected mutual information
C = contingency_matrix(labels_a, labels_b)
n_samples = np.sum(C)
emi = expected_mutual_information(C, n_samples)
assert_almost_equal(emi, 0.15042, 5)
# Adjusted mutual information
ami = adjusted_mutual_info_score(labels_a, labels_b)
assert_almost_equal(ami, 0.27502, 5)
ami = adjusted_mutual_info_score([1, 1, 2, 2], [2, 2, 3, 3])
assert_equal(ami, 1.0)
# Test with a very large array
a110 = np.array([list(labels_a) * 110]).flatten()
b110 = np.array([list(labels_b) * 110]).flatten()
ami = adjusted_mutual_info_score(a110, b110)
# This is not accurate to more than 2 places
assert_almost_equal(ami, 0.37, 2)
def test_entropy():
ent = entropy([0, 0, 42.])
assert_almost_equal(ent, 0.6365141, 5)
assert_almost_equal(entropy([]), 1)
def test_contingency_matrix():
labels_a = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3])
labels_b = np.array([1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 3, 1, 3, 3, 3, 2, 2])
C = contingency_matrix(labels_a, labels_b)
C2 = np.histogram2d(labels_a, labels_b,
bins=(np.arange(1, 5),
np.arange(1, 5)))[0]
assert_array_almost_equal(C, C2)
C = contingency_matrix(labels_a, labels_b, eps=.1)
assert_array_almost_equal(C, C2 + .1)
def test_exactly_zero_info_score():
# Check numerical stability when information is exactly zero
for i in np.logspace(1, 4, 4).astype(np.int):
labels_a, labels_b = np.ones(i, dtype=np.int),\
np.arange(i, dtype=np.int)
assert_equal(normalized_mutual_info_score(labels_a, labels_b), 0.0)
assert_equal(v_measure_score(labels_a, labels_b), 0.0)
assert_equal(adjusted_mutual_info_score(labels_a, labels_b), 0.0)
assert_equal(normalized_mutual_info_score(labels_a, labels_b), 0.0)
def test_v_measure_and_mutual_information(seed=36):
# Check relation between v_measure, entropy and mutual information
for i in np.logspace(1, 4, 4).astype(np.int):
random_state = np.random.RandomState(seed)
labels_a, labels_b = random_state.random_integers(0, 10, i),\
random_state.random_integers(0, 10, i)
assert_almost_equal(v_measure_score(labels_a, labels_b),
2.0 * mutual_info_score(labels_a, labels_b) /
(entropy(labels_a) + entropy(labels_b)), 0)
| bsd-3-clause |
JT5D/scikit-learn | sklearn/manifold/isomap.py | 12 | 7145 | """Isomap for manifold learning"""
# Author: Jake Vanderplas -- <[email protected]>
# License: BSD 3 clause (C) 2011
import numpy as np
from ..base import BaseEstimator, TransformerMixin
from ..neighbors import NearestNeighbors, kneighbors_graph
from ..utils import check_arrays
from ..utils.graph import graph_shortest_path
from ..decomposition import KernelPCA
from ..preprocessing import KernelCenterer
class Isomap(BaseEstimator, TransformerMixin):
"""Isomap Embedding
Non-linear dimensionality reduction through Isometric Mapping
Parameters
----------
n_neighbors : integer
number of neighbors to consider for each point.
n_components : integer
number of coordinates for the manifold
eigen_solver : ['auto'|'arpack'|'dense']
'auto' : Attempt to choose the most efficient solver
for the given problem.
'arpack' : Use Arnoldi decomposition to find the eigenvalues
and eigenvectors.
'dense' : Use a direct solver (i.e. LAPACK)
for the eigenvalue decomposition.
tol : float
Convergence tolerance passed to arpack or lobpcg.
not used if eigen_solver == 'dense'.
max_iter : integer
Maximum number of iterations for the arpack solver.
not used if eigen_solver == 'dense'.
path_method : string ['auto'|'FW'|'D']
Method to use in finding shortest path.
'auto' : attempt to choose the best algorithm automatically
'FW' : Floyd-Warshall algorithm
'D' : Dijkstra algorithm with Fibonacci Heaps
neighbors_algorithm : string ['auto'|'brute'|'kd_tree'|'ball_tree']
Algorithm to use for nearest neighbors search,
passed to neighbors.NearestNeighbors instance.
Attributes
----------
`embedding_` : array-like, shape (n_samples, n_components)
Stores the embedding vectors.
`kernel_pca_` : object
`KernelPCA` object used to implement the embedding.
`training_data_` : array-like, shape (n_samples, n_features)
Stores the training data.
`nbrs_` : sklearn.neighbors.NearestNeighbors instance
Stores nearest neighbors instance, including BallTree or KDtree
if applicable.
`dist_matrix_` : array-like, shape (n_samples, n_samples)
Stores the geodesic distance matrix of training data.
References
----------
[1] Tenenbaum, J.B.; De Silva, V.; & Langford, J.C. A global geometric
framework for nonlinear dimensionality reduction. Science 290 (5500)
"""
def __init__(self, n_neighbors=5, n_components=2, eigen_solver='auto',
tol=0, max_iter=None, path_method='auto',
neighbors_algorithm='auto'):
self.n_neighbors = n_neighbors
self.n_components = n_components
self.eigen_solver = eigen_solver
self.tol = tol
self.max_iter = max_iter
self.path_method = path_method
self.neighbors_algorithm = neighbors_algorithm
self.nbrs_ = NearestNeighbors(n_neighbors=n_neighbors,
algorithm=neighbors_algorithm)
def _fit_transform(self, X):
X, = check_arrays(X, sparse_format='dense')
self.nbrs_.fit(X)
self.training_data_ = self.nbrs_._fit_X
self.kernel_pca_ = KernelPCA(n_components=self.n_components,
kernel="precomputed",
eigen_solver=self.eigen_solver,
tol=self.tol, max_iter=self.max_iter)
kng = kneighbors_graph(self.nbrs_, self.n_neighbors,
mode='distance')
self.dist_matrix_ = graph_shortest_path(kng,
method=self.path_method,
directed=False)
G = self.dist_matrix_ ** 2
G *= -0.5
self.embedding_ = self.kernel_pca_.fit_transform(G)
def reconstruction_error(self):
"""Compute the reconstruction error for the embedding.
Returns
-------
reconstruction_error : float
Notes
        -----
The cost function of an isomap embedding is
``E = frobenius_norm[K(D) - K(D_fit)] / n_samples``
Where D is the matrix of distances for the input data X,
D_fit is the matrix of distances for the output embedding X_fit,
and K is the isomap kernel:
``K(D) = -0.5 * (I - 1/n_samples) * D^2 * (I - 1/n_samples)``
"""
G = -0.5 * self.dist_matrix_ ** 2
G_center = KernelCenterer().fit_transform(G)
evals = self.kernel_pca_.lambdas_
return np.sqrt(np.sum(G_center ** 2) - np.sum(evals ** 2)) / G.shape[0]
def fit(self, X, y=None):
"""Compute the embedding vectors for data X
Parameters
----------
X : {array-like, sparse matrix, BallTree, KDTree, NearestNeighbors}
Sample data, shape = (n_samples, n_features), in the form of a
numpy array, precomputed tree, or NearestNeighbors
object.
Returns
-------
self : returns an instance of self.
"""
self._fit_transform(X)
return self
def fit_transform(self, X, y=None):
"""Fit the model from data in X and transform X.
Parameters
----------
X: {array-like, sparse matrix, BallTree, KDTree}
Training vector, where n_samples in the number of samples
and n_features is the number of features.
Returns
-------
X_new: array-like, shape (n_samples, n_components)
"""
self._fit_transform(X)
return self.embedding_
def transform(self, X):
"""Transform X.
This is implemented by linking the points X into the graph of geodesic
distances of the training data. First the `n_neighbors` nearest
neighbors of X are found in the training data, and from these the
shortest geodesic distances from each point in X to each point in
the training data are computed in order to construct the kernel.
The embedding of X is the projection of this kernel onto the
embedding vectors of the training set.
Parameters
----------
X: array-like, shape (n_samples, n_features)
Returns
-------
X_new: array-like, shape (n_samples, n_components)
"""
distances, indices = self.nbrs_.kneighbors(X, return_distance=True)
#Create the graph of shortest distances from X to self.training_data_
# via the nearest neighbors of X.
#This can be done as a single array operation, but it potentially
# takes a lot of memory. To avoid that, use a loop:
G_X = np.zeros((X.shape[0], self.training_data_.shape[0]))
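        # The geodesic distance from new point i to training point j is approximated
        # by the minimum over i's nearest training neighbours of
        # (d(i, neighbour) + geodesic(neighbour, j)), using the precomputed dist_matrix_.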
for i in range(X.shape[0]):
G_X[i] = np.min((self.dist_matrix_[indices[i]]
+ distances[i][:, None]), 0)
G_X **= 2
G_X *= -0.5
return self.kernel_pca_.transform(G_X)
| bsd-3-clause |
smcantab/pele | playground/plate_folding/geometric_folding.py | 5 | 10127 | from itertools import izip
import numpy as np
from pele.angleaxis import RBTopology, RBSystem, RigidFragment, RBPotentialWrapper
from pele.potentials import BasePotential
from pele.utils import rotations
from plate_potential import PlatePotential
EDGE1_TYPE = "O"
EDGE2_TYPE = "C"
EDGE3_TYPE = "N"
OTHER_TYPE = "H"
def draw_pymol(coords):
import pele.utils.pymolwrapper as pym
pym.start()
pym.draw_spheres(coords, "A", 1)
#class HarmonicPotential(BasePotential):
# def __init__(self, atoms1, atoms2):
# self.atoms1 = np.array(atoms1)
# self.atoms2 = np.array(atoms2)
#
# def getEnergy(self, x):
# e, g = self.getEnergyGradient(x)
# return e
#
# def getEnergyGradient(self, x):
# x = x.reshape([-1,3])
# grad = np.zeros(x.shape)
# etot = 0.
#
# for a1, a2 in izip(self.atoms1, self.atoms2):
# dx = x[a1,:] - x[a2,:]
# r2 = np.sum(dx**2)
# etot += 0.5 * r2
# grad[a1,:] += dx
# grad[a2,:] -= dx
# return etot, grad.ravel()
class MolAtomIndexParser(object):
"""this tool helps with getting the correct indices for the atoms on a given edge of a plate"""
def __init__(self, aatopology, nrigid):
self.aatopology = aatopology
atomtypes = self.aatopology.get_atomtypes()
self.atom_types = np.array(atomtypes).reshape([nrigid, -1])
self.atoms_per_mol = self.atom_types.shape[1]
def get_atom_indices(self, molecule_number, atom_type):
mol_atom_types = self.atom_types[molecule_number, :]
atoms = np.where(mol_atom_types == atom_type)[0]
atoms += molecule_number * self.atoms_per_mol
atoms.sort()
return list(atoms)
#class CombinePotential(BasePotential):
# def __init__(self, potentials):
# self.potentials = potentials
#
# def getEnergy(self, coords):
# e = 0
# for pot in self.potentials:
# e += pot.getEnergy(coords)
# return e
#
# def getEnergyGradient(self, coords):
# etot = 0
# grad = np.zeros(coords.size)
# for pot in self.potentials:
# e, g = pot.getEnergyGradient(coords)
# etot += e
# grad += g
# return etot, grad.ravel()
def make_triangular_plate(atoms_per_side=8):
"""construct a single triangular plate
"""
theta = 60. * np.pi / 180.
v1 = np.array([1,0,0])
v2 = np.array([0.5, np.sin(theta), 0])
aps = atoms_per_side
plate = RigidFragment()
for i in xrange(aps-1):
for j in xrange(aps-1):
if i + j >= aps-1:
break
xnew = v1*i + v2*j
if (i == 0 and j == 0 or
i == 0 and j == aps-2 or
i == aps-2 and j == 0):
atomtype = OTHER_TYPE
elif i == 0:
atomtype = EDGE1_TYPE
elif j == 0:
atomtype = EDGE2_TYPE
elif i + j == aps-2:
atomtype = EDGE3_TYPE
else:
atomtype = OTHER_TYPE
plate.add_atom(atomtype, xnew, 1)
# draw(coords)
plate.finalize_setup()
return plate
class PlateFolder(RBSystem):
"""
This will build a system class for a cluster of interacting plates
"""
def __init__(self, nmol):
self.nrigid = nmol
super(PlateFolder, self).__init__()
self.setup_params(self.params)
self._create_potential()
def get_random_configuration(self):
# js850> this is a bit sketchy because nrigid might not be defined here.
# probably we can get the number of molecules some other way.
coords = 10.*np.random.random(6*self.nrigid)
ca = self.aasystem.coords_adapter(coords)
for p in ca.rotRigid:
p[:] = rotations.random_aa()
return coords
def setup_aatopology(self):
"""this sets up the topology for the whole rigid body system"""
topology = RBTopology()
topology.add_sites([make_triangular_plate() for i in xrange(self.nrigid)])
self.render_scale = 0.2
self.atom_types = topology.get_atomtypes()
self.draw_bonds = []
# for i in xrange(self.nrigid):
# self.draw_bonds.append((3*i, 3*i+1))
# self.draw_bonds.append((3*i, 3*i+2))
topology.finalize_setup()
return topology
def setup_params(self, params):
"""set some system dependent parameters to imrprove algorithm performance"""
params.double_ended_connect.local_connect_params.tsSearchParams.iprint = 10
nebparams = params.double_ended_connect.local_connect_params.NEBparams
nebparams.max_images = 50
nebparams.image_density = 5
nebparams.iter_density = 10.
nebparams.k = 5.
nebparams.reinterpolate = 50
nebparams.NEBquenchParams["iprint"] = 10
tssearch = params.double_ended_connect.local_connect_params.tsSearchParams
tssearch.nsteps_tangent1 = 10
tssearch.nsteps_tangent2 = 30
tssearch.lowestEigenvectorQuenchParams["nsteps"] = 50
tssearch.iprint=1
tssearch.nfail_max = 100
params.takestep.translate = 5.
def _create_potential(self):
"""construct the potential which will compute the energy and gradient in atomistic (cartesian) coordinates
The bonded (hinged) sides interact with an attractive harmonic potential. Each atom
in the bond has a single interaction partner.
        The loosely attractive sides interact with an LJ potential. These interactions are
not specific. Each atom interacts with every other one.
All atoms repel each other with a WCA potential.
"""
parser = MolAtomIndexParser(self.aatopology, self.nrigid)
# this is currently only set up for a tetrahedron
assert self.nrigid == 4
# do hinges
harmonic_atoms1 = []
harmonic_atoms2 = []
harmonic_atoms1 += parser.get_atom_indices(0, EDGE1_TYPE)
harmonic_atoms2 += parser.get_atom_indices(1, EDGE1_TYPE)
harmonic_atoms1 += parser.get_atom_indices(0, EDGE2_TYPE)
harmonic_atoms2 += parser.get_atom_indices(2, EDGE1_TYPE)
harmonic_atoms1 += parser.get_atom_indices(0, EDGE3_TYPE)
harmonic_atoms2 += parser.get_atom_indices(3, EDGE1_TYPE)
self.harmonic_atoms = np.array(harmonic_atoms1 + harmonic_atoms2, np.integer)
harmonic_atoms1 = np.array(harmonic_atoms1, dtype=np.integer).ravel()
harmonic_atoms2 = np.array(harmonic_atoms2, dtype=np.integer).ravel()
for i, j in izip(harmonic_atoms1, harmonic_atoms2):
self.draw_bonds.append((i,j))
# do attractive part
lj_atoms = []
lj_atoms += parser.get_atom_indices(1, EDGE2_TYPE)
lj_atoms += parser.get_atom_indices(1, EDGE3_TYPE)
lj_atoms += parser.get_atom_indices(2, EDGE2_TYPE)
lj_atoms += parser.get_atom_indices(2, EDGE3_TYPE)
lj_atoms += parser.get_atom_indices(3, EDGE2_TYPE)
lj_atoms += parser.get_atom_indices(3, EDGE3_TYPE)
lj_atoms = np.array(sorted(lj_atoms)).ravel()
self.lj_atoms = lj_atoms
self.extra_atoms = []
for i in xrange(self.nrigid):
self.extra_atoms += parser.get_atom_indices(i, OTHER_TYPE)
plate_pot = PlatePotential(harmonic_atoms1, harmonic_atoms2, lj_atoms, k=10)
# wrap it so it can be used with angle axis coordinates
pot = RBPotentialWrapper(self.aatopology.cpp_topology, plate_pot)
# self.aasystem.set_cpp_topology(self.pot.topology)
# raise Exception
return pot
def get_potential(self):
"""construct the rigid body potential"""
try:
return self.pot
except AttributeError:
return self._create_potential()
def get_mindist(self, **kwargs):
from pele.angleaxis import MinPermDistAACluster
from pele.angleaxis import TransformAngleAxisCluster, MeasureAngleAxisCluster
transform = TransformAngleAxisCluster(self.aatopology)
measure = MeasureAngleAxisCluster(self.aatopology, transform=transform,
permlist=[])
return MinPermDistAACluster(self.aasystem, measure=measure, transform=transform,
accuracy=0.1, **kwargs)
def draw(self, rbcoords, index, shift_com=True): # pragma: no cover
from pele.systems._opengl_tools import draw_atoms, draw_cylinder
from matplotlib.colors import cnames, hex2color
coords = self.aasystem.to_atomistic(rbcoords).copy()
coords = coords.reshape([-1,3])
if shift_com:
com=np.mean(coords, axis=0)
coords -= com[np.newaxis, :]
else:
com = np.zeros(3)
radius = 0.42
red = hex2color(cnames["red"])
c2 = hex2color(cnames["grey"])
draw_atoms(coords, self.harmonic_atoms, c2, radius=radius)
draw_atoms(coords, self.lj_atoms, red, radius=radius)
draw_atoms(coords, self.extra_atoms, c2, radius=radius)
for i1, i2 in self.draw_bonds:
draw_cylinder(coords[i1,:], coords[i2,:], color=c2)
def test_bh():
np.random.seed(0)
nmol = 4
system = PlateFolder(nmol)
db = system.create_database()
bh = system.get_basinhopping(db)
bh.run(100)
m1 = db.minima()[0]
print m1.coords
for x in m1.coords:
print "%.12f," % x,
print ""
print m1.energy
def test_gui():
from pele.gui import run_gui
nmol = 4
system = PlateFolder(nmol)
db = system.create_database("tetrahedra.sqlite")
run_gui(system, db)
if __name__ == "__main__":
test_gui()
# test_bh()
| gpl-3.0 |
chrsrds/scikit-learn | sklearn/decomposition/tests/test_nmf.py | 2 | 19002 | import numpy as np
import scipy.sparse as sp
import numbers
from scipy import linalg
from sklearn.decomposition import NMF, non_negative_factorization
from sklearn.decomposition import nmf # For testing internals
from scipy.sparse import csc_matrix
import pytest
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.extmath import squared_norm
from sklearn.base import clone
from sklearn.exceptions import ConvergenceWarning
def test_initialize_nn_output():
# Test that initialization does not return negative values
rng = np.random.mtrand.RandomState(42)
data = np.abs(rng.randn(10, 10))
for init in ('random', 'nndsvd', 'nndsvda', 'nndsvdar'):
W, H = nmf._initialize_nmf(data, 10, init=init, random_state=0)
assert not ((W < 0).any() or (H < 0).any())
def test_parameter_checking():
A = np.ones((2, 2))
name = 'spam'
msg = "Invalid solver parameter: got 'spam' instead of one of"
assert_raise_message(ValueError, msg, NMF(solver=name).fit, A)
msg = "Invalid init parameter: got 'spam' instead of one of"
assert_raise_message(ValueError, msg, NMF(init=name).fit, A)
msg = "Invalid beta_loss parameter: got 'spam' instead of one"
assert_raise_message(ValueError, msg, NMF(solver='mu',
beta_loss=name).fit, A)
msg = "Invalid beta_loss parameter: solver 'cd' does not handle "
msg += "beta_loss = 1.0"
assert_raise_message(ValueError, msg, NMF(solver='cd',
beta_loss=1.0).fit, A)
msg = "Negative values in data passed to"
assert_raise_message(ValueError, msg, NMF().fit, -A)
assert_raise_message(ValueError, msg, nmf._initialize_nmf, -A,
2, 'nndsvd')
clf = NMF(2, tol=0.1).fit(A)
assert_raise_message(ValueError, msg, clf.transform, -A)
for init in ['nndsvd', 'nndsvda', 'nndsvdar']:
msg = ("init = '{}' can only be used when "
"n_components <= min(n_samples, n_features)"
.format(init))
assert_raise_message(ValueError, msg, NMF(3, init).fit, A)
assert_raise_message(ValueError, msg, nmf._initialize_nmf, A,
3, init)
def test_initialize_close():
# Test NNDSVD error
# Test that _initialize_nmf error is less than the standard deviation of
# the entries in the matrix.
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(10, 10))
W, H = nmf._initialize_nmf(A, 10, init='nndsvd')
error = linalg.norm(np.dot(W, H) - A)
sdev = linalg.norm(A - A.mean())
assert error <= sdev
def test_initialize_variants():
# Test NNDSVD variants correctness
# Test that the variants 'nndsvda' and 'nndsvdar' differ from basic
# 'nndsvd' only where the basic version has zeros.
rng = np.random.mtrand.RandomState(42)
data = np.abs(rng.randn(10, 10))
W0, H0 = nmf._initialize_nmf(data, 10, init='nndsvd')
Wa, Ha = nmf._initialize_nmf(data, 10, init='nndsvda')
War, Har = nmf._initialize_nmf(data, 10, init='nndsvdar',
random_state=0)
for ref, evl in ((W0, Wa), (W0, War), (H0, Ha), (H0, Har)):
assert_almost_equal(evl[ref != 0], ref[ref != 0])
# ignore UserWarning raised when both solver='mu' and init='nndsvd'
@ignore_warnings(category=UserWarning)
def test_nmf_fit_nn_output():
# Test that the decomposition does not contain negative values
A = np.c_[5. - np.arange(1, 6),
5. + np.arange(1, 6)]
for solver in ('cd', 'mu'):
for init in (None, 'nndsvd', 'nndsvda', 'nndsvdar', 'random'):
model = NMF(n_components=2, solver=solver, init=init,
random_state=0)
transf = model.fit_transform(A)
assert not((model.components_ < 0).any() or
(transf < 0).any())
@pytest.mark.parametrize('solver', ('cd', 'mu'))
def test_nmf_fit_close(solver):
rng = np.random.mtrand.RandomState(42)
# Test that the fit is not too far away
pnmf = NMF(5, solver=solver, init='nndsvdar', random_state=0,
max_iter=600)
X = np.abs(rng.randn(6, 5))
assert pnmf.fit(X).reconstruction_err_ < 0.1
@pytest.mark.parametrize('solver', ('cd', 'mu'))
def test_nmf_transform(solver):
# Test that NMF.transform returns close values
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(6, 5))
m = NMF(solver=solver, n_components=3, init='random',
random_state=0, tol=1e-5)
ft = m.fit_transform(A)
t = m.transform(A)
assert_array_almost_equal(ft, t, decimal=2)
def test_nmf_transform_custom_init():
# Smoke test that checks if NMF.transform works with custom initialization
random_state = np.random.RandomState(0)
A = np.abs(random_state.randn(6, 5))
n_components = 4
avg = np.sqrt(A.mean() / n_components)
H_init = np.abs(avg * random_state.randn(n_components, 5))
W_init = np.abs(avg * random_state.randn(6, n_components))
m = NMF(solver='cd', n_components=n_components, init='custom',
random_state=0)
m.fit_transform(A, W=W_init, H=H_init)
m.transform(A)
@pytest.mark.parametrize('solver', ('cd', 'mu'))
def test_nmf_inverse_transform(solver):
# Test that NMF.inverse_transform returns close values
random_state = np.random.RandomState(0)
A = np.abs(random_state.randn(6, 4))
m = NMF(solver=solver, n_components=4, init='random', random_state=0,
max_iter=1000)
ft = m.fit_transform(A)
A_new = m.inverse_transform(ft)
assert_array_almost_equal(A, A_new, decimal=2)
def test_n_components_greater_n_features():
# Smoke test for the case of more components than features.
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(30, 10))
NMF(n_components=15, random_state=0, tol=1e-2).fit(A)
def test_nmf_sparse_input():
# Test that sparse matrices are accepted as input
from scipy.sparse import csc_matrix
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(10, 10))
A[:, 2 * np.arange(5)] = 0
A_sparse = csc_matrix(A)
for solver in ('cd', 'mu'):
est1 = NMF(solver=solver, n_components=5, init='random',
random_state=0, tol=1e-2)
est2 = clone(est1)
W1 = est1.fit_transform(A)
W2 = est2.fit_transform(A_sparse)
H1 = est1.components_
H2 = est2.components_
assert_array_almost_equal(W1, W2)
assert_array_almost_equal(H1, H2)
def test_nmf_sparse_transform():
# Test that transform works on sparse data. Issue #2124
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(3, 2))
A[1, 1] = 0
A = csc_matrix(A)
for solver in ('cd', 'mu'):
model = NMF(solver=solver, random_state=0, n_components=2,
max_iter=400)
A_fit_tr = model.fit_transform(A)
A_tr = model.transform(A)
assert_array_almost_equal(A_fit_tr, A_tr, decimal=1)
def test_non_negative_factorization_consistency():
# Test that the function is called in the same way, either directly
# or through the NMF class
rng = np.random.mtrand.RandomState(42)
A = np.abs(rng.randn(10, 10))
A[:, 2 * np.arange(5)] = 0
for init in ['random', 'nndsvd']:
for solver in ('cd', 'mu'):
W_nmf, H, _ = non_negative_factorization(
A, init=init, solver=solver, random_state=1, tol=1e-2)
W_nmf_2, _, _ = non_negative_factorization(
A, H=H, update_H=False, init=init, solver=solver,
random_state=1, tol=1e-2)
model_class = NMF(init=init, solver=solver, random_state=1,
tol=1e-2)
W_cls = model_class.fit_transform(A)
W_cls_2 = model_class.transform(A)
assert_array_almost_equal(W_nmf, W_cls, decimal=10)
assert_array_almost_equal(W_nmf_2, W_cls_2, decimal=10)
def test_non_negative_factorization_checking():
A = np.ones((2, 2))
# Test parameters checking is public function
nnmf = non_negative_factorization
msg = ("The default value of init will change from "
"random to None in 0.23 to make it consistent "
"with decomposition.NMF.")
assert_warns_message(FutureWarning, msg, nnmf, A, A, A, np.int64(1))
msg = ("Number of components must be a positive integer; "
"got (n_components=1.5)")
assert_raise_message(ValueError, msg, nnmf, A, A, A, 1.5, 'random')
msg = ("Number of components must be a positive integer; "
"got (n_components='2')")
assert_raise_message(ValueError, msg, nnmf, A, A, A, '2', 'random')
msg = "Negative values in data passed to NMF (input H)"
assert_raise_message(ValueError, msg, nnmf, A, A, -A, 2, 'custom')
msg = "Negative values in data passed to NMF (input W)"
assert_raise_message(ValueError, msg, nnmf, A, -A, A, 2, 'custom')
msg = "Array passed to NMF (input H) is full of zeros"
assert_raise_message(ValueError, msg, nnmf, A, A, 0 * A, 2, 'custom')
msg = "Invalid regularization parameter: got 'spam' instead of one of"
assert_raise_message(ValueError, msg, nnmf, A, A, 0 * A, 2, 'custom', True,
'cd', 2., 1e-4, 200, 0., 0., 'spam')
def _beta_divergence_dense(X, W, H, beta):
"""Compute the beta-divergence of X and W.H for dense array only.
Used as a reference for testing nmf._beta_divergence.
"""
if isinstance(X, numbers.Number):
W = np.array([[W]])
H = np.array([[H]])
X = np.array([[X]])
WH = np.dot(W, H)
if beta == 2:
return squared_norm(X - WH) / 2
WH_Xnonzero = WH[X != 0]
X_nonzero = X[X != 0]
np.maximum(WH_Xnonzero, 1e-9, out=WH_Xnonzero)
if beta == 1:
res = np.sum(X_nonzero * np.log(X_nonzero / WH_Xnonzero))
res += WH.sum() - X.sum()
elif beta == 0:
div = X_nonzero / WH_Xnonzero
res = np.sum(div) - X.size - np.sum(np.log(div))
else:
res = (X_nonzero ** beta).sum()
res += (beta - 1) * (WH ** beta).sum()
res -= beta * (X_nonzero * (WH_Xnonzero ** (beta - 1))).sum()
res /= beta * (beta - 1)
return res
def test_beta_divergence():
# Compare _beta_divergence with the reference _beta_divergence_dense
n_samples = 20
n_features = 10
n_components = 5
beta_losses = [0., 0.5, 1., 1.5, 2.]
# initialization
rng = np.random.mtrand.RandomState(42)
X = rng.randn(n_samples, n_features)
np.clip(X, 0, None, out=X)
X_csr = sp.csr_matrix(X)
W, H = nmf._initialize_nmf(X, n_components, init='random', random_state=42)
for beta in beta_losses:
ref = _beta_divergence_dense(X, W, H, beta)
loss = nmf._beta_divergence(X, W, H, beta)
loss_csr = nmf._beta_divergence(X_csr, W, H, beta)
assert_almost_equal(ref, loss, decimal=7)
assert_almost_equal(ref, loss_csr, decimal=7)
def test_special_sparse_dot():
# Test the function that computes np.dot(W, H), only where X is non zero.
n_samples = 10
n_features = 5
n_components = 3
rng = np.random.mtrand.RandomState(42)
X = rng.randn(n_samples, n_features)
np.clip(X, 0, None, out=X)
X_csr = sp.csr_matrix(X)
W = np.abs(rng.randn(n_samples, n_components))
H = np.abs(rng.randn(n_components, n_features))
WH_safe = nmf._special_sparse_dot(W, H, X_csr)
WH = nmf._special_sparse_dot(W, H, X)
# test that both results have same values, in X_csr nonzero elements
ii, jj = X_csr.nonzero()
WH_safe_data = np.asarray(WH_safe[ii, jj]).ravel()
assert_array_almost_equal(WH_safe_data, WH[ii, jj], decimal=10)
# test that WH_safe and X_csr have the same sparse structure
assert_array_equal(WH_safe.indices, X_csr.indices)
assert_array_equal(WH_safe.indptr, X_csr.indptr)
assert_array_equal(WH_safe.shape, X_csr.shape)
@ignore_warnings(category=ConvergenceWarning)
def test_nmf_multiplicative_update_sparse():
# Compare sparse and dense input in multiplicative update NMF
# Also test continuity of the results with respect to beta_loss parameter
n_samples = 20
n_features = 10
n_components = 5
alpha = 0.1
l1_ratio = 0.5
n_iter = 20
# initialization
rng = np.random.mtrand.RandomState(1337)
X = rng.randn(n_samples, n_features)
X = np.abs(X)
X_csr = sp.csr_matrix(X)
W0, H0 = nmf._initialize_nmf(X, n_components, init='random',
random_state=42)
for beta_loss in (-1.2, 0, 0.2, 1., 2., 2.5):
# Reference with dense array X
W, H = W0.copy(), H0.copy()
W1, H1, _ = non_negative_factorization(
X, W, H, n_components, init='custom', update_H=True,
solver='mu', beta_loss=beta_loss, max_iter=n_iter, alpha=alpha,
l1_ratio=l1_ratio, regularization='both', random_state=42)
# Compare with sparse X
W, H = W0.copy(), H0.copy()
W2, H2, _ = non_negative_factorization(
X_csr, W, H, n_components, init='custom', update_H=True,
solver='mu', beta_loss=beta_loss, max_iter=n_iter, alpha=alpha,
l1_ratio=l1_ratio, regularization='both', random_state=42)
assert_array_almost_equal(W1, W2, decimal=7)
assert_array_almost_equal(H1, H2, decimal=7)
# Compare with almost same beta_loss, since some values have a specific
# behavior, but the results should be continuous w.r.t beta_loss
beta_loss -= 1.e-5
W, H = W0.copy(), H0.copy()
W3, H3, _ = non_negative_factorization(
X_csr, W, H, n_components, init='custom', update_H=True,
solver='mu', beta_loss=beta_loss, max_iter=n_iter, alpha=alpha,
l1_ratio=l1_ratio, regularization='both', random_state=42)
assert_array_almost_equal(W1, W3, decimal=4)
assert_array_almost_equal(H1, H3, decimal=4)
def test_nmf_negative_beta_loss():
# Test that an error is raised if beta_loss < 0 and X contains zeros.
# Test that the output has not NaN values when the input contains zeros.
n_samples = 6
n_features = 5
n_components = 3
rng = np.random.mtrand.RandomState(42)
X = rng.randn(n_samples, n_features)
np.clip(X, 0, None, out=X)
X_csr = sp.csr_matrix(X)
def _assert_nmf_no_nan(X, beta_loss):
W, H, _ = non_negative_factorization(
X, init='random', n_components=n_components, solver='mu',
beta_loss=beta_loss, random_state=0, max_iter=1000)
assert not np.any(np.isnan(W))
assert not np.any(np.isnan(H))
msg = "When beta_loss <= 0 and X contains zeros, the solver may diverge."
for beta_loss in (-0.6, 0.):
assert_raise_message(ValueError, msg, _assert_nmf_no_nan, X, beta_loss)
_assert_nmf_no_nan(X + 1e-9, beta_loss)
for beta_loss in (0.2, 1., 1.2, 2., 2.5):
_assert_nmf_no_nan(X, beta_loss)
_assert_nmf_no_nan(X_csr, beta_loss)
def test_nmf_regularization():
# Test the effect of L1 and L2 regularizations
n_samples = 6
n_features = 5
n_components = 3
rng = np.random.mtrand.RandomState(42)
X = np.abs(rng.randn(n_samples, n_features))
# L1 regularization should increase the number of zeros
l1_ratio = 1.
for solver in ['cd', 'mu']:
regul = nmf.NMF(n_components=n_components, solver=solver,
alpha=0.5, l1_ratio=l1_ratio, random_state=42)
model = nmf.NMF(n_components=n_components, solver=solver,
alpha=0., l1_ratio=l1_ratio, random_state=42)
W_regul = regul.fit_transform(X)
W_model = model.fit_transform(X)
H_regul = regul.components_
H_model = model.components_
W_regul_n_zeros = W_regul[W_regul == 0].size
W_model_n_zeros = W_model[W_model == 0].size
H_regul_n_zeros = H_regul[H_regul == 0].size
H_model_n_zeros = H_model[H_model == 0].size
assert W_regul_n_zeros > W_model_n_zeros
assert H_regul_n_zeros > H_model_n_zeros
# L2 regularization should decrease the mean of the coefficients
l1_ratio = 0.
for solver in ['cd', 'mu']:
regul = nmf.NMF(n_components=n_components, solver=solver,
alpha=0.5, l1_ratio=l1_ratio, random_state=42)
model = nmf.NMF(n_components=n_components, solver=solver,
alpha=0., l1_ratio=l1_ratio, random_state=42)
W_regul = regul.fit_transform(X)
W_model = model.fit_transform(X)
H_regul = regul.components_
H_model = model.components_
assert W_model.mean() > W_regul.mean()
assert H_model.mean() > H_regul.mean()
@ignore_warnings(category=ConvergenceWarning)
def test_nmf_decreasing():
# test that the objective function is decreasing at each iteration
n_samples = 20
n_features = 15
n_components = 10
alpha = 0.1
l1_ratio = 0.5
tol = 0.
# initialization
rng = np.random.mtrand.RandomState(42)
X = rng.randn(n_samples, n_features)
np.abs(X, X)
W0, H0 = nmf._initialize_nmf(X, n_components, init='random',
random_state=42)
for beta_loss in (-1.2, 0, 0.2, 1., 2., 2.5):
for solver in ('cd', 'mu'):
if solver != 'mu' and beta_loss != 2:
# not implemented
continue
W, H = W0.copy(), H0.copy()
previous_loss = None
for _ in range(30):
# one more iteration starting from the previous results
W, H, _ = non_negative_factorization(
X, W, H, beta_loss=beta_loss, init='custom',
n_components=n_components, max_iter=1, alpha=alpha,
solver=solver, tol=tol, l1_ratio=l1_ratio, verbose=0,
regularization='both', random_state=0, update_H=True)
loss = nmf._beta_divergence(X, W, H, beta_loss)
if previous_loss is not None:
assert previous_loss > loss
previous_loss = loss
def test_nmf_underflow():
# Regression test for an underflow issue in _beta_divergence
rng = np.random.RandomState(0)
n_samples, n_features, n_components = 10, 2, 2
X = np.abs(rng.randn(n_samples, n_features)) * 10
W = np.abs(rng.randn(n_samples, n_components)) * 10
H = np.abs(rng.randn(n_components, n_features))
X[0, 0] = 0
ref = nmf._beta_divergence(X, W, H, beta=1.0)
X[0, 0] = 1e-323
res = nmf._beta_divergence(X, W, H, beta=1.0)
assert_almost_equal(res, ref)
| bsd-3-clause |
NEONScience/NEON-Data-Skills | tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.py | 1 | 13725 | #!/usr/bin/env python
# coding: utf-8
# ---
# syncID: b0860577d1994b6e8abd23a6edf9e005
# title: "Classify a Raster Using Threshold Values in Python - 2018"
# description: "Learn how to read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal and create a classified raster object."
# dateCreated: 2018-07-04
# authors: Bridget Hass
# contributors: Donal O'Leary, Max Burner
# estimatedTime: 1 hour
# packagesLibraries: numpy, gdal, matplotlib, matplotlib.pyplot, os
# topics: lidar, raster, remote-sensing
# languagesTool: python
# dataProduct: DP1.30003, DP3.30015, DP3.30024, DP3.30025
# code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/intro-lidar/classify_raster_with_threshold-2018-py/classify_raster_with_threshold-2018-py.ipynb
# tutorialSeries: intro-lidar-py-series
# urlTitle: classify-raster-thresholds-2018-py
# ---
# In this tutorial, we will learn how to read NEON lidar raster GeoTIFFS
# (e.g., CHM, slope aspect) into Python `numpy` arrays with gdal and create a
# classified raster object.
#
# <div id="ds-objectives" markdown="1">
#
# ### Objectives
#
# After completing this tutorial, you will be able to:
#
# * Read NEON lidar raster GeoTIFFs (e.g., CHM, slope, aspect) into Python numpy arrays with gdal.
# * Create a classified raster object using thresholds.
#
# ### Install Python Packages
#
# * **numpy**
# * **gdal**
# * **matplotlib**
#
# ### Download Data
#
# For this lesson, we will be using a 1km tile of a Canopy Height Model derived from lidar data collected at the Smithsonian Environmental Research Center (SERC) NEON site. <a href="https://ndownloader.figshare.com/files/25787420">Download Data Here</a>.
#
# <a href="https://ndownloader.figshare.com/files/25787420" class="link--button link--arrow">
# Download Dataset</a>
#
# </div>
# In this tutorial, we will work with the NEON AOP L3 LiDAR ecoysystem structure (Canopy Height Model) data product. For more information about NEON data products and the CHM product DP3.30015.001, see the <a href="http://data.neonscience.org/data-products/DP3.30015.001" target="_blank">NEON Data Product Catalog</a>.
#
# First, let's import the required packages and set our plot display to be in-line:
# In[1]:
import numpy as np
import gdal, copy
import matplotlib.pyplot as plt
get_ipython().run_line_magic('matplotlib', 'inline')
import warnings
warnings.filterwarnings('ignore')
# ## Open a GeoTIFF with GDAL
#
# Let's look at the SERC Canopy Height Model (CHM) to start. We can open and read this in Python using the ```gdal.Open``` function:
# In[2]:
# Note that you will need to update the filepath below according to your local machine
chm_filename = '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_CHM.tif'
chm_dataset = gdal.Open(chm_filename)
# ### Read GeoTIFF Tags
# The GeoTIFF file format comes with associated metadata containing information about the location and coordinate system/projection. Once we have read in the dataset, we can access this information with the following commands:
# In[3]:
#Display the dataset dimensions, number of bands, driver, and geotransform
cols = chm_dataset.RasterXSize; print('# of columns:',cols)
rows = chm_dataset.RasterYSize; print('# of rows:',rows)
print('# of bands:',chm_dataset.RasterCount)
print('driver:',chm_dataset.GetDriver().LongName)
# ### Use GetProjection
#
# We can use the `gdal` ```GetProjection``` method to display information about the coordinate system and EPSG code.
# In[4]:
print('projection:',chm_dataset.GetProjection())
# ### Use GetGeoTransform
#
# The geotransform contains information about the origin (upper-left corner) of the raster, the pixel size, and the rotation angle of the data. All NEON data in the latest format have zero rotation. In this example, the values correspond to:
# In[5]:
print('geotransform:',chm_dataset.GetGeoTransform())
# In this case, the geotransform values correspond to:
#
# 1. Left-Most X Coordinate = `367000.0`
# 2. W-E Pixel Resolution = `1.0`
# 3. Rotation (0 if Image is North-Up) = `0.0`
# 4. Upper Y Coordinate = `4307000.0`
# 5. Rotation (0 if Image is North-Up) = `0.0`
# 6. N-S Pixel Resolution = `-1.0`
#
# The negative value for the N-S Pixel resolution reflects that the origin of the image is the upper left corner. We can convert this geotransform information into a spatial extent (xMin, xMax, yMin, yMax) by combining information about the origin, number of columns & rows, and pixel size, as follows:
# In[6]:
chm_mapinfo = chm_dataset.GetGeoTransform()
xMin = chm_mapinfo[0]
yMax = chm_mapinfo[3]
xMax = xMin + chm_dataset.RasterXSize/chm_mapinfo[1] #divide by pixel width
yMin = yMax + chm_dataset.RasterYSize/chm_mapinfo[5] #divide by pixel height (note sign +/-)
chm_ext = (xMin,xMax,yMin,yMax)
print('chm raster extent:',chm_ext)
# ### Use GetRasterBand
#
# We can read in a single raster band with `GetRasterBand` and access information about this raster band such as the No Data Value, Scale Factor, and Statistics as follows:
# In[7]:
chm_raster = chm_dataset.GetRasterBand(1)
noDataVal = chm_raster.GetNoDataValue(); print('no data value:',noDataVal)
scaleFactor = chm_raster.GetScale(); print('scale factor:',scaleFactor)
chm_stats = chm_raster.GetStatistics(True,True)
print('SERC CHM Statistics: Minimum=%.2f, Maximum=%.2f, Mean=%.3f, StDev=%.3f' %
(chm_stats[0], chm_stats[1], chm_stats[2], chm_stats[3]))
# ### Use ReadAsArray
#
# Finally we can convert the raster to an array using the `ReadAsArray` method. Cast the array to a floating point value using `astype(np.float)`. Once we generate the array, we want to set No Data Values to NaN, and apply the scale factor:
# In[8]:
chm_array = chm_dataset.GetRasterBand(1).ReadAsArray(0,0,cols,rows).astype(np.float)
chm_array[chm_array==int(noDataVal)]=np.nan #Assign CHM No Data Values to NaN
chm_array=chm_array/scaleFactor
print('SERC CHM Array:\n',chm_array) #display array values
# In[9]:
chm_array.shape
# In[10]:
# Calculate the % of pixels that are NaN and non-zero:
pct_nan = np.count_nonzero(np.isnan(chm_array))/(rows*cols)
print('% NaN:',round(pct_nan*100,2))
print('% non-zero:',round(100*np.count_nonzero(chm_array)/(rows*cols),2))
# ## Plot Canopy Height Data
#
# To get a better idea of the dataset, we can use a similar function to `plot_aop_refl` that we used in the NEON AOP reflectance tutorials:
# In[11]:
def plot_band_array(band_array,refl_extent,colorlimit,ax=plt.gca(),title='',cmap_title='',colormap=''):
plot = plt.imshow(band_array,extent=refl_extent,clim=colorlimit);
cbar = plt.colorbar(plot,aspect=40); plt.set_cmap(colormap);
cbar.set_label(cmap_title,rotation=90,labelpad=20);
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain'); #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90); #rotate x tick labels 90 degrees
# ### Histogram of Data
#
# As we did with the reflectance tile, it is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values. First we'll make a copy of the array and remove the `nan` values.
# In[12]:
import copy
chm_nonan_array = copy.copy(chm_array)
chm_nonan_array = chm_nonan_array[~np.isnan(chm_array)]
plt.hist(chm_nonan_array,weights=np.zeros_like(chm_nonan_array)+1./
(chm_array.shape[0]*chm_array.shape[1]),bins=50);
plt.title('Distribution of SERC Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
# On your own, adjust the number of bins and the range of the y-axis to get a good idea of the distribution of the canopy height values. We can see that most of the values are zero. In SERC, many of the zero CHM values correspond to bodies of water as well as regions of land without trees. Let's look at a histogram and plot the data without zero values:
# In[13]:
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array==0]=np.nan
chm_nonzero_nonan_array = chm_nonzero_array[~np.isnan(chm_nonzero_array)]
# Use weighting to plot relative frequency
plt.hist(chm_nonzero_nonan_array,bins=50);
# plt.hist(chm_nonzero_nonan_array.flatten(),50)
plt.title('Distribution of SERC Non-Zero Canopy Height')
plt.xlabel('Tree Height (m)'); plt.ylabel('Relative Frequency')
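# Following the earlier suggestion to experiment with the bin count and y-axis range, here is a
# minimal sketch (reusing `chm_nonzero_nonan_array` from the cell above; the bin count and the
# y-limit below are arbitrary choices to adjust to taste):
plt.figure()
plt.hist(chm_nonzero_nonan_array, bins=100)   # finer bins than the 50 used above
plt.ylim(0, 50000)                            # zoom the y-axis to see the shape of the tail
plt.title('Distribution of SERC Non-Zero Canopy Height (finer bins)')
plt.xlabel('Tree Height (m)'); plt.ylabel('Count')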
# Note that it appears that the trees don't have a smooth or normal distribution, but instead appear blocked off in chunks. This is an artifact of the Canopy Height Model algorithm, which bins the trees into 5m increments; this is done to avoid another artifact, "pits" (Khosravipour et al., 2014).
#
# From the histogram we can see that the majority of the trees are < 30m. We can re-plot the CHM array, this time adjusting the color bar limits to better visualize the variation in canopy height. We will plot the non-zero array so that CHM=0 appears white.
# In[14]:
plot_band_array(chm_array,
chm_ext,
(0,35),
title='SERC Canopy Height',
cmap_title='Canopy Height, m',
colormap='BuGn')
# ## Threshold Based Raster Classification
# Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:
# - Class 1: **CHM = 0 m**
# - Class 2: **0m < CHM <= 10m**
# - Class 3: **10m < CHM <= 20m**
# - Class 4: **20m < CHM <= 30m**
# - Class 5: **CHM > 30m**
#
# We can use `np.where` to find the indices where a boolean criteria is met.
# In[15]:
chm_reclass = copy.copy(chm_array)
chm_reclass[np.where(chm_array==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_array>0) & (chm_array<=10))] = 2 # 0m < CHM <= 10m - Class 2
chm_reclass[np.where((chm_array>10) & (chm_array<=20))] = 3 # 10m < CHM <= 20m - Class 3
chm_reclass[np.where((chm_array>20) & (chm_array<=30))] = 4 # 20m < CHM <= 30m - Class 4
chm_reclass[np.where(chm_array>30)] = 5 # CHM > 30m - Class 5
# We can define our own colormap to plot these discrete classifications, and create a custom legend to label the classes:
# In[16]:
import matplotlib.colors as colors
plt.figure();
cmapCHM = colors.ListedColormap(['lightblue','yellow','orange','green','red'])
plt.imshow(chm_reclass,extent=chm_ext,cmap=cmapCHM)
plt.title('SERC CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
# Create custom legend to label the five canopy height classes:
import matplotlib.patches as mpatches
class1_box = mpatches.Patch(color='lightblue', label='CHM = 0m')
class2_box = mpatches.Patch(color='yellow', label='0m < CHM <= 10m')
class3_box = mpatches.Patch(color='orange', label='10m < CHM <= 20m')
class4_box = mpatches.Patch(color='green', label='20m < CHM <= 30m')
class5_box = mpatches.Patch(color='red', label='CHM > 30m')
ax.legend(handles=[class1_box,class2_box,class3_box,class4_box,class5_box],
handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.)
# <div id="ds-challenge" markdown="1">
# **Challenge: Document Your Workflow**
#
# 1. Look at the code that you created for this lesson. Now imagine yourself months in the future. Document your script so that your methods and process is clear and reproducible for yourself or others to follow when you come back to this work at a later date.
# 2. In documenting your script, synthesize the outputs. Do they tell you anything about the vegetation structure at the field site?
#
# </div>
#
# <div id="ds-challenge" markdown="1">
# **Challenge: Try Another Classification**
#
# Create the following threshold classified outputs:
#
# 1. A raster where NDVI values are classified into the following categories:
# * Low greenness: NDVI < 0.3
# * Medium greenness: 0.3 < NDVI < 0.6
# * High greenness: NDVI > 0.6
# 2. A raster where aspect is classified into North and South facing slopes:
# * North: 0-45 & 315-360 degrees
# * South: 135-225 degrees
#
# <figure>
# <a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/geospatial-skills/NSEWclassification_BozEtAl2015.jpg">
# <img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/geospatial-skills/NSEWclassification_BozEtAl2015.jpg"></a>
# <figcaption> Reclassification of aspect (azimuth) values: North, 315-45
# degrees; East, 45-135 degrees; South, 135-225 degrees; West, 225-315 degrees.
# Source: <a href="http://www.aimspress.com/article/10.3934/energy.2015.3.401/fulltext.html"> Boz et al. 2015 </a>
# </figcaption>
# </figure>
#
# Be sure to document your workflow as you go using Jupyter Markdown cells.
#
# **Data Institute Participants:** When you are finished, export your outputs to HTML by selecting File > Download As > HTML (.html). Save the file as LastName_Tues_classifyThreshold.html. Add this to the Tuesday directory in your DI-NEON-participants Git directory and push them to your fork in GitHub. Merge with the central repository using a pull request.
#
# </div>
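# A minimal sketch of the first classification in the challenge above. No NDVI raster is loaded
# in this lesson, so a small random array stands in for `ndvi_array` purely to illustrate the
# same np.where thresholding pattern used for the CHM:
ndvi_array = np.random.RandomState(0).rand(100, 100)                     # placeholder NDVI values in [0, 1]
ndvi_reclass = copy.copy(ndvi_array)
ndvi_reclass[np.where(ndvi_array < 0.3)] = 1                              # Low greenness
ndvi_reclass[np.where((ndvi_array >= 0.3) & (ndvi_array <= 0.6))] = 2     # Medium greenness
ndvi_reclass[np.where(ndvi_array > 0.6)] = 3                              # High greenness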
# ## References
#
# Khosravipour, Anahita & Skidmore, Andrew & Isenburg, Martin & Wang, Tiejun & Hussin, Yousif. (2014). <a href="https://www.researchgate.net/publication/273663100_Generating_Pit-free_Canopy_Height_Models_from_Airborne_Lidar" target="_blank"> Generating Pit-free Canopy Height Models from Airborne Lidar. Photogrammetric Engineering & Remote Sensing</a>. 80. 863-872. 10.14358/PERS.80.9.863.
#
| agpl-3.0 |
tomlof/scikit-learn | sklearn/linear_model/tests/test_bayes.py | 23 | 3376 | # Author: Alexandre Gramfort <[email protected]>
# Fabian Pedregosa <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import SkipTest
from sklearn.linear_model.bayes import BayesianRidge, ARDRegression
from sklearn.linear_model import Ridge
from sklearn import datasets
def test_bayesian_on_diabetes():
# Test BayesianRidge on diabetes
raise SkipTest("XFailed Test")
diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target
clf = BayesianRidge(compute_score=True)
# Test with more samples than features
clf.fit(X, y)
# Test that scores are increasing at each iteration
assert_array_equal(np.diff(clf.scores_) > 0, True)
# Test with more features than samples
X = X[:5, :]
y = y[:5]
clf.fit(X, y)
# Test that scores are increasing at each iteration
assert_array_equal(np.diff(clf.scores_) > 0, True)
def test_bayesian_ridge_parameter():
# Test correctness of lambda_ and alpha_ parameters (Github issue #8224)
X = np.array([[1, 1], [3, 4], [5, 7], [4, 1], [2, 6], [3, 10], [3, 2]])
y = np.array([1, 2, 3, 2, 0, 4, 5]).T
# A Ridge regression model using an alpha value equal to the ratio of
# lambda_ and alpha_ from the Bayesian Ridge model must be identical
br_model = BayesianRidge(compute_score=True).fit(X, y)
rr_model = Ridge(alpha=br_model.lambda_ / br_model.alpha_).fit(X, y)
assert_array_almost_equal(rr_model.coef_, br_model.coef_)
assert_almost_equal(rr_model.intercept_, br_model.intercept_)
def test_toy_bayesian_ridge_object():
# Test BayesianRidge on toy
X = np.array([[1], [2], [6], [8], [10]])
Y = np.array([1, 2, 6, 8, 10])
clf = BayesianRidge(compute_score=True)
clf.fit(X, Y)
# Check that the model could approximately learn the identity function
test = [[1], [3], [4]]
assert_array_almost_equal(clf.predict(test), [1, 3, 4], 2)
def test_toy_ard_object():
# Test BayesianRegression ARD classifier
X = np.array([[1], [2], [3]])
Y = np.array([1, 2, 3])
clf = ARDRegression(compute_score=True)
clf.fit(X, Y)
# Check that the model could approximately learn the identity function
test = [[1], [3], [4]]
assert_array_almost_equal(clf.predict(test), [1, 3, 4], 2)
def test_return_std():
# Test return_std option for both Bayesian regressors
def f(X):
return np.dot(X, w) + b
def f_noise(X, noise_mult):
return f(X) + np.random.randn(X.shape[0]) * noise_mult
d = 5
n_train = 50
n_test = 10
w = np.array([1.0, 0.0, 1.0, -1.0, 0.0])
b = 1.0
X = np.random.random((n_train, d))
X_test = np.random.random((n_test, d))
for decimal, noise_mult in enumerate([1, 0.1, 0.01]):
y = f_noise(X, noise_mult)
m1 = BayesianRidge()
m1.fit(X, y)
y_mean1, y_std1 = m1.predict(X_test, return_std=True)
assert_array_almost_equal(y_std1, noise_mult, decimal=decimal)
m2 = ARDRegression()
m2.fit(X, y)
y_mean2, y_std2 = m2.predict(X_test, return_std=True)
assert_array_almost_equal(y_std2, noise_mult, decimal=decimal)
| bsd-3-clause |
OSSOS/MOP | src/ossos/plotting/scripts/sky_location_plots.py | 1 | 22390 |
import math
import sys
import argparse
import os
import numpy as np
import ephem
from astropy.io import votable
from astropy.time import Time, TimeDelta
import Polygon
import matplotlib.patches
# from matplotlib.patches import Rectangle
from matplotlib.collections import PatchCollection
from matplotlib.font_manager import FontProperties
import matplotlib.pyplot as plt
from ossos.planning import (megacam, invariable)
from ossos import (parsers, parameters)
# import ossos.core.ossos.planning.plotting.plot_fanciness
import plot_objects
from ossos import cameras # bit over the top to show all the ccds?
plot_extents = {"13AE": [209.8, 218.2, -15.5, -9.5],
"13AO": [235.4, 243.8, -15.5, -9.5],
"13A": [209, 244, -22, -9], # for both 13A blocks together
"15AM": [230, 238, -16, -9],
"15AP": [198, 207, -12, -4],
"13B": [30, 5, -2, 18], # for both 13B blocks together
"13BL": [9, 18, 1, 7],
"14BH": [18, 27, 10, 16],
"15BS": [4, 11, 1.5, 8.5],
"15BT": [4, 11, 1.5, 8.5],
"15BC": [44, 54, 13, 20],
"15BD": [44, 54, 13, 20]
}
xgrid = {'2013': [-3, -2, -1, 0, 1, 2, 3],
'2014r': [-3.5, -2.5, -1.5, -0.5, 0.5, 1.5, 2.5, 3.5],
'2014': [-3, -2, -1, 0, 1, 2, 3],
'astr': [1, 2, 3, 4]}
ygrid = {'2013': [-1, 0, 1],
'2014': [-1, 0, 1],
'astr': [-2.5, -1.5, -0.5, 0.5, 1.5],
'2014r': [-1.5, -0.5, 0.5, 1.5]}
block_centre = ephem.Ecliptic(0, 0)
field_centre = ephem.Ecliptic(0, 0)
# the offset (in RA and DEC) between fields
field_offset = math.radians(1.00)
# the size of the field
camera_dimen = 0.98
years = {"2014": {"ra_off": ephem.hours("00:00:00"),
"dec_off": ephem.hours("00:00:00"),
"fill": False,
"facecolor": 'k',
"alpha": 0.5,
"color": 'b'},
"2014r": {"ra_off": ephem.hours("00:05:00"),
"dec_off": ephem.degrees("00:18:00"),
"alpha": 0.5,
"fill": False,
"facecolor": 'k',
"color": 'g'},
'astr': {"ra_off": ephem.hours("00:00:00"),
"dec_off": ephem.degrees("-00:12:00"),
"alpha": 0.05,
"fill": True,
'facecolor': 'k',
"color": 'none'}
}
## camera_dimen is the size of the field.
## this is the 40 CCD layout. The width is set to exclude one of the
## Ears. dimen is really the height
pixscale = 0.184
detsec_ccd09 = ((4161, 2114), (14318, 9707))
detsec_ccd10 = ((6279, 4232), (14317, 9706))
detsec_ccd36 = ((2048, 1), (14318, 9707))
detsec_ccd37 = ((23219, 21172), (14309, 9698))
detsec_ccd00 = ((4160, 2113), (19351, 14740))
detsec_ccd17 = ((21097, 19050), (14309, 9698))
detsec_ccd35 = ((16934, 18981), (11, 4622))
print(detsec_ccd09[0][0] - detsec_ccd10[0][1], detsec_ccd09[0][1] - detsec_ccd36[0][0])
camera_width = (max(detsec_ccd37[0]) - min(detsec_ccd09[0]) + 1) * pixscale / 3600.0
camera_height = (max(detsec_ccd00[1]) - min(detsec_ccd35[1]) + 1) * pixscale / 3600.0
camera_width_40 = (max(detsec_ccd37[0]) - min(detsec_ccd36[0]) + 1) * pixscale / 3600.0
camera_width_36 = (max(detsec_ccd17[0]) - min(detsec_ccd09[0]) + 1) * pixscale / 3600.0
# I can't remember why this multiplication is here
camera_width = 0.983 * camera_width
camera_height = 0.983 * camera_height
# FIXME: redo this to have brewermpl colours?
def colsym():
# needs a light grey to get the original populations shown as background.
mcolours = ['0.75', 'k', 'yellow', 'black', 'blue'] # closer to 1 is lighter. 0.85 is good.
msymbols = ['o', 'o', 'o', 'x', 'd']
return mcolours, msymbols
def basic_skysurvey_plot_setup(from_plane=ephem.degrees('0')):
# # EmulateApJ columnwidth=245.26 pts
# fig_width_pt = 246.0
# inches_per_pt = 1.0 / 72.27
# golden_mean = (sqrt(5.) - 1.0) / 2.0
# fig_width = fig_width_pt * inches_per_pt
# fig_height = fig_width * golden_mean * 1.5
# fig_size = [fig_width, fig_height]
fig = plt.figure()
ax = fig.add_subplot(111) # , aspect="equal")
handles = [] # this handles the creation of the legend properly at the end of plotting
labels = []
handles, labels = plot_galactic_plane(handles, labels)
# handles, labels = plot_ecliptic_plane(handles, labels)
handles, labels = plot_invariable_plane(handles, labels, from_plane)
fontP = FontProperties()
fontP.set_size('small') # make the fonts smaller
plt.xlabel('RA (deg)')
plt.ylabel('Declination (deg)')
plt.grid(True, which='both')
# plot_fanciness.remove_border(ax)
return handles, labels, ax, fontP
def plot_galactic_plane(handles, labels):
gp = [ephem.Galactic(str(lon), str(0)) for lon in range(0, 360)]
galPlane = [ephem.Equatorial(coord) for coord in gp]
# rewrap the ra to start at zero: that otherwise causes a problem in the plotting!
temp = [(math.degrees(coord.ra), math.degrees(coord.dec)) for coord in galPlane]
temp.sort(key=lambda x: x[0])
ra_galPlane = [t[0] for t in temp]
dec_galPlane = [tt[1] for tt in temp]
handles.append(plt.plot(ra_galPlane, dec_galPlane, 'b'))
labels.append('galactic plane')
# plt.plot([r+360 for r in ra_galPlane], dec_galPlane, 'b') # echo
return handles, labels
def plot_ecliptic_plane(handles, labels):
ep = [ephem.Ecliptic(str(lon), str(0)) for lon in
range(0, 360)] # this line is fine: need Ecliptic(long, lat) format in creation
ecPlane = [ephem.Equatorial(coord) for coord in ep] # so this must be where the issue is.
ra_ecPlane = [math.degrees(coord.ra) for coord in ecPlane]
dec_ecPlane = [math.degrees(coord.dec) for coord in ecPlane]
handles.append(plt.plot(ra_ecPlane, dec_ecPlane, '#E47833'))
labels.append('ecliptic')
# plt.plot([rr+360 for rr in ra_ecPlane], dec_ecPlane, '#E47833') # echo
return handles, labels
def plot_invariable_plane(handles, labels, from_plane=ephem.degrees('0')):
# Plot the invariable plane: values from DE405, table 5, Souami and Souchay 2012
# http://www.aanda.org/articles/aa/pdf/2012/07/aa19011-12.pdf
# Ecliptic J2000
lon = np.arange(-2 * math.pi, 2 * math.pi, 0.5 / 57)
lat = 0 * lon
(lat, lon) = invariable.trevonc(lat, lon)
ec = [ephem.Ecliptic(x, y) for (x, y) in np.array((lon, lat)).transpose()]
eq = [ephem.Equatorial(coord) for coord in ec]
handles.append(plt.plot([math.degrees(coord.ra) for coord in eq],
[math.degrees(coord.dec) for coord in eq],
ls='--',
color='#262626',
lw=1,
alpha=0.7))
labels.append('invariable')
return handles, labels
def build_ossos_footprint(ax, block_name, block, field_offset, plot=True, plot_col='b'):
print('starting footprint')
# build the pointings that will be used for the discovery fields
x = []
y = []
names = []
coverage = []
year = '2014'
if year == 'astr':
field_offset = field_offset * 0.75
sign = -1
# if 'f' in block: # unsure what this was ever meant to do
# sign = 1
rac = ephem.hours(block["RA"]) + years[year]["ra_off"]
decc = ephem.degrees(block["DEC"]) + sign * years[year]["dec_off"]
width = field_offset / math.cos(decc)
block_centre.from_radec(rac, decc)
block_centre.set(block_centre.lon + xgrid[year][0] * width, block_centre.lat)
field_centre.set(block_centre.lon, block_centre.lat)
for dx in xgrid[year]:
(rac, decc) = field_centre.to_radec()
for dy in ygrid[year]:
ddec = dy * field_offset
dec = math.degrees(decc + ddec)
ra = math.degrees(rac)
names.append("%s%+d%+d" % ( block_name, dx, dy))
y.append(dec)
x.append(ra)
xcen = ra
ycen = dec
dimen = camera_dimen
coverage.append(Polygon.Polygon((
(xcen - dimen / 2.0, ycen - dimen / 2.0),
(xcen - dimen / 2.0, ycen + dimen / 2.0),
(xcen + dimen / 2.0, ycen + dimen / 2.0),
(xcen + dimen / 2.0, ycen - dimen / 2.0),
(xcen - dimen / 2.0, ycen - dimen / 2.0))))
if plot:
print('plotting footprint')
fill = True
# ax.add_artist(Rectangle(xy=(ra - dimen / 2.0, dec - dimen / 2.0),
# height=dimen,
# width=dimen,
# edgecolor='k',
# facecolor=plot_col,
# lw=0.5, fill=True, alpha=0.15, zorder=0))
print(ra, dec)
ax.add_artist(Rectangle(xy=(math.degrees(ra) - camera_width_40/2.0, math.degrees(dec) - camera_height / 4.0 ),
width= camera_width_40,
height= camera_height/2.0,
color='b',
lw=0.5, fill=fill, alpha=0.3))
ax.add_artist(Rectangle(xy=(math.degrees(ra) - camera_width_36/2.0, math.degrees(dec) - camera_height / 2.0),
height=camera_height/4.0,
width=camera_width_36,
color='b',
lw=0.5, fill=fill, alpha=0.3))
ax.add_artist(Rectangle(xy=(math.degrees(ra) - camera_width_36/2.0, math.degrees(dec) + camera_height / 4.0),
height=camera_height/4.0,
width=camera_width_36,
color='b',
lw=0.5, fill=fill, alpha=0.3))
rac += field_offset / math.cos(decc)
for i in range(3):
field_centre.from_radec(rac, decc)
field_centre.set(field_centre.lon, block_centre.lat)
(ttt, decc) = field_centre.to_radec()
ras = np.radians(x)
decs = np.radians(y)
return ras, decs, coverage, names, ax
def true_footprint(ax, blockname):
# with open('/Users/bannisterm/Dropbox/OSSOS/ossos-pipeline/src/ossos/core/ossos/planning/triplet_and_processing_notes/T_15B_discovery_expnums.txt') as infile:
# for line in infile.readlines():
# Trying a test field on T block only
ra = 7.55171
dec = 4.52681
print('plotting footprint')
fill = False
print(ra, dec)
patches = []
patches.append(matplotlib.patches.Rectangle(xy=(math.degrees(ra) - camera_width_40 / 2.0, math.degrees(dec) - camera_height / 4.0),
width=camera_width_40,
height=camera_height / 2.0,
color='b',
zorder=0,
lw=0.5, fill=fill, alpha=0.3))
ax.add_artist(matplotlib.patches.Rectangle(xy=(math.degrees(ra) - camera_width_36 / 2.0, math.degrees(dec) - camera_height / 2.0),
height=camera_height / 4.0,
width=camera_width_36,
color='b',
zorder=0,
lw=0.5, fill=fill, alpha=0.3))
ax.add_artist(matplotlib.patches.Rectangle(xy=(math.degrees(ra) - camera_width_36 / 2.0, math.degrees(dec) + camera_height / 4.0),
height=camera_height / 4.0,
width=camera_width_36,
color='b',
zorder=0,
lw=0.5, fill=fill, alpha=0.3))
# WHERE IS MY FIELD whyyyyy
collection = PatchCollection(patches)
ax.add_collection(collection)
return ax
def synthetic_model_kbos(coverage, input_date=parameters.DISCOVERY_NEW_MOON):
# # build a list of KBOs that will be in the discovery fields.
raise NotImplementedError('Not yet fully completed.')
# kbos = parsers.synthetic_model_kbos(input_date)
# ## keep a list of KBOs that are in the discovery pointings
#
# # FIXME
# for field in coverage:
# for kbo in kbos:
# if kbo.isInside(ra[-1], dec[-1]):
# kbos.append(kbo)
# break
#
# return ra, dec, kbos
def keplerian_sheared_field_locations(ax, kbos, date, ras, decs, names, elongation=False, plot=False):
"""
Shift fields from the discovery set to the requested date by the average motion of L7 kbos in the discovery field.
:param ras:
:param decs:
:param plot:
:param ax:
:param kbos: precomputed at the discovery date for that block. e.g. Oct new moon for 13B
:param date:
:param names:
:param elongation:
"""
seps = {'dra': 0., 'ddec': 0.}
for kbo in kbos:
ra = kbo.ra
dec = kbo.dec
kbo.compute(date)
seps['dra'] += kbo.ra - ra
seps['ddec'] += kbo.dec - dec
seps['dra'] /= float(len(kbos))
seps['ddec'] /= float(len(kbos))
print(date, seps, len(kbos))
for idx in range(len(ras)):
name = names[idx]
ra = ras[idx] + seps['dra']
dec = decs[idx] + seps['ddec']
if plot:
ax.add_artist(Rectangle(xy=(math.degrees(ra) - camera_dimen / 2.0, math.degrees(dec) - camera_dimen / 2.0),
height=camera_dimen,
width=camera_dimen,
edgecolor='b',
facecolor='b',
lw=0.5, fill=True, alpha=0.2))
if elongation:
# For each field centre, plot the elongation onto the field at that date.
elong = field_elongation(ephem.degrees(ra), ephem.degrees(dec), date)
ax.annotate(name, (math.degrees(ra) + camera_dimen / 2., math.degrees(dec)), size=3)
ax.annotate("%0.1f" % elong, (math.degrees(ra) + camera_dimen / 4., math.degrees(dec) - camera_dimen / 4.),
size=5)
return ax
def field_elongation(ra, dec, date):
"""
For a given field, calculate the solar elongation at the given date.
:param ra: field's right ascension. unit="h" format="RAh:RAm:RAs"
:param dec: field's declination. degrees
:param date: date at which to calculate elongation
:return: elongation from the Sun in degrees
"""
sun = ephem.Sun()
sun.compute(date)
sep = ephem.separation((ra, dec), sun)
retval = 180. - math.degrees(sep)
return retval
def plot_USNO_B1_stars(ax):
t = votable.parse(usnoB1.TAPQuery(ra_cen, dec_cen, width, height)).get_first_table()
Rmag = t.array['Bmag'][t.array['Bmag'] < 15]
min = max(Rmag.min(), 11)
max = Rmag.max()
scale = 0.5 * 10 ** ((min - Rmag) / 2.5)
print(scale.min(), scale.max())
ax.scatter(t.array['RAJ2000'], t.array['DEJ2000'], s=scale, marker='o', facecolor='y', alpha=0.8, edgecolor='',
zorder=-10)
return ax
def plot_existing_CFHT_Megacam_observations_in_area(ax, qra, qdec, discovery=None, obs_ids=None):
filename = '/Users/michele/' + "megacam{:+6.2f}{:+6.2f}.xml".format(qra, qdec)
if not os.access(filename, os.R_OK):
data = megacam.TAPQuery(qra, qdec, 60.0, 30.0).read()
fobj = open(filename, 'w')
fobj.write(data)
fobj.close()
fobj = open(filename, 'r')
t = votable.parse(fobj).get_first_table()
ra = t.array['RAJ2000']
dec = t.array['DEJ2000']
mjdate = t.array['MJDATE']
camera_dimen = 0.98
# More sophisticated date ID on plot. Let's identify first-year, discovery, second-year followup
for idx in range(ra.size):
alpha = 0.1
# print abs(Time(mjdate[idx], format='mjd') - discovery) < TimeDelta(1, format='jd')
if Time(mjdate[idx], format='mjd') < Time('2014-01-01'):
ec = 'b'
elif (discovery is not None and abs(Time(mjdate[idx], format='mjd') - discovery) < TimeDelta(1, format='jd')):
print(Time(mjdate[idx], format='mjd').iso)
ec = 'r'
alpha = 1
else:
ec = '#E47833'
r = Rectangle(xy=(ra[idx] - camera_dimen / 2.0, dec[idx] - camera_dimen / 2.0),
height=camera_dimen,
width=camera_dimen,
edgecolor=ec,
alpha=alpha,
lw=0.1,
zorder=-100,
fill=False)
ax.add_artist(r)
return ax
def plot_discovery_uncertainty_ellipses(ax, date):
# plot objects at time of triplet with an X, location each 3 days with a single grey pixel, actual observations
# with a solid dot. Will give suitable corkscrew effect.
# plot wireframe of footprint at discovery triplet
# plot hollow circle as location at start of dark run, ellipse (no fill I think) for orbit uncertainty.
raise NotImplementedError
return ax
def saving_closeout(blockname, date, extents, file_id):
# plt.title(date)
plt.axis(extents) # depends on which survey field is being plotted
plt.draw()
outfile = blockname + file_id + date.split(' ')[0].replace('/', '-') + '.pdf'
# Always use axes.set_rasterized(True) if you are saving as an EPS file.
print('Saving file.', outfile)
plt.savefig(outfile, transparent=True)
plt.close()
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument("-opposition",
help="Plots blocks at their opposition dates (the date .")
parser.add_argument("-discovery",
action="store_false",
help="Plots blocks at their discovery dates.")
parser.add_argument("-date",
help="Plot blocks at a specific user-provided date, format yyyy/mm/dd HH:MM:SS.")
parser.add_argument("-elongation",
help="Display text of block IDs and arc distance of blocks from opposition on the given date")
parser.add_argument("-synthetic",
action="store_true",
help="Use L7 model to slew blocks, or plot L7 model objects that fall in a given block.")
parser.add_argument("-existing",
action="store_true",
help="Plot all OSSOS imaging of the block.")
parser.add_argument("blocks",
nargs='*',
help="specify blocks to be plotted, e.g. 13AE. Without specifying, will do all eight blocks,"
"making eight separate plots.")
args = parser.parse_args()
print(args)
file_id = ""
if args.blocks: # Coordinates at discovery-opposition of currently-determined OSSOS blocks (2013: 4; 2015: 8
# total).
blocks = {}
for b in args.blocks:
assert b in parameters.BLOCKS
blocks[b] = parameters.BLOCKS[b]
else: # do them all!
blocks = parameters.BLOCKS
discoveries = parsers.ossos_release_parser(table=True)
# extent = plot_extents['13A'] # hardwired for doing both at once
for blockname, block in list(blocks.items()):
extent = plot_extents[blockname]
if args.opposition:
# 13AE is a month different in opposition from 13AO, but 13B are both in October
date = parameters.OPPOSITION_DATES[blockname]
file_id = 'opposition-'
if args.discovery:
# E block discovery triplets are April 4,9,few 19; O block are May 7,8.
# BL is 09-29 AND 10-31. Discoveries in 10-31 are placed at predicted locations, to accommodate split.
date = parameters.DISCOVERY_DATES[blockname]
file_id = 'discovery-'
if args.date:
date = args.date
args.synthetic = True
file_id = '-at-date-'
if args.elongation:
file_id = 'elongation-'
if args.existing:
file_id += '-bg-imaging-'
assert date
print(blockname, date)
if blockname == '13AO':
handles, labels, ax, fontP = basic_skysurvey_plot_setup(from_plane=ephem.degrees('6.0'))
ras, decs, coverage, names, ax = build_ossos_footprint(ax, blockname, block, field_offset, plot=True,
plot_col='#E47833')
else:
handles, labels, ax, fontP = basic_skysurvey_plot_setup()
# ras, decs, coverage, names, ax = build_ossos_footprint(ax, blockname, block, field_offset, plot=True)
ax = true_footprint(ax, blockname)
ax, kbos = plot_objects.plot_known_tnos_singly(ax, extent, date)
if args.synthetic: # defaults to discovery new moon: FIXME: set date appropriately
ra, dec, kbos = synthetic_model_kbos(coverage)
ax = keplerian_sheared_field_locations(ax, kbos, date, ras, decs, names,
elongation=args.elongation, plot=True)
ax = plot_objects.plot_synthetic_kbos(ax, coverage)
block_discoveries = discoveries[
np.array([name.startswith('o5' + blockname[-1].lower()) for name in discoveries['object']])]
ax = plot_objects.plot_ossos_discoveries(ax, block_discoveries, blockname,
prediction_date=date) # will predict for dates other than discovery
# ax = plot_objects.plot_planets(ax, extent, date)
# ax = plot_discovery_uncertainty_ellipses(ax, date)
# if blockname == '13AE': # special case: Saturn was adjacent at discovery
# ax = plot_objects.plot_saturn_moons(ax)
if args.existing:
plot_existing_CFHT_Megacam_observations_in_area(ax)
saving_closeout(blockname, date, plot_extents[blockname], file_id)
# saving_closeout('13A', date, extent, file_id)
sys.stderr.write("Finished.\n")
| gpl-3.0 |
JacekPierzchlewski/RxCS | examples/acquisitions/uniform_ex0.py | 1 | 4652 | """
This script is an example of how to use the uniform signal sampler|br|
In this example 1 random multitone signal is generated and sampled |br|
The signal contains 3 random tones, the highest possible frequency in the
signal is 10 kHz.
The signal is uniformly sampled with the average sampling frequency
equal to 8 kHz. The sampling grid is 1 us.
After the signal generation and sampling, the original signal and the observed
signal is plotted in the time domain. Additionally, the sampling pattern is
plotted.
*Author*:
Jacek Pierzchlewski, Aalborg University, Denmark. <[email protected]>
*Version*:
1.0 | 29-JAN-2015 : * Version 1.0 released. |br|
2.0 | 14-AUG-2015 : * Adjusted to uniform sampler v2.0 |br|
2.1 | 17-AUG-2015 : * Adjusted to uniform sampler v2.1 (observation matrices in a list) |br|
*License*:
BSD 2-Clause
"""
from __future__ import division
import rxcs
import numpy as np
import matplotlib.pyplot as plt
def _uniform_ex0():
# Put the stuff on board
gen = rxcs.sig.randMult() # Signal generator
samp = rxcs.acq.uniform() # Sampler
# General settings
TIME = 1e-3 # Time of the signal is 1 ms
FSMP = 1e6 # The signal representation sampling frequency is 1 MHz
# Settings for the generator
gen.tS = TIME # Time of the signal is 1 ms
gen.fR = FSMP # The signal representation sampling frequency is 1 MHz
gen.fMax = 10e3 # The highest possible frequency in the signal is 10 kHz
gen.fRes = 1e3 # The signal spectrum resolution is 1 kHz
gen.nTones = 1 # The number of random tones
# Generate settings for the sampler
samp.tS = TIME # Time of the signal
    samp.fR = FSMP     # The signal representation sampling frequency
samp.Tg = 1e-6 # The sampling grid period
samp.fSamp = 8e3 # The average sampling frequency
samp.iAlpha = 0.5 # Alpha parameter
# -----------------------------------------------------------------
# Run the multitone signal generator and the sampler
gen.run() # Run the generator
samp.mSig = gen.mSig # Connect the signal from the generator to the sampler
samp.run() # Run the sampler
# -----------------------------------------------------------------
# Plot the results of sampling
vSig = gen.mSig[0, :] # The signal from the generator
vT = gen.vTSig # The time vector of the original signal
vObSig = samp.mObSig[0, :] # The observed signal
vPattsT = samp.mPattsT[0, :] # The sampling moments
# Plot the signal and the observed sampling points
hFig1 = plt.figure(1)
hSubPlot1 = hFig1.add_subplot(211)
hSubPlot1.grid(True)
hSubPlot1.set_title('Signal and the observed sampling points')
hSubPlot1.plot(vT, vSig, '-')
hSubPlot1.plot(vPattsT, vObSig, 'r*', markersize=10)
# Plot the sampling pattern
hSubPlot2 = hFig1.add_subplot(212)
hSubPlot2.grid(True)
hSubPlot2.set_title('The sampling pattern')
hSubPlot2.set_xlabel('Time [s]')
(markerline, stemlines, baseline) = hSubPlot2.stem(vPattsT,
np.ones(vPattsT.shape),
linefmt='b-',
markerfmt='bo',
basefmt='-')
hSubPlot2.set_ylim(0, 1.1)
hSubPlot2.set_xlim(0, 0.001)
plt.setp(stemlines, color='b', linewidth=2.0)
plt.setp(markerline, color='b', markersize=10.0)
# -----------------------------------------------------------------
# APPENDIX:
# This part is to show how to use the observation matrix, if it is needed
# (for example in compressed sensing systems)
mPhi = samp.lPhi[0] # Get the observation matrix (1st element of the list with observation matrices)
vObSigPhi = np.dot(mPhi, vSig) # Sample the signal using the observation matrix
# Plot the signal and the observed sampling points
hFig2 = plt.figure(2)
hSubPlot1 = hFig2.add_subplot(111)
hSubPlot1.grid(True)
hSubPlot1.set_title('Signal and the observed sampling points')
hSubPlot1.plot(vT, vSig, '-')
hSubPlot1.plot(vPattsT, vObSigPhi, 'ro', markersize=10)
# -----------------------------------------------------------------
plt.show(block=True)
# =====================================================================
# Trigger when start as a script
# =====================================================================
if __name__ == '__main__':
_uniform_ex0()
| bsd-2-clause |
njchiang/task-fmri-utils | fmri_core/analysis.py | 1 | 7675 | from .utils import write_to_logger, mask_img, data_to_img
from .rsa_searchlight import SearchLight as RSASearchlight
from .cross_searchlight import SearchLight
import numpy as np
from datetime import datetime
from nilearn.input_data import NiftiMasker
from scipy.signal import savgol_filter
from nipy.modalities.fmri.design_matrix import make_dmtx
from nipy.modalities.fmri.experimental_paradigm import BlockParadigm
from sklearn.preprocessing import FunctionTransformer
import sklearn.model_selection as ms
from nilearn import decoding, masking
from .rsa import rdm, wilcoxon_onesided
#######################################
# Analysis setup
#######################################
# TODO : make this return an image instead of just data? (use dataToImg)
# TODO : also need to make this return reoriented labels, verify it is working
def nipreproc(img, mask=None, sessions=None, logger=None, **kwargs):
"""
applies nilearn's NiftiMasker to data
:param img: image to be processed
:param mask: mask (optional)
:param sessions: chunks (optional)
:param logger: logger instance
:param kwargs: kwargs for NiftiMasker from NiLearn
:return: preprocessed image (result of fit_transform() on img)
"""
write_to_logger("Running NiftiMasker...", logger)
return NiftiMasker(mask_img=mask,
sessions=sessions,
**kwargs).fit_transform(img)
def op_by_label(d, l, op=None, logger=None):
"""
apply operation to each unique value of the label and
returns the data in its original order
:param d: data (2D numpy array)
:param l: label to operate on
:param op: operation to carry (scikit learn)
:return: processed data
"""
write_to_logger("applying operation by label at " + str(datetime.now()), logger)
if op is None:
from sklearn.preprocessing import StandardScaler
op = StandardScaler()
opD = np.concatenate([op.fit_transform(d[l.values == i])
for i in l.unique()], axis=0)
lOrder = np.concatenate([l.index[l.values == i]
for i in l.unique()], axis=0)
write_to_logger("Ended at " + str(datetime.now()), logger=logger)
return opD[lOrder] # I really hope this works...
def sgfilter(logger=None, **sgparams):
write_to_logger("Creating SG filter", logger)
return FunctionTransformer(savgol_filter, kw_args=sgparams)
#######################################
# Analysis
#######################################
# TODO : featurized design matrix
# take trial by trial (beta extraction) matrix and multiply by
# feature space (in same order)
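# A minimal sketch of the idea in the TODO above (a hypothetical helper, not used elsewhere in
# this module): with a trial-by-trial design matrix and a trial-by-feature matrix whose rows are
# in the same trial order, the featurized design matrix is simply their product.
def _featurize_designmat_sketch(trial_dm, feature_space):
    """Project a (n_timepoints, n_trials) design matrix onto a (n_trials, n_features) space."""
    return np.dot(trial_dm, feature_space)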
def make_designmat(frametimes, cond_ids, onsets, durations, amplitudes=None,
design_kwargs=None, constant=False, logger=None):
"""
Creates design matrix from TSV columns
:param frametimes: time index (in s) of each TR
:param cond_ids: condition ids. each unique string will become a regressor
:param onsets: condition onsets
:param durations: durations of trials
:param amplitudes: amplitude of trials (default None)
:param design_kwargs: additional arguments(motion parameters, HRF, etc)
:param logger: logger instance
:return: design matrix instance
"""
if design_kwargs is None:
design_kwargs = {}
if "drift_model" not in design_kwargs.keys():
design_kwargs["drift_model"] = "blank"
write_to_logger("Creating design matrix at " + str(datetime.now()), logger)
paradigm = BlockParadigm(con_id=cond_ids,
onset=onsets,
duration=durations,
amplitude=amplitudes)
dm = make_dmtx(frametimes, paradigm, **design_kwargs)
if constant is False:
dm.matrix = np.delete(dm.matrix, dm.names.index("constant"), axis=1)
dm.names.remove("constant")
write_to_logger("Ended at " + str(datetime.now()), logger=logger)
return dm
def predict(clf, x, y, log=False, logger=None):
"""
Encoding prediction. Assumes data is pre-split into train and test
For now, just uses Ridge regression. Later will use cross-validated Ridge.
:param clf: trained classifier
:param x: Test design matrix
:param y: Test data
:param logger: logging instance
:return: Correlation scores, weights
"""
pred = clf.predict(x)
if y.ndim < 2:
y = y[:, np.newaxis]
if pred.ndim < 2:
pred = pred[:, np.newaxis]
if log:
write_to_logger("Predicting at " + str(datetime.now()), logger)
corrs = np.array([np.corrcoef(y[:, i], pred[:, i])[0, 1]
for i in range(pred.shape[1])])
if log:
write_to_logger("Ended at " + str(datetime.now()), logger=logger)
return corrs
def roi(x, y, clf, m=None, cv=None, logger=None, **roiargs):
"""
Cross validation on a masked roi. Need to decide if this
function does preprocessing or not
(probably should pipeline in)
pa.roi(x, y, clf, m, cv, groups=labels['chunks'])
:param x: input image
:param y: labels
:param clf: classifier or pipeline
:param m: mask (optional)
:param cv: cross validator
:param roiargs: other model_selection arguments, especially groups
:return: CV results
"""
if m is not None:
X = mask_img(x, m, logger=logger)
else:
X = x
return ms.cross_val_score(estimator=clf, X=X, y=y, cv=cv, **roiargs)
def searchlight(x, y, m=None, groups=None, cv=None,
write=False, logger=None, permutations=0, random_state=42, **searchlight_args):
"""
Wrapper to launch searchlight
:param x: Data
:param y: labels
:param m: mask
:param groups: group labels
:param cv: cross validator
:param write: if image for writing is desired or not
:param logger:
:param searchlight_args:(default) process_mask_img(None),
radius(2mm), estimator(svc),
n_jobs(-1), scoring(none), cv(3fold), verbose(0)
:return: trained SL object and SL results
"""
write_to_logger("starting searchlight at " + str(datetime.now()), logger=logger)
if m is None:
m = masking.compute_epi_mask(x)
searchlight_args["process_mask_img"] = m
write_to_logger("searchlight params: " + str(searchlight_args), logger=logger)
sl = SearchLight(mask_img=m, cv=cv, **searchlight_args)
sl.fit(x, y, groups, permutations=permutations, random_state=random_state)
write_to_logger("Searchlight ended at " + str(datetime.now()), logger=logger)
if write:
return sl, data_to_img(sl.scores_, x, logger=logger)
else:
return sl
def searchlight_rsa(x, y, m=None, write=False,
logger=None, **searchlight_args):
"""
Wrapper to launch searchlight
:param x: Data
:param y: model
:param m: mask
:param write: if image for writing is desired or not
:param logger:
:param searchlight_args:(default) process_mask_img(None),
radius(2mm), estimator(svc),
n_jobs(-1), verbose(0)
:return: trained SL object and SL results
"""
write_to_logger("starting searchlight at " + str(datetime.now()), logger=logger)
if m is None:
m = masking.compute_epi_mask(x)
write_to_logger("searchlight params: " + str(searchlight_args))
sl = RSASearchlight(mask_img=m, **searchlight_args)
sl.fit(x, y)
write_to_logger("Searchlight ended at " + str(datetime.now()), logger=logger)
if write:
return sl, data_to_img(sl.scores_, x, logger=logger)
else:
return sl
| mit |
jabooth/menpo-archive | menpo/fitmultilevel/clm/classifierfunctions.py | 1 | 1590 | from sklearn import svm
from sklearn import linear_model
def classifier(X, t, classifier_type, **kwargs):
r"""
General binary classifier function. Provides a consistent signature for
specific implementation of binary classifier functions.
Parameters
----------
X: (n_samples, n_features) ndarray
Training vectors.
t: (n_samples, 1) ndarray
Binary class labels.
classifier_type: closure
Closure implementing a particular type of binary classifier.
Returns
-------
classifier_closure: function
The classifier.
"""
if hasattr(classifier_type, '__call__'):
classifier_closure = classifier_type(X, t, **kwargs)
return classifier_closure
else:
raise ValueError("classifier_type can only be a closure defining "
"a particular classifier technique. Several "
"examples of such closures can be found in "
"`menpo.fitmultilevel.clm.classifierfunctions` "
"(linear_svm_lr, ...).")
def linear_svm_lr(X, t):
r"""
Binary classifier that combines Linear Support Vector Machines and
Logistic Regression.
"""
clf1 = svm.LinearSVC(class_weight='auto')
clf1.fit(X, t)
t1 = clf1.decision_function(X)
clf2 = linear_model.LogisticRegression(class_weight='auto')
clf2.fit(t1[..., None], t)
def linear_svm_predict(x):
t1_pred = clf1.decision_function(x)
return clf2.predict_proba(t1_pred[..., None])[:, 1]
return linear_svm_predict
| bsd-3-clause |
zhonghualiu/FaST-LMM | fastlmm/feature_selection/kernel_ridge_cv.py | 1 | 6511 | import time
import os
import sys
import scipy as SP
import sklearn.metrics as SKM
import sklearn.feature_selection as SKFS
import sklearn.cross_validation as SKCV
import sklearn.metrics as SKM
from fastlmm.util.distributable import *
from fastlmm.util.runner import *
import fastlmm.pyplink.plink as plink
import fastlmm.pyplink.Bed as Bed
import fastlmm.pyplink.snpset.PositionRange as PositionRange
import fastlmm.pyplink.snpset.SnpSetAndName as SnpSetAndName
import fastlmm.util.util as util
import fastlmm.inference as fastlmm
from feature_selection_cv import load_snp_data
class KernelRidgeCV(): # implements IDistributable
'''
A class for running cross-validation on kernel ridge regression: The method determines the best regularization parameter alpha by
cross-validating it.
The Rdige regression optimization problem is given by:
min 1./2n*||y - Xw||_2^2 + alpha * ||w||_2
'''
def __init__(self, bed_fn, pheno_fn, num_folds,random_state=None,cov_fn=None,offset=True):
"""set up kernel ridge regression
----------
bed_fn : str
File name of binary SNP file
pheno_fn : str
File name of phenotype file
num_folds : int
Number of folds in k-fold cross-validation
cov_fn : str, optional, default=None
File name of covariates file
offset : bool, default=True
adds offset to the covariates specified in cov_fn, if necessary
"""
# data file names
self.bed_fn = bed_fn
self.pheno_fn = pheno_fn
self.cov_fn = cov_fn
# optional parameters
self.num_folds = num_folds
self.fold_to_train_data = None
self.random_state = random_state
self.offset = offset
self.K = None
def perform_selection(self,delta_values,strategy,plots_fn=None,results_fn=None):
"""Perform delta selection for kernel ridge regression
delta_values : array-like, shape = [n_steps_delta]
Array of delta values to test
        strategy : {'cv','insample'}
            Strategy to perform delta selection:
            - 'cv' performs cross-validation over delta
            - 'insample' estimates delta in sample using maximum likelihood.
plots_fn : str, optional, default=None
File name for generated plot. if not specified, the plot is not saved
results_fn : str, optional, default=None
file name for saving cross-validation results. if not specified, nothing is saved
Returns
-------
best_delta : float
best regularization parameter delta for ridge regression
"""
import matplotlib
matplotlib.use('Agg') #This lets it work even on machines without graphics displays
import matplotlib.pylab as PLT
# use precomputed data if available
        if self.K is None:
self.setup_kernel()
print 'run selection strategy %s'%strategy
model = fastlmm.lmm()
nInds = self.K.shape[0]
if strategy=='insample':
# take delta with largest likelihood
model.setK(self.K)
model.sety(self.y)
model.setX(self.X)
best_delta = None
best_nLL = SP.inf
# evaluate negative log-likelihood for different values of alpha
nLLs = SP.zeros(len(delta_values))
for delta_idx, delta in enumerate(delta_values):
res = model.nLLeval(delta=delta,REML=True)
if res["nLL"] < best_nLL:
best_delta = delta
best_nLL = res["nLL"]
nLLs[delta_idx] = res['nLL']
fig = PLT.figure()
fig.add_subplot(111)
PLT.semilogx(delta_values,nLLs,color='g',linestyle='-')
PLT.axvline(best_delta,color='r',linestyle='--')
PLT.xlabel('logdelta')
PLT.ylabel('nLL')
PLT.title('Best delta: %f'%best_delta)
PLT.grid(True)
if plots_fn!=None:
PLT.savefig(plots_fn)
if results_fn!=None:
SP.savetxt(results_fn, SP.vstack((delta_values,nLLs)).T,delimiter='\t',header='delta\tnLLs')
if strategy=='cv':
# run cross-validation for determining best delta
kfoldIter = SKCV.KFold(nInds,n_folds=self.num_folds,shuffle=True,random_state=self.random_state)
Ypred = SP.zeros((len(delta_values),nInds))
for Itrain,Itest in kfoldIter:
model.setK(self.K[Itrain][:,Itrain])
model.sety(self.y[Itrain])
model.setX(self.X[Itrain])
model.setTestData(Xstar=self.X[Itest],K0star=self.K[Itest][:,Itrain])
for delta_idx,delta in enumerate(delta_values):
res = model.nLLeval(delta=delta,REML=True)
beta = res['beta']
Ypred[delta_idx,Itest] = model.predictMean(beta=beta,delta=delta)
MSE = SP.zeros(len(delta_values))
for i in range(len(delta_values)):
MSE[i] = SKM.mean_squared_error(self.y,Ypred[i])
idx_bestdelta = SP.argmin(MSE)
best_delta = delta_values[idx_bestdelta]
fig = PLT.figure()
fig.add_subplot(111)
PLT.semilogx(delta_values,MSE,color='g',linestyle='-')
PLT.axvline(best_delta,color='r',linestyle='--')
PLT.xlabel('logdelta')
PLT.ylabel('MSE')
PLT.grid(True)
PLT.title('Best delta: %f'%best_delta)
if plots_fn!=None:
PLT.savefig(plots_fn)
if results_fn!=None:
SP.savetxt(results_fn, SP.vstack((delta_values,MSE)).T,delimiter='\t',header='delta\tnLLs')
return best_delta
def setup_kernel(self):
"""precomputes the kernel
"""
print "loading data..."
G, self.X, self.y = load_snp_data(self.bed_fn, self.pheno_fn, cov_fn=self.cov_fn,offset=self.offset)
print "done."
print "precomputing kernel... "
nSnps = G.shape[1]
self.K = 1./nSnps * SP.dot(G,G.T)
print "done."
del G
| apache-2.0 |
semio/zipline | tests/finance/test_slippage.py | 32 | 18400 | #
# Copyright 2013 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Unit tests for finance.slippage
"""
import datetime
import pytz
from unittest import TestCase
from nose_parameterized import parameterized
import pandas as pd
from zipline.finance.slippage import VolumeShareSlippage
from zipline.protocol import Event, DATASOURCE_TYPE
from zipline.finance.blotter import Order
class SlippageTestCase(TestCase):
def test_volume_share_slippage(self):
event = Event(
{'volume': 200,
'type': 4,
'price': 3.0,
'datetime': datetime.datetime(
2006, 1, 5, 14, 31, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 3.0,
'dt':
datetime.datetime(2006, 1, 5, 14, 31, tzinfo=pytz.utc),
'open': 3.0}
)
slippage_model = VolumeShareSlippage()
open_orders = [
Order(dt=datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
amount=100,
filled=0,
sid=133)
]
orders_txns = list(slippage_model.simulate(
event,
open_orders
))
self.assertEquals(len(orders_txns), 1)
_, txn = orders_txns[0]
expected_txn = {
'price': float(3.01875),
'dt': datetime.datetime(
2006, 1, 5, 14, 31, tzinfo=pytz.utc),
'amount': int(50),
'sid': int(133),
'commission': None,
'type': DATASOURCE_TYPE.TRANSACTION,
'order_id': open_orders[0].id
}
self.assertIsNotNone(txn)
        # TODO: Make expected_txn a Transaction object and ensure there
# is a __eq__ for that class.
self.assertEquals(expected_txn, txn.__dict__)
def test_orders_limit(self):
events = self.gen_trades()
slippage_model = VolumeShareSlippage()
# long, does not trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# long, does not trade - impacted price worse than limit price
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# long, does trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'limit': 3.6})
]
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 1)
txn = orders_txns[0][1]
expected_txn = {
'price': float(3.500875),
'dt': datetime.datetime(
2006, 1, 5, 14, 34, tzinfo=pytz.utc),
'amount': int(100),
'sid': int(133),
'order_id': open_orders[0].id
}
self.assertIsNotNone(txn)
for key, value in expected_txn.items():
self.assertEquals(value, txn[key])
# short, does not trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[0],
open_orders
))
expected_txn = {}
self.assertEquals(len(orders_txns), 0)
# short, does not trade - impacted price worse than limit price
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[1],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# short, does trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'limit': 3.4})
]
orders_txns = list(slippage_model.simulate(
events[1],
open_orders
))
self.assertEquals(len(orders_txns), 1)
_, txn = orders_txns[0]
expected_txn = {
'price': float(3.499125),
'dt': datetime.datetime(
2006, 1, 5, 14, 32, tzinfo=pytz.utc),
'amount': int(-100),
'sid': int(133)
}
self.assertIsNotNone(txn)
for key, value in expected_txn.items():
self.assertEquals(value, txn[key])
STOP_ORDER_CASES = {
# Stop orders can be long/short and have their price greater or
# less than the stop.
#
# A stop being reached is conditional on the order direction.
# Long orders reach the stop when the price is greater than the stop.
# Short orders reach the stop when the price is less than the stop.
#
# Which leads to the following 4 cases:
#
# | long | short |
# | price > stop | | |
# | price < stop | | |
#
# Currently the slippage module acts according to the following table,
# where 'X' represents triggering a transaction
# | long | short |
# | price > stop | | X |
# | price < stop | X | |
#
# However, the following behavior *should* be followed.
#
# | long | short |
# | price > stop | X | |
# | price < stop | | X |
'long | price gt stop': {
'order': {
'dt': pd.Timestamp('2006-01-05 14:30', tz='UTC'),
'amount': 100,
'filled': 0,
'sid': 133,
'stop': 3.5
},
'event': {
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'volume': 2000,
'price': 4.0,
'high': 3.15,
'low': 2.85,
'sid': 133,
'close': 4.0,
'open': 3.5
},
'expected': {
'transaction': {
'price': 4.001,
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'amount': 100,
'sid': 133,
}
}
},
'long | price lt stop': {
'order': {
'dt': pd.Timestamp('2006-01-05 14:30', tz='UTC'),
'amount': 100,
'filled': 0,
'sid': 133,
'stop': 3.6
},
'event': {
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'volume': 2000,
'price': 3.5,
'high': 3.15,
'low': 2.85,
'sid': 133,
'close': 3.5,
'open': 4.0
},
'expected': {
'transaction': None
}
},
'short | price gt stop': {
'order': {
'dt': pd.Timestamp('2006-01-05 14:30', tz='UTC'),
'amount': -100,
'filled': 0,
'sid': 133,
'stop': 3.4
},
'event': {
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'volume': 2000,
'price': 3.5,
'high': 3.15,
'low': 2.85,
'sid': 133,
'close': 3.5,
'open': 3.0
},
'expected': {
'transaction': None
}
},
'short | price lt stop': {
'order': {
'dt': pd.Timestamp('2006-01-05 14:30', tz='UTC'),
'amount': -100,
'filled': 0,
'sid': 133,
'stop': 3.5
},
'event': {
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'volume': 2000,
'price': 3.0,
'high': 3.15,
'low': 2.85,
'sid': 133,
'close': 3.0,
'open': 3.0
},
'expected': {
'transaction': {
'price': 2.99925,
'dt': pd.Timestamp('2006-01-05 14:31', tz='UTC'),
'amount': -100,
'sid': 133,
}
}
},
}
@parameterized.expand([
(name, case['order'], case['event'], case['expected'])
for name, case in STOP_ORDER_CASES.items()
])
def test_orders_stop(self, name, order_data, event_data, expected):
order = Order(**order_data)
event = Event(initial_values=event_data)
slippage_model = VolumeShareSlippage()
try:
_, txn = next(slippage_model.simulate(event, [order]))
except StopIteration:
txn = None
if expected['transaction'] is None:
self.assertIsNone(txn)
else:
self.assertIsNotNone(txn)
for key, value in expected['transaction'].items():
self.assertEquals(value, txn[key])
def test_orders_stop_limit(self):
events = self.gen_trades()
slippage_model = VolumeShareSlippage()
# long, does not trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'stop': 4.0,
'limit': 3.0})
]
orders_txns = list(slippage_model.simulate(
events[2],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# long, does not trade - impacted price worse than limit price
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'stop': 4.0,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[2],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# long, does trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': 100,
'filled': 0,
'sid': 133,
'stop': 4.0,
'limit': 3.6})
]
orders_txns = list(slippage_model.simulate(
events[2],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[3],
open_orders
))
self.assertEquals(len(orders_txns), 1)
_, txn = orders_txns[0]
expected_txn = {
'price': float(3.500875),
'dt': datetime.datetime(
2006, 1, 5, 14, 34, tzinfo=pytz.utc),
'amount': int(100),
'sid': int(133)
}
for key, value in expected_txn.items():
self.assertEquals(value, txn[key])
# short, does not trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'stop': 3.0,
'limit': 4.0})
]
orders_txns = list(slippage_model.simulate(
events[0],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[1],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# short, does not trade - impacted price worse than limit price
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'stop': 3.0,
'limit': 3.5})
]
orders_txns = list(slippage_model.simulate(
events[0],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[1],
open_orders
))
self.assertEquals(len(orders_txns), 0)
# short, does trade
open_orders = [
Order(**{
'dt': datetime.datetime(2006, 1, 5, 14, 30, tzinfo=pytz.utc),
'amount': -100,
'filled': 0,
'sid': 133,
'stop': 3.0,
'limit': 3.4})
]
orders_txns = list(slippage_model.simulate(
events[0],
open_orders
))
self.assertEquals(len(orders_txns), 0)
orders_txns = list(slippage_model.simulate(
events[1],
open_orders
))
self.assertEquals(len(orders_txns), 1)
_, txn = orders_txns[0]
expected_txn = {
'price': float(3.499125),
'dt': datetime.datetime(
2006, 1, 5, 14, 32, tzinfo=pytz.utc),
'amount': int(-100),
'sid': int(133)
}
for key, value in expected_txn.items():
self.assertEquals(value, txn[key])
def gen_trades(self):
# create a sequence of trades
events = [
Event({
'volume': 2000,
'type': 4,
'price': 3.0,
'datetime': datetime.datetime(
2006, 1, 5, 14, 31, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 3.0,
'dt':
datetime.datetime(2006, 1, 5, 14, 31, tzinfo=pytz.utc),
'open': 3.0
}),
Event({
'volume': 2000,
'type': 4,
'price': 3.5,
'datetime': datetime.datetime(
2006, 1, 5, 14, 32, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 3.5,
'dt':
datetime.datetime(2006, 1, 5, 14, 32, tzinfo=pytz.utc),
'open': 3.0
}),
Event({
'volume': 2000,
'type': 4,
'price': 4.0,
'datetime': datetime.datetime(
2006, 1, 5, 14, 33, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 4.0,
'dt':
datetime.datetime(2006, 1, 5, 14, 33, tzinfo=pytz.utc),
'open': 3.5
}),
Event({
'volume': 2000,
'type': 4,
'price': 3.5,
'datetime': datetime.datetime(
2006, 1, 5, 14, 34, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 3.5,
'dt':
datetime.datetime(2006, 1, 5, 14, 34, tzinfo=pytz.utc),
'open': 4.0
}),
Event({
'volume': 2000,
'type': 4,
'price': 3.0,
'datetime': datetime.datetime(
2006, 1, 5, 14, 35, tzinfo=pytz.utc),
'high': 3.15,
'low': 2.85,
'sid': 133,
'source_id': 'test_source',
'close': 3.0,
'dt':
datetime.datetime(2006, 1, 5, 14, 35, tzinfo=pytz.utc),
'open': 3.5
})
]
return events
| apache-2.0 |
hrantzsch/signature-verification | eval_embedding.py | 1 | 21650 | """A script to evaluate a model's embeddings.
The script expects embedded data as a .pkl file.
Currently the script prints the min, mean, and max intra-class distances, as
well as the distances between each class's genuine samples and the
corresponding forgeries.
"""
import pickle
import subprocess
import sys
import numpy as np
from scipy.spatial.distance import cdist, pdist
from scipy import stats
from scipy.optimize import fmin
from scipy.special import gamma as gammaf
from chainer import cuda
from matplotlib.pyplot import gca
from matplotlib import rcParams
import seaborn as sns
sns.set_palette('colorblind')
sns.set_color_codes("colorblind")
AVG = True
NUM_REF = 6 # 12 is used in the ICDAR SigWiComp 2013 Dutch Offline challenge
DIST_METHOD = 'sqeuclidean'
SCALE = 1.0 # scaling dist_to_score
# ============================================================================
# [*1] HACK:
# scaling the genuine-forgery result by (num_genuines / num_forgeries) per user
# in order to balance the numbers of genuines and forgeries I have
# (see https://en.wikipedia.org/wiki/Accuracy_paradox)
# however num_genuines depends on whether or not I average the genuines:
# 5 if I do, 4 if I don't (not comparing a genuine with itself)
# gen_forge_dists[user].shape[0] is however always 5; thus, reduce by one if I
# don't take the average
# ============================================================================
def false_positives(gen_forge_dists, threshold):
"""Number of Genuine-Forgery pairs that are closer than threshold"""
result = 0
for user in gen_forge_dists.keys():
user_result = np.sum(gen_forge_dists[user] <= threshold)
user_result *= (gen_forge_dists[user].shape[0] - (1 - AVG)) / \
gen_forge_dists[user].shape[1] # HACK [*1]
result += user_result
return result
def true_negatives(gen_forge_dists, threshold):
"""Number of Genuine-Forgery pairs that are farther than threshold"""
result = 0
for user in gen_forge_dists.keys():
user_result = np.sum(gen_forge_dists[user] > threshold)
user_result *= (gen_forge_dists[user].shape[0] - (1 - AVG)) / \
gen_forge_dists[user].shape[1] # HACK [*1]
result += user_result
return result
def true_positives(gen_gen_dists, threshold):
"""Number of Genuine-Genuine pairs that are closer than threshold"""
result = 0
for user in gen_gen_dists.keys():
dists = gen_gen_dists[user][gen_gen_dists[user] != 0]
result += np.sum(dists <= threshold)
return result
def false_negatives(gen_gen_dists, threshold):
"""Number of Genuine-Genuine pairs that are farther than threshold"""
result = 0
for user in gen_gen_dists.keys():
dists = gen_gen_dists[user][gen_gen_dists[user] != 0]
result += np.sum(dists > threshold)
return result
# ============================================================================
def genuine_genuine_dists(data, average):
gen_keys = [k for k in data.keys() if 'f' not in k]
dists = {}
for k in gen_keys:
gen = cuda.cupy.asnumpy(data[k])
if average:
gen_mean = []
for i in range(len(gen)):
others = list(range(len(gen)))
others.remove(i)
# choose NUM_REF of others for reference
# fails (and should fail) if len(others) < NUM_REF
others = np.random.choice(others, replace=False, size=NUM_REF)
gen_mean.append(np.mean(gen[others], axis=0))
dists[k] = cdist(gen_mean, gen, DIST_METHOD)
else:
d = np.unique(cdist(gen, gen, DIST_METHOD))
dists[k] = d[d != 0] # remove same sample comparisons
# dists[k] = cdist(gen, gen, DIST_METHOD)
return dists
def genuine_forgeries_dists(data, average):
gen_keys = [k for k in data.keys() if 'f' not in k]
dists = {}
for k in gen_keys:
gen = cuda.cupy.asnumpy(data[k])
if average:
gen_mean = []
for i in range(len(gen)):
others = list(range(len(gen)))
others.remove(i)
gen_mean.append(np.mean(gen[others], axis=0))
gen = gen_mean
forge = cuda.cupy.asnumpy(data[k + '_f'])
# np.random.shuffle(forge)
# forge = forge[:5] # HACK reduce number of forgeries
dists[k] = cdist(gen, forge, DIST_METHOD)
return dists
# ============================================================================
def roc(thresholds, data):
"""Compute point in a ROC curve for given thresholds.
Returns a point (false-pos-rate, true-pos-rate)"""
gen_gen_dists = genuine_genuine_dists(data, AVG)
gen_forge_dists = genuine_forgeries_dists(data, AVG)
for i in thresholds:
tp = true_positives(gen_gen_dists, i)
fp = false_positives(gen_forge_dists, i)
tn = true_negatives(gen_forge_dists, i)
fn = false_negatives(gen_gen_dists, i)
fpr = fp / (fp + tn) # FAR
tpr = tp / (tp + fn) # TAR
fnr = fn / (tp + fn) # FRR
aer = (fpr + fnr) / 2 # Avg Error Rate
if tp + fp == 0:
# make sure we don't divide by zero
f1 = 0.0
else:
precision = tp / (tp + fp)
recall = tpr
f1 = 2 * precision * recall / (precision + recall)
acc = (tp + tn) / (tp + fp + fn + tn)
yield (fpr, fnr, tpr, f1, acc, aer)
# ============================================================================
def print_stats(data):
intra_class_dists = {k: pdists_for_key(data, k) for k in data.keys()}
gen_forge_dists = genuine_forgeries_dists(data, False)
print("class\t|\tintra-class\t\t|\tforgeries")
print("-----\t|\tmin - mean - max\t|\tmin - mean - max")
for k in sorted(intra_class_dists.keys()):
if 'f' in k:
continue
dist = intra_class_dists[k]
print("{}\t|\t{} - {} - {}".format(k,
int(np.min(dist)),
int(np.mean(dist)),
int(np.max(dist))),
end='\t\t')
dist = gen_forge_dists[k]
print("|\t{} - {} - {}".format(int(np.min(dist)),
int(np.mean(dist)),
int(np.max(dist))))
# ============================================================================
def load_keys(dictionary, keys):
values = [s for k in keys for s in dictionary[k]]
return np.stack(list(map(cuda.cupy.asnumpy, values)))
def pdists_for_key(embeddings, k):
samples = cuda.cupy.asnumpy(embeddings[k])
return pdist(samples, DIST_METHOD)
# ============================================================================
def dist_to_score(dist, max_dist):
"""Supposed to compute P(target_trial | s)"""
return max(0, 2.5 * dist / max_dist)
# return max(0, 1 - dist / max_dist)
def llr(s, func, target_params, nontarget_params):
"""Compute the log-likelihood ratio.
`s` is either a distance or a score computed from that distance.
`target_params` and `nontarget_params` are the parameters to `func`,
either for the curve fitted to the (non)target distances or scores.
"""
# TODO: llr(score, target_score_distr, nontarget_score_distr) !=
# llr(dist, target_dist_distr, nontarget_dist_distr)
return np.log(func(s, *target_params) / func(s, *nontarget_params))
# def neglogsigmoid(x):
# return -np.log(1 / (1 + np.exp(-x)))
# def cllr(target_llrs, nontarget_llrs):
# # % target trials
# # c1 = mean(neglogsigmoid(tar_llrs))/log(2);
# # % non_target trials
# # c2 = mean(neglogsigmoid(-nontar_llrs))/log(2);
# # cllr = (c1+c2)/2;
# c1 = np.mean(map(neglogsigmoid, target_llrs)) / np.log(2)
# c2 = np.mean(map(neglogsigmoid, nontarget_llrs)) / np.log(2)
# return (c1+c2)/2
# ============================================================================
def write_score(out, target_dists, nontarget_dists,
func, target_params, nontarget_params):
"""Write a score file as expected by the FoCal toolkit[1].
[1]: Brummer, Niko, and David A. Van Leeuwen. "On calibration of
language recognition scores." 2006 IEEE Odyssey-The Speaker and
Language Recognition Workshop. IEEE, 2006.
"""
with open(out, 'w') as f:
for d in target_dists:
f.write("1 {} {}\n".format(func(d, *target_params),
func(d, *nontarget_params)))
for d in nontarget_dists:
f.write("2 {} {}\n".format(func(d, *target_params),
func(d, *nontarget_params)))
# ============================================================================
if __name__ == '__main__':
import matplotlib.pyplot as plt
import matplotlib.scale as mscale
from tools.ProbitScale import ProbitScale
mscale.register_scale(ProbitScale)
data_path = sys.argv[1]
data = pickle.load(open(data_path, 'rb'))
# data = {'0000': data['0000'], '0000_f': data['0000_f']}
# ============================================================================
target_dists = genuine_genuine_dists(data, AVG)
target_dists = np.concatenate([target_dists[k].ravel()
for k in target_dists.keys()])
nontarget_dists = genuine_forgeries_dists(data, AVG)
nontarget_dists = np.concatenate([nontarget_dists[k].ravel()
for k in nontarget_dists.keys()])
max_dist = np.max(np.concatenate((target_dists, nontarget_dists)))
target_scores = list(map(lambda s: dist_to_score(s, max_dist),
target_dists))
nontarget_scores = list(map(lambda s: dist_to_score(s, max_dist),
nontarget_dists))
# ============================================================================
HIST_BINS = 1000 # 50 seems to be good for visualization
target_bins, target_bin_edges = np.histogram(target_scores,
bins=HIST_BINS,
range=(0.0, SCALE),
density=True)
nontarget_bins, nontarget_bin_edges = np.histogram(nontarget_scores,
bins=HIST_BINS,
range=(0.0, SCALE),
density=True)
target_dbins, target_dbin_edges = np.histogram(target_dists,
bins=HIST_BINS,
range=(0.0, max_dist),
density=True)
nontarget_dbins, nontarget_dbin_edges = np.histogram(nontarget_dists,
bins=HIST_BINS,
range=(0.0, max_dist),
density=True)
# ============================================================================
print_stats(data)
STEP = 0.1
thresholds = np.arange(0.0, max_dist + STEP, STEP)
zipped = list(roc(thresholds, data))
(fpr, fnr, tpr, f1, acc, aer) = zip(*zipped)
best_f1_idx = np.argmax(f1)
best_acc_idx = np.argmax(acc)
best_aer_idx = np.argmin(aer)
eer_idx = np.argmin(np.abs(np.array(fpr) - np.array(fnr)))
print("F1: {:.4f} (threshold = {})".format(f1[best_f1_idx],
thresholds[best_f1_idx]))
print("Accuracy: {:.4f} (threshold = {})".format(acc[best_acc_idx],
thresholds[best_acc_idx]))
print("AER: {:.4f} (fpr = {}, fnr = {}, t = {})"
.format(aer[best_aer_idx],
fpr[best_aer_idx],
fnr[best_aer_idx],
thresholds[best_aer_idx]))
print("EER: acc = {} fpr = {}, fnr = {}, t = {})"
.format(acc[eer_idx], fpr[eer_idx], fnr[eer_idx],
thresholds[eer_idx]))
# ============================================================================
# F1 score and Accuracy
# ============================================================================
fig, acc_plot = plt.subplots(2, sharex=True)
acc_plot[0].plot(thresholds, f1)
acc_plot[0].set_xlabel('Threshold')
acc_plot[0].set_ylabel('F1')
acc_plot[0].set_xticks(np.arange(thresholds[0], thresholds[-1], STEP * 5))
acc_plot[0].set_title('F1')
acc_plot[0].set_ylim([0.0, 1.0])
acc_plot[1].plot(thresholds, acc)
acc_plot[1].set_xlabel('Threshold')
acc_plot[1].set_ylabel('Accuracy')
acc_plot[1].set_xticks(np.arange(thresholds[0], thresholds[-1], STEP * 5))
acc_plot[1].set_title('Accuracy')
acc_plot[1].set_ylim([0.0, 1.0])
# ============================================================================
# ROC curve
# ============================================================================
fig, roc_plot = plt.subplots()
a = gca()
fontProperties = {'size': 12}
a.set_xticklabels(a.get_xticks(), fontProperties)
a.set_yticklabels(a.get_yticks(), fontProperties)
plt.gcf().subplots_adjust(bottom=0.15)
roc_plot.plot(fpr, tpr)
roc_plot.set_xlabel('false accept rate (FAR)', fontsize=14)
roc_plot.set_ylabel('true accept rate (TAR)', fontsize=14)
roc_plot.set_xlim([-0.05, 1.0])
roc_plot.set_ylim([0.0, 1.05])
roc_plot.scatter([fpr[best_acc_idx]], [tpr[best_acc_idx]],
c='r', edgecolor='r', s=150,
label='best accuracy: {:.2%} (t = {:.1f})'.format(
acc[best_acc_idx], thresholds[best_acc_idx]))
roc_plot.scatter([fpr[best_f1_idx]], [tpr[best_f1_idx]],
c='g', edgecolor='g', s=150,
label='best F1: {:.2%} (t = {:.1f})'.format(
f1[best_f1_idx], thresholds[best_f1_idx]))
roc_plot.scatter([fpr[eer_idx]], [tpr[eer_idx]], # marker='x',
c='c', edgecolor='c', s=150,
label='EER (t = {:.1f})'.format(thresholds[eer_idx]))
roc_plot.legend(loc='lower right', scatterpoints=1, fontsize=12)
# plt.rc('legend',**{'fontsize':6})
# plt.savefig("roc.svg", dpi=180, format='svg')
# ============================================================================
# Histograms and Weibull distribution fit
# ============================================================================
# === Distances ===
# xx = np.linspace(0, max_dist, 500)
# w, h = plt.figaspect(0.3)
# fontsize = 16
# fig, dhist_plots_target = plt.subplots(figsize=(w, h))
# width = 1.0 * (target_dbin_edges[1] - target_dbin_edges[0])
# center = (target_dbin_edges[:-1] + target_dbin_edges[1:]) / 2
# dhist_plots_target.bar(center, target_dbins, align='center', width=width)
target_d_wb = stats.exponweib.fit(target_dists, 1, 1, scale=2, loc=0)
# yy = stats.exponweib.pdf(xx, *target_d_wb)
# dhist_plots_target.plot(xx, yy, 'r')
# dhist_plots_target.set_xlabel('Distance', fontsize=22)
# dhist_plots_target.set_ylabel('Density', fontsize=22)
# a = gca()
# fontProperties = {'size': fontsize}
# a.set_xticklabels(a.get_xticks(), fontProperties)
# a.set_yticklabels(a.get_yticks(), fontProperties)
# plt.gcf().subplots_adjust(bottom=0.15)
# plt.savefig("hist_target_jap.svg", dpi=180, format='svg')
# fig, dhist_plots_nontarget = plt.subplots(figsize=(w, h))
# width = 1.0 * (nontarget_dbin_edges[1] - nontarget_dbin_edges[0])
# center = (nontarget_dbin_edges[:-1] + nontarget_dbin_edges[1:]) / 2
# dhist_plots_nontarget.bar(center, nontarget_dbins, align='center', width=width)
nontarget_d_wb = stats.exponweib.fit(nontarget_dists, 1, 1, scale=2, loc=0)
# yy = stats.exponweib.pdf(xx, *nontarget_d_wb)
# dhist_plots_nontarget.plot(xx, yy, 'r')
# dhist_plots_nontarget.set_xlabel('Distance', fontsize=22)
# dhist_plots_nontarget.set_ylabel('Density', fontsize=22)
# a = gca()
# fontProperties = {'size': fontsize}
# a.set_xticklabels(a.get_xticks(), fontProperties)
# a.set_yticklabels(a.get_yticks(), fontProperties)
# plt.gcf().subplots_adjust(bottom=0.15)
# plt.savefig("hist_nontarget_jap.svg", dpi=180, format='svg')
# === Scores ===
# fig, hist_plots = plt.subplots(2)
# width = 1.0 * (target_bin_edges[1] - target_bin_edges[0])
# center = (target_bin_edges[:-1] + target_bin_edges[1:]) / 2
# hist_plots[0].bar(center, target_bins, align='center', width=width)
# width = 1.0 * (nontarget_bin_edges[1] - nontarget_bin_edges[0])
# center = (nontarget_bin_edges[:-1] + nontarget_bin_edges[1:]) / 2
# hist_plots[1].bar(center, nontarget_bins, align='center', width=width)
# hist_plots[0].set_title("Score histogram target trials")
# hist_plots[1].set_title("Score histogram non-target trials")
# fit weibull to scores
# xx = np.linspace(0, 1.0, 500)
target_wb = stats.exponweib.fit(target_scores, 1, 1, scale=2, loc=0)
# yy = stats.exponweib.pdf(xx, *target_wb)
# hist_plots[0].plot(xx, yy, 'r')
nontarget_wb = stats.exponweib.fit(nontarget_scores, 1, 1, scale=2, loc=0)
# yy = stats.exponweib.pdf(xx, *nontarget_wb)
# hist_plots[1].plot(xx, yy, 'r')
# fig, wbs = plt.subplots()
# wbs.plot(xx, stats.exponweib.cdf(xx, *target_wb), 'g')
# wbs.plot(xx, stats.exponweib.pdf(xx, *target_wb), 'g')
# wbs.plot(xx, stats.exponweib.cdf(xx, *nontarget_wb), 'r')
# wbs.plot(xx, stats.exponweib.pdf(xx, *nontarget_wb), 'r')
# ============================================================================
# Beta distribution fit -- doesn't work that well
# ============================================================================
# target_beta = stats.beta.fit(target_scores)
# nontarget_beta = stats.beta.fit(nontarget_scores)
# hist_plots[0].plot(xx, stats.beta.pdf(xx, *target_beta), 'g')
# hist_plots[1].plot(xx, stats.beta.pdf(xx, *nontarget_beta), 'g')
# ============================================================================
# PDF
# ============================================================================
# fig, pdf_plot = plt.subplots()
# NUM_SCORES = np.sum(target_hist[0]) + np.sum(nontarget_hist[0])
# pdf_plot.plot(target_hist[1][:-1], target_hist[0] / NUM_SCORES, 'g')
# pdf_plot.plot(nontarget_hist[1][:-1], nontarget_hist[0] / NUM_SCORES, 'r')
# TODO maybe more interesting on distances histogram
# ============================================================================
# LLR computation
# ============================================================================
# target_llrs = list(map(lambda d: llr(d, stats.exponweib.pdf,
# target_d_wb, nontarget_d_wb),
# target_dists))
# nontarget_llrs = list(map(lambda d: llr(d, stats.exponweib.pdf,
# target_d_wb, nontarget_d_wb),
# nontarget_dists))
# ============================================================================
# DET-Plot
# ============================================================================
# fig, det_plot = plt.subplots()
# plt.xscale('probit')
# plt.yscale('probit')
# plt.xlim([0, 0.5])
# plt.ylim([0, 0.5])
# det_plot.set_title("DET-plot")
# det_plot.plot(fpr, fnr)
# det_plot.plot(plt.xticks()[0], plt.yticks()[0], ':')
# ============================================================================
# FoCal Toolkit
# ============================================================================
# score_out = "dists.score"
# write_score(score_out, target_dists, nontarget_dists,
# stats.exponweib.pdf, target_d_wb, nontarget_d_wb)
# print("Wrote score to", score_out)
# cmd = "java -jar /home/hannes/src/cllr_evaluation/jFocal/VectorCal.jar " \
# "-analyze -t " + score_out
# subprocess.run(cmd.split())
score_out = "scores.score"
write_score(score_out, target_scores, nontarget_scores,
stats.exponweib.pdf, target_wb, nontarget_wb)
print("Wrote score to", score_out)
cmd = "java -jar /home/hannes/src/cllr_evaluation/jFocal/VectorCal.jar " \
"-analyze -t " + score_out
subprocess.run(cmd.split())
# ============================================================================
# Score and Dist LLRs
# ============================================================================
# dists = np.linspace(0, 500.0, 500)
# dist_llrs = list(map(
# lambda s: np.log(stats.exponweib.pdf(s, *target_d_wb) /
# stats.exponweib.pdf(s, *nontarget_d_wb)), dists)
# )
# scores = list(map(lambda d: dist_to_score(d, 500.0), dists))
# score_llrs = list(map(
# lambda s: np.log(stats.exponweib.pdf(s, *target_wb) /
# stats.exponweib.pdf(s, *nontarget_wb)), scores)
# )
# fig, llr_plots = plt.subplots(2, sharex=False)
# llr_plots[0].plot(scores, score_llrs)
# llr_plots[1].plot(dists, dist_llrs)
# ============================================================================
# Plot
# ============================================================================
plt.show()
| gpl-3.0 |
Dapid/GPy | GPy/core/parameterization/priors.py | 8 | 45900 | # Copyright (c) 2012 - 2014, GPy authors (see AUTHORS.txt).
# Licensed under the BSD 3-clause license (see LICENSE.txt)
import numpy as np
from scipy.special import gammaln, digamma
from ...util.linalg import pdinv
from .domains import _REAL, _POSITIVE
import warnings
import weakref
class Prior(object):
domain = None
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance or cls._instance.__class__ is not cls:
newfunc = super(Prior, cls).__new__
if newfunc is object.__new__:
cls._instance = newfunc(cls)
else:
cls._instance = newfunc(cls, *args, **kwargs)
return cls._instance
def pdf(self, x):
return np.exp(self.lnpdf(x))
def plot(self):
import sys
assert "matplotlib" in sys.modules, "matplotlib package has not been imported."
from ...plotting.matplot_dep import priors_plots
priors_plots.univariate_plot(self)
def __repr__(self, *args, **kwargs):
return self.__str__()
class Gaussian(Prior):
"""
Implementation of the univariate Gaussian probability function, coupled with random variables.
:param mu: mean
:param sigma: standard deviation
.. Note:: Bishop 2006 notation is used throughout the code
"""
domain = _REAL
_instances = []
def __new__(cls, mu=0, sigma=1): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().mu == mu and instance().sigma == sigma:
return instance()
newfunc = super(Prior, cls).__new__
if newfunc is object.__new__:
o = newfunc(cls)
else:
o = newfunc(cls, mu, sigma)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, mu, sigma):
self.mu = float(mu)
self.sigma = float(sigma)
self.sigma2 = np.square(self.sigma)
self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)
def __str__(self):
return "N({:.2g}, {:.2g})".format(self.mu, self.sigma)
def lnpdf(self, x):
return self.constant - 0.5 * np.square(x - self.mu) / self.sigma2
def lnpdf_grad(self, x):
return -(x - self.mu) / self.sigma2
def rvs(self, n):
return np.random.randn(n) * self.sigma + self.mu
# def __getstate__(self):
# return self.mu, self.sigma
#
# def __setstate__(self, state):
# self.mu = state[0]
# self.sigma = state[1]
# self.sigma2 = np.square(self.sigma)
# self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)
class Uniform(Prior):
domain = _REAL
_instances = []
def __new__(cls, lower=0, upper=1): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().lower == lower and instance().upper == upper:
return instance()
o = super(Prior, cls).__new__(cls, lower, upper)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, lower, upper):
self.lower = float(lower)
self.upper = float(upper)
def __str__(self):
return "[{:.2g}, {:.2g}]".format(self.lower, self.upper)
def lnpdf(self, x):
region = (x >= self.lower) * (x <= self.upper)
return region
def lnpdf_grad(self, x):
return np.zeros(x.shape)
def rvs(self, n):
return np.random.uniform(self.lower, self.upper, size=n)
# def __getstate__(self):
# return self.lower, self.upper
#
# def __setstate__(self, state):
# self.lower = state[0]
# self.upper = state[1]
class LogGaussian(Gaussian):
"""
Implementation of the univariate *log*-Gaussian probability function, coupled with random variables.
:param mu: mean
:param sigma: standard deviation
.. Note:: Bishop 2006 notation is used throughout the code
"""
domain = _POSITIVE
_instances = []
def __new__(cls, mu=0, sigma=1): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().mu == mu and instance().sigma == sigma:
return instance()
newfunc = super(Prior, cls).__new__
if newfunc is object.__new__:
o = newfunc(cls)
else:
o = newfunc(cls, mu, sigma)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, mu, sigma):
self.mu = float(mu)
self.sigma = float(sigma)
self.sigma2 = np.square(self.sigma)
self.constant = -0.5 * np.log(2 * np.pi * self.sigma2)
def __str__(self):
return "lnN({:.2g}, {:.2g})".format(self.mu, self.sigma)
def lnpdf(self, x):
return self.constant - 0.5 * np.square(np.log(x) - self.mu) / self.sigma2 - np.log(x)
def lnpdf_grad(self, x):
return -((np.log(x) - self.mu) / self.sigma2 + 1.) / x
def rvs(self, n):
return np.exp(np.random.randn(n) * self.sigma + self.mu)
class MultivariateGaussian(Prior):
"""
Implementation of the multivariate Gaussian probability function, coupled with random variables.
:param mu: mean (N-dimensional array)
:param var: covariance matrix (NxN)
.. Note:: Bishop 2006 notation is used throughout the code
"""
domain = _REAL
_instances = []
def __new__(cls, mu=0, var=1): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if np.all(instance().mu == mu) and np.all(instance().var == var):
return instance()
o = super(Prior, cls).__new__(cls, mu, var)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, mu, var):
self.mu = np.array(mu).flatten()
self.var = np.array(var)
assert len(self.var.shape) == 2
assert self.var.shape[0] == self.var.shape[1]
assert self.var.shape[0] == self.mu.size
self.input_dim = self.mu.size
self.inv, self.hld = pdinv(self.var)
self.constant = -0.5 * self.input_dim * np.log(2 * np.pi) - self.hld
def summary(self):
raise NotImplementedError
def pdf(self, x):
return np.exp(self.lnpdf(x))
def lnpdf(self, x):
d = x - self.mu
return self.constant - 0.5 * np.sum(d * np.dot(d, self.inv), 1)
def lnpdf_grad(self, x):
d = x - self.mu
return -np.dot(self.inv, d)
def rvs(self, n):
return np.random.multivariate_normal(self.mu, self.var, n)
def plot(self):
import sys
assert "matplotlib" in sys.modules, "matplotlib package has not been imported."
        from ...plotting.matplot_dep import priors_plots
priors_plots.multivariate_plot(self)
def __getstate__(self):
return self.mu, self.var
def __setstate__(self, state):
self.mu = state[0]
self.var = state[1]
assert len(self.var.shape) == 2
assert self.var.shape[0] == self.var.shape[1]
assert self.var.shape[0] == self.mu.size
self.input_dim = self.mu.size
self.inv, self.hld = pdinv(self.var)
self.constant = -0.5 * self.input_dim * np.log(2 * np.pi) - self.hld
def gamma_from_EV(E, V):
warnings.warn("use Gamma.from_EV to create Gamma Prior", FutureWarning)
return Gamma.from_EV(E, V)
class Gamma(Prior):
"""
Implementation of the Gamma probability function, coupled with random variables.
:param a: shape parameter
:param b: rate parameter (warning: it's the *inverse* of the scale)
.. Note:: Bishop 2006 notation is used throughout the code
"""
domain = _POSITIVE
_instances = []
def __new__(cls, a=1, b=.5): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().a == a and instance().b == b:
return instance()
newfunc = super(Prior, cls).__new__
if newfunc is object.__new__:
o = newfunc(cls)
else:
o = newfunc(cls, a, b)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, a, b):
self.a = float(a)
self.b = float(b)
self.constant = -gammaln(self.a) + a * np.log(b)
def __str__(self):
return "Ga({:.2g}, {:.2g})".format(self.a, self.b)
def summary(self):
ret = {"E[x]": self.a / self.b, \
"E[ln x]": digamma(self.a) - np.log(self.b), \
"var[x]": self.a / self.b / self.b, \
"Entropy": gammaln(self.a) - (self.a - 1.) * digamma(self.a) - np.log(self.b) + self.a}
if self.a > 1:
ret['Mode'] = (self.a - 1.) / self.b
else:
            ret['Mode'] = np.nan
return ret
def lnpdf(self, x):
return self.constant + (self.a - 1) * np.log(x) - self.b * x
def lnpdf_grad(self, x):
return (self.a - 1.) / x - self.b
def rvs(self, n):
return np.random.gamma(scale=1. / self.b, shape=self.a, size=n)
@staticmethod
def from_EV(E, V):
"""
Creates an instance of a Gamma Prior by specifying the Expected value(s)
and Variance(s) of the distribution.
:param E: expected value
:param V: variance
"""
a = np.square(E) / V
b = E / V
return Gamma(a, b)
def __getstate__(self):
return self.a, self.b
def __setstate__(self, state):
self.a = state[0]
self.b = state[1]
self.constant = -gammaln(self.a) + self.a * np.log(self.b)
class InverseGamma(Gamma):
"""
Implementation of the inverse-Gamma probability function, coupled with random variables.
:param a: shape parameter
:param b: rate parameter (warning: it's the *inverse* of the scale)
.. Note:: Bishop 2006 notation is used throughout the code
"""
domain = _POSITIVE
_instances = []
def __new__(cls, a=1, b=.5): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().a == a and instance().b == b:
return instance()
o = super(Prior, cls).__new__(cls, a, b)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, a, b):
self.a = float(a)
self.b = float(b)
self.constant = -gammaln(self.a) + a * np.log(b)
def __str__(self):
return "iGa({:.2g}, {:.2g})".format(self.a, self.b)
def lnpdf(self, x):
return self.constant - (self.a + 1) * np.log(x) - self.b / x
def lnpdf_grad(self, x):
return -(self.a + 1.) / x + self.b / x ** 2
def rvs(self, n):
return 1. / np.random.gamma(scale=1. / self.b, shape=self.a, size=n)
class DGPLVM_KFDA(Prior):
"""
    Implementation of the Discriminative Gaussian Process Latent Variable prior,
    using the Kernel Fisher Discriminant Analysis of Seung-Jean Kim, as used in
    the face paper by Chaochao Lu.
:param lambdaa: constant
:param sigma2: constant
.. Note:: Surpassing Human-Level Face paper dgplvm implementation
"""
domain = _REAL
# _instances = []
# def __new__(cls, lambdaa, sigma2): # Singleton:
# if cls._instances:
# cls._instances[:] = [instance for instance in cls._instances if instance()]
# for instance in cls._instances:
# if instance().mu == mu and instance().sigma == sigma:
# return instance()
# o = super(Prior, cls).__new__(cls, mu, sigma)
# cls._instances.append(weakref.ref(o))
# return cls._instances[-1]()
def __init__(self, lambdaa, sigma2, lbl, kern, x_shape):
"""A description for init"""
self.datanum = lbl.shape[0]
self.classnum = lbl.shape[1]
self.lambdaa = lambdaa
self.sigma2 = sigma2
self.lbl = lbl
self.kern = kern
lst_ni = self.compute_lst_ni()
self.a = self.compute_a(lst_ni)
self.A = self.compute_A(lst_ni)
self.x_shape = x_shape
def get_class_label(self, y):
for idx, v in enumerate(y):
if v == 1:
return idx
return -1
# This function assigns each data point to its own class
# and returns the dictionary which contains the class name and parameters.
def compute_cls(self, x):
cls = {}
# Appending each data point to its proper class
for j in range(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in cls:
cls[class_label] = []
cls[class_label].append(x[j])
if len(cls) > 2:
for i in range(2, self.classnum):
del cls[i]
return cls
def x_reduced(self, cls):
x1 = cls[0]
x2 = cls[1]
x = np.concatenate((x1, x2), axis=0)
return x
def compute_lst_ni(self):
lst_ni = []
lst_ni1 = []
lst_ni2 = []
f1 = (np.where(self.lbl[:, 0] == 1)[0])
f2 = (np.where(self.lbl[:, 1] == 1)[0])
for idx in f1:
lst_ni1.append(idx)
for idx in f2:
lst_ni2.append(idx)
lst_ni.append(len(lst_ni1))
lst_ni.append(len(lst_ni2))
return lst_ni
def compute_a(self, lst_ni):
a = np.ones((self.datanum, 1))
count = 0
for N_i in lst_ni:
if N_i == lst_ni[0]:
a[count:count + N_i] = (float(1) / N_i) * a[count]
count += N_i
else:
if N_i == lst_ni[1]:
a[count: count + N_i] = -(float(1) / N_i) * a[count]
count += N_i
return a
def compute_A(self, lst_ni):
A = np.zeros((self.datanum, self.datanum))
idx = 0
for N_i in lst_ni:
B = float(1) / np.sqrt(N_i) * (np.eye(N_i) - ((float(1) / N_i) * np.ones((N_i, N_i))))
A[idx:idx + N_i, idx:idx + N_i] = B
idx += N_i
return A
    # Log of the prior
def lnpdf(self, x):
x = x.reshape(self.x_shape)
K = self.kern.K(x)
a_trans = np.transpose(self.a)
paran = self.lambdaa * np.eye(x.shape[0]) + self.A.dot(K).dot(self.A)
inv_part = pdinv(paran)[0]
J = a_trans.dot(K).dot(self.a) - a_trans.dot(K).dot(self.A).dot(inv_part).dot(self.A).dot(K).dot(self.a)
J_star = (1. / self.lambdaa) * J
return (-1. / self.sigma2) * J_star
    # Gradient of the log of the prior
def lnpdf_grad(self, x):
x = x.reshape(self.x_shape)
K = self.kern.K(x)
paran = self.lambdaa * np.eye(x.shape[0]) + self.A.dot(K).dot(self.A)
inv_part = pdinv(paran)[0]
b = self.A.dot(inv_part).dot(self.A).dot(K).dot(self.a)
a_Minus_b = self.a - b
a_b_trans = np.transpose(a_Minus_b)
DJ_star_DK = (1. / self.lambdaa) * (a_Minus_b.dot(a_b_trans))
DJ_star_DX = self.kern.gradients_X(DJ_star_DK, x)
return (-1. / self.sigma2) * DJ_star_DX
def rvs(self, n):
return np.random.rand(n) # A WRONG implementation
def __str__(self):
return 'DGPLVM_prior'
def __getstate___(self):
return self.lbl, self.lambdaa, self.sigma2, self.kern, self.x_shape
def __setstate__(self, state):
lbl, lambdaa, sigma2, kern, a, A, x_shape = state
self.datanum = lbl.shape[0]
self.classnum = lbl.shape[1]
self.lambdaa = lambdaa
self.sigma2 = sigma2
self.lbl = lbl
self.kern = kern
lst_ni = self.compute_lst_ni()
self.a = self.compute_a(lst_ni)
self.A = self.compute_A(lst_ni)
self.x_shape = x_shape
class DGPLVM(Prior):
"""
    Implementation of the Discriminative Gaussian Process Latent Variable Model paper by Raquel Urtasun.
:param sigma2: constant
.. Note:: DGPLVM for Classification paper implementation
"""
domain = _REAL
def __new__(cls, sigma2, lbl, x_shape):
return super(Prior, cls).__new__(cls, sigma2, lbl, x_shape)
def __init__(self, sigma2, lbl, x_shape):
self.sigma2 = sigma2
# self.x = x
self.lbl = lbl
self.classnum = lbl.shape[1]
self.datanum = lbl.shape[0]
self.x_shape = x_shape
self.dim = x_shape[1]
def get_class_label(self, y):
for idx, v in enumerate(y):
if v == 1:
return idx
return -1
# This function assigns each data point to its own class
# and returns the dictionary which contains the class name and parameters.
def compute_cls(self, x):
cls = {}
# Appending each data point to its proper class
for j in range(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in cls:
cls[class_label] = []
cls[class_label].append(x[j])
return cls
# This function computes mean of each class. The mean is calculated through each dimension
def compute_Mi(self, cls):
M_i = np.zeros((self.classnum, self.dim))
for i in cls:
# Mean of each class
class_i = cls[i]
M_i[i] = np.mean(class_i, axis=0)
return M_i
# Adding data points as tuple to the dictionary so that we can access indices
def compute_indices(self, x):
data_idx = {}
for j in range(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in data_idx:
data_idx[class_label] = []
t = (j, x[j])
data_idx[class_label].append(t)
return data_idx
# Adding indices to the list so we can access whole the indices
def compute_listIndices(self, data_idx):
lst_idx = []
lst_idx_all = []
for i in data_idx:
if len(lst_idx) == 0:
pass
                # Do nothing: the list was just created and is still empty
else:
lst_idx = []
# Here we put indices of each class in to the list called lst_idx_all
for m in range(len(data_idx[i])):
lst_idx.append(data_idx[i][m][0])
lst_idx_all.append(lst_idx)
return lst_idx_all
# This function calculates between classes variances
def compute_Sb(self, cls, M_i, M_0):
Sb = np.zeros((self.dim, self.dim))
for i in cls:
B = (M_i[i] - M_0).reshape(self.dim, 1)
B_trans = B.transpose()
Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)
return Sb
# This function calculates within classes variances
def compute_Sw(self, cls, M_i):
Sw = np.zeros((self.dim, self.dim))
for i in cls:
N_i = float(len(cls[i]))
W_WT = np.zeros((self.dim, self.dim))
for xk in cls[i]:
W = (xk - M_i[i])
W_WT += np.outer(W, W)
Sw += (N_i / self.datanum) * ((1. / N_i) * W_WT)
return Sw
# Calculating beta and Bi for Sb
def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):
# import pdb
# pdb.set_trace()
B_i = np.zeros((self.classnum, self.dim))
Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))
for i in data_idx:
# pdb.set_trace()
# Calculating Bi
B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)
for k in range(self.datanum):
for i in data_idx:
N_i = float(len(data_idx[i]))
if k in lst_idx_all[i]:
beta = (float(1) / N_i) - (float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
else:
beta = -(float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()
return Sig_beta_B_i_all
# Calculating W_j s separately so we can access all the W_j s anytime
def compute_wj(self, data_idx, M_i):
W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
xj = tpl[1]
j = tpl[0]
W_i[j] = (xj - M_i[i])
return W_i
# Calculating alpha and Wj for Sw
def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):
Sig_alpha_W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
k = tpl[0]
for j in lst_idx_all[i]:
if k == j:
alpha = 1 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
else:
alpha = 0 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
Sig_alpha_W_i = (1. / self.datanum) * np.transpose(Sig_alpha_W_i)
return Sig_alpha_W_i
# This function calculates log of our prior
def lnpdf(self, x):
x = x.reshape(self.x_shape)
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
# sb_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.1)[0]
return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))
# This function calculates derivative of the log of prior function
def lnpdf_grad(self, x):
x = x.reshape(self.x_shape)
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
data_idx = self.compute_indices(x)
lst_idx_all = self.compute_listIndices(data_idx)
Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)
W_i = self.compute_wj(data_idx, M_i)
Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)
# Calculating inverse of Sb and its transpose and minus
# Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.1)[0]
Sb_inv_N_trans = np.transpose(Sb_inv_N)
Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans
Sw_trans = np.transpose(Sw)
# Calculating DJ/DXk
DJ_Dxk = 2 * (
Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all) + Sb_inv_N_trans.dot(
Sig_alpha_W_i))
# Calculating derivative of the log of the prior
DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)
return DPx_Dx.T
# def frb(self, x):
# from functools import partial
# from GPy.models import GradientChecker
# f = partial(self.lnpdf)
# df = partial(self.lnpdf_grad)
# grad = GradientChecker(f, df, x, 'X')
# grad.checkgrad(verbose=1)
def rvs(self, n):
return np.random.rand(n) # A WRONG implementation
def __str__(self):
return 'DGPLVM_prior_Raq'
# ******************************************
from .. import Parameterized
from .. import Param
class DGPLVM_Lamda(Prior, Parameterized):
"""
    Implementation of the Discriminative Gaussian Process Latent Variable Model paper by Raquel Urtasun.
:param sigma2: constant
.. Note:: DGPLVM for Classification paper implementation
"""
domain = _REAL
# _instances = []
# def __new__(cls, mu, sigma): # Singleton:
# if cls._instances:
# cls._instances[:] = [instance for instance in cls._instances if instance()]
# for instance in cls._instances:
# if instance().mu == mu and instance().sigma == sigma:
# return instance()
# o = super(Prior, cls).__new__(cls, mu, sigma)
# cls._instances.append(weakref.ref(o))
# return cls._instances[-1]()
def __init__(self, sigma2, lbl, x_shape, lamda, name='DP_prior'):
super(DGPLVM_Lamda, self).__init__(name=name)
self.sigma2 = sigma2
# self.x = x
self.lbl = lbl
self.lamda = lamda
self.classnum = lbl.shape[1]
self.datanum = lbl.shape[0]
self.x_shape = x_shape
self.dim = x_shape[1]
self.lamda = Param('lamda', np.diag(lamda))
self.link_parameter(self.lamda)
def get_class_label(self, y):
for idx, v in enumerate(y):
if v == 1:
return idx
return -1
# This function assigns each data point to its own class
# and returns the dictionary which contains the class name and parameters.
def compute_cls(self, x):
cls = {}
# Appending each data point to its proper class
for j in xrange(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in cls:
cls[class_label] = []
cls[class_label].append(x[j])
return cls
# This function computes mean of each class. The mean is calculated through each dimension
def compute_Mi(self, cls):
M_i = np.zeros((self.classnum, self.dim))
for i in cls:
# Mean of each class
class_i = cls[i]
M_i[i] = np.mean(class_i, axis=0)
return M_i
# Adding data points as tuple to the dictionary so that we can access indices
def compute_indices(self, x):
data_idx = {}
for j in xrange(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in data_idx:
data_idx[class_label] = []
t = (j, x[j])
data_idx[class_label].append(t)
return data_idx
# Adding indices to the list so we can access whole the indices
def compute_listIndices(self, data_idx):
lst_idx = []
lst_idx_all = []
for i in data_idx:
if len(lst_idx) == 0:
pass
                # Do nothing: the list was just created and is still empty
else:
lst_idx = []
# Here we put indices of each class in to the list called lst_idx_all
for m in xrange(len(data_idx[i])):
lst_idx.append(data_idx[i][m][0])
lst_idx_all.append(lst_idx)
return lst_idx_all
# This function calculates between classes variances
def compute_Sb(self, cls, M_i, M_0):
Sb = np.zeros((self.dim, self.dim))
for i in cls:
B = (M_i[i] - M_0).reshape(self.dim, 1)
B_trans = B.transpose()
Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)
return Sb
# This function calculates within classes variances
def compute_Sw(self, cls, M_i):
Sw = np.zeros((self.dim, self.dim))
for i in cls:
N_i = float(len(cls[i]))
W_WT = np.zeros((self.dim, self.dim))
for xk in cls[i]:
W = (xk - M_i[i])
W_WT += np.outer(W, W)
Sw += (N_i / self.datanum) * ((1. / N_i) * W_WT)
return Sw
# Calculating beta and Bi for Sb
def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):
# import pdb
# pdb.set_trace()
B_i = np.zeros((self.classnum, self.dim))
Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))
for i in data_idx:
# pdb.set_trace()
# Calculating Bi
B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)
for k in xrange(self.datanum):
for i in data_idx:
N_i = float(len(data_idx[i]))
if k in lst_idx_all[i]:
beta = (float(1) / N_i) - (float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
else:
beta = -(float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()
return Sig_beta_B_i_all
# Calculating W_j s separately so we can access all the W_j s anytime
def compute_wj(self, data_idx, M_i):
W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
xj = tpl[1]
j = tpl[0]
W_i[j] = (xj - M_i[i])
return W_i
# Calculating alpha and Wj for Sw
def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):
Sig_alpha_W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
k = tpl[0]
for j in lst_idx_all[i]:
if k == j:
alpha = 1 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
else:
alpha = 0 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
Sig_alpha_W_i = (1. / self.datanum) * np.transpose(Sig_alpha_W_i)
return Sig_alpha_W_i
# This function calculates log of our prior
def lnpdf(self, x):
x = x.reshape(self.x_shape)
#!!!!!!!!!!!!!!!!!!!!!!!!!!!
#self.lamda.values[:] = self.lamda.values/self.lamda.values.sum()
xprime = x.dot(np.diagflat(self.lamda))
x = xprime
# print x
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
# Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.5))[0]
Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.9)[0]
return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))
# This function calculates derivative of the log of prior function
def lnpdf_grad(self, x):
x = x.reshape(self.x_shape)
xprime = x.dot(np.diagflat(self.lamda))
x = xprime
# print x
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
data_idx = self.compute_indices(x)
lst_idx_all = self.compute_listIndices(data_idx)
Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)
W_i = self.compute_wj(data_idx, M_i)
Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)
# Calculating inverse of Sb and its transpose and minus
# Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.5))[0]
Sb_inv_N = pdinv(Sb + np.eye(Sb.shape[0])*0.9)[0]
Sb_inv_N_trans = np.transpose(Sb_inv_N)
Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans
Sw_trans = np.transpose(Sw)
# Calculating DJ/DXk
        DJ_Dxk = 2 * (Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all)
                      + Sb_inv_N_trans.dot(Sig_alpha_W_i))
# Calculating derivative of the log of the prior
DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)
DPxprim_Dx = np.diagflat(self.lamda).dot(DPx_Dx)
        # GPy uses denominator layout, so transpose the gradient to match the shape of our matrix.
DPxprim_Dx = DPxprim_Dx.T
DPxprim_Dlamda = DPx_Dx.dot(x)
        # GPy uses denominator layout, so transpose the gradient to match the shape of our matrix.
DPxprim_Dlamda = DPxprim_Dlamda.T
self.lamda.gradient = np.diag(DPxprim_Dlamda)
# print DPxprim_Dx
return DPxprim_Dx
# def frb(self, x):
# from functools import partial
# from GPy.models import GradientChecker
# f = partial(self.lnpdf)
# df = partial(self.lnpdf_grad)
# grad = GradientChecker(f, df, x, 'X')
# grad.checkgrad(verbose=1)
def rvs(self, n):
return np.random.rand(n) # A WRONG implementation
def __str__(self):
return 'DGPLVM_prior_Raq_Lamda'
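# --- Added explanatory sketch (not part of the original GPy source) ------------
# The prior above scores a latent matrix X by a Fisher-style criterion on the
# rescaled coordinates X' = X.dot(diag(lamda)):
#     J(X') = trace( (Sb + 0.9*I)^-1 . Sw )
#     lnpdf(X) = -J(X') / sigma2
# so latents whose classes have small within-class scatter Sw relative to the
# between-class scatter Sb get higher prior density.  A hypothetical call, with
# shapes guessed from the code (lbl is one-hot (N, C), X is (N, Q)):
#     prior = DGPLVM_Lamda(sigma2=0.01, lbl=lbl, x_shape=(N, Q), lamda=np.eye(Q))
#     lp  = prior.lnpdf(X.flatten())       # scalar log-density
#     dlp = prior.lnpdf_grad(X.flatten())  # also sets prior.lamda.gradient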
# ******************************************
class DGPLVM_T(Prior):
"""
    Implementation of the Discriminative Gaussian Process Latent Variable Model paper by Raquel Urtasun.
:param sigma2: constant
.. Note:: DGPLVM for Classification paper implementation
"""
domain = _REAL
# _instances = []
# def __new__(cls, mu, sigma): # Singleton:
# if cls._instances:
# cls._instances[:] = [instance for instance in cls._instances if instance()]
# for instance in cls._instances:
# if instance().mu == mu and instance().sigma == sigma:
# return instance()
# o = super(Prior, cls).__new__(cls, mu, sigma)
# cls._instances.append(weakref.ref(o))
# return cls._instances[-1]()
def __init__(self, sigma2, lbl, x_shape, vec):
self.sigma2 = sigma2
# self.x = x
self.lbl = lbl
self.classnum = lbl.shape[1]
self.datanum = lbl.shape[0]
self.x_shape = x_shape
self.dim = x_shape[1]
self.vec = vec
def get_class_label(self, y):
for idx, v in enumerate(y):
if v == 1:
return idx
return -1
# This function assigns each data point to its own class
# and returns the dictionary which contains the class name and parameters.
def compute_cls(self, x):
cls = {}
# Appending each data point to its proper class
for j in range(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in cls:
cls[class_label] = []
cls[class_label].append(x[j])
return cls
# This function computes mean of each class. The mean is calculated through each dimension
def compute_Mi(self, cls):
M_i = np.zeros((self.classnum, self.dim))
for i in cls:
# Mean of each class
# class_i = np.multiply(cls[i],vec)
class_i = cls[i]
M_i[i] = np.mean(class_i, axis=0)
return M_i
# Adding data points as tuple to the dictionary so that we can access indices
def compute_indices(self, x):
data_idx = {}
for j in range(self.datanum):
class_label = self.get_class_label(self.lbl[j])
if class_label not in data_idx:
data_idx[class_label] = []
t = (j, x[j])
data_idx[class_label].append(t)
return data_idx
# Adding indices to the list so we can access whole the indices
def compute_listIndices(self, data_idx):
lst_idx = []
lst_idx_all = []
for i in data_idx:
if len(lst_idx) == 0:
pass
                # Do nothing: first pass through the loop, so the list is still empty.
else:
lst_idx = []
# Here we put indices of each class in to the list called lst_idx_all
for m in range(len(data_idx[i])):
lst_idx.append(data_idx[i][m][0])
lst_idx_all.append(lst_idx)
return lst_idx_all
# This function calculates between classes variances
def compute_Sb(self, cls, M_i, M_0):
Sb = np.zeros((self.dim, self.dim))
for i in cls:
B = (M_i[i] - M_0).reshape(self.dim, 1)
B_trans = B.transpose()
Sb += (float(len(cls[i])) / self.datanum) * B.dot(B_trans)
return Sb
# This function calculates within classes variances
def compute_Sw(self, cls, M_i):
Sw = np.zeros((self.dim, self.dim))
for i in cls:
N_i = float(len(cls[i]))
W_WT = np.zeros((self.dim, self.dim))
for xk in cls[i]:
W = (xk - M_i[i])
W_WT += np.outer(W, W)
Sw += (N_i / self.datanum) * ((1. / N_i) * W_WT)
return Sw
# Calculating beta and Bi for Sb
def compute_sig_beta_Bi(self, data_idx, M_i, M_0, lst_idx_all):
# import pdb
# pdb.set_trace()
B_i = np.zeros((self.classnum, self.dim))
Sig_beta_B_i_all = np.zeros((self.datanum, self.dim))
for i in data_idx:
# pdb.set_trace()
# Calculating Bi
B_i[i] = (M_i[i] - M_0).reshape(1, self.dim)
for k in range(self.datanum):
for i in data_idx:
N_i = float(len(data_idx[i]))
if k in lst_idx_all[i]:
beta = (float(1) / N_i) - (float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
else:
beta = -(float(1) / self.datanum)
Sig_beta_B_i_all[k] += float(N_i) / self.datanum * (beta * B_i[i])
Sig_beta_B_i_all = Sig_beta_B_i_all.transpose()
return Sig_beta_B_i_all
# Calculating W_j s separately so we can access all the W_j s anytime
def compute_wj(self, data_idx, M_i):
W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
xj = tpl[1]
j = tpl[0]
W_i[j] = (xj - M_i[i])
return W_i
# Calculating alpha and Wj for Sw
def compute_sig_alpha_W(self, data_idx, lst_idx_all, W_i):
Sig_alpha_W_i = np.zeros((self.datanum, self.dim))
for i in data_idx:
N_i = float(len(data_idx[i]))
for tpl in data_idx[i]:
k = tpl[0]
for j in lst_idx_all[i]:
if k == j:
alpha = 1 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
else:
alpha = 0 - (float(1) / N_i)
Sig_alpha_W_i[k] += (alpha * W_i[j])
Sig_alpha_W_i = (1. / self.datanum) * np.transpose(Sig_alpha_W_i)
return Sig_alpha_W_i
# This function calculates log of our prior
def lnpdf(self, x):
x = x.reshape(self.x_shape)
xprim = x.dot(self.vec)
x = xprim
# print x
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
# Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#print 'SB_inv: ', Sb_inv_N
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]
return (-1 / self.sigma2) * np.trace(Sb_inv_N.dot(Sw))
# This function calculates derivative of the log of prior function
def lnpdf_grad(self, x):
x = x.reshape(self.x_shape)
xprim = x.dot(self.vec)
x = xprim
# print x
cls = self.compute_cls(x)
M_0 = np.mean(x, axis=0)
M_i = self.compute_Mi(cls)
Sb = self.compute_Sb(cls, M_i, M_0)
Sw = self.compute_Sw(cls, M_i)
data_idx = self.compute_indices(x)
lst_idx_all = self.compute_listIndices(data_idx)
Sig_beta_B_i_all = self.compute_sig_beta_Bi(data_idx, M_i, M_0, lst_idx_all)
W_i = self.compute_wj(data_idx, M_i)
Sig_alpha_W_i = self.compute_sig_alpha_W(data_idx, lst_idx_all, W_i)
# Calculating inverse of Sb and its transpose and minus
# Sb_inv_N = np.linalg.inv(Sb + np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))
#Sb_inv_N = np.linalg.inv(Sb+np.eye(Sb.shape[0])*0.1)
#print 'SB_inv: ',Sb_inv_N
#Sb_inv_N = pdinv(Sb+ np.eye(Sb.shape[0]) * (np.diag(Sb).min() * 0.1))[0]
Sb_inv_N = pdinv(Sb+np.eye(Sb.shape[0])*0.1)[0]
Sb_inv_N_trans = np.transpose(Sb_inv_N)
Sb_inv_N_trans_minus = -1 * Sb_inv_N_trans
Sw_trans = np.transpose(Sw)
# Calculating DJ/DXk
        DJ_Dxk = 2 * (Sb_inv_N_trans_minus.dot(Sw_trans).dot(Sb_inv_N_trans).dot(Sig_beta_B_i_all)
                      + Sb_inv_N_trans.dot(Sig_alpha_W_i))
# Calculating derivative of the log of the prior
DPx_Dx = ((-1 / self.sigma2) * DJ_Dxk)
return DPx_Dx.T
# def frb(self, x):
# from functools import partial
# from GPy.models import GradientChecker
# f = partial(self.lnpdf)
# df = partial(self.lnpdf_grad)
# grad = GradientChecker(f, df, x, 'X')
# grad.checkgrad(verbose=1)
def rvs(self, n):
return np.random.rand(n) # A WRONG implementation
def __str__(self):
return 'DGPLVM_prior_Raq_TTT'
class HalfT(Prior):
"""
Implementation of the half student t probability function, coupled with random variables.
:param A: scale parameter
:param nu: degrees of freedom
"""
domain = _POSITIVE
_instances = []
def __new__(cls, A, nu): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().A == A and instance().nu == nu:
return instance()
o = super(Prior, cls).__new__(cls, A, nu)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, A, nu):
self.A = float(A)
self.nu = float(nu)
self.constant = gammaln(.5*(self.nu+1.)) - gammaln(.5*self.nu) - .5*np.log(np.pi*self.A*self.nu)
def __str__(self):
return "hT({:.2g}, {:.2g})".format(self.A, self.nu)
def lnpdf(self, theta):
return (theta > 0) * (self.constant - .5*(self.nu + 1) * np.log(1. + (1./self.nu) * (theta/self.A)**2))
# theta = theta if isinstance(theta,np.ndarray) else np.array([theta])
# lnpdfs = np.zeros_like(theta)
# theta = np.array([theta])
# above_zero = theta.flatten()>1e-6
# v = self.nu
# sigma2=self.A
# stop
# lnpdfs[above_zero] = (+ gammaln((v + 1) * 0.5)
# - gammaln(v * 0.5)
# - 0.5*np.log(sigma2 * v * np.pi)
# - 0.5*(v + 1)*np.log(1 + (1/np.float(v))*((theta[above_zero][0]**2)/sigma2))
# )
# return lnpdfs
def lnpdf_grad(self, theta):
theta = theta if isinstance(theta, np.ndarray) else np.array([theta])
grad = np.zeros_like(theta)
above_zero = theta > 1e-6
v = self.nu
sigma2 = self.A
grad[above_zero] = -0.5*(v+1)*(2*theta[above_zero])/(v*sigma2 + theta[above_zero][0]**2)
return grad
def rvs(self, n):
# return np.random.randn(n) * self.sigma + self.mu
from scipy.stats import t
# [np.abs(x) for x in t.rvs(df=4,loc=0,scale=50, size=10000)])
ret = t.rvs(self.nu, loc=0, scale=self.A, size=n)
ret[ret < 0] = 0
return ret
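# --- Added note (not in the original source) -----------------------------------
# For theta > 0 the log-density coded in lnpdf above is
#     ln p(theta) = C - 0.5 * (nu + 1) * ln(1 + (theta / A)**2 / nu)
# with C = ln G((nu+1)/2) - ln G(nu/2) - 0.5 * ln(pi * A * nu)   (G = gamma fn),
# i.e. a Student-t restricted to the positive half-line; note that lnpdf returns
# 0 rather than -inf for theta <= 0 because of the (theta > 0) factor.  rvs()
# draws from scipy's t(df=nu, scale=A) and clips negative samples to 0.
# Hypothetical use as a prior on a positive kernel hyperparameter:
#     kern.variance.set_prior(HalfT(A=1.0, nu=4.0))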
class Exponential(Prior):
"""
Implementation of the Exponential probability function,
coupled with random variables.
    :param l: rate parameter
"""
domain = _POSITIVE
_instances = []
def __new__(cls, l): # Singleton:
if cls._instances:
cls._instances[:] = [instance for instance in cls._instances if instance()]
for instance in cls._instances:
if instance().l == l:
return instance()
o = super(Exponential, cls).__new__(cls, l)
cls._instances.append(weakref.ref(o))
return cls._instances[-1]()
def __init__(self, l):
self.l = l
def __str__(self):
return "Exp({:.2g})".format(self.l)
def summary(self):
ret = {"E[x]": 1. / self.l,
"E[ln x]": np.nan,
"var[x]": 1. / self.l**2,
"Entropy": 1. - np.log(self.l),
"Mode": 0.}
return ret
def lnpdf(self, x):
return np.log(self.l) - self.l * x
def lnpdf_grad(self, x):
return - self.l
def rvs(self, n):
        return np.random.exponential(scale=1. / self.l, size=n)  # numpy's scale is the mean, i.e. 1/l
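# --- Added note (not in the original source) -----------------------------------
# With l used as a rate, the density implied by lnpdf is p(x) = l * exp(-l * x),
# which is what the summary() values above correspond to: E[x] = 1/l,
# var[x] = 1/l**2, mode 0 and differential entropy 1 - ln(l).  Hypothetical use
# (the parameter path is illustrative only):
#     model.rbf.lengthscale.set_prior(Exponential(l=2.0))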
| bsd-3-clause |
anirudhjayaraman/scikit-learn | examples/svm/plot_svm_nonlinear.py | 268 | 1091 | """
==============
Non-linear SVM
==============
Perform binary classification using non-linear SVC
with RBF kernel. The target to predict is an XOR of the
inputs.
The color map illustrates the decision function learned by the SVC.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
xx, yy = np.meshgrid(np.linspace(-3, 3, 500),
np.linspace(-3, 3, 500))
np.random.seed(0)
X = np.random.randn(300, 2)
Y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
# fit the model
clf = svm.NuSVC()
clf.fit(X, Y)
# plot the decision function for each datapoint on the grid
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()), aspect='auto',
origin='lower', cmap=plt.cm.PuOr_r)
contours = plt.contour(xx, yy, Z, levels=[0], linewidths=2,
                       linestyles='--')
plt.scatter(X[:, 0], X[:, 1], s=30, c=Y, cmap=plt.cm.Paired)
plt.xticks(())
plt.yticks(())
plt.axis([-3, 3, -3, 3])
plt.show()
| bsd-3-clause |
IndraVikas/scikit-learn | sklearn/linear_model/tests/test_bayes.py | 299 | 1770 | # Author: Alexandre Gramfort <[email protected]>
# Fabian Pedregosa <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import SkipTest
from sklearn.linear_model.bayes import BayesianRidge, ARDRegression
from sklearn import datasets
from sklearn.utils.testing import assert_array_almost_equal
def test_bayesian_on_diabetes():
# Test BayesianRidge on diabetes
raise SkipTest("XFailed Test")
diabetes = datasets.load_diabetes()
X, y = diabetes.data, diabetes.target
clf = BayesianRidge(compute_score=True)
# Test with more samples than features
clf.fit(X, y)
# Test that scores are increasing at each iteration
assert_array_equal(np.diff(clf.scores_) > 0, True)
# Test with more features than samples
X = X[:5, :]
y = y[:5]
clf.fit(X, y)
# Test that scores are increasing at each iteration
assert_array_equal(np.diff(clf.scores_) > 0, True)
def test_toy_bayesian_ridge_object():
# Test BayesianRidge on toy
X = np.array([[1], [2], [6], [8], [10]])
Y = np.array([1, 2, 6, 8, 10])
clf = BayesianRidge(compute_score=True)
clf.fit(X, Y)
# Check that the model could approximately learn the identity function
test = [[1], [3], [4]]
assert_array_almost_equal(clf.predict(test), [1, 3, 4], 2)
def test_toy_ard_object():
# Test BayesianRegression ARD classifier
X = np.array([[1], [2], [3]])
Y = np.array([1, 2, 3])
clf = ARDRegression(compute_score=True)
clf.fit(X, Y)
# Check that the model could approximately learn the identity function
test = [[1], [3], [4]]
assert_array_almost_equal(clf.predict(test), [1, 3, 4], 2)
| bsd-3-clause |
ishank08/scikit-learn | examples/model_selection/grid_search_digits.py | 33 | 2764 | """
============================================================
Parameter estimation using grid search with cross-validation
============================================================
This example shows how a classifier is optimized by cross-validation,
which is done using the :class:`sklearn.model_selection.GridSearchCV` object
on a development set that comprises only half of the available labeled data.
The performance of the selected hyper-parameters and trained model is
then measured on a dedicated evaluation set that was not used during
the model selection step.
More details on tools available for model selection can be found in the
sections on :ref:`cross_validation` and :ref:`grid_search`.
"""
from __future__ import print_function
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC
print(__doc__)
# Loading the Digits dataset
digits = datasets.load_digits()
# To apply a classifier on this data, we need to flatten the images, to
# turn the data into a (samples, features) matrix:
n_samples = len(digits.images)
X = digits.images.reshape((n_samples, -1))
y = digits.target
# Split the dataset in two equal parts
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.5, random_state=0)
# Set the parameters by cross-validation
tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000]},
{'kernel': ['linear'], 'C': [1, 10, 100, 1000]}]
scores = ['precision', 'recall']
for score in scores:
print("# Tuning hyper-parameters for %s" % score)
print()
clf = GridSearchCV(SVC(C=1), tuned_parameters, cv=5,
scoring='%s_macro' % score)
clf.fit(X_train, y_train)
print("Best parameters set found on development set:")
print()
print(clf.best_params_)
print()
print("Grid scores on development set:")
print()
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r"
% (mean, std * 2, params))
print()
print("Detailed classification report:")
print()
print("The model is trained on the full development set.")
print("The scores are computed on the full evaluation set.")
print()
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))
print()
# Note the problem is too easy: the hyperparameter plateau is too flat and the
# output model is the same for precision and recall with ties in quality.
| bsd-3-clause |
0x0all/scikit-learn | benchmarks/bench_plot_fastkmeans.py | 294 | 4676 | from __future__ import print_function
from collections import defaultdict
from time import time
import numpy as np
from numpy import random as nr
from sklearn.cluster.k_means_ import KMeans, MiniBatchKMeans
def compute_bench(samples_range, features_range):
it = 0
results = defaultdict(lambda: [])
chunk = 100
max_it = len(samples_range) * len(features_range)
for n_samples in samples_range:
for n_features in features_range:
it += 1
print('==============================')
print('Iteration %03d of %03d' % (it, max_it))
print('==============================')
print()
data = nr.random_integers(-50, 50, (n_samples, n_features))
print('K-Means')
tstart = time()
kmeans = KMeans(init='k-means++', n_clusters=10).fit(data)
delta = time() - tstart
print("Speed: %0.3fs" % delta)
print("Inertia: %0.5f" % kmeans.inertia_)
print()
results['kmeans_speed'].append(delta)
results['kmeans_quality'].append(kmeans.inertia_)
print('Fast K-Means')
# let's prepare the data in small chunks
mbkmeans = MiniBatchKMeans(init='k-means++',
n_clusters=10,
batch_size=chunk)
tstart = time()
mbkmeans.fit(data)
delta = time() - tstart
print("Speed: %0.3fs" % delta)
print("Inertia: %f" % mbkmeans.inertia_)
print()
print()
results['MiniBatchKMeans Speed'].append(delta)
results['MiniBatchKMeans Quality'].append(mbkmeans.inertia_)
return results
def compute_bench_2(chunks):
results = defaultdict(lambda: [])
n_features = 50000
means = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1],
[0.5, 0.5], [0.75, -0.5], [-1, 0.75], [1, 0]])
X = np.empty((0, 2))
for i in range(8):
X = np.r_[X, means[i] + 0.8 * np.random.randn(n_features, 2)]
max_it = len(chunks)
it = 0
for chunk in chunks:
it += 1
print('==============================')
print('Iteration %03d of %03d' % (it, max_it))
print('==============================')
print()
print('Fast K-Means')
tstart = time()
mbkmeans = MiniBatchKMeans(init='k-means++',
n_clusters=8,
batch_size=chunk)
mbkmeans.fit(X)
delta = time() - tstart
print("Speed: %0.3fs" % delta)
print("Inertia: %0.3fs" % mbkmeans.inertia_)
print()
results['MiniBatchKMeans Speed'].append(delta)
results['MiniBatchKMeans Quality'].append(mbkmeans.inertia_)
return results
if __name__ == '__main__':
from mpl_toolkits.mplot3d import axes3d # register the 3d projection
import matplotlib.pyplot as plt
samples_range = np.linspace(50, 150, 5).astype(np.int)
features_range = np.linspace(150, 50000, 5).astype(np.int)
chunks = np.linspace(500, 10000, 15).astype(np.int)
results = compute_bench(samples_range, features_range)
results_2 = compute_bench_2(chunks)
max_time = max([max(i) for i in [t for (label, t) in results.iteritems()
if "speed" in label]])
max_inertia = max([max(i) for i in [
t for (label, t) in results.iteritems()
if "speed" not in label]])
fig = plt.figure('scikit-learn K-Means benchmark results')
for c, (label, timings) in zip('brcy',
sorted(results.iteritems())):
if 'speed' in label:
ax = fig.add_subplot(2, 2, 1, projection='3d')
ax.set_zlim3d(0.0, max_time * 1.1)
else:
ax = fig.add_subplot(2, 2, 2, projection='3d')
ax.set_zlim3d(0.0, max_inertia * 1.1)
X, Y = np.meshgrid(samples_range, features_range)
Z = np.asarray(timings).reshape(samples_range.shape[0],
features_range.shape[0])
ax.plot_surface(X, Y, Z.T, cstride=1, rstride=1, color=c, alpha=0.5)
ax.set_xlabel('n_samples')
ax.set_ylabel('n_features')
i = 0
for c, (label, timings) in zip('br',
sorted(results_2.iteritems())):
i += 1
ax = fig.add_subplot(2, 2, i + 2)
y = np.asarray(timings)
ax.plot(chunks, y, color=c, alpha=0.8)
ax.set_xlabel('Chunks')
ax.set_ylabel(label)
plt.show()
| bsd-3-clause |
breznak/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/numerix/__init__.py | 69 | 5473 | """
numerix imports Numeric, numarray, or numpy based on various selectors.
0. If the value "--numpy","--numarray" or "--Numeric" is specified on the
command line, then numerix imports the specified
array package.
1. The value of numerix in matplotlibrc: Numeric, numarray, or numpy
2. If none of the above is done, the default array package is Numeric.
Because the matplotlibrc always provides *some* value for numerix
(it has its own system of default values), this default is most
likely never used.
To summarize: the commandline is examined first, the rc file second,
and the default array package is Numeric.
"""
import sys, os, struct
from matplotlib import rcParams, verbose
which = None, None
use_maskedarray = None
# First, see if --numarray or --Numeric was specified on the command
# line:
for a in sys.argv:
if a in ["--Numeric", "--numeric", "--NUMERIC",
"--Numarray", "--numarray", "--NUMARRAY",
"--NumPy", "--numpy", "--NUMPY", "--Numpy",
]:
which = a[2:], "command line"
if a == "--maskedarray":
use_maskedarray = True
if a == "--ma":
use_maskedarray = False
try: del a
except NameError: pass
if which[0] is None:
try: # In theory, rcParams always has *some* value for numerix.
which = rcParams['numerix'], "rc"
except KeyError:
pass
if use_maskedarray is None:
try:
use_maskedarray = rcParams['maskedarray']
except KeyError:
use_maskedarray = False
# If all the above fail, default to Numeric. Most likely not used.
if which[0] is None:
which = "numeric", "defaulted"
which = which[0].strip().lower(), which[1]
if which[0] not in ["numeric", "numarray", "numpy"]:
raise ValueError("numerix selector must be either 'Numeric', 'numarray', or 'numpy' but the value obtained from the %s was '%s'." % (which[1], which[0]))
if which[0] == "numarray":
import warnings
warnings.warn("numarray use as a numerix backed for matplotlib is deprecated",
DeprecationWarning, stacklevel=1)
#from na_imports import *
from numarray import *
from _na_imports import nx, inf, infinity, Infinity, Matrix, isnan, all
from numarray.numeric import nonzero
from numarray.convolve import cross_correlate, convolve
import numarray
version = 'numarray %s'%numarray.__version__
nan = struct.unpack('d', struct.pack('Q', 0x7ff8000000000000))[0]
elif which[0] == "numeric":
import warnings
warnings.warn("Numeric use as a numerix backed for matplotlib is deprecated",
DeprecationWarning, stacklevel=1)
#from nc_imports import *
from Numeric import *
from _nc_imports import nx, inf, infinity, Infinity, isnan, all, any
from Matrix import Matrix
import Numeric
version = 'Numeric %s'%Numeric.__version__
nan = struct.unpack('d', struct.pack('Q', 0x7ff8000000000000))[0]
elif which[0] == "numpy":
try:
import numpy.oldnumeric as numpy
from numpy.oldnumeric import *
except ImportError:
import numpy
from numpy import *
print 'except asarray', asarray
from _sp_imports import nx, infinity, rand, randn, isnan, all, any
from _sp_imports import UInt8, UInt16, UInt32, Infinity
try:
from numpy.oldnumeric.matrix import Matrix
except ImportError:
Matrix = matrix
version = 'numpy %s' % numpy.__version__
from numpy import nan
else:
raise RuntimeError("invalid numerix selector")
# Some changes are only applicable to the new numpy:
if (which[0] == 'numarray' or
which[0] == 'numeric'):
from mlab import amin, amax
newaxis = NewAxis
def typecode(a):
return a.typecode()
def iscontiguous(a):
return a.iscontiguous()
def byteswapped(a):
return a.byteswapped()
def itemsize(a):
return a.itemsize()
def angle(a):
return arctan2(a.imag, a.real)
else:
# We've already checked for a valid numerix selector,
# so assume numpy.
from mlab import amin, amax
newaxis = NewAxis
from numpy import angle
def typecode(a):
return a.dtype.char
def iscontiguous(a):
return a.flags.contiguous
def byteswapped(a):
return a.byteswap()
def itemsize(a):
return a.itemsize
verbose.report('numerix %s'%version)
# a bug fix for blas numeric suggested by Fernando Perez
matrixmultiply=dot
asum = sum
def _import_fail_message(module, version):
"""Prints a message when the array package specific version of an extension
fails to import correctly.
"""
_dict = { "which" : which[0],
"module" : module,
"specific" : version + module
}
print """
The import of the %(which)s version of the %(module)s module,
%(specific)s, failed. This is either because %(which)s was
unavailable when matplotlib was compiled, because a dependency of
%(specific)s could not be satisfied, or because the build flag for
this module was turned off in setup.py. If it appears that
%(specific)s was not built, make sure you have a working copy of
%(which)s and then re-install matplotlib. Otherwise, the following
traceback gives more details:\n""" % _dict
g = globals()
l = locals()
__import__('ma', g, l)
__import__('fft', g, l)
__import__('linear_algebra', g, l)
__import__('random_array', g, l)
__import__('mlab', g, l)
la = linear_algebra
ra = random_array
| agpl-3.0 |
jelugbo/tundex | docs/en_us/developers/source/conf.py | 30 | 6955 | # -*- coding: utf-8 -*-
# pylint: disable=C0103
# pylint: disable=W0622
# pylint: disable=W0212
# pylint: disable=W0613
import sys, os
from path import path
on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
sys.path.append('../../../../')
from docs.shared.conf import *
# Add any paths that contain templates here, relative to this directory.
templates_path.append('source/_templates')
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path.append('source/_static')
if not on_rtd: # only import and set the theme if we're building docs locally
import sphinx_rtd_theme
html_theme = 'sphinx_rtd_theme'
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
root = path('../../../..').abspath()
sys.path.insert(0, root)
sys.path.append(root / "common/djangoapps")
sys.path.append(root / "common/lib")
sys.path.append(root / "common/lib/capa")
sys.path.append(root / "common/lib/chem")
sys.path.append(root / "common/lib/sandbox-packages")
sys.path.append(root / "common/lib/xmodule")
sys.path.append(root / "common/lib/opaque_keys")
sys.path.append(root / "lms/djangoapps")
sys.path.append(root / "lms/lib")
sys.path.append(root / "cms/djangoapps")
sys.path.append(root / "cms/lib")
sys.path.insert(0, os.path.abspath(os.path.normpath(os.path.dirname(__file__)
+ '/../../../')))
sys.path.append('.')
# django configuration - careful here
if on_rtd:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms'
else:
os.environ['DJANGO_SETTINGS_MODULE'] = 'lms.envs.test'
# -- General configuration -----------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx',
'sphinx.ext.todo', 'sphinx.ext.coverage', 'sphinx.ext.pngmath',
'sphinx.ext.mathjax', 'sphinx.ext.viewcode', 'sphinxcontrib.napoleon']
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['build']
# Output file base name for HTML help builder.
htmlhelp_basename = 'edXDocs'
project = u'edX Platform Developer Documentation'
copyright = u'2014, edX'
# --- Mock modules ------------------------------------------------------------
# Mock all the modules that the readthedocs build can't import
class Mock(object):
def __init__(self, *args, **kwargs):
pass
def __call__(self, *args, **kwargs):
return Mock()
@classmethod
def __getattr__(cls, name):
if name in ('__file__', '__path__'):
return '/dev/null'
elif name[0] == name[0].upper():
mockType = type(name, (), {})
mockType.__module__ = __name__
return mockType
else:
return Mock()
# The list of modules and submodules that we know give RTD trouble.
# Make sure you've tried including the relevant package in
# docs/share/requirements.txt before adding to this list.
MOCK_MODULES = [
'bson',
'bson.errors',
'bson.objectid',
'dateutil',
'dateutil.parser',
'fs',
'fs.errors',
'fs.osfs',
'lazy',
'mako',
'mako.template',
'matplotlib',
'matplotlib.pyplot',
'mock',
'numpy',
'oauthlib',
'oauthlib.oauth1',
'oauthlib.oauth1.rfc5849',
'PIL',
'pymongo',
'pyparsing',
'pysrt',
'requests',
'scipy.interpolate',
'scipy.constants',
'scipy.optimize',
'yaml',
'webob',
'webob.multidict',
]
if on_rtd:
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = Mock()
# -----------------------------------------------------------------------------
# from http://djangosnippets.org/snippets/2533/
# autogenerate models definitions
import inspect
import types
from HTMLParser import HTMLParser
def force_unicode(s, encoding='utf-8', strings_only=False, errors='strict'):
"""
Similar to smart_unicode, except that lazy instances are resolved to
strings, rather than kept as lazy objects.
If strings_only is True, don't convert (some) non-string-like objects.
"""
if strings_only and isinstance(s, (types.NoneType, int)):
return s
if not isinstance(s, basestring,):
if hasattr(s, '__unicode__'):
s = unicode(s)
else:
s = unicode(str(s), encoding, errors)
elif not isinstance(s, unicode):
s = unicode(s, encoding, errors)
return s
class MLStripper(HTMLParser):
def __init__(self):
self.reset()
self.fed = []
def handle_data(self, d):
self.fed.append(d)
def get_data(self):
return ''.join(self.fed)
def strip_tags(html):
s = MLStripper()
s.feed(html)
return s.get_data()
def process_docstring(app, what, name, obj, options, lines):
"""Autodoc django models"""
# This causes import errors if left outside the function
from django.db import models
# If you want extract docs from django forms:
# from django import forms
# from django.forms.models import BaseInlineFormSet
# Only look at objects that inherit from Django's base MODEL class
if inspect.isclass(obj) and issubclass(obj, models.Model):
# Grab the field list from the meta class
fields = obj._meta._fields()
for field in fields:
# Decode and strip any html out of the field's help text
help_text = strip_tags(force_unicode(field.help_text))
# Decode and capitalize the verbose name, for use if there isn't
# any help text
verbose_name = force_unicode(field.verbose_name).capitalize()
if help_text:
# Add the model field to the end of the docstring as a param
# using the help text as the description
lines.append(u':param %s: %s' % (field.attname, help_text))
else:
# Add the model field to the end of the docstring as a param
# using the verbose name as the description
lines.append(u':param %s: %s' % (field.attname, verbose_name))
# Add the field's type to the docstring
lines.append(u':type %s: %s' % (field.attname, type(field).__name__))
return lines
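# Added illustrative example (not part of the original conf.py): for a model
# field such as
#     name = models.CharField(max_length=80, help_text="The user's display name")
# the hook above appends roughly the following lines to the class docstring:
#     :param name: The user's display name
#     :type name: CharField
# so every Django model field shows up in the generated API docs with its type.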
def setup(app):
"""Setup docsting processors"""
#Register the docstring processor with sphinx
app.connect('autodoc-process-docstring', process_docstring)
| agpl-3.0 |
mbayon/TFG-MachineLearning | venv/lib/python3.6/site-packages/pandas/core/sparse/array.py | 6 | 26143 | """
SparseArray data structure
"""
from __future__ import division
# pylint: disable=E1101,E1103,W0231
import numpy as np
import warnings
import pandas as pd
from pandas.core.base import PandasObject
from pandas import compat
from pandas.compat import range
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.generic import (
ABCSparseArray, ABCSparseSeries)
from pandas.core.dtypes.common import (
_ensure_platform_int,
is_float, is_integer,
is_integer_dtype,
is_bool_dtype,
is_list_like,
is_string_dtype,
is_scalar, is_dtype_equal)
from pandas.core.dtypes.cast import (
maybe_convert_platform, maybe_promote,
astype_nansafe, find_common_type)
from pandas.core.dtypes.missing import isnull, notnull, na_value_for_dtype
import pandas._libs.sparse as splib
from pandas._libs.sparse import SparseIndex, BlockIndex, IntIndex
from pandas._libs import index as libindex
import pandas.core.algorithms as algos
import pandas.core.ops as ops
import pandas.io.formats.printing as printing
from pandas.util._decorators import Appender
from pandas.core.indexes.base import _index_shared_docs
_sparray_doc_kwargs = dict(klass='SparseArray')
def _arith_method(op, name, str_rep=None, default_axis=None, fill_zeros=None,
**eval_kwargs):
"""
Wrapper function for Series arithmetic operations, to avoid
code duplication.
"""
def wrapper(self, other):
if isinstance(other, np.ndarray):
if len(self) != len(other):
raise AssertionError("length mismatch: %d vs. %d" %
(len(self), len(other)))
if not isinstance(other, ABCSparseArray):
dtype = getattr(other, 'dtype', None)
other = SparseArray(other, fill_value=self.fill_value,
dtype=dtype)
return _sparse_array_op(self, other, op, name)
elif is_scalar(other):
with np.errstate(all='ignore'):
fill = op(_get_fill(self), np.asarray(other))
result = op(self.sp_values, other)
return _wrap_result(name, result, self.sp_index, fill)
else: # pragma: no cover
raise TypeError('operation with %s not supported' % type(other))
if name.startswith("__"):
name = name[2:-2]
wrapper.__name__ = name
return wrapper
def _get_fill(arr):
# coerce fill_value to arr dtype if possible
# int64 SparseArray can have NaN as fill_value if there is no missing
try:
return np.asarray(arr.fill_value, dtype=arr.dtype)
except ValueError:
return np.asarray(arr.fill_value)
def _sparse_array_op(left, right, op, name, series=False):
if series and is_integer_dtype(left) and is_integer_dtype(right):
# series coerces to float64 if result should have NaN/inf
if name in ('floordiv', 'mod') and (right.values == 0).any():
left = left.astype(np.float64)
right = right.astype(np.float64)
elif name in ('rfloordiv', 'rmod') and (left.values == 0).any():
left = left.astype(np.float64)
right = right.astype(np.float64)
# dtype used to find corresponding sparse method
if not is_dtype_equal(left.dtype, right.dtype):
dtype = find_common_type([left.dtype, right.dtype])
left = left.astype(dtype)
right = right.astype(dtype)
else:
dtype = left.dtype
# dtype the result must have
result_dtype = None
if left.sp_index.ngaps == 0 or right.sp_index.ngaps == 0:
with np.errstate(all='ignore'):
result = op(left.get_values(), right.get_values())
fill = op(_get_fill(left), _get_fill(right))
if left.sp_index.ngaps == 0:
index = left.sp_index
else:
index = right.sp_index
elif left.sp_index.equals(right.sp_index):
with np.errstate(all='ignore'):
result = op(left.sp_values, right.sp_values)
fill = op(_get_fill(left), _get_fill(right))
index = left.sp_index
else:
if name[0] == 'r':
left, right = right, left
name = name[1:]
if name in ('and', 'or') and dtype == 'bool':
opname = 'sparse_{name}_uint8'.format(name=name, dtype=dtype)
# to make template simple, cast here
left_sp_values = left.sp_values.view(np.uint8)
right_sp_values = right.sp_values.view(np.uint8)
result_dtype = np.bool
else:
opname = 'sparse_{name}_{dtype}'.format(name=name, dtype=dtype)
left_sp_values = left.sp_values
right_sp_values = right.sp_values
sparse_op = getattr(splib, opname)
with np.errstate(all='ignore'):
result, index, fill = sparse_op(left_sp_values, left.sp_index,
left.fill_value, right_sp_values,
right.sp_index, right.fill_value)
if result_dtype is None:
result_dtype = result.dtype
return _wrap_result(name, result, index, fill, dtype=result_dtype)
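# Added note (not in the original source): the helper above picks the cheapest
# path it can -- a plain dense op when either operand has no gaps, a direct op on
# the stored sp_values when both share the same sparse index, and otherwise a
# templated Cython kernel (e.g. ``sparse_add_float64``) that merges the two
# indices.  In each case the result's fill_value is, in effect, ``op`` applied to
# the two fill_values (for example 0 + 0 == 0 when adding two arrays that both
# use fill_value=0).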
def _wrap_result(name, data, sparse_index, fill_value, dtype=None):
""" wrap op result to have correct dtype """
if name in ('eq', 'ne', 'lt', 'gt', 'le', 'ge'):
dtype = np.bool
if is_bool_dtype(dtype):
# fill_value may be np.bool_
fill_value = bool(fill_value)
return SparseArray(data, sparse_index=sparse_index,
fill_value=fill_value, dtype=dtype)
class SparseArray(PandasObject, np.ndarray):
"""Data structure for labeled, sparse floating point 1-D data
Parameters
----------
data : {array-like (1-D), Series, SparseSeries, dict}
kind : {'block', 'integer'}
fill_value : float
Code for missing value. Defaults depends on dtype.
0 for int dtype, False for bool dtype, and NaN for other dtypes
sparse_index : {BlockIndex, IntIndex}, optional
Only if you have one. Mainly used internally
Notes
-----
SparseArray objects are immutable via the typical Python means. If you
must change values, convert to dense, make your changes, then convert back
to sparse
"""
__array_priority__ = 15
_typ = 'array'
_subtyp = 'sparse_array'
sp_index = None
fill_value = None
def __new__(cls, data, sparse_index=None, index=None, kind='integer',
fill_value=None, dtype=None, copy=False):
if index is not None:
if data is None:
data = np.nan
if not is_scalar(data):
raise Exception("must only pass scalars with an index ")
values = np.empty(len(index), dtype='float64')
values.fill(data)
data = values
if isinstance(data, ABCSparseSeries):
data = data.values
is_sparse_array = isinstance(data, SparseArray)
if dtype is not None:
dtype = np.dtype(dtype)
if is_sparse_array:
sparse_index = data.sp_index
values = data.sp_values
fill_value = data.fill_value
else:
# array-like
if sparse_index is None:
if dtype is not None:
data = np.asarray(data, dtype=dtype)
res = make_sparse(data, kind=kind, fill_value=fill_value)
values, sparse_index, fill_value = res
else:
values = _sanitize_values(data)
if len(values) != sparse_index.npoints:
raise AssertionError("Non array-like type {0} must have"
" the same length as the"
" index".format(type(values)))
# Create array, do *not* copy data by default
if copy:
subarr = np.array(values, dtype=dtype, copy=True)
else:
subarr = np.asarray(values, dtype=dtype)
# Change the class of the array to be the subclass type.
return cls._simple_new(subarr, sparse_index, fill_value)
@classmethod
def _simple_new(cls, data, sp_index, fill_value):
if not isinstance(sp_index, SparseIndex):
# caller must pass SparseIndex
raise ValueError('sp_index must be a SparseIndex')
if fill_value is None:
if sp_index.ngaps > 0:
# has missing hole
fill_value = np.nan
else:
fill_value = na_value_for_dtype(data.dtype)
if (is_integer_dtype(data) and is_float(fill_value) and
sp_index.ngaps > 0):
# if float fill_value is being included in dense repr,
# convert values to float
data = data.astype(float)
result = data.view(cls)
if not isinstance(sp_index, SparseIndex):
# caller must pass SparseIndex
raise ValueError('sp_index must be a SparseIndex')
result.sp_index = sp_index
result._fill_value = fill_value
return result
@property
def _constructor(self):
return lambda x: SparseArray(x, fill_value=self.fill_value,
kind=self.kind)
@property
def kind(self):
if isinstance(self.sp_index, BlockIndex):
return 'block'
elif isinstance(self.sp_index, IntIndex):
return 'integer'
def __array_wrap__(self, out_arr, context=None):
"""
NumPy calls this method when ufunc is applied
Parameters
----------
out_arr : ndarray
ufunc result (note that ufunc is only applied to sp_values)
context : tuple of 3 elements (ufunc, signature, domain)
for example, following is a context when np.sin is applied to
SparseArray,
(<ufunc 'sin'>, (SparseArray,), 0))
See http://docs.scipy.org/doc/numpy/user/basics.subclassing.html
"""
if isinstance(context, tuple) and len(context) == 3:
ufunc, args, domain = context
# to apply ufunc only to fill_value (to avoid recursive call)
args = [getattr(a, 'fill_value', a) for a in args]
with np.errstate(all='ignore'):
fill_value = ufunc(self.fill_value, *args[1:])
else:
fill_value = self.fill_value
return self._simple_new(out_arr, sp_index=self.sp_index,
fill_value=fill_value)
def __array_finalize__(self, obj):
"""
Gets called after any ufunc or other array operations, necessary
to pass on the index.
"""
self.sp_index = getattr(obj, 'sp_index', None)
self._fill_value = getattr(obj, 'fill_value', None)
def __reduce__(self):
"""Necessary for making this object picklable"""
object_state = list(np.ndarray.__reduce__(self))
subclass_state = self.fill_value, self.sp_index
object_state[2] = (object_state[2], subclass_state)
return tuple(object_state)
def __setstate__(self, state):
"""Necessary for making this object picklable"""
nd_state, own_state = state
np.ndarray.__setstate__(self, nd_state)
fill_value, sp_index = own_state[:2]
self.sp_index = sp_index
self._fill_value = fill_value
def __len__(self):
try:
return self.sp_index.length
except:
return 0
def __unicode__(self):
return '%s\nFill: %s\n%s' % (printing.pprint_thing(self),
printing.pprint_thing(self.fill_value),
printing.pprint_thing(self.sp_index))
def disable(self, other):
raise NotImplementedError('inplace binary ops not supported')
# Inplace operators
__iadd__ = disable
__isub__ = disable
__imul__ = disable
__itruediv__ = disable
__ifloordiv__ = disable
__ipow__ = disable
# Python 2 division operators
if not compat.PY3:
__idiv__ = disable
@property
def values(self):
"""
Dense values
"""
output = np.empty(len(self), dtype=self.dtype)
int_index = self.sp_index.to_int_index()
output.fill(self.fill_value)
output.put(int_index.indices, self)
return output
@property
def sp_values(self):
# caching not an option, leaks memory
return self.view(np.ndarray)
@property
def fill_value(self):
return self._fill_value
@fill_value.setter
def fill_value(self, value):
if not is_scalar(value):
raise ValueError('fill_value must be a scalar')
# if the specified value triggers type promotion, raise ValueError
new_dtype, fill_value = maybe_promote(self.dtype, value)
if is_dtype_equal(self.dtype, new_dtype):
self._fill_value = fill_value
else:
msg = 'unable to set fill_value {0} to {1} dtype'
raise ValueError(msg.format(value, self.dtype))
def get_values(self, fill=None):
""" return a dense representation """
return self.to_dense(fill=fill)
def to_dense(self, fill=None):
"""
Convert SparseArray to a NumPy array.
Parameters
----------
fill: float, default None
DEPRECATED: this argument will be removed in a future version
because it is not respected by this function.
Returns
-------
arr : NumPy array
"""
if fill is not None:
warnings.warn(("The 'fill' parameter has been deprecated and "
"will be removed in a future version."),
FutureWarning, stacklevel=2)
return self.values
def __iter__(self):
for i in range(len(self)):
yield self._get_val_at(i)
def __getitem__(self, key):
"""
"""
if is_integer(key):
return self._get_val_at(key)
elif isinstance(key, tuple):
data_slice = self.values[key]
else:
if isinstance(key, SparseArray):
if is_bool_dtype(key):
key = key.to_dense()
else:
key = np.asarray(key)
if hasattr(key, '__len__') and len(self) != len(key):
return self.take(key)
else:
data_slice = self.values[key]
return self._constructor(data_slice)
def __getslice__(self, i, j):
if i < 0:
i = 0
if j < 0:
j = 0
slobj = slice(i, j)
return self.__getitem__(slobj)
def _get_val_at(self, loc):
n = len(self)
if loc < 0:
loc += n
if loc >= n or loc < 0:
raise IndexError('Out of bounds access')
sp_loc = self.sp_index.lookup(loc)
if sp_loc == -1:
return self.fill_value
else:
return libindex.get_value_at(self, sp_loc)
@Appender(_index_shared_docs['take'] % _sparray_doc_kwargs)
def take(self, indices, axis=0, allow_fill=True,
fill_value=None, **kwargs):
"""
Sparse-compatible version of ndarray.take
Returns
-------
taken : ndarray
"""
nv.validate_take(tuple(), kwargs)
if axis:
raise ValueError("axis must be 0, input was {0}".format(axis))
if is_integer(indices):
# return scalar
return self[indices]
indices = _ensure_platform_int(indices)
n = len(self)
if allow_fill and fill_value is not None:
# allow -1 to indicate self.fill_value,
# self.fill_value may not be NaN
if (indices < -1).any():
msg = ('When allow_fill=True and fill_value is not None, '
'all indices must be >= -1')
raise ValueError(msg)
elif (n <= indices).any():
msg = 'index is out of bounds for size {0}'
raise IndexError(msg.format(n))
else:
if ((indices < -n) | (n <= indices)).any():
msg = 'index is out of bounds for size {0}'
raise IndexError(msg.format(n))
indices = indices.astype(np.int32)
if not (allow_fill and fill_value is not None):
indices = indices.copy()
indices[indices < 0] += n
locs = self.sp_index.lookup_array(indices)
indexer = np.arange(len(locs), dtype=np.int32)
mask = locs != -1
if mask.any():
indexer = indexer[mask]
new_values = self.sp_values.take(locs[mask])
else:
indexer = np.empty(shape=(0, ), dtype=np.int32)
new_values = np.empty(shape=(0, ), dtype=self.sp_values.dtype)
sp_index = _make_index(len(indices), indexer, kind=self.sp_index)
return self._simple_new(new_values, sp_index, self.fill_value)
def __setitem__(self, key, value):
# if is_integer(key):
# self.values[key] = value
# else:
# raise Exception("SparseArray does not support seting non-scalars
# via setitem")
raise TypeError(
"SparseArray does not support item assignment via setitem")
def __setslice__(self, i, j, value):
if i < 0:
i = 0
if j < 0:
j = 0
slobj = slice(i, j) # noqa
# if not is_scalar(value):
# raise Exception("SparseArray does not support seting non-scalars
# via slices")
# x = self.values
# x[slobj] = value
# self.values = x
raise TypeError("SparseArray does not support item assignment via "
"slices")
def astype(self, dtype=None, copy=True):
dtype = np.dtype(dtype)
sp_values = astype_nansafe(self.sp_values, dtype, copy=copy)
try:
if is_bool_dtype(dtype):
# to avoid np.bool_ dtype
fill_value = bool(self.fill_value)
else:
fill_value = dtype.type(self.fill_value)
except ValueError:
msg = 'unable to coerce current fill_value {0} to {1} dtype'
raise ValueError(msg.format(self.fill_value, dtype))
return self._simple_new(sp_values, self.sp_index,
fill_value=fill_value)
def copy(self, deep=True):
"""
Make a copy of the SparseArray. Only the actual sparse values need to
be copied.
"""
if deep:
values = self.sp_values.copy()
else:
values = self.sp_values
return SparseArray(values, sparse_index=self.sp_index,
dtype=self.dtype, fill_value=self.fill_value)
def count(self):
"""
Compute sum of non-NA/null observations in SparseArray. If the
fill_value is not NaN, the "sparse" locations will be included in the
observation count.
Returns
-------
nobs : int
"""
sp_values = self.sp_values
valid_spvals = np.isfinite(sp_values).sum()
if self._null_fill_value:
return valid_spvals
else:
return valid_spvals + self.sp_index.ngaps
@property
def _null_fill_value(self):
return isnull(self.fill_value)
@property
def _valid_sp_values(self):
sp_vals = self.sp_values
mask = notnull(sp_vals)
return sp_vals[mask]
@Appender(_index_shared_docs['fillna'] % _sparray_doc_kwargs)
def fillna(self, value, downcast=None):
if downcast is not None:
raise NotImplementedError
if issubclass(self.dtype.type, np.floating):
value = float(value)
if self._null_fill_value:
return self._simple_new(self.sp_values, self.sp_index,
fill_value=value)
else:
new_values = self.sp_values.copy()
new_values[isnull(new_values)] = value
return self._simple_new(new_values, self.sp_index,
fill_value=self.fill_value)
def sum(self, axis=0, *args, **kwargs):
"""
Sum of non-NA/null values
Returns
-------
sum : float
"""
nv.validate_sum(args, kwargs)
valid_vals = self._valid_sp_values
sp_sum = valid_vals.sum()
if self._null_fill_value:
return sp_sum
else:
nsparse = self.sp_index.ngaps
return sp_sum + self.fill_value * nsparse
def cumsum(self, axis=0, *args, **kwargs):
"""
Cumulative sum of non-NA/null values.
        When performing the cumulative summation, any NA/null values will
        be skipped. The resulting SparseArray will preserve the locations of
NaN values, but the fill value will be `np.nan` regardless.
Parameters
----------
axis : int or None
Axis over which to perform the cumulative summation. If None,
perform cumulative summation over flattened array.
Returns
-------
cumsum : SparseArray
"""
nv.validate_cumsum(args, kwargs)
if axis is not None and axis >= self.ndim: # Mimic ndarray behaviour.
raise ValueError("axis(={axis}) out of bounds".format(axis=axis))
if not self._null_fill_value:
return SparseArray(self.to_dense()).cumsum()
return SparseArray(self.sp_values.cumsum(), sparse_index=self.sp_index,
fill_value=self.fill_value)
def mean(self, axis=0, *args, **kwargs):
"""
Mean of non-NA/null values
Returns
-------
mean : float
"""
nv.validate_mean(args, kwargs)
valid_vals = self._valid_sp_values
sp_sum = valid_vals.sum()
ct = len(valid_vals)
if self._null_fill_value:
return sp_sum / ct
else:
nsparse = self.sp_index.ngaps
return (sp_sum + self.fill_value * nsparse) / (ct + nsparse)
def value_counts(self, dropna=True):
"""
Returns a Series containing counts of unique values.
Parameters
----------
dropna : boolean, default True
Don't include counts of NaN, even if NaN is in sp_values.
Returns
-------
counts : Series
"""
keys, counts = algos._value_counts_arraylike(self.sp_values,
dropna=dropna)
fcounts = self.sp_index.ngaps
if fcounts > 0:
if self._null_fill_value and dropna:
pass
else:
if self._null_fill_value:
mask = pd.isnull(keys)
else:
mask = keys == self.fill_value
if mask.any():
counts[mask] += fcounts
else:
keys = np.insert(keys, 0, self.fill_value)
counts = np.insert(counts, 0, fcounts)
if not isinstance(keys, pd.Index):
keys = pd.Index(keys)
result = pd.Series(counts, index=keys)
return result
def _maybe_to_dense(obj):
""" try to convert to dense """
if hasattr(obj, 'to_dense'):
return obj.to_dense()
return obj
def _maybe_to_sparse(array):
""" array must be SparseSeries or SparseArray """
if isinstance(array, ABCSparseSeries):
array = array.values.copy()
return array
def _sanitize_values(arr):
"""
return an ndarray for our input,
in a platform independent manner
"""
if hasattr(arr, 'values'):
arr = arr.values
else:
# scalar
if is_scalar(arr):
arr = [arr]
# ndarray
if isinstance(arr, np.ndarray):
pass
elif is_list_like(arr) and len(arr) > 0:
arr = maybe_convert_platform(arr)
else:
arr = np.asarray(arr)
return arr
def make_sparse(arr, kind='block', fill_value=None):
"""
Convert ndarray to sparse format
Parameters
----------
arr : ndarray
kind : {'block', 'integer'}
fill_value : NaN or another value
Returns
-------
(sparse_values, index) : (ndarray, SparseIndex)
"""
arr = _sanitize_values(arr)
if arr.ndim > 1:
raise TypeError("expected dimension <= 1 data")
if fill_value is None:
fill_value = na_value_for_dtype(arr.dtype)
if isnull(fill_value):
mask = notnull(arr)
else:
# For str arrays in NumPy 1.12.0, operator!= below isn't
# element-wise but just returns False if fill_value is not str,
# so cast to object comparison to be safe
if is_string_dtype(arr):
arr = arr.astype(object)
mask = arr != fill_value
length = len(arr)
if length != mask.size:
# the arr is a SparseArray
indices = mask.sp_index.indices
else:
indices = mask.nonzero()[0].astype(np.int32)
index = _make_index(length, indices, kind)
sparsified_values = arr[mask]
return sparsified_values, index, fill_value
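# Added worked example (values inferred from the code above, shown for clarity):
# for an integer array the default fill_value from na_value_for_dtype is 0, so
#     make_sparse(np.array([0, 0, 1, 2]), kind='integer')
# keeps only the non-fill entries and returns roughly
#     (array([1, 2]), IntIndex(length=4, indices=[2, 3]), 0)
# i.e. the dense values, the index describing where they live, and the fill value.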
def _make_index(length, indices, kind):
if kind == 'block' or isinstance(kind, BlockIndex):
locs, lens = splib.get_blocks(indices)
index = BlockIndex(length, locs, lens)
elif kind == 'integer' or isinstance(kind, IntIndex):
index = IntIndex(length, indices)
else: # pragma: no cover
raise ValueError('must be block or integer type')
return index
ops.add_special_arithmetic_methods(SparseArray, arith_method=_arith_method,
comp_method=_arith_method,
bool_method=_arith_method,
use_numexpr=False)
| mit |
pauljohnleonard/pod-world | CI_2015/demos/GA/PowerSysNetwork/GA_V1_Old.py | 1 | 6060 |
# parameters
import world
import array
import random
import Tkinter
import gui
import matplotlib.pyplot
POPSIZE = 200; # population size
MAXITER = 100000; # maximum iterations
BREEDPROB = 0.5; # mutation rate
NELITE = 10; # top of population survive
NBREED = 100;
do_breed=False
def blank_gene(length):
g=Gene()
g.str=array.array('B',[0] * length)
return g
def random_gene(length,maxTok):
g=blank_gene(length)
for i in range(length):
g.str[i]=random_token(maxTok)
return g
def mate(a,b):
length=len(a.str)
i=random.randint(1,length-1)
g=Gene()
g.str=array.array('B',a.str[0:i]+b.str[i:length])
return g
def random_token(maxTok):
return random.randint(0,maxTok)
def mutate(g,maxTok): # randomly replace a character
length=len(g.str)
i=random.randint(0,length-1)
#g2=g.clone()
g.str[i]=random_token(maxTok)
class Gene:
def clone(self):
g=Gene()
g.str=self.str[:]
return g
def breedPopulation(pop):
newpop=[]
# copy top NELITE to the new population
for m in pop[0:NELITE]:
newpop.append(m)
# create the rest by breeding from the top NBREED
for i in range(NELITE,POPSIZE):
i1 = random.randint(0,NBREED-1)
i2 = random.randint(0,NBREED-1)
if do_breed and random.random()<BREEDPROB:
gene=mate(pop[i1],pop[i2])
else:
gene=pop[i1].clone()
mutate(gene,world.World.maxTok)
newpop.append(gene)
return newpop
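# Added sketch of one generation (comment only, not part of the original file):
# the caller passes the population already sorted by fitness; the NELITE best
# genes are copied through unchanged, the remaining POPSIZE - NELITE slots are
# filled by sampling parents from the top NBREED, and single-point crossover in
# mate() behaves like
#     a.str = [A, A, A, A],  b.str = [B, B, B, B],  cut at i = 2
#     child = [A, A, B, B]
# (when do_breed is off the child is just a clone of one parent).  Every
# non-elite child then has one token randomised by mutate().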
class Run:
def __init__(self):
master = Tkinter.Tk()
self.frame=Tkinter.Frame(master)
self.frame.pack()
matplotlib.pyplot.ion()
matplotlib.pyplot.xlabel("Evaluations")
matplotlib.pyplot.ylabel("Fitness")
b = Tkinter.Button(self.frame, text="RANDOM WORLD", fg="black", command=self.init_world)
c=0
b.grid(row=0,column=c)
c+=1
b = Tkinter.Button(self.frame, text="RESET GA", fg="black", command=self.ga_init)
b.grid(row=0,column=c)
c+=1
b = Tkinter.Button(self.frame, text="STEP", fg="black", command=self.step)
b.grid(row=0,column=c)
c+=1
b = Tkinter.Button(self.frame, text="RUN (GA)", fg="black", command=self.run_ga)
b.grid(row=0,column=c)
c+=1
b = Tkinter.Button(self.frame, text="RUN (RANDOM)", fg="black", command=self.run_random)
b.grid(row=0,column=c)
c+=1
b = Tkinter.Button(self.frame, text="STOP", fg="black", command=self.stop)
b.grid(row=0,column=c)
c+=1
self.breed_flag= Tkinter.IntVar()
b = Tkinter.Checkbutton(self.frame, text="BREED", fg="black", variable=self.breed_flag,command=self.breed_func)
b.grid(row=0,column=c)
#b.pack()
c+=1
b = Tkinter.Button(self.frame, text="PLOT", fg="black", command=self.plot)
b.grid(row=0,column=c)
gui.init(600,600,10)
self.canvas = Tkinter.Canvas(self.frame, width=gui.xMax, height=gui.yMax+40)
self.canvas.grid(row=1,columnspan=c)
self.run_init()
self.init_world()
def breed_func(self):
global do_breed
do_breed = not do_breed
print " Breed=",self.breed_flag.get(),do_breed
def init_world(self):
self.world=world.World()
self.ga_init()
gui.draw_world_init(self.world, self.canvas)
def run_init(self):
self.iter=[]
self.cost=[]
self.count=0
self.best=-1e20
self.running=False
def ga_init(self):
self.pop=[]
for i in range(POPSIZE):
self.pop.append(random_gene(self.world.gene_length,self.world.maxTok))
def stop(self):
self.running=False
def run_ga(self):
if self.running:
return
self.run_init()
self.running=True
while self.count < MAXITER and self.running:
self.step();
def step_random(self):
g=random_gene(self.world.gene_length,self.world.maxTok)
fit=self.world.evaluate(g)
if fit > self.best:
text=" evaluations:"+str(self.count)+ " fitness:"+str(fit)
self.iter.append(self.count)
self.cost.append(fit)
self.best = fit
self.world.evaluate(g, True)
self.canvas.delete(Tkinter.ALL)
gui.draw_world(self.world, self.canvas,text)
# give the GUI a chance to do stuff
if (self.count % POPSIZE) == 0:
self.frame.update()
self.count += 1
def run_random(self):
self.run_init()
self.running=True
while self.count < MAXITER and self.running:
self.step_random();
def step(self):
for m in self.pop:
m.fitness=self.world.evaluate(m)
pop = sorted(self.pop, key = lambda x:x.fitness,reverse=True)
        p = pop[0]  # best individual of the current, sorted generation
if p.fitness > self.best:
text=" evaluations:"+str(self.count*POPSIZE)+ " fitness:"+str(p.fitness)
self.iter.append(self.count*POPSIZE)
self.cost.append(p.fitness)
self.best = p.fitness
self.world.evaluate(p, True)
self.canvas.delete(Tkinter.ALL)
gui.draw_world(self.world, self.canvas,text)
# give the GUI a chance to do stuff
self.frame.update()
self.pop = breedPopulation(pop);
self.count += 1
def plot(self):
matplotlib.pyplot.plot(self.iter,self.cost)
if __name__ == '__main__':
run=Run()
Tkinter.mainloop()
| gpl-2.0 |
maropu/spark | python/pyspark/pandas/sql_processor.py | 15 | 10963 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import _string # type: ignore
from typing import Any, Dict, Optional # noqa: F401 (SPARK-34943)
import inspect
import pandas as pd
from pyspark.sql import SparkSession, DataFrame as SDataFrame # noqa: F401 (SPARK-34943)
from pyspark import pandas as ps # For running doctests and reference resolution in PyCharm.
from pyspark.pandas.utils import default_session
from pyspark.pandas.frame import DataFrame
from pyspark.pandas.series import Series
__all__ = ["sql"]
from builtins import globals as builtin_globals
from builtins import locals as builtin_locals
def sql(
query: str,
globals: Optional[Dict[str, Any]] = None,
locals: Optional[Dict[str, Any]] = None,
**kwargs: Any
) -> DataFrame:
"""
Execute a SQL query and return the result as a pandas-on-Spark DataFrame.
This function also supports embedding Python variables (locals, globals, and parameters)
in the SQL statement by wrapping them in curly braces. See examples section for details.
In addition to the locals, globals and parameters, the function will also attempt
to determine if the program currently runs in an IPython (or Jupyter) environment
and to import the variables from this environment. The variables have the same
precedence as globals.
The following variable types are supported:
* string
* int
* float
* list, tuple, range of above types
* pandas-on-Spark DataFrame
* pandas-on-Spark Series
* pandas DataFrame
Parameters
----------
query : str
the SQL query
globals : dict, optional
the dictionary of global variables, if explicitly set by the user
locals : dict, optional
the dictionary of local variables, if explicitly set by the user
kwargs
other variables that the user may want to set manually that can be referenced in the query
Returns
-------
pandas-on-Spark DataFrame
Examples
--------
Calling a built-in SQL function.
>>> ps.sql("select * from range(10) where id > 7")
id
0 8
1 9
A query can also reference a local variable or parameter by wrapping them in curly braces:
>>> bound1 = 7
>>> ps.sql("select * from range(10) where id > {bound1} and id < {bound2}", bound2=9)
id
0 8
You can also wrap a DataFrame with curly braces to query it directly. Note that when you do
that, the indexes, if any, automatically become top level columns.
>>> mydf = ps.range(10)
>>> x = range(4)
>>> ps.sql("SELECT * from {mydf} WHERE id IN {x}")
id
0 0
1 1
2 2
3 3
Queries can also be arbitrarily nested in functions:
>>> def statement():
... mydf2 = ps.DataFrame({"x": range(2)})
... return ps.sql("SELECT * from {mydf2}")
>>> statement()
x
0 0
1 1
Mixing pandas-on-Spark and pandas DataFrames in a join operation. Note that the index is
dropped.
>>> ps.sql('''
... SELECT m1.a, m2.b
... FROM {table1} m1 INNER JOIN {table2} m2
... ON m1.key = m2.key
... ORDER BY m1.a, m2.b''',
... table1=ps.DataFrame({"a": [1,2], "key": ["a", "b"]}),
... table2=pd.DataFrame({"b": [3,4,5], "key": ["a", "b", "b"]}))
a b
0 1 3
1 2 4
2 2 5
Also, it is possible to query using Series.
>>> myser = ps.Series({'a': [1.0, 2.0, 3.0], 'b': [15.0, 30.0, 45.0]})
>>> ps.sql("SELECT * from {myser}")
0
0 [1.0, 2.0, 3.0]
1 [15.0, 30.0, 45.0]
"""
if globals is None:
globals = _get_ipython_scope()
_globals = builtin_globals() if globals is None else dict(globals)
_locals = builtin_locals() if locals is None else dict(locals)
# The default choice is the globals
_dict = dict(_globals)
# The vars:
_scope = _get_local_scope()
_dict.update(_scope)
# Then the locals
_dict.update(_locals)
# Highest order of precedence is the locals
_dict.update(kwargs)
return SQLProcessor(_dict, query, default_session()).execute()
_CAPTURE_SCOPES = 2
def _get_local_scope() -> Dict[str, Any]:
# Get 2 scopes above (_get_local_scope -> sql -> ...) to capture the vars there.
try:
return inspect.stack()[_CAPTURE_SCOPES][0].f_locals
except Exception as e:
# TODO (rxin, thunterdb): use a more narrow scope exception.
# See https://github.com/pyspark.pandas/pull/448
return {}
def _get_ipython_scope() -> Dict[str, Any]:
"""
Tries to extract the dictionary of variables if the program is running
in an IPython notebook environment.
"""
try:
from IPython import get_ipython # type: ignore
shell = get_ipython()
return shell.user_ns
except Exception as e:
# TODO (rxin, thunterdb): use a more narrow scope exception.
# See https://github.com/pyspark.pandas/pull/448
return None
# Originally from pymysql package
_escape_table = [chr(x) for x in range(128)]
_escape_table[0] = "\\0"
_escape_table[ord("\\")] = "\\\\"
_escape_table[ord("\n")] = "\\n"
_escape_table[ord("\r")] = "\\r"
_escape_table[ord("\032")] = "\\Z"
_escape_table[ord('"')] = '\\"'
_escape_table[ord("'")] = "\\'"
def escape_sql_string(value: str) -> str:
"""Escapes value without adding quotes.
>>> escape_sql_string("foo\\nbar")
'foo\\\\nbar'
>>> escape_sql_string("'abc'de")
"\\\\'abc\\\\'de"
>>> escape_sql_string('"abc"de')
'\\\\"abc\\\\"de'
"""
return value.translate(_escape_table)
class SQLProcessor(object):
def __init__(self, scope: Dict[str, Any], statement: str, session: SparkSession):
self._scope = scope
self._statement = statement
# All the temporary views created when executing this statement
# The key is the name of the variable in {}
# The value is the cached Spark Dataframe.
self._temp_views = {} # type: Dict[str, SDataFrame]
# All the other variables, converted to a normalized form.
# The normalized form is typically a string
self._cached_vars = {} # type: Dict[str, Any]
# The SQL statement after:
        # - all the dataframes have been registered as temporary views
# - all the values have been converted normalized to equivalent SQL representations
self._normalized_statement = None # type: Optional[str]
self._session = session
def execute(self) -> DataFrame:
"""
Returns a DataFrame for which the SQL statement has been executed by
the underlying SQL engine.
>>> str0 = 'abc'
>>> ps.sql("select {str0}")
abc
0 abc
>>> str1 = 'abc"abc'
>>> str2 = "abc'abc"
>>> ps.sql("select {str0}, {str1}, {str2}")
abc abc"abc abc'abc
0 abc abc"abc abc'abc
>>> strs = ['a', 'b']
>>> ps.sql("select 'a' in {strs} as cond1, 'c' in {strs} as cond2")
cond1 cond2
0 True False
"""
blocks = _string.formatter_parser(self._statement)
# TODO: use a string builder
res = ""
try:
for (pre, inner, _, _) in blocks:
var_next = "" if inner is None else self._convert(inner)
res = res + pre + var_next
self._normalized_statement = res
sdf = self._session.sql(self._normalized_statement)
finally:
for v in self._temp_views:
self._session.catalog.dropTempView(v)
return DataFrame(sdf)
def _convert(self, key: str) -> Any:
"""
Given a {} key, returns an equivalent SQL representation.
This conversion performs all the necessary escaping so that the string
returned can be directly injected into the SQL statement.
"""
# Already cached?
if key in self._cached_vars:
return self._cached_vars[key]
# Analyze:
if key not in self._scope:
raise ValueError(
"The key {} in the SQL statement was not found in global,"
" local or parameters variables".format(key)
)
var = self._scope[key]
fillin = self._convert_var(var)
self._cached_vars[key] = fillin
return fillin
def _convert_var(self, var: Any) -> Any:
"""
Converts a python object into a string that is legal SQL.
"""
if isinstance(var, (int, float)):
return str(var)
if isinstance(var, Series):
return self._convert_var(var.to_dataframe())
if isinstance(var, pd.DataFrame):
return self._convert_var(ps.DataFrame(var))
if isinstance(var, DataFrame):
df_id = "pandas_on_spark_" + str(id(var))
if df_id not in self._temp_views:
sdf = var.to_spark()
sdf.createOrReplaceTempView(df_id)
self._temp_views[df_id] = sdf
return df_id
if isinstance(var, str):
return '"' + escape_sql_string(var) + '"'
if isinstance(var, list):
return "(" + ", ".join([self._convert_var(v) for v in var]) + ")"
if isinstance(var, (tuple, range)):
return self._convert_var(list(var))
raise ValueError("Unsupported variable type {}: {}".format(type(var).__name__, str(var)))
def _test() -> None:
import os
import doctest
import sys
from pyspark.sql import SparkSession
import pyspark.pandas.sql_processor
os.chdir(os.environ["SPARK_HOME"])
globs = pyspark.pandas.sql_processor.__dict__.copy()
globs["ps"] = pyspark.pandas
spark = (
SparkSession.builder.master("local[4]")
.appName("pyspark.pandas.sql_processor tests")
.getOrCreate()
)
(failure_count, test_count) = doctest.testmod(
pyspark.pandas.sql_processor,
globs=globs,
optionflags=doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE,
)
spark.stop()
if failure_count:
sys.exit(-1)
if __name__ == "__main__":
_test()
| apache-2.0 |
i026e/PygamePiano | sounds_generator.py | 1 | 2159 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Created on Wed May 13 15:19:57 2015
@author: pavel
"""
from sys import argv
from struct import pack
import wave, numpy, os
#import matplotlib.pyplot as plt
NUM_CHANNELS = 1
SAMPLE_WIDTH = 2
FRAME_RATE = 44100
def mkdir(path):
directory = os.path.dirname(path)
if not os.path.exists(directory):
os.makedirs(directory)
def write_file(path, data):
mkdir(path)
wf = wave.open(path, 'wb')
wf.setnchannels(NUM_CHANNELS)
wf.setframerate(FRAME_RATE)
wf.setsampwidth(SAMPLE_WIDTH)
wfData = ""
for i in range(len(data)):
if NUM_CHANNELS == 1:
wfData += pack('h', data[i])
elif NUM_CHANNELS > 1:
for ch in range(NUM_CHANNELS):
wfData += pack('h', data[ch][i])
wf.writeframes(wfData)
wf.close()
def sine_wave(freq, time_ms, amp=15000):
num_frames = numpy.int(time_ms*FRAME_RATE/1000.0)
omega = 2.0*numpy.pi*freq/FRAME_RATE
x = numpy.array([omega*i for i in xrange(num_frames)])
return amp*numpy.sin(x)
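# Worked example (added for clarity): a 440 Hz tone lasting 1000 ms at the
# 44100 Hz frame rate gives num_frames = 44100 and advances the phase by
# omega = 2*pi*440/44100 (about 0.063 rad) per frame.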
def full_period_sine_wave_time(freq, appr_time_ms, amp=15000):
wave_period_ms = 1000.0 / freq #ms
repeats = round(appr_time_ms/wave_period_ms)
time_ms = repeats*wave_period_ms
return sine_wave(freq, time_ms, amp)
def full_period_sine_wave(freq, amp=15000):
num_frames = FRAME_RATE #- 1
omega = 2.0*numpy.pi*freq/FRAME_RATE
x = numpy.array([omega*i for i in xrange(num_frames)])
return amp*numpy.sin(x)
def amplify(signal, amp = 15000):
return amp*signal
def decay(signal, time_const=None, factor=3.0):
if not time_const:
time_const = 1.0*len(signal)/FRAME_RATE / factor
k = -1.0/(time_const*FRAME_RATE)
new_signal = numpy.zeros(len(signal))
for i in xrange(len(signal)):
new_signal[i] = signal[i]*numpy.exp(k*i)
return new_signal
def generate_full_period_wave(freq, path, decayed= True):
data = full_period_sine_wave(freq)
if decayed:
data = decay(data)
write_file(path, data)
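# Illustrative sketch (hypothetical folder and file-name pattern, not part of
# the original script): pre-render one decayed wav file per key frequency
# using the helpers defined above.
def generate_keyboard(freqs, folder="sounds"):
    for i, freq in enumerate(freqs):
        path = "%s/key_%02d.wav" % (folder, i)
        generate_full_period_wave(freq, path, decayed=True)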
def main(arg):
pass
if __name__ == "__main__":
main(argv)
| mit |
lenovor/scikit-learn | sklearn/decomposition/tests/test_factor_analysis.py | 222 | 3055 | # Author: Christian Osendorfer <[email protected]>
# Alexandre Gramfort <[email protected]>
# Licence: BSD3
import numpy as np
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_less
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils import ConvergenceWarning
from sklearn.decomposition import FactorAnalysis
def test_factor_analysis():
# Test FactorAnalysis ability to recover the data covariance structure
rng = np.random.RandomState(0)
n_samples, n_features, n_components = 20, 5, 3
# Some random settings for the generative model
W = rng.randn(n_components, n_features)
# latent variable of dim 3, 20 of it
h = rng.randn(n_samples, n_components)
# using gamma to model different noise variance
# per component
noise = rng.gamma(1, size=n_features) * rng.randn(n_samples, n_features)
# generate observations
# wlog, mean is 0
X = np.dot(h, W) + noise
assert_raises(ValueError, FactorAnalysis, svd_method='foo')
fa_fail = FactorAnalysis()
fa_fail.svd_method = 'foo'
assert_raises(ValueError, fa_fail.fit, X)
fas = []
for method in ['randomized', 'lapack']:
fa = FactorAnalysis(n_components=n_components, svd_method=method)
fa.fit(X)
fas.append(fa)
X_t = fa.transform(X)
assert_equal(X_t.shape, (n_samples, n_components))
assert_almost_equal(fa.loglike_[-1], fa.score_samples(X).sum())
assert_almost_equal(fa.score_samples(X).mean(), fa.score(X))
diff = np.all(np.diff(fa.loglike_))
        assert_greater(diff, 0., 'Log likelihood did not increase')
# Sample Covariance
scov = np.cov(X, rowvar=0., bias=1.)
# Model Covariance
mcov = fa.get_covariance()
diff = np.sum(np.abs(scov - mcov)) / W.size
assert_less(diff, 0.1, "Mean absolute difference is %f" % diff)
fa = FactorAnalysis(n_components=n_components,
noise_variance_init=np.ones(n_features))
assert_raises(ValueError, fa.fit, X[:, :2])
f = lambda x, y: np.abs(getattr(x, y)) # sign will not be equal
fa1, fa2 = fas
for attr in ['loglike_', 'components_', 'noise_variance_']:
assert_almost_equal(f(fa1, attr), f(fa2, attr))
fa1.max_iter = 1
fa1.verbose = True
assert_warns(ConvergenceWarning, fa1.fit, X)
# Test get_covariance and get_precision with n_components == n_features
# with n_components < n_features and with n_components == 0
for n_components in [0, 2, X.shape[1]]:
fa.n_components = n_components
fa.fit(X)
cov = fa.get_covariance()
precision = fa.get_precision()
assert_array_almost_equal(np.dot(cov, precision),
np.eye(X.shape[1]), 12)
| bsd-3-clause |
Eric-Gaudiello/tensorflow_dev | tensorflow_home/tensorflow_venv/lib/python3.4/site-packages/numpy/lib/recfunctions.py | 148 | 35012 | """
Collection of utilities to manipulate structured arrays.
Most of these functions were initially implemented by John Hunter for
matplotlib. They have been rewritten and extended for convenience.
"""
from __future__ import division, absolute_import, print_function
import sys
import itertools
import numpy as np
import numpy.ma as ma
from numpy import ndarray, recarray
from numpy.ma import MaskedArray
from numpy.ma.mrecords import MaskedRecords
from numpy.lib._iotools import _is_string_like
from numpy.compat import basestring
if sys.version_info[0] < 3:
from future_builtins import zip
_check_fill_value = np.ma.core._check_fill_value
__all__ = [
'append_fields', 'drop_fields', 'find_duplicates',
'get_fieldstructure', 'join_by', 'merge_arrays',
'rec_append_fields', 'rec_drop_fields', 'rec_join',
'recursive_fill_fields', 'rename_fields', 'stack_arrays',
]
def recursive_fill_fields(input, output):
"""
Fills fields from output with fields from input,
with support for nested structures.
Parameters
----------
input : ndarray
Input array.
output : ndarray
Output array.
Notes
-----
* `output` should be at least the same size as `input`
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)])
>>> b = np.zeros((3,), dtype=a.dtype)
>>> rfn.recursive_fill_fields(a, b)
array([(1, 10.0), (2, 20.0), (0, 0.0)],
dtype=[('A', '<i4'), ('B', '<f8')])
"""
newdtype = output.dtype
for field in newdtype.names:
try:
current = input[field]
except ValueError:
continue
if current.dtype.names:
recursive_fill_fields(current, output[field])
else:
output[field][:len(current)] = current
return output
def get_names(adtype):
"""
Returns the field names of the input datatype as a tuple.
Parameters
----------
adtype : dtype
Input datatype
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.get_names(np.empty((1,), dtype=int)) is None
True
>>> rfn.get_names(np.empty((1,), dtype=[('A',int), ('B', float)]))
('A', 'B')
>>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])])
>>> rfn.get_names(adtype)
('a', ('b', ('ba', 'bb')))
"""
listnames = []
names = adtype.names
for name in names:
current = adtype[name]
if current.names:
listnames.append((name, tuple(get_names(current))))
else:
listnames.append(name)
return tuple(listnames) or None
def get_names_flat(adtype):
"""
    Returns the field names of the input datatype as a tuple. Nested
    structures are flattened beforehand.
Parameters
----------
adtype : dtype
Input datatype
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.get_names_flat(np.empty((1,), dtype=int)) is None
True
>>> rfn.get_names_flat(np.empty((1,), dtype=[('A',int), ('B', float)]))
('A', 'B')
>>> adtype = np.dtype([('a', int), ('b', [('ba', int), ('bb', int)])])
>>> rfn.get_names_flat(adtype)
('a', 'b', 'ba', 'bb')
"""
listnames = []
names = adtype.names
for name in names:
listnames.append(name)
current = adtype[name]
if current.names:
listnames.extend(get_names_flat(current))
return tuple(listnames) or None
def flatten_descr(ndtype):
"""
Flatten a structured data-type description.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = np.dtype([('a', '<i4'), ('b', [('ba', '<f8'), ('bb', '<i4')])])
>>> rfn.flatten_descr(ndtype)
(('a', dtype('int32')), ('ba', dtype('float64')), ('bb', dtype('int32')))
"""
names = ndtype.names
if names is None:
return ndtype.descr
else:
descr = []
for field in names:
(typ, _) = ndtype.fields[field]
if typ.names:
descr.extend(flatten_descr(typ))
else:
descr.append((field, typ))
return tuple(descr)
def zip_descr(seqarrays, flatten=False):
"""
Combine the dtype description of a series of arrays.
Parameters
----------
seqarrays : sequence of arrays
Sequence of arrays
flatten : {boolean}, optional
Whether to collapse nested descriptions.
"""
newdtype = []
if flatten:
for a in seqarrays:
newdtype.extend(flatten_descr(a.dtype))
else:
for a in seqarrays:
current = a.dtype
names = current.names or ()
if len(names) > 1:
newdtype.append(('', current.descr))
else:
newdtype.extend(current.descr)
return np.dtype(newdtype).descr
def get_fieldstructure(adtype, lastname=None, parents=None,):
"""
Returns a dictionary with fields indexing lists of their parent fields.
This function is used to simplify access to fields nested in other fields.
Parameters
----------
adtype : np.dtype
Input datatype
lastname : optional
Last processed field name (used internally during recursion).
parents : dictionary
        Dictionary of parent fields (used internally during recursion).
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = np.dtype([('A', int),
... ('B', [('BA', int),
... ('BB', [('BBA', int), ('BBB', int)])])])
>>> rfn.get_fieldstructure(ndtype)
... # XXX: possible regression, order of BBA and BBB is swapped
{'A': [], 'B': [], 'BA': ['B'], 'BB': ['B'], 'BBA': ['B', 'BB'], 'BBB': ['B', 'BB']}
"""
if parents is None:
parents = {}
names = adtype.names
for name in names:
current = adtype[name]
if current.names:
if lastname:
parents[name] = [lastname, ]
else:
parents[name] = []
parents.update(get_fieldstructure(current, name, parents))
else:
lastparent = [_ for _ in (parents.get(lastname, []) or [])]
if lastparent:
lastparent.append(lastname)
elif lastname:
lastparent = [lastname, ]
parents[name] = lastparent or []
return parents or None
def _izip_fields_flat(iterable):
"""
Returns an iterator of concatenated fields from a sequence of arrays,
collapsing any nested structure.
"""
for element in iterable:
if isinstance(element, np.void):
for f in _izip_fields_flat(tuple(element)):
yield f
else:
yield element
def _izip_fields(iterable):
"""
Returns an iterator of concatenated fields from a sequence of arrays.
"""
for element in iterable:
if (hasattr(element, '__iter__') and
not isinstance(element, basestring)):
for f in _izip_fields(element):
yield f
elif isinstance(element, np.void) and len(tuple(element)) == 1:
for f in _izip_fields(element):
yield f
else:
yield element
def izip_records(seqarrays, fill_value=None, flatten=True):
"""
Returns an iterator of concatenated items from a sequence of arrays.
Parameters
----------
seqarrays : sequence of arrays
Sequence of arrays.
fill_value : {None, integer}
Value used to pad shorter iterables.
    flatten : {True, False}, optional
        Whether to collapse nested fields while iterating.
"""
# OK, that's a complete ripoff from Python2.6 itertools.izip_longest
def sentinel(counter=([fill_value] * (len(seqarrays) - 1)).pop):
"Yields the fill_value or raises IndexError"
yield counter()
#
fillers = itertools.repeat(fill_value)
iters = [itertools.chain(it, sentinel(), fillers) for it in seqarrays]
# Should we flatten the items, or just use a nested approach
if flatten:
zipfunc = _izip_fields_flat
else:
zipfunc = _izip_fields
#
try:
for tup in zip(*iters):
yield tuple(zipfunc(tup))
except IndexError:
pass
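# Note (added for clarity): the sentinel/chain construction above mirrors
# itertools.izip_longest -- each input iterator is padded with fill_value,
# and the shared sentinel raises IndexError once every real iterator is
# exhausted, which terminates the zip loop.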
def _fix_output(output, usemask=True, asrecarray=False):
"""
Private function: return a recarray, a ndarray, a MaskedArray
or a MaskedRecords depending on the input parameters
"""
if not isinstance(output, MaskedArray):
usemask = False
if usemask:
if asrecarray:
output = output.view(MaskedRecords)
else:
output = ma.filled(output)
if asrecarray:
output = output.view(recarray)
return output
def _fix_defaults(output, defaults=None):
"""
Update the fill_value and masked data of `output`
from the default given in a dictionary defaults.
"""
names = output.dtype.names
(data, mask, fill_value) = (output.data, output.mask, output.fill_value)
for (k, v) in (defaults or {}).items():
if k in names:
fill_value[k] = v
data[k][mask[k]] = v
return output
def merge_arrays(seqarrays, fill_value=-1, flatten=False,
usemask=False, asrecarray=False):
"""
Merge arrays field by field.
Parameters
----------
seqarrays : sequence of ndarrays
Sequence of arrays
fill_value : {float}, optional
Filling value used to pad missing data on the shorter arrays.
flatten : {False, True}, optional
Whether to collapse nested fields.
usemask : {False, True}, optional
Whether to return a masked array or not.
asrecarray : {False, True}, optional
Whether to return a recarray (MaskedRecords) or not.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])))
masked_array(data = [(1, 10.0) (2, 20.0) (--, 30.0)],
mask = [(False, False) (False, False) (True, False)],
fill_value = (999999, 1e+20),
dtype = [('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]), np.array([10., 20., 30.])),
... usemask=False)
array([(1, 10.0), (2, 20.0), (-1, 30.0)],
dtype=[('f0', '<i4'), ('f1', '<f8')])
>>> rfn.merge_arrays((np.array([1, 2]).view([('a', int)]),
... np.array([10., 20., 30.])),
... usemask=False, asrecarray=True)
rec.array([(1, 10.0), (2, 20.0), (-1, 30.0)],
dtype=[('a', '<i4'), ('f1', '<f8')])
Notes
-----
    * Without a mask, the missing value will be filled with something
      depending on its corresponding type:
-1 for integers
-1.0 for floating point numbers
'-' for characters
'-1' for strings
True for boolean values
* XXX: I just obtained these values empirically
"""
# Only one item in the input sequence ?
if (len(seqarrays) == 1):
seqarrays = np.asanyarray(seqarrays[0])
# Do we have a single ndarray as input ?
if isinstance(seqarrays, (ndarray, np.void)):
seqdtype = seqarrays.dtype
if (not flatten) or \
(zip_descr((seqarrays,), flatten=True) == seqdtype.descr):
            # Minimal processing needed: just make sure everything's a-ok
seqarrays = seqarrays.ravel()
# Make sure we have named fields
if not seqdtype.names:
seqdtype = [('', seqdtype)]
# Find what type of array we must return
if usemask:
if asrecarray:
seqtype = MaskedRecords
else:
seqtype = MaskedArray
elif asrecarray:
seqtype = recarray
else:
seqtype = ndarray
return seqarrays.view(dtype=seqdtype, type=seqtype)
else:
seqarrays = (seqarrays,)
else:
# Make sure we have arrays in the input sequence
seqarrays = [np.asanyarray(_m) for _m in seqarrays]
# Find the sizes of the inputs and their maximum
sizes = tuple(a.size for a in seqarrays)
maxlength = max(sizes)
# Get the dtype of the output (flattening if needed)
newdtype = zip_descr(seqarrays, flatten=flatten)
# Initialize the sequences for data and mask
seqdata = []
seqmask = []
# If we expect some kind of MaskedArray, make a special loop.
if usemask:
for (a, n) in zip(seqarrays, sizes):
nbmissing = (maxlength - n)
# Get the data and mask
data = a.ravel().__array__()
mask = ma.getmaskarray(a).ravel()
# Get the filling value (if needed)
if nbmissing:
fval = _check_fill_value(fill_value, a.dtype)
if isinstance(fval, (ndarray, np.void)):
if len(fval.dtype) == 1:
fval = fval.item()[0]
fmsk = True
else:
fval = np.array(fval, dtype=a.dtype, ndmin=1)
fmsk = np.ones((1,), dtype=mask.dtype)
else:
fval = None
fmsk = True
# Store an iterator padding the input to the expected length
seqdata.append(itertools.chain(data, [fval] * nbmissing))
seqmask.append(itertools.chain(mask, [fmsk] * nbmissing))
# Create an iterator for the data
data = tuple(izip_records(seqdata, flatten=flatten))
output = ma.array(np.fromiter(data, dtype=newdtype, count=maxlength),
mask=list(izip_records(seqmask, flatten=flatten)))
if asrecarray:
output = output.view(MaskedRecords)
else:
# Same as before, without the mask we don't need...
for (a, n) in zip(seqarrays, sizes):
nbmissing = (maxlength - n)
data = a.ravel().__array__()
if nbmissing:
fval = _check_fill_value(fill_value, a.dtype)
if isinstance(fval, (ndarray, np.void)):
if len(fval.dtype) == 1:
fval = fval.item()[0]
else:
fval = np.array(fval, dtype=a.dtype, ndmin=1)
else:
fval = None
seqdata.append(itertools.chain(data, [fval] * nbmissing))
output = np.fromiter(tuple(izip_records(seqdata, flatten=flatten)),
dtype=newdtype, count=maxlength)
if asrecarray:
output = output.view(recarray)
# And we're done...
return output
def drop_fields(base, drop_names, usemask=True, asrecarray=False):
"""
Return a new array with fields in `drop_names` dropped.
Nested fields are supported.
Parameters
----------
base : array
Input array
drop_names : string or sequence
String or sequence of strings corresponding to the names of the
fields to drop.
usemask : {False, True}, optional
Whether to return a masked array or not.
asrecarray : string or sequence, optional
Whether to return a recarray or a mrecarray (`asrecarray=True`) or
a plain ndarray or masked array with flexible dtype. The default
is False.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, 3.0)), (4, (5, 6.0))],
... dtype=[('a', int), ('b', [('ba', float), ('bb', int)])])
>>> rfn.drop_fields(a, 'a')
array([((2.0, 3),), ((5.0, 6),)],
dtype=[('b', [('ba', '<f8'), ('bb', '<i4')])])
>>> rfn.drop_fields(a, 'ba')
array([(1, (3,)), (4, (6,))],
dtype=[('a', '<i4'), ('b', [('bb', '<i4')])])
>>> rfn.drop_fields(a, ['ba', 'bb'])
array([(1,), (4,)],
dtype=[('a', '<i4')])
"""
if _is_string_like(drop_names):
drop_names = [drop_names, ]
else:
drop_names = set(drop_names)
def _drop_descr(ndtype, drop_names):
names = ndtype.names
newdtype = []
for name in names:
current = ndtype[name]
if name in drop_names:
continue
if current.names:
descr = _drop_descr(current, drop_names)
if descr:
newdtype.append((name, descr))
else:
newdtype.append((name, current))
return newdtype
newdtype = _drop_descr(base.dtype, drop_names)
if not newdtype:
return None
output = np.empty(base.shape, dtype=newdtype)
output = recursive_fill_fields(base, output)
return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
def rec_drop_fields(base, drop_names):
"""
Returns a new numpy.recarray with fields in `drop_names` dropped.
"""
return drop_fields(base, drop_names, usemask=False, asrecarray=True)
def rename_fields(base, namemapper):
"""
Rename the fields from a flexible-datatype ndarray or recarray.
Nested fields are supported.
Parameters
----------
base : ndarray
Input array whose fields must be modified.
namemapper : dictionary
Dictionary mapping old field names to their new version.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> a = np.array([(1, (2, [3.0, 30.])), (4, (5, [6.0, 60.]))],
... dtype=[('a', int),('b', [('ba', float), ('bb', (float, 2))])])
>>> rfn.rename_fields(a, {'a':'A', 'bb':'BB'})
array([(1, (2.0, [3.0, 30.0])), (4, (5.0, [6.0, 60.0]))],
dtype=[('A', '<i4'), ('b', [('ba', '<f8'), ('BB', '<f8', 2)])])
"""
def _recursive_rename_fields(ndtype, namemapper):
newdtype = []
for name in ndtype.names:
newname = namemapper.get(name, name)
current = ndtype[name]
if current.names:
newdtype.append(
(newname, _recursive_rename_fields(current, namemapper))
)
else:
newdtype.append((newname, current))
return newdtype
newdtype = _recursive_rename_fields(base.dtype, namemapper)
return base.view(newdtype)
def append_fields(base, names, data, dtypes=None,
fill_value=-1, usemask=True, asrecarray=False):
"""
Add new fields to an existing array.
The names of the fields are given with the `names` arguments,
the corresponding values with the `data` arguments.
If a single field is appended, `names`, `data` and `dtypes` do not have
to be lists but just values.
Parameters
----------
base : array
Input array to extend.
names : string, sequence
String or sequence of strings corresponding to the names
of the new fields.
data : array or sequence of arrays
Array or sequence of arrays storing the fields to add to the base.
dtypes : sequence of datatypes, optional
Datatype or sequence of datatypes.
If None, the datatypes are estimated from the `data`.
fill_value : {float}, optional
Filling value used to pad missing data on the shorter arrays.
usemask : {False, True}, optional
Whether to return a masked array or not.
asrecarray : {False, True}, optional
Whether to return a recarray (MaskedRecords) or not.
"""
# Check the names
if isinstance(names, (tuple, list)):
if len(names) != len(data):
msg = "The number of arrays does not match the number of names"
raise ValueError(msg)
elif isinstance(names, basestring):
names = [names, ]
data = [data, ]
#
if dtypes is None:
data = [np.array(a, copy=False, subok=True) for a in data]
data = [a.view([(name, a.dtype)]) for (name, a) in zip(names, data)]
else:
if not isinstance(dtypes, (tuple, list)):
dtypes = [dtypes, ]
if len(data) != len(dtypes):
if len(dtypes) == 1:
dtypes = dtypes * len(data)
else:
msg = "The dtypes argument must be None, a dtype, or a list."
raise ValueError(msg)
data = [np.array(a, copy=False, subok=True, dtype=d).view([(n, d)])
for (a, n, d) in zip(data, names, dtypes)]
#
base = merge_arrays(base, usemask=usemask, fill_value=fill_value)
if len(data) > 1:
data = merge_arrays(data, flatten=True, usemask=usemask,
fill_value=fill_value)
else:
data = data.pop()
#
output = ma.masked_all(max(len(base), len(data)),
dtype=base.dtype.descr + data.dtype.descr)
output = recursive_fill_fields(base, output)
output = recursive_fill_fields(data, output)
#
return _fix_output(output, usemask=usemask, asrecarray=asrecarray)
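# Example sketch (assumed interactive usage, following the ``rfn`` import
# convention of the doctests above; exact dtype byte order depends on the
# platform):
#   >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('A', int), ('B', float)])
#   >>> rfn.append_fields(a, 'C', data=[100, 200], usemask=False)
#   array([(1, 10.0, 100), (2, 20.0, 200)],
#         dtype=[('A', '<i8'), ('B', '<f8'), ('C', '<i8')])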
def rec_append_fields(base, names, data, dtypes=None):
"""
Add new fields to an existing array.
The names of the fields are given with the `names` arguments,
the corresponding values with the `data` arguments.
If a single field is appended, `names`, `data` and `dtypes` do not have
to be lists but just values.
Parameters
----------
base : array
Input array to extend.
names : string, sequence
String or sequence of strings corresponding to the names
of the new fields.
data : array or sequence of arrays
Array or sequence of arrays storing the fields to add to the base.
dtypes : sequence of datatypes, optional
Datatype or sequence of datatypes.
If None, the datatypes are estimated from the `data`.
See Also
--------
append_fields
Returns
-------
appended_array : np.recarray
"""
return append_fields(base, names, data=data, dtypes=dtypes,
asrecarray=True, usemask=False)
def stack_arrays(arrays, defaults=None, usemask=True, asrecarray=False,
autoconvert=False):
"""
    Superposes arrays field by field.
Parameters
----------
arrays : array or sequence
Sequence of input arrays.
defaults : dictionary, optional
Dictionary mapping field names to the corresponding default values.
usemask : {True, False}, optional
Whether to return a MaskedArray (or MaskedRecords is
`asrecarray==True`) or a ndarray.
asrecarray : {False, True}, optional
Whether to return a recarray (or MaskedRecords if `usemask==True`)
or just a flexible-type ndarray.
autoconvert : {False, True}, optional
Whether automatically cast the type of the field to the maximum.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> x = np.array([1, 2,])
>>> rfn.stack_arrays(x) is x
True
>>> z = np.array([('A', 1), ('B', 2)], dtype=[('A', '|S3'), ('B', float)])
>>> zz = np.array([('a', 10., 100.), ('b', 20., 200.), ('c', 30., 300.)],
... dtype=[('A', '|S3'), ('B', float), ('C', float)])
>>> test = rfn.stack_arrays((z,zz))
>>> test
masked_array(data = [('A', 1.0, --) ('B', 2.0, --) ('a', 10.0, 100.0) ('b', 20.0, 200.0)
('c', 30.0, 300.0)],
mask = [(False, False, True) (False, False, True) (False, False, False)
(False, False, False) (False, False, False)],
fill_value = ('N/A', 1e+20, 1e+20),
dtype = [('A', '|S3'), ('B', '<f8'), ('C', '<f8')])
"""
if isinstance(arrays, ndarray):
return arrays
elif len(arrays) == 1:
return arrays[0]
seqarrays = [np.asanyarray(a).ravel() for a in arrays]
nrecords = [len(a) for a in seqarrays]
ndtype = [a.dtype for a in seqarrays]
fldnames = [d.names for d in ndtype]
#
dtype_l = ndtype[0]
newdescr = dtype_l.descr
names = [_[0] for _ in newdescr]
for dtype_n in ndtype[1:]:
for descr in dtype_n.descr:
name = descr[0] or ''
if name not in names:
newdescr.append(descr)
names.append(name)
else:
nameidx = names.index(name)
current_descr = newdescr[nameidx]
if autoconvert:
if np.dtype(descr[1]) > np.dtype(current_descr[-1]):
current_descr = list(current_descr)
current_descr[-1] = descr[1]
newdescr[nameidx] = tuple(current_descr)
elif descr[1] != current_descr[-1]:
raise TypeError("Incompatible type '%s' <> '%s'" %
(dict(newdescr)[name], descr[1]))
# Only one field: use concatenate
if len(newdescr) == 1:
output = ma.concatenate(seqarrays)
else:
#
output = ma.masked_all((np.sum(nrecords),), newdescr)
offset = np.cumsum(np.r_[0, nrecords])
seen = []
for (a, n, i, j) in zip(seqarrays, fldnames, offset[:-1], offset[1:]):
names = a.dtype.names
if names is None:
output['f%i' % len(seen)][i:j] = a
else:
for name in n:
output[name][i:j] = a[name]
if name not in seen:
seen.append(name)
#
return _fix_output(_fix_defaults(output, defaults),
usemask=usemask, asrecarray=asrecarray)
def find_duplicates(a, key=None, ignoremask=True, return_index=False):
"""
Find the duplicates in a structured array along a given key
Parameters
----------
a : array-like
Input array
key : {string, None}, optional
Name of the fields along which to check the duplicates.
If None, the search is performed by records
ignoremask : {True, False}, optional
Whether masked data should be discarded or considered as duplicates.
return_index : {False, True}, optional
Whether to return the indices of the duplicated values.
Examples
--------
>>> from numpy.lib import recfunctions as rfn
>>> ndtype = [('a', int)]
>>> a = np.ma.array([1, 1, 1, 2, 2, 3, 3],
... mask=[0, 0, 1, 0, 0, 0, 1]).view(ndtype)
>>> rfn.find_duplicates(a, ignoremask=True, return_index=True)
... # XXX: judging by the output, the ignoremask flag has no effect
"""
a = np.asanyarray(a).ravel()
# Get a dictionary of fields
fields = get_fieldstructure(a.dtype)
# Get the sorting data (by selecting the corresponding field)
base = a
if key:
for f in fields[key]:
base = base[f]
base = base[key]
# Get the sorting indices and the sorted data
sortidx = base.argsort()
sortedbase = base[sortidx]
sorteddata = sortedbase.filled()
# Compare the sorting data
flag = (sorteddata[:-1] == sorteddata[1:])
# If masked data must be ignored, set the flag to false where needed
if ignoremask:
sortedmask = sortedbase.recordmask
flag[sortedmask[1:]] = False
flag = np.concatenate(([False], flag))
# We need to take the point on the left as well (else we're missing it)
flag[:-1] = flag[:-1] + flag[1:]
duplicates = a[sortidx][flag]
if return_index:
return (duplicates, sortidx[flag])
else:
return duplicates
def join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
defaults=None, usemask=True, asrecarray=False):
"""
Join arrays `r1` and `r2` on key `key`.
The key should be either a string or a sequence of string corresponding
to the fields used to join the array. An exception is raised if the
`key` field cannot be found in the two input arrays. Neither `r1` nor
`r2` should have any duplicates along `key`: the presence of duplicates
will make the output quite unreliable. Note that duplicates are not
looked for by the algorithm.
Parameters
----------
key : {string, sequence}
A string or a sequence of strings corresponding to the fields used
for comparison.
r1, r2 : arrays
Structured arrays.
jointype : {'inner', 'outer', 'leftouter'}, optional
If 'inner', returns the elements common to both r1 and r2.
If 'outer', returns the common elements as well as the elements of
        r1 not in r2 and the elements of r2 not in r1.
If 'leftouter', returns the common elements and the elements of r1
not in r2.
r1postfix : string, optional
String appended to the names of the fields of r1 that are present
in r2 but absent of the key.
r2postfix : string, optional
String appended to the names of the fields of r2 that are present
in r1 but absent of the key.
defaults : {dictionary}, optional
Dictionary mapping field names to the corresponding default values.
usemask : {True, False}, optional
Whether to return a MaskedArray (or MaskedRecords is
`asrecarray==True`) or a ndarray.
asrecarray : {False, True}, optional
Whether to return a recarray (or MaskedRecords if `usemask==True`)
or just a flexible-type ndarray.
Notes
-----
* The output is sorted along the key.
* A temporary array is formed by dropping the fields not in the key for
the two arrays and concatenating the result. This array is then
sorted, and the common entries selected. The output is constructed by
filling the fields with the selected entries. Matching is not
preserved if there are some duplicates...
"""
# Check jointype
if jointype not in ('inner', 'outer', 'leftouter'):
raise ValueError(
"The 'jointype' argument should be in 'inner', "
"'outer' or 'leftouter' (got '%s' instead)" % jointype
)
# If we have a single key, put it in a tuple
if isinstance(key, basestring):
key = (key,)
# Check the keys
for name in key:
if name not in r1.dtype.names:
raise ValueError('r1 does not have key field %s' % name)
if name not in r2.dtype.names:
raise ValueError('r2 does not have key field %s' % name)
# Make sure we work with ravelled arrays
r1 = r1.ravel()
r2 = r2.ravel()
# Fixme: nb2 below is never used. Commenting out for pyflakes.
# (nb1, nb2) = (len(r1), len(r2))
nb1 = len(r1)
(r1names, r2names) = (r1.dtype.names, r2.dtype.names)
# Check the names for collision
if (set.intersection(set(r1names), set(r2names)).difference(key) and
not (r1postfix or r2postfix)):
msg = "r1 and r2 contain common names, r1postfix and r2postfix "
msg += "can't be empty"
raise ValueError(msg)
# Make temporary arrays of just the keys
r1k = drop_fields(r1, [n for n in r1names if n not in key])
r2k = drop_fields(r2, [n for n in r2names if n not in key])
# Concatenate the two arrays for comparison
aux = ma.concatenate((r1k, r2k))
idx_sort = aux.argsort(order=key)
aux = aux[idx_sort]
#
# Get the common keys
flag_in = ma.concatenate(([False], aux[1:] == aux[:-1]))
flag_in[:-1] = flag_in[1:] + flag_in[:-1]
idx_in = idx_sort[flag_in]
idx_1 = idx_in[(idx_in < nb1)]
idx_2 = idx_in[(idx_in >= nb1)] - nb1
(r1cmn, r2cmn) = (len(idx_1), len(idx_2))
if jointype == 'inner':
(r1spc, r2spc) = (0, 0)
elif jointype == 'outer':
idx_out = idx_sort[~flag_in]
idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)]))
idx_2 = np.concatenate((idx_2, idx_out[(idx_out >= nb1)] - nb1))
(r1spc, r2spc) = (len(idx_1) - r1cmn, len(idx_2) - r2cmn)
elif jointype == 'leftouter':
idx_out = idx_sort[~flag_in]
idx_1 = np.concatenate((idx_1, idx_out[(idx_out < nb1)]))
(r1spc, r2spc) = (len(idx_1) - r1cmn, 0)
# Select the entries from each input
(s1, s2) = (r1[idx_1], r2[idx_2])
#
# Build the new description of the output array .......
# Start with the key fields
ndtype = [list(_) for _ in r1k.dtype.descr]
# Add the other fields
ndtype.extend(list(_) for _ in r1.dtype.descr if _[0] not in key)
# Find the new list of names (it may be different from r1names)
names = list(_[0] for _ in ndtype)
for desc in r2.dtype.descr:
desc = list(desc)
name = desc[0]
# Have we seen the current name already ?
if name in names:
nameidx = ndtype.index(desc)
current = ndtype[nameidx]
# The current field is part of the key: take the largest dtype
if name in key:
current[-1] = max(desc[1], current[-1])
# The current field is not part of the key: add the suffixes
else:
current[0] += r1postfix
desc[0] += r2postfix
ndtype.insert(nameidx + 1, desc)
#... we haven't: just add the description to the current list
else:
            names.append(desc[0])  # append the whole name, not its characters
ndtype.append(desc)
# Revert the elements to tuples
ndtype = [tuple(_) for _ in ndtype]
# Find the largest nb of common fields :
# r1cmn and r2cmn should be equal, but...
cmn = max(r1cmn, r2cmn)
# Construct an empty array
output = ma.masked_all((cmn + r1spc + r2spc,), dtype=ndtype)
names = output.dtype.names
for f in r1names:
selected = s1[f]
if f not in names or (f in r2names and not r2postfix and f not in key):
f += r1postfix
current = output[f]
current[:r1cmn] = selected[:r1cmn]
if jointype in ('outer', 'leftouter'):
current[cmn:cmn + r1spc] = selected[r1cmn:]
for f in r2names:
selected = s2[f]
if f not in names or (f in r1names and not r1postfix and f not in key):
f += r2postfix
current = output[f]
current[:r2cmn] = selected[:r2cmn]
if (jointype == 'outer') and r2spc:
current[-r2spc:] = selected[r2cmn:]
# Sort and finalize the output
output.sort(order=key)
kwargs = dict(usemask=usemask, asrecarray=asrecarray)
return _fix_output(_fix_defaults(output, defaults), **kwargs)
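# Example sketch (assumed interactive usage, following the ``rfn`` import
# convention of the doctests above; dtype byte order may differ by platform):
#   >>> a = np.array([(1, 10.), (2, 20.)], dtype=[('key', int), ('x', float)])
#   >>> b = np.array([(1, 30.), (3, 40.)], dtype=[('key', int), ('y', float)])
#   >>> rfn.join_by('key', a, b, jointype='inner', usemask=False)
#   array([(1, 10.0, 30.0)],
#         dtype=[('key', '<i8'), ('x', '<f8'), ('y', '<f8')])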
def rec_join(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
defaults=None):
"""
Join arrays `r1` and `r2` on keys.
Alternative to join_by, that always returns a np.recarray.
See Also
--------
join_by : equivalent function
"""
kwargs = dict(jointype=jointype, r1postfix=r1postfix, r2postfix=r2postfix,
defaults=defaults, usemask=False, asrecarray=True)
return join_by(key, r1, r2, **kwargs)
| gpl-3.0 |
lmallin/coverage_test | python_venv/lib/python2.7/site-packages/pandas/core/indexes/range.py | 6 | 21868 | from sys import getsizeof
import operator
import numpy as np
from pandas._libs import index as libindex
from pandas.core.dtypes.common import (
is_integer,
is_scalar,
is_int64_dtype)
from pandas import compat
from pandas.compat import lrange, range
from pandas.compat.numpy import function as nv
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.util._decorators import Appender, cache_readonly
import pandas.core.indexes.base as ibase
from pandas.core.indexes.numeric import Int64Index
class RangeIndex(Int64Index):
"""
Immutable Index implementing a monotonic range. RangeIndex is a
memory-saving special case of Int64Index limited to representing
monotonic ranges.
Parameters
----------
start : int (default: 0), or other RangeIndex instance.
If int and "stop" is not given, interpreted as "stop" instead.
stop : int (default: 0)
step : int (default: 1)
name : object, optional
Name to be stored in the index
copy : bool, default False
Unused, accepted for homogeneity with other index types.
"""
_typ = 'rangeindex'
_engine_type = libindex.Int64Engine
def __new__(cls, start=None, stop=None, step=None, name=None, dtype=None,
fastpath=False, copy=False, **kwargs):
if fastpath:
return cls._simple_new(start, stop, step, name=name)
cls._validate_dtype(dtype)
# RangeIndex
if isinstance(start, RangeIndex):
if name is None:
name = start.name
return cls._simple_new(name=name,
**dict(start._get_data_as_items()))
# validate the arguments
def _ensure_int(value, field):
msg = ("RangeIndex(...) must be called with integers,"
" {value} was passed for {field}")
if not is_scalar(value):
raise TypeError(msg.format(value=type(value).__name__,
field=field))
try:
new_value = int(value)
assert(new_value == value)
except (TypeError, ValueError, AssertionError):
raise TypeError(msg.format(value=type(value).__name__,
field=field))
return new_value
if start is None and stop is None and step is None:
msg = "RangeIndex(...) must be called with integers"
raise TypeError(msg)
elif start is None:
start = 0
else:
start = _ensure_int(start, 'start')
if stop is None:
stop = start
start = 0
else:
stop = _ensure_int(stop, 'stop')
if step is None:
step = 1
elif step == 0:
raise ValueError("Step must not be zero")
else:
step = _ensure_int(step, 'step')
return cls._simple_new(start, stop, step, name)
@classmethod
def from_range(cls, data, name=None, dtype=None, **kwargs):
""" create RangeIndex from a range (py3), or xrange (py2) object """
if not isinstance(data, range):
raise TypeError(
'{0}(...) must be called with object coercible to a '
'range, {1} was passed'.format(cls.__name__, repr(data)))
if compat.PY3:
step = data.step
stop = data.stop
start = data.start
else:
# seems we only have indexing ops to infer
# rather than direct accessors
if len(data) > 1:
step = data[1] - data[0]
stop = data[-1] + step
start = data[0]
elif len(data):
start = data[0]
stop = data[0] + 1
step = 1
else:
start = stop = 0
step = 1
return RangeIndex(start, stop, step, dtype=dtype, name=name, **kwargs)
@classmethod
def _simple_new(cls, start, stop=None, step=None, name=None,
dtype=None, **kwargs):
result = object.__new__(cls)
# handle passed None, non-integers
if start is None and stop is None:
# empty
start, stop, step = 0, 0, 1
if start is None or not is_integer(start):
try:
return RangeIndex(start, stop, step, name=name, **kwargs)
except TypeError:
return Index(start, stop, step, name=name, **kwargs)
result._start = start
result._stop = stop or 0
result._step = step or 1
result.name = name
for k, v in compat.iteritems(kwargs):
setattr(result, k, v)
result._reset_identity()
return result
@staticmethod
def _validate_dtype(dtype):
""" require dtype to be None or int64 """
if not (dtype is None or is_int64_dtype(dtype)):
raise TypeError('Invalid to pass a non-int64 dtype to RangeIndex')
@cache_readonly
def _constructor(self):
""" return the class to use for construction """
return Int64Index
@cache_readonly
def _data(self):
return np.arange(self._start, self._stop, self._step, dtype=np.int64)
@cache_readonly
def _int64index(self):
return Int64Index(self._data, name=self.name, fastpath=True)
def _get_data_as_items(self):
""" return a list of tuples of start, stop, step """
return [('start', self._start),
('stop', self._stop),
('step', self._step)]
def __reduce__(self):
d = self._get_attributes_dict()
d.update(dict(self._get_data_as_items()))
return ibase._new_Index, (self.__class__, d), None
def _format_attrs(self):
"""
Return a list of tuples of the (attr, formatted_value)
"""
attrs = self._get_data_as_items()
if self.name is not None:
attrs.append(('name', ibase.default_pprint(self.name)))
return attrs
def _format_data(self):
# we are formatting thru the attributes
return None
@cache_readonly
def nbytes(self):
""" return the number of bytes in the underlying data """
return sum([getsizeof(getattr(self, v)) for v in
['_start', '_stop', '_step']])
def memory_usage(self, deep=False):
"""
Memory usage of my values
Parameters
----------
deep : bool
Introspect the data deeply, interrogate
`object` dtypes for system-level memory consumption
Returns
-------
bytes used
Notes
-----
Memory usage does not include memory consumed by elements that
are not components of the array if deep=False
See Also
--------
numpy.ndarray.nbytes
"""
return self.nbytes
@property
def dtype(self):
return np.dtype(np.int64)
@property
def is_unique(self):
""" return if the index has unique values """
return True
@cache_readonly
def is_monotonic_increasing(self):
return self._step > 0 or len(self) <= 1
@cache_readonly
def is_monotonic_decreasing(self):
return self._step < 0 or len(self) <= 1
@property
def has_duplicates(self):
return False
def tolist(self):
return lrange(self._start, self._stop, self._step)
@Appender(_index_shared_docs['_shallow_copy'])
def _shallow_copy(self, values=None, **kwargs):
if values is None:
return RangeIndex(name=self.name, fastpath=True,
**dict(self._get_data_as_items()))
else:
kwargs.setdefault('name', self.name)
return self._int64index._shallow_copy(values, **kwargs)
@Appender(ibase._index_shared_docs['copy'])
def copy(self, name=None, deep=False, dtype=None, **kwargs):
self._validate_dtype(dtype)
if name is None:
name = self.name
return RangeIndex(name=name, fastpath=True,
**dict(self._get_data_as_items()))
def argsort(self, *args, **kwargs):
"""
Returns the indices that would sort the index and its
underlying data.
Returns
-------
argsorted : numpy array
See also
--------
numpy.ndarray.argsort
"""
nv.validate_argsort(args, kwargs)
if self._step > 0:
return np.arange(len(self))
else:
return np.arange(len(self) - 1, -1, -1)
def equals(self, other):
"""
Determines if two Index objects contain the same elements.
"""
if isinstance(other, RangeIndex):
ls = len(self)
lo = len(other)
return (ls == lo == 0 or
ls == lo == 1 and
self._start == other._start or
ls == lo and
self._start == other._start and
self._step == other._step)
return super(RangeIndex, self).equals(other)
def intersection(self, other):
"""
Form the intersection of two Index objects. Sortedness of the result is
not guaranteed
Parameters
----------
other : Index or array-like
Returns
-------
intersection : Index
"""
if not isinstance(other, RangeIndex):
return super(RangeIndex, self).intersection(other)
if not len(self) or not len(other):
return RangeIndex._simple_new(None)
# check whether intervals intersect
# deals with in- and decreasing ranges
int_low = max(min(self._start, self._stop + 1),
min(other._start, other._stop + 1))
int_high = min(max(self._stop, self._start + 1),
max(other._stop, other._start + 1))
if int_high <= int_low:
return RangeIndex._simple_new(None)
# Method hint: linear Diophantine equation
# solve intersection problem
# performance hint: for identical step sizes, could use
# cheaper alternative
gcd, s, t = self._extended_gcd(self._step, other._step)
# check whether element sets intersect
if (self._start - other._start) % gcd:
return RangeIndex._simple_new(None)
# calculate parameters for the RangeIndex describing the
# intersection disregarding the lower bounds
tmp_start = self._start + (other._start - self._start) * \
self._step // gcd * s
new_step = self._step * other._step // gcd
new_index = RangeIndex(tmp_start, int_high, new_step, fastpath=True)
# adjust index to limiting interval
new_index._start = new_index._min_fitting_element(int_low)
return new_index
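    # Worked example (added for clarity, not part of the original source):
    # RangeIndex(0, 20, 2).intersection(RangeIndex(0, 20, 3)) solves the
    # Diophantine step above with gcd(2, 3) = 1 and yields
    # RangeIndex(start=0, stop=20, step=6), i.e. the common elements
    # 0, 6, 12, 18.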
def _min_fitting_element(self, lower_limit):
"""Returns the smallest element greater than or equal to the limit"""
no_steps = -(-(lower_limit - self._start) // abs(self._step))
return self._start + abs(self._step) * no_steps
def _max_fitting_element(self, upper_limit):
"""Returns the largest element smaller than or equal to the limit"""
no_steps = (upper_limit - self._start) // abs(self._step)
return self._start + abs(self._step) * no_steps
def _extended_gcd(self, a, b):
"""
Extended Euclidean algorithms to solve Bezout's identity:
a*x + b*y = gcd(x, y)
Finds one particular solution for x, y: s, t
Returns: gcd, s, t
"""
s, old_s = 0, 1
t, old_t = 1, 0
r, old_r = b, a
while r:
quotient = old_r // r
old_r, r = r, old_r - quotient * r
old_s, s = s, old_s - quotient * s
old_t, t = t, old_t - quotient * t
return old_r, old_s, old_t
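    # Worked example (added for clarity): _extended_gcd(4, 6) returns
    # (2, -1, 1) because gcd(4, 6) = 2 and 4*(-1) + 6*1 = 2; intersection()
    # uses the coefficient s to align the two ranges on a common element.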
def union(self, other):
"""
Form the union of two Index objects and sorts if possible
Parameters
----------
other : Index or array-like
Returns
-------
union : Index
"""
self._assert_can_do_setop(other)
if len(other) == 0 or self.equals(other):
return self
if len(self) == 0:
return other
if isinstance(other, RangeIndex):
start_s, step_s = self._start, self._step
end_s = self._start + self._step * (len(self) - 1)
start_o, step_o = other._start, other._step
end_o = other._start + other._step * (len(other) - 1)
if self._step < 0:
start_s, step_s, end_s = end_s, -step_s, start_s
if other._step < 0:
start_o, step_o, end_o = end_o, -step_o, start_o
if len(self) == 1 and len(other) == 1:
step_s = step_o = abs(self._start - other._start)
elif len(self) == 1:
step_s = step_o
elif len(other) == 1:
step_o = step_s
start_r = min(start_s, start_o)
end_r = max(end_s, end_o)
if step_o == step_s:
if ((start_s - start_o) % step_s == 0 and
(start_s - end_o) <= step_s and
(start_o - end_s) <= step_s):
return RangeIndex(start_r, end_r + step_s, step_s)
if ((step_s % 2 == 0) and
(abs(start_s - start_o) <= step_s / 2) and
(abs(end_s - end_o) <= step_s / 2)):
return RangeIndex(start_r, end_r + step_s / 2, step_s / 2)
elif step_o % step_s == 0:
if ((start_o - start_s) % step_s == 0 and
(start_o + step_s >= start_s) and
(end_o - step_s <= end_s)):
return RangeIndex(start_r, end_r + step_s, step_s)
elif step_s % step_o == 0:
if ((start_s - start_o) % step_o == 0 and
(start_s + step_o >= start_o) and
(end_s - step_o <= end_o)):
return RangeIndex(start_r, end_r + step_o, step_o)
return self._int64index.union(other)
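    # Example sketch (assumed behaviour of the branches above):
    # RangeIndex(0, 10, 2).union(RangeIndex(10, 20, 2)) stays a RangeIndex,
    # namely RangeIndex(0, 20, 2), because the two ranges share a step and
    # abut; incompatible ranges fall back to Int64Index.union.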
@Appender(_index_shared_docs['join'])
def join(self, other, how='left', level=None, return_indexers=False,
sort=False):
if how == 'outer' and self is not other:
# note: could return RangeIndex in more circumstances
return self._int64index.join(other, how, level, return_indexers,
sort)
return super(RangeIndex, self).join(other, how, level, return_indexers,
sort)
def __len__(self):
"""
return the length of the RangeIndex
"""
return max(0, -(-(self._stop - self._start) // self._step))
@property
def size(self):
return len(self)
def __getitem__(self, key):
"""
Conserve RangeIndex type for scalar and slice keys.
"""
super_getitem = super(RangeIndex, self).__getitem__
if is_scalar(key):
n = int(key)
if n != key:
return super_getitem(key)
if n < 0:
n = len(self) + key
if n < 0 or n > len(self) - 1:
raise IndexError("index {key} is out of bounds for axis 0 "
"with size {size}".format(key=key,
size=len(self)))
return self._start + n * self._step
if isinstance(key, slice):
# This is basically PySlice_GetIndicesEx, but delegation to our
# super routines if we don't have integers
l = len(self)
# complete missing slice information
step = 1 if key.step is None else key.step
if key.start is None:
start = l - 1 if step < 0 else 0
else:
start = key.start
if start < 0:
start += l
if start < 0:
start = -1 if step < 0 else 0
if start >= l:
start = l - 1 if step < 0 else l
if key.stop is None:
stop = -1 if step < 0 else l
else:
stop = key.stop
if stop < 0:
stop += l
if stop < 0:
stop = -1
if stop > l:
stop = l
# delegate non-integer slices
if (start != int(start) or
stop != int(stop) or
step != int(step)):
return super_getitem(key)
# convert indexes to values
start = self._start + self._step * start
stop = self._start + self._step * stop
step = self._step * step
return RangeIndex(start, stop, step, self.name, fastpath=True)
# fall back to Int64Index
return super_getitem(key)
def __floordiv__(self, other):
if is_integer(other):
if (len(self) == 0 or
self._start % other == 0 and
self._step % other == 0):
start = self._start // other
step = self._step // other
stop = start + len(self) * step
return RangeIndex(start, stop, step, name=self.name,
fastpath=True)
if len(self) == 1:
start = self._start // other
return RangeIndex(start, start + 1, 1, name=self.name,
fastpath=True)
return self._int64index // other
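    # Example sketch (assumed behaviour of the branches above):
    # RangeIndex(0, 10, 2) // 2 keeps the RangeIndex type and returns
    # RangeIndex(start=0, stop=5, step=1); divisors that do not divide both
    # start and step evenly fall back to Int64Index division.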
@classmethod
def _add_numeric_methods_binary(cls):
""" add in numeric methods, specialized to RangeIndex """
def _make_evaluate_binop(op, opstr, reversed=False, step=False):
"""
Parameters
----------
            op : callable that accepts 2 params
perform the binary op
opstr : string
string name of ops
reversed : boolean, default False
if this is a reversed op, e.g. radd
            step : callable, optional, default False
                op to apply to the step parameter if not None
if False, use the existing step
"""
def _evaluate_numeric_binop(self, other):
other = self._validate_for_numeric_binop(other, op, opstr)
attrs = self._get_attributes_dict()
attrs = self._maybe_update_attributes(attrs)
if reversed:
self, other = other, self
try:
                    # apply the step override if we have one
if step:
with np.errstate(all='ignore'):
rstep = step(self._step, other)
# we don't have a representable op
# so return a base index
if not is_integer(rstep) or not rstep:
raise ValueError
else:
rstep = self._step
with np.errstate(all='ignore'):
rstart = op(self._start, other)
rstop = op(self._stop, other)
result = RangeIndex(rstart,
rstop,
rstep,
**attrs)
# for compat with numpy / Int64Index
# even if we can represent as a RangeIndex, return
# as a Float64Index if we have float-like descriptors
if not all([is_integer(x) for x in
[rstart, rstop, rstep]]):
result = result.astype('float64')
return result
except (ValueError, TypeError, AttributeError):
pass
# convert to Int64Index ops
if isinstance(self, RangeIndex):
self = self.values
if isinstance(other, RangeIndex):
other = other.values
with np.errstate(all='ignore'):
results = op(self, other)
return Index(results, **attrs)
return _evaluate_numeric_binop
cls.__add__ = cls.__radd__ = _make_evaluate_binop(
operator.add, '__add__')
cls.__sub__ = _make_evaluate_binop(operator.sub, '__sub__')
cls.__rsub__ = _make_evaluate_binop(
operator.sub, '__sub__', reversed=True)
cls.__mul__ = cls.__rmul__ = _make_evaluate_binop(
operator.mul,
'__mul__',
step=operator.mul)
cls.__truediv__ = _make_evaluate_binop(
operator.truediv,
'__truediv__',
step=operator.truediv)
cls.__rtruediv__ = _make_evaluate_binop(
operator.truediv,
'__truediv__',
reversed=True,
step=operator.truediv)
if not compat.PY3:
cls.__div__ = _make_evaluate_binop(
operator.div,
'__div__',
step=operator.div)
cls.__rdiv__ = _make_evaluate_binop(
operator.div,
'__div__',
reversed=True,
step=operator.div)
RangeIndex._add_numeric_methods()
RangeIndex._add_logical_methods()
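# Illustrative behaviour of the specialized arithmetic defined above (examples
# added for clarity, not from the original source; assuming a standard pandas
# build). Operations whose result still lies on an integer grid stay a
# RangeIndex, otherwise they fall back to an Int64Index/Float64Index:
#   RangeIndex(0, 10, 2) + 3    # -> RangeIndex(3, 13, 2)
#   RangeIndex(0, 10, 2) * 2    # -> RangeIndex(0, 20, 4)
#   RangeIndex(0, 10, 2) // 2   # -> RangeIndex(0, 5, 1)
#   RangeIndex(0, 10, 2) / 4    # -> Float64Index([0.0, 0.5, 1.0, 1.5, 2.0])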
| mit |
guaix-ucm/pyemir | emirdrp/recipes/image/checks.py | 3 | 8941 | #
# Copyright 2011-2018 Universidad Complutense de Madrid
#
# This file is part of PyEmir
#
# PyEmir is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# PyEmir is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with PyEmir. If not, see <http://www.gnu.org/licenses/>.
#
"""Routines shared by image mode recipes"""
import logging
import operator
import six
import numpy
from astropy.io import fits
import sep
import matplotlib.pyplot as plt
from emirdrp.util.sextractor import SExtractor
from .naming import name_skysub_proc
_logger = logging.getLogger(__name__)
# Actions to carry out on images when checking the flux
# of the objects in different images
def warn_action(img):
_logger.warning('Image %s has low flux in objects', img.baselabel)
img.valid_science = True
def reject_action(img):
img.valid_science = False
_logger.info('Image %s rejected, has low flux in objects', img.baselabel)
def default_action(img):
_logger.info(
'Image %s accepted, has correct flux in objects', img.baselabel)
img.valid_science = True
# Actions
_dactions = {'warn': warn_action,
'reject': reject_action, 'default': default_action}
def check_photometry(frames, sf_data, seeing_fwhm, step=0,
border=300, extinction=0.0,
check_photometry_levels=[0.5, 0.8],
check_photometry_actions=['warn', 'warn', 'default'],
figure=None):
    # Check photometry of a few objects
weigthmap = 'weights4rms.fits'
wmap = numpy.ones_like(sf_data[0], dtype='bool')
# Center of the image
wmap[border:-border, border:-border] = 0
    # fits.writeto(weigthmap, wmap.astype('uint8'), overwrite=True)
basename = 'result_i%0d.fits' % (step)
data_res = fits.getdata(basename)
data_res = data_res.byteswap().newbyteorder()
bkg = sep.Background(data_res)
data_sub = data_res - bkg
    _logger.info('Running source extraction in %s', basename)
objects = sep.extract(data_sub, 1.5, err=bkg.globalrms, mask=wmap)
# if seeing_fwhm is not None:
# sex.config['SEEING_FWHM'] = seeing_fwhm * sex.config['PIXEL_SCALE']
# sex.config['PARAMETERS_LIST'].append('CLASS_STAR')
# sex.config['CATALOG_NAME'] = 'master-catalogue-i%01d.cat' % step
LIMIT_AREA = 5000
idx_small = objects['npix'] < LIMIT_AREA
objects_small = objects[idx_small]
NKEEP = 15
idx_flux = objects_small['flux'].argsort()
objects_nth = objects_small[idx_flux][-NKEEP:]
    # keep the NKEEP brightest of the small objects selected above
fluxes = []
errors = []
times = []
airmasses = []
for idx, frame in enumerate(frames):
imagename = name_skysub_proc(frame.baselabel, step)
#sex.config['CATALOG_NAME'] = ('catalogue-%s-i%01d.cat' %
# (frame.baselabel, step))
        # Launch SExtractor on a FITS file
        # in double image mode
        _logger.info('Running photometry in %s', imagename)
with fits.open(imagename) as hdul:
header = hdul[0].header
airmasses.append(header['airmass'])
times.append(header['tstamp'])
data_i = hdul[0].data
data_i = data_i.byteswap().newbyteorder()
bkg_i = sep.Background(data_i)
data_sub_i = data_i - bkg_i
# objects_i = sep.extract(data_sub_i, 1.5, err=bkg_i.globalrms, mask=wmap)
flux_i, fluxerr_i, flag_i = sep.sum_circle(data_sub_i,
objects_nth['x'], objects_nth['y'],
3.0, err=bkg_i.globalrms)
# Extinction correction
excor = pow(10, -0.4 * frame.airmass * extinction)
flux_i = excor * flux_i
fluxerr_i = excor * fluxerr_i
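        # Worked example (illustrative numbers only, added for clarity): for
        # airmass = 1.2 and an extinction coefficient of 0.1 mag/airmass,
        #     excor = 10 ** (-0.4 * 1.2 * 0.1) ≈ 0.895
        # and both the flux and its error are scaled by this factor.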
fluxes.append(flux_i)
errors.append(fluxerr_i)
fluxes_a = numpy.array(fluxes)
errors_a = numpy.array(errors)
fluxes_n = fluxes_a / fluxes_a[0]
errors_a = errors_a / fluxes_a[0] # sigma
w = 1.0 / (errors_a) ** 2
# weighted mean of the flux values
wdata = numpy.average(fluxes_n, axis=1, weights=w)
wsigma = 1 / numpy.sqrt(w.sum(axis=1))
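    # Added note on the weighting above: these are inverse-variance weights,
    # e.g. per-object sigmas of 0.1 and 0.2 give weights 100 and 25, so the
    # better-measured objects dominate the weighted mean and the combined
    # uncertainty per frame is wsigma = 1 / sqrt(100 + 25) ≈ 0.089.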
levels = check_photometry_levels
actions = check_photometry_actions
x = list(six.moves.range(len(frames)))
vals, (_, sigma) = check_photometry_categorize(
x, wdata, levels, tags=actions)
# n sigma level to plt
nsig = 3
if True:
figure = plt.figure()
ax = figure.add_subplot(111)
plot_photometry_check(ax, vals, wsigma, check_photometry_levels, nsig * sigma)
plt.savefig('figure-relative-flux_i%01d.png' % step)
for x, _, t in vals:
try:
action = _dactions[t]
except KeyError:
_logger.warning('Action named %s not recognized, ignoring', t)
action = default_action
for p in x:
action(frames[p])
def check_photometry_categorize(x, y, levels, tags=None):
    """Assign every point to a category; `levels` must be sorted."""
x = numpy.asarray(x)
y = numpy.asarray(y)
ys = y.copy()
ys.sort()
# Mean of the upper half
m = ys[len(ys) // 2:].mean()
y /= m
m = 1.0
s = ys[len(ys) // 2:].std()
result = []
if tags is None:
tags = list(six.moves.range(len(levels) + 1))
for l, t in zip(levels, tags):
indc = y < l
if indc.any():
x1 = x[indc]
y1 = y[indc]
result.append((x1, y1, t))
x = x[~indc]
y = y[~indc]
else:
result.append((x, y, tags[-1]))
return result, (m, s)
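# Illustrative example of the categorizer above (added for clarity; the
# values are made up):
#   x = [0, 1, 2, 3, 4]
#   y = [1.0, 0.4, 0.95, 0.7, 1.05]    # upper-half mean is 1.0
#   check_photometry_categorize(x, y, [0.5, 0.8],
#                               tags=['reject', 'warn', 'default'])
#   # -> point 1 is tagged 'reject' (< 0.5), point 3 'warn' (< 0.8),
#   #    and points 0, 2 and 4 get the last tag, 'default'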
def plot_photometry_check(ax, vals, errors, levels, nsigma):
x = range(len(errors))
ax.set_title('Relative flux of brightest object')
for v, c in zip(vals, ['b', 'r', 'g', 'y']):
ax.scatter(v[0], v[1], c=c)
w = errors[v[0]]
ax.errorbar(v[0], v[1], yerr=w, fmt='none', c=c)
ax.plot([x[0], x[-1]], [1, 1], 'r--')
ax.plot([x[0], x[-1]], [1 - nsigma, 1 - nsigma], 'b--')
for f in levels:
ax.plot([x[0], x[-1]], [f, f], 'g--')
return ax
def check_position(images_info, sf_data, seeing_fwhm, step=0):
# FIXME: this method has to be updated
_logger.info('Checking positions')
# Check position of bright objects
weigthmap = 'weights4rms.fits'
wmap = numpy.zeros_like(sf_data[0])
# Center of the image
border = 300
wmap[border:-border, border:-border] = 1
fits.writeto(weigthmap, wmap.astype('uint8'), overwrite=True)
basename = 'result_i%0d.fits' % (step)
sex = SExtractor()
sex.config['VERBOSE_TYPE'] = 'QUIET'
sex.config['PIXEL_SCALE'] = 1
sex.config['BACK_TYPE'] = 'AUTO'
if seeing_fwhm is not None and seeing_fwhm > 0:
sex.config['SEEING_FWHM'] = seeing_fwhm * sex.config['PIXEL_SCALE']
sex.config['WEIGHT_TYPE'] = 'MAP_WEIGHT'
sex.config['WEIGHT_IMAGE'] = weigthmap
sex.config['PARAMETERS_LIST'].append('FLUX_BEST')
sex.config['PARAMETERS_LIST'].append('FLUXERR_BEST')
sex.config['PARAMETERS_LIST'].append('FWHM_IMAGE')
sex.config['PARAMETERS_LIST'].append('CLASS_STAR')
sex.config['CATALOG_NAME'] = 'master-catalogue-i%01d.cat' % step
    _logger.info('Running SExtractor in %s', basename)
sex.run('%s,%s' % (basename, basename))
# Sort catalog by flux
catalog = sex.catalog()
catalog = sorted(
catalog, key=operator.itemgetter('FLUX_BEST'), reverse=True)
# set of indices of the N first objects
OBJS_I_KEEP = 10
# master = [(obj['X_IMAGE'], obj['Y_IMAGE'])
# for obj in catalog[:OBJS_I_KEEP]]
for image in images_info:
imagename = name_skysub_proc(image.baselabel, step)
sex.config['CATALOG_NAME'] = ('catalogue-self-%s-i%01d.cat' %
(image.baselabel, step))
        # Launch SExtractor on a FITS file
        # in double image mode
        _logger.info('Running SExtractor in %s', imagename)
sex.run(imagename)
catalog = sex.catalog()
# data = [(obj['X_IMAGE'], obj['Y_IMAGE']) for obj in catalog]
# tree = KDTree(data)
# Search 2 neighbors
# dists, _ids = tree.query(master, 2, distance_upper_bound=5)
# for i in dists[:,0]:
# print i
# _logger.info('Mean offset correction for image %s is %f',
# imagename, dists[:,0].mean())
# raw_input('press any key')
| gpl-3.0 |
cwu2011/scikit-learn | sklearn/preprocessing/tests/test_label.py | 48 | 18419 | import numpy as np
from scipy.sparse import issparse
from scipy.sparse import coo_matrix
from scipy.sparse import csc_matrix
from scipy.sparse import csr_matrix
from scipy.sparse import dok_matrix
from scipy.sparse import lil_matrix
from sklearn.utils.multiclass import type_of_target
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import ignore_warnings
from sklearn.preprocessing.label import LabelBinarizer
from sklearn.preprocessing.label import MultiLabelBinarizer
from sklearn.preprocessing.label import LabelEncoder
from sklearn.preprocessing.label import label_binarize
from sklearn.preprocessing.label import _inverse_binarize_thresholding
from sklearn.preprocessing.label import _inverse_binarize_multiclass
from sklearn import datasets
iris = datasets.load_iris()
def toarray(a):
if hasattr(a, "toarray"):
a = a.toarray()
return a
def test_label_binarizer():
lb = LabelBinarizer()
# one-class case defaults to negative label
inp = ["pos", "pos", "pos", "pos"]
expected = np.array([[0, 0, 0, 0]]).T
got = lb.fit_transform(inp)
assert_array_equal(lb.classes_, ["pos"])
assert_array_equal(expected, got)
assert_array_equal(lb.inverse_transform(got), inp)
# two-class case
inp = ["neg", "pos", "pos", "neg"]
expected = np.array([[0, 1, 1, 0]]).T
got = lb.fit_transform(inp)
assert_array_equal(lb.classes_, ["neg", "pos"])
assert_array_equal(expected, got)
to_invert = np.array([[1, 0],
[0, 1],
[0, 1],
[1, 0]])
assert_array_equal(lb.inverse_transform(to_invert), inp)
# multi-class case
inp = ["spam", "ham", "eggs", "ham", "0"]
expected = np.array([[0, 0, 0, 1],
[0, 0, 1, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[1, 0, 0, 0]])
got = lb.fit_transform(inp)
assert_array_equal(lb.classes_, ['0', 'eggs', 'ham', 'spam'])
assert_array_equal(expected, got)
assert_array_equal(lb.inverse_transform(got), inp)
def test_label_binarizer_unseen_labels():
lb = LabelBinarizer()
expected = np.array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]])
got = lb.fit_transform(['b', 'd', 'e'])
assert_array_equal(expected, got)
expected = np.array([[0, 0, 0],
[1, 0, 0],
[0, 0, 0],
[0, 1, 0],
[0, 0, 1],
[0, 0, 0]])
got = lb.transform(['a', 'b', 'c', 'd', 'e', 'f'])
assert_array_equal(expected, got)
@ignore_warnings
def test_label_binarizer_column_y():
# first for binary classification vs multi-label with 1 possible class
# lists are multi-label, array is multi-class :-/
inp_list = [[1], [2], [1]]
inp_array = np.array(inp_list)
multilabel_indicator = np.array([[1, 0], [0, 1], [1, 0]])
binaryclass_array = np.array([[0], [1], [0]])
lb_1 = LabelBinarizer()
out_1 = lb_1.fit_transform(inp_list)
lb_2 = LabelBinarizer()
out_2 = lb_2.fit_transform(inp_array)
assert_array_equal(out_1, multilabel_indicator)
assert_array_equal(out_2, binaryclass_array)
# second for multiclass classification vs multi-label with multiple
# classes
inp_list = [[1], [2], [1], [3]]
inp_array = np.array(inp_list)
# the indicator matrix output is the same in this case
indicator = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])
lb_1 = LabelBinarizer()
out_1 = lb_1.fit_transform(inp_list)
lb_2 = LabelBinarizer()
out_2 = lb_2.fit_transform(inp_array)
assert_array_equal(out_1, out_2)
assert_array_equal(out_2, indicator)
def test_label_binarizer_set_label_encoding():
lb = LabelBinarizer(neg_label=-2, pos_label=0)
# two-class case with pos_label=0
inp = np.array([0, 1, 1, 0])
expected = np.array([[-2, 0, 0, -2]]).T
got = lb.fit_transform(inp)
assert_array_equal(expected, got)
assert_array_equal(lb.inverse_transform(got), inp)
lb = LabelBinarizer(neg_label=-2, pos_label=2)
# multi-class case
inp = np.array([3, 2, 1, 2, 0])
expected = np.array([[-2, -2, -2, +2],
[-2, -2, +2, -2],
[-2, +2, -2, -2],
[-2, -2, +2, -2],
[+2, -2, -2, -2]])
got = lb.fit_transform(inp)
assert_array_equal(expected, got)
assert_array_equal(lb.inverse_transform(got), inp)
@ignore_warnings
def test_label_binarizer_errors():
# Check that invalid arguments yield ValueError
one_class = np.array([0, 0, 0, 0])
lb = LabelBinarizer().fit(one_class)
multi_label = [(2, 3), (0,), (0, 2)]
assert_raises(ValueError, lb.transform, multi_label)
lb = LabelBinarizer()
assert_raises(ValueError, lb.transform, [])
assert_raises(ValueError, lb.inverse_transform, [])
assert_raises(ValueError, LabelBinarizer, neg_label=2, pos_label=1)
assert_raises(ValueError, LabelBinarizer, neg_label=2, pos_label=2)
assert_raises(ValueError, LabelBinarizer, neg_label=1, pos_label=2,
sparse_output=True)
# Fail on y_type
assert_raises(ValueError, _inverse_binarize_thresholding,
y=csr_matrix([[1, 2], [2, 1]]), output_type="foo",
classes=[1, 2], threshold=0)
# Fail on the number of classes
assert_raises(ValueError, _inverse_binarize_thresholding,
y=csr_matrix([[1, 2], [2, 1]]), output_type="foo",
classes=[1, 2, 3], threshold=0)
# Fail on the dimension of 'binary'
assert_raises(ValueError, _inverse_binarize_thresholding,
y=np.array([[1, 2, 3], [2, 1, 3]]), output_type="binary",
classes=[1, 2, 3], threshold=0)
# Fail on multioutput data
assert_raises(ValueError, LabelBinarizer().fit, np.array([[1, 3], [2, 1]]))
assert_raises(ValueError, label_binarize, np.array([[1, 3], [2, 1]]),
[1, 2, 3])
def test_label_encoder():
# Test LabelEncoder's transform and inverse_transform methods
le = LabelEncoder()
le.fit([1, 1, 4, 5, -1, 0])
assert_array_equal(le.classes_, [-1, 0, 1, 4, 5])
assert_array_equal(le.transform([0, 1, 4, 4, 5, -1, -1]),
[1, 2, 3, 3, 4, 0, 0])
assert_array_equal(le.inverse_transform([1, 2, 3, 3, 4, 0, 0]),
[0, 1, 4, 4, 5, -1, -1])
assert_raises(ValueError, le.transform, [0, 6])
def test_label_encoder_fit_transform():
# Test fit_transform
le = LabelEncoder()
ret = le.fit_transform([1, 1, 4, 5, -1, 0])
assert_array_equal(ret, [2, 2, 3, 4, 0, 1])
le = LabelEncoder()
ret = le.fit_transform(["paris", "paris", "tokyo", "amsterdam"])
assert_array_equal(ret, [1, 1, 2, 0])
def test_label_encoder_errors():
# Check that invalid arguments yield ValueError
le = LabelEncoder()
assert_raises(ValueError, le.transform, [])
assert_raises(ValueError, le.inverse_transform, [])
def test_sparse_output_multilabel_binarizer():
# test input as iterable of iterables
inputs = [
lambda: [(2, 3), (1,), (1, 2)],
lambda: (set([2, 3]), set([1]), set([1, 2])),
lambda: iter([iter((2, 3)), iter((1,)), set([1, 2])]),
]
indicator_mat = np.array([[0, 1, 1],
[1, 0, 0],
[1, 1, 0]])
inverse = inputs[0]()
for sparse_output in [True, False]:
for inp in inputs:
            # With fit_transform
mlb = MultiLabelBinarizer(sparse_output=sparse_output)
got = mlb.fit_transform(inp())
assert_equal(issparse(got), sparse_output)
if sparse_output:
got = got.toarray()
assert_array_equal(indicator_mat, got)
assert_array_equal([1, 2, 3], mlb.classes_)
assert_equal(mlb.inverse_transform(got), inverse)
# With fit
mlb = MultiLabelBinarizer(sparse_output=sparse_output)
got = mlb.fit(inp()).transform(inp())
assert_equal(issparse(got), sparse_output)
if sparse_output:
got = got.toarray()
assert_array_equal(indicator_mat, got)
assert_array_equal([1, 2, 3], mlb.classes_)
assert_equal(mlb.inverse_transform(got), inverse)
assert_raises(ValueError, mlb.inverse_transform,
csr_matrix(np.array([[0, 1, 1],
[2, 0, 0],
[1, 1, 0]])))
def test_multilabel_binarizer():
# test input as iterable of iterables
inputs = [
lambda: [(2, 3), (1,), (1, 2)],
lambda: (set([2, 3]), set([1]), set([1, 2])),
lambda: iter([iter((2, 3)), iter((1,)), set([1, 2])]),
]
indicator_mat = np.array([[0, 1, 1],
[1, 0, 0],
[1, 1, 0]])
inverse = inputs[0]()
for inp in inputs:
        # With fit_transform
mlb = MultiLabelBinarizer()
got = mlb.fit_transform(inp())
assert_array_equal(indicator_mat, got)
assert_array_equal([1, 2, 3], mlb.classes_)
assert_equal(mlb.inverse_transform(got), inverse)
# With fit
mlb = MultiLabelBinarizer()
got = mlb.fit(inp()).transform(inp())
assert_array_equal(indicator_mat, got)
assert_array_equal([1, 2, 3], mlb.classes_)
assert_equal(mlb.inverse_transform(got), inverse)
def test_multilabel_binarizer_empty_sample():
mlb = MultiLabelBinarizer()
y = [[1, 2], [1], []]
Y = np.array([[1, 1],
[1, 0],
[0, 0]])
assert_array_equal(mlb.fit_transform(y), Y)
def test_multilabel_binarizer_unknown_class():
mlb = MultiLabelBinarizer()
y = [[1, 2]]
assert_raises(KeyError, mlb.fit(y).transform, [[0]])
mlb = MultiLabelBinarizer(classes=[1, 2])
assert_raises(KeyError, mlb.fit_transform, [[0]])
def test_multilabel_binarizer_given_classes():
inp = [(2, 3), (1,), (1, 2)]
indicator_mat = np.array([[0, 1, 1],
[1, 0, 0],
[1, 0, 1]])
# fit_transform()
mlb = MultiLabelBinarizer(classes=[1, 3, 2])
assert_array_equal(mlb.fit_transform(inp), indicator_mat)
assert_array_equal(mlb.classes_, [1, 3, 2])
# fit().transform()
mlb = MultiLabelBinarizer(classes=[1, 3, 2])
assert_array_equal(mlb.fit(inp).transform(inp), indicator_mat)
assert_array_equal(mlb.classes_, [1, 3, 2])
# ensure works with extra class
mlb = MultiLabelBinarizer(classes=[4, 1, 3, 2])
assert_array_equal(mlb.fit_transform(inp),
np.hstack(([[0], [0], [0]], indicator_mat)))
assert_array_equal(mlb.classes_, [4, 1, 3, 2])
# ensure fit is no-op as iterable is not consumed
inp = iter(inp)
mlb = MultiLabelBinarizer(classes=[1, 3, 2])
assert_array_equal(mlb.fit(inp).transform(inp), indicator_mat)
def test_multilabel_binarizer_same_length_sequence():
# Ensure sequences of the same length are not interpreted as a 2-d array
inp = [[1], [0], [2]]
indicator_mat = np.array([[0, 1, 0],
[1, 0, 0],
[0, 0, 1]])
# fit_transform()
mlb = MultiLabelBinarizer()
assert_array_equal(mlb.fit_transform(inp), indicator_mat)
assert_array_equal(mlb.inverse_transform(indicator_mat), inp)
# fit().transform()
mlb = MultiLabelBinarizer()
assert_array_equal(mlb.fit(inp).transform(inp), indicator_mat)
assert_array_equal(mlb.inverse_transform(indicator_mat), inp)
def test_multilabel_binarizer_non_integer_labels():
tuple_classes = np.empty(3, dtype=object)
tuple_classes[:] = [(1,), (2,), (3,)]
inputs = [
([('2', '3'), ('1',), ('1', '2')], ['1', '2', '3']),
([('b', 'c'), ('a',), ('a', 'b')], ['a', 'b', 'c']),
([((2,), (3,)), ((1,),), ((1,), (2,))], tuple_classes),
]
indicator_mat = np.array([[0, 1, 1],
[1, 0, 0],
[1, 1, 0]])
for inp, classes in inputs:
# fit_transform()
mlb = MultiLabelBinarizer()
assert_array_equal(mlb.fit_transform(inp), indicator_mat)
assert_array_equal(mlb.classes_, classes)
assert_array_equal(mlb.inverse_transform(indicator_mat), inp)
# fit().transform()
mlb = MultiLabelBinarizer()
assert_array_equal(mlb.fit(inp).transform(inp), indicator_mat)
assert_array_equal(mlb.classes_, classes)
assert_array_equal(mlb.inverse_transform(indicator_mat), inp)
mlb = MultiLabelBinarizer()
assert_raises(TypeError, mlb.fit_transform, [({}), ({}, {'a': 'b'})])
def test_multilabel_binarizer_non_unique():
inp = [(1, 1, 1, 0)]
indicator_mat = np.array([[1, 1]])
mlb = MultiLabelBinarizer()
assert_array_equal(mlb.fit_transform(inp), indicator_mat)
def test_multilabel_binarizer_inverse_validation():
inp = [(1, 1, 1, 0)]
mlb = MultiLabelBinarizer()
mlb.fit_transform(inp)
# Not binary
assert_raises(ValueError, mlb.inverse_transform, np.array([[1, 3]]))
# The following binary cases are fine, however
mlb.inverse_transform(np.array([[0, 0]]))
mlb.inverse_transform(np.array([[1, 1]]))
mlb.inverse_transform(np.array([[1, 0]]))
# Wrong shape
assert_raises(ValueError, mlb.inverse_transform, np.array([[1]]))
assert_raises(ValueError, mlb.inverse_transform, np.array([[1, 1, 1]]))
def test_label_binarize_with_class_order():
out = label_binarize([1, 6], classes=[1, 2, 4, 6])
expected = np.array([[1, 0, 0, 0], [0, 0, 0, 1]])
assert_array_equal(out, expected)
# Modified class order
out = label_binarize([1, 6], classes=[1, 6, 4, 2])
expected = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
assert_array_equal(out, expected)
out = label_binarize([0, 1, 2, 3], classes=[3, 2, 0, 1])
expected = np.array([[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 1, 0, 0],
[1, 0, 0, 0]])
assert_array_equal(out, expected)
def check_binarized_results(y, classes, pos_label, neg_label, expected):
for sparse_output in [True, False]:
if ((pos_label == 0 or neg_label != 0) and sparse_output):
assert_raises(ValueError, label_binarize, y, classes,
neg_label=neg_label, pos_label=pos_label,
sparse_output=sparse_output)
continue
# check label_binarize
binarized = label_binarize(y, classes, neg_label=neg_label,
pos_label=pos_label,
sparse_output=sparse_output)
assert_array_equal(toarray(binarized), expected)
assert_equal(issparse(binarized), sparse_output)
# check inverse
y_type = type_of_target(y)
if y_type == "multiclass":
inversed = _inverse_binarize_multiclass(binarized, classes=classes)
else:
inversed = _inverse_binarize_thresholding(binarized,
output_type=y_type,
classes=classes,
threshold=((neg_label +
pos_label) /
2.))
assert_array_equal(toarray(inversed), toarray(y))
# Check label binarizer
lb = LabelBinarizer(neg_label=neg_label, pos_label=pos_label,
sparse_output=sparse_output)
binarized = lb.fit_transform(y)
assert_array_equal(toarray(binarized), expected)
assert_equal(issparse(binarized), sparse_output)
inverse_output = lb.inverse_transform(binarized)
assert_array_equal(toarray(inverse_output), toarray(y))
assert_equal(issparse(inverse_output), issparse(y))
def test_label_binarize_binary():
y = [0, 1, 0]
classes = [0, 1]
pos_label = 2
neg_label = -1
expected = np.array([[2, -1], [-1, 2], [2, -1]])[:, 1].reshape((-1, 1))
yield check_binarized_results, y, classes, pos_label, neg_label, expected
# Binary case where sparse_output = True will not result in a ValueError
y = [0, 1, 0]
classes = [0, 1]
pos_label = 3
neg_label = 0
expected = np.array([[3, 0], [0, 3], [3, 0]])[:, 1].reshape((-1, 1))
yield check_binarized_results, y, classes, pos_label, neg_label, expected
def test_label_binarize_multiclass():
y = [0, 1, 2]
classes = [0, 1, 2]
pos_label = 2
neg_label = 0
expected = 2 * np.eye(3)
yield check_binarized_results, y, classes, pos_label, neg_label, expected
assert_raises(ValueError, label_binarize, y, classes, neg_label=-1,
pos_label=pos_label, sparse_output=True)
def test_label_binarize_multilabel():
y_ind = np.array([[0, 1, 0], [1, 1, 1], [0, 0, 0]])
classes = [0, 1, 2]
pos_label = 2
neg_label = 0
expected = pos_label * y_ind
y_sparse = [sparse_matrix(y_ind)
for sparse_matrix in [coo_matrix, csc_matrix, csr_matrix,
dok_matrix, lil_matrix]]
for y in [y_ind] + y_sparse:
yield (check_binarized_results, y, classes, pos_label, neg_label,
expected)
assert_raises(ValueError, label_binarize, y, classes, neg_label=-1,
pos_label=pos_label, sparse_output=True)
def test_invalid_input_label_binarize():
assert_raises(ValueError, label_binarize, [0, 2], classes=[0, 2],
pos_label=0, neg_label=1)
def test_inverse_binarize_multiclass():
got = _inverse_binarize_multiclass(csr_matrix([[0, 1, 0],
[-1, 0, -1],
[0, 0, 0]]),
np.arange(3))
assert_array_equal(got, np.array([1, 1, 0]))
| bsd-3-clause |
cactusbin/nyt | matplotlib/lib/matplotlib/tests/test_simplification.py | 2 | 7078 | from __future__ import print_function
import numpy as np
import matplotlib
from matplotlib.testing.decorators import image_comparison, knownfailureif, cleanup
import matplotlib.pyplot as plt
from pylab import *
import numpy as np
from matplotlib import patches, path, transforms
from nose.tools import raises
import io
nan = np.nan
Path = path.Path
# NOTE: All of these tests assume that path.simplify is set to True
# (the default)
@image_comparison(baseline_images=['clipping'], remove_text=True)
def test_clipping():
t = np.arange(0.0, 2.0, 0.01)
s = np.sin(2*pi*t)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(t, s, linewidth=1.0)
ax.set_ylim((-0.20, -0.28))
@image_comparison(baseline_images=['overflow'], remove_text=True)
def test_overflow():
x = np.array([1.0,2.0,3.0,2.0e5])
y = np.arange(len(x))
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x,y)
ax.set_xlim(xmin=2,xmax=6)
@image_comparison(baseline_images=['clipping_diamond'], remove_text=True)
def test_diamond():
x = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
y = np.array([1.0, 0.0, -1.0, 0.0, 1.0])
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.set_xlim(xmin=-0.6, xmax=0.6)
ax.set_ylim(ymin=-0.6, ymax=0.6)
@cleanup
def test_noise():
np.random.seed(0)
x = np.random.uniform(size=(5000,)) * 50
fig = plt.figure()
ax = fig.add_subplot(111)
p1 = ax.plot(x, solid_joinstyle='round', linewidth=2.0)
path = p1[0].get_path()
transform = p1[0].get_transform()
path = transform.transform_path(path)
simplified = list(path.iter_segments(simplify=(800, 600)))
assert len(simplified) == 3884
@cleanup
def test_sine_plus_noise():
np.random.seed(0)
x = np.sin(np.linspace(0, np.pi * 2.0, 1000)) + np.random.uniform(size=(1000,)) * 0.01
fig = plt.figure()
ax = fig.add_subplot(111)
p1 = ax.plot(x, solid_joinstyle='round', linewidth=2.0)
path = p1[0].get_path()
transform = p1[0].get_transform()
path = transform.transform_path(path)
simplified = list(path.iter_segments(simplify=(800, 600)))
assert len(simplified) == 876
@image_comparison(baseline_images=['simplify_curve'], remove_text=True)
def test_simplify_curve():
pp1 = patches.PathPatch(
Path([(0, 0), (1, 0), (1, 1), (nan, 1), (0, 0), (2, 0), (2, 2), (0, 0)],
[Path.MOVETO, Path.CURVE3, Path.CURVE3, Path.CURVE3, Path.CURVE3, Path.CURVE3, Path.CURVE3, Path.CLOSEPOLY]),
fc="none")
fig = plt.figure()
ax = fig.add_subplot(111)
ax.add_patch(pp1)
ax.set_xlim((0, 2))
ax.set_ylim((0, 2))
@image_comparison(baseline_images=['hatch_simplify'], remove_text=True)
def test_hatch():
fig = plt.figure()
ax = fig.add_subplot(111)
ax.add_patch(Rectangle((0, 0), 1, 1, fill=False, hatch="/"))
ax.set_xlim((0.45, 0.55))
ax.set_ylim((0.45, 0.55))
@image_comparison(baseline_images=['fft_peaks'], remove_text=True)
def test_fft_peaks():
fig = plt.figure()
t = arange(65536)
ax = fig.add_subplot(111)
p1 = ax.plot(abs(fft(sin(2*pi*.01*t)*blackman(len(t)))))
path = p1[0].get_path()
transform = p1[0].get_transform()
path = transform.transform_path(path)
simplified = list(path.iter_segments(simplify=(800, 600)))
assert len(simplified) == 20
@cleanup
def test_start_with_moveto():
# Should be entirely clipped away to a single MOVETO
data = b"""
ZwAAAAku+v9UAQAA+Tj6/z8CAADpQ/r/KAMAANlO+v8QBAAAyVn6//UEAAC6ZPr/2gUAAKpv+v+8
BgAAm3r6/50HAACLhfr/ewgAAHyQ+v9ZCQAAbZv6/zQKAABepvr/DgsAAE+x+v/lCwAAQLz6/7wM
AAAxx/r/kA0AACPS+v9jDgAAFN36/zQPAAAF6Pr/AxAAAPfy+v/QEAAA6f36/5wRAADbCPv/ZhIA
AMwT+/8uEwAAvh77//UTAACwKfv/uRQAAKM0+/98FQAAlT/7/z0WAACHSvv//RYAAHlV+/+7FwAA
bGD7/3cYAABea/v/MRkAAFF2+//pGQAARIH7/6AaAAA3jPv/VRsAACmX+/8JHAAAHKL7/7ocAAAP
rfv/ah0AAAO4+/8YHgAA9sL7/8QeAADpzfv/bx8AANzY+/8YIAAA0OP7/78gAADD7vv/ZCEAALf5
+/8IIgAAqwT8/6kiAACeD/z/SiMAAJIa/P/oIwAAhiX8/4QkAAB6MPz/HyUAAG47/P+4JQAAYkb8
/1AmAABWUfz/5SYAAEpc/P95JwAAPmf8/wsoAAAzcvz/nCgAACd9/P8qKQAAHIj8/7cpAAAQk/z/
QyoAAAWe/P/MKgAA+aj8/1QrAADus/z/2isAAOO+/P9eLAAA2Mn8/+AsAADM1Pz/YS0AAMHf/P/g
LQAAtur8/10uAACr9fz/2C4AAKEA/f9SLwAAlgv9/8ovAACLFv3/QDAAAIAh/f+1MAAAdSz9/ycx
AABrN/3/mDEAAGBC/f8IMgAAVk39/3UyAABLWP3/4TIAAEFj/f9LMwAANm79/7MzAAAsef3/GjQA
ACKE/f9+NAAAF4/9/+E0AAANmv3/QzUAAAOl/f+iNQAA+a/9/wA2AADvuv3/XDYAAOXF/f+2NgAA
29D9/w83AADR2/3/ZjcAAMfm/f+7NwAAvfH9/w44AACz/P3/XzgAAKkH/v+vOAAAnxL+//04AACW
Hf7/SjkAAIwo/v+UOQAAgjP+/905AAB5Pv7/JDoAAG9J/v9pOgAAZVT+/606AABcX/7/7zoAAFJq
/v8vOwAASXX+/207AAA/gP7/qjsAADaL/v/lOwAALZb+/x48AAAjof7/VTwAABqs/v+LPAAAELf+
/788AAAHwv7/8TwAAP7M/v8hPQAA9df+/1A9AADr4v7/fT0AAOLt/v+oPQAA2fj+/9E9AADQA///
+T0AAMYO//8fPgAAvRn//0M+AAC0JP//ZT4AAKsv//+GPgAAojr//6U+AACZRf//wj4AAJBQ///d
PgAAh1v///c+AAB+Zv//Dz8AAHRx//8lPwAAa3z//zk/AABih///TD8AAFmS//9dPwAAUJ3//2w/
AABHqP//ej8AAD6z//+FPwAANb7//48/AAAsyf//lz8AACPU//+ePwAAGt///6M/AAAR6v//pj8A
AAj1//+nPwAA/////w=="""
import base64
if hasattr(base64, 'encodebytes'):
# Python 3 case
decodebytes = base64.decodebytes
else:
# Python 2 case
decodebytes = base64.decodestring
verts = np.fromstring(decodebytes(data), dtype='<i4')
    verts = verts.reshape((len(verts) // 2, 2))
path = Path(verts)
segs = path.iter_segments(transforms.IdentityTransform(), clip=(0.0, 0.0, 100.0, 100.0))
segs = list(segs)
assert len(segs) == 1
assert segs[0][1] == Path.MOVETO
@cleanup
@raises(OverflowError)
def test_throw_rendering_complexity_exceeded():
rcParams['path.simplify'] = False
xx = np.arange(200000)
yy = np.random.rand(200000)
yy[1000] = np.nan
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(xx, yy)
try:
fig.savefig(io.BytesIO())
finally:
rcParams['path.simplify'] = True
@image_comparison(baseline_images=['clipper_edge'], remove_text=True)
def test_clipper():
dat = (0, 1, 0, 2, 0, 3, 0, 4, 0, 5)
fig = plt.figure(figsize=(2, 1))
fig.subplots_adjust(left = 0, bottom = 0, wspace = 0, hspace = 0)
ax = fig.add_axes((0, 0, 1.0, 1.0), ylim = (0, 5), autoscale_on = False)
ax.plot(dat)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(1))
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.set_xlim(5, 9)
@image_comparison(baseline_images=['para_equal_perp'], remove_text=True)
def test_para_equal_perp():
x = np.array([0, 1, 2, 1, 0, -1, 0, 1] + [1] * 128)
y = np.array([1, 1, 2, 1, 0, -1, 0, 0] + [0] * 128)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x + 1, y + 1)
ax.plot(x + 1, y + 1, 'ro')
@image_comparison(baseline_images=['clipping_with_nans'])
def test_clipping_with_nans():
x = np.linspace(0, 3.14 * 2, 3000)
y = np.sin(x)
x[::100] = np.nan
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y)
ax.set_ylim(-0.25, 0.25)
if __name__=='__main__':
import nose
nose.runmodule(argv=['-s','--with-doctest'], exit=False)
| unlicense |
thientu/scikit-learn | sklearn/utils/tests/test_sparsefuncs.py | 157 | 13799 | import numpy as np
import scipy.sparse as sp
from scipy import linalg
from numpy.testing import assert_array_almost_equal, assert_array_equal
from sklearn.datasets import make_classification
from sklearn.utils.sparsefuncs import (mean_variance_axis,
inplace_column_scale,
inplace_row_scale,
inplace_swap_row, inplace_swap_column,
min_max_axis,
count_nonzero, csc_median_axis_0)
from sklearn.utils.sparsefuncs_fast import assign_rows_csr
from sklearn.utils.testing import assert_raises
def test_mean_variance_axis0():
X, _ = make_classification(5, 4, random_state=0)
# Sparsify the array a little bit
X[0, 0] = 0
X[2, 1] = 0
X[4, 3] = 0
X_lil = sp.lil_matrix(X)
X_lil[1, 0] = 0
X[1, 0] = 0
X_csr = sp.csr_matrix(X_lil)
X_means, X_vars = mean_variance_axis(X_csr, axis=0)
assert_array_almost_equal(X_means, np.mean(X, axis=0))
assert_array_almost_equal(X_vars, np.var(X, axis=0))
X_csc = sp.csc_matrix(X_lil)
X_means, X_vars = mean_variance_axis(X_csc, axis=0)
assert_array_almost_equal(X_means, np.mean(X, axis=0))
assert_array_almost_equal(X_vars, np.var(X, axis=0))
assert_raises(TypeError, mean_variance_axis, X_lil, axis=0)
X = X.astype(np.float32)
X_csr = X_csr.astype(np.float32)
X_csc = X_csr.astype(np.float32)
X_means, X_vars = mean_variance_axis(X_csr, axis=0)
assert_array_almost_equal(X_means, np.mean(X, axis=0))
assert_array_almost_equal(X_vars, np.var(X, axis=0))
X_means, X_vars = mean_variance_axis(X_csc, axis=0)
assert_array_almost_equal(X_means, np.mean(X, axis=0))
assert_array_almost_equal(X_vars, np.var(X, axis=0))
assert_raises(TypeError, mean_variance_axis, X_lil, axis=0)
def test_mean_variance_illegal_axis():
X, _ = make_classification(5, 4, random_state=0)
# Sparsify the array a little bit
X[0, 0] = 0
X[2, 1] = 0
X[4, 3] = 0
X_csr = sp.csr_matrix(X)
assert_raises(ValueError, mean_variance_axis, X_csr, axis=-3)
assert_raises(ValueError, mean_variance_axis, X_csr, axis=2)
assert_raises(ValueError, mean_variance_axis, X_csr, axis=-1)
def test_mean_variance_axis1():
X, _ = make_classification(5, 4, random_state=0)
# Sparsify the array a little bit
X[0, 0] = 0
X[2, 1] = 0
X[4, 3] = 0
X_lil = sp.lil_matrix(X)
X_lil[1, 0] = 0
X[1, 0] = 0
X_csr = sp.csr_matrix(X_lil)
X_means, X_vars = mean_variance_axis(X_csr, axis=1)
assert_array_almost_equal(X_means, np.mean(X, axis=1))
assert_array_almost_equal(X_vars, np.var(X, axis=1))
X_csc = sp.csc_matrix(X_lil)
X_means, X_vars = mean_variance_axis(X_csc, axis=1)
assert_array_almost_equal(X_means, np.mean(X, axis=1))
assert_array_almost_equal(X_vars, np.var(X, axis=1))
assert_raises(TypeError, mean_variance_axis, X_lil, axis=1)
X = X.astype(np.float32)
X_csr = X_csr.astype(np.float32)
X_csc = X_csr.astype(np.float32)
X_means, X_vars = mean_variance_axis(X_csr, axis=1)
assert_array_almost_equal(X_means, np.mean(X, axis=1))
assert_array_almost_equal(X_vars, np.var(X, axis=1))
X_means, X_vars = mean_variance_axis(X_csc, axis=1)
assert_array_almost_equal(X_means, np.mean(X, axis=1))
assert_array_almost_equal(X_vars, np.var(X, axis=1))
assert_raises(TypeError, mean_variance_axis, X_lil, axis=1)
def test_densify_rows():
X = sp.csr_matrix([[0, 3, 0],
[2, 4, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_rows = np.array([0, 2, 3], dtype=np.intp)
out = np.ones((6, X.shape[1]), dtype=np.float64)
out_rows = np.array([1, 3, 4], dtype=np.intp)
expect = np.ones_like(out)
expect[out_rows] = X[X_rows, :].toarray()
assign_rows_csr(X, X_rows, out_rows, out)
assert_array_equal(out, expect)
def test_inplace_column_scale():
rng = np.random.RandomState(0)
X = sp.rand(100, 200, 0.05)
Xr = X.tocsr()
Xc = X.tocsc()
XA = X.toarray()
scale = rng.rand(200)
XA *= scale
inplace_column_scale(Xc, scale)
inplace_column_scale(Xr, scale)
assert_array_almost_equal(Xr.toarray(), Xc.toarray())
assert_array_almost_equal(XA, Xc.toarray())
assert_array_almost_equal(XA, Xr.toarray())
assert_raises(TypeError, inplace_column_scale, X.tolil(), scale)
X = X.astype(np.float32)
scale = scale.astype(np.float32)
Xr = X.tocsr()
Xc = X.tocsc()
XA = X.toarray()
XA *= scale
inplace_column_scale(Xc, scale)
inplace_column_scale(Xr, scale)
assert_array_almost_equal(Xr.toarray(), Xc.toarray())
assert_array_almost_equal(XA, Xc.toarray())
assert_array_almost_equal(XA, Xr.toarray())
assert_raises(TypeError, inplace_column_scale, X.tolil(), scale)
def test_inplace_row_scale():
rng = np.random.RandomState(0)
X = sp.rand(100, 200, 0.05)
Xr = X.tocsr()
Xc = X.tocsc()
XA = X.toarray()
scale = rng.rand(100)
XA *= scale.reshape(-1, 1)
inplace_row_scale(Xc, scale)
inplace_row_scale(Xr, scale)
assert_array_almost_equal(Xr.toarray(), Xc.toarray())
assert_array_almost_equal(XA, Xc.toarray())
assert_array_almost_equal(XA, Xr.toarray())
assert_raises(TypeError, inplace_column_scale, X.tolil(), scale)
X = X.astype(np.float32)
scale = scale.astype(np.float32)
Xr = X.tocsr()
Xc = X.tocsc()
XA = X.toarray()
XA *= scale.reshape(-1, 1)
inplace_row_scale(Xc, scale)
inplace_row_scale(Xr, scale)
assert_array_almost_equal(Xr.toarray(), Xc.toarray())
assert_array_almost_equal(XA, Xc.toarray())
assert_array_almost_equal(XA, Xr.toarray())
assert_raises(TypeError, inplace_column_scale, X.tolil(), scale)
def test_inplace_swap_row():
X = np.array([[0, 3, 0],
[2, 4, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
swap = linalg.get_blas_funcs(('swap',), (X,))
swap = swap[0]
X[0], X[-1] = swap(X[0], X[-1])
inplace_swap_row(X_csr, 0, -1)
inplace_swap_row(X_csc, 0, -1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
X[2], X[3] = swap(X[2], X[3])
inplace_swap_row(X_csr, 2, 3)
inplace_swap_row(X_csc, 2, 3)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
assert_raises(TypeError, inplace_swap_row, X_csr.tolil())
X = np.array([[0, 3, 0],
[2, 4, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float32)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
swap = linalg.get_blas_funcs(('swap',), (X,))
swap = swap[0]
X[0], X[-1] = swap(X[0], X[-1])
inplace_swap_row(X_csr, 0, -1)
inplace_swap_row(X_csc, 0, -1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
X[2], X[3] = swap(X[2], X[3])
inplace_swap_row(X_csr, 2, 3)
inplace_swap_row(X_csc, 2, 3)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
assert_raises(TypeError, inplace_swap_row, X_csr.tolil())
def test_inplace_swap_column():
X = np.array([[0, 3, 0],
[2, 4, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
swap = linalg.get_blas_funcs(('swap',), (X,))
swap = swap[0]
X[:, 0], X[:, -1] = swap(X[:, 0], X[:, -1])
inplace_swap_column(X_csr, 0, -1)
inplace_swap_column(X_csc, 0, -1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
X[:, 0], X[:, 1] = swap(X[:, 0], X[:, 1])
inplace_swap_column(X_csr, 0, 1)
inplace_swap_column(X_csc, 0, 1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
assert_raises(TypeError, inplace_swap_column, X_csr.tolil())
X = np.array([[0, 3, 0],
[2, 4, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float32)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
swap = linalg.get_blas_funcs(('swap',), (X,))
swap = swap[0]
X[:, 0], X[:, -1] = swap(X[:, 0], X[:, -1])
inplace_swap_column(X_csr, 0, -1)
inplace_swap_column(X_csc, 0, -1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
X[:, 0], X[:, 1] = swap(X[:, 0], X[:, 1])
inplace_swap_column(X_csr, 0, 1)
inplace_swap_column(X_csc, 0, 1)
assert_array_equal(X_csr.toarray(), X_csc.toarray())
assert_array_equal(X, X_csc.toarray())
assert_array_equal(X, X_csr.toarray())
assert_raises(TypeError, inplace_swap_column, X_csr.tolil())
def test_min_max_axis0():
X = np.array([[0, 3, 0],
[2, -1, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
mins_csr, maxs_csr = min_max_axis(X_csr, axis=0)
assert_array_equal(mins_csr, X.min(axis=0))
assert_array_equal(maxs_csr, X.max(axis=0))
mins_csc, maxs_csc = min_max_axis(X_csc, axis=0)
assert_array_equal(mins_csc, X.min(axis=0))
assert_array_equal(maxs_csc, X.max(axis=0))
X = X.astype(np.float32)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
mins_csr, maxs_csr = min_max_axis(X_csr, axis=0)
assert_array_equal(mins_csr, X.min(axis=0))
assert_array_equal(maxs_csr, X.max(axis=0))
mins_csc, maxs_csc = min_max_axis(X_csc, axis=0)
assert_array_equal(mins_csc, X.min(axis=0))
assert_array_equal(maxs_csc, X.max(axis=0))
def test_min_max_axis1():
X = np.array([[0, 3, 0],
[2, -1, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
mins_csr, maxs_csr = min_max_axis(X_csr, axis=1)
assert_array_equal(mins_csr, X.min(axis=1))
assert_array_equal(maxs_csr, X.max(axis=1))
mins_csc, maxs_csc = min_max_axis(X_csc, axis=1)
assert_array_equal(mins_csc, X.min(axis=1))
assert_array_equal(maxs_csc, X.max(axis=1))
X = X.astype(np.float32)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
mins_csr, maxs_csr = min_max_axis(X_csr, axis=1)
assert_array_equal(mins_csr, X.min(axis=1))
assert_array_equal(maxs_csr, X.max(axis=1))
mins_csc, maxs_csc = min_max_axis(X_csc, axis=1)
assert_array_equal(mins_csc, X.min(axis=1))
assert_array_equal(maxs_csc, X.max(axis=1))
def test_min_max_axis_errors():
X = np.array([[0, 3, 0],
[2, -1, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
assert_raises(TypeError, min_max_axis, X_csr.tolil(), axis=0)
assert_raises(ValueError, min_max_axis, X_csr, axis=2)
assert_raises(ValueError, min_max_axis, X_csc, axis=-3)
def test_count_nonzero():
X = np.array([[0, 3, 0],
[2, -1, 0],
[0, 0, 0],
[9, 8, 7],
[4, 0, 5]], dtype=np.float64)
X_csr = sp.csr_matrix(X)
X_csc = sp.csc_matrix(X)
X_nonzero = X != 0
sample_weight = [.5, .2, .3, .1, .1]
X_nonzero_weighted = X_nonzero * np.array(sample_weight)[:, None]
for axis in [0, 1, -1, -2, None]:
assert_array_almost_equal(count_nonzero(X_csr, axis=axis),
X_nonzero.sum(axis=axis))
assert_array_almost_equal(count_nonzero(X_csr, axis=axis,
sample_weight=sample_weight),
X_nonzero_weighted.sum(axis=axis))
assert_raises(TypeError, count_nonzero, X_csc)
assert_raises(ValueError, count_nonzero, X_csr, axis=2)
def test_csc_row_median():
# Test csc_row_median actually calculates the median.
# Test that it gives the same output when X is dense.
rng = np.random.RandomState(0)
X = rng.rand(100, 50)
dense_median = np.median(X, axis=0)
csc = sp.csc_matrix(X)
sparse_median = csc_median_axis_0(csc)
assert_array_equal(sparse_median, dense_median)
# Test that it gives the same output when X is sparse
X = rng.rand(51, 100)
X[X < 0.7] = 0.0
ind = rng.randint(0, 50, 10)
X[ind] = -X[ind]
csc = sp.csc_matrix(X)
dense_median = np.median(X, axis=0)
sparse_median = csc_median_axis_0(csc)
assert_array_equal(sparse_median, dense_median)
# Test for toy data.
X = [[0, -2], [-1, -1], [1, 0], [2, 1]]
csc = sp.csc_matrix(X)
assert_array_equal(csc_median_axis_0(csc), np.array([0.5, -0.5]))
X = [[0, -2], [-1, -5], [1, -3]]
csc = sp.csc_matrix(X)
assert_array_equal(csc_median_axis_0(csc), np.array([0., -3]))
# Test that it raises an Error for non-csc matrices.
assert_raises(TypeError, csc_median_axis_0, sp.csr_matrix(X))
| bsd-3-clause |
shenzebang/scikit-learn | sklearn/tests/test_kernel_approximation.py | 244 | 7588 | import numpy as np
from scipy.sparse import csr_matrix
from sklearn.utils.testing import assert_array_equal, assert_equal, assert_true
from sklearn.utils.testing import assert_not_equal
from sklearn.utils.testing import assert_array_almost_equal, assert_raises
from sklearn.utils.testing import assert_less_equal
from sklearn.metrics.pairwise import kernel_metrics
from sklearn.kernel_approximation import RBFSampler
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.kernel_approximation import SkewedChi2Sampler
from sklearn.kernel_approximation import Nystroem
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
# generate data
rng = np.random.RandomState(0)
X = rng.random_sample(size=(300, 50))
Y = rng.random_sample(size=(300, 50))
X /= X.sum(axis=1)[:, np.newaxis]
Y /= Y.sum(axis=1)[:, np.newaxis]
def test_additive_chi2_sampler():
# test that AdditiveChi2Sampler approximates kernel on random data
# compute exact kernel
    # abbreviations for an easier formula
X_ = X[:, np.newaxis, :]
Y_ = Y[np.newaxis, :, :]
large_kernel = 2 * X_ * Y_ / (X_ + Y_)
# reduce to n_samples_x x n_samples_y by summing over features
kernel = (large_kernel.sum(axis=2))
# approximate kernel mapping
transform = AdditiveChi2Sampler(sample_steps=3)
X_trans = transform.fit_transform(X)
Y_trans = transform.transform(Y)
kernel_approx = np.dot(X_trans, Y_trans.T)
assert_array_almost_equal(kernel, kernel_approx, 1)
X_sp_trans = transform.fit_transform(csr_matrix(X))
Y_sp_trans = transform.transform(csr_matrix(Y))
assert_array_equal(X_trans, X_sp_trans.A)
assert_array_equal(Y_trans, Y_sp_trans.A)
# test error is raised on negative input
Y_neg = Y.copy()
Y_neg[0, 0] = -1
assert_raises(ValueError, transform.transform, Y_neg)
# test error on invalid sample_steps
transform = AdditiveChi2Sampler(sample_steps=4)
assert_raises(ValueError, transform.fit, X)
# test that the sample interval is set correctly
sample_steps_available = [1, 2, 3]
for sample_steps in sample_steps_available:
# test that the sample_interval is initialized correctly
transform = AdditiveChi2Sampler(sample_steps=sample_steps)
assert_equal(transform.sample_interval, None)
# test that the sample_interval is changed in the fit method
transform.fit(X)
assert_not_equal(transform.sample_interval_, None)
# test that the sample_interval is set correctly
sample_interval = 0.3
transform = AdditiveChi2Sampler(sample_steps=4,
sample_interval=sample_interval)
assert_equal(transform.sample_interval, sample_interval)
transform.fit(X)
assert_equal(transform.sample_interval_, sample_interval)
def test_skewed_chi2_sampler():
    # test that SkewedChi2Sampler approximates kernel on random data
# compute exact kernel
c = 0.03
    # abbreviations for an easier formula
X_c = (X + c)[:, np.newaxis, :]
Y_c = (Y + c)[np.newaxis, :, :]
# we do it in log-space in the hope that it's more stable
    # this array is n_samples_x x n_samples_y x n_features in size
log_kernel = ((np.log(X_c) / 2.) + (np.log(Y_c) / 2.) + np.log(2.) -
np.log(X_c + Y_c))
# reduce to n_samples_x x n_samples_y by summing over features in log-space
kernel = np.exp(log_kernel.sum(axis=2))
# approximate kernel mapping
transform = SkewedChi2Sampler(skewedness=c, n_components=1000,
random_state=42)
X_trans = transform.fit_transform(X)
Y_trans = transform.transform(Y)
kernel_approx = np.dot(X_trans, Y_trans.T)
assert_array_almost_equal(kernel, kernel_approx, 1)
# test error is raised on negative input
Y_neg = Y.copy()
Y_neg[0, 0] = -1
assert_raises(ValueError, transform.transform, Y_neg)
def test_rbf_sampler():
# test that RBFSampler approximates kernel on random data
# compute exact kernel
gamma = 10.
kernel = rbf_kernel(X, Y, gamma=gamma)
# approximate kernel mapping
rbf_transform = RBFSampler(gamma=gamma, n_components=1000, random_state=42)
X_trans = rbf_transform.fit_transform(X)
Y_trans = rbf_transform.transform(Y)
kernel_approx = np.dot(X_trans, Y_trans.T)
error = kernel - kernel_approx
assert_less_equal(np.abs(np.mean(error)), 0.01) # close to unbiased
np.abs(error, out=error)
assert_less_equal(np.max(error), 0.1) # nothing too far off
assert_less_equal(np.mean(error), 0.05) # mean is fairly close
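# Illustrative usage sketch (added for clarity, not part of the upstream test
# suite): the approximate feature map is normally combined with a linear
# model, e.g.
#   from sklearn.pipeline import make_pipeline
#   from sklearn.linear_model import SGDClassifier
#   clf = make_pipeline(RBFSampler(gamma=1.0, random_state=0),
#                       SGDClassifier(random_state=0))
# which approximates a kernelized classifier at linear-model cost.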
def test_input_validation():
# Regression test: kernel approx. transformers should work on lists
# No assertions; the old versions would simply crash
X = [[1, 2], [3, 4], [5, 6]]
AdditiveChi2Sampler().fit(X).transform(X)
SkewedChi2Sampler().fit(X).transform(X)
RBFSampler().fit(X).transform(X)
X = csr_matrix(X)
RBFSampler().fit(X).transform(X)
def test_nystroem_approximation():
# some basic tests
rnd = np.random.RandomState(0)
X = rnd.uniform(size=(10, 4))
# With n_components = n_samples this is exact
X_transformed = Nystroem(n_components=X.shape[0]).fit_transform(X)
K = rbf_kernel(X)
assert_array_almost_equal(np.dot(X_transformed, X_transformed.T), K)
trans = Nystroem(n_components=2, random_state=rnd)
X_transformed = trans.fit(X).transform(X)
assert_equal(X_transformed.shape, (X.shape[0], 2))
# test callable kernel
linear_kernel = lambda X, Y: np.dot(X, Y.T)
trans = Nystroem(n_components=2, kernel=linear_kernel, random_state=rnd)
X_transformed = trans.fit(X).transform(X)
assert_equal(X_transformed.shape, (X.shape[0], 2))
# test that available kernels fit and transform
kernels_available = kernel_metrics()
for kern in kernels_available:
trans = Nystroem(n_components=2, kernel=kern, random_state=rnd)
X_transformed = trans.fit(X).transform(X)
assert_equal(X_transformed.shape, (X.shape[0], 2))
def test_nystroem_singular_kernel():
# test that nystroem works with singular kernel matrix
rng = np.random.RandomState(0)
X = rng.rand(10, 20)
X = np.vstack([X] * 2) # duplicate samples
gamma = 100
N = Nystroem(gamma=gamma, n_components=X.shape[0]).fit(X)
X_transformed = N.transform(X)
K = rbf_kernel(X, gamma=gamma)
assert_array_almost_equal(K, np.dot(X_transformed, X_transformed.T))
    assert_true(np.all(np.isfinite(X_transformed)))
def test_nystroem_poly_kernel_params():
# Non-regression: Nystroem should pass other parameters beside gamma.
rnd = np.random.RandomState(37)
X = rnd.uniform(size=(10, 4))
K = polynomial_kernel(X, degree=3.1, coef0=.1)
nystroem = Nystroem(kernel="polynomial", n_components=X.shape[0],
degree=3.1, coef0=.1)
X_transformed = nystroem.fit_transform(X)
assert_array_almost_equal(np.dot(X_transformed, X_transformed.T), K)
def test_nystroem_callable():
# Test Nystroem on a callable.
rnd = np.random.RandomState(42)
n_samples = 10
X = rnd.uniform(size=(n_samples, 4))
def logging_histogram_kernel(x, y, log):
"""Histogram kernel that writes to a log."""
log.append(1)
return np.minimum(x, y).sum()
kernel_log = []
X = list(X) # test input validation
Nystroem(kernel=logging_histogram_kernel,
n_components=(n_samples - 1),
kernel_params={'log': kernel_log}).fit(X)
assert_equal(len(kernel_log), n_samples * (n_samples - 1) / 2)
| bsd-3-clause |
yaojenkuo/BuildingMachineLearningSystemsWithPython | ch08/regression.py | 23 | 1510 | # This code is supporting material for the book
# Building Machine Learning Systems with Python
# by Willi Richert and Luis Pedro Coelho
# published by PACKT Publishing
#
# It is made available under the MIT License
import numpy as np
from sklearn.linear_model import ElasticNetCV
from norm import NormalizePositive
from sklearn import metrics
def predict(train):
binary = (train > 0)
reg = ElasticNetCV(fit_intercept=True, alphas=[
0.0125, 0.025, 0.05, .125, .25, .5, 1., 2., 4.])
norm = NormalizePositive()
train = norm.fit_transform(train)
filled = train.copy()
# iterate over all users
for u in range(train.shape[0]):
# remove the current user for training
curtrain = np.delete(train, u, axis=0)
bu = binary[u]
if np.sum(bu) > 5:
reg.fit(curtrain[:,bu].T, train[u, bu])
# Fill the values that were not there already
filled[u, ~bu] = reg.predict(curtrain[:,~bu].T)
return norm.inverse_transform(filled)
def main(transpose_inputs=False):
from load_ml100k import get_train_test
train,test = get_train_test(random_state=12)
if transpose_inputs:
train = train.T
test = test.T
filled = predict(train)
r2 = metrics.r2_score(test[test > 0], filled[test > 0])
print('R2 score ({} regression): {:.1%}'.format(
('movie' if transpose_inputs else 'user'),
r2))
if __name__ == '__main__':
main()
main(transpose_inputs=True)
| mit |
herval/zeppelin | python/src/main/resources/python/zeppelin_context.py | 8 | 8476 | #
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import os, sys
import warnings
import base64
from io import BytesIO
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
class PyZeppelinContext(object):
""" A context impl that uses Py4j to communicate to JVM
"""
def __init__(self, z, gateway):
self.z = z
self.gateway = gateway
self.paramOption = gateway.jvm.org.apache.zeppelin.display.ui.OptionInput.ParamOption
self.javaList = gateway.jvm.java.util.ArrayList
self.max_result = z.getMaxResult()
self._displayhook = lambda *args: None
self._setup_matplotlib()
# By implementing special methods it makes operating on it more Pythonic
def __setitem__(self, key, item):
self.z.put(key, item)
def __getitem__(self, key):
return self.z.get(key)
def __delitem__(self, key):
self.z.remove(key)
def __contains__(self, item):
return self.z.containsKey(item)
def add(self, key, value):
self.__setitem__(key, value)
def put(self, key, value):
self.__setitem__(key, value)
def get(self, key):
return self.__getitem__(key)
def getInterpreterContext(self):
return self.z.getInterpreterContext()
def input(self, name, defaultValue=""):
return self.z.input(name, defaultValue)
def textbox(self, name, defaultValue=""):
return self.z.textbox(name, defaultValue)
def password(self, name):
return self.z.password(name)
def noteTextbox(self, name, defaultValue=""):
return self.z.noteTextbox(name, defaultValue)
def select(self, name, options, defaultValue=""):
return self.z.select(name, defaultValue, self.getParamOptions(options))
def noteSelect(self, name, options, defaultValue=""):
return self.z.noteSelect(name, defaultValue, self.getParamOptions(options))
def checkbox(self, name, options, defaultChecked=[]):
return self.z.checkbox(name, self.getDefaultChecked(defaultChecked), self.getParamOptions(options))
def noteCheckbox(self, name, options, defaultChecked=[]):
return self.z.noteCheckbox(name, self.getDefaultChecked(defaultChecked), self.getParamOptions(options))
def registerHook(self, event, cmd, replName=None):
if replName is None:
self.z.registerHook(event, cmd)
else:
self.z.registerHook(event, cmd, replName)
def unregisterHook(self, event, replName=None):
if replName is None:
self.z.unregisterHook(event)
else:
self.z.unregisterHook(event, replName)
def registerNoteHook(self, event, cmd, noteId, replName=None):
if replName is None:
self.z.registerNoteHook(event, cmd, noteId)
else:
self.z.registerNoteHook(event, cmd, noteId, replName)
def unregisterNoteHook(self, event, noteId, replName=None):
if replName is None:
self.z.unregisterNoteHook(event, noteId)
else:
self.z.unregisterNoteHook(event, noteId, replName)
def getParamOptions(self, options):
javaOptions = self.gateway.new_array(self.paramOption, len(options))
i = 0
for tuple in options:
javaOptions[i] = self.paramOption(tuple[0], tuple[1])
i += 1
return javaOptions
def getDefaultChecked(self, defaultChecked):
javaDefaultChecked = self.javaList()
for check in defaultChecked:
javaDefaultChecked.append(check)
return javaDefaultChecked
def show(self, p, **kwargs):
if hasattr(p, '__name__') and p.__name__ == "matplotlib.pyplot":
self.show_matplotlib(p, **kwargs)
elif type(p).__name__ == "DataFrame": # does not play well with sub-classes
# `isinstance(p, DataFrame)` would req `import pandas.core.frame.DataFrame`
# and so a dependency on pandas
self.show_dataframe(p, **kwargs)
else:
print(str(p))
def show_dataframe(self, df, show_index=False, **kwargs):
"""Pretty prints DF using Table Display System
"""
exceed_limit = len(df) > self.max_result
header_buf = StringIO("")
if show_index:
idx_name = str(df.index.name) if df.index.name is not None else ""
header_buf.write(idx_name + "\t")
header_buf.write(str(df.columns[0]))
for col in df.columns[1:]:
header_buf.write("\t")
header_buf.write(str(col))
header_buf.write("\n")
body_buf = StringIO("")
rows = df.head(self.max_result).values if exceed_limit else df.values
index = df.index.values
for idx, row in zip(index, rows):
if show_index:
body_buf.write("%html <strong>{}</strong>".format(idx))
body_buf.write("\t")
body_buf.write(str(row[0]))
for cell in row[1:]:
body_buf.write("\t")
body_buf.write(str(cell))
body_buf.write("\n")
body_buf.seek(0)
header_buf.seek(0)
print("%table " + header_buf.read() + body_buf.read())
body_buf.close(); header_buf.close()
if exceed_limit:
print("%html <font color=red>Results are limited by {}.</font>".format(self.max_result))
def show_matplotlib(self, p, fmt="png", width="auto", height="auto",
**kwargs):
"""Matplotlib show function
"""
if fmt == "png":
img = BytesIO()
p.savefig(img, format=fmt)
img_str = b"data:image/png;base64,"
img_str += base64.b64encode(img.getvalue().strip())
img_tag = "<img src={img} style='width={width};height:{height}'>"
# Decoding is necessary for Python 3 compatibility
img_str = img_str.decode("ascii")
img_str = img_tag.format(img=img_str, width=width, height=height)
elif fmt == "svg":
img = StringIO()
p.savefig(img, format=fmt)
img_str = img.getvalue()
else:
raise ValueError("fmt must be 'png' or 'svg'")
html = "%html <div style='width:{width};height:{height}'>{img}<div>"
print(html.format(width=width, height=height, img=img_str))
img.close()
def configure_mpl(self, **kwargs):
import mpl_config
mpl_config.configure(**kwargs)
def _setup_matplotlib(self):
# If we don't have matplotlib installed don't bother continuing
try:
import matplotlib
except ImportError:
return
# Make sure custom backends are available in the PYTHONPATH
rootdir = os.environ.get('ZEPPELIN_HOME', os.getcwd())
mpl_path = os.path.join(rootdir, 'interpreter', 'lib', 'python')
if mpl_path not in sys.path:
sys.path.append(mpl_path)
# Finally check if backend exists, and if so configure as appropriate
try:
matplotlib.use('module://backend_zinline')
import backend_zinline
# Everything looks good so make config assuming that we are using
# an inline backend
self._displayhook = backend_zinline.displayhook
self.configure_mpl(width=600, height=400, dpi=72, fontsize=10,
interactive=True, format='png', context=self.z)
except ImportError:
# Fall back to Agg if no custom backend installed
matplotlib.use('Agg')
warnings.warn("Unable to load inline matplotlib backend, "
"falling back to Agg")
| apache-2.0 |
NunoEdgarGub1/scikit-learn | examples/linear_model/plot_ols_ridge_variance.py | 387 | 2060 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Ordinary Least Squares and Ridge Regression Variance
=========================================================
Due to the few points in each dimension and the straight
line that linear regression uses to follow these points
as well as it can, noise on the observations will cause
great variance as shown in the first plot. Every line's slope
can vary quite a bit for each prediction due to the noise
induced in the observations.
Ridge regression is basically minimizing a penalised version
of the least-squares loss function. The penalty `shrinks` the
value of the regression coefficients.
Despite the few data points in each dimension, the slope
of the prediction is much more stable and the variance
in the line itself is greatly reduced, in comparison to that
of the standard linear regression.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import linear_model
X_train = np.c_[.5, 1].T
y_train = [.5, 1]
X_test = np.c_[0, 2].T
np.random.seed(0)
classifiers = dict(ols=linear_model.LinearRegression(),
ridge=linear_model.Ridge(alpha=.1))
fignum = 1
for name, clf in classifiers.items():
fig = plt.figure(fignum, figsize=(4, 3))
plt.clf()
plt.title(name)
ax = plt.axes([.12, .12, .8, .8])
for _ in range(6):
this_X = .1 * np.random.normal(size=(2, 1)) + X_train
clf.fit(this_X, y_train)
ax.plot(X_test, clf.predict(X_test), color='.5')
ax.scatter(this_X, y_train, s=3, c='.5', marker='o', zorder=10)
clf.fit(X_train, y_train)
ax.plot(X_test, clf.predict(X_test), linewidth=2, color='blue')
ax.scatter(X_train, y_train, s=30, c='r', marker='+', zorder=10)
ax.set_xticks(())
ax.set_yticks(())
ax.set_ylim((0, 1.6))
ax.set_xlabel('X')
ax.set_ylabel('y')
ax.set_xlim(0, 2)
fignum += 1
plt.show()
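# (Not part of the original example.) A small numerical companion to the
# plots above, under the same noise model: refit both estimators on many
# noisy copies of the training data and compare the spread of the fitted
# slopes. The number of repetitions is an arbitrary illustrative choice.
n_repeats = 200
slopes = dict((name, np.empty(n_repeats)) for name in classifiers)
for name, clf in classifiers.items():
    for i in range(n_repeats):
        this_X = .1 * np.random.normal(size=(2, 1)) + X_train
        clf.fit(this_X, y_train)
        slopes[name][i] = clf.coef_[0]
for name in sorted(classifiers):
    print("%s: standard deviation of the slope over %d noisy refits: %.3f"
          % (name, n_repeats, slopes[name].std()))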
| bsd-3-clause |
DonBeo/statsmodels | statsmodels/genmod/_prediction.py | 27 | 9437 | # -*- coding: utf-8 -*-
"""
Created on Fri Dec 19 11:29:18 2014
Author: Josef Perktold
License: BSD-3
"""
import numpy as np
from scipy import stats
# this is similar to ContrastResults after t_test, partially copied and adjusted
class PredictionResults(object):
def __init__(self, predicted_mean, var_pred_mean, var_resid=None,
df=None, dist=None, row_labels=None, linpred=None, link=None):
# TODO: is var_resid used? drop from arguments?
self.predicted_mean = predicted_mean
self.var_pred_mean = var_pred_mean
self.df = df
self.var_resid = var_resid
self.row_labels = row_labels
self.linpred = linpred
self.link = link
if dist is None or dist == 'norm':
self.dist = stats.norm
self.dist_args = ()
elif dist == 't':
self.dist = stats.t
self.dist_args = (self.df,)
else:
self.dist = dist
self.dist_args = ()
@property
def se_obs(self):
raise NotImplementedError
return np.sqrt(self.var_pred_mean + self.var_resid)
@property
def se_mean(self):
return np.sqrt(self.var_pred_mean)
@property
def tvalues(self):
return self.predicted_mean / self.se_mean
def t_test(self, value=0, alternative='two-sided'):
'''z- or t-test for hypothesis that mean is equal to value
Parameters
----------
value : array_like
value under the null hypothesis
alternative : string
'two-sided', 'larger', 'smaller'
Returns
-------
stat : ndarray
test statistic
pvalue : ndarray
p-value of the hypothesis test, the distribution is given by
the attribute of the instance, specified in `__init__`. Default
if not specified is the normal distribution.
'''
# from statsmodels.stats.weightstats
# assumes symmetric distribution
stat = (self.predicted_mean - value) / self.se_mean
if alternative in ['two-sided', '2-sided', '2s']:
pvalue = self.dist.sf(np.abs(stat), *self.dist_args)*2
elif alternative in ['larger', 'l']:
pvalue = self.dist.sf(stat, *self.dist_args)
elif alternative in ['smaller', 's']:
pvalue = self.dist.cdf(stat, *self.dist_args)
else:
raise ValueError('invalid alternative')
return stat, pvalue
def conf_int(self, method='endpoint', alpha=0.05, **kwds):
"""
        Returns the confidence interval of the predicted mean.
This is currently only available for t and z tests.
Parameters
----------
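        method : {'endpoint', 'delta'}, optional
            'endpoint' transforms the endpoints of the confidence interval
            of the linear predictor through the inverse link function,
            'delta' applies the delta method to the predicted mean. When
            the link is linear (e.g. identity), the delta method is used
            regardless of this argument.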
alpha : float, optional
The significance level for the confidence interval.
ie., The default `alpha` = .05 returns a 95% confidence interval.
kwds : extra keyword arguments
currently ignored, only for compatibility, consistent signature
Returns
-------
ci : ndarray, (k_constraints, 2)
The array has the lower and the upper limit of the confidence
interval in the columns.
"""
tmp = np.linspace(0, 1, 6)
is_linear = (self.link.inverse(tmp) == tmp).all()
if method == 'endpoint' and not is_linear:
ci_linear = self.linpred.conf_int(alpha=alpha, obs=False)
ci = self.link.inverse(ci_linear)
elif method == 'delta' or is_linear:
se = self.se_mean
q = self.dist.ppf(1 - alpha / 2., *self.dist_args)
lower = self.predicted_mean - q * se
upper = self.predicted_mean + q * se
ci = np.column_stack((lower, upper))
# if we want to stack at a new last axis, for lower.ndim > 1
# np.concatenate((lower[..., None], upper[..., None]), axis=-1)
return ci
def summary_frame(self, what='all', alpha=0.05):
# TODO: finish and cleanup
import pandas as pd
from statsmodels.compat.collections import OrderedDict
#ci_obs = self.conf_int(alpha=alpha, obs=True) # need to split
ci_mean = self.conf_int(alpha=alpha)
to_include = OrderedDict()
to_include['mean'] = self.predicted_mean
to_include['mean_se'] = self.se_mean
to_include['mean_ci_lower'] = ci_mean[:, 0]
to_include['mean_ci_upper'] = ci_mean[:, 1]
self.table = to_include
#OrderedDict doesn't work to preserve sequence
# pandas dict doesn't handle 2d_array
#data = np.column_stack(list(to_include.values()))
#names = ....
res = pd.DataFrame(to_include, index=self.row_labels,
columns=to_include.keys())
return res
def get_prediction_glm(self, exog=None, transform=True, weights=None,
row_labels=None, linpred=None, link=None, pred_kwds=None):
"""
compute prediction results
Parameters
----------
exog : array-like, optional
The values for which you want to predict.
transform : bool, optional
If the model was fit via a formula, do you want to pass
exog through the formula. Default is True. E.g., if you fit
a model y ~ log(x1) + log(x2), and transform is True, then
you can pass a data structure that contains x1 and x2 in
their original form. Otherwise, you'd need to log the data
first.
weights : array_like, optional
Weights interpreted as in WLS, used for the variance of the predicted
residual.
args, kwargs :
Some models can take additional arguments or keywords, see the
predict method of the model for the details.
Returns
-------
prediction_results : instance
The prediction results instance contains prediction and prediction
variance and can on demand calculate confidence intervals and summary
tables for the prediction of the mean and of new observations.
"""
### prepare exog and row_labels, based on base Results.predict
if transform and hasattr(self.model, 'formula') and exog is not None:
from patsy import dmatrix
exog = dmatrix(self.model.data.design_info.builder,
exog)
if exog is not None:
if row_labels is None:
if hasattr(exog, 'index'):
row_labels = exog.index
else:
row_labels = None
exog = np.asarray(exog)
if exog.ndim == 1 and (self.model.exog.ndim == 1 or
self.model.exog.shape[1] == 1):
exog = exog[:, None]
exog = np.atleast_2d(exog) # needed in count model shape[1]
else:
exog = self.model.exog
if weights is None:
weights = getattr(self.model, 'weights', None)
if row_labels is None:
row_labels = getattr(self.model.data, 'row_labels', None)
# need to handle other arrays, TODO: is delegating to model possible ?
if weights is not None:
weights = np.asarray(weights)
if (weights.size > 1 and
(weights.ndim != 1 or weights.shape[0] == exog.shape[1])):
raise ValueError('weights has wrong shape')
### end
    if pred_kwds is None:
        pred_kwds = {}
    pred_kwds['linear'] = False
predicted_mean = self.model.predict(self.params, exog, **pred_kwds)
covb = self.cov_params()
link_deriv = self.model.family.link.inverse_deriv(linpred.predicted_mean)
var_pred_mean = link_deriv**2 * (exog * np.dot(covb, exog.T).T).sum(1)
# TODO: check that we have correct scale, Refactor scale #???
var_resid = self.scale / weights # self.mse_resid / weights
# special case for now:
if self.cov_type == 'fixed scale':
var_resid = self.cov_kwds['scale'] / weights
dist = ['norm', 't'][self.use_t]
return PredictionResults(predicted_mean, var_pred_mean, var_resid,
df=self.df_resid, dist=dist,
row_labels=row_labels, linpred=linpred, link=link)
def params_transform_univariate(params, cov_params, link=None, transform=None,
row_labels=None):
"""
    results for univariate, nonlinear, monotonically transformed parameters
This provides transformed values, standard errors and confidence interval
for transformations of parameters, for example in calculating rates with
`exp(params)` in the case of Poisson or other models with exponential
mean function.
"""
from statsmodels.genmod.families import links
if link is None and transform is None:
link = links.Log()
if row_labels is None and hasattr(params, 'index'):
row_labels = params.index
params = np.asarray(params)
predicted_mean = link.inverse(params)
link_deriv = link.inverse_deriv(params)
var_pred_mean = link_deriv**2 * np.diag(cov_params)
# TODO: do we want covariance also, or just var/se
dist = stats.norm
# TODO: need ci for linear prediction, method of `lin_pred
linpred = PredictionResults(params, np.diag(cov_params), dist=dist,
row_labels=row_labels, link=links.identity())
res = PredictionResults(predicted_mean, var_pred_mean, dist=dist,
row_labels=row_labels, linpred=linpred, link=link)
return res
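if __name__ == "__main__":
    # Minimal usage sketch, not part of the original module: turn Poisson
    # regression coefficients into rate ratios with delta-method standard
    # errors. The simulated data and model below are illustrative only.
    import statsmodels.api as sm

    rng = np.random.RandomState(0)
    exog = sm.add_constant(rng.uniform(size=200))
    endog = rng.poisson(np.exp(0.5 + 1.0 * exog[:, 1]))
    res = sm.GLM(endog, exog, family=sm.families.Poisson()).fit()
    rates = params_transform_univariate(res.params, res.cov_params())
    print(rates.summary_frame())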
| bsd-3-clause |
RachitKansal/scikit-learn | examples/cluster/plot_kmeans_digits.py | 230 | 4524 | """
===========================================================
A demo of K-Means clustering on the handwritten digits data
===========================================================
In this example we compare the various initialization strategies for
K-means in terms of runtime and quality of the results.
As the ground truth is known here, we also apply different cluster
quality metrics to judge the goodness of fit of the cluster labels to the
ground truth.
Cluster quality metrics evaluated (see :ref:`clustering_evaluation` for
definitions and discussions of the metrics):
=========== ========================================================
Shorthand full name
=========== ========================================================
homo homogeneity score
compl completeness score
v-meas V measure
ARI adjusted Rand index
AMI adjusted mutual information
silhouette silhouette coefficient
=========== ========================================================
"""
print(__doc__)
from time import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.cluster import KMeans
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
np.random.seed(42)
digits = load_digits()
data = scale(digits.data)
n_samples, n_features = data.shape
n_digits = len(np.unique(digits.target))
labels = digits.target
sample_size = 300
print("n_digits: %d, \t n_samples %d, \t n_features %d"
% (n_digits, n_samples, n_features))
print(79 * '_')
print('% 9s' % 'init'
' time inertia homo compl v-meas ARI AMI silhouette')
def bench_k_means(estimator, name, data):
t0 = time()
estimator.fit(data)
print('% 9s %.2fs %i %.3f %.3f %.3f %.3f %.3f %.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
name="random", data=data)
# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
name="PCA-based",
data=data)
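# (Not part of the original example.) The benchmark helper works with any
# data matrix; for instance, running k-means on a 2-component PCA projection
# gives a feel for how much cluster structure two components retain.
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
              name="PCA-2D",
              data=PCA(n_components=2).fit_transform(data))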
print(79 * '_')
###############################################################################
# Visualize the results on PCA-reduced data
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02     # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
| bsd-3-clause |
istb-mia/MIALab | bin/hello_world.py | 1 | 1189 | """A hello world.
Uses the main libraries to verify the environment installation.
"""
import sys
import matplotlib.pyplot as plt
import numpy as np
# import pydensecrf.densecrf as crf
import pymia.evaluation.evaluator as pymia_eval
import SimpleITK as sitk
import sklearn.ensemble as sk_ensemble
def main():
print('Welcome to MIALab 2020!')
# --- scikit-learn
clf = sk_ensemble.RandomForestClassifier(max_depth=2, random_state=0)
# --- SimpleITK
image = sitk.Image(256, 128, 64, sitk.sitkInt16)
print('Image dimension:', image.GetDimension())
print('Voxel intensity before setting:', image.GetPixel(0, 0, 0))
image.SetPixel(0, 0, 0, 1)
print('Voxel intensity after setting:', image.GetPixel(0, 0, 0))
# --- numpy and matplotlib
array = np.array([1, 23, 2, 4])
plt.plot(array)
plt.ylabel('Some meaningful numbers')
plt.xlabel('The x-axis')
plt.title('Wohoo')
plt.show()
# --- pydensecrf
# d = crf.DenseCRF(1000, 2)
# --- pymia
eval = pymia_eval.SegmentationEvaluator([], {})
print('Everything seems to work fine!')
if __name__ == "__main__":
"""The program's entry point."""
main()
| apache-2.0 |
madmax983/h2o-3 | h2o-py/tests/pydemo_utils/utilsPY.py | 1 | 1976 | import json
def ipy_notebook_exec(path, save_and_norun=None):
notebook = json.load(open(path))
program = ''
for block in ipy_code_blocks(notebook):
for line in ipy_valid_lines(block):
if "h2o.init" not in line:
program += line if '\n' in line else line + '\n'
if not save_and_norun == None:
with open(save_and_norun,"w") as f: f.write(program)
else :
d={}
exec program in d # safe, but horrible (exec is horrible)
def ipy_blocks(notebook):
if 'worksheets' in notebook.keys():
return notebook['worksheets'][0]['cells'] # just take the first worksheet
elif 'cells' in notebook.keys():
return notebook['cells']
else:
raise NotImplementedError, "ipython notebook cell/block json format not handled"
def ipy_code_blocks(notebook):
return [cell for cell in ipy_blocks(notebook) if cell['cell_type'] == 'code']
def ipy_lines(block):
if 'source' in block.keys():
return block['source']
elif 'input' in block.keys():
return block['input']
else:
raise NotImplementedError, "ipython notebook source/line json format not handled"
def ipy_valid_lines(block):
lines = ipy_lines(block)
# matplotlib handling
for line in lines:
if "import matplotlib.pyplot as plt" in line or "%matplotlib inline" in line:
import matplotlib
matplotlib.use('Agg', warn=False)
# remove ipython magic functions
lines = [line for line in lines if not line.startswith('%')]
# don't show any plots
lines = [line for line in lines if not "plt.show()" in line]
return lines
def pydemo_exec(test_name):
with open (test_name, "r") as t: demo = t.read()
program = ''
for line in demo.split('\n'):
if "h2o.init" not in line:
program += line if '\n' in line else line + '\n'
demo_c = compile(program, '<string>', 'exec')
p = {}
exec demo_c in p
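# Usage sketch (not part of the original utilities). Both helpers take paths
# that exist only in the caller's environment; the file names below are
# placeholders, not files shipped with this module:
#
#   ipy_notebook_exec("some_demo.ipynb")                                 # run a notebook's code cells
#   ipy_notebook_exec("some_demo.ipynb", save_and_norun="some_demo.py")  # only convert to a script
#   pydemo_exec("some_demo.py")                                          # run a plain Python demo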
| apache-2.0 |
aflaxman/mplexporter | mplexporter/renderers/vega_renderer.py | 54 | 5284 | import warnings
import json
import random
from .base import Renderer
from ..exporter import Exporter
class VegaRenderer(Renderer):
def open_figure(self, fig, props):
self.props = props
self.figwidth = int(props['figwidth'] * props['dpi'])
self.figheight = int(props['figheight'] * props['dpi'])
self.data = []
self.scales = []
self.axes = []
self.marks = []
def open_axes(self, ax, props):
if len(self.axes) > 0:
warnings.warn("multiple axes not yet supported")
self.axes = [dict(type="x", scale="x", ticks=10),
dict(type="y", scale="y", ticks=10)]
self.scales = [dict(name="x",
domain=props['xlim'],
type="linear",
range="width",
),
dict(name="y",
domain=props['ylim'],
type="linear",
range="height",
),]
def draw_line(self, data, coordinates, style, label, mplobj=None):
if coordinates != 'data':
warnings.warn("Only data coordinates supported. Skipping this")
dataname = "table{0:03d}".format(len(self.data) + 1)
# TODO: respect the other style settings
self.data.append({'name': dataname,
'values': [dict(x=d[0], y=d[1]) for d in data]})
self.marks.append({'type': 'line',
'from': {'data': dataname},
'properties': {
"enter": {
"interpolate": {"value": "monotone"},
"x": {"scale": "x", "field": "data.x"},
"y": {"scale": "y", "field": "data.y"},
"stroke": {"value": style['color']},
"strokeOpacity": {"value": style['alpha']},
"strokeWidth": {"value": style['linewidth']},
}
}
})
def draw_markers(self, data, coordinates, style, label, mplobj=None):
if coordinates != 'data':
warnings.warn("Only data coordinates supported. Skipping this")
dataname = "table{0:03d}".format(len(self.data) + 1)
# TODO: respect the other style settings
self.data.append({'name': dataname,
'values': [dict(x=d[0], y=d[1]) for d in data]})
self.marks.append({'type': 'symbol',
'from': {'data': dataname},
'properties': {
"enter": {
"interpolate": {"value": "monotone"},
"x": {"scale": "x", "field": "data.x"},
"y": {"scale": "y", "field": "data.y"},
"fill": {"value": style['facecolor']},
"fillOpacity": {"value": style['alpha']},
"stroke": {"value": style['edgecolor']},
"strokeOpacity": {"value": style['alpha']},
"strokeWidth": {"value": style['edgewidth']},
}
}
})
def draw_text(self, text, position, coordinates, style,
text_type=None, mplobj=None):
if text_type == 'xlabel':
self.axes[0]['title'] = text
elif text_type == 'ylabel':
self.axes[1]['title'] = text
class VegaHTML(object):
def __init__(self, renderer):
self.specification = dict(width=renderer.figwidth,
height=renderer.figheight,
data=renderer.data,
scales=renderer.scales,
axes=renderer.axes,
marks=renderer.marks)
def html(self):
"""Build the HTML representation for IPython."""
id = random.randint(0, 2 ** 16)
html = '<div id="vis%d"></div>' % id
html += '<script>\n'
html += VEGA_TEMPLATE % (json.dumps(self.specification), id)
html += '</script>\n'
return html
def _repr_html_(self):
return self.html()
def fig_to_vega(fig, notebook=False):
"""Convert a matplotlib figure to vega dictionary
if notebook=True, then return an object which will display in a notebook
otherwise, return an HTML string.
"""
renderer = VegaRenderer()
Exporter(renderer).run(fig)
vega_html = VegaHTML(renderer)
if notebook:
return vega_html
else:
return vega_html.html()
VEGA_TEMPLATE = """
( function() {
var _do_plot = function() {
if ( (typeof vg == 'undefined') && (typeof IPython != 'undefined')) {
$([IPython.events]).on("vega_loaded.vincent", _do_plot);
return;
}
vg.parse.spec(%s, function(chart) {
chart({el: "#vis%d"}).update();
});
};
_do_plot();
})();
"""
| bsd-3-clause |
JosmanPS/scikit-learn | examples/bicluster/bicluster_newsgroups.py | 162 | 7103 | """
================================================================
Biclustering documents with the Spectral Co-clustering algorithm
================================================================
This example demonstrates the Spectral Co-clustering algorithm on the
twenty newsgroups dataset. The 'comp.os.ms-windows.misc' category is
excluded because it contains many posts containing nothing but data.
The TF-IDF vectorized posts form a word frequency matrix, which is
then biclustered using Dhillon's Spectral Co-Clustering algorithm. The
resulting document-word biclusters indicate subsets words used more
often in those subsets documents.
For a few of the best biclusters, its most common document categories
and its ten most important words get printed. The best biclusters are
determined by their normalized cut. The best words are determined by
comparing their sums inside and outside the bicluster.
For comparison, the documents are also clustered using
MiniBatchKMeans. The document clusters derived from the biclusters
achieve a better V-measure than clusters found by MiniBatchKMeans.
Output::
Vectorizing...
Coclustering...
Done in 9.53s. V-measure: 0.4455
MiniBatchKMeans...
Done in 12.00s. V-measure: 0.3309
Best biclusters:
----------------
bicluster 0 : 1951 documents, 4373 words
categories : 23% talk.politics.guns, 19% talk.politics.misc, 14% sci.med
words : gun, guns, geb, banks, firearms, drugs, gordon, clinton, cdt, amendment
bicluster 1 : 1165 documents, 3304 words
categories : 29% talk.politics.mideast, 26% soc.religion.christian, 25% alt.atheism
words : god, jesus, christians, atheists, kent, sin, morality, belief, resurrection, marriage
bicluster 2 : 2219 documents, 2830 words
categories : 18% comp.sys.mac.hardware, 16% comp.sys.ibm.pc.hardware, 16% comp.graphics
words : voltage, dsp, board, receiver, circuit, shipping, packages, stereo, compression, package
bicluster 3 : 1860 documents, 2745 words
categories : 26% rec.motorcycles, 23% rec.autos, 13% misc.forsale
words : bike, car, dod, engine, motorcycle, ride, honda, cars, bmw, bikes
bicluster 4 : 12 documents, 155 words
categories : 100% rec.sport.hockey
words : scorer, unassisted, reichel, semak, sweeney, kovalenko, ricci, audette, momesso, nedved
"""
from __future__ import print_function
print(__doc__)
from collections import defaultdict
import operator
import re
from time import time
import numpy as np
from sklearn.cluster.bicluster import SpectralCoclustering
from sklearn.cluster import MiniBatchKMeans
from sklearn.externals.six import iteritems
from sklearn.datasets.twenty_newsgroups import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.cluster import v_measure_score
def number_aware_tokenizer(doc):
""" Tokenizer that maps all numeric tokens to a placeholder.
For many applications, tokens that begin with a number are not directly
useful, but the fact that such a token exists can be relevant. By applying
this form of dimensionality reduction, some methods may perform better.
"""
token_pattern = re.compile(u'(?u)\\b\\w\\w+\\b')
tokens = token_pattern.findall(doc)
tokens = ["#NUMBER" if token[0] in "0123456789_" else token
for token in tokens]
return tokens
# exclude 'comp.os.ms-windows.misc'
categories = ['alt.atheism', 'comp.graphics',
'comp.sys.ibm.pc.hardware', 'comp.sys.mac.hardware',
'comp.windows.x', 'misc.forsale', 'rec.autos',
'rec.motorcycles', 'rec.sport.baseball',
'rec.sport.hockey', 'sci.crypt', 'sci.electronics',
'sci.med', 'sci.space', 'soc.religion.christian',
'talk.politics.guns', 'talk.politics.mideast',
'talk.politics.misc', 'talk.religion.misc']
newsgroups = fetch_20newsgroups(categories=categories)
y_true = newsgroups.target
vectorizer = TfidfVectorizer(stop_words='english', min_df=5,
tokenizer=number_aware_tokenizer)
cocluster = SpectralCoclustering(n_clusters=len(categories),
svd_method='arpack', random_state=0)
kmeans = MiniBatchKMeans(n_clusters=len(categories), batch_size=20000,
random_state=0)
print("Vectorizing...")
X = vectorizer.fit_transform(newsgroups.data)
print("Coclustering...")
start_time = time()
cocluster.fit(X)
y_cocluster = cocluster.row_labels_
print("Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time,
v_measure_score(y_cocluster, y_true)))
print("MiniBatchKMeans...")
start_time = time()
y_kmeans = kmeans.fit_predict(X)
print("Done in {:.2f}s. V-measure: {:.4f}".format(
time() - start_time,
v_measure_score(y_kmeans, y_true)))
feature_names = vectorizer.get_feature_names()
document_names = list(newsgroups.target_names[i] for i in newsgroups.target)
def bicluster_ncut(i):
rows, cols = cocluster.get_indices(i)
if not (np.any(rows) and np.any(cols)):
import sys
return sys.float_info.max
row_complement = np.nonzero(np.logical_not(cocluster.rows_[i]))[0]
col_complement = np.nonzero(np.logical_not(cocluster.columns_[i]))[0]
weight = X[rows[:, np.newaxis], cols].sum()
cut = (X[row_complement[:, np.newaxis], cols].sum() +
X[rows[:, np.newaxis], col_complement].sum())
return cut / weight
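# In other words: the total weight of matrix entries that straddle the
# bicluster boundary divided by the total weight inside the bicluster, so
# lower values indicate tighter, better separated biclusters.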
def most_common(d):
"""Items of a defaultdict(int) with the highest values.
Like Counter.most_common in Python >=2.7.
"""
return sorted(iteritems(d), key=operator.itemgetter(1), reverse=True)
bicluster_ncuts = list(bicluster_ncut(i)
for i in range(len(newsgroups.target_names)))
best_idx = np.argsort(bicluster_ncuts)[:5]
print()
print("Best biclusters:")
print("----------------")
for idx, cluster in enumerate(best_idx):
n_rows, n_cols = cocluster.get_shape(cluster)
cluster_docs, cluster_words = cocluster.get_indices(cluster)
if not len(cluster_docs) or not len(cluster_words):
continue
# categories
counter = defaultdict(int)
for i in cluster_docs:
counter[document_names[i]] += 1
cat_string = ", ".join("{:.0f}% {}".format(float(c) / n_rows * 100, name)
for name, c in most_common(counter)[:3])
# words
out_of_cluster_docs = cocluster.row_labels_ != cluster
out_of_cluster_docs = np.where(out_of_cluster_docs)[0]
word_col = X[:, cluster_words]
word_scores = np.array(word_col[cluster_docs, :].sum(axis=0) -
word_col[out_of_cluster_docs, :].sum(axis=0))
word_scores = word_scores.ravel()
important_words = list(feature_names[cluster_words[i]]
for i in word_scores.argsort()[:-11:-1])
print("bicluster {} : {} documents, {} words".format(
idx, n_rows, n_cols))
print("categories : {}".format(cat_string))
print("words : {}\n".format(', '.join(important_words)))
| bsd-3-clause |
grv87/thesis-code | Plot.py | 1 | 4026 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Оценка параметров гамма-распределения методом МП
# Estimation of gamma distributions by ML method
# Copyright © 2013–2014 Василий Горохов-Апельсинов
# This file is part of code for my bachelor's thesis.
#
# Code for my bachelor's thesis is free software: you can redistribute
# it and/or modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# Code for my bachelor's thesis is distributed in the hope that it will
# be useful, but WITHOUT ANY WARRANTY; without even the implied warranty
# of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with code for my bachelor's thesis. If not, see
# <http://www.gnu.org/licenses/>.
# Requirements: Python 3 (works with 3.3), Python-dateutil, NumPy,
# SciPy, MatPlotLib, XeLaTeX, GhostScript
# Data
from dateutil.parser import parse
tableName = 'MICEX_SBER'
startDateTime = parse('2011-06-01T10:30')
# endDateTime = parse('2011-06-01T11:30')
plotTitle = 'Гистограмма исходных данных'  # "Histogram of the source data"
imageName = 'Plot.png'
n = 200 # Size of window
precision = 3
# imageIndex = 0
binWidth = 50
# Code
from common.get_data import getData
import numpy as np
from scipy.stats import gamma # Gamma distribution
cdf = lambda u, p, alpha, beta: sum(p[i] * gamma.cdf(u, alpha[i], scale = 1 / beta[i]) for i in range(len(p)))
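# Here u is the evaluation point, p the mixture weights (summing to one),
# alpha the shape and beta the rate parameters of the gamma components
# (SciPy's gamma takes scale = 1 / beta). With illustrative parameters,
# e.g. cdf(1e6, [0.8, 0.2], [0.5, 10.0], [0.001, 2.0]) is approximately 1.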
import warnings
warnings.filterwarnings('error')
import matplotlib as mpl
mpl.use('pgf')
mpl.rcParams.update({
'pgf.texsystem': 'xelatex',
'pgf.preamble': [r'\usepackage{unicode-math}'],
'text.usetex': True,
'text.latex.unicode': True,
'font.family': 'PT Serif',
'font.size': 14,
})
import matplotlib.pyplot as plt
cursor = getData(tableName, startDateTime, n, precision)
for row in cursor:
moment = row[1]
print(moment)
x = np.array(row[2])
assert len(x) == n # Make sure we have exactly n values
x_max = 3000 # int(x.max())
bins = range(0, x_max + binWidth, binWidth)
print('Plotting...')
plt.hist(x, bins = bins, facecolor = 'white')
# p = [1.0]
# alpha = [0.2233]
# beta = [0.0001473]
# x_for_plot = np.zeros(len(bins) - 1)
# y_for_plot = np.zeros(len(bins) - 1)
# for i in range(len(bins) - 1):
# x_for_plot[i] = (bins[i] + bins[i + 1]) / 2
# y_for_plot[i] = (cdf(bins[i + 1], p, alpha, beta) - cdf(bins[i], p, alpha, beta)) * n
# plt.plot(x_for_plot, y_for_plot, linewidth = 2)
# p = [0.8371, 0.1629]
# alpha = [0.4131, 48.8275]
# beta = [0.0002283, 14.2616]
# x_for_plot = np.zeros(len(bins) - 1)
# y_for_plot = np.zeros(len(bins) - 1)
# for i in range(len(bins) - 1):
# x_for_plot[i] = (bins[i] + bins[i + 1]) / 2
# y_for_plot[i] = (cdf(bins[i + 1], p, alpha, beta) - cdf(bins[i], p, alpha, beta)) * n
# plt.plot(x_for_plot, y_for_plot, linewidth = 2)
# p = [0.7860, 0.2140]
# alpha = [0.4642, 10.5978]
# beta = [0.0002409, 2.5544]
# x_for_plot = np.zeros(len(bins) - 1)
# y_for_plot = np.zeros(len(bins) - 1)
# for i in range(len(bins) - 1):
# x_for_plot[i] = (bins[i] + bins[i + 1]) / 2
# y_for_plot[i] = (cdf(bins[i + 1], p, alpha, beta) - cdf(bins[i], p, alpha, beta)) * n
# plt.plot(x_for_plot, y_for_plot, linewidth = 2)
# p = [0.3859, 0.2349, 0.3792]
# alpha = [0.7092, 9.7036, 1.4479]
# beta = [0.0001942, 2.2856, 0.0052378]
# x_for_plot = np.zeros(len(bins))
# y_for_plot = np.zeros(len(bins))
# for i in range(len(bins) - 1):
# x_for_plot[i] = (bins[i] + bins[i + 1]) / 2
# y_for_plot[i] = (cdf(bins[i + 1], p, alpha, beta) - cdf(bins[i], p, alpha, beta)) * n
# plt.plot(x_for_plot, y_for_plot, linewidth = 2)
plt.title(plotTitle, fontsize = 14, fontweight = 'bold')
plt.savefig(imageName)
print('Plot has been saved to {:s}.'.format(imageName))
print('Done.')
| gpl-3.0 |
Aasmi/scikit-learn | sklearn/ensemble/__init__.py | 217 | 1307 | """
The :mod:`sklearn.ensemble` module includes ensemble-based methods for
classification and regression.
"""
from .base import BaseEnsemble
from .forest import RandomForestClassifier
from .forest import RandomForestRegressor
from .forest import RandomTreesEmbedding
from .forest import ExtraTreesClassifier
from .forest import ExtraTreesRegressor
from .bagging import BaggingClassifier
from .bagging import BaggingRegressor
from .weight_boosting import AdaBoostClassifier
from .weight_boosting import AdaBoostRegressor
from .gradient_boosting import GradientBoostingClassifier
from .gradient_boosting import GradientBoostingRegressor
from .voting_classifier import VotingClassifier
from . import bagging
from . import forest
from . import weight_boosting
from . import gradient_boosting
from . import partial_dependence
__all__ = ["BaseEnsemble",
"RandomForestClassifier", "RandomForestRegressor",
"RandomTreesEmbedding", "ExtraTreesClassifier",
"ExtraTreesRegressor", "BaggingClassifier",
"BaggingRegressor", "GradientBoostingClassifier",
"GradientBoostingRegressor", "AdaBoostClassifier",
"AdaBoostRegressor", "VotingClassifier",
"bagging", "forest", "gradient_boosting",
"partial_dependence", "weight_boosting"]
| bsd-3-clause |
Eric89GXL/mne-python | tutorials/simulation/plot_dics.py | 5 | 12254 | # -*- coding: utf-8 -*-
"""
DICS for power mapping
======================
In this tutorial, we'll simulate two signals originating from two
locations on the cortex. These signals will be sinusoids, so we'll be looking
at oscillatory activity (as opposed to evoked activity).
We'll use dynamic imaging of coherent sources (DICS) :footcite:`GrossEtAl2001`
to map out spectral power along the cortex. Let's see if we can find our two
simulated sources.
"""
# Author: Marijn van Vliet <[email protected]>
#
# License: BSD (3-clause)
###############################################################################
# Setup
# -----
# We first import the required packages to run this tutorial and define a list
# of filenames for various things we'll be using.
import os.path as op
import numpy as np
from scipy.signal import welch, coherence, unit_impulse
from matplotlib import pyplot as plt
import mne
from mne.simulation import simulate_raw, add_noise
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.time_frequency import csd_morlet
from mne.beamformer import make_dics, apply_dics_csd
# We use the MEG and MRI setup from the MNE-sample dataset
data_path = sample.data_path(download=False)
subjects_dir = op.join(data_path, 'subjects')
# Filenames for various files we'll be using
meg_path = op.join(data_path, 'MEG', 'sample')
raw_fname = op.join(meg_path, 'sample_audvis_raw.fif')
fwd_fname = op.join(meg_path, 'sample_audvis-meg-eeg-oct-6-fwd.fif')
cov_fname = op.join(meg_path, 'sample_audvis-cov.fif')
fwd = mne.read_forward_solution(fwd_fname)
# Seed for the random number generator
rand = np.random.RandomState(42)
###############################################################################
# Data simulation
# ---------------
#
# The following function generates a timeseries that contains an oscillator,
# whose frequency fluctuates a little over time, but stays close to 10 Hz.
# We'll use this function to generate our two signals.
sfreq = 50. # Sampling frequency of the generated signal
n_samp = int(round(10. * sfreq))
times = np.arange(n_samp) / sfreq # 10 seconds of signal
n_times = len(times)
def coh_signal_gen():
"""Generate an oscillating signal.
Returns
-------
signal : ndarray
The generated signal.
"""
t_rand = 0.001 # Variation in the instantaneous frequency of the signal
std = 0.1 # Std-dev of the random fluctuations added to the signal
base_freq = 10. # Base frequency of the oscillators in Hertz
n_times = len(times)
# Generate an oscillator with varying frequency and phase lag.
signal = np.sin(2.0 * np.pi *
(base_freq * np.arange(n_times) / sfreq +
np.cumsum(t_rand * rand.randn(n_times))))
# Add some random fluctuations to the signal.
signal += std * rand.randn(n_times)
# Scale the signal to be in the right order of magnitude (~100 nAm)
# for MEG data.
signal *= 100e-9
return signal
###############################################################################
# Let's simulate two timeseries and plot some basic information about them.
signal1 = coh_signal_gen()
signal2 = coh_signal_gen()
fig, axes = plt.subplots(2, 2, figsize=(8, 4))
# Plot the timeseries
ax = axes[0][0]
ax.plot(times, 1e9 * signal1, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], ylabel='Amplitude (Am)',
title='Signal 1')
ax = axes[0][1]
ax.plot(times, 1e9 * signal2, lw=0.5)
ax.set(xlabel='Time (s)', xlim=times[[0, -1]], title='Signal 2')
# Power spectrum of the first timeseries
f, p = welch(signal1, fs=sfreq, nperseg=128, nfft=256)
ax = axes[1][0]
# Only plot the first 100 frequencies
ax.plot(f[:100], 20 * np.log10(p[:100]), lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 99]],
ylabel='Power (dB)', title='Power spectrum of signal 1')
# Compute the coherence between the two timeseries
f, coh = coherence(signal1, signal2, fs=sfreq, nperseg=100, noverlap=64)
ax = axes[1][1]
ax.plot(f[:50], coh[:50], lw=1.)
ax.set(xlabel='Frequency (Hz)', xlim=f[[0, 49]], ylabel='Coherence',
title='Coherence between the timeseries')
fig.tight_layout()
###############################################################################
# Now we put the signals at two locations on the cortex. We construct a
# :class:`mne.SourceEstimate` object to store them in.
#
# The timeseries will have a part where the signal is active and a part where
# it is not. The techniques we'll be using in this tutorial depend on being
# able to contrast data that contains the signal of interest versus data that
# does not (i.e. it contains only noise).
# The locations on the cortex where the signal will originate from. These
# locations are indicated as vertex numbers.
vertices = [[146374], [33830]]
# Construct SourceEstimates that describe the signals at the cortical level.
data = np.vstack((signal1, signal2))
stc_signal = mne.SourceEstimate(
data, vertices, tmin=0, tstep=1. / sfreq, subject='sample')
stc_noise = stc_signal * 0.
###############################################################################
# Before we simulate the sensor-level data, let's define a signal-to-noise
# ratio. You are encouraged to play with this parameter and see the effect of
# noise on our results.
snr = 1. # Signal-to-noise ratio. Decrease to add more noise.
###############################################################################
# Now we run the signal through the forward model to obtain simulated sensor
# data. To save computation time, we'll only simulate gradiometer data. You can
# try simulating other types of sensors as well.
#
# Some noise is added based on the baseline noise covariance matrix from the
# sample dataset, scaled to implement the desired SNR.
# Read the info from the sample dataset. This defines the location of the
# sensors and such.
info = mne.io.read_info(raw_fname)
info.update(sfreq=sfreq, bads=[])
# Only use gradiometers
picks = mne.pick_types(info, meg='grad', stim=True, exclude=())
mne.pick_info(info, picks, copy=False)
# Define a covariance matrix for the simulated noise. In this tutorial, we use
# a simple diagonal matrix.
cov = mne.cov.make_ad_hoc_cov(info)
cov['data'] *= (20. / snr) ** 2 # Scale the noise to achieve the desired SNR
# Simulate the raw data, with a lowpass filter on the noise
stcs = [(stc_signal, unit_impulse(n_samp, dtype=int) * 1),
(stc_noise, unit_impulse(n_samp, dtype=int) * 2)] # stacked in time
duration = (len(stc_signal.times) * 2) / sfreq
raw = simulate_raw(info, stcs, forward=fwd)
add_noise(raw, cov, iir_filter=[4, -4, 0.8], random_state=rand)
###############################################################################
# We create an :class:`mne.Epochs` object containing two trials: one with
# both noise and signal and one with just noise
events = mne.find_events(raw, initial_event=True)
tmax = (len(stc_signal.times) - 1) / sfreq
epochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2),
tmin=0, tmax=tmax, baseline=None, preload=True)
assert len(epochs) == 2 # ensure that we got the two expected events
# Plot some of the channels of the simulated data that are situated above one
# of our simulated sources.
picks = mne.pick_channels(epochs.ch_names, mne.read_selection('Left-frontal'))
epochs.plot(picks=picks)
###############################################################################
# Power mapping
# -------------
# With our simulated dataset ready, we can now pretend to be researchers that
# have just recorded this from a real subject and are going to study what parts
# of the brain communicate with each other.
#
# First, we'll create a source estimate of the MEG data. We'll use both a
# straightforward MNE-dSPM inverse solution for this, and the DICS beamformer
# which is specifically designed to work with oscillatory data.
###############################################################################
# Computing the inverse using MNE-dSPM:
# Compute the inverse operator
fwd = mne.read_forward_solution(fwd_fname)
inv = make_inverse_operator(epochs.info, fwd, cov)
# Apply the inverse model to the trial that also contains the signal.
s = apply_inverse(epochs['signal'].average(), inv)
# Take the root-mean square along the time dimension and plot the result.
s_rms = np.sqrt((s ** 2).mean())
title = 'MNE-dSPM inverse (RMS)'
brain = s_rms.plot('sample', subjects_dir=subjects_dir, hemi='both', figure=1,
size=600, time_label=title, title=title)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh')
brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh')
# Rotate the view and add a title.
brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,
'focalpoint': [0, 0, 0]})
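# (Not part of the original tutorial.) A quick numerical check of how well the
# dSPM map recovers the simulated sources, using SourceEstimate.get_peak:
peak_lh, _ = s_rms.get_peak(hemi='lh')
peak_rh, _ = s_rms.get_peak(hemi='rh')
print('dSPM peak vertices (lh, rh): %s, %s -- simulated: %s, %s'
      % (peak_lh, peak_rh, vertices[0][0], vertices[1][0]))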
###############################################################################
# We will now compute the cortical power map at 10 Hz. using a DICS beamformer.
# A beamformer will construct for each vertex a spatial filter that aims to
# pass activity originating from the vertex, while dampening activity from
# other sources as much as possible.
#
# The :func:`mne.beamformer.make_dics` function has many switches that offer
# precise control
# over the way the filter weights are computed. Currently, there is no clear
# consensus regarding the best approach. This is why we will demonstrate two
# approaches here:
#
# 1. The approach as described in :footcite:`vanVlietEtAl2018`, which first
# normalizes the forward solution and computes a vector beamformer.
# 2. The scalar beamforming approach based on
# :footcite:`SekiharaNagarajan2008`, which uses weight normalization
# instead of normalizing the forward solution.
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])
# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=1.,
inversion='single', weight_norm=None)
print(filters_approach1)
filters_approach2 = make_dics(
info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=None,
inversion='matrix', weight_norm='unit-noise-gain')
print(filters_approach2)
# You can save these to disk with:
# filters_approach1.save('filters_1-dics.h5')
# Compute the DICS power map by applying the spatial filters to the CSD matrix.
power_approach1, f = apply_dics_csd(csd_signal, filters_approach1)
power_approach2, f = apply_dics_csd(csd_signal, filters_approach2)
###############################################################################
# Plot the DICS power maps for both approaches, starting with the first:
def plot_approach(power, n):
"""Plot the results on a brain."""
title = 'DICS power map, approach %d' % n
    brain = power.plot(
'sample', subjects_dir=subjects_dir, hemi='both',
size=600, time_label=title, title=title)
# Indicate the true locations of the source activity on the plot.
brain.add_foci(vertices[0][0], coords_as_verts=True, hemi='lh', color='b')
brain.add_foci(vertices[1][0], coords_as_verts=True, hemi='rh', color='b')
# Rotate the view and add a title.
brain.show_view(view={'azimuth': 0, 'elevation': 0, 'distance': 550,
'focalpoint': [0, 0, 0]})
return brain
brain1 = plot_approach(power_approach1, 1)
###############################################################################
# Now the second:
brain2 = plot_approach(power_approach2, 2)
###############################################################################
# Excellent! All methods found our two simulated sources. Of course, with a
# signal-to-noise ratio (SNR) of 1, it isn't very hard to find them. You can
# try playing with the SNR and see how the MNE-dSPM and DICS approaches hold up
# in the presence of increasing noise. In the presence of more noise, you may
# need to increase the regularization parameter of the DICS beamformer.
#
# References
# ----------
# .. footbibliography::
| bsd-3-clause |
RPGOne/scikit-learn | examples/plot_multioutput_face_completion.py | 330 | 3019 | """
==============================================
Face completion with multi-output estimators
==============================================
This example shows the use of multi-output estimators to complete images.
The goal is to predict the lower half of a face given its upper half.
The first column of images shows true faces. The next columns illustrate
how extremely randomized trees, k nearest neighbors, linear
regression and ridge regression complete the lower half of those faces.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.utils.validation import check_random_state
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
# Load the faces datasets
data = fetch_olivetti_faces()
targets = data.target
data = data.images.reshape((len(data.images), -1))
train = data[targets < 30]
test = data[targets >= 30] # Test on independent people
# Test on a subset of people
n_faces = 5
rng = check_random_state(4)
face_ids = rng.randint(test.shape[0], size=(n_faces, ))
test = test[face_ids, :]
n_pixels = data.shape[1]
X_train = train[:, :int(np.ceil(0.5 * n_pixels))]  # Upper half of the faces
y_train = train[:, int(np.floor(0.5 * n_pixels)):]  # Lower half of the faces
X_test = test[:, :int(np.ceil(0.5 * n_pixels))]
y_test = test[:, int(np.floor(0.5 * n_pixels)):]
# Fit estimators
ESTIMATORS = {
"Extra trees": ExtraTreesRegressor(n_estimators=10, max_features=32,
random_state=0),
"K-nn": KNeighborsRegressor(),
"Linear regression": LinearRegression(),
"Ridge": RidgeCV(),
}
y_test_predict = dict()
for name, estimator in ESTIMATORS.items():
estimator.fit(X_train, y_train)
y_test_predict[name] = estimator.predict(X_test)
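# (Not part of the original example.) A rough numerical complement to the
# visual comparison below: mean squared error of each completion on the
# held-out faces.
from sklearn.metrics import mean_squared_error
for name in sorted(ESTIMATORS):
    print("%s: MSE = %.5f" % (name, mean_squared_error(y_test,
                                                       y_test_predict[name])))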
# Plot the completed faces
image_shape = (64, 64)
n_cols = 1 + len(ESTIMATORS)
plt.figure(figsize=(2. * n_cols, 2.26 * n_faces))
plt.suptitle("Face completion with multi-output estimators", size=16)
for i in range(n_faces):
true_face = np.hstack((X_test[i], y_test[i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 1,
title="true faces")
sub.axis("off")
sub.imshow(true_face.reshape(image_shape),
cmap=plt.cm.gray,
interpolation="nearest")
for j, est in enumerate(sorted(ESTIMATORS)):
completed_face = np.hstack((X_test[i], y_test_predict[est][i]))
if i:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j)
else:
sub = plt.subplot(n_faces, n_cols, i * n_cols + 2 + j,
title=est)
sub.axis("off")
sub.imshow(completed_face.reshape(image_shape),
cmap=plt.cm.gray,
interpolation="nearest")
plt.show()
| bsd-3-clause |
handroissuazo/tensorflow | tensorflow/tools/dist_test/python/census_widendeep.py | 54 | 11900 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Distributed training and evaluation of a wide and deep model."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import json
import os
import sys
from six.moves import urllib
import tensorflow as tf
from tensorflow.contrib.learn.python.learn import learn_runner
from tensorflow.contrib.learn.python.learn.estimators import run_config
# Constants: Data download URLs
TRAIN_DATA_URL = "http://mlr.cs.umass.edu/ml/machine-learning-databases/adult/adult.data"
TEST_DATA_URL = "http://mlr.cs.umass.edu/ml/machine-learning-databases/adult/adult.test"
# Define features for the model
def census_model_config():
"""Configuration for the census Wide & Deep model.
Returns:
columns: Column names to retrieve from the data source
label_column: Name of the label column
wide_columns: List of wide columns
deep_columns: List of deep columns
categorical_column_names: Names of the categorical columns
continuous_column_names: Names of the continuous columns
"""
# 1. Categorical base columns.
gender = tf.contrib.layers.sparse_column_with_keys(
column_name="gender", keys=["female", "male"])
race = tf.contrib.layers.sparse_column_with_keys(
column_name="race",
keys=["Amer-Indian-Eskimo",
"Asian-Pac-Islander",
"Black",
"Other",
"White"])
education = tf.contrib.layers.sparse_column_with_hash_bucket(
"education", hash_bucket_size=1000)
marital_status = tf.contrib.layers.sparse_column_with_hash_bucket(
"marital_status", hash_bucket_size=100)
relationship = tf.contrib.layers.sparse_column_with_hash_bucket(
"relationship", hash_bucket_size=100)
workclass = tf.contrib.layers.sparse_column_with_hash_bucket(
"workclass", hash_bucket_size=100)
occupation = tf.contrib.layers.sparse_column_with_hash_bucket(
"occupation", hash_bucket_size=1000)
native_country = tf.contrib.layers.sparse_column_with_hash_bucket(
"native_country", hash_bucket_size=1000)
# 2. Continuous base columns.
age = tf.contrib.layers.real_valued_column("age")
age_buckets = tf.contrib.layers.bucketized_column(
age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])
education_num = tf.contrib.layers.real_valued_column("education_num")
capital_gain = tf.contrib.layers.real_valued_column("capital_gain")
capital_loss = tf.contrib.layers.real_valued_column("capital_loss")
hours_per_week = tf.contrib.layers.real_valued_column("hours_per_week")
wide_columns = [
gender, native_country, education, occupation, workclass,
marital_status, relationship, age_buckets,
tf.contrib.layers.crossed_column([education, occupation],
hash_bucket_size=int(1e4)),
tf.contrib.layers.crossed_column([native_country, occupation],
hash_bucket_size=int(1e4)),
tf.contrib.layers.crossed_column([age_buckets, race, occupation],
hash_bucket_size=int(1e6))]
deep_columns = [
tf.contrib.layers.embedding_column(workclass, dimension=8),
tf.contrib.layers.embedding_column(education, dimension=8),
tf.contrib.layers.embedding_column(marital_status, dimension=8),
tf.contrib.layers.embedding_column(gender, dimension=8),
tf.contrib.layers.embedding_column(relationship, dimension=8),
tf.contrib.layers.embedding_column(race, dimension=8),
tf.contrib.layers.embedding_column(native_country, dimension=8),
tf.contrib.layers.embedding_column(occupation, dimension=8),
age, education_num, capital_gain, capital_loss, hours_per_week]
# Define the column names for the data sets.
columns = ["age", "workclass", "fnlwgt", "education", "education_num",
"marital_status", "occupation", "relationship", "race", "gender",
"capital_gain", "capital_loss", "hours_per_week",
"native_country", "income_bracket"]
label_column = "label"
categorical_columns = ["workclass", "education", "marital_status",
"occupation", "relationship", "race", "gender",
"native_country"]
continuous_columns = ["age", "education_num", "capital_gain",
"capital_loss", "hours_per_week"]
return (columns, label_column, wide_columns, deep_columns,
categorical_columns, continuous_columns)
class CensusDataSource(object):
"""Source of census data."""
def __init__(self, data_dir, train_data_url, test_data_url,
columns, label_column,
categorical_columns, continuous_columns):
"""Constructor of CensusDataSource.
Args:
data_dir: Directory to save/load the data files
train_data_url: URL from which the training data can be downloaded
test_data_url: URL from which the test data can be downloaded
columns: Columns to retrieve from the data files (A list of strings)
label_column: Name of the label column
categorical_columns: Names of the categorical columns (A list of strings)
      continuous_columns: Names of the continuous columns (A list of strings)
"""
# Retrieve data from disk (if available) or download from the web.
train_file_path = os.path.join(data_dir, "adult.data")
if os.path.isfile(train_file_path):
print("Loading training data from file: %s" % train_file_path)
train_file = open(train_file_path)
else:
      urllib.request.urlretrieve(train_data_url, train_file_path)
      train_file = open(train_file_path)
test_file_path = os.path.join(data_dir, "adult.test")
if os.path.isfile(test_file_path):
print("Loading test data from file: %s" % test_file_path)
test_file = open(test_file_path)
else:
      urllib.request.urlretrieve(test_data_url, test_file_path)
      test_file = open(test_file_path)
# Read the training and testing data sets into Pandas DataFrame.
import pandas # pylint: disable=g-import-not-at-top
self._df_train = pandas.read_csv(train_file, names=columns,
skipinitialspace=True)
self._df_test = pandas.read_csv(test_file, names=columns,
skipinitialspace=True, skiprows=1)
# Remove the NaN values in the last rows of the tables
self._df_train = self._df_train[:-1]
self._df_test = self._df_test[:-1]
# Apply the threshold to get the labels.
income_thresh = lambda x: ">50K" in x
self._df_train[label_column] = (
self._df_train["income_bracket"].apply(income_thresh)).astype(int)
self._df_test[label_column] = (
self._df_test["income_bracket"].apply(income_thresh)).astype(int)
self.label_column = label_column
self.categorical_columns = categorical_columns
self.continuous_columns = continuous_columns
def input_train_fn(self):
return self._input_fn(self._df_train)
def input_test_fn(self):
return self._input_fn(self._df_test)
# TODO(cais): Turn into minibatch feeder
def _input_fn(self, df):
"""Input data function.
Creates a dictionary mapping from each continuous feature column name
(k) to the values of that column stored in a constant Tensor.
Args:
df: data feed
Returns:
feature columns and labels
"""
continuous_cols = {k: tf.constant(df[k].values)
for k in self.continuous_columns}
# Creates a dictionary mapping from each categorical feature column name (k)
# to the values of that column stored in a tf.SparseTensor.
categorical_cols = {
k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].values,
dense_shape=[df[k].size, 1])
for k in self.categorical_columns}
# Merges the two dictionaries into one.
feature_cols = dict(continuous_cols.items() + categorical_cols.items())
# Converts the label column into a constant Tensor.
label = tf.constant(df[self.label_column].values)
# Returns the feature columns and the label.
return feature_cols, label
def _create_experiment_fn(output_dir): # pylint: disable=unused-argument
"""Experiment creation function."""
(columns, label_column, wide_columns, deep_columns, categorical_columns,
continuous_columns) = census_model_config()
census_data_source = CensusDataSource(FLAGS.data_dir,
TRAIN_DATA_URL, TEST_DATA_URL,
columns, label_column,
categorical_columns,
continuous_columns)
os.environ["TF_CONFIG"] = json.dumps({
"cluster": {
tf.contrib.learn.TaskType.PS: ["fake_ps"] *
FLAGS.num_parameter_servers
},
"task": {
"index": FLAGS.worker_index
}
})
config = run_config.RunConfig(master=FLAGS.master_grpc_url)
estimator = tf.contrib.learn.DNNLinearCombinedClassifier(
model_dir=FLAGS.model_dir,
linear_feature_columns=wide_columns,
dnn_feature_columns=deep_columns,
dnn_hidden_units=[5],
config=config)
return tf.contrib.learn.Experiment(
estimator=estimator,
train_input_fn=census_data_source.input_train_fn,
eval_input_fn=census_data_source.input_test_fn,
train_steps=FLAGS.train_steps,
eval_steps=FLAGS.eval_steps
)
def main(unused_argv):
print("Worker index: %d" % FLAGS.worker_index)
learn_runner.run(experiment_fn=_create_experiment_fn,
output_dir=FLAGS.output_dir,
schedule=FLAGS.schedule)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.register("type", "bool", lambda v: v.lower() == "true")
parser.add_argument(
"--data_dir",
type=str,
default="/tmp/census-data",
help="Directory for storing the cesnsus data"
)
parser.add_argument(
"--model_dir",
type=str,
default="/tmp/census_wide_and_deep_model",
help="Directory for storing the model"
)
parser.add_argument(
"--output_dir",
type=str,
default="",
help="Base output directory."
)
parser.add_argument(
"--schedule",
type=str,
default="local_run",
help="Schedule to run for this experiment."
)
parser.add_argument(
"--master_grpc_url",
type=str,
default="",
help="URL to master GRPC tensorflow server, e.g.,grpc://127.0.0.1:2222"
)
parser.add_argument(
"--num_parameter_servers",
type=int,
default=0,
help="Number of parameter servers"
)
parser.add_argument(
"--worker_index",
type=int,
default=0,
help="Worker index (>=0)"
)
parser.add_argument(
"--train_steps",
type=int,
default=1000,
help="Number of training steps"
)
parser.add_argument(
"--eval_steps",
type=int,
default=1,
help="Number of evaluation steps"
)
global FLAGS # pylint:disable=global-at-module-level
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
| apache-2.0 |
passiweinberger/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/fontconfig_pattern.py | 72 | 6429 | """
A module for parsing and generating fontconfig patterns.
See the `fontconfig pattern specification
<http://www.fontconfig.org/fontconfig-user.html>`_ for more
information.
"""
# Author : Michael Droettboom <[email protected]>
# License : matplotlib license (PSF compatible)
# This class is defined here because it must be available in:
# - The old-style config framework (:file:`rcsetup.py`)
# - The traits-based config framework (:file:`mpltraits.py`)
# - The font manager (:file:`font_manager.py`)
# It probably logically belongs in :file:`font_manager.py`, but
# placing it in any of these places would have created cyclical
# dependency problems, or an undesired dependency on traits even
# when the traits-based config framework is not used.
import re
from matplotlib.pyparsing import Literal, ZeroOrMore, \
Optional, Regex, StringEnd, ParseException, Suppress
family_punc = r'\\\-:,'
family_unescape = re.compile(r'\\([%s])' % family_punc).sub
family_escape = re.compile(r'([%s])' % family_punc).sub
value_punc = r'\\=_:,'
value_unescape = re.compile(r'\\([%s])' % value_punc).sub
value_escape = re.compile(r'([%s])' % value_punc).sub
class FontconfigPatternParser:
"""A simple pyparsing-based parser for fontconfig-style patterns.
See the `fontconfig pattern specification
<http://www.fontconfig.org/fontconfig-user.html>`_ for more
information.
"""
_constants = {
'thin' : ('weight', 'light'),
'extralight' : ('weight', 'light'),
'ultralight' : ('weight', 'light'),
'light' : ('weight', 'light'),
'book' : ('weight', 'book'),
'regular' : ('weight', 'regular'),
'normal' : ('weight', 'normal'),
'medium' : ('weight', 'medium'),
'demibold' : ('weight', 'demibold'),
'semibold' : ('weight', 'semibold'),
'bold' : ('weight', 'bold'),
'extrabold' : ('weight', 'extra bold'),
'black' : ('weight', 'black'),
'heavy' : ('weight', 'heavy'),
'roman' : ('slant', 'normal'),
'italic' : ('slant', 'italic'),
'oblique' : ('slant', 'oblique'),
'ultracondensed' : ('width', 'ultra-condensed'),
'extracondensed' : ('width', 'extra-condensed'),
'condensed' : ('width', 'condensed'),
'semicondensed' : ('width', 'semi-condensed'),
'expanded' : ('width', 'expanded'),
'extraexpanded' : ('width', 'extra-expanded'),
'ultraexpanded' : ('width', 'ultra-expanded')
}
def __init__(self):
family = Regex(r'([^%s]|(\\[%s]))*' %
(family_punc, family_punc)) \
.setParseAction(self._family)
size = Regex(r"([0-9]+\.?[0-9]*|\.[0-9]+)") \
.setParseAction(self._size)
name = Regex(r'[a-z]+') \
.setParseAction(self._name)
value = Regex(r'([^%s]|(\\[%s]))*' %
(value_punc, value_punc)) \
.setParseAction(self._value)
families =(family
+ ZeroOrMore(
Literal(',')
+ family)
).setParseAction(self._families)
point_sizes =(size
+ ZeroOrMore(
Literal(',')
+ size)
).setParseAction(self._point_sizes)
property =( (name
+ Suppress(Literal('='))
+ value
+ ZeroOrMore(
Suppress(Literal(','))
+ value)
)
| name
).setParseAction(self._property)
pattern =(Optional(
families)
+ Optional(
Literal('-')
+ point_sizes)
+ ZeroOrMore(
Literal(':')
+ property)
+ StringEnd()
)
self._parser = pattern
self.ParseException = ParseException
def parse(self, pattern):
"""
Parse the given fontconfig *pattern* and return a dictionary
of key/value pairs useful for initializing a
:class:`font_manager.FontProperties` object.
"""
props = self._properties = {}
try:
self._parser.parseString(pattern)
except self.ParseException, e:
raise ValueError("Could not parse font string: '%s'\n%s" % (pattern, e))
self._properties = None
return props
def _family(self, s, loc, tokens):
return [family_unescape(r'\1', str(tokens[0]))]
def _size(self, s, loc, tokens):
return [float(tokens[0])]
def _name(self, s, loc, tokens):
return [str(tokens[0])]
def _value(self, s, loc, tokens):
return [value_unescape(r'\1', str(tokens[0]))]
def _families(self, s, loc, tokens):
self._properties['family'] = [str(x) for x in tokens]
return []
def _point_sizes(self, s, loc, tokens):
self._properties['size'] = [str(x) for x in tokens]
return []
def _property(self, s, loc, tokens):
if len(tokens) == 1:
if tokens[0] in self._constants:
key, val = self._constants[tokens[0]]
self._properties.setdefault(key, []).append(val)
else:
key = tokens[0]
val = tokens[1:]
self._properties.setdefault(key, []).extend(val)
return []
parse_fontconfig_pattern = FontconfigPatternParser().parse
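# Illustrative usage sketch (not exercised anywhere in this module); the
# pattern string below is a hypothetical example, see the fontconfig
# pattern specification for the full grammar.
def _example_parse_fontconfig_pattern():
    props = parse_fontconfig_pattern('serif-12:bold:italic')
    # props is a dictionary of lists, typically with keys such as
    # 'family', 'size', 'weight' and 'slant' for the pattern above.
    return props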
def generate_fontconfig_pattern(d):
"""
Given a dictionary of key/value pairs, generates a fontconfig
pattern string.
"""
props = []
families = ''
size = ''
for key in 'family style variant weight stretch file size'.split():
val = getattr(d, 'get_' + key)()
if val is not None and val != []:
if type(val) == list:
val = [value_escape(r'\\\1', str(x)) for x in val if x is not None]
if val != []:
val = ','.join(val)
props.append(":%s=%s" % (key, val))
return ''.join(props)
| agpl-3.0 |
mannyfin/VolatilityForecasting | src/VAR2.py | 1 | 6011 | import pandas as pd
import numpy as np
from Performance_Measure import *
from SEplot import se_plot as SE
import matplotlib.pyplot as plt
from LASSO import lasso_regression
class VAR(object):
"""
VAR forecaster
p = lag
warmup_period = training period, int
combined_vol = vol of all currencies
"""
def __init__(self, p, combined_vol, warmup_period):
assert isinstance(combined_vol, pd.core.frame.DataFrame)
self.p = p
self.combined_vol = combined_vol
self.warmup_period = warmup_period
def VAR_calc(self, Timedt, dates, filename):
"""
Calculate VAR
:param Timedt: str to be added to plot
:param dates: df of dates (timestamps)
:param filename: filename of currency
:return: MSE and QL
"""
# provides the whole x matrix.
self.xmat = pd.DataFrame([sum([self.combined_vol[currency].loc[i + self.p - 1:i:-1].as_matrix().tolist()
for currency in self.combined_vol.keys()], [])
for i in range(len(self.combined_vol) - self.p)])
# provides the whole y matrix
self.ymat = self.combined_vol[self.p:]
"""
initial xmat, ymat
self.xmat[:self.warmup_period]
feel free to use log of the vols, or whatever you'd like as an input. Doesn't need to be defined inside the fcn.
ymat = daily_vol_combined[p:warmup_period + p]
test = xmat[:warmup_period]
check: ymat[-4:] vs xmat: test[-3:]
Ex.p = 3 and warmup = 100 here...
ymat[-4:]
Out[236]:
SEKUSD CADUSD CHFUSD
99 0.207160 0.132623 0.180368
100 0.193095 0.115839 0.146339
101 0.202393 0.119725 0.158681
102 0.185685 0.113315 0.147309
test[-3:]
Out[238]:
0 1 2 3 4 5 6 \
97 0.207160 0.217591 0.262496 0.132623 0.157432 0.204130 0.180368
98 0.193095 0.207160 0.217591 0.115839 0.132623 0.157432 0.146339
99 0.202393 0.193095 0.207160 0.119725 0.115839 0.132623 0.158681
7 8
97 0.182417 0.224175
98 0.180368 0.182417
99 0.146339 0.180368
# Calculate beta
# beta = (X_T * X)^-1 * ( X_T * Y)
beta = np.matmul(np.linalg.pinv(self.xmat.T.dot(self.xmat)), np.matmul(self.xmat.T, self.ymat))
Calculate y_predicted:
y_predicted = X_T1*beta_T1,fit
where y_predicted = y_T1+1
the line below is wrong because it uses X_T1-1 instead of X_T1
y_prediction = np.matmul(self.test[-1:], beta)
Here is the last row of ymat. i.e. y_T1
ymat[-1:]
Out[240]:
SEKUSD CADUSD CHFUSD
102 0.185685 0.113315 0.147309
We instead index into the row after the last row of xmat_var using xmat (the complete one)
This is incorrect:
test[-1:]
Out[239]:
0 1 2 3 4 5 6 \
99 0.202393 0.193095 0.20716 0.119725 0.115839 0.132623 0.158681
7 8
99 0.146339 0.180368
This is correct:
xmat[len(test):len(test)+1]
Out[230]:
0 1 2 3 4 5 6 \
100 0.185685 0.202393 0.193095 0.113315 0.119725 0.115839 0.147309
7 8
100 0.158681 0.146339
Notice columns, 0, 3, 6 are the elements in ymat[-1:] (i.e. y_T1).
This means that xmat[len(test):len(test)+1] is X_T1
We use this to calculate y_predicted: (i.e. y_T1+1):
y_predicted = X_T1*beta_T1
len(self.xmat)-self.warmup_period)
"""
beta = []
prediction=[]
for iteration in range(len(self.ymat)-self.warmup_period):
# X goes from 0 to warmup_period (T1-1). Ex. for p=3 and warmup=100,
# x index goes from 0 to 99, & col=3*numfiles
xmat_var = self.xmat[:(self.warmup_period+ iteration) ]
# Y goes from p to the warmup period+p. Ex. for p = 3 and warmup = 100x y index goes from 3 to 102 inclusive
ymat_var = self.combined_vol[self.p:(self.warmup_period + self.p + iteration)]
# We can ravel the betas below to stack them row by row if we want to use them later for a pandas DataFrame
# The ravel would have to be done after the prediction.append() line.
beta.append(np.matmul(np.linalg.pinv(xmat_var.T.dot(xmat_var)), np.matmul(xmat_var.T, ymat_var)))
# the x used here is the row after the warmup period, T1. i.e. if xmat_var is from 0:99 inclusive, then
# value passed for x is row 100
prediction.append(np.matmul(self.xmat[len(xmat_var):len(xmat_var) + 1], beta[-1])[0].tolist())
prediction = pd.DataFrame(prediction, columns=self.combined_vol.keys())
        # observed: e.g. for p = 3 and warmup_period = 100 this is index 103:1299 inclusive, 1197 elements in total
observed = self.ymat[self.warmup_period:]
# now calculate MSE, QL and so forth
Performance_ = PerformanceMeasure()
MSE = Performance_.mean_se(observed=observed, prediction=prediction)
QL = Performance_.quasi_likelihood(observed=observed, prediction=prediction)
""" return a plot of the log of the Squared error"""
label = str(filename) + " " + str(Timedt) + " SE (" + str(self.p) + ") VAR Volatility"
SE(observed, prediction, dates.iloc[(self.warmup_period+self.p):], function_method=label)
plt.title('VAR for p = '+str(self.p))
return MSE, QL
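# Illustrative usage sketch (not executed on import). 'combined_vol' is a
# DataFrame of volatilities with one column per currency and 'dates' the
# matching frame of timestamps; these, like the label strings below, are
# hypothetical placeholders.
def _example_var_usage(combined_vol, dates):
    var = VAR(p=3, combined_vol=combined_vol, warmup_period=100)
    mse, ql = var.VAR_calc(Timedt='daily', dates=dates, filename='EURUSD')
    return mse, ql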
| gpl-3.0 |
chris-chris/tensorflow | tensorflow/contrib/learn/python/learn/tests/dataframe/feeding_queue_runner_test.py | 62 | 5053 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests `FeedingQueueRunner` using arrays and `DataFrames`."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.contrib.learn.python.learn.dataframe.queues import feeding_functions as ff
from tensorflow.python.client import session
from tensorflow.python.framework import ops
from tensorflow.python.platform import test
from tensorflow.python.training import coordinator
from tensorflow.python.training import queue_runner_impl
# pylint: disable=g-import-not-at-top
try:
import pandas as pd
HAS_PANDAS = True
except ImportError:
HAS_PANDAS = False
def get_rows(array, row_indices):
rows = [array[i] for i in row_indices]
return np.vstack(rows)
class FeedingQueueRunnerTestCase(test.TestCase):
"""Tests for `FeedingQueueRunner`."""
def testArrayFeeding(self):
with ops.Graph().as_default():
array = np.arange(32).reshape([16, 2])
q = ff.enqueue_data(array, capacity=100)
batch_size = 3
dq_op = q.dequeue_many(batch_size)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for i in range(100):
indices = [
j % array.shape[0]
for j in range(batch_size * i, batch_size * (i + 1))
]
expected_dq = get_rows(array, indices)
dq = sess.run(dq_op)
np.testing.assert_array_equal(indices, dq[0])
np.testing.assert_array_equal(expected_dq, dq[1])
coord.request_stop()
coord.join(threads)
def testArrayFeedingMultiThread(self):
with ops.Graph().as_default():
array = np.arange(256).reshape([128, 2])
q = ff.enqueue_data(array, capacity=128, num_threads=8, shuffle=True)
batch_size = 3
dq_op = q.dequeue_many(batch_size)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for _ in range(100):
dq = sess.run(dq_op)
indices = dq[0]
expected_dq = get_rows(array, indices)
np.testing.assert_array_equal(expected_dq, dq[1])
coord.request_stop()
coord.join(threads)
def testPandasFeeding(self):
if not HAS_PANDAS:
return
with ops.Graph().as_default():
array1 = np.arange(32)
array2 = np.arange(32, 64)
df = pd.DataFrame({"a": array1, "b": array2}, index=np.arange(64, 96))
q = ff.enqueue_data(df, capacity=100)
batch_size = 5
dq_op = q.dequeue_many(5)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for i in range(100):
indices = [
j % array1.shape[0]
for j in range(batch_size * i, batch_size * (i + 1))
]
expected_df_indices = df.index[indices]
expected_rows = df.iloc[indices]
dq = sess.run(dq_op)
np.testing.assert_array_equal(expected_df_indices, dq[0])
for col_num, col in enumerate(df.columns):
np.testing.assert_array_equal(expected_rows[col].values,
dq[col_num + 1])
coord.request_stop()
coord.join(threads)
def testPandasFeedingMultiThread(self):
if not HAS_PANDAS:
return
with ops.Graph().as_default():
array1 = np.arange(128, 256)
array2 = 2 * array1
df = pd.DataFrame({"a": array1, "b": array2}, index=np.arange(128))
q = ff.enqueue_data(df, capacity=128, num_threads=8, shuffle=True)
batch_size = 5
dq_op = q.dequeue_many(batch_size)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for _ in range(100):
dq = sess.run(dq_op)
indices = dq[0]
expected_rows = df.iloc[indices]
for col_num, col in enumerate(df.columns):
np.testing.assert_array_equal(expected_rows[col].values,
dq[col_num + 1])
coord.request_stop()
coord.join(threads)
if __name__ == "__main__":
test.main()
| apache-2.0 |
matus-chochlik/various | atmost/linker/callbacks.py | 1 | 11193 | #!/usr/bin/python3 -B
# coding: UTF-8
# Copyright (c) 2019 Matus Chochlik
import os
import sys
import math
import json
import gzip
import pickle
import pandas
import multiprocessing
# ------------------------------------------------------------------------------
_1GB = (1024.0**3.0)
# ------------------------------------------------------------------------------
def is_linker(proc):
linker_names = [
'x86_64-linux-gnu-gold',
'x86_64-linux-gnu-ld',
'gold',
'ld'
]
return proc.basename() in linker_names
# ------------------------------------------------------------------------------
def get_ld_info(context, proc):
if not is_linker(proc):
return None
args = proc.args()[1:]
cwd = proc.cwd()
outputs = []
sinputs = []
dinputs = []
plugins = []
libdirs = []
prev = None
for arg in args:
if prev in ["-L", "--library-path"]:
if os.path.isdir(arg):
libdirs.append(arg)
elif arg.startswith("-L"):
tmp = arg[len("-L"):].strip()
if os.path.isdir(tmp):
libdirs.append(tmp)
elif arg.startswith("--library-path"):
tmp = arg[len("--library-path"):].strip()
if os.path.isdir(tmp):
libdirs.append(tmp)
prev = arg
for arg in ["/usr/local/lib", "/usr/lib", "/lib"]:
if os.path.isdir(arg):
libdirs.append(arg)
libdirs = set([os.path.realpath(p) for p in libdirs])
def _is_so_path(path):
temp = path
while temp:
temp, ext = os.path.splitext(temp)
if not ext:
break
if ext == ".so":
return True
return False
def _append_lib(name):
for libdir in libdirs:
if os.path.isfile(os.path.join(libdir, "lib%s.so" % name)):
dinputs.append(os.path.join(libdir, "lib%s.so" % name))
break
elif os.path.isfile(os.path.join(libdir, "lib%s.a" % name)):
sinputs.append(os.path.join(libdir, "lib%s.a" % name))
break
def _append_plgn(path):
if os.path.isfile(path):
plugins.append(path)
elif os.path.isfile(os.path.join(cwd, path)):
plugins.append(os.path.join(cwd, path))
opt_level = 0
prev = None
for arg in args:
if arg[0] != '-':
if prev in ["-O"]:
try:
opt_level = int(arg)
                except (TypeError, ValueError): pass
else:
path_arg = None
if os.path.isfile(arg):
path_arg = arg
elif os.path.isfile(os.path.join(cwd, arg)):
path_arg = os.path.join(cwd, arg)
if path_arg is not None:
if prev in ["-o", "--output"]:
outputs.append(path_arg)
elif prev in ["--plugin", "-plugin"]:
plugins.append(path_arg)
elif _is_so_path(path_arg):
dinputs.append(path_arg)
else:
sinputs.append(path_arg)
else:
if prev in ["-o", "--output"]:
outputs.append(arg)
elif prev in ["-l", "--library"]:
_append_lib(arg)
elif arg.startswith("-l"):
_append_lib(arg[len("-l"):].strip())
elif arg.startswith("--library"):
_append_lib(arg[len("--library"):].strip())
elif arg.startswith("-O"):
try:
opt_level = int(arg[len("-O"):])
                except (TypeError, ValueError): pass
elif arg.startswith("--plugin"):
_append_plgn(arg[len("--plugin"):].strip())
elif arg.startswith("-plugin"):
_append_plgn(arg[len("-plugin"):].strip())
prev = arg
sinputs = set([os.path.realpath(p) for p in sinputs])
dinputs = set([os.path.realpath(p) for p in dinputs])
plugins = set([os.path.realpath(p) for p in plugins])
try:
sco = sum(1 for f in sinputs if os.path.isfile(f))
dco = sum(1 for f in dinputs if os.path.isfile(f))
npg = sum(1 for f in plugins if os.path.isfile(f))
ssz = sum(os.path.getsize(f) for f in sinputs if os.path.isfile(f))
dsz = sum(os.path.getsize(f) for f in dinputs if os.path.isfile(f))
osz = sum(os.path.getsize(f) for f in outputs if os.path.isfile(f))
mem = proc.max_memory_bytes()
pie = 1 if ("-pie" in args or "--pic-executable" in args) else 0
pie = 0 if ("-no-pie" in args or "--no-pic-executable" in args) else pie
return {
"outputs": outputs,
"plugin_count": npg,
"static_count": sco,
"static_size": ssz,
"shared_count": dco,
"shared_size": dsz,
"memory_size": mem,
"output_size": osz,
"b_static": 1 if "-Bstatic" in args else 0,
"b_dynamic": 1 if "-Bdynamic" in args else 0,
"no_mmap_whole_files": 1 if "--no-map-whole-files" in args else 0,
"no_mmap_output_file": 1 if "--no-mmap-output-file" in args else 0,
"pie": pie,
"opt": opt_level
}
except Exception as error:
sys.stderr.write("atmost: error: %s\n" % error)
sys.stderr.flush()
# ------------------------------------------------------------------------------
class Context(object):
# --------------------------------------------------------------------------
def __init__(self, d):
self._scaler = d["scaler"]
self._model = d["model"]
self._fields = d["fields"]
self._input_chunk_size = d["input_chunk_size"]
self._output_chunk_size = d["output_chunk_size"]
self._pos_error_margin = d["pos_error_margin"] * self._output_chunk_size
self._neg_error_margin = d["neg_error_margin"] * self._output_chunk_size
self._transforms = {
"static_size": lambda x: math.ceil(float(x)/self._input_chunk_size),
"shared_size": lambda x: math.ceil(float(x)/self._input_chunk_size)
}
# --------------------------------------------------------------------------
def _transform(self, k, v):
try:
return self._transforms[k](v)
except KeyError:
self._transforms[k] = lambda x: float(x)
return float(v)
# --------------------------------------------------------------------------
def predict_ld_info(self, proc):
ldi = get_ld_info(self, proc)
if ldi is not None:
fldi = {
k: self._transform(k, v) \
for k, v in ldi.items() \
if k in self._fields
}
df = pandas.DataFrame(fldi, columns=self._fields, index=[0])
cls = self._model.classes_
pro = self._model.predict_proba(self._scaler.transform(df))
pre = sum(p * c for p, c in zip(pro[0], cls)) * self._output_chunk_size
ldi["memory_size"] = pre
return ldi
return None
# --------------------------------------------------------------------------
def error_margin(self):
return self._neg_error_margin
# ------------------------------------------------------------------------------
def load_callback_data():
sys.stdout.write("[{}\n")
try:
return Context(
pickle.load(
gzip.open("atmost.linker.pickle.gz", "rb")
)
)
except Exception as error:
sys.stderr.write("atmost: error: %s\n" % error)
sys.stderr.flush()
# ------------------------------------------------------------------------------
def save_callback_data(context):
sys.stdout.write("]")
# ------------------------------------------------------------------------------
def process_initialized(context, proc):
if is_linker(proc):
proc_info = context.predict_ld_info(proc)
proc.set_callback_data(proc_info)
outputs = proc_info["outputs"]
sys.stderr.write(
"atmost: arrived '%s'\n" % (
os.path.basename(outputs[0]) if len(outputs) > 0 else "N/A"
)
)
sys.stderr.flush()
# ------------------------------------------------------------------------------
def let_process_go(context, procs):
proc = procs.current()
actp = procs.active()
actn = len(actp)
if is_linker(proc):
total_mem = procs.total_memory()
avail_mem = procs.available_memory()
proc_info = proc.callback_data()
proc_pred_usage = proc_info["memory_size"]
proc_pred_error = context.error_margin()
active_mem_usage = 0.0
let_go = False
if actn > 0:
for act_proc in actp:
ap_info = act_proc.callback_data()
max_usage = act_proc.max_memory_bytes()
try:
est_usage = ap_info["memory_size"]
except:
est_usage = max_usage
alpha = math.exp(-act_proc.run_time() / 300.0)
active_mem_usage += alpha*est_usage + (1.0-alpha)*max_usage
pred_avail_mem = min(total_mem-active_mem_usage, avail_mem)
if pred_avail_mem > (proc_pred_usage + proc_pred_error):
let_go = True
else:
pred_avail_mem = avail_mem
let_go = True
if let_go:
outputs = proc_info["outputs"]
sys.stderr.write(
"atmost: linking '%s': (predicted=%4.2fG±%4.2fG|available=%4.2fG)\n" % (
os.path.basename(outputs[0]) if len(outputs) > 0 else "N/A",
proc_pred_usage / _1GB,
proc_pred_error / _1GB,
pred_avail_mem / _1GB
)
)
sys.stderr.flush()
return True
return False
return actn < multiprocessing.cpu_count()
# ------------------------------------------------------------------------------
def process_finished(context, proc):
if is_linker(proc):
info = get_ld_info(context, proc)
if info:
pred = proc.callback_data()
outputs = info["outputs"]
proc_real_usage = info["memory_size"]
proc_pred_usage = pred["memory_size"]
proc_pred_error = context.error_margin()
sys.stderr.write(
"atmost: linked '%s': (predicted=%4.2fG±%4.2fG|actual=%4.2fG)\n" % (
os.path.basename(outputs[0]) if len(outputs) > 0 else "N/A",
proc_pred_usage / _1GB,
proc_pred_error / _1GB,
proc_real_usage / _1GB
)
)
sys.stderr.flush()
del info["outputs"]
sys.stdout.write(",")
json.dump(info, sys.stdout)
sys.stdout.write("\n")
sys.stdout.flush()
# ------------------------------------------------------------------------------
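# Illustrative helper (not used by the callbacks above): the memory estimate
# blended inside let_process_go, shown in isolation. 'est_usage' is the model
# prediction for a linker process and 'max_usage' its peak observed usage so
# far; the weight shifts from the prediction to the observation as the
# process run time (in seconds) grows.
def _blended_memory_estimate(est_usage, max_usage, run_time_seconds):
    alpha = math.exp(-run_time_seconds / 300.0)
    return alpha * est_usage + (1.0 - alpha) * max_usage
# ------------------------------------------------------------------------------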
| mit |
raghavrv/scikit-learn | sklearn/mixture/tests/test_dpgmm.py | 84 | 7866 | # Important note for the deprecation cleaning of 0.20 :
# All the function and classes of this file have been deprecated in 0.18.
# When you remove this file please also remove the related files
# - 'sklearn/mixture/dpgmm.py'
# - 'sklearn/mixture/gmm.py'
# - 'sklearn/mixture/test_gmm.py'
import unittest
import sys
import numpy as np
from sklearn.mixture import DPGMM, VBGMM
from sklearn.mixture.dpgmm import log_normalize
from sklearn.datasets import make_blobs
from sklearn.utils.testing import assert_array_less, assert_equal
from sklearn.utils.testing import assert_warns_message, ignore_warnings
from sklearn.mixture.tests.test_gmm import GMMTester
from sklearn.externals.six.moves import cStringIO as StringIO
from sklearn.mixture.dpgmm import digamma, gammaln
from sklearn.mixture.dpgmm import wishart_log_det, wishart_logz
np.seterr(all='warn')
@ignore_warnings(category=DeprecationWarning)
def test_class_weights():
# check that the class weights are updated
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50)
dpgmm.fit(X)
# get indices of components that are used:
indices = np.unique(dpgmm.predict(X))
active = np.zeros(10, dtype=np.bool)
active[indices] = True
# used components are important
assert_array_less(.1, dpgmm.weights_[active])
# others are not
assert_array_less(dpgmm.weights_[~active], .05)
@ignore_warnings(category=DeprecationWarning)
def test_verbose_boolean():
# checks that the output for the verbose output is the same
# for the flag values '1' and 'True'
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm_bool = Model(n_components=10, random_state=1, alpha=20,
n_iter=50, verbose=True)
dpgmm_int = Model(n_components=10, random_state=1, alpha=20,
n_iter=50, verbose=1)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
# generate output with the boolean flag
dpgmm_bool.fit(X)
verbose_output = sys.stdout
verbose_output.seek(0)
bool_output = verbose_output.readline()
# generate output with the int flag
dpgmm_int.fit(X)
verbose_output = sys.stdout
verbose_output.seek(0)
int_output = verbose_output.readline()
assert_equal(bool_output, int_output)
finally:
sys.stdout = old_stdout
@ignore_warnings(category=DeprecationWarning)
def test_verbose_first_level():
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50,
verbose=1)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
dpgmm.fit(X)
finally:
sys.stdout = old_stdout
@ignore_warnings(category=DeprecationWarning)
def test_verbose_second_level():
# simple 3 cluster dataset
X, y = make_blobs(random_state=1)
for Model in [DPGMM, VBGMM]:
dpgmm = Model(n_components=10, random_state=1, alpha=20, n_iter=50,
verbose=2)
old_stdout = sys.stdout
sys.stdout = StringIO()
try:
dpgmm.fit(X)
finally:
sys.stdout = old_stdout
@ignore_warnings(category=DeprecationWarning)
def test_digamma():
assert_warns_message(DeprecationWarning, "The function digamma is"
" deprecated in 0.18 and will be removed in 0.20. "
"Use scipy.special.digamma instead.", digamma, 3)
@ignore_warnings(category=DeprecationWarning)
def test_gammaln():
assert_warns_message(DeprecationWarning, "The function gammaln"
" is deprecated in 0.18 and will be removed"
" in 0.20. Use scipy.special.gammaln instead.",
gammaln, 3)
@ignore_warnings(category=DeprecationWarning)
def test_log_normalize():
v = np.array([0.1, 0.8, 0.01, 0.09])
a = np.log(2 * v)
result = assert_warns_message(DeprecationWarning, "The function "
"log_normalize is deprecated in 0.18 and"
" will be removed in 0.20.",
log_normalize, a)
assert np.allclose(v, result, rtol=0.01)
@ignore_warnings(category=DeprecationWarning)
def test_wishart_log_det():
a = np.array([0.1, 0.8, 0.01, 0.09])
b = np.array([0.2, 0.7, 0.05, 0.1])
assert_warns_message(DeprecationWarning, "The function "
"wishart_log_det is deprecated in 0.18 and"
" will be removed in 0.20.",
wishart_log_det, a, b, 2, 4)
@ignore_warnings(category=DeprecationWarning)
def test_wishart_logz():
assert_warns_message(DeprecationWarning, "The function "
"wishart_logz is deprecated in 0.18 and "
"will be removed in 0.20.", wishart_logz,
3, np.identity(3), 1, 3)
@ignore_warnings(category=DeprecationWarning)
def test_DPGMM_deprecation():
assert_warns_message(
DeprecationWarning, "The `DPGMM` class is not working correctly and "
"it's better to use `sklearn.mixture.BayesianGaussianMixture` class "
"with parameter `weight_concentration_prior_type='dirichlet_process'` "
"instead. DPGMM is deprecated in 0.18 and will be removed in 0.20.",
DPGMM)
def do_model(self, **kwds):
return VBGMM(verbose=False, **kwds)
class DPGMMTester(GMMTester):
model = DPGMM
do_test_eval = False
def score(self, g, train_obs):
_, z = g.score_samples(train_obs)
return g.lower_bound(train_obs, z)
class TestDPGMMWithSphericalCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'spherical'
setUp = GMMTester._setUp
class TestDPGMMWithDiagCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'diag'
setUp = GMMTester._setUp
class TestDPGMMWithTiedCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'tied'
setUp = GMMTester._setUp
class TestDPGMMWithFullCovars(unittest.TestCase, DPGMMTester):
covariance_type = 'full'
setUp = GMMTester._setUp
def test_VBGMM_deprecation():
assert_warns_message(
DeprecationWarning, "The `VBGMM` class is not working correctly and "
"it's better to use `sklearn.mixture.BayesianGaussianMixture` class "
"with parameter `weight_concentration_prior_type="
"'dirichlet_distribution'` instead. VBGMM is deprecated "
"in 0.18 and will be removed in 0.20.", VBGMM)
class VBGMMTester(GMMTester):
model = do_model
do_test_eval = False
def score(self, g, train_obs):
_, z = g.score_samples(train_obs)
return g.lower_bound(train_obs, z)
class TestVBGMMWithSphericalCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'spherical'
setUp = GMMTester._setUp
class TestVBGMMWithDiagCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'diag'
setUp = GMMTester._setUp
class TestVBGMMWithTiedCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'tied'
setUp = GMMTester._setUp
class TestVBGMMWithFullCovars(unittest.TestCase, VBGMMTester):
covariance_type = 'full'
setUp = GMMTester._setUp
def test_vbgmm_no_modify_alpha():
alpha = 2.
n_components = 3
X, y = make_blobs(random_state=1)
vbgmm = VBGMM(n_components=n_components, alpha=alpha, n_iter=1)
assert_equal(vbgmm.alpha, alpha)
assert_equal(vbgmm.fit(X).alpha_, float(alpha) / n_components)
| bsd-3-clause |
mariusvniekerk/ibis | ibis/impala/tests/test_pandas_interop.py | 2 | 9562 | # Copyright 2015 Cloudera Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import pytest
from pandas.util.testing import assert_frame_equal
import pandas as pd
from ibis.compat import unittest
from ibis.common import IbisTypeError
from ibis.impala.pandas_interop import pandas_to_ibis_schema, DataFrameWriter
from ibis.impala.tests.common import ImpalaE2E
import ibis.expr.datatypes as dt
import ibis.expr.types as ir
import ibis.util as util
import ibis
class TestPandasTypeInterop(unittest.TestCase):
def test_series_to_ibis_literal(self):
values = [1, 2, 3, 4]
s = pd.Series(values)
expr = ir.as_value_expr(s)
expected = ir.sequence(list(s))
assert expr.equals(expected)
class TestPandasSchemaInference(unittest.TestCase):
def test_dtype_bool(self):
df = pd.DataFrame({'col': [True, False, False]})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'boolean')])
assert inferred == expected
def test_dtype_int8(self):
df = pd.DataFrame({'col': np.int8([-3, 9, 17])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int8')])
assert inferred == expected
def test_dtype_int16(self):
df = pd.DataFrame({'col': np.int16([-5, 0, 12])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int16')])
assert inferred == expected
def test_dtype_int32(self):
df = pd.DataFrame({'col': np.int32([-12, 3, 25000])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int32')])
assert inferred == expected
def test_dtype_int64(self):
df = pd.DataFrame({'col': np.int64([102, 67228734, -0])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int64')])
assert inferred == expected
def test_dtype_float32(self):
df = pd.DataFrame({'col': np.float32([45e-3, -0.4, 99.])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'float')])
assert inferred == expected
def test_dtype_float64(self):
df = pd.DataFrame({'col': np.float64([-3e43, 43., 10000000.])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'double')])
assert inferred == expected
def test_dtype_uint8(self):
df = pd.DataFrame({'col': np.uint8([3, 0, 16])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int16')])
assert inferred == expected
def test_dtype_uint16(self):
df = pd.DataFrame({'col': np.uint16([5569, 1, 33])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int32')])
assert inferred == expected
def test_dtype_uint32(self):
df = pd.DataFrame({'col': np.uint32([100, 0, 6])})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int64')])
assert inferred == expected
def test_dtype_uint64(self):
df = pd.DataFrame({'col': np.uint64([666, 2, 3])})
with self.assertRaises(IbisTypeError):
inferred = pandas_to_ibis_schema(df) # noqa
def test_dtype_datetime64(self):
df = pd.DataFrame({
'col': [pd.Timestamp('2010-11-01 00:01:00'),
pd.Timestamp('2010-11-01 00:02:00.1000'),
pd.Timestamp('2010-11-01 00:03:00.300000')]})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'timestamp')])
assert inferred == expected
def test_dtype_timedelta64(self):
df = pd.DataFrame({
'col': [pd.Timedelta('1 days'),
pd.Timedelta('-1 days 2 min 3us'),
pd.Timedelta('-2 days +23:57:59.999997')]})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'int64')])
assert inferred == expected
def test_dtype_string(self):
df = pd.DataFrame({'col': ['foo', 'bar', 'hello']})
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', 'string')])
assert inferred == expected
def test_dtype_categorical(self):
df = pd.DataFrame({'col': ['a', 'b', 'c', 'a']}, dtype='category')
inferred = pandas_to_ibis_schema(df)
expected = ibis.schema([('col', dt.Category(3))])
assert inferred == expected
exhaustive_df = pd.DataFrame({
'bigint_col': np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90],
dtype='i8'),
'bool_col': np.array([True, False, True, False, True, None,
True, False, True, False], dtype=np.bool_),
'bool_obj_col': np.array([True, False, np.nan, False, True, np.nan,
True, np.nan, True, False], dtype=np.object_),
'date_string_col': ['11/01/10', None, '11/01/10', '11/01/10',
'11/01/10', '11/01/10', '11/01/10', '11/01/10',
'11/01/10', '11/01/10'],
'double_col': np.array([0.0, 10.1, np.nan, 30.299999999999997,
40.399999999999999, 50.5, 60.599999999999994,
70.700000000000003, 80.799999999999997,
90.899999999999991], dtype=np.float64),
'float_col': np.array([np.nan, 1.1000000238418579, 2.2000000476837158,
3.2999999523162842, 4.4000000953674316, 5.5,
6.5999999046325684, 7.6999998092651367,
8.8000001907348633,
9.8999996185302734], dtype='f4'),
'int_col': np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='i4'),
'month': [11, 11, 11, 11, 2, 11, 11, 11, 11, 11],
'smallint_col': np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='i2'),
'string_col': ['0', '1', None, 'double , whammy', '4', '5',
'6', '7', '8', '9'],
'timestamp_col': [pd.Timestamp('2010-11-01 00:00:00'),
None,
pd.Timestamp('2010-11-01 00:02:00.100000'),
pd.Timestamp('2010-11-01 00:03:00.300000'),
pd.Timestamp('2010-11-01 00:04:00.600000'),
pd.Timestamp('2010-11-01 00:05:00.100000'),
pd.Timestamp('2010-11-01 00:06:00.150000'),
pd.Timestamp('2010-11-01 00:07:00.210000'),
pd.Timestamp('2010-11-01 00:08:00.280000'),
pd.Timestamp('2010-11-01 00:09:00.360000')],
'tinyint_col': np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype='i1'),
'year': [2010, 2010, 2010, 2010, 2010, 2009, 2009, 2009, 2009, 2009]})
class TestPandasInterop(ImpalaE2E, unittest.TestCase):
@classmethod
def setUpClass(cls):
super(TestPandasInterop, cls).setUpClass()
cls.alltypes = cls.alltypes.execute()
def test_alltypes_roundtrip(self):
pytest.skip('IMPALA-2750')
self._check_roundtrip(self.alltypes)
def test_writer_cleanup_deletes_hdfs_dir(self):
writer = DataFrameWriter(self.con, self.alltypes)
path = writer.write_temp_csv()
assert self.con.hdfs.exists(path)
writer.cleanup()
assert not self.con.hdfs.exists(path)
# noop
writer.cleanup()
assert not self.con.hdfs.exists(path)
def test_create_table_from_dataframe(self):
pytest.skip('IMPALA-2750')
tname = 'tmp_pandas_{0}'.format(util.guid())
self.con.create_table(tname, self.alltypes, database=self.tmp_db)
self.temp_tables.append(tname)
table = self.con.table(tname, database=self.tmp_db)
df = table.execute()
assert_frame_equal(df, self.alltypes)
def test_insert(self):
pytest.skip('IMPALA-2750')
schema = pandas_to_ibis_schema(exhaustive_df)
table_name = 'tmp_pandas_{0}'.format(util.guid())
self.con.create_table(table_name, database=self.tmp_db,
schema=schema)
self.temp_tables.append(table_name)
self.con.insert(table_name, exhaustive_df.iloc[:4],
database=self.tmp_db)
self.con.insert(table_name, exhaustive_df.iloc[4:],
database=self.tmp_db)
table = self.con.table(table_name, database=self.tmp_db)
result = (table.execute()
.sort_index(by='tinyint_col')
.reset_index(drop=True))
assert_frame_equal(result, exhaustive_df)
def test_insert_partition(self):
# overwrite
# no overwrite
pass
def test_round_trip_exhaustive(self):
pytest.skip('IMPALA-2750')
self._check_roundtrip(exhaustive_df)
def _check_roundtrip(self, df):
writer = DataFrameWriter(self.con, df)
path = writer.write_temp_csv()
table = writer.delimited_table(path)
df2 = table.execute()
assert_frame_equal(df2, df)
| apache-2.0 |
dotsdl/msmbuilder | msmbuilder/tests/test_msm.py | 2 | 11980 | from __future__ import print_function, division
import os
import tempfile
import mdtraj as md
import pandas as pd
import numpy as np
from numpy.testing import assert_approx_equal
from mdtraj.testing import eq
from sklearn.externals.joblib import load, dump
import sklearn.pipeline
from six import PY3
from msmbuilder.utils import map_drawn_samples
from msmbuilder import cluster
from msmbuilder.msm.core import _transition_counts
from msmbuilder.msm import MarkovStateModel
def test_counts_1():
# test counts matrix without trimming
model = MarkovStateModel(reversible_type=None, ergodic_cutoff=0)
model.fit([[1, 1, 1, 1, 1, 1, 1, 1, 1]])
eq(model.countsmat_, np.array([[8.0]]))
eq(model.mapping_, {1: 0})
def test_counts_2():
# test counts matrix with trimming
model = MarkovStateModel(reversible_type=None, ergodic_cutoff=1)
model.fit([[1, 1, 1, 1, 1, 1, 1, 1, 1, 2]])
eq(model.mapping_, {1: 0})
eq(model.countsmat_, np.array([[8]]))
def test_counts_3():
# test counts matrix scaling
seq = [1] * 4 + [2] * 4 + [1] * 4
model1 = MarkovStateModel(reversible_type=None, lag_time=2,
sliding_window=True).fit([seq])
model2 = MarkovStateModel(reversible_type=None, lag_time=2,
sliding_window=False).fit([seq])
model3 = MarkovStateModel(reversible_type=None, lag_time=2,
ergodic_cutoff='off').fit([seq])
eq(model1.countsmat_, model2.countsmat_)
eq(model1.countsmat_, model3.countsmat_)
eq(model2.countsmat_, model3.countsmat_)
def test_3():
model = MarkovStateModel(reversible_type='mle')
model.fit([[0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 2, 2, 2, 2, 0, 0, 0]])
counts = np.array([[8, 1, 1], [1, 3, 0], [1, 0, 3]])
eq(model.countsmat_, counts)
assert np.sum(model.populations_) == 1.0
model.timescales_
# test pickleable
try:
dir = tempfile.mkdtemp()
fn = os.path.join(dir, 'test-msm-temp.npy')
dump(model, fn, compress=1)
model2 = load(fn)
eq(model2.timescales_, model.timescales_)
finally:
os.unlink(fn)
os.rmdir(dir)
def test_4():
data = [np.random.randn(10, 1), np.random.randn(100, 1)]
print(cluster.KMeans(n_clusters=3).fit_predict(data))
print(cluster.MiniBatchKMeans(n_clusters=3).fit_predict(data))
print(cluster.AffinityPropagation().fit_predict(data))
print(cluster.MeanShift().fit_predict(data))
print(cluster.SpectralClustering(n_clusters=2).fit_predict(data))
print(cluster.Ward(n_clusters=2).fit_predict(data))
def test_5():
# test score_ll
model = MarkovStateModel(reversible_type='mle')
sequence = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']
model.fit([sequence])
assert model.mapping_ == {'a': 0, 'b': 1}
score_aa = model.score_ll([['a', 'a']])
assert score_aa == np.log(model.transmat_[0, 0])
score_bb = model.score_ll([['b', 'b']])
assert score_bb == np.log(model.transmat_[1, 1])
score_ab = model.score_ll([['a', 'b']])
assert score_ab == np.log(model.transmat_[0, 1])
score_abb = model.score_ll([['a', 'b', 'b']])
assert score_abb == (np.log(model.transmat_[0, 1]) +
np.log(model.transmat_[1, 1]))
assert model.state_labels_ == ['a', 'b']
assert np.sum(model.populations_) == 1.0
def test_51():
# test score_ll
model = MarkovStateModel(reversible_type='mle')
sequence = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b', 'c', 'c', 'c', 'a', 'a']
model.fit([sequence])
assert model.mapping_ == {'a': 0, 'b': 1, 'c': 2}
score_ac = model.score_ll([['a', 'c']])
assert score_ac == np.log(model.transmat_[0, 2])
def test_6():
# test score_ll with novel entries
model = MarkovStateModel(reversible_type='mle')
sequence = ['a', 'a', 'b', 'b', 'a', 'a', 'b', 'b']
model.fit([sequence])
assert not np.isfinite(model.score_ll([['c']]))
assert not np.isfinite(model.score_ll([['c', 'c']]))
assert not np.isfinite(model.score_ll([['a', 'c']]))
def test_7():
# test timescales
model = MarkovStateModel()
model.fit([[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1]])
assert np.all(np.isfinite(model.timescales_))
assert len(model.timescales_) == 1
model.fit([[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]])
assert np.all(np.isfinite(model.timescales_))
assert len(model.timescales_) == 2
assert model.n_states_ == 3
model = MarkovStateModel(n_timescales=1)
model.fit([[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]])
assert len(model.timescales_) == 1
model = MarkovStateModel(n_timescales=100)
model.fit([[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]])
assert len(model.timescales_) == 2
assert np.sum(model.populations_) == 1.0
def test_8():
# test transform
model = MarkovStateModel()
model.fit([['a', 'a', 'b', 'b', 'c', 'c', 'a', 'a']])
assert model.mapping_ == {'a': 0, 'b': 1, 'c': 2}
v = model.transform([['a', 'b', 'c']])
assert isinstance(v, list)
assert len(v) == 1
assert v[0].dtype == np.int
np.testing.assert_array_equal(v[0], [0, 1, 2])
v = model.transform([['a', 'b', 'c', 'd']], 'clip')
assert isinstance(v, list)
assert len(v) == 1
assert v[0].dtype == np.int
np.testing.assert_array_equal(v[0], [0, 1, 2])
v = model.transform([['a', 'b', 'c', 'd']], 'clip')
assert isinstance(v, list)
assert len(v) == 1
assert v[0].dtype == np.int
np.testing.assert_array_equal(v[0], [0, 1, 2])
v = model.transform([['a', 'b', 'c', 'd']], 'fill')
assert isinstance(v, list)
assert len(v) == 1
assert v[0].dtype == np.float
np.testing.assert_array_equal(v[0], [0, 1, 2, np.nan])
v = model.transform([['a', 'a', 'SPLIT', 'b', 'b', 'b']], 'clip')
assert isinstance(v, list)
assert len(v) == 2
assert v[0].dtype == np.int
assert v[1].dtype == np.int
np.testing.assert_array_equal(v[0], [0, 0])
np.testing.assert_array_equal(v[1], [1, 1, 1])
def test_9():
# what if the input data contains NaN? They should be ignored
model = MarkovStateModel(ergodic_cutoff=0)
seq = [0, 1, 0, 1, np.nan]
model.fit(seq)
assert model.n_states_ == 2
assert model.mapping_ == {0: 0, 1: 1}
if not PY3:
model = MarkovStateModel()
seq = [0, 1, 0, None, 0, 1]
model.fit(seq)
assert model.n_states_ == 2
assert model.mapping_ == {0: 0, 1: 1}
def test_10():
# test inverse transform
model = MarkovStateModel(reversible_type=None, ergodic_cutoff=0)
model.fit([['a', 'b', 'c', 'a', 'a', 'b']])
v = model.inverse_transform([[0, 1, 2]])
assert len(v) == 1
np.testing.assert_array_equal(v[0], ['a', 'b', 'c'])
def test_11():
# test sample
model = MarkovStateModel()
model.fit([[0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]])
sample = model.sample_discrete(n_steps=1000, random_state=0)
assert isinstance(sample, np.ndarray)
assert len(sample) == 1000
bc = np.bincount(sample)
diff = model.populations_ - (bc / np.sum(bc))
assert np.sum(np.abs(diff)) < 0.1
def test_12():
# test eigtransform
model = MarkovStateModel(n_timescales=1)
model.fit([[4, 3, 0, 0, 0, 1, 2, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]])
assert model.mapping_ == {0: 0, 1: 1, 2: 2}
assert len(model.eigenvalues_) == 2
t = model.eigtransform([[0, 1]], right=True)
assert t[0][0] == model.right_eigenvectors_[0, 1]
assert t[0][1] == model.right_eigenvectors_[1, 1]
s = model.eigtransform([[0, 1]], right=False)
assert s[0][0] == model.left_eigenvectors_[0, 1]
assert s[0][1] == model.left_eigenvectors_[1, 1]
def test_eigtransform_2():
model = MarkovStateModel(n_timescales=2)
traj = [4, 3, 0, 0, 0, 1, 2, 1, 0, 0, 0, 1, 0, 1, 1, 2, 2, 0, 0]
model.fit([traj])
transformed_0 = model.eigtransform([traj], mode='clip')
# clip off the first two states (not ergodic)
assert transformed_0[0].shape == (len(traj) - 2, model.n_timescales)
transformed_1 = model.eigtransform([traj], mode='fill')
assert transformed_1[0].shape == (len(traj), model.n_timescales)
assert np.all(np.isnan(transformed_1[0][:2, :]))
assert not np.any(np.isnan(transformed_1[0][2:]))
def test_13():
model = MarkovStateModel(n_timescales=2)
model.fit([[0, 0, 0, 1, 2, 1, 0, 0, 0, 1, 3, 3, 3, 1, 1, 2, 2, 0, 0]])
left_right = np.dot(model.left_eigenvectors_.T, model.right_eigenvectors_)
# check biorthonormal
np.testing.assert_array_almost_equal(
left_right,
np.eye(3))
# check that the stationary left eigenvector is normalized to be 1
np.testing.assert_almost_equal(model.left_eigenvectors_[:, 0].sum(), 1)
# the left eigenvectors satisfy <\phi_i, \phi_i>_{\mu^{-1}} = 1
for i in range(3):
np.testing.assert_almost_equal(
np.dot(model.left_eigenvectors_[:, i],
model.left_eigenvectors_[:, i] / model.populations_), 1)
# and that the right eigenvectors satisfy <\psi_i, \psi_i>_{\mu} = 1
for i in range(3):
np.testing.assert_almost_equal(
np.dot(model.right_eigenvectors_[:, i],
model.right_eigenvectors_[:, i] *
model.populations_), 1)
def test_14():
from msmbuilder.example_datasets import load_doublewell
from msmbuilder.cluster import NDGrid
from sklearn.pipeline import Pipeline
ds = load_doublewell(random_state=0)
p = Pipeline([
('ndgrid', NDGrid(n_bins_per_feature=100)),
('msm', MarkovStateModel(lag_time=100))
])
p.fit(ds.trajectories)
p.named_steps['msm'].summarize()
def test_sample_1():
# Test that the code actually runs and gives something non-crazy
# Make an ergodic dataset with two gaussian centers offset by 25 units.
chunk = np.random.normal(size=(20000, 3))
data = [np.vstack((chunk, chunk + 25)), np.vstack((chunk + 25, chunk))]
clusterer = cluster.KMeans(n_clusters=2)
msm = MarkovStateModel()
pipeline = sklearn.pipeline.Pipeline(
[("clusterer", clusterer), ("msm", msm)]
)
pipeline.fit(data)
trimmed_assignments = pipeline.transform(data)
# Now let's make make the output assignments start with
# zero at the first position.
i0 = trimmed_assignments[0][0]
if i0 == 1:
for m in trimmed_assignments:
m *= -1
m += 1
pairs = msm.draw_samples(trimmed_assignments, 2000)
samples = map_drawn_samples(pairs, data)
mu = np.mean(samples, axis=1)
eq(mu, np.array([[0., 0., 0.0], [25., 25., 25.]]), decimal=1)
# We should make sure we can sample from Trajectory objects too...
# Create a fake topology with 1 atom to match our input dataset
top = md.Topology.from_dataframe(
pd.DataFrame({
"serial": [0], "name": ["HN"], "element": ["H"], "resSeq": [1],
"resName": "RES", "chainID": [0]
}), bonds=np.zeros(shape=(0, 2), dtype='int')
)
# np.newaxis reshapes the data to have a 40000 frames, 1 atom, 3 xyz
trajectories = [md.Trajectory(x[:, np.newaxis], top)
for x in data]
trj_samples = map_drawn_samples(pairs, trajectories)
mu = np.array([t.xyz.mean(0)[0] for t in trj_samples])
eq(mu, np.array([[0., 0., 0.0], [25., 25., 25.]]), decimal=1)
def test_score_1():
# test that GMRQ is equal to the sum of the first n eigenvalues,
# when testing and training on the same dataset.
sequence = [0, 0, 0, 1, 1, 1, 2, 2, 2, 1, 1, 1,
0, 0, 0, 1, 2, 2, 2, 1, 1, 1, 0, 0]
for n in [0, 1, 2]:
model = MarkovStateModel(verbose=False, n_timescales=n)
model.fit([sequence])
assert_approx_equal(model.score([sequence]), model.eigenvalues_.sum())
assert_approx_equal(model.score([sequence]), model.score_)
| lgpl-2.1 |
sgenoud/scikit-learn | sklearn/linear_model/tests/test_sparse_coordinate_descent.py | 1 | 7659 | import numpy as np
import scipy.sparse as sp
from numpy.testing import assert_array_almost_equal
from numpy.testing import assert_almost_equal
from numpy.testing import assert_equal
from nose.tools import assert_true
from sklearn.utils.testing import assert_less, assert_greater
from sklearn.linear_model.coordinate_descent import Lasso, ElasticNet
def test_sparse_coef():
""" Check that the sparse_coef propery works """
clf = ElasticNet()
clf.coef_ = [1, 2, 3]
assert_true(sp.isspmatrix(clf.sparse_coef_))
assert_equal(clf.sparse_coef_.todense().tolist()[0], clf.coef_)
def test_normalize_option():
""" Check that the normalize option in enet works """
X = sp.csc_matrix([[-1], [0], [1]])
y = [-1, 0, 1]
clf_dense = ElasticNet(fit_intercept=True, normalize=True)
clf_sparse = ElasticNet(fit_intercept=True, normalize=True)
clf_dense.fit(X, y)
X = sp.csc_matrix(X)
clf_sparse.fit(X, y)
assert_almost_equal(clf_dense.dual_gap_, 0)
assert_array_almost_equal(clf_dense.coef_, clf_sparse.coef_)
def test_lasso_zero():
"""Check that the sparse lasso can handle zero data without crashing"""
X = sp.csc_matrix((3, 1))
y = [0, 0, 0]
T = np.array([[1], [2], [3]])
clf = Lasso().fit(X, y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0])
assert_array_almost_equal(pred, [0, 0, 0])
assert_almost_equal(clf.dual_gap_, 0)
def test_enet_toy_list_input():
"""Test ElasticNet for various parameters of alpha and rho with list X"""
X = np.array([[-1], [0], [1]])
X = sp.csc_matrix(X)
Y = [-1, 0, 1] # just a straight line
T = np.array([[2], [3], [4]]) # test sample
# this should be the same as unregularized least squares
clf = ElasticNet(alpha=0, rho=1.0)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [1])
assert_array_almost_equal(pred, [2, 3, 4])
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, rho=0.3, max_iter=1000)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.50819], decimal=3)
assert_array_almost_equal(pred, [1.0163, 1.5245, 2.0327], decimal=3)
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, rho=0.5)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.45454], 3)
assert_array_almost_equal(pred, [0.9090, 1.3636, 1.8181], 3)
assert_almost_equal(clf.dual_gap_, 0)
def test_enet_toy_explicit_sparse_input():
"""Test ElasticNet for various parameters of alpha and rho with sparse X"""
# training samples
X = sp.lil_matrix((3, 1))
X[0, 0] = -1
# X[1, 0] = 0
X[2, 0] = 1
Y = [-1, 0, 1] # just a straight line (the identity function)
# test samples
T = sp.lil_matrix((3, 1))
T[0, 0] = 2
T[1, 0] = 3
T[2, 0] = 4
# this should be the same as lasso
clf = ElasticNet(alpha=0, rho=1.0)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [1])
assert_array_almost_equal(pred, [2, 3, 4])
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, rho=0.3, max_iter=1000)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.50819], decimal=3)
assert_array_almost_equal(pred, [1.0163, 1.5245, 2.0327], decimal=3)
assert_almost_equal(clf.dual_gap_, 0)
clf = ElasticNet(alpha=0.5, rho=0.5)
clf.fit(X, Y)
pred = clf.predict(T)
assert_array_almost_equal(clf.coef_, [0.45454], 3)
assert_array_almost_equal(pred, [0.9090, 1.3636, 1.8181], 3)
assert_almost_equal(clf.dual_gap_, 0)
def make_sparse_data(n_samples, n_features, n_informative, seed=42,
positive=False):
random_state = np.random.RandomState(seed)
# build an ill-posed linear regression problem with many noisy features and
# comparatively few samples
# generate a ground truth model
w = random_state.randn(n_features)
w[n_informative:] = 0.0 # only the top features are impacting the model
if positive:
w = np.abs(w)
X = random_state.randn(n_samples, n_features)
rnd = random_state.uniform(size=(n_samples, n_features))
X[rnd > 0.5] = 0.0 # 50% of zeros in input signal
# generate training ground truth labels
y = np.dot(X, w)
X = sp.csc_matrix(X)
return X, y
def _test_sparse_enet_not_as_toy_dataset(alpha, fit_intercept, positive):
n_samples, n_features, max_iter = 100, 100, 1000
n_informative = 10
X, y = make_sparse_data(n_samples, n_features, n_informative,
positive=positive)
X_train, X_test = X[n_samples / 2:], X[:n_samples / 2]
y_train, y_test = y[n_samples / 2:], y[:n_samples / 2]
s_clf = ElasticNet(alpha=alpha, rho=0.8, fit_intercept=fit_intercept,
max_iter=max_iter, tol=1e-7, positive=positive,
warm_start=True)
s_clf.fit(X_train, y_train)
assert_almost_equal(s_clf.dual_gap_, 0, 4)
assert_greater(s_clf.score(X_test, y_test), 0.85)
# check the convergence is the same as the dense version
d_clf = ElasticNet(alpha=alpha, rho=0.8, fit_intercept=fit_intercept,
max_iter=max_iter, tol=1e-7, positive=positive,
warm_start=True)
d_clf.fit(X_train.todense(), y_train)
assert_almost_equal(d_clf.dual_gap_, 0, 4)
assert_greater(d_clf.score(X_test, y_test), 0.85)
assert_almost_equal(s_clf.coef_, d_clf.coef_, 5)
assert_almost_equal(s_clf.intercept_, d_clf.intercept_, 5)
# check that the coefs are sparse
assert_less(np.sum(s_clf.coef_ != 0.0), 2 * n_informative)
# check that warm restart leads to the same result with
# sparse and dense versions
rng = np.random.RandomState(seed=0)
coef_init = rng.randn(n_features)
d_clf.fit(X_train.todense(), y_train, coef_init=coef_init)
s_clf.fit(X_train, y_train, coef_init=coef_init)
assert_almost_equal(s_clf.coef_, d_clf.coef_, 5)
assert_almost_equal(s_clf.intercept_, d_clf.intercept_, 5)
def test_sparse_enet_not_as_toy_dataset():
_test_sparse_enet_not_as_toy_dataset(alpha=0.1, fit_intercept=False,
positive=False)
_test_sparse_enet_not_as_toy_dataset(alpha=0.1, fit_intercept=True,
positive=False)
_test_sparse_enet_not_as_toy_dataset(alpha=1e-3, fit_intercept=False,
positive=True)
_test_sparse_enet_not_as_toy_dataset(alpha=1e-3, fit_intercept=True,
positive=True)
def test_sparse_lasso_not_as_toy_dataset():
n_samples, n_features, max_iter = 100, 100, 1000
n_informative = 10
X, y = make_sparse_data(n_samples, n_features, n_informative)
X_train, X_test = X[n_samples / 2:], X[:n_samples / 2]
y_train, y_test = y[n_samples / 2:], y[:n_samples / 2]
s_clf = Lasso(alpha=0.1, fit_intercept=False,
max_iter=max_iter, tol=1e-7)
s_clf.fit(X_train, y_train)
assert_almost_equal(s_clf.dual_gap_, 0, 4)
assert_greater(s_clf.score(X_test, y_test), 0.85)
# check the convergence is the same as the dense version
d_clf = Lasso(alpha=0.1, fit_intercept=False, max_iter=max_iter,
tol=1e-7)
d_clf.fit(X_train.todense(), y_train)
assert_almost_equal(d_clf.dual_gap_, 0, 4)
assert_greater(d_clf.score(X_test, y_test), 0.85)
# check that the coefs are sparse
assert_equal(np.sum(s_clf.coef_ != 0.0), n_informative)
| bsd-3-clause |
pathfinder14/OpenSAPM | osamp/postprocess.py | 1 | 13595 | import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import sys
import imageio
import os
import shutil
import glob
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d import axes3d
def draw1DSlice(solution, t_slice, x_start, x_end, legend, solution_max_value):
print ('in draw 1d slice')
M = len(solution)
x_step = (x_end - x_start) / M
x = np.arange(x_start,x_end,x_step)
    #Set the figure size.
ax = plt.figure(figsize = (30,30)).add_subplot(111)
    #Set the tick-label font sizes on both axes.
xax = ax.xaxis
xlabels = xax.get_ticklabels()
for label in xlabels:
label.set_fontsize(40)
yax= ax.yaxis
ylabels = yax.get_ticklabels()
for label in ylabels:
label.set_fontsize(40)
    #Set the display limits for x and f(x).
plt.ylim(-3/2 * solution_max_value, 3/2 * solution_max_value)
plt.xlim(x[0],x[-1])
    #Set the plot title and axis labels, and enable the grid.
plt.title(legend + ' plot, ' + 't = ' + str(t_slice) + 's', fontsize = 50)
plt.xlabel('x', fontsize = 40)
plt.ylabel(legend+'(x)', fontsize = 40)
plt.grid(True)
plt.plot(x, solution, "--",linewidth=5)
    plt.savefig('img' + os.sep + str(t_slice) + 's.png')  # when rendering a movie, each frame is saved as a .png
plt.clf()
    #plt.show() - use this if you need to display the figure; it will not be saved then
#t_step = T_real_step / t_real_step
#t_filming_step - the step with which we want to render frames (used as an index stride over the time grid)
#t_grid_step - the time step of the grid, in seconds
def draw1DMovie(solution, t_filming_step, x_start, x_end, legend, t_grid_step):
    #Remove files from the img directory before rendering the movie.
files = glob.glob('img' + os.sep + '*')
for f in files:
os.remove(f)
    #Call the time-slice plotting routine in a loop.
for i in range(0, solution.shape[1], t_filming_step):
draw1DSlice(solution[:,i, :], i * t_grid_step, x_start, x_end, legend, np.max(solution[:,i, :]))
    #Build a GIF from the contents of the img folder and save it there as well.
images = []
print("Making gif")
    filenames = [str(i * t_grid_step) + 's.png' for i in range(0, solution.shape[1], t_filming_step)]
for filename in filenames:
images.append(imageio.imread('img' + os.sep + filename))
imageio.mimsave('img' + os.sep + 'movie.gif', images, duration = 0.1)
def draw2DSlice(solution, t_slice, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value,
time_marker_length):
print('in draw 2d slice')
# npArray = np.array(solution)
# npArrayTransposed = npArray.transpose()
# solution = npArrayTransposed
M = len(solution)
x_step = (x_end - x_start) / M
x = np.arange(x_start,x_end,x_step)
M = len(solution[0])
y_step = (y_end - y_start) / M
y = np.arange(y_start,y_end,y_step)
    #Set the figure size.
fig = plt.figure(figsize = (15,10))
ax = fig.add_subplot(111)
x_axis_step = (x_end - x_start)/8
    xlabel = np.arange(x_start, x_end, x_axis_step) # build the tick labels along the x axis
ax.set_xticklabels(xlabel)
ax.set_xlabel('x', fontsize = 40)
y_axis_step = (y_end - y_start)/8
    ylabel = np.arange(y_start, y_end, y_axis_step) # build the tick labels along the y axis
ax.set_yticklabels(ylabel)
ax.set_ylabel('y', fontsize = 40)
ax.set_title(legend + ' colorbar, ' + 't = ' + str(t_slice) + 's', fontsize = 50)
ax.grid(True)
# cpool = ['blue','cyan','green','yellow','pink']
cpool = ['#f7f5f2','#e5dfd3','#ccbea1','#ad9b74','#8e7a4e', '#705b2f', '#4f3d17', '#352709', '#070500']
    cmap = mpl.colors.ListedColormap(cpool) # define a discrete colour scale from the cpool list
    cmap.set_over('red') # colour for values above the levels[-1] boundary (top of the scale)
    cmap.set_under('grey') # colour for values below the levels[0] boundary (bottom of the scale)
    # Define the list of contour level boundaries; values fall into intervals z1 < z <= z2
levels = np.linspace(solution_min_value, solution_max_value, len(cpool)+1)
    norm = mpl.colors.Normalize(vmin=solution_min_value, vmax=solution_max_value) # set the bounds of the colour segments of the scale
    #Side colourbar
    # Set the boundaries for the contour levels.
    # For the contour and contourf methods, the triangular indicators that mark values exceeding
    # the given levels boundaries are configured in the contour call itself.
    #Bind the value boundaries of the colourbar zones to colours.
cs = ax.contourf(solution, levels, cmap=cmap, norm=norm, extend='both')
    cbar = fig.colorbar(cs, ax=ax, spacing='proportional', # make the colour segments proportional to the levels boundaries
                        extendfrac='auto', # adjust the length of the triangular indicators
                        orientation='vertical') # the colourbar orientation can also be changed to horizontal
    #Create the folder if it does not exist
if not os.path.exists('img'):
os.makedirs('img')
    plt.savefig('img' + os.sep + str(t_slice) + 's.png') # when rendering a movie, each frame is saved as a .png
plt.clf()
    #plt.show() - use this if you need to display the figure; it will not be saved then
def draw2DMovie(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value, t_grid_step):
    #Variable that controls the length of the file names (needed for rounding).
time_marker_length = len(str(t_filming_step))
    #!!!Specify here along which x and which y values the slices should be taken
x_slice_value = 3
y_slice_value = -1
    #Find the grid values closest to the requested slice values:
    #1: Take a single time slice
# npArray = np.array(solution[0])
    #2: Find the index along x
M = solution.shape[0]
x_step = (x_end - x_start) / M
x = np.arange(x_start,x_end,x_step)
    #3: Find the index along y
M = solution.shape[1]
    y_step = (y_end - y_start) / M
y = np.arange(y_start,y_end,y_step)
    #Create the folder if it does not exist
if not os.path.exists('img'):
os.makedirs('img')
    #Remove files from the img directory before rendering the movie.
files = glob.glob('img' + os.sep + '*')
for f in files:
os.remove(f)
    #If you uncomment the two lines below, be aware that small values of the grid function will not be shown.
    #Comment out the initialisation of the same variables inside the loop.
    #The colour normalisation will then be the same for the whole movie.
    # absolute_solution_minimum = solution_min_value
    # absolute_solution_maximum = solution_max_value
    #Call the time-slice plotting routine in a loop.
    # If you keep the two lines below uncommented, be prepared for a flickering background. Comment out the initialisation of the same variables right before the loop.
    # The colour normalisation is then computed separately for each frame.
absolute_solution_minimum = np.min(solution[:, :, :])
absolute_solution_maximum = np.max(solution[:, :, :])
for i in range(0, solution.shape[2], t_filming_step):
draw2DSlice(solution[:,:,i], i * t_grid_step,
x_start, x_end, y_start, y_end, legend,
absolute_solution_minimum, absolute_solution_maximum,
time_marker_length)
    #Build a GIF from the contents of the img folder and save it there as well.
images = []
    filenames = [str(i * t_grid_step) + 's.png' for i in range(0, solution.shape[2], t_filming_step)]
print("Making gif")
for filename in filenames:
tmp = imageio.imread('img' + os.sep + filename)
images.append(tmp)
imageio.mimsave('img' + os.sep + legend + ' movie.gif', images, duration = 0.2)
def draw3DSlice(solution, t_slice, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value,
time_marker_length):
print('in draw 3d slice')
M = len(solution)
x_step = (x_end - x_start) / M
x = np.arange(x_start, x_end, x_step)
M = len(solution[0])
y_step = (y_end - y_start) / M
y = np.arange(y_start, y_end, y_step)
    # Set the figure size.
fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(111, projection='3d')
x_axis_step = (x_end - x_start) / 8
    xlabel = np.arange(x_start, x_end, x_axis_step)  # build the tick labels along the x axis
ax.set_xticklabels(xlabel)
ax.set_xlabel('x', fontsize=40)
y_axis_step = (y_end - y_start) / 8
    ylabel = np.arange(y_start, y_end, y_axis_step)  # build the tick labels along the y axis
ax.set_yticklabels(ylabel)
ax.set_ylabel('y', fontsize=40)
ax.set_title(legend + ' colorbar, ' + 't = ' + str(t_slice) + 's', fontsize=50)
ax.grid(True)
xgrid, ygrid = np.meshgrid(x, y)
ax.plot_wireframe(xgrid, ygrid, solution[:, :])
    # Create the folder if it does not exist
if not os.path.exists('img'):
os.makedirs('img')
    plt.savefig('img' + os.sep + str(t_slice) + 's.png') # when rendering a movie, each frame is saved as a .png
plt.clf()
    # plt.show() - use this if you need to display the figure instead of saving it
def draw3DMovie(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value,
solution_max_value, t_grid_step):
    # Variable that controls the length of the file names (needed for rounding).
time_marker_length = len(str(t_filming_step))
M = solution.shape[0]
x_step = (x_end - x_start) / M
x = np.arange(x_start, x_end, x_step)
    # 3: Find the index along y
M = solution.shape[1]
    y_step = (y_end - y_start) / M
y = np.arange(y_start, y_end, y_step)
    # Create the folder if it does not exist
if not os.path.exists('img'):
os.makedirs('img')
    # Remove files from the img directory before rendering the movie.
files = glob.glob('img' + os.sep + '*')
for f in files:
os.remove(f)
absolute_solution_minimum = np.min(solution[:, :, :])
absolute_solution_maximum = np.max(solution[:, :, :])
for i in range(0, solution.shape[2], t_filming_step):
draw3DSlice(solution[:, :, i], i * t_grid_step,
x_start, x_end, y_start, y_end, legend,
absolute_solution_minimum, absolute_solution_maximum,
time_marker_length)
    # Build a GIF from the contents of the img folder and save it there as well.
images = []
print("Making gif")
    filenames = [str(i * t_grid_step) + 's.png' for i in range(0, solution.shape[2], t_filming_step)]
for filename in filenames:
tmp = imageio.imread('img' + os.sep + filename)
images.append(tmp)
imageio.mimsave('img' + os.sep + legend + ' movie.gif', images, duration=0.2)
def do_2_postprocess(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value, t_grid_step):
draw2DMovie(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value, t_grid_step)
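# Illustrative sketch (not part of the original module, never called): how
# do_2_postprocess is meant to be invoked. The grid function is assumed to be a
# 3-D array indexed as solution[x_index, y_index, time_index]; all sizes, bounds
# and the legend string below are arbitrary demonstration values.
def _demo_do_2_postprocess():
    nx, ny, nt = 40, 30, 10
    x = np.linspace(0.0, 10.0, nx)[:, None, None]
    t = np.arange(nt)[None, None, :]
    solution = np.sin(x + 0.3 * t) * np.ones((nx, ny, nt))
    do_2_postprocess(solution, t_filming_step=1,
                     x_start=0.0, x_end=10.0, y_start=0.0, y_end=7.5,
                     legend='u', solution_min_value=solution.min(),
                     solution_max_value=solution.max(), t_grid_step=0.1)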
def do_postprocess(solution, t_filming_step, x_start, x_end, legend, t_grid_step):
draw1DMovie(solution, t_filming_step, x_start, x_end, legend, t_grid_step)
def do_3_postprocess(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value, t_grid_step):
draw3DMovie(solution, t_filming_step, x_start, x_end, y_start, y_end, legend, solution_min_value, solution_max_value, t_grid_step) | mit |
eickenberg/scikit-learn | sklearn/covariance/robust_covariance.py | 2 | 28960 | """
Robust location and covariance estimators.
Here are implemented estimators that are resistant to outliers.
"""
# Author: Virgile Fritsch <[email protected]>
#
# License: BSD 3 clause
import warnings
import numbers
import numpy as np
from scipy import linalg
from scipy.stats import chi2
from . import empirical_covariance, EmpiricalCovariance
from ..utils.extmath import fast_logdet, pinvh
from ..utils import check_random_state
# Minimum Covariance Determinant
# Implementation of an algorithm by Rousseeuw & Van Driessen described in
# (A Fast Algorithm for the Minimum Covariance Determinant Estimator,
# 1999, American Statistical Association and the American Society
# for Quality, TECHNOMETRICS)
# XXX Is this really a public function? It's not listed in the docs or
# exported by sklearn.covariance. Deprecate?
def c_step(X, n_support, remaining_iterations=30, initial_estimates=None,
verbose=False, cov_computation_method=empirical_covariance,
random_state=None):
"""C_step procedure described in [Rouseeuw1984]_ aiming at computing MCD.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Data set in which we look for the n_support observations whose
scatter matrix has minimum determinant.
n_support : int, > n_samples / 2
Number of observations to compute the robust estimates of location
and covariance from.
remaining_iterations : int, optional
Number of iterations to perform.
According to [Rouseeuw1999]_, two iterations are sufficient to get
close to the minimum, and we never need more than 30 to reach
convergence.
initial_estimates : 2-tuple, optional
Initial estimates of location and shape from which to run the c_step
procedure:
- initial_estimates[0]: an initial location estimate
- initial_estimates[1]: an initial covariance estimate
verbose : boolean, optional
Verbose mode.
random_state : integer or numpy.RandomState, optional
The random generator used. If an integer is given, it fixes the
seed. Defaults to the global numpy random number generator.
Returns
-------
location : array-like, shape (n_features,)
Robust location estimates.
covariance : array-like, shape (n_features, n_features)
Robust covariance estimates.
support : array-like, shape (n_samples,)
A mask for the `n_support` observations whose scatter matrix has
minimum determinant.
References
----------
.. [Rouseeuw1999] A Fast Algorithm for the Minimum Covariance Determinant
Estimator, 1999, American Statistical Association and the American
Society for Quality, TECHNOMETRICS
"""
X = np.asarray(X)
random_state = check_random_state(random_state)
return _c_step(X, n_support, remaining_iterations=remaining_iterations,
initial_estimates=initial_estimates, verbose=verbose,
cov_computation_method=cov_computation_method,
random_state=random_state)
def _c_step(X, n_support, random_state, remaining_iterations=30,
initial_estimates=None, verbose=False,
cov_computation_method=empirical_covariance):
n_samples, n_features = X.shape
# Initialisation
support = np.zeros(n_samples, dtype=bool)
if initial_estimates is None:
# compute initial robust estimates from a random subset
support[random_state.permutation(n_samples)[:n_support]] = True
else:
# get initial robust estimates from the function parameters
location = initial_estimates[0]
covariance = initial_estimates[1]
# run a special iteration for that case (to get an initial support)
precision = pinvh(covariance)
X_centered = X - location
dist = (np.dot(X_centered, precision) * X_centered).sum(1)
# compute new estimates
support[np.argsort(dist)[:n_support]] = True
X_support = X[support]
location = X_support.mean(0)
covariance = cov_computation_method(X_support)
# Iterative procedure for Minimum Covariance Determinant computation
det = fast_logdet(covariance)
previous_det = np.inf
while (det < previous_det) and (remaining_iterations > 0):
# save old estimates values
previous_location = location
previous_covariance = covariance
previous_det = det
previous_support = support
# compute a new support from the full data set mahalanobis distances
precision = pinvh(covariance)
X_centered = X - location
dist = (np.dot(X_centered, precision) * X_centered).sum(axis=1)
# compute new estimates
support = np.zeros(n_samples, dtype=bool)
support[np.argsort(dist)[:n_support]] = True
X_support = X[support]
location = X_support.mean(axis=0)
covariance = cov_computation_method(X_support)
det = fast_logdet(covariance)
# update remaining iterations for early stopping
remaining_iterations -= 1
previous_dist = dist
dist = (np.dot(X - location, precision) * (X - location)).sum(axis=1)
# Catch computation errors
if np.isinf(det):
raise ValueError(
"Singular covariance matrix. "
"Please check that the covariance matrix corresponding "
"to the dataset is full rank and that MinCovDet is used with "
"Gaussian-distributed data (or at least data drawn from a "
"unimodal, symmetric distribution.")
# Check convergence
if np.allclose(det, previous_det):
# c_step procedure converged
if verbose:
print("Optimal couple (location, covariance) found before"
" ending iterations (%d left)" % (remaining_iterations))
results = location, covariance, det, support, dist
elif det > previous_det:
# determinant has increased (should not happen)
warnings.warn("Warning! det > previous_det (%.15f > %.15f)"
% (det, previous_det), RuntimeWarning)
results = previous_location, previous_covariance, \
previous_det, previous_support, previous_dist
# Check early stopping
if remaining_iterations == 0:
if verbose:
print('Maximum number of iterations reached')
results = location, covariance, det, support, dist
return results
def select_candidates(X, n_support, n_trials, select=1, n_iter=30,
verbose=False,
cov_computation_method=empirical_covariance,
random_state=None):
"""Finds the best pure subset of observations to compute MCD from it.
The purpose of this function is to find the best sets of n_support
observations with respect to a minimization of their covariance
matrix determinant. Equivalently, it removes n_samples-n_support
observations to construct what we call a pure data set (i.e. not
containing outliers). The list of the observations of the pure
data set is referred to as the `support`.
Starting from a random support, the pure data set is found by the
c_step procedure introduced by Rousseeuw and Van Driessen in
[Rouseeuw1999]_.
Parameters
----------
X : array-like, shape (n_samples, n_features)
Data (sub)set in which we look for the n_support purest observations.
n_support : int, [(n + p + 1)/2] < n_support < n
The number of samples the pure data set must contain.
select : int, int > 0
Number of best candidates results to return.
n_trials : int, nb_trials > 0 or 2-tuple
Number of different initial sets of observations from which to
run the algorithm.
Instead of giving a number of trials to perform, one can provide a
list of initial estimates that will be used to iteratively run
c_step procedures. In this case:
- n_trials[0]: array-like, shape (n_trials, n_features)
is the list of `n_trials` initial location estimates
- n_trials[1]: array-like, shape (n_trials, n_features, n_features)
is the list of `n_trials` initial covariances estimates
n_iter : int, nb_iter > 0
Maximum number of iterations for the c_step procedure.
(2 is enough to be close to the final solution. "Never" exceeds 20).
random_state : integer or numpy.RandomState, optional
The random generator used. If an integer is given, it fixes the
seed. Defaults to the global numpy random number generator.
See Also
---------
`c_step` function
Returns
-------
best_locations : array-like, shape (select, n_features)
The `select` location estimates computed from the `select` best
supports found in the data set (`X`).
best_covariances : array-like, shape (select, n_features, n_features)
The `select` covariance estimates computed from the `select`
best supports found in the data set (`X`).
best_supports : array-like, shape (select, n_samples)
The `select` best supports found in the data set (`X`).
References
----------
.. [Rouseeuw1999] A Fast Algorithm for the Minimum Covariance Determinant
Estimator, 1999, American Statistical Association and the American
Society for Quality, TECHNOMETRICS
"""
random_state = check_random_state(random_state)
n_samples, n_features = X.shape
if isinstance(n_trials, numbers.Integral):
run_from_estimates = False
elif isinstance(n_trials, tuple):
run_from_estimates = True
estimates_list = n_trials
n_trials = estimates_list[0].shape[0]
else:
raise TypeError("Invalid 'n_trials' parameter, expected tuple or "
" integer, got %s (%s)" % (n_trials, type(n_trials)))
# compute `n_trials` location and shape estimates candidates in the subset
all_estimates = []
if not run_from_estimates:
# perform `n_trials` computations from random initial supports
for j in range(n_trials):
all_estimates.append(
_c_step(
X, n_support, remaining_iterations=n_iter, verbose=verbose,
cov_computation_method=cov_computation_method,
random_state=random_state))
else:
# perform computations from every given initial estimates
for j in range(n_trials):
initial_estimates = (estimates_list[0][j], estimates_list[1][j])
all_estimates.append(_c_step(
X, n_support, remaining_iterations=n_iter,
initial_estimates=initial_estimates, verbose=verbose,
cov_computation_method=cov_computation_method,
random_state=random_state))
all_locs_sub, all_covs_sub, all_dets_sub, all_supports_sub, all_ds_sub = \
zip(*all_estimates)
# find the `n_best` best results among the `n_trials` ones
index_best = np.argsort(all_dets_sub)[:select]
best_locations = np.asarray(all_locs_sub)[index_best]
best_covariances = np.asarray(all_covs_sub)[index_best]
best_supports = np.asarray(all_supports_sub)[index_best]
best_ds = np.asarray(all_ds_sub)[index_best]
return best_locations, best_covariances, best_supports, best_ds
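# Illustrative sketch (not part of scikit-learn, never called): the two accepted
# forms of the `n_trials` argument of select_candidates, on a small synthetic
# sample. The sizes and the number of trials are arbitrary demonstration values.
def _demo_select_candidates():
    rng = np.random.RandomState(0)
    X = rng.randn(60, 3)
    n_support = 40
    # form 1: an integer number of random initial supports
    locs, covs, supports, dists = select_candidates(
        X, n_support, n_trials=5, select=2, random_state=rng)
    # form 2: a tuple of initial (locations, covariances) estimates to refine
    locs2, covs2, _, _ = select_candidates(
        X, n_support, n_trials=(locs, covs), select=1, random_state=rng)
    return locs2[0], covs2[0]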
def fast_mcd(X, support_fraction=None,
cov_computation_method=empirical_covariance,
random_state=None):
"""Estimates the Minimum Covariance Determinant matrix.
Parameters
----------
X : array-like, shape (n_samples, n_features)
The data matrix, with p features and n samples.
support_fraction : float, 0 < support_fraction < 1
The proportion of points to be included in the support of the raw
MCD estimate. Default is None, which implies that the minimum
value of support_fraction will be used within the algorithm:
        `[n_samples + n_features + 1] / 2`.
random_state : integer or numpy.RandomState, optional
The generator used to randomly subsample. If an integer is
given, it fixes the seed. Defaults to the global numpy random
number generator.
Notes
-----
    The FastMCD algorithm has been introduced by Rousseeuw and Van Driessen
in "A Fast Algorithm for the Minimum Covariance Determinant Estimator,
1999, American Statistical Association and the American Society
for Quality, TECHNOMETRICS".
    The principle is to compute robust estimates on random subsets of the data
    and to pool them into larger subsets, and finally into the full data set.
Depending on the size of the initial sample, we have one, two or three
such computation levels.
Note that only raw estimates are returned. If one is interested in
the correction and reweighting steps described in [Rouseeuw1999]_,
see the MinCovDet object.
References
----------
.. [Rouseeuw1999] A Fast Algorithm for the Minimum Covariance
Determinant Estimator, 1999, American Statistical Association
and the American Society for Quality, TECHNOMETRICS
.. [Butler1993] R. W. Butler, P. L. Davies and M. Jhun,
Asymptotics For The Minimum Covariance Determinant Estimator,
The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400
Returns
-------
location : array-like, shape (n_features,)
Robust location of the data.
covariance : array-like, shape (n_features, n_features)
Robust covariance of the features.
support : array-like, type boolean, shape (n_samples,)
A mask of the observations that have been used to compute
the robust location and covariance estimates of the data set.
"""
random_state = check_random_state(random_state)
X = np.asarray(X)
if X.ndim == 1:
X = np.reshape(X, (1, -1))
warnings.warn("Only one sample available. "
"You may want to reshape your data array")
n_samples, n_features = X.shape
# minimum breakdown value
if support_fraction is None:
n_support = int(np.ceil(0.5 * (n_samples + n_features + 1)))
else:
n_support = int(support_fraction * n_samples)
# 1-dimensional case quick computation
# (Rousseeuw, P. J. and Leroy, A. M. (2005) References, in Robust
# Regression and Outlier Detection, John Wiley & Sons, chapter 4)
if n_features == 1:
if n_support < n_samples:
# find the sample shortest halves
X_sorted = np.sort(np.ravel(X))
diff = X_sorted[n_support:] - X_sorted[:(n_samples - n_support)]
halves_start = np.where(diff == np.min(diff))[0]
# take the middle points' mean to get the robust location estimate
location = 0.5 * (X_sorted[n_support + halves_start]
+ X_sorted[halves_start]).mean()
support = np.zeros(n_samples, dtype=bool)
X_centered = X - location
support[np.argsort(np.abs(X_centered), 0)[:n_support]] = True
covariance = np.asarray([[np.var(X[support])]])
location = np.array([location])
# get precision matrix in an optimized way
precision = pinvh(covariance)
dist = (np.dot(X_centered, precision) * (X_centered)).sum(axis=1)
else:
support = np.ones(n_samples, dtype=bool)
covariance = np.asarray([[np.var(X)]])
location = np.asarray([np.mean(X)])
X_centered = X - location
# get precision matrix in an optimized way
precision = pinvh(covariance)
dist = (np.dot(X_centered, precision) * (X_centered)).sum(axis=1)
# Starting FastMCD algorithm for p-dimensional case
if (n_samples > 500) and (n_features > 1):
# 1. Find candidate supports on subsets
# a. split the set in subsets of size ~ 300
n_subsets = n_samples // 300
n_samples_subsets = n_samples // n_subsets
samples_shuffle = random_state.permutation(n_samples)
h_subset = int(np.ceil(n_samples_subsets *
(n_support / float(n_samples))))
# b. perform a total of 500 trials
n_trials_tot = 500
# c. select 10 best (location, covariance) for each subset
n_best_sub = 10
n_trials = max(10, n_trials_tot // n_subsets)
n_best_tot = n_subsets * n_best_sub
all_best_locations = np.zeros((n_best_tot, n_features))
try:
all_best_covariances = np.zeros((n_best_tot, n_features,
n_features))
except MemoryError:
            # The above is too big. Let's try with something much smaller
            # (and less optimal): keep only 2 candidates per subset.
            n_best_sub = 2
            n_best_tot = n_subsets * n_best_sub
            all_best_locations = np.zeros((n_best_tot, n_features))
            all_best_covariances = np.zeros((n_best_tot, n_features,
                                             n_features))
for i in range(n_subsets):
low_bound = i * n_samples_subsets
high_bound = low_bound + n_samples_subsets
current_subset = X[samples_shuffle[low_bound:high_bound]]
best_locations_sub, best_covariances_sub, _, _ = select_candidates(
current_subset, h_subset, n_trials,
select=n_best_sub, n_iter=2,
cov_computation_method=cov_computation_method,
random_state=random_state)
subset_slice = np.arange(i * n_best_sub, (i + 1) * n_best_sub)
all_best_locations[subset_slice] = best_locations_sub
all_best_covariances[subset_slice] = best_covariances_sub
# 2. Pool the candidate supports into a merged set
# (possibly the full dataset)
n_samples_merged = min(1500, n_samples)
h_merged = int(np.ceil(n_samples_merged *
(n_support / float(n_samples))))
if n_samples > 1500:
n_best_merged = 10
else:
n_best_merged = 1
# find the best couples (location, covariance) on the merged set
selection = random_state.permutation(n_samples)[:n_samples_merged]
locations_merged, covariances_merged, supports_merged, d = \
select_candidates(
X[selection], h_merged,
n_trials=(all_best_locations, all_best_covariances),
select=n_best_merged,
cov_computation_method=cov_computation_method,
random_state=random_state)
# 3. Finally get the overall best (locations, covariance) couple
if n_samples < 1500:
# directly get the best couple (location, covariance)
location = locations_merged[0]
covariance = covariances_merged[0]
support = np.zeros(n_samples, dtype=bool)
dist = np.zeros(n_samples)
support[selection] = supports_merged[0]
dist[selection] = d[0]
else:
# select the best couple on the full dataset
locations_full, covariances_full, supports_full, d = \
select_candidates(
X, n_support,
n_trials=(locations_merged, covariances_merged),
select=1,
cov_computation_method=cov_computation_method,
random_state=random_state)
location = locations_full[0]
covariance = covariances_full[0]
support = supports_full[0]
dist = d[0]
elif n_features > 1:
# 1. Find the 10 best couples (location, covariance)
# considering two iterations
n_trials = 30
n_best = 10
locations_best, covariances_best, _, _ = select_candidates(
X, n_support, n_trials=n_trials, select=n_best, n_iter=2,
cov_computation_method=cov_computation_method,
random_state=random_state)
# 2. Select the best couple on the full dataset amongst the 10
locations_full, covariances_full, supports_full, d = select_candidates(
X, n_support, n_trials=(locations_best, covariances_best),
select=1, cov_computation_method=cov_computation_method,
random_state=random_state)
location = locations_full[0]
covariance = covariances_full[0]
support = supports_full[0]
dist = d[0]
return location, covariance, support, dist
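# Illustrative sketch (not part of scikit-learn, never called): raw FastMCD
# estimates on a small contaminated Gaussian sample. The sample size and the 10%
# outlier fraction are arbitrary demonstration values; for end users the
# MinCovDet estimator defined below is the recommended entry point, since it adds
# the consistency correction and re-weighting steps on top of these raw estimates.
def _demo_fast_mcd():
    rng = np.random.RandomState(42)
    X = rng.randn(200, 2)
    X[:20] += 8.0  # shift 10% of the points far away from the bulk
    location, covariance, support, dist = fast_mcd(X, random_state=rng)
    # the boolean support flags the observations used for the raw robust estimates
    assert support.sum() >= (200 + 2 + 1) // 2
    return location, covariance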
class MinCovDet(EmpiricalCovariance):
"""Minimum Covariance Determinant (MCD): robust estimator of covariance.
The Minimum Covariance Determinant covariance estimator is to be applied
on Gaussian-distributed data, but could still be relevant on data
drawn from a unimodal, symmetric distribution. It is not meant to be used
with multi-modal data (the algorithm used to fit a MinCovDet object is
likely to fail in such a case).
One should consider projection pursuit methods to deal with multi-modal
datasets.
Parameters
----------
store_precision : bool
Specify if the estimated precision is stored.
assume_centered : Boolean
If True, the support of the robust location and the covariance
estimates is computed, and a covariance estimate is recomputed from
it, without centering the data.
        Useful when working with data whose mean is almost, but not exactly,
        zero.
If False, the robust location and covariance are directly computed
with the FastMCD algorithm without additional treatment.
support_fraction : float, 0 < support_fraction < 1
The proportion of points to be included in the support of the raw
MCD estimate. Default is None, which implies that the minimum
value of support_fraction will be used within the algorithm:
        [n_samples + n_features + 1] / 2
random_state : integer or numpy.RandomState, optional
The random generator used. If an integer is given, it fixes the
seed. Defaults to the global numpy random number generator.
Attributes
----------
`raw_location_` : array-like, shape (n_features,)
The raw robust estimated location before correction and re-weighting.
`raw_covariance_` : array-like, shape (n_features, n_features)
The raw robust estimated covariance before correction and re-weighting.
`raw_support_` : array-like, shape (n_samples,)
A mask of the observations that have been used to compute
the raw robust estimates of location and shape, before correction
and re-weighting.
`location_` : array-like, shape (n_features,)
Estimated robust location
`covariance_` : array-like, shape (n_features, n_features)
Estimated robust covariance matrix
`precision_` : array-like, shape (n_features, n_features)
Estimated pseudo inverse matrix.
(stored only if store_precision is True)
`support_` : array-like, shape (n_samples,)
A mask of the observations that have been used to compute
the robust estimates of location and shape.
`dist_` : array-like, shape (n_samples,)
Mahalanobis distances of the training set (on which `fit` is called)
observations.
References
----------
.. [Rouseeuw1984] `P. J. Rousseeuw. Least median of squares regression.
J. Am Stat Ass, 79:871, 1984.`
.. [Rouseeuw1999] `A Fast Algorithm for the Minimum Covariance Determinant
Estimator, 1999, American Statistical Association and the American
Society for Quality, TECHNOMETRICS`
.. [Butler1993] `R. W. Butler, P. L. Davies and M. Jhun,
Asymptotics For The Minimum Covariance Determinant Estimator,
The Annals of Statistics, 1993, Vol. 21, No. 3, 1385-1400`
"""
_nonrobust_covariance = staticmethod(empirical_covariance)
def __init__(self, store_precision=True, assume_centered=False,
support_fraction=None, random_state=None):
self.store_precision = store_precision
self.assume_centered = assume_centered
self.support_fraction = support_fraction
self.random_state = random_state
def fit(self, X, y=None):
"""Fits a Minimum Covariance Determinant with the FastMCD algorithm.
Parameters
----------
X : array-like, shape = [n_samples, n_features]
Training data, where n_samples is the number of samples
and n_features is the number of features.
        y : not used, present for API consistency purposes.
Returns
-------
self : object
Returns self.
"""
random_state = check_random_state(self.random_state)
n_samples, n_features = X.shape
# check that the empirical covariance is full rank
if (linalg.svdvals(np.dot(X.T, X)) > 1e-8).sum() != n_features:
warnings.warn("The covariance matrix associated to your dataset "
"is not full rank")
# compute and store raw estimates
raw_location, raw_covariance, raw_support, raw_dist = fast_mcd(
X, support_fraction=self.support_fraction,
cov_computation_method=self._nonrobust_covariance,
random_state=random_state)
if self.assume_centered:
raw_location = np.zeros(n_features)
raw_covariance = self._nonrobust_covariance(X[raw_support],
assume_centered=True)
# get precision matrix in an optimized way
precision = pinvh(raw_covariance)
raw_dist = np.sum(np.dot(X, precision) * X, 1)
self.raw_location_ = raw_location
self.raw_covariance_ = raw_covariance
self.raw_support_ = raw_support
self.location_ = raw_location
self.support_ = raw_support
self.dist_ = raw_dist
# obtain consistency at normal models
self.correct_covariance(X)
# re-weight estimator
self.reweight_covariance(X)
return self
def correct_covariance(self, data):
"""Apply a correction to raw Minimum Covariance Determinant estimates.
Correction using the empirical correction factor suggested
by Rousseeuw and Van Driessen in [Rouseeuw1984]_.
Parameters
----------
data : array-like, shape (n_samples, n_features)
The data matrix, with p features and n samples.
The data set must be the one which was used to compute
the raw estimates.
Returns
-------
covariance_corrected : array-like, shape (n_features, n_features)
Corrected robust covariance estimate.
"""
correction = np.median(self.dist_) / chi2(data.shape[1]).isf(0.5)
covariance_corrected = self.raw_covariance_ * correction
self.dist_ /= correction
return covariance_corrected
def reweight_covariance(self, data):
"""Re-weight raw Minimum Covariance Determinant estimates.
Re-weight observations using Rousseeuw's method (equivalent to
deleting outlying observations from the data set before
computing location and covariance estimates). [Rouseeuw1984]_
Parameters
----------
data : array-like, shape (n_samples, n_features)
The data matrix, with p features and n samples.
The data set must be the one which was used to compute
the raw estimates.
Returns
-------
location_reweighted : array-like, shape (n_features, )
Re-weighted robust location estimate.
covariance_reweighted : array-like, shape (n_features, n_features)
Re-weighted robust covariance estimate.
support_reweighted : array-like, type boolean, shape (n_samples,)
A mask of the observations that have been used to compute
the re-weighted robust location and covariance estimates.
"""
n_samples, n_features = data.shape
mask = self.dist_ < chi2(n_features).isf(0.025)
if self.assume_centered:
location_reweighted = np.zeros(n_features)
else:
location_reweighted = data[mask].mean(0)
covariance_reweighted = self._nonrobust_covariance(
data[mask], assume_centered=self.assume_centered)
support_reweighted = np.zeros(n_samples, dtype=bool)
support_reweighted[mask] = True
self._set_covariance(covariance_reweighted)
self.location_ = location_reweighted
self.support_ = support_reweighted
X_centered = data - self.location_
self.dist_ = np.sum(
np.dot(X_centered, self.get_precision()) * X_centered, 1)
return location_reweighted, covariance_reweighted, support_reweighted
| bsd-3-clause |
mkoledoye/mds_experiments | animation.py | 2 | 2636 | from matplotlib import pyplot as plt
from matplotlib import animation
from core.config import Config
from utils import generate_dynamic_nodes
# number of mobile node transitions
NO_TRANS = 100
class NodeAnimation(object):
'''create animation of tag-anchor deployment from a given configuration of parameters'''
def __init__(self, cfg, data, show_trail=False):
self.fig = plt.figure()
self.data = data
self.show_trail = show_trail
self.no_of_anchors = cfg.no_of_anchors
self.no_of_tags = cfg.no_of_tags
def init_plot(self):
real_coords, mds_coords, *others = next(self.data)
self.anchors_scat = plt.scatter(real_coords[:self.no_of_anchors, 0], real_coords[:self.no_of_anchors, 1], color='blue', s=100, lw=1, label='Anchors positions', marker='o')
self.real_scat = plt.scatter(real_coords[self.no_of_anchors:, 0], real_coords[self.no_of_anchors:, 1], color='blue', s=100, lw=1, label='Tag positions', marker='o', facecolors='none')
self.mds_scat = plt.scatter(mds_coords[:, 0], mds_coords[:, 1], color='red', s=150, lw=2, label='Estimated positions', marker='+')
scatterplots = self.real_scat, self.mds_scat
if self.show_trail:
last_n_mds_coords = others[0][2:]
self.last_n_scat = plt.scatter(last_n_mds_coords[:, :, 0], last_n_mds_coords[:, :, 1], label='MDS iterations', color='magenta', s=0.5)
scatterplots += (self.last_n_scat,)
plt.legend(loc='best', scatterpoints=1, fontsize=11)
return scatterplots
def update_plot(self, _):
real_coords, mds_coords, *others = next(self.data)
self.real_scat.set_offsets(real_coords)
self.mds_scat.set_offsets(mds_coords)
scatterplots = self.real_scat, self.mds_scat
if self.show_trail:
last_n_mds_coords = others[0][2:]
self.last_n_scat.set_offsets(last_n_mds_coords)
scatterplots += (self.last_n_scat,)
return scatterplots
def draw_plot(self, save_to_file=False):
anim = animation.FuncAnimation(self.fig, self.update_plot, init_func=self.init_plot, interval=300)
# matplotlib bug? Not calling plt.show in the same scope as anim gives a blank figure
axes = plt.gca()
axes.set_xlabel('coordinate x [m]')
axes.set_ylabel('coordinate y [m]')
axes.set_xlim([-5, 35])
axes.set_ylim([-5, 29])
if save_to_file:
anim.save('mds_animation_filtering.mp4', extra_args=['-vcodec', 'libx264'])
plt.show()
if __name__ == '__main__':
cfg = Config(no_of_anchors=4, no_of_tags=7, missingdata=True, sigma=0)
data = generate_dynamic_nodes(cfg, algorithm='_smacof_with_anchors_single', no_of_trans=NO_TRANS, add_noise=True, filter_noise=False)
anim = NodeAnimation(cfg, data, show_trail=False)
anim.draw_plot()
| mit |
Mushirahmed/gnuradio | gnuradio-core/src/examples/pfb/decimate.py | 17 | 5706 | #!/usr/bin/env python
#
# Copyright 2009 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
#
from gnuradio import gr, blks2
import sys, time
try:
import scipy
from scipy import fftpack
except ImportError:
print "Error: Program requires scipy (see: www.scipy.org)."
sys.exit(1)
try:
import pylab
from pylab import mlab
except ImportError:
print "Error: Program requires matplotlib (see: matplotlib.sourceforge.net)."
sys.exit(1)
class pfb_top_block(gr.top_block):
def __init__(self):
gr.top_block.__init__(self)
self._N = 10000000 # number of samples to use
self._fs = 10000 # initial sampling rate
self._decim = 20 # Decimation rate
# Generate the prototype filter taps for the decimators with a 200 Hz bandwidth
self._taps = gr.firdes.low_pass_2(1, self._fs, 200, 150,
attenuation_dB=120, window=gr.firdes.WIN_BLACKMAN_hARRIS)
# Calculate the number of taps per channel for our own information
tpc = scipy.ceil(float(len(self._taps)) / float(self._decim))
print "Number of taps: ", len(self._taps)
print "Number of filters: ", self._decim
print "Taps per channel: ", tpc
# Build the input signal source
# We create a list of freqs, and a sine wave is generated and added to the source
# for each one of these frequencies.
self.signals = list()
self.add = gr.add_cc()
freqs = [10, 20, 2040]
for i in xrange(len(freqs)):
self.signals.append(gr.sig_source_c(self._fs, gr.GR_SIN_WAVE, freqs[i], 1))
self.connect(self.signals[i], (self.add,i))
self.head = gr.head(gr.sizeof_gr_complex, self._N)
# Construct a PFB decimator filter
self.pfb = blks2.pfb_decimator_ccf(self._decim, self._taps, 0)
# Construct a standard FIR decimating filter
self.dec = gr.fir_filter_ccf(self._decim, self._taps)
self.snk_i = gr.vector_sink_c()
# Connect the blocks
self.connect(self.add, self.head, self.pfb)
self.connect(self.add, self.snk_i)
        # Create the sink for the decimated signal
self.snk = gr.vector_sink_c()
self.connect(self.pfb, self.snk)
def main():
tb = pfb_top_block()
tstart = time.time()
tb.run()
tend = time.time()
print "Run time: %f" % (tend - tstart)
if 1:
fig1 = pylab.figure(1, figsize=(16,9))
fig2 = pylab.figure(2, figsize=(16,9))
Ns = 10000
Ne = 10000
fftlen = 8192
winfunc = scipy.blackman
fs = tb._fs
# Plot the input to the decimator
d = tb.snk_i.data()[Ns:Ns+Ne]
sp1_f = fig1.add_subplot(2, 1, 1)
X,freq = mlab.psd(d, NFFT=fftlen, noverlap=fftlen/4, Fs=fs,
window = lambda d: d*winfunc(fftlen),
scale_by_freq=True)
X_in = 10.0*scipy.log10(abs(fftpack.fftshift(X)))
f_in = scipy.arange(-fs/2.0, fs/2.0, fs/float(X_in.size))
p1_f = sp1_f.plot(f_in, X_in, "b")
sp1_f.set_xlim([min(f_in), max(f_in)+1])
sp1_f.set_ylim([-200.0, 50.0])
sp1_f.set_title("Input Signal", weight="bold")
sp1_f.set_xlabel("Frequency (Hz)")
sp1_f.set_ylabel("Power (dBW)")
Ts = 1.0/fs
Tmax = len(d)*Ts
t_in = scipy.arange(0, Tmax, Ts)
x_in = scipy.array(d)
sp1_t = fig1.add_subplot(2, 1, 2)
p1_t = sp1_t.plot(t_in, x_in.real, "b")
p1_t = sp1_t.plot(t_in, x_in.imag, "r")
sp1_t.set_ylim([-tb._decim*1.1, tb._decim*1.1])
sp1_t.set_xlabel("Time (s)")
sp1_t.set_ylabel("Amplitude")
# Plot the output of the decimator
fs_o = tb._fs / tb._decim
sp2_f = fig2.add_subplot(2, 1, 1)
d = tb.snk.data()[Ns:Ns+Ne]
X,freq = mlab.psd(d, NFFT=fftlen, noverlap=fftlen/4, Fs=fs_o,
window = lambda d: d*winfunc(fftlen),
scale_by_freq=True)
X_o = 10.0*scipy.log10(abs(fftpack.fftshift(X)))
f_o = scipy.arange(-fs_o/2.0, fs_o/2.0, fs_o/float(X_o.size))
p2_f = sp2_f.plot(f_o, X_o, "b")
sp2_f.set_xlim([min(f_o), max(f_o)+1])
sp2_f.set_ylim([-200.0, 50.0])
sp2_f.set_title("PFB Decimated Signal", weight="bold")
sp2_f.set_xlabel("Frequency (Hz)")
sp2_f.set_ylabel("Power (dBW)")
Ts_o = 1.0/fs_o
Tmax_o = len(d)*Ts_o
x_o = scipy.array(d)
t_o = scipy.arange(0, Tmax_o, Ts_o)
sp2_t = fig2.add_subplot(2, 1, 2)
p2_t = sp2_t.plot(t_o, x_o.real, "b-o")
p2_t = sp2_t.plot(t_o, x_o.imag, "r-o")
sp2_t.set_ylim([-2.5, 2.5])
sp2_t.set_xlabel("Time (s)")
sp2_t.set_ylabel("Amplitude")
pylab.show()
if __name__ == "__main__":
try:
main()
except KeyboardInterrupt:
pass
| gpl-3.0 |
madmax983/h2o-3 | h2o-py/tests/testdir_algos/kmeans/pyunit_prostateKmeans.py | 1 | 1187 | import sys
sys.path.insert(1,"../../../")
import h2o
from tests import pyunit_utils
from h2o.estimators.kmeans import H2OKMeansEstimator
import numpy as np
from sklearn.cluster import KMeans
def prostateKmeans():
# Connect to a pre-existing cluster
# connect to localhost:54321
#Log.info("Importing prostate.csv data...\n")
prostate_h2o = h2o.import_file(path=pyunit_utils.locate("smalldata/logreg/prostate.csv"))
#prostate.summary()
prostate_sci = np.loadtxt(pyunit_utils.locate("smalldata/logreg/prostate_train.csv"), delimiter=',', skiprows=1)
prostate_sci = prostate_sci[:,1:]
for i in range(5,9):
#Log.info(paste("H2O K-Means with ", i, " clusters:\n", sep = ""))
#Log.info(paste( "Using these columns: ", colnames(prostate.hex)[-1]) )
prostate_km_h2o = H2OKMeansEstimator(k=i)
prostate_km_h2o.train(x=range(1,prostate_h2o.ncol), training_frame=prostate_h2o)
prostate_km_h2o.show()
prostate_km_sci = KMeans(n_clusters=i, init='k-means++', n_init=1)
prostate_km_sci.fit(prostate_sci)
print prostate_km_sci.cluster_centers_
if __name__ == "__main__":
pyunit_utils.standalone_test(prostateKmeans)
else:
prostateKmeans()
| apache-2.0 |
linebp/pandas | pandas/tests/io/test_packers.py | 7 | 31902 | import pytest
from warnings import catch_warnings
import os
import datetime
import numpy as np
import sys
from distutils.version import LooseVersion
from pandas import compat
from pandas.compat import u, PY3
from pandas import (Series, DataFrame, Panel, MultiIndex, bdate_range,
date_range, period_range, Index, Categorical)
from pandas.errors import PerformanceWarning
from pandas.io.packers import to_msgpack, read_msgpack
import pandas.util.testing as tm
from pandas.util.testing import (ensure_clean,
assert_categorical_equal,
assert_frame_equal,
assert_index_equal,
assert_series_equal,
patch)
from pandas.tests.test_panel import assert_panel_equal
import pandas
from pandas import Timestamp, NaT
from pandas._libs.tslib import iNaT
nan = np.nan
try:
import blosc # NOQA
except ImportError:
_BLOSC_INSTALLED = False
else:
_BLOSC_INSTALLED = True
try:
import zlib # NOQA
except ImportError:
_ZLIB_INSTALLED = False
else:
_ZLIB_INSTALLED = True
@pytest.fixture(scope='module')
def current_packers_data():
# our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_msgpack_data)
return create_msgpack_data()
@pytest.fixture(scope='module')
def all_packers_data():
# our all of our current version packers data
from pandas.tests.io.generate_legacy_storage_files import (
create_data)
return create_data()
def check_arbitrary(a, b):
if isinstance(a, (list, tuple)) and isinstance(b, (list, tuple)):
assert(len(a) == len(b))
for a_, b_ in zip(a, b):
check_arbitrary(a_, b_)
elif isinstance(a, Panel):
assert_panel_equal(a, b)
elif isinstance(a, DataFrame):
assert_frame_equal(a, b)
elif isinstance(a, Series):
assert_series_equal(a, b)
elif isinstance(a, Index):
assert_index_equal(a, b)
elif isinstance(a, Categorical):
# Temp,
# Categorical.categories is changed from str to bytes in PY3
# maybe the same as GH 13591
if PY3 and b.categories.inferred_type == 'string':
pass
else:
tm.assert_categorical_equal(a, b)
elif a is NaT:
assert b is NaT
elif isinstance(a, Timestamp):
assert a == b
assert a.freq == b.freq
else:
assert(a == b)
class TestPackers(object):
def setup_method(self, method):
self.path = '__%s__.msg' % tm.rands(10)
def teardown_method(self, method):
pass
def encode_decode(self, x, compress=None, **kwargs):
with ensure_clean(self.path) as p:
to_msgpack(p, x, compress=compress, **kwargs)
return read_msgpack(p, **kwargs)
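# Illustrative sketch (not part of the original test module, not collected by
# pytest): the minimal to_msgpack/read_msgpack round trip that the helpers above
# exercise, written as a stand-alone function with arbitrary frame contents.
def _demo_msgpack_roundtrip():
    df = DataFrame({'a': np.arange(3), 'b': ['x', 'y', 'z']})
    packed = df.to_msgpack()          # returns bytes when no path is given
    restored = read_msgpack(packed)
    tm.assert_frame_equal(restored, df)
    return restored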
class TestAPI(TestPackers):
def test_string_io(self):
df = DataFrame(np.random.randn(10, 2))
s = df.to_msgpack(None)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
s = df.to_msgpack()
result = read_msgpack(compat.BytesIO(s))
tm.assert_frame_equal(result, df)
s = to_msgpack(None, df)
result = read_msgpack(s)
tm.assert_frame_equal(result, df)
with ensure_clean(self.path) as p:
s = df.to_msgpack()
fh = open(p, 'wb')
fh.write(s)
fh.close()
result = read_msgpack(p)
tm.assert_frame_equal(result, df)
def test_path_pathlib(self):
df = tm.makeDataFrame()
result = tm.round_trip_pathlib(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
def test_path_localpath(self):
df = tm.makeDataFrame()
result = tm.round_trip_localpath(df.to_msgpack, read_msgpack)
tm.assert_frame_equal(df, result)
def test_iterator_with_string_io(self):
dfs = [DataFrame(np.random.randn(10, 2)) for i in range(5)]
s = to_msgpack(None, *dfs)
for i, result in enumerate(read_msgpack(s, iterator=True)):
tm.assert_frame_equal(result, dfs[i])
def test_invalid_arg(self):
# GH10369
class A(object):
def __init__(self):
self.read = 0
pytest.raises(ValueError, read_msgpack, path_or_buf=None)
pytest.raises(ValueError, read_msgpack, path_or_buf={})
pytest.raises(ValueError, read_msgpack, path_or_buf=A())
class TestNumpy(TestPackers):
def test_numpy_scalar_float(self):
x = np.float32(np.random.rand())
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_scalar_complex(self):
x = np.complex64(np.random.rand() + 1j * np.random.rand())
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_scalar_float(self):
x = np.random.rand()
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_scalar_complex(self):
x = np.random.rand() + 1j * np.random.rand()
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_numpy_float(self):
x = [np.float32(np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_numpy_float_complex(self):
if not hasattr(np, 'complex128'):
pytest.skip('numpy cant handle complex128')
x = [np.float32(np.random.rand()) for i in range(5)] + \
[np.complex128(np.random.rand() + 1j * np.random.rand())
for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_list_float(self):
x = [np.random.rand() for i in range(5)]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
def test_list_float_complex(self):
x = [np.random.rand() for i in range(5)] + \
[(np.random.rand() + 1j * np.random.rand()) for i in range(5)]
x_rec = self.encode_decode(x)
assert np.allclose(x, x_rec)
def test_dict_float(self):
x = {'foo': 1.0, 'bar': 2.0}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_complex(self):
x = {'foo': 1.0 + 1.0j, 'bar': 2.0 + 2.0j}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="complex value")
def test_dict_numpy_float(self):
x = {'foo': np.float32(1.0), 'bar': np.float32(2.0)}
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_dict_numpy_complex(self):
x = {'foo': np.complex128(1.0 + 1.0j),
'bar': np.complex128(2.0 + 2.0j)}
x_rec = self.encode_decode(x)
tm.assert_dict_equal(x, x_rec)
for key in x:
tm.assert_class_equal(x[key], x_rec[key], obj="numpy complex128")
def test_numpy_array_float(self):
# run multiple times
for n in range(10):
x = np.random.rand(10)
for dtype in ['float32', 'float64']:
x = x.astype(dtype)
x_rec = self.encode_decode(x)
tm.assert_almost_equal(x, x_rec)
def test_numpy_array_complex(self):
x = (np.random.rand(5) + 1j * np.random.rand(5)).astype(np.complex128)
x_rec = self.encode_decode(x)
assert (all(map(lambda x, y: x == y, x, x_rec)) and
x.dtype == x_rec.dtype)
def test_list_mixed(self):
x = [1.0, np.float32(3.5), np.complex128(4.25), u('foo')]
x_rec = self.encode_decode(x)
# current msgpack cannot distinguish list/tuple
tm.assert_almost_equal(tuple(x), x_rec)
x_rec = self.encode_decode(tuple(x))
tm.assert_almost_equal(tuple(x), x_rec)
class TestBasic(TestPackers):
def test_timestamp(self):
for i in [Timestamp(
'20130101'), Timestamp('20130101', tz='US/Eastern'),
Timestamp('201301010501')]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_nat(self):
nat_rec = self.encode_decode(NaT)
assert NaT is nat_rec
def test_datetimes(self):
# fails under 2.6/win32 (np.datetime64 seems broken)
if LooseVersion(sys.version) < '2.7':
pytest.skip('2.6 with np.datetime64 is broken')
for i in [datetime.datetime(2013, 1, 1),
datetime.datetime(2013, 1, 1, 5, 1),
datetime.date(2013, 1, 1),
np.datetime64(datetime.datetime(2013, 1, 5, 2, 15))]:
i_rec = self.encode_decode(i)
assert i == i_rec
def test_timedeltas(self):
for i in [datetime.timedelta(days=1),
datetime.timedelta(days=1, seconds=10),
np.timedelta64(1000000)]:
i_rec = self.encode_decode(i)
assert i == i_rec
class TestIndex(TestPackers):
def setup_method(self, method):
super(TestIndex, self).setup_method(method)
self.d = {
'string': tm.makeStringIndex(100),
'date': tm.makeDateIndex(100),
'int': tm.makeIntIndex(100),
'rng': tm.makeRangeIndex(100),
'float': tm.makeFloatIndex(100),
'empty': Index([]),
'tuple': Index(zip(['foo', 'bar', 'baz'], [1, 2, 3])),
'period': Index(period_range('2012-1-1', freq='M', periods=3)),
'date2': Index(date_range('2013-01-1', periods=10)),
'bdate': Index(bdate_range('2013-01-02', periods=10)),
'cat': tm.makeCategoricalIndex(100)
}
self.mi = {
'reg': MultiIndex.from_tuples([('bar', 'one'), ('baz', 'two'),
('foo', 'two'),
('qux', 'one'), ('qux', 'two')],
names=['first', 'second']),
}
def test_basic_index(self):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with no freq (GH5506)
i = Index([Timestamp('20130101'), Timestamp('20130103')])
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
# datetime with timezone
i = Index([Timestamp('20130101 9:00:00'), Timestamp(
'20130103 11:00:00')]).tz_localize('US/Eastern')
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_multi_index(self):
for s, i in self.mi.items():
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def test_unicode(self):
i = tm.makeUnicodeIndex(100)
i_rec = self.encode_decode(i)
tm.assert_index_equal(i, i_rec)
def categorical_index(self):
# GH15487
df = DataFrame(np.random.randn(10, 2))
df = df.astype({0: 'category'}).set_index(0)
result = self.encode_decode(df)
tm.assert_frame_equal(result, df)
class TestSeries(TestPackers):
def setup_method(self, method):
super(TestSeries, self).setup_method(method)
self.d = {}
s = tm.makeStringSeries()
s.name = 'string'
self.d['string'] = s
s = tm.makeObjectSeries()
s.name = 'object'
self.d['object'] = s
s = Series(iNaT, dtype='M8[ns]', index=range(5))
self.d['date'] = s
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 2 +
[Timestamp('20130603', tz='CET')] * 3,
'G': [Timestamp('20130102', tz='US/Eastern')] * 5,
'H': Categorical([1, 2, 3, 4, 5]),
'I': Categorical([1, 2, 3, 4, 5], ordered=True),
}
self.d['float'] = Series(data['A'])
self.d['int'] = Series(data['B'])
self.d['mixed'] = Series(data['E'])
self.d['dt_tz_mixed'] = Series(data['F'])
self.d['dt_tz'] = Series(data['G'])
self.d['cat_ordered'] = Series(data['H'])
self.d['cat_unordered'] = Series(data['I'])
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_series_equal(i, i_rec)
class TestCategorical(TestPackers):
def setup_method(self, method):
super(TestCategorical, self).setup_method(method)
self.d = {}
self.d['plain_str'] = Categorical(['a', 'b', 'c', 'd', 'e'])
self.d['plain_str_ordered'] = Categorical(['a', 'b', 'c', 'd', 'e'],
ordered=True)
self.d['plain_int'] = Categorical([5, 6, 7, 8])
self.d['plain_int_ordered'] = Categorical([5, 6, 7, 8], ordered=True)
def test_basic(self):
# run multiple times here
for n in range(10):
for s, i in self.d.items():
i_rec = self.encode_decode(i)
assert_categorical_equal(i, i_rec)
class TestNDFrame(TestPackers):
def setup_method(self, method):
super(TestNDFrame, self).setup_method(method)
data = {
'A': [0., 1., 2., 3., np.nan],
'B': [0, 1, 0, 1, 0],
'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
'D': date_range('1/1/2009', periods=5),
'E': [0., 1, Timestamp('20100101'), 'foo', 2.],
'F': [Timestamp('20130102', tz='US/Eastern')] * 5,
'G': [Timestamp('20130603', tz='CET')] * 5,
'H': Categorical(['a', 'b', 'c', 'd', 'e']),
'I': Categorical(['a', 'b', 'c', 'd', 'e'], ordered=True),
}
self.frame = {
'float': DataFrame(dict(A=data['A'], B=Series(data['A']) + 1)),
'int': DataFrame(dict(A=data['B'], B=Series(data['B']) + 1)),
'mixed': DataFrame(data)}
with catch_warnings(record=True):
self.panel = {
'float': Panel(dict(ItemA=self.frame['float'],
ItemB=self.frame['float'] + 1))}
def test_basic_frame(self):
for s, i in self.frame.items():
i_rec = self.encode_decode(i)
assert_frame_equal(i, i_rec)
def test_basic_panel(self):
with catch_warnings(record=True):
for s, i in self.panel.items():
i_rec = self.encode_decode(i)
assert_panel_equal(i, i_rec)
def test_multi(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
l = tuple([self.frame['float'], self.frame['float'].A,
self.frame['float'].B, None])
l_rec = self.encode_decode(l)
check_arbitrary(l, l_rec)
# this is an oddity in that packed lists will be returned as tuples
l = [self.frame['float'], self.frame['float']
.A, self.frame['float'].B, None]
l_rec = self.encode_decode(l)
assert isinstance(l_rec, tuple)
check_arbitrary(l, l_rec)
def test_iterator(self):
l = [self.frame['float'], self.frame['float']
.A, self.frame['float'].B, None]
with ensure_clean(self.path) as path:
to_msgpack(path, *l)
for i, packed in enumerate(read_msgpack(path, iterator=True)):
check_arbitrary(packed, l[i])
def tests_datetimeindex_freq_issue(self):
# GH 5947
# inferring freq on the datetimeindex
df = DataFrame([1, 2, 3], index=date_range('1/1/2013', '1/3/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
df = DataFrame([1, 2], index=date_range('1/1/2013', '1/2/2013'))
result = self.encode_decode(df)
assert_frame_equal(result, df)
def test_dataframe_duplicate_column_names(self):
# GH 9618
expected_1 = DataFrame(columns=['a', 'a'])
expected_2 = DataFrame(columns=[1] * 100)
expected_2.loc[0] = np.random.randn(100)
expected_3 = DataFrame(columns=[1, 1])
expected_3.loc[0] = ['abc', np.nan]
result_1 = self.encode_decode(expected_1)
result_2 = self.encode_decode(expected_2)
result_3 = self.encode_decode(expected_3)
assert_frame_equal(result_1, expected_1)
assert_frame_equal(result_2, expected_2)
assert_frame_equal(result_3, expected_3)
class TestSparse(TestPackers):
def _check_roundtrip(self, obj, comparator, **kwargs):
        # currently these are not implemented
# i_rec = self.encode_decode(obj)
# comparator(obj, i_rec, **kwargs)
pytest.raises(NotImplementedError, self.encode_decode, obj)
def test_sparse_series(self):
s = tm.makeStringSeries()
s[3:5] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_series_equal,
check_series_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_series_equal,
check_series_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_series_equal,
check_series_type=True)
def test_sparse_frame(self):
s = tm.makeDataFrame()
s.loc[3:5, 1:3] = np.nan
s.loc[8:10, -2] = np.nan
ss = s.to_sparse()
self._check_roundtrip(ss, tm.assert_frame_equal,
check_frame_type=True)
ss2 = s.to_sparse(kind='integer')
self._check_roundtrip(ss2, tm.assert_frame_equal,
check_frame_type=True)
ss3 = s.to_sparse(fill_value=0)
self._check_roundtrip(ss3, tm.assert_frame_equal,
check_frame_type=True)
class TestCompression(TestPackers):
"""See https://github.com/pandas-dev/pandas/pull/9783
"""
def setup_method(self, method):
try:
from sqlalchemy import create_engine
self._create_sql_engine = create_engine
except ImportError:
self._SQLALCHEMY_INSTALLED = False
else:
self._SQLALCHEMY_INSTALLED = True
super(TestCompression, self).setup_method(method)
data = {
'A': np.arange(1000, dtype=np.float64),
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
}
self.frame = {
'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
'mixed': DataFrame(data),
}
def test_plain(self):
i_rec = self.encode_decode(self.frame)
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
def _test_compression(self, compress):
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames
for block in value._data.blocks:
assert block.values.flags.writeable
def test_compression_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression('zlib')
def test_compression_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression('blosc')
def _test_compression_warns_when_decompress_caches(self, compress):
not_garbage = []
control = [] # copied data
compress_module = globals()[compress]
real_decompress = compress_module.decompress
def decompress(ob):
"""mock decompress function that delegates to the real
decompress but caches the result and a copy of the result.
"""
res = real_decompress(ob)
not_garbage.append(res) # hold a reference to this bytes object
control.append(bytearray(res)) # copy the data here to check later
return res
# types mapped to values to add in place.
rhs = {
np.dtype('float64'): 1.0,
np.dtype('int32'): 1,
np.dtype('object'): 'a',
np.dtype('datetime64[ns]'): np.timedelta64(1, 'ns'),
np.dtype('timedelta64[ns]'): np.timedelta64(1, 'ns'),
}
with patch(compress_module, 'decompress', decompress), \
tm.assert_produces_warning(PerformanceWarning) as ws:
i_rec = self.encode_decode(self.frame, compress=compress)
for k in self.frame.keys():
value = i_rec[k]
expected = self.frame[k]
assert_frame_equal(value, expected)
# make sure that we can write to the new frames even though
# we needed to copy the data
for block in value._data.blocks:
assert block.values.flags.writeable
# mutate the data in some way
block.values[0] += rhs[block.dtype]
for w in ws:
# check the messages from our warnings
assert str(w.message) == ('copying data after decompressing; '
'this may mean that decompress is '
'caching its result')
for buf, control_buf in zip(not_garbage, control):
# make sure none of our mutations above affected the
# original buffers
assert buf == control_buf
def test_compression_warns_when_decompress_caches_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_compression_warns_when_decompress_caches('zlib')
def test_compression_warns_when_decompress_caches_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_compression_warns_when_decompress_caches('blosc')
def _test_small_strings_no_warn(self, compress):
empty = np.array([], dtype='uint8')
with tm.assert_produces_warning(None):
empty_unpacked = self.encode_decode(empty, compress=compress)
tm.assert_numpy_array_equal(empty_unpacked, empty)
assert empty_unpacked.flags.writeable
char = np.array([ord(b'a')], dtype='uint8')
with tm.assert_produces_warning(None):
char_unpacked = self.encode_decode(char, compress=compress)
tm.assert_numpy_array_equal(char_unpacked, char)
assert char_unpacked.flags.writeable
# if this test fails I am sorry because the interpreter is now in a
# bad state where b'a' points to 98 == ord(b'b').
char_unpacked[0] = ord(b'b')
        # we compare the ord of bytes b'a' with unicode u'a' because they
        # should always be the same (unless we were able to mutate the shared
        # character singleton, in which case ord(b'a') == ord(b'b')).
assert ord(b'a') == ord(u'a')
tm.assert_numpy_array_equal(
char_unpacked,
np.array([ord(b'b')], dtype='uint8'),
)
def test_small_strings_no_warn_zlib(self):
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
self._test_small_strings_no_warn('zlib')
def test_small_strings_no_warn_blosc(self):
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
self._test_small_strings_no_warn('blosc')
def test_readonly_axis_blosc(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='blosc')
assert 1. in self.encode_decode(df2['A'], compress='blosc')
def test_readonly_axis_zlib(self):
# GH11880
df1 = DataFrame({'A': list('abcd')})
df2 = DataFrame(df1, index=[1., 2., 3., 4.])
assert 1 in self.encode_decode(df1['A'], compress='zlib')
assert 1. in self.encode_decode(df2['A'], compress='zlib')
def test_readonly_axis_blosc_to_sql(self):
# GH11880
if not _BLOSC_INSTALLED:
pytest.skip('no blosc')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='blosc')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
def test_readonly_axis_zlib_to_sql(self):
# GH11880
if not _ZLIB_INSTALLED:
pytest.skip('no zlib')
if not self._SQLALCHEMY_INSTALLED:
pytest.skip('no sqlalchemy')
expected = DataFrame({'A': list('abcd')})
df = self.encode_decode(expected, compress='zlib')
eng = self._create_sql_engine("sqlite:///:memory:")
df.to_sql('test', eng, if_exists='append')
result = pandas.read_sql_table('test', eng, index_col='index')
result.index.names = [None]
assert_frame_equal(expected, result)
class TestEncoding(TestPackers):
def setup_method(self, method):
super(TestEncoding, self).setup_method(method)
data = {
'A': [compat.u('\u2019')] * 1000,
'B': np.arange(1000, dtype=np.int32),
'C': list(100 * 'abcdefghij'),
'D': date_range(datetime.datetime(2015, 4, 1), periods=1000),
'E': [datetime.timedelta(days=x) for x in range(1000)],
'G': [400] * 1000
}
self.frame = {
'float': DataFrame(dict((k, data[k]) for k in ['A', 'A'])),
'int': DataFrame(dict((k, data[k]) for k in ['B', 'B'])),
'mixed': DataFrame(data),
}
self.utf_encodings = ['utf8', 'utf16', 'utf32']
def test_utf(self):
# GH10581
for encoding in self.utf_encodings:
for frame in compat.itervalues(self.frame):
result = self.encode_decode(frame, encoding=encoding)
assert_frame_equal(result, frame)
def test_default_encoding(self):
for frame in compat.itervalues(self.frame):
result = frame.to_msgpack()
expected = frame.to_msgpack(encoding='utf8')
assert result == expected
result = self.encode_decode(frame)
assert_frame_equal(result, frame)
def legacy_packers_versions():
# yield the packers versions
path = tm.get_data_path('legacy_msgpack')
for v in os.listdir(path):
p = os.path.join(path, v)
if os.path.isdir(p):
yield v
class TestMsgpack(object):
"""
How to add msgpack tests:
    1. Install the pandas version intended to output the msgpack.
    2. Execute "generate_legacy_storage_files.py" to create the msgpack.
    $ python generate_legacy_storage_files.py <output_dir> msgpack
    3. Move the created msgpack to "data/legacy_msgpack/<version>" directory.
"""
minimum_structure = {'series': ['float', 'int', 'mixed',
'ts', 'mi', 'dup'],
'frame': ['float', 'int', 'mixed', 'mi'],
'panel': ['float'],
'index': ['int', 'date', 'period'],
'mi': ['reg2']}
def check_min_structure(self, data, version):
for typ, v in self.minimum_structure.items():
assert typ in data, '"{0}" not found in unpacked data'.format(typ)
for kind in v:
msg = '"{0}" not found in data["{1}"]'.format(kind, typ)
assert kind in data[typ], msg
def compare(self, current_data, all_data, vf, version):
# GH12277 encoding default used to be latin-1, now utf-8
if LooseVersion(version) < '0.18.0':
data = read_msgpack(vf, encoding='latin-1')
else:
data = read_msgpack(vf)
self.check_min_structure(data, version)
for typ, dv in data.items():
assert typ in all_data, ('unpacked data contains '
'extra key "{0}"'
.format(typ))
for dt, result in dv.items():
assert dt in current_data[typ], ('data["{0}"] contains extra '
'key "{1}"'.format(typ, dt))
try:
expected = current_data[typ][dt]
except KeyError:
continue
# use a specific comparator
# if available
comp_method = "compare_{typ}_{dt}".format(typ=typ, dt=dt)
comparator = getattr(self, comp_method, None)
if comparator is not None:
comparator(result, expected, typ, version)
else:
check_arbitrary(result, expected)
return data
def compare_series_dt_tz(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < '0.17.0':
expected = expected.astype(object)
tm.assert_series_equal(result, expected)
else:
tm.assert_series_equal(result, expected)
def compare_frame_dt_mixed_tzs(self, result, expected, typ, version):
# 8260
# dtype is object < 0.17.0
if LooseVersion(version) < '0.17.0':
expected = expected.astype(object)
tm.assert_frame_equal(result, expected)
else:
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize('version', legacy_packers_versions())
def test_msgpacks_legacy(self, current_packers_data, all_packers_data,
version):
pth = tm.get_data_path('legacy_msgpack/{0}'.format(version))
n = 0
for f in os.listdir(pth):
# GH12142 0.17 files packed in P2 can't be read in P3
if (compat.PY3 and version.startswith('0.17.') and
f.split('.')[-4][-1] == '2'):
continue
vf = os.path.join(pth, f)
try:
with catch_warnings(record=True):
self.compare(current_packers_data, all_packers_data,
vf, version)
except ImportError:
# blosc not installed
continue
n += 1
assert n > 0, 'Msgpack files are not tested'
| bsd-3-clause |
DonghoChoi/Exploration_Study | local/WS_url_list.py | 2 | 3092 | #!/usr/bin/python
# Author: Dongho Choi
'''
This script reads the distinct URLs each participant visited (from the
WS_eye_duration_per_page_with_url table) and inserts them, together with a
running distinct_url_index, into the WS_url_list table.
'''
import os.path
import datetime
import math
import time
import itertools
import pandas as pd
from sshtunnel import SSHTunnelForwarder # for SSH connection
import pymysql.cursors # MySQL handling API
import sys
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
import statsmodels.api as sm
import csv
#sys.path.append("./configs/")
sys.path.append("/Users/donghochoi/Documents/Work/Exploration_Study/Dissertation/Code/local/configs/")
import server_config # (1) info2_server (2) exploration_db
if __name__ == "__main__":
# Server connection
server = SSHTunnelForwarder(
(server_config.info2_server['host'], 22),
ssh_username=server_config.info2_server['user'],
ssh_password=server_config.info2_server['password'],
remote_bind_address=('127.0.0.1', 3306))
server.start()
connection = pymysql.connect(host='127.0.0.1',
port=server.local_bind_port,
user=server_config.exploration_db['user'],
password=server_config.exploration_db['password'],
db=server_config.exploration_db['database'],
use_unicode=True,
charset="utf8"
)
connection.autocommit(True)
cursor = connection.cursor()
print("MySQL connection established")
'''
# Get the participants list from the table of 'final_participants'
df_participants = pd.read_sql('SELECT * FROM final_participants', con=connection)
print("Participants Table READ")
userID_list = df_participants['userID'].tolist()
'''
#userID_list = [2,10,22,7]
#userID_list = [8,11,12,20,21,25]
userID_list = [2,10,22,7,8,11,12,20,21,25,6,14,16,23,42,26,27,32,46,1,3,5,9,28,34,35,37,38,40,45]
# TH behavior data for the whole session
#for i in range(0,1):
for i in range(0,len(userID_list)): # i - current userID
current_userID = userID_list[i]
#current_userID = 22
print("current user: {0}".format(current_userID))
# pages_lab table order by localTimestamp_int adding an index column named page_index
#sql = "SET @row_num = 0;"
#cursor.execute(sql)
sql="SELECT DISTINCT url FROM WS_eye_duration_per_page_with_url WHERE (userID={0});".format(current_userID)
df_user_pages_lab = pd.read_sql(sql,con=connection)
print("distinct url length: {0}".format(len(df_user_pages_lab)))
distinct_url_index = 1
for j in range(0,len(df_user_pages_lab)):
temp_url = df_user_pages_lab.iloc[j]['url']
sql="INSERT INTO WS_url_list (userID,url,distinct_url_index) VALUES ({0},'{1}',{2})".\
format(str(current_userID),str(temp_url),str(distinct_url_index))
print(sql)
cursor.execute(sql)
distinct_url_index = distinct_url_index + 1
server.stop()
| gpl-3.0 |
bam2332g/proj1part3 | rahulCode_redo/project1Part3/nba/final/playerProbabilityGenerator.py | 1 | 5124 | from sklearn import svm
import datetime
from datetime import date
import random
from scipy import spatial
# import numpy as np
playerPos={}
with open("nbaPlayerInfo") as f:
for line in f:
tup=eval(line)
playerPos[tup[0]]=tup[6]
topPlayerInfo={}
with open("nbaTopPlayersInfo") as f:
for line in f:
tup=eval(line)
topPlayerInfo[tup[0]]=[tup[4],tup[5],tup[6]]
allStar={
"G":[],
"F":[],
"C":[]
}
nonAllStar={
"G":[],
"F":[],
"C":[]
}
mvp={
"G":[],
"F":[],
"C":[]
}
nonMvp={
"G":[],
"F":[],
"C":[]
}
playerStats={}
with open("nbaPlayerStats") as f:
for line in f:
tup=eval(line)
# print(tup[2])
# print(tup[7])
# print(tup[9])
# print(tup[11])
temp=[float(tup[4])/float(tup[2]),float(tup[5])/float(tup[2]),float(tup[6])/float(tup[2]),float(tup[7])/max(float(tup[8]),1.0),float(tup[9])/max(float(tup[10]),1.0),float(tup[11])/max(float(tup[12]),1.0),float(tup[13])/float(tup[2]),float(tup[14])/float(tup[2])]
if tup[1]==2014:
if tup[0]==275:
print(tup)
print(temp)
playerStats[tup[0]]=temp
if tup[len(tup)-2]:
allStar[playerPos[tup[0]]].append(temp)
else:
nonAllStar[playerPos[tup[0]]].append(temp)
if tup[len(tup)-1]:
mvp[playerPos[tup[0]]].append(temp)
else:
nonMvp[playerPos[tup[0]]].append(temp)
topPlayerStats={}
with open("nbaTopPlayersStats") as f:
for line in f:
tup=eval(line)
temp=[]
for i in range(1,len(tup)):
temp.append(tup[i])
topPlayerStats[tup[0]]=temp
if topPlayerInfo[tup[0]][1]>0:
allStar[topPlayerInfo[tup[0]][0]].append(temp)
else:
nonAllStar[topPlayerInfo[tup[0]][0]].append(temp)
if topPlayerInfo[tup[0]][2]>0:
mvp[topPlayerInfo[tup[0]][0]].append(temp)
else:
nonMvp[topPlayerInfo[tup[0]][0]].append(temp)
allStarModels={
"F":None,
"G":None,
"C":None
}
mvpModels={
"F":None,
"G":None,
"C":None
}
for pos in allStarModels:
X=[]
y=[]
for value in allStar[pos]:
X.append(value)
y.append(1)
rand_smpl = [nonAllStar[pos][i] for i in sorted(random.sample(xrange(len(nonAllStar[pos])),len(allStar[pos])))]
for value in rand_smpl:
X.append(value)
y.append(0)
# print(X)
# X=np.array(X)
# y=np.array(y)
clf=svm.SVC(probability=True)
clf.fit(X,y)
allStarModels[pos]=clf
for pos in mvpModels:
X=[]
y=[]
for value in mvp[pos]:
X.append(value)
y.append(1)
rand_smpl = [nonMvp[pos][i] for i in sorted(random.sample(xrange(len(nonMvp[pos])),len(mvp[pos])))]
for value in rand_smpl:
X.append(value)
y.append(0)
# print(X)
# X=np.array(X)
# y=np.array(y)
clf=svm.SVC(kernel="linear",probability=True)
clf.fit(X,y)
mvpModels[pos]=clf
probs={}
with open("probs","w") as f:
for i in range(0,3):
for player in playerPos:
if player in playerStats:
v=playerStats[player]
amodel=allStarModels[playerPos[player]]
mmodel=mvpModels[playerPos[player]]
if player in probs:
probs[player][0]+=mmodel.predict_proba(v)[0][1]
probs[player][1]+=amodel.predict_proba(v)[0][1]
else:
probs[player]=[mmodel.predict_proba(v)[0][1],amodel.predict_proba(v)[0][1]]
else:
probs[player]=[0,0]
if i<2:
for pos in allStarModels:
X=[]
y=[]
for value in allStar[pos]:
X.append(value)
y.append(1)
rand_smpl = [nonAllStar[pos][i] for i in sorted(random.sample(xrange(len(nonAllStar[pos])),len(allStar[pos])))]
for value in rand_smpl:
X.append(value)
y.append(0)
# print(X)
# X=np.array(X)
# y=np.array(y)
clf=svm.SVC(probability=True)
clf.fit(X,y)
allStarModels[pos]=clf
for pos in mvpModels:
X=[]
y=[]
for value in mvp[pos]:
X.append(value)
y.append(1)
rand_smpl = [nonMvp[pos][i] for i in sorted(random.sample(xrange(len(nonMvp[pos])),len(mvp[pos])))]
for value in rand_smpl:
X.append(value)
y.append(0)
# print(X)
# X=np.array(X)
# y=np.array(y)
clf=svm.SVC(kernel="linear",probability=True)
clf.fit(X,y)
mvpModels[pos]=clf
for player in probs:
cur=[0,0]
if player in playerStats:
cur=[-1,0]
v1=playerStats[player]
# print(v1)
for player2 in topPlayerStats:
# print(str(player2))
v2=topPlayerStats[player2]
result=1-spatial.distance.cosine(v1,v2)
if result>cur[1]:
cur[0]=player2
cur[1]=result
# print(player2)
# break
# break
# 184
temp=(player,cur[0],cur[1],(probs[player][1]/3.0),(probs[player][0]/4.5))
f.write(str(temp))
f.write("\n")
# test=playerStats[275]
# test1=allStarModels[playerPos[275]]
# test2=mvpModels[playerPos[275]]
# print(test1.predict(test))
# print(test1.predict_proba(test))
# print(test2.predict_proba(test))
# test=playerStats[351]
# test1=allStarModels[playerPos[351]]
# print(test1.predict(test))
# print(test1.predict_proba(test))
# test=playerStats[361]
# test1=allStarModels[playerPos[361]]
# print(test1.predict(test))
# print(test1.predict_proba(test))
# test=playerStats[26]
# test1=allStarModels[playerPos[26]]
# print(test1.predict(test))
# print(test1.predict_proba(test))
# for key in allStar:
# print(key)
# print len(allStar[key])
# for key in mvp:
# print(key)
# print len(mvp[key])
| mit |
calcdude84se/lasersandbox | interpolator.py | 1 | 1067 | import numpy as np
import scipy.spatial
import scipy.ndimage.filters as filt
import matplotlib.pyplot as plt
def interpolate(xmesh, ymesh, pointcloud):
shape = xmesh.shape
mytree = scipy.spatial.cKDTree(pointcloud[:,:2])
xflat = xmesh.flatten()
yflat = ymesh.flatten()
    # nearest-neighbour lookup: for each grid point take the z value of the
    # closest point in the cloud
    _, indices = mytree.query(np.array([xflat, yflat]).transpose())
    z = pointcloud[indices, 2]
z = z.reshape(shape)
#z = filt.gaussian_filter(z, 2)
return z, mytree
def test():
xarray = np.linspace(-5, 5, 30)
yarray = np.linspace(-5, 5, 30)
xmesh, ymesh = np.meshgrid(xarray, yarray)
xpoint = np.random.ranf(400)*10 -5
ypoint = np.random.ranf(400)*10 -5
zpoint = xpoint**2 + ypoint **2
pointcloud = np.array([xpoint, ypoint, zpoint]).transpose()
zarray , mytree = interpolate(xmesh, ymesh, pointcloud)
#zarray = filt.gaussian_filter(zarray, 4)
return zarray
if __name__ == "__main__":
z = test()
plt.imshow(z)
plt.show()
| mit |
anirudhjayaraman/scikit-learn | examples/svm/plot_svm_regression.py | 249 | 1451 | """
===================================================================
Support Vector Regression (SVR) using linear and non-linear kernels
===================================================================
Toy example of 1D regression using linear, polynomial and RBF kernels.
"""
print(__doc__)
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt
###############################################################################
# Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
###############################################################################
# Add noise to targets
y[::5] += 3 * (0.5 - np.random.rand(8))
###############################################################################
# Fit regression model
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)
###############################################################################
# look at the results
plt.scatter(X, y, c='k', label='data')
plt.hold('on')
plt.plot(X, y_rbf, c='g', label='RBF model')
plt.plot(X, y_lin, c='r', label='Linear model')
plt.plot(X, y_poly, c='b', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
| bsd-3-clause |
bikash/kaggleCompetition | microsoft malware/Malware_Say_No_To_Overfitting/kaggle_Microsoft_malware_small/model1.py | 2 | 6965 | from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.linear_model import LogisticRegression as LGR
from sklearn.ensemble import GradientBoostingClassifier as GBC
from sklearn.ensemble import ExtraTreesClassifier as ET
from xgboost_multi import XGBC
from sklearn import cross_validation
from sklearn.cross_validation import StratifiedKFold as KFold
from sklearn.metrics import log_loss
import numpy as np
import pandas as pd
import pickle
# create model_list
def get_model_list():
model_list = []
for num_round in [1]:
for max_depth in [1]:
for eta in [0.2]:
for min_child_weight in [1]:
for col_sample in [1]:
model_list.append((XGBC(num_round = num_round, max_depth = max_depth, eta = eta,
min_child_weight = min_child_weight, colsample_bytree = col_sample),
'xgb_tree_%i_depth_%i_lr_%f_child_%i_col_sample_%i'%(num_round, max_depth, eta, min_child_weight,col_sample)))
return model_list
def gen_data():
# the 4k features!
the_train = pickle.load(open('X33_train_reproduce.p','rb'))
the_test = pickle.load(open('X33_test_reproduce.p','rb'))
print the_train.shape, the_test.shape
# corresponding id and labels
Id = pickle.load(open('xid.p','rb'))
labels = pickle.load(open('y.p','rb'))
Id_test = pickle.load(open('Xt_id.p','rb'))
# merge them into pandas
join_train = np.column_stack((Id, the_train, labels))
join_test = np.column_stack((Id_test, the_test))
train = pd.DataFrame(join_train, columns=['Id']+['the_fea%i'%i for i in xrange(the_train.shape[1])] + ['Class'])
test = pd.DataFrame(join_test, columns=['Id']+['the_fea%i'%i for i in xrange(the_train.shape[1])])
del join_train, join_test
# convert into numeric features
train = train.convert_objects(convert_numeric=True)
test = test.convert_objects(convert_numeric=True)
# including more things
train_count = pd.read_csv("train_frequency.csv")
test_count = pd.read_csv("test_frequency.csv")
train = pd.merge(train, train_count, on='Id')
test = pd.merge(test, test_count, on='Id')
print train.shape, test.shape
## all right, include more!
grams_train = pd.read_csv("train_data_750.csv")
grams_test = pd.read_csv("test_data_750.csv")
print grams_train.shape, grams_test.shape
# daf features
train_daf = pd.read_csv("train_daf.csv")
test_daf = pd.read_csv("test_daf.csv")
#daf_list = [0,165,91,60,108,84,42,93,152,100] #daf list for 500 grams.
print train_daf.shape, test_daf.shape
    # merge them all
mine = pd.merge(grams_train, train_daf,on='Id')
mine_labels = pd.read_csv("trainLabels.csv")
mine = pd.merge(mine, mine_labels, on='Id')
mine_labels = mine.Class
mine_Id = mine.Id
del mine['Class']
del mine['Id']
mine_train = mine.as_matrix()
mine_test = pd.merge(grams_test, test_daf,on='Id')
mine_test_id = mine_test.Id
del mine_test['Id']
print mine.shape,mine_train.shape, mine_test.shape
train_mine = pd.DataFrame(np.column_stack((mine_Id, mine_train)), columns=['Id']+['mine_'+str(x) for x in xrange(mine_train.shape[1])]).convert_objects(convert_numeric=True)
test_mine = pd.DataFrame(np.column_stack((mine_test_id, mine_test)), columns=['Id']+['mine_'+str(x) for x in xrange(mine_test.shape[1])]).convert_objects(convert_numeric=True)
train = pd.merge(train, train_mine, on='Id')
test = pd.merge(test, test_mine, on='Id')
train_image = pd.read_csv("train_asm_image.csv", usecols=['Id']+['asm_%i'%i for i in xrange(800)])
test_image = pd.read_csv("test_asm_image.csv", usecols=['Id']+['asm_%i'%i for i in xrange(800)])
train = pd.merge(train, train_image, on='Id')
test = pd.merge(test, test_image, on='Id')
print "the data dimension:"
print train.shape, test.shape
return train, test
def gen_submission(model):
# read in data
print "read data and prepare modelling..."
train, test = gen_data()
X = train
Id = X.Id
labels = np.array(X.Class - 1) # for the purpose of using multilogloss fun.
del X['Id']
del X['Class']
X = X.as_matrix()
X_test = test
id_test = X_test.Id
del X_test['Id']
X_test = X_test.as_matrix()
clf, clf_name = model
print "generating model %s..."%clf_name
clf.fit(X, labels)
pred = clf.predict_proba(X_test)
print pred.shape
pred= pred.reshape(id_test.shape[0],9)
pred = np.column_stack((id_test, pred))
submission = pd.DataFrame(pred, columns=['Id']+['Prediction%i'%i for i in xrange(1,10)])
submission = submission.convert_objects(convert_numeric=True)
submission.to_csv('model1.csv',index = False)
def cross_validate(model_list):
# read in data
print "read data and prepare modelling..."
train, test = gen_data()
X = train
Id = X.Id
labels = np.array(X.Class - 1) # for the purpose of using multilogloss fun.
del X['Id']
del X['Class']
X = X.as_matrix()
X_test = test
id_test = X_test.Id
del X_test['Id']
X_test = X_test.as_matrix()
kf = KFold(labels, n_folds=4) # 4 folds
best_score = 1.0
for j, (clf, clf_name) in enumerate(model_list):
print "modelling %s"%clf_name
stack_train = np.zeros((len(Id),9)) # 9 classes.
for i, (train_fold, validate) in enumerate(kf):
X_train, X_validate, labels_train, labels_validate = X[train_fold,:], X[validate,:], labels[train_fold], labels[validate]
clf.fit(X_train,labels_train)
stack_train[validate] = clf.predict_proba(X_validate)
print multiclass_log_loss(labels, stack_train)
if multiclass_log_loss(labels, stack_train) < best_score:
best_score = multiclass_log_loss(labels, stack_train)
best_selection = j
return model_list[best_selection]
def multiclass_log_loss(y_true, y_pred, eps=1e-15):
"""Multi class version of Logarithmic Loss metric.
https://www.kaggle.com/wiki/MultiClassLogLoss
Parameters
----------
y_true : array, shape = [n_samples]
            true class, integers in [0, n_classes - 1]
y_pred : array, shape = [n_samples, n_classes]
Returns
-------
loss : float
"""
predictions = np.clip(y_pred, eps, 1 - eps)
# normalize row sums to 1
predictions /= predictions.sum(axis=1)[:, np.newaxis]
actual = np.zeros(y_pred.shape)
n_samples = actual.shape[0]
actual[np.arange(n_samples), y_true.astype(int)] = 1
vectsum = np.sum(actual * np.log(predictions))
loss = -1.0 / n_samples * vectsum
return loss
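# Worked example of the loss above (illustrative numbers, not project data):
# for y_true = [0, 1] and y_pred = [[0.9, 0.1], [0.2, 0.8]], the loss is
# -(log(0.9) + log(0.8)) / 2 ~= 0.164, i.e. the mean negative log of the
# probability assigned to each sample's true class.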
if __name__ == '__main__':
model_list = get_model_list()
#best_model = cross_validate(model_list)
gen_submission(model_list[0])#0.0051
print "ALL DONE!!!"
| apache-2.0 |
yavalvas/yav_com | build/matplotlib/lib/mpl_examples/api/collections_demo.py | 7 | 4193 | #!/usr/bin/env python
'''Demonstration of LineCollection, PolyCollection, and
RegularPolyCollection with autoscaling.
For the first two subplots, we will use spirals. Their
size will be set in plot units, not data units. Their positions
will be set in data units by using the "offsets" and "transOffset"
kwargs of the LineCollection and PolyCollection.
The third subplot will make regular polygons, with the same
type of scaling and positioning as in the first two.
The last subplot illustrates the use of "offsets=(xo,yo)",
that is, a single tuple instead of a list of tuples, to generate
successively offset curves, with the offset given in data
units. This behavior is available only for the LineCollection.
'''
import matplotlib.pyplot as plt
from matplotlib import collections, transforms
from matplotlib.colors import colorConverter
import numpy as np
nverts = 50
npts = 100
# Make some spirals
r = np.array(range(nverts))
theta = np.array(range(nverts)) * (2*np.pi)/(nverts-1)
xx = r * np.sin(theta)
yy = r * np.cos(theta)
spiral = list(zip(xx,yy))
# Make some offsets
rs = np.random.RandomState([12345678])
xo = rs.randn(npts)
yo = rs.randn(npts)
xyo = list(zip(xo, yo))
# Make a list of colors cycling through the rgbcmyk series.
colors = [colorConverter.to_rgba(c) for c in ('r','g','b','c','y','m','k')]
fig, axes = plt.subplots(2,2)
((ax1, ax2), (ax3, ax4)) = axes # unpack the axes
col = collections.LineCollection([spiral], offsets=xyo,
transOffset=ax1.transData)
trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0)
col.set_transform(trans) # the points to pixels transform
# Note: the first argument to the collection initializer
# must be a list of sequences of x,y tuples; we have only
# one sequence, but we still have to put it in a list.
ax1.add_collection(col, autolim=True)
# autolim=True enables autoscaling. For collections with
# offsets like this, it is neither efficient nor accurate,
# but it is good enough to generate a plot that you can use
# as a starting point. If you know beforehand the range of
# x and y that you want to show, it is better to set them
# explicitly, leave out the autolim kwarg (or set it to False),
# and omit the 'ax1.autoscale_view()' call below.
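# For example (illustrative limits, not tuned to this spiral data), the
# autolim/autoscale_view pair could be replaced with explicit bounds:
#     ax1.set_xlim(-60, 60)
#     ax1.set_ylim(-60, 60)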
# Make a transform for the line segments such that their size is
# given in points:
col.set_color(colors)
ax1.autoscale_view() # See comment above, after ax1.add_collection.
ax1.set_title('LineCollection using offsets')
# The same data as above, but fill the curves.
col = collections.PolyCollection([spiral], offsets=xyo,
transOffset=ax2.transData)
trans = transforms.Affine2D().scale(fig.dpi/72.0)
col.set_transform(trans) # the points to pixels transform
ax2.add_collection(col, autolim=True)
col.set_color(colors)
ax2.autoscale_view()
ax2.set_title('PolyCollection using offsets')
# 7-sided regular polygons
col = collections.RegularPolyCollection(7,
sizes = np.fabs(xx)*10.0, offsets=xyo,
transOffset=ax3.transData)
trans = transforms.Affine2D().scale(fig.dpi/72.0)
col.set_transform(trans) # the points to pixels transform
ax3.add_collection(col, autolim=True)
col.set_color(colors)
ax3.autoscale_view()
ax3.set_title('RegularPolyCollection using offsets')
# Simulate a series of ocean current profiles, successively
# offset by 0.1 m/s so that they form what is sometimes called
# a "waterfall" plot or a "stagger" plot.
nverts = 60
ncurves = 20
offs = (0.1, 0.0)
yy = np.linspace(0, 2*np.pi, nverts)
ym = np.amax(yy)
xx = (0.2 + (ym-yy)/ym)**2 * np.cos(yy-0.4) * 0.5
segs = []
for i in range(ncurves):
xxx = xx + 0.02*rs.randn(nverts)
curve = list(zip(xxx, yy*100))
segs.append(curve)
col = collections.LineCollection(segs, offsets=offs)
ax4.add_collection(col, autolim=True)
col.set_color(colors)
ax4.autoscale_view()
ax4.set_title('Successive data offsets')
ax4.set_xlabel('Zonal velocity component (m/s)')
ax4.set_ylabel('Depth (m)')
# Reverse the y-axis so depth increases downward
ax4.set_ylim(ax4.get_ylim()[::-1])
plt.show()
| mit |
walkowiak/mips | tests/draw.py | 1 | 4556 | import argparse
import csv
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import host_subplot
MARKER_SIZE = 3.5
WIDTH = 0.5
TIME_DS = 8
RECALL_DS = 6 # recall at 10
INTER_DS = 9 # intersection
N_QUERIES = 100
ALGORITHM_LIST = ['ivf', 'kmeans', 'quant', 'alsh']
def create_plot(plot_label, plot_type, fig, host, data, limits):
labels = [plot_label + ' - ' + str(x) for x in ALGORITHM_LIST]
if plot_type == 'recall':
markers = ['ro', 'yo', 'bo', 'go']
DS = RECALL_DS
elif plot_type == 'inter':
markers = ['rs', 'ys', 'bs', 'gs']
DS = INTER_DS
plt.suptitle(str(plot_label) + ' vs. time')
host.set_ylabel(str(plot_label))
host.set_ylim(0.9 * limits[0], 1.1 * limits[1])
with open(str(plot_type) + '_plot.html', 'w') as html:
html.write('<!DOCTYPE html><html><head>\n')
html.write('<script src="https://cdn.plot.ly/plotly-latest.min.js"></script>')
html.write('</head><body><div id="myDiv"></div><script>')
for i, alg in enumerate(ALGORITHM_LIST):
html.write('var data' + str(i) + ' = {\n x: ')
x = [float(d[TIME_DS]) * 1000 / N_QUERIES for d in data[i]]
y = [d[DS] for d in data[i]]
html.write(str(x) + ',\n y: ' + str([float(a) for a in y]))
if alg == 'ivf':
text = ['nprobe=' + str(d[0]) for d in data[i]]
elif alg == 'kmeans':
text = ['layers=' + str(d[0]) + ' op_tr=' + str(d[4]) +
' aug_type=' + str(d[1]) + ' U=' + str(d[3])
for d in data[i]]
elif alg == 'quant':
text = ['subsp=' + str(d[0]) + ' centr=' + str(d[1])
for d in data[i]]
elif alg == 'alsh':
text = ['tabl=' + str(d[0]) + ' fun=' + str(d[1]) +
' aug_type=' + str(d[3]) + ' U=' + str(d[4]) +
' r=' + str(d[2]) for d in data[i]]
html.write(',\n text: ' + str(text) + ',\n mode: \'markers\',\n')
html.write('name :\'' + str(alg) + '\' };\n')
host.plot(x, y, markers[i], label=labels[i],
markersize=MARKER_SIZE, mfc='none', markeredgewidth=WIDTH)
html.write('\nvar data = [data0, data1, data2, data3];\n\nvar layout = { title:\'')
html.write(str(plot_label) + ' vs. time\', yaxis: { title: \'' + str(plot_label))
html.write('\' }, xaxis: { type: \'log\', autorange: true, title: \'Time per query [ms]\' }, height : 1000 };\n\n')
html.write('Plotly.newPlot(\'myDiv\', data, layout);\n</script></body></html>')
host.legend(loc='upper center', bbox_to_anchor=(0.5, -0.15), ncol=2)
fig.savefig(str(plot_type) + '_plot.pdf')
def main(args):
with open(args.input) as f:
reader = csv.reader(f, delimiter="\t")
inlist = list(reader)
minimum_time = min([float(d[TIME_DS + 1]) for d in inlist])
maximum_time = max([float(d[TIME_DS + 1]) for d in inlist])
minimum_recall = min([float(d[RECALL_DS + 1]) for d in inlist])
maximum_recall = max([float(d[RECALL_DS + 1]) for d in inlist])
minimum_inter = min([float(d[INTER_DS + 1]) for d in inlist])
maximum_inter = max([float(d[INTER_DS + 1]) for d in inlist])
data = []
for algorithm in ALGORITHM_LIST:
data.append([x[1:] for x in inlist if x[0] == algorithm])
# prepare the plot
fig = plt.figure()
host = host_subplot(111)
box = host.get_position()
host.set_position([box.x0, box.y0 + box.height*0.35, box.width, box.height * 0.7])
host.set_xscale('log')
host.set_xlabel("Time per query [ms]")
host.set_xlim(0.9 * 1000 * minimum_time / N_QUERIES,
1.1 * 1000 * maximum_time / N_QUERIES)
if args.mode == 'recall':
create_plot('Recall@10', 'recall', fig=fig, host=host, data=data, limits=(minimum_recall, maximum_recall))
if args.mode == 'inter':
create_plot('Top-100 intersection', 'inter', fig=fig, host=host, data=data, limits=(minimum_inter, maximum_inter))
if __name__ == '__main__':
parser = argparse.ArgumentParser("A simple utility to create plots with benchmark results.")
parser.add_argument("--input", default="output.txt",
help="File from which the data should be read.")
parser.add_argument("--mode", choices={"recall", "inter"}, default="recall",
help="Controls what plot will be drawn")
args = parser.parse_args()
main(args)
| mit |
plotly/python-api | packages/python/plotly/plotly/graph_objs/_histogram2d.py | 1 | 95519 | from plotly.basedatatypes import BaseTraceType as _BaseTraceType
import copy as _copy
class Histogram2d(_BaseTraceType):
# class properties
# --------------------
_parent_path_str = ""
_path_str = "histogram2d"
_valid_props = {
"autobinx",
"autobiny",
"autocolorscale",
"bingroup",
"coloraxis",
"colorbar",
"colorscale",
"customdata",
"customdatasrc",
"histfunc",
"histnorm",
"hoverinfo",
"hoverinfosrc",
"hoverlabel",
"hovertemplate",
"hovertemplatesrc",
"ids",
"idssrc",
"legendgroup",
"marker",
"meta",
"metasrc",
"name",
"nbinsx",
"nbinsy",
"opacity",
"reversescale",
"showlegend",
"showscale",
"stream",
"type",
"uid",
"uirevision",
"visible",
"x",
"xaxis",
"xbingroup",
"xbins",
"xcalendar",
"xgap",
"xsrc",
"y",
"yaxis",
"ybingroup",
"ybins",
"ycalendar",
"ygap",
"ysrc",
"z",
"zauto",
"zhoverformat",
"zmax",
"zmid",
"zmin",
"zsmooth",
"zsrc",
}
# autobinx
# --------
@property
def autobinx(self):
"""
Obsolete: since v1.42 each bin attribute is auto-determined
separately and `autobinx` is not needed. However, we accept
`autobinx: true` or `false` and will update `xbins` accordingly
before deleting `autobinx` from the trace.
The 'autobinx' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["autobinx"]
@autobinx.setter
def autobinx(self, val):
self["autobinx"] = val
# autobiny
# --------
@property
def autobiny(self):
"""
Obsolete: since v1.42 each bin attribute is auto-determined
separately and `autobiny` is not needed. However, we accept
`autobiny: true` or `false` and will update `ybins` accordingly
before deleting `autobiny` from the trace.
The 'autobiny' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["autobiny"]
@autobiny.setter
def autobiny(self, val):
self["autobiny"] = val
# autocolorscale
# --------------
@property
def autocolorscale(self):
"""
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`colorscale`. In case `colorscale` is unspecified or
`autocolorscale` is true, the default palette will be chosen
according to whether numbers in the `color` array are all
positive, all negative or mixed.
The 'autocolorscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["autocolorscale"]
@autocolorscale.setter
def autocolorscale(self, val):
self["autocolorscale"] = val
# bingroup
# --------
@property
def bingroup(self):
"""
        Set the `xbingroup` and `ybingroup` default prefix. For example,
        setting a `bingroup` of 1 on two histogram2d traces will make
        their x-bins and y-bins match separately.
The 'bingroup' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["bingroup"]
@bingroup.setter
def bingroup(self, val):
self["bingroup"] = val
# coloraxis
# ---------
@property
def coloraxis(self):
"""
Sets a reference to a shared color axis. References to these
shared color axes are "coloraxis", "coloraxis2", "coloraxis3",
etc. Settings for these shared color axes are set in the
layout, under `layout.coloraxis`, `layout.coloraxis2`, etc.
Note that multiple color scales can be linked to the same color
axis.
The 'coloraxis' property is an identifier of a particular
subplot, of type 'coloraxis', that may be specified as the string 'coloraxis'
optionally followed by an integer >= 1
(e.g. 'coloraxis', 'coloraxis1', 'coloraxis2', 'coloraxis3', etc.)
Returns
-------
str
"""
return self["coloraxis"]
@coloraxis.setter
def coloraxis(self, val):
self["coloraxis"] = val
# colorbar
# --------
@property
def colorbar(self):
"""
The 'colorbar' property is an instance of ColorBar
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.ColorBar`
- A dict of string/value properties that will be passed
to the ColorBar constructor
Supported dict properties:
bgcolor
Sets the color of padded area.
bordercolor
Sets the axis line color.
borderwidth
Sets the width (in px) or the border enclosing
this color bar.
dtick
Sets the step in-between ticks on this axis.
Use with `tick0`. Must be a positive number, or
special strings available to "log" and "date"
axes. If the axis `type` is "log", then ticks
are set every 10^(n*dtick) where n is the tick
number. For example, to set a tick mark at 1,
10, 100, 1000, ... set dtick to 1. To set tick
marks at 1, 100, 10000, ... set dtick to 2. To
set tick marks at 1, 5, 25, 125, 625, 3125, ...
set dtick to log_10(5), or 0.69897000433. "log"
has several special values; "L<f>", where `f`
is a positive number, gives ticks linearly
spaced in value (but not position). For example
`tick0` = 0.1, `dtick` = "L0.5" will put ticks
at 0.1, 0.6, 1.1, 1.6 etc. To show powers of 10
plus small digits between, use "D1" (all
digits) or "D2" (only 2 and 5). `tick0` is
ignored for "D1" and "D2". If the axis `type`
is "date", then you must convert the time to
milliseconds. For example, to set the interval
between ticks to one day, set `dtick` to
86400000.0. "date" also has special values
"M<n>" gives ticks spaced by a number of
months. `n` must be a positive integer. To set
ticks on the 15th of every third month, set
`tick0` to "2000-01-15" and `dtick` to "M3". To
set ticks every 4 years, set `dtick` to "M48"
exponentformat
Determines a formatting rule for the tick
exponents. For example, consider the number
1,000,000,000. If "none", it appears as
1,000,000,000. If "e", 1e+9. If "E", 1E+9. If
"power", 1x10^9 (with 9 in a super script). If
"SI", 1G. If "B", 1B.
len
Sets the length of the color bar This measure
excludes the padding of both ends. That is, the
color bar length is this length minus the
padding on both ends.
lenmode
Determines whether this color bar's length
(i.e. the measure in the color variation
direction) is set in units of plot "fraction"
                    or in "pixels". Use `len` to set the value.
nticks
Specifies the maximum number of ticks for the
particular axis. The actual number of ticks
will be chosen automatically to be less than or
equal to `nticks`. Has an effect only if
`tickmode` is set to "auto".
outlinecolor
Sets the axis line color.
outlinewidth
Sets the width (in px) of the axis line.
separatethousands
If "true", even 4-digit integers are separated
showexponent
If "all", all exponents are shown besides their
significands. If "first", only the exponent of
the first tick is shown. If "last", only the
exponent of the last tick is shown. If "none",
no exponents appear.
showticklabels
Determines whether or not the tick labels are
drawn.
showtickprefix
If "all", all tick labels are displayed with a
prefix. If "first", only the first tick is
displayed with a prefix. If "last", only the
                    last tick is displayed with a prefix. If
"none", tick prefixes are hidden.
showticksuffix
Same as `showtickprefix` but for tick suffixes.
thickness
Sets the thickness of the color bar This
measure excludes the size of the padding, ticks
and labels.
thicknessmode
Determines whether this color bar's thickness
(i.e. the measure in the constant color
direction) is set in units of plot "fraction"
or in "pixels". Use `thickness` to set the
value.
tick0
Sets the placement of the first tick on this
axis. Use with `dtick`. If the axis `type` is
"log", then you must take the log of your
starting tick (e.g. to set the starting tick to
100, set the `tick0` to 2) except when
`dtick`=*L<f>* (see `dtick` for more info). If
the axis `type` is "date", it should be a date
string, like date data. If the axis `type` is
"category", it should be a number, using the
scale where each category is assigned a serial
number from zero in the order it appears.
tickangle
Sets the angle of the tick labels with respect
to the horizontal. For example, a `tickangle`
of -90 draws the tick labels vertically.
tickcolor
Sets the tick color.
tickfont
Sets the color bar's tick label font
tickformat
Sets the tick label formatting rule using d3
formatting mini-languages which are very
similar to those in Python. For numbers, see:
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format
And for dates see:
https://github.com/d3/d3-3.x-api-
reference/blob/master/Time-Formatting.md#format
We add one item to d3's date formatter: "%{n}f"
for fractional seconds with n digits. For
example, *2016-10-13 09:15:23.456* with
tickformat "%H~%M~%S.%2f" would display
"09~15~23.46"
tickformatstops
A tuple of :class:`plotly.graph_objects.histogr
am2d.colorbar.Tickformatstop` instances or
dicts with compatible properties
tickformatstopdefaults
When used in a template (as layout.template.dat
a.histogram2d.colorbar.tickformatstopdefaults),
sets the default property values to use for
elements of
histogram2d.colorbar.tickformatstops
ticklen
Sets the tick length (in px).
tickmode
Sets the tick mode for this axis. If "auto",
the number of ticks is set via `nticks`. If
"linear", the placement of the ticks is
determined by a starting position `tick0` and a
tick step `dtick` ("linear" is the default
value if `tick0` and `dtick` are provided). If
"array", the placement of the ticks is set via
`tickvals` and the tick text is `ticktext`.
("array" is the default value if `tickvals` is
provided).
tickprefix
Sets a tick label prefix.
ticks
Determines whether ticks are drawn or not. If
"", this axis' ticks are not drawn. If
"outside" ("inside"), this axis' are drawn
outside (inside) the axis lines.
ticksuffix
Sets a tick label suffix.
ticktext
Sets the text displayed at the ticks position
via `tickvals`. Only has an effect if
`tickmode` is set to "array". Used with
`tickvals`.
ticktextsrc
Sets the source reference on Chart Studio Cloud
for ticktext .
tickvals
Sets the values at which ticks on this axis
appear. Only has an effect if `tickmode` is set
to "array". Used with `ticktext`.
tickvalssrc
Sets the source reference on Chart Studio Cloud
for tickvals .
tickwidth
Sets the tick width (in px).
title
:class:`plotly.graph_objects.histogram2d.colorb
ar.Title` instance or dict with compatible
properties
titlefont
Deprecated: Please use
histogram2d.colorbar.title.font instead. Sets
this color bar's title font. Note that the
title's font used to be set by the now
deprecated `titlefont` attribute.
titleside
Deprecated: Please use
histogram2d.colorbar.title.side instead.
Determines the location of color bar's title
with respect to the color bar. Note that the
title's location used to be set by the now
deprecated `titleside` attribute.
x
Sets the x position of the color bar (in plot
fraction).
xanchor
Sets this color bar's horizontal position
anchor. This anchor binds the `x` position to
the "left", "center" or "right" of the color
bar.
xpad
Sets the amount of padding (in px) along the x
direction.
y
Sets the y position of the color bar (in plot
fraction).
yanchor
Sets this color bar's vertical position anchor
This anchor binds the `y` position to the
"top", "middle" or "bottom" of the color bar.
ypad
Sets the amount of padding (in px) along the y
direction.
Returns
-------
plotly.graph_objs.histogram2d.ColorBar
"""
return self["colorbar"]
@colorbar.setter
def colorbar(self, val):
self["colorbar"] = val
# colorscale
# ----------
@property
def colorscale(self):
"""
Sets the colorscale. The colorscale must be an array containing
arrays mapping a normalized value to an rgb, rgba, hex, hsl,
hsv, or named color string. At minimum, a mapping for the
lowest (0) and highest (1) values are required. For example,
`[[0, 'rgb(0,0,255)'], [1, 'rgb(255,0,0)']]`. To control the
bounds of the colorscale in color space, use`zmin` and `zmax`.
Alternatively, `colorscale` may be a palette name string of the
following list: Greys,YlGnBu,Greens,YlOrRd,Bluered,RdBu,Reds,Bl
ues,Picnic,Rainbow,Portland,Jet,Hot,Blackbody,Earth,Electric,Vi
ridis,Cividis.
The 'colorscale' property is a colorscale and may be
specified as:
- A list of colors that will be spaced evenly to create the colorscale.
Many predefined colorscale lists are included in the sequential, diverging,
and cyclical modules in the plotly.colors package.
- A list of 2-element lists where the first element is the
normalized color level value (starting at 0 and ending at 1),
and the second item is a valid color string.
(e.g. [[0, 'green'], [0.5, 'red'], [1.0, 'rgb(0, 0, 255)']])
- One of the following named colorscales:
['aggrnyl', 'agsunset', 'algae', 'amp', 'armyrose', 'balance',
'blackbody', 'bluered', 'blues', 'blugrn', 'bluyl', 'brbg',
'brwnyl', 'bugn', 'bupu', 'burg', 'burgyl', 'cividis', 'curl',
'darkmint', 'deep', 'delta', 'dense', 'earth', 'edge', 'electric',
'emrld', 'fall', 'geyser', 'gnbu', 'gray', 'greens', 'greys',
'haline', 'hot', 'hsv', 'ice', 'icefire', 'inferno', 'jet',
'magenta', 'magma', 'matter', 'mint', 'mrybm', 'mygbm', 'oranges',
'orrd', 'oryel', 'peach', 'phase', 'picnic', 'pinkyl', 'piyg',
'plasma', 'plotly3', 'portland', 'prgn', 'pubu', 'pubugn', 'puor',
'purd', 'purp', 'purples', 'purpor', 'rainbow', 'rdbu', 'rdgy',
'rdpu', 'rdylbu', 'rdylgn', 'redor', 'reds', 'solar', 'spectral',
'speed', 'sunset', 'sunsetdark', 'teal', 'tealgrn', 'tealrose',
'tempo', 'temps', 'thermal', 'tropic', 'turbid', 'twilight',
'viridis', 'ylgn', 'ylgnbu', 'ylorbr', 'ylorrd'].
Appending '_r' to a named colorscale reverses it.
Returns
-------
str
"""
return self["colorscale"]
@colorscale.setter
def colorscale(self, val):
self["colorscale"] = val
# customdata
# ----------
@property
def customdata(self):
"""
        Assigns extra data to each datum. This may be useful when
listening to hover, click and selection events. Note that,
"scatter" traces also appends customdata items in the markers
DOM elements
The 'customdata' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["customdata"]
@customdata.setter
def customdata(self, val):
self["customdata"] = val
# customdatasrc
# -------------
@property
def customdatasrc(self):
"""
Sets the source reference on Chart Studio Cloud for customdata
.
The 'customdatasrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["customdatasrc"]
@customdatasrc.setter
def customdatasrc(self, val):
self["customdatasrc"] = val
# histfunc
# --------
@property
def histfunc(self):
"""
Specifies the binning function used for this histogram trace.
If "count", the histogram values are computed by counting the
number of values lying inside each bin. If "sum", "avg", "min",
"max", the histogram values are computed using the sum, the
average, the minimum or the maximum of the values lying inside
each bin respectively.
The 'histfunc' property is an enumeration that may be specified as:
- One of the following enumeration values:
['count', 'sum', 'avg', 'min', 'max']
Returns
-------
Any
"""
return self["histfunc"]
@histfunc.setter
def histfunc(self, val):
self["histfunc"] = val
# histnorm
# --------
@property
def histnorm(self):
"""
Specifies the type of normalization used for this histogram
trace. If "", the span of each bar corresponds to the number of
occurrences (i.e. the number of data points lying inside the
bins). If "percent" / "probability", the span of each bar
corresponds to the percentage / fraction of occurrences with
respect to the total number of sample points (here, the sum of
all bin HEIGHTS equals 100% / 1). If "density", the span of
each bar corresponds to the number of occurrences in a bin
divided by the size of the bin interval (here, the sum of all
bin AREAS equals the total number of sample points). If
*probability density*, the area of each bar corresponds to the
probability that an event will fall into the corresponding bin
(here, the sum of all bin AREAS equals 1).
The 'histnorm' property is an enumeration that may be specified as:
- One of the following enumeration values:
['', 'percent', 'probability', 'density', 'probability
density']
Returns
-------
Any
"""
return self["histnorm"]
@histnorm.setter
def histnorm(self, val):
self["histnorm"] = val
# hoverinfo
# ---------
@property
def hoverinfo(self):
"""
Determines which trace information appear on hover. If `none`
or `skip` are set, no information is displayed upon hovering.
But, if `none` is set, click and hover events are still fired.
The 'hoverinfo' property is a flaglist and may be specified
as a string containing:
- Any combination of ['x', 'y', 'z', 'text', 'name'] joined with '+' characters
(e.g. 'x+y')
OR exactly one of ['all', 'none', 'skip'] (e.g. 'skip')
- A list or array of the above
Returns
-------
Any|numpy.ndarray
"""
return self["hoverinfo"]
@hoverinfo.setter
def hoverinfo(self, val):
self["hoverinfo"] = val
# hoverinfosrc
# ------------
@property
def hoverinfosrc(self):
"""
Sets the source reference on Chart Studio Cloud for hoverinfo
.
The 'hoverinfosrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["hoverinfosrc"]
@hoverinfosrc.setter
def hoverinfosrc(self, val):
self["hoverinfosrc"] = val
# hoverlabel
# ----------
@property
def hoverlabel(self):
"""
The 'hoverlabel' property is an instance of Hoverlabel
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.Hoverlabel`
- A dict of string/value properties that will be passed
to the Hoverlabel constructor
Supported dict properties:
align
Sets the horizontal alignment of the text
content within hover label box. Has an effect
                    only if the hover label text spans two or
more lines
alignsrc
Sets the source reference on Chart Studio Cloud
for align .
bgcolor
Sets the background color of the hover labels
for this trace
bgcolorsrc
Sets the source reference on Chart Studio Cloud
for bgcolor .
bordercolor
Sets the border color of the hover labels for
this trace.
bordercolorsrc
Sets the source reference on Chart Studio Cloud
for bordercolor .
font
Sets the font used in hover labels.
namelength
Sets the default length (in number of
characters) of the trace name in the hover
labels for all traces. -1 shows the whole name
regardless of length. 0-3 shows the first 0-3
characters, and an integer >3 will show the
whole name if it is less than that many
characters, but if it is longer, will truncate
to `namelength - 3` characters and add an
ellipsis.
namelengthsrc
Sets the source reference on Chart Studio Cloud
for namelength .
Returns
-------
plotly.graph_objs.histogram2d.Hoverlabel
"""
return self["hoverlabel"]
@hoverlabel.setter
def hoverlabel(self, val):
self["hoverlabel"] = val
# hovertemplate
# -------------
@property
def hovertemplate(self):
"""
Template string used for rendering the information that appear
on hover box. Note that this will override `hoverinfo`.
Variables are inserted using %{variable}, for example "y:
%{y}". Numbers are formatted using d3-format's syntax
%{variable:d3-format}, for example "Price: %{y:$.2f}".
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for details on
the formatting syntax. Dates are formatted using d3-time-
format's syntax %{variable|d3-time-format}, for example "Day:
%{2019-01-01|%A}". https://github.com/d3/d3-3.x-api-
reference/blob/master/Time-Formatting.md#format for details on
the date formatting syntax. The variables available in
`hovertemplate` are the ones emitted as event data described at
this link https://plotly.com/javascript/plotlyjs-events/#event-
        data. Additionally, every attribute that can be specified per-
        point (the ones that are `arrayOk: true`) is available, as is
        the variable `z`. Anything contained in tag `<extra>` is displayed
in the secondary box, for example
"<extra>{fullData.name}</extra>". To hide the secondary box
completely, use an empty tag `<extra></extra>`.
The 'hovertemplate' property is a string and must be specified as:
- A string
- A number that will be converted to a string
- A tuple, list, or one-dimensional numpy array of the above
Returns
-------
str|numpy.ndarray
"""
return self["hovertemplate"]
@hovertemplate.setter
def hovertemplate(self, val):
self["hovertemplate"] = val
# hovertemplatesrc
# ----------------
@property
def hovertemplatesrc(self):
"""
Sets the source reference on Chart Studio Cloud for
hovertemplate .
The 'hovertemplatesrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["hovertemplatesrc"]
@hovertemplatesrc.setter
def hovertemplatesrc(self, val):
self["hovertemplatesrc"] = val
# ids
# ---
@property
def ids(self):
"""
        Assigns id labels to each datum. These ids are used for object constancy
of data points during animation. Should be an array of strings,
not numbers or any other type.
The 'ids' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["ids"]
@ids.setter
def ids(self, val):
self["ids"] = val
# idssrc
# ------
@property
def idssrc(self):
"""
Sets the source reference on Chart Studio Cloud for ids .
The 'idssrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["idssrc"]
@idssrc.setter
def idssrc(self, val):
self["idssrc"] = val
# legendgroup
# -----------
@property
def legendgroup(self):
"""
Sets the legend group for this trace. Traces part of the same
legend group hide/show at the same time when toggling legend
items.
The 'legendgroup' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["legendgroup"]
@legendgroup.setter
def legendgroup(self, val):
self["legendgroup"] = val
# marker
# ------
@property
def marker(self):
"""
The 'marker' property is an instance of Marker
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.Marker`
- A dict of string/value properties that will be passed
to the Marker constructor
Supported dict properties:
color
Sets the aggregation data.
colorsrc
Sets the source reference on Chart Studio Cloud
for color .
Returns
-------
plotly.graph_objs.histogram2d.Marker
"""
return self["marker"]
@marker.setter
def marker(self, val):
self["marker"] = val
# meta
# ----
@property
def meta(self):
"""
Assigns extra meta information associated with this trace that
can be used in various text attributes. Attributes such as
trace `name`, graph, axis and colorbar `title.text`, annotation
        `text`, `rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta` values in
an attribute in the same trace, simply use `%{meta[i]}` where
`i` is the index or key of the `meta` item in question. To
access trace `meta` in layout attributes, use
        `%{data[n].meta[i]}` where `i` is the index or key of the
`meta` and `n` is the trace index.
The 'meta' property accepts values of any type
Returns
-------
Any|numpy.ndarray
"""
return self["meta"]
@meta.setter
def meta(self, val):
self["meta"] = val
# metasrc
# -------
@property
def metasrc(self):
"""
Sets the source reference on Chart Studio Cloud for meta .
The 'metasrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["metasrc"]
@metasrc.setter
def metasrc(self, val):
self["metasrc"] = val
# name
# ----
@property
def name(self):
"""
        Sets the trace name. The trace name appears as the legend item
and on hover.
The 'name' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["name"]
@name.setter
def name(self, val):
self["name"] = val
# nbinsx
# ------
@property
def nbinsx(self):
"""
Specifies the maximum number of desired bins. This value will
be used in an algorithm that will decide the optimal bin size
such that the histogram best visualizes the distribution of the
data. Ignored if `xbins.size` is provided.
        The 'nbinsx' property is an integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [0, 9223372036854775807]
Returns
-------
int
"""
return self["nbinsx"]
@nbinsx.setter
def nbinsx(self, val):
self["nbinsx"] = val
# nbinsy
# ------
@property
def nbinsy(self):
"""
Specifies the maximum number of desired bins. This value will
be used in an algorithm that will decide the optimal bin size
such that the histogram best visualizes the distribution of the
data. Ignored if `ybins.size` is provided.
        The 'nbinsy' property is an integer and may be specified as:
- An int (or float that will be cast to an int)
in the interval [0, 9223372036854775807]
Returns
-------
int
"""
return self["nbinsy"]
@nbinsy.setter
def nbinsy(self, val):
self["nbinsy"] = val
# opacity
# -------
@property
def opacity(self):
"""
Sets the opacity of the trace.
The 'opacity' property is a number and may be specified as:
- An int or float in the interval [0, 1]
Returns
-------
int|float
"""
return self["opacity"]
@opacity.setter
def opacity(self, val):
self["opacity"] = val
# reversescale
# ------------
@property
def reversescale(self):
"""
Reverses the color mapping if true. If true, `zmin` will
correspond to the last color in the array and `zmax` will
correspond to the first color.
The 'reversescale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["reversescale"]
@reversescale.setter
def reversescale(self, val):
self["reversescale"] = val
# showlegend
# ----------
@property
def showlegend(self):
"""
Determines whether or not an item corresponding to this trace
is shown in the legend.
The 'showlegend' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["showlegend"]
@showlegend.setter
def showlegend(self, val):
self["showlegend"] = val
# showscale
# ---------
@property
def showscale(self):
"""
Determines whether or not a colorbar is displayed for this
trace.
The 'showscale' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["showscale"]
@showscale.setter
def showscale(self, val):
self["showscale"] = val
# stream
# ------
@property
def stream(self):
"""
The 'stream' property is an instance of Stream
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.Stream`
- A dict of string/value properties that will be passed
to the Stream constructor
Supported dict properties:
maxpoints
Sets the maximum number of points to keep on
the plots from an incoming stream. If
`maxpoints` is set to 50, only the newest 50
points will be displayed on the plot.
token
The stream id number links a data trace on a
plot with a stream. See https://chart-
studio.plotly.com/settings for more details.
Returns
-------
plotly.graph_objs.histogram2d.Stream
"""
return self["stream"]
@stream.setter
def stream(self, val):
self["stream"] = val
# uid
# ---
@property
def uid(self):
"""
        Assign an id to this trace. Use this to provide object
constancy between traces during animations and transitions.
The 'uid' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["uid"]
@uid.setter
def uid(self, val):
self["uid"] = val
# uirevision
# ----------
@property
def uirevision(self):
"""
Controls persistence of some user-driven changes to the trace:
`constraintrange` in `parcoords` traces, as well as some
`editable: true` modifications such as `name` and
`colorbar.title`. Defaults to `layout.uirevision`. Note that
other user-driven trace attribute changes are controlled by
`layout` attributes: `trace.visible` is controlled by
`layout.legend.uirevision`, `selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)` (accessible
with `config: {editable: true}`) is controlled by
`layout.editrevision`. Trace changes are tracked by `uid`,
which only falls back on trace index if no `uid` is provided.
So if your app can add/remove traces before the end of the
`data` array, such that the same trace has a different index,
you can still preserve user-driven changes if you give each
trace a `uid` that stays with it as it moves.
The 'uirevision' property accepts values of any type
Returns
-------
Any
"""
return self["uirevision"]
@uirevision.setter
def uirevision(self, val):
self["uirevision"] = val
# visible
# -------
@property
def visible(self):
"""
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as a
legend item (provided that the legend itself is visible).
The 'visible' property is an enumeration that may be specified as:
- One of the following enumeration values:
[True, False, 'legendonly']
Returns
-------
Any
"""
return self["visible"]
@visible.setter
def visible(self, val):
self["visible"] = val
# x
# -
@property
def x(self):
"""
Sets the sample data to be binned on the x axis.
The 'x' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["x"]
@x.setter
def x(self, val):
self["x"] = val
# xaxis
# -----
@property
def xaxis(self):
"""
Sets a reference between this trace's x coordinates and a 2D
cartesian x axis. If "x" (the default value), the x coordinates
refer to `layout.xaxis`. If "x2", the x coordinates refer to
`layout.xaxis2`, and so on.
The 'xaxis' property is an identifier of a particular
subplot, of type 'x', that may be specified as the string 'x'
optionally followed by an integer >= 1
(e.g. 'x', 'x1', 'x2', 'x3', etc.)
Returns
-------
str
"""
return self["xaxis"]
@xaxis.setter
def xaxis(self, val):
self["xaxis"] = val
# xbingroup
# ---------
@property
def xbingroup(self):
"""
Set a group of histogram traces which will have compatible
x-bin settings. Using `xbingroup`, histogram2d and
histogram2dcontour traces (on axes of the same axis type) can
have compatible x-bin settings. Note that the same `xbingroup`
value can be used to set (1D) histogram `bingroup`
The 'xbingroup' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["xbingroup"]
@xbingroup.setter
def xbingroup(self, val):
self["xbingroup"] = val
# xbins
# -----
@property
def xbins(self):
"""
The 'xbins' property is an instance of XBins
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.XBins`
- A dict of string/value properties that will be passed
to the XBins constructor
Supported dict properties:
end
Sets the end value for the x axis bins. The
last bin may not end exactly at this value, we
increment the bin edge by `size` from `start`
until we reach or exceed `end`. Defaults to the
maximum data value. Like `start`, for dates use
a date string, and for category data `end` is
based on the category serial numbers.
size
Sets the size of each x axis bin. Default
behavior: If `nbinsx` is 0 or omitted, we
choose a nice round bin size such that the
number of bins is about the same as the typical
number of samples in each bin. If `nbinsx` is
provided, we choose a nice round bin size
giving no more than that many bins. For date
data, use milliseconds or "M<n>" for months, as
in `axis.dtick`. For category data, the number
of categories to bin together (always defaults
to 1).
start
Sets the starting value for the x axis bins.
Defaults to the minimum data value, shifted
down if necessary to make nice round values and
to remove ambiguous bin edges. For example, if
most of the data is integers we shift the bin
edges 0.5 down, so a `size` of 5 would have a
default `start` of -0.5, so it is clear that
0-4 are in the first bin, 5-9 in the second,
but continuous data gets a start of 0 and bins
[0,5), [5,10) etc. Dates behave similarly, and
`start` should be a date string. For category
data, `start` is based on the category serial
numbers, and defaults to -0.5.
Returns
-------
plotly.graph_objs.histogram2d.XBins
"""
return self["xbins"]
@xbins.setter
def xbins(self, val):
self["xbins"] = val
# xcalendar
# ---------
@property
def xcalendar(self):
"""
Sets the calendar system to use with `x` date data.
The 'xcalendar' property is an enumeration that may be specified as:
- One of the following enumeration values:
['gregorian', 'chinese', 'coptic', 'discworld',
'ethiopian', 'hebrew', 'islamic', 'julian', 'mayan',
'nanakshahi', 'nepali', 'persian', 'jalali', 'taiwan',
'thai', 'ummalqura']
Returns
-------
Any
"""
return self["xcalendar"]
@xcalendar.setter
def xcalendar(self, val):
self["xcalendar"] = val
# xgap
# ----
@property
def xgap(self):
"""
Sets the horizontal gap (in pixels) between bricks.
The 'xgap' property is a number and may be specified as:
- An int or float in the interval [0, inf]
Returns
-------
int|float
"""
return self["xgap"]
@xgap.setter
def xgap(self, val):
self["xgap"] = val
# xsrc
# ----
@property
def xsrc(self):
"""
Sets the source reference on Chart Studio Cloud for x .
The 'xsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["xsrc"]
@xsrc.setter
def xsrc(self, val):
self["xsrc"] = val
# y
# -
@property
def y(self):
"""
Sets the sample data to be binned on the y axis.
The 'y' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["y"]
@y.setter
def y(self, val):
self["y"] = val
# yaxis
# -----
@property
def yaxis(self):
"""
Sets a reference between this trace's y coordinates and a 2D
cartesian y axis. If "y" (the default value), the y coordinates
refer to `layout.yaxis`. If "y2", the y coordinates refer to
`layout.yaxis2`, and so on.
The 'yaxis' property is an identifier of a particular
subplot, of type 'y', that may be specified as the string 'y'
optionally followed by an integer >= 1
(e.g. 'y', 'y1', 'y2', 'y3', etc.)
Returns
-------
str
"""
return self["yaxis"]
@yaxis.setter
def yaxis(self, val):
self["yaxis"] = val
# ybingroup
# ---------
@property
def ybingroup(self):
"""
Set a group of histogram traces which will have compatible
y-bin settings. Using `ybingroup`, histogram2d and
histogram2dcontour traces (on axes of the same axis type) can
have compatible y-bin settings. Note that the same `ybingroup`
value can be used to set (1D) histogram `bingroup`
The 'ybingroup' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["ybingroup"]
@ybingroup.setter
def ybingroup(self, val):
self["ybingroup"] = val
# ybins
# -----
@property
def ybins(self):
"""
The 'ybins' property is an instance of YBins
that may be specified as:
- An instance of :class:`plotly.graph_objs.histogram2d.YBins`
- A dict of string/value properties that will be passed
to the YBins constructor
Supported dict properties:
end
Sets the end value for the y axis bins. The
last bin may not end exactly at this value, we
increment the bin edge by `size` from `start`
until we reach or exceed `end`. Defaults to the
maximum data value. Like `start`, for dates use
a date string, and for category data `end` is
based on the category serial numbers.
size
Sets the size of each y axis bin. Default
behavior: If `nbinsy` is 0 or omitted, we
choose a nice round bin size such that the
number of bins is about the same as the typical
number of samples in each bin. If `nbinsy` is
provided, we choose a nice round bin size
giving no more than that many bins. For date
data, use milliseconds or "M<n>" for months, as
in `axis.dtick`. For category data, the number
of categories to bin together (always defaults
to 1).
start
Sets the starting value for the y axis bins.
Defaults to the minimum data value, shifted
down if necessary to make nice round values and
to remove ambiguous bin edges. For example, if
most of the data is integers we shift the bin
edges 0.5 down, so a `size` of 5 would have a
default `start` of -0.5, so it is clear that
0-4 are in the first bin, 5-9 in the second,
but continuous data gets a start of 0 and bins
[0,5), [5,10) etc. Dates behave similarly, and
`start` should be a date string. For category
data, `start` is based on the category serial
numbers, and defaults to -0.5.
Returns
-------
plotly.graph_objs.histogram2d.YBins
"""
return self["ybins"]
@ybins.setter
def ybins(self, val):
self["ybins"] = val
# ycalendar
# ---------
@property
def ycalendar(self):
"""
Sets the calendar system to use with `y` date data.
The 'ycalendar' property is an enumeration that may be specified as:
- One of the following enumeration values:
['gregorian', 'chinese', 'coptic', 'discworld',
'ethiopian', 'hebrew', 'islamic', 'julian', 'mayan',
'nanakshahi', 'nepali', 'persian', 'jalali', 'taiwan',
'thai', 'ummalqura']
Returns
-------
Any
"""
return self["ycalendar"]
@ycalendar.setter
def ycalendar(self, val):
self["ycalendar"] = val
# ygap
# ----
@property
def ygap(self):
"""
Sets the vertical gap (in pixels) between bricks.
The 'ygap' property is a number and may be specified as:
- An int or float in the interval [0, inf]
Returns
-------
int|float
"""
return self["ygap"]
@ygap.setter
def ygap(self, val):
self["ygap"] = val
# ysrc
# ----
@property
def ysrc(self):
"""
Sets the source reference on Chart Studio Cloud for y .
The 'ysrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["ysrc"]
@ysrc.setter
def ysrc(self, val):
self["ysrc"] = val
# z
# -
@property
def z(self):
"""
Sets the aggregation data.
The 'z' property is an array that may be specified as a tuple,
list, numpy array, or pandas Series
Returns
-------
numpy.ndarray
"""
return self["z"]
@z.setter
def z(self, val):
self["z"] = val
# zauto
# -----
@property
def zauto(self):
"""
Determines whether or not the color domain is computed with
respect to the input data (here in `z`) or the bounds set in
        `zmin` and `zmax`. Defaults to `false` when `zmin` and `zmax`
are set by the user.
The 'zauto' property must be specified as a bool
(either True, or False)
Returns
-------
bool
"""
return self["zauto"]
@zauto.setter
def zauto(self, val):
self["zauto"] = val
# zhoverformat
# ------------
@property
def zhoverformat(self):
"""
Sets the hover text formatting rule using d3 formatting mini-
languages which are very similar to those in Python. See:
https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format
The 'zhoverformat' property is a string and must be specified as:
- A string
- A number that will be converted to a string
Returns
-------
str
"""
return self["zhoverformat"]
@zhoverformat.setter
def zhoverformat(self, val):
self["zhoverformat"] = val
# zmax
# ----
@property
def zmax(self):
"""
Sets the upper bound of the color domain. Value should have the
same units as in `z` and if set, `zmin` must be set as well.
The 'zmax' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["zmax"]
@zmax.setter
def zmax(self, val):
self["zmax"] = val
# zmid
# ----
@property
def zmid(self):
"""
Sets the mid-point of the color domain by scaling `zmin` and/or
`zmax` to be equidistant to this point. Value should have the
same units as in `z`. Has no effect when `zauto` is `false`.
The 'zmid' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["zmid"]
@zmid.setter
def zmid(self, val):
self["zmid"] = val
# zmin
# ----
@property
def zmin(self):
"""
Sets the lower bound of the color domain. Value should have the
same units as in `z` and if set, `zmax` must be set as well.
The 'zmin' property is a number and may be specified as:
- An int or float
Returns
-------
int|float
"""
return self["zmin"]
@zmin.setter
def zmin(self, val):
self["zmin"] = val
# zsmooth
# -------
@property
def zsmooth(self):
"""
        Picks a smoothing algorithm used to smooth `z` data.
The 'zsmooth' property is an enumeration that may be specified as:
- One of the following enumeration values:
['fast', 'best', False]
Returns
-------
Any
"""
return self["zsmooth"]
@zsmooth.setter
def zsmooth(self, val):
self["zsmooth"] = val
# zsrc
# ----
@property
def zsrc(self):
"""
Sets the source reference on Chart Studio Cloud for z .
The 'zsrc' property must be specified as a string or
as a plotly.grid_objs.Column object
Returns
-------
str
"""
return self["zsrc"]
@zsrc.setter
def zsrc(self, val):
self["zsrc"] = val
# type
# ----
@property
def type(self):
return self._props["type"]
# Self properties description
# ---------------------------
@property
def _prop_descriptions(self):
return """\
autobinx
Obsolete: since v1.42 each bin attribute is auto-
determined separately and `autobinx` is not needed.
However, we accept `autobinx: true` or `false` and will
update `xbins` accordingly before deleting `autobinx`
from the trace.
autobiny
Obsolete: since v1.42 each bin attribute is auto-
determined separately and `autobiny` is not needed.
However, we accept `autobiny: true` or `false` and will
update `ybins` accordingly before deleting `autobiny`
from the trace.
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`colorscale`. In case `colorscale` is unspecified or
`autocolorscale` is true, the default palette will be
chosen according to whether numbers in the `color`
array are all positive, all negative or mixed.
bingroup
            Set the `xbingroup` and `ybingroup` default prefix. For
            example, setting a `bingroup` of 1 on two histogram2d
            traces will make their x-bins and y-bins match
separately.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.histogram2d.ColorBar`
instance or dict with compatible properties
colorscale
Sets the colorscale. The colorscale must be an array
containing arrays mapping a normalized value to an rgb,
rgba, hex, hsl, hsv, or named color string. At minimum,
            a mapping for the lowest (0) and highest (1) values is
required. For example, `[[0, 'rgb(0,0,255)'], [1,
'rgb(255,0,0)']]`. To control the bounds of the
colorscale in color space, use`zmin` and `zmax`.
Alternatively, `colorscale` may be a palette name
string of the following list: Greys,YlGnBu,Greens,YlOrR
d,Bluered,RdBu,Reds,Blues,Picnic,Rainbow,Portland,Jet,H
ot,Blackbody,Earth,Electric,Viridis,Cividis.
customdata
            Assigns extra data to each datum. This may be useful when
            listening to hover, click and selection events. Note
            that "scatter" traces also append customdata items in
            the markers DOM elements.
customdatasrc
Sets the source reference on Chart Studio Cloud for
customdata .
histfunc
Specifies the binning function used for this histogram
trace. If "count", the histogram values are computed by
counting the number of values lying inside each bin. If
"sum", "avg", "min", "max", the histogram values are
computed using the sum, the average, the minimum or the
maximum of the values lying inside each bin
respectively.
histnorm
Specifies the type of normalization used for this
histogram trace. If "", the span of each bar
corresponds to the number of occurrences (i.e. the
number of data points lying inside the bins). If
"percent" / "probability", the span of each bar
corresponds to the percentage / fraction of occurrences
with respect to the total number of sample points
(here, the sum of all bin HEIGHTS equals 100% / 1). If
"density", the span of each bar corresponds to the
number of occurrences in a bin divided by the size of
the bin interval (here, the sum of all bin AREAS equals
the total number of sample points). If *probability
density*, the area of each bar corresponds to the
probability that an event will fall into the
corresponding bin (here, the sum of all bin AREAS
equals 1).
hoverinfo
            Determines which trace information appears on hover. If
`none` or `skip` are set, no information is displayed
upon hovering. But, if `none` is set, click and hover
events are still fired.
hoverinfosrc
Sets the source reference on Chart Studio Cloud for
hoverinfo .
hoverlabel
:class:`plotly.graph_objects.histogram2d.Hoverlabel`
instance or dict with compatible properties
hovertemplate
Template string used for rendering the information that
            appears in the hover box. Note that this will override
`hoverinfo`. Variables are inserted using %{variable},
for example "y: %{y}". Numbers are formatted using
d3-format's syntax %{variable:d3-format}, for example
"Price: %{y:$.2f}". https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for
details on the formatting syntax. Dates are formatted
using d3-time-format's syntax %{variable|d3-time-
format}, for example "Day: %{2019-01-01|%A}".
https://github.com/d3/d3-3.x-api-
reference/blob/master/Time-Formatting.md#format for
details on the date formatting syntax. The variables
available in `hovertemplate` are the ones emitted as
event data described at this link
https://plotly.com/javascript/plotlyjs-events/#event-
            data. Additionally, every attribute that can be
            specified per-point (the ones that are `arrayOk: true`)
            is available, as is the variable `z`. Anything contained in tag
`<extra>` is displayed in the secondary box, for
example "<extra>{fullData.name}</extra>". To hide the
secondary box completely, use an empty tag
`<extra></extra>`.
hovertemplatesrc
Sets the source reference on Chart Studio Cloud for
hovertemplate .
ids
            Assigns id labels to each datum. These ids provide object
constancy of data points during animation. Should be an
array of strings, not numbers or any other type.
idssrc
Sets the source reference on Chart Studio Cloud for
ids .
legendgroup
Sets the legend group for this trace. Traces part of
the same legend group hide/show at the same time when
toggling legend items.
marker
:class:`plotly.graph_objects.histogram2d.Marker`
instance or dict with compatible properties
meta
Assigns extra meta information associated with this
trace that can be used in various text attributes.
Attributes such as trace `name`, graph, axis and
            colorbar `title.text`, annotation `text`,
            `rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta`
values in an attribute in the same trace, simply use
`%{meta[i]}` where `i` is the index or key of the
`meta` item in question. To access trace `meta` in
            layout attributes, use `%{data[n].meta[i]}` where `i`
is the index or key of the `meta` and `n` is the trace
index.
metasrc
Sets the source reference on Chart Studio Cloud for
meta .
name
            Sets the trace name. The trace name appears as the
legend item and on hover.
nbinsx
Specifies the maximum number of desired bins. This
value will be used in an algorithm that will decide the
optimal bin size such that the histogram best
visualizes the distribution of the data. Ignored if
`xbins.size` is provided.
nbinsy
Specifies the maximum number of desired bins. This
value will be used in an algorithm that will decide the
optimal bin size such that the histogram best
visualizes the distribution of the data. Ignored if
`ybins.size` is provided.
opacity
Sets the opacity of the trace.
reversescale
Reverses the color mapping if true. If true, `zmin`
will correspond to the last color in the array and
`zmax` will correspond to the first color.
showlegend
Determines whether or not an item corresponding to this
trace is shown in the legend.
showscale
Determines whether or not a colorbar is displayed for
this trace.
stream
:class:`plotly.graph_objects.histogram2d.Stream`
instance or dict with compatible properties
uid
            Assign an id to this trace. Use this to provide object
constancy between traces during animations and
transitions.
uirevision
Controls persistence of some user-driven changes to the
trace: `constraintrange` in `parcoords` traces, as well
as some `editable: true` modifications such as `name`
and `colorbar.title`. Defaults to `layout.uirevision`.
Note that other user-driven trace attribute changes are
controlled by `layout` attributes: `trace.visible` is
controlled by `layout.legend.uirevision`,
`selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)`
(accessible with `config: {editable: true}`) is
controlled by `layout.editrevision`. Trace changes are
tracked by `uid`, which only falls back on trace index
if no `uid` is provided. So if your app can add/remove
traces before the end of the `data` array, such that
the same trace has a different index, you can still
preserve user-driven changes if you give each trace a
`uid` that stays with it as it moves.
visible
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as
a legend item (provided that the legend itself is
visible).
x
Sets the sample data to be binned on the x axis.
xaxis
Sets a reference between this trace's x coordinates and
a 2D cartesian x axis. If "x" (the default value), the
x coordinates refer to `layout.xaxis`. If "x2", the x
coordinates refer to `layout.xaxis2`, and so on.
xbingroup
Set a group of histogram traces which will have
compatible x-bin settings. Using `xbingroup`,
histogram2d and histogram2dcontour traces (on axes of
the same axis type) can have compatible x-bin settings.
Note that the same `xbingroup` value can be used to set
(1D) histogram `bingroup`
xbins
:class:`plotly.graph_objects.histogram2d.XBins`
instance or dict with compatible properties
xcalendar
Sets the calendar system to use with `x` date data.
xgap
Sets the horizontal gap (in pixels) between bricks.
xsrc
Sets the source reference on Chart Studio Cloud for x
.
y
Sets the sample data to be binned on the y axis.
yaxis
Sets a reference between this trace's y coordinates and
a 2D cartesian y axis. If "y" (the default value), the
y coordinates refer to `layout.yaxis`. If "y2", the y
coordinates refer to `layout.yaxis2`, and so on.
ybingroup
Set a group of histogram traces which will have
compatible y-bin settings. Using `ybingroup`,
histogram2d and histogram2dcontour traces (on axes of
the same axis type) can have compatible y-bin settings.
Note that the same `ybingroup` value can be used to set
(1D) histogram `bingroup`
ybins
:class:`plotly.graph_objects.histogram2d.YBins`
instance or dict with compatible properties
ycalendar
Sets the calendar system to use with `y` date data.
ygap
Sets the vertical gap (in pixels) between bricks.
ysrc
Sets the source reference on Chart Studio Cloud for y
.
z
Sets the aggregation data.
zauto
Determines whether or not the color domain is computed
with respect to the input data (here in `z`) or the
            bounds set in `zmin` and `zmax`. Defaults to `false`
when `zmin` and `zmax` are set by the user.
zhoverformat
Sets the hover text formatting rule using d3 formatting
mini-languages which are very similar to those in
Python. See: https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format
zmax
Sets the upper bound of the color domain. Value should
have the same units as in `z` and if set, `zmin` must
be set as well.
zmid
Sets the mid-point of the color domain by scaling
`zmin` and/or `zmax` to be equidistant to this point.
Value should have the same units as in `z`. Has no
effect when `zauto` is `false`.
zmin
Sets the lower bound of the color domain. Value should
have the same units as in `z` and if set, `zmax` must
be set as well.
zsmooth
            Picks a smoothing algorithm used to smooth `z` data.
zsrc
Sets the source reference on Chart Studio Cloud for z
.
"""
def __init__(
self,
arg=None,
autobinx=None,
autobiny=None,
autocolorscale=None,
bingroup=None,
coloraxis=None,
colorbar=None,
colorscale=None,
customdata=None,
customdatasrc=None,
histfunc=None,
histnorm=None,
hoverinfo=None,
hoverinfosrc=None,
hoverlabel=None,
hovertemplate=None,
hovertemplatesrc=None,
ids=None,
idssrc=None,
legendgroup=None,
marker=None,
meta=None,
metasrc=None,
name=None,
nbinsx=None,
nbinsy=None,
opacity=None,
reversescale=None,
showlegend=None,
showscale=None,
stream=None,
uid=None,
uirevision=None,
visible=None,
x=None,
xaxis=None,
xbingroup=None,
xbins=None,
xcalendar=None,
xgap=None,
xsrc=None,
y=None,
yaxis=None,
ybingroup=None,
ybins=None,
ycalendar=None,
ygap=None,
ysrc=None,
z=None,
zauto=None,
zhoverformat=None,
zmax=None,
zmid=None,
zmin=None,
zsmooth=None,
zsrc=None,
**kwargs
):
"""
Construct a new Histogram2d object
        The sample data from which statistics are computed is set in
        `x` and `y` (where `x` and `y` represent marginal
        distributions and binning is set in `xbins` and `ybins` in this
        case) or `z` (where `z` represents the 2D distribution and
        binning is set by `x` and `y` in this case). The
resulting distribution is visualized as a heatmap.
Parameters
----------
arg
dict of properties compatible with this constructor or
an instance of :class:`plotly.graph_objs.Histogram2d`
autobinx
Obsolete: since v1.42 each bin attribute is auto-
determined separately and `autobinx` is not needed.
However, we accept `autobinx: true` or `false` and will
update `xbins` accordingly before deleting `autobinx`
from the trace.
autobiny
Obsolete: since v1.42 each bin attribute is auto-
determined separately and `autobiny` is not needed.
However, we accept `autobiny: true` or `false` and will
update `ybins` accordingly before deleting `autobiny`
from the trace.
autocolorscale
Determines whether the colorscale is a default palette
(`autocolorscale: true`) or the palette determined by
`colorscale`. In case `colorscale` is unspecified or
`autocolorscale` is true, the default palette will be
chosen according to whether numbers in the `color`
array are all positive, all negative or mixed.
bingroup
            Set the `xbingroup` and `ybingroup` default prefix. For
            example, setting a `bingroup` of 1 on two histogram2d
            traces will make their x-bins and y-bins match
separately.
coloraxis
Sets a reference to a shared color axis. References to
these shared color axes are "coloraxis", "coloraxis2",
"coloraxis3", etc. Settings for these shared color axes
are set in the layout, under `layout.coloraxis`,
`layout.coloraxis2`, etc. Note that multiple color
scales can be linked to the same color axis.
colorbar
:class:`plotly.graph_objects.histogram2d.ColorBar`
instance or dict with compatible properties
colorscale
Sets the colorscale. The colorscale must be an array
containing arrays mapping a normalized value to an rgb,
rgba, hex, hsl, hsv, or named color string. At minimum,
a mapping for the lowest (0) and highest (1) values are
required. For example, `[[0, 'rgb(0,0,255)'], [1,
'rgb(255,0,0)']]`. To control the bounds of the
colorscale in color space, use`zmin` and `zmax`.
Alternatively, `colorscale` may be a palette name
string of the following list: Greys,YlGnBu,Greens,YlOrR
d,Bluered,RdBu,Reds,Blues,Picnic,Rainbow,Portland,Jet,H
ot,Blackbody,Earth,Electric,Viridis,Cividis.
customdata
            Assigns extra data to each datum. This may be useful when
            listening to hover, click and selection events. Note
            that "scatter" traces also append customdata items in
            the markers DOM elements.
customdatasrc
Sets the source reference on Chart Studio Cloud for
customdata .
histfunc
Specifies the binning function used for this histogram
trace. If "count", the histogram values are computed by
counting the number of values lying inside each bin. If
"sum", "avg", "min", "max", the histogram values are
computed using the sum, the average, the minimum or the
maximum of the values lying inside each bin
respectively.
histnorm
Specifies the type of normalization used for this
histogram trace. If "", the span of each bar
corresponds to the number of occurrences (i.e. the
number of data points lying inside the bins). If
"percent" / "probability", the span of each bar
corresponds to the percentage / fraction of occurrences
with respect to the total number of sample points
(here, the sum of all bin HEIGHTS equals 100% / 1). If
"density", the span of each bar corresponds to the
number of occurrences in a bin divided by the size of
the bin interval (here, the sum of all bin AREAS equals
the total number of sample points). If *probability
density*, the area of each bar corresponds to the
probability that an event will fall into the
corresponding bin (here, the sum of all bin AREAS
equals 1).
hoverinfo
            Determines which trace information appears on hover. If
`none` or `skip` are set, no information is displayed
upon hovering. But, if `none` is set, click and hover
events are still fired.
hoverinfosrc
Sets the source reference on Chart Studio Cloud for
hoverinfo .
hoverlabel
:class:`plotly.graph_objects.histogram2d.Hoverlabel`
instance or dict with compatible properties
hovertemplate
Template string used for rendering the information that
            appears in the hover box. Note that this will override
`hoverinfo`. Variables are inserted using %{variable},
for example "y: %{y}". Numbers are formatted using
d3-format's syntax %{variable:d3-format}, for example
"Price: %{y:$.2f}". https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format for
details on the formatting syntax. Dates are formatted
using d3-time-format's syntax %{variable|d3-time-
format}, for example "Day: %{2019-01-01|%A}".
https://github.com/d3/d3-3.x-api-
reference/blob/master/Time-Formatting.md#format for
details on the date formatting syntax. The variables
available in `hovertemplate` are the ones emitted as
event data described at this link
https://plotly.com/javascript/plotlyjs-events/#event-
            data. Additionally, every attribute that can be
            specified per-point (the ones that are `arrayOk: true`)
            is available, as is the variable `z`. Anything contained in tag
`<extra>` is displayed in the secondary box, for
example "<extra>{fullData.name}</extra>". To hide the
secondary box completely, use an empty tag
`<extra></extra>`.
hovertemplatesrc
Sets the source reference on Chart Studio Cloud for
hovertemplate .
ids
            Assigns id labels to each datum. These ids provide object
constancy of data points during animation. Should be an
array of strings, not numbers or any other type.
idssrc
Sets the source reference on Chart Studio Cloud for
ids .
legendgroup
Sets the legend group for this trace. Traces part of
the same legend group hide/show at the same time when
toggling legend items.
marker
:class:`plotly.graph_objects.histogram2d.Marker`
instance or dict with compatible properties
meta
Assigns extra meta information associated with this
trace that can be used in various text attributes.
Attributes such as trace `name`, graph, axis and
            colorbar `title.text`, annotation `text`,
            `rangeselector`, `updatemenus` and `sliders` `label`
text all support `meta`. To access the trace `meta`
values in an attribute in the same trace, simply use
`%{meta[i]}` where `i` is the index or key of the
`meta` item in question. To access trace `meta` in
            layout attributes, use `%{data[n].meta[i]}` where `i`
is the index or key of the `meta` and `n` is the trace
index.
metasrc
Sets the source reference on Chart Studio Cloud for
meta .
name
            Sets the trace name. The trace name appears as the
legend item and on hover.
nbinsx
Specifies the maximum number of desired bins. This
value will be used in an algorithm that will decide the
optimal bin size such that the histogram best
visualizes the distribution of the data. Ignored if
`xbins.size` is provided.
nbinsy
Specifies the maximum number of desired bins. This
value will be used in an algorithm that will decide the
optimal bin size such that the histogram best
visualizes the distribution of the data. Ignored if
`ybins.size` is provided.
opacity
Sets the opacity of the trace.
reversescale
Reverses the color mapping if true. If true, `zmin`
will correspond to the last color in the array and
`zmax` will correspond to the first color.
showlegend
Determines whether or not an item corresponding to this
trace is shown in the legend.
showscale
Determines whether or not a colorbar is displayed for
this trace.
stream
:class:`plotly.graph_objects.histogram2d.Stream`
instance or dict with compatible properties
uid
            Assign an id to this trace. Use this to provide object
constancy between traces during animations and
transitions.
uirevision
Controls persistence of some user-driven changes to the
trace: `constraintrange` in `parcoords` traces, as well
as some `editable: true` modifications such as `name`
and `colorbar.title`. Defaults to `layout.uirevision`.
Note that other user-driven trace attribute changes are
controlled by `layout` attributes: `trace.visible` is
controlled by `layout.legend.uirevision`,
`selectedpoints` is controlled by
`layout.selectionrevision`, and `colorbar.(x|y)`
(accessible with `config: {editable: true}`) is
controlled by `layout.editrevision`. Trace changes are
tracked by `uid`, which only falls back on trace index
if no `uid` is provided. So if your app can add/remove
traces before the end of the `data` array, such that
the same trace has a different index, you can still
preserve user-driven changes if you give each trace a
`uid` that stays with it as it moves.
visible
Determines whether or not this trace is visible. If
"legendonly", the trace is not drawn, but can appear as
a legend item (provided that the legend itself is
visible).
x
Sets the sample data to be binned on the x axis.
xaxis
Sets a reference between this trace's x coordinates and
a 2D cartesian x axis. If "x" (the default value), the
x coordinates refer to `layout.xaxis`. If "x2", the x
coordinates refer to `layout.xaxis2`, and so on.
xbingroup
Set a group of histogram traces which will have
compatible x-bin settings. Using `xbingroup`,
histogram2d and histogram2dcontour traces (on axes of
the same axis type) can have compatible x-bin settings.
Note that the same `xbingroup` value can be used to set
(1D) histogram `bingroup`
xbins
:class:`plotly.graph_objects.histogram2d.XBins`
instance or dict with compatible properties
xcalendar
Sets the calendar system to use with `x` date data.
xgap
Sets the horizontal gap (in pixels) between bricks.
xsrc
Sets the source reference on Chart Studio Cloud for x
.
y
Sets the sample data to be binned on the y axis.
yaxis
Sets a reference between this trace's y coordinates and
a 2D cartesian y axis. If "y" (the default value), the
y coordinates refer to `layout.yaxis`. If "y2", the y
coordinates refer to `layout.yaxis2`, and so on.
ybingroup
Set a group of histogram traces which will have
compatible y-bin settings. Using `ybingroup`,
histogram2d and histogram2dcontour traces (on axes of
the same axis type) can have compatible y-bin settings.
Note that the same `ybingroup` value can be used to set
(1D) histogram `bingroup`
ybins
:class:`plotly.graph_objects.histogram2d.YBins`
instance or dict with compatible properties
ycalendar
Sets the calendar system to use with `y` date data.
ygap
Sets the vertical gap (in pixels) between bricks.
ysrc
Sets the source reference on Chart Studio Cloud for y
.
z
Sets the aggregation data.
zauto
Determines whether or not the color domain is computed
with respect to the input data (here in `z`) or the
            bounds set in `zmin` and `zmax`. Defaults to `false`
when `zmin` and `zmax` are set by the user.
zhoverformat
Sets the hover text formatting rule using d3 formatting
mini-languages which are very similar to those in
Python. See: https://github.com/d3/d3-3.x-api-
reference/blob/master/Formatting.md#d3_format
zmax
Sets the upper bound of the color domain. Value should
have the same units as in `z` and if set, `zmin` must
be set as well.
zmid
Sets the mid-point of the color domain by scaling
`zmin` and/or `zmax` to be equidistant to this point.
Value should have the same units as in `z`. Has no
effect when `zauto` is `false`.
zmin
Sets the lower bound of the color domain. Value should
have the same units as in `z` and if set, `zmax` must
be set as well.
zsmooth
            Picks a smoothing algorithm used to smooth `z` data.
zsrc
Sets the source reference on Chart Studio Cloud for z
.
Returns
-------
Histogram2d
"""
super(Histogram2d, self).__init__("histogram2d")
if "_parent" in kwargs:
self._parent = kwargs["_parent"]
return
# Validate arg
# ------------
if arg is None:
arg = {}
elif isinstance(arg, self.__class__):
arg = arg.to_plotly_json()
elif isinstance(arg, dict):
arg = _copy.copy(arg)
else:
raise ValueError(
"""\
The first argument to the plotly.graph_objs.Histogram2d
constructor must be a dict or
an instance of :class:`plotly.graph_objs.Histogram2d`"""
)
# Handle skip_invalid
# -------------------
self._skip_invalid = kwargs.pop("skip_invalid", False)
self._validate = kwargs.pop("_validate", True)
# Populate data dict with properties
# ----------------------------------
_v = arg.pop("autobinx", None)
_v = autobinx if autobinx is not None else _v
if _v is not None:
self["autobinx"] = _v
_v = arg.pop("autobiny", None)
_v = autobiny if autobiny is not None else _v
if _v is not None:
self["autobiny"] = _v
_v = arg.pop("autocolorscale", None)
_v = autocolorscale if autocolorscale is not None else _v
if _v is not None:
self["autocolorscale"] = _v
_v = arg.pop("bingroup", None)
_v = bingroup if bingroup is not None else _v
if _v is not None:
self["bingroup"] = _v
_v = arg.pop("coloraxis", None)
_v = coloraxis if coloraxis is not None else _v
if _v is not None:
self["coloraxis"] = _v
_v = arg.pop("colorbar", None)
_v = colorbar if colorbar is not None else _v
if _v is not None:
self["colorbar"] = _v
_v = arg.pop("colorscale", None)
_v = colorscale if colorscale is not None else _v
if _v is not None:
self["colorscale"] = _v
_v = arg.pop("customdata", None)
_v = customdata if customdata is not None else _v
if _v is not None:
self["customdata"] = _v
_v = arg.pop("customdatasrc", None)
_v = customdatasrc if customdatasrc is not None else _v
if _v is not None:
self["customdatasrc"] = _v
_v = arg.pop("histfunc", None)
_v = histfunc if histfunc is not None else _v
if _v is not None:
self["histfunc"] = _v
_v = arg.pop("histnorm", None)
_v = histnorm if histnorm is not None else _v
if _v is not None:
self["histnorm"] = _v
_v = arg.pop("hoverinfo", None)
_v = hoverinfo if hoverinfo is not None else _v
if _v is not None:
self["hoverinfo"] = _v
_v = arg.pop("hoverinfosrc", None)
_v = hoverinfosrc if hoverinfosrc is not None else _v
if _v is not None:
self["hoverinfosrc"] = _v
_v = arg.pop("hoverlabel", None)
_v = hoverlabel if hoverlabel is not None else _v
if _v is not None:
self["hoverlabel"] = _v
_v = arg.pop("hovertemplate", None)
_v = hovertemplate if hovertemplate is not None else _v
if _v is not None:
self["hovertemplate"] = _v
_v = arg.pop("hovertemplatesrc", None)
_v = hovertemplatesrc if hovertemplatesrc is not None else _v
if _v is not None:
self["hovertemplatesrc"] = _v
_v = arg.pop("ids", None)
_v = ids if ids is not None else _v
if _v is not None:
self["ids"] = _v
_v = arg.pop("idssrc", None)
_v = idssrc if idssrc is not None else _v
if _v is not None:
self["idssrc"] = _v
_v = arg.pop("legendgroup", None)
_v = legendgroup if legendgroup is not None else _v
if _v is not None:
self["legendgroup"] = _v
_v = arg.pop("marker", None)
_v = marker if marker is not None else _v
if _v is not None:
self["marker"] = _v
_v = arg.pop("meta", None)
_v = meta if meta is not None else _v
if _v is not None:
self["meta"] = _v
_v = arg.pop("metasrc", None)
_v = metasrc if metasrc is not None else _v
if _v is not None:
self["metasrc"] = _v
_v = arg.pop("name", None)
_v = name if name is not None else _v
if _v is not None:
self["name"] = _v
_v = arg.pop("nbinsx", None)
_v = nbinsx if nbinsx is not None else _v
if _v is not None:
self["nbinsx"] = _v
_v = arg.pop("nbinsy", None)
_v = nbinsy if nbinsy is not None else _v
if _v is not None:
self["nbinsy"] = _v
_v = arg.pop("opacity", None)
_v = opacity if opacity is not None else _v
if _v is not None:
self["opacity"] = _v
_v = arg.pop("reversescale", None)
_v = reversescale if reversescale is not None else _v
if _v is not None:
self["reversescale"] = _v
_v = arg.pop("showlegend", None)
_v = showlegend if showlegend is not None else _v
if _v is not None:
self["showlegend"] = _v
_v = arg.pop("showscale", None)
_v = showscale if showscale is not None else _v
if _v is not None:
self["showscale"] = _v
_v = arg.pop("stream", None)
_v = stream if stream is not None else _v
if _v is not None:
self["stream"] = _v
_v = arg.pop("uid", None)
_v = uid if uid is not None else _v
if _v is not None:
self["uid"] = _v
_v = arg.pop("uirevision", None)
_v = uirevision if uirevision is not None else _v
if _v is not None:
self["uirevision"] = _v
_v = arg.pop("visible", None)
_v = visible if visible is not None else _v
if _v is not None:
self["visible"] = _v
_v = arg.pop("x", None)
_v = x if x is not None else _v
if _v is not None:
self["x"] = _v
_v = arg.pop("xaxis", None)
_v = xaxis if xaxis is not None else _v
if _v is not None:
self["xaxis"] = _v
_v = arg.pop("xbingroup", None)
_v = xbingroup if xbingroup is not None else _v
if _v is not None:
self["xbingroup"] = _v
_v = arg.pop("xbins", None)
_v = xbins if xbins is not None else _v
if _v is not None:
self["xbins"] = _v
_v = arg.pop("xcalendar", None)
_v = xcalendar if xcalendar is not None else _v
if _v is not None:
self["xcalendar"] = _v
_v = arg.pop("xgap", None)
_v = xgap if xgap is not None else _v
if _v is not None:
self["xgap"] = _v
_v = arg.pop("xsrc", None)
_v = xsrc if xsrc is not None else _v
if _v is not None:
self["xsrc"] = _v
_v = arg.pop("y", None)
_v = y if y is not None else _v
if _v is not None:
self["y"] = _v
_v = arg.pop("yaxis", None)
_v = yaxis if yaxis is not None else _v
if _v is not None:
self["yaxis"] = _v
_v = arg.pop("ybingroup", None)
_v = ybingroup if ybingroup is not None else _v
if _v is not None:
self["ybingroup"] = _v
_v = arg.pop("ybins", None)
_v = ybins if ybins is not None else _v
if _v is not None:
self["ybins"] = _v
_v = arg.pop("ycalendar", None)
_v = ycalendar if ycalendar is not None else _v
if _v is not None:
self["ycalendar"] = _v
_v = arg.pop("ygap", None)
_v = ygap if ygap is not None else _v
if _v is not None:
self["ygap"] = _v
_v = arg.pop("ysrc", None)
_v = ysrc if ysrc is not None else _v
if _v is not None:
self["ysrc"] = _v
_v = arg.pop("z", None)
_v = z if z is not None else _v
if _v is not None:
self["z"] = _v
_v = arg.pop("zauto", None)
_v = zauto if zauto is not None else _v
if _v is not None:
self["zauto"] = _v
_v = arg.pop("zhoverformat", None)
_v = zhoverformat if zhoverformat is not None else _v
if _v is not None:
self["zhoverformat"] = _v
_v = arg.pop("zmax", None)
_v = zmax if zmax is not None else _v
if _v is not None:
self["zmax"] = _v
_v = arg.pop("zmid", None)
_v = zmid if zmid is not None else _v
if _v is not None:
self["zmid"] = _v
_v = arg.pop("zmin", None)
_v = zmin if zmin is not None else _v
if _v is not None:
self["zmin"] = _v
_v = arg.pop("zsmooth", None)
_v = zsmooth if zsmooth is not None else _v
if _v is not None:
self["zsmooth"] = _v
_v = arg.pop("zsrc", None)
_v = zsrc if zsrc is not None else _v
if _v is not None:
self["zsrc"] = _v
# Read-only literals
# ------------------
self._props["type"] = "histogram2d"
arg.pop("type", None)
# Process unknown kwargs
# ----------------------
self._process_kwargs(**dict(arg, **kwargs))
# Reset skip_invalid
# ------------------
self._skip_invalid = False
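# Usage sketch (illustration only, not part of the generated class): the
# constructor above accepts either raw samples in `x`/`y` or pre-binned values
# in `z`. A minimal figure built from raw samples might look like
#
#     import numpy as np
#     import plotly.graph_objects as go
#
#     x = np.random.randn(500)
#     y = np.random.randn(500)
#     fig = go.Figure(go.Histogram2d(x=x, y=y, nbinsx=20, nbinsy=20,
#                                    colorscale="Viridis", showscale=True))
#     fig.show()
#
# The property names used here are the ones documented in this class; the data
# and figure-level calls are assumptions made for the sake of the example.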
| mit |
yanchen036/tensorflow | tensorflow/python/estimator/inputs/queues/feeding_queue_runner_test.py | 116 | 5164 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests `FeedingQueueRunner` using arrays and `DataFrames`."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.client import session
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
from tensorflow.python.framework import ops
from tensorflow.python.platform import test
from tensorflow.python.training import coordinator
from tensorflow.python.training import queue_runner_impl
try:
# pylint: disable=g-import-not-at-top
import pandas as pd
HAS_PANDAS = True
except IOError:
# Pandas writes a temporary file during import. If it fails, don't use pandas.
HAS_PANDAS = False
except ImportError:
HAS_PANDAS = False
def get_rows(array, row_indices):
rows = [array[i] for i in row_indices]
return np.vstack(rows)
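# For example, get_rows(np.arange(6).reshape(3, 2), [2, 0]) stacks rows 2 and 0
# into [[4, 5], [0, 1]]; the tests below use it to build the expected batches.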
class FeedingQueueRunnerTestCase(test.TestCase):
"""Tests for `FeedingQueueRunner`."""
def testArrayFeeding(self):
with ops.Graph().as_default():
array = np.arange(32).reshape([16, 2])
q = ff._enqueue_data(array, capacity=100)
batch_size = 3
dq_op = q.dequeue_many(batch_size)
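      # As exercised by the assertions below, each dequeue yields the original
      # row indices first (dq[0]) and the corresponding array rows second
      # (dq[1]).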
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for i in range(100):
indices = [
j % array.shape[0]
for j in range(batch_size * i, batch_size * (i + 1))
]
expected_dq = get_rows(array, indices)
dq = sess.run(dq_op)
np.testing.assert_array_equal(indices, dq[0])
np.testing.assert_array_equal(expected_dq, dq[1])
coord.request_stop()
coord.join(threads)
def testArrayFeedingMultiThread(self):
with ops.Graph().as_default():
array = np.arange(256).reshape([128, 2])
q = ff._enqueue_data(array, capacity=128, num_threads=8, shuffle=True)
batch_size = 3
dq_op = q.dequeue_many(batch_size)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for _ in range(100):
dq = sess.run(dq_op)
indices = dq[0]
expected_dq = get_rows(array, indices)
np.testing.assert_array_equal(expected_dq, dq[1])
coord.request_stop()
coord.join(threads)
def testPandasFeeding(self):
if not HAS_PANDAS:
return
with ops.Graph().as_default():
array1 = np.arange(32)
array2 = np.arange(32, 64)
df = pd.DataFrame({"a": array1, "b": array2}, index=np.arange(64, 96))
q = ff._enqueue_data(df, capacity=100)
batch_size = 5
dq_op = q.dequeue_many(5)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for i in range(100):
indices = [
j % array1.shape[0]
for j in range(batch_size * i, batch_size * (i + 1))
]
expected_df_indices = df.index[indices]
expected_rows = df.iloc[indices]
dq = sess.run(dq_op)
np.testing.assert_array_equal(expected_df_indices, dq[0])
for col_num, col in enumerate(df.columns):
np.testing.assert_array_equal(expected_rows[col].values,
dq[col_num + 1])
coord.request_stop()
coord.join(threads)
def testPandasFeedingMultiThread(self):
if not HAS_PANDAS:
return
with ops.Graph().as_default():
array1 = np.arange(128, 256)
array2 = 2 * array1
df = pd.DataFrame({"a": array1, "b": array2}, index=np.arange(128))
q = ff._enqueue_data(df, capacity=128, num_threads=8, shuffle=True)
batch_size = 5
dq_op = q.dequeue_many(batch_size)
with session.Session() as sess:
coord = coordinator.Coordinator()
threads = queue_runner_impl.start_queue_runners(sess=sess, coord=coord)
for _ in range(100):
dq = sess.run(dq_op)
indices = dq[0]
expected_rows = df.iloc[indices]
for col_num, col in enumerate(df.columns):
np.testing.assert_array_equal(expected_rows[col].values,
dq[col_num + 1])
coord.request_stop()
coord.join(threads)
if __name__ == "__main__":
test.main()
| apache-2.0 |
jmschrei/scikit-learn | examples/model_selection/plot_roc_crossval.py | 1 | 3453 | """
=============================================================
Receiver Operating Characteristic (ROC) with cross validation
=============================================================
Example of Receiver Operating Characteristic (ROC) metric to evaluate
classifier output quality using cross-validation.
ROC curves typically feature true positive rate on the Y axis, and false
positive rate on the X axis. This means that the top left corner of the plot is
the "ideal" point - a false positive rate of zero, and a true positive rate of
one. This is not very realistic, but it does mean that a larger area under the
curve (AUC) is usually better.
The "steepness" of ROC curves is also important, since it is ideal to maximize
the true positive rate while minimizing the false positive rate.
This example shows the ROC response of different datasets, created from K-fold
cross-validation. Taking all of these curves, it is possible to calculate the
mean area under curve, and see the variance of the curve when the
training set is split into different subsets. This roughly shows how the
classifier output is affected by changes in the training data, and how
different the splits generated by K-fold cross-validation are from one another.
.. note::
See also :func:`sklearn.metrics.auc_score`,
:func:`sklearn.cross_validation.cross_val_score`,
:ref:`example_model_selection_plot_roc.py`,
"""
print(__doc__)
import numpy as np
from scipy import interp
import matplotlib.pyplot as plt
from itertools import cycle
from sklearn import svm, datasets
from sklearn.metrics import roc_curve, auc
from sklearn.cross_validation import StratifiedKFold
###############################################################################
# Data IO and generation
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
X, y = X[y != 2], y[y != 2]
n_samples, n_features = X.shape
# Add noisy features
random_state = np.random.RandomState(0)
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
###############################################################################
# Classification and ROC analysis
# Run classifier with cross-validation and plot ROC curves
cv = StratifiedKFold(y, n_folds=6)
classifier = svm.SVC(kernel='linear', probability=True,
random_state=random_state)
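# `probability=True` enables the extra calibration step SVC needs in order to
# expose `predict_proba`, which the cross-validation loop below uses to build
# the per-fold ROC curves.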
mean_tpr = 0.0
mean_fpr = np.linspace(0, 1, 100)
all_tpr = []
colors = cycle(['cyan', 'indigo', 'seagreen', 'yellow', 'blue', 'darkorange'])
lw = 2
i = 0
for (train, test), color in zip(cv, colors):
probas_ = classifier.fit(X[train], y[train]).predict_proba(X[test])
    # Compute ROC curve and area under the curve
fpr, tpr, thresholds = roc_curve(y[test], probas_[:, 1])
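    # Each fold yields a ROC curve sampled at its own thresholds, so the TPR is
    # interpolated onto the shared `mean_fpr` grid before being accumulated;
    # dividing by the number of folds later gives the mean ROC curve.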
mean_tpr += interp(mean_fpr, fpr, tpr)
mean_tpr[0] = 0.0
roc_auc = auc(fpr, tpr)
plt.plot(fpr, tpr, lw=lw, color=color,
label='ROC fold %d (area = %0.2f)' % (i, roc_auc))
i += 1
plt.plot([0, 1], [0, 1], linestyle='--', lw=lw, color='k',
label='Luck')
mean_tpr /= len(cv)
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
plt.plot(mean_fpr, mean_tpr, color='g', linestyle='--',
label='Mean ROC (area = %0.2f)' % mean_auc, lw=lw)
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
| bsd-3-clause |
plissonf/scikit-learn | sklearn/linear_model/tests/test_omp.py | 272 | 7752 | # Author: Vlad Niculae
# Licence: BSD 3 clause
import numpy as np
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn.linear_model import (orthogonal_mp, orthogonal_mp_gram,
OrthogonalMatchingPursuit,
OrthogonalMatchingPursuitCV,
LinearRegression)
from sklearn.utils import check_random_state
from sklearn.datasets import make_sparse_coded_signal
n_samples, n_features, n_nonzero_coefs, n_targets = 20, 30, 5, 3
y, X, gamma = make_sparse_coded_signal(n_targets, n_features, n_samples,
n_nonzero_coefs, random_state=0)
G, Xy = np.dot(X.T, X), np.dot(X.T, y)
# this makes X (n_samples, n_features)
# and y (n_samples, 3)
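# G is the Gram matrix np.dot(X.T, X) and Xy is np.dot(X.T, y); the *_gram
# variants of OMP work from these precomputed quantities instead of X and y.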
def test_correct_shapes():
assert_equal(orthogonal_mp(X, y[:, 0], n_nonzero_coefs=5).shape,
(n_features,))
assert_equal(orthogonal_mp(X, y, n_nonzero_coefs=5).shape,
(n_features, 3))
def test_correct_shapes_gram():
assert_equal(orthogonal_mp_gram(G, Xy[:, 0], n_nonzero_coefs=5).shape,
(n_features,))
assert_equal(orthogonal_mp_gram(G, Xy, n_nonzero_coefs=5).shape,
(n_features, 3))
def test_n_nonzero_coefs():
assert_true(np.count_nonzero(orthogonal_mp(X, y[:, 0],
n_nonzero_coefs=5)) <= 5)
assert_true(np.count_nonzero(orthogonal_mp(X, y[:, 0], n_nonzero_coefs=5,
precompute=True)) <= 5)
def test_tol():
tol = 0.5
gamma = orthogonal_mp(X, y[:, 0], tol=tol)
gamma_gram = orthogonal_mp(X, y[:, 0], tol=tol, precompute=True)
assert_true(np.sum((y[:, 0] - np.dot(X, gamma)) ** 2) <= tol)
assert_true(np.sum((y[:, 0] - np.dot(X, gamma_gram)) ** 2) <= tol)
def test_with_without_gram():
assert_array_almost_equal(
orthogonal_mp(X, y, n_nonzero_coefs=5),
orthogonal_mp(X, y, n_nonzero_coefs=5, precompute=True))
def test_with_without_gram_tol():
assert_array_almost_equal(
orthogonal_mp(X, y, tol=1.),
orthogonal_mp(X, y, tol=1., precompute=True))
def test_unreachable_accuracy():
assert_array_almost_equal(
orthogonal_mp(X, y, tol=0),
orthogonal_mp(X, y, n_nonzero_coefs=n_features))
assert_array_almost_equal(
assert_warns(RuntimeWarning, orthogonal_mp, X, y, tol=0,
precompute=True),
orthogonal_mp(X, y, precompute=True,
n_nonzero_coefs=n_features))
def test_bad_input():
assert_raises(ValueError, orthogonal_mp, X, y, tol=-1)
assert_raises(ValueError, orthogonal_mp, X, y, n_nonzero_coefs=-1)
assert_raises(ValueError, orthogonal_mp, X, y,
n_nonzero_coefs=n_features + 1)
assert_raises(ValueError, orthogonal_mp_gram, G, Xy, tol=-1)
assert_raises(ValueError, orthogonal_mp_gram, G, Xy, n_nonzero_coefs=-1)
assert_raises(ValueError, orthogonal_mp_gram, G, Xy,
n_nonzero_coefs=n_features + 1)
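# With a noiseless sparse signal, OMP should recover the exact support
# (nonzero indices) of the true coefficients and closely approximate their
# values, both from (X, y) and from the precomputed Gram quantities.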
def test_perfect_signal_recovery():
idx, = gamma[:, 0].nonzero()
gamma_rec = orthogonal_mp(X, y[:, 0], 5)
gamma_gram = orthogonal_mp_gram(G, Xy[:, 0], 5)
assert_array_equal(idx, np.flatnonzero(gamma_rec))
assert_array_equal(idx, np.flatnonzero(gamma_gram))
assert_array_almost_equal(gamma[:, 0], gamma_rec, decimal=2)
assert_array_almost_equal(gamma[:, 0], gamma_gram, decimal=2)
def test_estimator():
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs)
omp.fit(X, y[:, 0])
assert_equal(omp.coef_.shape, (n_features,))
assert_equal(omp.intercept_.shape, ())
assert_true(np.count_nonzero(omp.coef_) <= n_nonzero_coefs)
omp.fit(X, y)
assert_equal(omp.coef_.shape, (n_targets, n_features))
assert_equal(omp.intercept_.shape, (n_targets,))
assert_true(np.count_nonzero(omp.coef_) <= n_targets * n_nonzero_coefs)
omp.set_params(fit_intercept=False, normalize=False)
omp.fit(X, y[:, 0])
assert_equal(omp.coef_.shape, (n_features,))
assert_equal(omp.intercept_, 0)
assert_true(np.count_nonzero(omp.coef_) <= n_nonzero_coefs)
omp.fit(X, y)
assert_equal(omp.coef_.shape, (n_targets, n_features))
assert_equal(omp.intercept_, 0)
assert_true(np.count_nonzero(omp.coef_) <= n_targets * n_nonzero_coefs)
def test_identical_regressors():
newX = X.copy()
newX[:, 1] = newX[:, 0]
gamma = np.zeros(n_features)
gamma[0] = gamma[1] = 1.
newy = np.dot(newX, gamma)
assert_warns(RuntimeWarning, orthogonal_mp, newX, newy, 2)
def test_swapped_regressors():
gamma = np.zeros(n_features)
# X[:, 21] should be selected first, then X[:, 0] selected second,
# which will take X[:, 21]'s place in case the algorithm does
# column swapping for optimization (which is the case at the moment)
gamma[21] = 1.0
gamma[0] = 0.5
new_y = np.dot(X, gamma)
new_Xy = np.dot(X.T, new_y)
gamma_hat = orthogonal_mp(X, new_y, 2)
gamma_hat_gram = orthogonal_mp_gram(G, new_Xy, 2)
assert_array_equal(np.flatnonzero(gamma_hat), [0, 21])
assert_array_equal(np.flatnonzero(gamma_hat_gram), [0, 21])
def test_no_atoms():
y_empty = np.zeros_like(y)
Xy_empty = np.dot(X.T, y_empty)
gamma_empty = ignore_warnings(orthogonal_mp)(X, y_empty, 1)
gamma_empty_gram = ignore_warnings(orthogonal_mp)(G, Xy_empty, 1)
assert_equal(np.all(gamma_empty == 0), True)
assert_equal(np.all(gamma_empty_gram == 0), True)
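# return_path=True yields the whole coefficient path with one slice per OMP
# iteration; the final slice must equal the solution returned without a path.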
def test_omp_path():
path = orthogonal_mp(X, y, n_nonzero_coefs=5, return_path=True)
last = orthogonal_mp(X, y, n_nonzero_coefs=5, return_path=False)
assert_equal(path.shape, (n_features, n_targets, 5))
assert_array_almost_equal(path[:, :, -1], last)
path = orthogonal_mp_gram(G, Xy, n_nonzero_coefs=5, return_path=True)
last = orthogonal_mp_gram(G, Xy, n_nonzero_coefs=5, return_path=False)
assert_equal(path.shape, (n_features, n_targets, 5))
assert_array_almost_equal(path[:, :, -1], last)
def test_omp_return_path_prop_with_gram():
path = orthogonal_mp(X, y, n_nonzero_coefs=5, return_path=True,
precompute=True)
last = orthogonal_mp(X, y, n_nonzero_coefs=5, return_path=False,
precompute=True)
assert_equal(path.shape, (n_features, n_targets, 5))
assert_array_almost_equal(path[:, :, -1], last)
def test_omp_cv():
y_ = y[:, 0]
gamma_ = gamma[:, 0]
ompcv = OrthogonalMatchingPursuitCV(normalize=True, fit_intercept=False,
max_iter=10, cv=5)
ompcv.fit(X, y_)
assert_equal(ompcv.n_nonzero_coefs_, n_nonzero_coefs)
assert_array_almost_equal(ompcv.coef_, gamma_)
omp = OrthogonalMatchingPursuit(normalize=True, fit_intercept=False,
n_nonzero_coefs=ompcv.n_nonzero_coefs_)
omp.fit(X, y_)
assert_array_almost_equal(ompcv.coef_, omp.coef_)
def test_omp_reaches_least_squares():
# Use small simple data; it's a sanity check but OMP can stop early
rng = check_random_state(0)
n_samples, n_features = (10, 8)
n_targets = 3
X = rng.randn(n_samples, n_features)
Y = rng.randn(n_samples, n_targets)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_features)
lstsq = LinearRegression()
omp.fit(X, Y)
lstsq.fit(X, Y)
assert_array_almost_equal(omp.coef_, lstsq.coef_)
| bsd-3-clause |
Cophy08/ggplot | ggplot/stats/stat_bin2d.py | 8 | 2679 | from __future__ import (absolute_import, division, print_function,
unicode_literals)
from collections import defaultdict
import pandas as pd
import numpy as np
from ggplot.utils import make_iterable_ntimes
from .stat import stat
_MSG_STATUS = """stat_bin2d is still under construction.
The resulting plot lacks color to indicate the counts/density in each bin
and if grouping/facetting is used you get more bins than specified and
they vary in size between the groups.
see: https://github.com/yhat/ggplot/pull/266#issuecomment-41355513
https://github.com/yhat/ggplot/issues/283
"""
class stat_bin2d(stat):
REQUIRED_AES = {'x', 'y'}
DEFAULT_PARAMS = {'geom': 'rect', 'position': 'identity',
'bins': 30, 'drop': True, 'weight': 1,
'right': False}
CREATES = {'xmin', 'xmax', 'ymin', 'ymax', 'fill'}
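# REQUIRED_AES are the aesthetics the caller must supply, DEFAULT_PARAMS the
# stat's defaults (including the geom used to draw the result), and CREATES
# the columns this stat adds to the data it returns; these names follow the
# conventions of this ggplot port's stat base class.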
def _calculate(self, data):
self._print_warning(_MSG_STATUS)
x = data.pop('x')
y = data.pop('y')
bins = self.params['bins']
drop = self.params['drop']
right = self.params['right']
weight = make_iterable_ntimes(self.params['weight'], len(x))
# create the cutting parameters
x_assignments, xbreaks = pd.cut(x, bins=bins, labels=False,
right=right, retbins=True)
y_assignments, ybreaks = pd.cut(y, bins=bins, labels=False,
right=right, retbins=True)
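# x_assignments / y_assignments hold each point's integer bin index
# (labels=False) and xbreaks / ybreaks the bin edges (retbins=True), which
# together determine the rectangle every observation falls into.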
# create rectangles
# xmin, xmax, ymin, ymax, fill=count
df = pd.DataFrame({'xbin': x_assignments,
'ybin': y_assignments,
'weights': weight})
table = pd.pivot_table(df, values='weights',
index=['xbin', 'ybin'], aggfunc=np.sum)
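# 'table' maps each (xbin, ybin) pair to the summed weights in that 2D bin,
# i.e. the raw count when every weight is 1.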
rects = np.array([[xbreaks[i], xbreaks[i+1],
ybreaks[j], ybreaks[j+1],
table[(i, j)]]
for (i, j) in table.keys()])
new_data = pd.DataFrame(rects, columns=['xmin', 'xmax',
'ymin', 'ymax',
'fill'])
# !!! assign colors???
# TODO: Remove this when visual mapping is applied after
# computing the stats
new_data['fill'] = ['#333333'] * len(new_data)
# Copy the other aesthetics into the new dataframe
# Note: There probably shouldn't be any for this stat
n = len(new_data)
for ae in data:
new_data[ae] = make_iterable_ntimes(data[ae].iloc[0], n)
return new_data
| bsd-2-clause |