repo_name | path | copies | size | content | license
---|---|---|---|---|---|
gotomypc/scikit-learn | examples/model_selection/plot_precision_recall.py | 249 | 6150 | """
================
Precision-Recall
================
Example of Precision-Recall metric to evaluate classifier output quality.
In information retrieval, precision is a measure of result relevancy, while
recall is a measure of how many truly relevant results are returned. A high
area under the curve represents both high recall and high precision, where high
precision relates to a low false positive rate, and high recall relates to a
low false negative rate. High scores for both show that the classifier is
returning accurate results (high precision), as well as returning a majority of
all positive results (high recall).
A system with high recall but low precision returns many results, but most of
its predicted labels are incorrect when compared to the training labels. A
system with high precision but low recall is just the opposite, returning very
few results, but most of its predicted labels are correct when compared to the
training labels. An ideal system with high precision and high recall will
return many results, with all results labeled correctly.
Precision (:math:`P`) is defined as the number of true positives (:math:`T_p`)
over the number of true positives plus the number of false positives
(:math:`F_p`).
:math:`P = \\frac{T_p}{T_p+F_p}`
Recall (:math:`R`) is defined as the number of true positives (:math:`T_p`)
over the number of true positives plus the number of false negatives
(:math:`F_n`).
:math:`R = \\frac{T_p}{T_p + F_n}`
These quantities are also related to the (:math:`F_1`) score, which is defined
as the harmonic mean of precision and recall.
:math:`F1 = 2\\frac{P \\times R}{P+R}`
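For example, with hypothetical counts :math:`T_p = 8`, :math:`F_p = 2` and
:math:`F_n = 4`, this gives :math:`P = 8/10 = 0.8`, :math:`R = 8/12 \\approx 0.67`
and :math:`F1 \\approx 0.73`.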
It is important to note that precision does not necessarily decrease as recall increases. The
definition of precision (:math:`\\frac{T_p}{T_p + F_p}`) shows that lowering
the threshold of a classifier may increase the denominator, by increasing the
number of results returned. If the threshold was previously set too high, the
new results may all be true positives, which will increase precision. If the
previous threshold was about right or too low, further lowering the threshold
will introduce false positives, decreasing precision.
Recall is defined as :math:`\\frac{T_p}{T_p+F_n}`, where :math:`T_p+F_n` does
not depend on the classifier threshold. This means that lowering the classifier
threshold may increase recall, by increasing the number of true positive
results. It is also possible that lowering the threshold may leave recall
unchanged, while the precision fluctuates.
The relationship between recall and precision can be observed in the
stairstep area of the plot - at the edges of these steps a small change
in the threshold considerably reduces precision, with only a minor gain in
recall. See the corner at recall = .59, precision = .8 for an example of this
phenomenon.
Precision-recall curves are typically used in binary classification to study
the output of a classifier. In order to extend the precision-recall curve and
average precision to multi-class or multi-label classification, it is necessary
to binarize the output. One curve can be drawn per label, but one can also draw
a precision-recall curve by considering each element of the label indicator
matrix as a binary prediction (micro-averaging).
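For instance, ravelling a label indicator matrix of shape (n_samples, n_classes)
together with the corresponding score matrix yields two vectors of length
n_samples * n_classes, over which a single micro-averaged curve is computed (this
is what ``.ravel()`` does in the code below).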

.. note::

    See also :func:`sklearn.metrics.average_precision_score`,
    :func:`sklearn.metrics.recall_score`,
    :func:`sklearn.metrics.precision_score`,
    :func:`sklearn.metrics.f1_score`
"""
print(__doc__)
import matplotlib.pyplot as plt
import numpy as np
from sklearn import svm, datasets
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import average_precision_score
from sklearn.cross_validation import train_test_split
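# Note: sklearn.cross_validation was deprecated in scikit-learn 0.18 and removed
# in 0.20; with a newer scikit-learn the equivalent import would be
# "from sklearn.model_selection import train_test_split".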
from sklearn.preprocessing import label_binarize
from sklearn.multiclass import OneVsRestClassifier
# import some data to play with
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Binarize the output
y = label_binarize(y, classes=[0, 1, 2])
n_classes = y.shape[1]
# Add noisy features
random_state = np.random.RandomState(0)
n_samples, n_features = X.shape
X = np.c_[X, random_state.randn(n_samples, 200 * n_features)]
# Split into training and test
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.5,
random_state=random_state)
# Run classifier
classifier = OneVsRestClassifier(svm.SVC(kernel='linear', probability=True,
random_state=random_state))
y_score = classifier.fit(X_train, y_train).decision_function(X_test)
# Compute Precision-Recall and plot curve
precision = dict()
recall = dict()
average_precision = dict()
for i in range(n_classes):
precision[i], recall[i], _ = precision_recall_curve(y_test[:, i],
y_score[:, i])
average_precision[i] = average_precision_score(y_test[:, i], y_score[:, i])
# Compute micro-averaged precision-recall curve and average precision
precision["micro"], recall["micro"], _ = precision_recall_curve(y_test.ravel(),
y_score.ravel())
average_precision["micro"] = average_precision_score(y_test, y_score,
average="micro")
# Plot Precision-Recall curve
plt.clf()
plt.plot(recall[0], precision[0], label='Precision-Recall curve')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title('Precision-Recall example: AUC={0:0.2f}'.format(average_precision[0]))
plt.legend(loc="lower left")
plt.show()
# Plot Precision-Recall curve for each class
plt.clf()
plt.plot(recall["micro"], precision["micro"],
label='micro-average Precision-recall curve (area = {0:0.2f})'
''.format(average_precision["micro"]))
for i in range(n_classes):
plt.plot(recall[i], precision[i],
label='Precision-recall curve of class {0} (area = {1:0.2f})'
''.format(i, average_precision[i]))
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Extension of Precision-Recall curve to multi-class')
plt.legend(loc="lower right")
plt.show()
| bsd-3-clause |
rohit21122012/DCASE2013 | runs/2016/dnn2016med_gd_5/src/dataset.py | 55 | 78980 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv
import locale
import os
import socket
import tarfile
import urllib2
import zipfile
import numpy
from sklearn.cross_validation import StratifiedShuffleSplit, KFold
from files import *
from general import *
from ui import *
class Dataset(object):
"""Dataset base class.
    The specific dataset classes inherit from this class and reimplement only the methods they need.
"""
def __init__(self, data_path='data', name='dataset'):
"""__init__ method.
Parameters
----------
data_path : str
Basepath where the dataset is stored.
(Default value='data')
"""
# Folder name for dataset
self.name = name
# Path to the dataset
self.local_path = os.path.join(data_path, self.name)
# Create the dataset path if does not exist
if not os.path.isdir(self.local_path):
os.makedirs(self.local_path)
# Evaluation setup folder
self.evaluation_setup_folder = 'evaluation_setup'
# Path to the folder containing evaluation setup files
self.evaluation_setup_path = os.path.join(self.local_path, self.evaluation_setup_folder)
# Meta data file, csv-format
self.meta_filename = 'meta.txt'
# Path to meta data file
self.meta_file = os.path.join(self.local_path, self.meta_filename)
# Hash file to detect removed or added files
self.filelisthash_filename = 'filelist.hash'
# Number of evaluation folds
self.evaluation_folds = 1
# List containing dataset package items
# Define this in the inherited class.
# Format:
# {
# 'remote_package': download_url,
# 'local_package': os.path.join(self.local_path, 'name_of_downloaded_package'),
# 'local_audio_path': os.path.join(self.local_path, 'name_of_folder_containing_audio_files'),
# }
self.package_list = []
# List of audio files
self.files = None
# List of meta data dict
self.meta_data = None
# Training meta data for folds
self.evaluation_data_train = {}
# Testing meta data for folds
self.evaluation_data_test = {}
# Recognized audio extensions
self.audio_extensions = {'wav', 'flac'}
# Info fields for dataset
self.authors = ''
self.name_remote = ''
self.url = ''
self.audio_source = ''
self.audio_type = ''
self.recording_device_model = ''
self.microphone_model = ''
@property
def audio_files(self):
"""Get all audio files in the dataset
Parameters
----------
Nothing
Returns
-------
filelist : list
File list with absolute paths
"""
if self.files is None:
self.files = []
for item in self.package_list:
path = item['local_audio_path']
if path:
l = os.listdir(path)
for f in l:
file_name, file_extension = os.path.splitext(f)
if file_extension[1:] in self.audio_extensions:
self.files.append(os.path.abspath(os.path.join(path, f)))
self.files.sort()
return self.files
@property
def audio_file_count(self):
"""Get number of audio files in dataset
Parameters
----------
Nothing
Returns
-------
filecount : int
Number of audio files
"""
return len(self.audio_files)
@property
def meta(self):
"""Get meta data for dataset. If not already read from disk, data is read and returned.
Parameters
----------
Nothing
Returns
-------
meta_data : list
List containing meta data as dict.
Raises
-------
IOError
meta file not found.
"""
if self.meta_data is None:
self.meta_data = []
meta_id = 0
if os.path.isfile(self.meta_file):
f = open(self.meta_file, 'rt')
try:
reader = csv.reader(f, delimiter='\t')
for row in reader:
if len(row) == 2:
# Scene meta
self.meta_data.append({'file': row[0], 'scene_label': row[1].rstrip()})
elif len(row) == 4:
# Audio tagging meta
self.meta_data.append(
{'file': row[0], 'scene_label': row[1].rstrip(), 'tag_string': row[2].rstrip(),
'tags': row[3].split(';')})
elif len(row) == 6:
# Event meta
self.meta_data.append({'file': row[0],
'scene_label': row[1].rstrip(),
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4].rstrip(),
'event_type': row[5].rstrip(),
'id': meta_id
})
meta_id += 1
finally:
f.close()
else:
raise IOError("Meta file not found [%s]" % self.meta_file)
return self.meta_data
@property
def meta_count(self):
"""Number of meta data items.
Parameters
----------
Nothing
Returns
-------
meta_item_count : int
Meta data item count
"""
return len(self.meta)
@property
def fold_count(self):
"""Number of fold in the evaluation setup.
Parameters
----------
Nothing
Returns
-------
fold_count : int
Number of folds
"""
return self.evaluation_folds
@property
def scene_labels(self):
"""List of unique scene labels in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of scene labels in alphabetical order.
"""
labels = []
for item in self.meta:
if 'scene_label' in item and item['scene_label'] not in labels:
labels.append(item['scene_label'])
labels.sort()
return labels
@property
def scene_label_count(self):
"""Number of unique scene labels in the meta data.
Parameters
----------
Nothing
Returns
-------
scene_label_count : int
Number of unique scene labels.
"""
return len(self.scene_labels)
@property
def event_labels(self):
"""List of unique event labels in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of event labels in alphabetical order.
"""
labels = []
for item in self.meta:
if 'event_label' in item and item['event_label'].rstrip() not in labels:
labels.append(item['event_label'].rstrip())
labels.sort()
return labels
@property
def event_label_count(self):
"""Number of unique event labels in the meta data.
Parameters
----------
Nothing
Returns
-------
event_label_count : int
Number of unique event labels
"""
return len(self.event_labels)
@property
def audio_tags(self):
"""List of unique audio tags in the meta data.
Parameters
----------
Nothing
Returns
-------
labels : list
List of audio tags in alphabetical order.
"""
tags = []
for item in self.meta:
if 'tags' in item:
for tag in item['tags']:
if tag and tag not in tags:
tags.append(tag)
tags.sort()
return tags
@property
def audio_tag_count(self):
"""Number of unique audio tags in the meta data.
Parameters
----------
Nothing
Returns
-------
audio_tag_count : int
Number of unique audio tags
"""
return len(self.audio_tags)
def __getitem__(self, i):
"""Getting meta data item
Parameters
----------
i : int
item id
Returns
-------
meta_data : dict
Meta data item
"""
if i < len(self.meta):
return self.meta[i]
else:
return None
def __iter__(self):
"""Iterator for meta data items
Parameters
----------
Nothing
Returns
-------
Nothing
"""
i = 0
meta = self[i]
        # yield meta items while they are valid
while meta is not None:
yield meta
# get next item
i += 1
meta = self[i]
@staticmethod
def print_bytes(num_bytes):
"""Output number of bytes according to locale and with IEC binary prefixes
Parameters
----------
num_bytes : int > 0 [scalar]
Bytes
Returns
-------
bytes : str
Human readable string
"""
KiB = 1024
MiB = KiB * KiB
GiB = KiB * MiB
TiB = KiB * GiB
PiB = KiB * TiB
EiB = KiB * PiB
ZiB = KiB * EiB
YiB = KiB * ZiB
locale.setlocale(locale.LC_ALL, '')
output = locale.format("%d", num_bytes, grouping=True) + ' bytes'
if num_bytes > YiB:
output += ' (%.4g YiB)' % (num_bytes / YiB)
elif num_bytes > ZiB:
output += ' (%.4g ZiB)' % (num_bytes / ZiB)
elif num_bytes > EiB:
output += ' (%.4g EiB)' % (num_bytes / EiB)
elif num_bytes > PiB:
output += ' (%.4g PiB)' % (num_bytes / PiB)
elif num_bytes > TiB:
output += ' (%.4g TiB)' % (num_bytes / TiB)
elif num_bytes > GiB:
output += ' (%.4g GiB)' % (num_bytes / GiB)
elif num_bytes > MiB:
output += ' (%.4g MiB)' % (num_bytes / MiB)
elif num_bytes > KiB:
output += ' (%.4g KiB)' % (num_bytes / KiB)
return output
def download(self):
"""Download dataset over the internet to the local path
Parameters
----------
Nothing
Returns
-------
Nothing
Raises
-------
IOError
Download failed.
"""
section_header('Download dataset')
for item in self.package_list:
try:
if item['remote_package'] and not os.path.isfile(item['local_package']):
data = None
req = urllib2.Request(item['remote_package'], data, {})
handle = urllib2.urlopen(req)
if "Content-Length" in handle.headers.items():
size = int(handle.info()["Content-Length"])
else:
size = None
actualSize = 0
blocksize = 64 * 1024
tmp_file = os.path.join(self.local_path, 'tmp_file')
fo = open(tmp_file, "wb")
terminate = False
while not terminate:
block = handle.read(blocksize)
actualSize += len(block)
if size:
progress(title_text=os.path.split(item['local_package'])[1],
percentage=actualSize / float(size),
note=self.print_bytes(actualSize))
else:
progress(title_text=os.path.split(item['local_package'])[1],
note=self.print_bytes(actualSize))
if len(block) == 0:
break
fo.write(block)
fo.close()
os.rename(tmp_file, item['local_package'])
            except (urllib2.URLError, socket.timeout), e:
                try:
                    fo.close()
                except:
                    pass
                raise IOError('Download failed [%s]' % (item['remote_package']))
foot()
def extract(self):
"""Extract the dataset packages
Parameters
----------
Nothing
Returns
-------
Nothing
"""
section_header('Extract dataset')
for item_id, item in enumerate(self.package_list):
if item['local_package']:
if item['local_package'].endswith('.zip'):
with zipfile.ZipFile(item['local_package'], "r") as z:
# Trick to omit first level folder
parts = []
for name in z.namelist():
if not name.endswith('/'):
parts.append(name.split('/')[:-1])
prefix = os.path.commonprefix(parts) or ''
if prefix:
if len(prefix) > 1:
prefix_ = list()
prefix_.append(prefix[0])
prefix = prefix_
prefix = '/'.join(prefix) + '/'
offset = len(prefix)
# Start extraction
members = z.infolist()
file_count = 1
for i, member in enumerate(members):
if len(member.filename) > offset:
member.filename = member.filename[offset:]
if not os.path.isfile(os.path.join(self.local_path, member.filename)):
z.extract(member, self.local_path)
progress(
title_text='Extracting [' + str(item_id) + '/' + str(len(self.package_list)) + ']',
percentage=(file_count / float(len(members))),
note=member.filename)
file_count += 1
elif item['local_package'].endswith('.tar.gz'):
tar = tarfile.open(item['local_package'], "r:gz")
for i, tar_info in enumerate(tar):
if not os.path.isfile(os.path.join(self.local_path, tar_info.name)):
tar.extract(tar_info, self.local_path)
progress(title_text='Extracting [' + str(item_id) + '/' + str(len(self.package_list)) + ']',
note=tar_info.name)
tar.members = []
tar.close()
foot()
def on_after_extract(self):
"""Dataset meta data preparation, this will be overloaded in dataset specific classes
Parameters
----------
Nothing
Returns
-------
Nothing
"""
pass
def get_filelist(self):
"""List of files under local_path
Parameters
----------
Nothing
Returns
-------
filelist: list
File list
"""
filelist = []
for path, subdirs, files in os.walk(self.local_path):
for name in files:
filelist.append(os.path.join(path, name))
return filelist
def check_filelist(self):
"""Generates hash from file list and check does it matches with one saved in filelist.hash.
If some files have been deleted or added, checking will result False.
Parameters
----------
Nothing
Returns
-------
result: bool
Result
"""
if os.path.isfile(os.path.join(self.local_path, self.filelisthash_filename)):
hash = load_text(os.path.join(self.local_path, self.filelisthash_filename))[0]
if hash != get_parameter_hash(sorted(self.get_filelist())):
return False
else:
return True
else:
return False
def save_filelist_hash(self):
"""Generates file list hash, and saves it as filelist.hash under local_path.
Parameters
----------
Nothing
Returns
-------
Nothing
"""
filelist = self.get_filelist()
filelist_hash_not_found = True
for file in filelist:
if self.filelisthash_filename in file:
filelist_hash_not_found = False
if filelist_hash_not_found:
filelist.append(os.path.join(self.local_path, self.filelisthash_filename))
save_text(os.path.join(self.local_path, self.filelisthash_filename), get_parameter_hash(sorted(filelist)))
def fetch(self):
"""Download, extract and prepare the dataset.
Parameters
----------
Nothing
Returns
-------
Nothing
"""
if not self.check_filelist():
self.download()
self.extract()
self.on_after_extract()
self.save_filelist_hash()
return self
def train(self, fold=0):
"""List of training items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to training set for given fold.
"""
if fold not in self.evaluation_data_train:
self.evaluation_data_train[fold] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
if len(row) == 2:
# Scene meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1]
})
elif len(row) == 4:
# Audio tagging meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'tag_string': row[2],
'tags': row[3].split(';')
})
elif len(row) == 5:
# Event meta
self.evaluation_data_train[fold].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4]
})
else:
data = []
for item in self.meta:
if 'event_label' in item:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label'],
'event_onset': item['event_onset'],
'event_offset': item['event_offset'],
'event_label': item['event_label'],
})
else:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label']
})
self.evaluation_data_train[0] = data
return self.evaluation_data_train[fold]
def test(self, fold=0):
"""List of testing items.
Parameters
----------
fold : int > 0 [scalar]
Fold id, if zero all meta data is returned.
(Default value=0)
Returns
-------
list : list of dicts
List containing all meta data assigned to testing set for given fold.
"""
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = []
if fold > 0:
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold].append({'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.meta:
if self.relative_to_absolute_path(item['file']) not in files:
data.append({'file': self.relative_to_absolute_path(item['file'])})
files.append(self.relative_to_absolute_path(item['file']))
self.evaluation_data_test[fold] = data
return self.evaluation_data_test[fold]
def folds(self, mode='folds'):
"""List of fold ids
Parameters
----------
mode : str {'folds','full'}
            Fold setup type, possible values are 'folds' and 'full'. In 'full' mode the fold number is set to 0 and all data is used for training.
(Default value=folds)
Returns
-------
list : list of integers
Fold ids
"""
if mode == 'folds':
return range(1, self.evaluation_folds + 1)
elif mode == 'full':
return [0]
def file_meta(self, file):
"""Meta data for given file
Parameters
----------
file : str
File name
Returns
-------
list : list of dicts
List containing all meta data related to given file.
"""
file = self.absolute_to_relative(file)
file_meta = []
for item in self.meta:
if item['file'] == file:
file_meta.append(item)
return file_meta
def relative_to_absolute_path(self, path):
"""Converts relative path into absolute path.
Parameters
----------
path : str
Relative path
Returns
-------
path : str
Absolute path
"""
return os.path.abspath(os.path.join(self.local_path, path))
def absolute_to_relative(self, path):
"""Converts absolute path into relative path.
Parameters
----------
path : str
Absolute path
Returns
-------
path : str
Relative path
"""
if path.startswith(os.path.abspath(self.local_path)):
return os.path.relpath(path, self.local_path)
else:
return path
# =====================================================
# DCASE 2016
# =====================================================
class TUTAcousticScenes_2016_DevelopmentSet(Dataset):
"""TUT Acoustic scenes 2016 development dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-acoustic-scenes-2016-development')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Acoustic Scenes 2016, development dataset'
self.url = 'https://zenodo.org/record/45739'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.1.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.1.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.2.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.2.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.3.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.3.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.4.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.4.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.5.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.5.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.6.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.6.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.7.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.7.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45739/files/TUT-acoustic-scenes-2016-development.audio.8.zip',
'local_package': os.path.join(self.local_path, 'TUT-acoustic-scenes-2016-development.audio.8.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
}
]
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
meta_data = {}
            for fold in xrange(1, self.evaluation_folds + 1):
# Read train files in
train_filename = os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')
f = open(train_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
# Read evaluation files in
eval_filename = os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt')
f = open(eval_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in meta_data:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = meta_data[file]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
class TUTAcousticScenes_2016_EvaluationSet(Dataset):
"""TUT Acoustic scenes 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 1, Acoustic scene classification
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-acoustic-scenes-2016-evaluation')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Acoustic Scenes 2016, evaluation dataset'
self.url = 'http://www.cs.tut.fi/sgn/arg/dcase2016/download/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
eval_filename = os.path.join(self.evaluation_setup_path, 'evaluate.txt')
if not os.path.isfile(self.meta_file) and os.path.isfile(eval_filename):
section_header('Generating meta file for dataset')
meta_data = {}
f = open(eval_filename, 'rt')
reader = csv.reader(f, delimiter='\t')
for row in reader:
if row[0] not in meta_data:
meta_data[row[0]] = row[1]
f.close()
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in meta_data:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = meta_data[file]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
def train(self, fold=0):
raise IOError('Train setup not available.')
# TUT Sound events 2016 development and evaluation sets
class TUTSoundEvents_2016_DevelopmentSet(Dataset):
"""TUT Sound events 2016 development dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-sound-events-2016-development')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Sound Events 2016, development dataset'
self.url = 'https://zenodo.org/record/45759'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 4
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.doc.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.doc.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.meta.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.meta.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': 'https://zenodo.org/record/45759/files/TUT-sound-events-2016-development.audio.zip',
'local_package': os.path.join(self.local_path, 'TUT-sound-events-2016-development.audio.zip'),
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
]
def event_label_count(self, scene_label=None):
return len(self.event_labels(scene_label=scene_label))
def event_labels(self, scene_label=None):
labels = []
for item in self.meta:
if scene_label is None or item['scene_label'] == scene_label:
if 'event_label' in item and item['event_label'].rstrip() not in labels:
labels.append(item['event_label'].rstrip())
labels.sort()
return labels
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(self.local_path, relative_path.replace('audio', 'meta'),
base_filename + '.ann')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename),
scene_label,
float(annotation_file_row[0].replace(',', '.')),
float(annotation_file_row[1].replace(',', '.')),
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
def train(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_train:
self.evaluation_data_train[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_train[fold]:
self.evaluation_data_train[fold][scene_label_] = []
if fold > 0:
with open(
os.path.join(self.evaluation_setup_path, scene_label_ + '_fold' + str(fold) + '_train.txt'),
'rt') as f:
for row in csv.reader(f, delimiter='\t'):
if len(row) == 5:
# Event meta
self.evaluation_data_train[fold][scene_label_].append({
'file': self.relative_to_absolute_path(row[0]),
'scene_label': row[1],
'event_onset': float(row[2]),
'event_offset': float(row[3]),
'event_label': row[4]
})
else:
data = []
for item in self.meta:
if item['scene_label'] == scene_label_:
if 'event_label' in item:
data.append({'file': self.relative_to_absolute_path(item['file']),
'scene_label': item['scene_label'],
'event_onset': item['event_onset'],
'event_offset': item['event_offset'],
'event_label': item['event_label'],
})
self.evaluation_data_train[0][scene_label_] = data
if scene_label:
return self.evaluation_data_train[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_train[fold][scene_label_]:
data.append(item)
return data
def test(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_test[fold]:
self.evaluation_data_test[fold][scene_label_] = []
if fold > 0:
with open(
os.path.join(self.evaluation_setup_path, scene_label_ + '_fold' + str(fold) + '_test.txt'),
'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold][scene_label_].append(
{'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.meta:
                        if item['scene_label'] == scene_label_:
if self.relative_to_absolute_path(item['file']) not in files:
data.append({'file': self.relative_to_absolute_path(item['file'])})
files.append(self.relative_to_absolute_path(item['file']))
self.evaluation_data_test[0][scene_label_] = data
if scene_label:
return self.evaluation_data_test[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_test[fold][scene_label_]:
data.append(item)
return data
class TUTSoundEvents_2016_EvaluationSet(Dataset):
"""TUT Sound events 2016 evaluation dataset
This dataset is used in DCASE2016 - Task 3, Sound event detection in real life audio
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='TUT-sound-events-2016-evaluation')
self.authors = 'Annamaria Mesaros, Toni Heittola, and Tuomas Virtanen'
self.name_remote = 'TUT Sound Events 2016, evaluation dataset'
self.url = 'http://www.cs.tut.fi/sgn/arg/dcase2016/download/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Roland Edirol R-09'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 1
self.package_list = [
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'home'),
},
{
'remote_package': None,
'local_package': None,
'local_audio_path': os.path.join(self.local_path, 'audio', 'residential_area'),
},
]
@property
def scene_labels(self):
labels = ['home', 'residential_area']
labels.sort()
return labels
def event_label_count(self, scene_label=None):
return len(self.event_labels(scene_label=scene_label))
def event_labels(self, scene_label=None):
labels = []
for item in self.meta:
if scene_label is None or item['scene_label'] == scene_label:
if 'event_label' in item and item['event_label'] not in labels:
labels.append(item['event_label'])
labels.sort()
return labels
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file) and os.path.isdir(os.path.join(self.local_path, 'meta')):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for filename in self.audio_files:
raw_path, raw_filename = os.path.split(filename)
relative_path = self.absolute_to_relative(raw_path)
scene_label = relative_path.replace('audio', '')[1:]
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(self.local_path, relative_path.replace('audio', 'meta'),
base_filename + '.ann')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename),
scene_label,
float(annotation_file_row[0].replace(',', '.')),
float(annotation_file_row[1].replace(',', '.')),
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
def train(self, fold=0, scene_label=None):
raise IOError('Train setup not available.')
def test(self, fold=0, scene_label=None):
if fold not in self.evaluation_data_test:
self.evaluation_data_test[fold] = {}
for scene_label_ in self.scene_labels:
if scene_label_ not in self.evaluation_data_test[fold]:
self.evaluation_data_test[fold][scene_label_] = []
if fold > 0:
                with open(os.path.join(self.evaluation_setup_path, scene_label_ + '_fold' + str(fold) + '_test.txt'),
'rt') as f:
for row in csv.reader(f, delimiter='\t'):
self.evaluation_data_test[fold][scene_label_].append(
{'file': self.relative_to_absolute_path(row[0])})
else:
data = []
files = []
for item in self.audio_files:
if scene_label_ in item:
if self.relative_to_absolute_path(item) not in files:
data.append({'file': self.relative_to_absolute_path(item)})
files.append(self.relative_to_absolute_path(item))
self.evaluation_data_test[0][scene_label_] = data
if scene_label:
return self.evaluation_data_test[fold][scene_label]
else:
data = []
for scene_label_ in self.scene_labels:
for item in self.evaluation_data_test[fold][scene_label_]:
data.append(item)
return data
# CHiME-Home
class CHiMEHome_DomesticAudioTag_DevelopmentSet(Dataset):
def __init__(self, data_path=None):
Dataset.__init__(self, data_path=data_path, name='CHiMeHome-audiotag-development')
self.authors = 'Peter Foster, Siddharth Sigtia, Sacha Krstulovic, Jon Barker, and Mark Plumbley'
self.name_remote = 'The CHiME-Home dataset is a collection of annotated domestic environment audio recordings.'
self.url = ''
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Unknown'
self.evaluation_folds = 10
self.package_list = [
{
'remote_package': 'https://archive.org/download/chime-home/chime_home.tar.gz',
'local_package': os.path.join(self.local_path, 'chime_home.tar.gz'),
'local_audio_path': os.path.join(self.local_path, 'chime_home', 'chunks'),
},
]
@property
def audio_files(self):
"""Get all audio files in the dataset, use only file from CHime-Home-refined set.
Parameters
----------
nothing
Returns
-------
files : list
audio files
"""
if self.files is None:
refined_files = []
with open(os.path.join(self.local_path, 'chime_home', 'chunks_refined.csv'), 'rt') as f:
for row in csv.reader(f, delimiter=','):
refined_files.append(row[1])
self.files = []
for file in self.package_list:
path = file['local_audio_path']
if path:
l = os.listdir(path)
p = path.replace(self.local_path + os.path.sep, '')
for f in l:
fileName, fileExtension = os.path.splitext(f)
if fileExtension[1:] in self.audio_extensions and fileName in refined_files:
self.files.append(os.path.abspath(os.path.join(path, f)))
self.files.sort()
return self.files
def read_chunk_meta(self, meta_filename):
if os.path.isfile(meta_filename):
meta_file_handle = open(meta_filename, 'rt')
try:
meta_file_reader = csv.reader(meta_file_handle, delimiter=',')
data = {}
for meta_file_row in meta_file_reader:
data[meta_file_row[0]] = meta_file_row[1]
finally:
meta_file_handle.close()
return data
def tagcode_to_taglabel(self, tag):
map = {'c': 'child speech',
'm': 'adult male speech',
'f': 'adult female speech',
'v': 'video game/tv',
'p': 'percussive sound',
'b': 'broadband noise',
'o': 'other',
'S': 'silence/background',
'U': 'unidentifiable'
}
if tag in map:
return map[tag]
else:
return None
def on_after_extract(self):
"""After dataset packages are downloaded and extracted, meta-files are checked.
Legacy dataset meta files are converted to be compatible with current scheme.
Parameters
----------
nothing
Returns
-------
nothing
"""
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
scene_label = 'home'
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
annotation_filename = os.path.join(raw_path, base_filename + '.csv')
meta_data = self.read_chunk_meta(annotation_filename)
tags = []
for i, tag in enumerate(meta_data['majorityvote']):
                        if tag == 'b':
                            print file
                        if tag not in ('S', 'U'):
                            tags.append(self.tagcode_to_taglabel(tag))
tags = ';'.join(tags)
writer.writerow(
(os.path.join(relative_path, raw_filename), scene_label, meta_data['majorityvote'], tags))
finally:
f.close()
foot()
all_folds_found = True
        for fold in xrange(1, self.evaluation_folds + 1):
for target_tag in self.audio_tags:
if not os.path.isfile(os.path.join(self.evaluation_setup_path,
'fold' + str(fold) + '_' + target_tag.replace('/', '-').replace(' ',
'_') + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path,
'fold' + str(fold) + '_' + target_tag.replace('/', '-').replace(' ',
'_') + '_test.txt')):
all_folds_found = False
if not all_folds_found:
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
numpy.random.seed(475686)
kf = KFold(n=len(self.audio_files), n_folds=self.evaluation_folds, shuffle=True)
refined_files = []
with open(os.path.join(self.local_path, 'chime_home', 'chunks_refined.csv'), 'rt') as f:
for row in csv.reader(f, delimiter=','):
refined_files.append(
self.relative_to_absolute_path(os.path.join('chime_home', 'chunks', row[1] + '.wav')))
fold = 1
files = numpy.array(refined_files)
for train_index, test_index in kf:
train_files = files[train_index]
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
item = self.file_meta(file)[0]
writer.writerow(
[os.path.join(relative_path, raw_filename), item['scene_label'], item['tag_string'],
';'.join(item['tags'])])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
item = self.file_meta(file)[0]
writer.writerow(
[os.path.join(relative_path, raw_filename), item['scene_label'], item['tag_string'],
';'.join(item['tags'])])
fold += 1
# Legacy datasets
# =====================================================
# DCASE 2013
# =====================================================
class DCASE2013_Scene_DevelopmentSet(Dataset):
"""DCASE 2013 Acoustic scene classification, development dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-scene-development')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP 2013 CASA Challenge - Public Dataset for Scene Classification Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/29/scenes_stereo.zip?sequence=1',
'local_package': os.path.join(self.local_path, 'scenes_stereo.zip'),
'local_audio_path': os.path.join(self.local_path, 'scenes_stereo'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = os.path.splitext(os.path.split(file)[1])[0][:-2]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
all_folds_found = True
        for fold in xrange(1, self.evaluation_folds + 1):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
section_header('Generating evaluation setup files for dataset')
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
classes = []
files = []
for item in self.meta:
classes.append(item['scene_label'])
files.append(item['file'])
files = numpy.array(files)
sss = StratifiedShuffleSplit(y=classes, n_iter=self.evaluation_folds, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
writer.writerow([os.path.join(raw_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
fold += 1
foot()
class DCASE2013_Scene_EvaluationSet(DCASE2013_Scene_DevelopmentSet):
"""DCASE 2013 Acoustic scene classification, evaluation dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-scene-challenge')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP 2013 CASA Challenge - Private Dataset for Scene Classification Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_scene_classification_testset/scenes_stereo_testset.zip',
'local_package': os.path.join(self.local_path, 'scenes_stereo_testset.zip'),
'local_audio_path': os.path.join(self.local_path, 'scenes_stereo_testset'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
        if not os.path.isfile(self.meta_file):
section_header('Generating meta file for dataset')
f = open(self.meta_file, 'wt')
try:
writer = csv.writer(f, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
label = os.path.splitext(os.path.split(file)[1])[0][:-2]
writer.writerow((os.path.join(relative_path, raw_filename), label))
finally:
f.close()
foot()
all_folds_found = True
        for fold in xrange(1, self.evaluation_folds + 1):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
section_header('Generating evaluation setup files for dataset')
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
classes = []
files = []
for item in self.meta:
classes.append(item['scene_label'])
files.append(item['file'])
files = numpy.array(files)
sss = StratifiedShuffleSplit(y=classes, n_iter=self.evaluation_folds, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
writer.writerow([os.path.join(raw_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
label = self.file_meta(file)[0]['scene_label']
writer.writerow([os.path.join(raw_path, raw_filename), label])
fold += 1
foot()
# Sound events
class DCASE2013_Event_DevelopmentSet(Dataset):
"""DCASE 2013 Sound event detection, development dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-event-development')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP CASA Challenge - Public Dataset for Event Detection Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_event_detection_development_OS/events_OS_development_v2.zip',
'local_package': os.path.join(self.local_path, 'events_OS_development_v2.zip'),
'local_audio_path': os.path.join(self.local_path, 'events_OS_development_v2'),
},
# {
# 'remote_package':'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/28/singlesounds_annotation.zip?sequence=9',
# 'local_package': os.path.join(self.local_path, 'singlesounds_annotation.zip'),
# 'local_audio_path': None,
# },
# {
# 'remote_package':'http://c4dm.eecs.qmul.ac.uk/rdr/bitstream/handle/123456789/28/singlesounds_stereo.zip?sequence=7',
# 'local_package': os.path.join(self.local_path, 'singlesounds_stereo.zip'),
# 'local_audio_path': os.path.join(self.local_path, 'singlesounds_stereo'),
# },
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
scene_label = 'office'
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
if file.find('singlesounds_stereo') != -1:
annotation_filename = os.path.join(self.local_path, 'Annotation1', base_filename + '_bdm.txt')
label = base_filename[:-2]
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1], label, 'i'))
finally:
annotation_file_handle.close()
elif file.find('events_OS_development_v2') != -1:
annotation_filename = os.path.join(self.local_path, 'events_OS_development_v2',
base_filename + '_v2.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
all_folds_found = True
        for fold in xrange(1, self.evaluation_folds + 1):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
            # Construct training and testing sets. Isolated sounds are used for training and
# polyphonic mixtures are used for testing.
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
files = []
for item in self.meta:
if item['file'] not in files:
files.append(item['file'])
files = numpy.array(files)
f = numpy.zeros(len(files))
sss = StratifiedShuffleSplit(y=f, n_iter=5, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
fold += 1
class DCASE2013_Event_EvaluationSet(Dataset):
"""DCASE 2013 Sound event detection, evaluation dataset
"""
def __init__(self, data_path='data'):
Dataset.__init__(self, data_path=data_path, name='DCASE2013-event-challenge')
self.authors = 'Dimitrios Giannoulis, Emmanouil Benetos, Dan Stowell, and Mark Plumbley'
self.name_remote = 'IEEE AASP CASA Challenge - Private Dataset for Event Detection Task'
self.url = 'http://www.elec.qmul.ac.uk/digitalmusic/sceneseventschallenge/'
self.audio_source = 'Field recording'
self.audio_type = 'Natural'
self.recording_device_model = 'Unknown'
self.microphone_model = 'Soundman OKM II Klassik/studio A3 electret microphone'
self.evaluation_folds = 5
self.package_list = [
{
'remote_package': 'https://archive.org/download/dcase2013_event_detection_testset_OS/dcase2013_event_detection_testset_OS.zip',
'local_package': os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS.zip'),
'local_audio_path': os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS'),
}
]
def on_after_extract(self):
# Make legacy dataset compatible with DCASE2016 dataset scheme
scene_label = 'office'
if not os.path.isfile(self.meta_file):
meta_file_handle = open(self.meta_file, 'wt')
try:
writer = csv.writer(meta_file_handle, delimiter='\t')
for file in self.audio_files:
raw_path, raw_filename = os.path.split(file)
relative_path = self.absolute_to_relative(raw_path)
base_filename, file_extension = os.path.splitext(raw_filename)
if file.find('dcase2013_event_detection_testset_OS') != -1:
annotation_filename = os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS',
base_filename + '_v2.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
else:
annotation_filename = os.path.join(self.local_path, 'dcase2013_event_detection_testset_OS',
base_filename + '.txt')
if os.path.isfile(annotation_filename):
annotation_file_handle = open(annotation_filename, 'rt')
try:
annotation_file_reader = csv.reader(annotation_file_handle, delimiter='\t')
for annotation_file_row in annotation_file_reader:
writer.writerow((os.path.join(relative_path, raw_filename), scene_label,
annotation_file_row[0], annotation_file_row[1],
annotation_file_row[2], 'm'))
finally:
annotation_file_handle.close()
finally:
meta_file_handle.close()
all_folds_found = True
for fold in xrange(1, self.evaluation_folds):
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt')):
all_folds_found = False
if not os.path.isfile(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt')):
all_folds_found = False
if not all_folds_found:
            # Construct training and testing sets. Isolated sounds are used for
            # training and polyphonic mixtures are used for testing.
if not os.path.isdir(self.evaluation_setup_path):
os.makedirs(self.evaluation_setup_path)
files = []
for item in self.meta:
if item['file'] not in files:
files.append(item['file'])
files = numpy.array(files)
f = numpy.zeros(len(files))
sss = StratifiedShuffleSplit(y=f, n_iter=5, test_size=0.3, random_state=0)
fold = 1
for train_index, test_index in sss:
# print("TRAIN:", train_index, "TEST:", test_index)
train_files = files[train_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_train.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in train_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
test_files = files[test_index]
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_test.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
writer.writerow([os.path.join(relative_path, raw_filename)])
with open(os.path.join(self.evaluation_setup_path, 'fold' + str(fold) + '_evaluate.txt'), 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for file in test_files:
raw_path, raw_filename = os.path.split(file)
relative_path = raw_path.replace(self.local_path + os.path.sep, '')
for item in self.meta:
if item['file'] == file:
writer.writerow([os.path.join(relative_path, raw_filename), item['scene_label'],
item['event_onset'], item['event_offset'], item['event_label']])
fold += 1
| mit |
EvenStrangest/tensorflow | tensorflow/examples/skflow/multiple_gpu.py | 9 | 1658 | # Copyright 2015-present The Scikit Flow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from sklearn import datasets, metrics, cross_validation
import tensorflow as tf
from tensorflow.contrib import learn
iris = datasets.load_iris()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target,
test_size=0.2, random_state=42)
def my_model(X, y):
"""
    This is a DNN with hidden layers of 10, 20 and 10 units and a dropout
    probability of 0.5.
    Note: If you want to run this example with multiple GPUs, the CUDA Toolkit 7.0
    and cuDNN 6.5 v2 from NVIDIA need to be installed beforehand.
"""
with tf.device('/gpu:1'):
layers = learn.ops.dnn(X, [10, 20, 10], dropout=0.5)
with tf.device('/gpu:2'):
return learn.models.logistic_regression(layers, y)
classifier = learn.TensorFlowEstimator(model_fn=my_model, n_classes=3)
classifier.fit(X_train, y_train)
score = metrics.accuracy_score(y_test, classifier.predict(X_test))
print('Accuracy: {0:f}'.format(score))
| apache-2.0 |
jjx02230808/project0223 | examples/applications/wikipedia_principal_eigenvector.py | 233 | 7819 | """
===============================
Wikipedia principal eigenvector
===============================
A classical way to assess the relative importance of vertices in a
graph is to compute the principal eigenvector of the adjacency matrix
so as to assign to each vertex the values of the components of the first
eigenvector as a centrality score:
http://en.wikipedia.org/wiki/Eigenvector_centrality
On the graph of webpages and links those values are called the PageRank
scores by Google.
The goal of this example is to analyze the graph of links inside
wikipedia articles to rank articles by relative importance according to
this eigenvector centrality.
The traditional way to compute the principal eigenvector is to use the
power iteration method:
http://en.wikipedia.org/wiki/Power_iteration
Here the computation is achieved thanks to Martinsson's Randomized SVD
algorithm implemented in the scikit.
The graph data is fetched from the DBpedia dumps. DBpedia is an extraction
of the latent structured data of the Wikipedia content.
"""
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
from __future__ import print_function
from bz2 import BZ2File
import os
from datetime import datetime
from pprint import pprint
from time import time
import numpy as np
from scipy import sparse
from sklearn.decomposition import randomized_svd
from sklearn.externals.joblib import Memory
from sklearn.externals.six.moves.urllib.request import urlopen
from sklearn.externals.six import iteritems
print(__doc__)
###############################################################################
# Where to download the data, if not already on disk
redirects_url = "http://downloads.dbpedia.org/3.5.1/en/redirects_en.nt.bz2"
redirects_filename = redirects_url.rsplit("/", 1)[1]
page_links_url = "http://downloads.dbpedia.org/3.5.1/en/page_links_en.nt.bz2"
page_links_filename = page_links_url.rsplit("/", 1)[1]
resources = [
(redirects_url, redirects_filename),
(page_links_url, page_links_filename),
]
for url, filename in resources:
if not os.path.exists(filename):
print("Downloading data from '%s', please wait..." % url)
opener = urlopen(url)
open(filename, 'wb').write(opener.read())
print()
###############################################################################
# Loading the redirect files
memory = Memory(cachedir=".")
def index(redirects, index_map, k):
"""Find the index of an article name after redirect resolution"""
k = redirects.get(k, k)
return index_map.setdefault(k, len(index_map))
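# Illustrative usage sketch with hypothetical values (not part of the original
# script):
#   redirects = {'A': 'B'}
#   index_map = {}
#   index(redirects, index_map, 'A')  # -> 0, and index_map becomes {'B': 0}
#   index(redirects, index_map, 'B')  # -> 0, same id after redirect resolution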
DBPEDIA_RESOURCE_PREFIX_LEN = len("http://dbpedia.org/resource/")
SHORTNAME_SLICE = slice(DBPEDIA_RESOURCE_PREFIX_LEN + 1, -1)
def short_name(nt_uri):
"""Remove the < and > URI markers and the common URI prefix"""
return nt_uri[SHORTNAME_SLICE]
def get_redirects(redirects_filename):
"""Parse the redirections and build a transitively closed map out of it"""
redirects = {}
print("Parsing the NT redirect file")
for l, line in enumerate(BZ2File(redirects_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
redirects[short_name(split[0])] = short_name(split[2])
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
# compute the transitive closure
print("Computing the transitive closure of the redirect relation")
for l, source in enumerate(redirects.keys()):
transitive_target = None
target = redirects[source]
seen = set([source])
while True:
transitive_target = target
target = redirects.get(target)
if target is None or target in seen:
break
seen.add(target)
redirects[source] = transitive_target
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
return redirects
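# Minimal sketch of the transitive closure above (hypothetical data, not part
# of the original script): starting from {'A': 'B', 'B': 'C'}, the loop
# rewrites the map to {'A': 'C', 'B': 'C'}, so every source points directly
# to its final redirect target.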
# disabling joblib as the pickling of large dicts seems much too slow
#@memory.cache
def get_adjacency_matrix(redirects_filename, page_links_filename, limit=None):
"""Extract the adjacency graph as a scipy sparse matrix
Redirects are resolved first.
Returns X, the scipy sparse adjacency matrix, redirects as python
dict from article names to article names and index_map a python dict
from article names to python int (article indexes).
"""
print("Computing the redirect map")
redirects = get_redirects(redirects_filename)
print("Computing the integer index map")
index_map = dict()
links = list()
for l, line in enumerate(BZ2File(page_links_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
i = index(redirects, index_map, short_name(split[0]))
j = index(redirects, index_map, short_name(split[2]))
links.append((i, j))
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
if limit is not None and l >= limit - 1:
break
print("Computing the adjacency matrix")
X = sparse.lil_matrix((len(index_map), len(index_map)), dtype=np.float32)
for i, j in links:
X[i, j] = 1.0
del links
print("Converting to CSR representation")
X = X.tocsr()
print("CSR conversion done")
return X, redirects, index_map
# stop after 5M links to make it possible to work in RAM
X, redirects, index_map = get_adjacency_matrix(
redirects_filename, page_links_filename, limit=5000000)
names = dict((i, name) for name, i in iteritems(index_map))
print("Computing the principal singular vectors using randomized_svd")
t0 = time()
U, s, V = randomized_svd(X, 5, n_iter=3)
print("done in %0.3fs" % (time() - t0))
# print the names of the wikipedia related strongest components of the
# principal singular vector which should be similar to the highest eigenvector
print("Top wikipedia pages according to principal singular vectors")
pprint([names[i] for i in np.abs(U.T[0]).argsort()[-10:]])
pprint([names[i] for i in np.abs(V[0]).argsort()[-10:]])
def centrality_scores(X, alpha=0.85, max_iter=100, tol=1e-10):
"""Power iteration computation of the principal eigenvector
This method is also known as Google PageRank and the implementation
is based on the one from the NetworkX project (BSD licensed too)
with copyrights by:
Aric Hagberg <[email protected]>
Dan Schult <[email protected]>
Pieter Swart <[email protected]>
"""
n = X.shape[0]
X = X.copy()
incoming_counts = np.asarray(X.sum(axis=1)).ravel()
print("Normalizing the graph")
for i in incoming_counts.nonzero()[0]:
X.data[X.indptr[i]:X.indptr[i + 1]] *= 1.0 / incoming_counts[i]
dangle = np.asarray(np.where(X.sum(axis=1) == 0, 1.0 / n, 0)).ravel()
scores = np.ones(n, dtype=np.float32) / n # initial guess
for i in range(max_iter):
print("power iteration #%d" % i)
prev_scores = scores
scores = (alpha * (scores * X + np.dot(dangle, prev_scores))
+ (1 - alpha) * prev_scores.sum() / n)
# check convergence: normalized l_inf norm
scores_max = np.abs(scores).max()
if scores_max == 0.0:
scores_max = 1.0
err = np.abs(scores - prev_scores).max() / scores_max
print("error: %0.6f" % err)
if err < n * tol:
return scores
return scores
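# Toy illustration of centrality_scores (hypothetical data, not part of the
# original script): on a symmetric two-node graph both nodes should end up
# with equal scores, e.g.
#   A = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]], dtype=np.float32))
#   centrality_scores(A, max_iter=10)  # -> approximately [0.5, 0.5]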
print("Computing principal eigenvector score using a power iteration method")
t0 = time()
scores = centrality_scores(X, max_iter=100, tol=1e-10)
print("done in %0.3fs" % (time() - t0))
pprint([names[i] for i in np.abs(scores).argsort()[-10:]])
| bsd-3-clause |
walterreade/scikit-learn | examples/applications/wikipedia_principal_eigenvector.py | 16 | 7819 | """
===============================
Wikipedia principal eigenvector
===============================
A classical way to assess the relative importance of vertices in a
graph is to compute the principal eigenvector of the adjacency matrix
so as to assign to each vertex the values of the components of the first
eigenvector as a centrality score:
http://en.wikipedia.org/wiki/Eigenvector_centrality
On the graph of webpages and links those values are called the PageRank
scores by Google.
The goal of this example is to analyze the graph of links inside
wikipedia articles to rank articles by relative importance according to
this eigenvector centrality.
The traditional way to compute the principal eigenvector is to use the
power iteration method:
http://en.wikipedia.org/wiki/Power_iteration
Here the computation is achieved thanks to Martinsson's Randomized SVD
algorithm implemented in the scikit.
The graph data is fetched from the DBpedia dumps. DBpedia is an extraction
of the latent structured data of the Wikipedia content.
"""
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
from __future__ import print_function
from bz2 import BZ2File
import os
from datetime import datetime
from pprint import pprint
from time import time
import numpy as np
from scipy import sparse
from sklearn.decomposition import randomized_svd
from sklearn.externals.joblib import Memory
from sklearn.externals.six.moves.urllib.request import urlopen
from sklearn.externals.six import iteritems
print(__doc__)
###############################################################################
# Where to download the data, if not already on disk
redirects_url = "http://downloads.dbpedia.org/3.5.1/en/redirects_en.nt.bz2"
redirects_filename = redirects_url.rsplit("/", 1)[1]
page_links_url = "http://downloads.dbpedia.org/3.5.1/en/page_links_en.nt.bz2"
page_links_filename = page_links_url.rsplit("/", 1)[1]
resources = [
(redirects_url, redirects_filename),
(page_links_url, page_links_filename),
]
for url, filename in resources:
if not os.path.exists(filename):
print("Downloading data from '%s', please wait..." % url)
opener = urlopen(url)
open(filename, 'wb').write(opener.read())
print()
###############################################################################
# Loading the redirect files
memory = Memory(cachedir=".")
def index(redirects, index_map, k):
"""Find the index of an article name after redirect resolution"""
k = redirects.get(k, k)
return index_map.setdefault(k, len(index_map))
DBPEDIA_RESOURCE_PREFIX_LEN = len("http://dbpedia.org/resource/")
SHORTNAME_SLICE = slice(DBPEDIA_RESOURCE_PREFIX_LEN + 1, -1)
def short_name(nt_uri):
"""Remove the < and > URI markers and the common URI prefix"""
return nt_uri[SHORTNAME_SLICE]
def get_redirects(redirects_filename):
"""Parse the redirections and build a transitively closed map out of it"""
redirects = {}
print("Parsing the NT redirect file")
for l, line in enumerate(BZ2File(redirects_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
redirects[short_name(split[0])] = short_name(split[2])
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
# compute the transitive closure
print("Computing the transitive closure of the redirect relation")
for l, source in enumerate(redirects.keys()):
transitive_target = None
target = redirects[source]
seen = set([source])
while True:
transitive_target = target
target = redirects.get(target)
if target is None or target in seen:
break
seen.add(target)
redirects[source] = transitive_target
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
return redirects
# disabling joblib as the pickling of large dicts seems much too slow
#@memory.cache
def get_adjacency_matrix(redirects_filename, page_links_filename, limit=None):
"""Extract the adjacency graph as a scipy sparse matrix
Redirects are resolved first.
Returns X, the scipy sparse adjacency matrix, redirects as python
dict from article names to article names and index_map a python dict
from article names to python int (article indexes).
"""
print("Computing the redirect map")
redirects = get_redirects(redirects_filename)
print("Computing the integer index map")
index_map = dict()
links = list()
for l, line in enumerate(BZ2File(page_links_filename)):
split = line.split()
if len(split) != 4:
print("ignoring malformed line: " + line)
continue
i = index(redirects, index_map, short_name(split[0]))
j = index(redirects, index_map, short_name(split[2]))
links.append((i, j))
if l % 1000000 == 0:
print("[%s] line: %08d" % (datetime.now().isoformat(), l))
if limit is not None and l >= limit - 1:
break
print("Computing the adjacency matrix")
X = sparse.lil_matrix((len(index_map), len(index_map)), dtype=np.float32)
for i, j in links:
X[i, j] = 1.0
del links
print("Converting to CSR representation")
X = X.tocsr()
print("CSR conversion done")
return X, redirects, index_map
# stop after 5M links to make it possible to work in RAM
X, redirects, index_map = get_adjacency_matrix(
redirects_filename, page_links_filename, limit=5000000)
names = dict((i, name) for name, i in iteritems(index_map))
print("Computing the principal singular vectors using randomized_svd")
t0 = time()
U, s, V = randomized_svd(X, 5, n_iter=3)
print("done in %0.3fs" % (time() - t0))
# print the names of the wikipedia related strongest components of the
# principal singular vector which should be similar to the highest eigenvector
print("Top wikipedia pages according to principal singular vectors")
pprint([names[i] for i in np.abs(U.T[0]).argsort()[-10:]])
pprint([names[i] for i in np.abs(V[0]).argsort()[-10:]])
def centrality_scores(X, alpha=0.85, max_iter=100, tol=1e-10):
"""Power iteration computation of the principal eigenvector
This method is also known as Google PageRank and the implementation
is based on the one from the NetworkX project (BSD licensed too)
with copyrights by:
Aric Hagberg <[email protected]>
Dan Schult <[email protected]>
Pieter Swart <[email protected]>
"""
n = X.shape[0]
X = X.copy()
incoming_counts = np.asarray(X.sum(axis=1)).ravel()
print("Normalizing the graph")
for i in incoming_counts.nonzero()[0]:
X.data[X.indptr[i]:X.indptr[i + 1]] *= 1.0 / incoming_counts[i]
dangle = np.asarray(np.where(X.sum(axis=1) == 0, 1.0 / n, 0)).ravel()
scores = np.ones(n, dtype=np.float32) / n # initial guess
for i in range(max_iter):
print("power iteration #%d" % i)
prev_scores = scores
scores = (alpha * (scores * X + np.dot(dangle, prev_scores))
+ (1 - alpha) * prev_scores.sum() / n)
# check convergence: normalized l_inf norm
scores_max = np.abs(scores).max()
if scores_max == 0.0:
scores_max = 1.0
err = np.abs(scores - prev_scores).max() / scores_max
print("error: %0.6f" % err)
if err < n * tol:
return scores
return scores
print("Computing principal eigenvector score using a power iteration method")
t0 = time()
scores = centrality_scores(X, max_iter=100, tol=1e-10)
print("done in %0.3fs" % (time() - t0))
pprint([names[i] for i in np.abs(scores).argsort()[-10:]])
| bsd-3-clause |
chrisburr/scikit-learn | examples/cluster/plot_digits_linkage.py | 369 | 2959 | """
=============================================================================
Various Agglomerative Clustering on a 2D embedding of digits
=============================================================================
An illustration of various linkage options for agglomerative clustering on
a 2D embedding of the digits dataset.
The goal of this example is to show intuitively how the metrics behave, and
not to find good clusters for the digits. This is why the example works on a
2D embedding.
What this example shows us is the "rich getting richer" behavior of
agglomerative clustering, which tends to create uneven cluster sizes.
This behavior is especially pronounced for the average linkage strategy,
which ends up with a couple of singleton clusters.
"""
# Authors: Gael Varoquaux
# License: BSD 3 clause (C) INRIA 2014
print(__doc__)
from time import time
import numpy as np
from scipy import ndimage
from matplotlib import pyplot as plt
from sklearn import manifold, datasets
digits = datasets.load_digits(n_class=10)
X = digits.data
y = digits.target
n_samples, n_features = X.shape
np.random.seed(0)
def nudge_images(X, y):
# Having a larger dataset shows more clearly the behavior of the
# methods, but we multiply the size of the dataset only by 2, as the
# cost of the hierarchical clustering methods are strongly
# super-linear in n_samples
shift = lambda x: ndimage.shift(x.reshape((8, 8)),
.3 * np.random.normal(size=2),
mode='constant',
).ravel()
X = np.concatenate([X, np.apply_along_axis(shift, 1, X)])
Y = np.concatenate([y, y], axis=0)
return X, Y
X, y = nudge_images(X, y)
#----------------------------------------------------------------------
# Visualize the clustering
def plot_clustering(X_red, X, labels, title=None):
x_min, x_max = np.min(X_red, axis=0), np.max(X_red, axis=0)
X_red = (X_red - x_min) / (x_max - x_min)
plt.figure(figsize=(6, 4))
for i in range(X_red.shape[0]):
plt.text(X_red[i, 0], X_red[i, 1], str(y[i]),
color=plt.cm.spectral(labels[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
plt.xticks([])
plt.yticks([])
if title is not None:
plt.title(title, size=17)
plt.axis('off')
plt.tight_layout()
#----------------------------------------------------------------------
# 2D embedding of the digits dataset
print("Computing embedding")
X_red = manifold.SpectralEmbedding(n_components=2).fit_transform(X)
print("Done.")
from sklearn.cluster import AgglomerativeClustering
for linkage in ('ward', 'average', 'complete'):
clustering = AgglomerativeClustering(linkage=linkage, n_clusters=10)
t0 = time()
clustering.fit(X_red)
print("%s : %.2fs" % (linkage, time() - t0))
plot_clustering(X_red, X, clustering.labels_, "%s linkage" % linkage)
plt.show()
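# Illustrative check (not part of the original example): the uneven,
# "rich get richer" cluster sizes discussed in the docstring can be inspected
# for the last fitted linkage with, e.g.:
#   print(np.bincount(clustering.labels_))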
| bsd-3-clause |
hainm/scikit-learn | sklearn/cluster/tests/test_birch.py | 342 | 5603 | """
Tests for the birch clustering algorithm.
"""
from scipy import sparse
import numpy as np
from sklearn.cluster.tests.common import generate_clustered_data
from sklearn.cluster.birch import Birch
from sklearn.cluster.hierarchical import AgglomerativeClustering
from sklearn.datasets import make_blobs
from sklearn.linear_model import ElasticNet
from sklearn.metrics import pairwise_distances_argmin, v_measure_score
from sklearn.utils.testing import assert_greater_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_warns
def test_n_samples_leaves_roots():
# Sanity check for the number of samples in leaves and roots
X, y = make_blobs(n_samples=10)
brc = Birch()
brc.fit(X)
n_samples_root = sum([sc.n_samples_ for sc in brc.root_.subclusters_])
n_samples_leaves = sum([sc.n_samples_ for leaf in brc._get_leaves()
for sc in leaf.subclusters_])
assert_equal(n_samples_leaves, X.shape[0])
assert_equal(n_samples_root, X.shape[0])
def test_partial_fit():
# Test that fit is equivalent to calling partial_fit multiple times
X, y = make_blobs(n_samples=100)
brc = Birch(n_clusters=3)
brc.fit(X)
brc_partial = Birch(n_clusters=None)
brc_partial.partial_fit(X[:50])
brc_partial.partial_fit(X[50:])
assert_array_equal(brc_partial.subcluster_centers_,
brc.subcluster_centers_)
# Test that same global labels are obtained after calling partial_fit
# with None
brc_partial.set_params(n_clusters=3)
brc_partial.partial_fit(None)
assert_array_equal(brc_partial.subcluster_labels_, brc.subcluster_labels_)
def test_birch_predict():
# Test the predict method predicts the nearest centroid.
rng = np.random.RandomState(0)
X = generate_clustered_data(n_clusters=3, n_features=3,
n_samples_per_cluster=10)
# n_samples * n_samples_per_cluster
shuffle_indices = np.arange(30)
rng.shuffle(shuffle_indices)
X_shuffle = X[shuffle_indices, :]
brc = Birch(n_clusters=4, threshold=1.)
brc.fit(X_shuffle)
centroids = brc.subcluster_centers_
assert_array_equal(brc.labels_, brc.predict(X_shuffle))
nearest_centroid = pairwise_distances_argmin(X_shuffle, centroids)
assert_almost_equal(v_measure_score(nearest_centroid, brc.labels_), 1.0)
def test_n_clusters():
# Test that n_clusters param works properly
X, y = make_blobs(n_samples=100, centers=10)
brc1 = Birch(n_clusters=10)
brc1.fit(X)
assert_greater(len(brc1.subcluster_centers_), 10)
assert_equal(len(np.unique(brc1.labels_)), 10)
    # Test that passing an AgglomerativeClustering instance as n_clusters
    # gives the same results.
gc = AgglomerativeClustering(n_clusters=10)
brc2 = Birch(n_clusters=gc)
brc2.fit(X)
assert_array_equal(brc1.subcluster_labels_, brc2.subcluster_labels_)
assert_array_equal(brc1.labels_, brc2.labels_)
# Test that the wrong global clustering step raises an Error.
clf = ElasticNet()
brc3 = Birch(n_clusters=clf)
assert_raises(ValueError, brc3.fit, X)
# Test that a small number of clusters raises a warning.
brc4 = Birch(threshold=10000.)
assert_warns(UserWarning, brc4.fit, X)
def test_sparse_X():
# Test that sparse and dense data give same results
X, y = make_blobs(n_samples=100, centers=10)
brc = Birch(n_clusters=10)
brc.fit(X)
csr = sparse.csr_matrix(X)
brc_sparse = Birch(n_clusters=10)
brc_sparse.fit(csr)
assert_array_equal(brc.labels_, brc_sparse.labels_)
assert_array_equal(brc.subcluster_centers_,
brc_sparse.subcluster_centers_)
def check_branching_factor(node, branching_factor):
subclusters = node.subclusters_
assert_greater_equal(branching_factor, len(subclusters))
for cluster in subclusters:
if cluster.child_:
check_branching_factor(cluster.child_, branching_factor)
def test_branching_factor():
# Test that nodes have at max branching_factor number of subclusters
X, y = make_blobs()
branching_factor = 9
# Purposefully set a low threshold to maximize the subclusters.
brc = Birch(n_clusters=None, branching_factor=branching_factor,
threshold=0.01)
brc.fit(X)
check_branching_factor(brc.root_, branching_factor)
brc = Birch(n_clusters=3, branching_factor=branching_factor,
threshold=0.01)
brc.fit(X)
check_branching_factor(brc.root_, branching_factor)
# Raises error when branching_factor is set to one.
brc = Birch(n_clusters=None, branching_factor=1, threshold=0.01)
assert_raises(ValueError, brc.fit, X)
def check_threshold(birch_instance, threshold):
"""Use the leaf linked list for traversal"""
current_leaf = birch_instance.dummy_leaf_.next_leaf_
while current_leaf:
subclusters = current_leaf.subclusters_
for sc in subclusters:
assert_greater_equal(threshold, sc.radius)
current_leaf = current_leaf.next_leaf_
def test_threshold():
    # Test that the leaf subclusters have a radius no greater than the threshold
X, y = make_blobs(n_samples=80, centers=4)
brc = Birch(threshold=0.5, n_clusters=None)
brc.fit(X)
check_threshold(brc, 0.5)
brc = Birch(threshold=5.0, n_clusters=None)
brc.fit(X)
check_threshold(brc, 5.)
| bsd-3-clause |
fatiando/fatiando | fatiando/gravmag/interactive.py | 6 | 24042 | """
Interactivity functions and classes using matplotlib and IPython widgets
**Gravity forward modeling**
* :class:`~fatiando.gravmag.interactive.Moulder`: a matplitlib GUI for 2D
forward modeling using polygons
----
"""
from __future__ import division, absolute_import
from future.builtins import zip
try:
import cPickle as pickle
except ImportError:
import pickle
import numpy
from matplotlib import pyplot, widgets, patches
from matplotlib.lines import Line2D
from IPython.core.pylabtools import print_figure
from IPython.display import Image
from .. import utils
from . import talwani
from ..mesher import Polygon
class Moulder(object):
"""
Interactive 2D forward modeling using polygons.
A matplotlib GUI application. Allows drawing and manipulating polygons and
computes their predicted data automatically. Also permits contaminating the
data with gaussian pseudo-random error for producing synthetic data sets.
Uses :mod:`fatiando.gravmag.talwani` for computations.
*Moulder* objects can be persisted to Python pickle files using the
:meth:`~fatiando.gravmag.interactive.Moulder.save` method and later
restored using :meth:`~fatiando.gravmag.interactive.Moulder.load`.
.. warning::
Cannot be used with ``%matplotlib inline`` on IPython notebooks because
the app uses the matplotlib plot window. You can still embed the
generated model and data figure on notebooks using the
:meth:`~fatiando.gravmag.interactive.Moulder.plot` method.
Parameters:
* area : list = (x1, x2, z1, z2)
The limits of the model drawing area, in meters.
* x, z : 1d-arrays
The x- and z-coordinates of the computation points (places where
predicted data will be computed). In meters.
* data : None or 1d-array
Observed data measured at *x* and *z*. Will plot this with black dots
along the predicted data.
* density_range : list = [min, max]
The minimum and maximum values allowed for the density. Determines the
limits of the density slider of the application. In kg.m^-3. Defaults
to [-2000, 2000].
* kwargs : dict
Other keyword arguments used to restore the state of the application.
Used by the :meth:`~fatiando.gravmag.interactive.Moulder.load` method.
Not intended for general use.
Examples:
Make the Moulder object and start the app::
import numpy as np
area = (0, 10e3, 0, 5e3)
# Calculate on 100 points
x = np.linspace(area[0], area[1], 100)
z = np.zeros_like(x)
app = Moulder(area, x, z)
app.run()
        # This will pop up a window with the application (like the screenshot
# below). Start drawing (follow the instruction in the figure title).
# When satisfied, close the window to resume execution.
.. image:: ../_static/images/Moulder-screenshot.png
:alt: Screenshot of the Moulder GUI
After closing the plot window, you can access the model and data from the
*Moulder* object::
app.model # The drawn model as fatiando.mesher.Polygon
app.predicted # 1d-array with the data predicted by the model
# You can save the predicted data to use later
app.save_predicted('data.txt')
# You can also save the application and resume it later
app.save('application.pkl')
# Close this session/IPython notebook/etc.
# To resume drawing later:
app = Moulder.load('application.pkl')
app.run()
"""
# The tolerance range for mouse clicks on vertices. In pixels.
epsilon = 5
# App instructions printed in the figure suptitle
instructions = ' | '.join([
'n: New polygon', 'd: delete', 'click: select/move', 'esc: cancel'])
def __init__(self, area, x, z, data=None, density_range=[-2000, 2000],
**kwargs):
self.area = area
self.x, self.z = numpy.asarray(x), numpy.asarray(z)
self.density_range = density_range
self.data = data
# Used to set the ylims for the data axes.
if data is None:
self.dmin, self.dmax = 0, 0
else:
self.dmin, self.dmax = data.min(), data.max()
self.predicted = kwargs.get('predicted', numpy.zeros_like(x))
self.error = kwargs.get('error', 0)
self.cmap = kwargs.get('cmap', pyplot.cm.RdBu_r)
self.line_args = dict(
linewidth=2, linestyle='-', color='k', marker='o',
markerfacecolor='k', markersize=5, animated=False, alpha=0.6)
self.polygons = []
self.lines = []
self.densities = kwargs.get('densities', [])
vertices = kwargs.get('vertices', [])
for xy, dens in zip(vertices, self.densities):
poly, line = self._make_polygon(xy, dens)
self.polygons.append(poly)
self.lines.append(line)
def save_predicted(self, fname):
"""
Save the predicted data to a text file.
Data will be saved in 3 columns separated by spaces: x z data
Parameters:
* fname : string or file-like object
The name of the output file or an open file-like object.
"""
numpy.savetxt(fname, numpy.transpose([self.x, self.z, self.predicted]))
def save(self, fname):
"""
Save the application state into a pickle file.
Use this to persist the application. You can later reload the entire
object, with the drawn model and data, using the
:meth:`~fatiando.gravmag.interactive.Moulder.load` method.
Parameters:
* fname : string
The name of the file to save the application. The extension doesn't
matter (use ``.pkl`` if in doubt).
"""
with open(fname, 'w') as f:
vertices = [numpy.asarray(p.xy) for p in self.polygons]
state = dict(area=self.area, x=self.x,
z=self.z, data=self.data,
density_range=self.density_range,
cmap=self.cmap,
predicted=self.predicted,
vertices=vertices,
densities=self.densities,
error=self.error)
pickle.dump(state, f)
@classmethod
def load(cls, fname):
"""
Restore an application from a pickle file.
The pickle file should have been generated by the
:meth:`~fatiando.gravmag.interactive.Moulder.save` method.
Parameters:
* fname : string
The name of the file.
Returns:
* app : Moulder object
The restored application. You can continue using it as if nothing
had happened.
"""
with open(fname) as f:
state = pickle.load(f)
app = cls(**state)
return app
@property
def model(self):
"""
The polygon model drawn as :class:`fatiando.mesher.Polygon` objects.
"""
m = [Polygon(p.xy, {'density': d})
for p, d in zip(self.polygons, self.densities)]
return m
def run(self):
"""
Start the application for drawing.
        Will pop up a window with a place for drawing the model (below) and a
place with the predicted (and, optionally, observed) data (top).
Follow the instruction on the figure title.
When done, close the window to resume program execution.
"""
fig = self._figure_setup()
# Sliders to control the density and the error in the data
self.density_slider = widgets.Slider(
fig.add_axes([0.10, 0.01, 0.30, 0.02]), 'Density',
self.density_range[0], self.density_range[1], valinit=0.,
valfmt='%6.0f kg/m3')
self.error_slider = widgets.Slider(
fig.add_axes([0.60, 0.01, 0.30, 0.02]), 'Error',
0, 5, valinit=self.error, valfmt='%1.2f mGal')
# Put instructions on figure title
self.dataax.set_title(self.instructions)
# Markers for mouse click events
self._ivert = None
self._ipoly = None
self._lastevent = None
self._drawing = False
self._xy = []
self._drawing_plot = None
# Used to blit the model plot and make
# rendering faster
self.background = None
# Connect event callbacks
self._connect()
self._update_data()
self._update_data_plot()
self.canvas.draw()
pyplot.show()
def _connect(self):
"""
Connect the matplotlib events to their callback methods.
"""
# Make the proper callback connections
self.canvas.mpl_connect('button_press_event',
self._button_press_callback)
self.canvas.mpl_connect('key_press_event',
self._key_press_callback)
self.canvas.mpl_connect('button_release_event',
self._button_release_callback)
self.canvas.mpl_connect('motion_notify_event',
self._mouse_move_callback)
self.canvas.mpl_connect('draw_event',
self._draw_callback)
# Call the cleanup and extra code for a draw event when resizing as
# well. This is needed so that tight_layout adjusts the figure when
# resized. Otherwise, tight_layout snaps only when the user clicks on
# the figure to do something.
self.canvas.mpl_connect('resize_event',
self._draw_callback)
self.density_slider.on_changed(self._set_density_callback)
self.error_slider.on_changed(self._set_error_callback)
def plot(self, figsize=(10, 8), dpi=70):
"""
Make a plot of the data and model for embedding in IPython notebooks
Doesn't require ``%matplotlib inline`` to embed the plot (as that would
not allow the app to run).
Parameters:
* figsize : list = (width, height)
The figure size in inches.
* dpi : float
The number of dots-per-inch for the figure resolution.
"""
fig = self._figure_setup(figsize=figsize, facecolor='white')
self._update_data_plot()
pyplot.close(fig)
data = print_figure(fig, dpi=dpi)
return Image(data=data)
def _figure_setup(self, **kwargs):
"""
Setup the plot figure with labels, titles, ticks, etc.
Sets the *canvas*, *dataax*, *modelax*, *polygons* and *lines*
attributes.
Parameters:
* kwargs : dict
Keyword arguments passed to ``pyplot.subplots``.
Returns:
* fig : matplotlib figure object
The created figure
"""
fig, axes = pyplot.subplots(2, 1, **kwargs)
ax1, ax2 = axes
self.predicted_line, = ax1.plot(self.x, self.predicted, '-r')
if self.data is not None:
self.data_line, = ax1.plot(self.x, self.data, '.k')
ax1.set_ylabel('Gravity anomaly (mGal)')
ax1.set_xlabel('x (m)', labelpad=-10)
ax1.set_xlim(self.area[:2])
ax1.set_ylim((-200, 200))
ax1.grid(True)
tmp = ax2.pcolor(numpy.array([self.density_range]), cmap=self.cmap)
tmp.set_visible(False)
pyplot.colorbar(tmp, orientation='horizontal',
                        pad=0.08, aspect=80).set_label(r'Density (kg/m3)')
# Remake the polygons and lines to make sure they belong to the right
# axis coordinates
vertices = [p.xy for p in self.polygons]
newpolygons, newlines = [], []
for xy, dens in zip(vertices, self.densities):
poly, line = self._make_polygon(xy, dens)
newpolygons.append(poly)
newlines.append(line)
ax2.add_patch(poly)
ax2.add_line(line)
self.polygons = newpolygons
self.lines = newlines
ax2.set_xlim(self.area[:2])
ax2.set_ylim(self.area[2:])
ax2.grid(True)
ax2.invert_yaxis()
ax2.set_ylabel('z (m)')
fig.subplots_adjust(top=0.95, left=0.1, right=0.95, bottom=0.06,
hspace=0.1)
self.figure = fig
self.canvas = fig.canvas
self.dataax = axes[0]
self.modelax = axes[1]
fig.canvas.draw()
return fig
def _density2color(self, density):
"""
Map density values to colors using the given *cmap* attribute.
Parameters:
* density : 1d-array
The density values of the model polygons
Returns
* colors : 1d-array
The colors mapped to each density value (returned by a matplotlib
            colormap object).
"""
dmin, dmax = self.density_range
return self.cmap((density - dmin)/(dmax - dmin))
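    # (For example, with the default density_range of [-2000, 2000] a density
    # of 0 maps to cmap(0.5), i.e. the middle of the colormap; this is just
    # the linear scaling performed by _density2color above.)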
def _make_polygon(self, vertices, density):
"""
Create a polygon for drawing.
        Polygons are matplotlib.patches.Polygon objects for the fill and
matplotlib.lines.Line2D for the contour.
Parameters:
* vertices : list of [x, z]
List of the [x, z] coordinate pairs of each vertex of the polygon
* density : float
The density of the polygon (used to set the color)
Returns:
* polygon, line
The matplotlib Polygon and Line2D objects
"""
poly = patches.Polygon(vertices, animated=False, alpha=0.9,
color=self._density2color(density))
x, y = list(zip(*poly.xy))
line = Line2D(x, y, **self.line_args)
return poly, line
def _update_data(self):
"""
Recalculate the predicted data (optionally with random error)
"""
self.predicted = talwani.gz(self.x, self.z, self.model)
if self.error > 0:
self.predicted = utils.contaminate(self.predicted, self.error)
def _update_data_plot(self):
"""
Update the predicted data plot in the *dataax*.
Adjusts the xlim of the axes to fit the data.
"""
self.predicted_line.set_ydata(self.predicted)
vmin = 1.2*min(self.predicted.min(), self.dmin)
vmax = 1.2*max(self.predicted.max(), self.dmax)
self.dataax.set_ylim(vmin, vmax)
self.dataax.grid(True)
self.canvas.draw()
def _draw_callback(self, value):
"""
Callback for the canvas.draw() event.
        This is called every time the figure is redrawn. Used to do some
        cleanup and tuning whenever this is called as well, like calling
``tight_layout``.
"""
self.figure.tight_layout()
def _set_error_callback(self, value):
"""
Callback when error slider is edited
"""
self.error = value
self._update_data()
self._update_data_plot()
def _set_density_callback(self, value):
"""
Callback when density slider is edited
"""
if self._ipoly is not None:
self.densities[self._ipoly] = value
self.polygons[self._ipoly].set_color(self._density2color(value))
self._update_data()
self._update_data_plot()
self.canvas.draw()
def _get_polygon_vertice_id(self, event):
"""
Find out which vertex of which polygon the event happened in.
If the click was inside a polygon (not on a vertex), identify that
polygon.
Returns:
* p, v : int, int
p: the index of the polygon the event happened in or None if
outside all polygons.
v: the index of the polygon vertex that was clicked or None if the
click was not on a vertex.
"""
distances = []
indices = []
for poly in self.polygons:
x, y = poly.get_transform().transform(poly.xy).T
d = numpy.sqrt((x - event.x)**2 + (y - event.y)**2)
distances.append(d.min())
indices.append(numpy.argmin(d))
p = numpy.argmin(distances)
if distances[p] >= self.epsilon:
# Check if the event was inside a polygon
x, y = event.x, event.y
p, v = None, None
for i, poly in enumerate(self.polygons):
if poly.contains_point([x, y]):
p = i
break
else:
v = indices[p]
last = len(self.polygons[p].xy) - 1
if v == 0 or v == last:
v = [0, last]
return p, v
def _button_press_callback(self, event):
"""
What actions to perform when a mouse button is clicked
"""
if event.inaxes != self.modelax:
return
if event.button == 1 and not self._drawing and self.polygons:
self._lastevent = event
for line, poly in zip(self.lines, self.polygons):
poly.set_animated(False)
line.set_animated(False)
line.set_color([0, 0, 0, 0])
self.canvas.draw()
            # Find out if a click happened on a vertex
            # and which vertex of which polygon
self._ipoly, self._ivert = self._get_polygon_vertice_id(event)
if self._ipoly is not None:
self.density_slider.set_val(self.densities[self._ipoly])
self.polygons[self._ipoly].set_animated(True)
self.lines[self._ipoly].set_animated(True)
self.lines[self._ipoly].set_color([0, 1, 0, 0])
self.canvas.draw()
self.background = self.canvas.copy_from_bbox(self.modelax.bbox)
self.modelax.draw_artist(self.polygons[self._ipoly])
self.modelax.draw_artist(self.lines[self._ipoly])
self.canvas.blit(self.modelax.bbox)
elif self._drawing:
if event.button == 1:
self._xy.append([event.xdata, event.ydata])
self._drawing_plot.set_data(list(zip(*self._xy)))
self.canvas.restore_region(self.background)
self.modelax.draw_artist(self._drawing_plot)
self.canvas.blit(self.modelax.bbox)
elif event.button == 3:
if len(self._xy) >= 3:
density = self.density_slider.val
poly, line = self._make_polygon(self._xy, density)
self.polygons.append(poly)
self.lines.append(line)
self.densities.append(density)
self.modelax.add_patch(poly)
self.modelax.add_line(line)
self._drawing_plot.remove()
self._drawing_plot = None
self._xy = None
self._drawing = False
self._ipoly = len(self.polygons) - 1
self.lines[self._ipoly].set_color([0, 1, 0, 0])
self.dataax.set_title(self.instructions)
self.canvas.draw()
self._update_data()
self._update_data_plot()
def _button_release_callback(self, event):
"""
Reset place markers on mouse button release
"""
if event.inaxes != self.modelax:
return
if event.button != 1:
return
if self._ivert is None and self._ipoly is None:
return
self.background = None
for line, poly in zip(self.lines, self.polygons):
poly.set_animated(False)
line.set_animated(False)
self.canvas.draw()
self._ivert = None
# self._ipoly is only released when clicking outside
# the polygons
self._lastevent = None
self._update_data()
self._update_data_plot()
def _key_press_callback(self, event):
"""
What to do when a key is pressed on the keyboard.
"""
if event.inaxes is None:
return
if event.key == 'd':
if self._drawing and self._xy:
self._xy.pop()
if self._xy:
self._drawing_plot.set_data(list(zip(*self._xy)))
else:
self._drawing_plot.set_data([], [])
self.canvas.restore_region(self.background)
self.modelax.draw_artist(self._drawing_plot)
self.canvas.blit(self.modelax.bbox)
elif self._ivert is not None:
poly = self.polygons[self._ipoly]
line = self.lines[self._ipoly]
if len(poly.xy) > 4:
verts = numpy.atleast_1d(self._ivert)
poly.xy = numpy.array([xy for i, xy in enumerate(poly.xy)
if i not in verts])
line.set_data(list(zip(*poly.xy)))
self._update_data()
self._update_data_plot()
self.canvas.restore_region(self.background)
self.modelax.draw_artist(poly)
self.modelax.draw_artist(line)
self.canvas.blit(self.modelax.bbox)
self._ivert = None
elif self._ipoly is not None:
self.polygons[self._ipoly].remove()
self.lines[self._ipoly].remove()
self.polygons.pop(self._ipoly)
self.lines.pop(self._ipoly)
self.densities.pop(self._ipoly)
self._ipoly = None
self.canvas.draw()
self._update_data()
self._update_data_plot()
elif event.key == 'n':
self._ivert = None
self._ipoly = None
for line, poly in zip(self.lines, self.polygons):
poly.set_animated(False)
line.set_animated(False)
line.set_color([0, 0, 0, 0])
self.canvas.draw()
self.background = self.canvas.copy_from_bbox(self.modelax.bbox)
self._drawing = True
self._xy = []
self._drawing_plot = Line2D([], [], **self.line_args)
self._drawing_plot.set_animated(True)
self.modelax.add_line(self._drawing_plot)
self.dataax.set_title(' | '.join([
                'left click: set vertex', 'right click: finish',
'esc: cancel']))
self.canvas.draw()
elif event.key == 'escape':
self._drawing = False
self._xy = []
if self._drawing_plot is not None:
self._drawing_plot.remove()
self._drawing_plot = None
for line, poly in zip(self.lines, self.polygons):
poly.set_animated(False)
line.set_animated(False)
line.set_color([0, 0, 0, 0])
self.canvas.draw()
def _mouse_move_callback(self, event):
"""
        Handle mouse move events.
"""
if event.inaxes != self.modelax:
return
if event.button != 1:
return
if self._ivert is None and self._ipoly is None:
return
x, y = event.xdata, event.ydata
p = self._ipoly
v = self._ivert
if self._ivert is not None:
self.polygons[p].xy[v] = x, y
else:
dx = x - self._lastevent.xdata
dy = y - self._lastevent.ydata
self.polygons[p].xy[:, 0] += dx
self.polygons[p].xy[:, 1] += dy
self.lines[p].set_data(list(zip(*self.polygons[p].xy)))
self._lastevent = event
self.canvas.restore_region(self.background)
self.modelax.draw_artist(self.polygons[p])
self.modelax.draw_artist(self.lines[p])
self.canvas.blit(self.modelax.bbox)
| bsd-3-clause |
chrsrds/scikit-learn | sklearn/feature_selection/tests/test_variance_threshold.py | 1 | 1628 | import numpy as np
import pytest
from sklearn.utils.testing import assert_array_equal, assert_raises
from scipy.sparse import bsr_matrix, csc_matrix, csr_matrix
from sklearn.feature_selection import VarianceThreshold
data = [[0, 1, 2, 3, 4],
[0, 2, 2, 3, 5],
[1, 1, 2, 4, 0]]
def test_zero_variance():
# Test VarianceThreshold with default setting, zero variance.
for X in [data, csr_matrix(data), csc_matrix(data), bsr_matrix(data)]:
sel = VarianceThreshold().fit(X)
assert_array_equal([0, 1, 3, 4], sel.get_support(indices=True))
assert_raises(ValueError, VarianceThreshold().fit, [[0, 1, 2, 3]])
assert_raises(ValueError, VarianceThreshold().fit, [[0, 1], [0, 1]])
def test_variance_threshold():
# Test VarianceThreshold with custom variance.
for X in [data, csr_matrix(data)]:
X = VarianceThreshold(threshold=.4).fit_transform(X)
assert (len(data), 1) == X.shape
def test_zero_variance_floating_point_error():
# Test that VarianceThreshold(0.0).fit eliminates features that have
# the same value in every sample, even when floating point errors
# cause np.var not to be 0 for the feature.
# See #13691
data = [[-0.13725701]] * 10
if np.var(data) == 0:
pytest.skip('This test is not valid for this platform, as it relies '
'on numerical instabilities.')
for X in [data, csr_matrix(data), csc_matrix(data), bsr_matrix(data)]:
msg = "No feature in X meets the variance threshold 0.00000"
with pytest.raises(ValueError, match=msg):
VarianceThreshold().fit(X)
| bsd-3-clause |
s-gv/rnicu | ecg/ecg_visualizer_ble_PC/galry/test/test.py | 7 | 3947 | """Galry unit tests.
Every test shows a GalryWidget with a white (unfilled) square and a black
background. Every test uses a different technique to show the same picture
on the screen. Then, the output image is automatically saved as a PNG file and
it is then compared to the ground truth.
"""
import unittest
import os
import re
from galry import *
from matplotlib.pyplot import imread
def get_image_path(filename=''):
path = os.path.dirname(os.path.realpath(__file__))
return os.path.join(path, 'autosave/%s' % filename)
REFIMG = imread(get_image_path('_REF.png'))
# maximum accepted difference between the sums of the test and
# reference images
TOLERANCE = 10
def erase_images():
log_info("Erasing all non reference images.")
# erase all non ref png at the beginning
l = filter(lambda f: not(f.endswith('REF.png')),# and f != '_REF.png',
os.listdir(get_image_path()))
[os.remove(get_image_path(f)) for f in l]
def compare_subimages(img1, img2):
"""Compare the sum of the values in the two images."""
return np.abs(img1.sum() - img2.sum()) <= TOLERANCE
def compare_images(img1, img2):
"""Compare the sum of the values in the two images and in two
quarter subimages in opposite corners."""
n, m, k = img1.shape
boo = compare_subimages(img1, img2)
boo = boo and compare_subimages(img1[:n/2, :m/2, ...], img2[:n/2, :m/2, ...])
boo = boo and compare_subimages(img1[n/2:, m/2:, ...], img2[n/2:, m/2:, ...])
return boo
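# Illustrative sketch (not part of the original tests, and relying on this
# file's Python 2 integer division for the quarter slices):
#   img = np.ones((4, 4, 3))
#   compare_images(img, img.copy())  # -> True, the sums differ by 0 <= TOLERANCE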
class GalryTest(unittest.TestCase):
"""Base class for the tests. Child classes should call `self.show` with
the same keyword arguments as those of `show_basic_window`.
The window will be open for a short time and the image will be recorded
for automatic comparison with the ground truth."""
# in milliseconds
autodestruct = 100
def log_header(self, s):
s += '\n' + ('-' * (len(s) + 10))
log_info(s)
def setUp(self):
self.log_header("Running test %s..." % self.classname())
def tearDown(self):
self.log_header("Test %s finished!" % self.classname())
def classname(self):
"""Return the class name."""
return self.__class__.__name__
def filename(self):
"""Return the filename of the output image, depending on this class
name."""
return get_image_path(self.classname() + '.png')
def reference_image(self):
filename = get_image_path(self.classname() + '.REF.png')
if os.path.exists(filename):
return imread(filename)
else:
return REFIMG
def _show(self, **kwargs):
"""Show the window during a short period of time, and save the output
image."""
return show_basic_window(autosave=self.filename(),
autodestruct=self.autodestruct, **kwargs)
def show(self, **kwargs):
"""Create a window with the given parameters."""
window = self._show(**kwargs)
# make sure the output image is the same as the reference image
img = imread(self.filename())
boo = compare_images(img, self.reference_image())
self.assertTrue(boo)
return window
class MyTestSuite(unittest.TestSuite):
def run(self, *args, **kwargs):
erase_images()
super(MyTestSuite, self).run(*args, **kwargs)
def all_tests(pattern=None, folder=None):
if folder is None:
folder = os.path.dirname(os.path.realpath(__file__))
if pattern is None:
pattern = '*_test.py'
suites = unittest.TestLoader().discover(folder, pattern=pattern)
allsuites = MyTestSuite(suites)
return allsuites
def test(pattern=None, folder=None):
# unittest.main(defaultTest='all_tests')
unittest.TextTestRunner(verbosity=2).run(all_tests(folder=folder,
pattern=pattern))
if __name__ == '__main__':
test()
| agpl-3.0 |
beepee14/scikit-learn | examples/svm/plot_svm_margin.py | 318 | 2328 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
SVM Margins Example
=========================================================
The plots below illustrate the effect the parameter `C` has
on the separation line. A large value of `C` basically tells
our model that we do not have that much faith in our data's
distribution, and will only consider points close to the line
of separation.
A small value of `C` includes more/all the observations, allowing
the margins to be calculated using all the data in the area.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
# we create 40 separable points
np.random.seed(0)
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20
# figure number
fignum = 1
# fit the model
for name, penalty in (('unreg', 1), ('reg', 0.05)):
clf = svm.SVC(kernel='linear', C=penalty)
clf.fit(X, Y)
# get the separating hyperplane
w = clf.coef_[0]
a = -w[0] / w[1]
xx = np.linspace(-5, 5)
yy = a * xx - (clf.intercept_[0]) / w[1]
# plot the parallels to the separating hyperplane that pass through the
# support vectors
margin = 1 / np.sqrt(np.sum(clf.coef_ ** 2))
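    # (For a linear SVM the distance from the separating hyperplane to each
    # margin hyperplane is 1 / ||w||, which is what the line above computes
    # from the fitted coefficients.)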
yy_down = yy + a * margin
yy_up = yy - a * margin
# plot the line, the points, and the nearest vectors to the plane
plt.figure(fignum, figsize=(4, 3))
plt.clf()
plt.plot(xx, yy, 'k-')
plt.plot(xx, yy_down, 'k--')
plt.plot(xx, yy_up, 'k--')
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=80,
facecolors='none', zorder=10)
plt.scatter(X[:, 0], X[:, 1], c=Y, zorder=10, cmap=plt.cm.Paired)
plt.axis('tight')
x_min = -4.8
x_max = 4.2
y_min = -6
y_max = 6
XX, YY = np.mgrid[x_min:x_max:200j, y_min:y_max:200j]
Z = clf.predict(np.c_[XX.ravel(), YY.ravel()])
# Put the result into a color plot
Z = Z.reshape(XX.shape)
plt.figure(fignum, figsize=(4, 3))
plt.pcolormesh(XX, YY, Z, cmap=plt.cm.Paired)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
fignum = fignum + 1
plt.show()
| bsd-3-clause |
rupakc/Kaggle-Compendium | Yelp Recruitment Challenge/preprocess.py | 4 | 9418 | # -*- coding: utf-8 -*-
"""
Created on Mon Dec 14 22:25:49 2016
Preprocessing Utilities for cleaning and processing of data
@author: Rupak Chakraborty
"""
import pandas as pd
from nltk.corpus import stopwords
import string
from nltk.stem import PorterStemmer
stopword_list = set(stopwords.words("english"))
punctuation_list = list(string.punctuation)
ps = PorterStemmer()
months_list = ["january","february","march","april","may","june","july","august",
"september","october","november","december"]
digit_list = ["0","1","2","3","4","5","6","7","8","9"]
month_list_short = ["jan","feb","mar","apr","may","jun","jul","aug","sept","oct","nov","dec"]
emoticon_list = [":)",":(","^_^","-_-","<3",":D",":P",":/"]
html_tag_list = [" ","<",">","&",";","<strong>","<em>","[1]","</strong>","</em>","<div>","</div>","<b>","</b>","[2]","[3]","...","[img]","[/img]","<u>","</u>","<p>","</p>","\n","\\t","<span>",
"</span>","[Moved]","<br/>","<a>","</a>",""","<br>","<br />","Â","<a rel=\"nofollow\" class=\"ot-hashtag\"","'","<a","’","'"]
extend_punct_list = [' ',',',':',';','\'','\t','\n','?','-','$',"!!","?","w/","!","!!!","w/","'","RT","rt","@","#","/",":)",
":(",":D","^_^","^","...","&","\\",":","?","<",">","$","%","*","`","~","-","_",
"+","=","{","}","[","]","|","\"",",",";",")","(","r/","/u/","*","-"]
punctuation_list.extend(extend_punct_list)
#punctuation_list.remove(".")
months_list.extend(month_list_short)
"""
Given a string normalizes it, i.e. converts it to lowercase and strips leading and trailing whitespace
Params:
--------
s - String which is to be normalized
Returns:
---------
String in the normalized form
"""
def normalize_string(s):
s = s.lower()
s = s.strip()
return s
"""
Given a list of strings normalizes the strings
Params:
-------
string_list - List containing the strings which are to be normalized
Returns:
---------
Returns a list containing the normalized string list
"""
def normalize_string_list(string_list):
normalized_list = []
for sentence in string_list:
normalized_list.append(normalize_string(sentence))
return normalized_list
"""
Given a string and a separator splits up the string in the tokens
Params:
--------
s - string which has to be tokenized
separator - separator based on which the string is to be tokenized
Returns:
---------
A list of words in the sentence based on the separator
"""
def tokenize_string(s,separator):
word_list = list([])
if isinstance(s,basestring):
word_list = s.split(separator)
return word_list
"""
Given a list of sentences tokenizes each sentence in the list
Params:
--------
string_list - List of sentences which have to be tokenized
separator - Separator based on which the sentences have to be tokenized
"""
def tokenize_string_list(string_list,separator):
tokenized_sentence_list = []
for sentence in string_list:
#sentence = sentence.encode("ascii","ignore")
tokenized_sentence_list.append(tokenize_string(sentence,separator))
return tokenized_sentence_list
"""
Given a string containing stopwords removes all the stopwords
Params:
--------
s - String containing the stopwords which are to be removed
Returns:
---------
String sans the stopwords
"""
def remove_stopwords(s):
s = s.lower()
removed_string = ''
words = s.split()
for word in words:
if word not in stopword_list:
removed_string = removed_string + word.strip() + " "
return removed_string.strip()
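# Illustrative behaviour (sketch, relying on NLTK's English stopword list loaded above):
# remove_stopwords("This is a sample review of the restaurant")
# would return "sample review restaurant" -- the text is lowercased and the
# stopwords "this", "is", "a", "of" and "the" are dropped.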
"""
Given a list of sentences and a filename, writes the sentences to the file
Params:
--------
sentence_list - List of sentences which have to be written to the file
filename - File to which the sentences have to be written
Returns:
---------
-Nothing; it just writes the sentences to the file
"""
def write_sentences_to_file(sentence_list,filename):
write_file = open(filename,'w')
for sentence in sentence_list:
write_file.write(encode_ascii(sentence) + '\n')
write_file.flush()
write_file.close()
"""
Removes all the punctuations from a given string
Params:
--------
s - String containing the possible punctuations
Returns:
--------
String without the punctuations (including new lines and tabs)
"""
def remove_punctuations(s):
s = s.lower()
s = s.strip()
for punctuation in punctuation_list:
s = s.replace(punctuation,' ')
return s.strip()
"""
Strips a given string of HTML tags
Params:
--------
s - String from which the HTML tags have to be removed
Returns:
---------
String sans the HTML tags
"""
def remove_html_tags(s):
for tag in html_tag_list:
s = s.replace(tag,' ')
return s
"""
Given a string removes all the digits from them
Params:
-------
s - String from which the digits need to be removed
Returns:
---------
String without occurrences of the digits
"""
def remove_digits(s):
for digit in digit_list:
s = s.replace(digit,'')
return s
"""
Given a string removes all occurrences of months from it
Params:
--------
s - String containing possible month names
Returns:
--------
String without the occurrence of the months
"""
def remove_months(s):
s = s.lower()
words = s.split()
without_month_list = [word for word in words if word not in months_list]
month_clean_string = ""
for word in without_month_list:
month_clean_string = month_clean_string + word + " "
return month_clean_string.strip()
"""
Checks if a given string contains all ASCII characters
Params:
-------
s - String which is to be checked for ASCII characters
Returns:
--------
True if the string contains all ASCII characters, False otherwise
"""
def is_ascii(s):
if isinstance(s,basestring):
return all(ord(c) < 128 for c in s)
return False
"""
Given a string encodes it in ascii format
Params:
--------
s - String which is to be encoded
Returns:
--------
String encoded in ascii format
"""
def encode_ascii(s):
return s.encode('ascii','ignore')
"""
Stems each word of a given sentence to its root word using Porter's Stemmer
Params:
--------
sentence - String containing the sentence which is to be stemmed
Returns:
---------
Sentence where each word has been stemmed to its root word
"""
def stem_sentence(sentence):
words = sentence.split()
stemmed_sentence = ""
for word in words:
try:
if is_ascii(word):
stemmed_sentence = stemmed_sentence + ps.stem_word(word) + " "
except:
pass
return stemmed_sentence.strip()
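# NOTE: ps.stem_word targets an older NLTK Porter stemmer API; on current NLTK
# releases the stemmer only exposes stem(), so this call raises AttributeError,
# which the bare except silently swallows and the word passes through unstemmed.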
"""
Given a string removes urls from the string
Params:
--------
s - String containing urls which have to be removed
Returns:
--------
String without the occurrence of the URLs
"""
def remove_url(s):
s = s.lower()
words = s.split()
without_url = ""
for word in words:
if word.count('http:') == 0 and word.count('https:') == 0 and word.count('ftp:') == 0 and word.count('www.') == 0 and word.count('.com') == 0 and word.count('.ly') == 0 and word.count('.st') == 0:
without_url = without_url + word + " "
return without_url.strip()
"""
Given a string removes all the words whose length is less than 3
Params:
--------
s - String from which small words have to be removed.
Returns:
---------
Returns a string without occurrences of small words
"""
def remove_small_words(s):
words = s.split()
clean_string = ""
for word in words:
if len(word) >= 3:
clean_string = clean_string + word + " "
return clean_string.strip()
"""
Defines the pipeline for cleaning and preprocessing of text
Params:
--------
s - String containing the text which has to be preprocessed
Returns:
---------
String which has been passed through the preprocessing pipeline
"""
def text_clean_pipeline(s):
s = remove_url(s)
s = remove_punctuations(s)
s = remove_html_tags(s)
s = remove_stopwords(s)
s = remove_months(s)
s = remove_digits(s)
#s = stem_sentence(s)
s = remove_small_words(s)
return s
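# Illustrative run of the pipeline (sketch; the exact result depends on the NLTK
# stopword list loaded above):
# text_clean_pipeline("The burgers were AMAZING in January 2016, right?!")
# -> "burgers amazing right"
# URLs, punctuation, HTML remnants, stopwords, month names, digits and words
# shorter than three characters are stripped, and the text is lowercased.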
"""
Given a list of sentences processes the list through the preprocessing pipeline and returns the list
Params:
--------
sentence_list - List of sentences which are to be cleaned
Returns:
---------
The cleaned and pre-processed sentence list
"""
def text_clean_pipeline_list(sentence_list):
clean_sentence_list = list([])
for s in sentence_list:
s = remove_digits(s)
s = remove_punctuations(s)
s = remove_html_tags(s)
s = remove_stopwords(s)
s = remove_months(s)
s = remove_small_words(s)
#s = encode_ascii(s)
#s = remove_url(s)
#s = stem_sentence(s)
clean_sentence_list.append(s)
return clean_sentence_list
"""
Given an Excel filepath and a corresponding sheetname reads it and converts it into a dataframe
Params:
--------
filename - Filepath containing the location and name of the file
sheetname - Name of the sheet containing the data
Returns:
---------
pandas dataframe containing the data from the excel file
"""
def get_dataframe_from_excel(filename,sheetname):
xl_file = pd.ExcelFile(filename)
data_frame = xl_file.parse(sheetname)
return data_frame
| mit |
fyffyt/scikit-learn | examples/cluster/plot_adjusted_for_chance_measures.py | 286 | 4353 | """
==========================================================
Adjustment for chance in clustering performance evaluation
==========================================================
The following plots demonstrate the impact of the number of clusters and
number of samples on various clustering performance evaluation metrics.
Non-adjusted measures such as the V-Measure show a dependency between
the number of clusters and the number of samples: the mean V-Measure
of random labeling increases significantly as the number of clusters is
closer to the total number of samples used to compute the measure.
Adjusted-for-chance measures such as ARI display some random variations
centered around a mean score of 0.0 for any number of samples and
clusters.
Only adjusted measures can hence safely be used as a consensus index
to evaluate the average stability of clustering algorithms for a given
value of k on various overlapping sub-samples of the dataset.
"""
print(__doc__)
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from time import time
from sklearn import metrics
def uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=None, n_runs=5, seed=42):
"""Compute score for 2 random uniform cluster labelings.
Both random labelings have the same number of clusters for each
possible value in ``n_clusters_range``.
When fixed_n_classes is not None the first labeling is considered a ground
truth class assignment with fixed number of classes.
"""
random_labels = np.random.RandomState(seed).random_integers
scores = np.zeros((len(n_clusters_range), n_runs))
if fixed_n_classes is not None:
labels_a = random_labels(low=0, high=fixed_n_classes - 1,
size=n_samples)
for i, k in enumerate(n_clusters_range):
for j in range(n_runs):
if fixed_n_classes is None:
labels_a = random_labels(low=0, high=k - 1, size=n_samples)
labels_b = random_labels(low=0, high=k - 1, size=n_samples)
scores[i, j] = score_func(labels_a, labels_b)
return scores
score_funcs = [
metrics.adjusted_rand_score,
metrics.v_measure_score,
metrics.adjusted_mutual_info_score,
metrics.mutual_info_score,
]
# 2 independent random clusterings with equal cluster number
n_samples = 100
n_clusters_range = np.linspace(2, n_samples, 10).astype(np.int)
plt.figure(1)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, np.median(scores, axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for 2 random uniform labelings\n"
"with equal number of clusters")
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.legend(plots, names)
plt.ylim(ymin=-0.05, ymax=1.05)
# Random labeling with varying n_clusters against ground class labels
# with fixed number of clusters
n_samples = 1000
n_clusters_range = np.linspace(2, 100, 10).astype(np.int)
n_classes = 10
plt.figure(2)
plots = []
names = []
for score_func in score_funcs:
print("Computing %s for %d values of n_clusters and n_samples=%d"
% (score_func.__name__, len(n_clusters_range), n_samples))
t0 = time()
scores = uniform_labelings_scores(score_func, n_samples, n_clusters_range,
fixed_n_classes=n_classes)
print("done in %0.3fs" % (time() - t0))
plots.append(plt.errorbar(
n_clusters_range, scores.mean(axis=1), scores.std(axis=1))[0])
names.append(score_func.__name__)
plt.title("Clustering measures for random uniform labeling\n"
"against reference assignment with %d classes" % n_classes)
plt.xlabel('Number of clusters (Number of samples is fixed to %d)' % n_samples)
plt.ylabel('Score value')
plt.ylim(ymin=-0.05, ymax=1.05)
plt.legend(plots, names)
plt.show()
| bsd-3-clause |
kerkphil/multi-country | Python/Archive/Stage1/StepbyStepv1.py | 2 | 34730 | from __future__ import division
import numpy as np
import scipy as sp
import scipy.optimize as opt
from matplotlib import pyplot as plt
import time
#DEMOGRAPHICS FUNCTIONS
def getDemographics(params):
"""
Description:
-Imports data from csv files for initial populations, fertility rates, mortality rates, and net migrants.
-Stores these data sets in their respective matrices, and calculates population distribuitons through year T.
-NOTE: FOR NOW THIS FUNCTION ONLY USES DATA FOR THE USA. NEEDS TO EVENTUALLY ADD MORE COUNTRIES
Inputs:
-None, but uses the global variables T, T_1, StartFertilityAge, EndFertilityAge, StartDyingAge, and MaxImmigrantAge
Objects in Function:
-USAPopdata: (S+1) vector that has the initial population of the U.S straight from the csv
-USAFertdata: (T_1,EndFertilityAge+2-StartFertilityAge) vector that has U.S. fertility straight from the csv
-USAMortdata: (T_1,S+1-StartDyingAge) vector that has U.S. mortality straight from the csv
-USAMigdata: (MaxImmigrantAge) vector that contains the number of net U.S. migrants straight from the csv
-g_N: (T) vector that contains the exogenous population growth rates
-g_A: Constant that represents the technical growth rate
-l_endowment: (T) vector labor endowment per household
-f_bar: (I) vector that represents the fertility rate after period T_1
-p_bar: (I) vector that represents the mortality rate after period T_1
-m_bar: (I) vector that represents the immigration rate after period T_1
Output:
-FertilityRates: Numpy array that contains fertilty rates for all countries, ages, and years
-MortalityRates: Numpy array that contains mortality rates for all countries, ages, and years
-Migrants: Numpy array that contains net migration for all countries and ages
-N_matrix: Numpy array that contains population numbers for all countries, ages, and years
-Nhat matrix: Numpy array that contains the population percentage for all countries, ages, and years
"""
I, S, T, T_1, StartFertilityAge, EndFertilityAge, StartDyingAge, MaxImmigrantAge, g_A, PrintAges = params
if PrintAges:
print "T =", T
print "T_1", T_1
print "StartFertilityAge", StartFertilityAge
print "EndFertilityAge", EndFertilityAge
print "StartDyingAge", StartDyingAge
print "MaxImmigrantAge", MaxImmigrantAge
#Imports and scales data for the USA. Imports a certain number of generations according to the value of S
USAPopdata = np.loadtxt(("Data_Files/population.csv"),delimiter=',',skiprows=1, usecols=[1])[:S+1]*1000
USAFertdata = np.loadtxt(("Data_Files/usa_fertility.csv"),delimiter=',',skiprows=1, usecols=range(1,EndFertilityAge+2-StartFertilityAge))[48:48+T_1,:]
USAMortdata = np.loadtxt(("Data_Files/usa_mortality.csv"),delimiter=',',skiprows=1, usecols=range(1,S+1-StartDyingAge))[:T_1,:]
USAMigdata = np.loadtxt(("Data_Files/net_migration.csv"),delimiter=',',skiprows=1, usecols=[1])[:MaxImmigrantAge]*100
#Initializes demographics matrices
N_matrix = np.zeros((I, S+1, T))
Nhat_matrix = np.zeros((I, S+1, T))
FertilityRates = np.zeros((I, S+1, T))
MortalityRates = np.zeros((I, S+1, T))
Migrants = np.zeros((I, S+1, T))
g_N = np.zeros(T)
#NOTE: For now we set fertility, mortality, number of migrants, and initial population the same for all countries.
#Sets initial total population (N_matrix), percentage of total world population (Nhat_matrix)
N_matrix[:,:,0] = np.tile(USAPopdata, (I, 1))
Nhat_matrix[:,:,0] = N_matrix[:,:,0]/np.sum(N_matrix[:,:,0])
#Fertility Will be equal to 0 for all ages that don't bear children
FertilityRates[:,StartFertilityAge:EndFertilityAge+1,:T_1] = np.einsum("ts,i->ist", USAFertdata, np.ones(I))
#Mortality will be equal to 0 for all young people who aren't old enough to die
MortalityRates[:,StartDyingAge:-1,:T_1] = np.einsum("ts,it->ist", USAMortdata, np.ones((I,T_1)))
#The last generation dies with probability 1
MortalityRates[:,-1,:] = np.ones((I, T))
#The number of migrants is the same for each year
Migrants[:,:MaxImmigrantAge,:T_1] = np.einsum("s,it->ist", USAMigdata, np.ones((I,T_1)))
#Gets steady-state values
f_bar = FertilityRates[:,:,T_1-1]
p_bar = MortalityRates[:,:,T_1-1]
m_bar = Migrants[:,:,T_1-1]
#Set to the steady state for every year beyond year T_1
FertilityRates[:,:,T_1:] = np.tile(np.expand_dims(f_bar, axis=2), (1,1,T-T_1))
MortalityRates[:,:,T_1:] = np.tile(np.expand_dims(p_bar, axis=2), (1,1,T-T_1))
Migrants[:,:,T_1:] = np.tile(np.expand_dims(m_bar, axis=2), (1,1,T-T_1))
#Gets the initial immigration rate
ImmigrationRate = Migrants[:,:,0]/N_matrix[:,:,0]
#Gets initial world population growth rate
g_N[0] = np.sum(Nhat_matrix[:,:,0]*(FertilityRates[:,:,0] + ImmigrationRate - MortalityRates[:,:,0]))
#Calculates population numbers for each country
for t in range(1,T):
#Gets the total number of children and and percentage of children and stores them in generation 0 of their respective matrices
#See equations 2.1 and 2.10
N_matrix[:,0,t] = np.sum((N_matrix[:,:,t-1]*FertilityRates[:,:,t-1]),axis=1)
Nhat_matrix[:,0,t] = np.exp(-g_N[t-1])*np.sum((Nhat_matrix[:,:,t-1]*FertilityRates[:,:,t-1]),axis=1)
#Finds the immigration rate for each year
ImmigrationRate = Migrants[:,:,t-1]/N_matrix[:,:,t-1]
#Gets the population and percentage of population for the next year, taking into account immigration and mortality
#See equations 2.2 and 2.11
N_matrix[:,1:,t] = N_matrix[:,:-1,t-1]*(1+ImmigrationRate[:,:-1]-MortalityRates[:,:-1,t-1])
Nhat_matrix[:,1:,t] = np.exp(-g_N[t-1])*Nhat_matrix[:,:-1,t-1]*(1+ImmigrationRate[:,:-1]-MortalityRates[:,:-1,t-1])
#Gets the growth rate for the next year
g_N[t] = np.sum(Nhat_matrix[:,:,t]*(FertilityRates[:,:,t] + ImmigrationRate - MortalityRates[:,:,t]), axis=(0, 1))
#print np.sum(Nhat_matrix[:,:,t])#This should be equal to 1.0
#Gets labor endowment per household. For now it grows at a constant rate g_A
l_endowment = np.cumsum(np.ones(T)*g_A)
return FertilityRates, MortalityRates, Migrants, N_matrix, Nhat_matrix
def plotDemographics(params, index, years, name, N_matrix):
"""
Description:
Plots the population distribution of a given country for any number of specified years
Inputs:
index: Integer that indicates which country to plot
years: List that contains each year to plot
name: String of the country's name. Used in the legend of the plot
Outputs:
None
"""
S, T = params
for y in range(len(years)):
yeartograph = years[y]
#Checks to make sure we haven't requested to plot a year past the max year
if yeartograph <= T:
plt.plot(range(S+1), N_matrix[index,:,yeartograph])
else:
print "\nERROR: WE HAVE ONLY SIMULATED UP TO THE YEAR", T
time.sleep(15)
plt.title(str(name + " Population Distribution"))
plt.legend(years)
plt.show()
plt.clf()
def getBequests(params, assets_old):
"""
Description:
-Gets the value of the bequests given to each generation
Inputs:
-assets: Assets for each generation in a given year
-current_t: Integer that indicates the current year. Used to pull information from demographics global matrices like FertilityRates
Objects in Function:
-BQ: T
-num_bequest_receivers:
-bq_Distribution:
Output:
-bq: Numpy array that contains the value of bequests received by each generation in each country.
"""
I, S, T, StartFertilityAge, StartDyingAge, pop_old, pop_working, current_mort = params
#Initializes bequests
bq = np.zeros((I, S+1))
#Gets the total assets of the people who died this year
BQ = np.sum(assets_old*current_mort*pop_old, axis=1)
#Distributes the total assets equally among the eligible population for each country
#NOTE: This will likely change as we get a more complex function for distributing the bequests
num_bequest_receivers = np.sum(pop_working, axis=1)
bq_Distribution = BQ/num_bequest_receivers
bq[:,StartFertilityAge:StartDyingAge+1] = np.einsum("i,s->is", bq_Distribution, np.ones(StartDyingAge+1-StartFertilityAge))
return bq
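# Hedged usage sketch (mirrors how the function is called in the timepath code
# further below): for a given year t one would pass the dying cohorts' population,
# the working-age population and the current mortality rates, e.g.
# bq_params = (I, S, T, StartFertilityAge, StartDyingAge,
#              N_matrix[:,StartDyingAge:,t], N_matrix[:,StartFertilityAge:StartDyingAge+1,t],
#              MortalityRates[:,StartDyingAge:,t])
# bq = getBequests(bq_params, a_timepath[:,StartDyingAge:,t])
# so the estates of agents who die in year t are split evenly across the
# working-age cohorts of the same country.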
def hatvariables(Kpathreal, kfpathreal, Nhat_matrix):
#THIS FUNCTION HAS EQUATIONS 2.13-2.16 AND 2.19-2.20, BUT STILL NEEDS TO BE INCORPORATED INTO THE REST OF THE MODEL TO COMPLETELY TEST
#We are only using up until T periods rather than T+S+1 since Nhat only goes out to T
Kpath = Kpathreal[:,:T]
kfpath = kfpathreal[:,:T]
temp_e = np.ones((I, S+1, T))#THIS SHOULD ONLY BE UNTIL WE GET S GENERATIONS RATHER THAN S-1
n = np.sum(temp_e[:,:,:T]*Nhat_matrix, axis=1)
Ypath = (Kpath**alpha) * (np.einsum("i,it->it", A, n)**(1-alpha))
rpath = alpha * Ypath / Kpath
wpath = (1-alpha) * Ypath / n
"""
#NOTE:This goes in the get_householdchoices_path function
c_path = np.zeros((I, S))
asset_path = np.zeros((I, S+1))
c_path[:,0] = c_1
asset_path[:,0] = starting_assets
for s in range(1,S):
c_path[:,s] = ((beta * (1 + rpath_chunk[:,s] - delta))**(1/sigma) * c_path[:,s-1])/np.exp(g_A)
asset_path[:,s] = (wpath_chunk[:,s]*e[:,0,s-1] + (1 + rpath_chunk[:,s-1] - delta)*asset_path[:,s-1] + bq_chunk - c_path[:,s-1])/np.exp(g_A)
asset_path[:,s+1] = wpath_chunk[:,s]*e_chunk[:,s] + (1 + rpath_chunk[:,s] - delta)*asset_path[:,s] - c_path[:,s]
"""
#STEADY STATE FUNCTIONS
def get_kd(assets, kf):
"""
Description: Calculates the amount of domestic capital that remains in the domestic country
Inputs:
-assets[I,S,T+S+1]: Matrix of assets
-kf[I,T+S+1]: Domestic capital held by foreigners.
Objects in Function:
NONE
Outputs:
-kd[I,T+S+1]: Capital that is used in the domestic country
"""
kd = np.sum(assets[:,1:-1], axis=1) - kf
return kd
def get_n(e):
"""
Description: Calculates the total labor productivity for each country
Inputs:
-e[I,S,T]:Matrix of labor productivities
Objects in Function:
-NONE
Outputs:
-n[I,S+T+1]: Total labor productivity
"""
n = np.sum(e, axis=1)
return n
def get_Y(params, kd, n):
"""
Description:Calculates the output timepath
Inputs:
-params (2) tuple: Contains the necessary parameters used
-kd[I,T+S+1]: Domestic held capital stock
-n[I,S+T+1]: Summed labor productivity
Objects in Function:
-A[I]: Technology for each country
-alpha: Production share of capital
Outputs:
-Y[I,S+T+1]: Timepath of output
"""
alpha, A = params
if kd.ndim == 1:
Y = (kd**alpha) * ((A*n)**(1-alpha))
elif kd.ndim == 2:
Y = (kd**alpha) * (np.einsum("i,is->is", A, n)**(1-alpha))
return Y
def get_r(alpha, Y, kd):
"""
Description: Calculates the rental rates.
Inputs:
-alpha (scalar): Production share of capital
-Y[I,T+S+1]: Timepath of output
-kd[I,T+S+1]: Timepath of domestically owned capital
Objects in Function:
-NONE
Outputs:
-r[I,R+S+1]:Timepath of rental rates
"""
r = alpha * Y / kd
return r
def get_w(alpha, Y, n):
"""
Description: Calculates the wage timepath.
Inputs:
-alpha (scalar): Production share of output
-Y[I,T+S+1]: Output timepath
-n[I,T+S+1]: Total labor productivity timepath
Objects in Function:
-NONE
Outputs:
-w[I,T+S+1]: Wage timepath
"""
w = (1-alpha) * Y / n
return w
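# Both factor prices follow from the Cobb-Douglas technology used in get_Y,
#   Y = kd**alpha * (A*n)**(1-alpha),
# whose marginal products give the rental rate r = alpha*Y/kd and the wage
# w = (1-alpha)*Y/n used above; depreciation delta is netted out later in the
# household problem through the (1 + r - delta) terms.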
def get_cvecss(params, w, r, assets):
"""
Description: Calculates the consumption vector
Inputs:
-params (tuple 2): Tuple that containts the necessary parameters
-w[I,T+S+1]: Wage timepath
-r[I,T+S+1]: Rental Rate timepath
-assets[I,S,T+S+1]: Assets timepath
Objects in Function:
-e[I,S,T+S+1]: Matrix of labor productivities
-delta (parameter): Depreciation rate
Outputs:
-c_vec[I,T+S+1]:Vector of consumption.
"""
e, delta = params
c_vec = np.einsum("i, is -> is", w, e[:,:,0])\
+ np.einsum("i, is -> is",(1 + r - delta) , assets[:,:-1])\
- assets[:,1:]
return c_vec
def check_feasible(K, Y, w, r, c):
"""
Description:Checks the feasibility of the inputs.
Inputs:
-K[I,T+S+1]: Capital stock timepath
-y[I,T+S+1]: Output timepath
-w[I,T+S+1]: Wage timepath
-r[I,T+S+1]: Rental rate timepath
-c[I,T+S+1]: consumption timepath
Objects in Function:
NONE
Outputs:
-Feasible (Boolean): Whether or not the inputs are feasible.
"""
Feasible = True
if np.any(K<0):
Feasible=False
print "WARNING! INFEASABLE VALUE ENCOUNTERED IN K!"
print "The following coordinates have infeasible values:"
print np.argwhere(K<0)
if np.any(Y<0):
Feasible=False
print "WARNING! INFEASABLE VALUE ENCOUNTERED IN Y!"
print "The following coordinates have infeasible values:"
print np.argwhere(Y<0)
if np.any(r<0):
Feasible=False
print "WARNING! INFEASABLE VALUE ENCOUNTERED IN r!"
print "The following coordinates have infeasible values:"
print np.argwhere(r<0)
if np.any(w<0):
Feasible=False
print "WARNING! INFEASABLE VALUE ENCOUNTERED IN w!"
print "The following coordinates have infeasible values:"
print np.argwhere(w<0)
if np.any(c<0):
Feasible=False
print "WARNING! INFEASABLE VALUE ENCOUNTERED IN c_vec!"
print "The following coordinates have infeasible values:"
print np.argwhere(c<0)
return Feasible
def SteadyStateSolution(guess, I, S, beta, sigma, delta, alpha, e, A):
"""
Description:
-This is the function that will be optimized by fsolve.
Inputs:
-guess[I,S+1]: vector that pieced together from assets and kf.
Objects in Function:
-kf[I,]:Foreign capital held by foreigners in each country
-assets[I,S]: Asset path for each country
-k[I,]:Capital for each country
-n[I,]:Labor for each country
-Y[I,]:Output for each country
-r[I,]:Rental Rate for each country
-w[I,]:Wage for each country
-c_vec[I, S]: Consumption by cohort in each country
-Euler_c[I, S-1]: Corresponds to (1.16)
-Euler_r[I,]: Corresponds to (1.17)
-Euler_kf(Scalar): Corresponds to (1.18)
Output:
-all_Euler[I*S,]: Similar to guess, it's a vector that has both assets and kf.
"""
#Takes a 1D guess of length I*S and reshapes it to match what the original input into the fsolve looked like since fsolve flattens numpy arrays
guess = np.reshape(guess[:,np.newaxis], (I, S))
#Appends a I-length vector of zeros on ends of assets to represent no assets when born and no assets when dead
assets = np.column_stack((np.zeros(I), guess[:,:-1], np.zeros(I)))
#Sets kf as the last element of the guess vector for each country
kf = guess[:,-1]
#Getting the other variables
kd = get_kd(assets, kf)
n = get_n(e[:,:,0])
Yparams = (alpha, A)
Y = get_Y(Yparams, kd, n)
r = get_r(alpha, Y, kd)
w = get_w(alpha, Y, n)
cparams = (e, delta)
c_vec = get_cvecss(cparams, w, r, assets)
K = kd+kf
Feasible = check_feasible(K, Y, w, r, c_vec)
if np.any(c_vec<0): #Punishes the poor choice of negative values in the fsolve
all_Euler=np.ones((I*S))*9999.
else:
#Gets Euler equations
Euler_c = c_vec[:,:-1] ** (-sigma) - beta * c_vec[:,1:] ** (-sigma) * (1 + r[0] - delta)
Euler_r = r[1:] - r[0]
Euler_kf = np.sum(kf)
#Makes a new 1D vector of length I*S that contains all the Euler equations
all_Euler = np.append(np.append(np.ravel(Euler_c), np.ravel(Euler_r)), Euler_kf)
return all_Euler
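# The stacked system above encodes, for every country and cohort s, the
# consumption Euler equation c_s**(-sigma) = beta*(1 + r - delta)*c_{s+1}**(-sigma)
# (eq. 1.16), equal rental rates across countries (eq. 1.17), and the world
# capital-market clearing condition sum_i kf_i = 0 (eq. 1.18).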
def getSteadyState(params, assets_init, kf_init):
"""
Description:
This takes the initial guess for assets and kf. Since the function
returns a matrix, this unpacks the individual parts.
Inputs:
-assets_init[I,S-1]:Intial guess for asset path
-kf_init[I]:Initial guess on foreigner held capital
Objects in Function:
-guess[I,S]: A combined matrix that has both assets_init and kf_init
-ss[S*I,]: The result from optimization.
Outputs:
-assets_ss[I,S-1]:Calculated assets steady state
-kf_ss[I,]:Calculated domestic capital owned by foreigners steady state
-k_ss[I]: Calculated total capital stock steady state
-n_ss[I]: Summed labor productivities steady state
-y_ss[I]: Calculated output steady state
-r_ss[I]: calculated steady state rental rate
-w_ss[I]: calculated steady state wage rate
-c_vec_ss[I, S]: Calculated steady state consumption
"""
I, S, beta, sigma, delta, alpha, e, A = params
#Merges the assets and kf together into one matrix that can be inputted into the fsolve function
guess = np.column_stack((assets_init, kf_init))
#Solves for the steady state
solver_params = (I, S, beta, sigma, delta, alpha, e, A)
ss = opt.fsolve(SteadyStateSolution, guess, args=solver_params)
#Reshapes the ss code
ss = np.array(np.split(ss, I))
#Breaks down the steady state matrix into the two separate assets and kf matrices.
assets_ss = np.column_stack((np.zeros(I), ss[:,:-1], np.zeros(I)))
kf_ss = ss[:,-1]
#Gets the other steady-state values using assets and kf
kd_ss = get_kd(assets_ss, kf_ss)
n_ss = get_n(e[:,:,0])
Yparams = (alpha, A)
Y_ss = get_Y(Yparams, kd_ss, n_ss)
r_ss = get_r(alpha, Y_ss, kd_ss)
w_ss = get_w(alpha, Y_ss, n_ss)
cparams = (e, delta)
c_vec_ss = get_cvecss(cparams, w_ss, r_ss, assets_ss)
print "\nSteady State Found!\n"
return assets_ss, kf_ss, kd_ss, n_ss, Y_ss, r_ss[0], w_ss, c_vec_ss
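# Hedged usage sketch (toy guesses; parameter values would come from the calling
# script rather than this module): with I countries and S generations one might call
# guess_assets = np.ones((I, S-1))*.1
# guess_kf = np.zeros(I)
# ss_params = (I, S, beta, sigma, delta, alpha, e, A)
# assets_ss, kf_ss, kd_ss, n_ss, Y_ss, r_ss, w_ss, c_vec_ss = \
#     getSteadyState(ss_params, guess_assets, guess_kf)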
#TIMEPATH FUNCTIONS
def get_initialguesses(params, assets_ss, kf_ss, w_ss, r_ss):
"""
Description:
With the parameters and steady state values, this function creates
initial guesses in a linear path.
Inputs:
-Params (Tuple): Tuple of parameters from Main.py
-Assets_ss[I,S,T+S+1]: Steady state assets value
-kf_ss[I,]: Steady State value of foreign owned domestic capital
-w_ss[I,]: Steady state value of wages
-r_ss[I,]: Steady state value of rental rate
Objects in Function:
-othervariable_params (Tuple): A tuple specifically made for GetOtherVariables
Outputs:
-assets_init[I,]: Initial Asset path
-kf_init[I,]: New initial foreign held capital
-w_initguess[I,T+S+1]: Initial guess wage timepath
-r_initguess[I,T+S+1]: Initial guess rental rate timepath
-k_init[I,]: total capital stock initial guess
-n_init[I,]: total labor initial guess
-y_init[I,]: output initial guess
-c_init[I,]: consumption initial guess
"""
I, S, T, delta, alpha, e, A = params
#Sets initial assets and kf, start with something close to the steady state
assets_init = assets_ss*.9
kf_init = kf_ss*0
w_initguess = np.zeros((I, T+S+1))
r_initguess = np.ones((T+S+1))*.5
#Gets initial kd, n, y, r, w, and K
kd_init = get_kd(assets_init, kf_init)
n_init = get_n(e[:,:,0])
Yparams = (alpha, A)
Y_init = get_Y(Yparams, kd_init, n_init)
r_init = get_r(alpha, Y_init, kd_init)
w_init = get_w(alpha, Y_init, n_init)
cparams = (e, delta)
c_init = get_cvecss(cparams, w_init, r_init, assets_init)
#Gets initial guess for rental rate path. This is set up to be linear.
r_initguess[:T+1] = np.linspace(r_init[0], r_ss, T+1)
r_initguess[T+1:] = r_initguess[T]
#Gets initial guess for wage path. This is set up to be linear.
for i in range(I):
w_initguess[i, :T+1] = np.linspace(w_init[i], w_ss[i], T+1)
w_initguess[i,T+1:] = w_initguess[i,T]
return assets_init, kf_init, w_initguess, r_initguess, kd_init, n_init, Y_init, c_init
def get_foreignK_path(params, Kpath, rpath, kf_ss):
"""
Description:
This calculates the timepath of the foreign capital stock. This is based on equations (1.12) and (1.13).
Inputs:
apath: Asset path, from our calculations
rpath: Rental Rate path, also from our calculation
Objects in Function:
kdpath[I,S+T+1]: Path of domestic owned capital
n[I,S+T+1]: Path of total labor
kf_ss[I,]: Calculated from the steady state.
A[I,]: Parameters from above
Outputs:
kfPath[I,S+T+1]: Path of domestic capital held by foreigners.
"""
I, S, T, alpha, e, A = params
#Sums the labor productivities across cohorts
n = np.sum(e, axis=1)
#Declares the array that will later be used.
kfPath=np.zeros((I,S+T+1))
kdPath=np.zeros((I,S+T+1))
#Gets the domestic-owned capital stock for each country except for the first country
kdPath[1:,:]=(rpath[:]/alpha)**(1/(alpha-1))*np.einsum("i,is->is", A[1:], n[1:,:])
#This is using equation 1.13 solved for the foreign capital stock to calculate the foreign capital stock
#For everyone except the first country
kfPath[1:,:]=Kpath[1:,:]-kdPath[1:,:]
#To satisfy 1.18, the first country's assets is the negative of the sum of all the other countries' assets
kfPath[0,:]= -np.sum(kfPath[1:,:],axis=0)
#Making every year beyond t equal to the steady-state
kfPath[:,T:] = np.einsum("i,s->is", kf_ss, np.ones(S+1))
return kfPath
def get_lifetime_decisions(params, c_1, wpath_chunk, rpath_chunk, e_chunk, starting_assets, current_s):
"""
Description:
This solves for equations 1.15 and 1.16 in the StepbyStep pdf for a certain generation
Inputs:
-c_1: Initial consumption (not necessarily for the year they were born)
-wpath_chunk: Wages of an agents lifetime, a section of the timepath
-rpath_chunk: Rental rate of an agents lifetime, a section of the timepath
-e_chunk: Worker productivities of an agents lifetime, a section of the global matrix
-starting_assets: Initial assets of the agent. Will be 0s if we are beginning in the year the agent was born
-current_s: Current age of the agent
Objects in Function:
-NONE
Outputs:
-c_path[I, S]: Path of consumption until the agent dies
-asset_path[I, S+1]: Path of assets until the agent dies
"""
I, S, beta, sigma, delta = params
#Initializes the cpath and asset path vectors
c_path = np.zeros((I, S))
asset_path = np.zeros((I, S+1))
#For each country, the cpath and asset path vectors' are the initial values provided.
c_path[:,0] = c_1
asset_path[:,0] = starting_assets
#Based on the individual chunks, these are the households choices
for s in range(1,S):
c_path[:,s] = (beta * (1 + rpath_chunk[s] - delta))**(1/sigma) * c_path[:,s-1]
asset_path[:,s] = wpath_chunk[:,s]*e_chunk[:,s-1] + (1 + rpath_chunk[s-1] - delta)*asset_path[:,s-1] - c_path[:,s-1]
asset_path[:,s+1] = wpath_chunk[:,s]*e_chunk[:,s] + (1 + rpath_chunk[s] - delta)*asset_path[:,s] - c_path[:,s]
#Returns the relevant part of c_path and asset_path for all countries
return c_path[:,0:S-current_s], asset_path[:,0:S+1-current_s]
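# The loop above is the household problem in recursive form: consumption grows
# according to the Euler equation c_s = (beta*(1 + r_s - delta))**(1/sigma)*c_{s-1}
# and assets evolve with the budget constraint
#   a_{s+1} = w_s*e_s + (1 + r_s - delta)*a_s - c_s,
# so a single guess for first-period consumption pins down the entire path.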
def find_optimal_starting_consumptions(c_1, wpath_chunk, rpath_chunk, epath_chunk, starting_assets, current_s, params):
"""
Description:
Takes the assets path from the get_lifetime_decisions function and creates Euler errors
Inputs:
Dimension varies
-c_1: Initial consumption (not necessarily for the year they were born)
-wpath_chunk: Wages of an agents lifetime, a part of the timepath
-rpath_chunk: Rental rate of an agents lifetime, another part of the timepath.
-epath_chunk: Worker productivities of an agents lifetime, another part.
-starting_assets: Initial assets of the agent. It's 0 at the beginning of life.
-current_s: Current age of the agent
Objects in Function:
-cpath: Path of consumption based on chunk given.
-assets_path: Path of assets based on the chunks given
Outputs:
-Euler:A flattened version of the assets_path matrix
"""
#Executes the get_lifetime_decisions function defined above.
c_path, assets_path = get_lifetime_decisions(params, c_1, wpath_chunk, rpath_chunk, epath_chunk, starting_assets, current_s)
if np.any(c_path<0):
return np.ones(c_path.shape[0])*9999. #Punish infeasible consumption with large Euler errors
Euler = np.ravel(assets_path[:,-1])
return Euler
def get_cons_assets_matrix(params, wpath, rpath, starting_assets):
I, S, T, T_1, beta, sigma, delta, e, StartFertilityAge, StartDyingAge, N_matrix, MortalityRates = params
#Initializes timepath variables
c_timepath = np.zeros((I,S,S+T+1))
a_timepath = np.zeros((I, S+1, S+T+1)) #I,S+1,S+T+1
a_timepath[:,:,0]=starting_assets
bq_timepath = np.zeros((I, S+1, S+T+1)) #Is this too big?
c_timepath[:,S-1,0] = wpath[:,0]*e[:,S-1,0] + (1 + rpath[0] - delta)*a_timepath[:,S-1,0]
#Fills the upper triangle
for s in range(S-2,-1, -1):
agent_assets = starting_assets[:,s]
#We are only doing this for all generations alive in time t=0
t = 0
#We are iterating through each generation in time t=0
current_s = s
#Uses the previous generation's consumption at age s to get the value for our guess
c_guess = c_timepath[:,s+1,t]/((beta*(1+rpath[t]-delta))**(1/sigma))
#Gets optimal initial consumption beginning in the current age of the agent using chunks of w and r that span the lifetime of the given generation
household_params = (I, S, beta, sigma, delta)
opt_consump = opt.fsolve(find_optimal_starting_consumptions, c_guess, args = \
(wpath[:,t:t+S], rpath[t:t+S], e[:,0,t:t+S],agent_assets, current_s, household_params))
#Gets optimal timepaths beginning initial consumption and starting assets
cpath_indiv, apath_indiv = get_lifetime_decisions\
(household_params, opt_consump, wpath[:,t:t+S], rpath[t:t+S], e[:,0,t:t+S], agent_assets, current_s)
for i in xrange(I):
np.fill_diagonal(c_timepath[i,s:,:], cpath_indiv[i,:])
np.fill_diagonal(a_timepath[i,s:,:], apath_indiv[i,:])
bq_params = (I, S, T, StartFertilityAge, StartDyingAge, N_matrix[:,StartDyingAge:,s], N_matrix[:,StartFertilityAge:StartDyingAge+1,s], MortalityRates[:,StartDyingAge:,s])
bq_timepath[:,:,S-s-2] = getBequests(bq_params, a_timepath[:,StartDyingAge:,S-s-2])
#print np.round(cpath_indiv[0,:], decimals=3), opt_consump[0]
#print np.round(np.transpose(c_timepath[0,:,:T_1-s+3]), decimals=3)
#print np.round(starting_assets[0,:], decimals=3)
#print np.round(assetpath_indiv[0,:], decimals=3), agent_assets[0]
#print np.round(np.transpose(a_timepath[0,:,:T_1]), decimals=3)
#Fills everything except for the upper triangle
for t in xrange(1,T):
current_s = 0 #This is always zero because this section deals with people who haven't been born yet in time T=0
agent_assets = np.zeros((I))
#Uses the previous generation's consumption at age s to get the value for our guess
c_guess = c_timepath[:,s+1,t]/((beta*(1+rpath[t+1]-delta))**(1/sigma))
optimalconsumption = opt.fsolve(find_optimal_starting_consumptions, c_guess, args = \
(wpath[:,t:t+S], rpath[t:t+S], e[:,0,t:t+S], agent_assets, current_s, household_params))
cpath_indiv, assetpath_indiv = get_lifetime_decisions\
(household_params, optimalconsumption, wpath[:,t:t+S], rpath[t:t+S], e[:,0,t:t+S], agent_assets, current_s)
for i in range(I):
np.fill_diagonal(c_timepath[i,:,t:], cpath_indiv[i,:])
np.fill_diagonal(a_timepath[i,:,t:], assetpath_indiv[i,:])
if t >= T_1:
temp_t = T_1
else:
temp_t = t
bq_params = (I, S, T, StartFertilityAge, StartDyingAge, N_matrix[:,StartDyingAge:,temp_t+S-2], N_matrix[:,StartFertilityAge:StartDyingAge+1,temp_t+S-2], MortalityRates[:,StartDyingAge:,temp_t+S-2])
bq_timepath[:,:,t+S-2] = getBequests(bq_params, a_timepath[:,StartDyingAge:,temp_t+S-2])
#bq_timepath[:,:,t+S-2] = getBequests(a_timepath[:,:,t+S-2], t+S-2)
return c_timepath, a_timepath
def get_wpathnew_rpathnew(params, wpath, rpath, starting_assets, kd_ss, kf_ss, w_ss, r_ss):
"""
Description:
Takes initial paths of wages and rental rates, gives the consumption path and the wage and rental paths that are implied by that consumption path.
Inputs:
-w_path0[I, S+T+1]: initial w path
-r_path0[I, S+T+1]: initial r path
Objects in Function:
Note that these vary in dimension depending on the loop.
-current_s: The age of the cohort at time 0
-opt_consump: Solved for consumption
-starting_assets: Initial assets for the cohorts.
-cpath_indiv: The small chunk of cpath.
-assetpath_indiv: The small chunk of assetpath_indiv
-optimalconsumption: Solved from the chunks
-c_timepath: Overall consumption path
-a_timepath: Overall assets timepath
-kfpath: Foreign held domestic capital
-agent assets: Assets held by individuals.
Outputs:
-w_path1[I,S+T+1]: calculated w path
-r_path1[I,S+T+1]: calculated r path
-CPath[I,S+T+1]: Calculated aggregate consumption path for each country
-Kpath[I,S+T+1]: Calculated capital stock path.
-Ypath1[I, S+T+1]: timepath of assets implied from initial guess
"""
I, S, T, T_1, beta, sigma, delta, alpha, e, A, StartFertilityAge, StartDyingAge, N_matrix, MortalityRates = params
ca_params = (I, S, T, T_1, beta, sigma, delta, e, StartFertilityAge, StartDyingAge, N_matrix, MortalityRates)
c_timepath, a_timepath = get_cons_assets_matrix(ca_params, wpath, rpath, starting_assets)
#Calculates the total amount of capital in each country
Kpath=np.sum(a_timepath,axis=1)
#Calculates Aggregate Consumption
Cpath=np.sum(c_timepath,axis=1)
#After time period T, the total capital stock and total consumption is forced to be the steady state
Kpath[:,T:] = np.einsum("i,t->it", kd_ss+kf_ss, np.ones(S+1))
Cpath[:,T:] = np.einsum("i,t->it", Cpath[:,T-1], np.ones(S+1))
#Gets the foriegned owned capital
kf_params = (I, S, T, alpha, e, A)
kfpath = get_foreignK_path(kf_params, Kpath, rpath, kf_ss)
#Based on the overall capital path and the foreign owned capital path, we get new w and r paths.
kdpath = Kpath - kfpath
npath = get_n(e)
Yparams = (alpha, A)
Ypath = get_Y(Yparams, kdpath, npath)
rpath_new = get_r(alpha, Ypath[0], kdpath[0])
wpath_new = get_w(alpha, Ypath, npath)
#Checks to see if any of the timepaths have negative values
check_feasible(Kpath, Ypath, wpath, rpath, c_timepath)
return wpath_new, rpath_new, Cpath, Kpath, Ypath
def get_Timepath(params, wstart, rstart, assets_init, kd_ss, kf_ss, w_ss, r_ss):
I, S, T, T_1, beta, sigma, delta, alpha, e, A, StartFertilityAge, StartDyingAge, N_matrix, MortalityRates, distance, diff, xi, MaxIters = params
Iter=1 #Serves as the iteration counter
wr_params = (I, S, T, T_1, beta, sigma, delta, alpha, e, A, StartFertilityAge, StartDyingAge, N_matrix, MortalityRates)
while distance>diff and Iter<MaxIters: #The timepath iteration runs until the distance gets below a threshold or the iterations hit the maximum
wpath_new, rpath_new, Cpath_new, Kpath_new, Ypath_new = \
get_wpathnew_rpathnew(wr_params, wstart, rstart, assets_init, kd_ss, kf_ss, w_ss, r_ss)
dist1=sp.linalg.norm(wstart-wpath_new,2) #Norm of the wage path
dist2=sp.linalg.norm(rstart-rpath_new,2) #Norm of the interest rate path
distance=max([dist1,dist2]) #We take the maximum of the two norms to get the distance
#print "Iteration:",Iter,", Norm Distance: ", distance#, "Euler Error, ", EError
Iter+=1 #Updates the iteration counter
if distance<diff or Iter==MaxIters: #When the distance gets below the tolerance or the maximum of iterations is hit, then the TPI finishes.
wend=wpath_new
rend=rpath_new
Cend=Cpath_new
Kend=Kpath_new
Yend=Ypath_new
if Iter==MaxIters: #In case it never gets below the tolerance, it will throw this warning and give the last timepath.
print "Doesn't converge within the maximum number of iterations"
print "Providing the last iteration"
wstart=wstart*xi+(1-xi)*wpath_new #Convex combination of the wage path
rstart=rstart*xi+(1-xi)*rpath_new #Convex combination of the interest rate path
return wend, rend, Cend, Kend, Yend
def CountryLabel(Country): #Activated by line 28
'''
Description:
Converts the generic country label used in the graphs to a proper name
Inputs:
-Country (String): This is simply the generic country label
Objects in Function:
-NONE
Outputs:
-Name (String): The proper name of the country which you decide. Make sure the number of country names lines
up with the number of countries, otherwise, the function will not proceed.
'''
#Each country is given a number
if Country=="Country 0":
Name="United States"
if Country=="Country 1":
Name="Europe"
if Country=="Country 2":
Name="China"
if Country=="Country 3":
Name="Japan"
if Country=="Country 4":
Name="Korea"
if Country=="Country 5":
Name="Russia"
if Country=="Country 6":
Name="India"
#Add More Country labels here
return Name
def plotTimepaths(I, S, T, wpath, rpath, cpath, kpath, Ypath, CountryNamesON):
for i in xrange(I): #Wages
label1='Country '+str(i)
if CountryNamesON==True:
label1=CountryLabel(label1)
plt.plot(np.arange(0,T),wpath[i,:T], label=label1)
plt.title("Time path for Wages")
plt.ylabel("Wages")
plt.xlabel("Time Period")
plt.legend(loc="upper right")
plt.show()
#Rental Rates
label1='Global Interest Rate'
plt.plot(np.arange(0,T),rpath[:T], label=label1)
plt.title("Time path for Rental Rates")
plt.ylabel("Rental Rates")
plt.xlabel("Time Period")
plt.legend(loc="upper right")
plt.show()
for i in xrange(I): #Aggregate Consumption
label1='Country '+str(i)
if CountryNamesON==True:
label1=CountryLabel(label1)
plt.plot(np.arange(0,S+T+1),cpath[i,:],label=label1)
plt.title("Time Path for Aggregate Consumption")
plt.ylabel("Consumption Level")
plt.xlabel("Time Period")
plt.legend(loc="upper right")
plt.show()
for i in xrange(I): #Aggregate Capital Stock
label1='Country '+str(i)
if CountryNamesON==True:
label1=CountryLabel(label1)
plt.plot(np.arange(0,T),kpath[i,:T],label=label1)
plt.title("Time path for Capital Path")
plt.ylabel("Capital Stock level")
plt.xlabel("Time Period")
plt.legend(loc="upper right")
plt.show()
for i in xrange(I):
label1='Country '+str(i)
if CountryNamesON==True:
label1=CountryLabel(label1)
plt.plot(np.arange(0,T),Ypath[i,:T],label=label1)
plt.title("Time path for Output")
plt.ylabel("Output Stock level")
plt.xlabel("Time Period")
plt.legend(loc="upper right")
plt.show()
| mit |
LiaoPan/scikit-learn | sklearn/neighbors/tests/test_ball_tree.py | 129 | 10192 | import pickle
import numpy as np
from numpy.testing import assert_array_almost_equal
from sklearn.neighbors.ball_tree import (BallTree, NeighborsHeap,
simultaneous_sort, kernel_norm,
nodeheap_sort, DTYPE, ITYPE)
from sklearn.neighbors.dist_metrics import DistanceMetric
from sklearn.utils.testing import SkipTest, assert_allclose
rng = np.random.RandomState(10)
V = rng.rand(3, 3)
V = np.dot(V, V.T)
DIMENSION = 3
METRICS = {'euclidean': {},
'manhattan': {},
'minkowski': dict(p=3),
'chebyshev': {},
'seuclidean': dict(V=np.random.random(DIMENSION)),
'wminkowski': dict(p=3, w=np.random.random(DIMENSION)),
'mahalanobis': dict(V=V)}
DISCRETE_METRICS = ['hamming',
'canberra',
'braycurtis']
BOOLEAN_METRICS = ['matching', 'jaccard', 'dice', 'kulsinski',
'rogerstanimoto', 'russellrao', 'sokalmichener',
'sokalsneath']
def dist_func(x1, x2, p):
return np.sum((x1 - x2) ** p) ** (1. / p)
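# dist_func is simply the Minkowski (p-norm) distance written as a plain Python
# callable; it is used below (see test_ball_tree_pickle) to check that BallTree
# also works with a user-supplied metric function.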
def brute_force_neighbors(X, Y, k, metric, **kwargs):
D = DistanceMetric.get_metric(metric, **kwargs).pairwise(Y, X)
ind = np.argsort(D, axis=1)[:, :k]
dist = D[np.arange(Y.shape[0])[:, None], ind]
return dist, ind
def test_ball_tree_query():
np.random.seed(0)
X = np.random.random((40, DIMENSION))
Y = np.random.random((10, DIMENSION))
def check_neighbors(dualtree, breadth_first, k, metric, kwargs):
bt = BallTree(X, leaf_size=1, metric=metric, **kwargs)
dist1, ind1 = bt.query(Y, k, dualtree=dualtree,
breadth_first=breadth_first)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric, **kwargs)
# don't check indices here: if there are any duplicate distances,
# the indices may not match. Distances should not have this problem.
assert_array_almost_equal(dist1, dist2)
for (metric, kwargs) in METRICS.items():
for k in (1, 3, 5):
for dualtree in (True, False):
for breadth_first in (True, False):
yield (check_neighbors,
dualtree, breadth_first,
k, metric, kwargs)
def test_ball_tree_query_boolean_metrics():
np.random.seed(0)
X = np.random.random((40, 10)).round(0)
Y = np.random.random((10, 10)).round(0)
k = 5
def check_neighbors(metric):
bt = BallTree(X, leaf_size=1, metric=metric)
dist1, ind1 = bt.query(Y, k)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric)
assert_array_almost_equal(dist1, dist2)
for metric in BOOLEAN_METRICS:
yield check_neighbors, metric
def test_ball_tree_query_discrete_metrics():
np.random.seed(0)
X = (4 * np.random.random((40, 10))).round(0)
Y = (4 * np.random.random((10, 10))).round(0)
k = 5
def check_neighbors(metric):
bt = BallTree(X, leaf_size=1, metric=metric)
dist1, ind1 = bt.query(Y, k)
dist2, ind2 = brute_force_neighbors(X, Y, k, metric)
assert_array_almost_equal(dist1, dist2)
for metric in DISCRETE_METRICS:
yield check_neighbors, metric
def test_ball_tree_query_radius(n_samples=100, n_features=10):
np.random.seed(0)
X = 2 * np.random.random(size=(n_samples, n_features)) - 1
query_pt = np.zeros(n_features, dtype=float)
eps = 1E-15 # roundoff error can cause test to fail
bt = BallTree(X, leaf_size=5)
rad = np.sqrt(((X - query_pt) ** 2).sum(1))
for r in np.linspace(rad[0], rad[-1], 100):
ind = bt.query_radius(query_pt, r + eps)[0]
i = np.where(rad <= r + eps)[0]
ind.sort()
i.sort()
assert_array_almost_equal(i, ind)
def test_ball_tree_query_radius_distance(n_samples=100, n_features=10):
np.random.seed(0)
X = 2 * np.random.random(size=(n_samples, n_features)) - 1
query_pt = np.zeros(n_features, dtype=float)
eps = 1E-15 # roundoff error can cause test to fail
bt = BallTree(X, leaf_size=5)
rad = np.sqrt(((X - query_pt) ** 2).sum(1))
for r in np.linspace(rad[0], rad[-1], 100):
ind, dist = bt.query_radius(query_pt, r + eps, return_distance=True)
ind = ind[0]
dist = dist[0]
d = np.sqrt(((query_pt - X[ind]) ** 2).sum(1))
assert_array_almost_equal(d, dist)
def compute_kernel_slow(Y, X, kernel, h):
d = np.sqrt(((Y[:, None, :] - X) ** 2).sum(-1))
norm = kernel_norm(h, X.shape[1], kernel)
if kernel == 'gaussian':
return norm * np.exp(-0.5 * (d * d) / (h * h)).sum(-1)
elif kernel == 'tophat':
return norm * (d < h).sum(-1)
elif kernel == 'epanechnikov':
return norm * ((1.0 - (d * d) / (h * h)) * (d < h)).sum(-1)
elif kernel == 'exponential':
return norm * (np.exp(-d / h)).sum(-1)
elif kernel == 'linear':
return norm * ((1 - d / h) * (d < h)).sum(-1)
elif kernel == 'cosine':
return norm * (np.cos(0.5 * np.pi * d / h) * (d < h)).sum(-1)
else:
raise ValueError('kernel not recognized')
def test_ball_tree_kde(n_samples=100, n_features=3):
np.random.seed(0)
X = np.random.random((n_samples, n_features))
Y = np.random.random((n_samples, n_features))
bt = BallTree(X, leaf_size=10)
for kernel in ['gaussian', 'tophat', 'epanechnikov',
'exponential', 'linear', 'cosine']:
for h in [0.01, 0.1, 1]:
dens_true = compute_kernel_slow(Y, X, kernel, h)
def check_results(kernel, h, atol, rtol, breadth_first):
dens = bt.kernel_density(Y, h, atol=atol, rtol=rtol,
kernel=kernel,
breadth_first=breadth_first)
assert_allclose(dens, dens_true,
atol=atol, rtol=max(rtol, 1e-7))
for rtol in [0, 1E-5]:
for atol in [1E-6, 1E-2]:
for breadth_first in (True, False):
yield (check_results, kernel, h, atol, rtol,
breadth_first)
def test_gaussian_kde(n_samples=1000):
# Compare gaussian KDE results to scipy.stats.gaussian_kde
from scipy.stats import gaussian_kde
np.random.seed(0)
x_in = np.random.normal(0, 1, n_samples)
x_out = np.linspace(-5, 5, 30)
for h in [0.01, 0.1, 1]:
bt = BallTree(x_in[:, None])
try:
gkde = gaussian_kde(x_in, bw_method=h / np.std(x_in))
except TypeError:
raise SkipTest("Old version of scipy, doesn't accept "
"explicit bandwidth.")
dens_bt = bt.kernel_density(x_out[:, None], h) / n_samples
dens_gkde = gkde.evaluate(x_out)
assert_array_almost_equal(dens_bt, dens_gkde, decimal=3)
def test_ball_tree_two_point(n_samples=100, n_features=3):
np.random.seed(0)
X = np.random.random((n_samples, n_features))
Y = np.random.random((n_samples, n_features))
r = np.linspace(0, 1, 10)
bt = BallTree(X, leaf_size=10)
D = DistanceMetric.get_metric("euclidean").pairwise(Y, X)
counts_true = [(D <= ri).sum() for ri in r]
def check_two_point(r, dualtree):
counts = bt.two_point_correlation(Y, r=r, dualtree=dualtree)
assert_array_almost_equal(counts, counts_true)
for dualtree in (True, False):
yield check_two_point, r, dualtree
def test_ball_tree_pickle():
np.random.seed(0)
X = np.random.random((10, 3))
bt1 = BallTree(X, leaf_size=1)
# Test if BallTree with callable metric is picklable
bt1_pyfunc = BallTree(X, metric=dist_func, leaf_size=1, p=2)
ind1, dist1 = bt1.query(X)
ind1_pyfunc, dist1_pyfunc = bt1_pyfunc.query(X)
def check_pickle_protocol(protocol):
s = pickle.dumps(bt1, protocol=protocol)
bt2 = pickle.loads(s)
s_pyfunc = pickle.dumps(bt1_pyfunc, protocol=protocol)
bt2_pyfunc = pickle.loads(s_pyfunc)
ind2, dist2 = bt2.query(X)
ind2_pyfunc, dist2_pyfunc = bt2_pyfunc.query(X)
assert_array_almost_equal(ind1, ind2)
assert_array_almost_equal(dist1, dist2)
assert_array_almost_equal(ind1_pyfunc, ind2_pyfunc)
assert_array_almost_equal(dist1_pyfunc, dist2_pyfunc)
for protocol in (0, 1, 2):
yield check_pickle_protocol, protocol
def test_neighbors_heap(n_pts=5, n_nbrs=10):
heap = NeighborsHeap(n_pts, n_nbrs)
for row in range(n_pts):
d_in = np.random.random(2 * n_nbrs).astype(DTYPE)
i_in = np.arange(2 * n_nbrs, dtype=ITYPE)
for d, i in zip(d_in, i_in):
heap.push(row, d, i)
ind = np.argsort(d_in)
d_in = d_in[ind]
i_in = i_in[ind]
d_heap, i_heap = heap.get_arrays(sort=True)
assert_array_almost_equal(d_in[:n_nbrs], d_heap[row])
assert_array_almost_equal(i_in[:n_nbrs], i_heap[row])
def test_node_heap(n_nodes=50):
vals = np.random.random(n_nodes).astype(DTYPE)
i1 = np.argsort(vals)
vals2, i2 = nodeheap_sort(vals)
assert_array_almost_equal(i1, i2)
assert_array_almost_equal(vals[i1], vals2)
def test_simultaneous_sort(n_rows=10, n_pts=201):
dist = np.random.random((n_rows, n_pts)).astype(DTYPE)
ind = (np.arange(n_pts) + np.zeros((n_rows, 1))).astype(ITYPE)
dist2 = dist.copy()
ind2 = ind.copy()
# simultaneous sort rows using function
simultaneous_sort(dist, ind)
# simultaneous sort rows using numpy
i = np.argsort(dist2, axis=1)
row_ind = np.arange(n_rows)[:, None]
dist2 = dist2[row_ind, i]
ind2 = ind2[row_ind, i]
assert_array_almost_equal(dist, dist2)
assert_array_almost_equal(ind, ind2)
def test_query_haversine():
np.random.seed(0)
X = 2 * np.pi * np.random.random((40, 2))
bt = BallTree(X, leaf_size=1, metric='haversine')
dist1, ind1 = bt.query(X, k=5)
dist2, ind2 = brute_force_neighbors(X, X, k=5, metric='haversine')
assert_array_almost_equal(dist1, dist2)
assert_array_almost_equal(ind1, ind2)
| bsd-3-clause |
wolfiex/DSMACC-testing | dsmacc/examples/lifetime_ae.py | 1 | 4170 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import keras
from keras.models import Sequential, Model
from keras.layers import Dense,Embedding
from keras.optimizers import Adam
import numpy as np
import pandas as pd
np.warnings.filterwarnings('ignore')
import h5py,re,os,sys
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib
from scipy.spatial import Delaunay
import matplotlib.tri as mtri
import tensorflow as tf
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
tf.logging.set_verbosity(tf.logging.ERROR)
def normalized(a, axis=-1, order=2):
l2 = np.atleast_1d(np.linalg.norm(a, order, axis))
l2[l2==0] = 1
return a / np.expand_dims(l2, axis)
# Custom activation function
from keras.layers import Activation
from keras import backend as K
from keras.utils.generic_utils import get_custom_objects
def swish(x):
return (K.sigmoid(x) * x)
get_custom_objects().update({'swish': Activation(swish)})
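# swish(x) = x * sigmoid(x); registering it with get_custom_objects allows the
# string name to be used directly in layer definitions below, e.g. (illustrative)
#   Dense(64, activation='swish')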
class rateae:
def __init__(
self,
indim,
l_big = 256,
l_mid = 64,
l_lat = 2,
epochs= 5,
activation = 'swish', #elu
activation1 = 'swish',
activationlast='softmax',
emb_size = 2
):
m = Sequential()
m.add(Dense(l_big, activation=activation, input_shape=(indim,)))
m.add(Dense(l_mid, activation=activation1))
m.add(Dense(l_lat, activation=activationlast, name="bottleneck"))
#m.add(Embedding(input_dim=indim,output_dim=emb_size, input_length=2, name="embedding"))
m.add(Dense(l_mid, activation=activation1))
m.add(Dense(l_big, activation=activation))
m.add(Dense(indim, activation=activationlast))
m.compile(loss='mean_squared_error', optimizer = Adam())
self.m = m
self.indim = indim
self.epochs=epochs
def train(self,x_train):
assert x_train.shape[1] == self.indim
self.hist = self.m.fit(x_train, x_train, batch_size=self.indim, epochs=self.epochs, verbose=0,
validation_data=(x_train, x_train))
self.encoder = Model(self.m.input, self.m.get_layer('bottleneck').output)
def predict(self,x_train):
self.predict = self.encoder.predict(x_train) # bottleneck representation
self.reconstruct = self.m.predict(x_train) # reconstruction
def plot(self,x_train):
self.predict(x_train)
plt.title('Autoencoder')
plt.scatter(self.predict[:,0], self.predict[:,1], c='blue', s=8, cmap='tab20')
plt.gca().get_xaxis().set_ticklabels([])
plt.gca().get_yaxis().set_ticklabels([])
plt.tight_layout()
plt.show()
#autoencoder.save_weights('autoencoder.h5')
def norm(Y):
n = Y.min()
x = Y.max()
Y -= n
return Y/(n+x)
rest = []
with h5py.File(sys.argv[1],'r') as hf:
print('start')
groups = list(filter(lambda x: type(x[1])==h5py._hl.group.Group, hf.items()))
test = 5#len(groups)-2
for gid in range(len(groups)):
g = groups[gid][1]
qhead = g.attrs['jacsphead'].decode("utf-8").split(',')
rate = pd.DataFrame(g.get('jacsp')[:,:],columns=qhead)
if gid == 0:
rate.drop('TIME', axis = 1, inplace = True)
rhead = filter(lambda x: len(set(x.split('->')))==1,rate.columns)
ae = rateae(len(rhead))#len(rhead))
#normalise
life = np.nan_to_num(np.log10(abs(rate[rhead])))
if gid<test:
for r in life:
ae.train(norm(life))
print ( gid/test )
else:
xt = np.zeros((len(rhead),1))
X_train_ae = ae.encoder.predict(life)
rest.append(X_train_ae)
np.save('autolife.npy',dict(zip(rhead,X_train_ae)))
plt.scatter(x = X_train_ae[:,0],y = X_train_ae[:,1],c=life.mean(axis=1),s=.5)
plt.show()
if gid>test+3: break
dfs= fdsa
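# NOTE: the undefined name on the line above looks like a deliberate guard that
# raises NameError so the script halts here; the PCA comparison below is
# therefore never reached when the file is run as-is.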
from sklearn.decomposition import PCA
pca = PCA(n_components=2,svd_solver='full')
X_pca = pca.fit_transform(life)
plt.scatter(X_pca[:,0],X_pca[:,1])
plt.show()
| gpl-3.0 |
iamkingmaker/trading-with-python | lib/backtest.py | 74 | 7381 | #-------------------------------------------------------------------------------
# Name: backtest
# Purpose: perform routine backtesting tasks.
# This module should be useable as a stand-alone library outide of the TWP package.
#
# Author: Jev Kuznetsov
#
# Created: 03/07/2014
# Copyright: (c) Jev Kuznetsov 2013
# Licence: BSD
#-------------------------------------------------------------------------------
import pandas as pd
import matplotlib.pyplot as plt
import sys
import numpy as np
def tradeBracket(price,entryBar,upper=None, lower=None, timeout=None):
'''
trade a bracket on price series, return price delta and exit bar #
Input
------
price : numpy array of price values
entryBar: entry bar number, *determines entry price*
upper : high stop
lower : low stop
timeout : max number of periods to hold
Returns exit price and number of bars held
'''
assert isinstance(price, np.ndarray) , 'price must be a numpy array'
# create list of exit indices and add max trade duration. Exits are relative to entry bar
if timeout: # set trade length to timeout or series length
exits = [min(timeout,len(price)-entryBar-1)]
else:
exits = [len(price)-entryBar-1]
p = price[entryBar:entryBar+exits[0]+1] # subseries of price
# extend exits list with conditional exits
# check upper bracket
if upper:
assert upper>p[0] , 'Upper bracket must be higher than entry price '
idx = np.where(p>upper)[0] # find where price is higher than the upper bracket
if idx.any():
exits.append(idx[0]) # append first occurence
# same for lower bracket
if lower:
assert lower<p[0] , 'Lower bracket must be lower than entry price '
idx = np.where(p<lower)[0]
if idx.any():
exits.append(idx[0])
exitBar = min(exits) # choose first exit
return p[exitBar], exitBar
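# Hedged usage sketch (synthetic prices, not taken from the original examples):
# price = np.array([10., 10.5, 11.2, 9.4, 9.0])
# exit_price, bars_held = tradeBracket(price, entryBar=0, upper=11.0,
#                                      lower=9.5, timeout=4)
# The upper bracket is hit first at bar 2, so exit_price == 11.2 and bars_held == 2.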
class Backtest(object):
"""
Backtest class, simple vectorized one. Works with pandas objects.
"""
def __init__(self,price, signal, signalType='capital',initialCash = 0, roundShares=True):
"""
Arguments:
*price* Series with instrument price.
*signal* Series with capital to invest (long+,short-) or number of shares.
*signalType* capital to bet or number of shares; 'capital' mode is default.
*initialCash* starting cash.
*roundShares* round off number of shares to integers
"""
#TODO: add auto rebalancing
# check for correct input
assert signalType in ['capital','shares'], "Wrong signal type provided, must be 'capital' or 'shares'"
#save internal settings to a dict
self.settings = {'signalType':signalType}
# first thing to do is to clean up the signal, removing nans and duplicate entries or exits
self.signal = signal.ffill().fillna(0)
# now find dates with a trade
tradeIdx = self.signal.diff().fillna(0) !=0 # days with trades are set to True
if signalType == 'shares':
self.trades = self.signal[tradeIdx] # selected rows where tradeDir changes value. trades are in Shares
elif signalType =='capital':
self.trades = (self.signal[tradeIdx]/price[tradeIdx])
if roundShares:
self.trades = self.trades.round()
# now create internal data structure
self.data = pd.DataFrame(index=price.index , columns = ['price','shares','value','cash','pnl'])
self.data['price'] = price
self.data['shares'] = self.trades.reindex(self.data.index).ffill().fillna(0)
self.data['value'] = self.data['shares'] * self.data['price']
delta = self.data['shares'].diff() # shares bought sold
self.data['cash'] = (-delta*self.data['price']).fillna(0).cumsum()+initialCash
self.data['pnl'] = self.data['cash']+self.data['value']-initialCash
@property
def sharpe(self):
''' return annualized sharpe ratio of the pnl '''
pnl = (self.data['pnl'].diff()).shift(-1)[self.data['shares']!=0] # use only days with position.
return sharpe(pnl) # need the diff here as sharpe works on daily returns.
@property
def pnl(self):
'''easy access to pnl data column '''
return self.data['pnl']
def plotTrades(self):
"""
visualise trades on the price chart
long entry : green triangle up
short entry : red triangle down
exit : black circle
"""
l = ['price']
p = self.data['price']
p.plot(style='x-')
# ---plot markers
        # this works, but I'd rather have colored markers for each day in a position than entry/exit signals
# indices = {'g^': self.trades[self.trades > 0].index ,
# 'ko':self.trades[self.trades == 0].index,
# 'rv':self.trades[self.trades < 0].index}
#
#
# for style, idx in indices.iteritems():
# if len(idx) > 0:
# p[idx].plot(style=style)
# --- plot trades
        # green markers for days with a long position
idx = (self.data['shares'] > 0) | (self.data['shares'] > 0).shift(1)
if idx.any():
p[idx].plot(style='go')
l.append('long')
        # red markers for days with a short position
idx = (self.data['shares'] < 0) | (self.data['shares'] < 0).shift(1)
if idx.any():
p[idx].plot(style='ro')
l.append('short')
plt.xlim([p.index[0],p.index[-1]]) # show full axis
plt.legend(l,loc='best')
plt.title('trades')
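# A minimal usage sketch (illustrative only, not part of the original library):
# run the vectorized backtest on a toy price series with one long trade
# expressed in shares.
def _example_backtest():
    idx = pd.date_range('2014-01-01', periods=5)
    price = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5], index=idx)
    signal = pd.Series([np.nan, 100, 100, 100, 0], index=idx)  # buy 100 shares, flat on the last bar
    bt = Backtest(price, signal, signalType='shares')
    return bt.data['pnl']  # cumulative pnl per bar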
class ProgressBar:
def __init__(self, iterations):
self.iterations = iterations
self.prog_bar = '[]'
self.fill_char = '*'
self.width = 50
self.__update_amount(0)
def animate(self, iteration):
print '\r',self,
sys.stdout.flush()
self.update_iteration(iteration + 1)
def update_iteration(self, elapsed_iter):
self.__update_amount((elapsed_iter / float(self.iterations)) * 100.0)
self.prog_bar += ' %d of %s complete' % (elapsed_iter, self.iterations)
def __update_amount(self, new_amount):
percent_done = int(round((new_amount / 100.0) * 100.0))
all_full = self.width - 2
num_hashes = int(round((percent_done / 100.0) * all_full))
self.prog_bar = '[' + self.fill_char * num_hashes + ' ' * (all_full - num_hashes) + ']'
pct_place = (len(self.prog_bar) // 2) - len(str(percent_done))
pct_string = '%d%%' % percent_done
self.prog_bar = self.prog_bar[0:pct_place] + \
(pct_string + self.prog_bar[pct_place + len(pct_string):])
def __str__(self):
return str(self.prog_bar)
def sharpe(pnl):
return np.sqrt(250)*pnl.mean()/pnl.std()
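# Example (illustrative): for a daily pnl series, e.g. pnl = Backtest(...).pnl.diff(),
# the annualized Sharpe ratio is simply sharpe(pnl); the sqrt(250) factor assumes
# roughly 250 trading days per year.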
| bsd-3-clause |
Windy-Ground/scikit-learn | examples/ensemble/plot_gradient_boosting_quantile.py | 392 | 2114 | """
=====================================================
Prediction Intervals for Gradient Boosting Regression
=====================================================
This example shows how quantile regression can be used
to create prediction intervals.
"""
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor
np.random.seed(1)
def f(x):
"""The function to predict."""
return x * np.sin(x)
#----------------------------------------------------------------------
# Generate noisy observations of the target function
X = np.atleast_2d(np.random.uniform(0, 10.0, size=100)).T
X = X.astype(np.float32)
# Observations
y = f(X).ravel()
dy = 1.5 + 1.0 * np.random.random(y.shape)
noise = np.random.normal(0, dy)
y += noise
y = y.astype(np.float32)
# Mesh the input space for evaluation of the real function and of the
# predictions
xx = np.atleast_2d(np.linspace(0, 10, 1000)).T
xx = xx.astype(np.float32)
alpha = 0.95
clf = GradientBoostingRegressor(loss='quantile', alpha=alpha,
n_estimators=250, max_depth=3,
learning_rate=.1, min_samples_leaf=9,
min_samples_split=9)
clf.fit(X, y)
# Make the prediction on the meshed x-axis
y_upper = clf.predict(xx)
clf.set_params(alpha=1.0 - alpha)
clf.fit(X, y)
# Make the prediction on the meshed x-axis
y_lower = clf.predict(xx)
clf.set_params(loss='ls')
clf.fit(X, y)
# Make the prediction on the meshed x-axis
y_pred = clf.predict(xx)
# Plot the function, the prediction and the 90% prediction interval derived
# from the two quantile models
fig = plt.figure()
plt.plot(xx, f(xx), 'g:', label=u'$f(x) = x\,\sin(x)$')
plt.plot(X, y, 'b.', markersize=10, label=u'Observations')
plt.plot(xx, y_pred, 'r-', label=u'Prediction')
plt.plot(xx, y_upper, 'k-')
plt.plot(xx, y_lower, 'k-')
plt.fill(np.concatenate([xx, xx[::-1]]),
np.concatenate([y_upper, y_lower[::-1]]),
alpha=.5, fc='b', ec='None', label='90% prediction interval')
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.ylim(-10, 20)
plt.legend(loc='upper left')
plt.show()
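# A small optional check (not part of the original example): report how wide
# the estimated 90% interval is across the evaluation grid.
interval_width = y_upper - y_lower
print("mean interval width: %.2f, max interval width: %.2f"
      % (interval_width.mean(), interval_width.max()))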
| bsd-3-clause |
nvoron23/scikit-learn | examples/cluster/plot_kmeans_stability_low_dim_dense.py | 338 | 4324 | """
============================================================
Empirical evaluation of the impact of k-means initialization
============================================================
Evaluate the ability of k-means initialization strategies to make
the algorithm convergence robust, as measured by the relative standard
deviation of the inertia of the clustering (i.e. the sum of squared
distances to the nearest cluster center).
The first plot shows the best inertia reached for each combination
of the model (``KMeans`` or ``MiniBatchKMeans``) and the init method
(``init="random"`` or ``init="kmeans++"``) for increasing values of the
``n_init`` parameter that controls the number of initializations.
The second plot demonstrates a single run of the ``MiniBatchKMeans``
estimator using ``init="random"`` and ``n_init=1``. This run leads to
a bad convergence (local optimum), with estimated centers stuck
between ground truth clusters.
The dataset used for evaluation is a 2D grid of isotropic Gaussian
clusters widely spaced.
"""
print(__doc__)
# Author: Olivier Grisel <[email protected]>
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from sklearn.utils import shuffle
from sklearn.utils import check_random_state
from sklearn.cluster import MiniBatchKMeans
from sklearn.cluster import KMeans
random_state = np.random.RandomState(0)
# Number of runs (with randomly generated datasets) for each strategy so as
# to be able to compute an estimate of the standard deviation
n_runs = 5
# k-means models can do several random inits so as to be able to trade
# CPU time for convergence robustness
n_init_range = np.array([1, 5, 10, 15, 20])
# Datasets generation parameters
n_samples_per_center = 100
grid_size = 3
scale = 0.1
n_clusters = grid_size ** 2
def make_data(random_state, n_samples_per_center, grid_size, scale):
random_state = check_random_state(random_state)
centers = np.array([[i, j]
for i in range(grid_size)
for j in range(grid_size)])
n_clusters_true, n_features = centers.shape
noise = random_state.normal(
scale=scale, size=(n_samples_per_center, centers.shape[1]))
X = np.concatenate([c + noise for c in centers])
y = np.concatenate([[i] * n_samples_per_center
for i in range(n_clusters_true)])
return shuffle(X, y, random_state=random_state)
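# Example (illustrative): with the defaults defined above, make_data(0, 100, 3, 0.1)
# returns X of shape (n_samples_per_center * grid_size ** 2, 2) = (900, 2) and
# y with 9 distinct cluster labels.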
# Part 1: Quantitative evaluation of various init methods
fig = plt.figure()
plots = []
legends = []
cases = [
(KMeans, 'k-means++', {}),
(KMeans, 'random', {}),
(MiniBatchKMeans, 'k-means++', {'max_no_improvement': 3}),
(MiniBatchKMeans, 'random', {'max_no_improvement': 3, 'init_size': 500}),
]
for factory, init, params in cases:
print("Evaluation of %s with %s init" % (factory.__name__, init))
inertia = np.empty((len(n_init_range), n_runs))
for run_id in range(n_runs):
X, y = make_data(run_id, n_samples_per_center, grid_size, scale)
for i, n_init in enumerate(n_init_range):
km = factory(n_clusters=n_clusters, init=init, random_state=run_id,
n_init=n_init, **params).fit(X)
inertia[i, run_id] = km.inertia_
p = plt.errorbar(n_init_range, inertia.mean(axis=1), inertia.std(axis=1))
plots.append(p[0])
legends.append("%s with %s init" % (factory.__name__, init))
plt.xlabel('n_init')
plt.ylabel('inertia')
plt.legend(plots, legends)
plt.title("Mean inertia for various k-means init across %d runs" % n_runs)
# Part 2: Qualitative visual inspection of the convergence
X, y = make_data(random_state, n_samples_per_center, grid_size, scale)
km = MiniBatchKMeans(n_clusters=n_clusters, init='random', n_init=1,
random_state=random_state).fit(X)
fig = plt.figure()
for k in range(n_clusters):
my_members = km.labels_ == k
color = cm.spectral(float(k) / n_clusters, 1)
plt.plot(X[my_members, 0], X[my_members, 1], 'o', marker='.', c=color)
cluster_center = km.cluster_centers_[k]
plt.plot(cluster_center[0], cluster_center[1], 'o',
markerfacecolor=color, markeredgecolor='k', markersize=6)
plt.title("Example cluster allocation with a single random init\n"
"with MiniBatchKMeans")
plt.show()
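# A minimal follow-up sketch (not in the original example): the relative
# standard deviation of the inertia, the robustness measure described in the
# module docstring, for the last (factory, init) case evaluated above.
rel_std = inertia.std(axis=1) / inertia.mean(axis=1)
for n_init, rs in zip(n_init_range, rel_std):
    print("n_init=%2d relative std of inertia: %.4f" % (n_init, rs))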
| bsd-3-clause |
m4rx9/rna-pdb-tools | rna_tools/tools/rna_filter/rna_ec2x.py | 2 | 2806 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
"""rna_ex2x.py - analyze an evolutionary coupling file.
Files can be downloaded from https://marks.hms.harvard.edu/ev_rna/, e.g. RF00167.EC.interaction.csv
``--ec-pairs``::
    $ rna_ec2x.py RF00167.EC.interaction_LbyN.csv --ec-pairs
    [18, 78],[31, 39],[21, 75],[30, 40],[28, 42],[27, 43],[59, 67],[54, 72],[57, 69],[25, 45],[29, 41],[17, 79],[26, 44],[16, 80],[14, 82],[19, 77],[55, 71],
    [15, 81],[34, 63],[56, 70],[58, 68],[35, 63],[26, 45],[35, 64],[32, 39],[54, 73],[24, 74],[16, 82],[24, 45],[24, 43],[32, 36],[25, 48],[48, 82],[36, 48],
"""
from __future__ import print_function
import csv
import pandas
import sys
import argparse
def get_parser():
"""Get parser."""
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument("--sep", default=r",", help="separator")
parser.add_argument("--chain", default="A", help="chain")
# parser.add_argument("--offset", default=0)
parser.add_argument('interaction_fn', help='interaction file')
parser.add_argument('--ec-pairs', action='store_true')
parser.add_argument('--ss-pairs', help="file with secondary structure base pairs")
parser.add_argument('--pairs-delta', help="delta: ec-bp - ss-paris", action="store_true")
return parser
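# Example invocation (illustrative; file names are placeholders):
#   python rna_ec2x.py RF00167.EC.interaction_LbyN.csv --ec-pairs \
#       --ss-pairs ss_pairs.txt --pairs-delta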
if __name__ == '__main__':
parser = get_parser()
args = parser.parse_args()
df = pandas.read_csv(args.interaction_fn, sep=",")
pairs = []
for index, row in df.iterrows():
if str(row['pdb_resid1']) != 'nan':
            # ugly hack to turn 78.0 into 78
if not args.ec_pairs:
print('d:' + args.chain + str(row['pdb_resid1']).replace('.0', '') +
'-' + args.chain + str(row['pdb_resid2']).replace('.0', '') + ' < 100 1')
else:
pairs.append([int(row['pdb_resid1']), int(row['pdb_resid2'])])
pairs.sort()
# this also worked, but it was an ugly print
# print('[' + str(row['pdb_resid1']).replace('.0', '') + ', ' +
# str(row['pdb_resid2']).replace('.0', '') + ']', end=',')
if args.ec_pairs:
print('pairs:')
print(pairs)
if args.ss_pairs:
ssbps = eval(open(args.ss_pairs).read().strip())
ssbps.sort()
print('ssbps:')
print(ssbps)
if args.pairs_delta:
            # go over the ec pairs and keep those not present in the secondary structure
pairsnonss = []
for pair in pairs:
if pair in ssbps:
pass
else:
pairsnonss.append(pair)
            print('# of ec_pairs:', len(pairs))
            print('# of ssbps   :', len(ssbps))
            print('delta #      :', len(pairsnonss))
print(pairsnonss)
| mit |
billy-inn/scikit-learn | examples/cross_decomposition/plot_compare_cross_decomposition.py | 142 | 4761 | """
===================================
Compare cross decomposition methods
===================================
Simple usage of various cross decomposition algorithms:
- PLSCanonical
- PLSRegression, with multivariate response, a.k.a. PLS2
- PLSRegression, with univariate response, a.k.a. PLS1
- CCA
Given 2 multivariate covarying two-dimensional datasets, X, and Y,
PLS extracts the 'directions of covariance', i.e. the components of each
datasets that explain the most shared variance between both datasets.
This is apparent on the **scatterplot matrix** display: components 1 in
dataset X and dataset Y are maximally correlated (points lie around the
first diagonal). This is also true for components 2 in both datasets,
however, the correlation across datasets for different components is
weak: the point cloud is very spherical.
"""
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cross_decomposition import PLSCanonical, PLSRegression, CCA
###############################################################################
# Dataset based latent variables model
n = 500
# 2 latent vars:
l1 = np.random.normal(size=n)
l2 = np.random.normal(size=n)
latents = np.array([l1, l1, l2, l2]).T
X = latents + np.random.normal(size=4 * n).reshape((n, 4))
Y = latents + np.random.normal(size=4 * n).reshape((n, 4))
X_train = X[:n // 2]
Y_train = Y[:n // 2]
X_test = X[n // 2:]
Y_test = Y[n // 2:]
print("Corr(X)")
print(np.round(np.corrcoef(X.T), 2))
print("Corr(Y)")
print(np.round(np.corrcoef(Y.T), 2))
###############################################################################
# Canonical (symmetric) PLS
# Transform data
# ~~~~~~~~~~~~~~
plsca = PLSCanonical(n_components=2)
plsca.fit(X_train, Y_train)
X_train_r, Y_train_r = plsca.transform(X_train, Y_train)
X_test_r, Y_test_r = plsca.transform(X_test, Y_test)
# Scatter plot of scores
# ~~~~~~~~~~~~~~~~~~~~~~
# 1) On diagonal plot X vs Y scores on each components
plt.figure(figsize=(12, 8))
plt.subplot(221)
plt.plot(X_train_r[:, 0], Y_train_r[:, 0], "ob", label="train")
plt.plot(X_test_r[:, 0], Y_test_r[:, 0], "or", label="test")
plt.xlabel("x scores")
plt.ylabel("y scores")
plt.title('Comp. 1: X vs Y (test corr = %.2f)' %
np.corrcoef(X_test_r[:, 0], Y_test_r[:, 0])[0, 1])
plt.xticks(())
plt.yticks(())
plt.legend(loc="best")
plt.subplot(224)
plt.plot(X_train_r[:, 1], Y_train_r[:, 1], "ob", label="train")
plt.plot(X_test_r[:, 1], Y_test_r[:, 1], "or", label="test")
plt.xlabel("x scores")
plt.ylabel("y scores")
plt.title('Comp. 2: X vs Y (test corr = %.2f)' %
np.corrcoef(X_test_r[:, 1], Y_test_r[:, 1])[0, 1])
plt.xticks(())
plt.yticks(())
plt.legend(loc="best")
# 2) Off diagonal plot components 1 vs 2 for X and Y
plt.subplot(222)
plt.plot(X_train_r[:, 0], X_train_r[:, 1], "*b", label="train")
plt.plot(X_test_r[:, 0], X_test_r[:, 1], "*r", label="test")
plt.xlabel("X comp. 1")
plt.ylabel("X comp. 2")
plt.title('X comp. 1 vs X comp. 2 (test corr = %.2f)'
% np.corrcoef(X_test_r[:, 0], X_test_r[:, 1])[0, 1])
plt.legend(loc="best")
plt.xticks(())
plt.yticks(())
plt.subplot(223)
plt.plot(Y_train_r[:, 0], Y_train_r[:, 1], "*b", label="train")
plt.plot(Y_test_r[:, 0], Y_test_r[:, 1], "*r", label="test")
plt.xlabel("Y comp. 1")
plt.ylabel("Y comp. 2")
plt.title('Y comp. 1 vs Y comp. 2 , (test corr = %.2f)'
% np.corrcoef(Y_test_r[:, 0], Y_test_r[:, 1])[0, 1])
plt.legend(loc="best")
plt.xticks(())
plt.yticks(())
plt.show()
###############################################################################
# PLS regression, with multivariate response, a.k.a. PLS2
n = 1000
q = 3
p = 10
X = np.random.normal(size=n * p).reshape((n, p))
B = np.array([[1, 2] + [0] * (p - 2)] * q).T
# each Yj = 1*X1 + 2*X2 + noise
Y = np.dot(X, B) + np.random.normal(size=n * q).reshape((n, q)) + 5
pls2 = PLSRegression(n_components=3)
pls2.fit(X, Y)
print("True B (such that: Y = XB + Err)")
print(B)
# compare pls2.coefs with B
print("Estimated B")
print(np.round(pls2.coefs, 1))
pls2.predict(X)
###############################################################################
# PLS regression, with univariate response, a.k.a. PLS1
n = 1000
p = 10
X = np.random.normal(size=n * p).reshape((n, p))
y = X[:, 0] + 2 * X[:, 1] + np.random.normal(size=n * 1) + 5
pls1 = PLSRegression(n_components=3)
pls1.fit(X, y)
# note that the number of components exceeds 1 (the dimension of y)
print("Estimated betas")
print(np.round(pls1.coefs, 1))
###############################################################################
# CCA (PLS mode B with symmetric deflation)
cca = CCA(n_components=2)
cca.fit(X_train, Y_train)
X_train_r, Y_train_r = cca.transform(X_train, Y_train)
X_test_r, Y_test_r = cca.transform(X_test, Y_test)
| bsd-3-clause |
mssalvador/WorkflowCleaning | shared/ComputeDistances.py | 1 | 1530 | """
Created on May 15, 2017
@author: svanhmic
"""
def compute_distance(point, center):
"""
Computes the euclidean distance from a data point to the cluster center.
:param point: coordinates for given point
:param center: cluster center
:return: distance between point and center
"""
import numpy as np
from pyspark.ml.linalg import SparseVector
if isinstance(point, SparseVector) | isinstance(center, SparseVector):
p_d = point.toArray()
c_d = center.toArray()
return float(np.linalg.norm(p_d-c_d, ord=2))
else:
return float(np.linalg.norm(point - center, ord=2))
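# Example (illustrative): with Vectors from pyspark.ml.linalg,
#   compute_distance(Vectors.sparse(3, [0, 2], [1.0, 3.0]), Vectors.dense([1.0, 0.0, 0.0]))
# returns 3.0, the Euclidean distance between (1, 0, 3) and (1, 0, 0).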
def make_histogram(dist: list): # , dim):
"""
    Makes a histogram of the distances.
    :param dist: list of distances between points and their cluster center
    :return: None; displays the plot
"""
import seaborn as sns
import matplotlib.pyplot as plt
    # collapse the distances to a set to check that there is enough variation to plot
set_of_distances = set(dist)
fig = plt.figure()
if len(set_of_distances) > 1:
sns.distplot(
dist,
rug=True,
kde=True,
norm_hist=False,
# ax=ax)
);
fig.canvas.draw()
plt.show()
else:
print('Too few datapoints to show')
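# Example (illustrative, needs a matplotlib backend to display the figure):
#   make_histogram([0.1, 0.4, 0.4, 0.9, 1.3])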
def compute_percentage_dist(distance, max_distance):
"""
    :param distance: distance from a point to its cluster center
    :param max_distance: maximum distance observed
    :return: (max_distance - distance) / 100
"""
return float(max_distance-distance)/100
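# Example (illustrative): compute_percentage_dist(20.0, 100.0) returns 0.8,
# i.e. (100 - 20) / 100.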
| apache-2.0 |
neovintage/airflow | airflow/www/views.py | 1 | 76556 | import sys
import os
import socket
from functools import wraps
from datetime import datetime, timedelta
import dateutil.parser
import copy
from itertools import chain, product
from past.utils import old_div
from past.builtins import basestring
import inspect
import traceback
import sqlalchemy as sqla
from sqlalchemy import or_
from flask import redirect, url_for, request, Markup, Response, current_app, render_template
from flask.ext.admin import BaseView, expose, AdminIndexView
from flask.ext.admin.contrib.sqla import ModelView
from flask.ext.admin.actions import action
from flask.ext.login import flash
from flask._compat import PY2
import jinja2
import markdown
import json
from wtforms import (
Form, SelectField, TextAreaField, PasswordField, StringField)
from pygments import highlight, lexers
from pygments.formatters import HtmlFormatter
import airflow
from airflow import models
from airflow.settings import Session
from airflow import configuration
from airflow import utils
from airflow.utils import AirflowException
from airflow.www import utils as wwwutils
from airflow import settings
from airflow.models import State
from airflow.www.forms import DateTimeForm, DateTimeWithNumRunsForm
QUERY_LIMIT = 100000
CHART_LIMIT = 200000
dagbag = models.DagBag(os.path.expanduser(configuration.get('core', 'DAGS_FOLDER')))
login_required = airflow.login.login_required
current_user = airflow.login.current_user
logout_user = airflow.login.logout_user
FILTER_BY_OWNER = False
if configuration.getboolean('webserver', 'FILTER_BY_OWNER'):
# filter_by_owner if authentication is enabled and filter_by_owner is true
FILTER_BY_OWNER = not current_app.config['LOGIN_DISABLED']
def dag_link(v, c, m, p):
url = url_for(
'airflow.graph',
dag_id=m.dag_id)
return Markup(
'<a href="{url}">{m.dag_id}</a>'.format(**locals()))
def log_link(v, c, m, p):
url = url_for(
'airflow.log',
dag_id=m.dag_id,
task_id=m.task_id,
execution_date=m.execution_date.isoformat())
return Markup(
'<a href="{url}">'
' <span class="glyphicon glyphicon-book" aria-hidden="true">'
'</span></a>').format(**locals())
def task_instance_link(v, c, m, p):
url = url_for(
'airflow.task',
dag_id=m.dag_id,
task_id=m.task_id,
execution_date=m.execution_date.isoformat())
url_root = url_for(
'airflow.graph',
dag_id=m.dag_id,
root=m.task_id,
execution_date=m.execution_date.isoformat())
return Markup(
"""
<span style="white-space: nowrap;">
<a href="{url}">{m.task_id}</a>
<a href="{url_root}" title="Filter on this task and upstream">
<span class="glyphicon glyphicon-filter" style="margin-left: 0px;"
aria-hidden="true"></span>
</a>
</span>
""".format(**locals()))
def state_token(state):
color = State.color(state)
return Markup(
'<span class="label" style="background-color:{color};">'
'{state}</span>'.format(**locals()))
def state_f(v, c, m, p):
return state_token(m.state)
def duration_f(v, c, m, p):
if m.end_date and m.duration:
return timedelta(seconds=m.duration)
def datetime_f(v, c, m, p):
attr = getattr(m, p)
dttm = attr.isoformat() if attr else ''
if datetime.now().isoformat()[:4] == dttm[:4]:
dttm = dttm[5:]
return Markup("<nobr>{}</nobr>".format(dttm))
def nobr_f(v, c, m, p):
return Markup("<nobr>{}</nobr>".format(getattr(m, p)))
def label_link(v, c, m, p):
try:
default_params = eval(m.default_params)
except:
default_params = {}
url = url_for(
'airflow.chart', chart_id=m.id, iteration_no=m.iteration_no,
**default_params)
return Markup("<a href='{url}'>{m.label}</a>".format(**locals()))
def pool_link(v, c, m, p):
url = '/admin/taskinstance/?flt1_pool_equals=' + m.pool
return Markup("<a href='{url}'>{m.pool}</a>".format(**locals()))
def pygment_html_render(s, lexer=lexers.TextLexer):
return highlight(
s,
lexer(),
HtmlFormatter(linenos=True),
)
def render(obj, lexer):
out = ""
if isinstance(obj, basestring):
out += pygment_html_render(obj, lexer)
elif isinstance(obj, (tuple, list)):
for i, s in enumerate(obj):
out += "<div>List item #{}</div>".format(i)
out += "<div>" + pygment_html_render(s, lexer) + "</div>"
elif isinstance(obj, dict):
for k, v in obj.items():
out += '<div>Dict item "{}"</div>'.format(k)
out += "<div>" + pygment_html_render(v, lexer) + "</div>"
return out
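# Example (illustrative): render("SELECT 1", lexers.SqlLexer) returns an HTML
# snippet with line numbers, ready to embed in the rendered-template views.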
def wrapped_markdown(s):
return '<div class="rich_doc">' + markdown.markdown(s) + "</div>"
attr_renderer = {
'bash_command': lambda x: render(x, lexers.BashLexer),
'hql': lambda x: render(x, lexers.SqlLexer),
'sql': lambda x: render(x, lexers.SqlLexer),
'doc': lambda x: render(x, lexers.TextLexer),
'doc_json': lambda x: render(x, lexers.JsonLexer),
'doc_rst': lambda x: render(x, lexers.RstLexer),
'doc_yaml': lambda x: render(x, lexers.YamlLexer),
'doc_md': wrapped_markdown,
'python_callable': lambda x: render(
inspect.getsource(x), lexers.PythonLexer),
}
def data_profiling_required(f):
'''
Decorator for views requiring data profiling access
'''
@wraps(f)
def decorated_function(*args, **kwargs):
if (
current_app.config['LOGIN_DISABLED'] or
(not current_user.is_anonymous() and current_user.data_profiling())
):
return f(*args, **kwargs)
else:
flash("This page requires data profiling privileges", "error")
return redirect(url_for('admin.index'))
return decorated_function
def fused_slots(v, c, m, p):
url = (
'/admin/taskinstance/' +
'?flt1_pool_equals=' + m.pool +
'&flt2_state_equals=running')
return Markup("<a href='{0}'>{1}</a>".format(url, m.used_slots()))
def fqueued_slots(v, c, m, p):
url = (
'/admin/taskinstance/' +
'?flt1_pool_equals=' + m.pool +
'&flt2_state_equals=queued&sort=10&desc=1')
return Markup("<a href='{0}'>{1}</a>".format(url, m.queued_slots()))
class Airflow(BaseView):
def is_visible(self):
return False
@expose('/')
@login_required
def index(self):
return self.render('airflow/dags.html')
@expose('/chart_data')
@data_profiling_required
@wwwutils.gzipped
# @cache.cached(timeout=3600, key_prefix=wwwutils.make_cache_key)
def chart_data(self):
session = settings.Session()
chart_id = request.args.get('chart_id')
csv = request.args.get('csv') == "true"
chart = session.query(models.Chart).filter_by(id=chart_id).first()
db = session.query(
models.Connection).filter_by(conn_id=chart.conn_id).first()
session.expunge_all()
session.commit()
session.close()
payload = {}
payload['state'] = 'ERROR'
payload['error'] = ''
# Processing templated fields
try:
args = eval(chart.default_params)
            if type(args) is not dict:
raise AirflowException('Not a dict')
except:
args = {}
payload['error'] += (
"Default params is not valid, string has to evaluate as "
"a Python dictionary. ")
request_dict = {k: request.args.get(k) for k in request.args}
from airflow import macros
args.update(request_dict)
args['macros'] = macros
sql = jinja2.Template(chart.sql).render(**args)
label = jinja2.Template(chart.label).render(**args)
payload['sql_html'] = Markup(highlight(
sql,
lexers.SqlLexer(), # Lexer call
HtmlFormatter(noclasses=True))
)
payload['label'] = label
import pandas as pd
pd.set_option('display.max_colwidth', 100)
hook = db.get_hook()
try:
df = hook.get_pandas_df(wwwutils.limit_sql(sql, CHART_LIMIT, conn_type=db.conn_type))
df = df.fillna(0)
except Exception as e:
payload['error'] += "SQL execution failed. Details: " + str(e)
if csv:
return Response(
response=df.to_csv(index=False),
status=200,
mimetype="application/text")
if not payload['error'] and len(df) == CHART_LIMIT:
payload['warning'] = (
"Data has been truncated to {0}"
" rows. Expect incomplete results.").format(CHART_LIMIT)
if not payload['error'] and len(df) == 0:
payload['error'] += "Empty result set. "
elif (
not payload['error'] and
chart.sql_layout == 'series' and
chart.chart_type != "datatable" and
len(df.columns) < 3):
payload['error'] += "SQL needs to return at least 3 columns. "
elif (
not payload['error'] and
                chart.sql_layout == 'columns' and
len(df.columns) < 2):
payload['error'] += "SQL needs to return at least 2 columns. "
elif not payload['error']:
import numpy as np
chart_type = chart.chart_type
data = None
if chart_type == "datatable":
chart.show_datatable = True
if chart.show_datatable:
data = df.to_dict(orient="split")
data['columns'] = [{'title': c} for c in data['columns']]
# Trying to convert time to something Highcharts likes
x_col = 1 if chart.sql_layout == 'series' else 0
if chart.x_is_date:
try:
# From string to datetime
df[df.columns[x_col]] = pd.to_datetime(
df[df.columns[x_col]])
except Exception as e:
raise AirflowException(str(e))
df[df.columns[x_col]] = df[df.columns[x_col]].apply(
lambda x: int(x.strftime("%s")) * 1000)
series = []
colorAxis = None
if chart_type == 'datatable':
payload['data'] = data
payload['state'] = 'SUCCESS'
return wwwutils.json_response(payload)
elif chart_type == 'para':
df.rename(columns={
df.columns[0]: 'name',
df.columns[1]: 'group',
}, inplace=True)
return Response(
response=df.to_csv(index=False),
status=200,
mimetype="application/text")
elif chart_type == 'heatmap':
color_perc_lbound = float(
request.args.get('color_perc_lbound', 0))
color_perc_rbound = float(
request.args.get('color_perc_rbound', 1))
color_scheme = request.args.get('color_scheme', 'blue_red')
if color_scheme == 'blue_red':
stops = [
[color_perc_lbound, '#00D1C1'],
[
color_perc_lbound +
((color_perc_rbound - color_perc_lbound)/2),
'#FFFFCC'
],
[color_perc_rbound, '#FF5A5F']
]
elif color_scheme == 'blue_scale':
stops = [
[color_perc_lbound, '#FFFFFF'],
[color_perc_rbound, '#2222FF']
]
elif color_scheme == 'fire':
diff = float(color_perc_rbound - color_perc_lbound)
stops = [
[color_perc_lbound, '#FFFFFF'],
[color_perc_lbound + 0.33*diff, '#FFFF00'],
[color_perc_lbound + 0.66*diff, '#FF0000'],
[color_perc_rbound, '#000000']
]
else:
stops = [
[color_perc_lbound, '#FFFFFF'],
[
color_perc_lbound +
((color_perc_rbound - color_perc_lbound)/2),
'#888888'
],
[color_perc_rbound, '#000000'],
]
xaxis_label = df.columns[1]
yaxis_label = df.columns[2]
data = []
for row in df.itertuples():
data.append({
'x': row[2],
'y': row[3],
'value': row[4],
})
x_format = '{point.x:%Y-%m-%d}' \
if chart.x_is_date else '{point.x}'
series.append({
'data': data,
'borderWidth': 0,
'colsize': 24 * 36e5,
'turboThreshold': sys.float_info.max,
'tooltip': {
'headerFormat': '',
'pointFormat': (
df.columns[1] + ': ' + x_format + '<br/>' +
df.columns[2] + ': {point.y}<br/>' +
df.columns[3] + ': <b>{point.value}</b>'
),
},
})
colorAxis = {
'stops': stops,
'minColor': '#FFFFFF',
'maxColor': '#000000',
'min': 50,
'max': 2200,
}
else:
if chart.sql_layout == 'series':
# User provides columns (series, x, y)
xaxis_label = df.columns[1]
yaxis_label = df.columns[2]
df[df.columns[2]] = df[df.columns[2]].astype(np.float)
df = df.pivot_table(
index=df.columns[1],
columns=df.columns[0],
values=df.columns[2], aggfunc=np.sum)
else:
# User provides columns (x, y, metric1, metric2, ...)
xaxis_label = df.columns[0]
yaxis_label = 'y'
df.index = df[df.columns[0]]
df = df.sort(df.columns[0])
del df[df.columns[0]]
for col in df.columns:
df[col] = df[col].astype(np.float)
for col in df.columns:
series.append({
'name': col,
'data': [
(k, df[col][k])
for k in df[col].keys()
if not np.isnan(df[col][k])]
})
series = [serie for serie in sorted(
series, key=lambda s: s['data'][0][1], reverse=True)]
if chart_type == "stacked_area":
stacking = "normal"
chart_type = 'area'
elif chart_type == "percent_area":
stacking = "percent"
chart_type = 'area'
else:
stacking = None
hc = {
'chart': {
'type': chart_type
},
'plotOptions': {
'series': {
'marker': {
'enabled': False
}
},
'area': {'stacking': stacking},
},
'title': {'text': ''},
'xAxis': {
'title': {'text': xaxis_label},
'type': 'datetime' if chart.x_is_date else None,
},
'yAxis': {
'title': {'text': yaxis_label},
},
'colorAxis': colorAxis,
'tooltip': {
'useHTML': True,
'backgroundColor': None,
'borderWidth': 0,
},
'series': series,
}
if chart.y_log_scale:
hc['yAxis']['type'] = 'logarithmic'
hc['yAxis']['minorTickInterval'] = 0.1
if 'min' in hc['yAxis']:
del hc['yAxis']['min']
payload['state'] = 'SUCCESS'
payload['hc'] = hc
payload['data'] = data
payload['request_dict'] = request_dict
return wwwutils.json_response(payload)
@expose('/chart')
@data_profiling_required
def chart(self):
session = settings.Session()
chart_id = request.args.get('chart_id')
embed = request.args.get('embed')
chart = session.query(models.Chart).filter_by(id=chart_id).first()
session.expunge_all()
session.commit()
session.close()
if chart.chart_type == 'para':
return self.render('airflow/para/para.html', chart=chart)
sql = ""
if chart.show_sql:
sql = Markup(highlight(
chart.sql,
lexers.SqlLexer(), # Lexer call
HtmlFormatter(noclasses=True))
)
return self.render(
'airflow/highchart.html',
chart=chart,
title="Airflow - Chart",
sql=sql,
label=chart.label,
embed=embed)
@expose('/dag_stats')
#@login_required
def dag_stats(self):
states = [
State.SUCCESS,
State.RUNNING,
State.FAILED,
State.UPSTREAM_FAILED,
State.UP_FOR_RETRY,
State.QUEUED,
]
task_ids = []
dag_ids = []
for dag in dagbag.dags.values():
task_ids += dag.task_ids
if not dag.is_subdag:
dag_ids.append(dag.dag_id)
TI = models.TaskInstance
session = Session()
qry = (
session.query(TI.dag_id, TI.state, sqla.func.count(TI.task_id))
.filter(TI.task_id.in_(task_ids))
.filter(TI.dag_id.in_(dag_ids))
.group_by(TI.dag_id, TI.state)
)
data = {}
for dag_id, state, count in qry:
if dag_id not in data:
data[dag_id] = {}
data[dag_id][state] = count
session.commit()
session.close()
payload = {}
for dag in dagbag.dags.values():
payload[dag.safe_dag_id] = []
for state in states:
try:
count = data[dag.dag_id][state]
except:
count = 0
d = {
'state': state,
'count': count,
'dag_id': dag.dag_id,
'color': State.color(state)
}
payload[dag.safe_dag_id].append(d)
return wwwutils.json_response(payload)
@expose('/code')
@login_required
def code(self):
dag_id = request.args.get('dag_id')
dag = dagbag.get_dag(dag_id)
code = "".join(open(dag.full_filepath, 'r').readlines())
title = dag.filepath
html_code = highlight(
code, lexers.PythonLexer(), HtmlFormatter(linenos=True))
return self.render(
'airflow/dag_code.html', html_code=html_code, dag=dag, title=title,
root=request.args.get('root'),
demo_mode=configuration.getboolean('webserver', 'demo_mode'))
@expose('/dag_details')
@login_required
def dag_details(self):
dag_id = request.args.get('dag_id')
dag = dagbag.get_dag(dag_id)
title = "DAG details"
session = settings.Session()
TI = models.TaskInstance
states = (
session.query(TI.state, sqla.func.count(TI.dag_id))
.filter(TI.dag_id == dag_id)
.group_by(TI.state)
.all()
)
return self.render(
'airflow/dag_details.html',
dag=dag, title=title, states=states, State=utils.State)
@current_app.errorhandler(404)
def circles(self):
return render_template(
'airflow/circles.html', hostname=socket.gethostname()), 404
@current_app.errorhandler(500)
def show_traceback(self):
from airflow import ascii as ascii_
return render_template(
'airflow/traceback.html',
hostname=socket.gethostname(),
nukular=ascii_.nukular,
info=traceback.format_exc()), 500
@expose('/sandbox')
@login_required
def sandbox(self):
from airflow import configuration
title = "Sandbox Suggested Configuration"
cfg_loc = configuration.AIRFLOW_CONFIG + '.sandbox'
f = open(cfg_loc, 'r')
config = f.read()
f.close()
code_html = Markup(highlight(
config,
lexers.IniLexer(), # Lexer call
HtmlFormatter(noclasses=True))
)
return self.render(
'airflow/code.html',
code_html=code_html, title=title, subtitle=cfg_loc)
@expose('/noaccess')
def noaccess(self):
return self.render('airflow/noaccess.html')
@expose('/headers')
def headers(self):
d = {
'headers': {k: v for k, v in request.headers},
}
if hasattr(current_user, 'is_superuser'):
d['is_superuser'] = current_user.is_superuser()
d['data_profiling'] = current_user.data_profiling()
d['is_anonymous'] = current_user.is_anonymous()
d['is_authenticated'] = current_user.is_authenticated()
if hasattr(current_user, 'username'):
d['username'] = current_user.username
return wwwutils.json_response(d)
@expose('/pickle_info')
def pickle_info(self):
d = {}
dag_id = request.args.get('dag_id')
dags = [dagbag.dags.get(dag_id)] if dag_id else dagbag.dags.values()
for dag in dags:
if not dag.is_subdag:
d[dag.dag_id] = dag.pickle_info()
return wwwutils.json_response(d)
@expose('/login', methods=['GET', 'POST'])
def login(self):
return airflow.login.login(self, request)
@expose('/logout')
def logout(self):
logout_user()
return redirect(url_for('admin.index'))
@expose('/rendered')
@login_required
@wwwutils.action_logging
def rendered(self):
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
execution_date = request.args.get('execution_date')
dttm = dateutil.parser.parse(execution_date)
form = DateTimeForm(data={'execution_date': dttm})
dag = dagbag.get_dag(dag_id)
task = copy.copy(dag.get_task(task_id))
ti = models.TaskInstance(task=task, execution_date=dttm)
try:
ti.render_templates()
except Exception as e:
flash("Error rendering template: " + str(e), "error")
title = "Rendered Template"
html_dict = {}
for template_field in task.__class__.template_fields:
content = getattr(task, template_field)
if template_field in attr_renderer:
html_dict[template_field] = attr_renderer[template_field](content)
else:
html_dict[template_field] = (
"<pre><code>" + str(content) + "</pre></code>")
return self.render(
'airflow/ti_code.html',
html_dict=html_dict,
dag=dag,
task_id=task_id,
execution_date=execution_date,
form=form,
title=title,)
@expose('/log')
@login_required
@wwwutils.action_logging
def log(self):
BASE_LOG_FOLDER = os.path.expanduser(
configuration.get('core', 'BASE_LOG_FOLDER'))
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
execution_date = request.args.get('execution_date')
dag = dagbag.get_dag(dag_id)
log_relative = "{dag_id}/{task_id}/{execution_date}".format(
**locals())
loc = os.path.join(BASE_LOG_FOLDER, log_relative)
loc = loc.format(**locals())
log = ""
TI = models.TaskInstance
session = Session()
dttm = dateutil.parser.parse(execution_date)
ti = session.query(TI).filter(
TI.dag_id == dag_id, TI.task_id == task_id,
TI.execution_date == dttm).first()
dttm = dateutil.parser.parse(execution_date)
form = DateTimeForm(data={'execution_date': dttm})
if ti:
host = ti.hostname
log_loaded = False
if socket.gethostname() == host:
try:
f = open(loc)
log += "".join(f.readlines())
f.close()
log_loaded = True
except:
log = "*** Log file isn't where expected.\n".format(loc)
else:
WORKER_LOG_SERVER_PORT = \
configuration.get('celery', 'WORKER_LOG_SERVER_PORT')
url = os.path.join(
"http://{host}:{WORKER_LOG_SERVER_PORT}/log", log_relative
).format(**locals())
log += "*** Log file isn't local.\n"
log += "*** Fetching here: {url}\n".format(**locals())
try:
import requests
log += '\n' + requests.get(url).text
log_loaded = True
except:
log += "*** Failed to fetch log file from worker.\n".format(
**locals())
# try to load log backup from S3
s3_log_folder = configuration.get('core', 'S3_LOG_FOLDER')
if not log_loaded and s3_log_folder.startswith('s3:'):
import boto
s3 = boto.connect_s3()
s3_log_loc = os.path.join(
configuration.get('core', 'S3_LOG_FOLDER'), log_relative)
log += '*** Fetching log from S3: {}\n'.format(s3_log_loc)
log += ('*** Note: S3 logs are only available once '
'tasks have completed.\n')
bucket, key = s3_log_loc.lstrip('s3:/').split('/', 1)
s3_key = boto.s3.key.Key(s3.get_bucket(bucket), key)
if s3_key.exists():
log += '\n' + s3_key.get_contents_as_string().decode()
else:
log += '*** No log found on S3.\n'
session.commit()
session.close()
log = log.decode('utf-8') if PY2 else log
title = "Log"
return self.render(
'airflow/ti_code.html',
code=log, dag=dag, title=title, task_id=task_id,
execution_date=execution_date, form=form)
@expose('/task')
@login_required
@wwwutils.action_logging
def task(self):
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
# Carrying execution_date through, even though it's irrelevant for
# this context
execution_date = request.args.get('execution_date')
dttm = dateutil.parser.parse(execution_date)
form = DateTimeForm(data={'execution_date': dttm})
dag = dagbag.get_dag(dag_id)
if not dag or task_id not in dag.task_ids:
flash(
"Task [{}.{}] doesn't seem to exist"
" at the moment".format(dag_id, task_id),
"error")
return redirect('/admin/')
task = dag.get_task(task_id)
task = copy.copy(task)
task.resolve_template_files()
attributes = []
for attr_name in dir(task):
if not attr_name.startswith('_'):
attr = getattr(task, attr_name)
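                # note: type(self.task) is the type of a bound method, so this
                # check filters out the operator's methods and keeps plain attributes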
if type(attr) != type(self.task) and \
attr_name not in attr_renderer:
attributes.append((attr_name, str(attr)))
title = "Task Details"
# Color coding the special attributes that are code
special_attrs_rendered = {}
for attr_name in attr_renderer:
if hasattr(task, attr_name):
source = getattr(task, attr_name)
special_attrs_rendered[attr_name] = attr_renderer[attr_name](source)
return self.render(
'airflow/task.html',
attributes=attributes,
task_id=task_id,
execution_date=execution_date,
special_attrs_rendered=special_attrs_rendered,
form=form,
dag=dag, title=title)
@expose('/run')
@login_required
@wwwutils.action_logging
@wwwutils.notify_owner
def run(self):
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
origin = request.args.get('origin')
dag = dagbag.get_dag(dag_id)
task = dag.get_task(task_id)
execution_date = request.args.get('execution_date')
execution_date = dateutil.parser.parse(execution_date)
force = request.args.get('force') == "true"
deps = request.args.get('deps') == "true"
try:
from airflow.executors import DEFAULT_EXECUTOR as executor
from airflow.executors import CeleryExecutor
if not isinstance(executor, CeleryExecutor):
flash("Only works with the CeleryExecutor, sorry", "error")
return redirect(origin)
except ImportError:
# in case CeleryExecutor cannot be imported it is not active either
flash("Only works with the CeleryExecutor, sorry", "error")
return redirect(origin)
ti = models.TaskInstance(task=task, execution_date=execution_date)
executor.start()
executor.queue_task_instance(
ti, force=force, ignore_dependencies=deps)
executor.heartbeat()
flash(
"Sent {} to the message queue, "
"it should start any moment now.".format(ti))
return redirect(origin)
@expose('/clear')
@login_required
@wwwutils.action_logging
@wwwutils.notify_owner
def clear(self):
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
origin = request.args.get('origin')
dag = dagbag.get_dag(dag_id)
task = dag.get_task(task_id)
execution_date = request.args.get('execution_date')
execution_date = dateutil.parser.parse(execution_date)
confirmed = request.args.get('confirmed') == "true"
upstream = request.args.get('upstream') == "true"
downstream = request.args.get('downstream') == "true"
future = request.args.get('future') == "true"
past = request.args.get('past') == "true"
dag = dag.sub_dag(
task_regex=r"^{0}$".format(task_id),
include_downstream=downstream,
include_upstream=upstream)
end_date = execution_date if not future else None
start_date = execution_date if not past else None
if confirmed:
count = dag.clear(
start_date=start_date,
end_date=end_date)
flash("{0} task instances have been cleared".format(count))
return redirect(origin)
else:
tis = dag.clear(
start_date=start_date,
end_date=end_date,
dry_run=True)
if not tis:
flash("No task instances to clear", 'error')
response = redirect(origin)
else:
details = "\n".join([str(t) for t in tis])
response = self.render(
'airflow/confirm.html',
message=(
"Here's the list of task instances you are about "
"to clear:"),
details=details,)
return response
@expose('/blocked')
@login_required
def blocked(self):
session = settings.Session()
DR = models.DagRun
dags = (
session.query(DR.dag_id, sqla.func.count(DR.id))
.filter(DR.state == State.RUNNING)
.group_by(DR.dag_id)
.all()
)
payload = []
for dag_id, active_dag_runs in dags:
max_active_runs = 0
if dag_id in dagbag.dags:
max_active_runs = dagbag.dags[dag_id].max_active_runs
payload.append({
'dag_id': dag_id,
'active_dag_run': active_dag_runs,
'max_active_runs': max_active_runs,
})
return wwwutils.json_response(payload)
@expose('/success')
@login_required
@wwwutils.action_logging
@wwwutils.notify_owner
def success(self):
dag_id = request.args.get('dag_id')
task_id = request.args.get('task_id')
origin = request.args.get('origin')
dag = dagbag.get_dag(dag_id)
task = dag.get_task(task_id)
execution_date = request.args.get('execution_date')
execution_date = dateutil.parser.parse(execution_date)
confirmed = request.args.get('confirmed') == "true"
upstream = request.args.get('upstream') == "true"
downstream = request.args.get('downstream') == "true"
future = request.args.get('future') == "true"
past = request.args.get('past') == "true"
MAX_PERIODS = 1000
# Flagging tasks as successful
session = settings.Session()
task_ids = [task_id]
end_date = ((dag.latest_execution_date or datetime.now())
if future else execution_date)
if 'start_date' in dag.default_args:
start_date = dag.default_args['start_date']
elif dag.start_date:
start_date = dag.start_date
else:
start_date = execution_date
start_date = execution_date if not past else start_date
if downstream:
task_ids += [
t.task_id
for t in task.get_flat_relatives(upstream=False)]
if upstream:
task_ids += [
t.task_id
for t in task.get_flat_relatives(upstream=True)]
TI = models.TaskInstance
if dag.schedule_interval == '@once':
dates = [start_date]
else:
dates = dag.date_range(start_date, end_date=end_date)
tis = session.query(TI).filter(
TI.dag_id == dag_id,
TI.execution_date.in_(dates),
TI.task_id.in_(task_ids)).all()
tis_to_change = session.query(TI).filter(
TI.dag_id == dag_id,
TI.execution_date.in_(dates),
TI.task_id.in_(task_ids),
TI.state != State.SUCCESS).all()
tasks = list(product(task_ids, dates))
tis_to_create = list(
set(tasks) -
set([(ti.task_id, ti.execution_date) for ti in tis]))
tis_all_altered = list(chain(
[(ti.task_id, ti.execution_date) for ti in tis_to_change],
tis_to_create))
if len(tis_all_altered) > MAX_PERIODS:
flash("Too many tasks at once (>{0})".format(
MAX_PERIODS), 'error')
return redirect(origin)
if confirmed:
for ti in tis_to_change:
ti.state = State.SUCCESS
session.commit()
for task_id, task_execution_date in tis_to_create:
ti = TI(
task=dag.get_task(task_id),
execution_date=task_execution_date,
state=State.SUCCESS)
session.add(ti)
session.commit()
session.commit()
session.close()
flash("Marked success on {} task instances".format(
len(tis_all_altered)))
return redirect(origin)
else:
if not tis_all_altered:
flash("No task instances to mark as successful", 'error')
response = redirect(origin)
else:
tis = []
for task_id, task_execution_date in tis_all_altered:
tis.append(TI(
task=dag.get_task(task_id),
execution_date=task_execution_date,
state=State.SUCCESS))
details = "\n".join([str(t) for t in tis])
response = self.render(
'airflow/confirm.html',
message=(
"Here's the list of task instances you are about "
"to mark as successful:"),
details=details,)
return response
@expose('/tree')
@login_required
@wwwutils.gzipped
@wwwutils.action_logging
def tree(self):
dag_id = request.args.get('dag_id')
blur = configuration.getboolean('webserver', 'demo_mode')
dag = dagbag.get_dag(dag_id)
root = request.args.get('root')
if root:
dag = dag.sub_dag(
task_regex=root,
include_downstream=False,
include_upstream=True)
session = settings.Session()
base_date = request.args.get('base_date')
num_runs = request.args.get('num_runs')
num_runs = int(num_runs) if num_runs else 25
if base_date:
base_date = dateutil.parser.parse(base_date)
else:
base_date = dag.latest_execution_date or datetime.now()
dates = dag.date_range(base_date, num=-abs(num_runs))
min_date = dates[0] if dates else datetime(2000, 1, 1)
DR = models.DagRun
dag_runs = (
session.query(DR)
.filter(
DR.dag_id==dag.dag_id,
DR.execution_date<=base_date,
DR.execution_date>=min_date)
.all()
)
dag_runs = {
dr.execution_date: utils.alchemy_to_dict(dr) for dr in dag_runs}
tis = dag.get_task_instances(
session, start_date=min_date, end_date=base_date)
dates = sorted(list({ti.execution_date for ti in tis}))
max_date = max([ti.execution_date for ti in tis]) if dates else None
task_instances = {}
for ti in tis:
tid = utils.alchemy_to_dict(ti)
dr = dag_runs.get(ti.execution_date)
tid['external_trigger'] = dr['external_trigger'] if dr else False
task_instances[(ti.task_id, ti.execution_date)] = tid
expanded = []
# The default recursion traces every path so that tree view has full
# expand/collapse functionality. After 5,000 nodes we stop and fall
# back on a quick DFS search for performance. See PR #320.
node_count = [0]
node_limit = 5000 / max(1, len(dag.roots))
def recurse_nodes(task, visited):
visited.add(task)
node_count[0] += 1
children = [
recurse_nodes(t, visited) for t in task.upstream_list
if node_count[0] < node_limit or t not in visited]
# D3 tree uses children vs _children to define what is
# expanded or not. The following block makes it such that
# repeated nodes are collapsed by default.
children_key = 'children'
if task.task_id not in expanded:
expanded.append(task.task_id)
elif children:
children_key = "_children"
return {
'name': task.task_id,
'instances': [
task_instances.get((task.task_id, d)) or {
'execution_date': d.isoformat(),
'task_id': task.task_id
}
for d in dates],
children_key: children,
'num_dep': len(task.upstream_list),
'operator': task.task_type,
'retries': task.retries,
'owner': task.owner,
'start_date': task.start_date,
'end_date': task.end_date,
'depends_on_past': task.depends_on_past,
'ui_color': task.ui_color,
}
data = {
'name': '[DAG]',
'children': [recurse_nodes(t, set()) for t in dag.roots],
'instances': [
dag_runs.get(d) or {'execution_date': d.isoformat()}
for d in dates],
}
data = json.dumps(data, indent=4, default=utils.json_ser)
session.commit()
session.close()
form = DateTimeWithNumRunsForm(data={'base_date': max_date,
'num_runs': num_runs})
return self.render(
'airflow/tree.html',
operators=sorted(
list(set([op.__class__ for op in dag.tasks])),
key=lambda x: x.__name__
),
root=root,
form=form,
dag=dag, data=data, blur=blur)
@expose('/graph')
@login_required
@wwwutils.gzipped
@wwwutils.action_logging
def graph(self):
session = settings.Session()
dag_id = request.args.get('dag_id')
blur = configuration.getboolean('webserver', 'demo_mode')
arrange = request.args.get('arrange', "LR")
dag = dagbag.get_dag(dag_id)
if dag_id not in dagbag.dags:
flash('DAG "{0}" seems to be missing.'.format(dag_id), "error")
return redirect('/admin/')
root = request.args.get('root')
if root:
dag = dag.sub_dag(
task_regex=root,
include_upstream=True,
include_downstream=False)
nodes = []
edges = []
for task in dag.tasks:
nodes.append({
'id': task.task_id,
'value': {
'label': task.task_id,
'labelStyle': "fill:{0};".format(task.ui_fgcolor),
'style': "fill:{0};".format(task.ui_color),
}
})
def get_upstream(task):
for t in task.upstream_list:
edge = {
'u': t.task_id,
'v': task.task_id,
}
if edge not in edges:
edges.append(edge)
get_upstream(t)
for t in dag.roots:
get_upstream(t)
dttm = request.args.get('execution_date')
if dttm:
dttm = dateutil.parser.parse(dttm)
else:
dttm = dag.latest_execution_date or datetime.now().date()
DR = models.DagRun
drs = session.query(DR).filter_by(dag_id=dag_id).order_by('execution_date desc').all()
dr_choices = []
dr_state = None
for dr in drs:
dr_choices.append((dr.execution_date.isoformat(), dr.run_id))
if dttm == dr.execution_date:
dr_state = dr.state
class GraphForm(Form):
execution_date = SelectField("DAG run", choices=dr_choices)
arrange = SelectField("Layout", choices=(
('LR', "Left->Right"),
('RL', "Right->Left"),
('TB', "Top->Bottom"),
('BT', "Bottom->Top"),
))
form = GraphForm(
data={'execution_date': dttm.isoformat(), 'arrange': arrange})
task_instances = {
ti.task_id: utils.alchemy_to_dict(ti)
for ti in dag.get_task_instances(session, dttm, dttm)}
tasks = {
t.task_id: {
'dag_id': t.dag_id,
'task_type': t.task_type,
}
for t in dag.tasks}
if not tasks:
flash("No tasks found", "error")
session.commit()
session.close()
doc_md = markdown.markdown(dag.doc_md) if hasattr(dag, 'doc_md') else ''
return self.render(
'airflow/graph.html',
dag=dag,
form=form,
width=request.args.get('width', "100%"),
height=request.args.get('height', "800"),
execution_date=dttm.isoformat(),
state_token=state_token(dr_state),
doc_md=doc_md,
arrange=arrange,
operators=sorted(
list(set([op.__class__ for op in dag.tasks])),
key=lambda x: x.__name__
),
blur=blur,
root=root or '',
task_instances=json.dumps(task_instances, indent=2),
tasks=json.dumps(tasks, indent=2),
nodes=json.dumps(nodes, indent=2),
edges=json.dumps(edges, indent=2),)
@expose('/duration')
@login_required
@wwwutils.action_logging
def duration(self):
session = settings.Session()
dag_id = request.args.get('dag_id')
dag = dagbag.get_dag(dag_id)
base_date = request.args.get('base_date')
num_runs = request.args.get('num_runs')
num_runs = int(num_runs) if num_runs else 25
if base_date:
base_date = dateutil.parser.parse(base_date)
else:
base_date = dag.latest_execution_date or datetime.now()
dates = dag.date_range(base_date, num=-abs(num_runs))
min_date = dates[0] if dates else datetime(2000, 1, 1)
root = request.args.get('root')
if root:
dag = dag.sub_dag(
task_regex=root,
include_upstream=True,
include_downstream=False)
all_data = []
for task in dag.tasks:
data = []
for ti in task.get_task_instances(session, start_date=min_date,
end_date=base_date):
if ti.duration:
data.append([
ti.execution_date.isoformat(),
float(ti.duration) / (60*60)
])
if data:
all_data.append({'data': data, 'name': task.task_id})
tis = dag.get_task_instances(
session, start_date=min_date, end_date=base_date)
dates = sorted(list({ti.execution_date for ti in tis}))
max_date = max([ti.execution_date for ti in tis]) if dates else None
session.commit()
session.close()
form = DateTimeWithNumRunsForm(data={'base_date': max_date,
'num_runs': num_runs})
return self.render(
'airflow/chart.html',
dag=dag,
data=json.dumps(all_data),
chart_options={'yAxis': {'title': {'text': 'hours'}}},
height="700px",
demo_mode=configuration.getboolean('webserver', 'demo_mode'),
root=root,
form=form,
)
@expose('/landing_times')
@login_required
@wwwutils.action_logging
def landing_times(self):
session = settings.Session()
dag_id = request.args.get('dag_id')
dag = dagbag.get_dag(dag_id)
base_date = request.args.get('base_date')
num_runs = request.args.get('num_runs')
num_runs = int(num_runs) if num_runs else 25
if base_date:
base_date = dateutil.parser.parse(base_date)
else:
base_date = dag.latest_execution_date or datetime.now()
dates = dag.date_range(base_date, num=-abs(num_runs))
min_date = dates[0] if dates else datetime(2000, 1, 1)
root = request.args.get('root')
if root:
dag = dag.sub_dag(
task_regex=root,
include_upstream=True,
include_downstream=False)
all_data = []
for task in dag.tasks:
data = []
for ti in task.get_task_instances(session, start_date=min_date,
end_date=base_date):
if ti.end_date:
ts = ti.execution_date
if dag.schedule_interval:
ts = dag.following_schedule(ts)
secs = old_div((ti.end_date - ts).total_seconds(), 60*60)
data.append([ti.execution_date.isoformat(), secs])
all_data.append({'data': data, 'name': task.task_id})
tis = dag.get_task_instances(
session, start_date=min_date, end_date=base_date)
dates = sorted(list({ti.execution_date for ti in tis}))
max_date = max([ti.execution_date for ti in tis]) if dates else None
session.commit()
session.close()
form = DateTimeWithNumRunsForm(data={'base_date': max_date,
'num_runs': num_runs})
return self.render(
'airflow/chart.html',
dag=dag,
data=json.dumps(all_data),
height="700px",
chart_options={'yAxis': {'title': {'text': 'hours after 00:00'}}},
demo_mode=configuration.getboolean('webserver', 'demo_mode'),
root=root,
form=form,
)
@expose('/paused')
@login_required
@wwwutils.action_logging
def paused(self):
DagModel = models.DagModel
dag_id = request.args.get('dag_id')
session = settings.Session()
orm_dag = session.query(
DagModel).filter(DagModel.dag_id == dag_id).first()
if request.args.get('is_paused') == 'false':
orm_dag.is_paused = True
else:
orm_dag.is_paused = False
session.merge(orm_dag)
session.commit()
session.close()
dagbag.get_dag(dag_id)
return "OK"
@expose('/refresh')
@login_required
@wwwutils.action_logging
def refresh(self):
DagModel = models.DagModel
dag_id = request.args.get('dag_id')
session = settings.Session()
orm_dag = session.query(
DagModel).filter(DagModel.dag_id == dag_id).first()
if orm_dag:
orm_dag.last_expired = datetime.now()
session.merge(orm_dag)
session.commit()
session.close()
dagbag.get_dag(dag_id)
flash("DAG [{}] is now fresh as a daisy".format(dag_id))
return redirect('/')
@expose('/refresh_all')
@login_required
@wwwutils.action_logging
def refresh_all(self):
dagbag.collect_dags(only_if_updated=False)
flash("All DAGs are now up to date")
return redirect('/')
@expose('/gantt')
@login_required
@wwwutils.action_logging
def gantt(self):
session = settings.Session()
dag_id = request.args.get('dag_id')
dag = dagbag.get_dag(dag_id)
demo_mode = configuration.getboolean('webserver', 'demo_mode')
root = request.args.get('root')
if root:
dag = dag.sub_dag(
task_regex=root,
include_upstream=True,
include_downstream=False)
dttm = request.args.get('execution_date')
if dttm:
dttm = dateutil.parser.parse(dttm)
else:
dttm = dag.latest_execution_date or datetime.now().date()
form = DateTimeForm(data={'execution_date': dttm})
tis = [
ti
for ti in dag.get_task_instances(session, dttm, dttm)
if ti.start_date]
tis = sorted(tis, key=lambda ti: ti.start_date)
tasks = []
data = []
for i, ti in enumerate(tis):
end_date = ti.end_date or datetime.now()
tasks += [ti.task_id]
color = State.color(ti.state)
data.append({
'x': i,
'low': int(ti.start_date.strftime('%s')) * 1000,
'high': int(end_date.strftime('%s')) * 1000,
'color': color,
})
height = (len(tis) * 25) + 50
session.commit()
session.close()
hc = {
'chart': {
'type': 'columnrange',
'inverted': True,
'height': height,
},
'xAxis': {'categories': tasks, 'alternateGridColor': '#FAFAFA'},
'yAxis': {'type': 'datetime'},
'title': {
'text': None
},
'plotOptions': {
'series': {
'cursor': 'pointer',
'minPointLength': 4,
},
},
'legend': {
'enabled': False
},
'series': [{
'data': data
}]
}
return self.render(
'airflow/gantt.html',
dag=dag,
execution_date=dttm.isoformat(),
form=form,
hc=json.dumps(hc, indent=4),
height=height,
demo_mode=demo_mode,
root=root,
)
@expose('/variables/<form>', methods=["GET", "POST"])
@login_required
@wwwutils.action_logging
def variables(self, form):
try:
if request.method == 'POST':
data = request.json
if data:
session = settings.Session()
var = models.Variable(key=form, val=json.dumps(data))
session.add(var)
session.commit()
return ""
else:
return self.render(
'airflow/variables/{}.html'.format(form)
)
except:
return ("Error: form airflow/variables/{}.html "
"not found.").format(form), 404
class HomeView(AdminIndexView):
@expose("/")
@login_required
def index(self):
session = Session()
DM = models.DagModel
qry = None
# filter the dags if filter_by_owner and current user is not superuser
do_filter = FILTER_BY_OWNER and (not current_user.is_superuser())
if do_filter:
qry = (
session.query(DM)
.filter(
~DM.is_subdag, DM.is_active,
DM.owners == current_user.username)
.all()
)
else:
qry = session.query(DM).filter(~DM.is_subdag, DM.is_active).all()
orm_dags = {dag.dag_id: dag for dag in qry}
import_errors = session.query(models.ImportError).all()
for ie in import_errors:
flash(
"Broken DAG: [{ie.filename}] {ie.stacktrace}".format(ie=ie),
"error")
session.expunge_all()
session.commit()
session.close()
dags = dagbag.dags.values()
if do_filter:
dags = {
dag.dag_id: dag
for dag in dags
if (
dag.owner == current_user.username and (not dag.parent_dag)
)
}
else:
dags = {dag.dag_id: dag for dag in dags if not dag.parent_dag}
all_dag_ids = sorted(set(orm_dags.keys()) | set(dags.keys()))
return self.render(
'airflow/dags.html',
dags=dags,
orm_dags=orm_dags,
all_dag_ids=all_dag_ids)
class QueryView(wwwutils.DataProfilingMixin, BaseView):
@expose('/')
@wwwutils.gzipped
def query(self):
session = settings.Session()
dbs = session.query(models.Connection).order_by(
models.Connection.conn_id).all()
session.expunge_all()
db_choices = list(
((db.conn_id, db.conn_id) for db in dbs if db.get_hook()))
conn_id_str = request.args.get('conn_id')
csv = request.args.get('csv') == "true"
sql = request.args.get('sql')
class QueryForm(Form):
conn_id = SelectField("Layout", choices=db_choices)
sql = TextAreaField("SQL", widget=wwwutils.AceEditorWidget())
data = {
'conn_id': conn_id_str,
'sql': sql,
}
results = None
has_data = False
error = False
if conn_id_str:
db = [db for db in dbs if db.conn_id == conn_id_str][0]
hook = db.get_hook()
try:
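                # limit_sql wraps the raw query so that at most QUERY_LIMIT rows
                # are fetched from the selected connection's hook.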
df = hook.get_pandas_df(wwwutils.limit_sql(sql, QUERY_LIMIT, conn_type=db.conn_type))
# df = hook.get_pandas_df(sql)
has_data = len(df) > 0
df = df.fillna('')
results = df.to_html(
classes=[
'table', 'table-bordered', 'table-striped', 'no-wrap'],
index=False,
na_rep='',
) if has_data else ''
except Exception as e:
flash(str(e), 'error')
error = True
if has_data and len(df) == QUERY_LIMIT:
flash(
"Query output truncated at " + str(QUERY_LIMIT) +
" rows", 'info')
if not has_data and error:
flash('No data', 'error')
if csv:
return Response(
response=df.to_csv(index=False),
status=200,
mimetype="application/text")
form = QueryForm(request.form, data=data)
session.commit()
session.close()
return self.render(
'airflow/query.html', form=form,
title="Ad Hoc Query",
results=results or '',
has_data=has_data)
class AirflowModelView(ModelView):
list_template = 'airflow/model_list.html'
edit_template = 'airflow/model_edit.html'
create_template = 'airflow/model_create.html'
page_size = 500
class ModelViewOnly(wwwutils.LoginMixin, AirflowModelView):
"""
Modifying the base ModelView class for non edit, browse only operations
"""
named_filter_urls = True
can_create = False
can_edit = False
can_delete = False
column_display_pk = True
class PoolModelView(wwwutils.SuperUserMixin, AirflowModelView):
column_list = ('pool', 'slots', 'used_slots', 'queued_slots')
column_formatters = dict(
pool=pool_link, used_slots=fused_slots, queued_slots=fqueued_slots)
named_filter_urls = True
class SlaMissModelView(wwwutils.SuperUserMixin, ModelViewOnly):
verbose_name_plural = "SLA misses"
verbose_name = "SLA miss"
column_list = (
'dag_id', 'task_id', 'execution_date', 'email_sent', 'timestamp')
column_formatters = dict(
task_id=task_instance_link,
execution_date=datetime_f,
timestamp=datetime_f,
dag_id=dag_link)
named_filter_urls = True
column_searchable_list = ('dag_id', 'task_id',)
column_filters = (
'dag_id', 'task_id', 'email_sent', 'timestamp', 'execution_date')
form_widget_args = {
'email_sent': {'disabled': True},
'timestamp': {'disabled': True},
}
class ChartModelView(wwwutils.DataProfilingMixin, AirflowModelView):
verbose_name = "chart"
verbose_name_plural = "charts"
form_columns = (
'label',
'owner',
'conn_id',
'chart_type',
'show_datatable',
'x_is_date',
'y_log_scale',
'show_sql',
'height',
'sql_layout',
'sql',
'default_params',)
column_list = (
'label', 'conn_id', 'chart_type', 'owner', 'last_modified',)
column_formatters = dict(label=label_link, last_modified=datetime_f)
column_default_sort = ('last_modified', True)
create_template = 'airflow/chart/create.html'
edit_template = 'airflow/chart/edit.html'
column_filters = ('label', 'owner.username', 'conn_id')
column_searchable_list = ('owner.username', 'label', 'sql')
column_descriptions = {
'label': "Can include {{ templated_fields }} and {{ macros }}",
'chart_type': "The type of chart to be displayed",
'sql': "Can include {{ templated_fields }} and {{ macros }}.",
'height': "Height of the chart, in pixels.",
'conn_id': "Source database to run the query against",
'x_is_date': (
"Whether the X axis should be casted as a date field. Expect most "
"intelligible date formats to get casted properly."
),
'owner': (
"The chart's owner, mostly used for reference and filtering in "
"the list view."
),
'show_datatable':
"Whether to display an interactive data table under the chart.",
'default_params': (
'A dictionary of {"key": "values",} that define what the '
'templated fields (parameters) values should be by default. '
'To be valid, it needs to "eval" as a Python dict. '
'The key values will show up in the url\'s querystring '
'and can be altered there.'
),
'show_sql': "Whether to display the SQL statement as a collapsible "
"section in the chart page.",
'y_log_scale': "Whether to use a log scale for the Y axis.",
'sql_layout': (
"Defines the layout of the SQL that the application should "
"expect. Depending on the tables you are sourcing from, it may "
"make more sense to pivot / unpivot the metrics."
),
}
column_labels = {
'sql': "SQL",
'height': "Chart Height",
'sql_layout': "SQL Layout",
'show_sql': "Display the SQL Statement",
'default_params': "Default Parameters",
}
form_choices = {
'chart_type': [
('line', 'Line Chart'),
('spline', 'Spline Chart'),
('bar', 'Bar Chart'),
('para', 'Parallel Coordinates'),
('column', 'Column Chart'),
('area', 'Overlapping Area Chart'),
('stacked_area', 'Stacked Area Chart'),
('percent_area', 'Percent Area Chart'),
('heatmap', 'Heatmap'),
('datatable', 'No chart, data table only'),
],
'sql_layout': [
('series', 'SELECT series, x, y FROM ...'),
('columns', 'SELECT x, y (series 1), y (series 2), ... FROM ...'),
],
'conn_id': [
(c.conn_id, c.conn_id)
for c in (
Session().query(models.Connection.conn_id)
.group_by(models.Connection.conn_id)
)
]
}
def on_model_change(self, form, model, is_created=True):
if model.iteration_no is None:
model.iteration_no = 0
else:
model.iteration_no += 1
if not model.user_id and current_user and hasattr(current_user, 'id'):
model.user_id = current_user.id
model.last_modified = datetime.now()
class KnowEventView(wwwutils.DataProfilingMixin, AirflowModelView):
verbose_name = "known event"
verbose_name_plural = "known events"
form_columns = (
'label',
'event_type',
'start_date',
'end_date',
'reported_by',
'description')
column_list = (
'label', 'event_type', 'start_date', 'end_date', 'reported_by')
column_default_sort = ("start_date", True)
class KnowEventTypeView(wwwutils.DataProfilingMixin, AirflowModelView):
pass
'''
# For debugging / troubleshooting
mv = KnowEventTypeView(
models.KnownEventType,
Session, name="Known Event Types", category="Manage")
admin.add_view(mv)
class DagPickleView(SuperUserMixin, ModelView):
pass
mv = DagPickleView(
models.DagPickle,
Session, name="Pickles", category="Manage")
admin.add_view(mv)
'''
class VariableView(wwwutils.LoginMixin, AirflowModelView):
verbose_name = "Variable"
verbose_name_plural = "Variables"
column_list = ('key',)
column_filters = ('key', 'val')
column_searchable_list = ('key', 'val')
form_widget_args = {
'val': {
'rows': 20,
}
}
class JobModelView(ModelViewOnly):
verbose_name_plural = "jobs"
verbose_name = "job"
column_default_sort = ('start_date', True)
column_filters = (
'job_type', 'dag_id', 'state',
'unixname', 'hostname', 'start_date', 'end_date', 'latest_heartbeat')
column_formatters = dict(
start_date=datetime_f,
end_date=datetime_f,
hostname=nobr_f,
state=state_f,
latest_heartbeat=datetime_f)
class DagRunModelView(ModelViewOnly):
verbose_name_plural = "DAG Runs"
can_delete = True
can_edit = True
can_create = True
column_editable_list = ('state',)
verbose_name = "dag run"
column_default_sort = ('execution_date', True)
form_choices = {
'state': [
('success', 'success'),
('running', 'running'),
('failed', 'failed'),
],
}
column_list = (
'state', 'dag_id', 'execution_date', 'run_id', 'external_trigger')
column_filters = column_list
column_searchable_list = ('dag_id', 'state', 'run_id')
column_formatters = dict(
execution_date=datetime_f,
state=state_f,
start_date=datetime_f,
dag_id=dag_link)
@action('set_running', "Set state to 'running'", None)
def action_set_running(self, ids):
self.set_dagrun_state(ids, State.RUNNING)
@action('set_failed', "Set state to 'failed'", None)
def action_set_failed(self, ids):
self.set_dagrun_state(ids, State.FAILED)
@action('set_success', "Set state to 'success'", None)
def action_set_success(self, ids):
self.set_dagrun_state(ids, State.SUCCESS)
@utils.provide_session
def set_dagrun_state(self, ids, target_state, session=None):
try:
DR = models.DagRun
count = 0
for dr in session.query(DR).filter(DR.id.in_(ids)).all():
count += 1
dr.state = target_state
if target_state == State.RUNNING:
dr.start_date = datetime.now()
else:
dr.end_date = datetime.now()
session.commit()
flash(
"{count} dag runs were set to '{target_state}'".format(**locals()))
except Exception as ex:
if not self.handle_view_exception(ex):
raise Exception("Ooops")
flash('Failed to set state', 'error')
class LogModelView(ModelViewOnly):
verbose_name_plural = "logs"
verbose_name = "log"
column_default_sort = ('dttm', True)
column_filters = ('dag_id', 'task_id', 'execution_date')
column_formatters = dict(
dttm=datetime_f, execution_date=datetime_f, dag_id=dag_link)
class TaskInstanceModelView(ModelViewOnly):
verbose_name_plural = "task instances"
verbose_name = "task instance"
column_filters = (
'state', 'dag_id', 'task_id', 'execution_date', 'hostname',
'queue', 'pool', 'operator', 'start_date', 'end_date')
named_filter_urls = True
column_formatters = dict(
log=log_link, task_id=task_instance_link,
hostname=nobr_f,
state=state_f,
execution_date=datetime_f,
start_date=datetime_f,
end_date=datetime_f,
queued_dttm=datetime_f,
dag_id=dag_link, duration=duration_f)
column_searchable_list = ('dag_id', 'task_id', 'state')
column_default_sort = ('start_date', True)
form_choices = {
'state': [
('success', 'success'),
('running', 'running'),
('failed', 'failed'),
],
}
column_list = (
'state', 'dag_id', 'task_id', 'execution_date', 'operator',
'start_date', 'end_date', 'duration', 'job_id', 'hostname',
'unixname', 'priority_weight', 'queue', 'queued_dttm', 'pool', 'log')
can_delete = True
page_size = 500
@action('set_running', "Set state to 'running'", None)
def action_set_running(self, ids):
self.set_task_instance_state(ids, State.RUNNING)
@action('set_failed', "Set state to 'failed'", None)
def action_set_failed(self, ids):
self.set_task_instance_state(ids, State.FAILED)
@action('set_success', "Set state to 'success'", None)
def action_set_success(self, ids):
self.set_task_instance_state(ids, State.SUCCESS)
@action('set_retry', "Set state to 'up_for_retry'", None)
def action_set_retry(self, ids):
self.set_task_instance_state(ids, State.UP_FOR_RETRY)
@utils.provide_session
def set_task_instance_state(self, ids, target_state, session=None):
try:
TI = models.TaskInstance
for count, id in enumerate(ids):
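                # each id arrives from the list view as a composite string of the
                # form "task_id,dag_id,execution_date", so unpack it before querying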
task_id, dag_id, execution_date = id.split(',')
execution_date = datetime.strptime(execution_date, '%Y-%m-%d %H:%M:%S')
ti = session.query(TI).filter(TI.task_id == task_id,
TI.dag_id == dag_id,
TI.execution_date == execution_date).one()
ti.state = target_state
count += 1
session.commit()
flash(
"{count} task instances were set to '{target_state}'".format(**locals()))
except Exception as ex:
if not self.handle_view_exception(ex):
raise Exception("Ooops")
flash('Failed to set state', 'error')
class ConnectionModelView(wwwutils.SuperUserMixin, AirflowModelView):
create_template = 'airflow/conn_create.html'
edit_template = 'airflow/conn_edit.html'
list_template = 'airflow/conn_list.html'
form_columns = (
'conn_id',
'conn_type',
'host',
'schema',
'login',
'password',
'port',
'extra',
'extra__jdbc__drv_path',
'extra__jdbc__drv_clsname',
)
verbose_name = "Connection"
verbose_name_plural = "Connections"
column_default_sort = ('conn_id', False)
column_list = ('conn_id', 'conn_type', 'host', 'port', 'is_encrypted',)
form_overrides = dict(_password=PasswordField)
form_widget_args = {
'is_encrypted': {'disabled': True},
}
    # Used to customize the form; the form's elements get rendered
    # and the results are stored in the extra field as JSON. All of these
    # need to be prefixed with extra__ and then the conn_type, as in
    # extra__{conn_type}__name. You can also hide form elements and rename
    # others from the connection_form.js file
form_extra_fields = {
'extra__jdbc__drv_path' : StringField('Driver Path'),
'extra__jdbc__drv_clsname': StringField('Driver Class'),
}
form_choices = {
'conn_type': [
('bigquery', 'BigQuery',),
('ftp', 'FTP',),
('google_cloud_storage', 'Google Cloud Storage'),
('hdfs', 'HDFS',),
('http', 'HTTP',),
('hive_cli', 'Hive Client Wrapper',),
('hive_metastore', 'Hive Metastore Thrift',),
('hiveserver2', 'Hive Server 2 Thrift',),
('jdbc', 'Jdbc Connection',),
('mysql', 'MySQL',),
('postgres', 'Postgres',),
('oracle', 'Oracle',),
('vertica', 'Vertica',),
('presto', 'Presto',),
('s3', 'S3',),
('samba', 'Samba',),
('sqlite', 'Sqlite',),
('mssql', 'Microsoft SQL Server'),
('mesos_framework-id', 'Mesos Framework ID'),
]
}
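    # Serialize any extra__<conn_type>__* form fields back into the JSON 'extra'
    # column; at the moment only JDBC connections define such extra fields above.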
def on_model_change(self, form, model, is_created):
formdata = form.data
if formdata['conn_type'] in ['jdbc']:
extra = {
key:formdata[key]
for key in self.form_extra_fields.keys() if key in formdata}
model.extra = json.dumps(extra)
@classmethod
def alert_fernet_key(cls):
return not configuration.has_option('core', 'fernet_key')
    @classmethod
    def is_secure(cls):
        """
        Used to decide whether to display a message in the Connection list
        view making it clear that the passwords can't be encrypted.
        """
is_secure = False
try:
import cryptography
configuration.get('core', 'fernet_key')
is_secure = True
except:
pass
return is_secure
def on_form_prefill(self, form, id):
try:
d = json.loads(form.data.get('extra', '{}'))
except Exception as e:
d = {}
for field in list(self.form_extra_fields.keys()):
value = d.get(field, '')
if value:
field = getattr(form, field)
field.data = value
class UserModelView(wwwutils.SuperUserMixin, AirflowModelView):
verbose_name = "User"
verbose_name_plural = "Users"
column_default_sort = 'username'
class ConfigurationView(wwwutils.SuperUserMixin, BaseView):
@expose('/')
def conf(self):
from airflow import configuration
raw = request.args.get('raw') == "true"
title = "Airflow Configuration"
subtitle = configuration.AIRFLOW_CONFIG
if configuration.getboolean("webserver", "expose_config"):
with open(configuration.AIRFLOW_CONFIG, 'r') as f:
config = f.read()
else:
config = (
"# You Airflow administrator chose not to expose the "
"configuration, most likely for security reasons.")
if raw:
return Response(
response=config,
status=200,
mimetype="application/text")
else:
code_html = Markup(highlight(
config,
lexers.IniLexer(), # Lexer call
HtmlFormatter(noclasses=True))
)
return self.render(
'airflow/code.html',
pre_subtitle=settings.HEADER + " v" + airflow.__version__,
code_html=code_html, title=title, subtitle=subtitle)
class DagModelView(wwwutils.SuperUserMixin, ModelView):
column_list = ('dag_id', 'owners')
column_editable_list = ('is_paused',)
form_excluded_columns = ('is_subdag', 'is_active')
column_searchable_list = ('dag_id',)
column_filters = (
'dag_id', 'owners', 'is_paused', 'is_active', 'is_subdag',
'last_scheduler_run', 'last_expired')
form_widget_args = {
'last_scheduler_run': {'disabled': True},
'fileloc': {'disabled': True},
'is_paused': {'disabled': True},
'last_pickled': {'disabled': True},
'pickle_id': {'disabled': True},
'last_loaded': {'disabled': True},
'last_expired': {'disabled': True},
'pickle_size': {'disabled': True},
'scheduler_lock': {'disabled': True},
'owners': {'disabled': True},
}
column_formatters = dict(
dag_id=dag_link,
)
can_delete = False
can_create = False
page_size = 50
list_template = 'airflow/list_dags.html'
named_filter_urls = True
def get_query(self):
"""
Default filters for model
"""
return (
super(DagModelView, self)
.get_query()
.filter(or_(models.DagModel.is_active, models.DagModel.is_paused))
.filter(~models.DagModel.is_subdag)
)
def get_count_query(self):
"""
Default filters for model
"""
return (
super(DagModelView, self)
.get_count_query()
.filter(models.DagModel.is_active)
.filter(~models.DagModel.is_subdag)
)
| apache-2.0 |
syl20bnr/nupic | external/linux32/lib/python2.6/site-packages/matplotlib/bezier.py | 70 | 14387 | """
A module providing some utility functions regarding bezier path manipulation.
"""
import numpy as np
from math import sqrt
from matplotlib.path import Path
from operator import xor
# some functions
def get_intersection(cx1, cy1, cos_t1, sin_t1,
cx2, cy2, cos_t2, sin_t2):
""" return a intersecting point between a line through (cx1, cy1)
and having angle t1 and a line through (cx2, cy2) and angle t2.
"""
# line1 => sin_t1 * (x - cx1) - cos_t1 * (y - cy1) = 0.
# line1 => sin_t1 * x + cos_t1 * y = sin_t1*cx1 - cos_t1*cy1
line1_rhs = sin_t1 * cx1 - cos_t1 * cy1
line2_rhs = sin_t2 * cx2 - cos_t2 * cy2
# rhs matrix
a, b = sin_t1, -cos_t1
c, d = sin_t2, -cos_t2
ad_bc = a*d-b*c
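    # ad_bc is the determinant of the 2x2 system; after the parallel-line check
    # below, its explicit inverse ([d, -b; -c, a] / det) is applied to the two
    # right-hand sides to obtain x and y.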
if ad_bc == 0.:
raise ValueError("Given lines do not intersect")
#rhs_inverse
a_, b_ = d, -b
c_, d_ = -c, a
a_, b_, c_, d_ = [k / ad_bc for k in [a_, b_, c_, d_]]
x = a_* line1_rhs + b_ * line2_rhs
y = c_* line1_rhs + d_ * line2_rhs
return x, y
def get_normal_points(cx, cy, cos_t, sin_t, length):
"""
    For a line passing through (*cx*, *cy*) and having an angle *t*,
    return the locations of the two points located along its perpendicular line at a distance of *length*.
"""
if length == 0.:
return cx, cy, cx, cy
cos_t1, sin_t1 = sin_t, -cos_t
cos_t2, sin_t2 = -sin_t, cos_t
x1, y1 = length*cos_t1 + cx, length*sin_t1 + cy
x2, y2 = length*cos_t2 + cx, length*sin_t2 + cy
return x1, y1, x2, y2
## BEZIER routines
# subdividing bezier curve
# http://www.cs.mtu.edu/~shene/COURSES/cs3621/NOTES/spline/Bezier/bezier-sub.html
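# _de_casteljau1 performs a single de Casteljau reduction step: it linearly
# interpolates between consecutive control points at parameter t, producing a
# control polygon with one fewer point.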
def _de_casteljau1(beta, t):
next_beta = beta[:-1] * (1-t) + beta[1:] * t
return next_beta
def split_de_casteljau(beta, t):
"""split a bezier segment defined by its controlpoints *beta*
into two separate segment divided at *t* and return their control points.
"""
beta = np.asarray(beta)
beta_list = [beta]
while True:
beta = _de_casteljau1(beta, t)
beta_list.append(beta)
if len(beta) == 1:
break
left_beta = [beta[0] for beta in beta_list]
right_beta = [beta[-1] for beta in reversed(beta_list)]
return left_beta, right_beta
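# A minimal usage sketch (editor's illustration, not from the original docs):
# splitting a quadratic segment at t = 0.5 with split_de_casteljau returns the
# control points of the two halves; left[0] and right[-1] are still the original
# endpoints of the curve.
#
#   left, right = split_de_casteljau(np.array([[0., 0.], [1., 2.], [2., 0.]]), 0.5)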
def find_bezier_t_intersecting_with_closedpath(bezier_point_at_t, inside_closedpath,
t0=0., t1=1., tolerence=0.01):
""" Find a parameter t0 and t1 of the given bezier path which
bounds the intersecting points with a provided closed
path(*inside_closedpath*). Search starts from *t0* and *t1* and it
uses a simple bisecting algorithm therefore one of the end point
must be inside the path while the orther doesn't. The search stop
when |t0-t1| gets smaller than the given tolerence.
value for
- bezier_point_at_t : a function which returns x, y coordinates at *t*
- inside_closedpath : return True if the point is insed the path
"""
# inside_closedpath : function
start = bezier_point_at_t(t0)
end = bezier_point_at_t(t1)
start_inside = inside_closedpath(start)
end_inside = inside_closedpath(end)
if not xor(start_inside, end_inside):
raise ValueError("the segment does not seemed to intersect with the path")
while 1:
# return if the distance is smaller than the tolerence
if (start[0]-end[0])**2 + (start[1]-end[1])**2 < tolerence**2:
return t0, t1
# calculate the middle point
middle_t = 0.5*(t0+t1)
middle = bezier_point_at_t(middle_t)
middle_inside = inside_closedpath(middle)
if xor(start_inside, middle_inside):
t1 = middle_t
end = middle
end_inside = middle_inside
else:
t0 = middle_t
start = middle
start_inside = middle_inside
class BezierSegment:
"""
A simple class of a 2-dimensional bezier segment
"""
    # Higher order bezier lines can be supported by simply adding
    # corresponding values.
_binom_coeff = {1:np.array([1., 1.]),
2:np.array([1., 2., 1.]),
3:np.array([1., 3., 3., 1.])}
def __init__(self, control_points):
"""
        *control_points* : location of the control points. It needs to have a
        shape of n * 2, where n is the order of the bezier line. 1 <=
        n <= 3 is supported.
"""
_o = len(control_points)
self._orders = np.arange(_o)
_coeff = BezierSegment._binom_coeff[_o - 1]
_control_points = np.asarray(control_points)
xx = _control_points[:,0]
yy = _control_points[:,1]
self._px = xx * _coeff
self._py = yy * _coeff
def point_at_t(self, t):
"evaluate a point at t"
one_minus_t_powers = np.power(1.-t, self._orders)[::-1]
t_powers = np.power(t, self._orders)
tt = one_minus_t_powers * t_powers
_x = sum(tt * self._px)
_y = sum(tt * self._py)
return _x, _y
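# Editor's sketch of how BezierSegment is meant to be used (values checked by
# hand for a symmetric quadratic arc, not taken from the original docs):
#
#   bz = BezierSegment([(0., 0.), (1., 2.), (2., 0.)])
#   bz.point_at_t(0.5)   # -> (1.0, 1.0), the apex of the arc
#
# point_at_t evaluates the Bernstein form
#   B(t) = sum_i C(n, i) * (1 - t)**(n - i) * t**i * P_i
# using the per-coordinate products precomputed in __init__.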
def split_bezier_intersecting_with_closedpath(bezier,
inside_closedpath,
tolerence=0.01):
"""
bezier : control points of the bezier segment
inside_closedpath : a function which returns true if the point is inside the path
"""
bz = BezierSegment(bezier)
bezier_point_at_t = bz.point_at_t
t0, t1 = find_bezier_t_intersecting_with_closedpath(bezier_point_at_t,
inside_closedpath,
tolerence=tolerence)
_left, _right = split_de_casteljau(bezier, (t0+t1)/2.)
return _left, _right
def find_r_to_boundary_of_closedpath(inside_closedpath, xy,
cos_t, sin_t,
rmin=0., rmax=1., tolerence=0.01):
"""
Find a radius r (centered at *xy*) between *rmin* and *rmax* at
    which it intersects with the path.
inside_closedpath : function
cx, cy : center
cos_t, sin_t : cosine and sine for the angle
rmin, rmax :
"""
cx, cy = xy
def _f(r):
return cos_t*r + cx, sin_t*r + cy
find_bezier_t_intersecting_with_closedpath(_f, inside_closedpath,
t0=rmin, t1=rmax, tolerence=tolerence)
## matplotlib specific
def split_path_inout(path, inside, tolerence=0.01, reorder_inout=False):
""" divide a path into two segment at the point where inside(x, y)
becomes False.
"""
path_iter = path.iter_segments()
ctl_points, command = path_iter.next()
begin_inside = inside(ctl_points[-2:]) # true if begin point is inside
bezier_path = None
ctl_points_old = ctl_points
concat = np.concatenate
iold=0
i = 1
for ctl_points, command in path_iter:
iold=i
i += len(ctl_points)/2
if inside(ctl_points[-2:]) != begin_inside:
bezier_path = concat([ctl_points_old[-2:], ctl_points])
break
ctl_points_old = ctl_points
if bezier_path is None:
raise ValueError("The path does not seem to intersect with the patch")
bp = zip(bezier_path[::2], bezier_path[1::2])
left, right = split_bezier_intersecting_with_closedpath(bp,
inside,
tolerence)
if len(left) == 2:
codes_left = [Path.LINETO]
codes_right = [Path.MOVETO, Path.LINETO]
elif len(left) == 3:
codes_left = [Path.CURVE3, Path.CURVE3]
codes_right = [Path.MOVETO, Path.CURVE3, Path.CURVE3]
elif len(left) == 4:
codes_left = [Path.CURVE4, Path.CURVE4, Path.CURVE4]
codes_right = [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4]
else:
raise ValueError()
verts_left = left[1:]
verts_right = right[:]
#i += 1
if path.codes is None:
path_in = Path(concat([path.vertices[:i], verts_left]))
path_out = Path(concat([verts_right, path.vertices[i:]]))
else:
path_in = Path(concat([path.vertices[:iold], verts_left]),
concat([path.codes[:iold], codes_left]))
path_out = Path(concat([verts_right, path.vertices[i:]]),
concat([codes_right, path.codes[i:]]))
if reorder_inout and begin_inside == False:
path_in, path_out = path_out, path_in
return path_in, path_out
def inside_circle(cx, cy, r):
r2 = r**2
def _f(xy):
x, y = xy
return (x-cx)**2 + (y-cy)**2 < r2
return _f
# quadratic bezier lines
def get_cos_sin(x0, y0, x1, y1):
dx, dy = x1-x0, y1-y0
d = (dx*dx + dy*dy)**.5
return dx/d, dy/d
def get_parallels(bezier2, width):
"""
    Given the quadratic bezier control points *bezier2*, returns
    control points of quadratic bezier lines roughly parallel to the given
    one, separated by *width*.
"""
    # The parallel bezier lines are constructed in the following way.
    #  c1 and c2 are the control points representing the beginning and end of the bezier line.
    #  cm is the middle control point
c1x, c1y = bezier2[0]
cmx, cmy = bezier2[1]
c2x, c2y = bezier2[2]
    # t1 and t2 are the angles between c1 and cm, and between cm and c2.
    # They are also the angles of the tangent lines of the path at c1 and c2
cos_t1, sin_t1 = get_cos_sin(c1x, c1y, cmx, cmy)
cos_t2, sin_t2 = get_cos_sin(cmx, cmy, c2x, c2y)
    # find c1_left, c1_right which are located along the lines
    # through c1 and perpendicular to the tangent lines of the
    # bezier path, at a distance of width. Same thing for c2_left and
    # c2_right with respect to c2.
c1x_left, c1y_left, c1x_right, c1y_right = \
get_normal_points(c1x, c1y, cos_t1, sin_t1, width)
c2x_left, c2y_left, c2x_right, c2y_right = \
get_normal_points(c2x, c2y, cos_t2, sin_t2, width)
    # find cm_left which is the intersection point of a line through
    # c1_left with angle t1 and a line through c2_left with angle
    # t2. Same with cm_right.
cmx_left, cmy_left = get_intersection(c1x_left, c1y_left, cos_t1, sin_t1,
c2x_left, c2y_left, cos_t2, sin_t2)
cmx_right, cmy_right = get_intersection(c1x_right, c1y_right, cos_t1, sin_t1,
c2x_right, c2y_right, cos_t2, sin_t2)
    # the parallel bezier lines are created with control points of
# [c1_left, cm_left, c2_left] and [c1_right, cm_right, c2_right]
path_left = [(c1x_left, c1y_left), (cmx_left, cmy_left), (c2x_left, c2y_left)]
path_right = [(c1x_right, c1y_right), (cmx_right, cmy_right), (c2x_right, c2y_right)]
return path_left, path_right
def make_wedged_bezier2(bezier2, length, shrink_factor=0.5):
"""
    Similar to get_parallels, returns
    control points of two quadratic bezier lines roughly parallel to the given
    one, with a separation controlled by *length*.
"""
xx1, yy1 = bezier2[2]
xx2, yy2 = bezier2[1]
xx3, yy3 = bezier2[0]
cx, cy = xx3, yy3
x0, y0 = xx2, yy2
dist = sqrt((x0-cx)**2 + (y0-cy)**2)
cos_t, sin_t = (x0-cx)/dist, (y0-cy)/dist,
x1, y1, x2, y2 = get_normal_points(cx, cy, cos_t, sin_t, length)
xx12, yy12 = (xx1+xx2)/2., (yy1+yy2)/2.,
xx23, yy23 = (xx2+xx3)/2., (yy2+yy3)/2.,
dist = sqrt((xx12-xx23)**2 + (yy12-yy23)**2)
cos_t, sin_t = (xx12-xx23)/dist, (yy12-yy23)/dist,
xm1, ym1, xm2, ym2 = get_normal_points(xx2, yy2, cos_t, sin_t, length*shrink_factor)
l_plus = [(x1, y1), (xm1, ym1), (xx1, yy1)]
l_minus = [(x2, y2), (xm2, ym2), (xx1, yy1)]
return l_plus, l_minus
def find_control_points(c1x, c1y, mmx, mmy, c2x, c2y):
""" Find control points of the bezier line throught c1, mm, c2. We
simply assume that c1, mm, c2 which have parameteric value 0, 0.5, and 1.
"""
cmx = .5 * (4*mmx - (c1x + c2x))
cmy = .5 * (4*mmy - (c1y + c2y))
return [(c1x, c1y), (cmx, cmy), (c2x, c2y)]
def make_wedged_bezier2(bezier2, width, w1=1., wm=0.5, w2=0.):
"""
    Similar to get_parallels, returns
    control points of two quadratic bezier lines roughly parallel to the given
    one, separated by *width*.
"""
# c1, cm, c2
c1x, c1y = bezier2[0]
cmx, cmy = bezier2[1]
c3x, c3y = bezier2[2]
    # t1 and t2 are the angles between c1 and cm, and between cm and c3.
    # They are also the angles of the tangent lines of the path at c1 and c3
cos_t1, sin_t1 = get_cos_sin(c1x, c1y, cmx, cmy)
cos_t2, sin_t2 = get_cos_sin(cmx, cmy, c3x, c3y)
    # find c1_left, c1_right which are located along the lines
    # through c1 and perpendicular to the tangent lines of the
    # bezier path, at a distance of width. Same thing for c3_left and
    # c3_right with respect to c3.
c1x_left, c1y_left, c1x_right, c1y_right = \
get_normal_points(c1x, c1y, cos_t1, sin_t1, width*w1)
c3x_left, c3y_left, c3x_right, c3y_right = \
get_normal_points(c3x, c3y, cos_t2, sin_t2, width*w2)
# find c12, c23 and c123 which are middle points of c1-cm, cm-c3 and c12-c23
c12x, c12y = (c1x+cmx)*.5, (c1y+cmy)*.5
c23x, c23y = (cmx+c3x)*.5, (cmy+c3y)*.5
c123x, c123y = (c12x+c23x)*.5, (c12y+c23y)*.5
# tangential angle of c123 (angle between c12 and c23)
cos_t123, sin_t123 = get_cos_sin(c12x, c12y, c23x, c23y)
c123x_left, c123y_left, c123x_right, c123y_right = \
get_normal_points(c123x, c123y, cos_t123, sin_t123, width*wm)
path_left = find_control_points(c1x_left, c1y_left,
c123x_left, c123y_left,
c3x_left, c3y_left)
path_right = find_control_points(c1x_right, c1y_right,
c123x_right, c123y_right,
c3x_right, c3y_right)
return path_left, path_right
if 0:
path = Path([(0, 0), (1, 0), (2, 2)],
[Path.MOVETO, Path.CURVE3, Path.CURVE3])
    left, right = split_path_inout(path, inside)
clf()
ax = gca()
| gpl-3.0 |
heprom/pymicro | examples/plotting/contour_pole_figure.py | 1 | 1115 | import os, numpy as np
from pymicro.crystal.texture import PoleFigure
from pymicro.crystal.microstructure import Microstructure, Grain, Orientation
from matplotlib import pyplot as plt
if __name__ == '__main__':
"""
A pole figure plotted using contours.
.. note::
Use this example carefully since this is just using a matplotlib contourf
function, and has not been tested properly.
"""
euler_list = np.genfromtxt('../data/pp100', usecols=(0, 1, 2))
micro = Microstructure(name='test', autodelete=True)
micro.add_grains(euler_list)
pf = PoleFigure(hkl='111', proj='stereo', microstructure=micro)
pf.mksize = 40
fig = plt.figure(1, figsize=(12, 5))
ax1 = fig.add_subplot(121, aspect='equal')
ax2 = fig.add_subplot(122, aspect='equal')
pf.create_pf_contour(ax=ax1, ang_step=20)
pf.plot_pf(ax=ax2)
image_name = os.path.splitext(__file__)[0] + '.png'
    print('writing %s' % image_name)
plt.savefig(image_name, format='png')
from matplotlib import image
image.thumbnail(image_name, 'thumb_' + image_name, 0.2)
del pf, micro | mit |
Eshavish/TwitterAPIwithZip | python_scripts/redfin_images_downloader.py | 1 | 1914 | import pandas as pd
from bs4 import BeautifulSoup
import requests
import os
download_path = "C:\cygwin64\home\eshavish\node_modules\twitter-api-node-express\python_scripts\pic\\ <- Needs double slash at the end."
num_of_img = 1
soup = ""
print "Loading house data..."
house_urls = pd.read_csv("CHANGE TO LOCATION OF ->\\SanDiego_Housing_TextData.csv")
image_fields = house_urls[["KEY", "R3"]]
print "Downloading house images..."
for a in range(0, image_fields.shape[0]):
if not os.path.exists(download_path + image_fields["KEY"][a]):
os.makedirs(download_path + image_fields["KEY"][a])
print "Downloading house ID: ", image_fields["KEY"][a]
for j in range(0, 4):
image_url = "https://ssl.cdn-redfin.com/photo/48/mbpadded/" + repr(image_fields["R3"][a]) + "/genMid."\
+ image_fields["KEY"][a] + "_" + repr(j) + ".jpg"
soup = BeautifulSoup(requests.get(image_url).content, "lxml")
if str(soup.find_all("p")) != "[<p>Fatal Dirpy Error</p>]":
f = open(download_path + image_fields["KEY"][a] + "\\" + repr(num_of_img) + ".jpg", 'wb')
f.write(requests.get(image_url).content)
f.close()
num_of_img += 1
for k in range(1, 50):
image_url = "https://ssl.cdn-redfin.com/photo/48/mbpadded/" + repr(image_fields["R3"][a])\
+ "/genMid." + image_fields["KEY"][a] + "_" + repr(k) + "_" + repr(j) + ".jpg"
soup = BeautifulSoup(requests.get(image_url).content, "lxml")
if str(soup.find_all("p")) != "[<p>Fatal Dirpy Error</p>]":
try:
f = open(download_path + image_fields["KEY"][a] + "\\" + repr(num_of_img) + ".jpg", 'wb')
f.write(requests.get(image_url).content)
f.close()
num_of_img += 1
except: pass
num_of_img = 1 | mit |
ishank08/scikit-learn | doc/conf.py | 22 | 9789 | # -*- coding: utf-8 -*-
#
# scikit-learn documentation build configuration file, created by
# sphinx-quickstart on Fri Jan 8 09:13:42 2010.
#
# This file is execfile()d with the current directory set to its containing
# dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
from __future__ import print_function
import sys
import os
from sklearn.externals.six import u
# If extensions (or modules to document with autodoc) are in another
# directory, add these directories to sys.path here. If the directory
# is relative to the documentation root, use os.path.abspath to make it
# absolute, like shown here.
sys.path.insert(0, os.path.abspath('sphinxext'))
from github_link import make_linkcode_resolve
import sphinx_gallery
# -- General configuration ---------------------------------------------------
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc', 'sphinx.ext.autosummary',
'numpy_ext.numpydoc',
'sphinx.ext.linkcode', 'sphinx.ext.doctest',
'sphinx_gallery.gen_gallery',
'sphinx_issues',
]
# pngmath / imgmath compatibility layer for different sphinx versions
import sphinx
from distutils.version import LooseVersion
if LooseVersion(sphinx.__version__) < LooseVersion('1.4'):
extensions.append('sphinx.ext.pngmath')
else:
extensions.append('sphinx.ext.imgmath')
autodoc_default_flags = ['members', 'inherited-members']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['templates']
# generate autosummary even if no references
autosummary_generate = True
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8'
# Generate the plots for the gallery
plot_gallery = True
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u('scikit-learn')
copyright = u('2007 - 2017, scikit-learn developers (BSD License)')
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
import sklearn
version = sklearn.__version__
# The full version, including alpha/beta/rc tags.
release = sklearn.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of documents that shouldn't be included in the build.
#unused_docs = []
# List of directories, relative to source directory, that shouldn't be
# searched for source files.
exclude_trees = ['_build', 'templates', 'includes']
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = False
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. Major themes that come with
# Sphinx are currently 'default' and 'sphinxdoc'.
html_theme = 'scikit-learn'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
html_theme_options = {'oldversion': False, 'collapsiblesidebar': True,
'google_analytics': True, 'surveybanner': False,
'sprintbanner': True}
# Add any paths that contain custom themes here, relative to this directory.
html_theme_path = ['themes']
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
html_short_title = 'scikit-learn'
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = 'logos/scikit-learn-logo-small.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
html_favicon = 'logos/favicon.ico'
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['images']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
html_domain_indices = False
# If false, no index is generated.
html_use_index = False
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = ''
# Output file base name for HTML help builder.
htmlhelp_basename = 'scikit-learndoc'
# -- Options for LaTeX output ------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass
# [howto/manual]).
latex_documents = [('index', 'user_guide.tex', u('scikit-learn user guide'),
u('scikit-learn developers'), 'manual'), ]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
latex_logo = "logos/scikit-learn-logo.png"
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# Additional stuff for the LaTeX preamble.
latex_preamble = r"""
\usepackage{amsmath}\usepackage{amsfonts}\usepackage{bm}\usepackage{morefloats}
\usepackage{enumitem} \setlistdepth{10}
"""
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
latex_domain_indices = False
trim_doctests_flags = True
sphinx_gallery_conf = {
'doc_module': 'sklearn',
'reference_url': {
'sklearn': None,
'matplotlib': 'http://matplotlib.org',
'numpy': 'http://docs.scipy.org/doc/numpy-1.6.0',
'scipy': 'http://docs.scipy.org/doc/scipy-0.11.0/reference',
'nibabel': 'http://nipy.org/nibabel'}
}
# The following dictionary contains the information used to create the
# thumbnails for the front page of the scikit-learn home page.
# key: first image in set
# values: maximum width (in pixels) used when scaling the carousel thumbnail
carousel_thumbs = {'sphx_glr_plot_classifier_comparison_001.png': 600,
'sphx_glr_plot_outlier_detection_003.png': 372,
'sphx_glr_plot_gpr_co2_001.png': 350,
'sphx_glr_plot_adaboost_twoclass_001.png': 372,
'sphx_glr_plot_compare_methods_001.png': 349}
def make_carousel_thumbs(app, exception):
"""produces the final resized carousel images"""
if exception is not None:
return
print('Preparing carousel images')
image_dir = os.path.join(app.builder.outdir, '_images')
for glr_plot, max_width in carousel_thumbs.items():
image = os.path.join(image_dir, glr_plot)
if os.path.exists(image):
c_thumb = os.path.join(image_dir, glr_plot[:-4] + '_carousel.png')
sphinx_gallery.gen_rst.scale_image(image, c_thumb, max_width, 190)
# Config for sphinx_issues
issues_uri = 'https://github.com/scikit-learn/scikit-learn/issues/{issue}'
issues_github_path = 'scikit-learn/scikit-learn'
issues_user_uri = 'https://github.com/{user}'
def setup(app):
# to hide/show the prompt in code examples:
app.add_javascript('js/copybutton.js')
app.connect('build-finished', make_carousel_thumbs)
# The following is used by sphinx.ext.linkcode to provide links to github
linkcode_resolve = make_linkcode_resolve('sklearn',
u'https://github.com/scikit-learn/'
'scikit-learn/blob/{revision}/'
'{package}/{path}#L{lineno}')
| bsd-3-clause |
jbedorf/tensorflow | tensorflow/contrib/timeseries/examples/predict_test.py | 12 | 2186 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests that the TensorFlow parts of the prediction example run."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from os import path
from tensorflow.contrib.timeseries.examples import predict
from tensorflow.python.platform import test
_MODULE_PATH = path.dirname(__file__)
_DATA_FILE = path.join(_MODULE_PATH, "data/period_trend.csv")
class PeriodTrendExampleTest(test.TestCase):
def test_shapes_and_variance_structural(self):
(times, observed, all_times, mean, upper_limit, lower_limit
) = predict.structural_ensemble_train_and_predict(_DATA_FILE)
# Just check that plotting will probably be OK. We can't actually run the
# plotting code since we don't want to pull in matplotlib as a dependency
# for this test.
self.assertAllEqual([500], times.shape)
self.assertAllEqual([500], observed.shape)
self.assertAllEqual([700], all_times.shape)
self.assertAllEqual([700], mean.shape)
self.assertAllEqual([700], upper_limit.shape)
self.assertAllEqual([700], lower_limit.shape)
def test_ar(self):
(times, observed, all_times, mean,
upper_limit, lower_limit) = predict.ar_train_and_predict(_DATA_FILE)
self.assertAllEqual(times.shape, observed.shape)
self.assertAllEqual(all_times.shape, mean.shape)
self.assertAllEqual(all_times.shape, upper_limit.shape)
self.assertAllEqual(all_times.shape, lower_limit.shape)
if __name__ == "__main__":
test.main()
| apache-2.0 |
temmeand/scikit-rf | skrf/networkSet.py | 1 | 27677 |
'''
.. module:: skrf.networkSet
========================================
networkSet (:mod:`skrf.networkSet`)
========================================
Provides a class representing an un-ordered set of n-port microwave networks.
Frequently one needs to make calculations, such as mean or standard
deviation, on an entire set of n-port networks. To facilitate these
calculations the :class:`NetworkSet` class provides convenient
ways to make such calculations.
Another usage is to interpolate a set of Networks which depend on
a parameter (like a knob, or a geometrical parameter).
The results are returned in :class:`~skrf.network.Network` objects, so they can be plotted and saved in the same way one would do with a
:class:`~skrf.network.Network`.
The functionality in this module is provided as methods and
properties of the :class:`NetworkSet` Class.
NetworkSet Class
================
.. autosummary::
:toctree: generated/
NetworkSet
NetworkSet Utilities
====================
.. autosummary::
:toctree: generated/
func_on_networks
getset
'''
import os
from . network import average as network_average
from . network import Network, PRIMARY_PROPERTIES, COMPONENT_FUNC_DICT, Y_LABEL_DICT
from . import mathFunctions as mf
import zipfile
from copy import deepcopy
import warnings
import numpy as npy
from scipy.interpolate import interp1d
# import matplotlib.pyplot as plb
from . util import now_string_2_dt
# delayed imports due to circular dependencies
# NetworkSet.from_dir : from io.general import read_all_networks
class NetworkSet(object):
'''
A set of Networks.
This class allows functions on sets of Networks, such as mean or
standard deviation, to be calculated conveniently. The results are
returned in :class:`~skrf.network.Network` objects, so that they may be
    plotted and saved like :class:`~skrf.network.Network` objects.
This class also provides methods which can be used to plot uncertainty
bounds for a set of :class:`~skrf.network.Network`.
The names of the :class:`NetworkSet` properties are generated
dynamically upon initialization, and thus documentation for
individual properties and methods is not available. However, the
properties do follow the convention::
>>> my_network_set.function_name_network_property_name
For example, the complex average (mean)
:class:`~skrf.network.Network` for a
:class:`NetworkSet` is::
>>> my_network_set.mean_s
This accesses the property 's', for each element in the
set, and **then** calculates the 'mean' of the resultant set. The
order of operations is important.
Results are returned as :class:`~skrf.network.Network` objects,
so they may be plotted or saved in the same way as for
:class:`~skrf.network.Network` objects::
>>> my_network_set.mean_s.plot_s_mag()
>>> my_network_set.mean_s.write_touchstone('mean_response')
If you are calculating functions that return scalar variables, then
the result is accessible through the Network property .s_re. For
example::
>>> std_s_deg = my_network_set.std_s_deg
This result would be plotted by::
>>> std_s_deg.plot_s_re()
The operators, properties, and methods of NetworkSet object are
dynamically generated by private methods
* :func:`~NetworkSet.__add_a_operator`
* :func:`~NetworkSet.__add_a_func_on_property`
* :func:`~NetworkSet.__add_a_element_wise_method`
* :func:`~NetworkSet.__add_a_plot_uncertainty`
thus, documentation on the individual methods and properties are
not available.
'''
def __init__(self, ntwk_set, name = None):
'''
Initializer for NetworkSet
Parameters
-----------
ntwk_set : list of :class:`~skrf.network.Network` objects
the set of :class:`~skrf.network.Network` objects
name : string
the name of the NetworkSet, given to the Networks returned
from properties of this class.
'''
## type checking
if hasattr(ntwk_set, 'values'):
ntwk_set = list(ntwk_set.values())
# did they pass a list of Networks?
if not isinstance(ntwk_set[0], Network):
raise(TypeError('input must be list of Network types'))
# do all Networks have the same # ports?
if len (set([ntwk.number_of_ports for ntwk in ntwk_set])) >1:
raise(ValueError('All elements in list of Networks must have same number of ports'))
# is all frequency information the same?
if npy.all([(ntwk_set[0].frequency == ntwk.frequency) \
for ntwk in ntwk_set]) == False:
raise(ValueError('All elements in list of Networks must have same frequency information'))
## initialization
# we are good to go
self.ntwk_set = ntwk_set
self.name = name
# create list of network properties, which we use to dynamically
# create a statistical properties of this set
network_property_list = [k+'_'+l \
for k in PRIMARY_PROPERTIES \
for l in COMPONENT_FUNC_DICT.keys()] + \
['passivity','s']
# dynamically generate properties. this is slick.
max, min = npy.max, npy.min
max.__name__ = 'max'
min.__name__ = 'min'
for network_property_name in network_property_list:
for func in [npy.mean, npy.std, max, min]:
self.__add_a_func_on_property(func, network_property_name)
if 'db' not in network_property_name:# != 's_db' and network_property_name != 's':
# db uncertainty requires a special function call see
# plot_uncertainty_bounds_s_db
self.__add_a_plot_uncertainty(network_property_name)
self.__add_a_plot_minmax(network_property_name)
self.__add_a_element_wise_method('plot_'+network_property_name)
self.__add_a_element_wise_method('plot_s_db')
self.__add_a_element_wise_method('plot_s_db_time')
for network_method_name in \
['write_touchstone','interpolate','plot_s_smith']:
self.__add_a_element_wise_method(network_method_name)
for operator_name in \
['__pow__','__floordiv__','__mul__','__div__','__add__','__sub__']:
self.__add_a_operator(operator_name)
@classmethod
def from_zip(cls, zip_file_name, sort_filenames=True, *args, **kwargs):
'''
creates a NetworkSet from a zipfile of touchstones.
Parameters
-----------
zip_file_name : string
name of zipfile
sort_filenames: Boolean
sort the filenames in the zip file before constructing the
NetworkSet
\\*args,\\*\\*kwargs : arguments
passed to NetworkSet constructor
Examples
----------
>>> import skrf as rf
>>> my_set = rf.NetworkSet.from_zip('myzip.zip')
'''
z = zipfile.ZipFile(zip_file_name)
filename_list = z.namelist()
ntwk_list = []
if sort_filenames:
filename_list.sort()
for filename in filename_list:
# try/except block in case not all files are touchstones
n= Network()
try:
n.read_touchstone(z.open(filename))
ntwk_list.append(n)
continue
except:
pass
try:
n.read(z.open(filename))
ntwk_list.append(n)
continue
except:
pass
return cls(ntwk_list)
@classmethod
def from_dir(cls, dir='.',*args, **kwargs):
'''
Create a NetworkSet from a directory containing Networks
This just calls ::
rf.NetworkSet(rf.read_all_networks(dir), *args, **kwargs)
Parameters
---------------
dir : str
directory containing Network files.
\*args, \*\*kwargs :
passed to NetworkSet constructor
Examples
----------
>>> my_set = rf.NetworkSet.from_dir('./data/')
'''
from . io.general import read_all_networks
return cls(read_all_networks(dir), *args, **kwargs)
@classmethod
def from_s_dict(cls,d, frequency, *args, **kwargs):
'''
Create a NetworkSet from a dictionary of s-parameters
The resultant elements of the NetworkSet are named by the keys of
the dictionary.
Parameters
-------------
d : dict
dictionary of s-parameters data. values of this should be
:class:`numpy.ndarray` assignable to :attr:`skrf.network.Network.s`
frequency: :class:`~skrf.frequency.Frequency` object
frequency assigned to each network
\*args, \*\*kwargs :
passed to Network.__init__ for each key/value pair of d
Returns
----------
ns : NetworkSet
See Also
----------
NetworkSet.to_s_dict
'''
return cls([Network(s=d[k], frequency=frequency, name=k,
*args, **kwargs) for k in d])
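    # Editor's illustration of from_s_dict (hypothetical values, not from the
    # skrf documentation; assumes `import skrf as rf` and numpy as npy): build a
    # one-port set from raw s-parameter arrays sharing one Frequency object.
    #
    #   freq = rf.Frequency(1, 10, 101, 'ghz')
    #   s = {'a': npy.zeros((101, 1, 1)), 'b': 0.5 * npy.ones((101, 1, 1))}
    #   ns = NetworkSet.from_s_dict(s, frequency=freq)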
def __add_a_operator(self,operator_name):
'''
        Adds an operator method to the NetworkSet.
        It is made to take either a Network or a NetworkSet. If a Network
        is passed to the operator, each element of the set will operate on
        the Network. If a NetworkSet of the same length as self is passed,
        it will operate element-to-element, like a dot-product.
'''
def operator_func(self, other):
if isinstance(other, NetworkSet):
if len(other) != len(self):
raise(ValueError('Network sets must be of same length to be cascaded'))
return NetworkSet([self.ntwk_set[k].__getattribute__(operator_name)(other.ntwk_set[k]) for k in range(len(self))])
elif isinstance(other, Network):
return NetworkSet([ntwk.__getattribute__(operator_name)(other) for ntwk in self.ntwk_set])
else:
raise(TypeError('NetworkSet operators operate on either Network, or NetworkSet types'))
setattr(self.__class__,operator_name,operator_func)
def __str__(self):
'''
'''
return self.ntwk_set.__str__()
def __repr__(self):
return self.__str__()
def __getitem__(self,key):
'''
returns an element of the network set
'''
if isinstance(key, str):
# if they pass a string then slice each network in this set
return NetworkSet([k[key] for k in self.ntwk_set],
name = self.name)
else:
return self.ntwk_set[key]
def __len__(self):
'''
returns an element of the network set
'''
return len(self.ntwk_set)
def __add_a_element_wise_method(self,network_method_name):
def func(self, *args, **kwargs):
return self.element_wise_method(network_method_name, *args, **kwargs)
setattr(self.__class__,network_method_name,func)
def __add_a_func_on_property(self,func,network_property_name):
'''
dynamically adds a property to this class (NetworkSet).
        this is mostly used internally to generate all of the class's
        properties.
takes:
network_property_name: a property of the Network class,
a string. this must have a matrix output of shape fxnxn
func: a function to be applied to the network_property
accross the first axis of the property's output
example:
my_ntwk_set.add_a_func_on_property('s',mean)
'''
fget = lambda self: fon(self.ntwk_set,func,network_property_name,\
name = self.name)
setattr(self.__class__,func.__name__+'_'+network_property_name,\
property(fget))
def __add_a_plot_uncertainty(self,network_property_name):
'''
takes:
network_property_name: a property of the Network class,
a string. this must have a matrix output of shape fxnxn
example:
my_ntwk_set.add_a_func_on_property('s',mean)
'''
def plot_func(self,*args, **kwargs):
kwargs.update({'attribute':network_property_name})
self.plot_uncertainty_bounds_component(*args,**kwargs)
setattr(self.__class__,'plot_uncertainty_bounds_'+\
network_property_name,plot_func)
setattr(self.__class__,'plot_ub_'+\
network_property_name,plot_func)
def __add_a_plot_minmax(self,network_property_name):
'''
takes:
network_property_name: a property of the Network class,
a string. this must have a matrix output of shape fxnxn
example:
my_ntwk_set.add_a_func_on_property('s',mean)
'''
def plot_func(self,*args, **kwargs):
kwargs.update({'attribute':network_property_name})
self.plot_minmax_bounds_component(*args,**kwargs)
setattr(self.__class__,'plot_minmax_bounds_'+\
network_property_name,plot_func)
setattr(self.__class__,'plot_mm_'+\
network_property_name,plot_func)
def to_dict(self):
"""
Returns a dictionary representation of the NetworkSet
The returned dictionary has the Network names for keys, and the
Networks as values.
"""
return dict([(k.name, k) for k in self.ntwk_set])
def to_s_dict(ns, *args, **kwargs):
"""
Converts a NetworkSet to a dictionary of s-parameters
The resultant keys of the dictionary are the names of the Networks
in NetworkSet
Parameters
-------------
ns : NetworkSet
dictionary of s-parameters data. values of this should be
:class:`numpy.ndarray` assignable to :attr:`skrf.network.Network.s`
frequency: :class:`~skrf.frequency.Frequency` object
frequency assigned to each network
\*args, \*\*kwargs :
passed to Network.__init__ for each key/value pair of d
Returns
----------
s_dict : dictionary
contains s-parameters in the form of complex numpy arrays
See Also
--------
NetworkSet.from_s_dict
"""
d = ns.to_dict()
for k in d:
d[k] = d[k].s
return d
def element_wise_method(self,network_method_name, *args, **kwargs):
'''
calls a given method of each element and returns the result as
a new NetworkSet if the output is a Network.
'''
output = [ntwk.__getattribute__(network_method_name)(*args, **kwargs) for ntwk in self.ntwk_set]
if isinstance(output[0],Network):
return NetworkSet(output)
else:
return output
def copy(self):
'''
copies each network of the network set.
'''
return NetworkSet([k.copy() for k in self.ntwk_set])
def sort(self, key=lambda x: x.name, **kwargs):
'''
sort this network set.
Parameters
-------------
**kwargs : dict
keyword args passed to builtin sorted acting on self.ntwk_set
Examples
-----------
>>> ns = rf.NetworkSet.from_dir('mydir')
>>> ns.sort()
Sort by other property
>>> ns.sort(key= lambda x: x.voltage)
'''
self.ntwk_set = sorted(self.ntwk_set, key = key, **kwargs)
def rand(self,n=1):
'''
return `n` random samples from this NetworkSet
Parameters
----------
n : int
number of samples to return
'''
idx = npy.random.randint(0,len(self), n)
out = [self.ntwk_set[k] for k in idx]
if n ==1:
return out[0]
else:
return out
def filter(self,s):
'''
filter networkset based on a string in Network.name
Notes
-----
This is just
`NetworkSet([k for k in self if s in k.name])`
Parameters
-------------
s: str
string contained in network elements to be filtered
Returns
--------
ns : NetworkSet
Examples
-----------
>>> ns.filter('monday')
'''
return NetworkSet([k for k in self if s in k.name])
def scalar_mat(self, param='s',order='F'):
'''
scalar ndarray representing `param` data vs freq and element idx
output is a 3d array with axes (freq, ns_index, port/ri)
freq is frequency
ns_index is index of this networkset
        ports is the flattened re/im components of the port indices (len = 2*nports**2)
'''
ntwk=self[0]
nfreq = len(ntwk)
# x will have the axes ( frequency,observations, ports)
x = npy.array([[mf.flatten_c_mat(k.__getattribute__(param)[f]) \
for k in self] for f in range(nfreq)])
return x
def cov(self, **kw):
'''
covariance matrix
shape of output will be (nfreq, 2*nports**2, 2*nports**2)
'''
smat=self.scalar_mat(**kw)
return npy.array([npy.cov(k.T) for k in smat])
@property
def mean_s_db(self):
'''
the mean magnitude in dB.
note:
the mean is taken on the magnitude before converted to db, so
magnitude_2_db( mean(s_mag))
which is NOT the same as
mean(s_db)
'''
ntwk= self.mean_s_mag
ntwk.s = ntwk.s_db
return ntwk
@property
def std_s_db(self):
'''
the standard deviation magnitude in dB.
note:
                the standard deviation is taken on the magnitude before being converted to dB, so
magnitude_2_db( std(s_mag))
which is NOT the same as
std(s_db)
'''
ntwk= self.std_s_mag
ntwk.s = ntwk.s_db
return ntwk
@property
def inv(self):
return NetworkSet( [ntwk.inv for ntwk in self.ntwk_set])
def add_polar_noise(self, ntwk):
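# Perturb ntwk's s-parameters with zero-mean Gaussian noise, using this
# set's std_s_mag / std_s_deg as the per-point standard deviations of the
# magnitude and phase, and return the perturbed network.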
from scipy import stats
from numpy import frompyfunc
gimme_norm = lambda x: stats.norm(loc=0,scale=x).rvs(1)[0]
ugimme_norm = frompyfunc(gimme_norm,1,1)
s_deg_rv = npy.array(map(ugimme_norm, self.std_s_deg.s_re), dtype=float)
s_mag_rv = npy.array(map(ugimme_norm, self.std_s_mag.s_re), dtype=float)
mag = ntwk.s_mag+s_mag_rv
deg = ntwk.s_deg+s_deg_rv
ntwk.s = mag* npy.exp(1j*npy.pi/180.*deg)
return ntwk
def set_wise_function(self, func, a_property, *args, **kwargs):
'''
calls a function on a specific property of the networks in
this NetworkSet.
example:
my_ntwk_set.set_wise_func(mean,'s')
'''
return fon(self.ntwk_set, func, a_property, *args, **kwargs)
def uncertainty_ntwk_triplet(self, attribute,n_deviations=3):
'''
returns a 3-tuple of Network objects containing the
mean, lower bound, and upper bound for the given Network
attribute.
Used to save and plot uncertainty information.
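Examples
--------
A hedged sketch (assuming a populated NetworkSet ``ns``):
>>> mean_s, lower, upper = ns.uncertainty_ntwk_triplet('s_mag', n_deviations=3)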
'''
ntwk_mean = self.__getattribute__('mean_'+attribute)
ntwk_std = self.__getattribute__('std_'+attribute)
ntwk_std.s = n_deviations * ntwk_std.s
upper_bound = (ntwk_mean +ntwk_std)
lower_bound = (ntwk_mean -ntwk_std)
return (ntwk_mean, lower_bound, upper_bound)
def datetime_index(self):
'''
Create a datetime index from the networks' names
this is just:
[rf.now_string_2_dt(k.name ) for k in self]
'''
return [now_string_2_dt(k.name ) for k in self]
# io
def write(self, file=None, *args, **kwargs):
'''
Write the NetworkSet to disk using :func:`~skrf.io.general.write`
Parameters
-----------
file : str or file-object
filename or a file-object. If left as None then the
filename will be set to self.name, if it's not None.
If both are None, a ValueError is raised.
\*args, \*\*kwargs : arguments and keyword arguments
passed through to :func:`~skrf.io.general.write`
Notes
------
If self.name is not None, file can be left as None;
the resultant file will have the `.ns` extension appended
to the filename.
Examples
---------
>>> ns.name = 'my_ns'
>>> ns.write()
See Also
---------
skrf.io.general.write
skrf.io.general.read
'''
# this import is delayed until here because of a circular dependency
from . io.general import write
if file is None:
if self.name is None:
raise (ValueError('No filename given. You must provide a filename, or set the name attribute'))
file = self.name
write(file,self, *args, **kwargs)
def write_spreadsheet(self, *args, **kwargs):
'''
Write the contents of the NetworkSet to a spreadsheet, for your boss to use.
See Also
---------
skrf.io.general.network_2_spreadsheet
'''
from . io.general import networkset_2_spreadsheet
networkset_2_spreadsheet(self, *args, **kwargs)
def ntwk_attr_2_df(self, attr='s_db',m=0, n=0, *args, **kwargs):
'''
Converts an attribute of the Networks within a NetworkSet to a
Pandas DataFrame
Examples
---------
df = ns.ntwk_attr_2_df('s_db',m=1,n=0)
df.to_excel('output.xls') # see Pandas docs for more info
'''
from pandas import DataFrame, Series, Index
index = Index(
self[0].frequency.f_scaled,
name='Freq(%s)'%self[0].frequency.unit
)
df = DataFrame(
dict([('%s'%(k.name),
Series(k.__getattribute__(attr)[:,m,n],index=index))
for k in self]),
index = index,
)
return df
def interpolate_from_network(self, ntw_param, x, interp_kind='linear'):
'''
Interpolate a Network from a NetworkSet, as a multi-file N-port network.
Assumes that the NetworkSet contains N-port networks
with same number of ports N and same number of frequency points.
These networks differ by a given array parameter `ntw_param`,
which is used to interpolate the returned Network. The length of `ntw_param`
should be equal to the length of the NetworkSet.
Parameters
----------
ntw_param : (N,) array_like
A 1-D array of real values. The length of ntw_param must be equal
to the length of the NetworkSet
x : real
Point to evaluate the interpolated network at
interp_kind: str
Specifies the kind of interpolation as a string: 'linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic'. See :class:`scipy.interpolate.interp1d` for a detailed description.
Default is 'linear'.
Returns
-------
ntw : :class:`~skrf.network.Network`
Network interpolated at x
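Examples
--------
A hedged sketch: if the set holds three networks measured at
parameter values [10, 20, 30], an estimate at 25 is
>>> ntw_25 = ns.interpolate_from_network([10, 20, 30], 25)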
'''
ntw = self[0].copy()
# Interpolating the scattering parameters
s = npy.array([self[idx].s for idx in range(len(self))])
f = interp1d(ntw_param, s, axis=0, kind=interp_kind)
ntw.s = f(x)
return ntw
def func_on_networks(ntwk_list, func, attribute='s',name=None, *args,\
**kwargs):
'''
Applies a function to some attribute of a list of networks.
Returns the result in the form of a Network. This means information
that may not be s-parameters is stored in the s-matrix of the
returned Network.
Parameters
-------------
ntwk_list : list of :class:`~skrf.network.Network` objects
list of Networks on which to apply `func`
func : function
function to operate on `ntwk_list` s-matrices
attribute : string
attribute of the Networks in ntwk_list for func to act on
name : string, optional
name assigned to the returned Network
\*args,\*\*kwargs : arguments and keyword arguments
passed to func
Returns
---------
ntwk : :class:`~skrf.network.Network`
Network with s-matrix the result of func, operating on
ntwk_list's s-matrices
Examples
----------
averaging can be implemented with func_on_networks by
>>> func_on_networks(ntwk_list,mean)
'''
data_matrix = \
npy.array([ntwk.__getattribute__(attribute) for ntwk in ntwk_list])
new_ntwk = ntwk_list[0].copy()
new_ntwk.s = func(data_matrix,axis=0,*args,**kwargs)
if name is not None:
new_ntwk.name = name
return new_ntwk
# short hand name for convenience
fon = func_on_networks
def getset(ntwk_dict, s, *args, **kwargs):
'''
Creates a :class:`NetworkSet` of all :class:`~skrf.network.Network`
objects in a dictionary whose keys contain `s`. This is useful
for dealing with the output of
:func:`~skrf.io.general.load_all_touchstones`, which contains
Networks grouped by some kind of naming convention.
Parameters
------------
ntwk_dict : dictionary of Network objects
dictionary of Networks, some of whose keys contain `s`
s : string
string contained in the keys of ntwk_dict that are to be in the
NetworkSet that is returned
\*args,\*\*kwargs : passed to NetworkSet()
Returns
--------
ntwk_set : NetworkSet object
A NetworkSet that made from values of ntwk_dict with `s` in
their key
Examples
---------
>>> ntwk_dict = rf.load_all_touchstones('my_dir')
>>> set5v = getset(ntwk_dict, '5v')
>>> set10v = getset(ntwk_dict, '10v')
'''
ntwk_list = [ntwk_dict[k] for k in ntwk_dict if s in k]
if len(ntwk_list) > 0:
return NetworkSet( ntwk_list,*args, **kwargs)
else:
print('Warning: No keys in ntwk_dict contain \'%s\''%s)
return None
def tuner_constellation(name='tuner', singlefreq=76, Z0=50, r_lin = 9, phi_lin=21, TNWformat=True):
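# Build a polar grid of complex reflection coefficients (r_lin radii x
# phi_lin angles). If TNWformat, wrap each point in a single-frequency
# 1-port Network and return them collected in a NetworkSet, along with the
# real/imaginary coordinates; otherwise return only the coordinates.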
r = npy.linspace(0.1,0.9,r_lin)
a = npy.linspace(0,2*npy.pi,phi_lin)
r_, a_ = npy.meshgrid(r,a)
c_ = r_ *npy.exp(1j * a_)
g= c_.flatten()
x = npy.real(g)
y = npy.imag(g)
if TNWformat :
TNL = dict()
# for ii, gi in enumerate(g) :
for ii, gi in enumerate(g) :
TNL['pos'+str(ii)] = Network(f = [singlefreq ], s=[[[gi]]], z0=[[Z0]], name=name +'_' + str(ii))
TNW = NetworkSet(TNL, name=name)
return TNW, x,y,g
else :
return x,y,g
| bsd-3-clause |
bikash/h2o-dev | h2o-py/tests/testdir_algos/gbm/pyunit_NOPASS_bernoulli_synthetic_data_mediumGBM.py | 1 | 2249 | import sys
sys.path.insert(1, "../../../")
import h2o
from h2o import H2OFrame
import numpy as np
import numpy.random
import scipy.stats
from sklearn import ensemble
from sklearn.metrics import roc_auc_score
def bernoulli_synthetic_data_mediumGBM(ip,port):
# Connect to h2o
h2o.init(ip,port)
# Generate training dataset (adaptation of http://www.stat.missouri.edu/~speckman/stat461/boost.R)
train_rows = 10000
train_cols = 10
# Generate variables V1, ... V10
X_train = np.random.randn(train_rows, train_cols)
# y = +1 if sum_i x_{ij}^2 > chisq median on 10 df
y_train = np.asarray([1 if rs > scipy.stats.chi2.ppf(0.5, 10) else -1 for rs in [sum(r) for r in np.multiply(X_train,X_train).tolist()]])
# Train scikit gbm
# TODO: grid-search
loss = "bernoulli"
ntrees = 150
min_rows = 1
max_depth = 2
learn_rate = .01
nbins = 20
gbm_sci = ensemble.GradientBoostingClassifier(learning_rate=learn_rate, n_estimators=ntrees, max_depth=max_depth, min_samples_leaf=min_rows, max_features=None)
gbm_sci.fit(X_train,y_train)
# Generate testing dataset
test_rows = 2000
test_cols = 10
# Generate variables V1, ... V10
X_test = np.random.randn(test_rows, test_cols)
# y = +1 if sum_i x_{ij}^2 > chisq median on 10 df
y_test = np.asarray([1 if rs > scipy.stats.chi2.ppf(0.5, 10) else -1 for rs in [sum(r) for r in np.multiply(X_test,X_test).tolist()]])
# Score (AUC) the scikit gbm model on the test data
auc_sci = roc_auc_score(y_test, gbm_sci.predict_proba(X_test)[:,1])
# Compare this result to H2O
train_h2o = H2OFrame(np.column_stack((y_train, X_train)).tolist())
test_h2o = H2OFrame(np.column_stack((y_test, X_test)).tolist())
gbm_h2o = h2o.gbm(x=train_h2o[1:], y=train_h2o["C1"], loss=loss, ntrees=ntrees, min_rows=min_rows, max_depth=max_depth, learn_rate=learn_rate, nbins=nbins)
gbm_perf = gbm_h2o.model_performance(test_h2o)
auc_h2o = gbm_perf.auc()
#Log.info(paste("scikit AUC:", auc_sci, "\tH2O AUC:", auc_h2o))
assert auc_h2o >= auc_sci, "h2o (auc) performance degradation, with respect to scikit"
if __name__ == "__main__":
h2o.run_test(sys.argv, bernoulli_synthetic_data_mediumGBM) | apache-2.0 |
LiaoPan/blaze | blaze/cached.py | 15 | 3128 | from __future__ import print_function, division, absolute_import
import numpy as np
import pandas as pd
from datashape import dshape, discover, integral, floating, boolean, complexes
from datashape.predicates import isscalar, isrecord, iscollection
from .dispatch import dispatch
from .expr import Expr, Field, ndim
from .compute import compute
from .compatibility import unicode
from odo import odo
class CachedDataset(object):
__slots__ = 'data', 'cache'
def __init__(self, data, cache=None):
self.data = data
self.cache = cache if cache is not None else {}
@dispatch(CachedDataset)
def discover(d, **kwargs):
return discover(d.data, **kwargs)
@dispatch(Field, CachedDataset)
def compute_up(expr, data, **kwargs):
return data.data[expr._name]
@dispatch(Expr, CachedDataset)
def compute_down(expr, data, **kwargs):
try:
return data.cache[expr]
except KeyError:
pass
leaf, = expr._leaves()
# Do work
result = compute(expr, {leaf: data.data}, **kwargs)
ds = expr.dshape
new_type = concrete_type(ds)
# If the result is not the concrete data type for the datashape then make
# it concrete
if not isinstance(result, new_type):
result = odo(result, new_type, dshape=ds)
# Cache result
data.cache[expr] = result
return result
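# Hedged usage sketch (the names below are illustrative, not part of this module):
#
# >>> from blaze import symbol, compute, discover
# >>> cached = CachedDataset(df) # wrap any blaze-compatible data
# >>> t = symbol('t', discover(cached))
# >>> compute(t.amount.sum(), cached) # first call computes and stores in cached.cache
# >>> compute(t.amount.sum(), cached) # the identical expression is served from the cache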
def concrete_type(ds):
"""A type into which we can safely deposit streaming data.
Parameters
----------
ds : DataShape
Returns
-------
type : type
The concrete type corresponding to the DataShape `ds`
Notes
-----
* This will return a Python type if possible
* Option types are not handled specially. The base type of the option type
is returned.
Examples
--------
>>> concrete_type('5 * int')
<class 'pandas.core.series.Series'>
>>> concrete_type('var * {name: string, amount: int}')
<class 'pandas.core.frame.DataFrame'>
>>> concrete_type('float64')
<... 'float'>
>>> concrete_type('float32')
<... 'float'>
>>> concrete_type('int64')
<... 'int'>
>>> concrete_type('int32')
<... 'int'>
>>> concrete_type('uint8')
<... 'int'>
>>> concrete_type('bool')
<... 'bool'>
>>> concrete_type('complex[float64]')
<... 'complex'>
>>> concrete_type('complex[float32]')
<... 'complex'>
>>> concrete_type('?int64')
<... 'int'>
"""
if isinstance(ds, (str, unicode)):
ds = dshape(ds)
if not iscollection(ds) and isscalar(ds.measure):
measure = getattr(ds.measure, 'ty', ds.measure)
if measure in integral.types:
return int
elif measure in floating.types:
return float
elif measure in boolean.types:
return bool
elif measure in complexes.types:
return complex
else:
return ds.measure.to_numpy_dtype().type
if not iscollection(ds):
return type(ds)
if ndim(ds) == 1:
return pd.DataFrame if isrecord(ds.measure) else pd.Series
if ndim(ds) > 1:
return np.ndarray
return list
| bsd-3-clause |
aggle/ccd-exposure-time-calculator | ccdnoise.py | 1 | 16208 | #!/usr/bin/env python
"""
Observational SNR and exposure time calculator
Jonathan Aguilar
Nov. 15, 2013
license
----------
Copyright 2014 Jonathan A. Aguilar (JHU)
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
usage
----------
- Run from ipython with %run ccdnoise.py or from command line with ./ccdnoise.py
- Click on the sliders to set the parameter values.
to-do
----------
- Convert to a proper GUI backend to improve performance
"""
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import matplotlib.gridspec as gridspec
from matplotlib.widgets import Slider#, Button, RadioButtons
import warnings
# some text formatting
mpl.rcParams["font.size"]=16
mpl.rcParams["text.color"]='white'
mpl.rcParams["axes.labelcolor"]='white'
mpl.rcParams["xtick.color"]='white'
mpl.rcParams["ytick.color"]='white'
def mag2flux(mag):
"""
For quickly converting visible-band magnitudes into photons/(um m^2 sec)
"""
return 9.6e10*(10**(-mag/2.5))
def flux2mag(flux):
"""
For quickly converting photons/(um m^2 sec) into V-band magnitudes
"""
return -2.5*np.log10(flux/9.6e10)
def annotateAxis(xvals, yvals, labels, ax):
"""
Semi-intelligently add labels to the plots without too much overcrowding
"""
for label, x, y, i in zip( labels, xvals, yvals, range(targets.size) ):
# check if label index is even or odd
# offset the label above (even index) or below (odd index) the point
aboveBelow = -1 if i%2 else 1
ax.annotate(
label,
color="black",
xy=(x, y), xytext=(40, aboveBelow*10),
textcoords='offset points', ha='right', va='bottom',
#bbox=dict(boxstyle = 'round,pad=0.5', fc = 'yellow', fill=False, alpha = 0.5),
arrowprops=dict(arrowstyle='->', connectionstyle='arc3,rad=0'))
#plt.ion()
# PLOTTING
gs = gridspec.GridSpec(21,16,left=0.07,right=0.97)
fig = plt.figure()
fig.suptitle("SNR Calculator for CCDs")
fig.set_facecolor("black")
"""
# background image - disabled for now; was a cool image of a galaxy
bgndimg = mpl.image.imread("NGC1097HaLRGBpugh.jpg")
fig.figimage(bgndimg)
fullfigax= fig.add_axes([0,0,1,1])
fullfigax.imshow(bgndimg,)
"""
col1header = fig.text(0.2,0.92,r"Sources and noise",size="large")
col2header = fig.text(0.65,0.92,r"Telescope parameters",size="large")
# flux column width
col1width=7
col2width=4
# common x-axis
#xlims = (np.log10(1e-4), np.log10(1e15))
xlims = (np.log10(1e-2), np.log10(1e10))
middle = np.mean(xlims)
dt = np.dtype([("flux",'f8'),("label","S20")])
# Astronomical targets [Photons/(um m^2 sec)]
targets = np.array([
(4.0e12, "Jupiter"),
(0.1, "HST deep field"),
(7.5, "Fomalhaut b"),
#(4.0e9, "Andromeda"),
(6.6e5, "QSO 3C 273"),
(mag2flux(5.8),"GRB 080319B"),
(mag2flux(24), "SN 1997ck"),
(mag2flux(0), "Vega")
], dtype=dt)
targets.sort(order="flux")
targetsax = plt.subplot(gs[2:4,:col1width])#plt.sca(axes[numaxis_targets])
plt.xlim(xlims[0],xlims[1])
plt.ylabel("Targets",rotation=45)
plt.xlabel(r"$log_{10}[N_{\gamma}/ (m^2 \mu m\ sec)]$", labelpad=-1)
plt.scatter(np.log10(targets["flux"]),
0.5*np.ones(targets["flux"].size))
annotateAxis(np.log10(targets["flux"]),
0.5*np.ones(targets["flux"].size),
targets["label"],
plt.gca())
targetsax.yaxis.set_ticks([])
targetsax_mag = targetsax.twiny()
targetsax_mag.yaxis.set_ticks([])
targetsax_mag.set_xticklabels(['%1.1f'%flux2mag(10**(t)) for t in targetsax.get_xticks()])
targetsax_mag.set_xlim(xlims)
targetsax_mag.set_xlabel("mag", labelpad=1)
mytarget=np.array([(1e4,"Target")],dt)
mytargplt, = targetsax.plot(np.log10(mytarget["flux"][0]), 0.5,
c='red',marker='x')
stargets=Slider(plt.subplot(gs[0,:col1width]),
"",
xlims[0],xlims[1],
valinit=np.log10(mytarget["flux"][0]))
# Sky [Photons/(um m^2 sec)]
sky = np.array([
(6.1e3,"Full moon"),
# (5.0e2, "Half moon"),
(2.9e2, "New moon"),
], dt)
sky.sort(order="flux")
skyax = plt.subplot(gs[7:9,:col1width])
plt.xlim(xlims[0],xlims[1])
plt.ylabel("Sky",rotation=45)
plt.xlabel(r"$log_{10}[N_{\gamma}/ (m^2 \mu m\ sec)]$", labelpad=-1)
plt.scatter(np.log10(sky["flux"]),
0.5*np.ones(sky["flux"].size))
annotateAxis(np.log10(sky["flux"]),
0.5*np.ones(sky["flux"].size),
sky["label"],
plt.gca())
skyax.yaxis.set_ticks([])
skyax_mag = skyax.twiny()
skyax_mag.yaxis.set_ticks([])
skyax_mag.set_xticklabels(['%1.1f'%flux2mag(10**(t)) for t in skyax.get_xticks()])
skyax_mag.set_xlim(xlims)
skyax_mag.set_xlabel("mag", labelpad=1)
mysky=np.array([(3e2,"Sky")],dt)
myskyplt, = skyax.plot(np.log10(mysky["flux"][0]), 0.5,
c='red',marker='x')
ssky=Slider(plt.subplot(gs[5,:col1width]),
"",
xlims[0],xlims[1],
valinit=np.log10(mysky["flux"][0]),
facecolor="red")
# Dark current, [e-/(pixel sec)]
darkcurrent = np.array([
(6.2e-3, "HST WFC"),
(8e-1, "Keck NIRSPEC"),
(1e-1, "Commercial CCD")
# (1e-3,"40 C"),
# (1e-1,"80 C"),
# (1e2, "100 C")
], dt)
darkcurrent.sort(order="flux")
darkcurrentax = plt.subplot(gs[11:13,:col1width])
plt.xlim(-3,10)
plt.ylabel("Dark current",rotation=45)
plt.xlabel(r"$log_{10}[N_{e^-}/ (pixel \ sec)]$", labelpad=-1)
plt.scatter(np.log10(darkcurrent["flux"]),
0.5*np.ones(darkcurrent["flux"].size))
annotateAxis(np.log10(darkcurrent["flux"]),
0.5*np.ones(darkcurrent["flux"].size),
darkcurrent["label"],
darkcurrentax)
darkcurrentax.yaxis.set_ticks([])
mydarkcurrent=np.array([(1e-2,"Dark current")],dt)
mydarkcurrentplt, = darkcurrentax.plot(np.log10(mydarkcurrent["flux"][0]), 0.5,
c='red',marker='x')
sdarkcurrent=Slider(plt.subplot(gs[10,:col1width]),
"",
darkcurrentax.get_xlim()[0],darkcurrentax.get_xlim()[1],
valinit=np.log10(mydarkcurrent["flux"][0]),
facecolor="red")
# Read noise, [e-/pix/read]
readnoise = np.array([
(2., "Chandra ACIS"),
(3., "HST WFC3"),
(4., "P1640"),
(10., "Subaru Suprime-Cam")
], dt)
readnoiseax = plt.subplot(gs[15:17,:col1width])
plt.xlim(0,15)
plt.ylabel("Read noise",rotation=45)
plt.xlabel(r"$N_{e^-}/pixel$", labelpad=-1)
plt.scatter(readnoise["flux"],
0.5*np.ones(readnoise["flux"].size))
annotateAxis(readnoise["flux"],
0.5*np.ones(readnoise["flux"].size),
readnoise["label"],
readnoiseax)
readnoiseax.yaxis.set_ticks([])
myreadnoise=np.array([(2,"Read noise")],dt)
myreadnoiseplt, = readnoiseax.plot(myreadnoise["flux"], 0.5,
c="red",marker="x")
sreadnoise=Slider(plt.subplot(gs[14,:col1width]),
"",
readnoiseax.get_xlim()[0],readnoiseax.get_xlim()[1],
valinit=2,
facecolor="red")
# Cosmics, [N/(cm^2 sec)]
cosmics = np.array([
(1.e-2,"Sea lvl"),
(2e-2, "4 km"),
(1,"Space"),
(1.3e-4, "LHC")
], dt)
cosmics.sort(order="flux")
cosmicsax = plt.subplot(gs[19:21,:col1width])
plt.xlim(-5,1)
plt.ylabel("Cosmics",rotation=45)
plt.xlabel(r"$log_{10}[N_{cosmics}/ (cm^2\ sec)]$", labelpad=-1)
plt.scatter(np.log10(cosmics["flux"]),
0.5*np.ones(cosmics["flux"].size))
annotateAxis(np.log10(cosmics["flux"]),
0.5*np.ones(cosmics["flux"].size),
cosmics["label"],
cosmicsax)
cosmicsax.yaxis.set_ticks([])
mycosmics=np.array([(1e-2,"Cosmics")],dt)
mycosmicsplt, = cosmicsax.plot(np.log10(mycosmics["flux"][0]), 0.5,
c='red',marker='x')
scosmics=Slider(plt.subplot(gs[18,:col1width]),
"",
cosmicsax.get_xlim()[0],cosmicsax.get_xlim()[1],
valinit=np.log10(mycosmics["flux"][0]),
facecolor="red")
# Telescope parameters
saperture=Slider(plt.subplot(gs[0,col1width+4:col1width+4+col2width]),
r"Aperture diameter [$m$]",
0,10,
valinit=1,
facecolor="green")
sbandwidth=Slider(plt.subplot(gs[1,col1width+4:col1width+4+col2width]),
r"Bandwidth [$\mu m$]",
0,10,
valinit=1,
facecolor="green")
sccdsize=Slider(plt.subplot(gs[2,col1width+4:col1width+4+col2width]),
r"CCD dimension [pixels]",
500,5000,
valinit=2048,
valfmt="%i",
facecolor="green")
sqe=Slider(plt.subplot(gs[3,col1width+4:col1width+4+col2width]),
r"Quantum efficiency",
0,1,
valinit=0.85,
facecolor="green")
spixelsize=Slider(plt.subplot(gs[4,col1width+4:col1width+4+col2width]),
r"Pixel size [$\mu m$]",
1,100,
valinit=18,
facecolor="green")
spixelsperpsf=Slider(plt.subplot(gs[5,col1width+4:col1width+4+col2width]),
r"Pixels under PSF [$\#$]",
1,40,
valinit=10,
valfmt='%1.1f',
facecolor="green")
spsfsontarget=Slider(plt.subplot(gs[6,col1width+4:col1width+4+col2width]),
r"Target size [$PSFs$]",
1,200,
valinit=10,
facecolor="green")
# signal-to-noise
def calcSNR(sig=1,bgnd=0,readnoise=0,Idc=0,npix=1,time=1):
"""
This is the famous CCD equation.
The number of pixels is already taken into account
if you pull values from the bar chart, so leave it at 1
"""
snr = []
time = np.asarray(time)
warnings.filterwarnings("error")
try:
snr = [sig*t/np.sqrt( (sig+bgnd*npix+Idc*npix)*t+(readnoise**2)*npix ) for t in time]
except RuntimeWarning:
print "Error: cannot handle time = 0. Try again."
return np.array(snr)
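# Illustrative check of the CCD equation above (assumed numbers, not from the
# source): a 100 e-/s signal spread over 10 pixels with 20 e-/s/pix sky,
# 1 e-/s/pix dark current and 4 e- read noise gives, after t = 60 s,
# SNR = 100*60 / np.sqrt((100 + 20*10 + 1*10)*60 + 4**2*10) ~ 44
# i.e. calcSNR(sig=100, bgnd=20, readnoise=4, Idc=1, npix=10, time=[60.0])[0]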
def calcCosmicHits(time=1):
"""
The probability of losing a pixel to cosmics
"""
time = np.asarray(time)
ccdflux = (spixelsize.val*sccdsize.val*1e-4)**2 * 10**scosmics.val # flux through ccd
npix = spsfsontarget.val*spixelsperpsf.val
probPerPix = (1-1./(sccdsize.val**2))**(ccdflux*time)
probPerTargetPix = probPerPix**npix
return (probPerPix,probPerTargetPix)
# Signal size bar plot - the contribution of electrons from each source
signalsvals = np.array([
((10**stargets.val)*(np.pi*saperture.val**2)*sbandwidth.val*sqe.val,"Target"),
((10**ssky.val)*(np.pi*saperture.val**2)*sbandwidth.val*sqe.val,"Sky"),
((10**scosmics.val) * (spixelsize.val)**2 * (1e-4)**2 * spsfsontarget.val*spixelsperpsf.val * 4.66e6*1/1.12, "Cosmics"),
#((1e-4/spixelsize.val)**2)*(4.66*1e6*1*1/1.12)*spsfsontarget.val*spixelsperpsf.val,"Cosmics"),
((10**sdarkcurrent.val)*spsfsontarget.val*spixelsperpsf.val,"DC"),
(sreadnoise.val*spsfsontarget.val*spixelsperpsf.val,"RN")
],dt)
signalsax = plt.subplot(gs[17:21,10:15])
signalsax.set_ylabel(r"$N_{e^-}/sec$")
signalsax.set_ylim(0,1.1*signalsvals["flux"].max())
signalsrect = signalsax.bar(range(signalsvals.size),signalsvals["flux"],
align="center")
for r in signalsrect[1:]: r.set_color("red")
plt.xticks(np.indices(signalsvals["label"].shape)[0],
signalsvals["label"])
# put the rectangles in a dictionary, so that you don't have to remember their order
signalsdict = dict((s,r) for s,r in zip(signalsvals["label"],signalsrect))
# SNR
time = np.linspace(1,3600,1000)
#snrax = plt.subplot(gs[14:19,9:16])
snrax = plt.subplot(gs[8:15,9:16])
snrax.set_xlim(0,time.max())
snrax.set_title("SNR")
snrax.set_xlabel("Integration time [min]")
snrax.set_xticks(np.linspace(0,3600,7))
snrax.set_xticklabels([int(i/60.) for i in snrax.get_xticks()])
snrax.ticklabel_format(style='sci',axis='y')
snrax.grid(True)
snrplt, = snrax.plot(time,
calcSNR(time=time,
sig = signalsdict["Target"].get_height(),
bgnd = signalsdict["Sky"].get_height(),
Idc = signalsdict["DC"].get_height(),
readnoise = signalsdict["RN"].get_height()
)
)
ruinedax = snrax.twinx() # pixels ruined by cosmics
ruinedax.set_ylim(0,1)
ruinedax.set_xticks(np.linspace(0,3600,7))
ruinedax.set_xticklabels([int(i/60.) for i in ruinedax.get_xticks()])
ruinedax.set_ylabel("Pixel survival prob. from cosmics",labelpad=-1,rotation=-90,color='r')
singleprob, targetpixprob = calcCosmicHits(time=time)
ruinedplts = {}
ruinedplts["singlepix"], = ruinedax.plot(time,singleprob,'r--',label="single pixel")
ruinedplts["targetpix"], = ruinedax.plot(time,targetpixprob,'r-',label="target pixels")
legend = ruinedax.legend(loc=2,frameon=False)
for t in legend.get_texts():
t.set_color("black")
# text formula
#fig.text(0.69,0.05,r"$\frac{S}{N}=\frac{St}{\sqrt{(S+Bn_{pix}+I_d n_{pix})t + R^2_n n_{pix}}}$",size="large")
## Dynamic updating
## signals bar chart
def update_signals_bar():
signalsdict["Target"].set_height((10**stargets.val)*(np.pi*saperture.val**2)*sbandwidth.val*sqe.val)
signalsdict["Sky"].set_height((10**ssky.val)*(np.pi*saperture.val**2)*sbandwidth.val*sqe.val)
signalsdict["Cosmics"].set_height((10**scosmics.val) * (spixelsize.val)**2 * (1e-4)**2 * spsfsontarget.val*spixelsperpsf.val * 4.66e6*1/1.12),
signalsdict["DC"].set_height((10**sdarkcurrent.val)*spsfsontarget.val*spixelsperpsf.val)
signalsdict["RN"].set_height(sreadnoise.val*spsfsontarget.val*spixelsperpsf.val)
# get maximum
maxsig = max([rect.get_height() for rect in signalsdict.values()])
signalsax.set_ylim(0,maxsig*1.1)
#for r,s in zip(signalsrect,signalsvals["flux"]):
#r.set_height(s)
def update_snr_plot():
newsnr = calcSNR(time=time,
sig = signalsdict["Target"].get_height(),
bgnd = signalsdict["Sky"].get_height(),
Idc = signalsdict["DC"].get_height(),
readnoise = signalsdict["RN"].get_height(),
)
maxsig = newsnr.max()
snrax.set_ylim(0,maxsig*1.1)
snrplt.set_ydata(newsnr)
newcosmicsSingle, newcosmicsTarget=calcCosmicHits(time=time)
ruinedplts["singlepix"].set_ydata(newcosmicsSingle)
ruinedplts["targetpix"].set_ydata(newcosmicsTarget)
# whole figure
def update(val):
## signal sources
mytargplt.set_xdata(stargets.val)
myskyplt.set_xdata(ssky.val)
mydarkcurrentplt.set_xdata(sdarkcurrent.val)
mycosmicsplt.set_xdata(scosmics.val)
myreadnoiseplt.set_xdata(sreadnoise.val)
## signals bar plot
update_signals_bar()
## snr plot
update_snr_plot()
fig.canvas.draw_idle()
stargets.on_changed(update)
ssky.on_changed(update)
sdarkcurrent.on_changed(update)
scosmics.on_changed(update)
sreadnoise.on_changed(update)
saperture.on_changed(update)
sbandwidth.on_changed(update)
spixelsperpsf.on_changed(update)
spsfsontarget.on_changed(update)
spixelsize.on_changed(update)
sccdsize.on_changed(update)
sqe.on_changed(update)
# maximize window size
mng = plt.get_current_fig_manager()
mng.resize(*mng.window.maxsize())
plt.show()
| gpl-3.0 |
timqian/sms-tools | lectures/5-Sinusoidal-model/plots-code/spectral-sine-synthesis.py | 24 | 1316 | import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import hamming, triang, blackmanharris
from scipy.fftpack import fft, ifft, fftshift
import math
import sys, os, functools, time
sys.path.append(os.path.join(os.path.dirname(os.path.realpath(__file__)), '../../../software/models/'))
import stft as STFT
import sineModel as SM
import utilFunctions as UF
Ns = 256
hNs = Ns/2
yw = np.zeros(Ns)
fs = 44100
freqs = np.array([1000.0, 4000.0, 8000.0])
amps = np.array([.6, .4, .6])
phases = ([0.5, 1.2, 2.3])
yploc = Ns*freqs/fs
ypmag = 20*np.log10(amps/2.0)
ypphase = phases
Y = UF.genSpecSines(freqs, ypmag, ypphase, Ns, fs)
mY = 20*np.log10(abs(Y[:hNs]))
pY = np.unwrap(np.angle(Y[:hNs]))
y= fftshift(ifft(Y))*sum(blackmanharris(Ns))
plt.figure(1, figsize=(9, 5))
plt.subplot(3,1,1)
plt.plot(fs*np.arange(Ns/2)/Ns, mY, 'r', lw=1.5)
plt.axis([0, fs/2.0,-100,0])
plt.title("mY, freqs (Hz) = 1000, 4000, 8000; amps = .6, .4, .6")
plt.subplot(3,1,2)
pY[pY==0]= np.nan
plt.plot(fs*np.arange(Ns/2)/Ns, pY, 'c', lw=1.5)
plt.axis([0, fs/2.0,-.01,3.0])
plt.title("pY, phases (radians) = .5, 1.2, 2.3")
plt.subplot(3,1,3)
plt.plot(np.arange(-hNs, hNs), y, 'b', lw=1.5)
plt.axis([-hNs, hNs,min(y),max(y)])
plt.title("y")
plt.tight_layout()
plt.savefig('spectral-sine-synthesis.png')
plt.show()
| agpl-3.0 |
ambros-gleixner/rubberband | rubberband/handlers/fe/evaluation.py | 1 | 24666 | """Contains EvaluationView."""
from lxml import html
import pandas as pd
import re
import json
import logging
from .base import BaseHandler
from rubberband.constants import IPET_EVALUATIONS, NONE_DISPLAY, ALL_SOLU
from rubberband.models import TestSet
from rubberband.utils import RBLogHandler
from rubberband.utils.helpers import get_rbid_representation, setup_testruns_subst_dict
from ipet import Experiment, TestRun
from ipet.evaluation import IPETEvaluation
class EvaluationView(BaseHandler):
"""Request handler caring about the evaluation of sets of TestRuns."""
def get(self, eval_id):
"""
Answer to GET requests.
Evaluate TestRuns with IPET, read id of evaluation file from URL.
Parameters
----------
eval_id : str
id of evaluation file read from url by routes.
Writes latex version of ipet-agg-table via file.html if style option in url is `latex`,
else it writes ipet-long-table and ipet-aggregated-table into a json dict
"""
# default and implicit style is ipetevaluation. if given latex, generate a table in the
# style of the release report
style = self.get_argument("style", None)
# setup logger
if style is None:
ipetlogger = logging.getLogger("ipet")
rbhandler = RBLogHandler(self)
rbhandler.setLevel(logging.INFO)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
rbhandler.setFormatter(formatter)
ipetlogger.addHandler(rbhandler)
# get evalfile
evalfile = IPET_EVALUATIONS[int(eval_id)]
if style is not None and style == "latex":
evalfile = '''<?xml version="1.0" ?>
<!-- group by githash - exclude fails & aborts -->
<Evaluation comparecolformat="%.3f" index="ProblemName Seed Permutation Settings LPSolver GitHash"
indexsplit="3">
<Column formatstr="%.2f" name="T" origcolname="SolvingTime" minval="0.5"
comp="quot shift. by 1" maxval="TimeLimit" alternative="TimeLimit"
reduction="shmean shift. by 1">
<Aggregation aggregation="shmean" name="sgm" shiftby="1.0"/>
</Column>
<Column formatstr="%.0f" name="N" origcolname="Nodes" comp="quot shift. by 100"
reduction="shmean shift. by 100">
<Aggregation aggregation="shmean" name="sgm" shiftby="100.0" />
</Column>
<FilterGroup name="\cleaninst">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="\\affected">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter active="True" anytestrun="one" datakey="LP_Iterations_dualLP" operator="diff"/>
<Filter active="True" anytestrun="one" expression1="_solved_"
expression2="1" operator="eq"/>
</FilterGroup>
<FilterGroup name="\\bracket{0}{tilim}">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
</FilterGroup>
<FilterGroup name="\\bracket{1}{tilim}">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
<Filter anytestrun="one" expression1="T" expression2="1" operator="ge"/>
</FilterGroup>
<FilterGroup name="\\bracket{10}{tilim}">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
<Filter anytestrun="one" expression1="T" expression2="10" operator="ge"/>
</FilterGroup>
<FilterGroup name="\\bracket{100}{tilim}">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
<Filter anytestrun="one" expression1="T" expression2="100" operator="ge"/>
</FilterGroup>
<FilterGroup name="\\bracket{1000}{tilim}">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
<Filter anytestrun="one" expression1="T" expression2="1000" operator="ge"/>
</FilterGroup>
<FilterGroup name="\\alloptimal">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_solved_" expression2="1" operator="eq"/>
</FilterGroup>
<FilterGroup name="\difftimeouts">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="1" operator="eq"/>
<Filter anytestrun="one" expression1="_solved_" expression2="0" operator="eq"/>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="\miplib~2017">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter active="True" anytestrun="all" datakey="ProblemName" operator="keep">
<Value active="True" name="MIPLIB2017"/>
</Filter>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="\miplib~2010">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter active="True" anytestrun="all" datakey="ProblemName" operator="keep">
<Value active="True" name="MIPLIB2010"/>
</Filter>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="\coral">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter active="True" anytestrun="one" datakey="ProblemName" operator="keep">
<Value active="True" name="COR@L"/>
</Filter>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="continuous">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="PresolvedProblem_ContVars"
expression2="PresolvedProblem_Vars" operator="eq"/>
</FilterGroup>
<FilterGroup active="True" filtertype="intersection" name="integer">
<Filter anytestrun="all" expression1="_abort_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="_fail_" expression2="0" operator="eq"/>
<Filter anytestrun="all" expression1="PresolvedProblem_ContVars"
expression2="PresolvedProblem_Vars" operator="lt"/>
</FilterGroup>
</Evaluation>
'''
tolerance = self.get_argument("tolerance")
if tolerance == "":
tolerance = 1e-6
droplist = self.get_argument("droplist")
# read testrunids
testrun_ids = self.get_argument("testruns")
testrunids = testrun_ids.split(",")
# read defaultgroup
default_id = self.get_argument("default", testrunids[0])
# get testruns and default
testruns = get_testruns(testrunids)
# evaluate with ipet
ex, excluded_inst = setup_experiment(testruns, droplist)
ev = setup_evaluation(evalfile, ALL_SOLU, tolerance,
evalstring=(style is not None and style == "latex"))
# set defaultgroup
set_defaultgroup(ev, ex, default_id)
# do evaluation
longtable, aggtable = ev.evaluate(ex)
# None style is default
if style is None:
# add filtergroup buttons to ipet long table
fg_buttons_str, longtable["Filtergroups"] = generate_filtergroup_selector(longtable, ev)
# get substitutions dictionary
repres = setup_testruns_subst_dict(testruns)
cols_dict = get_columns_dict(longtable, {**repres["short"], **repres["all"]})
# add id column to longtable
longtable.insert(0, "id", range(1, len(longtable) + 1))
delcols = [i for i in longtable.columns if i[-1] == "groupTags"]
longtable = longtable.drop(delcols, axis=1)
# convert to html and get style
add_classes = " ".join([self.rb_dt_borderless, self.rb_dt_compact]) # style for table
html_long = table_to_html(longtable, ev, html_id="ipet-long-table",
add_class=add_classes)
html_agg = table_to_html(aggtable, ev, html_id="ipet-aggregated-table",
add_class=add_classes)
ipetlogger.removeHandler(rbhandler)
rbhandler.close()
# postprocessing
html_long = process_ipet_table(html_long, {**repres["short"], **repres["all"]},
add_ind=False, swap=True)
html_agg = process_ipet_table(html_agg, {**repres["long"], **repres["all"]},
add_ind=True, swap=False)
message = ", ".join(sorted(list(set(excluded_inst))))
print(message)
# render to strings
html_tables = self.render_string("results/evaluation.html",
ipet_long_table=html_long,
ipet_aggregated_table=html_agg,
columns=cols_dict,
psmessage=message).decode("utf-8")
# send evaluated data
mydict = {"rb-ipet-eval-result": html_tables,
"rb-ipet-buttons": fg_buttons_str}
self.write(json.dumps(mydict))
elif style == "latex":
# generate a table that can be used in the release-report
df = aggtable
# care for the columns
df = df.reset_index()
poss = ['RubberbandId', 'GitHash', 'Settings', 'LPSolver']
for i in poss:
if i in df.columns:
colindex = i
break
cols = [c for c in df.columns if (c in ['Group', colindex, '_solved_'] or c.
startswith("N_") or c.startswith("T_")) and not c.endswith(")p")]
cols1 = [c for c in cols if c.endswith("Q") or c in ['Group', colindex]]
cols2 = [c for c in cols if not c.endswith("Q") or c in ['Group', colindex]]
df_rel = df[cols1]
df_abs = df[cols2]
df_count = df["_count_"]
# groups in rows
rows = ['\cleaninst', '\\affected', '\\bracket{0}{tilim}', '\\bracket{1}{tilim}',
'\\bracket{10}{tilim}', '\\bracket{100}{tilim}',
'\\bracket{1000}{tilim}', '\difftimeouts', '\\alloptimal',
'\miplib~2010', '\miplib~2017', '\coral', 'continuous', 'integer']
df_rel = df_rel.pivot_table(index=['Group'], columns=[colindex]).swaplevel(
axis=1).sort_index(axis=1, level=0, sort_remaining=True, ascending=False)
df_abs = df_abs.pivot_table(index=['Group'], columns=[colindex]).swaplevel(
axis=1).sort_index(axis=1, level=0, sort_remaining=True, ascending=False)
df_count = df.pivot_table(values=['_count_'], index=['Group'],
columns=[colindex]).swaplevel(axis=1)
df = df_abs
df.insert(loc=0, column=('NaN', "instances"), value=df_count[df_count.columns[0]])
for key in df_abs.columns:
df[key] = df_abs[key]
for key in df_rel.columns:
if not df_rel[key].mean() == 1.0:
(a, b) = key
df['relative', b] = df_rel[key]
df = df.loc[df.index.intersection(rows)].reindex(rows)
# render to latex
formatters = get_column_formatters(df)
out = df.to_latex(column_format='@{}l@{\\;\\;\\extracolsep{\\fill}}rrrrrrrrr@{}',
multicolumn_format="c", escape=False, formatters=formatters)
# split lines into a list
latex_list = out.splitlines()
# insert `\cmidrule` separators at fixed positions (largest index first, so the
# earlier insertions do not shift the later targets) to delimit the header and row groups
latex_list.insert(14, '\cmidrule{1-10}')
latex_list.insert(7, '\cmidrule{1-10}')
latex_list.insert(3, '\cmidrule{3-5} \cmidrule{6-8} \cmidrule{9-10}')
# join split lines to get the modified latex output string
out = '\n'.join(latex_list)
# postprocessing
repl = get_replacement_dict(cols, colindex)
for t in testruns:
repl[t.get_data(colindex)] = t.get_data("ReportVersion")
for k, v in repl.items():
out = out.replace(k, v)
tridstr = ",".join([tr for tr in testrunids if tr != default_id])
baseurl = self.get_rb_base_url()
evaluation_url = "{}/result/{}?compare={}#evaluation".format(baseurl,
default_id, tridstr)
out = insert_into_latex(out, evaluation_url)
# send reply
self.render("file.html", contents=out)
def get_column_formatters(df):
"""
Get the formatters for a dataframe.
Parameters
----------
df : pandas dataframe
table
Returns
-------
dict
dictionary of formatters for columns.
"""
formatters = {}
for p in df.columns:
(a, b) = p
if b.endswith("Q"):
formatters[p] = lambda x: "%.2f" % x
elif b.startswith("T_"):
formatters[p] = lambda x: "%.1f" % x
else:
formatters[p] = lambda x: "%.0f" % x
return formatters
def insert_into_latex(body, url):
"""
Surround a latex table body by a latex header and footer.
Add a comment with link to url.
Parameters
----------
body : str
latex table body
url : str
url to current page
Returns
-------
str
the complete latex table
"""
latex_table_top = r"""
% table automatically generated by rubberband, please have a look and check everything
\begin{table}
\caption{Performance comparison}
\label{tbl:rubberband_table}
\scriptsize
"""
latex_table_bottom = r"""
\end{table}
"""
return latex_table_top + body + latex_table_bottom + "%% " + url
def get_replacement_dict(cols, colindex):
"""
Get the replacement dict for latex representation.
Parameters
----------
cols : columns
columns of table
colindex : key
title of additional column
Returns
-------
dict
replacement dictionary as `key` -> `value` pairs
"""
# collect keys for replacement
repl = {}
for i in cols:
if i.startswith("N_"):
repl[" " + i + " "] = " nodes "
if i.startswith("T_"):
repl[" " + i + " "] = " time "
repl["_solved_"] = " solved"
repl["Group"] = "Subset "
repl["NaN"] = " -"
repl["nan"] = " -"
repl["timeQ"] = "time"
repl["nodesQ"] = "nodes"
repl["{} & instances"] = "Subset & instances"
repl[colindex + " &"] = "&"
repl["egin{tabular"] = r"egin{tabular*}{\textwidth"
repl["end{tabular"] = "end{tabular*"
repl[r'- & \multi'] = r" & \multi"
return repl
def setup_experiment(testruns, droplist=""):
"""
Setup an ipet experiment for the given testruns.
Parameters
----------
testruns : list
a list of rubberband TestSet
Returns
-------
ipet.experiment
experiment
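Examples
--------
A hedged sketch (the testrun objects and droplist patterns are assumed):
>>> ex, dropped = setup_experiment(testruns, droplist="noswot.*,bell5")
>>> # data keys matching either regex are excluded from the experiment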
"""
ex = Experiment()
ex.addSoluFile(ALL_SOLU)
regexlist = []
for x in droplist.split(","):
# defaultvalue, if empty we don't want to exclude everything
if x == "":
continue
try:
y = re.compile(x)
regexlist.append(y)
except:
pass
excluded_inst = []
# get data
for t in testruns:
# update representation
additional_data = {"RubberbandId": get_rbid_representation(t, "extended")}
# collect data and pass to ipet
ipettestrun = TestRun()
tr_raw_data = t.get_data(add_data=additional_data)
tr_data = {}
for i in tr_raw_data.keys():
for r in regexlist:
if r.match(i):
excluded_inst.append(i)
break
else:
tr_data[i] = tr_raw_data[i]
ipettestrun.data = pd.DataFrame(tr_data).T
ex.testruns.append(ipettestrun)
return ex, excluded_inst
def process_ipet_table(table, repres, add_ind=False, swap=False):
"""
Make some modifications to the html structure.
Split all multirow cells and replace keys of repres by their corresponding values.
Parameters
----------
table : html
html structure of table
repres : dict
Replacement dictionary, `key` -> `value`
add_ind : bool
Add indices to rows
swap : bool
Swap the first two rows of header
Returns
-------
str
The html table as a string.
"""
# split rowspan cells from the tables body to enable js datatable
table_rows = [e for e in table.find(".//tbody").iter() if e.tag == "tr" or e.tag == "th"]
groupcount = 1
oldtext = ""
for row in table_rows:
cellcount = 0
for cell in row.iter():
if add_ind and cellcount == 1 and cell.tag == "th" and cell.text != oldtext:
cell.text = "{:0>2d}. {}".format(groupcount, cell.text)
oldtext = cell.text
groupcount = groupcount + 1
rowspan = cell.get("rowspan")
if rowspan is not None:
del cell.attrib["rowspan"]
nextrow = row
for i in range(int(rowspan) - 1):
nextrow = nextrow.getnext()
newcell = html.fromstring(html.tostring(cell))
nextrow.insert(cellcount - 1, newcell)
cellcount = cellcount + 1
# render to string and make the dataTable fit the width
htmltable = html.tostring(table).decode("utf-8")
# replace ids and so on
htmltable = replace_in_str(htmltable, repres)
htmltable = htmltable.replace("nan", NONE_DISPLAY)
return htmltable
def replace_in_str(rstring, repres):
"""Replace keys by values of repres in rstring."""
for k in sorted(repres.keys(), key=len, reverse=True):
rstring = rstring.replace(k, repres[k])
return rstring
def highlight_series(s):
"""
Highlight a series or dataframe with a light gray background color.
An apply function for the pandas styler;
maps a Series/DataFrame to CSS strings of identical length.
Parameters
----------
s : pandas Series or Dataframe
Returns
-------
list
css background-color attribute for each element of s.
"""
return ['background-color: #eee' for v in s]
def table_to_html(df, ev, html_id="", add_class=""):
"""
Convert an ipet table to an html table.
Parameters
----------
df : pandas.dataframe
ev : ipet.evaluation
html_id : str
Returns
-------
html
html object of table
"""
formatters = ev.getColumnFormatters(df)
# apply sortlevel
df = ev.sortDataFrame(df)
tableclasses = 'ipet-table rb-table-data {}" width="100%'.format(add_class)
htmlstr = df.to_html(border=0,
na_rep=NONE_DISPLAY, formatters=formatters, justify="right",
table_id=html_id, classes=tableclasses)
return html.fromstring(htmlstr)
def get_testruns(testrunids):
"""
Collect testruns from the ids.
Parameters
----------
testrunids : list or string
list of testrun ids or a single one
Returns
-------
list
corresponding rubberband.TestSet(s)
"""
if type(testrunids) is not list:
return TestSet.get(id=testrunids)
testruns = []
for i in testrunids:
t = TestSet.get(id=i)
testruns.append(t)
return testruns
def setup_evaluation(evalfile, solufile, tolerance, evalstring=False):
"""
Setup the IPET evaluation.
Parameters
----------
evalfile : str
name of evaluation file to use
solufile : str
name of solution file to use
tolerance : str
tolerance for validation
evalstring : bool
evalfile a string (or a filename)
Returns
-------
ipet.IPETEvaluation
"""
if evalstring:
evaluation = IPETEvaluation.fromXML(evalfile)
else:
evaluation = IPETEvaluation.fromXMLFile(evalfile["path"])
evaluation.set_grouptags(True)
evaluation.set_validate(solufile)
evaluation.set_feastol(tolerance)
return evaluation
def set_defaultgroup(evaluation, experiment, testrun_id):
"""
Set defaultgroup implied by testrun_id based on evaluation.
Parameters
----------
evaluation : ipet.IPETEvaluation
evaluation to use
experiment : ipet.IPETExperiment
experiment to use
testrun_id : str
testrun setting defaultgroup
"""
index = evaluation.getColIndex()
# testrun_id can be found in column "RubberbandMetaId"
df = experiment.getJoinedData()[index + ["RubberbandMetaId"]]
df = df[df.RubberbandMetaId == testrun_id]
defaultgroup_list = list(df.iloc[0][index])
defaultgroup_string = ":".join(defaultgroup_list)
evaluation.set_defaultgroup(defaultgroup_string)
def generate_filtergroup_selector(table, evaluation):
"""
Generate an html filtergroup selector string for the ipet long table and an extra column for the table.
Parameters
----------
table : pandas.DataFrame
ipet long table
evaluation : ipet.IPETEvaluation
corresponding ipet evaluation
Returns
-------
str, pandas.Series
selector and additional column
"""
table = table.copy()
gtindex = [c for c in table.columns if c[-1] == "groupTags"][0]
table["Filtergroups"] = list(map("|{}|".format, table[gtindex]))
out = '<div id="ipet-long-table-filter col"><label class="col-form-label text-left">Select filtergroups:<select id="ipet-long-filter-select" class="custom-select">' # noqa
for fg in evaluation.getActiveFilterGroups():
fg_name = fg.getName()
fg_data = evaluation.getInstanceGroupData(fg)
# don't show empty filtergroups
if len(fg_data) == 0:
continue
# construct new option string
newoption = '<option value="' + fg_name + '">' + fg_name + '</option>' # noqa
# update selector string
out = out + newoption
maxfgstr = ",".join(["|{}|".format(fg.getName()) for fg in evaluation.getActiveFilterGroups()])
maxlen = len(maxfgstr)
pd.set_option('max_colwidth', max(pd.get_option('display.max_colwidth'), maxlen))
out = out + '</select></label></div>'
return out, table["Filtergroups"]
def get_columns_dict(table, replace):
"""Construct a dictionary with column headers and ids, also replace given by replace dict."""
# 0 is name, 1 is id
if type(table.index) == pd.MultiIndex:
colcount = 1 + len(table.index[0])
else:
colcount = 2
cols = {}
for c in table.columns:
c_repres = ",".join(c)
if "Filtergroups" not in c:
cols[colcount] = replace_in_str(str(c_repres), replace)
colcount = colcount + 1
return cols
| mit |
akhilaananthram/nupic | nupic/math/roc_utils.py | 49 | 8308 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""
Utility functions to compute ROC (Receiver Operator Characteristic) curves
and AUC (Area Under the Curve).
The ROCCurve() and AreaUnderCurve() functions are based on the roc_curve()
and auc() functions found in metrics.py module of scikit-learn
(http://scikit-learn.org/stable/). Scikit-learn has a BSD license (3 clause).
Following is the original license/credits statement from the top of the
metrics.py file:
# Authors: Alexandre Gramfort <[email protected]>
# Mathieu Blondel <[email protected]>
# Olivier Grisel <[email protected]>
# License: BSD Style.
"""
import numpy as np
def ROCCurve(y_true, y_score):
"""compute Receiver operating characteristic (ROC)
Note: this implementation is restricted to the binary classification task.
Parameters
----------
y_true : array, shape = [n_samples]
true binary labels
y_score : array, shape = [n_samples]
target scores, can either be probability estimates of
the positive class, confidence values, or binary decisions.
Returns
-------
fpr : array, shape = [>2]
False Positive Rates
tpr : array, shape = [>2]
True Positive Rates
thresholds : array, shape = [>2]
Thresholds on y_score used to compute fpr and tpr
Examples
--------
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores)
>>> fpr
array([ 0. , 0.5, 0.5, 1. ])
References
----------
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
"""
y_true = np.ravel(y_true)
classes = np.unique(y_true)
# ROC only for binary classification
if classes.shape[0] != 2:
raise ValueError("ROC is defined for binary classification only")
y_score = np.ravel(y_score)
n_pos = float(np.sum(y_true == classes[1])) # nb of true positive
n_neg = float(np.sum(y_true == classes[0])) # nb of true negative
thresholds = np.unique(y_score)
neg_value, pos_value = classes[0], classes[1]
tpr = np.empty(thresholds.size, dtype=np.float) # True positive rate
fpr = np.empty(thresholds.size, dtype=np.float) # False positive rate
# Build tpr/fpr vector
current_pos_count = current_neg_count = sum_pos = sum_neg = idx = 0
signal = np.c_[y_score, y_true]
sorted_signal = signal[signal[:, 0].argsort(), :][::-1]
last_score = sorted_signal[0][0]
for score, value in sorted_signal:
if score == last_score:
if value == pos_value:
current_pos_count += 1
else:
current_neg_count += 1
else:
tpr[idx] = (sum_pos + current_pos_count) / n_pos
fpr[idx] = (sum_neg + current_neg_count) / n_neg
sum_pos += current_pos_count
sum_neg += current_neg_count
current_pos_count = 1 if value == pos_value else 0
current_neg_count = 1 if value == neg_value else 0
idx += 1
last_score = score
else:
tpr[-1] = (sum_pos + current_pos_count) / n_pos
fpr[-1] = (sum_neg + current_neg_count) / n_neg
# hard decisions, add (0,0)
if fpr.shape[0] == 2:
fpr = np.array([0.0, fpr[0], fpr[1]])
tpr = np.array([0.0, tpr[0], tpr[1]])
# trivial decisions, add (0,0) and (1,1)
elif fpr.shape[0] == 1:
fpr = np.array([0.0, fpr[0], 1.0])
tpr = np.array([0.0, tpr[0], 1.0])
return fpr, tpr, thresholds
def AreaUnderCurve(x, y):
"""Compute Area Under the Curve (AUC) using the trapezoidal rule
Parameters
----------
x : array, shape = [n]
x coordinates
y : array, shape = [n]
y coordinates
Returns
-------
auc : float
Examples
--------
>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> pred = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, pred)
>>> metrics.auc(fpr, tpr)
0.75
"""
#x, y = check_arrays(x, y)
if x.shape[0] != y.shape[0]:
raise ValueError('x and y should have the same shape'
' to compute area under curve,'
' but x.shape = %s and y.shape = %s.'
% (x.shape, y.shape))
if x.shape[0] < 2:
raise ValueError('At least 2 points are needed to compute'
' area under curve, but x.shape = %s' % x.shape)
# reorder the data points according to the x axis
order = np.argsort(x)
x = x[order]
y = y[order]
h = np.diff(x)
area = np.sum(h * (y[1:] + y[:-1])) / 2.0
return area
def _printNPArray(x, precision=2):
format = "%%.%df" % (precision)
for elem in x:
print format % (elem),
print
def _test():
"""
This is a toy example, to show the basic functionality:
The dataset is:
actual prediction
-------------------------
0 0.1
0 0.4
1 0.5
1 0.3
1 0.45
Some ROC terminology:
A True Positive (TP) is when we predict TRUE and the actual value is 1.
A False Positive (FP) is when we predict TRUE, but the actual value is 0.
The True Positive Rate (TPR) is TP/P, where P is the total number of actual
positives (3 in this example, the last 3 samples).
The False Positive Rate (FPR) is FP/N, where N is the total number of actual
negatives (2 in this example, the first 2 samples)
Here are the classifications at various choices for the threshold. The
prediction is TRUE if the predicted value is >= threshold and FALSE otherwise.
actual pred 0.50 0.45 0.40 0.30 0.10
---------------------------------------------------------
0 0.1 0 0 0 0 1
0 0.4 0 0 1 1 1
1 0.5 1 1 1 1 1
1 0.3 0 0 0 1 1
1 0.45 0 1 1 1 1
TruePos(TP) 1 2 2 3 3
FalsePos(FP) 0 0 1 1 2
TruePosRate(TPR) 1/3 2/3 2/3 3/3 3/3
FalsePosRate(FPR) 0/2 0/2 1/2 1/2 2/2
The ROC curve is a plot of FPR on the x-axis and TPR on the y-axis. Basically,
one can pick any operating point along this curve to run, the operating point
determined by which threshold you want to use. By changing the threshold, you
tradeoff TP's for FPs.
The more area under this curve, the better the classification algorithm is.
The AreaUnderCurve() function can be used to compute the area under this
curve.
"""
yTrue = np.array([0, 0, 1, 1, 1])
yScore = np.array([0.1, 0.4, 0.5, 0.3, 0.45])
(fpr, tpr, thresholds) = ROCCurve(yTrue, yScore)
print "Actual: ",
_printNPArray(yTrue)
print "Predicted: ",
_printNPArray(yScore)
print
print "Thresholds:",
_printNPArray(thresholds[::-1])
print "FPR(x): ",
_printNPArray(fpr)
print "TPR(y): ",
_printNPArray(tpr)
print
area = AreaUnderCurve(fpr, tpr)
print "AUC: ", area
if __name__=='__main__':
_test()
| agpl-3.0 |
jchodera/yank | docs/conf.py | 2 | 9828 | # -*- coding: utf-8 -*-
#
# MDTraj documentation build configuration file, created by
# sphinx-quickstart on Tue Jun 11 21:23:28 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
sys.path.insert(0, os.path.abspath('sphinxext'))
sys.path.insert(0, os.path.abspath('themes/sphinx_rtd_theme-0.1.5'))
import sphinx_rtd_theme
import yank
import yank.version
# DEBUG: Include TODO lists for now.
todo_include_todos = True
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
# This is the compete set of extensions MDTraj uses
#extensions = ['sphinx.ext.mathjax', 'sphinx.ext.ifconfig', 'sphinx.ext.autosummary',
# 'sphinx.ext.viewcode', 'sphinx.ext.autodoc', 'numpydoc',
# 'sphinx.ext.intersphinx', 'sphinx.ext.viewcode',
# 'IPython.sphinxext.ipython_console_highlighting',
# 'IPython.sphinxext.ipython_directive',
# 'matplotlib.sphinxext.plot_directive']
# Pared-down set (for now) for Yank
extensions = ['sphinx.ext.mathjax', 'sphinx.ext.ifconfig', 'sphinx.ext.autosummary',
'sphinx.ext.viewcode', 'sphinx.ext.autodoc', 'sphinx.ext.viewcode',
'numpydoc', 'sphinx.ext.todo', 'sphinxcontrib.bibtex']
# Napoleon settings
napoleon_google_docstring = False
napoleon_numpy_docstring = True
napoleon_include_private_with_doc = False
napoleon_include_special_with_doc = True
napoleon_use_admonition_for_examples = False
napoleon_use_admonition_for_notes = False
napoleon_use_admonition_for_references = False
napoleon_use_ivar = False
napoleon_use_param = True
napoleon_use_rtype = True
autosummary_generate = True
autodoc_default_flags = ['members', 'inherited-members']
extensions.append('notebook_sphinxext')
extensions.append('notebookcell_sphinxext')
_python_doc_base = 'http://docs.python.org/2.7'
intersphinx_mapping = {
_python_doc_base: None,
'http://docs.scipy.org/doc/numpy': None,
'http://docs.scipy.org/doc/scipy/reference': None,
'http://scikit-learn.org/stable': None
}
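# With the mapping above, Sphinx resolves cross-references such as
# :class:`numpy.ndarray` against the external numpy documentation when the
# docs are built.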
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'YANK'
copyright = u'2014, Copyright Stanford University, University of California Berkeley, Sloan Kettering Institute, and the authors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = yank.version.short_version
# The full version, including alpha/beta/rc tags.
release = yank.version.short_version
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "sphinx_rtd_theme"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
html_logo = '_static/yank.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'yankdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'Yank.tex', u'Yank Documentation',
u'John Chodera', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'yank', u'Yank Documentation',
[u'John Chodera'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Yank', u'Yank Documentation',
u'John Chodera', 'Yank', 'A flexible GPU-accelerated Python framework for alchemical free energy calculations.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
autodoc_member_order = 'bysource'
# stackoverflow.com/questions/12206334
numpydoc_show_class_members = False
| lgpl-3.0 |
kdebrab/pandas | pandas/tests/test_panel.py | 2 | 105438 | # -*- coding: utf-8 -*-
# pylint: disable=W0612,E1101
from warnings import catch_warnings
from datetime import datetime
import operator
import pytest
import numpy as np
from pandas.core.dtypes.common import is_float_dtype
from pandas import (Series, DataFrame, Index, date_range, isna, notna,
pivot, MultiIndex)
from pandas.core.nanops import nanall, nanany
from pandas.core.panel import Panel
from pandas.io.formats.printing import pprint_thing
from pandas import compat
from pandas.compat import range, lrange, StringIO, OrderedDict, signature
from pandas.tseries.offsets import BDay, MonthEnd
from pandas.util.testing import (assert_panel_equal, assert_frame_equal,
assert_series_equal, assert_almost_equal,
ensure_clean, makeMixedDataFrame,
makeCustomDataframe as mkdf)
import pandas.core.panel as panelm
import pandas.util.testing as tm
import pandas.util._test_decorators as td
def make_test_panel():
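    # Build a small random Panel via the pandas testing utilities and inject
    # some NaNs; warnings from the deprecated Panel API are captured rather
    # than emitted while it is constructed.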
with catch_warnings(record=True):
_panel = tm.makePanel()
tm.add_nans(_panel)
_panel = _panel.copy()
return _panel
class PanelTests(object):
panel = None
def test_pickle(self):
with catch_warnings(record=True):
unpickled = tm.round_trip_pickle(self.panel)
assert_frame_equal(unpickled['ItemA'], self.panel['ItemA'])
def test_rank(self):
with catch_warnings(record=True):
pytest.raises(NotImplementedError, lambda: self.panel.rank())
def test_cumsum(self):
with catch_warnings(record=True):
cumsum = self.panel.cumsum()
assert_frame_equal(cumsum['ItemA'], self.panel['ItemA'].cumsum())
def not_hashable(self):
with catch_warnings(record=True):
c_empty = Panel()
c = Panel(Panel([[[1]]]))
pytest.raises(TypeError, hash, c_empty)
pytest.raises(TypeError, hash, c)
class SafeForLongAndSparse(object):
def test_repr(self):
repr(self.panel)
def test_copy_names(self):
with catch_warnings(record=True):
for attr in ('major_axis', 'minor_axis'):
getattr(self.panel, attr).name = None
cp = self.panel.copy()
getattr(cp, attr).name = 'foo'
assert getattr(self.panel, attr).name is None
def test_iter(self):
tm.equalContents(list(self.panel), self.panel.items)
def test_count(self):
f = lambda s: notna(s).sum()
self._check_stat_op('count', f, obj=self.panel, has_skipna=False)
def test_sum(self):
self._check_stat_op('sum', np.sum, skipna_alternative=np.nansum)
def test_mean(self):
self._check_stat_op('mean', np.mean)
@td.skip_if_no("numpy", min_version="1.10.0")
def test_prod(self):
self._check_stat_op('prod', np.prod, skipna_alternative=np.nanprod)
def test_median(self):
def wrapper(x):
if isna(x).any():
return np.nan
return np.median(x)
self._check_stat_op('median', wrapper)
def test_min(self):
with catch_warnings(record=True):
self._check_stat_op('min', np.min)
def test_max(self):
with catch_warnings(record=True):
self._check_stat_op('max', np.max)
@td.skip_if_no_scipy
def test_skew(self):
from scipy.stats import skew
def this_skew(x):
if len(x) < 3:
return np.nan
return skew(x, bias=False)
self._check_stat_op('skew', this_skew)
def test_var(self):
def alt(x):
if len(x) < 2:
return np.nan
return np.var(x, ddof=1)
self._check_stat_op('var', alt)
def test_std(self):
def alt(x):
if len(x) < 2:
return np.nan
return np.std(x, ddof=1)
self._check_stat_op('std', alt)
def test_sem(self):
def alt(x):
if len(x) < 2:
return np.nan
return np.std(x, ddof=1) / np.sqrt(len(x))
self._check_stat_op('sem', alt)
def _check_stat_op(self, name, alternative, obj=None, has_skipna=True,
skipna_alternative=None):
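        # Compare the Panel reduction method `name` against the `alternative`
        # implementation along each axis (using `skipna_alternative` for the
        # NaN-skipping path), and check error handling for an invalid axis
        # and for the unimplemented numeric_only parameter.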
if obj is None:
obj = self.panel
# # set some NAs
# obj.loc[5:10] = np.nan
# obj.loc[15:20, -2:] = np.nan
f = getattr(obj, name)
if has_skipna:
skipna_wrapper = tm._make_skipna_wrapper(alternative,
skipna_alternative)
def wrapper(x):
return alternative(np.asarray(x))
for i in range(obj.ndim):
result = f(axis=i, skipna=False)
assert_frame_equal(result, obj.apply(wrapper, axis=i))
else:
skipna_wrapper = alternative
wrapper = alternative
for i in range(obj.ndim):
result = f(axis=i)
if name in ['sum', 'prod']:
assert_frame_equal(result, obj.apply(skipna_wrapper, axis=i))
pytest.raises(Exception, f, axis=obj.ndim)
# Unimplemented numeric_only parameter.
if 'numeric_only' in signature(f).args:
tm.assert_raises_regex(NotImplementedError, name, f,
numeric_only=True)
class SafeForSparse(object):
def test_get_axis(self):
assert (self.panel._get_axis(0) is self.panel.items)
assert (self.panel._get_axis(1) is self.panel.major_axis)
assert (self.panel._get_axis(2) is self.panel.minor_axis)
def test_set_axis(self):
new_items = Index(np.arange(len(self.panel.items)))
new_major = Index(np.arange(len(self.panel.major_axis)))
new_minor = Index(np.arange(len(self.panel.minor_axis)))
# ensure propagate to potentially prior-cached items too
item = self.panel['ItemA']
self.panel.items = new_items
if hasattr(self.panel, '_item_cache'):
assert 'ItemA' not in self.panel._item_cache
assert self.panel.items is new_items
# TODO: unused?
item = self.panel[0] # noqa
self.panel.major_axis = new_major
assert self.panel[0].index is new_major
assert self.panel.major_axis is new_major
# TODO: unused?
item = self.panel[0] # noqa
self.panel.minor_axis = new_minor
assert self.panel[0].columns is new_minor
assert self.panel.minor_axis is new_minor
def test_get_axis_number(self):
assert self.panel._get_axis_number('items') == 0
assert self.panel._get_axis_number('major') == 1
assert self.panel._get_axis_number('minor') == 2
with tm.assert_raises_regex(ValueError, "No axis named foo"):
self.panel._get_axis_number('foo')
with tm.assert_raises_regex(ValueError, "No axis named foo"):
self.panel.__ge__(self.panel, axis='foo')
def test_get_axis_name(self):
assert self.panel._get_axis_name(0) == 'items'
assert self.panel._get_axis_name(1) == 'major_axis'
assert self.panel._get_axis_name(2) == 'minor_axis'
def test_get_plane_axes(self):
# what to do here?
index, columns = self.panel._get_plane_axes('items')
index, columns = self.panel._get_plane_axes('major_axis')
index, columns = self.panel._get_plane_axes('minor_axis')
index, columns = self.panel._get_plane_axes(0)
def test_truncate(self):
with catch_warnings(record=True):
dates = self.panel.major_axis
start, end = dates[1], dates[5]
trunced = self.panel.truncate(start, end, axis='major')
expected = self.panel['ItemA'].truncate(start, end)
assert_frame_equal(trunced['ItemA'], expected)
trunced = self.panel.truncate(before=start, axis='major')
expected = self.panel['ItemA'].truncate(before=start)
assert_frame_equal(trunced['ItemA'], expected)
trunced = self.panel.truncate(after=end, axis='major')
expected = self.panel['ItemA'].truncate(after=end)
assert_frame_equal(trunced['ItemA'], expected)
def test_arith(self):
with catch_warnings(record=True):
self._test_op(self.panel, operator.add)
self._test_op(self.panel, operator.sub)
self._test_op(self.panel, operator.mul)
self._test_op(self.panel, operator.truediv)
self._test_op(self.panel, operator.floordiv)
self._test_op(self.panel, operator.pow)
self._test_op(self.panel, lambda x, y: y + x)
self._test_op(self.panel, lambda x, y: y - x)
self._test_op(self.panel, lambda x, y: y * x)
self._test_op(self.panel, lambda x, y: y / x)
self._test_op(self.panel, lambda x, y: y ** x)
self._test_op(self.panel, lambda x, y: x + y) # panel + 1
self._test_op(self.panel, lambda x, y: x - y) # panel - 1
self._test_op(self.panel, lambda x, y: x * y) # panel * 1
self._test_op(self.panel, lambda x, y: x / y) # panel / 1
self._test_op(self.panel, lambda x, y: x ** y) # panel ** 1
pytest.raises(Exception, self.panel.__add__,
self.panel['ItemA'])
@staticmethod
def _test_op(panel, op):
result = op(panel, 1)
assert_frame_equal(result['ItemA'], op(panel['ItemA'], 1))
def test_keys(self):
tm.equalContents(list(self.panel.keys()), self.panel.items)
def test_iteritems(self):
        # Test panel.iteritems()
# just test that it works
for k, v in self.panel.iteritems():
pass
assert len(list(self.panel.iteritems())) == len(self.panel.items)
def test_combineFrame(self):
with catch_warnings(record=True):
def check_op(op, name):
# items
df = self.panel['ItemA']
func = getattr(self.panel, name)
result = func(df, axis='items')
assert_frame_equal(
result['ItemB'], op(self.panel['ItemB'], df))
# major
xs = self.panel.major_xs(self.panel.major_axis[0])
result = func(xs, axis='major')
idx = self.panel.major_axis[1]
assert_frame_equal(result.major_xs(idx),
op(self.panel.major_xs(idx), xs))
# minor
xs = self.panel.minor_xs(self.panel.minor_axis[0])
result = func(xs, axis='minor')
idx = self.panel.minor_axis[1]
assert_frame_equal(result.minor_xs(idx),
op(self.panel.minor_xs(idx), xs))
ops = ['add', 'sub', 'mul', 'truediv', 'floordiv', 'pow', 'mod']
if not compat.PY3:
ops.append('div')
for op in ops:
try:
check_op(getattr(operator, op), op)
except:
pprint_thing("Failing operation: %r" % op)
raise
if compat.PY3:
try:
check_op(operator.truediv, 'div')
except:
pprint_thing("Failing operation: %r" % 'div')
raise
def test_combinePanel(self):
with catch_warnings(record=True):
result = self.panel.add(self.panel)
assert_panel_equal(result, self.panel * 2)
def test_neg(self):
with catch_warnings(record=True):
assert_panel_equal(-self.panel, self.panel * -1)
# issue 7692
def test_raise_when_not_implemented(self):
with catch_warnings(record=True):
p = Panel(np.arange(3 * 4 * 5).reshape(3, 4, 5),
items=['ItemA', 'ItemB', 'ItemC'],
major_axis=date_range('20130101', periods=4),
minor_axis=list('ABCDE'))
d = p.sum(axis=1).iloc[0]
ops = ['add', 'sub', 'mul', 'truediv',
'floordiv', 'div', 'mod', 'pow']
for op in ops:
with pytest.raises(NotImplementedError):
getattr(p, op)(d, axis=0)
def test_select(self):
with catch_warnings(record=True):
p = self.panel
# select items
result = p.select(lambda x: x in ('ItemA', 'ItemC'), axis='items')
expected = p.reindex(items=['ItemA', 'ItemC'])
assert_panel_equal(result, expected)
# select major_axis
result = p.select(lambda x: x >= datetime(
2000, 1, 15), axis='major')
new_major = p.major_axis[p.major_axis >= datetime(2000, 1, 15)]
expected = p.reindex(major=new_major)
assert_panel_equal(result, expected)
# select minor_axis
result = p.select(lambda x: x in ('D', 'A'), axis=2)
expected = p.reindex(minor=['A', 'D'])
assert_panel_equal(result, expected)
# corner case, empty thing
result = p.select(lambda x: x in ('foo', ), axis='items')
assert_panel_equal(result, p.reindex(items=[]))
def test_get_value(self):
for item in self.panel.items:
for mjr in self.panel.major_axis[::2]:
for mnr in self.panel.minor_axis:
with tm.assert_produces_warning(FutureWarning,
check_stacklevel=False):
result = self.panel.get_value(item, mjr, mnr)
expected = self.panel[item][mnr][mjr]
assert_almost_equal(result, expected)
def test_abs(self):
with catch_warnings(record=True):
result = self.panel.abs()
result2 = abs(self.panel)
expected = np.abs(self.panel)
assert_panel_equal(result, expected)
assert_panel_equal(result2, expected)
df = self.panel['ItemA']
result = df.abs()
result2 = abs(df)
expected = np.abs(df)
assert_frame_equal(result, expected)
assert_frame_equal(result2, expected)
s = df['A']
result = s.abs()
result2 = abs(s)
expected = np.abs(s)
assert_series_equal(result, expected)
assert_series_equal(result2, expected)
assert result.name == 'A'
assert result2.name == 'A'
class CheckIndexing(object):
def test_getitem(self):
pytest.raises(Exception, self.panel.__getitem__, 'ItemQ')
def test_delitem_and_pop(self):
with catch_warnings(record=True):
expected = self.panel['ItemA']
result = self.panel.pop('ItemA')
assert_frame_equal(expected, result)
assert 'ItemA' not in self.panel.items
del self.panel['ItemB']
assert 'ItemB' not in self.panel.items
pytest.raises(Exception, self.panel.__delitem__, 'ItemB')
values = np.empty((3, 3, 3))
values[0] = 0
values[1] = 1
values[2] = 2
panel = Panel(values, lrange(3), lrange(3), lrange(3))
# did we delete the right row?
panelc = panel.copy()
del panelc[0]
tm.assert_frame_equal(panelc[1], panel[1])
tm.assert_frame_equal(panelc[2], panel[2])
panelc = panel.copy()
del panelc[1]
tm.assert_frame_equal(panelc[0], panel[0])
tm.assert_frame_equal(panelc[2], panel[2])
panelc = panel.copy()
del panelc[2]
tm.assert_frame_equal(panelc[1], panel[1])
tm.assert_frame_equal(panelc[0], panel[0])
def test_setitem(self):
with catch_warnings(record=True):
lp = self.panel.filter(['ItemA', 'ItemB']).to_frame()
with pytest.raises(ValueError):
self.panel['ItemE'] = lp
# DataFrame
df = self.panel['ItemA'][2:].filter(items=['A', 'B'])
self.panel['ItemF'] = df
self.panel['ItemE'] = df
df2 = self.panel['ItemF']
assert_frame_equal(df, df2.reindex(
index=df.index, columns=df.columns))
# scalar
self.panel['ItemG'] = 1
self.panel['ItemE'] = True
assert self.panel['ItemG'].values.dtype == np.int64
assert self.panel['ItemE'].values.dtype == np.bool_
# object dtype
self.panel['ItemQ'] = 'foo'
assert self.panel['ItemQ'].values.dtype == np.object_
# boolean dtype
self.panel['ItemP'] = self.panel['ItemA'] > 0
assert self.panel['ItemP'].values.dtype == np.bool_
pytest.raises(TypeError, self.panel.__setitem__, 'foo',
self.panel.loc[['ItemP']])
# bad shape
p = Panel(np.random.randn(4, 3, 2))
with tm.assert_raises_regex(ValueError,
r"shape of value must be "
r"\(3, 2\), shape of given "
r"object was \(4, 2\)"):
p[0] = np.random.randn(4, 2)
def test_setitem_ndarray(self):
with catch_warnings(record=True):
timeidx = date_range(start=datetime(2009, 1, 1),
end=datetime(2009, 12, 31),
freq=MonthEnd())
lons_coarse = np.linspace(-177.5, 177.5, 72)
lats_coarse = np.linspace(-87.5, 87.5, 36)
P = Panel(items=timeidx, major_axis=lons_coarse,
minor_axis=lats_coarse)
data = np.random.randn(72 * 36).reshape((72, 36))
key = datetime(2009, 2, 28)
P[key] = data
assert_almost_equal(P[key].values, data)
def test_set_minor_major(self):
with catch_warnings(record=True):
# GH 11014
df1 = DataFrame(['a', 'a', 'a', np.nan, 'a', np.nan])
df2 = DataFrame([1.0, np.nan, 1.0, np.nan, 1.0, 1.0])
panel = Panel({'Item1': df1, 'Item2': df2})
newminor = notna(panel.iloc[:, :, 0])
panel.loc[:, :, 'NewMinor'] = newminor
assert_frame_equal(panel.loc[:, :, 'NewMinor'],
newminor.astype(object))
newmajor = notna(panel.iloc[:, 0, :])
panel.loc[:, 'NewMajor', :] = newmajor
assert_frame_equal(panel.loc[:, 'NewMajor', :],
newmajor.astype(object))
def test_major_xs(self):
with catch_warnings(record=True):
ref = self.panel['ItemA']
idx = self.panel.major_axis[5]
xs = self.panel.major_xs(idx)
result = xs['ItemA']
assert_series_equal(result, ref.xs(idx), check_names=False)
assert result.name == 'ItemA'
# not contained
idx = self.panel.major_axis[0] - BDay()
pytest.raises(Exception, self.panel.major_xs, idx)
def test_major_xs_mixed(self):
with catch_warnings(record=True):
self.panel['ItemD'] = 'foo'
xs = self.panel.major_xs(self.panel.major_axis[0])
assert xs['ItemA'].dtype == np.float64
assert xs['ItemD'].dtype == np.object_
def test_minor_xs(self):
with catch_warnings(record=True):
ref = self.panel['ItemA']
idx = self.panel.minor_axis[1]
xs = self.panel.minor_xs(idx)
assert_series_equal(xs['ItemA'], ref[idx], check_names=False)
# not contained
pytest.raises(Exception, self.panel.minor_xs, 'E')
def test_minor_xs_mixed(self):
with catch_warnings(record=True):
self.panel['ItemD'] = 'foo'
xs = self.panel.minor_xs('D')
assert xs['ItemA'].dtype == np.float64
assert xs['ItemD'].dtype == np.object_
def test_xs(self):
with catch_warnings(record=True):
itemA = self.panel.xs('ItemA', axis=0)
expected = self.panel['ItemA']
tm.assert_frame_equal(itemA, expected)
# Get a view by default.
itemA_view = self.panel.xs('ItemA', axis=0)
itemA_view.values[:] = np.nan
assert np.isnan(self.panel['ItemA'].values).all()
# Mixed-type yields a copy.
self.panel['strings'] = 'foo'
result = self.panel.xs('D', axis=2)
assert result._is_copy is not None
def test_getitem_fancy_labels(self):
with catch_warnings(record=True):
p = self.panel
items = p.items[[1, 0]]
dates = p.major_axis[::2]
cols = ['D', 'C', 'F']
# all 3 specified
assert_panel_equal(p.loc[items, dates, cols],
p.reindex(items=items, major=dates, minor=cols))
# 2 specified
assert_panel_equal(p.loc[:, dates, cols],
p.reindex(major=dates, minor=cols))
assert_panel_equal(p.loc[items, :, cols],
p.reindex(items=items, minor=cols))
assert_panel_equal(p.loc[items, dates, :],
p.reindex(items=items, major=dates))
# only 1
assert_panel_equal(p.loc[items, :, :], p.reindex(items=items))
assert_panel_equal(p.loc[:, dates, :], p.reindex(major=dates))
assert_panel_equal(p.loc[:, :, cols], p.reindex(minor=cols))
def test_getitem_fancy_slice(self):
pass
def test_getitem_fancy_ints(self):
p = self.panel
# #1603
result = p.iloc[:, -1, :]
expected = p.loc[:, p.major_axis[-1], :]
assert_frame_equal(result, expected)
def test_getitem_fancy_xs(self):
p = self.panel
item = 'ItemB'
date = p.major_axis[5]
col = 'C'
# get DataFrame
# item
assert_frame_equal(p.loc[item], p[item])
assert_frame_equal(p.loc[item, :], p[item])
assert_frame_equal(p.loc[item, :, :], p[item])
# major axis, axis=1
assert_frame_equal(p.loc[:, date], p.major_xs(date))
assert_frame_equal(p.loc[:, date, :], p.major_xs(date))
# minor axis, axis=2
assert_frame_equal(p.loc[:, :, 'C'], p.minor_xs('C'))
# get Series
assert_series_equal(p.loc[item, date], p[item].loc[date])
assert_series_equal(p.loc[item, date, :], p[item].loc[date])
assert_series_equal(p.loc[item, :, col], p[item][col])
assert_series_equal(p.loc[:, date, col], p.major_xs(date).loc[col])
def test_getitem_fancy_xs_check_view(self):
with catch_warnings(record=True):
item = 'ItemB'
date = self.panel.major_axis[5]
# make sure it's always a view
NS = slice(None, None)
# DataFrames
comp = assert_frame_equal
self._check_view(item, comp)
self._check_view((item, NS), comp)
self._check_view((item, NS, NS), comp)
self._check_view((NS, date), comp)
self._check_view((NS, date, NS), comp)
self._check_view((NS, NS, 'C'), comp)
# Series
comp = assert_series_equal
self._check_view((item, date), comp)
self._check_view((item, date, NS), comp)
self._check_view((item, NS, 'C'), comp)
self._check_view((NS, date, 'C'), comp)
def test_getitem_callable(self):
with catch_warnings(record=True):
p = self.panel
# GH 12533
assert_frame_equal(p[lambda x: 'ItemB'], p.loc['ItemB'])
assert_panel_equal(p[lambda x: ['ItemB', 'ItemC']],
p.loc[['ItemB', 'ItemC']])
def test_ix_setitem_slice_dataframe(self):
with catch_warnings(record=True):
a = Panel(items=[1, 2, 3], major_axis=[11, 22, 33],
minor_axis=[111, 222, 333])
b = DataFrame(np.random.randn(2, 3), index=[111, 333],
columns=[1, 2, 3])
a.loc[:, 22, [111, 333]] = b
assert_frame_equal(a.loc[:, 22, [111, 333]], b)
def test_ix_align(self):
with catch_warnings(record=True):
from pandas import Series
b = Series(np.random.randn(10), name=0)
b.sort_values()
df_orig = Panel(np.random.randn(3, 10, 2))
df = df_orig.copy()
df.loc[0, :, 0] = b
assert_series_equal(df.loc[0, :, 0].reindex(b.index), b)
df = df_orig.swapaxes(0, 1)
df.loc[:, 0, 0] = b
assert_series_equal(df.loc[:, 0, 0].reindex(b.index), b)
df = df_orig.swapaxes(1, 2)
df.loc[0, 0, :] = b
assert_series_equal(df.loc[0, 0, :].reindex(b.index), b)
def test_ix_frame_align(self):
with catch_warnings(record=True):
p_orig = tm.makePanel()
df = p_orig.iloc[0].copy()
assert_frame_equal(p_orig['ItemA'], df)
p = p_orig.copy()
p.iloc[0, :, :] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.iloc[0] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.iloc[0, :, :] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.iloc[0] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.loc['ItemA'] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.loc['ItemA', :, :] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p['ItemA'] = df
assert_panel_equal(p, p_orig)
p = p_orig.copy()
p.iloc[0, [0, 1, 3, 5], -2:] = df
out = p.iloc[0, [0, 1, 3, 5], -2:]
assert_frame_equal(out, df.iloc[[0, 1, 3, 5], [2, 3]])
            # GH3830, panel assignment by values/frame
for dtype in ['float64', 'int64']:
panel = Panel(np.arange(40).reshape((2, 4, 5)),
items=['a1', 'a2'], dtype=dtype)
df1 = panel.iloc[0]
df2 = panel.iloc[1]
tm.assert_frame_equal(panel.loc['a1'], df1)
tm.assert_frame_equal(panel.loc['a2'], df2)
# Assignment by Value Passes for 'a2'
panel.loc['a2'] = df1.values
tm.assert_frame_equal(panel.loc['a1'], df1)
tm.assert_frame_equal(panel.loc['a2'], df1)
# Assignment by DataFrame Ok w/o loc 'a2'
panel['a2'] = df2
tm.assert_frame_equal(panel.loc['a1'], df1)
tm.assert_frame_equal(panel.loc['a2'], df2)
# Assignment by DataFrame Fails for 'a2'
panel.loc['a2'] = df2
tm.assert_frame_equal(panel.loc['a1'], df1)
tm.assert_frame_equal(panel.loc['a2'], df2)
def _check_view(self, indexer, comp):
cp = self.panel.copy()
obj = cp.loc[indexer]
obj.values[:] = 0
assert (obj.values == 0).all()
comp(cp.loc[indexer].reindex_like(obj), obj)
def test_logical_with_nas(self):
with catch_warnings(record=True):
d = Panel({'ItemA': {'a': [np.nan, False]},
'ItemB': {'a': [True, True]}})
result = d['ItemA'] | d['ItemB']
expected = DataFrame({'a': [np.nan, True]})
assert_frame_equal(result, expected)
# this is autodowncasted here
result = d['ItemA'].fillna(False) | d['ItemB']
expected = DataFrame({'a': [True, True]})
assert_frame_equal(result, expected)
def test_neg(self):
with catch_warnings(record=True):
assert_panel_equal(-self.panel, -1 * self.panel)
def test_invert(self):
with catch_warnings(record=True):
assert_panel_equal(-(self.panel < 0), ~(self.panel < 0))
def test_comparisons(self):
with catch_warnings(record=True):
p1 = tm.makePanel()
p2 = tm.makePanel()
tp = p1.reindex(items=p1.items + ['foo'])
df = p1[p1.items[0]]
def test_comp(func):
# versus same index
result = func(p1, p2)
tm.assert_numpy_array_equal(result.values,
func(p1.values, p2.values))
# versus non-indexed same objs
pytest.raises(Exception, func, p1, tp)
# versus different objs
pytest.raises(Exception, func, p1, df)
# versus scalar
result3 = func(self.panel, 0)
tm.assert_numpy_array_equal(result3.values,
func(self.panel.values, 0))
with np.errstate(invalid='ignore'):
test_comp(operator.eq)
test_comp(operator.ne)
test_comp(operator.lt)
test_comp(operator.gt)
test_comp(operator.ge)
test_comp(operator.le)
def test_get_value(self):
with catch_warnings(record=True):
for item in self.panel.items:
for mjr in self.panel.major_axis[::2]:
for mnr in self.panel.minor_axis:
result = self.panel.get_value(item, mjr, mnr)
expected = self.panel[item][mnr][mjr]
assert_almost_equal(result, expected)
with tm.assert_raises_regex(TypeError,
"There must be an argument "
"for each axis"):
self.panel.get_value('a')
def test_set_value(self):
with catch_warnings(record=True):
for item in self.panel.items:
for mjr in self.panel.major_axis[::2]:
for mnr in self.panel.minor_axis:
self.panel.set_value(item, mjr, mnr, 1.)
tm.assert_almost_equal(self.panel[item][mnr][mjr], 1.)
# resize
res = self.panel.set_value('ItemE', 'foo', 'bar', 1.5)
assert isinstance(res, Panel)
assert res is not self.panel
assert res.get_value('ItemE', 'foo', 'bar') == 1.5
res3 = self.panel.set_value('ItemE', 'foobar', 'baz', 5)
assert is_float_dtype(res3['ItemE'].values)
msg = ("There must be an argument for each "
"axis plus the value provided")
with tm.assert_raises_regex(TypeError, msg):
self.panel.set_value('a')
class TestPanel(PanelTests, CheckIndexing, SafeForLongAndSparse,
SafeForSparse):
def setup_method(self, method):
self.panel = make_test_panel()
self.panel.major_axis.name = None
self.panel.minor_axis.name = None
self.panel.items.name = None
def test_constructor(self):
with catch_warnings(record=True):
# with BlockManager
wp = Panel(self.panel._data)
assert wp._data is self.panel._data
wp = Panel(self.panel._data, copy=True)
assert wp._data is not self.panel._data
tm.assert_panel_equal(wp, self.panel)
            # strings handled properly
wp = Panel([[['foo', 'foo', 'foo', ], ['foo', 'foo', 'foo']]])
assert wp.values.dtype == np.object_
vals = self.panel.values
# no copy
wp = Panel(vals)
assert wp.values is vals
# copy
wp = Panel(vals, copy=True)
assert wp.values is not vals
# GH #8285, test when scalar data is used to construct a Panel
# if dtype is not passed, it should be inferred
value_and_dtype = [(1, 'int64'), (3.14, 'float64'),
('foo', np.object_)]
for (val, dtype) in value_and_dtype:
wp = Panel(val, items=range(2), major_axis=range(3),
minor_axis=range(4))
vals = np.empty((2, 3, 4), dtype=dtype)
vals.fill(val)
tm.assert_panel_equal(wp, Panel(vals, dtype=dtype))
# test the case when dtype is passed
wp = Panel(1, items=range(2), major_axis=range(3),
minor_axis=range(4),
dtype='float32')
vals = np.empty((2, 3, 4), dtype='float32')
vals.fill(1)
tm.assert_panel_equal(wp, Panel(vals, dtype='float32'))
def test_constructor_cast(self):
with catch_warnings(record=True):
zero_filled = self.panel.fillna(0)
casted = Panel(zero_filled._data, dtype=int)
casted2 = Panel(zero_filled.values, dtype=int)
exp_values = zero_filled.values.astype(int)
assert_almost_equal(casted.values, exp_values)
assert_almost_equal(casted2.values, exp_values)
casted = Panel(zero_filled._data, dtype=np.int32)
casted2 = Panel(zero_filled.values, dtype=np.int32)
exp_values = zero_filled.values.astype(np.int32)
assert_almost_equal(casted.values, exp_values)
assert_almost_equal(casted2.values, exp_values)
# can't cast
data = [[['foo', 'bar', 'baz']]]
pytest.raises(ValueError, Panel, data, dtype=float)
def test_constructor_empty_panel(self):
with catch_warnings(record=True):
empty = Panel()
assert len(empty.items) == 0
assert len(empty.major_axis) == 0
assert len(empty.minor_axis) == 0
def test_constructor_observe_dtype(self):
with catch_warnings(record=True):
# GH #411
panel = Panel(items=lrange(3), major_axis=lrange(3),
minor_axis=lrange(3), dtype='O')
assert panel.values.dtype == np.object_
def test_constructor_dtypes(self):
with catch_warnings(record=True):
# GH #797
def _check_dtype(panel, dtype):
for i in panel.items:
assert panel[i].values.dtype.name == dtype
# only nan holding types allowed here
for dtype in ['float64', 'float32', 'object']:
panel = Panel(items=lrange(2), major_axis=lrange(10),
minor_axis=lrange(5), dtype=dtype)
_check_dtype(panel, dtype)
for dtype in ['float64', 'float32', 'int64', 'int32', 'object']:
panel = Panel(np.array(np.random.randn(2, 10, 5), dtype=dtype),
items=lrange(2),
major_axis=lrange(10),
minor_axis=lrange(5), dtype=dtype)
_check_dtype(panel, dtype)
for dtype in ['float64', 'float32', 'int64', 'int32', 'object']:
panel = Panel(np.array(np.random.randn(2, 10, 5), dtype='O'),
items=lrange(2),
major_axis=lrange(10),
minor_axis=lrange(5), dtype=dtype)
_check_dtype(panel, dtype)
for dtype in ['float64', 'float32', 'int64', 'int32', 'object']:
panel = Panel(
np.random.randn(2, 10, 5),
items=lrange(2), major_axis=lrange(10),
minor_axis=lrange(5),
dtype=dtype)
_check_dtype(panel, dtype)
for dtype in ['float64', 'float32', 'int64', 'int32', 'object']:
df1 = DataFrame(np.random.randn(2, 5),
index=lrange(2), columns=lrange(5))
df2 = DataFrame(np.random.randn(2, 5),
index=lrange(2), columns=lrange(5))
panel = Panel.from_dict({'a': df1, 'b': df2}, dtype=dtype)
_check_dtype(panel, dtype)
def test_constructor_fails_with_not_3d_input(self):
with catch_warnings(record=True):
with tm.assert_raises_regex(ValueError, "The number of dimensions required is 3"): # noqa
Panel(np.random.randn(10, 2))
def test_consolidate(self):
with catch_warnings(record=True):
assert self.panel._data.is_consolidated()
self.panel['foo'] = 1.
assert not self.panel._data.is_consolidated()
panel = self.panel._consolidate()
assert panel._data.is_consolidated()
def test_ctor_dict(self):
with catch_warnings(record=True):
itema = self.panel['ItemA']
itemb = self.panel['ItemB']
d = {'A': itema, 'B': itemb[5:]}
d2 = {'A': itema._series, 'B': itemb[5:]._series}
d3 = {'A': None,
'B': DataFrame(itemb[5:]._series),
'C': DataFrame(itema._series)}
wp = Panel.from_dict(d)
wp2 = Panel.from_dict(d2) # nested Dict
# TODO: unused?
wp3 = Panel.from_dict(d3) # noqa
tm.assert_index_equal(wp.major_axis, self.panel.major_axis)
assert_panel_equal(wp, wp2)
# intersect
wp = Panel.from_dict(d, intersect=True)
tm.assert_index_equal(wp.major_axis, itemb.index[5:])
# use constructor
assert_panel_equal(Panel(d), Panel.from_dict(d))
assert_panel_equal(Panel(d2), Panel.from_dict(d2))
assert_panel_equal(Panel(d3), Panel.from_dict(d3))
# a pathological case
d4 = {'A': None, 'B': None}
# TODO: unused?
wp4 = Panel.from_dict(d4) # noqa
assert_panel_equal(Panel(d4), Panel(items=['A', 'B']))
# cast
dcasted = {k: v.reindex(wp.major_axis).fillna(0)
for k, v in compat.iteritems(d)}
result = Panel(dcasted, dtype=int)
expected = Panel({k: v.astype(int)
for k, v in compat.iteritems(dcasted)})
assert_panel_equal(result, expected)
result = Panel(dcasted, dtype=np.int32)
expected = Panel({k: v.astype(np.int32)
for k, v in compat.iteritems(dcasted)})
assert_panel_equal(result, expected)
def test_constructor_dict_mixed(self):
with catch_warnings(record=True):
data = {k: v.values for k, v in self.panel.iteritems()}
result = Panel(data)
exp_major = Index(np.arange(len(self.panel.major_axis)))
tm.assert_index_equal(result.major_axis, exp_major)
result = Panel(data, items=self.panel.items,
major_axis=self.panel.major_axis,
minor_axis=self.panel.minor_axis)
assert_panel_equal(result, self.panel)
data['ItemC'] = self.panel['ItemC']
result = Panel(data)
assert_panel_equal(result, self.panel)
# corner, blow up
data['ItemB'] = data['ItemB'][:-1]
pytest.raises(Exception, Panel, data)
data['ItemB'] = self.panel['ItemB'].values[:, :-1]
pytest.raises(Exception, Panel, data)
def test_ctor_orderedDict(self):
with catch_warnings(record=True):
keys = list(set(np.random.randint(0, 5000, 100)))[
:50] # unique random int keys
d = OrderedDict([(k, mkdf(10, 5)) for k in keys])
p = Panel(d)
assert list(p.items) == keys
p = Panel.from_dict(d)
assert list(p.items) == keys
def test_constructor_resize(self):
with catch_warnings(record=True):
data = self.panel._data
items = self.panel.items[:-1]
major = self.panel.major_axis[:-1]
minor = self.panel.minor_axis[:-1]
result = Panel(data, items=items,
major_axis=major, minor_axis=minor)
expected = self.panel.reindex(
items=items, major=major, minor=minor)
assert_panel_equal(result, expected)
result = Panel(data, items=items, major_axis=major)
expected = self.panel.reindex(items=items, major=major)
assert_panel_equal(result, expected)
result = Panel(data, items=items)
expected = self.panel.reindex(items=items)
assert_panel_equal(result, expected)
result = Panel(data, minor_axis=minor)
expected = self.panel.reindex(minor=minor)
assert_panel_equal(result, expected)
def test_from_dict_mixed_orient(self):
with catch_warnings(record=True):
df = tm.makeDataFrame()
df['foo'] = 'bar'
data = {'k1': df, 'k2': df}
panel = Panel.from_dict(data, orient='minor')
assert panel['foo'].values.dtype == np.object_
assert panel['A'].values.dtype == np.float64
def test_constructor_error_msgs(self):
with catch_warnings(record=True):
def testit():
Panel(np.random.randn(3, 4, 5),
lrange(4), lrange(5), lrange(5))
tm.assert_raises_regex(ValueError,
r"Shape of passed values is "
r"\(3, 4, 5\), indices imply "
r"\(4, 5, 5\)",
testit)
def testit():
Panel(np.random.randn(3, 4, 5),
lrange(5), lrange(4), lrange(5))
tm.assert_raises_regex(ValueError,
r"Shape of passed values is "
r"\(3, 4, 5\), indices imply "
r"\(5, 4, 5\)",
testit)
def testit():
Panel(np.random.randn(3, 4, 5),
lrange(5), lrange(5), lrange(4))
tm.assert_raises_regex(ValueError,
r"Shape of passed values is "
r"\(3, 4, 5\), indices imply "
r"\(5, 5, 4\)",
testit)
def test_conform(self):
with catch_warnings(record=True):
df = self.panel['ItemA'][:-5].filter(items=['A', 'B'])
conformed = self.panel.conform(df)
tm.assert_index_equal(conformed.index, self.panel.major_axis)
tm.assert_index_equal(conformed.columns, self.panel.minor_axis)
def test_convert_objects(self):
with catch_warnings(record=True):
# GH 4937
p = Panel(dict(A=dict(a=['1', '1.0'])))
expected = Panel(dict(A=dict(a=[1, 1.0])))
result = p._convert(numeric=True, coerce=True)
assert_panel_equal(result, expected)
def test_dtypes(self):
result = self.panel.dtypes
expected = Series(np.dtype('float64'), index=self.panel.items)
assert_series_equal(result, expected)
def test_astype(self):
with catch_warnings(record=True):
# GH7271
data = np.array([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])
panel = Panel(data, ['a', 'b'], ['c', 'd'], ['e', 'f'])
str_data = np.array([[['1', '2'], ['3', '4']],
[['5', '6'], ['7', '8']]])
expected = Panel(str_data, ['a', 'b'], ['c', 'd'], ['e', 'f'])
assert_panel_equal(panel.astype(str), expected)
pytest.raises(NotImplementedError, panel.astype, {0: str})
def test_apply(self):
with catch_warnings(record=True):
# GH1148
# ufunc
applied = self.panel.apply(np.sqrt)
with np.errstate(invalid='ignore'):
expected = np.sqrt(self.panel.values)
assert_almost_equal(applied.values, expected)
# ufunc same shape
result = self.panel.apply(lambda x: x * 2, axis='items')
expected = self.panel * 2
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2, axis='major_axis')
expected = self.panel * 2
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2, axis='minor_axis')
expected = self.panel * 2
assert_panel_equal(result, expected)
# reduction to DataFrame
result = self.panel.apply(lambda x: x.dtype, axis='items')
expected = DataFrame(np.dtype('float64'),
index=self.panel.major_axis,
columns=self.panel.minor_axis)
assert_frame_equal(result, expected)
result = self.panel.apply(lambda x: x.dtype, axis='major_axis')
expected = DataFrame(np.dtype('float64'),
index=self.panel.minor_axis,
columns=self.panel.items)
assert_frame_equal(result, expected)
result = self.panel.apply(lambda x: x.dtype, axis='minor_axis')
expected = DataFrame(np.dtype('float64'),
index=self.panel.major_axis,
columns=self.panel.items)
assert_frame_equal(result, expected)
# reductions via other dims
expected = self.panel.sum(0)
result = self.panel.apply(lambda x: x.sum(), axis='items')
assert_frame_equal(result, expected)
expected = self.panel.sum(1)
result = self.panel.apply(lambda x: x.sum(), axis='major_axis')
assert_frame_equal(result, expected)
expected = self.panel.sum(2)
result = self.panel.apply(lambda x: x.sum(), axis='minor_axis')
assert_frame_equal(result, expected)
# pass kwargs
result = self.panel.apply(
lambda x, y: x.sum() + y, axis='items', y=5)
expected = self.panel.sum(0) + 5
assert_frame_equal(result, expected)
def test_apply_slabs(self):
with catch_warnings(record=True):
# same shape as original
result = self.panel.apply(lambda x: x * 2,
axis=['items', 'major_axis'])
expected = (self.panel * 2).transpose('minor_axis', 'major_axis',
'items')
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2,
axis=['major_axis', 'items'])
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2,
axis=['items', 'minor_axis'])
expected = (self.panel * 2).transpose('major_axis', 'minor_axis',
'items')
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2,
axis=['minor_axis', 'items'])
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2,
axis=['major_axis', 'minor_axis'])
expected = self.panel * 2
assert_panel_equal(result, expected)
result = self.panel.apply(lambda x: x * 2,
axis=['minor_axis', 'major_axis'])
assert_panel_equal(result, expected)
# reductions
result = self.panel.apply(lambda x: x.sum(0), axis=[
'items', 'major_axis'
])
expected = self.panel.sum(1).T
assert_frame_equal(result, expected)
# make sure that we don't trigger any warnings
with catch_warnings(record=True):
result = self.panel.apply(lambda x: x.sum(1), axis=[
'items', 'major_axis'
])
expected = self.panel.sum(0)
assert_frame_equal(result, expected)
# transforms
f = lambda x: ((x.T - x.mean(1)) / x.std(1)).T
# make sure that we don't trigger any warnings
result = self.panel.apply(f, axis=['items', 'major_axis'])
expected = Panel({ax: f(self.panel.loc[:, :, ax])
for ax in self.panel.minor_axis})
assert_panel_equal(result, expected)
result = self.panel.apply(f, axis=['major_axis', 'minor_axis'])
expected = Panel({ax: f(self.panel.loc[ax])
for ax in self.panel.items})
assert_panel_equal(result, expected)
result = self.panel.apply(f, axis=['minor_axis', 'items'])
expected = Panel({ax: f(self.panel.loc[:, ax])
for ax in self.panel.major_axis})
assert_panel_equal(result, expected)
# with multi-indexes
# GH7469
index = MultiIndex.from_tuples([('one', 'a'), ('one', 'b'), (
'two', 'a'), ('two', 'b')])
dfa = DataFrame(np.array(np.arange(12, dtype='int64')).reshape(
4, 3), columns=list("ABC"), index=index)
dfb = DataFrame(np.array(np.arange(10, 22, dtype='int64')).reshape(
4, 3), columns=list("ABC"), index=index)
p = Panel({'f': dfa, 'g': dfb})
result = p.apply(lambda x: x.sum(), axis=0)
            # on windows this will be int32
result = result.astype('int64')
expected = p.sum(0)
assert_frame_equal(result, expected)
def test_apply_no_or_zero_ndim(self):
with catch_warnings(record=True):
# GH10332
self.panel = Panel(np.random.rand(5, 5, 5))
result_int = self.panel.apply(lambda df: 0, axis=[1, 2])
result_float = self.panel.apply(lambda df: 0.0, axis=[1, 2])
result_int64 = self.panel.apply(
lambda df: np.int64(0), axis=[1, 2])
result_float64 = self.panel.apply(lambda df: np.float64(0.0),
axis=[1, 2])
expected_int = expected_int64 = Series([0] * 5)
expected_float = expected_float64 = Series([0.0] * 5)
assert_series_equal(result_int, expected_int)
assert_series_equal(result_int64, expected_int64)
assert_series_equal(result_float, expected_float)
assert_series_equal(result_float64, expected_float64)
def test_reindex(self):
with catch_warnings(record=True):
ref = self.panel['ItemB']
# items
result = self.panel.reindex(items=['ItemA', 'ItemB'])
assert_frame_equal(result['ItemB'], ref)
# major
new_major = list(self.panel.major_axis[:10])
result = self.panel.reindex(major=new_major)
assert_frame_equal(result['ItemB'], ref.reindex(index=new_major))
            # raise an exception if both major and major_axis are passed
pytest.raises(Exception, self.panel.reindex,
major_axis=new_major,
major=new_major)
# minor
new_minor = list(self.panel.minor_axis[:2])
result = self.panel.reindex(minor=new_minor)
assert_frame_equal(result['ItemB'], ref.reindex(columns=new_minor))
            # raise an exception if both minor and minor_axis are passed
pytest.raises(Exception, self.panel.reindex,
minor_axis=new_minor,
minor=new_minor)
# this ok
result = self.panel.reindex()
assert_panel_equal(result, self.panel)
assert result is not self.panel
# with filling
smaller_major = self.panel.major_axis[::5]
smaller = self.panel.reindex(major=smaller_major)
larger = smaller.reindex(major=self.panel.major_axis, method='pad')
assert_frame_equal(larger.major_xs(self.panel.major_axis[1]),
smaller.major_xs(smaller_major[0]))
# don't necessarily copy
result = self.panel.reindex(
major=self.panel.major_axis, copy=False)
assert_panel_equal(result, self.panel)
assert result is self.panel
def test_reindex_axis_style(self):
with catch_warnings(record=True):
panel = Panel(np.random.rand(5, 5, 5))
expected0 = Panel(panel.values).iloc[[0, 1]]
expected1 = Panel(panel.values).iloc[:, [0, 1]]
expected2 = Panel(panel.values).iloc[:, :, [0, 1]]
result = panel.reindex([0, 1], axis=0)
assert_panel_equal(result, expected0)
result = panel.reindex([0, 1], axis=1)
assert_panel_equal(result, expected1)
result = panel.reindex([0, 1], axis=2)
assert_panel_equal(result, expected2)
result = panel.reindex([0, 1], axis=2)
assert_panel_equal(result, expected2)
def test_reindex_multi(self):
with catch_warnings(record=True):
# with and without copy full reindexing
result = self.panel.reindex(
items=self.panel.items,
major=self.panel.major_axis,
minor=self.panel.minor_axis, copy=False)
assert result.items is self.panel.items
assert result.major_axis is self.panel.major_axis
assert result.minor_axis is self.panel.minor_axis
result = self.panel.reindex(
items=self.panel.items,
major=self.panel.major_axis,
minor=self.panel.minor_axis, copy=False)
assert_panel_equal(result, self.panel)
# multi-axis indexing consistency
# GH 5900
df = DataFrame(np.random.randn(4, 3))
p = Panel({'Item1': df})
expected = Panel({'Item1': df})
expected['Item2'] = np.nan
items = ['Item1', 'Item2']
major_axis = np.arange(4)
minor_axis = np.arange(3)
results = []
results.append(p.reindex(items=items, major_axis=major_axis,
copy=True))
results.append(p.reindex(items=items, major_axis=major_axis,
copy=False))
results.append(p.reindex(items=items, minor_axis=minor_axis,
copy=True))
results.append(p.reindex(items=items, minor_axis=minor_axis,
copy=False))
results.append(p.reindex(items=items, major_axis=major_axis,
minor_axis=minor_axis, copy=True))
results.append(p.reindex(items=items, major_axis=major_axis,
minor_axis=minor_axis, copy=False))
for i, r in enumerate(results):
assert_panel_equal(expected, r)
def test_reindex_like(self):
with catch_warnings(record=True):
# reindex_like
smaller = self.panel.reindex(items=self.panel.items[:-1],
major=self.panel.major_axis[:-1],
minor=self.panel.minor_axis[:-1])
smaller_like = self.panel.reindex_like(smaller)
assert_panel_equal(smaller, smaller_like)
def test_take(self):
with catch_warnings(record=True):
# axis == 0
result = self.panel.take([2, 0, 1], axis=0)
expected = self.panel.reindex(items=['ItemC', 'ItemA', 'ItemB'])
assert_panel_equal(result, expected)
# axis >= 1
result = self.panel.take([3, 0, 1, 2], axis=2)
expected = self.panel.reindex(minor=['D', 'A', 'B', 'C'])
assert_panel_equal(result, expected)
# neg indices ok
expected = self.panel.reindex(minor=['D', 'D', 'B', 'C'])
result = self.panel.take([3, -1, 1, 2], axis=2)
assert_panel_equal(result, expected)
pytest.raises(Exception, self.panel.take, [4, 0, 1, 2], axis=2)
def test_sort_index(self):
with catch_warnings(record=True):
import random
ritems = list(self.panel.items)
rmajor = list(self.panel.major_axis)
rminor = list(self.panel.minor_axis)
random.shuffle(ritems)
random.shuffle(rmajor)
random.shuffle(rminor)
random_order = self.panel.reindex(items=ritems)
sorted_panel = random_order.sort_index(axis=0)
assert_panel_equal(sorted_panel, self.panel)
# descending
random_order = self.panel.reindex(items=ritems)
sorted_panel = random_order.sort_index(axis=0, ascending=False)
assert_panel_equal(
sorted_panel,
self.panel.reindex(items=self.panel.items[::-1]))
random_order = self.panel.reindex(major=rmajor)
sorted_panel = random_order.sort_index(axis=1)
assert_panel_equal(sorted_panel, self.panel)
random_order = self.panel.reindex(minor=rminor)
sorted_panel = random_order.sort_index(axis=2)
assert_panel_equal(sorted_panel, self.panel)
def test_fillna(self):
with catch_warnings(record=True):
filled = self.panel.fillna(0)
assert np.isfinite(filled.values).all()
filled = self.panel.fillna(method='backfill')
assert_frame_equal(filled['ItemA'],
self.panel['ItemA'].fillna(method='backfill'))
panel = self.panel.copy()
panel['str'] = 'foo'
filled = panel.fillna(method='backfill')
assert_frame_equal(filled['ItemA'],
panel['ItemA'].fillna(method='backfill'))
empty = self.panel.reindex(items=[])
filled = empty.fillna(0)
assert_panel_equal(filled, empty)
pytest.raises(ValueError, self.panel.fillna)
pytest.raises(ValueError, self.panel.fillna, 5, method='ffill')
pytest.raises(TypeError, self.panel.fillna, [1, 2])
pytest.raises(TypeError, self.panel.fillna, (1, 2))
# limit not implemented when only value is specified
p = Panel(np.random.randn(3, 4, 5))
p.iloc[0:2, 0:2, 0:2] = np.nan
pytest.raises(NotImplementedError,
lambda: p.fillna(999, limit=1))
# Test in place fillNA
# Expected result
expected = Panel([[[0, 1], [2, 1]], [[10, 11], [12, 11]]],
items=['a', 'b'], minor_axis=['x', 'y'],
dtype=np.float64)
# method='ffill'
p1 = Panel([[[0, 1], [2, np.nan]], [[10, 11], [12, np.nan]]],
items=['a', 'b'], minor_axis=['x', 'y'],
dtype=np.float64)
p1.fillna(method='ffill', inplace=True)
assert_panel_equal(p1, expected)
# method='bfill'
p2 = Panel([[[0, np.nan], [2, 1]], [[10, np.nan], [12, 11]]],
items=['a', 'b'], minor_axis=['x', 'y'],
dtype=np.float64)
p2.fillna(method='bfill', inplace=True)
assert_panel_equal(p2, expected)
def test_ffill_bfill(self):
with catch_warnings(record=True):
assert_panel_equal(self.panel.ffill(),
self.panel.fillna(method='ffill'))
assert_panel_equal(self.panel.bfill(),
self.panel.fillna(method='bfill'))
def test_truncate_fillna_bug(self):
with catch_warnings(record=True):
# #1823
result = self.panel.truncate(before=None, after=None, axis='items')
# it works!
result.fillna(value=0.0)
def test_swapaxes(self):
with catch_warnings(record=True):
result = self.panel.swapaxes('items', 'minor')
assert result.items is self.panel.minor_axis
result = self.panel.swapaxes('items', 'major')
assert result.items is self.panel.major_axis
result = self.panel.swapaxes('major', 'minor')
assert result.major_axis is self.panel.minor_axis
panel = self.panel.copy()
result = panel.swapaxes('major', 'minor')
panel.values[0, 0, 1] = np.nan
expected = panel.swapaxes('major', 'minor')
assert_panel_equal(result, expected)
# this should also work
result = self.panel.swapaxes(0, 1)
assert result.items is self.panel.major_axis
            # this works, but returns a copy
result = self.panel.swapaxes('items', 'items')
assert_panel_equal(self.panel, result)
assert id(self.panel) != id(result)
def test_transpose(self):
with catch_warnings(record=True):
result = self.panel.transpose('minor', 'major', 'items')
expected = self.panel.swapaxes('items', 'minor')
assert_panel_equal(result, expected)
# test kwargs
result = self.panel.transpose(items='minor', major='major',
minor='items')
expected = self.panel.swapaxes('items', 'minor')
assert_panel_equal(result, expected)
            # test mixture of args
result = self.panel.transpose(
'minor', major='major', minor='items')
expected = self.panel.swapaxes('items', 'minor')
assert_panel_equal(result, expected)
result = self.panel.transpose('minor',
'major',
minor='items')
expected = self.panel.swapaxes('items', 'minor')
assert_panel_equal(result, expected)
# duplicate axes
with tm.assert_raises_regex(TypeError,
'not enough/duplicate arguments'):
self.panel.transpose('minor', maj='major', minor='items')
with tm.assert_raises_regex(ValueError,
'repeated axis in transpose'):
self.panel.transpose('minor', 'major', major='minor',
minor='items')
result = self.panel.transpose(2, 1, 0)
assert_panel_equal(result, expected)
result = self.panel.transpose('minor', 'items', 'major')
expected = self.panel.swapaxes('items', 'minor')
expected = expected.swapaxes('major', 'minor')
assert_panel_equal(result, expected)
result = self.panel.transpose(2, 0, 1)
assert_panel_equal(result, expected)
pytest.raises(ValueError, self.panel.transpose, 0, 0, 1)
def test_transpose_copy(self):
with catch_warnings(record=True):
panel = self.panel.copy()
result = panel.transpose(2, 0, 1, copy=True)
expected = panel.swapaxes('items', 'minor')
expected = expected.swapaxes('major', 'minor')
assert_panel_equal(result, expected)
panel.values[0, 1, 1] = np.nan
assert notna(result.values[1, 0, 1])
def test_to_frame(self):
with catch_warnings(record=True):
# filtered
filtered = self.panel.to_frame()
expected = self.panel.to_frame().dropna(how='any')
assert_frame_equal(filtered, expected)
# unfiltered
unfiltered = self.panel.to_frame(filter_observations=False)
assert_panel_equal(unfiltered.to_panel(), self.panel)
# names
assert unfiltered.index.names == ('major', 'minor')
# unsorted, round trip
df = self.panel.to_frame(filter_observations=False)
unsorted = df.take(np.random.permutation(len(df)))
pan = unsorted.to_panel()
assert_panel_equal(pan, self.panel)
# preserve original index names
df = DataFrame(np.random.randn(6, 2),
index=[['a', 'a', 'b', 'b', 'c', 'c'],
[0, 1, 0, 1, 0, 1]],
columns=['one', 'two'])
df.index.names = ['foo', 'bar']
df.columns.name = 'baz'
rdf = df.to_panel().to_frame()
assert rdf.index.names == df.index.names
assert rdf.columns.names == df.columns.names
def test_to_frame_mixed(self):
with catch_warnings(record=True):
panel = self.panel.fillna(0)
panel['str'] = 'foo'
panel['bool'] = panel['ItemA'] > 0
lp = panel.to_frame()
wp = lp.to_panel()
assert wp['bool'].values.dtype == np.bool_
# Previously, this was mutating the underlying
# index and changing its name
assert_frame_equal(wp['bool'], panel['bool'], check_names=False)
# GH 8704
# with categorical
df = panel.to_frame()
df['category'] = df['str'].astype('category')
# to_panel
# TODO: this converts back to object
p = df.to_panel()
expected = panel.copy()
expected['category'] = 'foo'
assert_panel_equal(p, expected)
def test_to_frame_multi_major(self):
with catch_warnings(record=True):
idx = MultiIndex.from_tuples(
[(1, 'one'), (1, 'two'), (2, 'one'), (2, 'two')])
df = DataFrame([[1, 'a', 1], [2, 'b', 1],
[3, 'c', 1], [4, 'd', 1]],
columns=['A', 'B', 'C'], index=idx)
wp = Panel({'i1': df, 'i2': df})
expected_idx = MultiIndex.from_tuples(
[
(1, 'one', 'A'), (1, 'one', 'B'),
(1, 'one', 'C'), (1, 'two', 'A'),
(1, 'two', 'B'), (1, 'two', 'C'),
(2, 'one', 'A'), (2, 'one', 'B'),
(2, 'one', 'C'), (2, 'two', 'A'),
(2, 'two', 'B'), (2, 'two', 'C')
],
names=[None, None, 'minor'])
expected = DataFrame({'i1': [1, 'a', 1, 2, 'b', 1, 3,
'c', 1, 4, 'd', 1],
'i2': [1, 'a', 1, 2, 'b',
1, 3, 'c', 1, 4, 'd', 1]},
index=expected_idx)
result = wp.to_frame()
assert_frame_equal(result, expected)
wp.iloc[0, 0].iloc[0] = np.nan # BUG on setting. GH #5773
result = wp.to_frame()
assert_frame_equal(result, expected[1:])
idx = MultiIndex.from_tuples(
[(1, 'two'), (1, 'one'), (2, 'one'), (np.nan, 'two')])
df = DataFrame([[1, 'a', 1], [2, 'b', 1],
[3, 'c', 1], [4, 'd', 1]],
columns=['A', 'B', 'C'], index=idx)
wp = Panel({'i1': df, 'i2': df})
ex_idx = MultiIndex.from_tuples([(1, 'two', 'A'), (1, 'two', 'B'),
(1, 'two', 'C'),
(1, 'one', 'A'),
(1, 'one', 'B'),
(1, 'one', 'C'),
(2, 'one', 'A'),
(2, 'one', 'B'),
(2, 'one', 'C'),
(np.nan, 'two', 'A'),
(np.nan, 'two', 'B'),
(np.nan, 'two', 'C')],
names=[None, None, 'minor'])
expected.index = ex_idx
result = wp.to_frame()
assert_frame_equal(result, expected)
def test_to_frame_multi_major_minor(self):
with catch_warnings(record=True):
cols = MultiIndex(levels=[['C_A', 'C_B'], ['C_1', 'C_2']],
labels=[[0, 0, 1, 1], [0, 1, 0, 1]])
idx = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
2, 'two'), (3, 'three'), (4, 'four')])
df = DataFrame([[1, 2, 11, 12], [3, 4, 13, 14],
['a', 'b', 'w', 'x'],
['c', 'd', 'y', 'z'], [-1, -2, -3, -4],
[-5, -6, -7, -8]], columns=cols, index=idx)
wp = Panel({'i1': df, 'i2': df})
exp_idx = MultiIndex.from_tuples(
[(1, 'one', 'C_A', 'C_1'), (1, 'one', 'C_A', 'C_2'),
(1, 'one', 'C_B', 'C_1'), (1, 'one', 'C_B', 'C_2'),
(1, 'two', 'C_A', 'C_1'), (1, 'two', 'C_A', 'C_2'),
(1, 'two', 'C_B', 'C_1'), (1, 'two', 'C_B', 'C_2'),
(2, 'one', 'C_A', 'C_1'), (2, 'one', 'C_A', 'C_2'),
(2, 'one', 'C_B', 'C_1'), (2, 'one', 'C_B', 'C_2'),
(2, 'two', 'C_A', 'C_1'), (2, 'two', 'C_A', 'C_2'),
(2, 'two', 'C_B', 'C_1'), (2, 'two', 'C_B', 'C_2'),
(3, 'three', 'C_A', 'C_1'), (3, 'three', 'C_A', 'C_2'),
(3, 'three', 'C_B', 'C_1'), (3, 'three', 'C_B', 'C_2'),
(4, 'four', 'C_A', 'C_1'), (4, 'four', 'C_A', 'C_2'),
(4, 'four', 'C_B', 'C_1'), (4, 'four', 'C_B', 'C_2')],
names=[None, None, None, None])
exp_val = [[1, 1], [2, 2], [11, 11], [12, 12],
[3, 3], [4, 4],
[13, 13], [14, 14], ['a', 'a'],
['b', 'b'], ['w', 'w'],
['x', 'x'], ['c', 'c'], ['d', 'd'], [
'y', 'y'], ['z', 'z'],
[-1, -1], [-2, -2], [-3, -3], [-4, -4],
[-5, -5], [-6, -6],
[-7, -7], [-8, -8]]
result = wp.to_frame()
expected = DataFrame(exp_val, columns=['i1', 'i2'], index=exp_idx)
assert_frame_equal(result, expected)
def test_to_frame_multi_drop_level(self):
with catch_warnings(record=True):
idx = MultiIndex.from_tuples([(1, 'one'), (2, 'one'), (2, 'two')])
df = DataFrame({'A': [np.nan, 1, 2]}, index=idx)
wp = Panel({'i1': df, 'i2': df})
result = wp.to_frame()
exp_idx = MultiIndex.from_tuples(
[(2, 'one', 'A'), (2, 'two', 'A')],
names=[None, None, 'minor'])
expected = DataFrame({'i1': [1., 2], 'i2': [1., 2]}, index=exp_idx)
assert_frame_equal(result, expected)
def test_to_panel_na_handling(self):
with catch_warnings(record=True):
df = DataFrame(np.random.randint(0, 10, size=20).reshape((10, 2)),
index=[[0, 0, 0, 0, 0, 0, 1, 1, 1, 1],
[0, 1, 2, 3, 4, 5, 2, 3, 4, 5]])
panel = df.to_panel()
assert isna(panel[0].loc[1, [0, 1]]).all()
def test_to_panel_duplicates(self):
# #2441
with catch_warnings(record=True):
df = DataFrame({'a': [0, 0, 1], 'b': [1, 1, 1], 'c': [1, 2, 3]})
idf = df.set_index(['a', 'b'])
tm.assert_raises_regex(
ValueError, 'non-uniquely indexed', idf.to_panel)
def test_panel_dups(self):
with catch_warnings(record=True):
# GH 4960
# duplicates in an index
# items
data = np.random.randn(5, 100, 5)
no_dup_panel = Panel(data, items=list("ABCDE"))
panel = Panel(data, items=list("AACDE"))
expected = no_dup_panel['A']
result = panel.iloc[0]
assert_frame_equal(result, expected)
expected = no_dup_panel['E']
result = panel.loc['E']
assert_frame_equal(result, expected)
expected = no_dup_panel.loc[['A', 'B']]
expected.items = ['A', 'A']
result = panel.loc['A']
assert_panel_equal(result, expected)
# major
data = np.random.randn(5, 5, 5)
no_dup_panel = Panel(data, major_axis=list("ABCDE"))
panel = Panel(data, major_axis=list("AACDE"))
expected = no_dup_panel.loc[:, 'A']
result = panel.iloc[:, 0]
assert_frame_equal(result, expected)
expected = no_dup_panel.loc[:, 'E']
result = panel.loc[:, 'E']
assert_frame_equal(result, expected)
expected = no_dup_panel.loc[:, ['A', 'B']]
expected.major_axis = ['A', 'A']
result = panel.loc[:, 'A']
assert_panel_equal(result, expected)
# minor
data = np.random.randn(5, 100, 5)
no_dup_panel = Panel(data, minor_axis=list("ABCDE"))
panel = Panel(data, minor_axis=list("AACDE"))
expected = no_dup_panel.loc[:, :, 'A']
result = panel.iloc[:, :, 0]
assert_frame_equal(result, expected)
expected = no_dup_panel.loc[:, :, 'E']
result = panel.loc[:, :, 'E']
assert_frame_equal(result, expected)
expected = no_dup_panel.loc[:, :, ['A', 'B']]
expected.minor_axis = ['A', 'A']
result = panel.loc[:, :, 'A']
assert_panel_equal(result, expected)
def test_filter(self):
pass
def test_compound(self):
with catch_warnings(record=True):
compounded = self.panel.compound()
assert_series_equal(compounded['ItemA'],
(1 + self.panel['ItemA']).product(0) - 1,
check_names=False)
def test_shift(self):
with catch_warnings(record=True):
# major
idx = self.panel.major_axis[0]
idx_lag = self.panel.major_axis[1]
shifted = self.panel.shift(1)
assert_frame_equal(self.panel.major_xs(idx),
shifted.major_xs(idx_lag))
# minor
idx = self.panel.minor_axis[0]
idx_lag = self.panel.minor_axis[1]
shifted = self.panel.shift(1, axis='minor')
assert_frame_equal(self.panel.minor_xs(idx),
shifted.minor_xs(idx_lag))
# items
idx = self.panel.items[0]
idx_lag = self.panel.items[1]
shifted = self.panel.shift(1, axis='items')
assert_frame_equal(self.panel[idx], shifted[idx_lag])
# negative numbers, #2164
result = self.panel.shift(-1)
expected = Panel({i: f.shift(-1)[:-1]
for i, f in self.panel.iteritems()})
assert_panel_equal(result, expected)
# mixed dtypes #6959
data = [('item ' + ch, makeMixedDataFrame())
for ch in list('abcde')]
data = dict(data)
mixed_panel = Panel.from_dict(data, orient='minor')
shifted = mixed_panel.shift(1)
assert_series_equal(mixed_panel.dtypes, shifted.dtypes)
def test_tshift(self):
# PeriodIndex
with catch_warnings(record=True):
ps = tm.makePeriodPanel()
shifted = ps.tshift(1)
unshifted = shifted.tshift(-1)
assert_panel_equal(unshifted, ps)
shifted2 = ps.tshift(freq='B')
assert_panel_equal(shifted, shifted2)
shifted3 = ps.tshift(freq=BDay())
assert_panel_equal(shifted, shifted3)
tm.assert_raises_regex(ValueError, 'does not match',
ps.tshift, freq='M')
# DatetimeIndex
panel = make_test_panel()
shifted = panel.tshift(1)
unshifted = shifted.tshift(-1)
assert_panel_equal(panel, unshifted)
shifted2 = panel.tshift(freq=panel.major_axis.freq)
assert_panel_equal(shifted, shifted2)
inferred_ts = Panel(panel.values, items=panel.items,
major_axis=Index(np.asarray(panel.major_axis)),
minor_axis=panel.minor_axis)
shifted = inferred_ts.tshift(1)
unshifted = shifted.tshift(-1)
assert_panel_equal(shifted, panel.tshift(1))
assert_panel_equal(unshifted, inferred_ts)
no_freq = panel.iloc[:, [0, 5, 7], :]
pytest.raises(ValueError, no_freq.tshift)
def test_pct_change(self):
with catch_warnings(record=True):
df1 = DataFrame({'c1': [1, 2, 5], 'c2': [3, 4, 6]})
df2 = df1 + 1
df3 = DataFrame({'c1': [3, 4, 7], 'c2': [5, 6, 8]})
wp = Panel({'i1': df1, 'i2': df2, 'i3': df3})
# major, 1
result = wp.pct_change() # axis='major'
expected = Panel({'i1': df1.pct_change(),
'i2': df2.pct_change(),
'i3': df3.pct_change()})
assert_panel_equal(result, expected)
result = wp.pct_change(axis=1)
assert_panel_equal(result, expected)
# major, 2
result = wp.pct_change(periods=2)
expected = Panel({'i1': df1.pct_change(2),
'i2': df2.pct_change(2),
'i3': df3.pct_change(2)})
assert_panel_equal(result, expected)
# minor, 1
result = wp.pct_change(axis='minor')
expected = Panel({'i1': df1.pct_change(axis=1),
'i2': df2.pct_change(axis=1),
'i3': df3.pct_change(axis=1)})
assert_panel_equal(result, expected)
result = wp.pct_change(axis=2)
assert_panel_equal(result, expected)
# minor, 2
result = wp.pct_change(periods=2, axis='minor')
expected = Panel({'i1': df1.pct_change(periods=2, axis=1),
'i2': df2.pct_change(periods=2, axis=1),
'i3': df3.pct_change(periods=2, axis=1)})
assert_panel_equal(result, expected)
# items, 1
result = wp.pct_change(axis='items')
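            # along 'items' the change is between successive items (i1 -> i2 -> i3);
            # e.g. for 'i2', column c1: (2-1)/1 = 1, (3-2)/2 = 0.5, (6-5)/5 = 0.2,
            # which is what the expected Panel below encodes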
expected = Panel(
{'i1': DataFrame({'c1': [np.nan, np.nan, np.nan],
'c2': [np.nan, np.nan, np.nan]}),
'i2': DataFrame({'c1': [1, 0.5, .2],
'c2': [1. / 3, 0.25, 1. / 6]}),
'i3': DataFrame({'c1': [.5, 1. / 3, 1. / 6],
'c2': [.25, .2, 1. / 7]})})
assert_panel_equal(result, expected)
result = wp.pct_change(axis=0)
assert_panel_equal(result, expected)
# items, 2
result = wp.pct_change(periods=2, axis='items')
expected = Panel(
{'i1': DataFrame({'c1': [np.nan, np.nan, np.nan],
'c2': [np.nan, np.nan, np.nan]}),
'i2': DataFrame({'c1': [np.nan, np.nan, np.nan],
'c2': [np.nan, np.nan, np.nan]}),
'i3': DataFrame({'c1': [2, 1, .4],
'c2': [2. / 3, .5, 1. / 3]})})
assert_panel_equal(result, expected)
def test_round(self):
with catch_warnings(record=True):
values = [[[-3.2, 2.2], [0, -4.8213], [3.123, 123.12],
[-1566.213, 88.88], [-12, 94.5]],
[[-5.82, 3.5], [6.21, -73.272], [-9.087, 23.12],
[272.212, -99.99], [23, -76.5]]]
evalues = [[[float(np.around(i)) for i in j] for j in k]
for k in values]
p = Panel(values, items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B'])
expected = Panel(evalues, items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B'])
result = p.round()
assert_panel_equal(expected, result)
def test_numpy_round(self):
with catch_warnings(record=True):
values = [[[-3.2, 2.2], [0, -4.8213], [3.123, 123.12],
[-1566.213, 88.88], [-12, 94.5]],
[[-5.82, 3.5], [6.21, -73.272], [-9.087, 23.12],
[272.212, -99.99], [23, -76.5]]]
evalues = [[[float(np.around(i)) for i in j] for j in k]
for k in values]
p = Panel(values, items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B'])
expected = Panel(evalues, items=['Item1', 'Item2'],
major_axis=date_range('1/1/2000', periods=5),
minor_axis=['A', 'B'])
result = np.round(p)
assert_panel_equal(expected, result)
msg = "the 'out' parameter is not supported"
tm.assert_raises_regex(ValueError, msg, np.round, p, out=p)
def test_multiindex_get(self):
with catch_warnings(record=True):
ind = MultiIndex.from_tuples(
[('a', 1), ('a', 2), ('b', 1), ('b', 2)],
names=['first', 'second'])
wp = Panel(np.random.random((4, 5, 5)),
items=ind,
major_axis=np.arange(5),
minor_axis=np.arange(5))
f1 = wp['a']
f2 = wp.loc['a']
assert_panel_equal(f1, f2)
assert (f1.items == [1, 2]).all()
assert (f2.items == [1, 2]).all()
MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
names=['first', 'second'])
def test_multiindex_blocks(self):
with catch_warnings(record=True):
ind = MultiIndex.from_tuples([('a', 1), ('a', 2), ('b', 1)],
names=['first', 'second'])
wp = Panel(self.panel._data)
wp.items = ind
f1 = wp['a']
assert (f1.items == [1, 2]).all()
f1 = wp[('b', 1)]
assert (f1.columns == ['A', 'B', 'C', 'D']).all()
def test_repr_empty(self):
with catch_warnings(record=True):
empty = Panel()
repr(empty)
def test_rename(self):
with catch_warnings(record=True):
mapper = {'ItemA': 'foo', 'ItemB': 'bar', 'ItemC': 'baz'}
renamed = self.panel.rename_axis(mapper, axis=0)
exp = Index(['foo', 'bar', 'baz'])
tm.assert_index_equal(renamed.items, exp)
renamed = self.panel.rename_axis(str.lower, axis=2)
exp = Index(['a', 'b', 'c', 'd'])
tm.assert_index_equal(renamed.minor_axis, exp)
# don't copy
renamed_nocopy = self.panel.rename_axis(mapper, axis=0, copy=False)
renamed_nocopy['foo'] = 3.
assert (self.panel['ItemA'].values == 3).all()
def test_get_attr(self):
assert_frame_equal(self.panel['ItemA'], self.panel.ItemA)
# specific cases from #3440
self.panel['a'] = self.panel['ItemA']
assert_frame_equal(self.panel['a'], self.panel.a)
self.panel['i'] = self.panel['ItemA']
assert_frame_equal(self.panel['i'], self.panel.i)
def test_from_frame_level1_unsorted(self):
with catch_warnings(record=True):
tuples = [('MSFT', 3), ('MSFT', 2), ('AAPL', 2), ('AAPL', 1),
('MSFT', 1)]
midx = MultiIndex.from_tuples(tuples)
df = DataFrame(np.random.rand(5, 4), index=midx)
p = df.to_panel()
assert_frame_equal(p.minor_xs(2), df.xs(2, level=1).sort_index())
def test_to_excel(self):
try:
import xlwt # noqa
import xlrd # noqa
import openpyxl # noqa
from pandas.io.excel import ExcelFile
except ImportError:
pytest.skip("need xlwt xlrd openpyxl")
for ext in ['xls', 'xlsx']:
with ensure_clean('__tmp__.' + ext) as path:
self.panel.to_excel(path)
try:
reader = ExcelFile(path)
except ImportError:
pytest.skip("need xlwt xlrd openpyxl")
for item, df in self.panel.iteritems():
recdf = reader.parse(str(item), index_col=0)
assert_frame_equal(df, recdf)
def test_to_excel_xlsxwriter(self):
try:
import xlrd # noqa
import xlsxwriter # noqa
from pandas.io.excel import ExcelFile
except ImportError:
pytest.skip("Requires xlrd and xlsxwriter. Skipping test.")
with ensure_clean('__tmp__.xlsx') as path:
self.panel.to_excel(path, engine='xlsxwriter')
try:
reader = ExcelFile(path)
except ImportError as e:
pytest.skip("cannot write excel file: %s" % e)
for item, df in self.panel.iteritems():
recdf = reader.parse(str(item), index_col=0)
assert_frame_equal(df, recdf)
def test_dropna(self):
with catch_warnings(record=True):
p = Panel(np.random.randn(4, 5, 6), major_axis=list('abcde'))
p.loc[:, ['b', 'd'], 0] = np.nan
result = p.dropna(axis=1)
exp = p.loc[:, ['a', 'c', 'e'], :]
assert_panel_equal(result, exp)
inp = p.copy()
inp.dropna(axis=1, inplace=True)
assert_panel_equal(inp, exp)
result = p.dropna(axis=1, how='all')
assert_panel_equal(result, p)
p.loc[:, ['b', 'd'], :] = np.nan
result = p.dropna(axis=1, how='all')
exp = p.loc[:, ['a', 'c', 'e'], :]
assert_panel_equal(result, exp)
p = Panel(np.random.randn(4, 5, 6), items=list('abcd'))
p.loc[['b'], :, 0] = np.nan
result = p.dropna()
exp = p.loc[['a', 'c', 'd']]
assert_panel_equal(result, exp)
result = p.dropna(how='all')
assert_panel_equal(result, p)
p.loc['b'] = np.nan
result = p.dropna(how='all')
exp = p.loc[['a', 'c', 'd']]
assert_panel_equal(result, exp)
def test_drop(self):
with catch_warnings(record=True):
df = DataFrame({"A": [1, 2], "B": [3, 4]})
panel = Panel({"One": df, "Two": df})
def check_drop(drop_val, axis_number, aliases, expected):
try:
actual = panel.drop(drop_val, axis=axis_number)
assert_panel_equal(actual, expected)
for alias in aliases:
actual = panel.drop(drop_val, axis=alias)
assert_panel_equal(actual, expected)
except AssertionError:
pprint_thing("Failed with axis_number %d and aliases: %s" %
(axis_number, aliases))
raise
# Items
expected = Panel({"One": df})
check_drop('Two', 0, ['items'], expected)
pytest.raises(KeyError, panel.drop, 'Three')
# errors = 'ignore'
dropped = panel.drop('Three', errors='ignore')
assert_panel_equal(dropped, panel)
dropped = panel.drop(['Two', 'Three'], errors='ignore')
expected = Panel({"One": df})
assert_panel_equal(dropped, expected)
# Major
exp_df = DataFrame({"A": [2], "B": [4]}, index=[1])
expected = Panel({"One": exp_df, "Two": exp_df})
check_drop(0, 1, ['major_axis', 'major'], expected)
exp_df = DataFrame({"A": [1], "B": [3]}, index=[0])
expected = Panel({"One": exp_df, "Two": exp_df})
check_drop([1], 1, ['major_axis', 'major'], expected)
# Minor
exp_df = df[['B']]
expected = Panel({"One": exp_df, "Two": exp_df})
check_drop(["A"], 2, ['minor_axis', 'minor'], expected)
exp_df = df[['A']]
expected = Panel({"One": exp_df, "Two": exp_df})
check_drop("B", 2, ['minor_axis', 'minor'], expected)
def test_update(self):
with catch_warnings(record=True):
pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
other = Panel(
[[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])
pan.update(other)
expected = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.], [1.5, np.nan, 3.]],
[[3.6, 2., 3], [1.5, np.nan, 7],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
assert_panel_equal(pan, expected)
def test_update_from_dict(self):
with catch_warnings(record=True):
pan = Panel({'one': DataFrame([[1.5, np.nan, 3],
[1.5, np.nan, 3],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]),
'two': DataFrame([[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]])})
other = {'two': DataFrame(
[[3.6, 2., np.nan], [np.nan, np.nan, 7]])}
pan.update(other)
expected = Panel(
{'one': DataFrame([[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]),
'two': DataFrame([[3.6, 2., 3],
[1.5, np.nan, 7],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]])
}
)
assert_panel_equal(pan, expected)
def test_update_nooverwrite(self):
with catch_warnings(record=True):
pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
other = Panel(
[[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])
pan.update(other, overwrite=False)
expected = Panel([[[1.5, np.nan, 3], [1.5, np.nan, 3],
[1.5, np.nan, 3.], [1.5, np.nan, 3.]],
[[1.5, 2., 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
assert_panel_equal(pan, expected)
def test_update_filtered(self):
with catch_warnings(record=True):
pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
other = Panel(
[[[3.6, 2., np.nan], [np.nan, np.nan, 7]]], items=[1])
pan.update(other, filter_func=lambda x: x > 2)
expected = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.], [1.5, np.nan, 3.]],
[[1.5, np.nan, 3], [1.5, np.nan, 7],
[1.5, np.nan, 3.], [1.5, np.nan, 3.]]])
assert_panel_equal(pan, expected)
def test_update_raise(self):
with catch_warnings(record=True):
pan = Panel([[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]],
[[1.5, np.nan, 3.], [1.5, np.nan, 3.],
[1.5, np.nan, 3.],
[1.5, np.nan, 3.]]])
pytest.raises(Exception, pan.update, *(pan, ),
**{'raise_conflict': True})
def test_all_any(self):
assert (self.panel.all(axis=0).values == nanall(
self.panel, axis=0)).all()
assert (self.panel.all(axis=1).values == nanall(
self.panel, axis=1).T).all()
assert (self.panel.all(axis=2).values == nanall(
self.panel, axis=2).T).all()
assert (self.panel.any(axis=0).values == nanany(
self.panel, axis=0)).all()
assert (self.panel.any(axis=1).values == nanany(
self.panel, axis=1).T).all()
assert (self.panel.any(axis=2).values == nanany(
self.panel, axis=2).T).all()
def test_all_any_unhandled(self):
pytest.raises(NotImplementedError, self.panel.all, bool_only=True)
pytest.raises(NotImplementedError, self.panel.any, bool_only=True)
# GH issue 15960
def test_sort_values(self):
pytest.raises(NotImplementedError, self.panel.sort_values)
pytest.raises(NotImplementedError, self.panel.sort_values, 'ItemA')
class TestPanelFrame(object):
"""
Check that conversions to and from Panel to DataFrame work.
"""
def setup_method(self, method):
panel = make_test_panel()
self.panel = panel.to_frame()
self.unfiltered_panel = panel.to_frame(filter_observations=False)
def test_ops_differently_indexed(self):
with catch_warnings(record=True):
# trying to set non-identically indexed panel
wp = self.panel.to_panel()
wp2 = wp.reindex(major=wp.major_axis[:-1])
lp2 = wp2.to_frame()
result = self.panel + lp2
assert_frame_equal(result.reindex(lp2.index), lp2 * 2)
# careful, mutation
self.panel['foo'] = lp2['ItemA']
assert_series_equal(self.panel['foo'].reindex(lp2.index),
lp2['ItemA'],
check_names=False)
def test_ops_scalar(self):
with catch_warnings(record=True):
result = self.panel.mul(2)
expected = DataFrame.__mul__(self.panel, 2)
assert_frame_equal(result, expected)
def test_combineFrame(self):
with catch_warnings(record=True):
wp = self.panel.to_panel()
result = self.panel.add(wp['ItemA'].stack(), axis=0)
assert_frame_equal(result.to_panel()['ItemA'], wp['ItemA'] * 2)
def test_combinePanel(self):
with catch_warnings(record=True):
wp = self.panel.to_panel()
result = self.panel.add(self.panel)
wide_result = result.to_panel()
assert_frame_equal(wp['ItemA'] * 2, wide_result['ItemA'])
# one item
result = self.panel.add(self.panel.filter(['ItemA']))
def test_combine_scalar(self):
with catch_warnings(record=True):
result = self.panel.mul(2)
expected = DataFrame(self.panel._data) * 2
assert_frame_equal(result, expected)
def test_combine_series(self):
with catch_warnings(record=True):
s = self.panel['ItemA'][:10]
result = self.panel.add(s, axis=0)
expected = DataFrame.add(self.panel, s, axis=0)
assert_frame_equal(result, expected)
s = self.panel.iloc[5]
result = self.panel + s
expected = DataFrame.add(self.panel, s, axis=1)
assert_frame_equal(result, expected)
def test_operators(self):
with catch_warnings(record=True):
wp = self.panel.to_panel()
result = (self.panel + 1).to_panel()
assert_frame_equal(wp['ItemA'] + 1, result['ItemA'])
def test_arith_flex_panel(self):
with catch_warnings(record=True):
ops = ['add', 'sub', 'mul', 'div',
'truediv', 'pow', 'floordiv', 'mod']
if not compat.PY3:
aliases = {}
else:
aliases = {'div': 'truediv'}
self.panel = self.panel.to_panel()
for n in [np.random.randint(-50, -1), np.random.randint(1, 50), 0]:
for op in ops:
alias = aliases.get(op, op)
f = getattr(operator, alias)
exp = f(self.panel, n)
result = getattr(self.panel, op)(n)
assert_panel_equal(result, exp, check_panel_type=True)
# rops
r_f = lambda x, y: f(y, x)
exp = r_f(self.panel, n)
result = getattr(self.panel, 'r' + op)(n)
assert_panel_equal(result, exp)
def test_sort(self):
        def is_sorted(arr):
            # non-strict monotonicity: adjacent codes may repeat after sorting
            return (arr[1:] >= arr[:-1]).all()
sorted_minor = self.panel.sort_index(level=1)
assert is_sorted(sorted_minor.index.labels[1])
sorted_major = sorted_minor.sort_index(level=0)
assert is_sorted(sorted_major.index.labels[0])
def test_to_string(self):
buf = StringIO()
self.panel.to_string(buf)
def test_to_sparse(self):
if isinstance(self.panel, Panel):
msg = 'sparsifying is not supported'
tm.assert_raises_regex(NotImplementedError, msg,
self.panel.to_sparse)
def test_truncate(self):
with catch_warnings(record=True):
dates = self.panel.index.levels[0]
start, end = dates[1], dates[5]
trunced = self.panel.truncate(start, end).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(start, end)
# TODO truncate drops index.names
assert_frame_equal(trunced['ItemA'], expected, check_names=False)
trunced = self.panel.truncate(before=start).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(before=start)
# TODO truncate drops index.names
assert_frame_equal(trunced['ItemA'], expected, check_names=False)
trunced = self.panel.truncate(after=end).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(after=end)
# TODO truncate drops index.names
assert_frame_equal(trunced['ItemA'], expected, check_names=False)
# truncate on dates that aren't in there
wp = self.panel.to_panel()
new_index = wp.major_axis[::5]
wp2 = wp.reindex(major=new_index)
lp2 = wp2.to_frame()
lp_trunc = lp2.truncate(wp.major_axis[2], wp.major_axis[-2])
wp_trunc = wp2.truncate(wp.major_axis[2], wp.major_axis[-2])
assert_panel_equal(wp_trunc, lp_trunc.to_panel())
# throw proper exception
pytest.raises(Exception, lp2.truncate, wp.major_axis[-2],
wp.major_axis[2])
def test_axis_dummies(self):
from pandas.core.reshape.reshape import make_axis_dummies
minor_dummies = make_axis_dummies(self.panel, 'minor').astype(np.uint8)
assert len(minor_dummies.columns) == len(self.panel.index.levels[1])
major_dummies = make_axis_dummies(self.panel, 'major').astype(np.uint8)
assert len(major_dummies.columns) == len(self.panel.index.levels[0])
mapping = {'A': 'one', 'B': 'one', 'C': 'two', 'D': 'two'}
transformed = make_axis_dummies(self.panel, 'minor',
transform=mapping.get).astype(np.uint8)
assert len(transformed.columns) == 2
tm.assert_index_equal(transformed.columns, Index(['one', 'two']))
# TODO: test correctness
def test_get_dummies(self):
from pandas.core.reshape.reshape import get_dummies, make_axis_dummies
self.panel['Label'] = self.panel.index.labels[1]
minor_dummies = make_axis_dummies(self.panel, 'minor').astype(np.uint8)
dummies = get_dummies(self.panel['Label'])
tm.assert_numpy_array_equal(dummies.values, minor_dummies.values)
def test_mean(self):
with catch_warnings(record=True):
means = self.panel.mean(level='minor')
# test versus Panel version
wide_means = self.panel.to_panel().mean('major')
assert_frame_equal(means, wide_means)
def test_sum(self):
with catch_warnings(record=True):
sums = self.panel.sum(level='minor')
# test versus Panel version
wide_sums = self.panel.to_panel().sum('major')
assert_frame_equal(sums, wide_sums)
def test_count(self):
with catch_warnings(record=True):
index = self.panel.index
major_count = self.panel.count(level=0)['ItemA']
labels = index.labels[0]
for i, idx in enumerate(index.levels[0]):
assert major_count[i] == (labels == i).sum()
minor_count = self.panel.count(level=1)['ItemA']
labels = index.labels[1]
for i, idx in enumerate(index.levels[1]):
assert minor_count[i] == (labels == i).sum()
def test_join(self):
with catch_warnings(record=True):
lp1 = self.panel.filter(['ItemA', 'ItemB'])
lp2 = self.panel.filter(['ItemC'])
joined = lp1.join(lp2)
assert len(joined.columns) == 3
pytest.raises(Exception, lp1.join,
self.panel.filter(['ItemB', 'ItemC']))
def test_pivot(self):
with catch_warnings(record=True):
from pandas.core.reshape.reshape import _slow_pivot
one, two, three = (np.array([1, 2, 3, 4, 5]),
np.array(['a', 'b', 'c', 'd', 'e']),
np.array([1, 2, 3, 5, 4.]))
df = pivot(one, two, three)
assert df['a'][1] == 1
assert df['b'][2] == 2
assert df['c'][3] == 3
assert df['d'][4] == 5
assert df['e'][5] == 4
assert_frame_equal(df, _slow_pivot(one, two, three))
# weird overlap, TODO: test?
a, b, c = (np.array([1, 2, 3, 4, 4]),
np.array(['a', 'a', 'a', 'a', 'a']),
np.array([1., 2., 3., 4., 5.]))
pytest.raises(Exception, pivot, a, b, c)
# corner case, empty
df = pivot(np.array([]), np.array([]), np.array([]))
def test_panel_index():
index = panelm.panel_index([1, 2, 3, 4], [1, 2, 3])
expected = MultiIndex.from_arrays([np.tile([1, 2, 3, 4], 3),
np.repeat([1, 2, 3], 4)],
names=['time', 'panel'])
tm.assert_index_equal(index, expected)
def test_panel_np_all():
with catch_warnings(record=True):
wp = Panel({"A": DataFrame({'b': [1, 2]})})
result = np.all(wp)
assert result == np.bool_(True)
| bsd-3-clause |
UNR-AERIAL/scikit-learn | sklearn/decomposition/tests/test_sparse_pca.py | 142 | 5990 | # Author: Vlad Niculae
# License: BSD 3 clause
import sys
import numpy as np
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_false
from sklearn.utils.testing import if_not_mac_os
from sklearn.decomposition import SparsePCA, MiniBatchSparsePCA
from sklearn.utils import check_random_state
def generate_toy_data(n_components, n_samples, image_size, random_state=None):
n_features = image_size[0] * image_size[1]
rng = check_random_state(random_state)
U = rng.randn(n_samples, n_components)
V = rng.randn(n_components, n_features)
centers = [(3, 3), (6, 7), (8, 1)]
sz = [1, 2, 1]
for k in range(n_components):
img = np.zeros(image_size)
xmin, xmax = centers[k][0] - sz[k], centers[k][0] + sz[k]
ymin, ymax = centers[k][1] - sz[k], centers[k][1] + sz[k]
img[xmin:xmax][:, ymin:ymax] = 1.0
V[k, :] = img.ravel()
# Y is defined by : Y = UV + noise
Y = np.dot(U, V)
Y += 0.1 * rng.randn(Y.shape[0], Y.shape[1]) # Add noise
return Y, U, V
# SparsePCA can be a bit slow. To avoid having test times go up, we
# test different aspects of the code in the same test
def test_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
spca = SparsePCA(n_components=8, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
spca = SparsePCA(n_components=13, random_state=rng)
U = spca.fit_transform(X)
assert_equal(spca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_fit_transform():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
# Test that CD gives similar results
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=0,
alpha=alpha)
spca_lasso.fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
@if_not_mac_os()
def test_fit_transform_parallel():
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = SparsePCA(n_components=3, method='lars', alpha=alpha,
random_state=0)
spca_lars.fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
spca = SparsePCA(n_components=3, n_jobs=2, method='lars', alpha=alpha,
random_state=0).fit(Y)
U2 = spca.transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
def test_transform_nan():
    # Test that SparsePCA won't return NaN when a feature is zero in every
    # sample.
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
Y[:, 0] = 0
estimator = SparsePCA(n_components=8)
assert_false(np.any(np.isnan(estimator.fit_transform(Y))))
def test_fit_transform_tall():
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 65, (8, 8), random_state=rng) # tall array
spca_lars = SparsePCA(n_components=3, method='lars',
random_state=rng)
U1 = spca_lars.fit_transform(Y)
spca_lasso = SparsePCA(n_components=3, method='cd', random_state=rng)
U2 = spca_lasso.fit(Y).transform(Y)
assert_array_almost_equal(U1, U2)
def test_initialization():
rng = np.random.RandomState(0)
U_init = rng.randn(5, 3)
V_init = rng.randn(3, 4)
model = SparsePCA(n_components=3, U_init=U_init, V_init=V_init, max_iter=0,
random_state=rng)
model.fit(rng.randn(5, 4))
assert_array_equal(model.components_, V_init)
def test_mini_batch_correct_shapes():
rng = np.random.RandomState(0)
X = rng.randn(12, 10)
pca = MiniBatchSparsePCA(n_components=8, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (8, 10))
assert_equal(U.shape, (12, 8))
# test overcomplete decomposition
pca = MiniBatchSparsePCA(n_components=13, random_state=rng)
U = pca.fit_transform(X)
assert_equal(pca.components_.shape, (13, 10))
assert_equal(U.shape, (12, 13))
def test_mini_batch_fit_transform():
raise SkipTest("skipping mini_batch_fit_transform.")
alpha = 1
rng = np.random.RandomState(0)
Y, _, _ = generate_toy_data(3, 10, (8, 8), random_state=rng) # wide array
spca_lars = MiniBatchSparsePCA(n_components=3, random_state=0,
alpha=alpha).fit(Y)
U1 = spca_lars.transform(Y)
# Test multiple CPUs
if sys.platform == 'win32': # fake parallelism for win32
import sklearn.externals.joblib.parallel as joblib_par
_mp = joblib_par.multiprocessing
joblib_par.multiprocessing = None
try:
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
finally:
joblib_par.multiprocessing = _mp
else: # we can efficiently use parallelism
U2 = MiniBatchSparsePCA(n_components=3, n_jobs=2, alpha=alpha,
random_state=0).fit(Y).transform(Y)
assert_true(not np.all(spca_lars.components_ == 0))
assert_array_almost_equal(U1, U2)
# Test that CD gives similar results
spca_lasso = MiniBatchSparsePCA(n_components=3, method='cd', alpha=alpha,
random_state=0).fit(Y)
assert_array_almost_equal(spca_lasso.components_, spca_lars.components_)
| bsd-3-clause |
thomasorb/orb | orb/utils/graph.py | 1 | 3632 | #!/usr/bin/python
# *-* coding: utf-8 *-*
# Author: Thomas Martin <[email protected]>
# File: image.py
## Copyright (c) 2010-2020 Thomas Martin <[email protected]>
##
## This file is part of ORB
##
## ORB is free software: you can redistribute it and/or modify it
## under the terms of the GNU General Public License as published by
## the Free Software Foundation, either version 3 of the License, or
## (at your option) any later version.
##
## ORB is distributed in the hope that it will be useful, but WITHOUT
## ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
## or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
## License for more details.
##
## You should have received a copy of the GNU General Public License
## along with ORB. If not, see <http://www.gnu.org/licenses/>.
import pylab as pl
import astropy.wcs
import matplotlib.cm
import matplotlib.colors
import numpy as np
import orb.utils.io
def imshow(data, figsize=(7,7), perc=99, cmap='viridis', wcs=None, alpha=1, ncolors=None,
vmin=None, vmax=None, autofit=False):
"""Convenient image plotting function
:param data: 2d array to show. Can be a path to a fits file.
:param figsize: size of the figure (same as pyplot.figure's figsize keyword)
:param perc: percentile of the data distribution used to scale
the colorbar. Can be a tuple (min, max) or a scalar in which
case the min percentile will be 100-perc.
:param cmap: colormap
:param wcs: if a astropy.WCS instance show celestial coordinates,
Else, pixel coordinates are shown.
:param alpha: image opacity (if another image is displayed above)
:param ncolors: if an integer is passed, the colorbar is
discretized to this number of colors.
:param vmin: min value used to scale the colorbar. If set the
perc parameter is not used.
:param vmax: max value used to scale the colorbar. If set the
perc parameter is not used.
"""
if isinstance(data, str):
data = orb.utils.io.read_fits(data)
assert data.ndim == 2, 'array must have 2 dimensions'
if np.iscomplexobj(data):
data = data.real
try:
iter(perc)
except Exception:
perc = np.clip(float(perc), 50, 100)
perc = 100-perc, perc
else:
if len(list(perc)) != 2:
raise Exception('perc should be a tuple of len 2 or a single float')
if vmin is None: vmin = np.nanpercentile(data, perc[0])
if vmax is None: vmax = np.nanpercentile(data, perc[1])
if ncolors is not None:
cmap = getattr(matplotlib.cm, cmap)
norm = matplotlib.colors.BoundaryNorm(np.linspace(vmin, vmax, ncolors),
cmap.N, clip=True)
else:
norm = None
fig = pl.figure(figsize=figsize)
if wcs is not None:
assert isinstance(wcs, astropy.wcs.WCS), 'wcs must be an astropy.wcs.WCS instance'
ax = fig.add_subplot(111, projection=wcs)
ax.coords[0].set_major_formatter('d.dd')
ax.coords[1].set_major_formatter('d.dd')
pl.imshow(data.T, vmin=vmin, vmax=vmax, cmap=cmap, origin='lower', alpha=alpha,norm=norm)
if autofit:
xbounds = np.arange(data.shape[0]) * np.where(np.any(~np.isnan(data), axis=1), 1, np.nan) # x
ybounds = np.arange(data.shape[1]) * np.where(np.any(~np.isnan(data), axis=0), 1, np.nan) # y
xmin = np.nanmin(xbounds)
xmax = np.nanmax(xbounds)+1
ymin = np.nanmin(ybounds)
ymax = np.nanmax(ybounds)+1
pl.xlim(xmin, xmax)
pl.ylim(ymin, ymax)
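# Minimal usage sketch (not part of the original module): the array below is
# synthetic and every keyword value is an arbitrary assumption, chosen only to
# exercise the percentile scaling and the autofit cropping.
if __name__ == '__main__':
    demo = np.random.standard_normal((128, 128))
    demo[:16, :] = np.nan  # an all-NaN band so autofit has something to crop
    imshow(demo, perc=98, autofit=True)
    pl.show()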
| gpl-3.0 |
jaeilepp/eggie | mne/decoding/tests/test_csp.py | 2 | 3563 | # Author: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
from nose.tools import assert_true, assert_raises
import numpy as np
from numpy.testing import assert_array_almost_equal
from mne import io, Epochs, read_events, pick_types
from mne.decoding.csp import CSP
from mne.utils import _TempDir, requires_sklearn
tempdir = _TempDir()
data_dir = op.join(op.dirname(__file__), '..', '..', 'io', 'tests', 'data')
raw_fname = op.join(data_dir, 'test_raw.fif')
event_name = op.join(data_dir, 'test-eve.fif')
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
start, stop = 0, 8 # if stop is too small pca may fail in some cases, but
# we're okay on this file
def test_csp():
"""Test Common Spatial Patterns algorithm on epochs
"""
raw = io.Raw(raw_fname, preload=False)
events = read_events(event_name)
picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
eog=False, exclude='bads')
picks = picks[1:13:3]
epochs = Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True)
epochs_data = epochs.get_data()
n_channels = epochs_data.shape[1]
n_components = 3
csp = CSP(n_components=n_components)
csp.fit(epochs_data, epochs.events[:, -1])
y = epochs.events[:, -1]
X = csp.fit_transform(epochs_data, y)
assert_true(csp.filters_.shape == (n_channels, n_channels))
assert_true(csp.patterns_.shape == (n_channels, n_channels))
assert_array_almost_equal(csp.fit(epochs_data, y).transform(epochs_data),
X)
# test init exception
assert_raises(ValueError, csp.fit, epochs_data,
np.zeros_like(epochs.events))
assert_raises(ValueError, csp.fit, epochs, y)
assert_raises(ValueError, csp.transform, epochs, y)
csp.n_components = n_components
sources = csp.transform(epochs_data)
assert_true(sources.shape[1] == n_components)
@requires_sklearn
def test_regularized_csp():
"""Test Common Spatial Patterns algorithm using regularized covariance
"""
raw = io.Raw(raw_fname, preload=False)
events = read_events(event_name)
picks = pick_types(raw.info, meg=True, stim=False, ecg=False,
eog=False, exclude='bads')
picks = picks[1:13:3]
epochs = Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True)
epochs_data = epochs.get_data()
n_channels = epochs_data.shape[1]
n_components = 3
reg_cov = [None, 0.05, 'lws', 'oas']
for reg in reg_cov:
csp = CSP(n_components=n_components, reg=reg)
csp.fit(epochs_data, epochs.events[:, -1])
y = epochs.events[:, -1]
X = csp.fit_transform(epochs_data, y)
assert_true(csp.filters_.shape == (n_channels, n_channels))
assert_true(csp.patterns_.shape == (n_channels, n_channels))
assert_array_almost_equal(csp.fit(epochs_data, y).
transform(epochs_data), X)
# test init exception
assert_raises(ValueError, csp.fit, epochs_data,
np.zeros_like(epochs.events))
assert_raises(ValueError, csp.fit, epochs, y)
assert_raises(ValueError, csp.transform, epochs, y)
csp.n_components = n_components
sources = csp.transform(epochs_data)
assert_true(sources.shape[1] == n_components)
| bsd-2-clause |
erdc/proteus | proteus/tests/POD/heat.py | 1 | 2171 | #!/usr/bin/env python
"""
Fine-scale heat equation solver
The equation is du/dt - Laplace u + u + f(x,y,z,t) = 0
"""
from __future__ import print_function
from heat_init import *
physics.name = "heat_3d"
so.name = physics.name
ns = NumericalSolution.NS_base(so,[physics],[numerics],so.sList,opts)
import time
start = time.time()
failed=ns.calculateSolution('run1')
assert(not failed)
end = time.time() # we measure time required to obtain the fully resolved solution
print('Time required was %s seconds' % (end - start))
#arrays for using matplotlib's unstructured plotting interface
x = ns.modelList[0].levelModelList[-1].mesh.nodeArray[:,0]
y = ns.modelList[0].levelModelList[-1].mesh.nodeArray[:,1]
z = ns.modelList[0].levelModelList[-1].mesh.nodeArray[:,2]
u = ns.modelList[0].levelModelList[-1].u[0].dof
# we want to build a 3d plot of f(x,y,z0), with z0 = 0.5
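# the slice below takes 441 = 21*21 consecutive nodal values starting at index
# 4410 = 441*10, i.e. the z-plane with index 10 of 21 (z0 = 0.5), assuming the
# nodes are ordered plane-by-plane on a 21x21x21 grid of the unit cube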
u_range = u[4410:4851]
x_range = x[4410:4851]
y_range = y[4410:4851]
u_range = u_range.reshape(21,21)
x_range = x_range.reshape(21,21)
y_range = y_range.reshape(21,21)
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
fig = plt.figure()
ax = fig.gca(projection='3d')
surf=ax.plot_surface(x_range, y_range, u_range, rstride=1, cstride=1, cmap=cm.coolwarm, linewidth=0, antialiased=False)
plt.xlabel('x'); plt.ylabel('y')
plt.title('approximate solution at t = 1');
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.05f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.savefig("solution.png")
plt.show()
#saving mass and stiffness matrices below
Model = ns.modelList[0].levelModelList[-1]
mj = Model.initializeMassJacobian()
Model.getMassJacobian(mj)
kj = Model.initializeSpatialJacobian()
Model.getSpatialJacobian(kj)
rowptr,colind,nzval = mj.getCSRrepresentation()
np.savetxt('iam',rowptr,'%d', ' ')
np.savetxt('jam',colind,'%d', ' ')
np.savetxt('am',nzval,delimiter=' ')
rowptr_s,colind_s,nzval_s = kj.getCSRrepresentation()
np.savetxt('ias',rowptr_s,'%d',' ')
np.savetxt('jas',colind_s,'%d',' ')
np.savetxt('as',nzval_s,delimiter=' ')
| mit |
anurag313/scikit-learn | examples/exercises/plot_cv_digits.py | 232 | 1206 | """
=============================================
Cross-validation on Digits Dataset Exercise
=============================================
A tutorial exercise using Cross-validation with an SVM on the Digits dataset.
This exercise is used in the :ref:`cv_generators_tut` part of the
:ref:`model_selection_tut` section of the :ref:`stat_learn_tut_index`.
"""
print(__doc__)
import numpy as np
from sklearn import cross_validation, datasets, svm
digits = datasets.load_digits()
X = digits.data
y = digits.target
svc = svm.SVC(kernel='linear')
C_s = np.logspace(-10, 0, 10)
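# 10 candidate values of C, log-spaced between 1e-10 and 1e0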
scores = list()
scores_std = list()
for C in C_s:
svc.C = C
this_scores = cross_validation.cross_val_score(svc, X, y, n_jobs=1)
scores.append(np.mean(this_scores))
scores_std.append(np.std(this_scores))
# Do the plotting
import matplotlib.pyplot as plt
plt.figure(1, figsize=(4, 3))
plt.clf()
plt.semilogx(C_s, scores)
plt.semilogx(C_s, np.array(scores) + np.array(scores_std), 'b--')
plt.semilogx(C_s, np.array(scores) - np.array(scores_std), 'b--')
locs, labels = plt.yticks()
plt.yticks(locs, list(map(lambda x: "%g" % x, locs)))
plt.ylabel('CV score')
plt.xlabel('Parameter C')
plt.ylim(0, 1.1)
plt.show()
| bsd-3-clause |
Achuth17/scikit-learn | sklearn/decomposition/tests/test_truncated_svd.py | 240 | 6055 | """Test truncated SVD transformer."""
import numpy as np
import scipy.sparse as sp
from sklearn.decomposition import TruncatedSVD
from sklearn.utils import check_random_state
from sklearn.utils.testing import (assert_array_almost_equal, assert_equal,
assert_raises, assert_greater,
assert_array_less)
# Make an X that looks somewhat like a small tf-idf matrix.
# XXX newer versions of SciPy have scipy.sparse.rand for this.
shape = 60, 55
n_samples, n_features = shape
rng = check_random_state(42)
X = rng.randint(-100, 20, np.product(shape)).reshape(shape)
X = sp.csr_matrix(np.maximum(X, 0), dtype=np.float64)
X.data[:] = 1 + np.log(X.data)
Xdense = X.A
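# A sketch of the scipy.sparse.rand alternative hinted at in the comment above
# (left unused by the tests below; the density value is an arbitrary assumption):
X_alt = sp.rand(n_samples, n_features, density=0.1, format='csr',
                random_state=42)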
def test_algorithms():
svd_a = TruncatedSVD(30, algorithm="arpack")
svd_r = TruncatedSVD(30, algorithm="randomized", random_state=42)
Xa = svd_a.fit_transform(X)[:, :6]
Xr = svd_r.fit_transform(X)[:, :6]
assert_array_almost_equal(Xa, Xr)
comp_a = np.abs(svd_a.components_)
comp_r = np.abs(svd_r.components_)
# All elements are equal, but some elements are more equal than others.
assert_array_almost_equal(comp_a[:9], comp_r[:9])
assert_array_almost_equal(comp_a[9:], comp_r[9:], decimal=3)
def test_attributes():
for n_components in (10, 25, 41):
tsvd = TruncatedSVD(n_components).fit(X)
assert_equal(tsvd.n_components, n_components)
assert_equal(tsvd.components_.shape, (n_components, n_features))
def test_too_many_components():
for algorithm in ["arpack", "randomized"]:
for n_components in (n_features, n_features+1):
tsvd = TruncatedSVD(n_components=n_components, algorithm=algorithm)
assert_raises(ValueError, tsvd.fit, X)
def test_sparse_formats():
for fmt in ("array", "csr", "csc", "coo", "lil"):
Xfmt = Xdense if fmt == "dense" else getattr(X, "to" + fmt)()
tsvd = TruncatedSVD(n_components=11)
Xtrans = tsvd.fit_transform(Xfmt)
assert_equal(Xtrans.shape, (n_samples, 11))
Xtrans = tsvd.transform(Xfmt)
assert_equal(Xtrans.shape, (n_samples, 11))
def test_inverse_transform():
for algo in ("arpack", "randomized"):
# We need a lot of components for the reconstruction to be "almost
# equal" in all positions. XXX Test means or sums instead?
tsvd = TruncatedSVD(n_components=52, random_state=42)
Xt = tsvd.fit_transform(X)
Xinv = tsvd.inverse_transform(Xt)
assert_array_almost_equal(Xinv, Xdense, decimal=1)
def test_integers():
Xint = X.astype(np.int64)
tsvd = TruncatedSVD(n_components=6)
Xtrans = tsvd.fit_transform(Xint)
assert_equal(Xtrans.shape, (n_samples, tsvd.n_components))
def test_explained_variance():
# Test sparse data
svd_a_10_sp = TruncatedSVD(10, algorithm="arpack")
svd_r_10_sp = TruncatedSVD(10, algorithm="randomized", random_state=42)
svd_a_20_sp = TruncatedSVD(20, algorithm="arpack")
svd_r_20_sp = TruncatedSVD(20, algorithm="randomized", random_state=42)
X_trans_a_10_sp = svd_a_10_sp.fit_transform(X)
X_trans_r_10_sp = svd_r_10_sp.fit_transform(X)
X_trans_a_20_sp = svd_a_20_sp.fit_transform(X)
X_trans_r_20_sp = svd_r_20_sp.fit_transform(X)
# Test dense data
svd_a_10_de = TruncatedSVD(10, algorithm="arpack")
svd_r_10_de = TruncatedSVD(10, algorithm="randomized", random_state=42)
svd_a_20_de = TruncatedSVD(20, algorithm="arpack")
svd_r_20_de = TruncatedSVD(20, algorithm="randomized", random_state=42)
X_trans_a_10_de = svd_a_10_de.fit_transform(X.toarray())
X_trans_r_10_de = svd_r_10_de.fit_transform(X.toarray())
X_trans_a_20_de = svd_a_20_de.fit_transform(X.toarray())
X_trans_r_20_de = svd_r_20_de.fit_transform(X.toarray())
# helper arrays for tests below
svds = (svd_a_10_sp, svd_r_10_sp, svd_a_20_sp, svd_r_20_sp, svd_a_10_de,
svd_r_10_de, svd_a_20_de, svd_r_20_de)
svds_trans = (
(svd_a_10_sp, X_trans_a_10_sp),
(svd_r_10_sp, X_trans_r_10_sp),
(svd_a_20_sp, X_trans_a_20_sp),
(svd_r_20_sp, X_trans_r_20_sp),
(svd_a_10_de, X_trans_a_10_de),
(svd_r_10_de, X_trans_r_10_de),
(svd_a_20_de, X_trans_a_20_de),
(svd_r_20_de, X_trans_r_20_de),
)
svds_10_v_20 = (
(svd_a_10_sp, svd_a_20_sp),
(svd_r_10_sp, svd_r_20_sp),
(svd_a_10_de, svd_a_20_de),
(svd_r_10_de, svd_r_20_de),
)
svds_sparse_v_dense = (
(svd_a_10_sp, svd_a_10_de),
(svd_a_20_sp, svd_a_20_de),
(svd_r_10_sp, svd_r_10_de),
(svd_r_20_sp, svd_r_20_de),
)
# Assert the 1st component is equal
for svd_10, svd_20 in svds_10_v_20:
assert_array_almost_equal(
svd_10.explained_variance_ratio_,
svd_20.explained_variance_ratio_[:10],
decimal=5,
)
# Assert that 20 components has higher explained variance than 10
for svd_10, svd_20 in svds_10_v_20:
assert_greater(
svd_20.explained_variance_ratio_.sum(),
svd_10.explained_variance_ratio_.sum(),
)
# Assert that all the values are greater than 0
for svd in svds:
assert_array_less(0.0, svd.explained_variance_ratio_)
# Assert that total explained variance is less than 1
for svd in svds:
assert_array_less(svd.explained_variance_ratio_.sum(), 1.0)
# Compare sparse vs. dense
for svd_sparse, svd_dense in svds_sparse_v_dense:
assert_array_almost_equal(svd_sparse.explained_variance_ratio_,
svd_dense.explained_variance_ratio_)
# Test that explained_variance is correct
for svd, transformed in svds_trans:
total_variance = np.var(X.toarray(), axis=0).sum()
variances = np.var(transformed, axis=0)
true_explained_variance_ratio = variances / total_variance
assert_array_almost_equal(
svd.explained_variance_ratio_,
true_explained_variance_ratio,
)
| bsd-3-clause |
robin-lai/scikit-learn | sklearn/metrics/cluster/unsupervised.py | 230 | 8281 | """ Unsupervised evaluation metrics. """
# Authors: Robert Layton <[email protected]>
#
# License: BSD 3 clause
import numpy as np
from ...utils import check_random_state
from ..pairwise import pairwise_distances
def silhouette_score(X, labels, metric='euclidean', sample_size=None,
random_state=None, **kwds):
"""Compute the mean Silhouette Coefficient of all samples.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``. To clarify, ``b`` is the distance between a sample and the nearest
cluster that the sample is not a part of.
    Note that the Silhouette Coefficient is only defined if the number of
    labels satisfies 2 <= n_labels <= n_samples - 1.
This function returns the mean Silhouette Coefficient over all samples.
To obtain the values for each sample, use :func:`silhouette_samples`.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters. Negative values generally indicate that a sample has
been assigned to the wrong cluster, as a different cluster is more similar.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
Predicted labels for each sample.
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`metrics.pairwise.pairwise_distances
<sklearn.metrics.pairwise.pairwise_distances>`. If X is the distance
array itself, use ``metric="precomputed"``.
sample_size : int or None
The size of the sample to use when computing the Silhouette Coefficient
on a random subset of the data.
If ``sample_size is None``, no sampling is used.
random_state : integer or numpy.RandomState, optional
The generator used to randomly select a subset of samples if
``sample_size is not None``. If an integer is given, it fixes the seed.
Defaults to the global numpy random number generator.
`**kwds` : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a scipy.spatial.distance metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : float
Mean Silhouette Coefficient for all samples.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<http://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
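    Examples
    --------
    A small synthetic sketch (illustrative only; the value shown is a rounded
    approximation rather than reference output):
    >>> import numpy as np
    >>> from sklearn.metrics import silhouette_score
    >>> X = np.array([[0., 0.], [0., 1.], [10., 0.], [10., 1.]])
    >>> labels = np.array([0, 0, 1, 1])
    >>> round(silhouette_score(X, labels), 2)  # doctest: +SKIP
    0.9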
"""
n_labels = len(np.unique(labels))
n_samples = X.shape[0]
if not 1 < n_labels < n_samples:
raise ValueError("Number of labels is %d. Valid values are 2 "
"to n_samples - 1 (inclusive)" % n_labels)
if sample_size is not None:
random_state = check_random_state(random_state)
indices = random_state.permutation(X.shape[0])[:sample_size]
if metric == "precomputed":
X, labels = X[indices].T[indices].T, labels[indices]
else:
X, labels = X[indices], labels[indices]
return np.mean(silhouette_samples(X, labels, metric=metric, **kwds))
def silhouette_samples(X, labels, metric='euclidean', **kwds):
"""Compute the Silhouette Coefficient for each sample.
The Silhouette Coefficient is a measure of how well samples are clustered
with samples that are similar to themselves. Clustering models with a high
Silhouette Coefficient are said to be dense, where samples in the same
cluster are similar to each other, and well separated, where samples in
different clusters are not very similar to each other.
The Silhouette Coefficient is calculated using the mean intra-cluster
distance (``a``) and the mean nearest-cluster distance (``b``) for each
sample. The Silhouette Coefficient for a sample is ``(b - a) / max(a,
b)``.
    Note that the Silhouette Coefficient is only defined if the number of
    labels satisfies 2 <= n_labels <= n_samples - 1.
This function returns the Silhouette Coefficient for each sample.
The best value is 1 and the worst value is -1. Values near 0 indicate
overlapping clusters.
Read more in the :ref:`User Guide <silhouette_coefficient>`.
Parameters
----------
X : array [n_samples_a, n_samples_a] if metric == "precomputed", or, \
[n_samples_a, n_features] otherwise
Array of pairwise distances between samples, or a feature array.
labels : array, shape = [n_samples]
label values for each sample
metric : string, or callable
The metric to use when calculating distance between instances in a
feature array. If metric is a string, it must be one of the options
allowed by :func:`sklearn.metrics.pairwise.pairwise_distances`. If X is
the distance array itself, use "precomputed" as the metric.
`**kwds` : optional keyword parameters
Any further parameters are passed directly to the distance function.
If using a ``scipy.spatial.distance`` metric, the parameters are still
metric dependent. See the scipy docs for usage examples.
Returns
-------
silhouette : array, shape = [n_samples]
Silhouette Coefficient for each samples.
References
----------
.. [1] `Peter J. Rousseeuw (1987). "Silhouettes: a Graphical Aid to the
Interpretation and Validation of Cluster Analysis". Computational
and Applied Mathematics 20: 53-65.
<http://www.sciencedirect.com/science/article/pii/0377042787901257>`_
.. [2] `Wikipedia entry on the Silhouette Coefficient
<http://en.wikipedia.org/wiki/Silhouette_(clustering)>`_
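    Examples
    --------
    A small synthetic sketch (illustrative only; the rounded values are
    approximations rather than reference output):
    >>> import numpy as np
    >>> from sklearn.metrics import silhouette_samples
    >>> X = np.array([[0., 0.], [0., 1.], [10., 0.], [10., 1.]])
    >>> labels = np.array([0, 0, 1, 1])
    >>> silhouette_samples(X, labels).round(2)  # doctest: +SKIP
    array([ 0.9,  0.9,  0.9,  0.9])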
"""
distances = pairwise_distances(X, metric=metric, **kwds)
n = labels.shape[0]
A = np.array([_intra_cluster_distance(distances[i], labels, i)
for i in range(n)])
B = np.array([_nearest_cluster_distance(distances[i], labels, i)
for i in range(n)])
sil_samples = (B - A) / np.maximum(A, B)
return sil_samples
def _intra_cluster_distance(distances_row, labels, i):
"""Calculate the mean intra-cluster distance for sample i.
Parameters
----------
distances_row : array, shape = [n_samples]
Pairwise distance matrix between sample i and each sample.
labels : array, shape = [n_samples]
label values for each sample
i : int
Sample index being calculated. It is excluded from calculation and
used to determine the current label
Returns
-------
a : float
Mean intra-cluster distance for sample i
"""
mask = labels == labels[i]
mask[i] = False
if not np.any(mask):
# cluster of size 1
return 0
a = np.mean(distances_row[mask])
return a
def _nearest_cluster_distance(distances_row, labels, i):
"""Calculate the mean nearest-cluster distance for sample i.
Parameters
----------
distances_row : array, shape = [n_samples]
Pairwise distance matrix between sample i and each sample.
labels : array, shape = [n_samples]
label values for each sample
i : int
Sample index being calculated. It is used to determine the current
label.
Returns
-------
b : float
Mean nearest-cluster distance for sample i
"""
label = labels[i]
b = np.min([np.mean(distances_row[labels == cur_label])
for cur_label in set(labels) if not cur_label == label])
return b
| bsd-3-clause |
surenkum/eecs_542 | fetch_data.py | 1 | 4939 | import numpy as np
import os
try:
from urllib import urlopen
except ImportError:
from urllib.request import urlopen
import tarfile
import zipfile
import gzip
from sklearn.datasets import load_files
from sklearn.externals import joblib
TWENTY_URL = ("http://people.csail.mit.edu/jrennie/"
"20Newsgroups/20news-bydate.tar.gz")
TWENTY_ARCHIVE_NAME = "20news-bydate.tar.gz"
TWENTY_CACHE_NAME = "20news-bydate.pkz"
TWENTY_TRAIN_FOLDER = "20news-bydate-train"
TWENTY_TEST_FOLDER = "20news-bydate-test"
SENTIMENT140_URL = ("http://cs.stanford.edu/people/alecmgo/"
"trainingandtestdata.zip")
SENTIMENT140_ARCHIVE_NAME = "trainingandtestdata.zip"
COVERTYPE_URL = ('http://archive.ics.uci.edu/ml/'
'machine-learning-databases/covtype/covtype.data.gz')
def get_datasets_folder():
here = os.path.dirname(__file__)
datasets_folder = os.path.abspath(os.path.join(here, 'datasets'))
datasets_archive = os.path.abspath(os.path.join(here, 'datasets.zip'))
if not os.path.exists(datasets_folder):
if os.path.exists(datasets_archive):
print("Extracting " + datasets_archive)
zf = zipfile.ZipFile(datasets_archive)
zf.extractall('.')
assert os.path.exists(datasets_folder)
else:
print("Creating datasets folder: " + datasets_folder)
os.makedirs(datasets_folder)
else:
print("Using existing dataset folder:" + datasets_folder)
return datasets_folder
def check_twenty_newsgroups(datasets_folder):
print("Checking availability of the 20 newsgroups dataset")
archive_path = os.path.join(datasets_folder, TWENTY_ARCHIVE_NAME)
train_path = os.path.join(datasets_folder, TWENTY_TRAIN_FOLDER)
test_path = os.path.join(datasets_folder, TWENTY_TEST_FOLDER)
if not os.path.exists(archive_path):
print("Downloading dataset from %s (14 MB)" % TWENTY_URL)
opener = urlopen(TWENTY_URL)
open(archive_path, 'wb').write(opener.read())
else:
print("Found archive: " + archive_path)
if not os.path.exists(train_path) or not os.path.exists(test_path):
print("Decompressing %s" % archive_path)
tarfile.open(archive_path, "r:gz").extractall(path=datasets_folder)
print("Checking that the 20 newsgroups files exist...")
assert os.path.exists(train_path)
assert os.path.exists(test_path)
print("=> Success!")
def check_sentiment140(datasets_folder):
print("Checking availability of the sentiment 140 dataset")
archive_path = os.path.join(datasets_folder, SENTIMENT140_ARCHIVE_NAME)
sentiment140_path = os.path.join(datasets_folder, 'sentiment140')
train_path = os.path.join(sentiment140_path,
'training.1600000.processed.noemoticon.csv')
test_path = os.path.join(sentiment140_path,
'testdata.manual.2009.06.14.csv')
if not os.path.exists(archive_path):
print("Downloading dataset from %s (77MB)" % SENTIMENT140_URL)
opener = urlopen(SENTIMENT140_URL)
open(archive_path, 'wb').write(opener.read())
else:
print("Found archive: " + archive_path)
if not os.path.exists(sentiment140_path):
print("Extracting %s to %s" % (archive_path, sentiment140_path))
zf = zipfile.ZipFile(archive_path)
zf.extractall(sentiment140_path)
print("Checking that the sentiment 140 CSV files exist...")
assert os.path.exists(train_path)
assert os.path.exists(test_path)
print("=> Success!")
def check_covertype(datasets_folder):
print("Checking availability of the covertype dataset")
archive_path = os.path.join(datasets_folder, 'covtype.data.gz')
covtype_dir = os.path.join(datasets_folder, "covertype")
samples_path = os.path.join(covtype_dir, "samples.pkl")
targets_path = os.path.join(covtype_dir, "targets.pkl")
if not os.path.exists(covtype_dir):
os.makedirs(covtype_dir)
if not os.path.exists(archive_path):
print("Downloading dataset from %s (10.7MB)" % COVERTYPE_URL)
open(archive_path, 'wb').write(urlopen(COVERTYPE_URL).read())
else:
print("Found archive: " + archive_path)
if not os.path.exists(samples_path) or not os.path.exists(targets_path):
print("Parsing the data and splitting input and labels...")
f = open(archive_path, 'rb')
Xy = np.genfromtxt(gzip.GzipFile(fileobj=f), delimiter=',')
X = Xy[:, :-1]
y = Xy[:, -1].astype(np.int32)
joblib.dump(X, samples_path)
joblib.dump(y, targets_path )
print("=> Success!")
if __name__ == "__main__":
import sys
datasets_folder = get_datasets_folder()
if 'twenty_newsgroups' in sys.argv:
check_twenty_newsgroups(datasets_folder)
if 'sentiment140' in sys.argv:
check_sentiment140(datasets_folder)
if 'covertype' in sys.argv:
check_covertype(datasets_folder)
| gpl-3.0 |
ksopyla/primal_svm | examples/mnist_linear_svm.py | 1 | 2847 | print(__doc__)
# Author: Krzysztof Sopyla <[email protected]>
# https://machinethoughts.me
# License: BSD 3 clause
# Standard scientific Python imports
import linearSVM as lsvm
from examples.mnist_helpers import *
# Import datasets, classifiers and performance metrics
from sklearn import datasets, svm, metrics
# fetch original mnist dataset
from sklearn.datasets import fetch_mldata
import datetime as dt
mnist = fetch_mldata('MNIST original', data_home='./')
# mnist object contains: data, COL_NAMES, DESCR, target fields
# you can check it by running
mnist.keys()
# data field is 70k x 784 array, each row represents pixels from 28x28=784 image
images = mnist.data
targets = mnist.target
# Let's have a look at 16 random images.
# We have to reshape each data row from a flat array of 784 ints to a 28x28 2D array
# and pick random indexes from 0 to the size of our dataset.
show_some_digits(images,targets)
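# (A sketch of the reshape assumed to happen inside show_some_digits:
#  img = images[some_index].reshape(28, 28))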
# full dataset classification
X_data = images / 255.0
Y = targets
# split data to train and test
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_data, Y, test_size=0.15, random_state=42)
# Create a classifier: a support vector classifier
classifier = svm.LinearSVC(C=1)
# We learn the digits on train part
start_time = dt.datetime.now()
print('Start learning at {}'.format(str(start_time)))
classifier.fit(X_train, y_train)
end_time = dt.datetime.now()
print('Stop learning {}'.format(str(end_time)))
elapsed_time = end_time - start_time
print('Elapsed learning {}'.format(str(elapsed_time)))
# Now predict the value of the test
expected = y_test
predicted = classifier.predict(X_test)
acc = np.sum(predicted == expected)/len(expected)
print(classifier.coef_)
print('accuracy={}'.format(acc))
acc = 0
show_some_digits(X_test,predicted,title_text="Predicted {}")
print("Classification report for classifier %s:\n%s\n"
% (classifier, metrics.classification_report(expected, predicted)))
cm = metrics.confusion_matrix(expected, predicted)
print("Confusion matrix:\n%s" % cm)
plt.figure()
plot_confusion_matrix(cm)
###
psvm = lsvm.PrimalSVM(l2reg=0.1)
start_time = dt.datetime.now()
print('Start learning at {}'.format(str(start_time)))
psvm.fit(X_train, y_train)
end_time = dt.datetime.now()
print('Stop learning {}'.format(str(end_time)))
elapsed_time= end_time - start_time
print('Elapsed learning {}'.format(str(elapsed_time)))
pred, pred_val = psvm.predict(X_test)
acc = np.sum(pred == expected)/len(expected)
print(psvm.w)
print(pred_val)
print('accuracy={}'.format(acc))
print("Classification report for psvm classifier %s:\n%s\n"
      % (psvm, metrics.classification_report(expected, pred)))
cm = metrics.confusion_matrix(expected, pred)
print("Confusion matrix for psvm:\n%s" % cm)
plt.figure()
plot_confusion_matrix(cm)
| mit |
nelango/ViralityAnalysis | model/lib/sklearn/externals/joblib/__init__.py | 23 | 4764 | """ Joblib is a set of tools to provide **lightweight pipelining in
Python**. In particular, joblib offers:
1. transparent disk-caching of the output values and lazy re-evaluation
(memoize pattern)
  2. easy and simple parallel computing
3. logging and tracing of the execution
Joblib is optimized to be **fast** and **robust** in particular on large
data and has specific optimizations for `numpy` arrays. It is
**BSD-licensed**.
============================== ============================================
**User documentation**: http://pythonhosted.org/joblib
**Download packages**: http://pypi.python.org/pypi/joblib#downloads
**Source code**: http://github.com/joblib/joblib
**Report issues**: http://github.com/joblib/joblib/issues
============================== ============================================
Vision
--------
The vision is to provide tools to easily achieve better performance and
reproducibility when working with long running jobs.
 * **Avoid computing twice the same thing**: code is rerun over and
   over, for instance when prototyping computation-heavy jobs (as in
   scientific development), but hand-crafted solutions to alleviate this
   issue are error-prone and often lead to unreproducible results
 * **Persist to disk transparently**: efficiently persisting
   arbitrary objects containing large data is hard. Using
joblib's caching mechanism avoids hand-written persistence and
implicitly links the file on disk to the execution context of
the original Python object. As a result, joblib's persistence is
   good for resuming an application status or computational job, e.g.
after a crash.
Joblib strives to address these problems while **leaving your code and
your flow control as unmodified as possible** (no framework, no new
paradigms).
Main features
------------------
1) **Transparent and fast disk-caching of output value:** a memoize or
make-like functionality for Python functions that works well for
arbitrary Python objects, including very large numpy arrays. Separate
persistence and flow-execution logic from domain logic or algorithmic
code by writing the operations as a set of steps with well-defined
inputs and outputs: Python functions. Joblib can save their
computation to disk and rerun it only if necessary::
>>> from sklearn.externals.joblib import Memory
>>> mem = Memory(cachedir='/tmp/joblib')
>>> import numpy as np
>>> a = np.vander(np.arange(3)).astype(np.float)
>>> square = mem.cache(np.square)
>>> b = square(a) # doctest: +ELLIPSIS
________________________________________________________________________________
[Memory] Calling square...
square(array([[ 0., 0., 1.],
[ 1., 1., 1.],
[ 4., 2., 1.]]))
___________________________________________________________square - 0...s, 0.0min
>>> c = square(a)
>>> # The above call did not trigger an evaluation
2) **Embarrassingly parallel helper:** to make it easy to write readable
parallel code and debug it quickly::
>>> from sklearn.externals.joblib import Parallel, delayed
>>> from math import sqrt
>>> Parallel(n_jobs=1)(delayed(sqrt)(i**2) for i in range(10))
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
3) **Logging/tracing:** The different functionalities will
   progressively acquire better logging mechanisms to help track what
   has been run, and capture I/O easily. In addition, Joblib will
   provide a few I/O primitives, to easily define logging and
display streams, and provide a way of compiling a report.
We want to be able to quickly inspect what has been run.
4) **Fast compressed Persistence**: a replacement for pickle to work
efficiently on Python objects containing large data (
*joblib.dump* & *joblib.load* ).
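
   A minimal usage sketch (the file path below is only an illustration)::

      >>> from sklearn.externals.joblib import dump, load
      >>> data = np.arange(1000)
      >>> _ = dump(data, '/tmp/joblib_demo.pkl')        # doctest: +SKIP
      >>> (load('/tmp/joblib_demo.pkl') == data).all()  # doctest: +SKIP
      True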
..
>>> import shutil ; shutil.rmtree('/tmp/joblib/')
"""
# PEP0440 compatible formatted version, see:
# https://www.python.org/dev/peps/pep-0440/
#
# Generic release markers:
# X.Y
# X.Y.Z # For bugfix releases
#
# Admissible pre-release markers:
# X.YaN # Alpha release
# X.YbN # Beta release
# X.YrcN # Release Candidate
# X.Y # Final release
#
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
__version__ = '0.9.3'
from .memory import Memory, MemorizedResult
from .logger import PrintTime
from .logger import Logger
from .hashing import hash
from .numpy_pickle import dump
from .numpy_pickle import load
from .parallel import Parallel
from .parallel import delayed
from .parallel import cpu_count
| mit |
syngenta-digital-innovation-lab/project_mayflower-slackbot | src/bot_programs/make_matlibplot_gantt_chart.py | 1 | 3280 | import random, os, time
import datetime
import datetime as dt
import matplotlib.pyplot as plt
import matplotlib.font_manager as font_manager
import matplotlib.dates
from matplotlib.dates import WEEKLY,MONTHLY,HOURLY, DateFormatter, rrulewrapper, RRuleLocator, HourLocator, MinuteLocator
import numpy as np
import settings
from datetimes_utils import getTodayDatetime
def datetime_to_float(dtime):
return float(dtime.hour) + float(dtime.minute / 60)
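# For example, a datetime at 13:30 maps to 13.5, which lines up with the
# hour-based x-axis positions used in makeGanttChart below.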
def makeGanttChart(retvalUsers, currentUserSlackId, channelInput, targetDay):
ylabels = []
customDates = []
init = True
set_min_dtime = getTodayDatetime()
set_max_dtime = getTodayDatetime()
settings.log.debug("making y-axis graph labels")
# make y labels and put in datetimes
for user in retvalUsers:
name = ((user[1])[0]).split(" ")[0].capitalize()
start_dtime = (user[3])[0]
end_dtime = (user[3])[1]
'''
if init:
init = False
min_dtime = start_dtime
max_dtime = end_dtime
if start_dtime < min_dtime:
min_dtime = start_dtime
if max_dtime < end_dtime:
max_dtime = end_dtime
'''
ylabels.append(name)
customDates.append([datetime_to_float(start_dtime), datetime_to_float(end_dtime)])
ilen=len(ylabels)
pos = np.arange(0.5,ilen*0.5+0.5,0.5)
task_dates = {}
for i,task in enumerate(ylabels):
task_dates[task] = customDates[i]
fig = plt.figure(figsize=(20,8))
ax = fig.add_subplot(111)
settings.log.debug("Make bars from datetimes")
r = lambda: random.randint(0,255)
for i in range(len(ylabels)):
start_date,end_date = task_dates[ylabels[i]]
random_color = '#%02X%02X%02X' % (r(),r(),r())
ax.barh((i*0.5)+0.5, end_date - start_date, left=start_date, height=0.425, align='center', color=random_color)
locsy, labelsy = plt.yticks(pos,ylabels)
plt.setp(labelsy, fontsize = 12)
ax.set_ylim(ymin = -0.1, ymax = ilen*0.5+0.5)
settings.log.debug("Make x-axis labels")
# make x-axis labales and min max times
ax.set_xlim(datetime_to_float(set_min_dtime.replace(hour=7, minute=0)), datetime_to_float(set_max_dtime.replace(hour=20, minute=0)))
major_ticks = np.arange(7, 19, 1.0)
minor_ticks = np.arange(7, 19, 0.25)
ax.set_xticks(major_ticks)
ax.set_xticks(minor_ticks, minor=True)
x = [7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0, 10.5, 11, 11.5, 12.0, 12.5, 13.0, 13.5, 14.0, 14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0, 18.5, 19.0, 19.5, 20.0]
labels = ['7:00', '7:30', '8:00', '8:30', '9:00', '9:30', '10:00', '10:30', '11:00', '11:30', '12:00', '12:30', '1:00', '1:30', '2:00', '2:30', '3:00', '3:30', '4:00', '4:30', '5:00', '5:30', '6:00', '6:30', '7:00', '7:30', '8:00']
plt.xticks(x, labels)
ax.grid(color = 'grey', linestyle = ':')
font = font_manager.FontProperties(size='small')
ax.legend(loc=1,prop=font)
input_filename = targetDay.capitalize() + '-Attendance.png'
ax.invert_yaxis()
plt.title(targetDay.capitalize() + " " + set_min_dtime.strftime('%b %d, 20%y'))
settings.log.debug("Saving gantt chart as " + input_filename)
plt.savefig(settings.currDirPath + "graph_images/" + input_filename)
settings.log.debug("Saved gantt chart")
plt.clf()
plt.close()
return input_filename
| mit |
DailyActie/Surrogate-Model | 01-codes/scikit-learn-master/sklearn/datasets/species_distributions.py | 1 | 7919 | """
=============================
Species distribution dataset
=============================
This dataset represents the geographic distribution of species.
The dataset is provided by Phillips et. al. (2006).
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/apps/redlist/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/apps/redlist/details/13408/0>`_ ,
also known as the Forest Small Rice Rat, a rodent that lives in Peru,
   Colombia, Ecuador, and Venezuela.
References:
* `"Maximum entropy modeling of species geographic distributions"
<http://www.cs.princeton.edu/~schapire/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
Notes:
* See examples/applications/plot_species_distribution_modeling.py
for an example of using this dataset
"""
# Authors: Peter Prettenhofer <[email protected]>
# Jake Vanderplas <[email protected]>
#
# License: BSD 3 clause
from io import BytesIO
from os import makedirs
from os.path import exists
try:
# Python 2
from urllib2 import urlopen
PY2 = True
except ImportError:
# Python 3
from urllib.request import urlopen
PY2 = False
import numpy as np
from sklearn.datasets.base import get_data_home, Bunch
from sklearn.datasets.base import _pkl_filepath
from sklearn.externals import joblib
DIRECTORY_URL = "http://www.cs.princeton.edu/~schapire/maxent/datasets/"
SAMPLES_URL = DIRECTORY_URL + "samples.zip"
COVERAGES_URL = DIRECTORY_URL + "coverages.zip"
DATA_ARCHIVE_NAME = "species_coverage.pkz"
def _load_coverage(F, header_length=6, dtype=np.int16):
"""Load a coverage file from an open file object.
This will return a numpy array of the given dtype
"""
header = [F.readline() for i in range(header_length)]
make_tuple = lambda t: (t.split()[0], float(t.split()[1]))
header = dict([make_tuple(line) for line in header])
M = np.loadtxt(F, dtype=dtype)
nodata = int(header[b'NODATA_value'])
if nodata != -9999:
        M[M == nodata] = -9999  # normalise missing-data cells to the canonical marker
return M
def _load_csv(F):
"""Load csv file.
Parameters
----------
F : file object
CSV file open in byte mode.
Returns
-------
rec : np.ndarray
record array representing the data
"""
if PY2:
# Numpy recarray wants Python 2 str but not unicode
names = F.readline().strip().split(',')
else:
# Numpy recarray wants Python 3 str but not bytes...
names = F.readline().decode('ascii').strip().split(',')
rec = np.loadtxt(F, skiprows=0, delimiter=',', dtype='a22,f4,f4')
rec.dtype.names = names
return rec
def construct_grids(batch):
"""Construct the map grid from the batch object
Parameters
----------
batch : Batch object
The object returned by :func:`fetch_species_distributions`
Returns
-------
(xgrid, ygrid) : 1-D arrays
The grid corresponding to the values in batch.coverages
"""
# x,y coordinates for corner cells
xmin = batch.x_left_lower_corner + batch.grid_size
xmax = xmin + (batch.Nx * batch.grid_size)
ymin = batch.y_left_lower_corner + batch.grid_size
ymax = ymin + (batch.Ny * batch.grid_size)
# x coordinates of the grid cells
xgrid = np.arange(xmin, xmax, batch.grid_size)
# y coordinates of the grid cells
ygrid = np.arange(ymin, ymax, batch.grid_size)
return (xgrid, ygrid)
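# A typical downstream use of these grids (an illustrative sketch, mirroring
# the species-distribution plotting example):
#   xgrid, ygrid = construct_grids(batch)
#   X, Y = np.meshgrid(xgrid, ygrid[::-1])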
def fetch_species_distributions(data_home=None,
download_if_missing=True):
"""Loader for species distribution dataset from Phillips et. al. (2006)
Read more in the :ref:`User Guide <datasets>`.
Parameters
----------
data_home : optional, default: None
Specify another download and cache folder for the datasets. By default
all scikit learn data is stored in '~/scikit_learn_data' subfolders.
download_if_missing: optional, True by default
        If False, raise an IOError if the data is not locally available
instead of trying to download the data from the source site.
Returns
--------
The data is returned as a Bunch object with the following attributes:
coverages : array, shape = [14, 1592, 1212]
These represent the 14 features measured at each point of the map grid.
The latitude/longitude values for the grid are discussed below.
Missing data is represented by the value -9999.
train : record array, shape = (1623,)
The training points for the data. Each point has three fields:
- train['species'] is the species name
- train['dd long'] is the longitude, in degrees
- train['dd lat'] is the latitude, in degrees
test : record array, shape = (619,)
The test points for the data. Same format as the training data.
Nx, Ny : integers
The number of longitudes (x) and latitudes (y) in the grid
x_left_lower_corner, y_left_lower_corner : floats
The (x,y) position of the lower-left corner, in degrees
grid_size : float
The spacing between points of the grid, in degrees
Notes
------
This dataset represents the geographic distribution of species.
The dataset is provided by Phillips et. al. (2006).
The two species are:
- `"Bradypus variegatus"
<http://www.iucnredlist.org/apps/redlist/details/3038/0>`_ ,
the Brown-throated Sloth.
- `"Microryzomys minutus"
<http://www.iucnredlist.org/apps/redlist/details/13408/0>`_ ,
also known as the Forest Small Rice Rat, a rodent that lives in Peru,
      Colombia, Ecuador, and Venezuela.
References
----------
* `"Maximum entropy modeling of species geographic distributions"
<http://www.cs.princeton.edu/~schapire/papers/ecolmod.pdf>`_
S. J. Phillips, R. P. Anderson, R. E. Schapire - Ecological Modelling,
190:231-259, 2006.
Notes
-----
* See examples/applications/plot_species_distribution_modeling.py
for an example of using this dataset with scikit-learn
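
    Examples
    --------
    A minimal sketch (illustrative only; the first call needs network access
    to download the data)::

        >>> from sklearn.datasets import fetch_species_distributions
        >>> data = fetch_species_distributions()  # doctest: +SKIP
        >>> data.coverages.shape                  # doctest: +SKIP
        (14, 1592, 1212)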
"""
data_home = get_data_home(data_home)
if not exists(data_home):
makedirs(data_home)
# Define parameters for the data files. These should not be changed
# unless the data model changes. They will be saved in the npz file
# with the downloaded data.
extra_params = dict(x_left_lower_corner=-94.8,
Nx=1212,
y_left_lower_corner=-56.05,
Ny=1592,
grid_size=0.05)
dtype = np.int16
archive_path = _pkl_filepath(data_home, DATA_ARCHIVE_NAME)
if not exists(archive_path):
print('Downloading species data from %s to %s' % (SAMPLES_URL,
data_home))
X = np.load(BytesIO(urlopen(SAMPLES_URL).read()))
for f in X.files:
fhandle = BytesIO(X[f])
if 'train' in f:
train = _load_csv(fhandle)
if 'test' in f:
test = _load_csv(fhandle)
print('Downloading coverage data from %s to %s' % (COVERAGES_URL,
data_home))
X = np.load(BytesIO(urlopen(COVERAGES_URL).read()))
coverages = []
for f in X.files:
fhandle = BytesIO(X[f])
print(' - converting', f)
coverages.append(_load_coverage(fhandle))
coverages = np.asarray(coverages, dtype=dtype)
bunch = Bunch(coverages=coverages,
test=test,
train=train,
**extra_params)
joblib.dump(bunch, archive_path, compress=9)
else:
bunch = joblib.load(archive_path)
return bunch
| mit |
xzh86/scikit-learn | sklearn/feature_extraction/tests/test_text.py | 110 | 34127 | from __future__ import unicode_literals
import warnings
from sklearn.feature_extraction.text import strip_tags
from sklearn.feature_extraction.text import strip_accents_unicode
from sklearn.feature_extraction.text import strip_accents_ascii
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS
from sklearn.cross_validation import train_test_split
from sklearn.cross_validation import cross_val_score
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.base import clone
import numpy as np
from nose import SkipTest
from nose.tools import assert_equal
from nose.tools import assert_false
from nose.tools import assert_not_equal
from nose.tools import assert_true
from nose.tools import assert_almost_equal
from numpy.testing import assert_array_almost_equal
from numpy.testing import assert_array_equal
from numpy.testing import assert_raises
from sklearn.utils.testing import (assert_in, assert_less, assert_greater,
assert_warns_message, assert_raise_message,
clean_warning_registry)
from collections import defaultdict, Mapping
from functools import partial
import pickle
from io import StringIO
JUNK_FOOD_DOCS = (
"the pizza pizza beer copyright",
"the pizza burger beer copyright",
"the the pizza beer beer copyright",
"the burger beer beer copyright",
"the coke burger coke copyright",
"the coke burger burger",
)
NOTJUNK_FOOD_DOCS = (
"the salad celeri copyright",
"the salad salad sparkling water copyright",
"the the celeri celeri copyright",
"the tomato tomato salad water",
"the tomato salad water copyright",
)
ALL_FOOD_DOCS = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
def uppercase(s):
return strip_accents_unicode(s).upper()
def strip_eacute(s):
return s.replace('\xe9', 'e')
def split_tokenize(s):
return s.split()
def lazy_analyze(s):
return ['the_ultimate_feature']
def test_strip_accents():
# check some classical latin accentuated symbols
a = '\xe0\xe1\xe2\xe3\xe4\xe5\xe7\xe8\xe9\xea\xeb'
expected = 'aaaaaaceeee'
assert_equal(strip_accents_unicode(a), expected)
a = '\xec\xed\xee\xef\xf1\xf2\xf3\xf4\xf5\xf6\xf9\xfa\xfb\xfc\xfd'
expected = 'iiiinooooouuuuy'
assert_equal(strip_accents_unicode(a), expected)
# check some arabic
a = '\u0625' # halef with a hamza below
expected = '\u0627' # simple halef
assert_equal(strip_accents_unicode(a), expected)
# mix letters accentuated and not
a = "this is \xe0 test"
expected = 'this is a test'
assert_equal(strip_accents_unicode(a), expected)
def test_to_ascii():
# check some classical latin accentuated symbols
a = '\xe0\xe1\xe2\xe3\xe4\xe5\xe7\xe8\xe9\xea\xeb'
expected = 'aaaaaaceeee'
assert_equal(strip_accents_ascii(a), expected)
a = '\xec\xed\xee\xef\xf1\xf2\xf3\xf4\xf5\xf6\xf9\xfa\xfb\xfc\xfd'
expected = 'iiiinooooouuuuy'
assert_equal(strip_accents_ascii(a), expected)
# check some arabic
a = '\u0625' # halef with a hamza below
expected = '' # halef has no direct ascii match
assert_equal(strip_accents_ascii(a), expected)
# mix letters accentuated and not
a = "this is \xe0 test"
expected = 'this is a test'
assert_equal(strip_accents_ascii(a), expected)
def test_word_analyzer_unigrams():
for Vectorizer in (CountVectorizer, HashingVectorizer):
wa = Vectorizer(strip_accents='ascii').build_analyzer()
text = ("J'ai mang\xe9 du kangourou ce midi, "
"c'\xe9tait pas tr\xeas bon.")
expected = ['ai', 'mange', 'du', 'kangourou', 'ce', 'midi',
'etait', 'pas', 'tres', 'bon']
assert_equal(wa(text), expected)
text = "This is a test, really.\n\n I met Harry yesterday."
expected = ['this', 'is', 'test', 'really', 'met', 'harry',
'yesterday']
assert_equal(wa(text), expected)
wa = Vectorizer(input='file').build_analyzer()
text = StringIO("This is a test with a file-like object!")
expected = ['this', 'is', 'test', 'with', 'file', 'like',
'object']
assert_equal(wa(text), expected)
# with custom preprocessor
wa = Vectorizer(preprocessor=uppercase).build_analyzer()
text = ("J'ai mang\xe9 du kangourou ce midi, "
" c'\xe9tait pas tr\xeas bon.")
expected = ['AI', 'MANGE', 'DU', 'KANGOUROU', 'CE', 'MIDI',
'ETAIT', 'PAS', 'TRES', 'BON']
assert_equal(wa(text), expected)
# with custom tokenizer
wa = Vectorizer(tokenizer=split_tokenize,
strip_accents='ascii').build_analyzer()
text = ("J'ai mang\xe9 du kangourou ce midi, "
"c'\xe9tait pas tr\xeas bon.")
expected = ["j'ai", 'mange', 'du', 'kangourou', 'ce', 'midi,',
"c'etait", 'pas', 'tres', 'bon.']
assert_equal(wa(text), expected)
def test_word_analyzer_unigrams_and_bigrams():
wa = CountVectorizer(analyzer="word", strip_accents='unicode',
ngram_range=(1, 2)).build_analyzer()
text = "J'ai mang\xe9 du kangourou ce midi, c'\xe9tait pas tr\xeas bon."
expected = ['ai', 'mange', 'du', 'kangourou', 'ce', 'midi',
'etait', 'pas', 'tres', 'bon', 'ai mange', 'mange du',
'du kangourou', 'kangourou ce', 'ce midi', 'midi etait',
'etait pas', 'pas tres', 'tres bon']
assert_equal(wa(text), expected)
def test_unicode_decode_error():
    # decode_error defaults to strict, so this should fail
# First, encode (as bytes) a unicode string.
text = "J'ai mang\xe9 du kangourou ce midi, c'\xe9tait pas tr\xeas bon."
text_bytes = text.encode('utf-8')
# Then let the Analyzer try to decode it as ascii. It should fail,
# because we have given it an incorrect encoding.
wa = CountVectorizer(ngram_range=(1, 2), encoding='ascii').build_analyzer()
assert_raises(UnicodeDecodeError, wa, text_bytes)
ca = CountVectorizer(analyzer='char', ngram_range=(3, 6),
encoding='ascii').build_analyzer()
assert_raises(UnicodeDecodeError, ca, text_bytes)
def test_char_ngram_analyzer():
cnga = CountVectorizer(analyzer='char', strip_accents='unicode',
ngram_range=(3, 6)).build_analyzer()
text = "J'ai mang\xe9 du kangourou ce midi, c'\xe9tait pas tr\xeas bon"
expected = ["j'a", "'ai", 'ai ', 'i m', ' ma']
assert_equal(cnga(text)[:5], expected)
expected = ['s tres', ' tres ', 'tres b', 'res bo', 'es bon']
assert_equal(cnga(text)[-5:], expected)
text = "This \n\tis a test, really.\n\n I met Harry yesterday"
expected = ['thi', 'his', 'is ', 's i', ' is']
assert_equal(cnga(text)[:5], expected)
expected = [' yeste', 'yester', 'esterd', 'sterda', 'terday']
assert_equal(cnga(text)[-5:], expected)
cnga = CountVectorizer(input='file', analyzer='char',
ngram_range=(3, 6)).build_analyzer()
text = StringIO("This is a test with a file-like object!")
expected = ['thi', 'his', 'is ', 's i', ' is']
assert_equal(cnga(text)[:5], expected)
def test_char_wb_ngram_analyzer():
cnga = CountVectorizer(analyzer='char_wb', strip_accents='unicode',
ngram_range=(3, 6)).build_analyzer()
text = "This \n\tis a test, really.\n\n I met Harry yesterday"
expected = [' th', 'thi', 'his', 'is ', ' thi']
assert_equal(cnga(text)[:5], expected)
expected = ['yester', 'esterd', 'sterda', 'terday', 'erday ']
assert_equal(cnga(text)[-5:], expected)
cnga = CountVectorizer(input='file', analyzer='char_wb',
ngram_range=(3, 6)).build_analyzer()
text = StringIO("A test with a file-like object!")
expected = [' a ', ' te', 'tes', 'est', 'st ', ' tes']
assert_equal(cnga(text)[:6], expected)
def test_countvectorizer_custom_vocabulary():
vocab = {"pizza": 0, "beer": 1}
terms = set(vocab.keys())
# Try a few of the supported types.
for typ in [dict, list, iter, partial(defaultdict, int)]:
v = typ(vocab)
vect = CountVectorizer(vocabulary=v)
vect.fit(JUNK_FOOD_DOCS)
if isinstance(v, Mapping):
assert_equal(vect.vocabulary_, vocab)
else:
assert_equal(set(vect.vocabulary_), terms)
X = vect.transform(JUNK_FOOD_DOCS)
assert_equal(X.shape[1], len(terms))
def test_countvectorizer_custom_vocabulary_pipeline():
what_we_like = ["pizza", "beer"]
pipe = Pipeline([
('count', CountVectorizer(vocabulary=what_we_like)),
('tfidf', TfidfTransformer())])
X = pipe.fit_transform(ALL_FOOD_DOCS)
assert_equal(set(pipe.named_steps['count'].vocabulary_),
set(what_we_like))
assert_equal(X.shape[1], len(what_we_like))
def test_countvectorizer_custom_vocabulary_repeated_indeces():
vocab = {"pizza": 0, "beer": 0}
try:
CountVectorizer(vocabulary=vocab)
except ValueError as e:
assert_in("vocabulary contains repeated indices", str(e).lower())
def test_countvectorizer_custom_vocabulary_gap_index():
vocab = {"pizza": 1, "beer": 2}
try:
CountVectorizer(vocabulary=vocab)
except ValueError as e:
assert_in("doesn't contain index", str(e).lower())
def test_countvectorizer_stop_words():
cv = CountVectorizer()
cv.set_params(stop_words='english')
assert_equal(cv.get_stop_words(), ENGLISH_STOP_WORDS)
cv.set_params(stop_words='_bad_str_stop_')
assert_raises(ValueError, cv.get_stop_words)
cv.set_params(stop_words='_bad_unicode_stop_')
assert_raises(ValueError, cv.get_stop_words)
stoplist = ['some', 'other', 'words']
cv.set_params(stop_words=stoplist)
assert_equal(cv.get_stop_words(), set(stoplist))
def test_countvectorizer_empty_vocabulary():
try:
vect = CountVectorizer(vocabulary=[])
vect.fit(["foo"])
assert False, "we shouldn't get here"
except ValueError as e:
assert_in("empty vocabulary", str(e).lower())
try:
v = CountVectorizer(max_df=1.0, stop_words="english")
# fit on stopwords only
v.fit(["to be or not to be", "and me too", "and so do you"])
assert False, "we shouldn't get here"
except ValueError as e:
assert_in("empty vocabulary", str(e).lower())
def test_fit_countvectorizer_twice():
cv = CountVectorizer()
X1 = cv.fit_transform(ALL_FOOD_DOCS[:5])
X2 = cv.fit_transform(ALL_FOOD_DOCS[5:])
assert_not_equal(X1.shape[1], X2.shape[1])
def test_tf_idf_smoothing():
X = [[1, 1, 1],
[1, 1, 0],
[1, 0, 0]]
tr = TfidfTransformer(smooth_idf=True, norm='l2')
tfidf = tr.fit_transform(X).toarray()
assert_true((tfidf >= 0).all())
# check normalization
assert_array_almost_equal((tfidf ** 2).sum(axis=1), [1., 1., 1.])
# this is robust to features with only zeros
X = [[1, 1, 0],
[1, 1, 0],
[1, 0, 0]]
tr = TfidfTransformer(smooth_idf=True, norm='l2')
tfidf = tr.fit_transform(X).toarray()
assert_true((tfidf >= 0).all())
def test_tfidf_no_smoothing():
X = [[1, 1, 1],
[1, 1, 0],
[1, 0, 0]]
tr = TfidfTransformer(smooth_idf=False, norm='l2')
tfidf = tr.fit_transform(X).toarray()
assert_true((tfidf >= 0).all())
# check normalization
assert_array_almost_equal((tfidf ** 2).sum(axis=1), [1., 1., 1.])
    # the lack of smoothing makes IDF fragile in the presence of features with
    # only zeros
X = [[1, 1, 0],
[1, 1, 0],
[1, 0, 0]]
tr = TfidfTransformer(smooth_idf=False, norm='l2')
clean_warning_registry()
with warnings.catch_warnings(record=True) as w:
1. / np.array([0.])
numpy_provides_div0_warning = len(w) == 1
in_warning_message = 'divide by zero'
tfidf = assert_warns_message(RuntimeWarning, in_warning_message,
tr.fit_transform, X).toarray()
if not numpy_provides_div0_warning:
raise SkipTest("Numpy does not provide div 0 warnings.")
def test_sublinear_tf():
X = [[1], [2], [3]]
tr = TfidfTransformer(sublinear_tf=True, use_idf=False, norm=None)
tfidf = tr.fit_transform(X).toarray()
assert_equal(tfidf[0], 1)
assert_greater(tfidf[1], tfidf[0])
assert_greater(tfidf[2], tfidf[1])
assert_less(tfidf[1], 2)
assert_less(tfidf[2], 3)
def test_vectorizer():
# raw documents as an iterator
train_data = iter(ALL_FOOD_DOCS[:-1])
test_data = [ALL_FOOD_DOCS[-1]]
n_train = len(ALL_FOOD_DOCS) - 1
# test without vocabulary
v1 = CountVectorizer(max_df=0.5)
counts_train = v1.fit_transform(train_data)
if hasattr(counts_train, 'tocsr'):
counts_train = counts_train.tocsr()
assert_equal(counts_train[0, v1.vocabulary_["pizza"]], 2)
    # build a vectorizer v2 with the same vocabulary as the one fitted by v1
v2 = CountVectorizer(vocabulary=v1.vocabulary_)
# compare that the two vectorizer give the same output on the test sample
for v in (v1, v2):
counts_test = v.transform(test_data)
if hasattr(counts_test, 'tocsr'):
counts_test = counts_test.tocsr()
vocabulary = v.vocabulary_
assert_equal(counts_test[0, vocabulary["salad"]], 1)
assert_equal(counts_test[0, vocabulary["tomato"]], 1)
assert_equal(counts_test[0, vocabulary["water"]], 1)
# stop word from the fixed list
assert_false("the" in vocabulary)
# stop word found automatically by the vectorizer DF thresholding
        # words that are highly frequent across the complete corpus are likely
        # to be uninformative (either real stop words or extraction
        # artifacts)
assert_false("copyright" in vocabulary)
# not present in the sample
assert_equal(counts_test[0, vocabulary["coke"]], 0)
assert_equal(counts_test[0, vocabulary["burger"]], 0)
assert_equal(counts_test[0, vocabulary["beer"]], 0)
assert_equal(counts_test[0, vocabulary["pizza"]], 0)
# test tf-idf
t1 = TfidfTransformer(norm='l1')
tfidf = t1.fit(counts_train).transform(counts_train).toarray()
assert_equal(len(t1.idf_), len(v1.vocabulary_))
assert_equal(tfidf.shape, (n_train, len(v1.vocabulary_)))
# test tf-idf with new data
tfidf_test = t1.transform(counts_test).toarray()
assert_equal(tfidf_test.shape, (len(test_data), len(v1.vocabulary_)))
# test tf alone
t2 = TfidfTransformer(norm='l1', use_idf=False)
tf = t2.fit(counts_train).transform(counts_train).toarray()
assert_equal(t2.idf_, None)
# test idf transform with unlearned idf vector
t3 = TfidfTransformer(use_idf=True)
assert_raises(ValueError, t3.transform, counts_train)
# test idf transform with incompatible n_features
X = [[1, 1, 5],
[1, 1, 0]]
t3.fit(X)
X_incompt = [[1, 3],
[1, 3]]
assert_raises(ValueError, t3.transform, X_incompt)
# L1-normalized term frequencies sum to one
assert_array_almost_equal(np.sum(tf, axis=1), [1.0] * n_train)
# test the direct tfidf vectorizer
# (equivalent to term count vectorizer + tfidf transformer)
train_data = iter(ALL_FOOD_DOCS[:-1])
tv = TfidfVectorizer(norm='l1')
tv.max_df = v1.max_df
tfidf2 = tv.fit_transform(train_data).toarray()
assert_false(tv.fixed_vocabulary_)
assert_array_almost_equal(tfidf, tfidf2)
# test the direct tfidf vectorizer with new data
tfidf_test2 = tv.transform(test_data).toarray()
assert_array_almost_equal(tfidf_test, tfidf_test2)
# test transform on unfitted vectorizer with empty vocabulary
v3 = CountVectorizer(vocabulary=None)
assert_raises(ValueError, v3.transform, train_data)
# ascii preprocessor?
v3.set_params(strip_accents='ascii', lowercase=False)
assert_equal(v3.build_preprocessor(), strip_accents_ascii)
# error on bad strip_accents param
v3.set_params(strip_accents='_gabbledegook_', preprocessor=None)
assert_raises(ValueError, v3.build_preprocessor)
# error with bad analyzer type
    v3.set_params(analyzer='_invalid_analyzer_type_')
assert_raises(ValueError, v3.build_analyzer)
def test_tfidf_vectorizer_setters():
tv = TfidfVectorizer(norm='l2', use_idf=False, smooth_idf=False,
sublinear_tf=False)
tv.norm = 'l1'
assert_equal(tv._tfidf.norm, 'l1')
tv.use_idf = True
assert_true(tv._tfidf.use_idf)
tv.smooth_idf = True
assert_true(tv._tfidf.smooth_idf)
tv.sublinear_tf = True
assert_true(tv._tfidf.sublinear_tf)
def test_hashing_vectorizer():
v = HashingVectorizer()
X = v.transform(ALL_FOOD_DOCS)
token_nnz = X.nnz
assert_equal(X.shape, (len(ALL_FOOD_DOCS), v.n_features))
assert_equal(X.dtype, v.dtype)
# By default the hashed values receive a random sign and l2 normalization
# makes the feature values bounded
assert_true(np.min(X.data) > -1)
assert_true(np.min(X.data) < 0)
assert_true(np.max(X.data) > 0)
assert_true(np.max(X.data) < 1)
# Check that the rows are normalized
for i in range(X.shape[0]):
assert_almost_equal(np.linalg.norm(X[0].data, 2), 1.0)
# Check vectorization with some non-default parameters
v = HashingVectorizer(ngram_range=(1, 2), non_negative=True, norm='l1')
X = v.transform(ALL_FOOD_DOCS)
assert_equal(X.shape, (len(ALL_FOOD_DOCS), v.n_features))
assert_equal(X.dtype, v.dtype)
# ngrams generate more non zeros
ngrams_nnz = X.nnz
assert_true(ngrams_nnz > token_nnz)
assert_true(ngrams_nnz < 2 * token_nnz)
# makes the feature values bounded
assert_true(np.min(X.data) > 0)
assert_true(np.max(X.data) < 1)
# Check that the rows are normalized
for i in range(X.shape[0]):
assert_almost_equal(np.linalg.norm(X[0].data, 1), 1.0)
def test_feature_names():
cv = CountVectorizer(max_df=0.5)
# test for Value error on unfitted/empty vocabulary
assert_raises(ValueError, cv.get_feature_names)
X = cv.fit_transform(ALL_FOOD_DOCS)
n_samples, n_features = X.shape
assert_equal(len(cv.vocabulary_), n_features)
feature_names = cv.get_feature_names()
assert_equal(len(feature_names), n_features)
assert_array_equal(['beer', 'burger', 'celeri', 'coke', 'pizza',
'salad', 'sparkling', 'tomato', 'water'],
feature_names)
for idx, name in enumerate(feature_names):
assert_equal(idx, cv.vocabulary_.get(name))
def test_vectorizer_max_features():
vec_factories = (
CountVectorizer,
TfidfVectorizer,
)
expected_vocabulary = set(['burger', 'beer', 'salad', 'pizza'])
expected_stop_words = set([u'celeri', u'tomato', u'copyright', u'coke',
u'sparkling', u'water', u'the'])
for vec_factory in vec_factories:
# test bounded number of extracted features
vectorizer = vec_factory(max_df=0.6, max_features=4)
vectorizer.fit(ALL_FOOD_DOCS)
assert_equal(set(vectorizer.vocabulary_), expected_vocabulary)
assert_equal(vectorizer.stop_words_, expected_stop_words)
def test_count_vectorizer_max_features():
# Regression test: max_features didn't work correctly in 0.14.
cv_1 = CountVectorizer(max_features=1)
cv_3 = CountVectorizer(max_features=3)
cv_None = CountVectorizer(max_features=None)
counts_1 = cv_1.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
counts_3 = cv_3.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
counts_None = cv_None.fit_transform(JUNK_FOOD_DOCS).sum(axis=0)
features_1 = cv_1.get_feature_names()
features_3 = cv_3.get_feature_names()
features_None = cv_None.get_feature_names()
# The most common feature is "the", with frequency 7.
assert_equal(7, counts_1.max())
assert_equal(7, counts_3.max())
assert_equal(7, counts_None.max())
# The most common feature should be the same
assert_equal("the", features_1[np.argmax(counts_1)])
assert_equal("the", features_3[np.argmax(counts_3)])
assert_equal("the", features_None[np.argmax(counts_None)])
def test_vectorizer_max_df():
test_data = ['abc', 'dea', 'eat']
vect = CountVectorizer(analyzer='char', max_df=1.0)
vect.fit(test_data)
assert_true('a' in vect.vocabulary_.keys())
assert_equal(len(vect.vocabulary_.keys()), 6)
assert_equal(len(vect.stop_words_), 0)
vect.max_df = 0.5 # 0.5 * 3 documents -> max_doc_count == 1.5
vect.fit(test_data)
assert_true('a' not in vect.vocabulary_.keys()) # {ae} ignored
assert_equal(len(vect.vocabulary_.keys()), 4) # {bcdt} remain
assert_true('a' in vect.stop_words_)
assert_equal(len(vect.stop_words_), 2)
vect.max_df = 1
vect.fit(test_data)
assert_true('a' not in vect.vocabulary_.keys()) # {ae} ignored
assert_equal(len(vect.vocabulary_.keys()), 4) # {bcdt} remain
assert_true('a' in vect.stop_words_)
assert_equal(len(vect.stop_words_), 2)
def test_vectorizer_min_df():
test_data = ['abc', 'dea', 'eat']
vect = CountVectorizer(analyzer='char', min_df=1)
vect.fit(test_data)
assert_true('a' in vect.vocabulary_.keys())
assert_equal(len(vect.vocabulary_.keys()), 6)
assert_equal(len(vect.stop_words_), 0)
vect.min_df = 2
vect.fit(test_data)
assert_true('c' not in vect.vocabulary_.keys()) # {bcdt} ignored
assert_equal(len(vect.vocabulary_.keys()), 2) # {ae} remain
assert_true('c' in vect.stop_words_)
assert_equal(len(vect.stop_words_), 4)
vect.min_df = 0.8 # 0.8 * 3 documents -> min_doc_count == 2.4
vect.fit(test_data)
assert_true('c' not in vect.vocabulary_.keys()) # {bcdet} ignored
assert_equal(len(vect.vocabulary_.keys()), 1) # {a} remains
assert_true('c' in vect.stop_words_)
assert_equal(len(vect.stop_words_), 5)
def test_count_binary_occurrences():
# by default multiple occurrences are counted as longs
test_data = ['aaabc', 'abbde']
vect = CountVectorizer(analyzer='char', max_df=1.0)
X = vect.fit_transform(test_data).toarray()
assert_array_equal(['a', 'b', 'c', 'd', 'e'], vect.get_feature_names())
assert_array_equal([[3, 1, 1, 0, 0],
[1, 2, 0, 1, 1]], X)
# using boolean features, we can fetch the binary occurrence info
# instead.
vect = CountVectorizer(analyzer='char', max_df=1.0, binary=True)
X = vect.fit_transform(test_data).toarray()
assert_array_equal([[1, 1, 1, 0, 0],
[1, 1, 0, 1, 1]], X)
# check the ability to change the dtype
vect = CountVectorizer(analyzer='char', max_df=1.0,
binary=True, dtype=np.float32)
X_sparse = vect.fit_transform(test_data)
assert_equal(X_sparse.dtype, np.float32)
def test_hashed_binary_occurrences():
# by default multiple occurrences are counted as longs
test_data = ['aaabc', 'abbde']
vect = HashingVectorizer(analyzer='char', non_negative=True,
norm=None)
X = vect.transform(test_data)
assert_equal(np.max(X[0:1].data), 3)
assert_equal(np.max(X[1:2].data), 2)
assert_equal(X.dtype, np.float64)
# using boolean features, we can fetch the binary occurrence info
# instead.
vect = HashingVectorizer(analyzer='char', non_negative=True, binary=True,
norm=None)
X = vect.transform(test_data)
assert_equal(np.max(X.data), 1)
assert_equal(X.dtype, np.float64)
# check the ability to change the dtype
vect = HashingVectorizer(analyzer='char', non_negative=True, binary=True,
norm=None, dtype=np.float64)
X = vect.transform(test_data)
assert_equal(X.dtype, np.float64)
def test_vectorizer_inverse_transform():
# raw documents
data = ALL_FOOD_DOCS
for vectorizer in (TfidfVectorizer(), CountVectorizer()):
transformed_data = vectorizer.fit_transform(data)
inversed_data = vectorizer.inverse_transform(transformed_data)
analyze = vectorizer.build_analyzer()
for doc, inversed_terms in zip(data, inversed_data):
terms = np.sort(np.unique(analyze(doc)))
inversed_terms = np.sort(np.unique(inversed_terms))
assert_array_equal(terms, inversed_terms)
# Test that inverse_transform also works with numpy arrays
transformed_data = transformed_data.toarray()
inversed_data2 = vectorizer.inverse_transform(transformed_data)
for terms, terms2 in zip(inversed_data, inversed_data2):
assert_array_equal(np.sort(terms), np.sort(terms2))
def test_count_vectorizer_pipeline_grid_selection():
# raw documents
data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
# label junk food as -1, the others as +1
target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
# split the dataset for model development and final evaluation
train_data, test_data, target_train, target_test = train_test_split(
data, target, test_size=.2, random_state=0)
pipeline = Pipeline([('vect', CountVectorizer()),
('svc', LinearSVC())])
parameters = {
'vect__ngram_range': [(1, 1), (1, 2)],
'svc__loss': ('hinge', 'squared_hinge')
}
# find the best parameters for both the feature extraction and the
# classifier
grid_search = GridSearchCV(pipeline, parameters, n_jobs=1)
# Check that the best model found by grid search is 100% correct on the
# held out evaluation set.
pred = grid_search.fit(train_data, target_train).predict(test_data)
assert_array_equal(pred, target_test)
# on this toy dataset bigram representation which is used in the last of
# the grid_search is considered the best estimator since they all converge
# to 100% accuracy models
assert_equal(grid_search.best_score_, 1.0)
best_vectorizer = grid_search.best_estimator_.named_steps['vect']
assert_equal(best_vectorizer.ngram_range, (1, 1))
def test_vectorizer_pipeline_grid_selection():
# raw documents
data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
# label junk food as -1, the others as +1
target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
# split the dataset for model development and final evaluation
train_data, test_data, target_train, target_test = train_test_split(
data, target, test_size=.1, random_state=0)
pipeline = Pipeline([('vect', TfidfVectorizer()),
('svc', LinearSVC())])
parameters = {
'vect__ngram_range': [(1, 1), (1, 2)],
'vect__norm': ('l1', 'l2'),
'svc__loss': ('hinge', 'squared_hinge'),
}
# find the best parameters for both the feature extraction and the
# classifier
grid_search = GridSearchCV(pipeline, parameters, n_jobs=1)
# Check that the best model found by grid search is 100% correct on the
# held out evaluation set.
pred = grid_search.fit(train_data, target_train).predict(test_data)
assert_array_equal(pred, target_test)
# on this toy dataset bigram representation which is used in the last of
# the grid_search is considered the best estimator since they all converge
# to 100% accuracy models
assert_equal(grid_search.best_score_, 1.0)
best_vectorizer = grid_search.best_estimator_.named_steps['vect']
assert_equal(best_vectorizer.ngram_range, (1, 1))
assert_equal(best_vectorizer.norm, 'l2')
assert_false(best_vectorizer.fixed_vocabulary_)
def test_vectorizer_pipeline_cross_validation():
# raw documents
data = JUNK_FOOD_DOCS + NOTJUNK_FOOD_DOCS
# label junk food as -1, the others as +1
target = [-1] * len(JUNK_FOOD_DOCS) + [1] * len(NOTJUNK_FOOD_DOCS)
pipeline = Pipeline([('vect', TfidfVectorizer()),
('svc', LinearSVC())])
cv_scores = cross_val_score(pipeline, data, target, cv=3)
assert_array_equal(cv_scores, [1., 1., 1.])
def test_vectorizer_unicode():
# tests that the count vectorizer works with cyrillic.
document = (
"\xd0\x9c\xd0\xb0\xd1\x88\xd0\xb8\xd0\xbd\xd0\xbd\xd0\xbe\xd0"
"\xb5 \xd0\xbe\xd0\xb1\xd1\x83\xd1\x87\xd0\xb5\xd0\xbd\xd0\xb8\xd0"
"\xb5 \xe2\x80\x94 \xd0\xbe\xd0\xb1\xd1\x88\xd0\xb8\xd1\x80\xd0\xbd"
"\xd1\x8b\xd0\xb9 \xd0\xbf\xd0\xbe\xd0\xb4\xd1\x80\xd0\xb0\xd0\xb7"
"\xd0\xb4\xd0\xb5\xd0\xbb \xd0\xb8\xd1\x81\xd0\xba\xd1\x83\xd1\x81"
"\xd1\x81\xd1\x82\xd0\xb2\xd0\xb5\xd0\xbd\xd0\xbd\xd0\xbe\xd0\xb3"
"\xd0\xbe \xd0\xb8\xd0\xbd\xd1\x82\xd0\xb5\xd0\xbb\xd0\xbb\xd0"
"\xb5\xd0\xba\xd1\x82\xd0\xb0, \xd0\xb8\xd0\xb7\xd1\x83\xd1\x87"
"\xd0\xb0\xd1\x8e\xd1\x89\xd0\xb8\xd0\xb9 \xd0\xbc\xd0\xb5\xd1\x82"
"\xd0\xbe\xd0\xb4\xd1\x8b \xd0\xbf\xd0\xbe\xd1\x81\xd1\x82\xd1\x80"
"\xd0\xbe\xd0\xb5\xd0\xbd\xd0\xb8\xd1\x8f \xd0\xb0\xd0\xbb\xd0\xb3"
"\xd0\xbe\xd1\x80\xd0\xb8\xd1\x82\xd0\xbc\xd0\xbe\xd0\xb2, \xd1\x81"
"\xd0\xbf\xd0\xbe\xd1\x81\xd0\xbe\xd0\xb1\xd0\xbd\xd1\x8b\xd1\x85 "
"\xd0\xbe\xd0\xb1\xd1\x83\xd1\x87\xd0\xb0\xd1\x82\xd1\x8c\xd1\x81\xd1"
"\x8f.")
vect = CountVectorizer()
X_counted = vect.fit_transform([document])
assert_equal(X_counted.shape, (1, 15))
vect = HashingVectorizer(norm=None, non_negative=True)
X_hashed = vect.transform([document])
assert_equal(X_hashed.shape, (1, 2 ** 20))
# No collisions on such a small dataset
assert_equal(X_counted.nnz, X_hashed.nnz)
# When norm is None and non_negative, the tokens are counted up to
# collisions
assert_array_equal(np.sort(X_counted.data), np.sort(X_hashed.data))
def test_tfidf_vectorizer_with_fixed_vocabulary():
# non regression smoke test for inheritance issues
vocabulary = ['pizza', 'celeri']
vect = TfidfVectorizer(vocabulary=vocabulary)
X_1 = vect.fit_transform(ALL_FOOD_DOCS)
X_2 = vect.transform(ALL_FOOD_DOCS)
assert_array_almost_equal(X_1.toarray(), X_2.toarray())
assert_true(vect.fixed_vocabulary_)
def test_pickling_vectorizer():
instances = [
HashingVectorizer(),
HashingVectorizer(norm='l1'),
HashingVectorizer(binary=True),
HashingVectorizer(ngram_range=(1, 2)),
CountVectorizer(),
CountVectorizer(preprocessor=strip_tags),
CountVectorizer(analyzer=lazy_analyze),
CountVectorizer(preprocessor=strip_tags).fit(JUNK_FOOD_DOCS),
CountVectorizer(strip_accents=strip_eacute).fit(JUNK_FOOD_DOCS),
TfidfVectorizer(),
TfidfVectorizer(analyzer=lazy_analyze),
TfidfVectorizer().fit(JUNK_FOOD_DOCS),
]
for orig in instances:
s = pickle.dumps(orig)
copy = pickle.loads(s)
assert_equal(type(copy), orig.__class__)
assert_equal(copy.get_params(), orig.get_params())
assert_array_equal(
copy.fit_transform(JUNK_FOOD_DOCS).toarray(),
orig.fit_transform(JUNK_FOOD_DOCS).toarray())
def test_stop_words_removal():
# Ensure that deleting the stop_words_ attribute doesn't affect transform
fitted_vectorizers = (
TfidfVectorizer().fit(JUNK_FOOD_DOCS),
CountVectorizer(preprocessor=strip_tags).fit(JUNK_FOOD_DOCS),
CountVectorizer(strip_accents=strip_eacute).fit(JUNK_FOOD_DOCS)
)
for vect in fitted_vectorizers:
vect_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
vect.stop_words_ = None
stop_None_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
delattr(vect, 'stop_words_')
stop_del_transform = vect.transform(JUNK_FOOD_DOCS).toarray()
assert_array_equal(stop_None_transform, vect_transform)
assert_array_equal(stop_del_transform, vect_transform)
def test_pickling_transformer():
X = CountVectorizer().fit_transform(JUNK_FOOD_DOCS)
orig = TfidfTransformer().fit(X)
s = pickle.dumps(orig)
copy = pickle.loads(s)
assert_equal(type(copy), orig.__class__)
assert_array_equal(
copy.fit_transform(X).toarray(),
orig.fit_transform(X).toarray())
def test_non_unique_vocab():
vocab = ['a', 'b', 'c', 'a', 'a']
vect = CountVectorizer(vocabulary=vocab)
assert_raises(ValueError, vect.fit, [])
def test_hashingvectorizer_nan_in_docs():
# np.nan can appear when using pandas to load text fields from a csv file
# with missing values.
message = "np.nan is an invalid document, expected byte or unicode string."
exception = ValueError
def func():
hv = HashingVectorizer()
hv.fit_transform(['hello world', np.nan, 'hello hello'])
assert_raise_message(exception, message, func)
def test_tfidfvectorizer_binary():
# Non-regression test: TfidfVectorizer used to ignore its "binary" param.
v = TfidfVectorizer(binary=True, use_idf=False, norm=None)
assert_true(v.binary)
X = v.fit_transform(['hello world', 'hello hello']).toarray()
assert_array_equal(X.ravel(), [1, 1, 1, 0])
X2 = v.transform(['hello world', 'hello hello']).toarray()
assert_array_equal(X2.ravel(), [1, 1, 1, 0])
def test_tfidfvectorizer_export_idf():
vect = TfidfVectorizer(use_idf=True)
vect.fit(JUNK_FOOD_DOCS)
assert_array_almost_equal(vect.idf_, vect._tfidf.idf_)
def test_vectorizer_vocab_clone():
vect_vocab = TfidfVectorizer(vocabulary=["the"])
vect_vocab_clone = clone(vect_vocab)
vect_vocab.fit(ALL_FOOD_DOCS)
vect_vocab_clone.fit(ALL_FOOD_DOCS)
assert_equal(vect_vocab_clone.vocabulary_, vect_vocab.vocabulary_)
| bsd-3-clause |
nelango/ViralityAnalysis | model/lib/sklearn/utils/tests/test_validation.py | 79 | 18547 | """Tests for input validation functions"""
import warnings
from tempfile import NamedTemporaryFile
from itertools import product
import numpy as np
from numpy.testing import assert_array_equal
import scipy.sparse as sp
from nose.tools import assert_raises, assert_true, assert_false, assert_equal
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils.testing import assert_no_warnings
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn.utils import as_float_array, check_array, check_symmetric
from sklearn.utils import check_X_y
from sklearn.utils.mocking import MockDataFrame
from sklearn.utils.estimator_checks import NotAnArray
from sklearn.random_projection import sparse_random_matrix
from sklearn.linear_model import ARDRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.datasets import make_blobs
from sklearn.utils.validation import (
NotFittedError,
has_fit_parameter,
check_is_fitted,
check_consistent_length,
DataConversionWarning,
)
from sklearn.utils.testing import assert_raise_message
def test_as_float_array():
# Test function for as_float_array
X = np.ones((3, 10), dtype=np.int32)
X = X + np.arange(10, dtype=np.int32)
# Checks that the return type is ok
X2 = as_float_array(X, copy=False)
np.testing.assert_equal(X2.dtype, np.float32)
# Another test
X = X.astype(np.int64)
X2 = as_float_array(X, copy=True)
# Checking that the array wasn't overwritten
assert_true(as_float_array(X, False) is not X)
# Checking that the new type is ok
np.testing.assert_equal(X2.dtype, np.float64)
# Here, X is of the right type, it shouldn't be modified
X = np.ones((3, 2), dtype=np.float32)
assert_true(as_float_array(X, copy=False) is X)
# Test that if X is fortran ordered it stays
X = np.asfortranarray(X)
assert_true(np.isfortran(as_float_array(X, copy=True)))
# Test the copy parameter with some matrices
matrices = [
np.matrix(np.arange(5)),
sp.csc_matrix(np.arange(5)).toarray(),
sparse_random_matrix(10, 10, density=0.10).toarray()
]
for M in matrices:
N = as_float_array(M, copy=True)
N[0, 0] = np.nan
assert_false(np.isnan(M).any())
def test_np_matrix():
# Confirm that input validation code does not return np.matrix
X = np.arange(12).reshape(3, 4)
assert_false(isinstance(as_float_array(X), np.matrix))
assert_false(isinstance(as_float_array(np.matrix(X)), np.matrix))
assert_false(isinstance(as_float_array(sp.csc_matrix(X)), np.matrix))
def test_memmap():
# Confirm that input validation code doesn't copy memory mapped arrays
asflt = lambda x: as_float_array(x, copy=False)
with NamedTemporaryFile(prefix='sklearn-test') as tmp:
M = np.memmap(tmp, shape=(10, 10), dtype=np.float32)
M[:] = 0
for f in (check_array, np.asarray, asflt):
X = f(M)
X[:] = 1
assert_array_equal(X.ravel(), M.ravel())
X[:] = 0
def test_ordering():
# Check that ordering is enforced correctly by validation utilities.
# We need to check each validation utility, because a 'copy' without
# 'order=K' will kill the ordering.
X = np.ones((10, 5))
for A in X, X.T:
for copy in (True, False):
B = check_array(A, order='C', copy=copy)
assert_true(B.flags['C_CONTIGUOUS'])
B = check_array(A, order='F', copy=copy)
assert_true(B.flags['F_CONTIGUOUS'])
if copy:
assert_false(A is B)
X = sp.csr_matrix(X)
X.data = X.data[::-1]
assert_false(X.data.flags['C_CONTIGUOUS'])
@ignore_warnings
def test_check_array():
# accept_sparse == None
# raise error on sparse inputs
X = [[1, 2], [3, 4]]
X_csr = sp.csr_matrix(X)
assert_raises(TypeError, check_array, X_csr)
# ensure_2d
assert_warns(DeprecationWarning, check_array, [0, 1, 2])
X_array = check_array([0, 1, 2])
assert_equal(X_array.ndim, 2)
X_array = check_array([0, 1, 2], ensure_2d=False)
assert_equal(X_array.ndim, 1)
# don't allow ndim > 3
X_ndim = np.arange(8).reshape(2, 2, 2)
assert_raises(ValueError, check_array, X_ndim)
check_array(X_ndim, allow_nd=True) # doesn't raise
# force_all_finite
X_inf = np.arange(4).reshape(2, 2).astype(np.float)
X_inf[0, 0] = np.inf
assert_raises(ValueError, check_array, X_inf)
check_array(X_inf, force_all_finite=False) # no raise
# nan check
X_nan = np.arange(4).reshape(2, 2).astype(np.float)
X_nan[0, 0] = np.nan
assert_raises(ValueError, check_array, X_nan)
check_array(X_inf, force_all_finite=False) # no raise
# dtype and order enforcement.
X_C = np.arange(4).reshape(2, 2).copy("C")
X_F = X_C.copy("F")
X_int = X_C.astype(np.int)
X_float = X_C.astype(np.float)
Xs = [X_C, X_F, X_int, X_float]
dtypes = [np.int32, np.int, np.float, np.float32, None, np.bool, object]
orders = ['C', 'F', None]
copys = [True, False]
for X, dtype, order, copy in product(Xs, dtypes, orders, copys):
X_checked = check_array(X, dtype=dtype, order=order, copy=copy)
if dtype is not None:
assert_equal(X_checked.dtype, dtype)
else:
assert_equal(X_checked.dtype, X.dtype)
if order == 'C':
assert_true(X_checked.flags['C_CONTIGUOUS'])
assert_false(X_checked.flags['F_CONTIGUOUS'])
elif order == 'F':
assert_true(X_checked.flags['F_CONTIGUOUS'])
assert_false(X_checked.flags['C_CONTIGUOUS'])
if copy:
assert_false(X is X_checked)
else:
# doesn't copy if it was already good
if (X.dtype == X_checked.dtype and
X_checked.flags['C_CONTIGUOUS'] == X.flags['C_CONTIGUOUS']
and X_checked.flags['F_CONTIGUOUS'] == X.flags['F_CONTIGUOUS']):
assert_true(X is X_checked)
# allowed sparse != None
X_csc = sp.csc_matrix(X_C)
X_coo = X_csc.tocoo()
X_dok = X_csc.todok()
X_int = X_csc.astype(np.int)
X_float = X_csc.astype(np.float)
Xs = [X_csc, X_coo, X_dok, X_int, X_float]
accept_sparses = [['csr', 'coo'], ['coo', 'dok']]
for X, dtype, accept_sparse, copy in product(Xs, dtypes, accept_sparses,
copys):
with warnings.catch_warnings(record=True) as w:
X_checked = check_array(X, dtype=dtype,
accept_sparse=accept_sparse, copy=copy)
if (dtype is object or sp.isspmatrix_dok(X)) and len(w):
message = str(w[0].message)
messages = ["object dtype is not supported by sparse matrices",
"Can't check dok sparse matrix for nan or inf."]
assert_true(message in messages)
else:
assert_equal(len(w), 0)
if dtype is not None:
assert_equal(X_checked.dtype, dtype)
else:
assert_equal(X_checked.dtype, X.dtype)
if X.format in accept_sparse:
# no change if allowed
assert_equal(X.format, X_checked.format)
else:
# got converted
assert_equal(X_checked.format, accept_sparse[0])
if copy:
assert_false(X is X_checked)
else:
# doesn't copy if it was already good
if (X.dtype == X_checked.dtype and X.format == X_checked.format):
assert_true(X is X_checked)
# other input formats
# convert lists to arrays
X_dense = check_array([[1, 2], [3, 4]])
assert_true(isinstance(X_dense, np.ndarray))
# raise on too deep lists
assert_raises(ValueError, check_array, X_ndim.tolist())
check_array(X_ndim.tolist(), allow_nd=True) # doesn't raise
# convert weird stuff to arrays
X_no_array = NotAnArray(X_dense)
result = check_array(X_no_array)
assert_true(isinstance(result, np.ndarray))
def test_check_array_pandas_dtype_object_conversion():
# test that data-frame like objects with dtype object
# get converted
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.object)
X_df = MockDataFrame(X)
assert_equal(check_array(X_df).dtype.kind, "f")
assert_equal(check_array(X_df, ensure_2d=False).dtype.kind, "f")
# smoke-test against dataframes with column named "dtype"
X_df.dtype = "Hans"
assert_equal(check_array(X_df, ensure_2d=False).dtype.kind, "f")
def test_check_array_dtype_stability():
# test that lists with ints don't get converted to floats
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert_equal(check_array(X).dtype.kind, "i")
assert_equal(check_array(X, ensure_2d=False).dtype.kind, "i")
def test_check_array_dtype_warning():
X_int_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
X_float64 = np.asarray(X_int_list, dtype=np.float64)
X_float32 = np.asarray(X_int_list, dtype=np.float32)
X_int64 = np.asarray(X_int_list, dtype=np.int64)
X_csr_float64 = sp.csr_matrix(X_float64)
X_csr_float32 = sp.csr_matrix(X_float32)
X_csc_float32 = sp.csc_matrix(X_float32)
X_csc_int32 = sp.csc_matrix(X_int64, dtype=np.int32)
y = [0, 0, 1]
integer_data = [X_int64, X_csc_int32]
float64_data = [X_float64, X_csr_float64]
float32_data = [X_float32, X_csr_float32, X_csc_float32]
for X in integer_data:
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True)
assert_equal(X_checked.dtype, np.float64)
X_checked = assert_warns(DataConversionWarning, check_array, X,
dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert_equal(X_checked.dtype, np.float64)
# Check that the warning message includes the name of the Estimator
X_checked = assert_warns_message(DataConversionWarning,
'SomeEstimator',
check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True,
warn_on_dtype=True,
estimator='SomeEstimator')
assert_equal(X_checked.dtype, np.float64)
X_checked, y_checked = assert_warns_message(
DataConversionWarning, 'KNeighborsClassifier',
check_X_y, X, y, dtype=np.float64, accept_sparse=True,
warn_on_dtype=True, estimator=KNeighborsClassifier())
assert_equal(X_checked.dtype, np.float64)
for X in float64_data:
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert_equal(X_checked.dtype, np.float64)
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=False)
assert_equal(X_checked.dtype, np.float64)
for X in float32_data:
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True)
assert_equal(X_checked.dtype, np.float32)
assert_true(X_checked is X)
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=True)
assert_equal(X_checked.dtype, np.float32)
assert_false(X_checked is X)
X_checked = assert_no_warnings(check_array, X_csc_float32,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=False)
assert_equal(X_checked.dtype, np.float32)
assert_false(X_checked is X_csc_float32)
assert_equal(X_checked.format, 'csr')
def test_check_array_min_samples_and_features_messages():
# empty list is considered 2D by default:
msg = "0 feature(s) (shape=(1, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [[]])
# If considered a 1D collection when ensure_2d=False, then the minimum
# number of samples will break:
msg = "0 sample(s) (shape=(0,)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [], ensure_2d=False)
# Invalid edge case when checking the default minimum sample of a scalar
msg = "Singleton array array(42) cannot be considered a valid collection."
assert_raise_message(TypeError, msg, check_array, 42, ensure_2d=False)
# But this works if the input data is forced to look like a 2 array with
# one sample and one feature:
X_checked = assert_warns(DeprecationWarning, check_array, [42],
ensure_2d=True)
assert_array_equal(np.array([[42]]), X_checked)
# Simulate a model that would need at least 2 samples to be well defined
X = np.ones((1, 10))
y = np.ones(1)
msg = "1 sample(s) (shape=(1, 10)) while a minimum of 2 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2)
# The same message is raised if the data has 2 dimensions even if this is
# not mandatory
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2, ensure_2d=False)
# Simulate a model that would require at least 3 features (e.g. SelectKBest
# with k=3)
X = np.ones((10, 2))
y = np.ones(2)
msg = "2 feature(s) (shape=(10, 2)) while a minimum of 3 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3)
# Only the feature check is enabled whenever the number of dimensions is 2
# even if allow_nd is enabled:
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3, allow_nd=True)
# Simulate a case where a pipeline stage as trimmed all the features of a
# 2D dataset.
X = np.empty(0).reshape(10, 0)
y = np.ones(10)
msg = "0 feature(s) (shape=(10, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y)
# nd-data is not checked for any minimum number of features by default:
X = np.ones((10, 0, 28, 28))
y = np.ones(10)
X_checked, y_checked = check_X_y(X, y, allow_nd=True)
assert_array_equal(X, X_checked)
assert_array_equal(y, y_checked)
def test_has_fit_parameter():
assert_false(has_fit_parameter(KNeighborsClassifier, "sample_weight"))
assert_true(has_fit_parameter(RandomForestRegressor, "sample_weight"))
assert_true(has_fit_parameter(SVR, "sample_weight"))
assert_true(has_fit_parameter(SVR(), "sample_weight"))
def test_check_symmetric():
arr_sym = np.array([[0, 1], [1, 2]])
arr_bad = np.ones(2)
arr_asym = np.array([[0, 2], [0, 2]])
test_arrays = {'dense': arr_asym,
'dok': sp.dok_matrix(arr_asym),
'csr': sp.csr_matrix(arr_asym),
'csc': sp.csc_matrix(arr_asym),
'coo': sp.coo_matrix(arr_asym),
'lil': sp.lil_matrix(arr_asym),
'bsr': sp.bsr_matrix(arr_asym)}
# check error for bad inputs
assert_raises(ValueError, check_symmetric, arr_bad)
# check that asymmetric arrays are properly symmetrized
for arr_format, arr in test_arrays.items():
# Check for warnings and errors
assert_warns(UserWarning, check_symmetric, arr)
assert_raises(ValueError, check_symmetric, arr, raise_exception=True)
output = check_symmetric(arr, raise_warning=False)
if sp.issparse(output):
assert_equal(output.format, arr_format)
assert_array_equal(output.toarray(), arr_sym)
else:
assert_array_equal(output, arr_sym)
def test_check_is_fitted():
# Check is ValueError raised when non estimator instance passed
assert_raises(ValueError, check_is_fitted, ARDRegression, "coef_")
assert_raises(TypeError, check_is_fitted, "SVR", "support_")
ard = ARDRegression()
svr = SVR()
try:
assert_raises(NotFittedError, check_is_fitted, ard, "coef_")
assert_raises(NotFittedError, check_is_fitted, svr, "support_")
except ValueError:
assert False, "check_is_fitted failed with ValueError"
# NotFittedError is a subclass of both ValueError and AttributeError
try:
check_is_fitted(ard, "coef_", "Random message %(name)s, %(name)s")
except ValueError as e:
assert_equal(str(e), "Random message ARDRegression, ARDRegression")
try:
check_is_fitted(svr, "support_", "Another message %(name)s, %(name)s")
except AttributeError as e:
assert_equal(str(e), "Another message SVR, SVR")
ard.fit(*make_blobs())
svr.fit(*make_blobs())
assert_equal(None, check_is_fitted(ard, "coef_"))
assert_equal(None, check_is_fitted(svr, "support_"))
def test_check_consistent_length():
check_consistent_length([1], [2], [3], [4], [5])
check_consistent_length([[1, 2], [[1, 2]]], [1, 2], ['a', 'b'])
check_consistent_length([1], (2,), np.array([3]), sp.csr_matrix((1, 2)))
assert_raises_regexp(ValueError, 'inconsistent numbers of samples',
check_consistent_length, [1, 2], [1])
assert_raises_regexp(TypeError, 'got <\w+ \'int\'>',
check_consistent_length, [1, 2], 1)
assert_raises_regexp(TypeError, 'got <\w+ \'object\'>',
check_consistent_length, [1, 2], object())
assert_raises(TypeError, check_consistent_length, [1, 2], np.array(1))
# Despite ensembles having __len__ they must raise TypeError
assert_raises_regexp(TypeError, 'estimator', check_consistent_length,
[1, 2], RandomForestRegressor())
# XXX: We should have a test with a string, but what is correct behaviour?
| mit |
pgmpy/pgmpy | pgmpy/sampling/base.py | 2 | 16536 | from warnings import warn
import numpy as np
from pgmpy import HAS_PANDAS
from pgmpy.utils import _check_1d_array_object, _check_length_equal
if HAS_PANDAS:
import pandas
class BaseGradLogPDF(object):
"""
Base class for evaluating gradient log of probability density function/ distribution
Classes inheriting this base class can be passed as an argument for
finding gradient log of probability distribution in inference algorithms
The class should initialize self.grad_log and self.log_pdf
Parameters
----------
variable_assignments : A 1d array like object (numpy.ndarray or list)
Vector representing values(assignments) of variables at which we want to find gradient and log
model : An instance of pgmpy.models
Examples
--------
>>> from pgmpy.factors import GaussianDistribution
>>> from pgmpy.inference.continuous import BaseGradLogPDF
>>> import numpy as np
>>> class GradLogGaussian(BaseGradLogPDF):
... def __init__(self, position, model):
... BaseGradLogPDF.__init__(self, position, model)
... self.grad_log, self.log_pdf = self._get_gradient_log_pdf()
... def _get_gradient_log_pdf(self):
... sub_vec = self.position - self.model.mean.flatten()
... grad = - np.dot(self.model.precision_matrix, sub_vec)
... log_pdf = 0.5 * float(np.dot(sub_vec, grad))
... return grad, log_pdf
>>> mean = np.array([1, 1])
>>> covariance = np.array([[1, 0.2], [0.2, 7]])
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> dist_param = np.array([0.1, 0.9])
>>> grad_logp, logp = GradLogGaussian(dist_param, model).get_gradient_log_pdf()
>>> logp
-0.4054597701149426
>>> grad_logp
array([ 0.90229885, -0.01149425])
"""
def __init__(self, variable_assignments, model):
self.variable_assignments = _check_1d_array_object(
variable_assignments, "variable_assignments"
)
_check_length_equal(
variable_assignments,
model.variables,
"variable_assignments",
"model.variables",
)
self.model = model
# The gradient log of probability distribution at position
self.grad_log = None
        # The log of the probability distribution at position
self.log_pdf = None
def get_gradient_log_pdf(self):
"""
Returns the gradient log and log of model at given position
Returns
-------
Returns a tuple of following types
numpy.array: Representing gradient log of model at given position
float: Representing log of model at given position
Example
--------
>>> # Using implementation of GradLogPDFGaussian
>>> from pgmpy.sampling.base import GradLogPDFGaussian
>>> from pgmpy.factors.continuous import GaussianDistribution
>>> import numpy as np
>>> mean = np.array([1, 1])
>>> covariance = np.array([[1, -5], [-5, 2]])
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> dist_param = np.array([0.6, 0.8])
>>> grad_logp, logp = GradLogPDFGaussian(dist_param, model).get_gradient_log_pdf()
>>> logp
0.025217391304347823
>>> grad_logp
array([-0.07826087, -0.09565217])
"""
return self.grad_log, self.log_pdf
class GradLogPDFGaussian(BaseGradLogPDF):
"""
    Class for finding the gradient of the log and the log of a Joint Gaussian Distribution
Inherits pgmpy.inference.base_continuous.BaseGradLogPDF
Here model must be an instance of GaussianDistribution
Parameters
----------
variable_assignments : A 1d array like object (numpy.ndarray or list)
Vector representing values of variables at which we want to find gradient and log
model : An instance of pgmpy.models.GaussianDistribution
Example
-------
>>> from pgmpy.sampling import GradLogPDFGaussian
>>> from pgmpy.factors.continuous import GaussianDistribution
>>> import numpy as np
>>> mean = np.array([3, 4])
>>> covariance = np.array([[5, 4], [4, 5]])
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> dist_param = np.array([12, 21])
>>> grad_logp, logp = GradLogPDFGaussian(dist_param, model).get_gradient_log_pdf()
>>> logp
-34.777777777777771
>>> grad_logp
array([ 2.55555556, -5.44444444])
"""
def __init__(self, variable_assignments, model):
BaseGradLogPDF.__init__(self, variable_assignments, model)
self.grad_log, self.log_pdf = self._get_gradient_log_pdf()
def _get_gradient_log_pdf(self):
"""
        Method that finds the gradient of the log-pdf and the log-pdf at the given position
"""
sub_vec = self.variable_assignments - self.model.mean.flatten()
grad = -np.dot(self.model.precision_matrix, sub_vec)
log_pdf = 0.5 * np.dot(sub_vec, grad)
return grad, log_pdf
class BaseSimulateHamiltonianDynamics(object):
"""
Base class for proposing new values of position and momentum by simulating Hamiltonian Dynamics.
Classes inheriting this base class can be passed as an argument for
simulate_dynamics in inference algorithms.
Parameters
----------
model : An instance of pgmpy.models
Model for which DiscretizeTime object is initialized
position : A 1d array like object (numpy.ndarray or list)
Vector representing values of parameter position( or X)
momentum: A 1d array like object (numpy.ndarray or list)
Vector representing the proposed value for momentum (velocity)
stepsize: Float
stepsize for the simulating dynamics
grad_log_pdf : A subclass of pgmpy.inference.continuous.BaseGradLogPDF
A class for finding gradient log and log of distribution
grad_log_position: A 1d array like object, defaults to None
Vector representing gradient log at given position
If None, then will be calculated
Examples
--------
>>> from pgmpy.sampling import BaseSimulateHamiltonianDynamics
>>> from pgmpy.factors.continuous import GaussianDistribution
>>> from pgmpy.sampling import GradLogPDFGaussian
>>> import numpy as np
>>> # Class should initalize self.new_position, self.new_momentum and self.new_grad_logp
>>> # self.new_grad_logp represents gradient log at new proposed value of position
>>> class ModifiedEuler(BaseSimulateHamiltonianDynamics):
... def __init__(self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position=None):
... BaseSimulateHamiltonianDynamics.__init__(self, model, position, momentum,
... stepsize, grad_log_pdf, grad_log_position)
... self.new_position, self.new_momentum, self.new_grad_logp = self._get_proposed_values()
... def _get_proposed_values(self):
... momentum_bar = self.momentum + self.stepsize * self.grad_log_position
... position_bar = self.position + self.stepsize * momentum_bar
... grad_log_position, _ = self.grad_log_pdf(position_bar, self.model).get_gradient_log_pdf()
... return position_bar, momentum_bar, grad_log_position
>>> pos = np.array([1, 2])
>>> momentum = np.array([0, 0])
>>> mean = np.array([0, 0])
>>> covariance = np.eye(2)
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> new_pos, new_momentum, new_grad = ModifiedEuler(model, pos, momentum,
... 0.25, GradLogPDFGaussian).get_proposed_values()
>>> new_pos
array([0.9375, 1.875])
>>> new_momentum
array([-0.25, -0.5])
>>> new_grad
array([-0.9375, -1.875])
"""
def __init__(
self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position=None
):
position = _check_1d_array_object(position, "position")
momentum = _check_1d_array_object(momentum, "momentum")
if not issubclass(grad_log_pdf, BaseGradLogPDF):
raise TypeError(
"grad_log_pdf must be an instance"
+ " of pgmpy.inference.continuous.base.BaseGradLogPDF"
)
_check_length_equal(position, momentum, "position", "momentum")
_check_length_equal(position, model.variables, "position", "model.variables")
if grad_log_position is None:
grad_log_position, _ = grad_log_pdf(position, model).get_gradient_log_pdf()
else:
            grad_log_position = _check_1d_array_object(
grad_log_position, "grad_log_position"
)
_check_length_equal(
grad_log_position, position, "grad_log_position", "position"
)
self.position = position
self.momentum = momentum
self.stepsize = stepsize
self.model = model
self.grad_log_pdf = grad_log_pdf
self.grad_log_position = grad_log_position
# new_position is the new proposed position, new_momentum is the new proposed momentum, new_grad_lop
# is the value of grad log at new_position
self.new_position = self.new_momentum = self.new_grad_logp = None
def get_proposed_values(self):
"""
Returns the new proposed values of position and momentum
Returns
-------
Returns a tuple of following type (in order)
numpy.array: A 1d numpy.array representing new proposed value of position
numpy.array: A 1d numpy.array representing new proposed value of momentum
numpy.array: A 1d numpy.array representing gradient of log distribution at new proposed value of position
Example
-------
>>> # Using implementation of ModifiedEuler
>>> from pgmpy.inference.continuous import ModifiedEuler, GradLogPDFGaussian as GLPG
>>> from pgmpy.factors import GaussianDistribution
>>> import numpy as np
>>> pos = np.array([3, 4])
>>> momentum = np.array([1, 1])
>>> mean = np.array([-1, 1])
>>> covariance = 3*np.eye(2)
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> new_pos, new_momentum, new_grad = ModifiedEuler(model, pos, momentum, 0.70, GLPG).get_proposed_values()
>>> new_pos
array([ 3.04666667, 4.21 ])
>>> new_momentum
array([ 0.06666667, 0.3 ])
>>> new_grad
array([-1.34888889, -1.07 ])
"""
return self.new_position, self.new_momentum, self.new_grad_logp
class LeapFrog(BaseSimulateHamiltonianDynamics):
"""
Class for simulating hamiltonian dynamics using leapfrog method
Inherits pgmpy.inference.base_continuous.BaseSimulateHamiltonianDynamics
Parameters
----------
model : An instance of pgmpy.models
Model for which DiscretizeTime object is initialized
position : A 1d array like object (numpy.ndarray or list)
Vector representing values of parameter position( or X)
momentum: A 1d array like object (numpy.ndarray or list)
Vector representing the proposed value for momentum (velocity)
stepsize: Float
stepsize for the simulating dynamics
grad_log_pdf : A subclass of pgmpy.inference.continuous.BaseGradLogPDF, defaults to None
A class for evaluating gradient log and log of distribution for a given assignment of variables
If None, then model.get_gradient_log_pdf will be used
grad_log_position: A 1d array like object, defaults to None
Vector representing gradient log at given position
If None, then will be calculated
Example
--------
>>> from pgmpy.factors.continuous import GaussianDistribution
>>> from pgmpy.sampling import LeapFrog, GradLogPDFGaussian as GLPG
>>> import numpy as np
>>> pos = np.array([2, 1])
>>> momentum = np.array([7, 7])
>>> mean = np.array([-5, 5])
>>> covariance = np.array([[1, 2], [2, 1]])
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> new_pos, new_momentum, new_grad = LeapFrog(model, pos, momentum, 4.0, GLPG).get_proposed_values()
>>> new_pos
array([ 70., -19.])
>>> new_momentum
array([ 99., -121.])
>>> new_grad
array([ 41., -58.])
"""
def __init__(
self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position=None
):
BaseSimulateHamiltonianDynamics.__init__(
self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position
)
(
self.new_position,
self.new_momentum,
self.new_grad_logp,
) = self._get_proposed_values()
def _get_proposed_values(self):
"""
Method to perform time splitting using leapfrog
"""
# Take half step in time for updating momentum
momentum_bar = self.momentum + 0.5 * self.stepsize * self.grad_log_position
# Take full step in time for updating position position
position_bar = self.position + self.stepsize * momentum_bar
grad_log, _ = self.grad_log_pdf(position_bar, self.model).get_gradient_log_pdf()
# Take remaining half step in time for updating momentum
momentum_bar = momentum_bar + 0.5 * self.stepsize * grad_log
return position_bar, momentum_bar, grad_log
class ModifiedEuler(BaseSimulateHamiltonianDynamics):
"""
Class for simulating Hamiltonian Dynamics using Modified euler method
Inherits pgmpy.inference.base_continuous.BaseSimulateHamiltonianDynamics
Parameters
----------
model: An instance of pgmpy.models
Model for which DiscretizeTime object is initialized
position: A 1d array like object (numpy.ndarray or list)
Vector representing values of parameter position( or X)
momentum: A 1d array like object (numpy.ndarray or list)
Vector representing the proposed value for momentum (velocity)
stepsize: Float
stepsize for the simulating dynamics
grad_log_pdf: A subclass of pgmpy.inference.continuous.BaseGradLogPDF, defaults to None
A class for finding gradient log and log of distribution
If None, then will use model.get_gradient_log_pdf
grad_log_position: A 1d array like object, defaults to None
Vector representing gradient log at given position
If None, then will be calculated
Example
--------
>>> from pgmpy.factors.continuous import GaussianDistribution
>>> from pgmpy.sampling import GradLogPDFGaussian, ModifiedEuler
>>> import numpy as np
>>> pos = np.array([2, 1])
>>> momentum = np.array([1, 1])
>>> mean = np.array([0, 0])
>>> covariance = np.eye(2)
>>> model = GaussianDistribution(['x', 'y'], mean, covariance)
>>> new_pos, new_momentum, new_grad = ModifiedEuler(model, pos, momentum,
... 0.25, GradLogPDFGaussian).get_proposed_values()
>>> new_pos
array([2.125, 1.1875])
>>> new_momentum
array([0.5, 0.75])
>>> new_grad
array([-2.125, -1.1875])
"""
def __init__(
self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position=None
):
BaseSimulateHamiltonianDynamics.__init__(
self, model, position, momentum, stepsize, grad_log_pdf, grad_log_position
)
(
self.new_position,
self.new_momentum,
self.new_grad_logp,
) = self._get_proposed_values()
def _get_proposed_values(self):
"""
Method to perform time splitting using Modified euler method
"""
# Take full step in time and update momentum
momentum_bar = self.momentum + self.stepsize * self.grad_log_position
# Take full step in time and update position
position_bar = self.position + self.stepsize * momentum_bar
grad_log, _ = self.grad_log_pdf(position_bar, self.model).get_gradient_log_pdf()
return position_bar, momentum_bar, grad_log
def _return_samples(samples, state_names_map=None):
"""
A utility function to return samples according to type
"""
df = pandas.DataFrame.from_records(samples)
if state_names_map is not None:
for var in df.columns:
if var != "_weight":
df[var] = df[var].map(state_names_map[var])
return df
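# Minimal usage sketch (an addition, not part of the upstream module): compares a
# single LeapFrog and ModifiedEuler proposal for a 2-D Gaussian. It assumes
# pgmpy.factors.continuous.GaussianDistribution is importable, as in the doctests
# above; the numeric values are illustrative only.
if __name__ == "__main__":
    from pgmpy.factors.continuous import GaussianDistribution
    mean = np.array([0.0, 0.0])
    covariance = np.array([[1.0, 0.2], [0.2, 2.0]])
    model = GaussianDistribution(["x", "y"], mean, covariance)
    position = np.array([1.0, 2.0])
    momentum = np.array([0.5, -0.5])
    for simulator in (LeapFrog, ModifiedEuler):
        # Both simulators share the same constructor and accessor defined above
        new_pos, new_mom, new_grad = simulator(
            model, position, momentum, 0.1, GradLogPDFGaussian
        ).get_proposed_values()
        print(simulator.__name__, new_pos, new_mom)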
| mit |
qifeigit/scikit-learn | examples/calibration/plot_compare_calibration.py | 241 | 5008 | """
========================================
Comparison of Calibration of Classifiers
========================================
Well calibrated classifiers are probabilistic classifiers for which the output
of the predict_proba method can be directly interpreted as a confidence level.
For instance a well calibrated (binary) classifier should classify the samples
such that among the samples to which it gave a predict_proba value close to
0.8, approx. 80% actually belong to the positive class.
LogisticRegression returns well calibrated predictions as it directly
optimizes log-loss. In contrast, the other methods return biased probabilities,
with different biases per method:
* GaussianNaiveBayes tends to push probabilities to 0 or 1 (note the counts in
the histograms). This is mainly because it makes the assumption that features
are conditionally independent given the class, which is not the case in this
dataset which contains 2 redundant features.
* RandomForestClassifier shows the opposite behavior: the histograms show
peaks at approx. 0.2 and 0.9 probability, while probabilities close to 0 or 1
are very rare. An explanation for this is given by Niculescu-Mizil and Caruana
[1]: "Methods such as bagging and random forests that average predictions from
a base set of models can have difficulty making predictions near 0 and 1
because variance in the underlying base models will bias predictions that
should be near zero or one away from these values. Because predictions are
restricted to the interval [0,1], errors caused by variance tend to be one-
sided near zero and one. For example, if a model should predict p = 0 for a
case, the only way bagging can achieve this is if all bagged trees predict
zero. If we add noise to the trees that bagging is averaging over, this noise
will cause some trees to predict values larger than 0 for this case, thus
moving the average prediction of the bagged ensemble away from 0. We observe
this effect most strongly with random forests because the base-level trees
trained with random forests have relatively high variance due to feature
subsetting." As a result, the calibration curve shows a characteristic sigmoid
shape, indicating that the classifier could trust its "intuition" more and
typically return probabilities closer to 0 or 1.
* Support Vector Classification (SVC) shows an even more sigmoid curve than
the RandomForestClassifier, which is typical for maximum-margin methods
(compare Niculescu-Mizil and Caruana [1]), which focus on hard samples
that are close to the decision boundary (the support vectors).
.. topic:: References:
.. [1] Predicting Good Probabilities with Supervised Learning,
A. Niculescu-Mizil & R. Caruana, ICML 2005
"""
print(__doc__)
# Author: Jan Hendrik Metzen <[email protected]>
# License: BSD Style.
import numpy as np
np.random.seed(0)
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.calibration import calibration_curve
X, y = datasets.make_classification(n_samples=100000, n_features=20,
n_informative=2, n_redundant=2)
train_samples = 100 # Samples used for training the models
X_train = X[:train_samples]
X_test = X[train_samples:]
y_train = y[:train_samples]
y_test = y[train_samples:]
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
svc = LinearSVC(C=1.0)
rfc = RandomForestClassifier(n_estimators=100)
###############################################################################
# Plot calibration plots
plt.figure(figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(svc, 'Support Vector Classification'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
plt.show()
| bsd-3-clause |
huongttlan/statsmodels | statsmodels/datasets/macrodata/data.py | 25 | 3184 | """United States Macroeconomic data"""
__docformat__ = 'restructuredtext'
COPYRIGHT = """This is public domain."""
TITLE = __doc__
SOURCE = """
Compiled by Skipper Seabold. All data are from the Federal Reserve Bank of St.
Louis [1] except the unemployment rate which was taken from the National
Bureau of Labor Statistics [2]. ::
[1] Data Source: FRED, Federal Reserve Economic Data, Federal Reserve Bank of
St. Louis; http://research.stlouisfed.org/fred2/; accessed December 15,
2009.
[2] Data Source: Bureau of Labor Statistics, U.S. Department of Labor;
http://www.bls.gov/data/; accessed December 15, 2009.
"""
DESCRSHORT = """US Macroeconomic Data for 1959Q1 - 2009Q3"""
DESCRLONG = DESCRSHORT
NOTE = """::
Number of Observations - 203
Number of Variables - 14
Variable name definitions::
year - 1959q1 - 2009q3
quarter - 1-4
realgdp - Real gross domestic product (Bil. of chained 2005 US$,
seasonally adjusted annual rate)
realcons - Real personal consumption expenditures (Bil. of chained
2005 US$, seasonally adjusted annual rate)
realinv - Real gross private domestic investment (Bil. of chained
2005 US$, seasonally adjusted annual rate)
realgovt - Real federal consumption expenditures & gross investment
(Bil. of chained 2005 US$, seasonally adjusted annual rate)
realdpi - Real private disposable income (Bil. of chained 2005
US$, seasonally adjusted annual rate)
cpi - End of the quarter consumer price index for all urban
consumers: all items (1982-84 = 100, seasonally adjusted).
m1 - End of the quarter M1 nominal money stock (Seasonally
adjusted)
tbilrate - Quarterly monthly average of the monthly 3-month
treasury bill: secondary market rate
unemp - Seasonally adjusted unemployment rate (%)
pop - End of the quarter total population: all ages incl. armed
forces over seas
infl - Inflation rate (ln(cpi_{t}/cpi_{t-1}) * 400)
realint - Real interest rate (tbilrate - infl)
"""
from numpy import recfromtxt, column_stack, array
from pandas import DataFrame
from statsmodels.datasets.utils import Dataset
from os.path import dirname, abspath
def load():
"""
Load the US macro data and return a Dataset class.
Returns
-------
Dataset instance:
See DATASET_PROPOSAL.txt for more information.
Notes
-----
The macrodata Dataset instance does not contain endog and exog attributes.
"""
data = _get_data()
names = data.dtype.names
dataset = Dataset(data=data, names=names)
return dataset
def load_pandas():
dataset = load()
dataset.data = DataFrame(dataset.data)
return dataset
def _get_data():
filepath = dirname(abspath(__file__))
data = recfromtxt(open(filepath + '/macrodata.csv', 'rb'), delimiter=",",
names=True, dtype=float)
return data
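# Minimal usage sketch (an addition, not part of the upstream module): loads the
# quarterly macro data and prints its shape and variable names. It assumes pandas
# is installed (as load_pandas already requires) and that macrodata.csv ships next
# to this module, as _get_data expects.
if __name__ == "__main__":
    macro = load_pandas()
    print(macro.data.shape)
    print(macro.data.columns.tolist())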
| bsd-3-clause |
ppinard/matplotlib-colorbar | tests/test_colorbar.py | 1 | 10023 | #!/usr/bin/env python
""" """
# Standard library modules.
# Third party modules.
import matplotlib.pyplot as plt
import matplotlib.cbook as cbook
import matplotlib.colors
import numpy as np
import pytest
# Local modules.
from matplotlib_colorbar.colorbar import Colorbar
# Globals and constants variables.
@pytest.fixture
def figure():
fig = plt.figure()
yield fig
plt.close()
del fig
@pytest.fixture
def colorbar(figure):
ax = figure.add_subplot("111")
data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
mappable = ax.imshow(data)
colorbar = Colorbar(mappable)
ax.add_artist(colorbar)
return colorbar
def test_colorbar_draw(colorbar):
plt.draw()
def test_colorbar_draw_ticklocation_bottom(colorbar):
colorbar.set_orientation("horizontal")
colorbar.set_ticklocation("bottom")
plt.draw()
def test_colorbar_draw_ticklocation_top(colorbar):
colorbar.set_orientation("horizontal")
colorbar.set_ticklocation("top")
plt.draw()
def test_colorbar_draw_ticklocation_left(colorbar):
colorbar.set_orientation("vertical")
colorbar.set_ticklocation("left")
plt.draw()
def test_colorbar_draw_ticklocation_right(colorbar):
colorbar.set_orientation("vertical")
colorbar.set_ticklocation("right")
plt.draw()
def test_colorbar_label(colorbar):
assert colorbar.get_label() is None
assert colorbar.label is None
colorbar.set_label("Hello world")
assert colorbar.get_label() == "Hello world"
assert colorbar.label == "Hello world"
colorbar.label = "Hello world"
assert colorbar.get_label() == "Hello world"
assert colorbar.label == "Hello world"
plt.draw()
def test_colorbar_orientation(colorbar):
with pytest.raises(ValueError):
colorbar.set_orientation("blah")
def test_colorbar_orientation_vertical(colorbar):
assert colorbar.get_orientation() is None
assert colorbar.orientation is None
colorbar.set_orientation("vertical")
assert colorbar.get_orientation() == "vertical"
assert colorbar.orientation == "vertical"
colorbar.orientation = "vertical"
assert colorbar.get_orientation() == "vertical"
assert colorbar.orientation == "vertical"
plt.draw()
def test_colorbar_orientation_horizontal(colorbar):
assert colorbar.get_orientation() is None
assert colorbar.orientation is None
colorbar.set_orientation("horizontal")
assert colorbar.get_orientation() == "horizontal"
assert colorbar.orientation == "horizontal"
colorbar.orientation = "horizontal"
assert colorbar.get_orientation() == "horizontal"
assert colorbar.orientation == "horizontal"
plt.draw()
def test_colorbar_length_fraction(colorbar):
assert colorbar.get_length_fraction() is None
assert colorbar.length_fraction is None
colorbar.set_length_fraction(0.2)
assert colorbar.get_length_fraction() == pytest.approx(0.2, abs=1e-2)
assert colorbar.length_fraction == pytest.approx(0.2, abs=1e-2)
colorbar.length_fraction = 0.1
assert colorbar.get_length_fraction() == pytest.approx(0.1, abs=1e-2)
assert colorbar.length_fraction == pytest.approx(0.1, abs=1e-2)
with pytest.raises(ValueError):
colorbar.set_length_fraction(0.0)
with pytest.raises(ValueError):
colorbar.set_length_fraction(1.1)
def test_colorbar_width_fraction(colorbar):
assert colorbar.get_width_fraction() is None
assert colorbar.width_fraction is None
colorbar.set_width_fraction(0.2)
assert colorbar.get_width_fraction() == pytest.approx(0.2, abs=1e-2)
assert colorbar.width_fraction == pytest.approx(0.2, abs=1e-2)
colorbar.width_fraction = 0.1
assert colorbar.get_width_fraction() == pytest.approx(0.1, abs=1e-2)
assert colorbar.width_fraction == pytest.approx(0.1, abs=1e-2)
with pytest.raises(ValueError):
colorbar.set_width_fraction(0.0)
with pytest.raises(ValueError):
colorbar.set_width_fraction(1.1)
def test_colorbar_location(colorbar):
assert colorbar.get_location() is None
assert colorbar.location is None
colorbar.set_location("upper right")
assert colorbar.get_location() == 1
assert colorbar.location == 1
colorbar.location = "lower left"
assert colorbar.get_location() == 3
assert colorbar.location == 3
with pytest.raises(ValueError):
colorbar.set_location("blah")
def test_colorbar_pad(colorbar):
assert colorbar.get_pad() is None
assert colorbar.pad is None
colorbar.set_pad(4)
assert colorbar.get_pad() == pytest.approx(4.0, abs=1e-2)
assert colorbar.pad == pytest.approx(4.0, abs=1e-2)
colorbar.pad = 5
assert colorbar.get_pad() == pytest.approx(5.0, abs=1e-2)
assert colorbar.pad == pytest.approx(5.0, abs=1e-2)
def test_colorbar_border_pad(colorbar):
assert colorbar.get_border_pad() is None
assert colorbar.border_pad is None
colorbar.set_border_pad(4)
assert colorbar.get_border_pad() == pytest.approx(4.0, abs=1e-2)
assert colorbar.border_pad == pytest.approx(4.0, abs=1e-2)
colorbar.border_pad = 5
assert colorbar.get_border_pad() == pytest.approx(5.0, abs=1e-2)
assert colorbar.border_pad == pytest.approx(5.0, abs=1e-2)
def test_colorbar_box_alpha(colorbar):
assert colorbar.get_box_alpha() is None
assert colorbar.box_alpha is None
colorbar.set_box_alpha(0.1)
assert colorbar.get_box_alpha() == pytest.approx(0.1, abs=1e-2)
assert colorbar.box_alpha == pytest.approx(0.1, abs=1e-2)
colorbar.box_alpha = 0.2
assert colorbar.get_box_alpha() == pytest.approx(0.2, abs=1e-2)
assert colorbar.box_alpha == pytest.approx(0.2, abs=1e-2)
with pytest.raises(ValueError):
colorbar.set_box_alpha(-0.1)
with pytest.raises(ValueError):
colorbar.set_box_alpha(1.1)
def test_colorbar_sep(colorbar):
assert colorbar.get_sep() is None
assert colorbar.sep is None
colorbar.set_sep(4)
assert colorbar.get_sep() == 4
assert colorbar.sep == 4
colorbar.sep = 5
assert colorbar.get_sep() == 5
assert colorbar.sep == 5
def test_colorbar_frameon(colorbar):
assert colorbar.get_frameon() is None
assert colorbar.frameon is None
colorbar.set_frameon(True)
assert colorbar.get_frameon()
assert colorbar.frameon
colorbar.frameon = False
assert not colorbar.get_frameon()
assert not colorbar.frameon
def test_colorbar_ticks(colorbar):
assert colorbar.get_ticks() is None
assert colorbar.ticks is None
colorbar.set_ticks([0.0, 0.5, 1.0])
assert colorbar.get_ticks() == [0.0, 0.5, 1.0]
assert colorbar.ticks == [0.0, 0.5, 1.0]
colorbar.ticks = [0, 0.2, 0.4, 0.6, 0.8, 1]
assert colorbar.get_ticks() == [0, 0.2, 0.4, 0.6, 0.8, 1]
assert colorbar.ticks == [0, 0.2, 0.4, 0.6, 0.8, 1]
def test_colorbar_ticks_nominimum(colorbar):
colorbar.set_ticks([0.0, 2.0])
plt.draw()
def test_colorbar_ticklabels(colorbar):
assert colorbar.get_ticklabels() is None
assert colorbar.ticklabels is None
colorbar.set_ticklabels(["min", "max"])
assert colorbar.get_ticklabels() == ["min", "max"]
assert colorbar.ticklabels == ["min", "max"]
colorbar.ticklabels = ["small", "big"]
assert colorbar.get_ticklabels() == ["small", "big"]
assert colorbar.ticklabels == ["small", "big"]
colorbar.ticks = [0.0, 1.0]
with pytest.raises(ValueError):
colorbar.set_ticklabels(
["one label",]
)
def test_colorbar_ticklocation(colorbar):
assert colorbar.get_ticklocation() is None
assert colorbar.ticklocation is None
colorbar.set_orientation("horizontal")
colorbar.set_ticklocation("bottom")
assert colorbar.get_ticklocation() == "bottom"
assert colorbar.ticklocation == "bottom"
colorbar.set_ticklocation(None)
colorbar.set_orientation("horizontal")
colorbar.set_ticklocation("top")
assert colorbar.get_ticklocation() == "top"
assert colorbar.ticklocation == "top"
colorbar.set_ticklocation(None)
colorbar.set_orientation("vertical")
colorbar.set_ticklocation("left")
assert colorbar.get_ticklocation() == "left"
assert colorbar.ticklocation == "left"
colorbar.set_ticklocation(None)
colorbar.set_orientation("vertical")
colorbar.set_ticklocation("right")
assert colorbar.get_ticklocation() == "right"
assert colorbar.ticklocation == "right"
colorbar.set_ticklocation(None)
colorbar.set_orientation("horizontal")
with pytest.raises(ValueError):
colorbar.set_ticklocation("left")
with pytest.raises(ValueError):
colorbar.set_ticklocation("right")
colorbar.set_orientation("vertical")
with pytest.raises(ValueError):
colorbar.set_ticklocation("bottom")
with pytest.raises(ValueError):
colorbar.set_ticklocation("top")
def test_colorbar_set_visible(colorbar):
colorbar.set_visible(False)
plt.draw()
def test_colorbar_no_mappable(colorbar):
colorbar.set_mappable(False)
plt.draw()
@pytest.mark.mpl_image_compare
def test_colorbar_example1(colorbar):
with cbook.get_sample_data("grace_hopper.png") as fp:
data = np.array(plt.imread(fp))
fig = plt.figure()
ax = fig.add_subplot("111", aspect="equal")
mappable = ax.imshow(data[..., 0], cmap="viridis")
colorbar = Colorbar(mappable, location="lower left")
colorbar.set_ticks([0.0, 0.5, 1.0])
ax.add_artist(colorbar)
return fig
@pytest.mark.mpl_image_compare
def test_colorbar_example2():
with cbook.get_sample_data("grace_hopper.png") as fp:
data = np.array(plt.imread(fp))
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot("111", aspect="equal")
norm = matplotlib.colors.Normalize(vmin=-1.0, vmax=1.0)
mappable = ax.imshow(data[..., 0], cmap="viridis", norm=norm)
colorbar = Colorbar(mappable, location="lower left")
colorbar.set_ticks([-1.0, 0, 1.0])
ax.add_artist(colorbar)
return fig
| bsd-2-clause |
jaabell/Seismogram | Layers.py | 2 | 6511 | import numpy as np
from Wavelets import WaveletGenerator
def syntheticSeismogram(v, rho, d, wavtyp='RICKER', wavf=[100], usingT=False, maxDepth=500, plotIt=False):
"""
syntheticSeismogram generates and displays a synthetic seismogram for
a simple 1-D layered model.
Inputs:
v : velocity of each layer (m/s)
rho : density of each layer (kg/m^3)
d : depth to the top of each layer (m)
The last layer is assumed to be a half-space
wavtyp : type of Wavelet
The wavelet options are:
Ricker: takes one frequency
Gaussian: still in progress
Ormsby: takes 4 frequencies
Klauder: takes 2 frequencies
        usingT : if True, transmission losses (products of (1 - R**2) factors)
                 are included when building the reflectivity series
Lindsey Heagy
[email protected]
Created: November 30, 2013
Modified: January 16, 2013
v = np.array([350, 1000, 2000]) # Velocity of each layer (m/s)
rho = np.array([1700, 2000, 2500]) # Density of each layer (kg/m^3)
d = np.array([0, 100, 200]) # Position of top of each layer (m)
"""
# Ensure that these are float numpy arrays
v, rho, d , wavf = np.array(v, dtype=float), np.array(rho, dtype=float), np.array(d, dtype=float), np.array(wavf,dtype=float)
usingT = np.array(usingT, dtype=bool)
nlayer = len(v) # number of layers
# Check that the number of layers match
assert len(rho) == nlayer, 'Number of layer densities must match number of layer velocities'
assert len(d) == nlayer, 'Number of layer tops must match the number of layer velocities'
# compute necessary parameters
Z = rho*v # acoustic impedance
R = np.diff(Z)/(Z[:-1] + Z[1:]) # reflection coefficients
twttop = 2*np.diff(d)/v[:-1] # 2-way travel time within each layer
twttop = np.cumsum(twttop) # 2-way travel time from surface to top of each layer
# create model logs
resolution = 400 # How finely we discretize in depth
dpth = np.linspace(0,maxDepth,resolution) # create depth vector
nd = len(dpth)
# Initialize logs
rholog = np.zeros(nd) # density
vlog = np.zeros(nd) # velocity
zlog = np.zeros(nd) # acoustic impedance
rseries = np.zeros(nd) # reflectivity series
t = np.zeros(nd) # time
# Loop over layers to put information in logs
for i in range(nlayer):
di = (dpth >= d[i]) # current depth indicies
rholog[di] = rho[i] # density
vlog[di] = v[i] # velocity
zlog[di] = Z[i] # acoustic impedance
if i < nlayer-1:
di = np.logical_and(di, dpth < d[i+1])
ir = np.arange(resolution)[di][-1:][0]
if usingT:
if i == 0:
rseries[ir] = R[i]
else:
rseries[ir] = R[i]*np.prod(1-R[i-1]**2)
else:
rseries[ir] = R[i]
if i > 0:
t[di] = 2*(dpth[di] - d[i])/v[i] + twttop[i-1]
else:
t[di] = 2*dpth[di]/v[i]
# make wavelet
dtwav = np.abs(np.min(np.diff(t)))/10.0
twav = np.arange(-2.0/np.min(wavf), 2.0/np.min(wavf), dtwav)
# Get source wavelet
wav = WaveletGenerator(wavtyp,wavf,twav)
# create synthetic seismogram
tref = np.arange(0,np.max(t),dtwav) + np.min(twav) # time discretization for reflectivity series
    tr = t[np.abs(rseries) > 0]
    rvals = rseries[np.abs(rseries) > 0]  # reflectivity values (include transmission losses if usingT)
    rseriesconv = np.zeros(len(tref))
    for i in range(len(tr)):
        index = np.abs(tref - tr[i]).argmin()
        rseriesconv[index] = rvals[i]
seis = np.convolve(wav,rseriesconv)
tseis = np.min(twav)+dtwav*np.arange(len(seis))
index = np.logical_and(tseis >= 0, tseis <= np.max(t))
tseis = tseis[index]
seis = seis[index]
if plotIt:
import matplotlib.pyplot as plt
plt.figure(1)
# Plot Density
plt.subplot(151)
plt.plot(rholog,dpth,linewidth=2)
plt.title('Density')
# xlim([min(rholog) max(rholog)] + [-1 1]*0.1*[max(rholog)-min(rholog)])
# ylim([min(dpth),max(dpth)])
# set(gca,'Ydir','reverse')
plt.grid()
plt.subplot(152)
plt.plot(vlog,dpth,linewidth=2)
plt.title('Velocity')
# xlim([min(vlog) max(vlog)] + [-1 1]*0.1*[max(vlog)-min(vlog)])
# ylim([min(dpth),max(dpth)])
# set(gca,'Ydir','reverse')
plt.grid()
plt.subplot(153)
plt.plot(zlog,dpth,linewidth=2)
plt.title('Acoustic Impedance')
# xlim([min(zlog) max(zlog)] + [-1 1]*0.1*[max(zlog)-min(zlog)])
# ylim([min(dpth),max(dpth)])
# set(gca,'Ydir','reverse')
plt.grid()
plt.subplot(154)
plt.hlines(dpth,np.zeros(nd),rseries,linewidth=2) #,'marker','none'
plt.title('Reflectivity Series');
# set(gca,'cameraupvector',[-1, 0, 0]);
plt.grid()
# set(gca,'ydir','reverse');
plt.subplot(155)
plt.plot(t,dpth,linewidth=2);
plt.title('Depth-Time');
# plt.xlim([np.min(t), np.max(t)] + [-1, 1]*0.1*[np.max(t)-np.min(t)]);
# plt.ylim([np.min(dpth),np.max(dpth)]);
# set(gca,'Ydir','reverse');
plt.grid()
##
plt.figure(2)
# plt.subplot(141)
# plt.plot(dpth,t,linewidth=2);
# title('Time-Depth');
# ylim([min(t), max(t)] + [-1 1]*0.1*[max(t)-min(t)]);
# xlim([min(dpth),max(dpth)]);
# set(gca,'Ydir','reverse');
# plt.grid()
plt.subplot(132)
plt.hlines(tref,np.zeros(len(rseriesconv)),rseriesconv,linewidth=2) #,'marker','none'
plt.title('Reflectivity Series')
# set(gca,'cameraupvector',[-1, 0, 0])
plt.grid()
plt.subplot(131)
plt.plot(wav,twav,linewidth=2)
plt.title('Wavelet')
plt.grid()
# set(gca,'ydir','reverse')
plt.subplot(133)
plt.plot(seis,tseis,linewidth=2)
plt.grid()
# set(gca,'ydir','reverse')
plt.show()
return dpth, t, seis, tseis
if __name__ == '__main__':
d = [0, 50, 100] # Position of top of each layer (m)
v = [350, 1000, 2000] # Velocity of each layer (m/s)
rho = [1700, 2000, 2500] # Density of each layer (kg/m^3)
syntheticSeismogram(v, rho, d, maxDepth=250, plotIt=True)
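    # Additional sketch (not in the original script; wavf value is an assumed,
    # illustrative choice): the same layered model with transmission losses
    # included via usingT=True, using the default Ricker wavelet type.
    syntheticSeismogram(v, rho, d, wavtyp='RICKER', wavf=[50], usingT=True,
                        maxDepth=250, plotIt=True)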
| mit |
jaytlennon/Emergence | tools/DiversityTools/mete/mete.py | 10 | 40946 | """Module for fitting and testing Harte et al.'s maximum entropy models
Terminology and notation follows Harte (2011)
"""
from __future__ import division
from math import log, exp, isnan, floor, ceil, factorial
import os.path
import sys
import cPickle
import numpy as np
import matplotlib.pyplot as plt
#import mpmath as mp
from scipy.optimize import bisect, fsolve
from scipy.stats import logser, geom
from numpy.random import random_integers
from random import uniform
from numpy import array, e, empty
def trunc_logser_pmf(x, p, upper_bound):
"""Probability mass function for the upper truncated log-series
Parameters
----------
x : array_like
Values of `x` for which the pmf should be determined
p : float
Parameter for the log-series distribution
upper_bound : float
Upper bound of the distribution
Returns
-------
pmf : array
Probability mass function for each value of `x`
"""
if p < 1:
return logser.pmf(x, p) / logser.cdf(upper_bound, p)
else:
x = np.array(x)
ivals = np.arange(1, upper_bound + 1)
normalization = sum(p ** ivals / ivals)
pmf = (p ** x / x) / normalization
return pmf
def trunc_logser_cdf(x_max, p, upper_bound):
"""Cumulative probability function for the upper truncated log-series"""
if p < 1:
#If we can just renormalize the untruncated cdf do so for speed
return logser.cdf(x_max, p) / logser.cdf(upper_bound, p)
else:
x_list = range(1, int(x_max) + 1)
cdf = sum(trunc_logser_pmf(x_list, p, upper_bound))
return cdf
def trunc_logser_rvs(p, upper_bound, size):
"""Random variates of the upper truncated log-series
Currently this function only supports random variate generation for p < 1.
This will cover most circumstances, but it is possible to have p >= 1 for
the truncated version of the distribution.
"""
assert p < 1, 'trunc_logser_rvs currently only supports random number generation for p < 1'
size = int(size)
rvs = logser.rvs(p, size=size)
for i in range(0, size):
while(rvs[i] > upper_bound):
rvs[i] = logser.rvs(p, size=1)
return rvs
def get_beta(Svals, Nvals, version='precise', beta_dict={}):
"""Solve for Beta, the sum of the two Lagrange multipliers for R(n, epsilon)
Parameters
----------
Svals : int or array_like
The number of species
Nvals : int or array_like
The total number of individuals
version : {'precise', 'untruncated', 'approx'}, optional
Determine which solution to use to solve for Beta. The default is
'precise', which uses minimal approximations.
        'precise' uses minimal approximations and includes upper truncation of
the distribution at N_0 (eq. 7.27 from Harte et al. 2011)
'untruncated' uses minimal approximations, but assumes that the
distribution of n goes to infinity (eq. B.4 from Harte et al. 2008)
'approx' uses more approximations, but will run substantially faster,
especially for large N (equation 7.30 from Harte 2011)
beta_dict : dict, optional
A dictionary of beta values so that beta can be looked up rather than
solved numerically. This can substantially speed up execution.
Both Svals and Nvals can be vectors to allow calculation of multiple values
of Beta simultaneously. The vectors must be the same length.
Returns
-------
betas : list
beta values for each pair of Svals and Nvals
"""
#Allow both single values and iterables for S and N by converting single values to iterables
if not hasattr(Svals, '__iter__'):
Svals = array([Svals])
else:
Svals = array(Svals)
if not hasattr(Nvals, '__iter__'):
Nvals = array([Nvals])
else:
Nvals = array(Nvals)
assert len(Svals) == len(Nvals), "S and N must have the same length"
assert all(Svals > 1), "S must be greater than 1"
assert all(Nvals > 0), "N must be greater than 0"
assert all(Svals/Nvals < 1), "N must be greater than S"
assert version in ('precise', 'untruncated', 'approx'), "Unknown version provided"
betas = []
for i, S in enumerate(Svals):
N = Nvals[i]
# Set the distance from the undefined boundaries of the Lagrangian multipliers
# to set the upper and lower boundaries for the numerical root finders
BOUNDS = [0, 1]
DIST_FROM_BOUND = 10 ** -15
#If not, solve for beta using the substitution x = e**-beta
if (S, N) in beta_dict:
betas.append(beta_dict[(S, N)])
elif version == 'precise':
n = array(range(1, int(N)+1))
y = lambda x: sum(x ** n / N * S) - sum((x ** n) / n)
exp_neg_beta = bisect(y, BOUNDS[0] + DIST_FROM_BOUND,
min((sys.float_info[0] / S) ** (1 / N), 2),
xtol = 1.490116e-08)
betas.append(-1 * log(exp_neg_beta))
elif version == 'untruncated':
y = lambda x: 1 / log(1 / (1 - x)) * x / (1 - x) - N / S
exp_neg_beta = bisect(y, BOUNDS[0] + DIST_FROM_BOUND,
BOUNDS[1] - DIST_FROM_BOUND)
betas.append(-1 * log(exp_neg_beta))
elif version == 'approx':
y = lambda x: x * log(1 / x) - S / N
betas.append(fsolve(y, 0.0001))
#Store the value in the dictionary to avoid repeating expensive
#numerical routines for the same values of S and N. This is
#particularly important for determining pdfs through mete_distributions.
beta_dict[(S, N)] = betas[-1]
#If only a single pair of S and N values was passed, return a float
if len(betas) == 1:
betas = betas[0]
return betas
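# Example calls (sketch; the S and N values below are illustrative, not from the
# original module):
#     beta = get_beta(20, 1000)                         # precise, upper-truncated form
#     beta_approx = get_beta(20, 1000, version='approx')
#     betas = get_beta([10, 20], [500, 1000])           # paired S and N vectors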
def get_lambda2(S, N, E):
"""Return lambda_2, the second Lagrangian multiplier for R(n, epsilon)
lambda_2 is calculated using equation 7.26 from Harte 2011.
"""
return S / (E - N)
def get_lambda1(S, N, E, version='precise', beta_dict={}):
"""Return lambda_1, the first Lagrangian multiplier for R(n, epsilon)
lamba_1 is calculated using equation 7.26 from Harte 2011 and get_beta().
"""
beta = get_beta(S, N, version, beta_dict)
return beta - get_lambda2(S, N, E)
def get_dict(filename):
"""Check if lookup dictionary for lamba exists. If not, create an empty one.
Arguments:
filename = is the name of the dictionary file to read from, (e.g., 'beta_lookup_table.pck')
"""
if os.path.exists(filename):
dict_file = open(filename, 'r')
dict_in = cPickle.load(dict_file)
dict_file.close()
print("Successfully loaded lookup table with %s lines" % len(dict_in))
else:
dict_file = open(filename, 'w')
dict_in = {}
cPickle.dump(dict_in, dict_file)
dict_file.close()
print("No lookup table found. Creating an empty table...")
return dict_in
def save_dict(dictionary, filename):
"""Save the current beta lookup table to a file
Arguments:
dictionary = the dictionary object to output
filename = the name of the dictionary file to write to (e.g., 'beta_lookup_table.pck')
"""
dic_output = open(filename, 'w')
cPickle.dump(dictionary, dic_output)
dic_output.close()
def build_beta_dict(S_start, S_end, N_max, N_min=1, filename='beta_lookup_table.pck'):
"""Add values to the lookup table for beta
Starting at S_start and finishing at S_end this function will take values
of N from S + 1 to N_max, determine if a value of beta is already in the
lookup table for the current value of values of S and N, and if not then
calculate the value of beta and add it to the dictionary.
Values are stored for S and N rather than for N/S because the precise form
of the solution (eq. 7.27 in Harte 2011) depends on N as well as N/S due
to the upper trunctation of the distribution at N.
"""
beta_dictionary = get_dict(filename)
for S in range(S_start, S_end + 1):
N_start = max(S + 1, N_min)
for N in range(N_start, N_max):
if (S, N) not in beta_dictionary:
beta_dictionary[(S, N)] = get_beta(S, N)
save_dict(beta_dictionary, filename)
def get_mete_pmf(S0, N0, beta = None):
"""Get the truncated log-series PMF predicted by METE"""
if beta == None:
beta = get_beta(S0, N0)
p = exp(-beta)
truncated_pmf = trunc_logser_pmf(range(1, int(N0) + 1), p, N0)
return truncated_pmf
def get_mete_sad(S0, N0, beta=None, bin_edges=None):
"""Get the expected number of species with each abundance
If no value is provided for beta it will be solved for using S0 & N0
If bin_edges is not provided then the values returned are the estimated
number of species for each integer value from 1 to N0
If bin_edges is provided it should be an array of bin edges including the
bottom and top edges. The last value in bin_edge should be > N0
"""
pmf = get_mete_pmf(S0, N0, beta)
    if bin_edges is not None:
N = array(range(1, int(N0) + 1))
binned_pmf = []
for edge in range(0, len(bin_edges) - 1):
bin_probability = sum(pmf[(N >= bin_edges[edge]) &
(N < bin_edges[edge + 1])])
binned_pmf.append(bin_probability)
pmf = array(binned_pmf)
predicted_sad = S0 * pmf
return predicted_sad
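# Example (sketch; illustrative values): expected number of species per abundance
# class for S0 = 20 and N0 = 1000, or aggregated into octave bins (last edge > N0):
#     sad = get_mete_sad(20, 1000)
#     sad_binned = get_mete_sad(20, 1000, bin_edges=[1, 2, 4, 8, 16, 32, 64, 128,
#                                                    256, 512, 1024, 2048])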
def get_lambda_spatialdistrib(A, A0, n0):
"""Solve for lambda_Pi from Harte 2011 equ. 7.50 and 7.51
Arguments:
A = the spatial scale of interest
A0 = the maximum spatial scale under consideration
n0 = the number of individuals of the focal species at scale A0
"""
assert type(n0) is int, "n must be an integer"
assert A > 0 and A0 > 0, "A and A0 must be greater than 0"
assert A <= A0, "A must be less than or equal to A0"
y = lambda x: x / (1 - x) - (n0 + 1) * x ** (n0 + 1) / (1 - x ** (n0 + 1)) - n0 * A / A0
if A < A0 / 2:
# Set the distance from the undefined boundaries of the Lagrangian multipliers
# to set the upper and lower boundaries for the numerical root finders
BOUNDS = [0, 1]
DIST_FROM_BOUND = 10 ** -15
exp_neg_lambda = bisect(y, BOUNDS[0] + DIST_FROM_BOUND,
BOUNDS[1] - DIST_FROM_BOUND)
elif A == A0 / 2:
#Special case from Harte (2011). See text between Eq. 7.50 and 7.51
exp_neg_lambda = 1
else:
# x can potentially go up to infinity
# thus use solution of a logistic equation as the starting point
exp_neg_lambda = (fsolve(y, - log(A0 / A - 1)))[0]
lambda_spatialdistrib = -1 * log(exp_neg_lambda)
return lambda_spatialdistrib
def get_spatialdistrib_dict(A, A0, n0, lambda_list=[0, {}]):
"""Solve for lambda_Pi from Harte 2011 equ. 7.50 and 7.51
Arguments:
A = the spatial scale of interest
A0 = the maximum spatial scale under consideration
n0 = the number of individuals of the focal species at scale A0
"""
if (A, A0, n0) not in lambda_list[1]:
lambda_list[1][(A, A0, n0)] = get_lambda_spatialdistrib(A, A0, n0)
lambda_list[0] = lambda_list[1][(A, A0, n0)]
return lambda_list
def get_mete_Pi(n, A, n0, A0):
"""
Solve for the METE Pi distribution from Harte 2011 equ 7.48 and 7.49.
Arguments:
n = number of individuals of the focal species at the scale A;
A = the spatial scale of interest;
n0 = the number of individuals of the focal species at scale A0;
A0 = the maximum spatial scale under consideration;
Returns:
The probability of observing n out of n0 individuals in a randomly selected quadrat
of area A out of A0
"""
x = exp(-get_lambda_spatialdistrib(A, A0, n0))
Z_Pi = sum([x ** i for i in range(0, n0 + 1)])
mete_Pi = x ** n / Z_Pi
return mete_Pi
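# Example (sketch; illustrative values): probability that a species with n0 = 10
# individuals at scale A0 places exactly n = 2 of them in a quadrat of one quarter
# of the area:
#     p_2 = get_mete_Pi(2, 0.25, 10, 1.0)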
def calc_S_from_Pi_fixed_abu(A, A0, n0vals):
"""
    Downscales the expected number of species using the non-iterative approach of
    Harte 2011 when the abundances are fixed, equ 3.12
    Arguments:
    A = the spatial scale of interest;
    A0 = the maximum spatial scale under consideration;
    n0vals = a list of species abundances
    Returns:
    The expected number of species inferred from the anchor scale
    """
    # A species with abundance n0 occurs in a quadrat of area A with probability
    # 1 - Pi(0 | A, n0, A0); summing over species gives the expected richness
    # (the same expression used in sar_noniterative_fixed_abu below)
    S = sum([1 - get_mete_Pi(0, A, n0, A0) for n0 in n0vals])
    return S
def sar_noniterative(Avals, A0, S0, N0, version='approx'):
""" Computes the downscaled METE noninterative species-area relationship (SAR)
    Harte 2011 equ C.1
Arguments:
Avals: the spatial scales of interest, must be greater than zero and less than A0;
A0 = the maximum spatial scale under consideration;
S0 = the total number of species in A0;
N0 = the total number of individuals in A0;
version = 'approx' or 'precise', which specifies if an approximation is used for the pdf
of the SAD or if the precise truncated log-series form is used instead.
Returns:
A numpy array the first row contains the Avals, and the second row contains the expected
S values
"""
A_ok = [i > 0 and i < A0 for i in Avals]
if False in A_ok:
print "Warning: will only compute S for Areas that are greater than zero and less than A0"
        Avals = [Avals[i] for i in np.where(A_ok)[0]]
beta = get_beta(S0, N0)
if beta < 0 and version == 'approx':
print("ERROR! Cannot compute log of a negative beta value, change version to precise")
## compute relative abundance distribution
if version == 'approx':
rad = [exp(-beta * n) / (n * log(beta ** -1)) for n in range(1, N0 + 1)]
if version == 'precise':
rad = [trunc_logser_pmf(n, exp(-beta), N0) for n in range(1, N0 + 1)]
Svals = [S0 * sum([(1 - get_mete_Pi(0, A, n, A0)) * rad[n - 1] for n in range(1, N0 + 1)]) for A in Avals]
Svals.append(S0)
Avals.append(A0)
out = list()
out.append(Avals)
out.append(Svals)
return out
def sar_noniterative_fixed_abu(Avals, A0, n0vals):
"""Predictions for the downscaled METE SAR using Eq. 3.12 from Harte 2011 when the
abundances (n0) are fixed"""
A_ok = [i > 0 and i < A0 for i in Avals]
if False in A_ok:
print "Warning: will only compute S for Areas that are greater than zero and less than A0"
        Avals = [Avals[i] for i in np.where(A_ok)[0]]
Svals = [sum([1 - get_mete_Pi(0, A, n0, A0) for n0 in n0vals]) for A in Avals]
Svals.append(len(n0vals))
Avals.append(A0)
out = list()
out.append(Avals)
out.append(Svals)
return out
def get_mete_rad(S, N, beta=None, beta_dict={}):
"""Use beta to generate SAD predicted by the METE
Keyword arguments:
S -- the number of species
N -- the total number of individuals
beta -- allows input of beta by user if it has already been calculated
"""
assert S > 1, "S must be greater than 1"
assert N > 0, "N must be greater than 0"
assert S/N < 1, "N must be greater than S"
if beta is None:
beta = get_beta(S, N, beta_dict=beta_dict)
p = e ** -beta
abundance = list(empty([S]))
rank = range(1, int(S)+1)
rank.reverse()
if p >= 1:
for i in range(0, int(S)):
y = lambda x: trunc_logser_cdf(x, p, N) - (rank[i]-0.5) / S
if y(1) > 0:
abundance[i] = 1
else:
abundance[i] = int(round(bisect(y,1,N)))
else:
for i in range(0, int(S)):
y = lambda x: logser.cdf(x,p) / logser.cdf(N,p) - (rank[i]-0.5) / S
abundance[i] = int(round(bisect(y, 0, N)))
return (abundance, p)
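# Example sketch (hypothetical values): predict METE's rank-abundance distribution for a
# community with S = 10 species and N = 100 individuals; the call returns the list of S
# predicted abundances together with the log-series parameter p = exp(-beta).
# >>> abundances, p = get_mete_rad(S=10, N=100)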
def get_mete_sad_geom(S, N):
"""METE's predicted RAD when the only constraint is N/S
Keyword arguments:
S -- the number of species
N -- the total number of individuals
"""
assert S > 1, "S must be greater than 1"
assert N > 0, "N must be greater than 0"
assert S/N < 1, "N must be greater than S"
p = S / N
abundance = list(empty([S]))
rank = range(1, int(S)+1)
rank.reverse()
for i in range(0, int(S)):
y = lambda x: geom.cdf(x,p) / geom.cdf(N,p) - (rank[i]-0.5) / S
abundance[i] = int(round(bisect(y, 0, N)))
return (abundance, p)
def downscale_sar(A, S, N, Amin):
"""Predictions for downscaled SAR using Eq. 7 from Harte et al. 2009"""
beta = get_beta(S, N)
x = exp(-beta)
S = S / x - N * (1 - x) / (x - x ** (N + 1)) * (1 - x ** N / (N + 1))
A /= 2
N /= 2
if A <= Amin:
return ([A], [S])
else:
down_scaled_data = downscale_sar(A, S, N, Amin)
return (down_scaled_data[0] + [A], down_scaled_data[1] + [S])
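# Example sketch (assumed values): repeatedly halve the anchor area A = 1 (S = 50,
# N = 1000) down to Amin = 0.25; the two returned lists hold the areas and the matching
# downscaled richness estimates, ordered from the smallest area upward.
# >>> areas, S_down = downscale_sar(A=1, S=50, N=1000, Amin=0.25)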
def downscale_sar_fixed_abu(A0, n0vals, Amin):
"""Predictions for downscaled SAR when abundance is fixed using the iterative approach
by combining Eq. 7.51 and Eq. 3.12 from Harte 2011"""
flag = 0
A = A0
Avals = []
while flag == 0:
A /= 2
if (A >= Amin):
Avals.append(A)
else:
flag = 1
Avals.reverse()
Svals = [sum([1 - heap_prob(0, A, n0, A0) for n0 in n0vals]) for A in Avals]
S0 = len(n0vals)
Svals.append(S0)
Avals.append(A0)
out = []
out.append(Avals)
out.append(Svals)
return out
def upscale_sar(A, S, N, Amax):
"""Predictions for upscaled SAR using Eqs. 8 and 9 from Harte et al. 2009"""
def equations_for_S_2A(x, S_A, N_A):
"""Implicit equations for S(2A) given S(A) and N(A)"""
# TO DO: make this clearer by separating equations and then putting them
# in a list for output
out = [x[1] / x[0] - 2 * N_A *
(1 - x[0]) / (x[0] - x[0] ** (2 * N_A + 1)) *
(1 - x[0] ** (2 * N_A) / (2 * N_A + 1)) - S_A]
n = array(range(1, int(2 * N_A + 1)))
out.append(x[1] / 2 / N_A * sum(x[0] ** n) - sum(x[0] ** n / n))
return out
def solve_for_S_2A(S, N):
beta_A = get_beta(S, N)
# The slope at (N, S) along universal curve
# From Eqn.11 in Harte et al. 2009
# Initial guess of S_2A is calculated using extrapolation with z_A
if beta_A < 0: # to avoid taking the log of a negative number
z_A = 0
else:
z_A = 1 / log(2) / log(1 / beta_A)
if z_A < 0 or z_A > 1: # to avoid initial guesses that are obviously incorrect
S_2A_init = S * 1.5
else:
S_2A_init = S * 2 ** z_A
x_A = exp(-get_beta(S_2A_init, 2 * N))
x0 = fsolve(equations_for_S_2A, [x_A, S_2A_init], args=(S, N), full_output = 1)
S_2A, convergence = x0[0][1], x0[2]
if convergence != 1:
return float('nan')
else:
return S_2A
S = solve_for_S_2A(S, N)
A *= 2
N *= 2
if A >= Amax:
return ([A], [S])
elif isnan(S):
return ([A], S)
else:
up_scaled_data = upscale_sar(A, S, N, Amax)
return ([A] + up_scaled_data[0], [S] + up_scaled_data[1])
def sar(A0, S0, N0, Amin, Amax):
"""Harte et al. 2009 predictions for the species area relationship
Takes a minimum and a maximum area along with the area, richness, and
abundance at some anchor scale and determines the richness at all bisected
and/or doubled scales so as to include Amin and Amax.
"""
# This is where we will deal with adding the anchor scale to the results
def predicted_slope(S, N):
"""Calculates slope of the predicted line for a given S and N
by combining upscaling one level and downscaling one level from
the focal scale
"""
ans_lower = downscale_sar(2, S, N, 1)
if isnan(ans_lower[1][0]) == False:
S_lower = array(ans_lower[1][0])
ans_upper = upscale_sar(2, S, N, 4)
if isnan(ans_upper[1][0]) == True:
print("Error in upscaling. z cannot be computed.")
return float('nan')
else:
S_upper = array(ans_upper[1][0])
return (log(S_upper / S_lower) / log(4))
else:
print("Error in downscaling. Cannot find root.")
return float('nan')
def get_slopes(site_data):
"""get slopes from various scales, output list of area slope and N/S
input data is a list of lists, each list contains [area, mean S, mean N]
"""
# return a list containing 4 values: area, observed slope, predicted slope, and N/S
# ToDo: figure out why values of S as low as 1 still present problems for
# this approach, this appears to happen when S << N
data = array(site_data)
Zvalues = []
area = data[:, 0]
S_values = data[:, 1]
for a in area:
if a * 4 <= max(area): #stop at last area
S_down = float(S_values[area == a])
S_focal = float(S_values[area == a * 2 ])
S_up = float(S_values[area == a * 4])
if S_focal >= 2: #don't calculate if S < 2
N_focal = float(data[area == a * 2, 2])
z_pred = predicted_slope(S_focal, N_focal)
z_emp = (log(S_up) - log(S_down)) / log(4)
NS = N_focal / S_focal
parameters = [a * 2, z_emp, z_pred, NS]
Zvalues.append(parameters)
else:
continue
else:
break
return Zvalues
def plot_universal_curve(slopes_data):
"""plots ln(N/S) x slope for empirical data and MaxEnt predictions.
Predictions should look like Harte's universal curve
input data is a list of lists. Each list contains:
[area, empirical slope, predicted slope, N/S]
"""
#TO DO: Add argument for axes
slopes = array(slopes_data)
NS = slopes[:, 3]
z_pred = slopes[:, 2]
z_obs = slopes[:, 1]
#plot Harte's universal curve from predictions with empirical data to analyze fit
plt.semilogx(NS, z_pred, 'bo')
plt.xlabel("ln(N/S)")
plt.ylabel("Slope")
plt.hold(True)  # deprecated/removed in newer matplotlib, where holding is the default
plt.semilogx(NS, z_obs, 'ro')
plt.show()
def heap_prob(n, A, n0, A0, pdict={}):
"""
Determines the HEAP probability for n given A, no, and A0
Uses equation 4.15 in Harte 2011
Returns the probability that n individuals are observed in a quadrat of area A
"""
i = int(log(A0 / A,2))
key = (n, n0, i)
if key not in pdict:
if i == 1:
pdict[key] = 1 / (n0 + 1)
else:
A *= 2
pdict[key] = sum([heap_prob(q, A, n0, A0, pdict)/ (q + 1) for q in range(n, n0 + 1)])
return pdict[key]
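# Worked special case (follows from the recursion above): after a single bisection
# (A0 / A == 2, so i == 1) the HEAP probability is uniform, i.e.
# heap_prob(n, A, n0, A0) == 1 / (n0 + 1) for every n in 0..n0; for example
# heap_prob(3, 1, 10, 2) gives 1/11, assuming the module uses true division.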
def bisect_prob(n, A, n0, A0, psi, pdict={}):
"""
Univaritate pdf of the Bisection model
Theorem 2.3 in Conlisk et al. (2007)
psi is an aggregation parameter {0,1}
Note: when psi = 0.5 that the Bisection Model = HEAP Model
"""
total = 0
i = int(log(A0 / A, 2))
key = (n, n0, i, psi)
if key not in pdict:
if(i == 1):
pdict[key] = single_prob(n, n0, psi)
else:
A *= 2
pdict[key] = sum([bisect_prob(q, A, n0, A0, psi, pdict) * single_prob(n, q, psi) for q in range(n, n0 + 1)])
return pdict[key]
def heap_pmf(A, n0, A0):
"""Determines the probability mass function for HEAP
Uses equation 4.15 in Harte 2011
"""
pmf = [heap_prob(n, A, n0, A0) for n in range(0, n0 + 1)]
return pmf
def binomial(n, k):
return factorial(n) / (factorial(k) * factorial(n - k))
def get_big_binomial(n, k, fdict):
"""returns the natural log of the binomial coefficent n choose k"""
if n > 0 and k > 0:
nFact = fdict[n]
kFact = fdict[k]
nkFact = fdict[n - k]
return nFact - kFact - nkFact
else:
return 0
def get_heap_dict(n, A, n0, A0, plist=[0, {}]):
"""
Determines the HEAP probability for n given A, n0, and A0
Uses equation 4.15 in Harte 2011
Returns a list with the first element is the probability of n individuals
being observed in a quadrat of area A, and the second element is a
dictionary that was built to compute that probability
"""
i = int(log(A0 / A,2))
if (n,n0,i) not in plist[1]:
if i == 1:
plist[1][(n,n0,i)] = 1 / (n0 + 1)
else:
A = A * 2
plist[1][(n,n0,i)] = sum([get_heap_dict(q, A, n0, A0, plist)[0]/ (q + 1) for q in range(n, n0 + 1)])
plist[0] = plist[1][(n,n0,i)]
return plist
def build_heap_dict(n,n0,i, filename='heap_lookup_table.pck'):
"""Add values to the lookup table for heap"""
heap_dictionary = get_dict(filename)
if (n,n0,i) not in heap_dictionary:
A = 1
A0 = 2 ** i
heap_dictionary.update( get_heap_dict(n,A,n0,A0,[0,heap_dictionary])[1] )
save_dict(heap_dictionary, filename)
print("Dictionary building completed")
def get_lambda_heap(i, n0):
"""
Probability of observing at least one individual of a species with n0 individuals
given a randomly sampled quadrat of area A out of a total area A0.
This function uses the iterative or HEAP scaling model
Harte 2007, Scaling Biodiversity Chp. Eq. 6.4, pg. 106
i: number of bisections
n0: abundance
"""
if i == 0:
lambda_heap = 1
if i != 0:
A0 = 2 ** i
lambda_heap = 1 - heap_prob(0, 1, n0, A0)
return(lambda_heap)
def get_lambda_mete(i, n0):
"""
Probability of observing at least one individual of a species with n0 individuals
given a randomly sampled quadrat of area A out of a total area A0.
This function uses the non-iterative scaling model
i: number of bisections
n0: abundance
"""
if i == 0:
lambda_mete = 1
if i != 0:
A0 = 2 ** i
lambda_mete = 1 - get_mete_Pi(0, 1, n0, A0)
return(lambda_mete)
def get_lambda_bisect(i, n0, psi):
"""
Probability of observing at least one individual of a species with n0 individuals
given a randomly sampled quadrat of area A out of a total area A0.
i: number of bisections
n0: abundance
psi: aggregation parameter {0, 1}
"""
if i == 0:
lambda_bisect = 1
if i != 0:
A0 = 2 ** i
lambda_bisect = 1 - bisect_prob(0, 1, n0, A0, psi)
return(lambda_bisect)
def chi_heap(i, j, n0, chi_dict={}):
"""
calculates the commonality function for a given degree of bisection (i) at
orders of separation (j)
Harte 2007, Scaling Biodiversity Chp. Eq. 6.10, pg. 113
i: number of bisections
j: order of separation
"""
if n0 == 1:
out = 0
else:
key = (i, j, n0)
if key not in chi_dict:
if j == 1:
chi_dict[key] = (n0 + 1)**-1 * sum(map(lambda m: get_lambda_heap(i - 1, m) *
get_lambda_heap(i - 1, n0 - m), range(1, n0)))
else:
i -= 1
j -= 1
chi_dict[key] = (n0 + 1)**-1 * sum(map(lambda m: chi_heap(i, j, m, chi_dict), range(2, n0 + 1)))
out = chi_dict[key]
return(out)
def chi_bisect(i, j, n0, psi, chi_dict={}):
"""
calculates the commonality function for a given degree of bisection (i) at
orders of separation (j) with a specific level of aggregation.
i: number of bisections
j: order of separation
n0: abundance
psi: aggregation parameter {0, 1}
Note: Function has only been checked at psi = .5, more testing is needed here
"""
if n0 == 1:
out = 0
else:
key = (i, j, n0, psi)
if key not in chi_dict:
if j == 1:
chi_dict[key] = sum(map(lambda m: single_prob(m, n0, psi) *
get_lambda_bisect(i - 1, m, psi) *
get_lambda_bisect(i - 1, n0 - m, psi), range(1, n0)))
else:
i -= 1
j -= 1
chi_dict[key] = sum(map(lambda m: single_prob(m, n0, psi) *
chi_bisect(i, j, m, psi, chi_dict) , range(2, n0 + 1)))
out = chi_dict[key]
return(out)
def sep_orders(i, shape='sqr'):
"""
Arguments:
i: number of bisections or scale of A relative to A0
shape: sqr, rect, or golden to indicate that A0 is a
square, rectangle, or golden rectangle, respectively
Note: golden rectangle has the dimensions L x L(2^.5)
Returns:
separation orders at which the bisections are
shape preserving
"""
if shape == 'golden':
j = range(i, 0, -1)
if shape == 'sqr':
j = range(i, 0, -1)
even_indices = which([index == 0 for index in map(lambda j: j % 2, j)])
j = [j[index] for index in even_indices]
if shape == 'rect':
j = range(i, 0, -1)
odd_indices = which([index == 1 for index in map(lambda j: j % 2, j)])
j = [j[index] for index in odd_indices]
return j
def calc_D(j, L=1):
"""
Distance calculation for a given separation order
(applies to shape-preserving bisections)
From Ostling et al. (2004) pg. 630
j: separation order
L: width of rectangle or square of area A0
"""
D = L / 2**(j / 2)
return D
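# Quick check (assuming the module uses true division): for a square plot of width L = 1,
# separation order j = 2 gives D = 1 / 2 ** (2 / 2) = 0.5, i.e. half the plot width.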
def sor_heap(A, A0, S0, N0, shape='sqr', L=1):
"""
Computes Sorensen's similarity index using the truncated logseries SAD
for a given spatial grain (A) at all possible separation distances
Scaling Biodiversity Chp. Eq. 6.10, pg. 113
Also see Plotkin and Muller-Landau (2002), Eq. 10, which
demonstrates the formulation of Sorensen's index for this case, in which
the abundance distribution is specified but the realized abundances are unknown
shape: shape of A0 see function sep_orders()
L: the width of the rectangle or square area A0
"""
beta = get_beta(S0, N0)
i = int(log(A0 / A, 2))
j = sep_orders(i, shape)
L = [L] * len(j)
d = map(calc_D, j, L)
chi = np.empty((N0, len(d)))
lambda_vals = np.empty((N0, len(d)))
for n in range(1, N0 + 1):
## Eq. 7.32 in Harte 2009
prob_n_indiv = exp(-beta * n) / (n * log(beta ** -1))
chi_tmp = map(lambda jval: chi_heap(i, jval, n), j)
lambda_tmp = [get_lambda_heap(i, n)] * len(d)
chi[n - 1, ] = [prob_n_indiv * x for x in chi_tmp]
lambda_vals[n - 1, ] = [prob_n_indiv * x for x in lambda_tmp]
sor = map(lambda col: sum(chi[:, col]) / sum(lambda_vals[:, col]), range(0, len(d)))
i = [i] * len(j)
out = np.array([i, j, d, sor]).transpose()
return out
def sor_heap_fixed_abu(A, n0, A0, shape='sqr', L=1):
"""
Computes Sorensen's similarity index for a given SAD (n0)
and spatial grain (A) at all possible separation distances
Scaling Biodiversity Chp. Eq. 6.10, pg. 113
shape: shape of A0 see function sep_orders()
L: the width of the rectangle or square area A0
"""
if isinstance(n0, (int, long)):
n0 = [n0]
n0_unique = list(set(n0))
n0_uni_len = len(n0_unique)
n0_count = [n0.count(x) for x in n0_unique]
i = int(log(A0 / A, 2))
j = sep_orders(i, shape)
L = [L] * len(j)
d = map(calc_D, j, L)
chi = np.empty((n0_uni_len, len(d)))
lambda_vals = np.empty((n0_uni_len, len(d)))
for sp in range(0, n0_uni_len):
chi_tmp = map(lambda jval: chi_heap(i, jval, n0_unique[sp]), j)
lambda_tmp = [get_lambda_heap(i, n0_unique[sp])] * len(d)
chi[sp, ] = [n0_count[sp] * x for x in chi_tmp]
lambda_vals[sp, ] = [n0_count[sp] * x for x in lambda_tmp]
sor = map(lambda col: sum(chi[:, col]) / sum(lambda_vals[:, col]), range(0, len(d)))
i = [i] * len(j)
out = np.array([i, j, d, sor]).transpose()
return out
def sor_bisect(A, A0, S0, N0, psi, shape='sqr', L=1):
"""
Computes Sorensen's similarity index using the truncated logseries SAD
for a given spatial grain (A) at all possible separation distances
Scaling Biodiversity Chp. Eq. 6.10, pg. 113
Also see Plotkin and Muller-Landau (2002), Eq. 10, which
demonstrates the formulation of Sorensen's index for this case, in which
the abundance distribution is specified but the realized abundances are unknown
psi: aggregation parameter {0, 1}
shape: shape of A0 see function sep_orders()
L: the width of the rectangle or square area A0
"""
beta = get_beta(S0, N0)
i = int(log(A0 / A, 2))
j = sep_orders(i, shape)
L = [L] * len(j)
d = map(calc_D, j, L)
chi = np.empty((N0, len(d)))
lambda_vals = np.empty((N0, len(d)))
for n in range(1, N0 + 1):
## Eq. 7.32 in Harte 2009
prob_n_indiv = exp(-beta * n) / (n * log(beta ** -1))
chi_tmp = map(lambda jval: chi_bisect(i, jval, n, psi), j)
lambda_tmp = [get_lambda_bisect(i, n, psi)] * len(d)
chi[n - 1, ] = [prob_n_indiv * x for x in chi_tmp]
lambda_vals[n - 1, ] = [prob_n_indiv * x for x in lambda_tmp]
sor = map(lambda col: sum(chi[:, col]) / sum(lambda_vals[:, col]), range(0, len(d)))
i = [i] * len(j)
psi = [psi] * len(j)
out = np.array([psi, i, j, d, sor]).transpose()
return out
def sor_bisect_fixed_abu(A, n0, A0, psi, shape='sqr', L=1):
"""
Computes Sorensen's similarity index for a given SAD (n0)
and spatial grain (A) at all possible separation distances
Scaling Biodiversity Chp. Eq. 6.10, pg. 113
psi: aggregation parameter {0, 1}
shape: shape of A0 see function sep_orders()
L: the width of the rectangle or square area A0
"""
if isinstance(n0, (int, long)):
n0 = [n0]
n0_unique = list(set(n0))
n0_uni_len = len(n0_unique)
n0_count = [n0.count(x) for x in n0_unique]
i = int(log(A0 / A, 2))
j = sep_orders(i, shape)
L = [L] * len(j)
d = map(calc_D, j, L)
chi = np.empty((n0_uni_len, len(d)))
lambda_vals = np.empty((n0_uni_len, len(d)))
for sp in range(0, n0_uni_len):
chi_tmp = map(lambda jval: chi_bisect(i, jval, n0_unique[sp], psi), j)
lambda_tmp = [get_lambda_bisect(i, n0_unique[sp], psi)] * len(d)
chi[sp, ] = [n0_count[sp] * x for x in chi_tmp]
lambda_vals[sp, ] = [n0_count[sp] * x for x in lambda_tmp]
sor = map(lambda col: sum(chi[:, col]) / sum(lambda_vals[:, col]), range(0, len(d)))
i = [i] * len(j)
psi = [psi] * len(j)
out = np.array([psi, i, j, d, sor]).transpose()
return out
def sim_spatial_one_step(abu_list):
"""Simulates the abundances of species after bisecting one cell.
Input: species abundances in the original cell.
Output: a list with two sublists containing species abundances in the two
halved cells.
Assuming indistinguishable individuals (see Harte et al. 2008).
"""
abu_half_1 = []
abu_half_2 = []
for spp in abu_list:
if spp == 0:
abu_half_1.append(0)
abu_half_2.append(0)
else:
abu_1 = random_integers(0, spp)
abu_half_1.append(abu_1)
abu_half_2.append(spp - abu_1)
abu_halves = [abu_half_1, abu_half_2]
return abu_halves
def sim_spatial_whole(S, N, bisec, transect=False, abu=None, beta=None):
"""Simulates species abundances in all cells given S & N at whole plot
level and bisection number.
Keyword arguments:
S -- the number of species
N -- the number of individuals
bisec -- the number of bisections to carry out (see Note below)
transect -- boolean, if True a 1-dimensional spatial community is
generated, the default is to generate a spatially 2-dimensional
community
abu -- an optional abundance vector that can be supplied for the community
instead of using log-series random variates
Output: a list of lists, each sublist contains species abundance in one
cell, and x-y coordinate of the cell.
Note: bisection number 1 corresponds to first bisection which intersects the x-axis
"""
if S == 1:
abu = [N]
if abu is None:
if beta is None:
p = exp(-get_beta(S, N))
else:
p = exp(-beta)
abu = trunc_logser_rvs(p, N, size=S)
abu_prev = [[1, 1, array(abu)]]
bisec_num = 0
while bisec_num < bisec:
abu_new = []
for cell in abu_prev:
x_prev = cell[0]
y_prev = cell[1]
abu_new_cell = sim_spatial_one_step(cell[2])
if transect:
cell_new_1 = [x_prev * 2 - 1, y_prev, abu_new_cell[0]]
cell_new_2 = [x_prev * 2, y_prev, abu_new_cell[1]]
else:
if bisec_num % 2 == 0:
cell_new_1 = [x_prev * 2 - 1, y_prev, abu_new_cell[0]]
cell_new_2 = [x_prev * 2, y_prev, abu_new_cell[1]]
else:
cell_new_1 = [x_prev, y_prev * 2 - 1, abu_new_cell[0]]
cell_new_2 = [x_prev, y_prev * 2, abu_new_cell[1]]
abu_new.append(cell_new_1)
abu_new.append(cell_new_2)
abu_prev = abu_new
bisec_num += 1
return abu_prev
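# Example sketch (hypothetical parameters): simulate S = 5 species and N = 100 individuals
# on a plot bisected 4 times; the result is a list of 2 ** 4 == 16 cells, each entry of the
# form [x, y, abundance_array].
# >>> cells = sim_spatial_whole(S=5, N=100, bisec=4)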
def sim_spatial_whole_iter(S, N, bisec, coords, n_iter = 10000):
"""Simulates the bisection n_iter times and gets the aggregated species
richness in plots with given coordinates."""
max_x = 2 ** ceil((bisec - 1) / 2)
max_y = 2 ** floor((bisec - 1) / 2)
if max(array(coords)[:,0]) > max_x or max(array(coords)[:,1]) > max_y:
print("Error: Coordinates out of bounds.")
return float('nan')
else:
i = 1
S_list = []
while i <= n_iter:
abu_list = []
abu_plot = sim_spatial_whole(S, N, bisec)
for i_coords in coords:
for j_cell in abu_plot:
if j_cell[0] == i_coords[0] and j_cell[1] == i_coords[1]:
abu_list.append(j_cell[2])
break
abu_agg = array(abu_list).sum(axis = 0)
S_i = sum(abu_agg != 0)
S_list.append(S_i)
i += 1
S_avg = sum(S_list) / len(S_list)
return S_avg
def community_energy_pdf(epsilon, S0, N0, E0):
"""Community-level distribution of individual metabolic rates (Psi) predicted by METE
(Harte 2011). Note: get_lambda1() and get_lambda2() are called without arguments here,
so they are assumed to obtain the Lagrange multipliers for S0, N0, and E0 elsewhere."""
lambda1 = get_lambda1()
lambda2 = get_lambda2()
gamma = lambda1 + epsilon * lambda2
exp_neg_gamma = exp(-gamma)
return S0 / N0 * (exp_neg_gamma / (1 - exp_neg_gamma) ** 2 -
exp_neg_gamma ** N0 / (1 - exp_neg_gamma) *
(N0 + exp_neg_gamma / (1 - exp_neg_gamma)))
def which(boolean_list):
""" Mimics the R function 'which()' and it returns the indics of the
boolean list that are labeled True """
return [i for i in range(0, len(boolean_list)) if boolean_list[i]]
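# Small illustration of the helper above (not part of the original file):
# which([True, False, True]) evaluates to [0, 2].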
def order(num_list):
"""
This function mimics the R function 'order()' and it carries out a
bubble sort on a list and an associated index list.
The function only returns the index list so that the order of the sorting
is returned but not the list in sorted order
Note: [x[i] for i in order(x)] is the same as sorted(x)
"""
num_list = list(num_list)
list_length = len(num_list)
index_list = range(0, list_length)
swapped = True
while swapped:
swapped = False
for i in range(1, list_length):
if num_list[i-1] > num_list[i]:
temp1 = num_list[i-1]
temp2 = num_list[i]
num_list[i-1] = temp2
num_list[i] = temp1
temp1 = index_list[i-1]
temp2 = index_list[i]
index_list[i-1] = temp2
index_list[i] = temp1
swapped = True
return index_list
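# Small illustration of the helper above (not part of the original file):
# order([3, 1, 2]) returns [1, 2, 0], so [x[i] for i in order(x)] reproduces sorted(x).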
def single_rvs(n0, psi, size=1):
"""Generate random deviates from the single division model, still
is not working properly possibily needs to be checked"""
cdf = single_cdf(1,n0,2,psi)
xvals = [0] * size
for i in range(size):
rand_float = uniform(0,1)
temp_cdf = list(cdf + [rand_float])
ordered_values = order(temp_cdf)
xvals[i] = [j for j in range(0,len(ordered_values)) if ordered_values[j] == (n0 + 1)][0]
return xvals
def get_F(a, n):
"""
Eq. 7 in Conlisk et al. (2007)
gamma(a + n) / (gamma(a) * gamma(n + 1))
"""
return mp.gamma(a + n) / (mp.gamma(a) * mp.gamma(n + 1))
def single_prob(n, n0, psi, c=2):
"""
Eq. 1.3 in Conlisk et al. (2007), note that this implementation is
only correct when the variable c = 2
Note: if psi = .5 this is the special HEAP case in which the
function no longer depends on n.
c = number of cells
"""
a = (1 - psi) / psi
F = (get_F(a, n) * get_F((c - 1) * a, n0 - n)) / get_F(c * a, n0)
return float(F)
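# Worked special case (consistent with the docstring note): with psi = 0.5 the parameter
# a = 1, every get_F factor in the numerator equals 1 and the denominator is n0 + 1, so
# single_prob(n, n0, 0.5) reduces to 1 / (n0 + 1) for any n, i.e. the HEAP case.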
def single_cdf(n0, psi):
cdf = [0.0] * (n0 + 1)
for n in range(0, n0 + 1):
if n == 0:
cdf[n] = single_prob(n, n0, psi)
else:
cdf[n] = cdf[n - 1] + single_prob(n, n0, psi)
return cdf
| mit |
jschavem/facial-expression-classification | source/classification/reference/cv_filtered.py | 1 | 1733 | from sklearn.metrics import confusion_matrix
from sklearn.metrics import f1_score
from sklearn.metrics import accuracy_score
import numpy as np
import time
from lib.in_subject_cross_validation import cross_validate_dataset
selected_attributes = {
'koelstra-approach': {
# 0: [3, 4, 28, 32, 33, 41, 62, 70],
0: [3, 12, 32, 43, 44],
1: [13, 17, 46, 57, 72, 73, 75],
2: [3, 4, 5, 13, 14, 17, 18, 24, 30, 32, 34, 46, 53, 59, 61, 71, 75, 88]
},
'koelstra-normalized': {
0: [3, 4, 28, 32, 33, 41, 62, 70],
1: [3, 17, 30, 31, 32, 34, 41, 42, 49, 59, 60, 61, 62, 72, 73, 75, 78, 86, 88, 89],
2: [3, 4, 5, 12, 14, 15, 17, 24, 28, 30, 31, 32, 33, 34, 41, 43, 44, 46, 47, 53, 57, 59, 60, 61, 62, 63, 70, 71,
72, 73, 75, 76, 82, 86, 88, 89],
}
}
def main():
dataset = 'koelstra-approach'
attribute_index = 0
attributes = selected_attributes[dataset][attribute_index]
actual, predicted = cross_validate_dataset(dataset, attribute_index, ground_truth_count=attributes)
conf_matrix = confusion_matrix(actual, predicted, ['low', 'high'])
print conf_matrix
print ""
scores = f1_score(actual, predicted, ['low', 'high'], 'low', average=None)
class_counts = [sum(row) for row in conf_matrix]
average_f1 = np.average(scores, weights=class_counts)
accuracy = accuracy_score(actual, predicted)
print "Average F1 score: %.3f" % average_f1
print "Average accuracy: %.3f" % accuracy
attr_names = ["valence", "arousal", "control"]
print "py-%s-rfe,%s,%s,%.3f,%.3f" % (
dataset, attr_names[attribute_index], time.strftime('%Y-%m-%d'), average_f1, accuracy)
if __name__ == '__main__':
main()
| mit |
rishikksh20/scikit-learn | sklearn/gaussian_process/gaussian_process.py | 19 | 34705 | # -*- coding: utf-8 -*-
# Author: Vincent Dubourg <[email protected]>
# (mostly translation, see implementation details)
# License: BSD 3 clause
from __future__ import print_function
import numpy as np
from scipy import linalg, optimize
from ..base import BaseEstimator, RegressorMixin
from ..metrics.pairwise import manhattan_distances
from ..utils import check_random_state, check_array, check_X_y
from ..utils.validation import check_is_fitted
from . import regression_models as regression
from . import correlation_models as correlation
from ..utils import deprecated
MACHINE_EPSILON = np.finfo(np.double).eps
@deprecated("l1_cross_distances was deprecated in version 0.18 "
"and will be removed in 0.20.")
def l1_cross_distances(X):
"""
Computes the nonzero componentwise L1 cross-distances between the vectors
in X.
Parameters
----------
X : array_like
An array with shape (n_samples, n_features)
Returns
-------
D : array with shape (n_samples * (n_samples - 1) / 2, n_features)
The array of componentwise L1 cross-distances.
ij : arrays with shape (n_samples * (n_samples - 1) / 2, 2)
The indices i and j of the vectors in X associated to the cross-
distances in D: D[k] = np.abs(X[ij[k, 0]] - Y[ij[k, 1]]).
"""
X = check_array(X)
n_samples, n_features = X.shape
n_nonzero_cross_dist = n_samples * (n_samples - 1) // 2
ij = np.zeros((n_nonzero_cross_dist, 2), dtype=np.int)
D = np.zeros((n_nonzero_cross_dist, n_features))
ll_1 = 0
for k in range(n_samples - 1):
ll_0 = ll_1
ll_1 = ll_0 + n_samples - k - 1
ij[ll_0:ll_1, 0] = k
ij[ll_0:ll_1, 1] = np.arange(k + 1, n_samples)
D[ll_0:ll_1] = np.abs(X[k] - X[(k + 1):n_samples])
return D, ij
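# Usage sketch (illustrative only): for an input X with 3 samples, D has
# 3 * (3 - 1) / 2 == 3 rows, one per unordered sample pair, and ij[k] records which
# pair of sample indices row D[k] corresponds to.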
@deprecated("GaussianProcess was deprecated in version 0.18 and will be "
"removed in 0.20. Use the GaussianProcessRegressor instead.")
class GaussianProcess(BaseEstimator, RegressorMixin):
"""The legacy Gaussian Process model class.
.. deprecated:: 0.18
This class will be removed in 0.20.
Use the :class:`GaussianProcessRegressor` instead.
Read more in the :ref:`User Guide <gaussian_process>`.
Parameters
----------
regr : string or callable, optional
A regression function returning an array of outputs of the linear
regression functional basis. The number of observations n_samples
should be greater than the size p of this basis.
Default assumes a simple constant regression trend.
Available built-in regression models are::
'constant', 'linear', 'quadratic'
corr : string or callable, optional
A stationary autocorrelation function returning the autocorrelation
between two points x and x'.
Default assumes a squared-exponential autocorrelation model.
Built-in correlation models are::
'absolute_exponential', 'squared_exponential',
'generalized_exponential', 'cubic', 'linear'
beta0 : double array_like, optional
The regression weight vector to perform Ordinary Kriging (OK).
Default assumes Universal Kriging (UK) so that the vector beta of
regression weights is estimated using the maximum likelihood
principle.
storage_mode : string, optional
A string specifying whether the Cholesky decomposition of the
correlation matrix should be stored in the class (storage_mode =
'full') or not (storage_mode = 'light').
Default assumes storage_mode = 'full', so that the
Cholesky decomposition of the correlation matrix is stored.
This might be a useful parameter when one is not interested in the
MSE and only plan to estimate the BLUP, for which the correlation
matrix is not required.
verbose : boolean, optional
A boolean specifying the verbose level.
Default is verbose = False.
theta0 : double array_like, optional
An array with shape (n_features, ) or (1, ).
The parameters in the autocorrelation model.
If thetaL and thetaU are also specified, theta0 is considered as
the starting point for the maximum likelihood estimation of the
best set of parameters.
Default assumes isotropic autocorrelation model with theta0 = 1e-1.
thetaL : double array_like, optional
An array with shape matching theta0's.
Lower bound on the autocorrelation parameters for maximum
likelihood estimation.
Default is None, so that it skips maximum likelihood estimation and
it uses theta0.
thetaU : double array_like, optional
An array with shape matching theta0's.
Upper bound on the autocorrelation parameters for maximum
likelihood estimation.
Default is None, so that it skips maximum likelihood estimation and
it uses theta0.
normalize : boolean, optional
Input X and observations y are centered and reduced wrt
means and standard deviations estimated from the n_samples
observations provided.
Default is normalize = True so that data is normalized to ease
maximum likelihood estimation.
nugget : double or ndarray, optional
Introduce a nugget effect to allow smooth predictions from noisy
data. If nugget is an ndarray, it must be the same length as the
number of data points used for the fit.
The nugget is added to the diagonal of the assumed training covariance;
in this way it acts as a Tikhonov regularization in the problem. In
the special case of the squared exponential correlation function, the
nugget mathematically represents the variance of the input values.
Default assumes a nugget close to machine precision for the sake of
robustness (nugget = 10. * MACHINE_EPSILON).
optimizer : string, optional
A string specifying the optimization algorithm to be used.
Default uses 'fmin_cobyla' algorithm from scipy.optimize.
Available optimizers are::
'fmin_cobyla', 'Welch'
'Welch' optimizer is due to Welch et al., see reference [WBSWM1992]_.
It consists in iterating over several one-dimensional optimizations
instead of running one single multi-dimensional optimization.
random_start : int, optional
The number of times the Maximum Likelihood Estimation should be
performed from a random starting point.
The first MLE always uses the specified starting point (theta0),
the next starting points are picked at random according to an
exponential distribution (log-uniform on [thetaL, thetaU]).
Default does not use random starting point (random_start = 1).
random_state : integer or numpy.RandomState, optional
The generator used to shuffle the sequence of coordinates of theta in
the Welch optimizer. If an integer is given, it fixes the seed.
Defaults to the global numpy random number generator.
Attributes
----------
theta_ : array
Specified theta OR the best set of autocorrelation parameters (the \
sought maximizer of the reduced likelihood function).
reduced_likelihood_function_value_ : array
The optimal reduced likelihood function value.
Examples
--------
>>> import numpy as np
>>> from sklearn.gaussian_process import GaussianProcess
>>> X = np.array([[1., 3., 5., 6., 7., 8.]]).T
>>> y = (X * np.sin(X)).ravel()
>>> gp = GaussianProcess(theta0=0.1, thetaL=.001, thetaU=1.)
>>> gp.fit(X, y) # doctest: +ELLIPSIS
GaussianProcess(beta0=None...
...
Notes
-----
The presentation implementation is based on a translation of the DACE
Matlab toolbox, see reference [NLNS2002]_.
References
----------
.. [NLNS2002] `H.B. Nielsen, S.N. Lophaven, H. B. Nielsen and J.
Sondergaard. DACE - A MATLAB Kriging Toolbox.` (2002)
http://imedea.uib-csic.es/master/cambioglobal/Modulo_V_cod101615/Lab/lab_maps/krigging/DACE-krigingsoft/dace/dace.pdf
.. [WBSWM1992] `W.J. Welch, R.J. Buck, J. Sacks, H.P. Wynn, T.J. Mitchell,
and M.D. Morris (1992). Screening, predicting, and computer
experiments. Technometrics, 34(1) 15--25.`
http://www.jstor.org/stable/1269548
"""
_regression_types = {
'constant': regression.constant,
'linear': regression.linear,
'quadratic': regression.quadratic}
_correlation_types = {
'absolute_exponential': correlation.absolute_exponential,
'squared_exponential': correlation.squared_exponential,
'generalized_exponential': correlation.generalized_exponential,
'cubic': correlation.cubic,
'linear': correlation.linear}
_optimizer_types = [
'fmin_cobyla',
'Welch']
def __init__(self, regr='constant', corr='squared_exponential', beta0=None,
storage_mode='full', verbose=False, theta0=1e-1,
thetaL=None, thetaU=None, optimizer='fmin_cobyla',
random_start=1, normalize=True,
nugget=10. * MACHINE_EPSILON, random_state=None):
self.regr = regr
self.corr = corr
self.beta0 = beta0
self.storage_mode = storage_mode
self.verbose = verbose
self.theta0 = theta0
self.thetaL = thetaL
self.thetaU = thetaU
self.normalize = normalize
self.nugget = nugget
self.optimizer = optimizer
self.random_start = random_start
self.random_state = random_state
def fit(self, X, y):
"""
The Gaussian Process model fitting method.
Parameters
----------
X : double array_like
An array with shape (n_samples, n_features) with the input at which
observations were made.
y : double array_like
An array with shape (n_samples, ) or shape (n_samples, n_targets)
with the observations of the output to be predicted.
Returns
-------
gp : self
A fitted Gaussian Process model object awaiting data to perform
predictions.
"""
# Run input checks
self._check_params()
self.random_state = check_random_state(self.random_state)
# Force data to 2D numpy.array
X, y = check_X_y(X, y, multi_output=True, y_numeric=True)
self.y_ndim_ = y.ndim
if y.ndim == 1:
y = y[:, np.newaxis]
# Check shapes of DOE & observations
n_samples, n_features = X.shape
_, n_targets = y.shape
# Run input checks
self._check_params(n_samples)
# Normalize data or don't
if self.normalize:
X_mean = np.mean(X, axis=0)
X_std = np.std(X, axis=0)
y_mean = np.mean(y, axis=0)
y_std = np.std(y, axis=0)
X_std[X_std == 0.] = 1.
y_std[y_std == 0.] = 1.
# center and scale X if necessary
X = (X - X_mean) / X_std
y = (y - y_mean) / y_std
else:
X_mean = np.zeros(1)
X_std = np.ones(1)
y_mean = np.zeros(1)
y_std = np.ones(1)
# Calculate matrix of distances D between samples
D, ij = l1_cross_distances(X)
if (np.min(np.sum(D, axis=1)) == 0.
and self.corr != correlation.pure_nugget):
raise Exception("Multiple input features cannot have the same"
" target value.")
# Regression matrix and parameters
F = self.regr(X)
n_samples_F = F.shape[0]
if F.ndim > 1:
p = F.shape[1]
else:
p = 1
if n_samples_F != n_samples:
raise Exception("Number of rows in F and X do not match. Most "
"likely something is going wrong with the "
"regression model.")
if p > n_samples_F:
raise Exception(("Ordinary least squares problem is undetermined "
"n_samples=%d must be greater than the "
"regression model size p=%d.") % (n_samples, p))
if self.beta0 is not None:
if self.beta0.shape[0] != p:
raise Exception("Shapes of beta0 and F do not match.")
# Set attributes
self.X = X
self.y = y
self.D = D
self.ij = ij
self.F = F
self.X_mean, self.X_std = X_mean, X_std
self.y_mean, self.y_std = y_mean, y_std
# Determine Gaussian Process model parameters
if self.thetaL is not None and self.thetaU is not None:
# Maximum Likelihood Estimation of the parameters
if self.verbose:
print("Performing Maximum Likelihood Estimation of the "
"autocorrelation parameters...")
self.theta_, self.reduced_likelihood_function_value_, par = \
self._arg_max_reduced_likelihood_function()
if np.isinf(self.reduced_likelihood_function_value_):
raise Exception("Bad parameter region. "
"Try increasing upper bound")
else:
# Given parameters
if self.verbose:
print("Given autocorrelation parameters. "
"Computing Gaussian Process model parameters...")
self.theta_ = self.theta0
self.reduced_likelihood_function_value_, par = \
self.reduced_likelihood_function()
if np.isinf(self.reduced_likelihood_function_value_):
raise Exception("Bad point. Try increasing theta0.")
self.beta = par['beta']
self.gamma = par['gamma']
self.sigma2 = par['sigma2']
self.C = par['C']
self.Ft = par['Ft']
self.G = par['G']
if self.storage_mode == 'light':
# Delete heavy data (it will be computed again if required)
# (it is required only when MSE is wanted in self.predict)
if self.verbose:
print("Light storage mode specified. "
"Flushing autocorrelation matrix...")
self.D = None
self.ij = None
self.F = None
self.C = None
self.Ft = None
self.G = None
return self
def predict(self, X, eval_MSE=False, batch_size=None):
"""
This function evaluates the Gaussian Process model at x.
Parameters
----------
X : array_like
An array with shape (n_eval, n_features) giving the point(s) at
which the prediction(s) should be made.
eval_MSE : boolean, optional
A boolean specifying whether the Mean Squared Error should be
evaluated or not.
Default assumes evalMSE = False and evaluates only the BLUP (mean
prediction).
batch_size : integer, optional
An integer giving the maximum number of points that can be
evaluated simultaneously (depending on the available memory).
Default is None so that all given points are evaluated at the same
time.
Returns
-------
y : array_like, shape (n_samples, ) or (n_samples, n_targets)
An array with shape (n_eval, ) if the Gaussian Process was trained
on an array of shape (n_samples, ) or an array with shape
(n_eval, n_targets) if the Gaussian Process was trained on an array
of shape (n_samples, n_targets) with the Best Linear Unbiased
Prediction at x.
MSE : array_like, optional (if eval_MSE == True)
An array with shape (n_eval, ) or (n_eval, n_targets) as with y,
with the Mean Squared Error at x.
"""
check_is_fitted(self, "X")
# Check input shapes
X = check_array(X)
n_eval, _ = X.shape
n_samples, n_features = self.X.shape
n_samples_y, n_targets = self.y.shape
# Run input checks
self._check_params(n_samples)
if X.shape[1] != n_features:
raise ValueError(("The number of features in X (X.shape[1] = %d) "
"should match the number of features used "
"for fit() "
"which is %d.") % (X.shape[1], n_features))
if batch_size is None:
# No memory management
# (evaluates all given points in a single batch run)
# Normalize input
X = (X - self.X_mean) / self.X_std
# Initialize output
y = np.zeros(n_eval)
if eval_MSE:
MSE = np.zeros(n_eval)
# Get pairwise componentwise L1-distances to the input training set
dx = manhattan_distances(X, Y=self.X, sum_over_features=False)
# Get regression function and correlation
f = self.regr(X)
r = self.corr(self.theta_, dx).reshape(n_eval, n_samples)
# Scaled predictor
y_ = np.dot(f, self.beta) + np.dot(r, self.gamma)
# Predictor
y = (self.y_mean + self.y_std * y_).reshape(n_eval, n_targets)
if self.y_ndim_ == 1:
y = y.ravel()
# Mean Squared Error
if eval_MSE:
C = self.C
if C is None:
# Light storage mode (need to recompute C, F, Ft and G)
if self.verbose:
print("This GaussianProcess used 'light' storage mode "
"at instantiation. Need to recompute "
"autocorrelation matrix...")
reduced_likelihood_function_value, par = \
self.reduced_likelihood_function()
self.C = par['C']
self.Ft = par['Ft']
self.G = par['G']
rt = linalg.solve_triangular(self.C, r.T, lower=True)
if self.beta0 is None:
# Universal Kriging
u = linalg.solve_triangular(self.G.T,
np.dot(self.Ft.T, rt) - f.T,
lower=True)
else:
# Ordinary Kriging
u = np.zeros((n_targets, n_eval))
MSE = np.dot(self.sigma2.reshape(n_targets, 1),
(1. - (rt ** 2.).sum(axis=0)
+ (u ** 2.).sum(axis=0))[np.newaxis, :])
MSE = np.sqrt((MSE ** 2.).sum(axis=0) / n_targets)
# Mean Squared Error might be slightly negative depending on
# machine precision: force to zero!
MSE[MSE < 0.] = 0.
if self.y_ndim_ == 1:
MSE = MSE.ravel()
return y, MSE
else:
return y
else:
# Memory management
if type(batch_size) is not int or batch_size <= 0:
raise Exception("batch_size must be a positive integer")
if eval_MSE:
y, MSE = np.zeros(n_eval), np.zeros(n_eval)
for k in range(max(1, int(n_eval / batch_size))):
batch_from = k * batch_size
batch_to = min([(k + 1) * batch_size + 1, n_eval + 1])
y[batch_from:batch_to], MSE[batch_from:batch_to] = \
self.predict(X[batch_from:batch_to],
eval_MSE=eval_MSE, batch_size=None)
return y, MSE
else:
y = np.zeros(n_eval)
for k in range(max(1, int(n_eval / batch_size))):
batch_from = k * batch_size
batch_to = min([(k + 1) * batch_size + 1, n_eval + 1])
y[batch_from:batch_to] = \
self.predict(X[batch_from:batch_to],
eval_MSE=eval_MSE, batch_size=None)
return y
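# Usage sketch for the method above (illustrative only, mirroring the class docstring
# example): after gp.fit(X, y), calling y_pred, mse = gp.predict(X_new, eval_MSE=True)
# returns the BLUP prediction together with its estimated mean squared error.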
def reduced_likelihood_function(self, theta=None):
"""
This function determines the BLUP parameters and evaluates the reduced
likelihood function for the given autocorrelation parameters theta.
Maximizing this function wrt the autocorrelation parameters theta is
equivalent to maximizing the likelihood of the assumed joint Gaussian
distribution of the observations y evaluated onto the design of
experiments X.
Parameters
----------
theta : array_like, optional
An array containing the autocorrelation parameters at which the
Gaussian Process model parameters should be determined.
Default uses the built-in autocorrelation parameters
(ie ``theta = self.theta_``).
Returns
-------
reduced_likelihood_function_value : double
The value of the reduced likelihood function associated to the
given autocorrelation parameters theta.
par : dict
A dictionary containing the requested Gaussian Process model
parameters:
sigma2
Gaussian Process variance.
beta
Generalized least-squares regression weights for
Universal Kriging or given beta0 for Ordinary
Kriging.
gamma
Gaussian Process weights.
C
Cholesky decomposition of the correlation matrix [R].
Ft
Solution of the linear equation system : [R] x Ft = F
G
QR decomposition of the matrix Ft.
"""
check_is_fitted(self, "X")
if theta is None:
# Use built-in autocorrelation parameters
theta = self.theta_
# Initialize output
reduced_likelihood_function_value = - np.inf
par = {}
# Retrieve data
n_samples = self.X.shape[0]
D = self.D
ij = self.ij
F = self.F
if D is None:
# Light storage mode (need to recompute D, ij and F)
D, ij = l1_cross_distances(self.X)
if (np.min(np.sum(D, axis=1)) == 0.
and self.corr != correlation.pure_nugget):
raise Exception("Multiple X are not allowed")
F = self.regr(self.X)
# Set up R
r = self.corr(theta, D)
R = np.eye(n_samples) * (1. + self.nugget)
R[ij[:, 0], ij[:, 1]] = r
R[ij[:, 1], ij[:, 0]] = r
# Cholesky decomposition of R
try:
C = linalg.cholesky(R, lower=True)
except linalg.LinAlgError:
return reduced_likelihood_function_value, par
# Get generalized least squares solution
Ft = linalg.solve_triangular(C, F, lower=True)
Q, G = linalg.qr(Ft, mode='economic')
sv = linalg.svd(G, compute_uv=False)
rcondG = sv[-1] / sv[0]
if rcondG < 1e-10:
# Check F
sv = linalg.svd(F, compute_uv=False)
condF = sv[0] / sv[-1]
if condF > 1e15:
raise Exception("F is too ill conditioned. Poor combination "
"of regression model and observations.")
else:
# Ft is too ill conditioned, get out (try different theta)
return reduced_likelihood_function_value, par
Yt = linalg.solve_triangular(C, self.y, lower=True)
if self.beta0 is None:
# Universal Kriging
beta = linalg.solve_triangular(G, np.dot(Q.T, Yt))
else:
# Ordinary Kriging
beta = np.array(self.beta0)
rho = Yt - np.dot(Ft, beta)
sigma2 = (rho ** 2.).sum(axis=0) / n_samples
# The determinant of R is equal to the squared product of the diagonal
# elements of its Cholesky decomposition C
detR = (np.diag(C) ** (2. / n_samples)).prod()
# Compute/Organize output
reduced_likelihood_function_value = - sigma2.sum() * detR
par['sigma2'] = sigma2 * self.y_std ** 2.
par['beta'] = beta
par['gamma'] = linalg.solve_triangular(C.T, rho)
par['C'] = C
par['Ft'] = Ft
par['G'] = G
return reduced_likelihood_function_value, par
def _arg_max_reduced_likelihood_function(self):
"""
This function estimates the autocorrelation parameters theta as the
maximizer of the reduced likelihood function.
(Minimization of the opposite reduced likelihood function is used for
convenience)
Parameters
----------
self : All parameters are stored in the Gaussian Process model object.
Returns
-------
optimal_theta : array_like
The best set of autocorrelation parameters (the sought maximizer of
the reduced likelihood function).
optimal_reduced_likelihood_function_value : double
The optimal reduced likelihood function value.
optimal_par : dict
The BLUP parameters associated to thetaOpt.
"""
# Initialize output
best_optimal_theta = []
best_optimal_rlf_value = []
best_optimal_par = []
if self.verbose:
print("The chosen optimizer is: " + str(self.optimizer))
if self.random_start > 1:
print(str(self.random_start) + " random starts are required.")
percent_completed = 0.
# Force optimizer to fmin_cobyla if the model is meant to be isotropic
if self.optimizer == 'Welch' and self.theta0.size == 1:
self.optimizer = 'fmin_cobyla'
if self.optimizer == 'fmin_cobyla':
def minus_reduced_likelihood_function(log10t):
return - self.reduced_likelihood_function(
theta=10. ** log10t)[0]
constraints = []
for i in range(self.theta0.size):
constraints.append(lambda log10t, i=i:
log10t[i] - np.log10(self.thetaL[0, i]))
constraints.append(lambda log10t, i=i:
np.log10(self.thetaU[0, i]) - log10t[i])
for k in range(self.random_start):
if k == 0:
# Use specified starting point as first guess
theta0 = self.theta0
else:
# Generate a random starting point log10-uniformly
# distributed between bounds
log10theta0 = (np.log10(self.thetaL)
+ self.random_state.rand(*self.theta0.shape)
* np.log10(self.thetaU / self.thetaL))
theta0 = 10. ** log10theta0
# Run Cobyla
try:
log10_optimal_theta = \
optimize.fmin_cobyla(minus_reduced_likelihood_function,
np.log10(theta0).ravel(), constraints,
iprint=0)
except ValueError as ve:
print("Optimization failed. Try increasing the ``nugget``")
raise ve
optimal_theta = 10. ** log10_optimal_theta
optimal_rlf_value, optimal_par = \
self.reduced_likelihood_function(theta=optimal_theta)
# Compare the new optimizer to the best previous one
if k > 0:
if optimal_rlf_value > best_optimal_rlf_value:
best_optimal_rlf_value = optimal_rlf_value
best_optimal_par = optimal_par
best_optimal_theta = optimal_theta
else:
best_optimal_rlf_value = optimal_rlf_value
best_optimal_par = optimal_par
best_optimal_theta = optimal_theta
if self.verbose and self.random_start > 1:
if (20 * k) / self.random_start > percent_completed:
percent_completed = (20 * k) / self.random_start
print("%s completed" % (5 * percent_completed))
optimal_rlf_value = best_optimal_rlf_value
optimal_par = best_optimal_par
optimal_theta = best_optimal_theta
elif self.optimizer == 'Welch':
# Backup of the given attributes
theta0, thetaL, thetaU = self.theta0, self.thetaL, self.thetaU
corr = self.corr
verbose = self.verbose
# This will iterate over fmin_cobyla optimizer
self.optimizer = 'fmin_cobyla'
self.verbose = False
# Initialize under isotropy assumption
if verbose:
print("Initialize under isotropy assumption...")
self.theta0 = check_array(self.theta0.min())
self.thetaL = check_array(self.thetaL.min())
self.thetaU = check_array(self.thetaU.max())
theta_iso, optimal_rlf_value_iso, par_iso = \
self._arg_max_reduced_likelihood_function()
optimal_theta = theta_iso + np.zeros(theta0.shape)
# Iterate over all dimensions of theta allowing for anisotropy
if verbose:
print("Now improving allowing for anisotropy...")
for i in self.random_state.permutation(theta0.size):
if verbose:
print("Proceeding along dimension %d..." % (i + 1))
self.theta0 = check_array(theta_iso)
self.thetaL = check_array(thetaL[0, i])
self.thetaU = check_array(thetaU[0, i])
def corr_cut(t, d):
return corr(check_array(np.hstack([optimal_theta[0][0:i],
t[0],
optimal_theta[0][(i +
1)::]])),
d)
self.corr = corr_cut
optimal_theta[0, i], optimal_rlf_value, optimal_par = \
self._arg_max_reduced_likelihood_function()
# Restore the given attributes
self.theta0, self.thetaL, self.thetaU = theta0, thetaL, thetaU
self.corr = corr
self.optimizer = 'Welch'
self.verbose = verbose
else:
raise NotImplementedError("This optimizer ('%s') is not "
"implemented yet. Please contribute!"
% self.optimizer)
return optimal_theta, optimal_rlf_value, optimal_par
def _check_params(self, n_samples=None):
# Check regression model
if not callable(self.regr):
if self.regr in self._regression_types:
self.regr = self._regression_types[self.regr]
else:
raise ValueError("regr should be one of %s or callable, "
"%s was given."
% (self._regression_types.keys(), self.regr))
# Check regression weights if given (Ordinary Kriging)
if self.beta0 is not None:
self.beta0 = np.atleast_2d(self.beta0)
if self.beta0.shape[1] != 1:
# Force to column vector
self.beta0 = self.beta0.T
# Check correlation model
if not callable(self.corr):
if self.corr in self._correlation_types:
self.corr = self._correlation_types[self.corr]
else:
raise ValueError("corr should be one of %s or callable, "
"%s was given."
% (self._correlation_types.keys(), self.corr))
# Check storage mode
if self.storage_mode != 'full' and self.storage_mode != 'light':
raise ValueError("Storage mode should either be 'full' or "
"'light', %s was given." % self.storage_mode)
# Check correlation parameters
self.theta0 = np.atleast_2d(self.theta0)
lth = self.theta0.size
if self.thetaL is not None and self.thetaU is not None:
self.thetaL = np.atleast_2d(self.thetaL)
self.thetaU = np.atleast_2d(self.thetaU)
if self.thetaL.size != lth or self.thetaU.size != lth:
raise ValueError("theta0, thetaL and thetaU must have the "
"same length.")
if np.any(self.thetaL <= 0) or np.any(self.thetaU < self.thetaL):
raise ValueError("The bounds must satisfy O < thetaL <= "
"thetaU.")
elif self.thetaL is None and self.thetaU is None:
if np.any(self.theta0 <= 0):
raise ValueError("theta0 must be strictly positive.")
elif self.thetaL is None or self.thetaU is None:
raise ValueError("thetaL and thetaU should either be both or "
"neither specified.")
# Force verbose type to bool
self.verbose = bool(self.verbose)
# Force normalize type to bool
self.normalize = bool(self.normalize)
# Check nugget value
self.nugget = np.asarray(self.nugget)
if np.any(self.nugget < 0.):
raise ValueError("nugget must be positive or zero.")
if (n_samples is not None
and self.nugget.shape not in [(), (n_samples,)]):
raise ValueError("nugget must be either a scalar "
"or array of length n_samples.")
# Check optimizer
if self.optimizer not in self._optimizer_types:
raise ValueError("optimizer should be one of %s"
% self._optimizer_types)
# Force random_start type to int
self.random_start = int(self.random_start)
| bsd-3-clause |
adykstra/mne-python | mne/io/array/tests/test_array.py | 2 | 6154 | # Author: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.testing import (assert_array_almost_equal, assert_allclose,
assert_equal)
import pytest
import matplotlib.pyplot as plt
from mne import find_events, Epochs, pick_types, channels
from mne.io import read_raw_fif
from mne.io.array import RawArray
from mne.io.tests.test_raw import _test_raw_reader
from mne.io.meas_info import create_info, _kind_dict
from mne.utils import requires_version, run_tests_if_main
base_dir = op.join(op.dirname(__file__), '..', '..', 'tests', 'data')
fif_fname = op.join(base_dir, 'test_raw.fif')
def test_long_names():
"""Test long name support."""
info = create_info(['a' * 15 + 'b', 'a' * 16], 1000., verbose='error')
data = np.empty((2, 1000))
raw = RawArray(data, info)
assert raw.ch_names == ['a' * 13 + '-0', 'a' * 13 + '-1']
info = create_info(['a' * 16] * 11, 1000., verbose='error')
data = np.empty((11, 1000))
raw = RawArray(data, info)
assert raw.ch_names == ['a' * 12 + '-%s' % ii for ii in range(11)]
def test_array_copy():
"""Test copying during construction."""
info = create_info(1, 1000.)
data = np.empty((1, 1000))
# 'auto' (default)
raw = RawArray(data, info)
assert raw._data is data
assert raw.info is not info
raw = RawArray(data.astype(np.float32), info)
assert raw._data is not data
assert raw.info is not info
# 'info' (more restrictive)
raw = RawArray(data, info, copy='info')
assert raw._data is data
assert raw.info is not info
with pytest.raises(ValueError, match="data copying was not .* copy='info"):
RawArray(data.astype(np.float32), info, copy='info')
# 'data'
raw = RawArray(data, info, copy='data')
assert raw._data is not data
assert raw.info is info
# 'both'
raw = RawArray(data, info, copy='both')
assert raw._data is not data
assert raw.info is not info
raw = RawArray(data.astype(np.float32), info, copy='both')
assert raw._data is not data
assert raw.info is not info
# None
raw = RawArray(data, info, copy=None)
assert raw._data is data
assert raw.info is info
with pytest.raises(ValueError, match='data copying was not .* copy=None'):
RawArray(data.astype(np.float32), info, copy=None)
@pytest.mark.slowtest
@requires_version('scipy', '0.12')
def test_array_raw():
"""Test creating raw from array."""
# creating
raw = read_raw_fif(fif_fname).crop(2, 5)
data, times = raw[:, :]
sfreq = raw.info['sfreq']
ch_names = [(ch[4:] if 'STI' not in ch else ch)
for ch in raw.info['ch_names']] # change them, why not
types = list()
for ci in range(101):
types.extend(('grad', 'grad', 'mag'))
types.extend(['ecog', 'seeg', 'hbo']) # really 3 meg channels
types.extend(['stim'] * 9)
types.extend(['eeg'] * 60)
picks = np.concatenate([pick_types(raw.info)[::20],
pick_types(raw.info, meg=False, stim=True),
pick_types(raw.info, meg=False, eeg=True)[::20]])
del raw
data = data[picks]
ch_names = np.array(ch_names)[picks].tolist()
types = np.array(types)[picks].tolist()
types.pop(-1)
# wrong length
pytest.raises(ValueError, create_info, ch_names, sfreq, types)
# bad entry
types.append('foo')
pytest.raises(KeyError, create_info, ch_names, sfreq, types)
types[-1] = 'eog'
# default type
info = create_info(ch_names, sfreq)
assert_equal(info['chs'][0]['kind'], _kind_dict['misc'][0])
# use real types
info = create_info(ch_names, sfreq, types)
raw2 = _test_raw_reader(RawArray, test_preloading=False,
data=data, info=info, first_samp=2 * data.shape[1])
data2, times2 = raw2[:, :]
assert_allclose(data, data2)
assert_allclose(times, times2)
assert ('RawArray' in repr(raw2))
pytest.raises(TypeError, RawArray, info, data)
# filtering
picks = pick_types(raw2.info, misc=True, exclude='bads')[:4]
assert_equal(len(picks), 4)
raw_lp = raw2.copy()
kwargs = dict(fir_design='firwin', picks=picks)
raw_lp.filter(None, 4.0, h_trans_bandwidth=4., **kwargs)
raw_hp = raw2.copy()
raw_hp.filter(16.0, None, l_trans_bandwidth=4., **kwargs)
raw_bp = raw2.copy()
raw_bp.filter(8.0, 12.0, l_trans_bandwidth=4., h_trans_bandwidth=4.,
**kwargs)
raw_bs = raw2.copy()
raw_bs.filter(16.0, 4.0, l_trans_bandwidth=4., h_trans_bandwidth=4.,
**kwargs)
data, _ = raw2[picks, :]
lp_data, _ = raw_lp[picks, :]
hp_data, _ = raw_hp[picks, :]
bp_data, _ = raw_bp[picks, :]
bs_data, _ = raw_bs[picks, :]
sig_dec = 15
assert_array_almost_equal(data, lp_data + bp_data + hp_data, sig_dec)
assert_array_almost_equal(data, bp_data + bs_data, sig_dec)
# plotting
raw2.plot()
raw2.plot_psd(tmax=2., average=True, n_fft=1024, spatial_colors=False)
plt.close('all')
# epoching
events = find_events(raw2, stim_channel='STI 014')
events[:, 2] = 1
assert len(events) > 2
epochs = Epochs(raw2, events, 1, -0.2, 0.4, preload=True)
evoked = epochs.average()
assert_equal(evoked.nave, len(events) - 1)
# complex data
rng = np.random.RandomState(0)
data = rng.randn(1, 100) + 1j * rng.randn(1, 100)
raw = RawArray(data, create_info(1, 1000., 'eeg'))
assert_allclose(raw._data, data)
# Using digital montage to give MNI electrode coordinates
n_elec = 10
ts_size = 10000
Fs = 512.
elec_labels = [str(i) for i in range(n_elec)]
elec_coords = np.random.randint(60, size=(n_elec, 3)).tolist()
electrode = np.random.rand(n_elec, ts_size)
dig_ch_pos = dict(zip(elec_labels, elec_coords))
mon = channels.DigMontage(dig_ch_pos=dig_ch_pos)
info = create_info(elec_labels, Fs, 'ecog', montage=mon)
raw = RawArray(electrode, info)
raw.plot_psd(average=False) # looking for inexistent layout
raw.plot_psd_topo()
run_tests_if_main()
| bsd-3-clause |
glebysg/GC_server | gestureclean/within_user_agreement.py | 1 | 6821 | import sys
from vacs.models import Command, Experiment, User, Vac, Evaluation,\
Assignment, Participant, Score, ValAssignment, Validation
from vacs.utils import Order
from django.contrib.auth import get_user_model
import csv
import numpy as np
from scipy.stats import entropy
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
import scipy.stats as stats
import matplotlib.gridspec as gridspec
# Get all the vacs for the experiment
experiment_id = 77
vacs = Vac.objects.filter(experiment__id=experiment_id)
# for each user
participants = Participant.objects.filter(experiment__id=experiment_id)
consistency_means=[]
consistency_means_vac=[]
test_means = []
for participant in participants:
within_user_consistency_means = []
within_user_consistency_means_vac = []
user = participant.user
assignments = Assignment.objects.filter(user=user)
# for each assignment
for assignment in assignments:
# for each vac
for vac in vacs:
# get all evaluations of that user with that vac
evaluations = Evaluation.objects.filter(
vac=vac,
assignment=assignment
)
# get groups of evaluations
eval_matrix = np.zeros((2,9,9))
for evaluation in evaluations:
if len(evaluation.evaluation)>5:
evaluation = evaluation.evaluation.split(".")
else:
evaluation = list(evaluation.evaluation)
# create evaluation matrix 9x9x2
# The first one is for the less thans in the upper
# triangular and greater thans in the lower trian
# gular and The second one is for the equals.
# [u'9', u'<', u'5', u'<', u'8']
for index in range(0,3,2):
if evaluation[index + 1] == '<':
m_index = 0
else:
m_index = 1
g1 = int(evaluation[index])
g2 = int(evaluation[index+2])
eval_matrix[m_index,g1-1,g2-1] +=1
# do the last comparison
if evaluation[1] == '<' or evaluation[3] == '<':
m_index = 0
else:
m_index = 1
g1 = int(evaluation[0])
g2 = int(evaluation[4])
eval_matrix[m_index,g1-1,g2-1] +=1
# Get the values for each pair (this should be
# a dict or a array of two).
eval_entropy = []
for i in range(9):
for j in range (i+1,9):
# append pair name
# get pair values for "<", ">", "="
gt_than = eval_matrix[0,i,j]
less_than = eval_matrix[0,j,i]
eq = eval_matrix[1,i,j]+eval_matrix[1,j,i]
if gt_than + less_than + eq > 1:
pair = []
pair.append(str(i)+"-"+str(j))
pair.append([gt_than , less_than, eq])
# Get the entropy for each pair
pair.append( entropy(pair[-1]))
eval_entropy.append(pair)
consistency_means.append(sum([(sum(val_list)*entr)/15.0 \
for pair, val_list, entr in eval_entropy]))
consistency_means_vac.append((vac.name, sum([(sum(val_list)*entr)/15.0 \
for pair, val_list, entr in eval_entropy])))
consistency_means = np.array(consistency_means)
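# Each per-pair entropy above is entropy([gt_than, less_than, eq]) from
# scipy.stats (natural log), i.e. how spread out the repeated answers for one
# gesture pair are, with each pair weighted by its answer count over 15.
# As a rough reference: entropy([3, 0, 0]) == 0.0 (the user always answered
# the same way), while entropy([1, 1, 1]) ~= 1.099 (answers split evenly
# across <, > and =), so lower consistency_means values mean better
# within-user agreement.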
print("///////////")
# exit()
final_mean =np.mean(consistency_means)
final_median = np.median(consistency_means)
print "////////////////////////////"
print "MEAN ENTROPY:", final_mean
print "MAX VALUE", np.max(consistency_means)
print "MEDIAN VALUE", final_median
###### Scatter plot ###############
plt.scatter(range(len(consistency_means)), consistency_means, c="#1200c9")
plt.plot([0,250],[final_mean,final_mean], 'c--', label='Entropy mean')
line = plt.plot([0,250],[final_median,final_median], dashes=[2,2,10,2], c="r", label='Entropy median')
plt.setp(line, linewidth=3)
plt.xlabel('Evaluation instances (for a command-criterion pair)')
plt.ylabel('Average entropy of each evaluation')
plt.legend()
# plt.show()
plt.savefig('entropy_scatter.png', bbox_inches='tight', dpi=300)
plt.clf()
###### HISTOGRAM ###############
n_bins = 20
cm = plt.cm.get_cmap('RdYlBu_r')
# Get the histogram
Y,X = np.histogram(consistency_means, n_bins, normed=1)
x_span = X.max()-X.min()
C = [cm(((x-X.min())/x_span)) for x in X]
plt.bar(X[:-1],Y,color=C,width=X[1]-X[0],edgecolor='k')
plt.xlabel('Evaluations entropy value')
plt.ylabel('Number of evaluations (of a command-criterion pair)')
# plt.show()
plt.savefig('entropy_hist.png', bbox_inches='tight', dpi=300)
plt.clf()
######### PER VAC ###############
# print consistency_means_vac
matplotlib.rcParams['lines.linewidth'] = 2
vac_evals = []
index = 0
fig = plt.figure()
gs1 = gridspec.GridSpec(3, 2)
first = True
for vac in vacs:
# get the entropies that belong to a specific vac
print("///////////")
filtered_consistency_means = [ent_val for vac_name, ent_val in consistency_means_vac\
if vac_name == vac.name]
vac_evals.append(filtered_consistency_means)
final_mean =np.mean(filtered_consistency_means)
final_median = np.median(filtered_consistency_means)
print "////////////////////////////"
print vac.name+" MEAN ENTROPY:", final_mean
print vac.name+" MAX VALUE", np.max(filtered_consistency_means)
print vac.name+" MEDIAN VALUE", final_median
small_plt = fig.add_subplot(gs1[index])
small_plt.scatter(range(len(filtered_consistency_means)), filtered_consistency_means, c="#1200c9")
if first:
l1 = small_plt.plot([0,42],[final_mean,final_mean], 'c--', label='Entropy mean')
l2 = small_plt.plot([0,42],[final_median,final_median], dashes=[2,2,15,2], c="r", label='Entropy median')
plt.setp(l2, linewidth=2)
first = False
else:
l1 = small_plt.plot([0,42],[final_mean,final_mean], 'c--')
l2 = small_plt.plot([0,42],[final_median,final_median], dashes=[2,2,15,2], c="r")
vac_name = vac.name
if vac_name == 'Complexity':
vac_name = 'Simplicity'
elif vac_name == 'Amount of movement':
vac_name = 'Economy of movement'
small_plt.set_title(vac_name)
index +=1
gs1.tight_layout(fig, rect=(0,0,1,0.90))
fig.legend(loc='upper left')
# plt.show()
fig.savefig('all_entropy_scatter.png', dpi=600)
print(stats.f_oneway(*vac_evals))
| mit |
wnzhang/optimal-rtb | python/lryzx.py | 1 | 3603 | #!/usr/bin/python
import sys
import random
import math
import operator
from sklearn.metrics import roc_auc_score
from sklearn.metrics import mean_squared_error
bufferCaseNum = 1000000
eta = 0.01
lamb = 1E-6
featWeight = {}
trainRounds = 10
random.seed(10)
initWeight = 0.05
def nextInitWeight():
return (random.random() - 0.5) * initWeight
def ints(s):
res = []
for ss in s:
res.append(int(ss))
return res
def sigmoid(p):
return 1.0 / (1.0 + math.exp(-p))
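# Illustration of how these helpers are used below (feature ids made up):
# a yzx-formatted line such as "0 300 5:1 9:1 12:1" is parsed into
# [clk, mp, 5, 9, 12] once the ":1" suffixes are stripped, and the predicted
# click probability is sigmoid(featWeight[5] + featWeight[9] + featWeight[12]),
# since every present feature implicitly has value 1.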
if len(sys.argv) < 3:
print 'Usage: python lryzx.py train.yzx.txt test.yzx.txt'
exit(-1)
for round in range(0, trainRounds):
# train for this round
fi = open(sys.argv[1], 'r')
lineNum = 0
trainData = []
for line in fi:
lineNum = (lineNum + 1) % bufferCaseNum
trainData.append(ints(line.replace(":1", "").split()))
if lineNum == 0:
for data in trainData:
clk = data[0]
mp = data[1]
fsid = 2 # feature start id
# predict
pred = 0.0
for i in range(fsid, len(data)):
feat = data[i]
if feat not in featWeight:
featWeight[feat] = nextInitWeight()
pred += featWeight[feat]
pred = sigmoid(pred)
# start to update weight
# w_i = w_i * (1 - lamb) + learning_rate * (y - p) * x_i   (x_i = 1 here)
for i in range(fsid, len(data)):
feat = data[i]
featWeight[feat] = featWeight[feat] * (1 - lamb) + eta * (clk - pred)
trainData = []
if len(trainData) > 0:
for data in trainData:
clk = data[0]
mp = data[1]
fsid = 2 # feature start id
# predict
pred = 0.0
for i in range(fsid, len(data)):
feat = data[i]
if feat not in featWeight:
featWeight[feat] = nextInitWeight()
pred += featWeight[feat]
pred = sigmoid(pred)
# start to update weight
# w_i = w_i * (1 - lamb) + learning_rate * (y - p) * x_i   (x_i = 1 here)
for i in range(fsid, len(data)):
feat = data[i]
featWeight[feat] = featWeight[feat] * (1 - lamb) + eta * (clk - pred)
fi.close()
# test for this round
y = []
yp = []
fi = open(sys.argv[2], 'r')
for line in fi:
data = ints(line.replace(":1", "").split())
clk = data[0]
mp = data[1]
fsid = 2 # feature start id
pred = 0.0
for i in range(fsid, len(data)):
feat = data[i]
if feat in featWeight:
pred += featWeight[feat]
pred = sigmoid(pred)
y.append(clk)
yp.append(pred)
fi.close()
auc = roc_auc_score(y, yp)
rmse = math.sqrt(mean_squared_error(y, yp))
print str(round) + '\t' + str(auc) + '\t' + str(rmse)
# output the weights
fo = open(sys.argv[1] + '.lr.weight', 'w')
featvalue = sorted(featWeight.iteritems(), key=operator.itemgetter(0))
for fv in featvalue:
fo.write(str(fv[0]) + '\t' + str(fv[1]) + '\n')
fo.close()
# output the prediction
fi = open(sys.argv[2], 'r')
fo = open(sys.argv[2] + '.lr.pred', 'w')
for line in fi:
data = ints(line.replace(":1", "").split())
pred = 0.0
for i in range(1, len(data)):
feat = data[i]
if feat in featWeight:
pred += featWeight[feat]
pred = sigmoid(pred)
fo.write(str(pred) + '\n')
fo.close()
fi.close()
| apache-2.0 |
chrsrds/scikit-learn | sklearn/utils/tests/test_validation.py | 1 | 32885 | """Tests for input validation functions"""
import warnings
import os
from tempfile import NamedTemporaryFile
from itertools import product
import pytest
from pytest import importorskip
import numpy as np
import scipy.sparse as sp
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raises_regex
from sklearn.utils.testing import assert_no_warnings
from sklearn.utils.testing import assert_warns_message
from sklearn.utils.testing import assert_warns
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import SkipTest
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_allclose_dense_sparse
from sklearn.utils import as_float_array, check_array, check_symmetric
from sklearn.utils import check_X_y
from sklearn.utils import deprecated
from sklearn.utils.mocking import MockDataFrame
from sklearn.utils.estimator_checks import NotAnArray
from sklearn.random_projection import sparse_random_matrix
from sklearn.linear_model import ARDRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.datasets import make_blobs
from sklearn.utils.validation import (
has_fit_parameter,
check_is_fitted,
check_consistent_length,
assert_all_finite,
check_memory,
check_non_negative,
_num_samples,
check_scalar)
import sklearn
from sklearn.exceptions import NotFittedError
from sklearn.exceptions import DataConversionWarning
from sklearn.utils.testing import assert_raise_message
from sklearn.utils.testing import TempMemmap
def test_as_float_array():
# Test function for as_float_array
X = np.ones((3, 10), dtype=np.int32)
X = X + np.arange(10, dtype=np.int32)
X2 = as_float_array(X, copy=False)
assert X2.dtype == np.float32
# Another test
X = X.astype(np.int64)
X2 = as_float_array(X, copy=True)
# Checking that the array wasn't overwritten
assert as_float_array(X, False) is not X
assert X2.dtype == np.float64
# Test int dtypes <= 32bit
tested_dtypes = [np.bool,
np.int8, np.int16, np.int32,
np.uint8, np.uint16, np.uint32]
for dtype in tested_dtypes:
X = X.astype(dtype)
X2 = as_float_array(X)
assert X2.dtype == np.float32
# Test object dtype
X = X.astype(object)
X2 = as_float_array(X, copy=True)
assert X2.dtype == np.float64
# Here, X is of the right type, it shouldn't be modified
X = np.ones((3, 2), dtype=np.float32)
assert as_float_array(X, copy=False) is X
# Test that if X is fortran ordered it stays
X = np.asfortranarray(X)
assert np.isfortran(as_float_array(X, copy=True))
# Test the copy parameter with some matrices
matrices = [
np.matrix(np.arange(5)),
sp.csc_matrix(np.arange(5)).toarray(),
sparse_random_matrix(10, 10, density=0.10).toarray()
]
for M in matrices:
N = as_float_array(M, copy=True)
N[0, 0] = np.nan
assert not np.isnan(M).any()
@pytest.mark.parametrize(
"X",
[(np.random.random((10, 2))),
(sp.rand(10, 2).tocsr())])
def test_as_float_array_nan(X):
X[5, 0] = np.nan
X[6, 1] = np.nan
X_converted = as_float_array(X, force_all_finite='allow-nan')
assert_allclose_dense_sparse(X_converted, X)
def test_np_matrix():
# Confirm that input validation code does not return np.matrix
X = np.arange(12).reshape(3, 4)
assert not isinstance(as_float_array(X), np.matrix)
assert not isinstance(as_float_array(np.matrix(X)), np.matrix)
assert not isinstance(as_float_array(sp.csc_matrix(X)), np.matrix)
def test_memmap():
# Confirm that input validation code doesn't copy memory mapped arrays
asflt = lambda x: as_float_array(x, copy=False)
with NamedTemporaryFile(prefix='sklearn-test') as tmp:
M = np.memmap(tmp, shape=(10, 10), dtype=np.float32)
M[:] = 0
for f in (check_array, np.asarray, asflt):
X = f(M)
X[:] = 1
assert_array_equal(X.ravel(), M.ravel())
X[:] = 0
def test_ordering():
# Check that ordering is enforced correctly by validation utilities.
# We need to check each validation utility, because a 'copy' without
# 'order=K' will kill the ordering.
X = np.ones((10, 5))
for A in X, X.T:
for copy in (True, False):
B = check_array(A, order='C', copy=copy)
assert B.flags['C_CONTIGUOUS']
B = check_array(A, order='F', copy=copy)
assert B.flags['F_CONTIGUOUS']
if copy:
assert A is not B
X = sp.csr_matrix(X)
X.data = X.data[::-1]
assert not X.data.flags['C_CONTIGUOUS']
@pytest.mark.parametrize(
"value, force_all_finite",
[(np.inf, False), (np.nan, 'allow-nan'), (np.nan, False)]
)
@pytest.mark.parametrize(
"retype",
[np.asarray, sp.csr_matrix]
)
def test_check_array_force_all_finite_valid(value, force_all_finite, retype):
X = retype(np.arange(4).reshape(2, 2).astype(np.float))
X[0, 0] = value
X_checked = check_array(X, force_all_finite=force_all_finite,
accept_sparse=True)
assert_allclose_dense_sparse(X, X_checked)
@pytest.mark.parametrize(
"value, force_all_finite, match_msg",
[(np.inf, True, 'Input contains NaN, infinity'),
(np.inf, 'allow-nan', 'Input contains infinity'),
(np.nan, True, 'Input contains NaN, infinity'),
(np.nan, 'allow-inf', 'force_all_finite should be a bool or "allow-nan"'),
(np.nan, 1, 'Input contains NaN, infinity')]
)
@pytest.mark.parametrize(
"retype",
[np.asarray, sp.csr_matrix]
)
def test_check_array_force_all_finiteinvalid(value, force_all_finite,
match_msg, retype):
X = retype(np.arange(4).reshape(2, 2).astype(np.float))
X[0, 0] = value
with pytest.raises(ValueError, match=match_msg):
check_array(X, force_all_finite=force_all_finite,
accept_sparse=True)
def test_check_array_force_all_finite_object():
X = np.array([['a', 'b', np.nan]], dtype=object).T
X_checked = check_array(X, dtype=None, force_all_finite='allow-nan')
assert X is X_checked
X_checked = check_array(X, dtype=None, force_all_finite=False)
assert X is X_checked
with pytest.raises(ValueError, match='Input contains NaN'):
check_array(X, dtype=None, force_all_finite=True)
@ignore_warnings
def test_check_array():
# accept_sparse == False
# raise error on sparse inputs
X = [[1, 2], [3, 4]]
X_csr = sp.csr_matrix(X)
assert_raises(TypeError, check_array, X_csr)
# ensure_2d=False
X_array = check_array([0, 1, 2], ensure_2d=False)
assert X_array.ndim == 1
# ensure_2d=True with 1d array
assert_raise_message(ValueError, 'Expected 2D array, got 1D array instead',
check_array, [0, 1, 2], ensure_2d=True)
# ensure_2d=True with scalar array
assert_raise_message(ValueError,
'Expected 2D array, got scalar array instead',
check_array, 10, ensure_2d=True)
# don't allow ndim > 3
X_ndim = np.arange(8).reshape(2, 2, 2)
assert_raises(ValueError, check_array, X_ndim)
check_array(X_ndim, allow_nd=True) # doesn't raise
# dtype and order enforcement.
X_C = np.arange(4).reshape(2, 2).copy("C")
X_F = X_C.copy("F")
X_int = X_C.astype(np.int)
X_float = X_C.astype(np.float)
Xs = [X_C, X_F, X_int, X_float]
dtypes = [np.int32, np.int, np.float, np.float32, None, np.bool, object]
orders = ['C', 'F', None]
copys = [True, False]
for X, dtype, order, copy in product(Xs, dtypes, orders, copys):
X_checked = check_array(X, dtype=dtype, order=order, copy=copy)
if dtype is not None:
assert X_checked.dtype == dtype
else:
assert X_checked.dtype == X.dtype
if order == 'C':
assert X_checked.flags['C_CONTIGUOUS']
assert not X_checked.flags['F_CONTIGUOUS']
elif order == 'F':
assert X_checked.flags['F_CONTIGUOUS']
assert not X_checked.flags['C_CONTIGUOUS']
if copy:
assert X is not X_checked
else:
# doesn't copy if it was already good
if (X.dtype == X_checked.dtype and
X_checked.flags['C_CONTIGUOUS'] == X.flags['C_CONTIGUOUS']
and X_checked.flags['F_CONTIGUOUS'] == X.flags['F_CONTIGUOUS']):
assert X is X_checked
# allowed sparse != None
X_csc = sp.csc_matrix(X_C)
X_coo = X_csc.tocoo()
X_dok = X_csc.todok()
X_int = X_csc.astype(np.int)
X_float = X_csc.astype(np.float)
Xs = [X_csc, X_coo, X_dok, X_int, X_float]
accept_sparses = [['csr', 'coo'], ['coo', 'dok']]
for X, dtype, accept_sparse, copy in product(Xs, dtypes, accept_sparses,
copys):
with warnings.catch_warnings(record=True) as w:
X_checked = check_array(X, dtype=dtype,
accept_sparse=accept_sparse, copy=copy)
if (dtype is object or sp.isspmatrix_dok(X)) and len(w):
message = str(w[0].message)
messages = ["object dtype is not supported by sparse matrices",
"Can't check dok sparse matrix for nan or inf."]
assert message in messages
else:
assert len(w) == 0
if dtype is not None:
assert X_checked.dtype == dtype
else:
assert X_checked.dtype == X.dtype
if X.format in accept_sparse:
# no change if allowed
assert X.format == X_checked.format
else:
# got converted
assert X_checked.format == accept_sparse[0]
if copy:
assert X is not X_checked
else:
# doesn't copy if it was already good
if X.dtype == X_checked.dtype and X.format == X_checked.format:
assert X is X_checked
# other input formats
# convert lists to arrays
X_dense = check_array([[1, 2], [3, 4]])
assert isinstance(X_dense, np.ndarray)
# raise on too deep lists
assert_raises(ValueError, check_array, X_ndim.tolist())
check_array(X_ndim.tolist(), allow_nd=True) # doesn't raise
# convert weird stuff to arrays
X_no_array = NotAnArray(X_dense)
result = check_array(X_no_array)
assert isinstance(result, np.ndarray)
# deprecation warning if string-like array with dtype="numeric"
expected_warn_regex = r"converted to decimal numbers if dtype='numeric'"
X_str = [['11', '12'], ['13', 'xx']]
for X in [X_str, np.array(X_str, dtype='U'), np.array(X_str, dtype='S')]:
with pytest.warns(FutureWarning, match=expected_warn_regex):
check_array(X, dtype="numeric")
# deprecation warning if byte-like array with dtype="numeric"
X_bytes = [[b'a', b'b'], [b'c', b'd']]
for X in [X_bytes, np.array(X_bytes, dtype='V1')]:
with pytest.warns(FutureWarning, match=expected_warn_regex):
check_array(X, dtype="numeric")
def test_check_array_pandas_dtype_object_conversion():
# test that data-frame like objects with dtype object
# get converted
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.object)
X_df = MockDataFrame(X)
assert check_array(X_df).dtype.kind == "f"
assert check_array(X_df, ensure_2d=False).dtype.kind == "f"
# smoke-test against dataframes with column named "dtype"
X_df.dtype = "Hans"
assert check_array(X_df, ensure_2d=False).dtype.kind == "f"
def test_check_array_on_mock_dataframe():
arr = np.array([[0.2, 0.7], [0.6, 0.5], [0.4, 0.1], [0.7, 0.2]])
mock_df = MockDataFrame(arr)
checked_arr = check_array(mock_df)
assert (checked_arr.dtype ==
arr.dtype)
checked_arr = check_array(mock_df, dtype=np.float32)
assert checked_arr.dtype == np.dtype(np.float32)
def test_check_array_dtype_stability():
# test that lists with ints don't get converted to floats
X = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert check_array(X).dtype.kind == "i"
assert check_array(X, ensure_2d=False).dtype.kind == "i"
def test_check_array_dtype_warning():
X_int_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
X_float64 = np.asarray(X_int_list, dtype=np.float64)
X_float32 = np.asarray(X_int_list, dtype=np.float32)
X_int64 = np.asarray(X_int_list, dtype=np.int64)
X_csr_float64 = sp.csr_matrix(X_float64)
X_csr_float32 = sp.csr_matrix(X_float32)
X_csc_float32 = sp.csc_matrix(X_float32)
X_csc_int32 = sp.csc_matrix(X_int64, dtype=np.int32)
y = [0, 0, 1]
integer_data = [X_int64, X_csc_int32]
float64_data = [X_float64, X_csr_float64]
float32_data = [X_float32, X_csr_float32, X_csc_float32]
for X in integer_data:
X_checked = assert_no_warnings(check_array, X, dtype=np.float64,
accept_sparse=True)
assert X_checked.dtype == np.float64
X_checked = assert_warns(DataConversionWarning, check_array, X,
dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert X_checked.dtype == np.float64
# Check that the warning message includes the name of the Estimator
X_checked = assert_warns_message(DataConversionWarning,
'SomeEstimator',
check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True,
warn_on_dtype=True,
estimator='SomeEstimator')
assert X_checked.dtype == np.float64
X_checked, y_checked = assert_warns_message(
DataConversionWarning, 'KNeighborsClassifier',
check_X_y, X, y, dtype=np.float64, accept_sparse=True,
warn_on_dtype=True, estimator=KNeighborsClassifier())
assert X_checked.dtype == np.float64
for X in float64_data:
with pytest.warns(None) as record:
warnings.simplefilter("ignore", DeprecationWarning) # 0.23
X_checked = check_array(X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=True)
assert X_checked.dtype == np.float64
X_checked = check_array(X, dtype=np.float64,
accept_sparse=True, warn_on_dtype=False)
assert X_checked.dtype == np.float64
assert len(record) == 0
for X in float32_data:
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=True)
assert X_checked.dtype == np.float32
assert X_checked is X
X_checked = assert_no_warnings(check_array, X,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=True)
assert X_checked.dtype == np.float32
assert X_checked is not X
X_checked = assert_no_warnings(check_array, X_csc_float32,
dtype=[np.float64, np.float32],
accept_sparse=['csr', 'dok'],
copy=False)
assert X_checked.dtype == np.float32
assert X_checked is not X_csc_float32
assert X_checked.format == 'csr'
def test_check_array_warn_on_dtype_deprecation():
X = np.asarray([[0.0], [1.0]])
Y = np.asarray([[2.0], [3.0]])
with pytest.warns(DeprecationWarning,
match="'warn_on_dtype' is deprecated"):
check_array(X, warn_on_dtype=True)
with pytest.warns(DeprecationWarning,
match="'warn_on_dtype' is deprecated"):
check_X_y(X, Y, warn_on_dtype=True)
def test_check_array_accept_sparse_type_exception():
X = [[1, 2], [3, 4]]
X_csr = sp.csr_matrix(X)
invalid_type = SVR()
msg = ("A sparse matrix was passed, but dense data is required. "
"Use X.toarray() to convert to a dense numpy array.")
assert_raise_message(TypeError, msg,
check_array, X_csr, accept_sparse=False)
msg = ("Parameter 'accept_sparse' should be a string, "
"boolean or list of strings. You provided 'accept_sparse={}'.")
assert_raise_message(ValueError, msg.format(invalid_type),
check_array, X_csr, accept_sparse=invalid_type)
msg = ("When providing 'accept_sparse' as a tuple or list, "
"it must contain at least one string value.")
assert_raise_message(ValueError, msg.format([]),
check_array, X_csr, accept_sparse=[])
assert_raise_message(ValueError, msg.format(()),
check_array, X_csr, accept_sparse=())
assert_raise_message(TypeError, "SVR",
check_array, X_csr, accept_sparse=[invalid_type])
def test_check_array_accept_sparse_no_exception():
X = [[1, 2], [3, 4]]
X_csr = sp.csr_matrix(X)
check_array(X_csr, accept_sparse=True)
check_array(X_csr, accept_sparse='csr')
check_array(X_csr, accept_sparse=['csr'])
check_array(X_csr, accept_sparse=('csr',))
@pytest.fixture(params=['csr', 'csc', 'coo', 'bsr'])
def X_64bit(request):
X = sp.rand(20, 10, format=request.param)
for attr in ['indices', 'indptr', 'row', 'col']:
if hasattr(X, attr):
setattr(X, attr, getattr(X, attr).astype('int64'))
yield X
def test_check_array_accept_large_sparse_no_exception(X_64bit):
# When large sparse are allowed
check_array(X_64bit, accept_large_sparse=True, accept_sparse=True)
def test_check_array_accept_large_sparse_raise_exception(X_64bit):
# When large sparse are not allowed
msg = ("Only sparse matrices with 32-bit integer indices "
"are accepted. Got int64 indices.")
assert_raise_message(ValueError, msg,
check_array, X_64bit,
accept_sparse=True,
accept_large_sparse=False)
def test_check_array_min_samples_and_features_messages():
# empty list is considered 2D by default:
msg = "0 feature(s) (shape=(1, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [[]])
# If considered a 1D collection when ensure_2d=False, then the minimum
# number of samples will break:
msg = "0 sample(s) (shape=(0,)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_array, [], ensure_2d=False)
# Invalid edge case when checking the default minimum sample of a scalar
msg = "Singleton array array(42) cannot be considered a valid collection."
assert_raise_message(TypeError, msg, check_array, 42, ensure_2d=False)
# Simulate a model that would need at least 2 samples to be well defined
X = np.ones((1, 10))
y = np.ones(1)
msg = "1 sample(s) (shape=(1, 10)) while a minimum of 2 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2)
# The same message is raised if the data has 2 dimensions even if this is
# not mandatory
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_samples=2, ensure_2d=False)
# Simulate a model that would require at least 3 features (e.g. SelectKBest
# with k=3)
X = np.ones((10, 2))
y = np.ones(2)
msg = "2 feature(s) (shape=(10, 2)) while a minimum of 3 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3)
# Only the feature check is enabled whenever the number of dimensions is 2
# even if allow_nd is enabled:
assert_raise_message(ValueError, msg, check_X_y, X, y,
ensure_min_features=3, allow_nd=True)
# Simulate a case where a pipeline stage has trimmed all the features of a
# 2D dataset.
X = np.empty(0).reshape(10, 0)
y = np.ones(10)
msg = "0 feature(s) (shape=(10, 0)) while a minimum of 1 is required."
assert_raise_message(ValueError, msg, check_X_y, X, y)
# nd-data is not checked for any minimum number of features by default:
X = np.ones((10, 0, 28, 28))
y = np.ones(10)
X_checked, y_checked = check_X_y(X, y, allow_nd=True)
assert_array_equal(X, X_checked)
assert_array_equal(y, y_checked)
def test_check_array_complex_data_error():
X = np.array([[1 + 2j, 3 + 4j, 5 + 7j], [2 + 3j, 4 + 5j, 6 + 7j]])
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# list of lists
X = [[1 + 2j, 3 + 4j, 5 + 7j], [2 + 3j, 4 + 5j, 6 + 7j]]
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# tuple of tuples
X = ((1 + 2j, 3 + 4j, 5 + 7j), (2 + 3j, 4 + 5j, 6 + 7j))
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# list of np arrays
X = [np.array([1 + 2j, 3 + 4j, 5 + 7j]),
np.array([2 + 3j, 4 + 5j, 6 + 7j])]
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# tuple of np arrays
X = (np.array([1 + 2j, 3 + 4j, 5 + 7j]),
np.array([2 + 3j, 4 + 5j, 6 + 7j]))
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# dataframe
X = MockDataFrame(
np.array([[1 + 2j, 3 + 4j, 5 + 7j], [2 + 3j, 4 + 5j, 6 + 7j]]))
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
# sparse matrix
X = sp.coo_matrix([[0, 1 + 2j], [0, 0]])
assert_raises_regex(
ValueError, "Complex data not supported", check_array, X)
def test_has_fit_parameter():
assert not has_fit_parameter(KNeighborsClassifier, "sample_weight")
assert has_fit_parameter(RandomForestRegressor, "sample_weight")
assert has_fit_parameter(SVR, "sample_weight")
assert has_fit_parameter(SVR(), "sample_weight")
class TestClassWithDeprecatedFitMethod:
@deprecated("Deprecated for the purpose of testing has_fit_parameter")
def fit(self, X, y, sample_weight=None):
pass
assert has_fit_parameter(TestClassWithDeprecatedFitMethod,
"sample_weight"), \
"has_fit_parameter fails for class with deprecated fit method."
def test_check_symmetric():
arr_sym = np.array([[0, 1], [1, 2]])
arr_bad = np.ones(2)
arr_asym = np.array([[0, 2], [0, 2]])
test_arrays = {'dense': arr_asym,
'dok': sp.dok_matrix(arr_asym),
'csr': sp.csr_matrix(arr_asym),
'csc': sp.csc_matrix(arr_asym),
'coo': sp.coo_matrix(arr_asym),
'lil': sp.lil_matrix(arr_asym),
'bsr': sp.bsr_matrix(arr_asym)}
# check error for bad inputs
assert_raises(ValueError, check_symmetric, arr_bad)
# check that asymmetric arrays are properly symmetrized
for arr_format, arr in test_arrays.items():
# Check for warnings and errors
assert_warns(UserWarning, check_symmetric, arr)
assert_raises(ValueError, check_symmetric, arr, raise_exception=True)
output = check_symmetric(arr, raise_warning=False)
if sp.issparse(output):
assert output.format == arr_format
assert_array_equal(output.toarray(), arr_sym)
else:
assert_array_equal(output, arr_sym)
def test_check_is_fitted():
# Check is ValueError raised when non estimator instance passed
assert_raises(ValueError, check_is_fitted, ARDRegression, "coef_")
assert_raises(TypeError, check_is_fitted, "SVR", "support_")
ard = ARDRegression()
svr = SVR()
try:
assert_raises(NotFittedError, check_is_fitted, ard, "coef_")
assert_raises(NotFittedError, check_is_fitted, svr, "support_")
except ValueError:
assert False, "check_is_fitted failed with ValueError"
# NotFittedError is a subclass of both ValueError and AttributeError
try:
check_is_fitted(ard, "coef_", "Random message %(name)s, %(name)s")
except ValueError as e:
assert str(e) == "Random message ARDRegression, ARDRegression"
try:
check_is_fitted(svr, "support_", "Another message %(name)s, %(name)s")
except AttributeError as e:
assert str(e) == "Another message SVR, SVR"
ard.fit(*make_blobs())
svr.fit(*make_blobs())
assert check_is_fitted(ard, "coef_") is None
assert check_is_fitted(svr, "support_") is None
def test_check_consistent_length():
check_consistent_length([1], [2], [3], [4], [5])
check_consistent_length([[1, 2], [[1, 2]]], [1, 2], ['a', 'b'])
check_consistent_length([1], (2,), np.array([3]), sp.csr_matrix((1, 2)))
assert_raises_regex(ValueError, 'inconsistent numbers of samples',
check_consistent_length, [1, 2], [1])
assert_raises_regex(TypeError, r"got <\w+ 'int'>",
check_consistent_length, [1, 2], 1)
assert_raises_regex(TypeError, r"got <\w+ 'object'>",
check_consistent_length, [1, 2], object())
assert_raises(TypeError, check_consistent_length, [1, 2], np.array(1))
# Despite ensembles having __len__ they must raise TypeError
assert_raises_regex(TypeError, 'estimator', check_consistent_length,
[1, 2], RandomForestRegressor())
# XXX: We should have a test with a string, but what is correct behaviour?
def test_check_dataframe_fit_attribute():
# check pandas dataframe with 'fit' column does not raise error
# https://github.com/scikit-learn/scikit-learn/issues/8415
try:
import pandas as pd
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
X_df = pd.DataFrame(X, columns=['a', 'b', 'fit'])
check_consistent_length(X_df)
except ImportError:
raise SkipTest("Pandas not found")
def test_suppress_validation():
X = np.array([0, np.inf])
assert_raises(ValueError, assert_all_finite, X)
sklearn.set_config(assume_finite=True)
assert_all_finite(X)
sklearn.set_config(assume_finite=False)
assert_raises(ValueError, assert_all_finite, X)
def test_check_array_series():
# regression test that check_array works on pandas Series
pd = importorskip("pandas")
res = check_array(pd.Series([1, 2, 3]), ensure_2d=False)
assert_array_equal(res, np.array([1, 2, 3]))
# with categorical dtype (not a numpy dtype) (GH12699)
s = pd.Series(['a', 'b', 'c']).astype('category')
res = check_array(s, dtype=None, ensure_2d=False)
assert_array_equal(res, np.array(['a', 'b', 'c'], dtype=object))
def test_check_dataframe_warns_on_dtype():
# Check that warn_on_dtype also works for DataFrames.
# https://github.com/scikit-learn/scikit-learn/issues/10948
pd = importorskip("pandas")
df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], dtype=object)
assert_warns_message(DataConversionWarning,
"Data with input dtype object were all converted to "
"float64.",
check_array, df, dtype=np.float64, warn_on_dtype=True)
assert_warns(DataConversionWarning, check_array, df,
dtype='numeric', warn_on_dtype=True)
with pytest.warns(None) as record:
warnings.simplefilter("ignore", DeprecationWarning) # 0.23
check_array(df, dtype='object', warn_on_dtype=True)
assert len(record) == 0
# Also check that it raises a warning for mixed dtypes in a DataFrame.
df_mixed = pd.DataFrame([['1', 2, 3], ['4', 5, 6]])
assert_warns(DataConversionWarning, check_array, df_mixed,
dtype=np.float64, warn_on_dtype=True)
assert_warns(DataConversionWarning, check_array, df_mixed,
dtype='numeric', warn_on_dtype=True)
assert_warns(DataConversionWarning, check_array, df_mixed,
dtype=object, warn_on_dtype=True)
# Even with numerical dtypes, a conversion can be made because dtypes are
# uniformized throughout the array.
df_mixed_numeric = pd.DataFrame([[1., 2, 3], [4., 5, 6]])
assert_warns(DataConversionWarning, check_array, df_mixed_numeric,
dtype='numeric', warn_on_dtype=True)
with pytest.warns(None) as record:
warnings.simplefilter("ignore", DeprecationWarning) # 0.23
check_array(df_mixed_numeric.astype(int),
dtype='numeric', warn_on_dtype=True)
assert len(record) == 0
class DummyMemory:
def cache(self, func):
return func
class WrongDummyMemory:
pass
@pytest.mark.filterwarnings("ignore:The 'cachedir' attribute")
def test_check_memory():
memory = check_memory("cache_directory")
assert memory.cachedir == os.path.join('cache_directory', 'joblib')
memory = check_memory(None)
assert memory.cachedir is None
dummy = DummyMemory()
memory = check_memory(dummy)
assert memory is dummy
assert_raises_regex(ValueError, "'memory' should be None, a string or"
" have the same interface as joblib.Memory."
" Got memory='1' instead.", check_memory, 1)
dummy = WrongDummyMemory()
assert_raises_regex(ValueError, "'memory' should be None, a string or"
" have the same interface as joblib.Memory."
" Got memory='{}' instead.".format(dummy),
check_memory, dummy)
@pytest.mark.parametrize('copy', [True, False])
def test_check_array_memmap(copy):
X = np.ones((4, 4))
with TempMemmap(X, mmap_mode='r') as X_memmap:
X_checked = check_array(X_memmap, copy=copy)
assert np.may_share_memory(X_memmap, X_checked) == (not copy)
assert X_checked.flags['WRITEABLE'] == copy
@pytest.mark.parametrize('retype', [
np.asarray, sp.csr_matrix, sp.csc_matrix, sp.coo_matrix, sp.lil_matrix,
sp.bsr_matrix, sp.dok_matrix, sp.dia_matrix
])
def test_check_non_negative(retype):
A = np.array([[1, 1, 0, 0],
[1, 1, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]])
X = retype(A)
check_non_negative(X, "")
X = retype([[0, 0], [0, 0]])
check_non_negative(X, "")
A[0, 0] = -1
X = retype(A)
assert_raises_regex(ValueError, "Negative ", check_non_negative, X, "")
def test_check_X_y_informative_error():
X = np.ones((2, 2))
y = None
assert_raise_message(ValueError, "y cannot be None", check_X_y, X, y)
def test_retrieve_samples_from_non_standard_shape():
class TestNonNumericShape:
def __init__(self):
self.shape = ("not numeric",)
def __len__(self):
return len([1, 2, 3])
X = TestNonNumericShape()
assert _num_samples(X) == len(X)
@pytest.mark.parametrize('x, target_type, min_val, max_val',
[(3, int, 2, 5),
(2.5, float, 2, 5)])
def test_check_scalar_valid(x, target_type, min_val, max_val):
"""Test that check_scalar returns no error/warning if valid inputs are
provided"""
with pytest.warns(None) as record:
check_scalar(x, "test_name", target_type, min_val, max_val)
assert len(record) == 0
@pytest.mark.parametrize('x, target_name, target_type, min_val, max_val, '
'err_msg',
[(1, "test_name1", float, 2, 4,
TypeError("`test_name1` must be an instance of "
"<class 'float'>, not <class 'int'>.")),
(1, "test_name2", int, 2, 4,
ValueError('`test_name2`= 1, must be >= 2.')),
(5, "test_name3", int, 2, 4,
ValueError('`test_name3`= 5, must be <= 4.'))])
def test_check_scalar_invalid(x, target_name, target_type, min_val, max_val,
err_msg):
"""Test that check_scalar returns the right error if a wrong input is
given"""
with pytest.raises(Exception) as raised_error:
check_scalar(x, target_name, target_type=target_type,
min_val=min_val, max_val=max_val)
assert str(raised_error.value) == str(err_msg)
assert type(raised_error.value) == type(err_msg)
| bsd-3-clause |
reetawwsum/Supervised-Learning | flag.py | 1 | 1223 | '''
==========================
Playing with Flags Dataset
==========================
'''
import csv
import numpy as np
from collections import Counter
from sklearn import preprocessing
from sklearn import cross_validation
from sklearn import tree
from sklearn import metrics
from common.fn import *
with open('datasets/flag.data', 'r') as input_file:
csv_reader = csv.reader(input_file, delimiter=',')
raw_data = []
for line in csv_reader:
raw_data.append(line)
data = []
target = []
for sample in raw_data:
data.append(sample[7:])
target.append(sample[6])
X = np.array(data)
y = np.array(target)
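# Columns 10, -2 and -1 of X hold colour names (mainhue and the top-left/
# bottom-right colours in the UCI flag data) rather than numbers, so they
# are label-encoded to integers below before fitting the tree.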
le = preprocessing.LabelEncoder()
X[:, 10] = le.fit_transform(X[:, 10])
X[:, -2] = le.fit_transform(X[:, -2])
X[:, -1] = le.fit_transform(X[:, -1])
X = X.astype(int)
y = y.astype(int)
sss = cross_validation.StratifiedShuffleSplit(y, 3, test_size=0.1, random_state=42)
for train_index, test_index in sss:
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
clf = tree.DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_predict = clf.predict(X_test)
print metrics.accuracy_score(y_predict, y_test)
# print metrics.classification_report(y_predict, y_test) | mit |
googleapis/python-bigquery | google/cloud/bigquery/_pandas_helpers.py | 1 | 27381 | # Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shared helper functions for connecting BigQuery and pandas."""
import concurrent.futures
import functools
import logging
import queue
import warnings
from packaging import version
try:
import pandas
except ImportError: # pragma: NO COVER
pandas = None
try:
import pyarrow
import pyarrow.parquet
except ImportError: # pragma: NO COVER
pyarrow = None
try:
from google.cloud.bigquery_storage import ArrowSerializationOptions
except ImportError:
_ARROW_COMPRESSION_SUPPORT = False
else:
# Having BQ Storage available implies that pyarrow >=1.0.0 is available, too.
_ARROW_COMPRESSION_SUPPORT = True
from google.cloud.bigquery import schema
_LOGGER = logging.getLogger(__name__)
_NO_BQSTORAGE_ERROR = (
"The google-cloud-bigquery-storage library is not installed, "
"please install google-cloud-bigquery-storage to use bqstorage features."
)
_PROGRESS_INTERVAL = 0.2 # Maximum time between download status checks, in seconds.
_MAX_QUEUE_SIZE_DEFAULT = object() # max queue size sentinel for BQ Storage downloads
_PANDAS_DTYPE_TO_BQ = {
"bool": "BOOLEAN",
"datetime64[ns, UTC]": "TIMESTAMP",
# BigQuery does not support uploading DATETIME values from Parquet files.
# See: https://github.com/googleapis/google-cloud-python/issues/9996
"datetime64[ns]": "TIMESTAMP",
"float32": "FLOAT",
"float64": "FLOAT",
"int8": "INTEGER",
"int16": "INTEGER",
"int32": "INTEGER",
"int64": "INTEGER",
"uint8": "INTEGER",
"uint16": "INTEGER",
"uint32": "INTEGER",
}
class _DownloadState(object):
"""Flag to indicate that a thread should exit early."""
def __init__(self):
# No need for a lock because reading/replacing a variable is defined to
# be an atomic operation in the Python language definition (enforced by
# the global interpreter lock).
self.done = False
def pyarrow_datetime():
return pyarrow.timestamp("us", tz=None)
def pyarrow_numeric():
return pyarrow.decimal128(38, 9)
def pyarrow_bignumeric():
return pyarrow.decimal256(76, 38)
def pyarrow_time():
return pyarrow.time64("us")
def pyarrow_timestamp():
return pyarrow.timestamp("us", tz="UTC")
if pyarrow:
# This dictionary is duplicated in bigquery_storage/test/unite/test_reader.py
# When modifying it be sure to update it there as well.
BQ_TO_ARROW_SCALARS = {
"BOOL": pyarrow.bool_,
"BOOLEAN": pyarrow.bool_,
"BYTES": pyarrow.binary,
"DATE": pyarrow.date32,
"DATETIME": pyarrow_datetime,
"FLOAT": pyarrow.float64,
"FLOAT64": pyarrow.float64,
"GEOGRAPHY": pyarrow.string,
"INT64": pyarrow.int64,
"INTEGER": pyarrow.int64,
"NUMERIC": pyarrow_numeric,
"STRING": pyarrow.string,
"TIME": pyarrow_time,
"TIMESTAMP": pyarrow_timestamp,
}
ARROW_SCALAR_IDS_TO_BQ = {
# https://arrow.apache.org/docs/python/api/datatypes.html#type-classes
pyarrow.bool_().id: "BOOL",
pyarrow.int8().id: "INT64",
pyarrow.int16().id: "INT64",
pyarrow.int32().id: "INT64",
pyarrow.int64().id: "INT64",
pyarrow.uint8().id: "INT64",
pyarrow.uint16().id: "INT64",
pyarrow.uint32().id: "INT64",
pyarrow.uint64().id: "INT64",
pyarrow.float16().id: "FLOAT64",
pyarrow.float32().id: "FLOAT64",
pyarrow.float64().id: "FLOAT64",
pyarrow.time32("ms").id: "TIME",
pyarrow.time64("ns").id: "TIME",
pyarrow.timestamp("ns").id: "TIMESTAMP",
pyarrow.date32().id: "DATE",
pyarrow.date64().id: "DATETIME", # because millisecond resolution
pyarrow.binary().id: "BYTES",
pyarrow.string().id: "STRING", # also alias for pyarrow.utf8()
# The exact scale and precision don't matter, see below.
pyarrow.decimal128(38, scale=9).id: "NUMERIC",
}
if version.parse(pyarrow.__version__) >= version.parse("3.0.0"):
BQ_TO_ARROW_SCALARS["BIGNUMERIC"] = pyarrow_bignumeric
# The exact decimal's scale and precision are not important, as only
# the type ID matters, and it's the same for all decimal256 instances.
ARROW_SCALAR_IDS_TO_BQ[pyarrow.decimal256(76, scale=38).id] = "BIGNUMERIC"
_BIGNUMERIC_SUPPORT = True
else:
_BIGNUMERIC_SUPPORT = False
else: # pragma: NO COVER
BQ_TO_ARROW_SCALARS = {} # pragma: NO COVER
ARROW_SCALAR_IDS_TO_BQ = {} # pragma: NO_COVER
_BIGNUMERIC_SUPPORT = False # pragma: NO COVER
def bq_to_arrow_struct_data_type(field):
arrow_fields = []
for subfield in field.fields:
arrow_subfield = bq_to_arrow_field(subfield)
if arrow_subfield:
arrow_fields.append(arrow_subfield)
else:
# Could not determine a subfield type. Fallback to type
# inference.
return None
return pyarrow.struct(arrow_fields)
def bq_to_arrow_data_type(field):
"""Return the Arrow data type, corresponding to a given BigQuery column.
Returns:
None: if default Arrow type inspection should be used.
"""
if field.mode is not None and field.mode.upper() == "REPEATED":
inner_type = bq_to_arrow_data_type(
schema.SchemaField(field.name, field.field_type, fields=field.fields)
)
if inner_type:
return pyarrow.list_(inner_type)
return None
field_type_upper = field.field_type.upper() if field.field_type else ""
if field_type_upper in schema._STRUCT_TYPES:
return bq_to_arrow_struct_data_type(field)
data_type_constructor = BQ_TO_ARROW_SCALARS.get(field_type_upper)
if data_type_constructor is None:
return None
return data_type_constructor()
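# Illustrative examples of the mapping (field names made up): a
# SchemaField("tags", "STRING", mode="REPEATED") becomes
# pyarrow.list_(pyarrow.string()), and a SchemaField("point", "RECORD",
# fields=(SchemaField("x", "INT64"),)) becomes a pyarrow.struct with a
# single nullable int64 field named "x".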
def bq_to_arrow_field(bq_field):
"""Return the Arrow field, corresponding to a given BigQuery column.
Returns:
None: if the Arrow type cannot be determined.
"""
arrow_type = bq_to_arrow_data_type(bq_field)
if arrow_type:
is_nullable = bq_field.mode.upper() == "NULLABLE"
return pyarrow.field(bq_field.name, arrow_type, nullable=is_nullable)
warnings.warn("Unable to determine type for field '{}'.".format(bq_field.name))
return None
def bq_to_arrow_schema(bq_schema):
"""Return the Arrow schema, corresponding to a given BigQuery schema.
Returns:
None: if any Arrow type cannot be determined.
"""
arrow_fields = []
for bq_field in bq_schema:
arrow_field = bq_to_arrow_field(bq_field)
if arrow_field is None:
# Auto-detect the schema if there is an unknown field type.
return None
arrow_fields.append(arrow_field)
return pyarrow.schema(arrow_fields)
def bq_to_arrow_array(series, bq_field):
arrow_type = bq_to_arrow_data_type(bq_field)
field_type_upper = bq_field.field_type.upper() if bq_field.field_type else ""
if bq_field.mode.upper() == "REPEATED":
return pyarrow.ListArray.from_pandas(series, type=arrow_type)
if field_type_upper in schema._STRUCT_TYPES:
return pyarrow.StructArray.from_pandas(series, type=arrow_type)
return pyarrow.Array.from_pandas(series, type=arrow_type)
def get_column_or_index(dataframe, name):
"""Return a column or index as a pandas series."""
if name in dataframe.columns:
return dataframe[name].reset_index(drop=True)
if isinstance(dataframe.index, pandas.MultiIndex):
if name in dataframe.index.names:
return (
dataframe.index.get_level_values(name)
.to_series()
.reset_index(drop=True)
)
else:
if name == dataframe.index.name:
return dataframe.index.to_series().reset_index(drop=True)
raise ValueError("column or index '{}' not found.".format(name))
def list_columns_and_indexes(dataframe):
"""Return all index and column names with dtypes.
Returns:
Sequence[Tuple[str, dtype]]:
Returns a sorted list of indexes and column names with
corresponding dtypes. If an index is missing a name or has the
same name as a column, the index is omitted.
"""
column_names = frozenset(dataframe.columns)
columns_and_indexes = []
if isinstance(dataframe.index, pandas.MultiIndex):
for name in dataframe.index.names:
if name and name not in column_names:
values = dataframe.index.get_level_values(name)
columns_and_indexes.append((name, values.dtype))
else:
if dataframe.index.name and dataframe.index.name not in column_names:
columns_and_indexes.append((dataframe.index.name, dataframe.index.dtype))
columns_and_indexes += zip(dataframe.columns, dataframe.dtypes)
return columns_and_indexes
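# For example, a DataFrame with a named index "id" and columns "x" and "y"
# yields roughly [("id", <index dtype>), ("x", <dtype>), ("y", <dtype>)]:
# named indexes come first, and unnamed indexes or indexes shadowed by a
# column name are skipped.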
def dataframe_to_bq_schema(dataframe, bq_schema):
"""Convert a pandas DataFrame schema to a BigQuery schema.
Args:
dataframe (pandas.DataFrame):
DataFrame for which the client determines the BigQuery schema.
bq_schema (Sequence[Union[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
Mapping[str, Any] \
]]):
A BigQuery schema. Use this argument to override the autodetected
type for some or all of the DataFrame columns.
Returns:
Optional[Sequence[google.cloud.bigquery.schema.SchemaField]]:
The automatically determined schema. Returns None if the type of
any column cannot be determined.
"""
if bq_schema:
bq_schema = schema._to_schema_fields(bq_schema)
bq_schema_index = {field.name: field for field in bq_schema}
bq_schema_unused = set(bq_schema_index.keys())
else:
bq_schema_index = {}
bq_schema_unused = set()
bq_schema_out = []
unknown_type_fields = []
for column, dtype in list_columns_and_indexes(dataframe):
# Use provided type from schema, if present.
bq_field = bq_schema_index.get(column)
if bq_field:
bq_schema_out.append(bq_field)
bq_schema_unused.discard(bq_field.name)
continue
# Otherwise, try to automatically determine the type based on the
# pandas dtype.
bq_type = _PANDAS_DTYPE_TO_BQ.get(dtype.name)
bq_field = schema.SchemaField(column, bq_type)
bq_schema_out.append(bq_field)
if bq_field.field_type is None:
unknown_type_fields.append(bq_field)
# Catch any schema mismatch. The developer explicitly asked to serialize a
# column, but it was not found.
if bq_schema_unused:
raise ValueError(
u"bq_schema contains fields not present in dataframe: {}".format(
bq_schema_unused
)
)
# If schema detection was not successful for all columns, also try with
# pyarrow, if available.
if unknown_type_fields:
if not pyarrow:
msg = u"Could not determine the type of columns: {}".format(
", ".join(field.name for field in unknown_type_fields)
)
warnings.warn(msg)
return None # We cannot detect the schema in full.
# The augment_schema() helper itself will also issue unknown type
# warnings if detection still fails for any of the fields.
bq_schema_out = augment_schema(dataframe, bq_schema_out)
return tuple(bq_schema_out) if bq_schema_out else None
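# Rough usage sketch (column names made up):
#
#   df = pandas.DataFrame({"name": ["a"], "score": [1.5]})
#   dataframe_to_bq_schema(df, bq_schema=())
#   # -> (SchemaField("name", "STRING", ...), SchemaField("score", "FLOAT", ...))
#
# "score" (float64) is resolved directly via _PANDAS_DTYPE_TO_BQ, while the
# object-dtype "name" column falls through to the pyarrow-based
# augment_schema() detection.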
def augment_schema(dataframe, current_bq_schema):
"""Try to deduce the unknown field types and return an improved schema.
This function requires ``pyarrow`` to run. If all the missing types still
cannot be detected, ``None`` is returned. If all types are already known,
a shallow copy of the given schema is returned.
Args:
dataframe (pandas.DataFrame):
DataFrame for which some of the field types are still unknown.
current_bq_schema (Sequence[google.cloud.bigquery.schema.SchemaField]):
A BigQuery schema for ``dataframe``. The types of some or all of
the fields may be ``None``.
Returns:
Optional[Sequence[google.cloud.bigquery.schema.SchemaField]]
"""
# pytype: disable=attribute-error
augmented_schema = []
unknown_type_fields = []
for field in current_bq_schema:
if field.field_type is not None:
augmented_schema.append(field)
continue
arrow_table = pyarrow.array(dataframe[field.name])
detected_type = ARROW_SCALAR_IDS_TO_BQ.get(arrow_table.type.id)
if detected_type is None:
unknown_type_fields.append(field)
continue
new_field = schema.SchemaField(
name=field.name,
field_type=detected_type,
mode=field.mode,
description=field.description,
fields=field.fields,
)
augmented_schema.append(new_field)
if unknown_type_fields:
warnings.warn(
u"Pyarrow could not determine the type of columns: {}.".format(
", ".join(field.name for field in unknown_type_fields)
)
)
return None
return augmented_schema
# pytype: enable=attribute-error
def dataframe_to_arrow(dataframe, bq_schema):
"""Convert pandas dataframe to Arrow table, using BigQuery schema.
Args:
dataframe (pandas.DataFrame):
DataFrame to convert to Arrow table.
bq_schema (Sequence[Union[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
Mapping[str, Any] \
]]):
Desired BigQuery schema. The number of columns must match the
number of columns in the DataFrame.
Returns:
pyarrow.Table:
Table containing dataframe data, with schema derived from
BigQuery schema.
"""
column_names = set(dataframe.columns)
column_and_index_names = set(
name for name, _ in list_columns_and_indexes(dataframe)
)
bq_schema = schema._to_schema_fields(bq_schema)
bq_field_names = set(field.name for field in bq_schema)
extra_fields = bq_field_names - column_and_index_names
if extra_fields:
raise ValueError(
u"bq_schema contains fields not present in dataframe: {}".format(
extra_fields
)
)
# It's okay for indexes to be missing from bq_schema, but it's not okay to
# be missing columns.
missing_fields = column_names - bq_field_names
if missing_fields:
raise ValueError(
u"bq_schema is missing fields from dataframe: {}".format(missing_fields)
)
arrow_arrays = []
arrow_names = []
arrow_fields = []
for bq_field in bq_schema:
arrow_fields.append(bq_to_arrow_field(bq_field))
arrow_names.append(bq_field.name)
arrow_arrays.append(
bq_to_arrow_array(get_column_or_index(dataframe, bq_field.name), bq_field)
)
if all((field is not None for field in arrow_fields)):
return pyarrow.Table.from_arrays(
arrow_arrays, schema=pyarrow.schema(arrow_fields)
)
return pyarrow.Table.from_arrays(arrow_arrays, names=arrow_names)
def dataframe_to_parquet(dataframe, bq_schema, filepath, parquet_compression="SNAPPY"):
"""Write dataframe as a Parquet file, according to the desired BQ schema.
This function requires the :mod:`pyarrow` package. Arrow is used as an
intermediate format.
Args:
dataframe (pandas.DataFrame):
DataFrame to convert to Parquet file.
bq_schema (Sequence[Union[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
Mapping[str, Any] \
]]):
Desired BigQuery schema. Number of columns must match number of
columns in the DataFrame.
filepath (str):
Path to write Parquet file to.
parquet_compression (Optional[str]):
The compression codec to be used by the ``pyarrow.parquet.write_table``
serializing method. Defaults to "SNAPPY".
https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html#pyarrow-parquet-write-table
"""
if pyarrow is None:
raise ValueError("pyarrow is required for BigQuery schema conversion.")
bq_schema = schema._to_schema_fields(bq_schema)
arrow_table = dataframe_to_arrow(dataframe, bq_schema)
pyarrow.parquet.write_table(arrow_table, filepath, compression=parquet_compression)
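# Rough usage sketch (path and field name made up):
#
#   dataframe_to_parquet(df, [schema.SchemaField("x", "INTEGER")],
#                        "/tmp/out.parquet")
#
# converts ``df`` to an Arrow table with the BigQuery-derived types and
# writes it with SNAPPY compression unless ``parquet_compression`` says
# otherwise.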
def _row_iterator_page_to_arrow(page, column_names, arrow_types):
# Iterate over the page to force the API request to get the page data.
try:
next(iter(page))
except StopIteration:
pass
arrays = []
for column_index, arrow_type in enumerate(arrow_types):
arrays.append(pyarrow.array(page._columns[column_index], type=arrow_type))
if isinstance(column_names, pyarrow.Schema):
return pyarrow.RecordBatch.from_arrays(arrays, schema=column_names)
return pyarrow.RecordBatch.from_arrays(arrays, names=column_names)
def download_arrow_row_iterator(pages, bq_schema):
"""Use HTTP JSON RowIterator to construct an iterable of RecordBatches.
Args:
pages (Iterator[:class:`google.api_core.page_iterator.Page`]):
An iterator over the result pages.
bq_schema (Sequence[Union[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
Mapping[str, Any] \
]]):
A description of the fields in result pages.
Yields:
:class:`pyarrow.RecordBatch`
The next page of records as a ``pyarrow`` record batch.
"""
bq_schema = schema._to_schema_fields(bq_schema)
column_names = bq_to_arrow_schema(bq_schema) or [field.name for field in bq_schema]
arrow_types = [bq_to_arrow_data_type(field) for field in bq_schema]
for page in pages:
yield _row_iterator_page_to_arrow(page, column_names, arrow_types)
def _row_iterator_page_to_dataframe(page, column_names, dtypes):
# Iterate over the page to force the API request to get the page data.
try:
next(iter(page))
except StopIteration:
pass
columns = {}
for column_index, column_name in enumerate(column_names):
dtype = dtypes.get(column_name)
columns[column_name] = pandas.Series(page._columns[column_index], dtype=dtype)
return pandas.DataFrame(columns, columns=column_names)
def download_dataframe_row_iterator(pages, bq_schema, dtypes):
"""Use HTTP JSON RowIterator to construct a DataFrame.
Args:
pages (Iterator[:class:`google.api_core.page_iterator.Page`]):
An iterator over the result pages.
bq_schema (Sequence[Union[ \
:class:`~google.cloud.bigquery.schema.SchemaField`, \
Mapping[str, Any] \
]]):
A description of the fields in result pages.
dtypes(Mapping[str, numpy.dtype]):
The types of columns in result data to hint construction of the
resulting DataFrame. Not all column types have to be specified.
Yields:
:class:`pandas.DataFrame`
The next page of records as a ``pandas.DataFrame`` record batch.
"""
bq_schema = schema._to_schema_fields(bq_schema)
column_names = [field.name for field in bq_schema]
for page in pages:
yield _row_iterator_page_to_dataframe(page, column_names, dtypes)
def _bqstorage_page_to_arrow(page):
return page.to_arrow()
def _bqstorage_page_to_dataframe(column_names, dtypes, page):
# page.to_dataframe() does not preserve column order in some versions
# of google-cloud-bigquery-storage. Access by column name to rearrange.
return page.to_dataframe(dtypes=dtypes)[column_names]
def _download_table_bqstorage_stream(
download_state, bqstorage_client, session, stream, worker_queue, page_to_item
):
rowstream = bqstorage_client.read_rows(stream.name).rows(session)
for page in rowstream.pages:
if download_state.done:
return
item = page_to_item(page)
worker_queue.put(item)
def _nowait(futures):
"""Separate finished and unfinished threads, much like
:func:`concurrent.futures.wait`, but don't wait.
"""
done = []
not_done = []
for future in futures:
if future.done():
done.append(future)
else:
not_done.append(future)
return done, not_done
def _download_table_bqstorage(
project_id,
table,
bqstorage_client,
preserve_order=False,
selected_fields=None,
page_to_item=None,
max_queue_size=_MAX_QUEUE_SIZE_DEFAULT,
):
"""Use (faster, but billable) BQ Storage API to construct DataFrame."""
# Passing a BQ Storage client in implies that the BigQuery Storage library
# is available and can be imported.
from google.cloud import bigquery_storage
if "$" in table.table_id:
raise ValueError(
"Reading from a specific partition is not currently supported."
)
if "@" in table.table_id:
raise ValueError("Reading from a specific snapshot is not currently supported.")
requested_streams = 1 if preserve_order else 0
requested_session = bigquery_storage.types.ReadSession(
table=table.to_bqstorage(), data_format=bigquery_storage.types.DataFormat.ARROW
)
if selected_fields is not None:
for field in selected_fields:
requested_session.read_options.selected_fields.append(field.name)
if _ARROW_COMPRESSION_SUPPORT:
requested_session.read_options.arrow_serialization_options.buffer_compression = (
ArrowSerializationOptions.CompressionCodec.LZ4_FRAME
)
session = bqstorage_client.create_read_session(
parent="projects/{}".format(project_id),
read_session=requested_session,
max_stream_count=requested_streams,
)
_LOGGER.debug(
"Started reading table '{}.{}.{}' with BQ Storage API session '{}'.".format(
table.project, table.dataset_id, table.table_id, session.name
)
)
# Avoid reading rows from an empty table.
if not session.streams:
return
total_streams = len(session.streams)
# Use _DownloadState to notify worker threads when to quit.
# See: https://stackoverflow.com/a/29237343/101923
download_state = _DownloadState()
# Create a queue to collect frames as they are created in each thread.
#
# The queue needs to be bounded by default, because if the user code processes the
# fetched result pages too slowly, while at the same time new pages are rapidly being
# fetched from the server, the queue can grow to the point where the process runs
# out of memory.
if max_queue_size is _MAX_QUEUE_SIZE_DEFAULT:
max_queue_size = total_streams
elif max_queue_size is None:
max_queue_size = 0 # unbounded
worker_queue = queue.Queue(maxsize=max_queue_size)
with concurrent.futures.ThreadPoolExecutor(max_workers=total_streams) as pool:
try:
# Manually submit jobs and wait for download to complete rather
# than using pool.map because pool.map continues running in the
# background even if there is an exception on the main thread.
# See: https://github.com/googleapis/google-cloud-python/pull/7698
not_done = [
pool.submit(
_download_table_bqstorage_stream,
download_state,
bqstorage_client,
session,
stream,
worker_queue,
page_to_item,
)
for stream in session.streams
]
while not_done:
# Don't block on the worker threads. For performance reasons,
# we want to block on the queue's get method, instead. This
# prevents the queue from filling up, because the main thread
# has smaller gaps in time between calls to the queue's get
# method. For a detailed explanation, see:
# https://friendliness.dev/2019/06/18/python-nowait/
done, not_done = _nowait(not_done)
for future in done:
# Call result() on any finished threads to raise any
# exceptions encountered.
future.result()
try:
frame = worker_queue.get(timeout=_PROGRESS_INTERVAL)
yield frame
except queue.Empty: # pragma: NO COVER
continue
# Return any remaining values after the workers finished.
while True: # pragma: NO COVER
try:
frame = worker_queue.get_nowait()
yield frame
except queue.Empty: # pragma: NO COVER
break
finally:
# No need for a lock because reading/replacing a variable is
# defined to be an atomic operation in the Python language
# definition (enforced by the global interpreter lock).
download_state.done = True
# Shutdown all background threads, now that they should know to
# exit early.
pool.shutdown(wait=True)
def download_arrow_bqstorage(
project_id, table, bqstorage_client, preserve_order=False, selected_fields=None,
):
return _download_table_bqstorage(
project_id,
table,
bqstorage_client,
preserve_order=preserve_order,
selected_fields=selected_fields,
page_to_item=_bqstorage_page_to_arrow,
)
def download_dataframe_bqstorage(
project_id,
table,
bqstorage_client,
column_names,
dtypes,
preserve_order=False,
selected_fields=None,
max_queue_size=_MAX_QUEUE_SIZE_DEFAULT,
):
page_to_item = functools.partial(_bqstorage_page_to_dataframe, column_names, dtypes)
return _download_table_bqstorage(
project_id,
table,
bqstorage_client,
preserve_order=preserve_order,
selected_fields=selected_fields,
page_to_item=page_to_item,
max_queue_size=max_queue_size,
)
def dataframe_to_json_generator(dataframe):
for row in dataframe.itertuples(index=False, name=None):
output = {}
for column, value in zip(dataframe.columns, row):
# Omit NaN values.
if value != value:
continue
output[column] = value
yield output
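# --- Illustrative usage sketch (editor's addition, not part of this module) --
# Shows how dataframe_to_json_generator (defined above) omits NaN cells: NaN
# is the only value for which `value != value` is True, so that column is
# simply left out of the emitted dict. The sample DataFrame is hypothetical.
def _example_dataframe_to_json():
    import pandas

    frame = pandas.DataFrame({"name": ["a", "b"], "score": [1.5, float("nan")]})
    rows = list(dataframe_to_json_generator(frame))
    # rows == [{"name": "a", "score": 1.5}, {"name": "b"}]
    return rows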
| apache-2.0 |
xiaoxiamii/scikit-learn | examples/cluster/plot_cluster_comparison.py | 246 | 4684 | """
=========================================================
Comparing different clustering algorithms on toy datasets
=========================================================
This example aims at showing characteristics of different
clustering algorithms on datasets that are "interesting"
but still in 2D. The last dataset is an example of a 'null'
situation for clustering: the data is homogeneous, and
there is no good clustering.
While these examples give some intuition about the algorithms,
this intuition might not apply to very high dimensional data.
The results could be improved by tweaking the parameters for
each clustering strategy, for instance setting the number of
clusters for the methods that need this parameter
specified. Note that affinity propagation has a tendency to
create many clusters. Thus in this example its two parameters
(damping and per-point preference) were set to mitigate this
behavior.
"""
print(__doc__)
import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn import cluster, datasets
from sklearn.neighbors import kneighbors_graph
from sklearn.preprocessing import StandardScaler
np.random.seed(0)
# Generate datasets. We choose the size big enough to see the scalability
# of the algorithms, but not so big as to cause overly long running times
n_samples = 1500
noisy_circles = datasets.make_circles(n_samples=n_samples, factor=.5,
noise=.05)
noisy_moons = datasets.make_moons(n_samples=n_samples, noise=.05)
blobs = datasets.make_blobs(n_samples=n_samples, random_state=8)
no_structure = np.random.rand(n_samples, 2), None
colors = np.array([x for x in 'bgrcmykbgrcmykbgrcmykbgrcmyk'])
colors = np.hstack([colors] * 20)
clustering_names = [
'MiniBatchKMeans', 'AffinityPropagation', 'MeanShift',
'SpectralClustering', 'Ward', 'AgglomerativeClustering',
'DBSCAN', 'Birch']
plt.figure(figsize=(len(clustering_names) * 2 + 3, 9.5))
plt.subplots_adjust(left=.02, right=.98, bottom=.001, top=.96, wspace=.05,
hspace=.01)
plot_num = 1
datasets = [noisy_circles, noisy_moons, blobs, no_structure]
for i_dataset, dataset in enumerate(datasets):
X, y = dataset
# normalize dataset for easier parameter selection
X = StandardScaler().fit_transform(X)
# estimate bandwidth for mean shift
bandwidth = cluster.estimate_bandwidth(X, quantile=0.3)
# connectivity matrix for structured Ward
connectivity = kneighbors_graph(X, n_neighbors=10, include_self=False)
# make connectivity symmetric
connectivity = 0.5 * (connectivity + connectivity.T)
# create clustering estimators
ms = cluster.MeanShift(bandwidth=bandwidth, bin_seeding=True)
two_means = cluster.MiniBatchKMeans(n_clusters=2)
ward = cluster.AgglomerativeClustering(n_clusters=2, linkage='ward',
connectivity=connectivity)
spectral = cluster.SpectralClustering(n_clusters=2,
eigen_solver='arpack',
affinity="nearest_neighbors")
dbscan = cluster.DBSCAN(eps=.2)
affinity_propagation = cluster.AffinityPropagation(damping=.9,
preference=-200)
average_linkage = cluster.AgglomerativeClustering(
linkage="average", affinity="cityblock", n_clusters=2,
connectivity=connectivity)
birch = cluster.Birch(n_clusters=2)
clustering_algorithms = [
two_means, affinity_propagation, ms, spectral, ward, average_linkage,
dbscan, birch]
for name, algorithm in zip(clustering_names, clustering_algorithms):
# predict cluster memberships
t0 = time.time()
algorithm.fit(X)
t1 = time.time()
if hasattr(algorithm, 'labels_'):
y_pred = algorithm.labels_.astype(np.int)
else:
y_pred = algorithm.predict(X)
# plot
plt.subplot(4, len(clustering_algorithms), plot_num)
if i_dataset == 0:
plt.title(name, size=18)
plt.scatter(X[:, 0], X[:, 1], color=colors[y_pred].tolist(), s=10)
if hasattr(algorithm, 'cluster_centers_'):
centers = algorithm.cluster_centers_
center_colors = colors[:len(centers)]
plt.scatter(centers[:, 0], centers[:, 1], s=100, c=center_colors)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.xticks(())
plt.yticks(())
plt.text(.99, .01, ('%.2fs' % (t1 - t0)).lstrip('0'),
transform=plt.gca().transAxes, size=15,
horizontalalignment='right')
plot_num += 1
plt.show()
| bsd-3-clause |
cjayb/mne-python | mne/utils/dataframe.py | 14 | 3236 | # -*- coding: utf-8 -*-
"""inst.to_data_frame() helper functions."""
# Authors: Daniel McCloy <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
from ._logging import logger
from ..defaults import _handle_default
def _set_pandas_dtype(df, columns, dtype):
"""Try to set the right columns to dtype."""
for column in columns:
df[column] = df[column].astype(dtype)
logger.info('Converting "%s" to "%s"...' % (column, dtype))
def _scale_dataframe_data(inst, data, picks, scalings):
ch_types = inst.get_channel_types()
ch_types_used = list()
scalings = _handle_default('scalings', scalings)
for tt in scalings.keys():
if tt in ch_types:
ch_types_used.append(tt)
for tt in ch_types_used:
scaling = scalings[tt]
idx = [ii for ii in range(len(picks)) if ch_types[ii] == tt]
if len(idx):
data[:, idx] *= scaling
return data
def _convert_times(inst, times, time_format):
"""Convert vector of time in seconds to ms, datetime, or timedelta."""
# private function; pandas already checked in calling function
from pandas import to_timedelta
if time_format == 'ms':
times = np.round(times * 1e3).astype(np.int64)
elif time_format == 'timedelta':
times = to_timedelta(times, unit='s')
elif time_format == 'datetime':
times = (to_timedelta(times + inst.first_time, unit='s') +
inst.info['meas_date'])
return times
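# --- Illustrative sketch (editor's addition, not part of MNE) ----------------
# A minimal stand-alone example of the conversions performed by _convert_times
# for the 'ms' and 'timedelta' formats; the 'datetime' branch additionally
# needs inst.first_time and inst.info['meas_date']. The sample values are
# hypothetical.
def _example_convert_times():
    import numpy as np
    from pandas import to_timedelta

    times = np.array([0.0, 0.001, 0.004])            # seconds
    as_ms = np.round(times * 1e3).astype(np.int64)   # -> array([0, 1, 4])
    as_timedelta = to_timedelta(times, unit='s')     # pandas TimedeltaIndex
    return as_ms, as_timedelta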
def _build_data_frame(inst, data, picks, long_format, mindex, index,
default_index, col_names=None, col_kind='channel'):
"""Build DataFrame from MNE-object-derived data array."""
# private function; pandas already checked in calling function
from pandas import DataFrame
from ..source_estimate import _BaseSourceEstimate
# build DataFrame
if col_names is None:
col_names = [inst.ch_names[p] for p in picks]
df = DataFrame(data, columns=col_names)
for i, (k, v) in enumerate(mindex):
df.insert(i, k, v)
# build Index
if long_format:
df.set_index(default_index, inplace=True)
df.columns.name = col_kind
elif index is not None:
df.set_index(index, inplace=True)
if set(index) == set(default_index):
df.columns.name = col_kind
# long format
if long_format:
df = df.stack().reset_index()
df.rename(columns={0: 'value'}, inplace=True)
# add column for channel types (as appropriate)
ch_map = (None if isinstance(inst, _BaseSourceEstimate) else
dict(zip(np.array(inst.ch_names)[picks],
np.array(inst.get_channel_types())[picks])))
if ch_map is not None:
col_index = len(df.columns) - 1
ch_type = df['channel'].map(ch_map)
df.insert(col_index, 'ch_type', ch_type)
# restore index
if index is not None:
df.set_index(index, inplace=True)
# convert channel/vertex/ch_type columns to factors
to_factor = [c for c in df.columns.tolist()
if c not in ('time', 'value')]
_set_pandas_dtype(df, to_factor, 'category')
return df
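# --- Illustrative sketch (editor's addition, not part of MNE) ----------------
# A minimal stand-alone example of the wide-to-long reshaping performed by
# _build_data_frame when long_format=True: channel columns are stacked into a
# single 'value' column keyed by (time, channel). The data are hypothetical.
def _example_long_format():
    from pandas import DataFrame

    wide = DataFrame({'time': [0, 1],
                      'EEG 001': [1.0, 2.0],
                      'EEG 002': [3.0, 4.0]}).set_index('time')
    wide.columns.name = 'channel'
    long_df = wide.stack().reset_index()
    long_df.rename(columns={0: 'value'}, inplace=True)
    return long_df  # columns: time, channel, value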
| bsd-3-clause |
looptool/download | datawarehouse/build_olap.py | 1 | 105734 | """
Loop Data Warehouse creation script
This script processes the log (.csv) and course export file (.zip) for each course that is specified in config.json.
Blackboard and Moodle course data is transformed into the same structure internally which means that the analytics and
dashboard code is the same for both.
Assumes that each course export zip has been extracted, that the log file is named log.csv, and that both are placed
in the same folder for each course.
OLAP and summary tables are then created in a MySQL database. The cloop_olap.sql file contains the SQL to create the database.
Star schema is used to make queries across dimensions easy:
- Pageviews are stored in a fact table (i.e., fact_coursevisits)
- Dates, Pages and Users are stored in Dimension tables
All HTML tables included in the dashboard are cached in the summary_courses table.
Separate extraction code has, however, been written for Blackboard and Moodle: Blackboard uses the IMS-CP format while
Moodle has its own format. These formats were reverse engineered for the project.
Blackboard and MoodleMacquarie formats need a csv log file.
Moodle from UniSA only needs the course export format.
High level overview of processing from course export file:
- Extract users and insert into fact table
- Extract course structure from zip and insert into content dimension table
- Extract forum posts
- Extract quiz attempts and scores
Todo:
- Use Celery for task processing and cron
- Update Pandas to use Dask for larger out of memory data processing
- Clean up Blackboard manifest code processing
- Import all users for processing - at the moment for the trial only students are imported
- Move database to Postgres
"""
import sys
from sqlalchemy.engine import create_engine
import re
import datetime
import time
import csv
import random
import xml.etree.cElementTree as ET
import os.path
import urlparse
import unicodedata
import pandas as pd
import numpy
import json
cache = {}
connection = None
def main():
global engine
global connection
global DayL
global course_id
global course_startdate
global course_enddate
global course_type
global treetable
global course_weeks
global course_event
global content_hidden_list
global forum_hidden_list
global submission_hidden_list
global staff_list
global tree_table_totals
global communication_types
global assessment_types
global msg_to_forum_dict
global df_treetable
global sitetree
DayL = ['Monday','Tuesday','Wednesday','Thursday','Friday','Saturday','Sunday']
# load config.json
course_config = None
with open('config.json') as data_file:
course_config = json.load(data_file)
staff_list = []
sitetree = {}
engine = create_engine('mysql://root:root@localhost/cloop_olap?unix_socket=/Applications/MAMP/tmp/mysql/mysql.sock')
course_exports_path = '/Users/aneesha/cloop/olap/data/'
connection = engine.connect()
print str(datetime.datetime.now())
print "Cleanup_database - delete all records"
cleanup_database()
print "Generate Dates"
generate_dates()
for config in course_config:
course_id = config['course_id']
course_type = config['course_type']
week_startdate = config['start_date']
week_enddate = config['end_date']
course_weeks = get_weeks(week_startdate, week_enddate)
tree_table_totals = [0] * len(course_weeks)
#Create communication and assessment content type mappings
if course_type == "Moodle":
communication_types = ['forum']
assessment_types = ['quiz', 'assign']
elif course_type == "MoodleMacquarie":
communication_types = ['forum_discussions', 'oublog', 'forum', 'oublog_posts', 'forum_posts', 'dialogue', 'dialogue_conversations']
assessment_types = ['quiz','quiz_attempts','grade_grades','turnitintool', 'assign']
elif course_type == "Blackboard":
communication_types = ['resource/x-bb-discussionboard', 'course/x-bb-collabsession', 'resource/x-bb-discussionfolder']
assessment_types = ['assessment/x-bb-qti-test', 'course/x-bb-courseassessment', 'resource/x-turnitin-assignment']
add_coursesummarytable(course_id)
print "Processing Course ID: ", course_id
process_exportzips(course_id, course_type, course_exports_path)
print str(datetime.datetime.now())
def cleanup_database():
"""
Deletes all records in the database.
"""
# First the OLAP Tables
connection.execute("DELETE FROM dim_users");
connection.execute("DELETE FROM fact_coursevisits");
connection.execute("DELETE FROM dim_dates");
connection.execute("DELETE FROM dim_sessions");
connection.execute("DELETE FROM dim_session");
connection.execute("DELETE FROM dim_pages");
connection.execute("DELETE FROM dim_submissionattempts");
connection.execute("DELETE FROM dim_submissiontypes");
# Next cleanup the Summary Tables
connection.execute("DELETE FROM Summary_Courses");
connection.execute("DELETE FROM summary_forum");
connection.execute("DELETE FROM summary_discussion");
connection.execute("DELETE FROM summary_posts");
connection.execute("DELETE FROM Summary_CourseVisitsByDayInWeek");
connection.execute("DELETE FROM Summary_CourseCommunicationVisitsByDayInWeek");
connection.execute("DELETE FROM Summary_CourseAssessmentVisitsByDayInWeek");
connection.execute("DELETE FROM Summary_SessionsByDayInWeek");
connection.execute("DELETE FROM Summary_SessionAverageLengthByDayInWeek");
connection.execute("DELETE FROM Summary_SessionAveragePagesPerSessionByDayInWeek");
connection.execute("DELETE FROM Summary_ParticipatingUsersByDayInWeek");
connection.execute("DELETE FROM Summary_UniquePageViewsByDayInWeek");
def generate_dates():
"""
Generates dates for the date dimension table
"""
start_date = datetime.datetime.strptime("1-JAN-14", "%d-%b-%y")
end_date = datetime.datetime.strptime("31-DEC-16", "%d-%b-%y")
cur_date = start_date
while (cur_date <= end_date):
insert_date(cur_date.year, cur_date.month, cur_date.day, cur_date.weekday(), cur_date.isocalendar()[1])
cur_date = cur_date + datetime.timedelta(days = +1)
def insert_date(year, month, day, dayinweek, week):
"""
Inserts a date in the dim_dates dimension table
Args:
year: 4 digit year
month: 1 - 12 month value
day: day of the month
dayinweek: 0 - 6 day in week
week: week in year value
Returns:
id of the inserted date
"""
row = { }
prefix = "date"
insert_date = datetime.date(int(year), int(month), int(day));
if insert_date != None:
row["id"] = sanitize(datetime.datetime.strftime(insert_date, "%d/%b/%y"))
row[prefix + "_year"] = year
row[prefix + "_month"] = month
row[prefix + "_day"] = day
row[prefix + "_dayinweek"] = dayinweek
row[prefix + "_week"] = week
if row[prefix + "_month"] == 12 and row[prefix + "_week"] <= 1:
row[prefix + "_week"] = 52
if row[prefix + "_month"] == 1 and row[prefix + "_week"] >= 52:
row[prefix + "_week"] = 1
unixtimestamp = time.mktime(datetime.datetime.strptime(row["id"], "%d-%b-%y").timetuple())
row['unixtimestamp'] = unixtimestamp
return save_object ("dim_dates", row)
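# --- Illustrative sketch (editor's addition, not part of the build script) ---
# Shows the ISO-week adjustment applied in insert_date above: late-December
# days that ISO-8601 assigns to week 1 of the next year are forced into week
# 52, and early-January days assigned to week 52/53 of the previous year are
# forced into week 1, so week numbers stay within the calendar year. The
# sample dates are hypothetical.
def _example_week_adjustment():
    sample_dates = [datetime.date(2014, 12, 30), datetime.date(2016, 1, 1)]
    adjusted = []
    for d in sample_dates:
        week = d.isocalendar()[1]
        if d.month == 12 and week <= 1:
            week = 52
        if d.month == 1 and week >= 52:
            week = 1
        adjusted.append((d.isoformat(), week))
    return adjusted  # [('2014-12-30', 52), ('2016-01-01', 1)]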
def add_coursesummarytable(course_id):
"""
Inserts a record for the course_id in the summary_courses table
Args:
course_id: course id
"""
row = {"course_id": course_id}
save_summaryobject ("summary_courses", row)
def process_exportzips(course_id, course_type, course_exports_path):
"""
Determines if the course is from Moodle or Blackboard and runs the associated methods to process the course
Args:
course_id: course id
course_type: Blackboard or Moodle
course_exports_path: path to the export folder
"""
global treetable
global staff_list
global df_treetable
course_export_path = "%s%d" %(course_exports_path, course_id)
course_export_normpath = os.path.normpath(course_export_path)
if course_type=="Blackboard":
print "Import Blackboard Manifest Tree"
user_xmlfile, id_type_dict, usermsgstate_resourcefile, announcements_id, gradebook_id, content_link_id_to_content_id_dict = process_IMSCP_Manifest_Blackboard(course_export_normpath, course_id)
print "Import Users"
userxml_export_path = "%s%d/%s" %(course_exports_path, course_id, user_xmlfile)
populate_dim_users_table(userxml_export_path, course_type, course_id)
print "Process Access Log"
log_export_path = "%s%d/%s" %(course_exports_path, course_id,"log.csv")
process_accesslog(log_export_path, course_type, course_id, announcements_id, gradebook_id, content_link_id_to_content_id_dict, idtypemap=id_type_dict)
else:
print "Moodle"
print "Import Users"
populate_dim_users_table(course_export_normpath, course_type, course_id)
print "Process Access Log"
process_accesslog(course_export_normpath, course_type, course_id)
course_export_acivitiespath = "%s%d/activities" %(course_exports_path, course_id)
course_export_activitiesnormpath = os.path.normpath(course_export_acivitiespath)
print "Process Access Log - Resources"
process_courseresources(course_export_activitiesnormpath, course_id, course_type)
course_export_sectionspath = "%s%d/sections" %(course_exports_path, course_id)
course_export_sectionsnormpath = os.path.normpath(course_export_sectionspath)
process_moodle_sections(course_export_sectionsnormpath, course_id, course_type)
print "Processing Session Calculations"
process_user_sessions(course_id)
print "Processing Summary Tables"
# Populate Summary Tables
populate_summarytables(course_id)
print "Processing Count and Vis Tables"
# Populate Tables for Course Access Counts and Visualisations
generate_studentbyweektable(course_id,"counts")
generate_studentbyweektable(course_id,"vis")
weeks_str_list = ','.join(map(str, course_weeks))
contentsql = "SELECT F.pageview, D.date_week, F.module, F.action, F.page_id, F.section_order, F.course_id, F.user_pk FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.course_id=%d AND D.Date_week IN (%s);" % (course_id, weeks_str_list)
newconnection = engine.raw_connection()
# store as pandas dataframe; keep global to speed up processing
df_treetable = pd.read_sql(contentsql, newconnection)
print "ProcessingTreetables processing - Counts"
treetable = ""
generate_contentaccesstreetable(course_id,"counts")
print "Processing Treetables processing - Users"
treetable = ""
generate_contentaccesstreetable(course_id,"user")
print "Processing Communication tables"
generate_contentaccesstable(course_id, "communication", "user", "communication_counts_table")
generate_contentaccesstable(course_id, "communication", "count", "communication_user_table")
print "Processing Forum Posts"
generate_contentaccesstable(course_id, "forum", "posts", "forum_posts_table")
print "Processing Assessment tables"
generate_contentaccesstable(course_id, "submmision", "user", "assessment_counts_table")
generate_contentaccesstable(course_id, "submmision", "count", "assessment_user_table")
print "Processing Timelines Summary"
gen_coursepageviews(course_id)
print "Processing Assessment Grades for each student"
generate_assessmentgrades(course_id, "assessmentgrades")
def populate_summarytables(course_id):
"""
Produces the summary tables: Summary_CourseVisitsByDayInWeek, Summary_CourseCommunicationVisitsByDayInWeek,
Summary_CourseAssessmentVisitsByDayInWeek, Summary_SessionsByDayInWeek, Summary_SessionAverageLengthByDayInWeek,
Summary_SessionAveragePagesPerSessionByDayInWeek, Summary_ParticipatingUsersByDayInWeek and Summary_UniquePageViewsByDayInWeek.
Args:
course_id: course id
"""
global course_weeks
global communication_types
global assessment_types
course_weeks_str = str(course_weeks).strip('[]')
excluse_contentype_list = communication_types + assessment_types
excluse_contentype_str = ','.join("'{0}'".format(x) for x in excluse_contentype_list)
communication_types_str = ','.join("'{0}'".format(x) for x in communication_types)
assessment_types_str = ','.join("'{0}'".format(x) for x in assessment_types)
# Populate Summary_CourseVisitsByDayInWeek - only contains content items
trans = connection.begin()
Summary_CourseVisitsByDayInWeek_sql = "SELECT D.Date_Year, D.Date_Day, D.Date_Week, D.Date_dayinweek, SUM(F.pageview) AS Pageviews, F.course_id FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.Date_dayinweek IN (0,1,2,3,4,5,6) AND D.DATE_week IN (%s) AND F.course_id=%d AND F.module NOT IN (%s) GROUP BY D.Date_Week, D.Date_dayinweek" %(course_weeks_str, course_id, excluse_contentype_str)
result = connection.execute(Summary_CourseVisitsByDayInWeek_sql);
for row in result:
pageview = row[4] if row[4] is not None else 0
record = {"Date_Year": row[0], "Date_Day": row[1], "Date_Week": row[2], "Date_dayinweek": row[3], "pageviews": pageview, "course_id": course_id}
save_summaryobject ("Summary_CourseVisitsByDayInWeek", record)
trans.commit()
# Populate Summary_CourseCommunicationVisitsByDayInWeek - only contains forums
trans = connection.begin()
Summary_CourseCommunicationVisitsByDayInWeek_sql = "SELECT D.Date_Year, D.Date_Day, D.Date_Week, D.Date_dayinweek, SUM(F.pageview) AS Pageviews, F.course_id FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.Date_dayinweek IN (0,1,2,3,4,5,6) AND D.DATE_week IN (%s) AND F.course_id=%d AND F.module IN (%s) GROUP BY D.Date_Week, D.Date_dayinweek" %(course_weeks_str, course_id, communication_types_str)
result = connection.execute(Summary_CourseCommunicationVisitsByDayInWeek_sql);
for row in result:
pageview = row[4] if row[4] is not None else 0
record = {"Date_Year": row[0], "Date_Day": row[1], "Date_Week": row[2], "Date_dayinweek": row[3], "pageviews": pageview, "course_id": course_id}
save_summaryobject ("Summary_CourseCommunicationVisitsByDayInWeek", record)
trans.commit()
# Populate Summary_CourseAssessmentVisitsByDayInWeek - only contains quiz and assign
trans = connection.begin()
Summary_CourseAssessmentVisitsByDayInWeek_sql = "SELECT D.Date_Year, D.Date_Day, D.Date_Week, D.Date_dayinweek, SUM(F.pageview) AS Pageviews, F.course_id FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.Date_dayinweek IN (0,1,2,3,4,5,6) AND D.DATE_week IN (%s) AND F.course_id=%d AND F.module IN (%s) GROUP BY D.Date_Week, D.Date_dayinweek" %(course_weeks_str, course_id, assessment_types_str)
result = connection.execute(Summary_CourseAssessmentVisitsByDayInWeek_sql);
for row in result:
pageview = row[4] if row[4] is not None else 0
record = {"Date_Year": row[0], "Date_Day": row[1], "Date_Week": row[2], "Date_dayinweek": row[3], "pageviews": pageview, "course_id": course_id}
save_summaryobject ("Summary_CourseAssessmentVisitsByDayInWeek", record)
trans.commit()
# Populate Summary_SessionAverageLengthByDayInWeek
trans = connection.begin()
Summary_SessionAverageLengthByDayInWeek_sql = "SELECT S.Date_Year, S.Date_Week, S.Date_dayinweek, AVG(S.session_length_in_mins), S.course_id FROM Dim_Session S WHERE S.Date_dayinweek IN (0,1,2,3,4,5,6) AND S.DATE_week IN (%s) AND S.course_id=%d GROUP BY S.Date_Week, S.Date_dayinweek" %(course_weeks_str, course_id)
result = connection.execute(Summary_SessionAverageLengthByDayInWeek_sql);
for row in result:
session_average_in_minutes = row[3] if row[3] is not None else 0
record = {"Date_Year": row[0], "Date_Week": row[1], "Date_dayinweek": row[2], "session_average_in_minutes": session_average_in_minutes, "course_id": course_id}
save_summaryobject ("Summary_SessionAverageLengthByDayInWeek", record)
trans.commit()
# Populate Summary_SessionAveragePagesPerSessionByDayInWeek
trans = connection.begin()
Summary_SessionAveragePagesPerSessionByDayInWeek_sql = "SELECT S.Date_Year, S.Date_Week, S.Date_dayinweek, AVG(S.pageviews), S.course_id FROM Dim_Session S WHERE S.Date_dayinweek IN (0,1,2,3,4,5,6) AND S.DATE_week IN (%s) AND S.course_id=%d GROUP BY S.Date_Week, S.Date_dayinweek" %(course_weeks_str, course_id)
result = connection.execute(Summary_SessionAveragePagesPerSessionByDayInWeek_sql);
for row in result:
pages_per_session = row[3] if row[3] is not None else 0
record = {"Date_Year": row[0], "Date_Week": row[1], "Date_dayinweek": row[2], "pages_per_session": pages_per_session, "course_id": course_id}
save_summaryobject ("Summary_SessionAveragePagesPerSessionByDayInWeek", record)
trans.commit()
# Populate Summary_SessionsByDayInWeek
trans = connection.begin()
Summary_SessionsByDayInWeek_sql = "SELECT S.Date_Year, S.Date_Week, S.Date_dayinweek, COUNT(DISTINCT S.session_id) AS Session, S.course_id FROM Dim_Session S WHERE S.Date_dayinweek IN (0,1,2,3,4,5,6) AND S.DATE_week IN (%s) AND S.course_id=%d GROUP BY S.Date_Week, S.Date_dayinweek" %(course_weeks_str, course_id)
result = connection.execute(Summary_SessionsByDayInWeek_sql);
for row in result:
sessions = row[3] if row[3] is not None else 0
record = {"Date_Year": row[0], "Date_Week": row[1], "Date_dayinweek": row[2], "sessions": sessions, "course_id": course_id}
save_summaryobject ("Summary_SessionsByDayInWeek", record)
trans.commit()
# Populate Summary_ParticipatingUsersByDayInWeek
trans = connection.begin()
Summary_ParticipatingUsersByDayInWeek_sql = "SELECT D.Date_Year, D.Date_Day, D.Date_Week, D.Date_dayinweek, SUM(F.pageview) AS pageviews, F.course_id FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.Date_dayinweek IN (0,1,2,3,4,5,6) AND D.DATE_week IN (%s) AND F.course_id=%d GROUP BY D.Date_Week, D.Date_dayinweek;" % (course_weeks_str, course_id)
result = connection.execute(Summary_ParticipatingUsersByDayInWeek_sql);
for row in result:
pageview = row[4] if row[4] is not None else 0
record = {"Date_Year": row[0], "Date_Day": row[1], "Date_Week": row[2], "Date_dayinweek": row[3], "pageviews": pageview, "course_id": course_id}
save_summaryobject ("Summary_ParticipatingUsersByDayInWeek", record)
trans.commit()
# Populate Summary_UniquePageViewsByDayInWeek
trans = connection.begin()
Summary_UniquePageViewsByDayInWeek_sql = "SELECT D.Date_Year, D.Date_Day, D.Date_Week, D.Date_dayinweek, COUNT(DISTINCT F.page_id) AS UniquePageViews, F.course_id FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE D.Date_dayinweek IN (0,1,2,3,4,5,6) AND D.DATE_week IN (%s) AND F.course_id=%d GROUP BY D.Date_Week, D.Date_dayinweek;" % (course_weeks_str, course_id)
result = connection.execute(Summary_UniquePageViewsByDayInWeek_sql);
for row in result:
pageview = row[4] if row[4] is not None else 0
record = {"Date_Year": row[0], "Date_Day": row[1], "Date_Week": row[2], "Date_dayinweek": row[3], "pageviews": pageview, "course_id": course_id}
save_summaryobject ("Summary_UniquePageViewsByDayInWeek", record)
trans.commit()
"""
Session Processing
"""
def process_user_sessions(course_id):
"""
Gets each user (i.e., student) and calls a method to split that user's visits into sessions.
A session includes all pageviews where the gap between consecutive pageviews does not exceed 40 mins.
Args:
course_id: course id
"""
res_sql = "SELECT DISTINCT lms_id FROM dim_users WHERE course_id=%d;" %(course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
for row in rows:
user_pk = int(row[0])
determine_user_sessions(user_pk)
def determine_user_sessions(user_pk):
"""
Splits a user's visits into sessions.
A session includes all pageviews where the gap between consecutive pageviews does not exceed 40 mins.
Algorithm to Determine Sessions:
get all hits for a user in order from database (user_pk)
sessionid = 1
sessionupdatelist = []
loop through (starting at i=0) (all hits for a user)
sessionupdatelist append counter
get date time stamp for i=counter
get date time stamp for i=counter + 1
timediff = (date time stamp for i=counter + 1) - (date time stamp for i=counter)
if timediff > 40 mins
update all id's that match sessionupdatelist with sessionid in database
clear sessionupdatelist
update sessionid by 1
Args:
user_pk: user id in loop
"""
trans = connection.begin()
sessionid = getmax_sessionid()
sessionupdatelist = []
sessionupdatelist_i = []
res_sql = "SELECT id, time_id, date_id, course_id, unixtimestamp, date_id FROM fact_coursevisits WHERE user_id=%d order by unixtimestamp ASC;" %(user_pk)
result = connection.execute(res_sql);
rows = result.fetchall()
total_hits = len(rows)
timediff_mins = 0
for (i, row) in enumerate(rows):
current_id = row[0]
current_time = row[1]
current_date = row[2]
current_date_time = current_date + " " + current_time
sessionupdatelist.append(current_id)
sessionupdatelist_i.append(i)
if i < (total_hits-1):
next_id = rows[i+1][0]
next_time = rows[i+1][1]
next_date = rows[i+1][2]
next_date_time = next_date + " " + next_time
fmt = '%d-%b-%y %H:%M:%S'
d1 = datetime.datetime.strptime(current_date_time, fmt)
d2 = datetime.datetime.strptime(next_date_time, fmt)
# convert to unix timestamp
d1_ts = time.mktime(d1.timetuple())
d2_ts = time.mktime(d2.timetuple())
# they are now in seconds, subtract and then divide by 60 to get minutes.
timediff_mins = int(d2_ts-d1_ts) / 60
else:
timediff_mins = 40.0
if (timediff_mins >= 40.0):
sessionupdatelist_string = ','.join(map(str, sessionupdatelist))
session_start = int(float(rows[sessionupdatelist_i[0]][4]))
session_end = int(float(rows[sessionupdatelist_i[len(sessionupdatelist_i)-1]][4]))
session_length_in_mins = int(session_end-session_start) / 60
pageviews = len(sessionupdatelist_i)
dt = datetime.datetime.fromtimestamp(session_start)
date_year = dt.year
date_day = dt.day
date_week = dt.isocalendar()[1]
date_dayinweek = dt.weekday()
updatesql = "UPDATE fact_coursevisits SET session_id=%d WHERE id IN (%s);" %(sessionid, sessionupdatelist_string)
updateresult = connection.execute(updatesql);
record = {"user_id": user_pk, "date_week": date_week, "date_year":date_year, "date_dayinweek": date_dayinweek, "pageviews": pageviews, "session_length_in_mins": session_length_in_mins, "session_id":sessionid, "course_id": row[3], "unixtimestamp": row[4], "date_id": row[5] }
save_summaryobject ("dim_session", record)
sessionupdatelist = []
sessionupdatelist_i = []
sessionid = sessionid + 1
trans.commit()
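# --- Illustrative sketch (editor's addition, not part of the build script) ---
# A stand-alone version of the 40-minute sessionisation rule described in the
# docstring of determine_user_sessions, operating on a plain sorted list of
# unix timestamps (seconds) instead of database rows. The helper name and the
# example timestamps are hypothetical.
def _example_split_sessions(timestamps, gap_mins=40):
    sessions = []
    current = []
    for i, ts in enumerate(timestamps):
        current.append(ts)
        is_last = (i == len(timestamps) - 1)
        if is_last or (timestamps[i + 1] - ts) / 60.0 >= gap_mins:
            sessions.append(current)
            current = []
    return sessions
# e.g. _example_split_sessions([0, 600, 1200, 6000]) -> [[0, 600, 1200], [6000]]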
def calculate_sessionlengthandnopageviews(session_id, course_id):
"""
Calculates the session length in minutes and the number of pageviews in the session.
Stores the result in the dim_sessions table
Args:
session_id: session id
course_id: course id
"""
res_sql = "SELECT F.unixtimestamp, D.Date_Week, D.Date_Year, D.Date_DayInWeek FROM fact_coursevisits F LEFT JOIN dim_dates D ON F.Date_Id = D.Id LEFT JOIN dim_session S ON F.ID = S.fact_coursevisits_id WHERE S.session_id=%s AND F.course_id=%d order by F.unixtimestamp ASC;" %(session_id, course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
total_pageviews_in_session = len(rows)
timediff_mins = 0
session_start = int(float(rows[0][0]))
session_end = int(float(rows[total_pageviews_in_session-1][0]))
session_length_in_mins = int(session_end-session_start) / 60
date_week = rows[0][1]
date_year = rows[0][2]
date_dayinweek = rows[0][3]
record = {"date_week": date_week, "date_year":date_year, "date_dayinweek": date_dayinweek, "session_id": session_id, "course_id": course_id, "session_length_in_mins": session_length_in_mins, "pageviews": total_pageviews_in_session}
save_summaryobject ("dim_sessions", record)
def getmax_sessionid():
"""
Gets the maximum session id + 1
Returns:
int with next session id to use
"""
res_sql = "SELECT max(session_id) FROM dim_session;"
result = connection.execute(res_sql);
rows = result.fetchall()
max_val = rows[0][0]
if (max_val is None):
return 1
else:
return int(max_val) + 1
"""
Dashboard Timeseries Processing
"""
def get_contentpageviews_dataset(content_id, course_id, weeks):
"""
Calculates timeseries data for each content item.
Args:
content_id: content id
course_id: course id
weeks: list of weeks to produce the timeseries for
Returns:
comma-separated string of "[Date.UTC(year,month,day),pageviews]" datapoints for Highcharts rendering
"""
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day,SUM(F.pageview) AS Pageviews FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.DATE_week IN (%s) AND F.course_id=%d AND F.page_id=%d GROUP BY D.id;" %(weeks, course_id, content_id)
result = connection.execute(sql);
pagepageviews_dict = {}
dataset_list = []
for row in result:
pagepageviews_dict[row[0]] = row[4]
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day FROM Dim_Dates D WHERE D.DATE_week IN (%s)" %(weeks)
result = connection.execute(sql);
for row in result:
if row[0] in pagepageviews_dict:
pageviews = pagepageviews_dict[row[0]]
else:
pageviews = 0
datapoint = "[Date.UTC(%s,%s,%s),%s]" % (row[1],row[2],row[3],pageviews)
dataset_list.append(datapoint)
dataset = ','.join(map(str, dataset_list))
return dataset
def get_userpageviews_dataset(user_id, course_id, weeks):
"""
Calculates timeseries data for each user.
Args:
user_id: user id
course_id: course id
weeks: list of weeks to produce the timeseries for
Returns:
comma-separated string of "[Date.UTC(year,month,day),pageviews]" datapoints for Highcharts rendering
"""
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day, SUM(F.pageview) AS Pageviews FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.DATE_week IN (%s) AND F.course_id=%d AND F.user_pk='%s' GROUP BY D.id;" %(weeks, course_id, user_id)
result = connection.execute(sql);
userpageviews_dict = {}
dataset_list = []
for row in result:
userpageviews_dict[row[0]] = row[4]
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day FROM Dim_Dates D WHERE D.DATE_week IN (%s)" %(weeks)
result = connection.execute(sql);
for row in result:
if row[0] in userpageviews_dict:
pageviews = userpageviews_dict[row[0]]
else:
pageviews = 0
datapoint = "[Date.UTC(%s,%s,%s),%s]" % (row[1],row[2],row[3],pageviews)
dataset_list.append(datapoint)
dataset = ','.join(map(str, dataset_list))
return dataset
def gen_coursepageviews(course_id):
"""
Generates and saves pageview timelines in a table column.
Args:
course_id: course id
"""
global course_weeks
weeks = ','.join(map(str, course_weeks))
dataset = get_coursecontentpageviews_dataset(course_id, weeks)
update_coursesummarytable(course_id, 'contentcoursepageviews', dataset)
dataset = get_coursecommunicationpageviews_dataset(course_id, weeks)
update_coursesummarytable(course_id, 'communicationcoursepageviews', dataset)
dataset = get_courseassessmentpageviews_dataset(course_id, weeks)
update_coursesummarytable(course_id, 'assessmentcoursepageviews', dataset)
def get_coursecontentpageviews_dataset(course_id, weeks):
"""
Calculates timeseries data for content pageviews.
Args:
course_id: course id
weeks: list of weeks to produce the timeseries for
Returns:
comma-separated string of "[Date.UTC(year,month,day),pageviews]" datapoints for Highcharts rendering
"""
global communication_types
global assessment_types
excluse_contentype_list = communication_types + assessment_types
excluse_contentype__str = ','.join("'{0}'".format(x) for x in excluse_contentype_list)
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day,SUM(F.pageview) AS Pageviews FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.DATE_week IN (%s) AND F.course_id=%d AND F.module NOT IN (%s) GROUP BY D.id ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks, course_id, excluse_contentype__str)
result = connection.execute(sql);
pagepageviews_dict = {}
dataset_list = []
for row in result:
pagepageviews_dict[row[0]] = row[4]
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day FROM Dim_Dates D WHERE D.DATE_week IN (%s) ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks)
result = connection.execute(sql);
for row in result:
if row[0] in pagepageviews_dict:
pageviews = pagepageviews_dict[row[0]]
else:
pageviews = 0
datapoint = "[Date.UTC(%s,%s,%s),%s]" % (row[1],str(int(row[2])-1),row[3],pageviews)
dataset_list.append(datapoint)
dataset = ','.join(map(str, dataset_list))
return dataset
def get_coursecommunicationpageviews_dataset(course_id, weeks):
"""
Calculates timeseries data for communication pageviews.
Args:
course_id: course id
weeks: list of weeks to produce the timeseries for
Returns:
comma-separated string of "[Date.UTC(year,month,day),pageviews]" datapoints for Highcharts rendering
"""
global communication_types
communication_types_str = ','.join("'{0}'".format(x) for x in communication_types)
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day,SUM(F.pageview) AS Pageviews FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.DATE_week IN (%s) AND F.course_id=%d AND F.module IN (%s) GROUP BY D.id ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks, course_id, communication_types_str)
result = connection.execute(sql);
pagepageviews_dict = {}
dataset_list = []
for row in result:
pagepageviews_dict[row[0]] = row[4]
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day FROM Dim_Dates D WHERE D.DATE_week IN (%s) ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks)
result = connection.execute(sql);
for row in result:
if row[0] in pagepageviews_dict:
pageviews = pagepageviews_dict[row[0]]
else:
pageviews = 0
datapoint = "[Date.UTC(%s,%s,%s),%s]" % (row[1],str(int(row[2])-1),row[3],pageviews)
dataset_list.append(datapoint)
dataset = ','.join(map(str, dataset_list))
return dataset
def get_courseassessmentpageviews_dataset(course_id,weeks):
"""
Calculates timeseries data for assessment pageviews.
Args:
course_id: course id
weeks: list of weeks to produce the timeseries for
Returns:
comma-separated string of "[Date.UTC(year,month,day),pageviews]" datapoints for Highcharts rendering
"""
global assessment_types
assessment_types_str = ','.join("'{0}'".format(x) for x in assessment_types)
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day,SUM(F.pageview) AS Pageviews FROM Dim_Dates D LEFT JOIN Fact_Coursevisits F ON D.Id = F.Date_Id WHERE D.DATE_week IN (%s) AND F.course_id=%d AND F.module IN (%s) GROUP BY D.id ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks, course_id, assessment_types_str)
result = connection.execute(sql);
pagepageviews_dict = {}
dataset_list = []
for row in result:
pagepageviews_dict[row[0]] = row[4]
sql = "SELECT D.id, D.Date_Year, D.Date_Month, D.Date_Day FROM Dim_Dates D WHERE D.DATE_week IN (%s) ORDER BY D.date_year, D.date_month, D.date_day;" %(weeks)
result = connection.execute(sql);
for row in result:
if row[0] in pagepageviews_dict:
pageviews = pagepageviews_dict[row[0]]
else:
pageviews = 0
datapoint = "[Date.UTC(%s,%s,%s),%s]" % (row[1],str(int(row[2])-1),row[3],pageviews)
dataset_list.append(datapoint)
dataset = ','.join(map(str, dataset_list))
return dataset
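# --- Illustrative sketch (editor's addition, not part of the build script) ---
# Shows the shape of the Highcharts series strings built by the *_dataset
# functions above: each point is "[Date.UTC(year,month,day),pageviews]", and
# JavaScript Date.UTC months are 0-based, hence month - 1. The sample values
# are hypothetical.
def _example_highcharts_series():
    points = []
    for (year, month, day, views) in [(2015, 3, 2, 17), (2015, 3, 3, 0)]:
        points.append("[Date.UTC(%s,%s,%s),%s]" % (year, month - 1, day, views))
    # -> "[Date.UTC(2015,2,2),17],[Date.UTC(2015,2,3),0]"
    return ','.join(points)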
"""
Process Assessment Grades
"""
def generate_assessmentgrades(course_id, updatecol):
"""
Creates the Grades HTML table. A row is included for each student with columns for each assessment.
Pandas dataframes are used to speed up processing.
Args:
course_id: course id
updatecol: column to update in the course summary table
"""
global assessment_types
assessment_types_str = ','.join("'{0}'".format(x) for x in assessment_types)
# get all quizzes and assignments
res_sql = "SELECT title, content_type, content_id, parent_id FROM dim_pages WHERE content_type IN (%s) AND course_id=%d order by order_no;" %(assessment_types_str, course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
titles_list = []
page_id_list = []
for page in rows:
titles_list.append(page[0])
page_id_list.append(page[2])
# print table header
# assignments and quizzes as headings
table = []
table.append("<thead><tr>")
table.append("<th>Name</th>")
for x in titles_list:
table.append("<th>%s</th>" % (x))
table.append("</tr>")
table.append("</thead>")
table.append("<tbody>\n")
# load all quiz scores into Pandas
quizsql = "SELECT content_id, user_id, grade FROM dim_submissionattempts WHERE course_id=%s;" % (course_id)
newconnection = engine.raw_connection()
df = pd.read_sql(quizsql, newconnection)
# loop over all students
sql = "SELECT lms_id,firstname,lastname,role FROM dim_users WHERE role='Student' ORDER BY lastname;" #need to make dynamic for each type of role
result = connection.execute(sql);
for row in result:
table.append("<tr class='stats-row'><td class='stats-title'><span>%s %s</span></td>" % (row[1], row[2]))
for i in page_id_list:
filterd_df = df[(df.content_id == int(i)) & (df.user_id == int(row[0]))]
quiz_score = filterd_df['grade'].max()
table.append("<td>%s</td>" % (quiz_score))
table.append("</tr>\n")
table.append("</tbody>\n")
table = ''.join(table)
update_coursesummarytable (course_id, updatecol, table.replace("'", "''"))
def get_usergrade(pageid, course_id, user_id):
"""
Returns the grade a user received in an assessment item
Args:
pageid: content id of the assessment item
course_id: course id
user_id: user id
Returns:
grade as a float
"""
avg_grade = 0
sql = "SELECT max(grade) AS maxgrade FROM dim_submissionattempts WHERE content_id=%s AND course_id=%s AND user_id=%s;" % (pageid, course_id, user_id)
result = connection.execute(sql);
for row in result:
if row[0] is not None:
avg_grade = row[0]
rounded_grade = "%.2f" % round(float(avg_grade),2)
return rounded_grade
"""
Process Student Access Table
"""
def generate_studentbyweektable(course_id, displaytype):
"""
Creates a student by weeks table. Uses pandas dataframes to speed up processing
Args:
course_id: course id
displaytype: counts or vis
"""
global course_weeks
global week_totals
week_totals = [0] * len(course_weeks)
weeks_str_list = ','.join(map(str, course_weeks))
total_users = get_totalusers(course_id)
total_pageviews = get_totalpageviews(course_id)
table = [] # need class="heat-map"
table.append("<thead><tr>")
table.append("<th>Firstname</th><th>Lastname</th><th>Account Type</th><th>View</th>")
for x in range(len(course_weeks)): #range(0, 10):
table.append("<th>Week %d</th>" % (x+1))
table.append("</tr>")
table.append("</thead>")
table.append("<tbody>\n")
useraccesssql = "SELECT F.pageview, D.date_week, F.user_pk FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.course_id=%d AND D.Date_week IN (%s);" % (course_id, weeks_str_list)
newconnection = engine.raw_connection()
df = pd.read_sql(useraccesssql, newconnection)
sql = "SELECT user_pk,firstname,lastname,role FROM dim_users WHERE course_id=%d ORDER BY lastname;" %(course_id) #need to make dynamic
result = connection.execute(sql);
for row in result:
table.append("<tr class='stats-row'><td class='stats-title'>%s</td><td class='stats-title'>%s</td><td class='stats-title'>%s</td><td class='stats-title'><a href='/coursemember?user_id=%s&course_id=%s' class='btn btn-small btn-primary'>View</a></td>" % (row[1],row[2],row[3], row[0], course_id))
totalcounts_dict = {}
for wk in course_weeks:
totalcounts_dict[wk] = 0
filtered_df = df[(df.user_pk == row[0])]
by_week = filtered_df.groupby(by=['date_week'])['pageview'].sum()
for ind, dfrow in by_week.iteritems():
totalcounts_dict[ind] = dfrow
if displaytype=="vis":
table.append('<td> </td>')
else:
table.append(makeweek_rows(course_id, course_weeks,totalcounts_dict,total_users,total_pageviews,userpercent=False ))
table.append("</tr>\n")
table.append("<tr class='stats-row'><td class='stats-title'></td><td class='stats-title'></td><td class='stats-title'></td><td class='stats-title'></td><td class='stats-title'>Total</td>")
for week in week_totals:
table.append('<td class="stats-title">%d</td>' % (week))
table.append("</tbody>\n")
table = ''.join(table)
if displaytype == 'counts':
update_coursesummarytable (course_id, 'users_counts_table', table.replace("'", "''"))
else:
update_coursesummarytable (course_id, 'users_vis_table', table.replace("'", "''"))
"""
Process Content, Assessment and Communication Tables
"""
def generate_contentaccesstable(course_id, contentdisplaytype, displaytype, updatecol):
"""
Generates and saves the HTML access tables for communication and assessment content.
Args:
course_id: course id
contentdisplaytype: "communication", "forum" or "submmision" (assessment)
displaytype: "user", "count" or "posts"
updatecol: column to update in summary_courses table
"""
global course_weeks
global forum_hidden_list
global submission_hidden_list
global communication_types
global assessment_types
global week_totals
forum_hidden_list = ""
submission_hidden_list = ""
weeks_str_list = ','.join(map(str, course_weeks))
week_totals = [0] * len(course_weeks)
total_users = get_totalusers(course_id)
total_pageviews = get_totalpageviews(course_id)
hidden_list = ""
if (contentdisplaytype == "communication" or contentdisplaytype == "forum"):
#contenttype = "'forum'"
contenttype = ','.join("'{0}'".format(x) for x in communication_types)
hidden_list = forum_hidden_list
else:
#contenttype = "'quiz','assign'"
contenttype = ','.join("'{0}'".format(x) for x in assessment_types)
hidden_list = submission_hidden_list
percent_calc_text = ""
if displaytype == "user":
percent_calc_text = "Percentage of students"
else:
percent_calc_text = "Percentage in relation to total pageviews"
table = []
table.append("<thead><tr>")
table.append("<th>Name</th><th>Type</th><th>View</th>")
for x in range(len(course_weeks)):
table.append("<th>Week %d</th>" % (x+1))
table.append("<th>Total</th><th>%<a href='#' class='btn btn-mini' data-rel='tooltip' data-original-title='%s'><i class='halflings-icon info-sign'></i></a></th>" % (percent_calc_text))
table.append("</tr>")
table.append("</thead>")
table.append("<tbody>\n")
# add rows for pages that show all quizzes, forums and assignments
# code can be reduced in size by processing as a list
if course_type == "Moodle":
if (contentdisplaytype == "communication"):
table.append("<tr class='stats-row'><td class='stats-title'><span>All Forums</span> <a href='#' class='btn btn-mini' data-rel='tooltip' data-original-title='All Forums refers to views of the page that has links to each discussion forum'><i class='halflings-icon info-sign'></i></a></td><td class='stats-title'>Forum</td><td class='stats-title'></td>")
#for week in course_weeks:
if displaytype == "user":
user_counts_list = get_userweekcount_for_specificallcoursepage(weeks_str_list, course_id, "forum", "view all")
table.append(makeweek_rows(course_id,course_weeks,user_counts_list,total_users,total_pageviews, userpercent=True))
else:
total_counts_list = get_weekcount_for_specificallcoursepage(weeks_str_list, course_id, "forum", "view all")
table.append(makeweek_rows(course_id,course_weeks,total_counts_list,total_users,total_pageviews, userpercent=False))
table.append("</tr>\n")
if (contentdisplaytype == "submmision"):
table.append("<tr class='stats-row'><td class='stats-title'><span>All Quizzes</span> <a href='#' class='btn btn-mini' data-rel='tooltip' data-original-title='All Quizzes refers to views of the page that has links to each quiz'><i class='halflings-icon info-sign'></i></a></td><td class='stats-title'>Quiz</td><td class='stats-title'></td>")
if displaytype == "user":
user_counts_dict = get_userweekcount_for_specificallcoursepage(weeks_str_list, course_id, "quiz", "view all")
table.append(makeweek_rows(course_id,course_weeks,user_counts_dict,total_users,total_pageviews, userpercent=True))
else:
total_counts_dict = get_weekcount_for_specificallcoursepage(weeks_str_list, course_id, "quiz", "view all")
table.append(makeweek_rows(course_id,course_weeks,total_counts_dict,total_users,total_pageviews, userpercent=False))
table.append("</tr>\n")
table.append("<tr class='stats-row'><td class='stats-title'><span>All Assignments</span> <a href='#' class='btn btn-mini' data-rel='tooltip' data-original-title='All Assignments refers to views of the page that has links to each assignment'><i class='halflings-icon info-sign'></i></a></td><td class='stats-title'>Assign</td><td class='stats-title'></td>")
if displaytype == "user":
user_counts_dict = get_userweekcount_for_specificallcoursepage(weeks_str_list, course_id, "assign", "view all")
table.append(makeweek_rows(course_id,course_weeks,user_counts_dict,total_users,total_pageviews, userpercent=True))
else:
total_counts_dict = get_weekcount_for_specificallcoursepage(weeks_str_list, course_id, "assign", "view all")
table.append(makeweek_rows(course_id, course_weeks,total_counts_dict,total_users,total_pageviews, userpercent=False))
table.append("</tr>\n")
res_sql = "SELECT title, content_type, content_id, parent_id FROM dim_pages WHERE content_type IN (%s) AND course_id=%d order by order_no;" %(contenttype, course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
for row in rows:
pageid2 = int(row[2])
table.append("<tr class='stats-row'><td class='stats-title'><span>%s</span></td><td class='stats-title'>%s</td><td class='stats-title'><a href='/coursepage?course_id=%d&page_id=%s' class='btn btn-small btn-primary'>View</a></td>" % (row[0], row[1], course_id, str(pageid2)))
if displaytype == "user":
user_counts_dict = get_userweekcount(weeks_str_list, row[2],course_id, "","")
table.append(makeweek_rows(course_id,course_weeks,user_counts_dict,total_users,total_pageviews, userpercent=True))
elif displaytype=='posts':
posts_dict = get_posts(weeks_str_list, row[2],course_id)
table.append(makeweek_rows(course_id,course_weeks,posts_dict,total_users,total_pageviews, userpercent=False))
else:
total_counts_dict = get_weekcount(weeks_str_list, row[2],course_id,"","")
table.append(makeweek_rows(course_id,course_weeks,total_counts_dict,total_users,total_pageviews, userpercent=False))
table.append("</tr>\n")
table.append("<tr class='stats-row'><td class='stats-title'><span>Total</span></td><td class='stats-title'></td><td class='stats-title'></td>")
for week in week_totals:
table.append('<td>%d</td>' % (week))
table.append('<td> </td><td> </td>')
table.append("</tr>\n")
table.append("</tbody>\n")
table = ''.join(table)
update_coursesummarytable (course_id, updatecol, table.replace("'", "''"))
def makeweek_rows(course_id,weeks,count_dict,total_users,total_pageviews, userpercent=False):
"""
Builds the per-week <td> cells for one table row, followed by the row total and a
percentage (of total students if userpercent is True, otherwise of total pageviews).
Also accumulates the module-level week_totals.
"""
global week_totals
row = ""
i = 0
rowcount = 0
total = 0
for week in weeks:
val = 0
if week in count_dict:
val = count_dict[week]
rowcount = rowcount + val
row = row + '<td>%d</td>' % (val)
week_totals[i] = week_totals[i] + val
i = i + 1
percent = 0.0
if (userpercent==True):
percent = float(rowcount)*(100.0/total_users)
else:
percent = float(rowcount)*(100.0/total_pageviews)
row = row + "<td class='stats-title'>%d</td>" % (rowcount)
row = row + '<td>%.4f</td>' % (round(percent,4))
return row
def generate_contentaccesstreetable(course_id, displaytype):
"""
Generates and saves the HTML tree table for content access. This is the expandable treetable.
Args:
course_id: course id
displaytype: counts or user
"""
global treetable
global course_weeks
global week_totals
global sitetree
sitetree = []
week_totals = [0] * len(course_weeks)
weeks_str_list = ','.join(map(str, course_weeks))
total_users = get_totalusers(course_id)
total_pageviews = get_totalpageviews(course_id)
treetable = []
treetable.append("<thead><tr>")
treetable.append("<th>Name</th><th>Type</th><th>View</th>")
for x in range(len(course_weeks)):
treetable.append("<th>Week %d</th>" % (x+1))
treetable.append("<th>Total</th><th>%<a href='#' class='btn btn-mini' data-rel='tooltip' data-original-title='Percentage in relation to total pageviews.'><i class='halflings-icon info-sign'></i></a></th>")
treetable.append("</tr>")
treetable.append("</thead>")
treetable.append("<tbody>\n")
# Add counts for main access page
treetable.append("<tr class='stats-row'><td class='stats-title'>Main Course Homepage</td><td class='stats-title'>Course</td><td class='stats-title'></td>")
#for week in course_weeks:
if displaytype == "user":
user_counts_dict = get_userweekcount_for_specificallcoursepage(weeks_str_list, course_id, "course", "view")
treetable.append(makeweek_rows(course_id,course_weeks,user_counts_dict,total_users,total_pageviews, userpercent=True))
else:
total_counts_dict = get_weekcount_for_specificallcoursepage(weeks_str_list, course_id, "course", "view")
treetable.append(makeweek_rows(course_id,course_weeks,total_counts_dict,total_users,total_pageviews, userpercent=False))
treetable.append("</tr>\n")
moodle_treetable(course_id, displaytype, 0)
treetable.append("<tr class='stats-row'><td class='stats-title'><span>Total</span></td><td class='stats-title'></td><td class='stats-title'></td>")
for week in week_totals:
treetable.append('<td>%d</td>' % (week))
treetable.append('<td> </td><td> </td>')
treetable.append("</tr>\n")
treetable.append("</tbody>\n")
treetable = ''.join(treetable)
sitetree_json = json.dumps(sitetree)
if displaytype == 'counts':
update_coursesummarytable (course_id, 'content_counts_table', treetable.replace("'", "''"))
update_coursesummarytable (course_id, 'sitetree', sitetree_json.replace("'", "''"))
else:
update_coursesummarytable (course_id, 'content_user_table', treetable.replace("'", "''"))
def moodle_treetable(course_id, displaytype, parent_id=0):
"""
Recursively builds the content tree table rows from the course structure (handles both Moodle and Blackboard page types)
"""
global treetable
global course_weeks
global content_hidden_list
global tree_table_totals
global communication_types
global assessment_types
global sitetree
total_users = get_totalusers(course_id)
total_pageviews = get_totalpageviews(course_id)
excluse_contentype_list = communication_types + assessment_types
excluse_contentype_str = ','.join("'{0}'".format(x) for x in excluse_contentype_list)
weeks_str_list = ','.join(map(str, course_weeks))
res_sql = "SELECT title, content_type, content_id, parent_id, order_no FROM dim_pages WHERE parent_id=%d AND content_type NOT IN (%s) AND course_id=%d order by order_no;" %(parent_id, excluse_contentype_str, course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
for row in rows:
pageid2 = int(row[2])
sitetree.append((pageid2,parent_id))
order_no = row[4]
content_type = row[1]
classname = ""
parentidstring =""
if course_type=="Moodle":
pagestoinclude = ['page', 'folder', 'section', 'course_modules'] #"label" 'assign', 'forum', 'quiz', 'resource', 'url', 'book', 'course_module_instance_list', 'user', 'course_resources_list', 'activity_report', 'report', 'recent_activity', 'readtracking', 'event'
else:
pagestoinclude = ['course/x-bb-coursetoc','resource/x-bb-folder', 'resource/x-bb-stafffolder', 'resource/x-bb-discussionfolder']
if row[1] in pagestoinclude:
classname = "folder"
else:
classname = "file"
if row[3]==0:
parentidstring = ""
else:
parentidstring = "data-tt-parent-id='%s'" % (str(row[3]))
if (row[1]=='section'):
treetable.append("<tr data-tt-id='%s' %s class='stats-row'><td class='stats-title'><span class='%s'>%s</span></td><td class='stats-title'>%s</td><td class='stats-title'><a href='/coursepage?course_id=%d&page_id=%s§ion_order=%s' class='btn btn-small btn-primary'>View</a></td></td>" % (str(pageid2), parentidstring, classname, row[0], row[1], course_id, str(pageid2), str(row[4])))
else:
treetable.append("<tr data-tt-id='%s' %s class='stats-row'><td class='stats-title'><span class='%s'>%s</span></td><td class='stats-title'>%s</td><td class='stats-title'><a href='/coursepage?course_id=%d&page_id=%s' class='btn btn-small btn-primary'>View</a></td>" % (str(pageid2), parentidstring, classname, row[0], row[1], course_id, str(pageid2)))
if displaytype == "user":
user_counts_dict = get_userweekcount(weeks_str_list, row[2],course_id, content_type, order_no)
treetable.append(makeweek_rows(course_id,course_weeks,user_counts_dict,total_users,total_pageviews, userpercent=True))
else:
total_counts_dict = get_weekcount(weeks_str_list, row[2],course_id, content_type, order_no)
treetable.append(makeweek_rows(course_id,course_weeks,total_counts_dict,total_users,total_pageviews, userpercent=False))
treetable.append("</tr>\n")
moodle_treetable(course_id, displaytype, parent_id=int(row[2]))
def get_weekcount_for_specificallcoursepage(week, course_id, module, action):
global df_treetable
pageviews = {}
for wk in course_weeks:
pageviews[wk] = 0
filtered_df = df_treetable[(df_treetable.module == module) & (df_treetable.action == action) & (df_treetable.course_id == int(course_id))]
by_week = filtered_df.groupby(by=['date_week'])['pageview'].sum()
for ind, dfrow in by_week.iteritems():
pageviews[ind] = dfrow
return pageviews
def get_userweekcount_for_specificallcoursepage(week, course_id, module, action):
global df_treetable
usercounts = {}
for wk in course_weeks:
usercounts[wk] = 0
filtered_df = df_treetable[(df_treetable.course_id == int(course_id)) & (df_treetable.module == module) & (df_treetable.action == action)]
unique_by_week = filtered_df.groupby('date_week').user_pk.nunique() #filtered_df.groupby(by=['date_week'])['pageview'].sum()
total_users = get_totalusers(course_id)
for ind, dfrow in unique_by_week.iteritems():
usercounts[ind] = dfrow #percent
return usercounts
def get_posts(week, pageid, course_id):
"""
Returns the number of forums posts made in a specified week
Args:
week: week no
pageid: content id of the forum item
course_id: course id
Returns:
posts: forum posts as an integer
"""
posts = {}
sql = "SELECT COUNT(F.id) AS pageviews, D.date_week FROM summary_posts F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.course_id=%d AND F.forum_id=%d AND D.date_week IN (%s) GROUP BY D.date_week ORDER BY D.date_week;" % (course_id, int(pageid), week)
result = connection.execute(sql);
for row in result:
posts[int(row[1])] = int(row[0])
return posts
def get_weekcount(week,pageid, course_id, content_type, order_no):
global df_treetable
pageviews = {}
pageid2 = int(pageid)
for wk in course_weeks:
pageviews[wk] = 0
if content_type=="section":
filtered_df = df_treetable[(df_treetable.section_order == int(pageid)) & (df_treetable.course_id == int(course_id))]
else:
filtered_df = df_treetable[(df_treetable.page_id == int(pageid)) & (df_treetable.course_id == int(course_id))]
by_week = filtered_df.groupby(by=['date_week'])['pageview'].sum()
for ind, dfrow in by_week.iteritems():
pageviews[ind] = dfrow
return pageviews
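# Returns a dict of {week_no: count of distinct users} for one page (or one section,
# matched on order_no) in a course, taken from the global df_treetable DataFrame.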
def get_userweekcount(week,pageid, course_id, content_type, order_no):
global df_treetable
usercounts = {}
pageid2 = int(pageid)
for wk in course_weeks:
usercounts[wk] = 0
filtered_df = None
if content_type=="section":
filtered_df = df_treetable[(df_treetable.course_id == int(course_id)) & (df_treetable.section_order == order_no)]
else:
filtered_df = df_treetable[(df_treetable.course_id == int(course_id)) & (df_treetable.page_id == int(pageid))]
unique_by_week = filtered_df.groupby('date_week').user_pk.nunique() #filtered_df.groupby(by=['date_week'])['pageview'].sum()
total_users = get_totalusers(course_id)
for ind, dfrow in unique_by_week.iteritems():
usercounts[ind] = dfrow #percent
return usercounts
def get_totalusers(course_id):
"""
Returns the number of students in a course
Todo: extend to take role as parameter
Args:
course_id: course id
Returns:
usercounts: no of users as an integer
"""
sql = "SELECT COUNT(U.id) AS usercount FROM Dim_Users U WHERE U.course_id=%d AND U.role='Student';" % (course_id)
result = connection.execute(sql);
for row in result:
if row[0] is not None:
usercounts = int(row[0])
return int(usercounts)
def get_totalpageviews(course_id):
"""
Returns the total number of pageviews for a course
Args:
course_id: course id
Returns:
        counts: no of pageviews as an integer
"""
sql = "SELECT COUNT(id) AS views FROM fact_coursevisits U WHERE course_id=%d;" % (course_id)
counts = 0
result = connection.execute(sql);
for row in result:
if row[0] is not None:
counts = int(row[0])
return int(counts)
def get_eventpre_postcounts(week, pageid, course_id):
"""
Calculate pageviews pre and post a day of the week
"""
pageviews = 0
pageid2 = int(pageid)
sql = "SELECT SUM(F.pageview) AS pageviews FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.course_id=%d AND F.page_id=%d AND D.date_week=%d AND D.date_dayinweek IN (0,1);" % (course_id, int(pageid2), week)
result = connection.execute(sql);
for row in result:
if row[0] is not None:
pageviews = int(row[0])
return int(pageviews)
def get_indivuserweekcount(week, userid, course_id):
"""
Returns pageviews by week for the user
"""
pageviews = {}
sql = "SELECT SUM(F.pageview) AS pageviews, D.date_week, F.user_pk FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.user_pk='%s' AND F.course_id=%d AND D.Date_week IN (%s) GROUP BY D.date_week ORDER BY D.date_week;" % (userid, course_id, week)
result = connection.execute(sql);
for row in result:
pageviews[row[1]] = int(row[0])
return pageviews
def get_usereventpre_postcounts(week,userid,course_id):
"""
Returns pageviews for a user by week pre and post a day of the week
"""
pageviews = 0
sql = "SELECT SUM(F.pageview) AS pageviews FROM Fact_Coursevisits F INNER JOIN Dim_Dates D ON F.Date_Id = D.Id WHERE F.course_id=%d AND F.user_pk='%s' AND D.date_week=%d AND D.date_dayinweek IN (0,1);" % (course_id, str(userid), week)
result = connection.execute(sql);
for row in result:
if row[0] is not None:
pageviews = int(row[0])
return int(pageviews)
"""
Ingest Blackboard Data
"""
def populate_dim_users_table_blackboard(user_membership_resourcefile, course_id):
"""
Extracts users from the course export files and inserts into the dim_users table
Todo: Update to include all roles not only students
Args:
user_membership_resourcefile: file with the list of users
course_id: course id
"""
global coursemember_to_userid_dict
coursemember_to_userid_dict = {}
tree = ET.ElementTree(file=user_membership_resourcefile)
root = tree.getroot()
username = ""
firstname = ""
lastname = ""
role = ""
email = ""
for child_of_root in root:
userpk = str(child_of_root.attrib["id"])
user_id = userpk[1:(len(userpk)-2)]
for child_child_of_root in child_of_root:
if child_child_of_root.tag == "EMAILADDRESS":
email = child_child_of_root.attrib["value"]
if child_child_of_root.tag == "USERNAME":
username = child_child_of_root.attrib["value"]
for child_child_child_of_root in child_child_of_root:
if child_child_child_of_root.tag=="GIVEN":
firstname = child_child_child_of_root.attrib["value"]
firstname = firstname.replace("'", "''")
elif child_child_child_of_root.tag=="FAMILY":
lastname = child_child_child_of_root.attrib["value"]
lastname = lastname.replace("'", "''")
elif child_child_child_of_root.tag=="ROLEID":
role = child_child_child_of_root.attrib["value"]
user_pk = str(course_id) + "_" + user_id
if (firstname is None) or (len(firstname) == 0):
firstname = "blank"
else:
firstname = firstname.replace("'", "''")
if (lastname is None) or (len(lastname) == 0):
lastname = "blank"
else:
lastname = lastname.replace("'", "''")
if (email is None) or (len(email) == 0):
email = "blank"
else:
email = email.replace("'", "''")
if role != "STAFF":
row = {"lms_id": user_id, "firstname": firstname, "lastname": lastname, "username": username, "email": email, "role": role, "user_pk": user_pk, "course_id": course_id}
save_summaryobject ("dim_users", row)
def process_IMSCP_Manifest_Blackboard(path, course_id):
"""
Traverses the Blackboard IMS Manifest XML to get the course structure
Lots of things in the Blackboard XML Manifest don't make sense.
The format was reverse engineered to get the LOOP trial running
Todo: Refactor IMSCP Manifest processing code
"""
manifest_file = os.path.normpath(path + "/imsmanifest.xml")
tree = ET.ElementTree(file=manifest_file)
root = tree.getroot()
user_membership_resourcefile = ""
usermsgstate_resourcefile = ""
membership_file = ""
# Get all resource no and the pk for each.
# Store a dict to map resource no to pk and a dict of counts of content types
# Make a dict to match resource no to pk (id)
resource_pk_dict = {}
resource_type_lookup_dict = {}
resource_name_lookup_dict = {}
resource_id_type_lookup_dict = {}
internal_handles = []
toc_dict ={'discussion_board_entry':0,'staff_information':0, 'announcements_entry':0, 'check_grade':0}
# Make a dict to store counts of content types
resource_type_dict = {}
assessment_res_to_id_dict = {}
for elem in tree.iter(tag='resource'):
file = elem.attrib["{http://www.blackboard.com/content-packaging/}file"]
title = elem.attrib["{http://www.blackboard.com/content-packaging/}title"]
# http://www.peterbe.com/plog/unicode-to-ascii
title = unicodedata.normalize('NFKD', unicode(title)).encode('ascii','ignore')
type = elem.attrib["type"]
if type=="resource/x-bb-document":
type=get_actual_contenttype_blackboard(os.path.normpath(path + "/" + file))
resource_no = elem.attrib["{http://www.w3.org/XML/1998/namespace}base"]
# the type file has invalid xml course/x-bb-trackingevent so ignore
real_id = "0"
if type not in ["course/x-bb-trackingevent","assessment/x-bb-qti-attempt", "resource/x-bb-announcement"]:
cur_id = getID_fromResourceFile(os.path.normpath(path + "/" + file), type)
resource_pk_dict[resource_no] = cur_id
resource_type_lookup_dict[resource_no] = type
resource_name_lookup_dict[resource_no] = title
if cur_id!='0':
resource_id_type_lookup_dict[cur_id[1:len(cur_id)-2]] = type
real_id = cur_id[1:-2]
else:
resource_id_type_lookup_dict[cur_id] = type
if type in resource_type_dict:
resource_type_dict[type] = resource_type_dict[type] + 1
else:
resource_type_dict[type] =1
if type == "course/x-bb-user":
user_membership_resourcefile = file
if type == "discussionboard/x-bb-usermsgstate":
usermsgstate_resourcefile = file
if type == "resource/x-bb-discussionboard":
process_blackboard_forum(os.path.normpath(path + "/" + file), cur_id, title, course_id)
elif type == "resource/x-bb-conference":
process_blackboard_conferences(os.path.normpath(path + "/" + file), cur_id, title, course_id)
elif type == "course/x-bb-gradebook":
gradebook_file = file
elif type == "assessment/x-bb-qti-test":
process_blackboard_test(os.path.normpath(path + "/" + file), cur_id, title, course_id, type)
assessment_res_to_id_dict[resource_no] = cur_id[1:-2]
elif type == 'membership/x-bb-coursemembership':
membership_file = file
if type in ['assessment/x-bb-qti-test', 'resource/x-bb-discussionboard']:
includedrow = {"course_id": course_id, "content_type": type, "content_id": real_id, "title": title.replace("'", "''"), "order_no": 0, "parent_id": 0}
save_summaryobject ("dim_pages", includedrow)
parent_map = {c:p for p in tree.iter() for c in p}
order = 1
for node in parent_map:
if ((node.tag=="item") and ('identifierref' in node.attrib)):
curr_node = node.attrib["identifierref"]
parent_node = parent_map[node]
if parent_node.tag=="organization":
parent_resource = parent_node.attrib["identifier"]
if parent_resource in resource_pk_dict:
parent_resource_no = resource_pk_dict[parent_resource]
else:
parent_resource_no = 0
elif 'identifierref' in parent_node.attrib:
parent_resource = parent_node.attrib["identifierref"]
if parent_resource in resource_pk_dict:
parent_resource_no = resource_pk_dict[parent_resource]
else:
parent_resource_no = 0
else:
parent_resource_no = 0
curr_node_name = resource_name_lookup_dict[curr_node] #node.attrib["{http://www.blackboard.com/content-packaging/}title"]
curr_node_type = resource_type_lookup_dict[curr_node]
curr_node_id = resource_pk_dict[curr_node]
curr_node_id = curr_node_id[1:len(curr_node_id)-2]
if parent_resource_no != 0:
parent_resource_no = parent_resource_no[1:len(parent_resource_no)-2]
#get actual content handle
content_handle = get_contenthandle_blackboard(os.path.normpath(path + "/" + curr_node + ".dat"))
if content_handle in toc_dict:
toc_dict[content_handle] = curr_node_id
if content_handle =="staff_information":
curr_node_type = "resource/x-bb-stafffolder"
elif content_handle == "discussion_board_entry":
curr_node_type = "resource/x-bb-discussionfolder"
elif content_handle == "check_grade":
curr_node_type = "course/x-bb-gradebook"
if curr_node_type!="resource/x-bb-asmt-test-link":
row = {"course_id": course_id, "content_type": curr_node_type, "content_id": curr_node_id, "title": curr_node_name.replace("'", "''"), "order_no": order, "parent_id": parent_resource_no}
save_summaryobject ("dim_pages", row)
else:
x = 1
order += 1
# store single announcements item to match announcements coming from log
announcements_id = getmax_pageid(course_id)
row = {"course_id": course_id, "content_type": "resource/x-bb-announcement", "content_id": announcements_id, "title": "Announcements", "order_no": 0, "parent_id": 0}
save_summaryobject ("dim_pages", row)
# store single view gradebook to match check_gradebook coming from log
gradebook_id = getmax_pageid(course_id)
row = {"course_id": course_id, "content_type": "course/x-bb-gradebook", "content_id": gradebook_id, "title": "View Gradebook", "order_no": 0, "parent_id": 0}
save_summaryobject ("dim_pages", row)
# remap /x-bbstaffinfo and discussion boards
# toc_dict ={'discussion_board_entry':0,'staff_information':0, 'announcements_entry':0, 'check_grade':0}
if toc_dict['staff_information']!=0:
update_dimpage(course_id, "resource/x-bb-staffinfo", toc_dict['staff_information'])
# process memberships
member_to_user_dict = process_blackboard_memberships(os.path.normpath(path + "/" + membership_file), course_id)
# process quiz attempts in gradebook file
content_link_id_to_content_id_dict = process_blackboard_attempt(os.path.normpath(path + "/" + gradebook_file), course_id, assessment_res_to_id_dict, member_to_user_dict, path)
return user_membership_resourcefile, resource_id_type_lookup_dict, usermsgstate_resourcefile, announcements_id, gradebook_id, content_link_id_to_content_id_dict
def build_starschema_blackboard(course_log_file, course_id, announcements_id, gradebook_id, content_link_id_to_content_id_dict, idtypemap):
"""
Extract entries from the csv log file from a Blackboard course and inserts each entry as a row in the fact_coursevisits table
"""
count = 0
header = None
trans = connection.begin()
with open(course_log_file, 'rU') as f:
reader = csv.reader(f)
for row in reader:
count = count + 1
if (header == None):
header = row
continue
arow = {}
for header_index in range (0, len(header)):
arow[(header[header_index])] = row[header_index]
# Process row
cur_visit_date = datetime.datetime.strptime(arow["TIMESTAMP"], "%d-%b-%y %H:%M:%S") #eg 01-SEP-14 09:46:52
date_id = datetime.datetime.strftime(cur_visit_date, "%d-%b-%y") #time.strftime("%d-%b-%y", cur_visit_date)
time_id = datetime.datetime.strftime(cur_visit_date, "%H:%M:%S") #time.strftime("%H:%M:%S", cur_visit_date)
unixtimestamp = time.mktime(cur_visit_date.timetuple())
user_id = arow["USER_PK1"]
page_id = arow["CONTENT_PK1"]
action = arow["EVENT_TYPE"]
forum_id = arow["FORUM_PK1"]
if forum_id!="":
page_id = forum_id
            # default for module
module = "blank"
if page_id!="":
if str(page_id) in idtypemap:
module = idtypemap[str(page_id)]
else:
if arow["INTERNAL_HANDLE"]=="":
module = arow["DATA"]
# also add to dim_pages
if (not check_ifRecordExists('dim_pages', 'content_id', int(page_id))):
title = "blank"
if module=="/webapps/blackboard/execute/blti/launchLink":
title = "LTI Link"
elif module=="/webapps/blackboard/execute/manageCourseItem":
title = "Manage Course Item"
page_row = {"course_id": course_id, "content_type": module, "content_id": page_id, "title": "blank", "order_no": 0, "parent_id": 0}
save_summaryobject("dim_pages", page_row)
else:
module = "blank"
elif arow["INTERNAL_HANDLE"]=="my_announcements":
module = "resource/x-bb-announcement"
page_id = str(announcements_id)
elif arow["INTERNAL_HANDLE"]=="check_grade":
module = "course/x-bb-gradebook"
page_id = str(gradebook_id)
user_pk = str(course_id) + "_" + user_id
page_pk = str(course_id) + "_" + page_id
pageview = 1
#map all links in content to assessment to actual assessment id
if page_id in content_link_id_to_content_id_dict:
page_id = content_link_id_to_content_id_dict[page_id]
# Need to exclude forum post creation as this is counted in summary_posts
# Exclude admin actions such as entering an announcement
if arow["INTERNAL_HANDLE"] not in ['discussion_board_entry', 'db_thread_list_entry', 'db_collection_entry', 'announcements_entry', 'cp_gradebook_needs_grading']:
row = {"date_id": date_id, "time_id": time_id, "course_id": course_id, "datetime": unixtimestamp, "user_id": user_id, "module": module, "action": action, "page_id": page_id, "pageview":1, "user_pk": user_pk, "page_pk": page_pk, "unixtimestamp": unixtimestamp}
connection.execute(return_summaryobjectsql("fact_coursevisits", row))
trans.commit()
print "Imported %d facts " % (count)
def update_dimpage(course_id, content_type, parent_id):
sql = "UPDATE dim_pages SET parent_id=%s WHERE course_id =%d AND content_type='%s';" % (parent_id, course_id, content_type)
print sql
try:
test = connection.execute(sql);
except: # catch *all* exceptions
e = sys.exc_info()[0]
print "<p>Error: %s</p>" % e
print sys.exc_info()
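# Walks the Blackboard gradebook OUTCOMEDEFINITION entries to extract test/quiz attempts,
# saving each attempt to dim_submissionattempts and as a pageview in fact_coursevisits;
# returns a dict mapping content link ids to the underlying assessment content ids.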
def process_blackboard_attempt(filepath, course_id, assessment_res_to_id_dict, member_to_user_dict, path):
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
content_id = None #content_id[1:-2]
content_link_id = None
title = None
content_link_id_to_content_id_dict = {}
for elem in tree.iter(tag='OUTCOMEDEFINITION'):
for child_of_root in elem:
# This is the OUTCOMEDEFINITION level
if child_of_root.tag == "ASIDATAID":
resid = child_of_root.attrib['value']
if resid in assessment_res_to_id_dict:
content_id = assessment_res_to_id_dict[resid]
if child_of_root.tag == "CONTENTID":
linkres = child_of_root.attrib['value']
if linkres=="":
content_link_id = None
else:
linkid = getID_fromResourceFile(os.path.normpath(path + "/" + linkres + ".dat"))
content_link_id = linkid[1:-2]
if child_of_root.tag == "EXTERNALREF":
externalref = child_of_root.attrib['value']
if content_id is None and externalref!="":
content_id = externalref[1:-2]
if child_of_root.tag == "TITLE":
#print child_of_root.attrib['value']
title = child_of_root.attrib['value']
for child_child_of_root in child_of_root:
# This is the outcomes level
user_id = None
grade = None
unixtimestamp = None
attempt_id = None
date_str = None
for child_child_child_of_root in child_child_of_root:
if child_child_child_of_root.tag == "COURSEMEMBERSHIPID":
member_id = child_child_child_of_root.attrib["value"]
member_id = member_id[1:len(member_id)-2]
if member_id in member_to_user_dict:
user_id = member_to_user_dict[member_id]
else:
user_id = member_id
for child_child_child_child_of_root in child_child_child_of_root:
if child_child_child_of_root.tag == "ATTEMPT":
attempt_id = child_child_child_of_root.attrib["id"]
attempt_id = attempt_id[1:len(attempt_id)-2]
for child_child_child_child_child_of_root in child_child_child_child_of_root:
if child_child_child_child_child_of_root.tag == "SCORE":
grade = child_child_child_child_child_of_root.attrib['value']
if child_child_child_child_child_of_root.tag == "DATEATTEMPTED":
date_str = child_child_child_child_child_of_root.attrib['value']
if (content_id is not None and grade is not None): # i.e. 0 attempts -
date_id = date_str[0:len(date_str)-4]
post_date = datetime.datetime.strptime(date_id, "%Y-%m-%d %H:%M:%S") #2014-07-11 16:52:53 EST
unixtimestamp = time.mktime(post_date.timetuple())
date_id_index = datetime.datetime.strftime(post_date, "%d-%b-%y") #time.strftime("%d-%b-%y", cur_visit_date)
time_id = datetime.datetime.strftime(post_date, "%H:%M:%S")
attempt_row = {"course_id": course_id, "content_id": content_id, "grade": grade, "user_id": user_id, "unixtimestamp": unixtimestamp}
save_summaryobject ("dim_submissionattempts", attempt_row)
if content_link_id is not None:
content_link_id_to_content_id_dict[content_link_id] = content_id
#save all attempts as pageviews in fact_coursevisits
user_pk = str(course_id) + "_" + user_id
page_pk = str(course_id) + "_" + content_id
fact_row = {"date_id": date_id_index, "time_id": time_id, "course_id": course_id, "datetime": unixtimestamp, "user_id": user_id, "module": 'assessment/x-bb-qti-test', "action": 'COURSE_ACCESS', "page_id": content_id, "pageview":1, "user_pk": user_pk, "page_pk": page_pk, "unixtimestamp": unixtimestamp}
save_summaryobject ("fact_coursevisits", fact_row)
return content_link_id_to_content_id_dict
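# Returns the INTERNALHANDLE value recorded in a Blackboard .dat file (empty string if none).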
def get_contenthandle_blackboard(filepath):
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
content_handle = ""
for elem in tree.iter(tag='INTERNALHANDLE'):
content_handle = elem.attrib["value"]
return content_handle
def get_actual_contenttype_blackboard(filepath):
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
content_type = ""
for elem in tree.iter(tag='CONTENTHANDLER'):
content_type = elem.attrib["value"]
return content_type
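# Builds a dict mapping Blackboard course membership ids to user ids from the membership file.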
def process_blackboard_memberships(filepath, course_id):
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
member_to_user_dict = {}
for elem in tree.iter(tag='COURSEMEMBERSHIP'):
member_id = elem.attrib["id"]
member_id = member_id[1:-2]
user_id = 0
for usr in elem:
if usr.tag == "USERID":
user_id = usr.attrib["value"]
user_id = user_id[1:-2]
member_to_user_dict[member_id] = user_id
return member_to_user_dict
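# Reads a Blackboard QTI test export, takes qmd_absolutescore_max as the maximum grade and
# stores a dim_submissiontypes row for the assessment.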
def process_blackboard_test(filepath, content_id, title, course_id, content_type):
timeopen = 0
timeclose = 0
grade = 0.0
content_id = content_id[1:-2]
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
for elem in tree.iter(tag='qmd_absolutescore_max'):
grade = elem.text
submissiontype_row = {"course_id": course_id, "content_id": content_id, "content_type": content_type, "timeopen": timeopen, "timeclose": timeclose, "grade": grade}
save_summaryobject ("dim_submissiontypes", submissiontype_row)
def process_blackboard_conferences(filepath, content_id, title, course_id):
# Save Forum = Conferences entry
forum_row = {"course_id": course_id, "forum_id": content_id, "title": "Conferences", "no_discussions": 0}
save_summaryobject ("summary_forum", forum_row)
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
conf_id = ""
title = ""
for elem in tree.iter(tag='CONFERENCE'):
conf_id = elem.attrib["id"]
conf_id = conf_id[1:len(conf_id)-2]
for child_of_root in elem:
if child_of_root.tag == "TITLE":
title = child_of_root.attrib["value"]
# Save Forum - Discussion Board
discussion_row = {"course_id": course_id, "forum_id": conf_id, "discussion_id": content_id, "title": title.replace("'", "''"), "no_posts": 0}
save_summaryobject ("summary_discussion", discussion_row)
def process_blackboard_forum(filepath, content_id, title, course_id):
global msg_to_forum_dict
tree = ET.ElementTree(file=filepath)
root = tree.getroot()
forum_id = root.attrib['id']
forum_id = forum_id[1:len(forum_id)-2]
conf_id = ""
title = ""
for elem in root:
if elem.tag == "CONFERENCEID":
conf_id = elem.attrib["value"]
conf_id = conf_id[1:len(conf_id)-2]
if elem.tag == "TITLE":
title = elem.attrib["value"]
# Get all posts
for msg in tree.iter(tag='MSG'):
post_id = msg.attrib['id'][1:-2]
post_title = ""
date_id = ""
user_id = ""
for elem in msg:
if elem.tag == "TITLE":
post_title = elem.attrib["value"]
if elem.tag == "USERID":
user_id = elem.attrib["value"]
for subelem in elem:
if subelem.tag == "CREATED":
date_id = subelem.attrib["value"]
if (date_id is not None) and (len(date_id)!=0) and (date_id!=" ") and (date_id!=""):
date_id = date_id[0:len(date_id)-4]
post_date = datetime.datetime.strptime(date_id, "%Y-%m-%d %H:%M:%S") #2014-07-11 16:52:53 EST
date_id = datetime.datetime.strftime(post_date, "%d-%b-%y")
user_id = user_id[1:len(user_id)-2]
post_row = {"date_id": date_id, "user_id": user_id, "course_id": course_id, "forum_id": forum_id, "discussion_id": conf_id}
save_summaryobject ("summary_posts", post_row)
def populate_summary_contentaggregatesbycourse_table(resource_type_dict, course_id):
for key in resource_type_dict:
row = {"contenttype": key, "count": resource_type_dict[key], "course_id": course_id}
save_summaryobject ("summary_contentaggregatesbycourse", row)
def getID_fromResourceFile(resource_file, type=None):
id = '0'
tree = ET.ElementTree(file=resource_file)
root = tree.getroot()
if type == "assessment/x-bb-qti-test":
for elem in tree.iter(tag='assessmentmetadata'):
for elem_elem in elem:
if elem_elem.tag == "bbmd_asi_object_id":
id = elem_elem.text
else:
if "id" in root.attrib:
id = root.attrib["id"]
elif root.tag == "questestinterop":
for elem in tree.iter(tag='assessmentmetadata'):
for elem_elem in elem:
if elem_elem.tag == "bbmd_asi_object_id":
id = elem_elem.text
else:
id = '0'
return id
"""
Ingest Moodle Data
"""
def populate_dim_users_table_moodle(user_membership_resourcefile, course_id, course_type):
"""
Extracts users from the course export files and inserts into the dim_users table
Todo: Update to include all roles not only students
Args:
user_membership_resourcefile: file with the list of users
course_id: course id
course_type: Moodle or MoodleMacquarie
"""
print "populate_dim_users_table_moodle" , course_type
role_skip_list = []
if course_type=="Moodle":
role_skip_list = ["Staff","None", None]
tree = ET.ElementTree(file=user_membership_resourcefile)
root = tree.getroot()
firstname = ""
lastname = ""
role = ""
username = ""
email = ""
lms_id = ""
global staff_list
trans = connection.begin()
for child_of_root in root:
lms_id = child_of_root.attrib["id"]
for child_child_of_root in child_of_root:
if (child_child_of_root.tag== "firstname"): firstname = child_child_of_root.text
if (child_child_of_root.tag== "lastname"): lastname = child_child_of_root.text
if (child_child_of_root.tag== "department"): role = child_child_of_root.text
if (child_child_of_root.tag== "username"): username = child_child_of_root.text
if (child_child_of_root.tag== "email"): email = child_child_of_root.text
user_pk = str(course_id) + "_" + lms_id
if (firstname is None) or (len(firstname) == 0):
firstname = "blank"
else:
firstname = firstname.replace("'", "''")
if (lastname is None) or (len(lastname) == 0):
lastname = "blank"
else:
lastname = lastname.replace("'", "''")
if (email is None) or (len(email) == 0):
email = "blank"
if course_type=="Moodle":
if (role not in ["Staff","None", None]):
staff_list.append(int(lms_id))
row = {"lms_id": lms_id, "firstname": firstname, "lastname": lastname, "username": username, "email": email, "role": role, "user_pk": user_pk, "course_id": course_id}
save_summaryobject ("dim_users", row)
elif course_type=="MoodleMacquarie":
if "students" in email:
staff_list.append(int(lms_id))
role = "Student"
row = {"lms_id": lms_id, "firstname": firstname, "lastname": lastname, "username": username, "email": email, "role": role, "user_pk": user_pk, "course_id": course_id}
save_summaryobject ("dim_users", row)
trans.commit()
def build_starschema_moodle(course_log_file, course_id, content_type, forum_id):
"""
Extracts logs from the Moodle export format and inserts as a row in the fact_coursevisits table.
"""
global staff_list
tree = ET.ElementTree(file=course_log_file)
root = tree.getroot()
datetime_str = ""
lms_user_id = ""
module = ""
action = ""
url = ""
info = ""
section_order = 0
count = 1
access_sql = "BEGIN TRANSACTION;"
trans = connection.begin()
for child_of_root in root:
count +=1
for child_child_of_root in child_of_root:
if (child_child_of_root.tag== "time"): datetime_str = child_child_of_root.text
if (child_child_of_root.tag== "userid"): lms_user_id = child_child_of_root.text
if (child_child_of_root.tag== "module"): module = child_child_of_root.text
if (child_child_of_root.tag== "action"): action = child_child_of_root.text
if (child_child_of_root.tag== "url"): url = child_child_of_root.text
if (child_child_of_root.tag== "info"): info = child_child_of_root.text
date_id = time.strftime("%d-%b-%y", time.gmtime(int(datetime_str))) #datetime.datetime.strptime(arow["TIMESTAMP"], "%d-%b-%y")
time_id = time.strftime("%H:%M:%S", time.gmtime(int(datetime_str)))
session_id = 0
page_id = 0
section_id = 0
if (not((url is None) or (len(url) == 0))):
parsed_url = urlparse.urlparse(url)
query_as_dict = urlparse.parse_qs(parsed_url.query)
if "id" in query_as_dict: page_id = query_as_dict["id"][0]
if "sectionid" in query_as_dict: section_id = query_as_dict["sectionid"][0]
if "section" in query_as_dict: section_order = int(query_as_dict["section"][0])-1 # I think the numbers are added by 1 here
if (module in ["forum", "wiki"]):
page_id = forum_id #info
user_pk = str(course_id) + "_" + lms_user_id
page_pk = str(course_id) + "_" + str(page_id)
section_pk = str(course_id) + "_" + str(section_id)
fmt = '%d-%b-%y %H:%M:%S'
dt = datetime.datetime.strptime(date_id + " " + time_id, fmt)
unixtimestamp = time.mktime(dt.timetuple())
if (info is None) or (len(info) == 0):
info = "-"
else:
info = info.replace("'", "''")
info = info.replace("%", "\%") #escape % sign
if (url is None) or (len(url) == 0):
url = ""
else:
            url = url.replace("%", "\%") #escape % sign
row = {}
if (lms_user_id not in staff_list):
if (module not in ['label', 'role', 'unisa_module', 'wizard']):
if ((action not in ['add mod', 'update mod', 'editsection', 'enrol', 'unenrol', 'report log', 'loginas'])):
if section_order > 0:
row = {"date_id": date_id, "time_id": time_id, "course_id": course_id, "datetime": datetime_str, "user_id": lms_user_id, "module": module, "action": action, "url": url, "page_id": page_id, "section_id": section_id, "section_order": section_order , "pageview": 1, "user_pk": user_pk, "page_pk": page_pk, "section_pk": section_pk, "unixtimestamp": unixtimestamp, "info": info}
else:
row = {"date_id": date_id, "time_id": time_id, "course_id": course_id, "datetime": datetime_str, "user_id": lms_user_id, "module": module, "action": action, "url": url, "page_id": page_id, "section_id": section_id, "pageview": 1, "user_pk": user_pk, "page_pk": page_pk, "section_pk": section_pk, "unixtimestamp": unixtimestamp, "info": info}
connection.execute(return_summaryobjectsql("fact_coursevisits", row))
trans.commit()
def build_starschema_moodlemacquarie(filepath, course_id):
"""
Extracts csv log from the Moodle Macquarie and inserts as a row in the fact_coursevisits table.
"""
print "build_starschema_moodlemacquarie"
global staff_list
action_skip_list = ['updated', 'unassigned', 'assigned', 'deleted', 'updated', 'loggedinas']
print action_skip_list
trans = connection.begin()
count = 0
header = None
with open(filepath, 'rb') as f:
reader = csv.reader(f)
for row in reader:
count = count + 1
if (header == None):
header = row
continue
arow = {}
for header_index in range (0, len(header)):
arow[(header[header_index])] = row[header_index]
# Process row
# Headings id eventname component action target objecttable objectid crud edulevel contextid
# contextlevel contextinstanceid userid courseid relateduserid anonymous other
# timecreated origin ip realuserid
log_id = arow["id"]
eventname = arow["eventname"]
component = arow["component"]
action = arow["action"]
target = arow["target"]
objecttable = arow["objecttable"]
#page_id = arow["objectid"]
crud = arow["crud"]
edulevel = arow["edulevel"]
contextid = arow["contextid"]
contextlevel = arow["contextlevel"]
page_id = arow["contextinstanceid"]
lms_user_id = arow["userid"]
relateduserid = arow["relateduserid"]
anonymous = arow["anonymous"]
other = arow["other"] # old info column - has section no a:1:{s:10:"sectionnum";i:17;}
timecreated = arow["timecreated"]
origin = arow["origin"]
ip = arow["ip"]
realuserid = arow["realuserid"]
module = None
if objecttable=='\N':
module = target
else:
module = objecttable
date_id = time.strftime("%d-%b-%y", time.gmtime(int(timecreated)))
time_id = time.strftime("%H:%M:%S", time.gmtime(int(timecreated)))
fmt = '%d-%b-%y %H:%M:%S'
dt = datetime.datetime.strptime(date_id + " " + time_id, fmt)
unixtimestamp = time.mktime(dt.timetuple())
user_pk = str(course_id) + "_" + lms_user_id
page_pk = str(course_id) + "_" + str(page_id)
row = {}
if (lms_user_id not in staff_list):
if (action not in action_skip_list):
if (module not in ["user_enrolments","role"]):
row = {"date_id": date_id, "time_id": time_id, "course_id": course_id, "datetime": timecreated, "user_id": lms_user_id, "module": module, "action": action, "page_id": page_id, "pageview":1, "user_pk": user_pk, "page_pk": page_pk, "unixtimestamp": unixtimestamp}
connection.execute(return_summaryobjectsql("fact_coursevisits", row))
else:
print lms_user_id, action, page_id
print "staff_list", staff_list
trans.commit()
print "imported rows:", count
def process_courseresources(course_resourcelog_folder, course_id, course_type):
resource_count_dict = {}
trans = connection.begin()
for subdir, dirs, files in os.walk(course_resourcelog_folder):
folder_name = os.path.basename(subdir)
if ((folder_name!="activities") and (folder_name!="blocks") and (not "blocks" in subdir)):
moodle_res_list = folder_name.split("_")
content_type = moodle_res_list[0]
content_id = moodle_res_list[1]
content_type_file = subdir + "/" + content_type + ".xml"
tree = ET.ElementTree(file=content_type_file)
root = tree.getroot()
title = "blank"
timeopen = ""
timeclose = ""
grade = ""
for child_of_root in root:
for child_child_of_root in child_of_root:
if (child_child_of_root.tag== "name"): title = removeNonAscii(child_child_of_root.text)
if (child_child_of_root.tag== "timeopen"): timeopen = removeNonAscii(child_child_of_root.text)
if (child_child_of_root.tag== "timeclose"): timeclose = removeNonAscii(child_child_of_root.text)
if (child_child_of_root.tag== "grade"): grade = removeNonAscii(child_child_of_root.text)
if (child_child_of_root.tag== "allowsubmissionsfromdate"): timeopen = removeNonAscii(child_child_of_root.text) # for assignment
if (child_child_of_root.tag== "duedate"): timeclose = removeNonAscii(child_child_of_root.text) # for assignment
#get parent and get order
parent_type_file = subdir + "/module.xml"
parent_id, order_no = get_resource_parentandorder(parent_type_file)
title = title.replace("%", " percent")
if (content_type!='label'):
row = {"course_id": course_id, "content_type": content_type, "content_id": content_id, "title": title.replace("'", "''"), "order_no": order_no, "parent_id": parent_id}
save_summaryobject ("dim_pages", row)
if content_type=="forum":
process_forum(content_type_file, content_id, title, course_id)
elif content_type in ["quiz","assign"]:
process_submissiontype(content_type,subdir, content_id, title, course_id, timeopen, timeclose, grade)
#Add to resource_count_dict
if content_type in resource_count_dict:
resource_count_dict[content_type] = resource_count_dict[content_type] + 1
else:
resource_count_dict[content_type] =1
#save log entries from log file in resource folder
if course_type=="Moodle":
logfilename = os.path.normpath(subdir + "/" + "logs.xml")
#print logfilename
build_starschema_moodle(logfilename, course_id, content_type, content_id)
trans.commit()
def process_submissiontype(content_type,content_type_file, content_id, title, course_id, timeopen, timeclose, grade):
submissiontype_row = {"course_id": course_id, "content_id": content_id, "content_type": content_type, "timeopen": timeopen, "timeclose": timeclose, "grade": grade}
save_summaryobject ("dim_submissiontypes", submissiontype_row)
content_type_file = content_type_file + "/grades.xml"
tree = ET.ElementTree(file=content_type_file)
for elem in tree.iter(tag='grade_grade'):
user_id = 0
grade = ""
unixtimestamp = 0
for child_of_root in elem:
if child_of_root.tag == "userid": user_id = child_of_root.text
if child_of_root.tag == "rawgrademax": grade = child_of_root.text
if child_of_root.tag == "timecreated": unixtimestamp = child_of_root.text
attempt_row = {"course_id": course_id, "content_id": content_id, "grade": grade, "user_id": user_id, "unixtimestamp": unixtimestamp}
save_summaryobject ("dim_submissionattempts", attempt_row)
def process_forum(content_type_file, content_id, title, course_id):
discussion_count = 0
tree = ET.ElementTree(file=content_type_file)
for elem in tree.iter(tag='discussion'):
discussion_id = 0
discussion_name = ""
discussion_user_id = 0
post_count = 0
if elem.tag == "discussion":
discussion_id = elem.attrib["id"]
discussion_count += 1
for child_of_root in elem:
if child_of_root.tag == "name": discussion_name = child_of_root.text
if child_of_root.tag == "userid": discussion_user_id = child_of_root.text
if child_of_root.tag == "posts":
for child_child_of_root in child_of_root:
post_count += 1
post_user_id = ""
datetime_str = ""
for child_child_child_of_root in child_child_of_root:
if child_child_child_of_root.tag == "userid":
post_user_id = child_child_child_of_root.text
if child_child_child_of_root.tag == "created":
datetime_str = child_child_child_of_root.text
date_id = time.strftime("%d-%b-%y", time.gmtime(int(datetime_str)))
post_row = {"date_id": date_id, "user_id": post_user_id, "course_id": course_id, "forum_id": content_id, "discussion_id": discussion_id}
save_summaryobject ("summary_posts", post_row)
discussion_row = {"course_id": course_id, "forum_id": content_id, "discussion_id": discussion_id, "title": discussion_name.replace("'", "''"), "no_posts": post_count}
save_summaryobject ("summary_discussion", discussion_row)
forum_row = {"course_id": course_id, "forum_id": content_id, "title": title.replace("'", "''"), "no_discussions": discussion_count}
save_summaryobject ("summary_forum", forum_row)
def process_moodle_sections(course_sections_folder, course_id, course_type):
for subdir, dirs, files in os.walk(course_sections_folder):
folder_name = os.path.basename(subdir)
if (folder_name!="sections"):
moodle_res_list = folder_name.split("_")
content_type = moodle_res_list[0]
content_id = moodle_res_list[1]
section_file = subdir + "/section.xml"
title, order_no = get_section_info(section_file)
if title is None:
title = "blank"
row = {"course_id": course_id, "content_type": content_type, "content_id": content_id, "title": removeNonAscii(title.replace("'", "''")), "order_no": order_no, "parent_id": 0}
save_summaryobject ("dim_pages", row)
update_moodle_sectionid(int(order_no), int(content_id), course_id)
def update_moodle_sectionid(section_no, content_id, course_id):
sql = "UPDATE fact_coursevisits SET page_id=%d WHERE course_id=%d AND section_order=%d;" % (content_id, course_id, section_no)
try:
test = connection.execute(sql);
except: # catch *all* exceptions
e = sys.exc_info()[0]
print "<p>Error: %s</p>" % e
print sys.exc_info()
def get_resource_parentandorder(modulefile):
tree = ET.ElementTree(file=modulefile)
    root = tree.getroot()
parent_id = ""
order_no = ""
for child_of_root in root:
if (child_of_root.tag== "sectionid"): parent_id = child_of_root.text
if (child_of_root.tag== "sectionnumber"): order_no = child_of_root.text
return parent_id, order_no
def get_section_info(modulefile):
tree = ET.ElementTree(file=modulefile)
    root = tree.getroot()
title = ""
order_no = ""
for child_of_root in root:
if (child_of_root.tag== "name"): title = child_of_root.text
if (child_of_root.tag== "number"): order_no = child_of_root.text
return title, order_no
"""
Process Users and Access Log for Moodle and Blackboard
"""
def process_accesslog(course_log_file, course_type, course_id, announcements_id=None, gradebook_id=None, content_link_id_to_content_id_dict=None, idtypemap=None):
if course_type == "Moodle":
build_starschema_moodle(course_log_file + "/course/logs.xml", course_id, "", 0)
elif course_type == "MoodleMacquarie":
build_starschema_moodlemacquarie(course_log_file + "/log.csv", course_id)
else:
build_starschema_blackboard(course_log_file, course_id, announcements_id, gradebook_id, content_link_id_to_content_id_dict, idtypemap)
def populate_dim_users_table(user_membership_resourcefile, course_type, course_id):
if course_type in ["Moodle", "MoodleMacquarie"]:
populate_dim_users_table_moodle(os.path.normpath(user_membership_resourcefile + "/users.xml"), course_id, course_type)
else:
populate_dim_users_table_blackboard(user_membership_resourcefile, course_id)
"""
Date Helper functions
"""
def get_weeks(start_date_str, end_date_str):
start_week = get_week(start_date_str)
end_week = get_week(end_date_str)
weeks = range(start_week, end_week)
return weeks
def get_week(date_str):
date_obj = datetime.datetime.strptime(date_str, "%d/%b/%y") #eg #'01/SEP/14' and '30/SEP/14'
week = date_obj.isocalendar()[1]
return week
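# Example (illustrative): get_weeks('01/SEP/14', '30/SEP/14') -> [36, 37, 38, 39];
# the end date's ISO week (40) is excluded because range() stops before end_week.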
"""
Database Helper functions
"""
def save_object (table, row):
if (table in cache):
if (row["id"] in cache[table]):
return row["id"]
else:
cache[table] = {}
keys = row.keys();
sql = "INSERT INTO " + table + " ("
sql = sql + ", ".join(keys)
sql = sql + ") VALUES ("
sql = sql + ", ".join([ ("'" + str(row[key]) + "'") for key in keys])
sql = sql + ")"
connection.execute(sql);
cache[table][row["id"]] = row
return row["id"]
def save_summaryobject (table, row):
keys = row.keys();
sql = "INSERT INTO " + table + " ("
sql = sql + ", ".join(keys)
sql = sql + ") VALUES ("
sql = sql + ", ".join([ ("'" + str(row[key]) + "'") for key in keys])
sql = sql + ")"
id = connection.execute(sql);
return id
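# Builds the same naive INSERT as save_summaryobject() above but returns the SQL string
# instead of executing it, e.g. (illustrative, column order follows the dict's key order):
#   return_summaryobjectsql("summary_forum", {"course_id": 1, "title": "News"})
#   -> "INSERT INTO summary_forum (course_id, title) VALUES ('1', 'News');"
# Note that values are single-quoted via str() but not escaped here.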
def return_summaryobjectsql (table, row):
keys = row.keys();
sql = "INSERT INTO " + table + " ("
sql = sql + ", ".join(keys)
sql = sql + ") VALUES ("
sql = sql + ", ".join([ ("'" + str(row[key]) + "'") for key in keys])
sql = sql + ");"
return sql
def update_coursesummarytable(course_id, colname, colvalue):
sql = "UPDATE summary_courses SET %s='%s' WHERE course_id =%d;" % (colname, colvalue, course_id)
try:
test = connection.execute(sql);
except: # catch *all* exceptions
e = sys.exc_info()[0]
print "<p>Error: %s</p>" % e
print sys.exc_info()
def update_coursesummarytable_blob(course_id, colname, colvalue):
sql = "UPDATE summary_courses SET %s=%%s WHERE course_id =%d;" % (colname, course_id)
try:
test = connection.execute(sql,colvalue);
except: # catch *all* exceptions
e = sys.exc_info()[0]
print "<p>Error: %s</p>" % e
print sys.exc_info()
def check_ifRecordExists(table, pk_name, pk_value):
exists = False
sql = "SELECT count(%s) as cnts FROM %s WHERE %s=%d" % (pk_name, table, pk_name, pk_value)
result = connection.execute(sql);
one_row = result.fetchone()
if int(one_row[0])>0:
exists = True
return exists
def scale_chart(total_counts):
width = 75
height = 75
if total_counts > 75:
width = 75
height = 75
elif total_counts >= 26 and total_counts <= 74 :
width = 50
height = 50
elif total_counts <= 25:
width = 25
height = 25
return width, height
def sanitize (value):
if (value == ""):
value = "(BLANK)"
elif (value == None):
value = "(NULL)"
else:
value = re.sub('[^\w]', "-", value.strip())
return value
def removeNonAscii(s): return "".join(i for i in s if ord(i)<128)
def insert_coursevisit(fact):
return save_object ("fact_coursevisits", fact)
def insert_course (id, name):
row = {
"id": id,
"name": name
}
return save_object ("dim_courses", row)
def insert_user (id, name, type):
row = {
"id": id,
"name": name,
"type": type
}
return save_object ("dim_users", row)
def insert_session (id):
row = {
"id": id
}
return save_object ("dim_sessions", row)
def insert_page (id, event_type, content_type, page_url):
row = {
"id": id,
"event_type": event_type,
"content_type": content_type,
"page_url": page_url,
}
return save_object ("dim_pages", row)
def getmax_pageid(course_id):
res_sql = "SELECT max(content_id) FROM dim_pages WHERE course_id=%d;" % (course_id)
result = connection.execute(res_sql);
rows = result.fetchall()
max_val = rows[0][0]
if (max_val is None):
return 1
else:
return int(max_val) + 1
if __name__ == "__main__":
main()
| cc0-1.0 |
SirJohnFranklin/FieldSolver | testing/ElectricFieldSolverTests.py | 1 | 4899 | from __future__ import print_function, division
from ElectricFieldSolver import CylindricalPoissonSolver, CartesianPoissonSolver
from HelperFunctions import plot_field
import matplotlib.pyplot as plt
import numpy as np
def benchmark_matrix_direct_solve():
""" 2017-04-03
benchmark_matrix_direct_solve: Testing dx = 0.002 | dy = 0.003
CylindricalPoissonSolver : Created with nx = 340 | dx = 0.002 | ny = 110 | dy = 0.003
-m cProfile -s tottime
ncalls tottime percall cumtime percall filename:lineno(function)
solvers with sparse matrix preparation
1 0.000 0.000 1.343 1.343 ElectricFieldSolver.py:175(calculate_potential_exact)
2 0.435 0.218 0.791 0.396 ElectricFieldSolver.py:585(create_Ab_matrix)
2 0.326 0.163 0.326 0.163 {scipy.sparse.linalg.dsolve._superlu.gssv}
1 800.569 800.569 800.569 800.569 ElectricFieldSolver.py:19(solve_gauss_seidel_cylindric)
"""
for dz in [2e-3, 5e-3]:
for dr in [3e-3, 4e-3, 5e-3]:
print("benchmark_matrix_direct_solve: Testing dx = ", dz, " | dy = ", dr)
world = cylinder_sugarcube_test(mult=1, dz=dz, dr=dr)
ergspsolve = world.calculate_potential_exact()
ergseidel = world.calculate_potential_gauss_seidel()
            if not np.allclose(ergspsolve, ergseidel, rtol=1e-6):  # compare with rtol only, since the gauss-seidel solver checks relative tolerance for convergence
print("rtol is higher than 1e-6")
plot_field(world.zvals, world.rvals, ergspsolve, world.cell_type, 'potential $\phi$ [V] (spsolve)')
plot_field(world.zvals, world.rvals, ergseidel, world.cell_type, 'potential $\phi$ [V] (gauss-seidel)')
plt.figure()
plt.imshow((ergspsolve-ergseidel)/(ergspsolve+ergseidel), origin='lower', interpolation="nearest")
plt.colorbar(format='%.0e')
plt.show()
# 2016-10-18: passed.
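# Builds a cylindrical test world with two fixed-potential electrode structures
# (cell type 1 at 100 V, cell type 2 at 0 V) and, when currents=True, a rectangular
# current-carrying block used for the magnetic field solve.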
def cylinder_sugarcube_test(mult=1, nz=340, dz=1e-4, nr=110, dr=1e-4, currents=True):
world = CylindricalPoissonSolver(nz=nz * mult, dz=dz / mult, nr=nr * mult, dr=dr / mult)
world.verbose = True
# test.do_solver_benchmark()
ct = world.get_electric_cell_type()
# r, z
ct[np.int(60 * mult), 0:np.int(100 * mult + 1.)] = 1 # at r = 60*dx (1e-4), go from z=0, to z=100*dx
ct[np.int(40 * mult):np.int(60 * mult), np.int(100 * mult)] = 1
ct[:np.int(60 * mult), 0] = 1
ct[np.int(70 * mult), 0:np.int(120 * mult + 1)] = 2
ct[np.int(30 * mult):np.int(70 * mult), np.int(120 * mult)] = 2
ctd = {'1': 100, '2': 0} # celltype x = fixed potential
world.set_electric_cell_type(ct, ctd)
if currents:
cc = world.get_magnetic_cell_type()
cc[40:50, 160:240] = 1
ccd = {'1': 1./(dr * dz)}
world.set_magnetic_cell_type(cc, ccd)
return world
def cartesian_sugarcube_test(mult=1, nx=340, dx=1e-4, ny=110, dy=1e-4, currents=True):
world = CartesianPoissonSolver(nx=nx * mult, dx=dx / mult, ny=ny * mult, dy=dy / mult)
world.verbose = True
# test.do_solver_benchmark()
ct = world.get_electric_cell_type()
# x, y
ct[np.int(60 * mult), 0:np.int(100 * mult + 1.)] = 1 # at r = 60*dx (1e-4), go from z=0, to z=100*dx
ct[np.int(40 * mult):np.int(60 * mult), np.int(100 * mult)] = 1
ct[:np.int(60 * mult), 0] = 1
ct[np.int(70 * mult), 0:np.int(120 * mult + 1)] = 2
ct[np.int(30 * mult):np.int(70 * mult), np.int(120 * mult)] = 2
ctd = {'1': 100, '2': 0} # celltype x = fixed potential
world.set_electric_cell_type(ct, ctd)
if currents:
cc = world.get_magnetic_cell_type()
cc[40:50, 160:240] = 1
ccd = {'1': 1/(dx * dy)}
world.set_magnetic_cell_type(cc, ccd)
return world
def test_magnetic_field_solver(multr=1, multz=1, nz=100, dz=1e-3, nr=120, dr=1e-3):
world = CylindricalPoissonSolver(nz=nz * multz, dz=dz / multz, nr=nr * multr, dr=dr / multr)
cc = world.get_magnetic_cell_type()
cc[45*multr:55*multr, 20*multz:80*multz] = 1
ccd = {'1': 2e7}
world.set_magnetic_cell_type(cc, ccd)
return world
if __name__ == '__main__':
benchmark_matrix_direct_solve()
# world = cylinder_sugarcube_test(mult=8, currents=True)
# ergspsolve = world.calculate_potential_exact()
# ergseidel = world.calculate_potential_gauss_seidel()
# world.plot_all_fields()
# ergspsolve = world.calculate_potential_exact()
# world.plot_all_fields()
# print()
# print(np.allclose(ergspsolve, ergseidel, atol=1e-3))
# world = test_magnetic_field_solver(multr=7, multz=4)
# world.calculate_potential_exact()
# world.plot_all_fields()
# world = cylinder_sugarcube_test(mult=1, currents=True)
# world = cartesian_sugarcube_test(mult=1)
plt.show()
| gpl-3.0 |
bgroveben/python3_machine_learning_projects | oreilly_GANs_for_beginners/introduction_to_ml_with_python/mglearn/mglearn/plot_2d_separator.py | 4 | 3954 | import numpy as np
import matplotlib.pyplot as plt
from .plot_helpers import cm2, cm3, discrete_scatter
def plot_2d_classification(classifier, X, fill=False, ax=None, eps=None,
alpha=1, cm=cm3):
# multiclass
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 1000)
yy = np.linspace(y_min, y_max, 1000)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
decision_values = classifier.predict(X_grid)
ax.imshow(decision_values.reshape(X1.shape), extent=(x_min, x_max,
y_min, y_max),
aspect='auto', origin='lower', alpha=alpha, cmap=cm)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
def plot_2d_scores(classifier, X, ax=None, eps=None, alpha=1, cm="viridis",
function=None):
# binary with fill
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 100)
yy = np.linspace(y_min, y_max, 100)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
if function is None:
function = getattr(classifier, "decision_function",
getattr(classifier, "predict_proba"))
else:
function = getattr(classifier, function)
decision_values = function(X_grid)
if decision_values.ndim > 1 and decision_values.shape[1] > 1:
# predict_proba
decision_values = decision_values[:, 1]
grr = ax.imshow(decision_values.reshape(X1.shape),
extent=(x_min, x_max, y_min, y_max), aspect='auto',
origin='lower', alpha=alpha, cmap=cm)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
return grr
def plot_2d_separator(classifier, X, fill=False, ax=None, eps=None, alpha=1,
cm=cm2, linewidth=None, threshold=None,
linestyle="solid"):
# binary?
if eps is None:
eps = X.std() / 2.
if ax is None:
ax = plt.gca()
x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps
y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps
xx = np.linspace(x_min, x_max, 1000)
yy = np.linspace(y_min, y_max, 1000)
X1, X2 = np.meshgrid(xx, yy)
X_grid = np.c_[X1.ravel(), X2.ravel()]
try:
decision_values = classifier.decision_function(X_grid)
levels = [0] if threshold is None else [threshold]
fill_levels = [decision_values.min()] + levels + [
decision_values.max()]
except AttributeError:
# no decision_function
decision_values = classifier.predict_proba(X_grid)[:, 1]
levels = [.5] if threshold is None else [threshold]
fill_levels = [0] + levels + [1]
if fill:
ax.contourf(X1, X2, decision_values.reshape(X1.shape),
levels=fill_levels, alpha=alpha, cmap=cm)
else:
ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels,
colors="black", alpha=alpha, linewidths=linewidth,
linestyles=linestyle, zorder=5)
ax.set_xlim(x_min, x_max)
ax.set_ylim(y_min, y_max)
ax.set_xticks(())
ax.set_yticks(())
if __name__ == '__main__':
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
X, y = make_blobs(centers=2, random_state=42)
clf = LogisticRegression().fit(X, y)
plot_2d_separator(clf, X, fill=True)
discrete_scatter(X[:, 0], X[:, 1], y)
plt.show()
| mit |
rahuldhote/scikit-learn | examples/preprocessing/plot_robust_scaling.py | 221 | 2702 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=========================================================
Robust Scaling on Toy Data
=========================================================
Making sure that each feature has approximately the same scale can be a
crucial preprocessing step. However, when data contains outliers,
:class:`StandardScaler <sklearn.preprocessing.StandardScaler>` can often
be misled. In such cases, it is better to use a scaler that is robust
against outliers.
Here, we demonstrate this on a toy dataset, where one single datapoint
is a large outlier.
"""
from __future__ import print_function
print(__doc__)
# Code source: Thomas Unterthiner
# License: BSD 3 clause
import matplotlib.pyplot as plt
import numpy as np
from sklearn.preprocessing import StandardScaler, RobustScaler
# Create training and test data
np.random.seed(42)
n_datapoints = 100
Cov = [[0.9, 0.0], [0.0, 20.0]]
mu1 = [100.0, -3.0]
mu2 = [101.0, -3.0]
X1 = np.random.multivariate_normal(mean=mu1, cov=Cov, size=n_datapoints)
X2 = np.random.multivariate_normal(mean=mu2, cov=Cov, size=n_datapoints)
Y_train = np.hstack([[-1]*n_datapoints, [1]*n_datapoints])
X_train = np.vstack([X1, X2])
X1 = np.random.multivariate_normal(mean=mu1, cov=Cov, size=n_datapoints)
X2 = np.random.multivariate_normal(mean=mu2, cov=Cov, size=n_datapoints)
Y_test = np.hstack([[-1]*n_datapoints, [1]*n_datapoints])
X_test = np.vstack([X1, X2])
X_train[0, 0] = -1000 # a fairly large outlier
# Scale data
standard_scaler = StandardScaler()
Xtr_s = standard_scaler.fit_transform(X_train)
Xte_s = standard_scaler.transform(X_test)
robust_scaler = RobustScaler()
Xtr_r = robust_scaler.fit_transform(X_train)
Xte_r = robust_scaler.transform(X_test)
# Plot data
fig, ax = plt.subplots(1, 3, figsize=(12, 4))
ax[0].scatter(X_train[:, 0], X_train[:, 1],
color=np.where(Y_train > 0, 'r', 'b'))
ax[1].scatter(Xtr_s[:, 0], Xtr_s[:, 1], color=np.where(Y_train > 0, 'r', 'b'))
ax[2].scatter(Xtr_r[:, 0], Xtr_r[:, 1], color=np.where(Y_train > 0, 'r', 'b'))
ax[0].set_title("Unscaled data")
ax[1].set_title("After standard scaling (zoomed in)")
ax[2].set_title("After robust scaling (zoomed in)")
# for the scaled data, we zoom in to the data center (outlier can't be seen!)
for a in ax[1:]:
a.set_xlim(-3, 3)
a.set_ylim(-3, 3)
plt.tight_layout()
plt.show()
# Classify using k-NN
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(Xtr_s, Y_train)
acc_s = knn.score(Xte_s, Y_test)
print("Testset accuracy using standard scaler: %.3f" % acc_s)
knn.fit(Xtr_r, Y_train)
acc_r = knn.score(Xte_r, Y_test)
print("Testset accuracy using robust scaler: %.3f" % acc_r)
| bsd-3-clause |
kambysese/mne-python | tutorials/misc/plot_ecog.py | 6 | 7149 | """
.. _tut_working_with_ecog:
======================
Working with ECoG data
======================
MNE supports working with more than just MEG and EEG data. Here we show some
of the functions that can be used to facilitate working with
electrocorticography (ECoG) data.
This example shows how to use:
- ECoG data (`available here <https://openneuro.org/datasets/ds003029>`_)
from an epilepsy patient during a seizure
- channel locations in FreeSurfer's ``fsaverage`` MRI space
- projection onto a pial surface
For a complementary example that involves sEEG data, channel locations in
MNI space, or projection into a volume, see :ref:`tut_working_with_seeg`.
"""
# Authors: Eric Larson <[email protected]>
# Chris Holdgraf <[email protected]>
# Adam Li <[email protected]>
# Alex Rockhill <[email protected]>
# Liberty Hamilton <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from mne_bids import BIDSPath, read_raw_bids
import mne
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
# paths to mne datasets - sample ECoG and FreeSurfer subject
bids_root = mne.datasets.epilepsy_ecog.data_path()
sample_path = mne.datasets.sample.data_path()
subjects_dir = op.join(sample_path, 'subjects')
###############################################################################
# Load in data and perform basic preprocessing
# --------------------------------------------
#
# Let's load some ECoG electrode data with `mne-bids
# <https://mne.tools/mne-bids/>`_.
# first define the bids path
bids_path = BIDSPath(root=bids_root, subject='pt1', session='presurgery',
task='ictal', datatype='ieeg', extension='vhdr')
# then we'll use it to load in the sample dataset
# Here we use a format (iEEG) that is only available in MNE-BIDS 0.7+, so it
# will emit a warning on versions <= 0.6
raw = read_raw_bids(bids_path=bids_path, verbose=False)
# Pick only the ECoG channels, removing the EKG channels
raw.pick_types(ecog=True)
# Load the data
raw.load_data()
# Then we remove line frequency interference
raw.notch_filter([60], trans_bandwidth=3)
# drop bad channels
raw.drop_channels(raw.info['bads'])
# the coordinate frame of the montage
print(raw.get_montage().get_positions()['coord_frame'])
# Find the annotated events
events, event_id = mne.events_from_annotations(raw)
# Make a 25 second epoch that spans before and after the seizure onset
epoch_length = 25 # seconds
epochs = mne.Epochs(raw, events, event_id=event_id['onset'],
tmin=13, tmax=13 + epoch_length, baseline=None)
# And then load data and downsample.
# .. note: This is just to save execution time in this example, you should
# not need to do this in general!
epochs.load_data()
epochs.resample(200) # Hz, will also load the data for us
# Finally, make evoked from the one epoch
evoked = epochs.average()
###############################################################################
# Explore the electrodes on a template brain
# ------------------------------------------
#
# Our electrodes are shown after being morphed to fsaverage brain so we'll use
# this fsaverage brain to plot the locations of our electrodes. We'll use
# :func:`~mne.viz.snapshot_brain_montage` to save the plot as image data
# (along with xy positions of each electrode in the image), so that later
# we can plot frequency band power on top of it.
fig = plot_alignment(raw.info, subject='fsaverage', subjects_dir=subjects_dir,
surfaces=['pial'], coord_frame='mri')
az, el, focalpoint = 160, -70, [0.067, -0.040, 0.018]
mne.viz.set_3d_view(fig, azimuth=az, elevation=el, focalpoint=focalpoint)
xy, im = snapshot_brain_montage(fig, raw.info)
###############################################################################
# Compute frequency features of the data
# --------------------------------------
#
# Next, we'll compute the signal power in the gamma (30-90 Hz) band,
# downsampling the result to 10 Hz (to save time).
sfreq = 10
gamma_power_t = evoked.copy().filter(30, 90).apply_hilbert(
envelope=True).resample(sfreq)
gamma_info = gamma_power_t.info
###############################################################################
# Visualize the time-evolution of the gamma power on the brain
# ------------------------------------------------------------
#
# Say we want to visualize the evolution of the power in the gamma band,
# instead of just plotting the average. We can use
# `matplotlib.animation.FuncAnimation` to create an animation and apply this
# to the brain figure.
# convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in raw.info['ch_names']])
# get a colormap to color nearby points similar colors
cmap = get_cmap('viridis')
# create the figure of the brain with the electrode positions
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_title('Gamma power over time', size='large')
ax.imshow(im)
ax.set_axis_off()
# normalize gamma power for plotting
gamma_power = -100 * gamma_power_t.data / gamma_power_t.data.max()
# add the time course overlaid on the positions
x_line = np.linspace(-0.025 * im.shape[0], 0.025 * im.shape[0],
gamma_power_t.data.shape[1])
for i, pos in enumerate(xy_pts):
x, y = pos
color = cmap(i / xy_pts.shape[0])
ax.plot(x_line + x, gamma_power[i] + y, linewidth=0.5, color=color)
###############################################################################
# We can project gamma power from the sensor data to the nearest locations on
# the pial surface and visualize that:
#
# As shown in the plot, the epileptiform activity starts in the temporal lobe,
# progressing posteriorly. The seizure eventually becomes generalized, beyond
# the short time window shown in this example. This dataset is available using
# :func:`mne.datasets.epilepsy_ecog.data_path` for you to examine.
# sphinx_gallery_thumbnail_number = 5
xyz_pts = np.array([dig['r'] for dig in evoked.info['dig']])
src = mne.read_source_spaces(
op.join(subjects_dir, 'fsaverage', 'bem', 'fsaverage-ico-5-src.fif'))
trans = None # identity transform
stc = mne.stc_near_sensors(gamma_power_t, trans, 'fsaverage', src=src,
mode='nearest', subjects_dir=subjects_dir,
distance=0.02)
vmin, vmid, vmax = np.percentile(gamma_power_t.data, [10, 25, 90])
clim = dict(kind='value', lims=[vmin, vmid, vmax])
brain = stc.plot(surface='pial', hemi='rh', colormap='inferno', colorbar=False,
clim=clim, views=['lat', 'med'], subjects_dir=subjects_dir,
size=(250, 250), smoothing_steps=20, time_viewer=False)
# plot electrode locations
for xyz in xyz_pts:
for subplot in (0, 1):
brain.plotter.subplot(subplot, 0)
brain._renderer.sphere(xyz * 1e3, color='white', scale=2)
# You can save a movie like the one on our documentation website with:
# brain.save_movie(time_dilation=1, interpolation='linear', framerate=12,
# time_viewer=True)
| bsd-3-clause |
mmottahedi/nilmtk | tests_on_large_datasets/redd_house3_f1_score.py | 6 | 1214 | from __future__ import print_function, division
from nilmtk import DataSet, HDFDataStore
from nilmtk.disaggregate import fhmm_exact
from nilmtk.metrics import f1_score
from os.path import join
import matplotlib.pyplot as plt
"""
This file replicates issue #376 (which should now be fixed)
https://github.com/nilmtk/nilmtk/issues/376
"""
data_dir = '/data/REDD'
building_number = 3
disag_filename = join(data_dir, 'disag-fhmm' + str(building_number) + '.h5')
data = DataSet(join(data_dir, 'redd.h5'))
print("Loading building " + str(building_number))
elec = data.buildings[building_number].elec
top_train_elec = elec.submeters().select_top_k(k=5)
fhmm = fhmm_exact.FHMM()
fhmm.train(top_train_elec)
output = HDFDataStore(disag_filename, 'w')
fhmm.disaggregate(elec.mains(), output)
output.close()
### f1score fhmm
disag = DataSet(disag_filename)
disag_elec = disag.buildings[building_number].elec
f1 = f1_score(disag_elec, elec)
f1.index = disag_elec.get_labels(f1.index)
f1.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("FHMM");
plt.savefig(join(data_dir, 'f1-fhmm' + str(building_number) + '.png'))
disag.store.close()
####
print("Finishing building " + str(building_number))
| apache-2.0 |
cuemacro/finmarketpy | finmarketpy_examples/tradingmodelfxtrend_example.py | 1 | 9412 | __author__ = 'saeedamen' # Saeed Amen
#
# Copyright 2016-2020 Cuemacro - https://www.cuemacro.com / @cuemacro
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the
# License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#
# See the License for the specific language governing permissions and limitations under the License.
#
import datetime
from findatapy.market import Market, MarketDataGenerator, MarketDataRequest
from finmarketpy.backtest import TradingModel, BacktestRequest
from finmarketpy.economics import TechIndicator
from chartpy import Style
class TradingModelFXTrend_Example(TradingModel):
"""Shows how to create a simple FX CTA style strategy, using the TradingModel abstract class (backtest_examples.py
is a lower level way of doing this).
"""
def __init__(self):
super(TradingModel, self).__init__()
##### FILL IN WITH YOUR OWN PARAMETERS FOR display, dumping, TSF etc.
self.market = Market(market_data_generator=MarketDataGenerator())
self.DUMP_PATH = ''
self.FINAL_STRATEGY = 'FX trend'
self.SCALE_FACTOR = 1
self.DEFAULT_PLOT_ENGINE = 'matplotlib'
# self.CHART_STYLE = Style(plotly_plot_mode='offline_jupyter')
self.br = self.load_parameters()
return
###### Parameters and signal generations (need to be customised for every model)
def load_parameters(self, br = None):
if br is not None: return br
##### FILL IN WITH YOUR OWN BACKTESTING PARAMETERS
br = BacktestRequest()
# get all asset data
br.start_date = "04 Jan 1989"
br.finish_date = datetime.datetime.utcnow().date()
br.spot_tc_bp = 0.5
br.ann_factor = 252
br.plot_start = "01 Apr 2015"
br.calc_stats = True
br.write_csv = False
br.plot_interim = True
br.include_benchmark = True
# Have vol target for each signal
br.signal_vol_adjust = True
br.signal_vol_target = 0.1
br.signal_vol_max_leverage = 5
br.signal_vol_periods = 20
br.signal_vol_obs_in_year = 252
br.signal_vol_rebalance_freq = 'BM'
br.signal_vol_resample_freq = None
# Have vol target for portfolio
br.portfolio_vol_adjust = True
br.portfolio_vol_target = 0.1
br.portfolio_vol_max_leverage = 5
br.portfolio_vol_periods = 20
br.portfolio_vol_obs_in_year = 252
br.portfolio_vol_rebalance_freq = 'BM'
br.portfolio_vol_resample_freq = None
# Tech params
br.tech_params.sma_period = 200
# To make additive indices
# br.cum_index = 'add'
return br
def load_assets(self, br = None):
##### FILL IN WITH YOUR ASSET DATA
from findatapy.util.loggermanager import LoggerManager
logger = LoggerManager().getLogger(__name__)
# For FX basket
full_bkt = ['EURUSD', 'USDJPY', 'GBPUSD', 'AUDUSD', 'USDCAD',
'NZDUSD', 'USDCHF', 'USDNOK', 'USDSEK']
basket_dict = {}
for i in range(0, len(full_bkt)):
basket_dict[full_bkt[i]] = [full_bkt[i]]
basket_dict['FX trend'] = full_bkt
br = self.load_parameters(br = br)
logger.info("Loading asset data...")
vendor_tickers = ['FRED/DEXUSEU', 'FRED/DEXJPUS', 'FRED/DEXUSUK', 'FRED/DEXUSAL', 'FRED/DEXCAUS',
'FRED/DEXUSNZ', 'FRED/DEXSZUS', 'FRED/DEXNOUS', 'FRED/DEXSDUS']
market_data_request = MarketDataRequest(
start_date = br.start_date, # start date
finish_date = br.finish_date, # finish date
freq = 'daily', # daily data
data_source = 'quandl', # use Quandl as data source
tickers = full_bkt, # ticker (Thalesians)
fields = ['close'], # which fields to download
vendor_tickers = vendor_tickers, # ticker (Quandl)
            vendor_fields = ['close'],                      # which vendor (Quandl) fields to download
cache_algo = 'cache_algo_return') # how to return data
asset_df = self.market.fetch_market(market_data_request)
# If web connection fails read from CSV
if asset_df is None:
import pandas
asset_df = pandas.read_csv("d:/fxcta.csv", index_col=0, parse_dates=['Date'],
date_parser = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d'))
# Signalling variables
spot_df = asset_df
spot_df2 = None
# asset_df
return asset_df, spot_df, spot_df2, basket_dict
def construct_signal(self, spot_df, spot_df2, tech_params, br, run_in_parallel=False):
##### FILL IN WITH YOUR OWN SIGNALS
# Use technical indicator to create signals
# (we could obviously create whatever function we wanted for generating the signal dataframe)
tech_ind = TechIndicator()
tech_ind.create_tech_ind(spot_df, 'SMA', tech_params);
signal_df = tech_ind.get_signal()
return signal_df
def construct_strategy_benchmark(self):
###### FILL IN WITH YOUR OWN BENCHMARK
tsr_indices = MarketDataRequest(
start_date = self.br.start_date, # start date
finish_date = self.br.finish_date, # finish date
            freq = 'daily',                                 # daily data
            data_source = 'quandl',                         # use Quandl as data source
tickers = ["EURUSD"], # tickers to download
vendor_tickers=['FRED/DEXUSEU'],
fields = ['close'], # which fields to download
vendor_fields = ['close'],
cache_algo = 'cache_algo_return') # how to return data)
df = self.market.fetch_market(tsr_indices)
df.columns = [x.split(".")[0] for x in df.columns]
return df
if __name__ == '__main__':
# Just change "False" to "True" to run any of the below examples
# Create a FX trend strategy then chart the returns, leverage over time
if True:
model = TradingModelFXTrend_Example()
model.construct_strategy()
model.plot_strategy_pnl() # plot the final strategy
model.plot_strategy_leverage() # plot the leverage of the portfolio
model.plot_strategy_group_pnl_trades() # plot the individual trade P&Ls
model.plot_strategy_group_benchmark_pnl() # plot all the cumulative P&Ls of each component
model.plot_strategy_group_benchmark_pnl_ir() # plot all the IR of individual components
model.plot_strategy_group_leverage() # plot all the individual leverages
from finmarketpy.backtest import TradeAnalysis
ta = TradeAnalysis()
# Create statistics for the model returns using both finmarketpy and pyfolio
ta.run_strategy_returns_stats(model, engine='finmarketpy')
# ta.run_strategy_returns_stats(model, engine='pyfolio')
# model.plot_strategy_group_benchmark_annualised_pnl()
# Create a FX CTA strategy, then examine how P&L changes with different vol targeting
# and later transaction costs
if True:
strategy = TradingModelFXTrend_Example()
from finmarketpy.backtest import TradeAnalysis
ta = TradeAnalysis()
ta.run_strategy_returns_stats(model, engine='finmarketpy')
# Which backtesting parameters to change
# names of the portfolio
# broad type of parameter name
parameter_list = [
{'portfolio_vol_adjust': True, 'signal_vol_adjust' : True},
{'portfolio_vol_adjust': False, 'signal_vol_adjust' : False}]
pretty_portfolio_names = \
['Vol target',
'No vol target']
parameter_type = 'vol target'
ta.run_arbitrary_sensitivity(strategy,
parameter_list=parameter_list,
pretty_portfolio_names=pretty_portfolio_names,
parameter_type=parameter_type)
# Now examine sensitivity to different transaction costs
tc = [0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2.0]
ta.run_tc_shock(strategy, tc=tc)
# How does P&L change on day of month
ta.run_day_of_month_analysis(strategy)
# Create a FX CTA strategy then use TradeAnalysis (via pyfolio) to analyse returns
if False:
from finmarketpy.backtest import TradeAnalysis
model = TradingModelFXTrend_Example()
model.construct_strategy()
tradeanalysis = TradeAnalysis()
        tradeanalysis.run_strategy_returns_stats(model)
| apache-2.0 |
kgullikson88/General | Mamajek_Table.py | 1 | 2991 | import os
import pandas as pd
from scipy.interpolate import InterpolatedUnivariateSpline as spline
import SpectralTypeRelations
home = os.environ['HOME']
TABLE_FILENAME = '{}/Dropbox/School/Research/Databases/SpT_Relations/Mamajek_Table.txt'.format(home)
class MamajekTable(object):
"""
Class to interact with the table that Eric mamajek has online at
http://www.pas.rochester.edu/~emamajek/EEM_dwarf_UBVIJHK_colors_Teff.txt
"""
def __init__(self, filename=TABLE_FILENAME):
MS = SpectralTypeRelations.MainSequence()
# Read in the table.
colspecs=[[0,7], [7,14], [14,21], [21,28], [28,34], [34,40], [40,47], [47,55],
[55,63], [63,70], [70,78], [78,86], [86,94], [94,103], [103,110],
[110,116], [116,122], [122,130], [130,137], [137,144], [144,151],
[151,158]]
mam_df = pd.read_fwf(filename, header=20, colspecs=colspecs, na_values=['...'])[:92]
# Strip the * from the logAge column. Probably shouldn't but...
mam_df['logAge'] = mam_df['logAge'].map(lambda s: s.strip('*') if isinstance(s, basestring) else s)
# Convert everything to floats
for col in mam_df.columns:
mam_df[col] = pd.to_numeric(mam_df[col], errors='ignore')
# Add the spectral type number for interpolation
mam_df['SpTNum'] = mam_df['SpT'].map(MS.SpT_To_Number)
self.mam_df = mam_df
def get_columns(self, print_keys=True):
"""
Get the column names in a list, and optionally print them to the screen.
:param print_keys: bool variable to decide if the keys are printed.
:return:
"""
if print_keys:
for k in self.mam_df.keys():
print k
return list(self.mam_df.keys())
def get_interpolator(self, xcolumn, ycolumn, extrap='nearest'):
"""
Get an interpolator instance between the two columns
:param xcolumn: The name of the x column to interpolate between
:param ycolumn: The name of the value you want to interpolate
:param extrap: How to treat extrapolation. Options are:
'nearest': Default behavior. It will return the nearest match to the given 'x' value
'extrapolate': Extrapolate the spline. This is probably only safe for very small extrapolations
:return: an interpolator function
"""
# Make sure the column names are correct
assert xcolumn in self.mam_df.keys() and ycolumn in self.mam_df.keys()
# Sort the dataframe by the x column, and drop any duplicates or nans it might have
sorted_df = self.mam_df.sort_values(by=xcolumn).dropna(subset=[xcolumn, ycolumn], how='any').drop_duplicates(xcolumn)
# Make an interpolator
ext_value = {'nearest': 3, 'extrapolate': 0}
fcn = spline(sorted_df[xcolumn].values, sorted_df[ycolumn].values, ext=ext_value[extrap])
return fcn
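
if __name__ == '__main__':
    # Illustrative usage sketch (added, not part of the original module):
    # interpolate the logAge column read from Mamajek's table as a function of
    # the numeric spectral type created above; 65.0 is a hypothetical SpTNum.
    table = MamajekTable()
    logage_fcn = table.get_interpolator('SpTNum', 'logAge')
    print(logage_fcn(65.0))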
| gpl-3.0 |
Eric89GXL/scikit-learn | examples/plot_classifier_comparison.py | 8 | 4681 | #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
=====================
Classifier comparison
=====================
A comparison of several classifiers in scikit-learn on synthetic datasets.
The point of this example is to illustrate the nature of decision boundaries
of different classifiers.
This should be taken with a grain of salt, as the intuition conveyed by
these examples does not necessarily carry over to real datasets.
Particularly in high-dimensional spaces, data can more easily be separated
linearly and the simplicity of classifiers such as naive Bayes and linear SVMs
might lead to better generalization than is achieved by other classifiers.
The plots show training points in solid colors and testing points
semi-transparent. The lower right shows the classification accuracy on the test
set.
"""
print(__doc__)
# Code source: Gaël Varoquaux
# Andreas Müller
# Modified for documentation by Jaques Grobler
# License: BSD 3 clause
import numpy as np
import pylab as pl
from matplotlib.colors import ListedColormap
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.lda import LDA
from sklearn.qda import QDA
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree",
"Random Forest", "AdaBoost", "Naive Bayes", "LDA", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
AdaBoostClassifier(),
GaussianNB(),
LDA(),
QDA()]
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
figure = pl.figure(figsize=(27, 9))
i = 1
# iterate over datasets
for ds in datasets:
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = pl.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = pl.subplot(len(datasets), len(classifiers) + 1, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names, classifiers):
ax = pl.subplot(len(datasets), len(classifiers) + 1, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
        # point in the mesh [x_min, x_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=15, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
pl.show()
| bsd-3-clause |
Featuretools/featuretools | featuretools/utils/time_utils.py | 1 | 3276 | import pandas as pd
def make_temporal_cutoffs(instance_ids,
cutoffs,
window_size=None,
num_windows=None,
start=None):
'''Makes a set of equally spaced cutoff times prior to a set of input cutoffs and instance ids.
If window_size and num_windows are provided, then num_windows of size window_size will be created
prior to each cutoff time
If window_size and a start list is provided, then a variable number of windows will be created prior
to each cutoff time, with the corresponding start time as the first cutoff.
If num_windows and a start list is provided, then num_windows of variable size will be created prior
to each cutoff time, with the corresponding start time as the first cutoff
Args:
instance_ids (list, np.ndarray, or pd.Series): list of instance ids. This function will make a
new datetime series of multiple cutoff times for each value in this array.
cutoffs (list, np.ndarray, or pd.Series): list of datetime objects associated with each instance id.
Each one of these will be the last time in the new datetime series for each instance id
window_size (pd.Timedelta, optional): amount of time between each datetime in each new cutoff series
num_windows (int, optional): number of windows in each new cutoff series
start (list, optional): list of start times for each instance id
'''
if (window_size is not None and
num_windows is not None and
start is not None):
raise ValueError("Only supply 2 of the 3 optional args, window_size, num_windows and start")
out = []
for i, id_time in enumerate(zip(instance_ids, cutoffs)):
_id, time = id_time
_window_size = window_size
_start = None
if start is not None:
if window_size is None:
_window_size = (time - start[i]) / (num_windows - 1)
else:
_start = start[i]
to_add = pd.DataFrame()
to_add["time"] = pd.date_range(end=time,
periods=num_windows,
freq=_window_size,
start=_start)
to_add['instance_id'] = [_id] * len(to_add['time'])
out.append(to_add)
return pd.concat(out).reset_index(drop=True)
def convert_time_units(secs,
unit):
'''
Converts a time specified in seconds to a time in the given units
Args:
secs (integer): number of seconds. This function will convert the units of this number.
unit(str): units to be converted to.
acceptable values: years, months, days, hours, minutes, seconds, milliseconds, nanoseconds
'''
unit_divs = {'years': 31540000,
'months': 2628000,
'days': 86400,
'hours': 3600,
'minutes': 60,
'seconds': 1,
'milliseconds': 0.001,
'nanoseconds': 0.000000001}
if unit not in unit_divs:
raise ValueError("Invalid unit given, make sure it is plural")
return secs / (unit_divs[unit])
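
if __name__ == '__main__':
    # Illustrative usage sketch (added; the ids and dates are made-up values,
    # not taken from the library): build three daily cutoff times per
    # instance, each series ending at the supplied cutoff.
    example_cutoffs = make_temporal_cutoffs(
        instance_ids=[1, 2],
        cutoffs=pd.to_datetime(['2020-01-10', '2020-01-20']),
        window_size=pd.Timedelta('1d'),
        num_windows=3)
    print(example_cutoffs)
    # 3600 seconds expressed in hours -> 1.0
    print(convert_time_units(3600, 'hours'))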
| bsd-3-clause |
ABoothInTheWild/baseball-research | Playoff Odds/mlbPlayoffOdds2018/NLCS.py | 1 | 10983 | # -*- coding: utf-8 -*-
"""
Created on Sat Oct 13 16:50:53 2018
@author: ABooth
"""
import os
from MLBWinProbability import *
from MLB538WinProbability import *
from RunsScoredAllowedSimulator import *
import pandas as pd
import numpy as np
access_token = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
user_agent = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
stats = xmlstats.Xmlstats(access_token, user_agent)
date_format = datetime.today().strftime("%Y%m%d")
xmlStandingsOnDate = stats.standings(date=date_format, sport="mlb")
#last year data
endOfReg2017 = datetime(2017, 10, 2) #2017 last day
stats1 = xmlstats.Xmlstats(access_token, user_agent)
date_format1 = endOfReg2017.strftime("%Y%m%d")
lastYearStandingsOnDate = stats1.standings(date=date_format1, sport="mlb")
eloDF = pd.read_csv("https://projects.fivethirtyeight.com/mlb-api/mlb_elo.csv")
eloDF = eloDF[eloDF.season == 2018]
eloDF.date = pd.to_datetime(eloDF.date)
eloDF = eloDF.sort_values(["date"]).reset_index(drop=True)
mil2018 = ImportRunsScoredAllowedDF("MIL_2018_RegularSeason.csv")
lad2018 = ImportRunsScoredAllowedDF("LAD_2018_RegularSeason.csv")
bos2018 = ImportRunsScoredAllowedDF("BOS_2018_RegularSeason.csv")
hou2018 = ImportRunsScoredAllowedDF("HOU_2018_RegularSeason.csv")
numSims = 100000
##################################################################################
def GetSeries7Prob00(homeProb, awayProb):
totalProb = homeProb*homeProb*awayProb*awayProb
totalProb += homeProb*homeProb*awayProb*(1-awayProb)*awayProb
totalProb += homeProb*homeProb*awayProb*(1-awayProb)*(1-awayProb)*homeProb
totalProb += homeProb*homeProb*awayProb*(1-awayProb)*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*homeProb*(1-awayProb)*awayProb*awayProb
totalProb += homeProb*homeProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*homeProb*(1-awayProb)*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*homeProb*(1-awayProb)*(1-awayProb)*awayProb*homeProb
totalProb += homeProb*homeProb*(1-awayProb)*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*homeProb*(1-awayProb)*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += homeProb*(1-homeProb)*awayProb*awayProb*awayProb
totalProb += homeProb*(1-homeProb)*awayProb*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*(1-homeProb)*awayProb*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*(1-homeProb)*awayProb*(1-awayProb)*awayProb*homeProb
totalProb += homeProb*(1-homeProb)*awayProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*(1-homeProb)*awayProb*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += homeProb*(1-homeProb)*(1-awayProb)*awayProb*awayProb*homeProb
totalProb += homeProb*(1-homeProb)*(1-awayProb)*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += homeProb*(1-homeProb)*(1-awayProb)*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += homeProb*(1-homeProb)*(1-awayProb)*(1-awayProb)*awayProb*homeProb*homeProb
totalProb += (1-homeProb)*homeProb*awayProb*awayProb*awayProb
totalProb += (1-homeProb)*homeProb*awayProb*awayProb*(1-awayProb)*homeProb
totalProb += (1-homeProb)*homeProb*awayProb*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += (1-homeProb)*homeProb*awayProb*(1-awayProb)*awayProb*homeProb
totalProb += (1-homeProb)*homeProb*awayProb*(1-awayProb)*awayProb*(1-homeProb)*homeProb
totalProb += (1-homeProb)*homeProb*awayProb*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*homeProb*(1-awayProb)*awayProb*awayProb*homeProb
totalProb += (1-homeProb)*homeProb*(1-awayProb)*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += (1-homeProb)*homeProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*homeProb*(1-awayProb)*(1-awayProb)*awayProb*homeProb*homeProb
totalProb += (1-homeProb)*(1-homeProb)*awayProb*awayProb*awayProb*homeProb
totalProb += (1-homeProb)*(1-homeProb)*awayProb*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += (1-homeProb)*(1-homeProb)*awayProb*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*(1-homeProb)*awayProb*(1-awayProb)*awayProb*homeProb*homeProb
totalProb += (1-homeProb)*(1-homeProb)*(1-awayProb)*awayProb*awayProb*homeProb*homeProb
return totalProb
def GetSeries7Prob10(homeProb, awayProb):
totalProb = homeProb*awayProb*awayProb
totalProb += homeProb*awayProb*(1-awayProb)*awayProb
totalProb += homeProb*awayProb*(1-awayProb)*(1-awayProb)*homeProb
totalProb += homeProb*awayProb*(1-awayProb)*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*(1-awayProb)*awayProb*awayProb
totalProb += homeProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*(1-awayProb)*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*(1-awayProb)*(1-awayProb)*awayProb*homeProb
totalProb += homeProb*(1-awayProb)*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*(1-awayProb)*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*awayProb*awayProb*awayProb
totalProb += (1-homeProb)*awayProb*awayProb*(1-awayProb)*homeProb
totalProb += (1-homeProb)*awayProb*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += (1-homeProb)*awayProb*(1-awayProb)*awayProb*homeProb
totalProb += (1-homeProb)*awayProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += (1-homeProb)*awayProb*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*(1-awayProb)*awayProb*awayProb*homeProb
totalProb += (1-homeProb)*(1-awayProb)*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += (1-homeProb)*(1-awayProb)*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*(1-awayProb)*(1-awayProb)*awayProb*homeProb*homeProb
return totalProb
def GetSeries7Prob01(homeProb, awayProb):
totalProb = homeProb*awayProb*awayProb*awayProb
totalProb += homeProb*awayProb*awayProb*(1-awayProb)*homeProb
totalProb += homeProb*awayProb*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += homeProb*awayProb*(1-awayProb)*awayProb*homeProb
totalProb += homeProb*awayProb*(1-awayProb)*awayProb*(1-homeProb)*homeProb
totalProb += homeProb*awayProb*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += homeProb*(1-awayProb)*awayProb*awayProb*homeProb
totalProb += homeProb*(1-awayProb)*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += homeProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += homeProb*(1-awayProb)*(1-awayProb)*awayProb*homeProb*homeProb
totalProb += (1-homeProb)*awayProb*awayProb*awayProb*homeProb
totalProb += (1-homeProb)*awayProb*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += (1-homeProb)*awayProb*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += (1-homeProb)*awayProb*(1-awayProb)*awayProb*homeProb*homeProb
totalProb += (1-homeProb)*(1-awayProb)*awayProb*awayProb*homeProb*homeProb
return totalProb
def GetSeries7Prob11(homeProb, awayProb):
totalProb = awayProb*awayProb*awayProb
totalProb += awayProb*awayProb*(1-awayProb)*homeProb
totalProb += awayProb*awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += awayProb*(1-awayProb)*awayProb*homeProb
totalProb += awayProb*(1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += awayProb*(1-awayProb)*(1-awayProb)*homeProb*homeProb
totalProb += (1-awayProb)*awayProb*awayProb*homeProb
totalProb += (1-awayProb)*awayProb*awayProb*(1-homeProb)*homeProb
totalProb += (1-awayProb)*awayProb*(1-awayProb)*homeProb*homeProb
totalProb += (1-awayProb)*(1-awayProb)*awayProb*homeProb*homeProb
return totalProb
def GetSeries7Prob21(homeProb, awayProb):
totalProb = awayProb*awayProb
totalProb += awayProb*(1-awayProb)*homeProb
totalProb += awayProb*(1-awayProb)*(1-homeProb)*homeProb
totalProb += (1-awayProb)*awayProb*homeProb
totalProb += (1-awayProb)*awayProb*(1-awayProb)*homeProb
totalProb += (1-awayProb)*(1-awayProb)*homeProb*homeProb
return totalProb
def GetSeries7Prob31(homeProb, awayProb):
totalProb = awayProb
totalProb += (1-awayProb)*homeProb
totalProb += (1-awayProb)*(1-homeProb)*homeProb
return totalProb
def GetSeries7Prob22(homeProb, awayProb):
totalProb = awayProb*homeProb
totalProb += awayProb*(1-awayProb)*homeProb
totalProb += (1-awayProb)*homeProb*homeProb
return totalProb
def GetSeries7Prob23(homeProb, awayProb):
totalProb = homeProb*homeProb
return totalProb
def GetSeries7Prob33(homeProb, awayProb):
totalProb = homeProb
return totalProb
##################################################################################
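# Added sanity check (not part of the original analysis): with evenly matched
# teams (0.5 per game), GetSeries7Prob31 reduces to
# 0.5 + 0.5*0.5 + 0.5*0.5*0.5 = 0.875, i.e. the chance of winning at least one
# of the three remaining games when up three wins to one.
assert abs(GetSeries7Prob31(0.5, 0.5) - 0.875) < 1e-12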
#HOUBOS
bosHome1 = MLBWinProbability('BOS', 'HOU', xmlStandingsOnDate, lastYearStandingsOnDate)
bosAway1 = 1 - MLBWinProbability('HOU', 'BOS', xmlStandingsOnDate, lastYearStandingsOnDate)
bosHome2 = RunsSimulator(bos2018, hou2018, True, numSims, 4)
bosAway2 = 1 - RunsSimulator(hou2018, bos2018, True, numSims, 4)
five381013 = MLB538WinProbability('BOS', 'HOU', datetime(2018, 10, 13), eloDF)
five381014 = MLB538WinProbability('BOS', 'HOU', datetime(2018, 10, 14), eloDF)
five381016 = 1 - MLB538WinProbability('HOU', 'BOS', datetime(2018, 10, 16), eloDF)
five381017 = 1 - MLB538WinProbability('HOU', 'BOS', datetime(2018, 10, 17), eloDF)
five381018 = 1 - MLB538WinProbability('HOU', 'BOS', datetime(2018, 10, 18), eloDF)
print(bosAway1)
print(bosAway2)
print(five381018)
print(GetSeries7Prob31(bosHome1,bosAway1))
print(GetSeries7Prob31(bosHome2,bosAway2))
#MILLAD
milHome1 = MLBWinProbability('MIL', 'LAD', xmlStandingsOnDate, lastYearStandingsOnDate)
milAway1 = 1 - MLBWinProbability('LAD', 'MIL', xmlStandingsOnDate, lastYearStandingsOnDate)
milHome2 = RunsSimulator(mil2018, lad2018, True, numSims, 4)
milAway2 = 1 - RunsSimulator(lad2018, mil2018, True, numSims, 4)
five381012 = MLB538WinProbability('MIL', 'LAD', datetime(2018, 10, 12), eloDF)
five381013 = MLB538WinProbability('MIL', 'LAD', datetime(2018, 10, 13), eloDF)
five381015 = 1-MLB538WinProbability('LAD', 'MIL', datetime(2018, 10, 15), eloDF)
five381016 = 1-MLB538WinProbability('LAD', 'MIL', datetime(2018, 10, 16), eloDF)
five381017 = 1-MLB538WinProbability('LAD', 'MIL', datetime(2018, 10, 17), eloDF)
five381019 = MLB538WinProbability('MIL', 'LAD', datetime(2018, 10, 19), eloDF)
five381020 = MLB538WinProbability('MIL', 'LAD', datetime(2018, 10, 20), eloDF)
print(milHome1)
print(milHome2)
print(five381020)
print(GetSeries7Prob33(milHome1,milAway1))
print(GetSeries7Prob33(milHome2,milAway2))
| gpl-3.0 |
hainm/scikit-learn | sklearn/cluster/birch.py | 207 | 22706 | # Authors: Manoj Kumar <[email protected]>
# Alexandre Gramfort <[email protected]>
# Joel Nothman <[email protected]>
# License: BSD 3 clause
from __future__ import division
import warnings
import numpy as np
from scipy import sparse
from math import sqrt
from ..metrics.pairwise import euclidean_distances
from ..base import TransformerMixin, ClusterMixin, BaseEstimator
from ..externals.six.moves import xrange
from ..utils import check_array
from ..utils.extmath import row_norms, safe_sparse_dot
from ..utils.validation import NotFittedError, check_is_fitted
from .hierarchical import AgglomerativeClustering
def _iterate_sparse_X(X):
"""This little hack returns a densified row when iterating over a sparse
    matrix, instead of constructing a sparse matrix for every row, which is
expensive.
"""
n_samples = X.shape[0]
X_indices = X.indices
X_data = X.data
X_indptr = X.indptr
for i in xrange(n_samples):
row = np.zeros(X.shape[1])
startptr, endptr = X_indptr[i], X_indptr[i + 1]
nonzero_indices = X_indices[startptr:endptr]
row[nonzero_indices] = X_data[startptr:endptr]
yield row
def _split_node(node, threshold, branching_factor):
"""The node has to be split if there is no place for a new subcluster
in the node.
1. Two empty nodes and two empty subclusters are initialized.
2. The pair of distant subclusters are found.
3. The properties of the empty subclusters and nodes are updated
according to the nearest distance between the subclusters to the
pair of distant subclusters.
4. The two nodes are set as children to the two subclusters.
"""
new_subcluster1 = _CFSubcluster()
new_subcluster2 = _CFSubcluster()
new_node1 = _CFNode(
threshold, branching_factor, is_leaf=node.is_leaf,
n_features=node.n_features)
new_node2 = _CFNode(
threshold, branching_factor, is_leaf=node.is_leaf,
n_features=node.n_features)
new_subcluster1.child_ = new_node1
new_subcluster2.child_ = new_node2
if node.is_leaf:
if node.prev_leaf_ is not None:
node.prev_leaf_.next_leaf_ = new_node1
new_node1.prev_leaf_ = node.prev_leaf_
new_node1.next_leaf_ = new_node2
new_node2.prev_leaf_ = new_node1
new_node2.next_leaf_ = node.next_leaf_
if node.next_leaf_ is not None:
node.next_leaf_.prev_leaf_ = new_node2
dist = euclidean_distances(
node.centroids_, Y_norm_squared=node.squared_norm_, squared=True)
n_clusters = dist.shape[0]
farthest_idx = np.unravel_index(
dist.argmax(), (n_clusters, n_clusters))
node1_dist, node2_dist = dist[[farthest_idx]]
node1_closer = node1_dist < node2_dist
for idx, subcluster in enumerate(node.subclusters_):
if node1_closer[idx]:
new_node1.append_subcluster(subcluster)
new_subcluster1.update(subcluster)
else:
new_node2.append_subcluster(subcluster)
new_subcluster2.update(subcluster)
return new_subcluster1, new_subcluster2
class _CFNode(object):
"""Each node in a CFTree is called a CFNode.
The CFNode can have a maximum of branching_factor
number of CFSubclusters.
Parameters
----------
threshold : float
Threshold needed for a new subcluster to enter a CFSubcluster.
branching_factor : int
Maximum number of CF subclusters in each node.
is_leaf : bool
We need to know if the CFNode is a leaf or not, in order to
retrieve the final subclusters.
n_features : int
The number of features.
Attributes
----------
subclusters_ : array-like
list of subclusters for a particular CFNode.
prev_leaf_ : _CFNode
prev_leaf. Useful only if is_leaf is True.
next_leaf_ : _CFNode
next_leaf. Useful only if is_leaf is True.
init_centroids_ : ndarray, shape (branching_factor + 1, n_features)
manipulate ``init_centroids_`` throughout rather than centroids_ since
the centroids are just a view of the ``init_centroids_`` .
init_sq_norm_ : ndarray, shape (branching_factor + 1,)
manipulate init_sq_norm_ throughout. similar to ``init_centroids_``.
centroids_ : ndarray
view of ``init_centroids_``.
squared_norm_ : ndarray
view of ``init_sq_norm_``.
"""
def __init__(self, threshold, branching_factor, is_leaf, n_features):
self.threshold = threshold
self.branching_factor = branching_factor
self.is_leaf = is_leaf
self.n_features = n_features
# The list of subclusters, centroids and squared norms
# to manipulate throughout.
self.subclusters_ = []
self.init_centroids_ = np.zeros((branching_factor + 1, n_features))
self.init_sq_norm_ = np.zeros((branching_factor + 1))
self.squared_norm_ = []
self.prev_leaf_ = None
self.next_leaf_ = None
def append_subcluster(self, subcluster):
n_samples = len(self.subclusters_)
self.subclusters_.append(subcluster)
self.init_centroids_[n_samples] = subcluster.centroid_
self.init_sq_norm_[n_samples] = subcluster.sq_norm_
# Keep centroids and squared norm as views. In this way
# if we change init_centroids and init_sq_norm_, it is
# sufficient,
self.centroids_ = self.init_centroids_[:n_samples + 1, :]
self.squared_norm_ = self.init_sq_norm_[:n_samples + 1]
def update_split_subclusters(self, subcluster,
new_subcluster1, new_subcluster2):
"""Remove a subcluster from a node and update it with the
split subclusters.
"""
ind = self.subclusters_.index(subcluster)
self.subclusters_[ind] = new_subcluster1
self.init_centroids_[ind] = new_subcluster1.centroid_
self.init_sq_norm_[ind] = new_subcluster1.sq_norm_
self.append_subcluster(new_subcluster2)
def insert_cf_subcluster(self, subcluster):
"""Insert a new subcluster into the node."""
if not self.subclusters_:
self.append_subcluster(subcluster)
return False
threshold = self.threshold
branching_factor = self.branching_factor
# We need to find the closest subcluster among all the
# subclusters so that we can insert our new subcluster.
dist_matrix = np.dot(self.centroids_, subcluster.centroid_)
dist_matrix *= -2.
dist_matrix += self.squared_norm_
closest_index = np.argmin(dist_matrix)
closest_subcluster = self.subclusters_[closest_index]
# If the subcluster has a child, we need a recursive strategy.
if closest_subcluster.child_ is not None:
split_child = closest_subcluster.child_.insert_cf_subcluster(
subcluster)
if not split_child:
# If it is determined that the child need not be split, we
# can just update the closest_subcluster
closest_subcluster.update(subcluster)
self.init_centroids_[closest_index] = \
self.subclusters_[closest_index].centroid_
self.init_sq_norm_[closest_index] = \
self.subclusters_[closest_index].sq_norm_
return False
# things not too good. we need to redistribute the subclusters in
# our child node, and add a new subcluster in the parent
            # subcluster to accommodate the new child.
else:
new_subcluster1, new_subcluster2 = _split_node(
closest_subcluster.child_, threshold, branching_factor)
self.update_split_subclusters(
closest_subcluster, new_subcluster1, new_subcluster2)
if len(self.subclusters_) > self.branching_factor:
return True
return False
# good to go!
else:
merged = closest_subcluster.merge_subcluster(
subcluster, self.threshold)
if merged:
self.init_centroids_[closest_index] = \
closest_subcluster.centroid_
self.init_sq_norm_[closest_index] = \
closest_subcluster.sq_norm_
return False
# not close to any other subclusters, and we still
# have space, so add.
elif len(self.subclusters_) < self.branching_factor:
self.append_subcluster(subcluster)
return False
# We do not have enough space nor is it closer to an
# other subcluster. We need to split.
else:
self.append_subcluster(subcluster)
return True
class _CFSubcluster(object):
"""Each subcluster in a CFNode is called a CFSubcluster.
A CFSubcluster can have a CFNode has its child.
Parameters
----------
linear_sum : ndarray, shape (n_features,), optional
Sample. This is kept optional to allow initialization of empty
subclusters.
Attributes
----------
n_samples_ : int
Number of samples that belong to each subcluster.
linear_sum_ : ndarray
Linear sum of all the samples in a subcluster. Prevents holding
all sample data in memory.
squared_sum_ : float
Sum of the squared l2 norms of all samples belonging to a subcluster.
centroid_ : ndarray
Centroid of the subcluster. Prevent recomputing of centroids when
``CFNode.centroids_`` is called.
child_ : _CFNode
Child Node of the subcluster. Once a given _CFNode is set as the child
of the _CFNode, it is set to ``self.child_``.
sq_norm_ : ndarray
Squared norm of the subcluster. Used to prevent recomputing when
pairwise minimum distances are computed.
"""
def __init__(self, linear_sum=None):
if linear_sum is None:
self.n_samples_ = 0
self.squared_sum_ = 0.0
self.linear_sum_ = 0
else:
self.n_samples_ = 1
self.centroid_ = self.linear_sum_ = linear_sum
self.squared_sum_ = self.sq_norm_ = np.dot(
self.linear_sum_, self.linear_sum_)
self.child_ = None
def update(self, subcluster):
self.n_samples_ += subcluster.n_samples_
self.linear_sum_ += subcluster.linear_sum_
self.squared_sum_ += subcluster.squared_sum_
self.centroid_ = self.linear_sum_ / self.n_samples_
self.sq_norm_ = np.dot(self.centroid_, self.centroid_)
def merge_subcluster(self, nominee_cluster, threshold):
"""Check if a cluster is worthy enough to be merged. If
yes then merge.
"""
new_ss = self.squared_sum_ + nominee_cluster.squared_sum_
new_ls = self.linear_sum_ + nominee_cluster.linear_sum_
new_n = self.n_samples_ + nominee_cluster.n_samples_
new_centroid = (1 / new_n) * new_ls
new_norm = np.dot(new_centroid, new_centroid)
dot_product = (-2 * new_n) * new_norm
sq_radius = (new_ss + dot_product) / new_n + new_norm
if sq_radius <= threshold ** 2:
(self.n_samples_, self.linear_sum_, self.squared_sum_,
self.centroid_, self.sq_norm_) = \
new_n, new_ls, new_ss, new_centroid, new_norm
return True
return False
@property
def radius(self):
"""Return radius of the subcluster"""
dot_product = -2 * np.dot(self.linear_sum_, self.centroid_)
return sqrt(
((self.squared_sum_ + dot_product) / self.n_samples_) +
self.sq_norm_)
class Birch(BaseEstimator, TransformerMixin, ClusterMixin):
"""Implements the Birch clustering algorithm.
Every new sample is inserted into the root of the Clustering Feature
Tree. It is then clubbed together with the subcluster that has the
centroid closest to the new sample. This is done recursively till it
ends up at the subcluster of the leaf of the tree has the closest centroid.
Read more in the :ref:`User Guide <birch>`.
Parameters
----------
threshold : float, default 0.5
The radius of the subcluster obtained by merging a new sample and the
        closest subcluster should be smaller than the threshold. Otherwise a new
subcluster is started.
branching_factor : int, default 50
        Maximum number of CF subclusters in each node. If a new sample enters
        such that the number of subclusters exceeds the branching_factor then
the node has to be split. The corresponding parent also has to be
split and if the number of subclusters in the parent is greater than
the branching factor, then it has to be split recursively.
n_clusters : int, instance of sklearn.cluster model, default None
Number of clusters after the final clustering step, which treats the
subclusters from the leaves as new samples. By default, this final
clustering step is not performed and the subclusters are returned
as they are. If a model is provided, the model is fit treating
the subclusters as new samples and the initial data is mapped to the
label of the closest subcluster. If an int is provided, the model
fit is AgglomerativeClustering with n_clusters set to the int.
compute_labels : bool, default True
Whether or not to compute labels for each fit.
copy : bool, default True
Whether or not to make a copy of the given data. If set to False,
the initial data will be overwritten.
Attributes
----------
root_ : _CFNode
Root of the CFTree.
dummy_leaf_ : _CFNode
Start pointer to all the leaves.
subcluster_centers_ : ndarray,
Centroids of all subclusters read directly from the leaves.
subcluster_labels_ : ndarray,
Labels assigned to the centroids of the subclusters after
they are clustered globally.
labels_ : ndarray, shape (n_samples,)
Array of labels assigned to the input data.
if partial_fit is used instead of fit, they are assigned to the
last batch of data.
Examples
--------
>>> from sklearn.cluster import Birch
>>> X = [[0, 1], [0.3, 1], [-0.3, 1], [0, -1], [0.3, -1], [-0.3, -1]]
>>> brc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,
... compute_labels=True)
>>> brc.fit(X)
Birch(branching_factor=50, compute_labels=True, copy=True, n_clusters=None,
threshold=0.5)
>>> brc.predict(X)
array([0, 0, 0, 1, 1, 1])
References
----------
    * Tian Zhang, Raghu Ramakrishnan, Miron Livny
BIRCH: An efficient data clustering method for large databases.
http://www.cs.sfu.ca/CourseCentral/459/han/papers/zhang96.pdf
* Roberto Perdisci
JBirch - Java implementation of BIRCH clustering algorithm
https://code.google.com/p/jbirch/
"""
def __init__(self, threshold=0.5, branching_factor=50, n_clusters=3,
compute_labels=True, copy=True):
self.threshold = threshold
self.branching_factor = branching_factor
self.n_clusters = n_clusters
self.compute_labels = compute_labels
self.copy = copy
def fit(self, X, y=None):
"""
Build a CF Tree for the input data.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Input data.
"""
self.fit_, self.partial_fit_ = True, False
return self._fit(X)
def _fit(self, X):
X = check_array(X, accept_sparse='csr', copy=self.copy)
threshold = self.threshold
branching_factor = self.branching_factor
if branching_factor <= 1:
raise ValueError("Branching_factor should be greater than one.")
n_samples, n_features = X.shape
# If partial_fit is called for the first time or fit is called, we
# start a new tree.
partial_fit = getattr(self, 'partial_fit_')
has_root = getattr(self, 'root_', None)
if getattr(self, 'fit_') or (partial_fit and not has_root):
# The first root is the leaf. Manipulate this object throughout.
self.root_ = _CFNode(threshold, branching_factor, is_leaf=True,
n_features=n_features)
# To enable getting back subclusters.
self.dummy_leaf_ = _CFNode(threshold, branching_factor,
is_leaf=True, n_features=n_features)
self.dummy_leaf_.next_leaf_ = self.root_
self.root_.prev_leaf_ = self.dummy_leaf_
# Cannot vectorize. Enough to convince to use cython.
if not sparse.issparse(X):
iter_func = iter
else:
iter_func = _iterate_sparse_X
for sample in iter_func(X):
subcluster = _CFSubcluster(linear_sum=sample)
split = self.root_.insert_cf_subcluster(subcluster)
if split:
new_subcluster1, new_subcluster2 = _split_node(
self.root_, threshold, branching_factor)
del self.root_
self.root_ = _CFNode(threshold, branching_factor,
is_leaf=False,
n_features=n_features)
self.root_.append_subcluster(new_subcluster1)
self.root_.append_subcluster(new_subcluster2)
centroids = np.concatenate([
leaf.centroids_ for leaf in self._get_leaves()])
self.subcluster_centers_ = centroids
self._global_clustering(X)
return self
def _get_leaves(self):
"""
Retrieve the leaves of the CF Node.
Returns
-------
leaves: array-like
List of the leaf nodes.
"""
leaf_ptr = self.dummy_leaf_.next_leaf_
leaves = []
while leaf_ptr is not None:
leaves.append(leaf_ptr)
leaf_ptr = leaf_ptr.next_leaf_
return leaves
def partial_fit(self, X=None, y=None):
"""
Online learning. Prevents rebuilding of CFTree from scratch.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features), None
Input data. If X is not provided, only the global clustering
step is done.
"""
self.partial_fit_, self.fit_ = True, False
if X is None:
# Perform just the final global clustering step.
self._global_clustering()
return self
else:
self._check_fit(X)
return self._fit(X)
def _check_fit(self, X):
is_fitted = hasattr(self, 'subcluster_centers_')
# Called by partial_fit, before fitting.
has_partial_fit = hasattr(self, 'partial_fit_')
# Should raise an error if one does not fit before predicting.
if not (is_fitted or has_partial_fit):
raise NotFittedError("Fit training data before predicting")
if is_fitted and X.shape[1] != self.subcluster_centers_.shape[1]:
raise ValueError(
"Training data and predicted data do "
"not have same number of features.")
def predict(self, X):
"""
Predict data using the ``centroids_`` of subclusters.
Avoid computation of the row norms of X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Input data.
Returns
-------
labels: ndarray, shape(n_samples)
Labelled data.
"""
X = check_array(X, accept_sparse='csr')
self._check_fit(X)
reduced_distance = safe_sparse_dot(X, self.subcluster_centers_.T)
reduced_distance *= -2
reduced_distance += self._subcluster_norms
return self.subcluster_labels_[np.argmin(reduced_distance, axis=1)]
def transform(self, X, y=None):
"""
Transform X into subcluster centroids dimension.
Each dimension represents the distance from the sample point to each
cluster centroid.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Input data.
Returns
-------
X_trans : {array-like, sparse matrix}, shape (n_samples, n_clusters)
Transformed data.
"""
check_is_fitted(self, 'subcluster_centers_')
return euclidean_distances(X, self.subcluster_centers_)
def _global_clustering(self, X=None):
"""
Global clustering for the subclusters obtained after fitting
"""
clusterer = self.n_clusters
centroids = self.subcluster_centers_
compute_labels = (X is not None) and self.compute_labels
# Preprocessing for the global clustering.
not_enough_centroids = False
if isinstance(clusterer, int):
clusterer = AgglomerativeClustering(
n_clusters=self.n_clusters)
# There is no need to perform the global clustering step.
if len(centroids) < self.n_clusters:
not_enough_centroids = True
elif (clusterer is not None and not
hasattr(clusterer, 'fit_predict')):
raise ValueError("n_clusters should be an instance of "
"ClusterMixin or an int")
# To use in predict to avoid recalculation.
self._subcluster_norms = row_norms(
self.subcluster_centers_, squared=True)
if clusterer is None or not_enough_centroids:
self.subcluster_labels_ = np.arange(len(centroids))
if not_enough_centroids:
warnings.warn(
"Number of subclusters found (%d) by Birch is less "
"than (%d). Decrease the threshold."
% (len(centroids), self.n_clusters))
else:
# The global clustering step that clusters the subclusters of
# the leaves. It assumes the centroids of the subclusters as
# samples and finds the final centroids.
self.subcluster_labels_ = clusterer.fit_predict(
self.subcluster_centers_)
if compute_labels:
self.labels_ = self.predict(X)
| bsd-3-clause |
mikebenfield/scikit-learn | sklearn/ensemble/tests/test_gradient_boosting_loss_functions.py | 78 | 6016 | """
Testing for the gradient boosting loss functions and initial estimators.
"""
import numpy as np
from numpy.testing import assert_array_equal
from numpy.testing import assert_almost_equal
from numpy.testing import assert_equal
from sklearn.utils import check_random_state
from sklearn.utils.testing import assert_raises
from sklearn.ensemble.gradient_boosting import BinomialDeviance
from sklearn.ensemble.gradient_boosting import LogOddsEstimator
from sklearn.ensemble.gradient_boosting import LeastSquaresError
from sklearn.ensemble.gradient_boosting import RegressionLossFunction
from sklearn.ensemble.gradient_boosting import LOSS_FUNCTIONS
from sklearn.ensemble.gradient_boosting import _weighted_percentile
from sklearn.ensemble.gradient_boosting import QuantileLossFunction
def test_binomial_deviance():
# Check binomial deviance loss.
# Check against alternative definitions in ESLII.
bd = BinomialDeviance(2)
# pred has the same BD for y in {0, 1}
assert_equal(bd(np.array([0.0]), np.array([0.0])),
bd(np.array([1.0]), np.array([0.0])))
assert_almost_equal(bd(np.array([1.0, 1.0, 1.0]),
np.array([100.0, 100.0, 100.0])),
0.0)
assert_almost_equal(bd(np.array([1.0, 0.0, 0.0]),
np.array([100.0, -100.0, -100.0])), 0)
# check if same results as alternative definition of deviance (from ESLII)
alt_dev = lambda y, pred: np.mean(np.logaddexp(0.0, -2.0 *
(2.0 * y - 1) * pred))
test_data = [(np.array([1.0, 1.0, 1.0]), np.array([100.0, 100.0, 100.0])),
(np.array([0.0, 0.0, 0.0]), np.array([100.0, 100.0, 100.0])),
(np.array([0.0, 0.0, 0.0]),
np.array([-100.0, -100.0, -100.0])),
(np.array([1.0, 1.0, 1.0]),
np.array([-100.0, -100.0, -100.0]))]
for datum in test_data:
assert_almost_equal(bd(*datum), alt_dev(*datum))
    # check the negative gradient against the alternative formula from ESLII
alt_ng = lambda y, pred: (2 * y - 1) / (1 + np.exp(2 * (2 * y - 1) * pred))
for datum in test_data:
assert_almost_equal(bd.negative_gradient(*datum), alt_ng(*datum))
def test_log_odds_estimator():
# Check log odds estimator.
est = LogOddsEstimator()
assert_raises(ValueError, est.fit, None, np.array([1]))
est.fit(None, np.array([1.0, 0.0]))
assert_equal(est.prior, 0.0)
assert_array_equal(est.predict(np.array([[1.0], [1.0]])),
np.array([[0.0], [0.0]]))
def test_sample_weight_smoke():
rng = check_random_state(13)
y = rng.rand(100)
pred = rng.rand(100)
# least squares
loss = LeastSquaresError(1)
loss_wo_sw = loss(y, pred)
loss_w_sw = loss(y, pred, np.ones(pred.shape[0], dtype=np.float32))
assert_almost_equal(loss_wo_sw, loss_w_sw)
def test_sample_weight_init_estimators():
# Smoke test for init estimators with sample weights.
rng = check_random_state(13)
X = rng.rand(100, 2)
sample_weight = np.ones(100)
reg_y = rng.rand(100)
clf_y = rng.randint(0, 2, size=100)
for Loss in LOSS_FUNCTIONS.values():
if Loss is None:
continue
if issubclass(Loss, RegressionLossFunction):
k = 1
y = reg_y
else:
k = 2
y = clf_y
if Loss.is_multi_class:
# skip multiclass
continue
loss = Loss(k)
init_est = loss.init_estimator()
init_est.fit(X, y)
out = init_est.predict(X)
assert_equal(out.shape, (y.shape[0], 1))
sw_init_est = loss.init_estimator()
sw_init_est.fit(X, y, sample_weight=sample_weight)
sw_out = init_est.predict(X)
assert_equal(sw_out.shape, (y.shape[0], 1))
# check if predictions match
assert_array_equal(out, sw_out)
def test_weighted_percentile():
y = np.empty(102, dtype=np.float64)
y[:50] = 0
y[-51:] = 2
y[-1] = 100000
y[50] = 1
sw = np.ones(102, dtype=np.float64)
sw[-1] = 0.0
score = _weighted_percentile(y, sw, 50)
assert score == 1
def test_weighted_percentile_equal():
y = np.empty(102, dtype=np.float64)
y.fill(0.0)
sw = np.ones(102, dtype=np.float64)
sw[-1] = 0.0
score = _weighted_percentile(y, sw, 50)
assert score == 0
def test_weighted_percentile_zero_weight():
y = np.empty(102, dtype=np.float64)
y.fill(1.0)
sw = np.ones(102, dtype=np.float64)
sw.fill(0.0)
score = _weighted_percentile(y, sw, 50)
assert score == 1.0
def test_quantile_loss_function():
# Non regression test for the QuantileLossFunction object
# There was a sign problem when evaluating the function
# for negative values of 'ytrue - ypred'
x = np.asarray([-1.0, 0.0, 1.0])
y_found = QuantileLossFunction(1, 0.9)(x, np.zeros_like(x))
y_expected = np.asarray([0.1, 0.0, 0.9]).mean()
np.testing.assert_allclose(y_found, y_expected)
def test_sample_weight_deviance():
# Test if deviance supports sample weights.
rng = check_random_state(13)
X = rng.rand(100, 2)
sample_weight = np.ones(100)
reg_y = rng.rand(100)
clf_y = rng.randint(0, 2, size=100)
mclf_y = rng.randint(0, 3, size=100)
for Loss in LOSS_FUNCTIONS.values():
if Loss is None:
continue
if issubclass(Loss, RegressionLossFunction):
k = 1
y = reg_y
p = reg_y
else:
k = 2
y = clf_y
p = clf_y
if Loss.is_multi_class:
k = 3
y = mclf_y
# one-hot encoding
p = np.zeros((y.shape[0], k), dtype=np.float64)
for i in range(k):
p[:, i] = y == i
loss = Loss(k)
deviance_w_w = loss(y, p, sample_weight)
deviance_wo_w = loss(y, p)
assert deviance_wo_w == deviance_w_w
| bsd-3-clause |
kieferk/dfply | dfply/set_ops.py | 1 | 6022 | from .base import *
import warnings
import pandas as pd
def validate_set_ops(df, other):
"""
Helper function to ensure that DataFrames are valid for set operations.
Columns must be the same name in the same order, and indices must be of the
same dimension with the same names.
"""
if df.columns.values.tolist() != other.columns.values.tolist():
not_in_df = [col for col in other.columns if col not in df.columns]
not_in_other = [col for col in df.columns if col not in other.columns]
error_string = 'Error: not compatible.'
if len(not_in_df):
error_string += ' Cols in y but not x: ' + str(not_in_df) + '.'
if len(not_in_other):
error_string += ' Cols in x but not y: ' + str(not_in_other) + '.'
raise ValueError(error_string)
if len(df.index.names) != len(other.index.names):
raise ValueError('Index dimension mismatch')
if df.index.names != other.index.names:
raise ValueError('Index mismatch')
else:
return
# ------------------------------------------------------------------------------
# `union`
# ------------------------------------------------------------------------------
@pipe
def union(df, other, index=False, keep='first'):
"""
Returns rows that appear in either DataFrame.
Args:
df (pandas.DataFrame): data passed in through the pipe.
other (pandas.DataFrame): other DataFrame to use for set operation with
the first.
Kwargs:
index (bool): Boolean indicating whether to consider the pandas index
as part of the set operation (default `False`).
keep (str): Indicates which duplicate should be kept. Options are `'first'`
and `'last'`.
"""
validate_set_ops(df, other)
stacked = df.append(other)
if index:
stacked_reset_indexes = stacked.reset_index()
index_cols = [col for col in stacked_reset_indexes.columns if col not in df.columns]
index_name = df.index.names
return_df = stacked_reset_indexes.drop_duplicates(keep=keep).set_index(index_cols)
return_df.index.names = index_name
return return_df
else:
return stacked.drop_duplicates(keep=keep)
# ------------------------------------------------------------------------------
# `intersect`
# ------------------------------------------------------------------------------
@pipe
def intersect(df, other, index=False, keep='first'):
"""
Returns rows that appear in both DataFrames.
Args:
df (pandas.DataFrame): data passed in through the pipe.
other (pandas.DataFrame): other DataFrame to use for set operation with
the first.
Kwargs:
index (bool): Boolean indicating whether to consider the pandas index
as part of the set operation (default `False`).
keep (str): Indicates which duplicate should be kept. Options are `'first'`
and `'last'`.
"""
validate_set_ops(df, other)
if index:
df_reset_index = df.reset_index()
other_reset_index = other.reset_index()
index_cols = [col for col in df_reset_index.columns if col not in df.columns]
df_index_names = df.index.names
return_df = (pd.merge(df_reset_index, other_reset_index,
how='inner',
left_on=df_reset_index.columns.values.tolist(),
right_on=df_reset_index.columns.values.tolist())
.set_index(index_cols))
return_df.index.names = df_index_names
return_df = return_df.drop_duplicates(keep=keep)
return return_df
else:
return_df = pd.merge(df, other,
how='inner',
left_on=df.columns.values.tolist(),
right_on=df.columns.values.tolist())
return_df = return_df.drop_duplicates(keep=keep)
return return_df
# ------------------------------------------------------------------------------
# `set_diff`
# ------------------------------------------------------------------------------
@pipe
def set_diff(df, other, index=False, keep='first'):
"""
Returns rows that appear in the first DataFrame but not the second.
Args:
df (pandas.DataFrame): data passed in through the pipe.
other (pandas.DataFrame): other DataFrame to use for set operation with
the first.
Kwargs:
index (bool): Boolean indicating whether to consider the pandas index
as part of the set operation (default `False`).
keep (str): Indicates which duplicate should be kept. Options are `'first'`
and `'last'`.
"""
validate_set_ops(df, other)
if index:
df_reset_index = df.reset_index()
other_reset_index = other.reset_index()
index_cols = [col for col in df_reset_index.columns if col not in df.columns]
df_index_names = df.index.names
return_df = (pd.merge(df_reset_index, other_reset_index,
how='left',
left_on=df_reset_index.columns.values.tolist(),
right_on=other_reset_index.columns.values.tolist(),
indicator=True)
.set_index(index_cols))
return_df = return_df[return_df._merge == 'left_only']
return_df.index.names = df_index_names
return_df = return_df.drop_duplicates(keep=keep)[df.columns]
return return_df
else:
return_df = pd.merge(df, other,
how='left',
left_on=df.columns.values.tolist(),
right_on=df.columns.values.tolist(),
indicator=True)
return_df = return_df[return_df._merge == 'left_only']
return_df = return_df.drop_duplicates(keep=keep)[df.columns]
return return_df
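# A minimal usage sketch, assuming dfply's standard `>>` piping convention
# and the hypothetical frames `a` and `b` below. Guarded so it only runs when
# the module is executed directly (e.g. ``python -m dfply.set_ops``).
if __name__ == '__main__':
    a = pd.DataFrame({'x': [1, 1, 2, 3], 'y': ['a', 'a', 'b', 'c']})
    b = pd.DataFrame({'x': [1, 3, 4], 'y': ['a', 'c', 'd']})

    print(a >> union(b))      # rows appearing in either frame, duplicates dropped
    print(a >> intersect(b))  # rows appearing in both frames
    print(a >> set_diff(b))   # rows of `a` that do not appear in `b`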
| gpl-3.0 |
tcermak/programingworkshop | Python/pandas_and_parallel/parse_mesonet.py | 8 | 3249 | import pandas as pd
import datetime as dt
from read_mesonet import MesoFile
class MesoArrays(object):
'''Pull the meteorological, agricultural or Reese-only data out of the
raw mesonet data files, correctly index those arrays, and correct the
station pressure measurement.
'''
def __init__(self, filename):
self.data = MesoFile(filename).read()
self.year = MesoFile(filename).year
def MetArray(self): # Just the meteorological data array
meso2 = self.data
groups = meso2.groupby(0)
met = meso2.ix[groups.groups.get(1)]
columns = ['Time', 'Array ID', 'Station ID', '10 m Scalar Wind Speed',
'10 m Vector Wind Speed', '10 m Wind Direction',
'10 m Wind Direction Std', '10 m Wind Speed Std',
'10 m Gust Wind Speed', '1.5 m Temperature',
'9 m Temperature', '2 m Temperature',
'1.5 m Relative Humidity', 'Station Pressure', 'Rainfall',
'Dewpoint', '2 m Wind Speed', 'Solar Radiation', 'foo']
met.columns = columns[:len(met.columns)]
met = met.set_index('Time')
met['Station Pressure'] = met['Station Pressure']+600
return met
def AgrArray(self): # Just the agricultural data array
agr2 = self.data
groups = agr2.groupby(0)
agr = agr2.ix[groups.groups.get(2)]
agr = agr.dropna(axis=1, how='all')
columns = ['Time', 'Array ID', 'Station ID',
'5 cm Natural Soil Temperature',
'10 cm Natural Soil Temperature',
'20 cm Natural Soil Temperature',
'5 cm Bare Soil Temperature',
'20 cm Bare Soil Temperature',
'5 cm Water Content', '20 cm Water Content',
'60 cm Water Content', '75 cm Water Content',
'Leaf Wetness', 'Battery Voltage', 'Program Signature']
agr.columns = columns
agr = agr.set_index('Time')
return agr
def ReeseArray(self): # Just the Reese-specific data array
reese2 = self.data
groups = reese2.groupby(0)
reese = reese2.ix[groups.groups.get(3)]
reese = reese.dropna(axis=1, how='all')
year = self.year
if year<2006:
reese.columns = ['Array ID', 'Time', 'Station ID',
'Total Radiation',
'SPLite Accumulated Radiation',
'Licor Accumulated Radiation',
'CM21 Accumulated Radiation',
'CM3 Accumulated Radiation']
if year>2008:
reese.columns = ['Time', 'Array ID', 'Station ID',
'10 m Scalar Wind Speed 2',
'10 m Vector Wind Speed 2',
'10 m Wind Direction 2',
'10 m Wind Direction Std 2',
'10 m Wind Speed Std 2',
'10 m Peak Wind Speed 2',
'20 ft Wind Speed', '2 m Wind Speed 2']
reese = reese.set_index('Time')
return reese | mit |
dario-chiappetta/Due | due/episode.py | 1 | 12174 | """
An Episode is a sequence of Events issued by agents. Here we define an interface
for Episodes, as well as some helper methods to manipulate their content:
* :class:`Episode` models recorded Episodes that can be used to train agents
* :class:`LiveEpisode` models Episodes that are still in progress.
* :func:`extract_utterance_pairs` will extract utterances as strings from Episodes.
API
===
"""
import io
import json
import uuid
import asyncio
import logging
from functools import lru_cache
from datetime import datetime
import numpy as np
import pandas as pd
import due.agent
from due.util.time import convert_datetime, parse_timedelta
UTTERANCE_LABEL = 'utterance'
MAX_EVENT_RESPONSES = 200
class Episode(object):
"""
An Episode is a sequence of Events issued by Agents
"""
def __init__(self, starter_agent_id, invited_agent_id):
self._logger = logging.getLogger(__name__ + ".Episode")
self.starter_id = starter_agent_id
self.invited_id = invited_agent_id
self.id = str(uuid.uuid1())
self.timestamp = datetime.now()
self.events = []
def __eq__(self, other):
if isinstance(other, Episode):
if self.starter_id != other.starter_id: return False
if self.invited_id != other.invited_id: return False
if self.id != other.id: return False
if self.timestamp != other.timestamp: return False
if self.events != other.events: return False
return True
return False
def __ne__(self, other):
return not self.__eq__(other)
def last_event(self, event_type=None):
"""
Returns the last event in the Episode. Optionally, events can be filtered
by type.
:param event_type: an event type, or a collection of types
:type event_type: :class:`Event.Type` or list of :class:`Event.Type`
"""
event_type = event_type if not isinstance(event_type, Event.Type) else (event_type,)
for e in reversed(self.events):
if event_type is None or e.type in event_type:
return e
return None
def save(self, output_format='standard'):
"""
Save the Episode to a serializable object, that can be loaded with
:meth:`due.episode.Episode.load`.
By default, episodes are saved in the **standard** format, which is a
dict of metadata with a list of saved Events (whose format is handled by
the :class:`due.event.Event` class).
It is also possible to save the Episode in the **compact** format. In
compact representation, event objects are squashed into CSV lines. This
makes them slower to load and save, but more readable and easily
editable without the use of external tools; because of this, they are
especially suited for toy examples and small hand-crafted corpora.
:return: a serializable representation of `self`
:rtype: `dict`
"""
result = {
'id': self.id,
'timestamp': self.timestamp,
'starter_agent': str(self.starter_id),
'invited_agents': [str(self.invited_id)],
'events': [e.save() for e in self.events],
'format': 'standard'
}
if output_format == 'compact':
return _compact_saved_episode(result)
return result
@staticmethod
def load(saved_episode):
"""
Loads an Episode as it was saved by :meth:`due.episode.Episode.save`.
:param saved_episode: the episode to be loaded
:type saved_episode: `dict`
:return: an Episode object representing `saved_episode`
:rtype: :class:`due.episode.Episode`
"""
if saved_episode['format'] == 'compact':
saved_episode = _uncompact_saved_episode(saved_episode)
result = Episode(saved_episode['starter_agent'], saved_episode['invited_agents'][0])
result.id = saved_episode['id']
result.timestamp = convert_datetime(saved_episode['timestamp'])
result.events = [Event.load(e) for e in saved_episode['events']]
return result
class LiveEpisode(Episode):
"""
A LiveEpisode is an Episode that is currently under way. That is, new Events
can be acted in it.
:param starter_agent: the Agent which started the Episode
:type starter_agent: :class:`due.agent.Agent`
:param invited_agent: the agent invited to the Episode
:type invited_agent: :class:`due.agent.Agent`
"""
def __init__(self, starter_agent, invited_agent):
super().__init__(starter_agent.id, invited_agent.id)
self._logger = logging.getLogger(__name__ + ".LiveEpisode")
self.starter = starter_agent
self.invited = invited_agent
self._agent_by_id = {
starter_agent.id: starter_agent,
invited_agent.id: invited_agent
}
def add_event(self, event):
"""
Adds an Event to the LiveEpisode, triggering the
:meth:`due.agent.Agent.event_callback` method on the other participants.
Response Events that are returned from the callback will be
processed iteratively.
:param agent: the agent which acted the Event
:type agent: :class:`due.agent.Agent`
:param event: the event that was acted by the Agent
:type event: :class:`due.event.Event`
"""
new_events = [event]
count = 0
while new_events:
e = new_events.pop(0)
self._logger.info("New %s event by %s: '%s'", e.type.name, e.agent, e.payload)
agent = self.agent_by_id(e.agent)
self.events.append(e)
e.mark_acted()
for a in self._other_agents(agent):
self._logger.info("Notifying %s", a)
response_events = a.event_callback(e, self)
new_events.extend(response_events)
count += 1
if count > MAX_EVENT_RESPONSES:
self._logger.warning("Agents reached maximum number of responses allowed for a single Event (%s). Further Events won't be notified to Agents", MAX_EVENT_RESPONSES)
break
self.events.extend(new_events)
[e.mark_acted() for e in new_events]
def agent_by_id(self, agent_id):
"""
Retrieve the :class:`due.agent.Agent` object of one of the agents that
are participating in the :class:`LiveEpisode`. Raise `ValueError` if the
given ID does not correspond to any of the agents in the Episode.
:param agent_id: ID of one of the agents in the LiveEpisode
:type agent_id: :class:`due.agent.Agent`
"""
if agent_id not in self._agent_by_id:
raise ValueError(f"Agent '{agent_id}' not found in LiveEpisode {self}")
result = self._agent_by_id[agent_id]
assert isinstance(result, due.agent.Agent)
return result
def _other_agents(self, agent):
return [self.starter] if agent == self.invited else [self.invited]
class AsyncLiveEpisode(LiveEpisode):
"""
This is a subclass of :class:`LiveEpisode` that implement asynchronous
notification of new Events.
"""
def add_event(self, event):
self.events.append(event)
event.mark_acted()
agent = self.agent_by_id(event.agent)
for a in self._other_agents(agent):
loop = asyncio.get_event_loop()
loop.create_task(self.async_event_callback(a, event))
async def async_event_callback(self, agent, event):
self._logger.info("Notifying event %s to agent %s", event, agent)
response_events = agent.event_callback(event, self)
if response_events:
for e in response_events:
self.add_event(e)
#
# Save/Load Helpers
#
def _compact_saved_episode(saved_episode):
"""
Convert a saved episode into a compact representation.
"""
events = saved_episode['events']
events = [_compact_saved_event(e) for e in events]
df = pd.DataFrame(events)
s = io.StringIO()
df.to_csv(s, sep='|', header=False, index=False)
compact_events = [l for l in s.getvalue().split('\n') if l]
return {**saved_episode, 'events': compact_events, 'format': 'compact'}
def _compact_saved_event(saved_event):
"""
Prepare an Event for compact serialization (meaning that its fields must
be writable as a line of CSV). This is always the case, except for Actions,
which payloads are objects. In this case, we serialize them as JSON.
"""
e = saved_event
if e['type'] == Event.Type.Action.value:
return {**e, 'payload': json.dumps(e['payload'])}
return e
def _uncompact_saved_episode(compact_episode):
"""
Convert a compacted saved episode back to the standard format.
"""
buf = io.StringIO('\n'.join(compact_episode['events']))
df = pd.read_csv(buf, sep='|', names=['type', 'timestamp', 'agent', 'payload'])
compact_events = df.replace({np.nan:None}).to_dict(orient='records')
events = []
last_timestamp = convert_datetime(compact_episode['timestamp'])
for e in compact_events:
e_new = _uncompact_saved_event(e, last_timestamp)
events.append(e_new)
last_timestamp = e_new['timestamp']
return {**compact_episode, 'events': events, 'format': 'standard'}
def _uncompact_saved_event(compact_event, last_timestamp):
"""
Note that `compact_event` is not the CSV line. It is already its dict
representation, but Action payloads need to be deserialized from JSON.
Also, the Pandas interpretation of CSV needs to be fixed, by converting
timestamps to `datetime`, and converting NaN values to `None`.
"""
e = compact_event
timestamp = _uncompact_timestamp(compact_event, last_timestamp)
e = {**e, 'timestamp': timestamp}
if compact_event['type'] == Event.Type.Action.value:
e['payload'] = json.loads(e['payload'])
return e
def _uncompact_timestamp(compact_event, last_timestamp):
"""
In compacted episodes the timestamp can be an ISO string, or a time
difference from the previous event; in the latter case, the delta must be
expressed as an int (number of seconds) or in the `1d2h3m4s` format (see
:func:`due.util.time.parse_timedelta`).
"""
try:
return convert_datetime(compact_event['timestamp'])
except ValueError:
return last_timestamp + parse_timedelta(compact_event['timestamp'])
#
# Utilities
#
def _is_utterance(event):
return event.type == Event.Type.Utterance
def extract_utterances(episode, preprocess_f=None, keep_holes=False):
"""
Return all the utterances in an Episode as strings. If the `keep_holes`
parameter is set `True`, non-utterance events will be returned as well, as
`None` elements in the resulting list.
:param episode: the Episode to extract utterances from
:type episode: :class:`Episode`
:param preprocess_f: when given, sentences will be run through this function before being returned
:type preprocess_f: `func`
:param keep_holes: if `True`, `None` elements will be returned in place of non-utterance events.
:return: the list of utterances in the Episode
:rtype: `list` of `str`
"""
if not preprocess_f:
preprocess_f = lambda x: x
result = []
for e in episode.events:
if e.type == Event.Type.Utterance:
result.append(preprocess_f(e.payload))
elif keep_holes:
result.append(None)
return result
def extract_utterance_pairs(episode, preprocess_f=None):
"""
Process Events in an Episode, extracting all the Utterance Event pairs that
can be interpreted as one dialogue turn (ie. an Agent's utterance, and a
different Agent's response).
In particular, Event pairs are extracted from the Episode so that:
* Both Events are Utterances (currently, non-utterances will raise an exception)
* The second Event immediately follows the first
* The two Events are acted by two different Agents
This means that if an utterance has more than one answer, only the first
one will be included in the result.
If a `preprocess_f` function is specified, resulting utterances will be run
through this function before being returned. An LRU cache is applied to
`preprocess_f`, as most sentences will be returned as both utterances and
answers.
Return two lists of the same length, so that each utterance `X_i` in the
first list has its response `y_i` in the second.
:param episode: an Episode
:type episode: :class:`due.episode.Episode`
:param preprocess_f: when given, sentences will be run through this function before being returned
:type preprocess_f: `func`
:return: a list of utterances and the list of their answers (one per utterance)
:rtype: (`list`, `list`)
"""
preprocess_f = lru_cache(4)(preprocess_f) if preprocess_f else lambda x: x
result_X = []
result_y = []
for e1, e2 in zip(episode.events, episode.events[1:]):
if not _is_utterance(e1) or not _is_utterance(e2):
raise NotImplementedError("Non-utterance Events are not supported yet")
if e1.agent != e2.agent and e1.payload and e2.payload:
result_X.append(preprocess_f(e1.payload))
result_y.append(preprocess_f(e2.payload))
return result_X, result_y
# Quick fix for circular dependencies
from due.event import Event
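# A minimal sketch of the save/load round trip in the default "standard"
# format, using only the helpers defined above. The agent identifiers are
# arbitrary placeholders, and `convert_datetime` is assumed to pass datetime
# objects through unchanged.
if __name__ == '__main__':
    episode = Episode('agent-alice', 'agent-bob')
    saved = episode.save()               # a plain dict of metadata and events
    restored = Episode.load(saved)
    print(restored == episode)           # True: same id, timestamp and events
    print(extract_utterances(restored))  # no events yet, so an empty list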
| gpl-3.0 |
DonBeo/scikit-learn | sklearn/feature_extraction/hashing.py | 24 | 5668 | # Author: Lars Buitinck <[email protected]>
# License: BSD 3 clause
import numbers
import numpy as np
import scipy.sparse as sp
from . import _hashing
from ..base import BaseEstimator, TransformerMixin
def _iteritems(d):
"""Like d.iteritems, but accepts any collections.Mapping."""
return d.iteritems() if hasattr(d, "iteritems") else d.items()
class FeatureHasher(BaseEstimator, TransformerMixin):
"""Implements feature hashing, aka the hashing trick.
This class turns sequences of symbolic feature names (strings) into
scipy.sparse matrices, using a hash function to compute the matrix column
corresponding to a name. The hash function employed is the signed 32-bit
version of Murmurhash3.
Feature names of type byte string are used as-is. Unicode strings are
converted to UTF-8 first, but no Unicode normalization is done.
This class is a low-memory alternative to DictVectorizer and
CountVectorizer, intended for large-scale (online) learning and situations
where memory is tight, e.g. when running prediction code on embedded
devices.
Parameters
----------
n_features : integer, optional
The number of features (columns) in the output matrices. Small numbers
of features are likely to cause hash collisions, but large numbers
will cause larger coefficient dimensions in linear learners.
dtype : numpy type, optional
The type of feature values. Passed to scipy.sparse matrix constructors
as the dtype argument. Do not set this to bool, np.boolean or any
unsigned integer type.
input_type : string, optional
Either "dict" (the default) to accept dictionaries over
(feature_name, value); "pair" to accept pairs of (feature_name, value);
or "string" to accept single strings.
feature_name should be a string, while value should be a number.
In the case of "string", a value of 1 is implied.
The feature_name is hashed to find the appropriate column for the
feature. The value's sign might be flipped in the output (but see
non_negative, below).
non_negative : boolean, optional, default False
Whether output matrices should contain non-negative values only;
effectively calls abs on the matrix prior to returning it.
When True, output values can be interpreted as frequencies.
When False, output values will have expected value zero.
See also
--------
DictVectorizer : vectorizes string-valued features using a hash table.
sklearn.preprocessing.OneHotEncoder : handles nominal/categorical features
encoded as columns of integers.
"""
def __init__(self, n_features=(2 ** 20), input_type="dict",
dtype=np.float64, non_negative=False):
self._validate_params(n_features, input_type)
self.dtype = dtype
self.input_type = input_type
self.n_features = n_features
self.non_negative = non_negative
@staticmethod
def _validate_params(n_features, input_type):
# strangely, np.int16 instances are not instances of Integral,
# while np.int64 instances are...
if not isinstance(n_features, (numbers.Integral, np.integer)):
raise TypeError("n_features must be integral, got %r (%s)."
% (n_features, type(n_features)))
elif n_features < 1 or n_features >= 2 ** 31:
raise ValueError("Invalid number of features (%d)." % n_features)
if input_type not in ("dict", "pair", "string"):
raise ValueError("input_type must be 'dict', 'pair' or 'string',"
" got %r." % input_type)
def fit(self, X=None, y=None):
"""No-op.
This method doesn't do anything. It exists purely for compatibility
with the scikit-learn transformer API.
Returns
-------
self : FeatureHasher
"""
# repeat input validation for grid search (which calls set_params)
self._validate_params(self.n_features, self.input_type)
return self
def transform(self, raw_X, y=None):
"""Transform a sequence of instances to a scipy.sparse matrix.
Parameters
----------
raw_X : iterable over iterable over raw features, length = n_samples
Samples. Each sample must be an iterable (e.g., a list or tuple)
containing/generating feature names (and optionally values, see
the input_type constructor argument) which will be hashed.
raw_X need not support the len function, so it can be the result
of a generator; n_samples is determined on the fly.
y : (ignored)
Returns
-------
X : scipy.sparse matrix, shape = (n_samples, self.n_features)
Feature matrix, for use with estimators or further transformers.
"""
raw_X = iter(raw_X)
if self.input_type == "dict":
raw_X = (_iteritems(d) for d in raw_X)
elif self.input_type == "string":
raw_X = (((f, 1) for f in x) for x in raw_X)
indices, indptr, values = \
_hashing.transform(raw_X, self.n_features, self.dtype)
n_samples = indptr.shape[0] - 1
if n_samples == 0:
raise ValueError("Cannot vectorize empty sequence.")
X = sp.csr_matrix((values, indices, indptr), dtype=self.dtype,
shape=(n_samples, self.n_features))
X.sum_duplicates() # also sorts the indices
if self.non_negative:
np.abs(X.data, X.data)
return X
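# A minimal usage sketch: hashing two feature dicts into a fixed-width sparse
# matrix. Run as a module (``python -m sklearn.feature_extraction.hashing``)
# so that the relative imports above resolve; the feature names are arbitrary.
if __name__ == "__main__":
    hasher = FeatureHasher(n_features=8, input_type="dict")
    X = hasher.transform([{"cat": 2, "dog": 1}, {"cat": 1, "fish": 3}])
    print(X.shape)      # (2, 8)
    print(X.toarray())  # signed counts spread over the 8 hashed columns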
| bsd-3-clause |
cdegroc/scikit-learn | examples/neighbors/plot_classification.py | 7 | 1724 | """
================================
Nearest Neighbors Classification
================================
Sample usage of Nearest Neighbors classification.
It will plot the decision boundaries for each class.
"""
print __doc__
import numpy as np
import pylab as pl
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
pl.figure()
pl.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
pl.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
pl.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
pl.axis('tight')
pl.show()
| bsd-3-clause |
Featuretools/featuretools | featuretools/entityset/timedelta.py | 1 | 6528 | import pandas as pd
from dateutil.relativedelta import relativedelta
class Timedelta(object):
"""Represents differences in time.
Timedeltas can be defined in multiple units. Supported units:
- "ms" : milliseconds
- "s" : seconds
- "h" : hours
- "m" : minutes
- "d" : days
- "o"/"observations" : number of individual events
- "mo" : months
- "Y" : years
Timedeltas can also be defined in terms of observations. In this case, the
Timedelta represents the period spanned by `value`.
For observation timedeltas:
>>> three_observations_log = Timedelta(3, "observations")
>>> three_observations_log.get_name()
'3 Observations'
"""
_Observations = "o"
# units for absolute times
_absolute_units = ['ms', 's', 'h', 'm', 'd', 'w']
_relative_units = ['mo', 'Y']
_readable_units = {
"ms": "Milliseconds",
"s": "Seconds",
"h": "Hours",
"m": "Minutes",
"d": "Days",
"o": "Observations",
"w": "Weeks",
"Y": "Years",
"mo": "Months"
}
_readable_to_unit = {v.lower(): k for k, v in _readable_units.items()}
def __init__(self, value, unit=None, delta_obj=None):
"""
Args:
value (float, str, dict) : Value of timedelta, string providing
both unit and value, or a dictionary of units and times.
unit (str) : Unit of time delta.
delta_obj (pd.Timedelta or pd.DateOffset) : A time object used
internally to do time operations. If None is provided, one will
be created using the provided value and unit.
"""
self.check_value(value, unit)
self.times = self.fix_units()
if delta_obj is not None:
self.delta_obj = delta_obj
else:
self.delta_obj = self.get_unit_type()
@classmethod
def from_dictionary(cls, dictionary):
dict_units = dictionary['unit']
dict_values = dictionary['value']
if isinstance(dict_units, str) and isinstance(dict_values, (int, float)):
return cls({dict_units: dict_values})
else:
all_units = dict()
for i in range(len(dict_units)):
all_units[dict_units[i]] = dict_values[i]
return cls(all_units)
@classmethod
def make_singular(cls, s):
if len(s) > 1 and s.endswith('s'):
return s[:-1]
return s
@classmethod
def _check_unit_plural(cls, s):
if len(s) > 2 and not s.endswith('s'):
return (s + 's').lower()
elif len(s) > 1:
return s.lower()
return s
def get_value(self, unit=None):
if unit is not None:
return self.times[unit]
elif len(self.times.values()) == 1:
return list(self.times.values())[0]
else:
return self.times
def get_units(self):
return list(self.times.keys())
def get_unit_type(self):
all_units = self.get_units()
if self._Observations in all_units:
return None
elif self.is_absolute() and self.has_multiple_units() is False:
return pd.Timedelta(self.times[all_units[0]], all_units[0])
else:
readable_times = self.lower_readable_times()
return relativedelta(**readable_times)
def check_value(self, value, unit):
if isinstance(value, str):
from featuretools.utils.wrangle import _check_timedelta
td = _check_timedelta(value)
self.times = td.times
elif isinstance(value, dict):
self.times = value
else:
self.times = {unit: value}
def fix_units(self):
fixed_units = dict()
for unit, value in self.times.items():
unit = self._check_unit_plural(unit)
if unit in self._readable_to_unit:
unit = self._readable_to_unit[unit]
fixed_units[unit] = value
return fixed_units
def lower_readable_times(self):
readable_times = dict()
for unit, value in self.times.items():
readable_unit = self._readable_units[unit].lower()
readable_times[readable_unit] = value
return readable_times
def get_name(self):
all_units = self.get_units()
if self.has_multiple_units() is False:
return "{} {}".format(self.times[all_units[0]], self._readable_units[all_units[0]])
final_str = ""
for unit, value in self.times.items():
if value == 1:
unit = self.make_singular(unit)
final_str += "{} {} ".format(value, self._readable_units[unit])
return final_str[:-1]
def get_arguments(self):
units = list()
values = list()
for unit, value in self.times.items():
units.append(unit)
values.append(value)
if len(units) == 1:
return {'unit': units[0], 'value': values[0]}
else:
return {'unit': units, 'value': values}
def is_absolute(self):
for unit in self.get_units():
if unit not in self._absolute_units:
return False
return True
def has_no_observations(self):
for unit in self.get_units():
if unit in self._Observations:
return False
return True
def has_multiple_units(self):
if len(self.get_units()) > 1:
return True
else:
return False
def __eq__(self, other):
if not isinstance(other, Timedelta):
return False
return (self.times == other.times)
def __neg__(self):
"""Negate the timedelta"""
new_times = dict()
for unit, value in self.times.items():
new_times[unit] = -value
if self.delta_obj is not None:
return Timedelta(new_times, delta_obj=-self.delta_obj)
else:
return Timedelta(new_times)
def __radd__(self, time):
"""Add the Timedelta to a timestamp value"""
if self._Observations not in self.get_units():
return time + self.delta_obj
else:
raise Exception("Invalid unit")
def __rsub__(self, time):
"""Subtract the Timedelta from a timestamp value"""
if self._Observations not in self.get_units():
return time - self.delta_obj
else:
raise Exception("Invalid unit")
| bsd-3-clause |
hammerlab/mhcflurry | test/test_multi_output.py | 1 | 2763 | from nose.tools import eq_, assert_less, assert_greater, assert_almost_equal
import numpy
import pandas
from numpy import testing
numpy.random.seed(0)
import logging
logging.getLogger('tensorflow').disabled = True
from mhcflurry.class1_neural_network import Class1NeuralNetwork
from mhcflurry.common import random_peptides
from mhcflurry.testing_utils import cleanup, startup
teardown = cleanup
setup = startup
def test_multi_output():
hyperparameters = dict(
loss="custom:mse_with_inequalities_and_multiple_outputs",
activation="tanh",
layer_sizes=[16],
max_epochs=50,
minibatch_size=250,
random_negative_rate=0.0,
random_negative_constant=0.0,
early_stopping=False,
validation_split=0.0,
locally_connected_layers=[
],
dense_layer_l1_regularization=0.0,
dropout_probability=0.0,
optimizer="adam",
num_outputs=3)
df = pandas.DataFrame()
df["peptide"] = random_peptides(10000, length=9)
df["output1"] = df.peptide.map(lambda s: s[4] == 'K').astype(int) * 49000 + 1
df["output2"] = df.peptide.map(lambda s: s[3] == 'Q').astype(int) * 49000 + 1
df["output3"] = df.peptide.map(lambda s: s[4] == 'K' or s[3] == 'Q').astype(int) * 49000 + 1
print("output1 mean", df.output1.mean())
print("output2 mean", df.output2.mean())
stacked = df.set_index("peptide").stack().reset_index()
stacked.columns = ['peptide', 'output_name', 'value']
stacked["output_index"] = stacked.output_name.map({
"output1": 0,
"output2": 1,
"output3": 2,
})
assert not stacked.output_index.isnull().any(), stacked
fit_kwargs = {
'verbose': 1,
}
predictor = Class1NeuralNetwork(**hyperparameters)
stacked_train = stacked
predictor.fit(
stacked_train.peptide.values,
stacked_train.value.values,
output_indices=stacked_train.output_index.values,
**fit_kwargs)
result = predictor.predict(df.peptide.values, output_index=None)
print(df.shape, result.shape)
print(result)
df["prediction1"] = result[:,0]
df["prediction2"] = result[:,1]
df["prediction3"] = result[:,2]
df_by_peptide = df.set_index("peptide")
correlation = pandas.DataFrame(
numpy.corrcoef(df_by_peptide.T),
columns=df_by_peptide.columns,
index=df_by_peptide.columns)
print(correlation)
sub_correlation = correlation.loc[
["output1", "output2", "output3"],
["prediction1", "prediction2", "prediction3"],
]
assert sub_correlation.iloc[0, 0] > 0.99, correlation
assert sub_correlation.iloc[1, 1] > 0.99, correlation
assert sub_correlation.iloc[2, 2] > 0.99, correlation
| apache-2.0 |
cactusbin/nyt | matplotlib/lib/matplotlib/backends/backend_macosx.py | 4 | 17940 | from __future__ import division, print_function
import os
import numpy
from matplotlib._pylab_helpers import Gcf
from matplotlib.backend_bases import RendererBase, GraphicsContextBase,\
FigureManagerBase, FigureCanvasBase, NavigationToolbar2, TimerBase
from matplotlib.backend_bases import ShowBase
from matplotlib.cbook import maxdict
from matplotlib.figure import Figure
from matplotlib.path import Path
from matplotlib.mathtext import MathTextParser
from matplotlib.colors import colorConverter
from matplotlib import rcParams
from matplotlib.widgets import SubplotTool
import matplotlib
from matplotlib.backends import _macosx
class Show(ShowBase):
def mainloop(self):
_macosx.show()
show = Show()
class RendererMac(RendererBase):
"""
The renderer handles drawing/rendering operations. Most of the renderer's
methods forward the command to the renderer's graphics context. The
renderer does not wrap a C object and is written in pure Python.
"""
texd = maxdict(50) # a cache of tex image rasters
def __init__(self, dpi, width, height):
RendererBase.__init__(self)
self.dpi = dpi
self.width = width
self.height = height
self.gc = GraphicsContextMac()
self.gc.set_dpi(self.dpi)
self.mathtext_parser = MathTextParser('MacOSX')
def set_width_height (self, width, height):
self.width, self.height = width, height
def draw_path(self, gc, path, transform, rgbFace=None):
if rgbFace is not None:
rgbFace = tuple(rgbFace)
linewidth = gc.get_linewidth()
gc.draw_path(path, transform, linewidth, rgbFace)
def draw_markers(self, gc, marker_path, marker_trans, path, trans, rgbFace=None):
if rgbFace is not None:
rgbFace = tuple(rgbFace)
linewidth = gc.get_linewidth()
gc.draw_markers(marker_path, marker_trans, path, trans, linewidth, rgbFace)
def draw_path_collection(self, gc, master_transform, paths, all_transforms,
offsets, offsetTrans, facecolors, edgecolors,
linewidths, linestyles, antialiaseds, urls,
offset_position):
if offset_position=='data':
offset_position = True
else:
offset_position = False
path_ids = []
for path, transform in self._iter_collection_raw_paths(
master_transform, paths, all_transforms):
path_ids.append((path, transform))
master_transform = master_transform.get_matrix()
all_transforms = [t.get_matrix() for t in all_transforms]
offsetTrans = offsetTrans.get_matrix()
gc.draw_path_collection(master_transform, path_ids, all_transforms,
offsets, offsetTrans, facecolors, edgecolors,
linewidths, linestyles, antialiaseds,
offset_position)
def draw_quad_mesh(self, gc, master_transform, meshWidth, meshHeight,
coordinates, offsets, offsetTrans, facecolors,
antialiased, edgecolors):
gc.draw_quad_mesh(master_transform.get_matrix(),
meshWidth,
meshHeight,
coordinates,
offsets,
offsetTrans.get_matrix(),
facecolors,
antialiased,
edgecolors)
def new_gc(self):
self.gc.save()
self.gc.set_hatch(None)
self.gc._alpha = 1.0
self.gc._forced_alpha = False # if True, _alpha overrides A from RGBA
return self.gc
def draw_gouraud_triangle(self, gc, points, colors, transform):
points = transform.transform(points)
gc.draw_gouraud_triangle(points, colors)
def draw_image(self, gc, x, y, im):
im.flipud_out()
nrows, ncols, data = im.as_rgba_str()
gc.draw_image(x, y, nrows, ncols, data)
im.flipud_out()
def draw_tex(self, gc, x, y, s, prop, angle, ismath='TeX!', mtext=None):
# todo, handle props, angle, origins
size = prop.get_size_in_points()
texmanager = self.get_texmanager()
key = s, size, self.dpi, angle, texmanager.get_font_config()
im = self.texd.get(key) # Not sure what this does; just copied from backend_agg.py
if im is None:
Z = texmanager.get_grey(s, size, self.dpi)
Z = numpy.array(255.0 - Z * 255.0, numpy.uint8)
gc.draw_mathtext(x, y, angle, Z)
def _draw_mathtext(self, gc, x, y, s, prop, angle):
ox, oy, width, height, descent, image, used_characters = \
self.mathtext_parser.parse(s, self.dpi, prop)
gc.draw_mathtext(x, y, angle, 255 - image.as_array())
def draw_text(self, gc, x, y, s, prop, angle, ismath=False, mtext=None):
if ismath:
self._draw_mathtext(gc, x, y, s, prop, angle)
else:
family = prop.get_family()
weight = prop.get_weight()
style = prop.get_style()
points = prop.get_size_in_points()
size = self.points_to_pixels(points)
gc.draw_text(x, y, unicode(s), family, size, weight, style, angle)
def get_text_width_height_descent(self, s, prop, ismath):
if ismath=='TeX':
# todo: handle props
texmanager = self.get_texmanager()
fontsize = prop.get_size_in_points()
w, h, d = texmanager.get_text_width_height_descent(s, fontsize,
renderer=self)
return w, h, d
if ismath:
ox, oy, width, height, descent, fonts, used_characters = \
self.mathtext_parser.parse(s, self.dpi, prop)
return width, height, descent
family = prop.get_family()
weight = prop.get_weight()
style = prop.get_style()
points = prop.get_size_in_points()
size = self.points_to_pixels(points)
width, height, descent = self.gc.get_text_width_height_descent(unicode(s), family, size, weight, style)
return width, height, 0.0*descent
def flipy(self):
return False
def points_to_pixels(self, points):
return points/72.0 * self.dpi
def option_image_nocomposite(self):
return True
class GraphicsContextMac(_macosx.GraphicsContext, GraphicsContextBase):
"""
The GraphicsContext wraps a Quartz graphics context. All methods
are implemented at the C-level in macosx.GraphicsContext. These
methods set drawing properties such as the line style, fill color,
etc. The actual drawing is done by the Renderer, which draws into
the GraphicsContext.
"""
def __init__(self):
GraphicsContextBase.__init__(self)
_macosx.GraphicsContext.__init__(self)
def set_alpha(self, alpha):
GraphicsContextBase.set_alpha(self, alpha)
_alpha = self.get_alpha()
_macosx.GraphicsContext.set_alpha(self, _alpha, self.get_forced_alpha())
rgb = self.get_rgb()
_macosx.GraphicsContext.set_foreground(self, rgb)
def set_foreground(self, fg, isRGBA=False):
GraphicsContextBase.set_foreground(self, fg, isRGBA)
rgb = self.get_rgb()
_macosx.GraphicsContext.set_foreground(self, rgb)
def set_graylevel(self, fg):
GraphicsContextBase.set_graylevel(self, fg)
_macosx.GraphicsContext.set_graylevel(self, fg)
def set_clip_rectangle(self, box):
GraphicsContextBase.set_clip_rectangle(self, box)
if not box: return
_macosx.GraphicsContext.set_clip_rectangle(self, box.bounds)
def set_clip_path(self, path):
GraphicsContextBase.set_clip_path(self, path)
if not path: return
path = path.get_fully_transformed_path()
_macosx.GraphicsContext.set_clip_path(self, path)
########################################################################
#
# The following functions and classes are for pylab and implement
# window/figure managers, etc...
#
########################################################################
def draw_if_interactive():
"""
For performance reasons, we don't want to redraw the figure after
each draw command. Instead, we mark the figure as invalid, so that
it will be redrawn as soon as the event loop resumes via PyOS_InputHook.
This function should be called after each draw event, even if
matplotlib is not running interactively.
"""
if matplotlib.is_interactive():
figManager = Gcf.get_active()
if figManager is not None:
figManager.canvas.invalidate()
def new_figure_manager(num, *args, **kwargs):
"""
Create a new figure manager instance
"""
FigureClass = kwargs.pop('FigureClass', Figure)
figure = FigureClass(*args, **kwargs)
return new_figure_manager_given_figure(num, figure)
def new_figure_manager_given_figure(num, figure):
"""
Create a new figure manager instance for the given figure.
"""
canvas = FigureCanvasMac(figure)
manager = FigureManagerMac(canvas, num)
return manager
class TimerMac(_macosx.Timer, TimerBase):
'''
Subclass of :class:`backend_bases.TimerBase` that uses CoreFoundation
run loops for timer events.
Attributes:
* interval: The time between timer events in milliseconds. Default
is 1000 ms.
* single_shot: Boolean flag indicating whether this timer should
operate as single shot (run once and then stop). Defaults to False.
* callbacks: Stores list of (func, args) tuples that will be called
upon timer events. This list can be manipulated directly, or the
functions add_callback and remove_callback can be used.
'''
# completely implemented at the C-level (in _macosx.Timer)
class FigureCanvasMac(_macosx.FigureCanvas, FigureCanvasBase):
"""
The canvas the figure renders into. Calls the draw and print fig
methods, creates the renderers, etc...
Public attribute
figure - A Figure instance
Events such as button presses, mouse movements, and key presses
are handled in the C code and the base class methods
button_press_event, button_release_event, motion_notify_event,
key_press_event, and key_release_event are called from there.
"""
filetypes = FigureCanvasBase.filetypes.copy()
filetypes['bmp'] = 'Windows bitmap'
filetypes['jpeg'] = 'JPEG'
filetypes['jpg'] = 'JPEG'
filetypes['gif'] = 'Graphics Interchange Format'
filetypes['tif'] = 'Tagged Image Format File'
filetypes['tiff'] = 'Tagged Image Format File'
def __init__(self, figure):
FigureCanvasBase.__init__(self, figure)
width, height = self.get_width_height()
self.renderer = RendererMac(figure.dpi, width, height)
_macosx.FigureCanvas.__init__(self, width, height)
def resize(self, width, height):
self.renderer.set_width_height(width, height)
dpi = self.figure.dpi
width /= dpi
height /= dpi
self.figure.set_size_inches(width, height)
def _print_bitmap(self, filename, *args, **kwargs):
# In backend_bases.py, print_figure changes the dpi of the figure.
# But since we are essentially redrawing the picture, we need the
# original dpi. Pick it up from the renderer.
dpi = kwargs['dpi']
old_dpi = self.figure.dpi
self.figure.dpi = self.renderer.dpi
width, height = self.figure.get_size_inches()
width, height = width*dpi, height*dpi
filename = unicode(filename)
self.write_bitmap(filename, width, height, dpi)
self.figure.dpi = old_dpi
def print_bmp(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def print_jpg(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def print_jpeg(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def print_tif(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def print_tiff(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def print_gif(self, filename, *args, **kwargs):
self._print_bitmap(filename, *args, **kwargs)
def new_timer(self, *args, **kwargs):
"""
Creates a new backend-specific subclass of :class:`backend_bases.Timer`.
This is useful for getting periodic events through the backend's native
event loop. Implemented only for backends with GUIs.
optional arguments:
*interval*
Timer interval in milliseconds
*callbacks*
Sequence of (func, args, kwargs) where func(*args, **kwargs) will
be executed by the timer every *interval*.
"""
return TimerMac(*args, **kwargs)
class FigureManagerMac(_macosx.FigureManager, FigureManagerBase):
"""
Wrap everything up into a window for the pylab interface
"""
def __init__(self, canvas, num):
FigureManagerBase.__init__(self, canvas, num)
title = "Figure %d" % num
_macosx.FigureManager.__init__(self, canvas, title)
if rcParams['toolbar']=='classic':
self.toolbar = NavigationToolbarMac(canvas)
elif rcParams['toolbar']=='toolbar2':
self.toolbar = NavigationToolbar2Mac(canvas)
else:
self.toolbar = None
if self.toolbar is not None:
self.toolbar.update()
def notify_axes_change(fig):
'this will be called whenever the current axes is changed'
if self.toolbar != None: self.toolbar.update()
self.canvas.figure.add_axobserver(notify_axes_change)
if matplotlib.is_interactive():
self.show()
def close(self):
Gcf.destroy(self.num)
class NavigationToolbarMac(_macosx.NavigationToolbar):
def __init__(self, canvas):
self.canvas = canvas
basedir = os.path.join(rcParams['datapath'], "images")
images = {}
for imagename in ("stock_left",
"stock_right",
"stock_up",
"stock_down",
"stock_zoom-in",
"stock_zoom-out",
"stock_save_as"):
filename = os.path.join(basedir, imagename+".ppm")
images[imagename] = self._read_ppm_image(filename)
_macosx.NavigationToolbar.__init__(self, images)
self.message = None
def _read_ppm_image(self, filename):
data = ""
imagefile = open(filename)
for line in imagefile:
if "#" in line:
i = line.index("#")
line = line[:i] + "\n"
data += line
imagefile.close()
magic, width, height, maxcolor, imagedata = data.split(None, 4)
width, height = int(width), int(height)
assert magic=="P6"
assert len(imagedata)==width*height*3 # 3 colors in RGB
return (width, height, imagedata)
def panx(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].xaxis.pan(direction)
self.canvas.invalidate()
def pany(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].yaxis.pan(direction)
self.canvas.invalidate()
def zoomx(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].xaxis.zoom(direction)
self.canvas.invalidate()
def zoomy(self, direction):
axes = self.canvas.figure.axes
selected = self.get_active()
for i in selected:
axes[i].yaxis.zoom(direction)
self.canvas.invalidate()
def save_figure(self, *args):
filename = _macosx.choose_save_file('Save the figure',
self.canvas.get_default_filename())
if filename is None: # Cancel
return
self.canvas.print_figure(filename)
class NavigationToolbar2Mac(_macosx.NavigationToolbar2, NavigationToolbar2):
def __init__(self, canvas):
NavigationToolbar2.__init__(self, canvas)
def _init_toolbar(self):
basedir = os.path.join(rcParams['datapath'], "images")
_macosx.NavigationToolbar2.__init__(self, basedir)
def draw_rubberband(self, event, x0, y0, x1, y1):
self.canvas.set_rubberband(int(x0), int(y0), int(x1), int(y1))
def release(self, event):
self.canvas.remove_rubberband()
def set_cursor(self, cursor):
_macosx.set_cursor(cursor)
def save_figure(self, *args):
filename = _macosx.choose_save_file('Save the figure',
self.canvas.get_default_filename())
if filename is None: # Cancel
return
self.canvas.print_figure(filename)
def prepare_configure_subplots(self):
toolfig = Figure(figsize=(6,3))
canvas = FigureCanvasMac(toolfig)
toolfig.subplots_adjust(top=0.9)
tool = SubplotTool(self.canvas.figure, toolfig)
return canvas
def set_message(self, message):
_macosx.NavigationToolbar2.set_message(self, message.encode('utf-8'))
def dynamic_update(self):
self.canvas.draw_idle()
########################################################################
#
# Now just provide the standard names that backend.__init__ is expecting
#
########################################################################
FigureManager = FigureManagerMac
| unlicense |
zygmuntz/numer.ai | march/validate_lr.py | 1 | 2651 | #!/usr/bin/env python
"Load data, create the validation split, optionally scale data, train a linear model, evaluate"
"Code updated for march 2016 data"
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Normalizer, PolynomialFeatures
from sklearn.preprocessing import MaxAbsScaler, MinMaxScaler, StandardScaler, RobustScaler
from sklearn.linear_model import LogisticRegression as LR
from sklearn.metrics import roc_auc_score as AUC, accuracy_score as accuracy, log_loss
#
def train_and_evaluate( y_train, x_train, y_val, x_val ):
lr = LR()
lr.fit( x_train, y_train )
p = lr.predict_proba( x_val )
auc = AUC( y_val, p[:,1] )
ll = log_loss( y_val, p[:,1] )
return ( auc, ll )
def transform_train_and_evaluate( transformer ):
global x_train, x_val, y_train
x_train_new = transformer.fit_transform( x_train )
x_val_new = transformer.transform( x_val )
return train_and_evaluate( y_train, x_train_new, y_val, x_val_new )
#
input_file = 'data/numerai_training_data.csv'
d = pd.read_csv( input_file )
train, val = train_test_split( d, test_size = 5000 )
y_train = train.target.values
y_val = val.target.values
x_train = train.drop( 'target', axis = 1 )
x_val = val.drop( 'target', axis = 1 )
# train, predict, evaluate
auc, ll = train_and_evaluate( y_train, x_train, y_val, x_val )
print "No transformation"
print "AUC: {:.2%}, log loss: {:.2%} \n".format( auc, ll )
# try different transformations for X
# X is already scaled to (0,1) so these won't make much difference
transformers = [ MaxAbsScaler(), MinMaxScaler(), RobustScaler(), StandardScaler(),
Normalizer( norm = 'l1' ), Normalizer( norm = 'l2' ), Normalizer( norm = 'max' ) ]
#poly_scaled = Pipeline([ ( 'poly', PolynomialFeatures()), ( 'scaler', MinMaxScaler()) ])
#transformers.extend([ PolynomialFeatures(), poly_scaled ])
for transformer in transformers:
print transformer
auc, ll = transform_train_and_evaluate( transformer )
print "AUC: {:.2%}, log loss: {:.2%} \n".format( auc, ll )
"""
No transformation
AUC: 52.35%, log loss: 69.20%
MaxAbsScaler(copy=True)
AUC: 52.35%, log loss: 69.20%
MinMaxScaler(copy=True, feature_range=(0, 1))
AUC: 52.35%, log loss: 69.20%
RobustScaler(copy=True, with_centering=True, with_scaling=True)
AUC: 52.35%, log loss: 69.20%
StandardScaler(copy=True, with_mean=True, with_std=True)
AUC: 52.35%, log loss: 69.20%
Normalizer(copy=True, norm='l1')
AUC: 51.26%, log loss: 69.26%
Normalizer(copy=True, norm='l2')
AUC: 52.18%, log loss: 69.21%
Normalizer(copy=True, norm='max')
AUC: 52.40%, log loss: 69.19%
""" | bsd-3-clause |
shaunstanislaus/pandashells | pandashells/lib/plot_lib.py | 7 | 4022 | #! /usr/bin/env python
import sys
import re
from pandashells.lib import module_checker_lib
module_checker_lib.check_for_modules(
['matplotlib', 'dateutil', 'mpld3', 'seaborn'])
from dateutil.parser import parse
import matplotlib as mpl
import pylab as pl
import seaborn as sns
import mpld3
def show(args):
# if figure saving requested
if hasattr(args, 'savefig') and args.savefig:
# save html if requested
rex_html = re.compile('.*?\.html$')
if rex_html.match(args.savefig[0]):
fig = pl.gcf()
html = mpld3.fig_to_html(fig)
with open(args.savefig[0], 'w') as outfile:
outfile.write(html)
return
# save image types
pl.savefig(args.savefig[0])
# otherwise show to screen
else:
pl.show()
def set_plot_styling(args):
# set up seaborn context
sns.set(context=args.plot_context[0],
style=args.plot_theme[0],
palette=args.plot_palette[0])
# modify seaborn slightly to look good in interactive backends
if 'white' not in args.plot_theme[0]:
mpl.rcParams['figure.facecolor'] = 'white'
mpl.rcParams['figure.edgecolor'] = 'white'
def set_limits(args):
if args.xlim:
pl.gca().set_xlim(args.xlim)
if args.ylim:
pl.gca().set_ylim(args.ylim)
def set_scale(args):
if args.xlog:
pl.gca().set_xscale('log')
if args.ylog:
pl.gca().set_yscale('log')
def set_labels_title(args):
if args.title:
pl.title(args.title[0])
if args.xlabel:
pl.xlabel(args.xlabel[0])
if args.ylabel:
pl.ylabel(args.ylabel[0])
def set_legend(args):
if args.legend:
loc = args.legend[0]
rex = re.compile(r'\d')
m = rex.match(loc)
if m:
loc = int(loc)
else:
loc = 'best'
pl.legend(loc=loc)
def set_grid(args):
if args.no_grid:
pl.grid(False)
else:
pl.grid(True)
def ensure_xy_args(args):
x_is_none = args.x is None
y_is_none = args.y is None
if (x_is_none ^ y_is_none):
msg = "\nIf either x or y is specified, both must be specified\n\n"
sys.stderr.write(msg)
sys.exit(1)
def ensure_xy_omission_state(args, df):
if (len(df.columns) != 2) and (args.x is None):
msg = "\n\n x and y can be ommited only "
msg += "for 2-column data-frames\n"
sys.stderr.write(msg)
sys.exit(1)
def autofill_plot_fields_and_labels(args, df):
# add labels for two column inputs
if (args.x is None) and (len(df.columns) == 2):
args.x = [df.columns[0]]
args.y = [df.columns[1]]
# if no xlabel, set it to the x field
if args.xlabel is None:
args.xlabel = args.x
# if no ylabel, and only 1 trace being plotted, set ylabel to that field
if (args.ylabel is None) and (len(args.y) == 1):
args.ylabel = [args.y[0]]
def str_to_date(x):
try:
basestring
except NameError:
basestring = str
if isinstance(x.iloc[0], basestring):
return [parse(e) for e in x]
else:
return x
def draw_traces(args, df):
y_field_list = args.y
x = str_to_date(df[args.x[0]])
style_list = args.style
alpha_list = args.alpha
if len(style_list) != len(y_field_list):
style_list = [style_list[0] for y_field in y_field_list]
if len(alpha_list) != len(y_field_list):
alpha_list = [alpha_list[0] for y_field in y_field_list]
for y_field, style, alpha in zip(y_field_list, style_list, alpha_list):
y = df[y_field]
pl.plot(x, y, style, label=y_field, alpha=alpha)
def refine_plot(args):
set_limits(args)
set_scale(args)
set_labels_title(args)
set_grid(args)
set_legend(args)
def draw_xy_plot(args, df):
ensure_xy_args(args)
ensure_xy_omission_state(args, df)
autofill_plot_fields_and_labels(args, df)
draw_traces(args, df)
refine_plot(args)
show(args)
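# A minimal usage sketch: the functions above expect an argparse-style
# namespace, and the attribute names below mirror the options referenced in
# this file; the values themselves are arbitrary illustrative defaults.
if __name__ == '__main__':
    import argparse
    import pandas as pd

    df = pd.DataFrame({'t': range(10), 'value': [v * v for v in range(10)]})
    args = argparse.Namespace(
        x=['t'], y=['value'], style=['-'], alpha=[1.0],
        xlim=None, ylim=None, xlog=False, ylog=False,
        title=None, xlabel=None, ylabel=None,
        legend=None, no_grid=False, savefig=None)
    draw_xy_plot(args, df)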
| bsd-2-clause |
comocheng/RMG-Py | rmgpy/rmg/main.py | 1 | 62586 | #!/usr/bin/python
# -*- coding: utf-8 -*-
################################################################################
#
# RMG - Reaction Mechanism Generator
#
# Copyright (c) 2002-2010 Prof. William H. Green ([email protected]) and the
# RMG Team ([email protected])
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the 'Software'),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
#
################################################################################
"""
This module contains the main execution functionality for Reaction Mechanism
Generator (RMG).
"""
import os.path
import sys
import logging
import time
import shutil
import numpy
import csv
try:
import xlwt
except ImportError:
logging.warning('Optional package dependency "xlwt" not loaded; Some output features will not work.')
from rmgpy.molecule import Molecule
from rmgpy.solver.base import TerminationTime, TerminationConversion
from rmgpy.solver.simple import SimpleReactor
from rmgpy.data.rmg import RMGDatabase
from rmgpy.data.base import ForbiddenStructureException, DatabaseError
from rmgpy.data.kinetics.library import KineticsLibrary, LibraryReaction
from rmgpy.data.kinetics.family import KineticsFamily, TemplateReaction
from rmgpy.kinetics.diffusionLimited import diffusionLimiter
from model import Species, CoreEdgeReactionModel
from pdep import PDepNetwork
################################################################################
solvent = None
class RMG:
"""
A representation of a Reaction Mechanism Generator (RMG) job. The
attributes are:
=============================== ================================================
Attribute Description
=============================== ================================================
`inputFile` The path to the input file
`logFile` The path to the log file
------------------------------- ------------------------------------------------
`databaseDirectory` The directory containing the RMG database
`thermoLibraries` The thermodynamics libraries to load
`reactionLibraries` The kinetics libraries to load
`statmechLibraries` The statistical mechanics libraries to load
`seedMechanisms` The seed mechanisms included in the model
`kineticsFamilies` The kinetics families to use for reaction generation
`kineticsDepositories` The kinetics depositories to use for looking up kinetics in each family
`kineticsEstimator` The method to use to estimate kinetics: 'group additivity' or 'rate rules'
`solvent` If solvation estimates are required, the name of the solvent.
------------------------------- ------------------------------------------------
`reactionModel` The core-edge reaction model generated by this job
`reactionSystems` A list of the reaction systems used in this job
`database` The RMG database used in this job
------------------------------- ------------------------------------------------
`absoluteTolerance` The absolute tolerance used in the ODE/DAE solver
`relativeTolerance` The relative tolerance used in the ODE/DAE solver
`sensitivityAbsoluteTolerance` The absolute tolerance used in the ODE/DAE solver for the sensitivities
`sensitivityRelativeTolerance` The relative tolerance used in the ODE/DAE solver for the sensitivities
`fluxToleranceKeepInEdge` The relative species flux below which species are discarded from the edge
`fluxToleranceMoveToCore` The relative species flux above which species are moved from the edge to the core
`fluxToleranceInterrupt` The relative species flux above which the simulation will halt
`maximumEdgeSpecies` The maximum number of edge species allowed at any time
`termination` A list of termination targets (i.e :class:`TerminationTime` and :class:`TerminationConversion` objects)
`speciesConstraints` Dictates the maximum number of atoms, carbons, electrons, etc. generated by RMG
------------------------------- ------------------------------------------------
`outputDirectory` The directory used to save output files
`scratchDirectory` The directory used to save temporary files
`verbosity` The level of logging verbosity for console output
`loadRestart` ``True`` if restarting a previous job, ``False`` otherwise
`saveRestartPeriod` The time period to periodically save a restart file (:class:`Quantity`), or ``None`` for never.
`units` The unit system to use to save output files (currently must be 'si')
`drawMolecules` ``True`` to draw pictures of the species in the core, ``False`` otherwise
`generatePlots` ``True`` to generate plots of the job execution statistics after each iteration, ``False`` otherwise
`verboseComments` ``True`` to keep the verbose comments for database estimates, ``False`` otherwise
`saveEdgeSpecies` ``True`` to save chemkin and HTML files of the edge species, ``False`` otherwise
`pressureDependence` Whether to process unimolecular (pressure-dependent) reaction networks
`quantumMechanics` Whether to apply quantum mechanical calculations instead of group additivity to certain molecular types.
`wallTime` The maximum amount of CPU time in seconds to expend on this job; used to stop gracefully so we can still get profiling information
------------------------------- ------------------------------------------------
`initializationTime` The time at which the job was initiated, in seconds since the epoch (i.e. from time.time())
`done` Whether the job has completed (there is nothing new to add)
=============================== ================================================
"""
def __init__(self, inputFile=None, logFile=None, outputDirectory=None, scratchDirectory=None):
self.inputFile = inputFile
self.logFile = logFile
self.outputDirectory = outputDirectory
self.scratchDirectory = scratchDirectory
self.clear()
def clear(self):
"""
Clear all loaded information about the job (except the file paths).
"""
self.databaseDirectory = None
self.thermoLibraries = None
self.transportLibraries = None
self.reactionLibraries = None
self.statmechLibraries = None
self.seedMechanisms = None
self.kineticsFamilies = None
self.kineticsDepositories = None
self.kineticsEstimator = 'group additivity'
self.solvent = None
self.diffusionLimiter = None
self.reactionModel = None
self.reactionSystems = None
self.database = None
self.fluxToleranceKeepInEdge = 0.0
self.fluxToleranceMoveToCore = 1.0
self.fluxToleranceInterrupt = 1.0
self.absoluteTolerance = 1.0e-8
self.relativeTolerance = 1.0e-4
self.sensitivityAbsoluteTolerance = 1.0e-6
self.sensitivityRelativeTolerance = 1.0e-4
self.maximumEdgeSpecies = 1000000
self.termination = []
self.done = False
self.verbosity = logging.INFO
self.loadRestart = None
self.saveRestartPeriod = None
self.units = 'si'
self.drawMolecules = None
self.generatePlots = None
self.saveSimulationProfiles = None
self.verboseComments = None
self.saveEdgeSpecies = None
self.pressureDependence = None
self.quantumMechanics = None
self.speciesConstraints = {}
self.wallTime = 0
self.initializationTime = 0
def loadInput(self, path=None):
"""
        Load an RMG job from the input file located at `path`, or from the
        `inputFile` attribute if no path is given as a parameter.
"""
from input import readInputFile
if path is None: path = self.inputFile
readInputFile(path, self)
self.speciesConstraints['explicitlyAllowedMolecules'] = []
self.reactionModel.kineticsEstimator = self.kineticsEstimator
# If the output directory is not yet set, then set it to the same
# directory as the input file by default
if not self.outputDirectory:
self.outputDirectory = os.path.dirname(path)
if self.pressureDependence:
self.pressureDependence.outputFile = self.outputDirectory
self.reactionModel.pressureDependence = self.pressureDependence
self.reactionModel.speciesConstraints = self.speciesConstraints
self.reactionModel.verboseComments = self.verboseComments
if self.quantumMechanics:
self.quantumMechanics.setDefaultOutputDirectory(self.outputDirectory)
self.reactionModel.quantumMechanics = self.quantumMechanics
def loadThermoInput(self, path=None):
"""
        Load a thermo estimation job from the thermo input file located at
        `path`, or from the `inputFile` attribute if no path is given as a parameter.
"""
from input import readThermoInputFile
if path is None: path = self.inputFile
if not self.outputDirectory:
self.outputDirectory = os.path.dirname(path)
readThermoInputFile(path, self)
if self.quantumMechanics:
self.quantumMechanics.setDefaultOutputDirectory(self.outputDirectory)
self.reactionModel.quantumMechanics = self.quantumMechanics
def checkInput(self):
"""
Check for a few common mistakes in the input file.
"""
if self.pressureDependence:
for index, reactionSystem in enumerate(self.reactionSystems):
assert (reactionSystem.T.value_si < self.pressureDependence.Tmax.value_si), "Reaction system T is above pressureDependence range."
assert (reactionSystem.T.value_si > self.pressureDependence.Tmin.value_si), "Reaction system T is below pressureDependence range."
assert (reactionSystem.P.value_si < self.pressureDependence.Pmax.value_si), "Reaction system P is above pressureDependence range."
assert (reactionSystem.P.value_si > self.pressureDependence.Pmin.value_si), "Reaction system P is below pressureDependence range."
assert any([not s.reactive for s in reactionSystem.initialMoleFractions.keys()]), \
"Pressure Dependence calculations require at least one inert (nonreacting) species for the bath gas."
def checkLibraries(self):
"""
Check unwanted use of libraries:
Liquid phase libraries in Gas phase simulation.
Loading a Liquid phase library obtained in another solvent than the one defined in the input file.
Other checks can be added here.
"""
#Liquid phase simulation checks
if self.solvent:
#check thermo librairies
for libIter in self.database.thermo.libraries.iterkeys():
if self.database.thermo.libraries[libIter].solvent:
if not self.solvent == self.database.thermo.libraries[libIter].solvent:
raise DatabaseError('''Thermo library "{2}" was obtained in "{1}" and cannot be used with this liquid phase simulation in "{0}"
'''.format(self.solvent, self.database.thermo.libraries[libIter].solvent, self.database.thermo.libraries[libIter].name))
#Check kinetic librairies
for libIter in self.database.kinetics.libraries.iterkeys():
if self.database.kinetics.libraries[libIter].solvent:
if not self.solvent == self.database.kinetics.libraries[libIter].solvent:
raise DatabaseError('''Kinetics library "{2}" was obtained in "{1}" and cannot be used with this liquid phase simulation in "{0}"
'''.format(self.solvent, self.database.kinetics.libraries[libIter].solvent, self.database.kinetics.libraries[libIter].name))
#Gas phase simulation checks
else:
#check thermo librairies
for libIter in self.database.thermo.libraries.iterkeys():
if self.database.thermo.libraries[libIter].solvent:
raise DatabaseError('''Thermo library "{1}" was obtained in "{0}" solvent and cannot be used in gas phase simulation
'''.format(self.database.thermo.libraries[libIter].solvent, self.database.thermo.libraries[libIter].name))
#Check kinetic librairies
for libIter in self.database.kinetics.libraries.iterkeys():
if self.database.kinetics.libraries[libIter].solvent:
raise DatabaseError('''Kinetics library "{1}" was obtained in "{0}" solvent and cannot be used in gas phase simulation
'''.format(self.database.kinetics.libraries[libIter].solvent, self.database.kinetics.libraries[libIter].name))
def saveInput(self, path=None):
"""
        Save an RMG job to the input file located at `path`, or to the location
        given by the `outputFile` attribute if no path is given as a parameter.
"""
from input import saveInputFile
if path is None: path = self.outputFile
saveInputFile(path, self)
def loadDatabase(self):
self.database = RMGDatabase()
self.database.load(
path = self.databaseDirectory,
thermoLibraries = self.thermoLibraries,
transportLibraries = self.transportLibraries,
reactionLibraries = [library for library, option in self.reactionLibraries],
seedMechanisms = self.seedMechanisms,
kineticsFamilies = self.kineticsFamilies,
kineticsDepositories = self.kineticsDepositories,
#frequenciesLibraries = self.statmechLibraries,
depository = False, # Don't bother loading the depository information, as we don't use it
)
#check libraries
self.checkLibraries()
#set global variable solvent
if self.solvent:
global solvent
solvent=self.solvent
if self.kineticsEstimator == 'rate rules':
if '!training' not in self.kineticsDepositories:
logging.info('Adding rate rules from training set in kinetics families...')
for family in self.database.kinetics.families.values():
family.addKineticsRulesFromTrainingSet(thermoDatabase=self.database.thermo)
else:
logging.info('Training set explicitly not added to rate rules in kinetics families...')
logging.info('Filling in rate rules in kinetics families by averaging...')
for family in self.database.kinetics.families.values():
family.fillKineticsRulesByAveragingUp()
def initialize(self, args):
"""
Initialize an RMG job using the command-line arguments `args` as returned
by the :mod:`argparse` package.
"""
# Save initialization time
self.initializationTime = time.time()
# Log start timestamp
logging.info('RMG execution initiated at ' + time.asctime() + '\n')
# Print out RMG header
self.logHeader()
# Set directories
self.outputDirectory = args.output_directory
self.scratchDirectory = args.scratch_directory
if args.restart:
if not os.path.exists(os.path.join(self.outputDirectory,'restart.pkl')):
logging.error("Could not find restart file (restart.pkl). Please run without --restart option.")
raise Exception("No restart file")
# Read input file
self.loadInput(args.file[0])
# Check input file
self.checkInput()
# See if memory profiling package is available
try:
import psutil
except ImportError:
logging.info('Optional package dependency "psutil" not found; memory profiling information will not be saved.')
# Make output subdirectories
self.makeOutputSubdirectory('plot')
self.makeOutputSubdirectory('species')
self.makeOutputSubdirectory('pdep')
self.makeOutputSubdirectory('chemkin')
self.makeOutputSubdirectory('solver')
if self.saveEdgeSpecies:
self.makeOutputSubdirectory('species_edge')
# Do any necessary quantum mechanics startup
if self.quantumMechanics:
self.quantumMechanics.initialize()
# Load databases
self.loadDatabase()
# Do all liquid-phase startup things:
if self.solvent:
Species.solventData = self.database.solvation.getSolventData(self.solvent)
Species.solventName = self.solvent
diffusionLimiter.enable(Species.solventData, self.database.solvation)
logging.info("Setting solvent data for {0}".format(self.solvent))
# Set wall time
if args.walltime == '0':
self.wallTime = 0
else:
data = args.walltime[0].split(':')
if len(data) == 1:
self.wallTime = int(data[-1])
elif len(data) == 2:
self.wallTime = int(data[-1]) + 60 * int(data[-2])
elif len(data) == 3:
self.wallTime = int(data[-1]) + 60 * int(data[-2]) + 3600 * int(data[-3])
elif len(data) == 4:
self.wallTime = int(data[-1]) + 60 * int(data[-2]) + 3600 * int(data[-3]) + 86400 * int(data[-4])
else:
raise ValueError('Invalid format for wall time; should be HH:MM:SS.')
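        # Worked example: a --walltime value of '1:30:00' splits into ['1', '30', '00']
        # and gives 0 + 60*30 + 3600*1 = 5400 seconds; the default value '0' disables
        # the wall time limit entirely.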
# Delete previous HTML file
from rmgpy.rmg.output import saveOutputHTML
saveOutputHTML(os.path.join(self.outputDirectory, 'output.html'), self.reactionModel, 'core')
# Initialize reaction model
if args.restart:
self.loadRestartFile(os.path.join(self.outputDirectory,'restart.pkl'))
else:
# Seed mechanisms: add species and reactions from seed mechanism
# DON'T generate any more reactions for the seed species at this time
for seedMechanism in self.seedMechanisms:
self.reactionModel.addSeedMechanismToCore(seedMechanism, react=False)
# Reaction libraries: add species and reactions from reaction library to the edge so
# that RMG can find them if their rates are large enough
for library, option in self.reactionLibraries:
self.reactionModel.addReactionLibraryToEdge(library)
# Also always add in a few bath gases (since RMG-Java does)
for label, smiles in [('Ar','[Ar]'), ('He','[He]'), ('Ne','[Ne]'), ('N2','N#N')]:
molecule = Molecule().fromSMILES(smiles)
spec, isNew = self.reactionModel.makeNewSpecies(molecule, label=label, reactive=False)
if isNew:
self.initialSpecies.append(spec)
# Perform species constraints and forbidden species checks on input species
for spec in self.initialSpecies:
if self.database.forbiddenStructures.isMoleculeForbidden(spec.molecule[0]):
if 'allowed' in self.speciesConstraints and 'input species' in self.speciesConstraints['allowed']:
logging.warning('Input species {0} is globally forbidden. It will behave as an inert unless found in a seed mechanism or reaction library.'.format(spec.label))
else:
raise ForbiddenStructureException("Input species {0} is globally forbidden. You may explicitly allow it, but it will remain inert unless found in a seed mechanism or reaction library.".format(spec.label))
if self.reactionModel.failsSpeciesConstraints(spec):
if 'allowed' in self.speciesConstraints and 'input species' in self.speciesConstraints['allowed']:
self.speciesConstraints['explicitlyAllowedMolecules'].append(spec.molecule[0])
pass
else:
raise ForbiddenStructureException("Species constraints forbids input species {0}. Please reformulate constraints, remove the species, or explicitly allow it.".format(spec.label))
# Add nonreactive species (e.g. bath gases) to core first
# This is necessary so that the PDep algorithm can identify the bath gas
self.reactionModel.enlarge([spec for spec in self.initialSpecies if not spec.reactive])
# Then add remaining reactive species
for spec in self.initialSpecies:
spec.generateThermoData(self.database, quantumMechanics=self.quantumMechanics)
spec.generateTransportData(self.database)
self.reactionModel.enlarge([spec for spec in self.initialSpecies if spec.reactive])
# Save a restart file if desired
if self.saveRestartPeriod:
self.saveRestartFile(os.path.join(self.outputDirectory,'restart.pkl'), self.reactionModel)
def execute(self, args):
"""
Execute an RMG job using the command-line arguments `args` as returned
by the :mod:`argparse` package.
"""
self.initialize(args)
# RMG execution statistics
coreSpeciesCount = []
coreReactionCount = []
edgeSpeciesCount = []
edgeReactionCount = []
execTime = []
restartSize = []
memoryUse = []
self.done = False
self.saveEverything()
# Main RMG loop
while not self.done:
self.done = True
objectsToEnlarge = []
allTerminated = True
for index, reactionSystem in enumerate(self.reactionSystems):
if self.saveSimulationProfiles:
csvfile = file(os.path.join(self.outputDirectory, 'solver', 'simulation_{0}_{1:d}.csv'.format(index+1, len(self.reactionModel.core.species))),'w')
worksheet = csv.writer(csvfile)
else:
worksheet = None
# Conduct simulation
logging.info('Conducting simulation of reaction system %s...' % (index+1))
terminated, obj = reactionSystem.simulate(
coreSpecies = self.reactionModel.core.species,
coreReactions = self.reactionModel.core.reactions,
edgeSpecies = self.reactionModel.edge.species,
edgeReactions = self.reactionModel.edge.reactions,
toleranceKeepInEdge = self.fluxToleranceKeepInEdge,
toleranceMoveToCore = self.fluxToleranceMoveToCore,
toleranceInterruptSimulation = self.fluxToleranceInterrupt,
pdepNetworks = self.reactionModel.networkList,
worksheet = worksheet,
absoluteTolerance = self.absoluteTolerance,
relativeTolerance = self.relativeTolerance,
)
allTerminated = allTerminated and terminated
logging.info('')
# If simulation is invalid, note which species should be added to
# the core
if obj:
if isinstance(obj, PDepNetwork):
# Determine which species in that network has the highest leak rate
# We do this here because we need a temperature and pressure
# Store the maximum leak species along with the associated network
obj = (obj, obj.getMaximumLeakSpecies(reactionSystem.T.value_si, reactionSystem.P.value_si))
objectsToEnlarge.append(obj)
self.done = False
if not self.done: # There is something that needs exploring/enlarging
# If we reached our termination conditions, then try to prune
# species from the edge
if allTerminated:
self.reactionModel.prune(self.reactionSystems, self.fluxToleranceKeepInEdge, self.maximumEdgeSpecies)
# Enlarge objects identified by the simulation for enlarging
# These should be Species or Network objects
logging.info('')
objectsToEnlarge = list(set(objectsToEnlarge))
self.reactionModel.enlarge(objectsToEnlarge)
self.saveEverything()
# Update RMG execution statistics
logging.info('Updating RMG execution statistics...')
coreSpec, coreReac, edgeSpec, edgeReac = self.reactionModel.getModelSize()
coreSpeciesCount.append(coreSpec)
coreReactionCount.append(coreReac)
edgeSpeciesCount.append(edgeSpec)
edgeReactionCount.append(edgeReac)
execTime.append(time.time() - self.initializationTime)
elapsed = execTime[-1]
seconds = elapsed % 60
minutes = (elapsed - seconds) % 3600 / 60
hours = (elapsed - seconds - minutes * 60) % (3600 * 24) / 3600
days = (elapsed - seconds - minutes * 60 - hours * 3600) / (3600 * 24)
logging.info(' Execution time (DD:HH:MM:SS): '
'{0:02}:{1:02}:{2:02}:{3:02}'.format(int(days), int(hours), int(minutes), int(seconds)))
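            # Worked example: elapsed = 93784 s decomposes as 4 + 60*3 + 3600*2 + 86400*1,
            # so the line above prints '01:02:03:04' (1 day, 2 h, 3 min, 4 s).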
try:
import psutil
process = psutil.Process(os.getpid())
rss, vms = process.get_memory_info()
memoryUse.append(rss / 1.0e6)
logging.info(' Memory used: %.2f MB' % (memoryUse[-1]))
except ImportError:
memoryUse.append(0.0)
if os.path.exists(os.path.join(self.outputDirectory,'restart.pkl.gz')):
restartSize.append(os.path.getsize(os.path.join(self.outputDirectory,'restart.pkl.gz')) / 1.0e6)
logging.info(' Restart file size: %.2f MB' % (restartSize[-1]))
else:
restartSize.append(0.0)
self.saveExecutionStatistics(execTime, coreSpeciesCount, coreReactionCount, edgeSpeciesCount, edgeReactionCount, memoryUse, restartSize)
if self.generatePlots:
self.generateExecutionPlots(execTime, coreSpeciesCount, coreReactionCount, edgeSpeciesCount, edgeReactionCount, memoryUse, restartSize)
logging.info('')
# Consider stopping gracefully if the next iteration might take us
# past the wall time
if self.wallTime > 0 and len(execTime) > 1:
t = execTime[-1]
dt = execTime[-1] - execTime[-2]
if t + 3 * dt > self.wallTime:
logging.info('MODEL GENERATION TERMINATED')
logging.info('')
logging.info('There is not enough time to complete the next iteration before the wall time is reached.')
logging.info('The output model may be incomplete.')
logging.info('')
coreSpec, coreReac, edgeSpec, edgeReac = self.reactionModel.getModelSize()
logging.info('The current model core has %s species and %s reactions' % (coreSpec, coreReac))
logging.info('The current model edge has %s species and %s reactions' % (edgeSpec, edgeReac))
return
# Run sensitivity analysis post-model generation if sensitivity analysis is on
for index, reactionSystem in enumerate(self.reactionSystems):
if reactionSystem.sensitiveSpecies:
logging.info('Conducting sensitivity analysis of reaction system %s...' % (index+1))
sensWorksheet = []
for spec in reactionSystem.sensitiveSpecies:
csvfile = file(os.path.join(self.outputDirectory, 'solver', 'sensitivity_{0}_SPC_{1}.csv'.format(index+1, spec.index)),'w')
sensWorksheet.append(csv.writer(csvfile))
terminated, obj = reactionSystem.simulate(
coreSpecies = self.reactionModel.core.species,
coreReactions = self.reactionModel.core.reactions,
edgeSpecies = self.reactionModel.edge.species,
edgeReactions = self.reactionModel.edge.reactions,
toleranceKeepInEdge = self.fluxToleranceKeepInEdge,
toleranceMoveToCore = self.fluxToleranceMoveToCore,
toleranceInterruptSimulation = self.fluxToleranceInterrupt,
pdepNetworks = self.reactionModel.networkList,
worksheet = None,
absoluteTolerance = self.absoluteTolerance,
relativeTolerance = self.relativeTolerance,
sensitivity = True,
sensitivityAbsoluteTolerance = self.sensitivityAbsoluteTolerance,
sensitivityRelativeTolerance = self.sensitivityRelativeTolerance,
sensWorksheet = sensWorksheet,
)
# Update RMG execution statistics for each time a reactionSystem has sensitivity analysis performed.
# But just provide time and memory used.
logging.info('Updating RMG execution statistics...')
execTime.append(time.time() - self.initializationTime)
elapsed = execTime[-1]
seconds = elapsed % 60
minutes = (elapsed - seconds) % 3600 / 60
hours = (elapsed - seconds - minutes * 60) % (3600 * 24) / 3600
days = (elapsed - seconds - minutes * 60 - hours * 3600) / (3600 * 24)
logging.info(' Execution time (DD:HH:MM:SS): '
'{0:02}:{1:02}:{2:02}:{3:02}'.format(int(days), int(hours), int(minutes), int(seconds)))
try:
import psutil
process = psutil.Process(os.getpid())
rss, vms = process.get_memory_info()
memoryUse.append(rss / 1.0e6)
logging.info(' Memory used: %.2f MB' % (memoryUse[-1]))
except ImportError:
memoryUse.append(0.0)
# Write output file
logging.info('')
logging.info('MODEL GENERATION COMPLETED')
logging.info('')
coreSpec, coreReac, edgeSpec, edgeReac = self.reactionModel.getModelSize()
logging.info('The final model core has %s species and %s reactions' % (coreSpec, coreReac))
logging.info('The final model edge has %s species and %s reactions' % (edgeSpec, edgeReac))
self.finish()
def saveEverything(self):
"""
Saves the output HTML, the Chemkin file, and the Restart file (if appropriate).
The restart file is only saved if self.saveRestartPeriod or self.done.
"""
# If the user specifies it, add unused reaction library reactions to
        # an additional output species and reaction list which is written to the output HTML
# file as well as the chemkin file
self.reactionModel.outputSpeciesList = []
self.reactionModel.outputReactionList = []
for library, option in self.reactionLibraries:
if option:
self.reactionModel.addReactionLibraryToOutput(library)
# Save the current state of the model to HTML files
self.saveOutputHTML()
        # Save a Chemkin file containing the current model
self.saveChemkinFiles()
# Save the restart file if desired
if self.saveRestartPeriod or self.done:
self.saveRestartFile( os.path.join(self.outputDirectory,'restart.pkl'),
self.reactionModel,
delay=0 if self.done else self.saveRestartPeriod.value_si
)
# Save the QM thermo to a library if QM was turned on
if self.quantumMechanics:
logging.info('Saving the QM generated thermo to qmThermoLibrary.py ...')
self.quantumMechanics.database.save(os.path.join(self.outputDirectory,'qmThermoLibrary.py'))
def finish(self):
"""
Complete the model generation.
"""
# Log end timestamp
logging.info('')
logging.info('RMG execution terminated at ' + time.asctime())
def getGitCommit(self):
import subprocess
from rmgpy import getPath
try:
return subprocess.check_output(['git', 'log',
'--format=%H%n%cd', '-1'],
cwd=getPath()).splitlines()
except:
return '', ''
def logHeader(self, level=logging.INFO):
"""
Output a header containing identifying information about RMG to the log.
"""
logging.log(level, '###################################################')
logging.log(level, '# RMG-Py - Reaction Mechanism Generator in Python #')
logging.log(level, '# Version: Early 2013 #')
logging.log(level, '# Authors: RMG Developers ([email protected]) #')
logging.log(level, '# P.I.s: William H. Green ([email protected]) #')
logging.log(level, '# Richard H. West ([email protected]) #')
logging.log(level, '# Website: http://greengroup.github.com/RMG-Py/ #')
logging.log(level, '###################################################\n')
head, date = self.getGitCommit()
if head != '' and date != '':
logging.log(level, 'The current git HEAD is:')
logging.log(level, '\t%s' % head)
logging.log(level, '\t%s' % date)
logging.log(level, '')
def makeOutputSubdirectory(self, folder):
"""
Create a subdirectory `folder` in the output directory. If the folder
already exists (e.g. from a previous job) its contents are deleted.
"""
dir = os.path.join(self.outputDirectory, folder)
if os.path.exists(dir):
# The directory already exists, so delete it (and all its content!)
shutil.rmtree(dir)
os.mkdir(dir)
def loadRestartFile(self, path):
"""
Load a restart file at `path` on disk.
"""
import cPickle
# Unpickle the reaction model from the specified restart file
logging.info('Loading previous restart file...')
f = open(path, 'rb')
self.reactionModel = cPickle.load(f)
f.close()
# A few things still point to the species in the input file, so update
# those to point to the equivalent species loaded from the restart file
# The termination conversions still point to the old species
from rmgpy.solver.base import TerminationConversion
for reactionSystem in self.reactionSystems:
for term in reactionSystem.termination:
if isinstance(term, TerminationConversion):
term.species, isNew = self.reactionModel.makeNewSpecies(term.species.molecule[0], term.species.label, term.species.reactive)
# The initial mole fractions in the reaction systems still point to the old species
for reactionSystem in self.reactionSystems:
initialMoleFractions = {}
for spec0, moleFrac in reactionSystem.initialMoleFractions.iteritems():
spec, isNew = self.reactionModel.makeNewSpecies(spec0.molecule[0], spec0.label, spec0.reactive)
initialMoleFractions[spec] = moleFrac
reactionSystem.initialMoleFractions = initialMoleFractions
# The reactions and reactionDict still point to the old reaction families
reactionDict = {}
oldFamilies = self.reactionModel.reactionDict.keys()
for family0 in self.reactionModel.reactionDict:
# Find the equivalent library or family in the newly-loaded kinetics database
family = None
if isinstance(family0, KineticsLibrary):
for label, database in self.database.kinetics.libraries.iteritems():
if database.label == family0.label:
family = database
break
elif isinstance(family0, KineticsFamily):
for label, database in self.database.kinetics.families.iteritems():
if database.label == family0.label:
family = database
break
else:
import pdb; pdb.set_trace()
if family is None:
raise Exception("Unable to find matching reaction family for %s" % family0.label)
# Update each affected reaction to point to that new family
# Also use that new family in a duplicate reactionDict
reactionDict[family] = {}
for reactant1 in self.reactionModel.reactionDict[family0]:
reactionDict[family][reactant1] = {}
for reactant2 in self.reactionModel.reactionDict[family0][reactant1]:
reactionDict[family][reactant1][reactant2] = []
if isinstance(family0, KineticsLibrary):
for rxn in self.reactionModel.reactionDict[family0][reactant1][reactant2]:
assert isinstance(rxn, LibraryReaction)
rxn.library = family
reactionDict[family][reactant1][reactant2].append(rxn)
elif isinstance(family0, KineticsFamily):
for rxn in self.reactionModel.reactionDict[family0][reactant1][reactant2]:
assert isinstance(rxn, TemplateReaction)
rxn.family = family
reactionDict[family][reactant1][reactant2].append(rxn)
self.reactionModel.reactionDict = reactionDict
def saveOutputHTML(self):
"""
Save the current reaction model to a pretty HTML file.
"""
logging.info('Saving current model core to HTML file...')
from rmgpy.rmg.output import saveOutputHTML
saveOutputHTML(os.path.join(self.outputDirectory, 'output.html'), self.reactionModel, 'core')
        if self.saveEdgeSpecies == True:
logging.info('Saving current model edge to HTML file...')
from rmgpy.rmg.output import saveOutputHTML
saveOutputHTML(os.path.join(self.outputDirectory, 'output_edge.html'), self.reactionModel, 'edge')
def saveChemkinFiles(self):
"""
Save the current reaction model to a set of Chemkin files.
"""
logging.info('Saving current model core to Chemkin file...')
this_chemkin_path = os.path.join(self.outputDirectory, 'chemkin', 'chem{0:04d}.inp'.format(len(self.reactionModel.core.species)))
latest_chemkin_path = os.path.join(self.outputDirectory, 'chemkin','chem.inp')
latest_chemkin_verbose_path = os.path.join(self.outputDirectory, 'chemkin', 'chem_annotated.inp')
latest_dictionary_path = os.path.join(self.outputDirectory, 'chemkin','species_dictionary.txt')
latest_transport_path = os.path.join(self.outputDirectory, 'chemkin', 'tran.dat')
self.reactionModel.saveChemkinFile(this_chemkin_path, latest_chemkin_verbose_path, latest_dictionary_path, latest_transport_path, False)
if os.path.exists(latest_chemkin_path):
os.unlink(latest_chemkin_path)
shutil.copy2(this_chemkin_path,latest_chemkin_path)
        if self.saveEdgeSpecies == True:
logging.info('Saving current model core and edge to Chemkin file...')
this_chemkin_path = os.path.join(self.outputDirectory, 'chemkin', 'chem_edge%04i.inp' % len(self.reactionModel.core.species)) # len() needs to be core to have unambiguous index
latest_chemkin_path = os.path.join(self.outputDirectory, 'chemkin','chem_edge.inp')
latest_chemkin_verbose_path = os.path.join(self.outputDirectory, 'chemkin', 'chem_edge_annotated.inp')
latest_dictionary_path = os.path.join(self.outputDirectory, 'chemkin','species_edge_dictionary.txt')
latest_transport_path = None
self.reactionModel.saveChemkinFile(this_chemkin_path, latest_chemkin_verbose_path, latest_dictionary_path, latest_transport_path, self.saveEdgeSpecies)
if os.path.exists(latest_chemkin_path):
os.unlink(latest_chemkin_path)
shutil.copy2(this_chemkin_path,latest_chemkin_path)
def saveRestartFile(self, path, reactionModel, delay=0):
"""
Save a restart file to `path` on disk containing the contents of the
provided `reactionModel`. The `delay` parameter is a time in seconds; if
the restart file is not at least that old, the save is aborted. (Use the
default value of 0 to force the restart file to be saved.)
"""
import cPickle
# Saving of a restart file is very slow (likely due to all the Quantity objects)
# Therefore, to save it less frequently, don't bother if the restart file is less than an hour old
if os.path.exists(path) and time.time() - os.path.getmtime(path) < delay:
logging.info('Not saving restart file in this iteration.')
return
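        # Example of the delay logic: when called from saveEverything() with
        # saveRestartPeriod set to one hour, delay is roughly 3600 s, so an existing
        # restart.pkl younger than an hour is kept as-is; a delay of 0 (used when the
        # job is done) always forces a fresh save.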
# Pickle the reaction model to the specified file
# We also compress the restart file to save space (and lower the disk read/write time)
logging.info('Saving restart file...')
f = open(path, 'wb')
cPickle.dump(reactionModel, f, cPickle.HIGHEST_PROTOCOL)
f.close()
def saveExecutionStatistics(self, execTime, coreSpeciesCount, coreReactionCount,
edgeSpeciesCount, edgeReactionCount, memoryUse, restartSize):
"""
Save the statistics of the RMG job to an Excel spreadsheet for easy viewing
after the run is complete. The statistics are saved to the file
`statistics.xls` in the output directory. The ``xlwt`` package is used to
create the spreadsheet file; if this package is not installed, no file is
saved.
"""
# Attempt to import the xlwt package; return if not installed
try:
xlwt
except NameError:
logging.warning('Package xlwt not loaded. Unable to save execution statistics.')
return
        # Create workbook and sheet where the statistics will be placed
workbook = xlwt.Workbook()
sheet = workbook.add_sheet('Statistics')
# First column is execution time
sheet.write(0,0,'Execution time (s)')
for i, etime in enumerate(execTime):
sheet.write(i+1,0,etime)
# Second column is number of core species
sheet.write(0,1,'Core species')
for i, count in enumerate(coreSpeciesCount):
sheet.write(i+1,1,count)
# Third column is number of core reactions
sheet.write(0,2,'Core reactions')
for i, count in enumerate(coreReactionCount):
sheet.write(i+1,2,count)
# Fourth column is number of edge species
sheet.write(0,3,'Edge species')
for i, count in enumerate(edgeSpeciesCount):
sheet.write(i+1,3,count)
# Fifth column is number of edge reactions
sheet.write(0,4,'Edge reactions')
for i, count in enumerate(edgeReactionCount):
sheet.write(i+1,4,count)
# Sixth column is memory used
sheet.write(0,5,'Memory used (MB)')
for i, memory in enumerate(memoryUse):
sheet.write(i+1,5,memory)
# Seventh column is restart file size
sheet.write(0,6,'Restart file size (MB)')
for i, memory in enumerate(restartSize):
sheet.write(i+1,6,memory)
# Save workbook to file
fstr = os.path.join(self.outputDirectory, 'statistics.xls')
workbook.save(fstr)
def generateExecutionPlots(self, execTime, coreSpeciesCount, coreReactionCount,
edgeSpeciesCount, edgeReactionCount, memoryUse, restartSize):
"""
Generate a number of plots describing the statistics of the RMG job,
including the reaction model core and edge size and memory use versus
execution time. These will be placed in the output directory in the plot/
folder.
"""
logging.info('Generating plots of execution statistics...')
import matplotlib.pyplot as plt
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.semilogx(execTime, coreSpeciesCount, 'o-b')
ax1.set_xlabel('Execution time (s)')
ax1.set_ylabel('Number of core species')
ax2 = ax1.twinx()
ax2.semilogx(execTime, coreReactionCount, 'o-r')
ax2.set_ylabel('Number of core reactions')
plt.savefig(os.path.join(self.outputDirectory, 'plot/coreSize.svg'))
plt.clf()
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.loglog(execTime, edgeSpeciesCount, 'o-b')
ax1.set_xlabel('Execution time (s)')
ax1.set_ylabel('Number of edge species')
ax2 = ax1.twinx()
ax2.loglog(execTime, edgeReactionCount, 'o-r')
ax2.set_ylabel('Number of edge reactions')
plt.savefig(os.path.join(self.outputDirectory, 'plot/edgeSize.svg'))
plt.clf()
fig = plt.figure()
ax1 = fig.add_subplot(111)
ax1.semilogx(execTime, memoryUse, 'o-k')
ax1.semilogx(execTime, restartSize, 'o-g')
ax1.set_xlabel('Execution time (s)')
ax1.set_ylabel('Memory (MB)')
ax1.legend(['RAM', 'Restart file'], loc=2)
plt.savefig(os.path.join(self.outputDirectory, 'plot/memoryUse.svg'))
plt.clf()
def loadRMGJavaInput(self, path):
"""
        Load an RMG-Java job from the input file located at `path`, or from the
        `inputFile` attribute if no path is given as a parameter.
"""
# NOTE: This function is currently incomplete!
# It only loads a subset of the available information.
self.reactionModel = CoreEdgeReactionModel()
self.initialSpecies = []
self.reactionSystems = []
Tlist = []; Plist = []; concentrationList = []; speciesDict = {}
termination = []; atol=1e-16; rtol=1e-8
with open(path, 'r') as f:
line = self.readMeaningfulLineJava(f)
while line != '':
if line.startswith('TemperatureModel:'):
tokens = line.split()
units = tokens[2][1:-1]
assert units in ['C', 'F', 'K']
if units == 'C':
Tlist = [float(T)+273.15 for T in tokens[3:]]
elif units == 'F':
Tlist = [(float(T)+459.67)*5./9. for T in tokens[3:]]
else:
Tlist = [float(T) for T in tokens[3:]]
elif line.startswith('PressureModel:'):
tokens = line.split()
units = tokens[2][1:-1]
assert units in ['atm', 'bar', 'Pa', 'torr']
if units == 'atm':
Plist = [float(P)*101325. for P in tokens[3:]]
elif units == 'bar':
Plist = [float(P)*100000. for P in tokens[3:]]
elif units == 'torr':
Plist = [float(P)/760.*101325. for P in tokens[3:]]
else:
Plist = [float(P) for P in tokens[3:]]
elif line.startswith('InitialStatus:'):
label = ''; concentrations = []; adjlist = ''
line = self.readMeaningfulLineJava(f)
while line != 'END':
if line == '' and label != '':
species = Species(label=label, molecule=[Molecule().fromAdjacencyList(adjlist)])
self.initialSpecies.append(species)
speciesDict[label] = species
concentrationList.append(concentrations)
label = ''; concentrations = []; adjlist = ''
elif line != '' and label == '':
tokens = line.split()
label = tokens[0]
units = tokens[1][1:-1]
if tokens[-1] in ['Unreactive', 'ConstantConcentration']:
tokens.pop(-1)
assert units in ['mol/cm3', 'mol/m3', 'mol/l']
if units == 'mol/cm3':
concentrations = [float(C)*1.0e6 for C in tokens[2:]]
elif units == 'mol/l':
concentrations = [float(C)*1.0e3 for C in tokens[2:]]
else:
concentrations = [float(C) for C in tokens[2:]]
elif line != '':
adjlist += line + '\n'
line = f.readline().strip()
if '//' in line: line = line[0:line.index('//')]
elif line.startswith('InertGas:'):
line = self.readMeaningfulLineJava(f)
while line != 'END':
tokens = line.split()
label = tokens[0]
assert label in ['N2', 'Ar', 'He', 'Ne']
if label == 'Ne':
smiles = '[Ne]'
elif label == 'Ar':
smiles = '[Ar]'
elif label == 'He':
smiles = '[He]'
else:
smiles = 'N#N'
units = tokens[1][1:-1]
assert units in ['mol/cm3', 'mol/m3', 'mol/l']
if units == 'mol/cm3':
concentrations = [float(C)*1.0e6 for C in tokens[2:]]
elif units == 'mol/l':
concentrations = [float(C)*1.0e3 for C in tokens[2:]]
else:
concentrations = [float(C) for C in tokens[2:]]
species = Species(label=label, reactive=False, molecule=[Molecule().fromSMILES(smiles)])
self.initialSpecies.append(species)
speciesDict[label] = species
concentrationList.append(concentrations)
line = self.readMeaningfulLineJava(f)
elif line.startswith('FinishController:'):
# First meaningful line is a termination time or conversion
line = self.readMeaningfulLineJava(f)
tokens = line.split()
if tokens[2].lower() == 'conversion:':
label = tokens[3]
conversion = float(tokens[4])
termination.append(TerminationConversion(spec=speciesDict[label], conv=conversion))
elif tokens[2].lower() == 'reactiontime:':
time = float(tokens[3])
units = tokens[4][1:-1]
assert units in ['sec', 'min', 'hr', 'day']
if units == 'min':
time *= 60.
elif units == 'hr':
time *= 60. * 60.
elif units == 'day':
time *= 60. * 60. * 24.
termination.append(TerminationTime(time=time))
# Second meaningful line is the error tolerance
# We're not doing anything with this information yet!
line = self.readMeaningfulLineJava(f)
elif line.startswith('Atol:'):
tokens = line.split()
atol = float(tokens[1])
elif line.startswith('Rtol:'):
tokens = line.split()
rtol = float(tokens[1])
line = self.readMeaningfulLineJava(f)
assert len(Tlist) > 0
assert len(Plist) > 0
concentrationList = numpy.array(concentrationList)
assert concentrationList.shape[1] > 0 # An arbitrary number of concentrations is acceptable, and should be run for each reactor system
# Make a reaction system for each (T,P) combination
for T in Tlist:
for P in Plist:
for i in range(concentrationList.shape[1]):
concentrations = concentrationList[:,i]
totalConc = numpy.sum(concentrations)
initialMoleFractions = dict([(self.initialSpecies[i], concentrations[i] / totalConc) for i in range(len(self.initialSpecies))])
reactionSystem = SimpleReactor(T, P, initialMoleFractions=initialMoleFractions, termination=termination)
self.reactionSystems.append(reactionSystem)
def readMeaningfulLineJava(self, f):
"""
Read a meaningful line from an RMG-Java condition file object `f`,
returning the line with any comments removed.
"""
line = f.readline()
if line != '':
line = line.strip()
if '//' in line: line = line[0:line.index('//')]
while line == '':
line = f.readline()
if line == '': break
line = line.strip()
if '//' in line: line = line[0:line.index('//')]
return line
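# Illustrative note (added; not part of the original source): readMeaningfulLineJava()
# skips blank lines and strips '//' comments, so a condition-file line such as
#     TemperatureModel: Constant (K) 600 700 800   // three temperatures
# is returned with the trailing '// three temperatures' comment removed.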
################################################################################
def initializeLog(verbose, log_file_name):
"""
Set up a logger for RMG to use to print output to stdout. The
`verbose` parameter is an integer specifying the amount of log text seen
at the console; the levels correspond to those of the :data:`logging` module.
"""
# Create logger
logger = logging.getLogger()
logger.setLevel(verbose)
# Create console handler and set level to debug; send everything to stdout
# rather than stderr
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(verbose)
logging.addLevelName(logging.CRITICAL, 'Critical: ')
logging.addLevelName(logging.ERROR, 'Error: ')
logging.addLevelName(logging.WARNING, 'Warning: ')
logging.addLevelName(logging.INFO, '')
logging.addLevelName(logging.DEBUG, '')
logging.addLevelName(1, '')
# Create formatter and add to console handler
#formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s', '%Y-%m-%d %H:%M:%S')
#formatter = Formatter('%(message)s', '%Y-%m-%d %H:%M:%S')
formatter = logging.Formatter('%(levelname)s%(message)s')
ch.setFormatter(formatter)
# create file handler
if os.path.exists(log_file_name):
backup = os.path.join(log_file_name[:-7], 'RMG_backup.log')
if os.path.exists(backup):
print "Removing old "+backup
os.remove(backup)
print 'Moving {0} to {1}\n'.format(log_file_name, backup)
shutil.move(log_file_name, backup)
fh = logging.FileHandler(filename=log_file_name) #, backupCount=3)
fh.setLevel(min(logging.DEBUG,verbose)) # always at least VERBOSE in the file
fh.setFormatter(formatter)
# notice that STDERR does not get saved to the log file
# so errors from underlying libraries (eg. openbabel) etc. that report
# on stderr will not be logged to disk.
# remove old handlers!
while logger.handlers:
logger.removeHandler(logger.handlers[0])
# Add console and file handlers to logger
logger.addHandler(ch)
logger.addHandler(fh)
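# Minimal usage sketch (assumed; not taken from the original module): a driver
# script would typically configure logging before building the RMG object, e.g.
#     initializeLog(logging.INFO, os.path.join(output_directory, 'RMG.log'))
# after which both the console handler and the file handler receive all
# subsequent log messages at their respective levels.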
################################################################################
class Tee:
"""A simple tee to create a stream which prints to many streams.
This is used to report the profiling statistics to both the log file
and the standard output.
"""
def __init__(self, *fileobjects):
self.fileobjects=fileobjects
def write(self, string):
for fileobject in self.fileobjects:
fileobject.write(string)
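    # Usage sketch (illustrative only): mirror output to the screen and a log file.
    #     out = Tee(sys.stdout, open('RMG.log', 'a'))
    #     print >>out, 'profiling summary...'   # written to both streams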
def processProfileStats(stats_file, log_file):
import pstats
out_stream = Tee(sys.stdout,open(log_file,'a')) # print to screen AND append to RMG.log
print >>out_stream, "="*80
print >>out_stream, "Profiling Data".center(80)
print >>out_stream, "="*80
stats = pstats.Stats(stats_file,stream=out_stream)
stats.strip_dirs()
print >>out_stream, "Sorted by internal time"
stats.sort_stats('time')
stats.print_stats(25)
stats.print_callers(25)
print >>out_stream, "Sorted by cumulative time"
stats.sort_stats('cumulative')
stats.print_stats(25)
stats.print_callers(25)
stats.print_callees(25)
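# Typical invocation (a sketch; 'RMG.profile' is an assumed file name produced by a
# profiled run):
#     processProfileStats('RMG.profile', 'RMG.log')
# which appends the pstats summaries to RMG.log while also printing them to stdout.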
def makeProfileGraph(stats_file):
"""
Uses gprof2dot to create a graphviz dot file of the profiling information.
This requires the gprof2dot package available via `pip install gprof2dot`.
Render the result using the program 'dot' via a command like
`dot -Tpdf input.dot -o output.pdf`.
"""
try:
from gprof2dot import PstatsParser, DotWriter, SAMPLES, themes
except ImportError:
logging.warning('Trouble importing from package gprof2dot. Unable to create a graph of the profile statistics.')
logging.warning('Try getting the latest version with something like `pip install --upgrade gprof2dot`.')
return
import subprocess
#create an Options class to mimic optparser output as much as possible:
class Options:
pass
options = Options()
options.node_thres = 0.8
options.edge_thres = 0.1
options.strip = False
options.show_samples = False
options.root = ""
options.leaf = ""
options.wrap = True
theme = themes['color'] # bw color gray pink
theme.fontname = "ArialMT" # default "Arial" leads to PostScript warnings in dot (on Mac OS)
parser = PstatsParser(stats_file)
profile = parser.parse()
dot_file = stats_file + '.dot'
output = open(dot_file,'wt')
dot = DotWriter(output)
dot.strip = options.strip
dot.wrap = options.wrap
if options.show_samples:
dot.show_function_events.append(SAMPLES)
profile = profile
profile.prune(options.node_thres/100.0, options.edge_thres/100.0)
if options.root:
rootId = profile.getFunctionId(options.root)
if not rootId:
sys.stderr.write('root node ' + options.root + ' not found (might already be pruned : try -e0 -n0 flags)\n')
sys.exit(1)
profile.prune_root(rootId)
if options.leaf:
leafId = profile.getFunctionId(options.leaf)
if not leafId:
sys.stderr.write('leaf node ' + options.leaf + ' not found (maybe already pruned : try -e0 -n0 flags)\n')
sys.exit(1)
profile.prune_leaf(leafId)
dot.graph(profile, theme)
output.close()
try:
subprocess.check_call(['dot', '-Tpdf', dot_file, '-o', '{0}.pdf'.format(dot_file)])
except subprocess.CalledProcessError:
logging.error("Error returned by 'dot' when generating graph of the profile statistics.")
logging.info("To try it yourself:\n dot -Tpdf {0} -o {0}.pdf".format(dot_file))
except OSError:
logging.error("Couldn't run 'dot' to create graph of profile statistics. Check graphviz is installed properly and on your path.")
logging.info("Once you've got it, try:\n dot -Tpdf {0} -o {0}.pdf".format(dot_file))
else:
logging.info("Graph of profile statistics saved to: \n {0}.pdf".format(dot_file))
| mit |
willgrass/pandas | pandas/rpy/common.py | 1 | 2618 | """
Utilities for making working with rpy2 more user- and
developer-friendly.
"""
import numpy as np
from pandas import DataFrame, DataMatrix
from rpy2.robjects.packages import importr
from rpy2.robjects import r
import rpy2.robjects as robj
__all__ = ['convert_robj', 'load_data']
def load_data(name, package=None):
if package:
pack = importr(package)
r.data(name)
return convert_robj(r[name])
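# Usage sketch (illustrative; 'iris' is one of R's built-in datasets in the
# 'datasets' package):
#     iris = load_data('iris', package='datasets')   # returns a pandas DataFrame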
def _rclass(obj):
"""
Return R class name for input object
"""
return r['class'](obj)[0]
def _is_null(obj):
return _rclass(obj) == 'NULL'
def _convert_list(obj):
pass
def _convert_named_list(obj):
pass
def _convert_DataFrame(rdf):
columns = list(rdf.colnames)
rows = np.array(rdf.rownames)
data = {}
for i, col in enumerate(columns):
vec = rdf.rx2(i + 1)
data[col] = list(vec)
return DataFrame(data, index=rows)
def _convert_Matrix(mat):
columns = mat.colnames
rows = mat.rownames
columns = None if _is_null(columns) else list(columns)
    index = None if _is_null(rows) else list(rows)
return DataMatrix(np.array(mat), index=index, columns=columns)
def _check_int(vec):
try:
# R observation numbers come through as strings
vec = vec.astype(int)
except Exception:
pass
return vec
_converters = [
(robj.DataFrame , _convert_DataFrame),
(robj.Matrix , _convert_Matrix),
]
def convert_robj(obj):
"""
Convert rpy2 object to a pandas-friendly form
Parameters
----------
obj : rpy2 object
Returns
-------
Non-rpy data structure, mix of NumPy and pandas objects
"""
    if not isinstance(obj, robj.RObjectMixin):
return obj
for rpy_type, converter in _converters:
if isinstance(obj, rpy_type):
return converter(obj)
    raise Exception('Do not know what to do with %s object' % type(obj))
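# Usage sketch (illustrative; 'faithful' is a built-in R dataset):
#     df = convert_robj(r['faithful'])                    # R data.frame -> DataFrame
#     mat = convert_robj(r('matrix(rnorm(9), ncol=3)'))   # R matrix -> DataMatrix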
import pandas.util.testing as _test
def test_convert_list():
obj = r('list(a=1, b=2, c=3)')
converted = convert_robj(obj)
_test.assert_dict_equal
def test_convert_frame():
# built-in dataset
df = r['faithful']
    converted = convert_robj(df)
def _named_matrix():
r('mat <- matrix(rnorm(9), ncol=3)')
r('colnames(mat) <- c("one", "two", "three")')
r('rownames(mat) <- c("a", "b", "c")')
return r['mat']
def test_convert_matrix():
mat = _named_matrix()
converted = convert_robj(mat)
assert np.array_equal(converted.index, ['a', 'b', 'c'])
assert np.array_equal(converted.columns, ['one', 'two', 'three'])
def test_convert_nested():
pass
| bsd-3-clause |
ElDeveloper/scikit-learn | sklearn/cluster/tests/test_spectral.py | 262 | 7954 | """Testing for Spectral Clustering methods"""
from sklearn.externals.six.moves import cPickle
dumps, loads = cPickle.dumps, cPickle.loads
import numpy as np
from scipy import sparse
from sklearn.utils import check_random_state
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_greater
from sklearn.utils.testing import assert_warns_message
from sklearn.cluster import SpectralClustering, spectral_clustering
from sklearn.cluster.spectral import spectral_embedding
from sklearn.cluster.spectral import discretize
from sklearn.metrics import pairwise_distances
from sklearn.metrics import adjusted_rand_score
from sklearn.metrics.pairwise import kernel_metrics, rbf_kernel
from sklearn.datasets.samples_generator import make_blobs
def test_spectral_clustering():
S = np.array([[1.0, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 0.2, 0.0, 0.0, 0.0],
[0.2, 0.2, 0.2, 1.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0],
[0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]])
for eigen_solver in ('arpack', 'lobpcg'):
for assign_labels in ('kmeans', 'discretize'):
for mat in (S, sparse.csr_matrix(S)):
model = SpectralClustering(random_state=0, n_clusters=2,
affinity='precomputed',
eigen_solver=eigen_solver,
assign_labels=assign_labels
).fit(mat)
labels = model.labels_
if labels[0] == 0:
labels = 1 - labels
assert_array_equal(labels, [1, 1, 1, 0, 0, 0, 0])
model_copy = loads(dumps(model))
assert_equal(model_copy.n_clusters, model.n_clusters)
assert_equal(model_copy.eigen_solver, model.eigen_solver)
assert_array_equal(model_copy.labels_, model.labels_)
def test_spectral_amg_mode():
# Test the amg mode of SpectralClustering
centers = np.array([
[0., 0., 0.],
[10., 10., 10.],
[20., 20., 20.],
])
X, true_labels = make_blobs(n_samples=100, centers=centers,
cluster_std=1., random_state=42)
D = pairwise_distances(X) # Distance matrix
S = np.max(D) - D # Similarity matrix
S = sparse.coo_matrix(S)
try:
from pyamg import smoothed_aggregation_solver
amg_loaded = True
except ImportError:
amg_loaded = False
if amg_loaded:
labels = spectral_clustering(S, n_clusters=len(centers),
random_state=0, eigen_solver="amg")
# We don't care too much that it's good, just that it *worked*.
# There does have to be some lower limit on the performance though.
assert_greater(np.mean(labels == true_labels), .3)
else:
assert_raises(ValueError, spectral_embedding, S,
n_components=len(centers),
random_state=0, eigen_solver="amg")
def test_spectral_unknown_mode():
# Test that SpectralClustering fails with an unknown mode set.
centers = np.array([
[0., 0., 0.],
[10., 10., 10.],
[20., 20., 20.],
])
X, true_labels = make_blobs(n_samples=100, centers=centers,
cluster_std=1., random_state=42)
D = pairwise_distances(X) # Distance matrix
S = np.max(D) - D # Similarity matrix
S = sparse.coo_matrix(S)
assert_raises(ValueError, spectral_clustering, S, n_clusters=2,
random_state=0, eigen_solver="<unknown>")
def test_spectral_unknown_assign_labels():
# Test that SpectralClustering fails with an unknown assign_labels set.
centers = np.array([
[0., 0., 0.],
[10., 10., 10.],
[20., 20., 20.],
])
X, true_labels = make_blobs(n_samples=100, centers=centers,
cluster_std=1., random_state=42)
D = pairwise_distances(X) # Distance matrix
S = np.max(D) - D # Similarity matrix
S = sparse.coo_matrix(S)
assert_raises(ValueError, spectral_clustering, S, n_clusters=2,
random_state=0, assign_labels="<unknown>")
def test_spectral_clustering_sparse():
X, y = make_blobs(n_samples=20, random_state=0,
centers=[[1, 1], [-1, -1]], cluster_std=0.01)
S = rbf_kernel(X, gamma=1)
S = np.maximum(S - 1e-4, 0)
S = sparse.coo_matrix(S)
labels = SpectralClustering(random_state=0, n_clusters=2,
affinity='precomputed').fit(S).labels_
assert_equal(adjusted_rand_score(y, labels), 1)
def test_affinities():
# Note: in the following, random_state has been selected to have
# a dataset that yields a stable eigen decomposition both when built
# on OSX and Linux
X, y = make_blobs(n_samples=20, random_state=0,
centers=[[1, 1], [-1, -1]], cluster_std=0.01
)
# nearest neighbors affinity
sp = SpectralClustering(n_clusters=2, affinity='nearest_neighbors',
random_state=0)
assert_warns_message(UserWarning, 'not fully connected', sp.fit, X)
assert_equal(adjusted_rand_score(y, sp.labels_), 1)
sp = SpectralClustering(n_clusters=2, gamma=2, random_state=0)
labels = sp.fit(X).labels_
assert_equal(adjusted_rand_score(y, labels), 1)
X = check_random_state(10).rand(10, 5) * 10
kernels_available = kernel_metrics()
for kern in kernels_available:
# Additive chi^2 gives a negative similarity matrix which
# doesn't make sense for spectral clustering
if kern != 'additive_chi2':
sp = SpectralClustering(n_clusters=2, affinity=kern,
random_state=0)
labels = sp.fit(X).labels_
assert_equal((X.shape[0],), labels.shape)
sp = SpectralClustering(n_clusters=2, affinity=lambda x, y: 1,
random_state=0)
labels = sp.fit(X).labels_
assert_equal((X.shape[0],), labels.shape)
def histogram(x, y, **kwargs):
# Histogram kernel implemented as a callable.
assert_equal(kwargs, {}) # no kernel_params that we didn't ask for
return np.minimum(x, y).sum()
sp = SpectralClustering(n_clusters=2, affinity=histogram, random_state=0)
labels = sp.fit(X).labels_
assert_equal((X.shape[0],), labels.shape)
# raise error on unknown affinity
sp = SpectralClustering(n_clusters=2, affinity='<unknown>')
assert_raises(ValueError, sp.fit, X)
def test_discretize(seed=8):
# Test the discretize using a noise assignment matrix
random_state = np.random.RandomState(seed)
for n_samples in [50, 100, 150, 500]:
for n_class in range(2, 10):
# random class labels
y_true = random_state.random_integers(0, n_class, n_samples)
y_true = np.array(y_true, np.float)
# noise class assignment matrix
y_indicator = sparse.coo_matrix((np.ones(n_samples),
(np.arange(n_samples),
y_true)),
shape=(n_samples,
n_class + 1))
y_true_noisy = (y_indicator.toarray()
+ 0.1 * random_state.randn(n_samples,
n_class + 1))
y_pred = discretize(y_true_noisy, random_state)
assert_greater(adjusted_rand_score(y_true, y_pred), 0.8)
| bsd-3-clause |
rohit21122012/DCASE2013 | runs/2016/dnn2016med_traps/traps17/task1_scene_classification.py | 40 | 38423 | #!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# DCASE 2016::Acoustic Scene Classification / Baseline System
import argparse
import textwrap
import timeit
import skflow
from sklearn import mixture
from sklearn import preprocessing as pp
from sklearn.externals import joblib
from sklearn.metrics import confusion_matrix
from src.dataset import *
from src.evaluation import *
from src.features import *
__version_info__ = ('1', '0', '0')
__version__ = '.'.join(__version_info__)
final_result = {}
train_start = 0.0
train_end = 0.0
test_start = 0.0
test_end = 0.0
def main(argv):
numpy.random.seed(123456) # let's make randomization predictable
tot_start = timeit.default_timer()
parser = argparse.ArgumentParser(
prefix_chars='-+',
formatter_class=argparse.RawDescriptionHelpFormatter,
description=textwrap.dedent('''\
DCASE 2016
Task 1: Acoustic Scene Classification
Baseline system
---------------------------------------------
Tampere University of Technology / Audio Research Group
Author: Toni Heittola ( [email protected] )
System description
            This is a baseline implementation for the DCASE 2016 challenge acoustic scene classification task.
Features: MFCC (static+delta+acceleration)
Classifier: GMM
'''))
# Setup argument handling
parser.add_argument("-development", help="Use the system in the development mode", action='store_true',
default=False, dest='development')
parser.add_argument("-challenge", help="Use the system in the challenge mode", action='store_true',
default=False, dest='challenge')
parser.add_argument('-v', '--version', action='version', version='%(prog)s ' + __version__)
args = parser.parse_args()
# Load parameters from config file
parameter_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
os.path.splitext(os.path.basename(__file__))[0] + '.yaml')
params = load_parameters(parameter_file)
params = process_parameters(params)
make_folders(params)
title("DCASE 2016::Acoustic Scene Classification / Baseline System")
# Check if mode is defined
if not (args.development or args.challenge):
args.development = True
args.challenge = False
dataset_evaluation_mode = 'folds'
if args.development and not args.challenge:
print "Running system in development mode"
dataset_evaluation_mode = 'folds'
elif not args.development and args.challenge:
print "Running system in challenge mode"
dataset_evaluation_mode = 'full'
# Get dataset container class
dataset = eval(params['general']['development_dataset'])(data_path=params['path']['data'])
# Fetch data over internet and setup the data
# ==================================================
if params['flow']['initialize']:
dataset.fetch()
# Extract features for all audio files in the dataset
# ==================================================
if params['flow']['extract_features']:
section_header('Feature extraction')
# Collect files in train sets
files = []
for fold in dataset.folds(mode=dataset_evaluation_mode):
for item_id, item in enumerate(dataset.train(fold)):
if item['file'] not in files:
files.append(item['file'])
for item_id, item in enumerate(dataset.test(fold)):
if item['file'] not in files:
files.append(item['file'])
files = sorted(files)
# Go through files and make sure all features are extracted
do_feature_extraction(files=files,
dataset=dataset,
feature_path=params['path']['features'],
params=params['features'],
overwrite=params['general']['overwrite'])
foot()
# Prepare feature normalizers
# ==================================================
if params['flow']['feature_normalizer']:
section_header('Feature normalizer')
do_feature_normalization(dataset=dataset,
feature_normalizer_path=params['path']['feature_normalizers'],
feature_path=params['path']['features'],
dataset_evaluation_mode=dataset_evaluation_mode,
overwrite=params['general']['overwrite'])
foot()
# System training
# ==================================================
if params['flow']['train_system']:
section_header('System training')
train_start = timeit.default_timer()
do_system_training(dataset=dataset,
model_path=params['path']['models'],
feature_normalizer_path=params['path']['feature_normalizers'],
feature_path=params['path']['features'],
classifier_params=params['classifier']['parameters'],
classifier_method=params['classifier']['method'],
dataset_evaluation_mode=dataset_evaluation_mode,
overwrite=params['general']['overwrite']
)
train_end = timeit.default_timer()
foot()
# System evaluation in development mode
if args.development and not args.challenge:
# System testing
# ==================================================
if params['flow']['test_system']:
section_header('System testing')
test_start = timeit.default_timer()
do_system_testing(dataset=dataset,
feature_path=params['path']['features'],
result_path=params['path']['results'],
model_path=params['path']['models'],
feature_params=params['features'],
dataset_evaluation_mode=dataset_evaluation_mode,
classifier_method=params['classifier']['method'],
overwrite=params['general']['overwrite']
)
test_end = timeit.default_timer()
foot()
# System evaluation
# ==================================================
if params['flow']['evaluate_system']:
section_header('System evaluation')
do_system_evaluation(dataset=dataset,
dataset_evaluation_mode=dataset_evaluation_mode,
result_path=params['path']['results'])
foot()
# System evaluation with challenge data
elif not args.development and args.challenge:
# Fetch data over internet and setup the data
challenge_dataset = eval(params['general']['challenge_dataset'])()
if params['flow']['initialize']:
challenge_dataset.fetch()
# System testing
if params['flow']['test_system']:
section_header('System testing with challenge data')
do_system_testing(dataset=challenge_dataset,
feature_path=params['path']['features'],
result_path=params['path']['challenge_results'],
model_path=params['path']['models'],
feature_params=params['features'],
dataset_evaluation_mode=dataset_evaluation_mode,
classifier_method=params['classifier']['method'],
overwrite=True
)
foot()
print " "
print "Your results for the challenge data are stored at [" + params['path']['challenge_results'] + "]"
print " "
tot_end = timeit.default_timer()
print " "
print "Train Time : " + str(train_end - train_start)
print " "
print " "
print "Test Time : " + str(test_end - test_start)
print " "
print " "
print "Total Time : " + str(tot_end - tot_start)
print " "
final_result['train_time'] = train_end - train_start
final_result['test_time'] = test_end - test_start
final_result['tot_time'] = tot_end - tot_start
joblib.dump(final_result, 'result.pkl')
return 0
def process_parameters(params):
"""Parameter post-processing.
Parameters
----------
params : dict
parameters in dict
Returns
-------
params : dict
processed parameters
"""
# Convert feature extraction window and hop sizes seconds to samples
params['features']['mfcc']['win_length'] = int(params['features']['win_length_seconds'] * params['features']['fs'])
params['features']['mfcc']['hop_length'] = int(params['features']['hop_length_seconds'] * params['features']['fs'])
# Copy parameters for current classifier method
params['classifier']['parameters'] = params['classifier_parameters'][params['classifier']['method']]
# Hash
params['features']['hash'] = get_parameter_hash(params['features'])
params['classifier']['hash'] = get_parameter_hash(params['classifier'])
# Paths
params['path']['data'] = os.path.join(os.path.dirname(os.path.realpath(__file__)), params['path']['data'])
params['path']['base'] = os.path.join(os.path.dirname(os.path.realpath(__file__)), params['path']['base'])
# Features
params['path']['features_'] = params['path']['features']
params['path']['features'] = os.path.join(params['path']['base'],
params['path']['features'],
params['features']['hash'])
# Feature normalizers
params['path']['feature_normalizers_'] = params['path']['feature_normalizers']
params['path']['feature_normalizers'] = os.path.join(params['path']['base'],
params['path']['feature_normalizers'],
params['features']['hash'])
# Models
params['path']['models_'] = params['path']['models']
params['path']['models'] = os.path.join(params['path']['base'],
params['path']['models'],
params['features']['hash'], params['classifier']['hash'])
# Results
params['path']['results_'] = params['path']['results']
params['path']['results'] = os.path.join(params['path']['base'],
params['path']['results'],
params['features']['hash'], params['classifier']['hash'])
return params
def make_folders(params, parameter_filename='parameters.yaml'):
"""Create all needed folders, and saves parameters in yaml-file for easier manual browsing of data.
Parameters
----------
params : dict
parameters in dict
parameter_filename : str
filename to save parameters used to generate the folder name
Returns
-------
nothing
"""
# Check that target path exists, create if not
check_path(params['path']['features'])
check_path(params['path']['feature_normalizers'])
check_path(params['path']['models'])
check_path(params['path']['results'])
# Save parameters into folders to help manual browsing of files.
# Features
feature_parameter_filename = os.path.join(params['path']['features'], parameter_filename)
if not os.path.isfile(feature_parameter_filename):
save_parameters(feature_parameter_filename, params['features'])
# Feature normalizers
feature_normalizer_parameter_filename = os.path.join(params['path']['feature_normalizers'], parameter_filename)
if not os.path.isfile(feature_normalizer_parameter_filename):
save_parameters(feature_normalizer_parameter_filename, params['features'])
# Models
model_features_parameter_filename = os.path.join(params['path']['base'],
params['path']['models_'],
params['features']['hash'],
parameter_filename)
if not os.path.isfile(model_features_parameter_filename):
save_parameters(model_features_parameter_filename, params['features'])
model_models_parameter_filename = os.path.join(params['path']['base'],
params['path']['models_'],
params['features']['hash'],
params['classifier']['hash'],
parameter_filename)
if not os.path.isfile(model_models_parameter_filename):
save_parameters(model_models_parameter_filename, params['classifier'])
# Results
# Save parameters into folders to help manual browsing of files.
result_features_parameter_filename = os.path.join(params['path']['base'],
params['path']['results_'],
params['features']['hash'],
parameter_filename)
if not os.path.isfile(result_features_parameter_filename):
save_parameters(result_features_parameter_filename, params['features'])
result_models_parameter_filename = os.path.join(params['path']['base'],
params['path']['results_'],
params['features']['hash'],
params['classifier']['hash'],
parameter_filename)
if not os.path.isfile(result_models_parameter_filename):
save_parameters(result_models_parameter_filename, params['classifier'])
def get_feature_filename(audio_file, path, extension='cpickle'):
"""Get feature filename
Parameters
----------
audio_file : str
audio file name from which the features are extracted
path : str
feature path
extension : str
file extension
(Default value='cpickle')
Returns
-------
feature_filename : str
full feature filename
"""
audio_filename = os.path.split(audio_file)[1]
return os.path.join(path, os.path.splitext(audio_filename)[0] + '.' + extension)
def get_feature_normalizer_filename(fold, path, extension='cpickle'):
"""Get normalizer filename
Parameters
----------
fold : int >= 0
evaluation fold number
path : str
normalizer path
extension : str
file extension
(Default value='cpickle')
Returns
-------
normalizer_filename : str
full normalizer filename
"""
return os.path.join(path, 'scale_fold' + str(fold) + '.' + extension)
def get_model_filename(fold, path, extension='cpickle'):
"""Get model filename
Parameters
----------
fold : int >= 0
evaluation fold number
path : str
model path
extension : str
file extension
(Default value='cpickle')
Returns
-------
model_filename : str
full model filename
"""
return os.path.join(path, 'model_fold' + str(fold) + '.' + extension)
def get_result_filename(fold, path, extension='txt'):
"""Get result filename
Parameters
----------
fold : int >= 0
evaluation fold number
path : str
result path
extension : str
file extension
        (Default value='txt')
Returns
-------
result_filename : str
full result filename
"""
if fold == 0:
return os.path.join(path, 'results.' + extension)
else:
return os.path.join(path, 'results_fold' + str(fold) + '.' + extension)
def do_feature_extraction(files, dataset, feature_path, params, overwrite=False):
"""Feature extraction
Parameters
----------
files : list
file list
dataset : class
dataset class
feature_path : str
path where the features are saved
params : dict
parameter dict
overwrite : bool
overwrite existing feature files
(Default value=False)
Returns
-------
nothing
Raises
-------
IOError
Audio file not found.
"""
# Check that target path exists, create if not
check_path(feature_path)
for file_id, audio_filename in enumerate(files):
# Get feature filename
current_feature_file = get_feature_filename(audio_file=os.path.split(audio_filename)[1], path=feature_path)
progress(title_text='Extracting',
percentage=(float(file_id) / len(files)),
note=os.path.split(audio_filename)[1])
if not os.path.isfile(current_feature_file) or overwrite:
# Load audio data
if os.path.isfile(dataset.relative_to_absolute_path(audio_filename)):
y, fs = load_audio(filename=dataset.relative_to_absolute_path(audio_filename), mono=True,
fs=params['fs'])
else:
raise IOError("Audio file not found [%s]" % audio_filename)
# Extract features
if params['method'] == 'lfcc':
feature_file_txt = get_feature_filename(audio_file=os.path.split(audio_filename)[1],
path=feature_path,
extension='txt')
feature_data = feature_extraction_lfcc(feature_file_txt)
elif params['method'] == 'traps':
feature_data = feature_extraction_traps(y=y,
fs=fs,
traps_params=params['traps'],
mfcc_params=params['mfcc'])
else:
# feature_data['feat'].shape is (1501, 60)
feature_data = feature_extraction(y=y,
fs=fs,
include_mfcc0=params['include_mfcc0'],
include_delta=params['include_delta'],
include_acceleration=params['include_acceleration'],
mfcc_params=params['mfcc'],
delta_params=params['mfcc_delta'],
acceleration_params=params['mfcc_acceleration'])
# Save
save_data(current_feature_file, feature_data)
def do_feature_normalization(dataset, feature_normalizer_path, feature_path, dataset_evaluation_mode='folds',
overwrite=False):
"""Feature normalization
    Calculates normalization factors for each evaluation fold based on the training material available.
Parameters
----------
dataset : class
dataset class
feature_normalizer_path : str
path where the feature normalizers are saved.
feature_path : str
path where the features are saved.
dataset_evaluation_mode : str ['folds', 'full']
evaluation mode, 'full' all material available is considered to belong to one fold.
(Default value='folds')
overwrite : bool
overwrite existing normalizers
(Default value=False)
Returns
-------
nothing
Raises
-------
IOError
Feature file not found.
"""
# Check that target path exists, create if not
check_path(feature_normalizer_path)
for fold in dataset.folds(mode=dataset_evaluation_mode):
current_normalizer_file = get_feature_normalizer_filename(fold=fold, path=feature_normalizer_path)
if not os.path.isfile(current_normalizer_file) or overwrite:
# Initialize statistics
file_count = len(dataset.train(fold))
normalizer = FeatureNormalizer()
for item_id, item in enumerate(dataset.train(fold)):
progress(title_text='Collecting data',
fold=fold,
percentage=(float(item_id) / file_count),
note=os.path.split(item['file'])[1])
# Load features
if os.path.isfile(get_feature_filename(audio_file=item['file'], path=feature_path)):
feature_data = load_data(get_feature_filename(audio_file=item['file'], path=feature_path))['stat']
else:
raise IOError("Feature file not found [%s]" % (item['file']))
# Accumulate statistics
normalizer.accumulate(feature_data)
# Calculate normalization factors
normalizer.finalize()
# Save
save_data(current_normalizer_file, normalizer)
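def example_feature_normalization(train_stats, feature_matrix):
    """Illustrative sketch only, not part of the original script.
    Shows the accumulate/finalize/normalize pattern used above in isolation;
    both arguments are placeholders for data produced by the feature extractor
    (per-file 'stat' blocks and a raw feature matrix).
    """
    normalizer = FeatureNormalizer()
    for stats in train_stats:
        # One 'stat' block per training file, as loaded in do_feature_normalization.
        normalizer.accumulate(stats)
    # Compute the normalization factors from everything accumulated so far.
    normalizer.finalize()
    return normalizer.normalize(feature_matrix)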
def do_system_training(dataset, model_path, feature_normalizer_path, feature_path, classifier_params,
dataset_evaluation_mode='folds', classifier_method='gmm', overwrite=False):
"""System training
model container format:
{
'normalizer': normalizer class
'models' :
{
'office' : mixture.GMM class
'home' : mixture.GMM class
...
}
}
Parameters
----------
dataset : class
dataset class
model_path : str
path where the models are saved.
feature_normalizer_path : str
path where the feature normalizers are saved.
feature_path : str
path where the features are saved.
classifier_params : dict
parameter dict
dataset_evaluation_mode : str ['folds', 'full']
evaluation mode, 'full' all material available is considered to belong to one fold.
(Default value='folds')
classifier_method : str ['gmm']
classifier method, currently only GMM supported
(Default value='gmm')
overwrite : bool
overwrite existing models
(Default value=False)
Returns
-------
nothing
Raises
-------
ValueError
classifier_method is unknown.
IOError
Feature normalizer not found.
Feature file not found.
"""
if classifier_method != 'gmm' and classifier_method != 'dnn':
raise ValueError("Unknown classifier method [" + classifier_method + "]")
# Check that target path exists, create if not
check_path(model_path)
for fold in dataset.folds(mode=dataset_evaluation_mode):
current_model_file = get_model_filename(fold=fold, path=model_path)
if not os.path.isfile(current_model_file) or overwrite:
# Load normalizer
feature_normalizer_filename = get_feature_normalizer_filename(fold=fold, path=feature_normalizer_path)
if os.path.isfile(feature_normalizer_filename):
normalizer = load_data(feature_normalizer_filename)
else:
raise IOError("Feature normalizer not found [%s]" % feature_normalizer_filename)
# Initialize model container
model_container = {'normalizer': normalizer, 'models': {}}
# Collect training examples
file_count = len(dataset.train(fold))
data = {}
for item_id, item in enumerate(dataset.train(fold)):
progress(title_text='Collecting data',
fold=fold,
percentage=(float(item_id) / file_count),
note=os.path.split(item['file'])[1])
# Load features
feature_filename = get_feature_filename(audio_file=item['file'], path=feature_path)
if os.path.isfile(feature_filename):
feature_data = load_data(feature_filename)['feat']
else:
raise IOError("Features not found [%s]" % (item['file']))
# Scale features
feature_data = model_container['normalizer'].normalize(feature_data)
# Store features per class label
if item['scene_label'] not in data:
data[item['scene_label']] = feature_data
else:
data[item['scene_label']] = numpy.vstack((data[item['scene_label']], feature_data))
le = pp.LabelEncoder()
tot_data = {}
# Train models for each class
for label in data:
progress(title_text='Train models',
fold=fold,
note=label)
if classifier_method == 'gmm':
model_container['models'][label] = mixture.GMM(**classifier_params).fit(data[label])
elif classifier_method == 'dnn':
if 'x' not in tot_data:
tot_data['x'] = data[label]
tot_data['y'] = numpy.repeat(label, len(data[label]), axis=0)
else:
tot_data['x'] = numpy.vstack((tot_data['x'], data[label]))
tot_data['y'] = numpy.hstack((tot_data['y'], numpy.repeat(label, len(data[label]), axis=0)))
else:
raise ValueError("Unknown classifier method [" + classifier_method + "]")
            if classifier_method == 'dnn':
                # Construct the DNN classifier only when it is actually used;
                # building it with the GMM parameter set would fail.
                clf = skflow.TensorFlowDNNClassifier(**classifier_params)
tot_data['y'] = le.fit_transform(tot_data['y'])
clf.fit(tot_data['x'], tot_data['y'])
clf.save('dnn/dnnmodel1')
# Save models
save_data(current_model_file, model_container)
def do_system_testing(dataset, result_path, feature_path, model_path, feature_params,
dataset_evaluation_mode='folds', classifier_method='gmm', overwrite=False):
"""System testing.
    If extracted features are not found on disk, they are extracted but not saved.
Parameters
----------
dataset : class
dataset class
result_path : str
path where the results are saved.
feature_path : str
path where the features are saved.
model_path : str
path where the models are saved.
feature_params : dict
parameter dict
dataset_evaluation_mode : str ['folds', 'full']
evaluation mode, 'full' all material available is considered to belong to one fold.
(Default value='folds')
classifier_method : str ['gmm']
classifier method, currently only GMM supported
(Default value='gmm')
overwrite : bool
overwrite existing models
(Default value=False)
Returns
-------
nothing
Raises
-------
ValueError
classifier_method is unknown.
IOError
Model file not found.
Audio file not found.
"""
if classifier_method != 'gmm' and classifier_method != 'dnn':
raise ValueError("Unknown classifier method [" + classifier_method + "]")
# Check that target path exists, create if not
check_path(result_path)
for fold in dataset.folds(mode=dataset_evaluation_mode):
current_result_file = get_result_filename(fold=fold, path=result_path)
if not os.path.isfile(current_result_file) or overwrite:
results = []
# Load class model container
model_filename = get_model_filename(fold=fold, path=model_path)
if os.path.isfile(model_filename):
model_container = load_data(model_filename)
else:
raise IOError("Model file not found [%s]" % model_filename)
file_count = len(dataset.test(fold))
for file_id, item in enumerate(dataset.test(fold)):
progress(title_text='Testing',
fold=fold,
percentage=(float(file_id) / file_count),
note=os.path.split(item['file'])[1])
# Load features
feature_filename = get_feature_filename(audio_file=item['file'], path=feature_path)
if os.path.isfile(feature_filename):
feature_data = load_data(feature_filename)['feat']
else:
# Load audio
if os.path.isfile(dataset.relative_to_absolute_path(item['file'])):
y, fs = load_audio(filename=dataset.relative_to_absolute_path(item['file']), mono=True,
fs=feature_params['fs'])
else:
raise IOError("Audio file not found [%s]" % (item['file']))
if feature_params['method'] == 'lfcc':
feature_file_txt = get_feature_filename(audio_file=os.path.split(item['file'])[1],
path=feature_path,
extension='txt')
feature_data = feature_extraction_lfcc(feature_file_txt)
elif feature_params['method'] == 'traps':
feature_data = feature_extraction_traps(y=y,
fs=fs,
                                                            traps_params=feature_params['traps'],
mfcc_params=feature_params['mfcc'],
statistics=False)['feat']
else:
feature_data = feature_extraction(y=y,
fs=fs,
include_mfcc0=feature_params['include_mfcc0'],
include_delta=feature_params['include_delta'],
include_acceleration=feature_params['include_acceleration'],
mfcc_params=feature_params['mfcc'],
delta_params=feature_params['mfcc_delta'],
acceleration_params=feature_params['mfcc_acceleration'],
statistics=False)['feat']
# Normalize features
feature_data = model_container['normalizer'].normalize(feature_data)
# Do classification for the block
if classifier_method == 'gmm':
current_result = do_classification_gmm(feature_data, model_container)
current_class = current_result['class']
elif classifier_method == 'dnn':
current_result = do_classification_dnn(feature_data, model_container)
current_class = dataset.scene_labels[current_result['class_id']]
else:
raise ValueError("Unknown classifier method [" + classifier_method + "]")
# Store the result
if classifier_method == 'gmm':
results.append((dataset.absolute_to_relative(item['file']),
current_class))
elif classifier_method == 'dnn':
logs_in_tuple = tuple(lo for lo in current_result['logls'])
results.append((dataset.absolute_to_relative(item['file']),
current_class) + logs_in_tuple)
else:
raise ValueError("Unknown classifier method [" + classifier_method + "]")
# Save testing results
with open(current_result_file, 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for result_item in results:
writer.writerow(result_item)
def do_classification_dnn(feature_data, model_container):
    """DNN classification for a given feature matrix.
    Restores the trained network from 'dnn/dnnmodel1' and sums the per-frame
    log-probabilities to obtain one log-likelihood value per class.
    """
model_clf = skflow.TensorFlowEstimator.restore('dnn/dnnmodel1')
logls = numpy.sum(numpy.log(model_clf.predict_proba(feature_data)), 0)
classification_result_id = numpy.argmax(logls)
return {'class_id': classification_result_id,
'logls': logls}
def do_classification_gmm(feature_data, model_container):
"""GMM classification for give feature matrix
model container format:
{
'normalizer': normalizer class
'models' :
{
'office' : mixture.GMM class
'home' : mixture.GMM class
...
}
}
Parameters
----------
feature_data : numpy.ndarray [shape=(t, feature vector length)]
feature matrix
model_container : dict
model container
Returns
-------
result : str
classification result as scene label
"""
# Initialize log-likelihood matrix to -inf
logls = numpy.empty(len(model_container['models']))
logls.fill(-numpy.inf)
for label_id, label in enumerate(model_container['models']):
logls[label_id] = numpy.sum(model_container['models'][label].score(feature_data))
classification_result_id = numpy.argmax(logls)
return {'class': model_container['models'].keys()[classification_result_id],
'logls': logls}
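def example_gmm_classification(feature_data, model_container):
    """Illustrative sketch only, not part of the original script.
    Mirrors a single step of the testing loop above: the stored normalizer
    scales the feature matrix and the most likely scene label is returned.
    Both arguments are placeholders (a matrix from the feature extractor and a
    container produced by do_system_training).
    """
    normalized = model_container['normalizer'].normalize(feature_data)
    result = do_classification_gmm(normalized, model_container)
    return result['class']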
def do_system_evaluation(dataset, result_path, dataset_evaluation_mode='folds'):
"""System evaluation. Testing outputs are collected and evaluated. Evaluation results are printed.
Parameters
----------
dataset : class
dataset class
result_path : str
path where the results are saved.
dataset_evaluation_mode : str ['folds', 'full']
evaluation mode, 'full' all material available is considered to belong to one fold.
(Default value='folds')
Returns
-------
nothing
Raises
-------
IOError
Result file not found
"""
dcase2016_scene_metric = DCASE2016_SceneClassification_Metrics(class_list=dataset.scene_labels)
results_fold = []
tot_cm = numpy.zeros((dataset.scene_label_count, dataset.scene_label_count))
for fold in dataset.folds(mode=dataset_evaluation_mode):
dcase2016_scene_metric_fold = DCASE2016_SceneClassification_Metrics(class_list=dataset.scene_labels)
results = []
result_filename = get_result_filename(fold=fold, path=result_path)
if os.path.isfile(result_filename):
with open(result_filename, 'rt') as f:
for row in csv.reader(f, delimiter='\t'):
results.append(row)
else:
raise IOError("Result file not found [%s]" % result_filename)
# Rewrite the result file
if os.path.isfile(result_filename):
with open(result_filename+'2', 'wt') as f:
writer = csv.writer(f, delimiter='\t')
for result_item in results:
y_true = (dataset.file_meta(result_item[0])[0]['scene_label'],)
#print type(y_true)
#print type(result_item)
writer.writerow(y_true + tuple(result_item))
y_true = []
y_pred = []
for result in results:
y_true.append(dataset.file_meta(result[0])[0]['scene_label'])
y_pred.append(result[1])
dcase2016_scene_metric.evaluate(system_output=y_pred, annotated_ground_truth=y_true)
dcase2016_scene_metric_fold.evaluate(system_output=y_pred, annotated_ground_truth=y_true)
results_fold.append(dcase2016_scene_metric_fold.results())
tot_cm += confusion_matrix(y_true, y_pred)
final_result['tot_cm'] = tot_cm
final_result['tot_cm_acc'] = numpy.sum(numpy.diag(tot_cm)) / numpy.sum(tot_cm)
results = dcase2016_scene_metric.results()
final_result['result'] = results
print " File-wise evaluation, over %d folds" % dataset.fold_count
fold_labels = ''
separator = ' =====================+======+======+==========+ +'
if dataset.fold_count > 1:
for fold in dataset.folds(mode=dataset_evaluation_mode):
fold_labels += " {:8s} |".format('Fold' + str(fold))
separator += "==========+"
print " {:20s} | {:4s} : {:4s} | {:8s} | |".format('Scene label', 'Nref', 'Nsys', 'Accuracy') + fold_labels
print separator
for label_id, label in enumerate(sorted(results['class_wise_accuracy'])):
fold_values = ''
if dataset.fold_count > 1:
for fold in dataset.folds(mode=dataset_evaluation_mode):
fold_values += " {:5.1f} % |".format(results_fold[fold - 1]['class_wise_accuracy'][label] * 100)
print " {:20s} | {:4d} : {:4d} | {:5.1f} % | |".format(label,
results['class_wise_data'][label]['Nref'],
results['class_wise_data'][label]['Nsys'],
results['class_wise_accuracy'][
label] * 100) + fold_values
print separator
fold_values = ''
if dataset.fold_count > 1:
for fold in dataset.folds(mode=dataset_evaluation_mode):
fold_values += " {:5.1f} % |".format(results_fold[fold - 1]['overall_accuracy'] * 100)
print " {:20s} | {:4d} : {:4d} | {:5.1f} % | |".format('Overall accuracy',
results['Nref'],
results['Nsys'],
results['overall_accuracy'] * 100) + fold_values
if __name__ == "__main__":
try:
sys.exit(main(sys.argv))
except (ValueError, IOError) as e:
sys.exit(e)
| mit |
nickgentoo/scikit-learn-graph | scripts/Online_PassiveAggressive_countmeansketch_divide_et_impera.py | 1 | 10601 | # -*- coding: utf-8 -*-
"""
python -m scripts/Online_PassiveAggressive_countmeansketch_divide_et_impera dataset r l filename kernel C m d
Created on Fri Mar 13 13:02:41 2015
Copyright 2015 Nicolo' Navarin
This file is part of scikit-learn-graph.
scikit-learn-graph is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
scikit-learn-graph is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with scikit-learn-graph. If not, see <http://www.gnu.org/licenses/>.
"""
from copy import copy
import os,sys,inspect
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0,parentdir)
import sys
from skgraph.feature_extraction.graph.ODDSTVectorizer import ODDSTVectorizer
from skgraph.feature_extraction.graph.WLVectorizer import WLVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier as PAC
from skgraph.datasets import load_graph_datasets
import numpy as np
from scipy.sparse import csc_matrix
from sklearn.utils import compute_class_weight
from scipy.sparse import csr_matrix
from countminsketch_divide_et_impera import CountMinSketch
from itertools import izip
import time
if __name__=='__main__':
start_time = time.time()
    if len(sys.argv)<9:
        sys.exit("Usage: python Online_PassiveAggressive_countmeansketch_divide_et_impera.py dataset r l filename kernel C m d")
dataset=sys.argv[1]
max_radius=int(sys.argv[2])
la=float(sys.argv[3])
#hashs=int(sys.argv[3])
njobs=1
name=str(sys.argv[4])
kernel=sys.argv[5]
C=float(sys.argv[6])
m=int(sys.argv[7])
d=int(sys.argv[8])
#lr=float(sys.argv[7])
#FIXED PARAMETERS
normalization=False
#working with Chemical
g_it=load_graph_datasets.dispatch(dataset)
f=open(name,'w')
#At this point, one_hot_encoding contains the encoding for each symbol in the alphabet
if kernel=="WL":
print "Lambda ignored"
print "Using WL fast subtree kernel"
Vectorizer=WLVectorizer(r=max_radius,normalization=normalization)
elif kernel=="ODDST":
print "Using ST kernel"
Vectorizer=ODDSTVectorizer(r=max_radius,l=la,normalization=normalization)
elif kernel=="NSPDK":
print "Using NSPDK kernel, lambda parameter interpreted as d"
Vectorizer=NSPDKVectorizer(r=max_radius,d=int(la),normalization=normalization)
else:
print "Unrecognized kernel"
#TODO the C parameter should probably be optimized
#print zip(_letters, _one_hot)
#exit()
features=Vectorizer.transform(g_it.graphs) #Parallel ,njobs
print "examples, features", features.shape
features_time=time.time()
print("Computed features in %s seconds ---" % (features_time - start_time))
errors=0
tp=0
fp=0
tn=0
fn=0
predictions=[0]*50
correct=[0]*50
#print ESN
#netDataSet=[]
#netTargetSet=[]
#netKeyList=[]
BERtotal=[]
bintargets=[1,-1]
#print features
#print list_for_deep.keys()
tp = 0
fp = 0
fn = 0
tn = 0
part_plus=0
part_minus=0
WCMS=CountMinSketch(m,d)
cms_creation=0.0
for i in xrange(features.shape[0]):
time1=time.time()
exCMS=CountMinSketch(m,d)
ex=features[i][0]
target=g_it.target[i]
#W=csr_matrix(ex)
rows,cols = ex.nonzero()
#dot=0.0
module=0.0
for row,col in izip(rows,cols):
#((row,col), ex[row,col])
value=ex[row,col]
module+=value**2
#print col, ex[row,col]
#dot+=WCMS[col]*ex[row,col]
exCMS.add(col,value)
#print dot
#TODO aggiungere bias
time2=time.time()
cms_creation+=time2 - time1
dot=WCMS.dot(exCMS)
#print "dot:", dot, "dotCMS:",dot1
if (np.sign(dot) != target ):
#print "error on example",i, "predicted:", dot, "correct:", target
errors+=1
if target==1:
fn+=1
else:
fp+=1
else:
#print "correct classification", target
if target==1:
tp+=1
else:
tn+=1
if(target==1):
coef=(part_minus+1.0)/(part_plus+part_minus+1.0)
part_plus+=1
else:
coef=(part_plus+1.0)/(part_plus+part_minus+1.0)
part_minus+=1
        tao = min(C, max(0.0, ((1.0 - target*dot) * coef) / module))
if (tao > 0.0):
exCMS*=(tao*target)
WCMS+=(exCMS)
# for row,col in zip(rows,cols):
# ((row,col), ex[row,col])
# #print col, ex[row,col]
# WCMS.add(col,target*tao*ex[row,col])
#print "Correct prediction example",i, "pred", score, "target",target
if i%50==0 and i!=0:
#output performance statistics every 50 examples
if (tn+fp) > 0:
pos_part= float(fp) / (tn+fp)
else:
pos_part=0
if (tp+fn) > 0:
neg_part=float(fn) / (tp+fn)
else:
neg_part=0
BER = 0.5 * ( pos_part + neg_part)
print "1-BER Window esempio ",i, (1.0 - BER)
f.write("1-BER Window esempio "+str(i)+" "+str(1.0 - BER)+"\n")
#print>>f,"1-BER Window esempio "+str(i)+" "+str(1.0 - BER)
BERtotal.append(1.0 - BER)
tp = 0
fp = 0
fn = 0
tn = 0
part_plus=0
part_minus=0
end_time=time.time()
print("Learning phase time %s seconds ---" % (end_time - features_time )) #- cms_creation
print("Total time %s seconds ---" % (end_time - start_time))
print "BER AVG", str(np.average(BERtotal)),"std", np.std(BERtotal)
f.write("BER AVG "+ str(np.average(BERtotal))+" std "+str(np.std(BERtotal))+"\n")
f.close()
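    # Note added for clarity (not original code): the loop above performs a
    # Passive-Aggressive-I style update on a count-min sketch instead of a
    # dense weight vector. The step size is
    #   tao = min(C, max(0.0, (1.0 - target*dot) * coef / module))
    # where module is the squared norm of the example and coef re-weights the
    # loss by the running class imbalance (part_plus / part_minus); the sketch
    # of the example, scaled by tao*target, is then added into WCMS.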
#print "N_features", ex.shape
#generate explicit W from CountMeanSketch
#print W
#raw_input("W (output)")
#==============================================================================
#
# tao = /*(double)labels->get_label(idx_a) **/ min (C, max (0.0,(1.0 - (((double)labels->get_label(idx_a))*(classe_mod) )) * c_plus ) / modulo_test);
#
# #W=W_old #dump line
#
#
# #set the weights of PA to the predicted values
# PassiveAggressive.coef_=W
# pred=PassiveAggressive.predict(ex)
#
# score=PassiveAggressive.decision_function(ex)
#
# bintargets.append(target)
# if pred!=target:
# errors+=1
# print "Error",errors," on example",i, "pred", score, "target",target
# if target==1:
# fn+=1
# else:
# fp+=1
#
# else:
# if target==1:
# tp+=1
# else:
# tn+=1
# #print "Correct prediction example",i, "pred", score, "target",target
#
# else:
# #first example is always an error!
# pred=0
# score=0
# errors+=1
# print "Error",errors," on example",i
# if g_it.target[i]==1:
# fn+=1
# else:
# fp+=1
# #print i
# if i%50==0 and i!=0:
# #output performance statistics every 50 examples
# if (tn+fp) > 0:
# pos_part= float(fp) / (tn+fp)
# else:
# pos_part=0
# if (tp+fn) > 0:
# neg_part=float(fn) / (tp+fn)
# else:
# neg_part=0
# BER = 0.5 * ( pos_part + neg_part)
# print "1-BER Window esempio ",i, (1.0 - BER)
# print>>f,"1-BER Window esempio "+str(i)+" "+str(1.0 - BER)
# BERtotal.append(1.0 - BER)
# tp = 0
# fp = 0
# fn = 0
# tn = 0
# bintargets=[1,-1]
# #print features[0][i]
# #print features[0][i].shape
# #f=features[0][i,:]
# #print f.shape
# #print f.shape
# #print g_it.target[i]
# #third parameter is compulsory just for the first call
# print "prediction", pred, score
# #print "intecept",PassiveAggressive.intercept_
# #raw_input()
# if abs(score)<1.0 or pred!=g_it.target[i]:
#
# ClassWeight=compute_class_weight('auto',np.asarray([1,-1]),bintargets)
# #print "class weights", {1:ClassWeight[0],-1:ClassWeight[1]}
# PassiveAggressive.class_weight={1:ClassWeight[0],-1:ClassWeight[1]}
#
# PassiveAggressive.partial_fit(ex,np.array([g_it.target[i]]),np.unique(g_it.target))
# #PassiveAggressive.partial_fit(ex,np.array([g_it.target[i]]),np.unique(g_it.target))
# W_old=PassiveAggressive.coef_
#
#
# #ESN target---#
# netTargetSet=[]
# for key,rowDict in list_for_deep[i].iteritems():
#
#
# target=np.asarray( [np.asarray([W_old[0,key]])]*len(rowDict))
#
#
# netTargetSet.append(target)
#
#
#
#
# #------------ESN TargetSetset--------------------#
# # ESN Training
#
# #for ftDataset,ftTargetSet in zip(netDataSet,netTargetSet):
# #print "Input"
# #print netDataSet
# #raw_input("Output")
# #print netTargetSet
# #raw_input("Target")
# model.OnlineTrain(netDataSet,netTargetSet,lr)
# #raw_input("TR")
# #calcolo statistiche
#
# print "BER AVG", sum(BERtotal) / float(len(BERtotal))
# print>>f,"BER AVG "+str(sum(BERtotal) / float(len(BERtotal)))
# f.close()
#==============================================================================
| gpl-3.0 |
av8ramit/tensorflow | tensorflow/python/estimator/inputs/pandas_io.py | 9 | 4605 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Methods to allow pandas.DataFrame."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.estimator.inputs.queues import feeding_functions
from tensorflow.python.util.tf_export import tf_export
try:
# pylint: disable=g-import-not-at-top
# pylint: disable=unused-import
import pandas as pd
HAS_PANDAS = True
except IOError:
# Pandas writes a temporary file during import. If it fails, don't use pandas.
HAS_PANDAS = False
except ImportError:
HAS_PANDAS = False
@tf_export('estimator.inputs.pandas_input_fn')
def pandas_input_fn(x,
y=None,
batch_size=128,
num_epochs=1,
shuffle=None,
queue_capacity=1000,
num_threads=1,
target_column='target'):
"""Returns input function that would feed Pandas DataFrame into the model.
Note: `y`'s index must match `x`'s index.
Args:
x: pandas `DataFrame` object.
y: pandas `Series` object. `None` if absent.
batch_size: int, size of batches to return.
num_epochs: int, number of epochs to iterate over data. If not `None`,
read attempts that would exceed this value will raise `OutOfRangeError`.
shuffle: bool, whether to read the records in random order.
queue_capacity: int, size of the read queue. If `None`, it will be set
roughly to the size of `x`.
num_threads: Integer, number of threads used for reading and enqueueing. In
order to have predicted and repeatable order of reading and enqueueing,
such as in prediction and evaluation mode, `num_threads` should be 1.
target_column: str, name to give the target column `y`.
Returns:
Function, that has signature of ()->(dict of `features`, `target`)
Raises:
ValueError: if `x` already contains a column with the same name as `y`, or
if the indexes of `x` and `y` don't match.
TypeError: `shuffle` is not bool.
"""
if not HAS_PANDAS:
raise TypeError(
'pandas_input_fn should not be called without pandas installed')
if not isinstance(shuffle, bool):
raise TypeError('shuffle must be explicitly set as boolean; '
'got {}'.format(shuffle))
x = x.copy()
if y is not None:
if target_column in x:
raise ValueError(
'Cannot use name %s for target column: DataFrame already has a '
'column with that name: %s' % (target_column, x.columns))
if not np.array_equal(x.index, y.index):
raise ValueError('Index for x and y are mismatched.\nIndex for x: %s\n'
'Index for y: %s\n' % (x.index, y.index))
x[target_column] = y
# TODO(mdan): These are memory copies. We probably don't need 4x slack space.
# The sizes below are consistent with what I've seen elsewhere.
if queue_capacity is None:
if shuffle:
queue_capacity = 4 * len(x)
else:
queue_capacity = len(x)
min_after_dequeue = max(queue_capacity / 4, 1)
def input_fn():
"""Pandas input function."""
queue = feeding_functions._enqueue_data( # pylint: disable=protected-access
x,
queue_capacity,
shuffle=shuffle,
min_after_dequeue=min_after_dequeue,
num_threads=num_threads,
enqueue_size=batch_size,
num_epochs=num_epochs)
if num_epochs is None:
features = queue.dequeue_many(batch_size)
else:
features = queue.dequeue_up_to(batch_size)
assert len(features) == len(x.columns) + 1, ('Features should have one '
'extra element for the index.')
features = features[1:]
features = dict(zip(list(x.columns), features))
if y is not None:
target = features.pop(target_column)
return features, target
return features
return input_fn
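def _pandas_input_fn_example():
  # Illustrative sketch only; not part of the original module. The column
  # name and values below are arbitrary placeholders, and pandas must be
  # installed for this to run.
  features = pd.DataFrame({'x': [1.0, 2.0, 3.0, 4.0]})
  labels = pd.Series([0, 1, 0, 1])
  # One epoch, small batches, deterministic order; the returned function can
  # be passed to an Estimator's train/evaluate/predict methods.
  return pandas_input_fn(
      x=features, y=labels, batch_size=2, num_epochs=1, shuffle=False)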
| apache-2.0 |
pydoit/doit-graphx | setup.py | 1 | 2002 | #! /usr/bin/env python
import io
import os
from setuptools import setup
mydir = os.path.dirname(__file__)
def read_project_version():
# Version-trick to have version-info in a single place.
# http://stackoverflow.com/questions/2058802/how-can-i-get-the-version-defined-in-setup-py-setuptools-in-my-package
fglobals = {}
with io.open(os.path.join(mydir, '_version.py')) as fd:
exec(fd.read(), fglobals) # To read __version__
return fglobals['__version__']
setup(name='doit-graphx',
description="doit command plugin to generate task dependency-graphs using networkx",
version=read_project_version(),
license='MIT',
author='Kostis Anagnostopoulos',
author_email='[email protected]',
url='https://github.com/pydoit/doit-graphx',
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Console',
'License :: OSI Approved :: MIT License',
'Natural Language :: English',
'Operating System :: OS Independent',
'Operating System :: POSIX',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Intended Audience :: Developers',
'Intended Audience :: Information Technology',
'Intended Audience :: Science/Research',
'Intended Audience :: System Administrators',
'Topic :: Software Development :: Build Tools',
'Topic :: Software Development :: Testing',
'Topic :: Software Development :: Quality Assurance',
'Topic :: Scientific/Engineering',
],
py_modules=['cmd_graphx', '_version'],
      # TODO: Factor out matplotlib into an extras_require.
install_requires=['networkx', 'matplotlib'],
# doit>=0.28.0] # doit 0.28 unreleased
long_description="",
)
| mit |
Sklearn-HMM/scikit-learn-HMM | sklean-hmm/cluster/tests/test_hierarchical.py | 3 | 6966 | """
Several basic tests for hierarchical clustering procedures
"""
# Authors: Vincent Michel, 2010, Gael Varoquaux 2012
# License: BSD 3 clause
from tempfile import mkdtemp
import numpy as np
from scipy import sparse
from scipy.cluster import hierarchy
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.cluster import Ward, WardAgglomeration, ward_tree
from sklearn.cluster.hierarchical import _hc_cut
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.utils.testing import assert_warns
def test_structured_ward_tree():
"""
Check that we obtain the correct solution for structured ward tree.
"""
rnd = np.random.RandomState(0)
mask = np.ones([10, 10], dtype=np.bool)
# Avoiding a mask with only 'True' entries
mask[4:7, 4:7] = 0
X = rnd.randn(50, 100)
connectivity = grid_to_graph(*mask.shape)
children, n_components, n_leaves, parent = ward_tree(X.T, connectivity)
n_nodes = 2 * X.shape[1] - 1
assert_true(len(children) + n_leaves == n_nodes)
# Check that ward_tree raises a ValueError with a connectivity matrix
# of the wrong shape
assert_raises(ValueError, ward_tree, X.T, np.ones((4, 4)))
def test_unstructured_ward_tree():
"""
Check that we obtain the correct solution for unstructured ward tree.
"""
rnd = np.random.RandomState(0)
X = rnd.randn(50, 100)
for this_X in (X, X[0]):
# With specified a number of clusters just for the sake of
# raising a warning and testing the warning code
children, n_nodes, n_leaves, parent = assert_warns(UserWarning,
ward_tree,
this_X.T,
n_clusters=10)
n_nodes = 2 * X.shape[1] - 1
assert_equal(len(children) + n_leaves, n_nodes)
def test_height_ward_tree():
"""
Check that the height of ward tree is sorted.
"""
rnd = np.random.RandomState(0)
mask = np.ones([10, 10], dtype=np.bool)
X = rnd.randn(50, 100)
connectivity = grid_to_graph(*mask.shape)
children, n_nodes, n_leaves, parent = ward_tree(X.T, connectivity)
n_nodes = 2 * X.shape[1] - 1
assert_true(len(children) + n_leaves == n_nodes)
def test_ward_clustering():
"""
Check that we obtain the correct number of clusters with Ward clustering.
"""
rnd = np.random.RandomState(0)
mask = np.ones([10, 10], dtype=np.bool)
X = rnd.randn(100, 50)
connectivity = grid_to_graph(*mask.shape)
clustering = Ward(n_clusters=10, connectivity=connectivity)
clustering.fit(X)
# test caching
clustering = Ward(n_clusters=10, connectivity=connectivity,
memory=mkdtemp())
clustering.fit(X)
labels = clustering.labels_
assert_true(np.size(np.unique(labels)) == 10)
# Turn caching off now
clustering = Ward(n_clusters=10, connectivity=connectivity)
# Check that we obtain the same solution with early-stopping of the
# tree building
clustering.compute_full_tree = False
clustering.fit(X)
np.testing.assert_array_equal(clustering.labels_, labels)
clustering.connectivity = None
clustering.fit(X)
assert_true(np.size(np.unique(clustering.labels_)) == 10)
# Check that we raise a TypeError on dense matrices
clustering = Ward(n_clusters=10,
connectivity=connectivity.todense())
assert_raises(TypeError, clustering.fit, X)
clustering = Ward(n_clusters=10,
connectivity=sparse.lil_matrix(
connectivity.todense()[:10, :10]))
assert_raises(ValueError, clustering.fit, X)
def test_ward_agglomeration():
"""
Check that we obtain the correct solution in a simplistic case
"""
rnd = np.random.RandomState(0)
mask = np.ones([10, 10], dtype=np.bool)
X = rnd.randn(50, 100)
connectivity = grid_to_graph(*mask.shape)
ward = WardAgglomeration(n_clusters=5, connectivity=connectivity)
ward.fit(X)
assert_true(np.size(np.unique(ward.labels_)) == 5)
Xred = ward.transform(X)
assert_true(Xred.shape[1] == 5)
Xfull = ward.inverse_transform(Xred)
assert_true(np.unique(Xfull[0]).size == 5)
assert_array_almost_equal(ward.transform(Xfull), Xred)
def assess_same_labelling(cut1, cut2):
"""Util for comparison with scipy"""
co_clust = []
for cut in [cut1, cut2]:
n = len(cut)
k = cut.max() + 1
ecut = np.zeros((n, k))
ecut[np.arange(n), cut] = 1
co_clust.append(np.dot(ecut, ecut.T))
assert_true((co_clust[0] == co_clust[1]).all())
def test_scikit_vs_scipy():
"""Test scikit ward with full connectivity (i.e. unstructured) vs scipy
"""
from scipy.sparse import lil_matrix
n, p, k = 10, 5, 3
rnd = np.random.RandomState(0)
connectivity = lil_matrix(np.ones((n, n)))
for i in range(5):
X = .1 * rnd.normal(size=(n, p))
X -= 4 * np.arange(n)[:, np.newaxis]
X -= X.mean(axis=1)[:, np.newaxis]
out = hierarchy.ward(X)
children_ = out[:, :2].astype(np.int)
children, _, n_leaves, _ = ward_tree(X, connectivity)
cut = _hc_cut(k, children, n_leaves)
cut_ = _hc_cut(k, children_, n_leaves)
assess_same_labelling(cut, cut_)
# Test error management in _hc_cut
assert_raises(ValueError, _hc_cut, n_leaves + 1, children, n_leaves)
def test_connectivity_propagation():
"""
Check that connectivity in the ward tree is propagated correctly during
merging.
"""
from sklearn.neighbors import NearestNeighbors
X = np.array([(.014, .120), (.014, .099), (.014, .097),
(.017, .153), (.017, .153), (.018, .153),
(.018, .153), (.018, .153), (.018, .153),
(.018, .153), (.018, .153), (.018, .153),
(.018, .152), (.018, .149), (.018, .144),
])
nn = NearestNeighbors(n_neighbors=10).fit(X)
connectivity = nn.kneighbors_graph(X)
ward = Ward(n_clusters=4, connectivity=connectivity)
# If changes are not propagated correctly, fit crashes with an
# IndexError
ward.fit(X)
def test_connectivity_fixing_non_lil():
"""
Check non regression of a bug if a non item assignable connectivity is
provided with more than one component.
"""
# create dummy data
x = np.array([[0, 0], [1, 1]])
# create a mask with several components to force connectivity fixing
m = np.array([[True, False], [False, True]])
c = grid_to_graph(n_x=2, n_y=2, mask=m)
w = Ward(connectivity=c)
assert_warns(UserWarning, w.fit, x)
if __name__ == '__main__':
import nose
nose.run(argv=['', __file__])
| bsd-3-clause |
toobaz/pandas | pandas/tests/indexes/interval/test_interval_tree.py | 2 | 6924 | from itertools import permutations
import numpy as np
import pytest
from pandas._libs.interval import IntervalTree
from pandas import compat
import pandas.util.testing as tm
def skipif_32bit(param):
"""
Skip parameters in a parametrize on 32bit systems. Specifically used
here to skip leaf_size parameters related to GH 23440.
"""
marks = pytest.mark.skipif(
compat.is_platform_32bit(), reason="GH 23440: int type mismatch on 32bit"
)
return pytest.param(param, marks=marks)
@pytest.fixture(
scope="class", params=["int32", "int64", "float32", "float64", "uint64"]
)
def dtype(request):
return request.param
@pytest.fixture(params=[skipif_32bit(1), skipif_32bit(2), 10])
def leaf_size(request):
"""
Fixture to specify IntervalTree leaf_size parameter; to be used with the
tree fixture.
"""
return request.param
@pytest.fixture(
params=[
np.arange(5, dtype="int64"),
np.arange(5, dtype="int32"),
np.arange(5, dtype="uint64"),
np.arange(5, dtype="float64"),
np.arange(5, dtype="float32"),
np.array([0, 1, 2, 3, 4, np.nan], dtype="float64"),
np.array([0, 1, 2, 3, 4, np.nan], dtype="float32"),
]
)
def tree(request, leaf_size):
left = request.param
return IntervalTree(left, left + 2, leaf_size=leaf_size)
class TestIntervalTree:
def test_get_loc(self, tree):
result = tree.get_loc(1)
expected = np.array([0], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
result = np.sort(tree.get_loc(2))
expected = np.array([0, 1], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
with pytest.raises(KeyError, match="-1"):
tree.get_loc(-1)
def test_get_indexer(self, tree):
result = tree.get_indexer(np.array([1.0, 5.5, 6.5]))
expected = np.array([0, 4, -1], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
with pytest.raises(
KeyError, match="'indexer does not intersect a unique set of intervals'"
):
tree.get_indexer(np.array([3.0]))
def test_get_indexer_non_unique(self, tree):
indexer, missing = tree.get_indexer_non_unique(np.array([1.0, 2.0, 6.5]))
result = indexer[:1]
expected = np.array([0], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
result = np.sort(indexer[1:3])
expected = np.array([0, 1], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
result = np.sort(indexer[3:])
expected = np.array([-1], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
result = missing
expected = np.array([2], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
def test_duplicates(self, dtype):
left = np.array([0, 0, 0], dtype=dtype)
tree = IntervalTree(left, left + 1)
result = np.sort(tree.get_loc(0.5))
expected = np.array([0, 1, 2], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
with pytest.raises(
KeyError, match="'indexer does not intersect a unique set of intervals'"
):
tree.get_indexer(np.array([0.5]))
indexer, missing = tree.get_indexer_non_unique(np.array([0.5]))
result = np.sort(indexer)
expected = np.array([0, 1, 2], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
result = missing
expected = np.array([], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
def test_get_loc_closed(self, closed):
tree = IntervalTree([0], [1], closed=closed)
for p, errors in [(0, tree.open_left), (1, tree.open_right)]:
if errors:
with pytest.raises(KeyError, match=str(p)):
tree.get_loc(p)
else:
result = tree.get_loc(p)
expected = np.array([0], dtype="intp")
tm.assert_numpy_array_equal(result, expected)
@pytest.mark.parametrize(
"leaf_size", [skipif_32bit(1), skipif_32bit(10), skipif_32bit(100), 10000]
)
def test_get_indexer_closed(self, closed, leaf_size):
x = np.arange(1000, dtype="float64")
found = x.astype("intp")
not_found = (-1 * np.ones(1000)).astype("intp")
tree = IntervalTree(x, x + 0.5, closed=closed, leaf_size=leaf_size)
tm.assert_numpy_array_equal(found, tree.get_indexer(x + 0.25))
expected = found if tree.closed_left else not_found
tm.assert_numpy_array_equal(expected, tree.get_indexer(x + 0.0))
expected = found if tree.closed_right else not_found
tm.assert_numpy_array_equal(expected, tree.get_indexer(x + 0.5))
@pytest.mark.parametrize(
"left, right, expected",
[
(np.array([0, 1, 4]), np.array([2, 3, 5]), True),
(np.array([0, 1, 2]), np.array([5, 4, 3]), True),
(np.array([0, 1, np.nan]), np.array([5, 4, np.nan]), True),
(np.array([0, 2, 4]), np.array([1, 3, 5]), False),
(np.array([0, 2, np.nan]), np.array([1, 3, np.nan]), False),
],
)
@pytest.mark.parametrize("order", map(list, permutations(range(3))))
def test_is_overlapping(self, closed, order, left, right, expected):
# GH 23309
tree = IntervalTree(left[order], right[order], closed=closed)
result = tree.is_overlapping
assert result is expected
@pytest.mark.parametrize("order", map(list, permutations(range(3))))
def test_is_overlapping_endpoints(self, closed, order):
"""shared endpoints are marked as overlapping"""
# GH 23309
left, right = np.arange(3), np.arange(1, 4)
tree = IntervalTree(left[order], right[order], closed=closed)
result = tree.is_overlapping
expected = closed == "both"
assert result is expected
@pytest.mark.parametrize(
"left, right",
[
(np.array([], dtype="int64"), np.array([], dtype="int64")),
(np.array([0], dtype="int64"), np.array([1], dtype="int64")),
(np.array([np.nan]), np.array([np.nan])),
(np.array([np.nan] * 3), np.array([np.nan] * 3)),
],
)
def test_is_overlapping_trivial(self, closed, left, right):
# GH 23309
tree = IntervalTree(left, right, closed=closed)
assert tree.is_overlapping is False
@pytest.mark.skipif(compat.is_platform_32bit(), reason="GH 23440")
def test_construction_overflow(self):
# GH 25485
left, right = np.arange(101), [np.iinfo(np.int64).max] * 101
tree = IntervalTree(left, right)
# pivot should be average of left/right medians
result = tree.root.pivot
expected = (50 + np.iinfo(np.int64).max) / 2
assert result == expected
| bsd-3-clause |
codrut3/tensorflow | tensorflow/contrib/labeled_tensor/python/ops/ops.py | 77 | 46403 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Non-core ops for LabeledTensor."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import types
import numpy as np
from six import string_types
from tensorflow.contrib.labeled_tensor.python.ops import _typecheck as tc
from tensorflow.contrib.labeled_tensor.python.ops import core
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import functional_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import numerics
from tensorflow.python.ops import random_ops
from tensorflow.python.training import input # pylint: disable=redefined-builtin
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensor, ops.Tensor, core.Axis,
tc.Optional(string_types))
def _gather_1d_on_axis(labeled_tensor, indexer, axis, name=None):
with ops.name_scope(name, 'lt_take', [labeled_tensor]) as scope:
temp_axes = core.Axes([axis] + list(
labeled_tensor.axes.remove(axis.name).values()))
transposed = core.transpose(labeled_tensor, temp_axes.keys())
indexed = core.LabeledTensor(
array_ops.gather(transposed.tensor, indexer), temp_axes)
return core.transpose(indexed, labeled_tensor.axes.keys(), name=scope)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(string_types,
tc.Union(slice, collections.Hashable, list)),
tc.Optional(string_types))
def select(labeled_tensor, selection, name=None):
"""Slice out a subset of the tensor.
Args:
labeled_tensor: The input tensor.
selection: A dictionary mapping an axis name to a scalar, slice or list of
values to select. Currently supports two types of selections:
(a) Any number of scalar and/or slice selections.
(b) Exactly one list selection, without any scalars or slices.
name: Optional op name.
Returns:
The selection as a `LabeledTensor`.
Raises:
ValueError: If the tensor doesn't have an axis in the selection or if
that axis lacks labels.
KeyError: If any labels in a selection are not found in the original axis.
NotImplementedError: If you attempt to combine a list selection with
scalar selection or another list selection.
"""
with ops.name_scope(name, 'lt_select', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
slices = {}
indexers = {}
for axis_name, value in selection.items():
if axis_name not in labeled_tensor.axes:
raise ValueError(
'The tensor does not have an axis named %s. Its axes are: %r' %
(axis_name, labeled_tensor.axes.keys()))
axis = labeled_tensor.axes[axis_name]
if axis.labels is None:
raise ValueError(
'The axis named %s does not have labels. The axis is: %r' %
(axis_name, axis))
if isinstance(value, slice):
# TODO(shoyer): consider deprecating using slices in favor of lists
if value.start is None:
start = None
else:
start = axis.index(value.start)
if value.stop is None:
stop = None
else:
# For now, follow the pandas convention of making labeled slices
# inclusive of both bounds.
stop = axis.index(value.stop) + 1
if value.step is not None:
raise NotImplementedError('slicing with a step is not yet supported')
slices[axis_name] = slice(start, stop)
# Needs to be after checking for slices, since slice objects claim to be
# instances of collections.Hashable but hash() on them fails.
elif isinstance(value, collections.Hashable):
slices[axis_name] = axis.index(value)
elif isinstance(value, list):
if indexers:
raise NotImplementedError(
'select does not yet support more than one list selection at '
'the same time')
indexer = [axis.index(v) for v in value]
indexers[axis_name] = ops.convert_to_tensor(indexer, dtype=dtypes.int64)
else:
# If type checking is working properly, this shouldn't be possible.
raise TypeError('cannot handle arbitrary types')
if indexers and slices:
raise NotImplementedError(
'select does not yet support combined scalar and list selection')
# For now, handle array selection separately, because tf.gather_nd does
# not support gradients yet. Later, using gather_nd will let us combine
# these paths.
if indexers:
(axis_name, indexer), = indexers.items()
axis = core.Axis(axis_name, selection[axis_name])
return _gather_1d_on_axis(labeled_tensor, indexer, axis, name=scope)
else:
return core.slice_function(labeled_tensor, slices, name=scope)
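def _select_example():
  # Illustrative sketch only; not part of the original module. Axis and label
  # names are arbitrary placeholders.
  labeled_tensor = core.LabeledTensor(
      array_ops.reshape(math_ops.range(6), (2, 3)),
      [('row', ['r0', 'r1']), ('col', ['a', 'b', 'c'])])
  # A scalar selection picks a single label; a slice selection is inclusive
  # of both bounds, following the pandas convention noted above.
  single = select(labeled_tensor, {'col': 'b'})
  labeled_range = select(labeled_tensor, {'col': slice('a', 'b')})
  return single, labeled_range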
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Collection(core.LabeledTensorLike), string_types,
tc.Optional(string_types))
def concat(labeled_tensors, axis_name, name=None):
"""Concatenate tensors along a dimension.
See tf.concat.
Args:
labeled_tensors: A list of input LabeledTensors.
axis_name: The name of the axis along which to concatenate.
name: Optional op name.
Returns:
The concatenated tensor.
The coordinate labels for the concatenation dimension are also concatenated,
if they are available for every tensor.
Raises:
ValueError: If fewer than one tensor inputs is provided, if the tensors
have incompatible axes, or if `axis_name` isn't the name of an axis.
"""
with ops.name_scope(name, 'lt_concat', labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
if len(labeled_tensors) < 1:
raise ValueError('concat expects at least 1 tensor, but received %s' %
labeled_tensors)
# All tensors must have these axes.
axes_0 = labeled_tensors[0].axes
axis_names = list(axes_0.keys())
if axis_name not in axis_names:
raise ValueError('%s not in %s' % (axis_name, axis_names))
shared_axes = axes_0.remove(axis_name)
tensors = [labeled_tensors[0].tensor]
concat_axis_list = [axes_0[axis_name]]
for labeled_tensor in labeled_tensors[1:]:
current_shared_axes = labeled_tensor.axes.remove(axis_name)
if current_shared_axes != shared_axes:
# TODO(shoyer): add more specific checks about what went wrong,
# including raising AxisOrderError when appropriate
raise ValueError('Mismatched shared axes: the first tensor '
'had axes %r but this tensor has axes %r.' %
(shared_axes, current_shared_axes))
# Accumulate the axis labels, if they're available.
concat_axis_list.append(labeled_tensor.axes[axis_name])
tensors.append(labeled_tensor.tensor)
concat_axis = core.concat_axes(concat_axis_list)
concat_dimension = axis_names.index(axis_name)
concat_tensor = array_ops.concat(tensors, concat_dimension, name=scope)
values = list(axes_0.values())
concat_axes = (values[:concat_dimension] + [concat_axis] +
values[concat_dimension + 1:])
return core.LabeledTensor(concat_tensor, concat_axes)
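def _concat_example():
  # Illustrative sketch only; not part of the original module. The axis names
  # and labels are arbitrary placeholders.
  first = core.LabeledTensor(
      array_ops.zeros((2, 3)),
      [('time', [0, 1]), ('channel', ['a', 'b', 'c'])])
  second = core.LabeledTensor(
      array_ops.ones((1, 3)),
      [('time', [2]), ('channel', ['a', 'b', 'c'])])
  # The shared 'channel' axis must match exactly; the 'time' labels are
  # concatenated to [0, 1, 2] in the result.
  return concat([first, second], 'time')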
# TODO(shoyer): rename pack/unpack to stack/unstack
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Collection(core.LabeledTensorLike),
tc.Union(string_types, core.AxisLike), int, tc.Optional(string_types))
def pack(labeled_tensors, new_axis, axis_position=0, name=None):
"""Pack tensors along a new axis.
See tf.pack.
Args:
labeled_tensors: The input tensors, which must have identical axes.
new_axis: The name of the new axis, or a tuple containing the name
and coordinate labels.
axis_position: Optional integer position at which to insert the new axis.
name: Optional op name.
Returns:
The packed tensors as a single LabeledTensor, with `new_axis` in the given
`axis_position`.
Raises:
ValueError: If fewer than one input tensors is provided, or if the tensors
don't have identical axes.
"""
with ops.name_scope(name, 'lt_pack', labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
if len(labeled_tensors) < 1:
raise ValueError('pack expects at least 1 tensors, but received %s' %
labeled_tensors)
axes_0 = labeled_tensors[0].axes
for t in labeled_tensors:
if t.axes != axes_0:
raise ValueError('Non-identical axes. Expected %s but got %s' %
(axes_0, t.axes))
pack_op = array_ops.stack(
[t.tensor for t in labeled_tensors], axis=axis_position, name=scope)
axes = list(axes_0.values())
axes.insert(axis_position, new_axis)
return core.LabeledTensor(pack_op, axes)
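# Illustrative usage sketch (hypothetical names, assumes the module-level `core`
# and `array_ops` imports): `pack` stacks tensors with identical axes along a
# brand-new axis inserted at `axis_position`.
def _example_pack():
  a = core.LabeledTensor(array_ops.zeros((2, 3)), [('x', 2), ('y', 3)])
  b = core.LabeledTensor(array_ops.ones((2, 3)), [('x', 2), ('y', 3)])
  # Result axes: batch (2, one entry per input), x (2), y (3).
  return pack([a, b], 'batch')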
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(core.LabeledTensorLike,
tc.Optional(string_types), tc.Optional(string_types))
def unpack(labeled_tensor, axis_name=None, name=None):
"""Unpack the tensor.
  See tf.unstack.
Args:
labeled_tensor: The input tensor.
axis_name: Optional name of axis to unpack. By default, the first axis is
used.
name: Optional op name.
Returns:
The list of unpacked LabeledTensors.
Raises:
ValueError: If `axis_name` is not an axis on the input.
"""
with ops.name_scope(name, 'lt_unpack', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
axis_names = list(labeled_tensor.axes.keys())
if axis_name is None:
axis_name = axis_names[0]
if axis_name not in axis_names:
raise ValueError('%s not in %s' % (axis_name, axis_names))
axis = axis_names.index(axis_name)
unpack_ops = array_ops.unstack(labeled_tensor.tensor, axis=axis, name=scope)
axes = [a for i, a in enumerate(labeled_tensor.axes.values()) if i != axis]
return [core.LabeledTensor(t, axes) for t in unpack_ops]
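# Illustrative round-trip sketch (hypothetical names): `unpack` splits a tensor
# along one axis into LabeledTensors without that axis, and `pack` reassembles
# them.
def _example_unpack():
  stacked = core.LabeledTensor(array_ops.zeros((2, 3)), [('x', 2), ('y', 3)])
  pieces = unpack(stacked, 'x')  # two tensors, each with only axis y (3)
  return pack(pieces, 'x')       # axes x (2) and y (3) again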
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Collection(string_types),
tc.Collection(tc.Union(string_types, core.AxisLike)),
tc.Optional(string_types))
def reshape(labeled_tensor, existing_axes, new_axes, name=None):
"""Reshape specific axes of a LabeledTensor.
Non-indicated axes remain in their original locations.
Args:
labeled_tensor: The input tensor.
existing_axes: List of axis names found on the input tensor. These must
appear sequentially in the list of axis names on the input. In other
words, they must be a valid slice of `list(labeled_tensor.axes.keys())`.
new_axes: List of strings, tuples of (axis_name, axis_value) or Axis objects
providing new axes with which to replace `existing_axes` in the reshaped
result. At most one element of `new_axes` may be a string, indicating an
axis with unknown size.
name: Optional op name.
Returns:
The reshaped LabeledTensor.
Raises:
ValueError: If `existing_axes` are not all axes on the input, or if more
than one of `new_axes` has unknown size.
AxisOrderError: If `existing_axes` are not a slice of axis names on the
input.
"""
with ops.name_scope(name, 'lt_reshape', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
original_axis_names = list(labeled_tensor.axes.keys())
existing_axes = list(existing_axes)
if not set(existing_axes) <= set(original_axis_names):
raise ValueError('existing_axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(existing_axes, original_axis_names))
start = original_axis_names.index(existing_axes[0])
stop = original_axis_names.index(existing_axes[-1]) + 1
if existing_axes != original_axis_names[start:stop]:
# We could support existing_axes that aren't a slice by using transpose,
# but that could lead to unpredictable performance consequences because
# transposes are not free in TensorFlow. If we did transpose
# automatically, the user might never realize that their data is being
      # produced with the wrong order. (The latter will occur with some
      # frequency because of how broadcasting automatically chooses axis order.)
# So for now we've taken the strict approach.
raise core.AxisOrderError(
'existing_axes %r are not a slice of axis names %r on the input '
'labeled tensor. Use `transpose` or `impose_axis_order` to reorder '
'axes on the input explicitly.' %
(existing_axes, original_axis_names))
if sum(isinstance(axis, string_types) for axis in new_axes) > 1:
raise ValueError(
'at most one axis in new_axes can have unknown size. All other '
'axes must have an indicated integer size or labels: %r' % new_axes)
original_values = list(labeled_tensor.axes.values())
axis_size = lambda axis: -1 if axis.size is None else axis.size
shape = [axis_size(axis) for axis in original_values[:start]]
for axis_ref in new_axes:
if isinstance(axis_ref, string_types):
shape.append(-1)
else:
axis = core.as_axis(axis_ref)
shape.append(axis_size(axis))
shape.extend(axis_size(axis) for axis in original_values[stop:])
reshaped_tensor = array_ops.reshape(
labeled_tensor.tensor, shape, name=scope)
axes = original_values[:start] + list(new_axes) + original_values[stop:]
return core.LabeledTensor(reshaped_tensor, axes)
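# Illustrative usage sketch (hypothetical names): `reshape` replaces a
# contiguous run of axes. Here two adjacent axes are merged into one.
def _example_reshape():
  image = core.LabeledTensor(
      array_ops.zeros((4, 5, 3)), [('row', 4), ('col', 5), ('channel', 3)])
  # 'row' and 'col' are a valid slice of the axis names, so they can be merged
  # into a single 'pixel' axis of size 20; 'channel' keeps its position.
  return reshape(image, ['row', 'col'], [('pixel', 20)])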
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, string_types, string_types,
tc.Optional(string_types))
def rename_axis(labeled_tensor, existing_name, new_name, name=None):
"""Rename an axis of LabeledTensor.
Args:
labeled_tensor: The input tensor.
existing_name: Name for an existing axis on the input.
new_name: Desired replacement name.
name: Optional op name.
Returns:
LabeledTensor with renamed axis.
Raises:
ValueError: If `existing_name` is not an axis on the input.
"""
with ops.name_scope(name, 'lt_rename_axis', [labeled_tensor]) as scope:
if existing_name not in labeled_tensor.axes:
      raise ValueError('existing_name %r is not contained in the set of axis '
'names %r on the input labeled tensor' %
(existing_name, labeled_tensor.axes.keys()))
new_axis = core.Axis(new_name, labeled_tensor.axes[existing_name].value)
return reshape(labeled_tensor, [existing_name], [new_axis], name=scope)
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(string_types, collections.Callable, int, bool,
tc.Collection(core.LabeledTensorLike), bool,
tc.Optional(string_types))
def _batch_helper(default_name,
batch_fn,
batch_size,
enqueue_many,
labeled_tensors,
allow_smaller_final_batch,
name=None):
with ops.name_scope(name, default_name, labeled_tensors) as scope:
labeled_tensors = [
core.convert_to_labeled_tensor(lt) for lt in labeled_tensors
]
batch_ops = batch_fn([t.tensor for t in labeled_tensors], scope)
# TODO(shoyer): Remove this when they sanitize the TF API.
if not isinstance(batch_ops, list):
assert isinstance(batch_ops, ops.Tensor)
batch_ops = [batch_ops]
if allow_smaller_final_batch:
batch_size = None
@tc.returns(core.Axes)
@tc.accepts(core.Axes)
def output_axes(axes):
if enqueue_many:
if 'batch' not in axes or list(axes.keys()).index('batch') != 0:
raise ValueError(
'When enqueue_many is True, input tensors must have an axis '
'called "batch" as their first dimension, '
'but axes were %s' % axes)
culled_axes = axes.remove('batch')
return core.Axes([('batch', batch_size)] + list(culled_axes.values()))
else:
return core.Axes([('batch', batch_size)] + list(axes.values()))
output_labeled_tensors = []
for i, tensor in enumerate(batch_ops):
axes = output_axes(labeled_tensors[i].axes)
output_labeled_tensors.append(core.LabeledTensor(tensor, axes))
return output_labeled_tensors
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(
tc.Collection(core.LabeledTensorLike), int, int, int, bool, bool,
tc.Optional(string_types))
def batch(labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
allow_smaller_final_batch=False,
name=None):
"""Rebatch a tensor.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch".
"""
def fn(tensors, scope):
return input.batch(
tensors,
batch_size=batch_size,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
allow_smaller_final_batch=allow_smaller_final_batch,
name=scope)
return _batch_helper('lt_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name)
@tc.returns(tc.List(core.LabeledTensor))
@tc.accepts(
tc.Collection(core.LabeledTensorLike), int, int, int, bool, int,
tc.Optional(int), bool, tc.Optional(string_types))
def shuffle_batch(labeled_tensors,
batch_size,
num_threads=1,
capacity=32,
enqueue_many=False,
min_after_dequeue=0,
seed=None,
allow_smaller_final_batch=False,
name=None):
"""Rebatch a tensor, with shuffling.
See tf.batch.
Args:
labeled_tensors: The input tensors.
batch_size: The output batch size.
num_threads: See tf.batch.
capacity: See tf.batch.
enqueue_many: If true, the input tensors must contain a 'batch' axis as
their first axis.
If false, the input tensors must not contain a 'batch' axis.
See tf.batch.
min_after_dequeue: Minimum number of elements in the queue after a dequeue,
used to ensure mixing.
seed: Optional random seed.
allow_smaller_final_batch: See tf.batch.
name: Optional op name.
Returns:
The rebatched tensors.
If enqueue_many is false, the output tensors will have a new 'batch' axis
as their first axis.
Raises:
ValueError: If enqueue_many is True and the first axis of the tensors
isn't "batch".
"""
def fn(tensors, scope):
return input.shuffle_batch(
tensors,
batch_size=batch_size,
num_threads=num_threads,
capacity=capacity,
enqueue_many=enqueue_many,
min_after_dequeue=min_after_dequeue,
seed=seed,
allow_smaller_final_batch=allow_smaller_final_batch,
name=scope)
return _batch_helper('lt_shuffle_batch', fn, batch_size, enqueue_many,
labeled_tensors, allow_smaller_final_batch, name)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(string_types, int),
tc.Optional(int), tc.Optional(string_types))
def random_crop(labeled_tensor, shape_map, seed=None, name=None):
"""Randomly crops a tensor to a given size.
See tf.random_crop.
Args:
labeled_tensor: The input tensor.
shape_map: A dictionary mapping axis names to the size of the random crop
for that dimension.
seed: An optional random seed.
name: An optional op name.
Returns:
A tensor of the same rank as `labeled_tensor`, cropped randomly in the
selected dimensions.
Raises:
ValueError: If the shape map contains an axis name not in the input tensor.
"""
with ops.name_scope(name, 'lt_random_crop', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
for axis_name in shape_map:
if axis_name not in labeled_tensor.axes:
raise ValueError('Selection axis %s not in axes %s' %
(axis_name, labeled_tensor.axes))
shape = []
axes = []
for axis in labeled_tensor.axes.values():
if axis.name in shape_map:
size = shape_map[axis.name]
shape.append(size)
# We lose labels for the axes we crop, leaving just the size.
axes.append((axis.name, size))
else:
shape.append(len(axis))
axes.append(axis)
crop_op = random_ops.random_crop(
labeled_tensor.tensor, shape, seed=seed, name=scope)
return core.LabeledTensor(crop_op, axes)
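# Illustrative usage sketch (hypothetical names): crop to a window of size 2
# along 'x' while leaving 'y' untouched.
def _example_random_crop():
  data = core.LabeledTensor(array_ops.zeros((4, 3)), [('x', 4), ('y', 3)])
  return random_crop(data, {'x': 2})  # result axes: x (2), y (3)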
# TODO(shoyer): Allow the user to select the axis over which to map.
@tc.returns(core.LabeledTensor)
@tc.accepts(collections.Callable, core.LabeledTensorLike,
tc.Optional(string_types))
def map_fn(fn, labeled_tensor, name=None):
"""Map on the list of tensors unpacked from labeled_tensor.
See tf.map_fn.
Args:
fn: The function to apply to each unpacked LabeledTensor.
It should have type LabeledTensor -> LabeledTensor.
labeled_tensor: The input tensor.
name: Optional op name.
Returns:
A tensor that packs the results of applying fn to the list of tensors
unpacked from labeled_tensor.
"""
with ops.name_scope(name, 'lt_map_fn', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
unpack_lts = unpack(labeled_tensor)
# TODO(ericmc): Fix this upstream.
if labeled_tensor.dtype == dtypes.string:
# We must construct the full graph here, because functional_ops.map_fn
# doesn't work for string-valued tensors.
# Constructing the full graph may be slow.
map_lts = [fn(t) for t in unpack_lts]
return pack(map_lts, list(labeled_tensor.axes.values())[0], name=scope)
else:
# Figure out what the axis labels should be, but use tf.map_fn to
# construct the graph because it's efficient.
# It may be slow to construct the full graph, so we infer the labels from
# the first element.
# TODO(ericmc): This builds a subgraph which then gets thrown away.
# Find a more elegant solution.
first_map_lt = fn(unpack_lts[0])
final_axes = list(labeled_tensor.axes.values())[:1] + list(
first_map_lt.axes.values())
@tc.returns(ops.Tensor)
@tc.accepts(ops.Tensor)
def tf_fn(tensor):
original_axes = list(labeled_tensor.axes.values())[1:]
tensor_lt = core.LabeledTensor(tensor, original_axes)
return fn(tensor_lt).tensor
map_op = functional_ops.map_fn(tf_fn, labeled_tensor.tensor)
map_lt = core.LabeledTensor(map_op, final_axes)
return core.identity(map_lt, name=scope)
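# Illustrative usage sketch (hypothetical names; `reduce_sum` is defined later
# in this module and is resolved when the helper is called): apply a reduction
# to each element unpacked along the first axis and repack the results.
def _example_map_fn():
  batched = core.LabeledTensor(
      array_ops.zeros((2, 3, 4)), [('batch', 2), ('x', 3), ('y', 4)])
  # Each unpacked element has axes x and y; summing over 'y' leaves x, so the
  # packed result has axes batch (2) and x (3).
  return map_fn(lambda t: reduce_sum(t, 'y'), batched)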
@tc.returns(core.LabeledTensor)
@tc.accepts(collections.Callable, core.LabeledTensorLike,
core.LabeledTensorLike, tc.Optional(string_types))
def foldl(fn, labeled_tensor, initial_value, name=None):
"""Left fold on the list of tensors unpacked from labeled_tensor.
See tf.foldl.
Args:
fn: The function to apply to each unpacked LabeledTensor.
It should have type (LabeledTensor, LabeledTensor) -> LabeledTensor.
Its arguments are (accumulated_value, next_value).
labeled_tensor: The input tensor.
initial_value: The initial value of the accumulator.
name: Optional op name.
Returns:
The accumulated value.
"""
with ops.name_scope(name, 'lt_foldl',
[labeled_tensor, initial_value]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
initial_value = core.convert_to_labeled_tensor(initial_value)
@tc.returns(ops.Tensor)
@tc.accepts(ops.Tensor, ops.Tensor)
def tf_fn(accumulator, next_element):
accumulator_lt = core.LabeledTensor(accumulator, initial_value.axes)
next_element_lt = core.LabeledTensor(
next_element, list(labeled_tensor.axes.values())[1:])
return fn(accumulator_lt, next_element_lt).tensor
foldl_op = functional_ops.foldl(
tf_fn, labeled_tensor.tensor, initializer=initial_value.tensor)
foldl_lt = core.LabeledTensor(foldl_op, initial_value.axes)
return core.identity(foldl_lt, name=scope)
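# Illustrative usage sketch (hypothetical names; assumes LabeledTensor supports
# element-wise `+` as defined in `core`): accumulate a running sum over the
# first axis, starting from a zero accumulator.
def _example_foldl():
  batched = core.LabeledTensor(array_ops.ones((5, 3)), [('step', 5), ('x', 3)])
  start = core.LabeledTensor(array_ops.zeros((3,)), [('x', 3)])
  # The result has the accumulator's axes, i.e. just x (3).
  return foldl(lambda acc, nxt: acc + nxt, batched, start)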
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(tc.Collection(string_types)), tc.Optional(string_types))
def squeeze(labeled_tensor, axis_names=None, name=None):
"""Remove size-1 dimensions.
See tf.squeeze.
Args:
labeled_tensor: The input tensor.
axis_names: The names of the dimensions to remove, or None to remove
all size-1 dimensions.
name: Optional op name.
Returns:
A tensor with the specified dimensions removed.
Raises:
ValueError: If the named axes are not in the tensor, or if they are
not size-1.
"""
with ops.name_scope(name, 'lt_squeeze', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if axis_names is None:
axis_names = [a.name for a in labeled_tensor.axes.values() if len(a) == 1]
for axis_name in axis_names:
if axis_name not in labeled_tensor.axes:
raise ValueError('axis %s is not in tensor axes %s' %
(axis_name, labeled_tensor.axes))
elif len(labeled_tensor.axes[axis_name]) != 1:
raise ValueError(
'cannot squeeze axis with size greater than 1: (%s, %s)' %
(axis_name, labeled_tensor.axes[axis_name]))
squeeze_dimensions = []
axes = []
for i, axis in enumerate(labeled_tensor.axes.values()):
if axis.name in axis_names:
squeeze_dimensions.append(i)
else:
axes.append(axis)
if squeeze_dimensions:
squeeze_op = array_ops.squeeze(
labeled_tensor.tensor, squeeze_dimensions, name=scope)
else:
squeeze_op = array_ops.identity(labeled_tensor.tensor, name=scope)
return core.LabeledTensor(squeeze_op, axes)
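# Illustrative usage sketch (hypothetical names): drop a size-1 axis by name.
# Passing axis_names=None would remove every size-1 axis instead.
def _example_squeeze():
  data = core.LabeledTensor(
      array_ops.zeros((2, 1, 3)), [('x', 2), ('channel', 1), ('y', 3)])
  return squeeze(data, ['channel'])  # result axes: x (2), y (3)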
# pylint: disable=invalid-name
ReduceAxis = tc.Union(string_types,
tc.Tuple(string_types, collections.Hashable))
ReduceAxes = tc.Optional(tc.Union(ReduceAxis, tc.Collection(ReduceAxis)))
# pylint: enable=invalid-name
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
tc.Optional(string_types))
def matmul(a, b, name=None):
"""Matrix multiply two tensors with rank 1 or 2.
If both tensors have rank 2, a matrix-matrix product is performed.
If one tensor has rank 1 and the other has rank 2, then a matrix-vector
product is performed.
If both tensors have rank 1, then a vector dot-product is performed.
(This behavior matches that of `numpy.dot`.)
Both tensors must share exactly one dimension in common, which is the
dimension the operation is summed along. The inputs will be automatically
transposed if necessary as part of the matmul op.
We intend to eventually support `matmul` on higher rank input, and also
  eventually support summing over any number of shared dimensions (via an `axis`
argument), but neither of these features has been implemented yet.
Args:
a: First LabeledTensor.
b: Second LabeledTensor.
name: Optional op name.
Returns:
LabeledTensor with the result of matrix multiplication. Axes are ordered by
    the current axis_order_scope, if set, or in order of appearance on the
inputs.
Raises:
NotImplementedError: If inputs have rank >2 or share multiple axes.
ValueError: If the inputs have rank 0 or do not share any axes.
"""
with ops.name_scope(name, 'lt_matmul', [a, b]) as scope:
a = core.convert_to_labeled_tensor(a)
b = core.convert_to_labeled_tensor(b)
if len(a.axes) > 2 or len(b.axes) > 2:
# We could pass batched inputs to tf.matmul to make this work, but we
# would also need to use tf.tile and/or tf.transpose. These are more
# expensive than doing reshapes, so it's not clear if it's a good idea to
# do this automatically.
raise NotImplementedError(
'matmul currently requires inputs with rank 2 or less, but '
'inputs have ranks %r and %r' % (len(a.axes), len(b.axes)))
if not a.axes or not b.axes:
raise ValueError(
'matmul currently requires inputs with at least rank 1, but '
'inputs have ranks %r and %r' % (len(a.axes), len(b.axes)))
shared_axes = set(a.axes) & set(b.axes)
if len(shared_axes) > 1:
raise NotImplementedError(
'matmul does not yet support summing over multiple shared axes: %r. '
'Use transpose and reshape to create a single shared axis to sum '
'over.' % shared_axes)
if not shared_axes:
      raise ValueError('there must be exactly one axis in common between '
                       'the inputs to matmul: %r, %r' %
(a.axes.keys(), b.axes.keys()))
shared_axis, = shared_axes
if a.axes[shared_axis] != b.axes[shared_axis]:
raise ValueError('axis %r does not match on input arguments: %r vs %r' %
(shared_axis, a.axes[shared_axis].value,
b.axes[shared_axis].value))
result_axes = []
for axes in [a.axes, b.axes]:
for axis in axes.values():
if axis.name != shared_axis:
result_axes.append(axis)
axis_scope_order = core.get_axis_order()
if axis_scope_order is not None:
result_axis_names = [axis.name for axis in result_axes]
new_axis_names = [
name for name in axis_scope_order if name in result_axis_names
]
if new_axis_names != result_axis_names:
# switch a and b
b, a = a, b
# result_axes is a list of length 1 or 2
result_axes = result_axes[::-1]
squeeze_dims = []
if len(a.axes) == 1:
a_tensor = array_ops.reshape(a.tensor, (1, -1))
squeeze_dims.append(0)
transpose_a = False
else:
a_tensor = a.tensor
transpose_a = list(a.axes.keys()).index(shared_axis) == 0
if len(b.axes) == 1:
b_tensor = array_ops.reshape(b.tensor, (-1, 1))
squeeze_dims.append(1)
transpose_b = False
else:
b_tensor = b.tensor
transpose_b = list(b.axes.keys()).index(shared_axis) == 1
result_op = math_ops.matmul(
a_tensor, b_tensor, transpose_a=transpose_a, transpose_b=transpose_b)
if squeeze_dims:
result_op = array_ops.squeeze(result_op, squeeze_dims)
result_op = array_ops.identity(result_op, name=scope)
return core.LabeledTensor(result_op, result_axes)
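# Illustrative usage sketch (hypothetical names): the shared axis is found by
# name, so no explicit transpose arguments are needed.
def _example_matmul():
  a = core.LabeledTensor(array_ops.ones((2, 3)), [('x', 2), ('y', 3)])
  b = core.LabeledTensor(array_ops.ones((3, 4)), [('y', 3), ('z', 4)])
  # 'y' is the single shared axis and is summed over; result axes: x (2), z (4).
  return matmul(a, b)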
@tc.returns(types.FunctionType)
@tc.accepts(string_types, collections.Callable)
def define_reduce_op(op_name, reduce_fn):
"""Define a reduction op for labeled tensors.
Args:
op_name: string name of the TensorFlow op.
reduce_fn: function to call to evaluate the op on a tf.Tensor.
Returns:
Function defining the given reduction op that acts on a LabeledTensor.
"""
default_name = 'lt_%s' % op_name
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, ReduceAxes, tc.Optional(string_types))
def op(labeled_tensor, axes=None, name=None):
"""Computes the given reduction across the given axes of a LabeledTensor.
See `tf.{op_name}` for full details.
Args:
labeled_tensor: The input tensor.
axes: A set of axes or None.
If None, all axes will be reduced.
Axes must all be strings, in which case those dimensions will be
removed, or pairs of (name, None) or (name, label), in which case those
dimensions will be kept.
name: Optional op name.
Returns:
The reduced LabeledTensor.
Raises:
ValueError: if any of the axes to reduce over are not found on
`labeled_tensor`.
"""
with ops.name_scope(name, default_name, [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if axes is None:
axes = labeled_tensor.axes.keys()
if isinstance(axes, (string_types, tuple)):
axes = [axes]
reduction_axes = {}
axes_to_squeeze = []
for a in axes:
if isinstance(a, string_types):
# We squeeze out this axis.
reduction_axes[a] = a
axes_to_squeeze.append(a)
else:
# We keep this axis, with the user-provided labels.
(axis_name, label) = a
if label is not None:
# The input was a single label, so make it a list so it can be
# turned into an Axis.
label = [label]
reduction_axes[axis_name] = (axis_name, label)
for axis_name in reduction_axes:
if axis_name not in labeled_tensor.axes:
raise ValueError('Axis %s not in axes %s' %
(axis_name, labeled_tensor.axes))
intermediate_axes = []
reduction_dimensions = []
for i, axis in enumerate(labeled_tensor.axes.values()):
if axis.name in reduction_axes:
intermediate_axes.append(reduction_axes[axis.name])
reduction_dimensions.append(i)
else:
intermediate_axes.append(axis)
reduce_op = reduce_fn(
labeled_tensor.tensor, reduction_dimensions, keep_dims=True)
reduce_lt = core.LabeledTensor(reduce_op, intermediate_axes)
return squeeze(reduce_lt, axes_to_squeeze, name=scope)
op.__doc__ = op.__doc__.format(op_name=op_name)
op.__name__ = op_name
return op
reduce_all = define_reduce_op('reduce_all', math_ops.reduce_all)
reduce_any = define_reduce_op('reduce_any', math_ops.reduce_any)
reduce_logsumexp = define_reduce_op('reduce_logsumexp',
math_ops.reduce_logsumexp)
reduce_max = define_reduce_op('reduce_max', math_ops.reduce_max)
reduce_mean = define_reduce_op('reduce_mean', math_ops.reduce_mean)
reduce_min = define_reduce_op('reduce_min', math_ops.reduce_min)
reduce_prod = define_reduce_op('reduce_prod', math_ops.reduce_prod)
reduce_sum = define_reduce_op('reduce_sum', math_ops.reduce_sum)
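# Illustrative usage sketch (hypothetical names): the generated reductions take
# axis names. A bare string removes that axis; None reduces over all axes.
def _example_reduce_sum():
  data = core.LabeledTensor(array_ops.ones((2, 3)), [('x', 2), ('y', 3)])
  return reduce_sum(data, 'y')  # result has only axis x (2)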
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(str, tc.Union(int, ops.Tensor)),
tc.Optional(string_types))
def tile(labeled_tensor, multiples, name=None):
"""Constructs a tensor by tiling a given tensor.
Only axes without tick-labels can be tiled. (Otherwise, axis labels on tiled
tensors would no longer be unique.)
  See tf.tile.
Args:
labeled_tensor: The input tensor.
multiples: A mapping where the keys are axis names and the values are the
integer number of times to tile along that axis. Only axes with a multiple
different than 1 need be included.
name: Optional op name.
Returns:
A tensor with the indicated axes tiled.
Raises:
ValueError: If the tiled axes are not axes in the input tensor, or if any
axes in multiples have tick labels.
"""
with ops.name_scope(name, 'lt_tile', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if not set(multiples.keys()) <= set(labeled_tensor.axes.keys()):
raise ValueError('tile axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(multiples.keys(), labeled_tensor.axes))
labeled_axes = [
name for name in multiples
if labeled_tensor.axes[name].labels is not None
]
if labeled_axes:
raise ValueError('cannot tile axes with tick labels: %r' % labeled_axes)
multiples_list = [multiples.get(name, 1) for name in labeled_tensor.axes]
tile_op = array_ops.tile(labeled_tensor.tensor, multiples_list, name=scope)
new_axes = [
axis.name if axis.labels is None else axis
for axis in labeled_tensor.axes.values()
]
return core.LabeledTensor(tile_op, new_axes)
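# Illustrative usage sketch (hypothetical names): only unlabeled axes may be
# tiled, so 'x' here carries just a size.
def _example_tile():
  data = core.LabeledTensor(array_ops.ones((2, 3)), [('x', 2), ('y', 3)])
  return tile(data, {'x': 3})  # result axes: x (6), y (3)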
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Mapping(str, tc.Tuple(core.AxisValue, core.AxisValue)),
string_types, tc.Optional(string_types))
def pad(labeled_tensor, paddings, mode='CONSTANT', name=None):
"""Pads a tensor.
See tf.pad.
Args:
labeled_tensor: The input tensor.
paddings: A mapping where the keys are axis names and the values are
tuples where the first element is the padding to insert at the beginning
of the axis and the second is the padding to insert at the end of the
axis.
mode: One of "CONSTANT", "REFLECT", or "SYMMETRIC".
name: Optional op name.
Returns:
A tensor with the indicated axes padded, optionally with those axes extended
with the provided labels.
Raises:
ValueError: If the padded axes are not axes in the input tensor.
"""
with ops.name_scope(name, 'lt_pad', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
if not set(paddings.keys()) <= set(labeled_tensor.axes.keys()):
raise ValueError('pad axes %r are not contained in the set of axis '
'names %r on the input labeled tensor' %
(paddings.keys(), labeled_tensor.axes))
new_axes = []
padding_pairs = []
for name, axis in labeled_tensor.axes.items():
if name in paddings:
padding_before, padding_after = paddings[name]
axis_before = core.Axis(name, padding_before)
axis_after = core.Axis(name, padding_after)
new_axes.append(core.concat_axes([axis_before, axis, axis_after]))
padding_pairs.append((len(axis_before), len(axis_after)))
else:
new_axes.append(axis)
padding_pairs.append((0, 0))
pad_op = array_ops.pad(labeled_tensor.tensor,
padding_pairs,
mode,
name=scope)
return core.LabeledTensor(pad_op, new_axes)
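# Illustrative usage sketch (hypothetical names): pad one element before and two
# after along 'x'; 'y' is left untouched.
def _example_pad():
  data = core.LabeledTensor(array_ops.ones((2, 3)), [('x', 2), ('y', 3)])
  return pad(data, {'x': (1, 2)})  # result axes: x (5), y (3)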
@tc.returns(core.LabeledTensor)
@tc.accepts(
tc.Union(np.ndarray, list, tuple, core.Scalar),
tc.Optional(dtypes.DType),
tc.Optional(
tc.Union(core.Axes, tc.Collection(
tc.Union(string_types, core.AxisLike)))), tc.Optional(string_types))
def constant(value, dtype=None, axes=None, name=None):
"""Creates a constant tensor.
If `axes` includes any strings, shape is inferred from `value`. Otherwise,
the sizes of the given `axes` are used to set `shape` for `tf.constant`.
See tf.constant for more details.
Args:
value: The input tensor.
dtype: The type of the returned tensor.
axes: Optional Axes, list of strings or list of objects coercible to Axis
objects. By default, axes are assumed to be an empty list (i.e., `value`
is treated as a scalar).
name: Optional op name.
Returns:
    The LabeledTensor with the given constant value.
"""
with ops.name_scope(name, 'lt_constant', [value]) as scope:
if axes is None:
axes = []
if isinstance(axes, core.Axes):
axes = axes.values()
if any(isinstance(ax, string_types) for ax in axes):
# need to infer shape
shape = None
else:
# axes already indicate shape
axes = [core.as_axis(a) for a in axes]
shape = [a.size for a in axes]
op = array_ops.constant(value, dtype=dtype, shape=shape, name=scope)
return core.LabeledTensor(op, axes)
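# Illustrative usage sketch (hypothetical names): with fully-sized axes the
# shape is passed through to the underlying constant op; a bare string axis
# would instead have its size inferred from `value`.
def _example_constant():
  return constant([[1, 2, 3], [4, 5, 6]], axes=[('x', 2), ('y', 3)])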
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def zeros_like(labeled_tensor, dtype=None, name=None):
"""Creates an identical tensor with all elements set to zero.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
The tensor with elements set to zero.
"""
with ops.name_scope(name, 'lt_zeros_like', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = array_ops.zeros_like(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def ones_like(labeled_tensor, dtype=None, name=None):
"""Creates an identical tensor with all elements set to one.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
The tensor with elements set to one.
"""
with ops.name_scope(name, 'lt_ones_like', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = array_ops.ones_like(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike,
tc.Optional(dtypes.DType), tc.Optional(string_types))
def cast(labeled_tensor, dtype=None, name=None):
"""Casts a labeled tensor to a new type.
Args:
labeled_tensor: The input tensor.
dtype: The type of the returned tensor.
name: Optional op name.
Returns:
A labeled tensor with the new dtype.
"""
with ops.name_scope(name, 'lt_cast', [labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = math_ops.cast(labeled_tensor.tensor, dtype=dtype, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, string_types, tc.Optional(string_types))
def verify_tensor_all_finite(labeled_tensor, message, name=None):
"""Asserts a tensor doesn't contain NaNs or Infs.
See tf.verify_tensor_all_finite.
Args:
labeled_tensor: The input tensor.
message: Message to log on failure.
name: Optional op name.
Returns:
The input tensor.
"""
with ops.name_scope(name, 'lt_verify_tensor_all_finite',
[labeled_tensor]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
op = numerics.verify_tensor_all_finite(
labeled_tensor.tensor, msg=message, name=scope)
return core.LabeledTensor(op, labeled_tensor.axes)
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
tc.Optional(string_types))
def boolean_mask(labeled_tensor, mask, name=None):
"""Apply a boolean mask to a labeled tensor.
Unlike `tf.boolean_mask`, this currently only works on 1-dimensional masks.
  The mask is applied to the first axis of `labeled_tensor`. Labels on the first
  axis are removed, because the True indices in `mask` may not be known until
  runtime.
Args:
labeled_tensor: The input tensor.
    mask: A boolean LabeledTensor with a single axis matching the first axis
      of `labeled_tensor`.
name: Optional op name.
Returns:
The masked labeled tensor.
Raises:
    ValueError: if the axis of the mask does not match the first axis of
      `labeled_tensor`.
"""
with ops.name_scope(name, 'lt_boolean_mask', [labeled_tensor, mask]) as scope:
labeled_tensor = core.convert_to_labeled_tensor(labeled_tensor)
mask = core.convert_to_labeled_tensor(mask)
if len(mask.axes) > 1:
raise NotImplementedError(
"LabeledTensor's boolean_mask currently only supports 1D masks")
mask_axis = list(mask.axes.values())[0]
lt_axis = list(labeled_tensor.axes.values())[0]
if mask_axis != lt_axis:
raise ValueError('the first axis of the labeled tensor and the mask '
'are not equal:\n%r\n%r' % (lt_axis, mask_axis))
op = array_ops.boolean_mask(labeled_tensor.tensor, mask.tensor, name=scope)
# TODO(shoyer): attempt to infer labels for the masked values, by calling
# tf.contrib.util.constant_value on the mask?
axes = [lt_axis.name] + list(labeled_tensor.axes.values())[1:]
return core.LabeledTensor(op, axes)
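# Illustrative usage sketch (hypothetical names): the mask's single axis must
# equal the first axis of the input; that axis loses its labels in the result.
def _example_boolean_mask():
  data = core.LabeledTensor(array_ops.ones((3, 2)), [('x', 3), ('y', 2)])
  mask = core.LabeledTensor(
      ops.convert_to_tensor([True, False, True]), [('x', 3)])
  return boolean_mask(data, mask)  # axes: x (unknown size), y (2)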
@tc.returns(core.LabeledTensor)
@tc.accepts(core.LabeledTensorLike, core.LabeledTensorLike,
core.LabeledTensorLike, tc.Optional(string_types))
def where(condition, x, y, name=None):
"""Return elements from x or y depending on condition.
See `tf.where` for more details. This function currently only implements the
three argument version of where.
Args:
condition: LabeledTensor of type `bool`.
x: LabeledTensor for values where condition is true.
y: LabeledTensor for values where condition is false.
name: Optional op name.
Returns:
The labeled tensor with values according to condition.
Raises:
    ValueError: if `condition`, `x`, and `y` do not all have identical axes.
"""
with ops.name_scope(name, 'lt_where', [condition, x, y]) as scope:
condition = core.convert_to_labeled_tensor(condition)
x = core.convert_to_labeled_tensor(x)
y = core.convert_to_labeled_tensor(y)
if not condition.axes == x.axes == y.axes:
raise ValueError('all inputs to `where` must have equal axes')
op = array_ops.where(condition.tensor, x.tensor, y.tensor, name=scope)
return core.LabeledTensor(op, x.axes)
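# Illustrative usage sketch (hypothetical names): all three inputs must share
# identical axes, and the output keeps those axes.
def _example_where():
  x = core.LabeledTensor(array_ops.zeros((3,)), [('x', 3)])
  y = core.LabeledTensor(array_ops.ones((3,)), [('x', 3)])
  condition = core.LabeledTensor(
      ops.convert_to_tensor([True, False, True]), [('x', 3)])
  return where(condition, x, y)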
| apache-2.0 |