repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | content (string, 335-154k chars)
---|---|---|---|
sytays/openanalysis | doc/OpenAnalysis/05 - Data Structures.ipynb | gpl-3.0 | from openanalysis.data_structures import DataStructureBase, DataStructureVisualization
import gi.repository.Gtk as gtk # for displaying GUI dialogs
"""
Explanation: Data Structures
A data structure is a concrete implementation of the specification provided by one or more particular abstract data types (ADTs), which specify the operations that can be performed on the data structure and the computational complexity of those operations.
Different kinds of data structures are suited for different kinds of applications, and some are highly specialized to specific tasks. For example, relational databases commonly use B-tree indexes for data retrieval, while compiler implementations usually use hash tables to look up identifiers.
Usually, efficient data structures are key to designing efficient algorithms.
Standard import statement
End of explanation
"""
class BinarySearchTree(DataStructureBase): # Derived from DataStructureBase
class Node: # Class for creating a node
def __init__(self, data):
self.left = None
self.right = None
self.data = data
def __str__(self):
return str(self.data)
def __init__(self):
DataStructureBase.__init__(self, "Binary Search Tree", "t.png") # Initializing with name and path
self.root = None
self.count = 0
def get_root(self): # Returns root node of the tree
return self.root
def insert(self, item): # Inserts item into the tree
newNode = BinarySearchTree.Node(item)
insNode = self.root
parent = None
while insNode is not None:
parent = insNode
if insNode.data > newNode.data:
insNode = insNode.left
else:
insNode = insNode.right
if parent is None:
self.root = newNode
else:
if parent.data > newNode.data:
parent.left = newNode
else:
parent.right = newNode
self.count += 1
def find(self, item): # Finds if item is present in tree or not
node = self.root
while node is not None:
if item < node.data:
node = node.left
elif item > node.data:
node = node.right
else:
return True
return False
def min_value_node(self): # Returns the minimum value node
current = self.root
while current.left is not None:
current = current.left
return current
def delete(self, item): # Deletes item from tree if present
# otherwise shows an error dialog
if item not in self:
dialog = gtk.MessageDialog(None, 0, gtk.MessageType.ERROR,
gtk.ButtonsType.CANCEL, "Value not found ERROR")
dialog.format_secondary_text(
"Element not found in the %s" % self.name)
dialog.run()
dialog.destroy()
else:
self.count -= 1
if self.root.data == item and (self.root.left is None or self.root.right is None):
if self.root.left is None and self.root.right is None:
self.root = None
elif self.root.data == item and self.root.left is None:
self.root = self.root.right
elif self.root.data == item and self.root.right is None:
self.root = self.root.left
return self.root
if item < self.root.data:
temp = self.root
self.root = self.root.left
temp.left = self.delete(item)
self.root = temp
elif item > self.root.data:
temp = self.root
self.root = self.root.right
temp.right = self.delete(item)
self.root = temp
else:
if self.root.left is None:
return self.root.right
elif self.root.right is None:
return self.root.left
temp = self.root
self.root = self.root.right
min_node = self.min_value_node()
temp.data = min_node.data
temp.right = self.delete(min_node.data)
self.root = temp
return self.root
def get_graph(self, rt): # Populates self.graph with elements depending
# upon the parent-children relation
if rt is None:
return
self.graph[rt.data] = {}
if rt.left is not None:
self.graph[rt.data][rt.left.data] = {'child_status': 'left'}
self.get_graph(rt.left)
if rt.right is not None:
self.graph[rt.data][rt.right.data] = {'child_status': 'right'}
self.get_graph(rt.right)
"""
Explanation: DataStructureBase is the base class for implementing data structures
DataStructureVisualization is the class that visualizes data structures in GUI
DataStructureBase class
Any data structure, which is to be implemented, has to be derived from this class. Now we shall see data members and member functions of this class:
Data Members
name - Name of the DS
file_path - Path to store output of DS operations
Member Functions
__init__(self, name, file_path) - Initializes DS with a name and a file_path to store the output
insert(self, item) - Inserts item into the DS
delete(self, item) - Deletes item from the DS; if item is not present in the DS, throws a ValueError
find(self, item) - Finds the item in the DS; returns True if found, else returns False (similar to __contains__(self, item))
get_root(self) - Returns the root (for graph and tree DS)
get_graph(self, rt) - Gets the dict representation between the parent and children (for graph and tree DS)
draw(self, nth=None) - Draws the output to visualize the operations performed on the DS; nth is used to pass an item to visualize a find operation
DataStructureVisualization class
This class is used for visualizing data structures in a GUI (using GTK+ 3). Now we shall see data members and member functions of this class:
Data Members
ds - Any DS, which is an instance of DataStructureBase
Member Functions
__init__(self, ds) - Initializes ds with an instance of DS that is to be visualized
run(self) - Opens a GUI window to visualize the DS operations
An example ..... Binary Search Tree
Now we shall implement the class BinarySearchTree
End of explanation
"""
DataStructureVisualization(BinarySearchTree).run()
import io
import base64
from IPython.display import HTML
video = io.open('../res/bst.mp4', 'r+b').read()
encoded = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii')))
"""
Explanation: Now, this program can be executed as follows:
End of explanation
"""
|
irazhur/StatisticalMethods | examples/XrayImage/Summarizing.ipynb | gpl-2.0 | import astropy.io.fits as pyfits
import numpy as np
import astropy.visualization as viz
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
targdir = 'a1835_xmm/'
imagefile = targdir+'P0098010101M2U009IMAGE_3000.FTZ'
expmapfile = targdir+'P0098010101M2U009EXPMAP3000.FTZ'
bkgmapfile = targdir+'P0098010101M2X000BKGMAP3000.FTZ'
!du -sch $targdir/*
"""
Explanation: Summarizing Images
Images are high dimensional objects: our XMM image contains 648*648 = 419,904 datapoints (the pixel values).
Visualizing the data is an extremely important first step: the next is summarizing, which can be thought of as dimensionality reduction.
Let's dust off some standard statistics and put them to good use in summarizing this X-ray image.
End of explanation
"""
imfits = pyfits.open(imagefile)
im = imfits[0].data
plt.imshow(viz.scale_image(im, scale='log', max_cut=40), cmap='gray', origin='lower');
"""
Explanation: How Many Photons Came From the Cluster?
Let's estimate the total counts due to the cluster.
That means we need to somehow ignore:
* all the other objects in the field
* the diffuse X-ray "background"
Let's start by masking various regions of the image to separate cluster from background.
End of explanation
"""
maskedimage = im.copy()
# First make some coordinate arrays, including polar r from the cluster center:
(ny,nx) = maskedimage.shape
centroid = np.where(maskedimage == np.max(maskedimage))
x = np.linspace(0, nx-1, nx)
y = np.linspace(0, ny-1, ny)
dx, dy = np.meshgrid(x,y)
dx = dx - centroid[1]
dy = dy - centroid[0]
r = np.sqrt(dx*dx + dy*dy)
# Now select an outer annulus, for the background and an inner circle, for the cluster:
background = maskedimage.copy()
background[r < 100] = -3
background[r > 150] = -3
signal = maskedimage.copy()
signal[r > 100] = 0.0
plt.imshow(viz.scale_image(background, scale='log', max_cut=40), cmap='gray', origin='lower')
"""
Explanation: Estimating the background
Now let's look at the outer parts of the image, far from the cluster, and estimate the background level there.
End of explanation
"""
meanbackground = np.mean(background[background > -1])
medianbackground = np.median(background[background > -1])
print "Mean background counts per pixel = ",meanbackground
print "Median background counts per pixel = ",medianbackground
"""
Explanation: Let's look at the mean and median of the pixels in this image that have non-negative values.
End of explanation
"""
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(background[background > -1], bins=np.linspace(-3.5,29.5,34))
# plt.yscale('log', nonposy='clip')
plt.xlabel('Background annulus pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 40000])
plt.grid(True)
plt.show()
stdevbackground = np.std(background[background > -1])
print "Standard deviation: ",stdevbackground
"""
Explanation: Exercise:
Why do you think there is a difference? Talk to your neighbor for a minute, and be ready to suggest an answer.
To understand the difference in these two estimates, let's look at a pixel histogram for this annulus.
End of explanation
"""
plt.imshow(viz.scale_image(signal, scale='log', max_cut=40), cmap='gray', origin='lower')
plt.figure(figsize=(10,7))
n, bins, patches = plt.hist(signal[signal > -1], bins=np.linspace(-3.5,29.5,34), color='red')
plt.yscale('log', nonposy='clip')
plt.xlabel('Signal region pixel value (counts)')
plt.ylabel('Frequency')
plt.axis([-3.0, 30.0, 0, 500000])
plt.grid(True)
plt.show()
"""
Explanation: Exercise:
"The background level in this image is approximately $0.09 \pm 0.66$ counts"
What's wrong with this statement?
Talk to your neighbor for a few minutes, and see if you can come up with a better version.
Estimating the Cluster Counts
Now let's summarize the circular region centered on the cluster.
End of explanation
"""
# Total counts in signal region:
Ntotal = np.sum(signal[signal > -1])
# Background counts: mean in annulus, multiplied by number of pixels in signal region:
N = signal.copy()*0.0
N[signal > -1] = 1.0
Nbackground = np.sum(N)*meanbackground # Is this a good choice?
# Difference is the cluster counts:
Ncluster = Ntotal - Nbackground
print "Counts in signal region: ",Ntotal
print "Approximate counts due to background: ",Nbackground
print "Approximate counts due to cluster: ",Ncluster
"""
Explanation: Now we can make our estimates:
End of explanation
"""
|
metpy/MetPy | dev/_downloads/591c50ddf519b58966833b985f7ca28b/Parse_Angles.ipynb | bsd-3-clause | import metpy.calc as mpcalc
"""
Explanation: Parse angles
Demonstrate how to convert direction strings to angles.
The code below shows how to parse directional text into angles.
It also demonstrates the function's flexibility
in handling various string formats.
End of explanation
"""
dir_str = 'SOUTH SOUTH EAST'
print(dir_str)
"""
Explanation: Create a test value of a directional text
End of explanation
"""
angle_deg = mpcalc.parse_angle(dir_str)
print(angle_deg)
"""
Explanation: Now throw that string into the function to calculate
the corresponding angle
End of explanation
"""
dir_str_list = ['ne', 'NE', 'NORTHEAST', 'NORTH_EAST', 'NORTH east']
angle_deg_list = mpcalc.parse_angle(dir_str_list)
print(angle_deg_list)
"""
Explanation: The function can also handle arrays of strings
in many different abbreviations and capitalizations
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak | past-semesters/fall_2016/day-by-day/day14-Schelling-1-dimensional-segregation-day1/Day_14_Pre_Class_Notebook.ipynb | agpl-3.0 | even_numbers = [2, 4, 6, 8, 10, 12, 14]
s1 = even_numbers[1:5] # returns the 2nd through 5th elements
print("s1:", s1)
s2 = even_numbers[2:] # returns the 3rd element through the end
print("s2:", s2)
s3 = even_numbers[:-2] # returns everything but the last two elements
print("s3:", s3)
s4 = even_numbers[1:-2] # returns everything but the first element and the two elements on the end
print("s4:", s4)
s5 = even_numbers[1:-1:2] # returns every other element, starting with the second element and ending at the second-to-last
print("s5:", s5)
s6 = even_numbers[::-1] # starts at the end of the list and returns all elements in backwards order (reversing original list)
print("s6:", s6)
"""
Explanation: List manipulation in Python
Goal for this assignment
The goal for this assignment is to learn to use the various methods for Python's list data type.
Your name
// put your name here!
Part 1: working with lists in Python
A list in Python is what is known as a compound data type, which is fundamentally used to group together other types of variables. It is possible for lists to have values of a variety of types (i.e., integers, strings, floating-point numbers, etc.) but in general people tend to create lists with a single data type. Lists are written as comma-separated values between square brackets, like so:
odd_numbers = [1, 3, 5, 7, 9]
and an empty list can be created by using square brackets with no values:
empty_list = []
The number of elements of a list can be found by using the Python len() method: len(odd_numbers) would return 5, for example.
Lists are accessed using index values: odd_numbers[2] will return the 3rd element from the beginning of the list (since Python counts starting at 0). In this case, the value returned would be 5. Using negative numbers in the index gives you elements starting at the end. For example, odd_numbers[-1] gives you the last element in the list, and odd_numbers[-2] gives you the second-to-last number.
Lists can also be indexed by slicing the list, which gives you a sub-set of the list (which is also a list). A colon indicates that slicing is occurring, and you use the syntax my_array[start:end]. In this example, start is the index where you start, and end is the index after the one you want to end (in keeping with the rest of Python's syntax). If start or end are blank, the slice either begins at the beginning of the list or continues to the end of the list, respectively.
You can also add a third argument, which is the step. In other words, my_array[start:end:step] goes from start index to the index before end in steps of step. More concretely, my_array[1:6:2] will return a list composed of elements 1, 3, and 5 of that list, and my_array[::2] returns every second element in the list.
IMPORTANT: you can do all of these things in Numpy, too!
Some examples are below:
End of explanation
"""
some_letters = ['a','b','c','d','e','f','g','h','i']
# put your code here!
"""
Explanation: Now, try it yourself!
Using the array below, create and print out sub-arrays that do the following:
print out the first four elements (a-d)
print out the last three elements (g-i)
starting with the second element, print out every third element (b, e, h)
End of explanation
"""
A = [1,2,3,4,5,6]
B = ['a','b','c','d','e']
# put your code here!
"""
Explanation: Part 2: list methods in Python
There are several useful methods that are built into lists in Python. A full explanation of all list methods can be found here. However, the most useful list methods are as follows:
list.append(x) - adds an item x to the end of your list.
list.extend(L) - extends the list by adding all items in the given list L. If you try to use the append() method, you will end up with a list that has an element that is another list - this creates a single, unified list made up of the two original lists.
list.insert(i, x) - insert item x at index position i. list.insert(0,x) inserts at the front of the list
list.pop(i) - removes the item at index i and returns it. If you don't give an index, list.pop() gives you the last item in the list.
list.reverse() - reverse the order of the elements of the list. This happens in place, and doesn't return anything.
An important note about copying lists in Python
You may try to copy a list so you can work with it:
new_list = old_list
However, you'll find that if you modify new_list, you also modify old_list. That's because when you equate lists in the way shown above, you are not making a copy: you are creating a new name that "points at" the old values, so you are still pointing at the old values and can modify them. To truly copy a list, the (weird, but correct) way is to say:
new_list = list(old_list)
Now, try it yourself!
Using the arrays below, create new arrays, manipulate them as follows, and then print them out:
Create a new array C, which a copy of A, and append the numbers 7 and 8 to the end. (so the elements of C are 1,2,3,4,5,6,7,8)
Then remove the third element of C and put it back into the array before the second element from the end (so its elements, in order, are 1, 2, 4, 5, 6, 3, 7, 8)
Make a new array D that is a copy of A, and then extend it with the middle three elements of array B, using slicing to get the middle 3 elements of B, (so its elements, in order, are 1, 2, 3, 4, 5, 6, 'b', 'c', 'd').
Make a new array E that is a copy of B, reverse the order of its elements, and then remove the first element (so its elements, in order, are 'd', 'c', 'b', 'a').
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/jXRNcKiQ8C3lvt8E2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
"""
|
mari-linhares/tensorflow-workshop | code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-Word2Vec-checkpoint.ipynb | apache-2.0 | # Tensorflow
import tensorflow as tf
print('Tested with TensorFlow 1.2.0')
print('Your TensorFlow version:', tf.__version__)
# Feeding function for enqueue data
from tensorflow.python.estimator.inputs.queues import feeding_functions as ff
# Rnn common functions
from tensorflow.contrib.learn.python.learn.estimators import rnn_common
# Model builder
from tensorflow.python.estimator import model_fn as model_fn_lib
# Run an experiment
from tensorflow.contrib.learn.python.learn import learn_runner
# Helpers for data processing
import pandas as pd
import numpy as np
import argparse
import random
"""
Explanation: Dependencies
End of explanation
"""
# data from: http://ai.stanford.edu/~amaas/data/sentiment/
TRAIN_INPUT = 'data/train.csv'
TEST_INPUT = 'data/test.csv'
# data manually generated
MY_TEST_INPUT = 'data/mytest.csv'
# word2vec-style pretrained embeddings (GloVe)
# https://nlp.stanford.edu/projects/glove/
# the matrix will contain 400,000 word vectors, each with a dimensionality of 50.
word_list = np.load('word_list.npy')
word_list = word_list.tolist() # originally loaded as numpy array
word_list = [word.decode('UTF-8') for word in word_list] # encode words as UTF-8
print('Loaded the word list, length:', len(word_list))
word_vector = np.load('word_vector.npy')
print ('Loaded the word vector, shape:', word_vector.shape)
"""
Explanation: Loading Data
First, we want to create our word vectors. For simplicity, we're going to be using a pretrained model.
As one of the biggest players in the ML game, Google was able to train a Word2Vec model on a massive Google News dataset that contained over 100 billion different words! From that model, Google was able to create 3 million word vectors, each with a dimensionality of 300.
In an ideal scenario, we'd use those vectors, but since the word vectors matrix is quite large (3.6 GB!), we'll be using a much more manageable matrix that is trained using GloVe, a similar word vector generation model. The matrix will contain 400,000 word vectors, each with a dimensionality of 50.
We're going to be importing two different data structures, one will be a Python list with the 400,000 words, and one will be a 400,000 x 50 dimensional embedding matrix that holds all of the word vector values.
End of explanation
"""
baseball_index = word_list.index('baseball')
print('Example: baseball')
print(word_vector[baseball_index])
"""
Explanation: We can search our word list for a word like "baseball", and then access its corresponding vector through the embedding matrix.
End of explanation
"""
max_seq_length = 10 # maximum length of sentence
num_dims = 50 # dimensions for each word vector
first_sentence = np.zeros((max_seq_length), dtype='int32')
first_sentence[0] = word_list.index("i")
first_sentence[1] = word_list.index("thought")
first_sentence[2] = word_list.index("the")
first_sentence[3] = word_list.index("movie")
first_sentence[4] = word_list.index("was")
first_sentence[5] = word_list.index("incredible")
first_sentence[6] = word_list.index("and")
first_sentence[7] = word_list.index("inspiring")
# first_sentence[8] = 0
# first_sentence[9] = 0
print(first_sentence.shape)
print(first_sentence) # shows the row index for each word
"""
Explanation: Now that we have our vectors, our first step is taking an input sentence and then constructing its vector representation. Let's say that we have the input sentence "I thought the movie was incredible and inspiring". In order to get the word vectors, we can use TensorFlow's embedding lookup function. This function takes in two arguments, one for the embedding matrix (the word_vector matrix in our case), and one for the ids of each of the words. The ids vector can be thought of as the integerized representation of the training set. This is basically just the row index of each of the words. Let's look at a quick example to make this concrete.
End of explanation
"""
with tf.Session() as sess:
print(tf.nn.embedding_lookup(word_vector, first_sentence).eval().shape)
"""
Explanation: TODO: insert image
The 10 x 50 output should contain the 50 dimensional word vectors for each of the 10 words in the sequence.
End of explanation
"""
from os import listdir
from os.path import isfile, join
positiveFiles = ['positiveReviews/' + f for f in listdir('positiveReviews/') if isfile(join('positiveReviews/', f))]
negativeFiles = ['negativeReviews/' + f for f in listdir('negativeReviews/') if isfile(join('negativeReviews/', f))]
numWords = []
for pf in positiveFiles:
with open(pf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Positive files finished')
for nf in negativeFiles:
with open(nf, "r", encoding='utf-8') as f:
line=f.readline()
counter = len(line.split())
numWords.append(counter)
print('Negative files finished')
numFiles = len(numWords)
print('The total number of files is', numFiles)
print('The total number of words in the files is', sum(numWords))
print('The average number of words in the files is', sum(numWords)/len(numWords))
"""
Explanation: Before creating the ids matrix for the whole training set, let’s first take some time to visualize the type of data that we have. This will help us determine the best value for setting our maximum sequence length. In the previous example, we used a max length of 10, but this value is largely dependent on the inputs you have.
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews. Each of the reviews is stored in a txt file that we need to parse through. The positive reviews are stored in one directory and the negative reviews are stored in another. The following piece of code will determine total and average number of words in each review.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(numWords, 50)
plt.xlabel('Sequence Length')
plt.ylabel('Frequency')
plt.axis([0, 1200, 0, 8000])
plt.show()
"""
Explanation: We can also use the Matplotlib library to visualize this data in a histogram format.
End of explanation
"""
max_seq_len = 250
"""
Explanation: From the histogram as well as the average number of words per file, we can safely say that most reviews will fall under 250 words, which is the max sequence length value we will set.
End of explanation
"""
ids_matrix = np.load('ids_matrix.npy').tolist()
"""
Explanation: Data
End of explanation
"""
# Parameters for training
STEPS = 15000
BATCH_SIZE = 32
# Parameters for data processing
REVIEW_KEY = 'review'
SEQUENCE_LENGTH_KEY = 'sequence_length'
"""
Explanation: Parameters
End of explanation
"""
POSITIVE_REVIEWS = 12500
# copying sequences
data_sequences = [np.asarray(v, dtype=np.int32) for v in ids_matrix]
# generating labels
data_labels = [[1, 0] if i < POSITIVE_REVIEWS else [0, 1] for i in range(len(ids_matrix))]
# also creating a length column, this will be used by the Dynamic RNN
# see more about it here: https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
data_length = [max_seq_len for i in range(len(ids_matrix))]
"""
Explanation: Separating train and test data
The training set we're going to use is the Imdb movie review dataset. This set has 25,000 movie reviews, with 12,500 positive reviews and 12,500 negative reviews.
Let's first give a positive label [1, 0] to the first 12500 reviews, and a negative label [0, 1] to the other reviews.
End of explanation
"""
data = list(zip(data_sequences, data_labels, data_length))
random.shuffle(data) # shuffle
data = np.asarray(data)
# separating train and test data
limit = int(len(data) * 0.9)
train_data = data[:limit]
test_data = data[limit:]
"""
Explanation: Then, let's shuffle the data and use 90% of the reviews for training and the other 10% for testing.
End of explanation
"""
LABEL_INDEX = 1
def _number_of_pos_labels(df):
pos_labels = 0
for value in df:
if value[LABEL_INDEX] == [1, 0]:
pos_labels += 1
return pos_labels
pos_labels_train = _number_of_pos_labels(train_data)
total_labels_train = len(train_data)
pos_labels_test = _number_of_pos_labels(test_data)
total_labels_test = len(test_data)
print('Total number of positive labels:', pos_labels_train + pos_labels_test)
print('Proportion of positive labels on the Train data:', pos_labels_train/total_labels_train)
print('Proportion of positive labels on the Test data:', pos_labels_test/total_labels_test)
"""
Explanation: Verifying if the train and test data have enough positive and negative examples
End of explanation
"""
def get_input_fn(df, batch_size, num_epochs=1, shuffle=True):
def input_fn():
sequences = np.asarray([v for v in df[:,0]], dtype=np.int32)
labels = np.asarray([v for v in df[:,1]], dtype=np.int32)
length = np.asarray(df[:,2], dtype=np.int32)
# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/data
dataset = (
tf.contrib.data.Dataset.from_tensor_slices((sequences, labels, length)) # reading data from memory
.repeat(num_epochs) # repeat dataset the number of epochs
.batch(batch_size)
)
# for our "manual" test we don't want to shuffle the data
if shuffle:
dataset = dataset.shuffle(buffer_size=100000)
# create iterator
review, label, length = dataset.make_one_shot_iterator().get_next()
features = {
REVIEW_KEY: review,
SEQUENCE_LENGTH_KEY: length,
}
return features, label
return input_fn
features, label = get_input_fn(test_data, 2, shuffle=False)()
with tf.Session() as sess:
items = sess.run(features)
print(items[REVIEW_KEY])
print(sess.run(label))
train_input_fn = get_input_fn(train_data, BATCH_SIZE, None)
test_input_fn = get_input_fn(test_data, BATCH_SIZE)
"""
Explanation: Input functions
End of explanation
"""
def get_model_fn(rnn_cell_sizes,
label_dimension,
dnn_layer_sizes=[],
optimizer='SGD',
learning_rate=0.01,
embed_dim=128):
def model_fn(features, labels, mode):
review = features[REVIEW_KEY]
sequence_length = tf.cast(features[SEQUENCE_LENGTH_KEY], tf.int32)
# Creating embedding
data = tf.Variable(tf.zeros([BATCH_SIZE, max_seq_len, 50]),dtype=tf.float32)
data = tf.nn.embedding_lookup(word_vector, review)
# Each RNN layer will consist of a LSTM cell
rnn_layers = [tf.contrib.rnn.LSTMCell(size) for size in rnn_cell_sizes]
# Construct the layers
multi_rnn_cell = tf.contrib.rnn.MultiRNNCell(rnn_layers)
# Runs the RNN model dynamically
# more about it at:
# https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn
outputs, final_state = tf.nn.dynamic_rnn(cell=multi_rnn_cell,
inputs=data,
dtype=tf.float32)
# Slice to keep only the last cell of the RNN
last_activations = rnn_common.select_last_activations(outputs, sequence_length)
# Construct dense layers on top of the last cell of the RNN
for units in dnn_layer_sizes:
last_activations = tf.layers.dense(
last_activations, units, activation=tf.nn.relu)
# Final dense layer for prediction
predictions = tf.layers.dense(last_activations, label_dimension)
predictions_softmax = tf.nn.softmax(predictions)
loss = None
train_op = None
eval_op = None
preds_op = {
'prediction': predictions_softmax,
'label': labels
}
if mode == tf.contrib.learn.ModeKeys.EVAL:
eval_op = {
"accuracy": tf.metrics.accuracy(
tf.argmax(input=predictions_softmax, axis=1),
tf.argmax(input=labels, axis=1))
}
if mode != tf.contrib.learn.ModeKeys.INFER:
loss = tf.losses.softmax_cross_entropy(labels, predictions)
if mode == tf.contrib.learn.ModeKeys.TRAIN:
train_op = tf.contrib.layers.optimize_loss(
loss,
tf.contrib.framework.get_global_step(),
optimizer=optimizer,
learning_rate=learning_rate)
return model_fn_lib.EstimatorSpec(mode,
predictions=predictions_softmax,
loss=loss,
train_op=train_op,
eval_metric_ops=eval_op)
return model_fn
model_fn = get_model_fn(rnn_cell_sizes=[64], # size of the hidden layers
label_dimension=2, # since are just 2 classes
dnn_layer_sizes=[128, 64], # size of units in the dense layers on top of the RNN
optimizer='Adam',
learning_rate=0.001,
embed_dim=512)
"""
Explanation: Creating the Estimator model
End of explanation
"""
# create experiment
def generate_experiment_fn():
"""
Create an experiment function given hyperparameters.
Returns:
A function (output_dir) -> Experiment where output_dir is a string
representing the location of summaries, checkpoints, and exports.
this function is used by learn_runner to create an Experiment which
executes model code provided in the form of an Estimator and
input functions.
All listed arguments in the outer function are used to create an
Estimator, and input functions (training, evaluation, serving).
Unlisted args are passed through to Experiment.
"""
def _experiment_fn(run_config, hparams):
estimator = tf.estimator.Estimator(model_fn=model_fn, config=run_config)
return tf.contrib.learn.Experiment(
estimator,
train_input_fn=train_input_fn,
eval_input_fn=test_input_fn,
train_steps=STEPS
)
return _experiment_fn
# run experiment
learn_runner.run(generate_experiment_fn(), run_config=tf.contrib.learn.RunConfig(model_dir='testing2'))
"""
Explanation: Create and Run Experiment
End of explanation
"""
def string_to_array(s, separator=' '):
return s.split(separator)
def generate_data_row(sentence, label, max_length):
sequence = np.zeros((max_length), dtype='int32')
for i, word in enumerate(string_to_array(sentence)):
sequence[i] = word_list.index(word)
return sequence, label, max_length
def generate_data(sentences, labels, max_length):
data = []
for s, l in zip(sentences, labels):
data.append(generate_data_row(s, l, max_length))
return np.asarray(data)
sentences = ['i thought the movie was incredible and inspiring',
'this is a great movie',
'this is a good movie but isnt the best',
'it was fine i guess',
'it was definitely bad',
'its not that bad',
'its not that bad i think its a good movie',
'its not bad i think its a good movie']
labels = [[1, 0],
[1, 0],
[1, 0],
[0, 1],
[0, 1],
[1, 0],
[1, 0],
[1, 0]] # [1, 0]: positive, [0, 1]: negative
my_test_data = generate_data(sentences, labels, 10)
"""
Explanation: Making Predictions
First let's generate our own sentences to see how the model classifies them.
End of explanation
"""
# The Experiment above wrote its checkpoints to model_dir='testing2'; recreate an
# Estimator from that directory so predict() can be called directly.
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='testing2')
preds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))
print()
for p, s in zip(preds, sentences):
print('sentence:', s)
print('good review:', p[0], 'bad review:', p[1])
print('-' * 10)
"""
Explanation: Now, let's generate predictions for the sentences
End of explanation
"""
|
theandygross/HIV_Methylation | Parallel/Init_Parallel.ipynb | mit | k = ti((age < 68) & (age > 25))
dd = logit_adj(df_meth.ix[:, k])
m = dd.mean(1)
s = dd.std(1)
df_norm = dd.subtract(m, axis=0).divide(s, axis=0)
df_norm = df_norm.clip(-7,7)
"""
Explanation: Logit Transform and Normalize Methylation Data
End of explanation
"""
def chunkify_df(df, store, table_name, N=100):
df = df.dropna(1)
for i in range(N):
g = df.index[i::N]
dd = df.ix[g]
dd.to_hdf(store, '{}/chunk_{}'.format(table_name, i))
gender.value_counts()
labels.ix[k.intersection(df_meth.columns)].value_counts()
store = '/cellar/users/agross/Data/tmp/for_parallel.h5'
store = pd.HDFStore(store)
store['labels'] = labels
store['bio_age'] = mc_adj_c
store['cell_counts'] = cell_counts
store['age'] = age
store['gender'] = gender == 'M'
#store['bio_age'] = age_adv.append(age_adv0)
labels.ix[k.intersection(df_meth.columns)].value_counts()
chunkify_df(df_norm.ix[:, ti(labels == 's1')], store.filename, 'in_set_s1')
chunkify_df(df_norm.ix[:, ti(labels == 's2')], store.filename, 'in_set_s2')
chunkify_df(df_norm.ix[:, ti(labels == 's3')], store.filename, 'in_set_s3')
store.close()
store.open()
"""
Explanation: Prepare Data for Association Tests
The association tests take a while to run in serial so we do them in a map-reduce type format
The idea is we break the data into 100 chunks, run the tests in parallel, and then combine the results
This is not entirely necessary but drops run-time from ~15 min to about 15 seconds
End of explanation
"""
|
facebook/prophet | notebooks/multiplicative_seasonality.ipynb | mit | %%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_air_passengers.csv')
m <- prophet(df)
future <- make_future_dataframe(m, 50, freq = 'm')
forecast <- predict(m, future)
plot(m, forecast)
df = pd.read_csv('../examples/example_air_passengers.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(50, freq='MS')
forecast = m.predict(future)
fig = m.plot(forecast)
"""
Explanation: By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work:
End of explanation
"""
%%R -w 10 -h 6 -u in
m <- prophet(df, seasonality.mode = 'multiplicative')
forecast <- predict(m, future)
plot(m, forecast)
m = Prophet(seasonality_mode='multiplicative')
m.fit(df)
forecast = m.predict(future)
fig = m.plot(forecast)
"""
Explanation: This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality.
Prophet can model multiplicative seasonality by setting seasonality_mode='multiplicative' in the input arguments:
End of explanation
"""
%%R -w 9 -h 6 -u in
prophet_plot_components(m, forecast)
fig = m.plot_components(forecast)
"""
Explanation: The components figure will now show the seasonality as a percent of the trend:
End of explanation
"""
%%R
m <- prophet(seasonality.mode = 'multiplicative')
m <- add_seasonality(m, 'quarterly', period = 91.25, fourier.order = 8, mode = 'additive')
m <- add_regressor(m, 'regressor', mode = 'additive')
m = Prophet(seasonality_mode='multiplicative')
m.add_seasonality('quarterly', period=91.25, fourier_order=8, mode='additive')
m.add_regressor('regressor', mode='additive')
"""
Explanation: With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overridden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or regressor.
For example, this block sets the built-in seasonalities to multiplicative, but includes an additive quarterly seasonality and an additive regressor:
End of explanation
"""
|
marburg-open-courseware/gmoc | docs/mpg-if_error_continue/notebooks/working-with-text.ipynb | mit | text1 = "Ethics are built right into the ideals and objectives of the United Nations "
len(text1) # The length of text1
text2 = text1.split(' ') # Return a list of the words in text1, separating by ' '.
len(text2)
text2
"""
Explanation: You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.
Working With Text
End of explanation
"""
[w for w in text2 if len(w) > 3] # Words that are greater than 3 letters long in text2
[w for w in text2 if w.istitle()] # Capitalized words in text2
[w for w in text2 if w.endswith('s')] # Words in text2 that end in 's'
"""
Explanation: <br>
List comprehension allows us to find specific words:
End of explanation
"""
text3 = 'To be or not to be'
text4 = text3.split(' ')
len(text4)
len(set(text4))
set(text4)
len(set([w.lower() for w in text4])) # .lower converts the string to lowercase.
set([w.lower() for w in text4])
"""
Explanation: <br>
We can find unique words using set().
End of explanation
"""
text5 = '"Ethics are built right into the ideals and objectives of the United Nations" \
#UNSG @ NY Society for Ethical Culture bit.ly/2guVelr'
text6 = text5.split(' ')
text6
"""
Explanation: Processing free-text
End of explanation
"""
[w for w in text6 if w.startswith('#')]
"""
Explanation: <br>
Finding hashtags:
End of explanation
"""
[w for w in text6 if w.startswith('@')]
text7 = '@UN @UN_Women "Ethics are built right into the ideals and objectives of the United Nations" \
#UNSG @ NY Society for Ethical Culture bit.ly/2guVelr'
text8 = text7.split(' ')
"""
Explanation: <br>
Finding callouts:
End of explanation
"""
import re # import re - a module that provides support for regular expressions
[w for w in text8 if re.search('@[A-Za-z0-9_]+', w)]
"""
Explanation: <br>
We can use regular expressions to help us with more complex parsing.
For example '@[A-Za-z0-9_]+' will return all words that:
* start with '@' and are followed by at least one:
* capital letter ('A-Z')
* lowercase letter ('a-z')
* number ('0-9')
* or underscore ('_')
End of explanation
"""
|
arnoldlu/lisa | ipynb/tutorial/01_IPythonNotebooksUsage.ipynb | apache-2.0 | a = 1
b = 2
def my_simple_sum(a, b):
"""Simple addition
:param a: fist number
:param b: second number
"""
print "Sum is:", a+b
my_simple_sum(a,b)
# Further down in the code we do some changes
a = 100
# than we can go back and re-execute just the previous cell
"""
Explanation: Command mode vs Edit mode
By default we are in COMMAND mode
<li>Press **ENTER** to edit the current cell
<li>Press **ESC** to switch back to command mode
## Main command mode shortcuts
Notebook control:
- **00** : Restart the kernel
Cells control:
- **Up/Down arrows** : move up-down on cells
- **a** : add cell above
- **b** : add cell below
- **x** : delete current cell
Editing cells:
- **Return** : enter edit mode for current cell
- **Control+/** : Toggle code comment
- **Ctrl+Shift+-** : Split cell at cursor position
- **Esc** : return to command mode
Executing cells:
- **Shift+Return** : execute the content of the current cell
More shortcuts listed under *"Help" => "Keyboard shortcuts"*
# Cells editing
Cells have a type, which can be changed using shortcuts or the dedicated dropdown menu.<br>
This is an example of text cell, where you can use **Markdown** tags to format your text.
You can also highlight chunks of code in almost any langauge
Example of Bash script:
```shell
#!/bin/bash
# A useless script
for i in $(seq 10); do
echo Hello World
done
```
Example of C fragment:
```c
/*
 * System energy normalization
 * Returns the normalized value, in the range [0..SCHED_LOAD_SCALE],
 * corresponding to the specified energy variation.
 */
static inline int
normalize_energy(int energy_diff)
{
    u32 normalized_nrg;
    int max_delta;

#ifdef CONFIG_SCHED_DEBUG
    /* Check for boundaries */
    max_delta = schedtune_target_nrg.max_power;
    max_delta -= schedtune_target_nrg.min_power;
    WARN_ON(abs(energy_diff) >= max_delta);
#endif

    /* Do scaling using positive numbers to increase the range */
    normalized_nrg = (energy_diff < 0) ? -energy_diff : energy_diff;

    /* Scale by energy magnitude */
    normalized_nrg <<= SCHED_LOAD_SHIFT;

    /* Normalize on max energy for target platform */
    normalized_nrg = reciprocal_divide(
            normalized_nrg, schedtune_target_nrg.rdiv);

    return (energy_diff < 0) ? -normalized_nrg : normalized_nrg;
}
```
## Code flow vs execution flow
Normally cells contains code, which is executed when **Shift+Return** is pressed
End of explanation
"""
# Use TAB to complete the function name
# Use SHIFT+Tab after the '(' to access the function's documentation
my_simple_sum(2,3)
"""
Explanation: Access to documentation and Code completion
End of explanation
"""
!pwd
!date
"""
Explanation: Local shell commands execution
We can use a "!" at the beginning of a line to execute that command in a local shell
End of explanation
"""
folder = "../"
!ls -la {folder} | wc -l
"""
Explanation: We can also use variables as parameters by passing them wrapped in "{}"
End of explanation
"""
output = !find ../../ipynb/ -name "*.ipynb"
print "Available notebooks:"
for line in output:
print line.replace('../../ipynb/', ' ')
"""
Explanation: Output of a local shell command can also be captured, for example to be post-processed in Python
End of explanation
"""
|
GoogleCloudPlatform/analytics-componentized-patterns | retail/recommendation-system/bqml-mlops/part_3/vertex_ai_pipeline.ipynb | apache-2.0 | PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
# CHANGE the following settings
BASE_IMAGE='gcr.io/your-image-name' #This is the image built from the Dockerfile in the same folder
REGION='vertex-ai-region' #For example, us-central1, note that Vertex AI endpoint deployment region must match MODEL_STORAGE bucket region
MODEL_STORAGE = 'gs://your-bucket-name/folder-name' #Make sure this bucket is created in the same region defined above
BQ_DATASET_NAME="hotel_recommendations" #This is the name of the target dataset where your model and predictions will be stored
PROJECT_ID="your-project-id" #This is your GCP project ID that can be found in the GCP console
# Required Parameters for Vertex AI
USER = 'your-user-name'
BUCKET_NAME = 'your-bucket-name'
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(BUCKET_NAME, USER) #Cloud Storage URI that your pipelines service account can access.
ENDPOINT_NAME='bqml-hotel-recommendations' #Vertex AI Endpoint Name
DEPLOY_COMPUTE='n1-standard-4' #Could be any supported Vertex AI Instance Types
DEPLOY_IMAGE='us-docker.pkg.dev/vertex-ai/prediction/xgboost-cpu.0-82:latest'#Do not change, BQML XGBoost is currently compatible with 0.82
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
# Check the KFP version. The KFP version should be >= 1.6; if it is lower, run !pip3 install --user kfp --upgrade, then restart the kernel
!python3 -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
"""
Explanation: Tutorial Overview
This is part three of the tutorial, where you will learn how to run the same code as in Part One (with minor changes) on Google's new Vertex AI Pipelines. Vertex Pipelines helps you to automate, monitor, and govern your ML systems by orchestrating your ML workflow in a serverless manner, and storing your workflow's artifacts using Vertex ML Metadata. By storing the artifacts of your ML workflow in Vertex ML Metadata, you can analyze the lineage of your workflow's artifacts — for example, an ML model's lineage may include the training data, hyperparameters, and code that were used to create the model.
You will also learn how to export the final BQML model and host it on a Google Vertex AI endpoint.
Prerequisites
Download the Expedia Hotel Recommendation Dataset from Kaggle. You will be mostly working with the train.csv dataset for this tutorial
Upload the dataset to BigQuery by following the how-to guide Loading CSV Data
Follow the how-to guide to create flex slots, a reservation, and an assignment in BigQuery for training ML models. <strong>Make sure to create Flex slots and not month/year slots so you can delete them after the tutorial.</strong>
Build and push a docker image using this dockerfile as the base image for the Kubeflow pipeline components.
Create or use a Google Cloud Storage bucket to export the finalized model to. <strong>Make sure to create the bucket in the same region where you will create Vertex AI Endpoint to host your model.</strong>
If you do not specify a service account, Vertex Pipelines uses the Compute Engine default service account to run your pipelines. The Compute Engine default service account has the Project Editor role by default so it should have access to BigQuery as well as Google Cloud Storage.
Change the following cell to reflect your setup
End of explanation
"""
from typing import NamedTuple
import json
import os
def run_bigquery_ddl(project_id: str, query_string: str, location: str) -> NamedTuple(
'DDLOutput', [('created_table', str), ('query', str)]):
"""
Runs BigQuery query and returns a table/model name
"""
print(query_string)
from google.cloud import bigquery
from google.api_core.future import polling
from google.cloud import bigquery
from google.cloud.bigquery import retry as bq_retry
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query_string, retry=bq_retry.DEFAULT_RETRY)
job._retry = polling.DEFAULT_RETRY
print('bq version: {}'.format(bigquery.__version__))
while job.running():
from time import sleep
sleep(30)
print('Running ...')
tblname = '{}.{}'.format(job.ddl_target_table.dataset_id, job.ddl_target_table.table_id)
print('{} created in {}'.format(tblname, job.ended - job.started))
from collections import namedtuple
result_tuple = namedtuple('DDLOutput', ['created_table', 'query'])
return result_tuple(tblname, query_string)
"""
Explanation: Create BigQuery function
Create a generic BigQuery function that runs a BigQuery query and returns the table/model created. This will be re-used to return BigQuery results for all the different segments of the BigQuery process in the Kubeflow Pipeline. You will see later in the tutorial that this function is passed as a parameter (ddlop) to other functions to perform certain BigQuery operations.
End of explanation
"""
def train_matrix_factorization_model(ddlop, project_id: str, dataset: str):
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.my_implicit_mf_model_quantiles_demo_binary_prod`
OPTIONS
(model_type='matrix_factorization',
feedback_type='implicit',
user_col='user_id',
item_col='hotel_cluster',
rating_col='rating',
l2_reg=30,
num_factors=15) AS
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
""".format(project_id = project_id, dataset = dataset)
return ddlop(project_id, query, 'US')
def evaluate_matrix_factorization_model(project_id:str, mf_model: str, location: str='US')-> NamedTuple('MFMetrics', [('msqe', float)]):
query = """
SELECT * FROM ML.EVALUATE(MODEL `{project_id}.{mf_model}`)
""".format(project_id = project_id, mf_model = mf_model)
print(query)
from google.cloud import bigquery
import json
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('MFMetrics', ['msqe'])
return result_tuple(metrics_df.loc[0].to_dict()['mean_squared_error'])
"""
Explanation: Train Matrix Factorization model and evaluate it
We will start by training a matrix factorization model that will allow us to understand the latent relationship between users and hotel clusters. We do this because the matrix factorization approach can only capture the latent relationship between a user and a hotel cluster. However, there are other intuitively useful predictors (such as is_mobile, location, etc.) that can improve model performance. So we feed the resulting weights/factors as features, along with those other features, to train the final XGBoost model.
End of explanation
"""
def create_user_features(ddlop, project_id:str, dataset:str, mf_model:str):
#Feature engineering for useres
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.user_features_prod` AS
WITH u as
(
select
user_id,
count(*) as total_visits,
count(distinct user_location_city) as distinct_cities,
sum(distinct site_name) as distinct_sites,
sum(is_mobile) as total_mobile,
sum(is_booking) as total_bookings,
FROM `{project_id}.{dataset}.hotel_train`
GROUP BY 1
)
SELECT
u.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors
FROM
u JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'user_id' AND feature = CAST(u.user_id AS STRING)
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
def create_hotel_features(ddlop, project_id:str, dataset:str, mf_model:str):
#Feature eingineering for hotels
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.hotel_features_prod` AS
WITH h as
(
select
hotel_cluster,
count(*) as total_cluster_searches,
count(distinct hotel_country) as distinct_hotel_countries,
sum(distinct hotel_market) as distinct_hotel_markets,
sum(is_mobile) as total_mobile_searches,
sum(is_booking) as total_cluster_bookings,
FROM `{project_id}.{dataset}.hotel_train`
group by 1
)
SELECT
h.*,
(SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS hotel_factors
FROM
h JOIN ML.WEIGHTS( MODEL `{mf_model}`) w
ON processed_input = 'hotel_cluster' AND feature = CAST(h.hotel_cluster AS STRING)
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model)
return ddlop(project_id, query, 'US')
"""
Explanation: Creating embedding features for users and hotels
We will use the matrix factorization model to create the corresponding user factors and hotel factors, and embed them together with additional features such as total visits and distinct cities to create a new training dataset for an XGBoost classifier, which will try to predict the likelihood of booking for any user/hotel combination. Also note that we aggregated and grouped the original dataset by user_id.
End of explanation
"""
def combine_features(ddlop, project_id:str, dataset:str, mf_model:str, hotel_features:str, user_features:str):
#Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE TABLE `{project_id}.{dataset}.total_features_prod` AS
with ratings as(
SELECT
user_id,
hotel_cluster,
if(sum(is_booking) > 0, 1, sum(is_booking)) AS rating
FROM `{project_id}.{dataset}.hotel_train`
group by 1,2
)
select
h.* EXCEPT(hotel_cluster),
u.* EXCEPT(user_id),
IFNULL(rating,0) as rating
from `{hotel_features}` h, `{user_features}` u
LEFT OUTER JOIN ratings r
ON r.user_id = u.user_id AND r.hotel_cluster = h.hotel_cluster
""".format(project_id = project_id, dataset = dataset, mf_model=mf_model, hotel_features=hotel_features, user_features=user_features)
return ddlop(project_id, query, 'US')
"""
Explanation: The function below combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier. Note that the target variable is rating, which is converted into a binary classification label.
End of explanation
"""
%%bigquery --project $PROJECT_ID
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_hotels`(h ARRAY<FLOAT64>)
RETURNS
STRUCT<
h1 FLOAT64,
h2 FLOAT64,
h3 FLOAT64,
h4 FLOAT64,
h5 FLOAT64,
h6 FLOAT64,
h7 FLOAT64,
h8 FLOAT64,
h9 FLOAT64,
h10 FLOAT64,
h11 FLOAT64,
h12 FLOAT64,
h13 FLOAT64,
h14 FLOAT64,
h15 FLOAT64
> AS (STRUCT(
h[OFFSET(0)],
h[OFFSET(1)],
h[OFFSET(2)],
h[OFFSET(3)],
h[OFFSET(4)],
h[OFFSET(5)],
h[OFFSET(6)],
h[OFFSET(7)],
h[OFFSET(8)],
h[OFFSET(9)],
h[OFFSET(10)],
h[OFFSET(11)],
h[OFFSET(12)],
h[OFFSET(13)],
h[OFFSET(14)]
));
CREATE OR REPLACE FUNCTION `hotel_recommendations.arr_to_input_15_users`(u ARRAY<FLOAT64>)
RETURNS
STRUCT<
u1 FLOAT64,
u2 FLOAT64,
u3 FLOAT64,
u4 FLOAT64,
u5 FLOAT64,
u6 FLOAT64,
u7 FLOAT64,
u8 FLOAT64,
u9 FLOAT64,
u10 FLOAT64,
u11 FLOAT64,
u12 FLOAT64,
u13 FLOAT64,
u14 FLOAT64,
u15 FLOAT64
> AS (STRUCT(
u[OFFSET(0)],
u[OFFSET(1)],
u[OFFSET(2)],
u[OFFSET(3)],
u[OFFSET(4)],
u[OFFSET(5)],
u[OFFSET(6)],
u[OFFSET(7)],
u[OFFSET(8)],
u[OFFSET(9)],
u[OFFSET(10)],
u[OFFSET(11)],
u[OFFSET(12)],
u[OFFSET(13)],
u[OFFSET(14)]
));
"""
Explanation: We will create a couple of BigQuery user-defined functions (UDF) to convert arrays to a struct and its array elements are the fields in the struct. <strong>Be sure to change the BigQuery dataset name to your dataset name. </strong>
End of explanation
"""
def train_xgboost_model(ddlop, project_id:str, dataset:str, total_features:str):
#Combine user and hotel embedding features with the rating associated with each combination
query = """
CREATE OR REPLACE MODEL `{project_id}.{dataset}.recommender_hybrid_xgboost_prod`
OPTIONS(model_type='boosted_tree_classifier', input_label_cols=['rating'], AUTO_CLASS_WEIGHTS=True)
AS
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
""".format(project_id = project_id, dataset = dataset, total_features=total_features)
return ddlop(project_id, query, 'US')
def evaluate_class(project_id:str, dataset:str, class_model:str, total_features:str, location:str='US')-> NamedTuple('ClassMetrics', [('roc_auc', float)]):
query = """
SELECT
*
FROM ML.EVALUATE(MODEL `{class_model}`, (
SELECT
* EXCEPT(user_factors, hotel_factors),
{dataset}.arr_to_input_15_users(user_factors).*,
{dataset}.arr_to_input_15_hotels(hotel_factors).*
FROM
`{total_features}`
))
""".format(dataset = dataset, class_model = class_model, total_features = total_features)
print(query)
from google.cloud import bigquery
bqclient = bigquery.Client(project=project_id, location=location)
job = bqclient.query(query)
metrics_df = job.result().to_dataframe()
from collections import namedtuple
result_tuple = namedtuple('ClassMetrics', ['roc_auc'])
return result_tuple(metrics_df.loc[0].to_dict()['roc_auc'])
"""
Explanation: Train XGBoost model and evaluate it
End of explanation
"""
def export_bqml_model(project_id:str, model:str, destination:str) -> NamedTuple('ModelExport', [('destination', str)]):
import subprocess
import shutil
#command='bq extract -destination_format=ML_XGBOOST_BOOSTER -m {}:{} {}'.format(project_id, model, destination)
model_name = '{}:{}'.format(project_id, model)
print (model_name)
#subprocess.run(['bq', 'extract', '-destination_format=ML_XGBOOST_BOOSTER', '-m', model_name, destination], check=True)
subprocess.run(
(
shutil.which("bq"),
"extract",
"-destination_format=ML_XGBOOST_BOOSTER",
"--project_id=" + project_id,
"-m",
model_name,
destination
),
stderr=subprocess.PIPE,
check=True)
from collections import namedtuple
result_tuple = namedtuple('ModelExport', ['destination'])
return result_tuple(destination)
def deploy_bqml_model_vertexai(project_id:str, region:str, model_name:str, endpoint_name:str, model_dir:str, deploy_image:str, deploy_compute:str):
from google.cloud import aiplatform
parent = "projects/" + project_id + "/locations/" + region
client_options = {"api_endpoint": "{}-aiplatform.googleapis.com".format(region)}
clients = {}
#upload the model to Vertex AI
clients['model'] = aiplatform.gapic.ModelServiceClient(client_options=client_options)
model = {
"display_name": model_name,
"metadata_schema_uri": "",
"artifact_uri": model_dir,
"container_spec": {
"image_uri": deploy_image,
"command": [],
"args": [],
"env": [],
"ports": [{"container_port": 8080}],
"predict_route": "",
"health_route": ""
}
}
upload_model_response = clients['model'].upload_model(parent=parent, model=model)
print("Long running operation on uploading the model:", upload_model_response.operation.name)
model_info = clients['model'].get_model(name=upload_model_response.result(timeout=180).model)
#Create an endpoint on Vertex AI to host the model
clients['endpoint'] = aiplatform.gapic.EndpointServiceClient(client_options=client_options)
create_endpoint_response = clients['endpoint'].create_endpoint(parent=parent, endpoint={"display_name": endpoint_name})
print("Long running operation on creating endpoint:", create_endpoint_response.operation.name)
endpoint_info = clients['endpoint'].get_endpoint(name=create_endpoint_response.result(timeout=180).name)
#Deploy the model to the endpoint
dmodel = {
"model": model_info.name,
"display_name": 'deployed_'+model_name,
"dedicated_resources": {
"min_replica_count": 1,
"max_replica_count": 1,
"machine_spec": {
"machine_type": deploy_compute,
"accelerator_count": 0,
}
}
}
traffic = {
'0' : 100
}
deploy_model_response = clients['endpoint'].deploy_model(endpoint=endpoint_info.name, deployed_model=dmodel, traffic_split=traffic)
print("Long running operation on deploying the model:", deploy_model_response.operation.name)
deploy_model_result = deploy_model_response.result()
"""
Explanation: Export XGBoost model and host it on Vertex AI
One of the nice features of BigQuery ML is the ability to import and export machine learning models. In the function defined below, we are going to export the trained XGBoost model to a Google Cloud Storage bucket. We will later have Google Cloud AI Platform host this model for predictions. It is worth mentioning that you can host this model on any platform that supports Booster (XGBoost 0.82). Check out the documentation for more information on exporting BigQuery ML models and their formats.
End of explanation
"""
import kfp.v2.dsl as dsl
import kfp.v2.components as comp
import time
@dsl.pipeline(
name='hotel-recs-pipeline',
description='training pipeline for hotel recommendation prediction'
)
def training_pipeline():
import json
    #Minimum thresholds for the model metrics that determine whether the model will be deployed to inference (for the classifier, an ROC AUC of 0.5 would be basically a coin toss)
mf_msqe_threshold = 0.5
class_auc_threshold = 0.8
#Defining function containers
ddlop = comp.func_to_container_op(run_bigquery_ddl, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery'])
evaluate_mf_op = comp.func_to_container_op(evaluate_matrix_factorization_model, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery', 'google-cloud-bigquery-storage', 'pandas', 'pyarrow'], output_component_file='mf_eval.yaml')
evaluate_class_op = comp.func_to_container_op(evaluate_class, base_image=BASE_IMAGE, packages_to_install=['google-cloud-bigquery','pandas', 'pyarrow'])
export_bqml_model_op = comp.func_to_container_op(export_bqml_model, base_image=BASE_IMAGE, output_component_file='export_bqml.yaml')
deploy_bqml_model_op = comp.func_to_container_op(deploy_bqml_model_vertexai, base_image=BASE_IMAGE, packages_to_install=['google-cloud-aiplatform'])
#############################
#Defining pipeline execution graph
dataset = BQ_DATASET_NAME
#Train matrix factorization model
mf_model_output = train_matrix_factorization_model(ddlop, PROJECT_ID, dataset).set_display_name('train matrix factorization model')
mf_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
mf_model = mf_model_output.outputs['created_table']
#Evaluate matrix factorization model
mf_eval_output = evaluate_mf_op(PROJECT_ID, mf_model).set_display_name('evaluate matrix factorization model')
mf_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#mean squared quantization error
with dsl.Condition(mf_eval_output.outputs['msqe'] < mf_msqe_threshold):
#Create features for Classification model
user_features_output = create_user_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create user factors features')
user_features = user_features_output.outputs['created_table']
user_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
hotel_features_output = create_hotel_features(ddlop, PROJECT_ID, dataset, mf_model).set_display_name('create hotel factors features')
hotel_features = hotel_features_output.outputs['created_table']
hotel_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
total_features_output = combine_features(ddlop, PROJECT_ID, dataset, mf_model, hotel_features, user_features).set_display_name('combine all features')
total_features = total_features_output.outputs['created_table']
total_features_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Train XGBoost model
class_model_output = train_xgboost_model(ddlop, PROJECT_ID, dataset, total_features).set_display_name('train XGBoost model')
class_model = class_model_output.outputs['created_table']
class_model_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
#Evaluate XGBoost model
class_eval_output = evaluate_class_op(PROJECT_ID, dataset, class_model, total_features).set_display_name('evaluate XGBoost model')
class_eval_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
with dsl.Condition(class_eval_output.outputs['roc_auc'] > class_auc_threshold):
#Export model
export_destination_output = export_bqml_model_op(PROJECT_ID, class_model, MODEL_STORAGE).set_display_name('export XGBoost model')
export_destination_output.execution_options.caching_strategy.max_cache_staleness = 'P0D'
export_destination = export_destination_output.outputs['destination']
deploy_model = deploy_bqml_model_op(PROJECT_ID, REGION, class_model, ENDPOINT_NAME, MODEL_STORAGE, DEPLOY_IMAGE, DEPLOY_COMPUTE).set_display_name('Deploy XGBoost model')
deploy_model.execution_options.caching_strategy.max_cache_staleness = 'P0D'
"""
Explanation: Defining the Kubeflow Pipelines
Now that we have the necessary functions defined, we are ready to create a workflow using Kubeflow Pipelines. The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL).
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
The pipeline performs the following steps -
* Trains a Matrix Factorization model
* Evaluates the trained Matrix Factorization model; if the mean squared error is below the threshold, the pipeline continues to the next step, otherwise it stops
* Engineers new user factors feature with the Matrix Factorization model
* Engineers new hotel factors feature with the Matrix Factorization model
* Combines all the features selected (total_mobile_searches) and engineered (user factors and hotel factors) into a training dataset for the XGBoost classifier
* Trains an XGBoost classifier
* Evaluates the trained XGBoost model; if the ROC AUC score is above the threshold, the pipeline continues to the next step, otherwise it stops
* Exports the XGBoost model to a Google Cloud Storage bucket
* Deploys the XGBoost model from the Google Cloud Storage bucket to Google Cloud AI Platform for prediction
End of explanation
"""
import kfp.v2 as kfp
from kfp.v2 import compiler
pipeline_func = training_pipeline
compiler.Compiler().compile(pipeline_func=pipeline_func,
package_path='hotel_rec_pipeline_job.json')
from kfp.v2.google.client import AIPlatformClient
api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)
response = api_client.create_run_from_job_spec(
job_spec_path='hotel_rec_pipeline_job.json',
enable_caching=False,
pipeline_root=PIPELINE_ROOT # optional- use if want to override compile-time value
#parameter_values={'text': 'Hello world!'}
)
"""
Explanation: Submitting pipeline runs
You can trigger pipeline runs using the API from the KFP SDK or using the KFP CLI. Here the compiled job spec is submitted with the AIPlatformClient from the KFP SDK; pipeline parameters, if any, are passed to the run through the parameter_values argument.
End of explanation
"""
|
NathanYee/ThinkBayes2 | code/.ipynb_checkpoints/blaster-checkpoint.ipynb | gpl-2.0 | from __future__ import print_function, division
% matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import numpy as np
from thinkbayes2 import Hist, Pmf, Cdf, Suite, Beta
import thinkplot
"""
Explanation: The Alien Blaster problem
This notebook presents solutions to exercises in Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
prior = Beta(2, 3)
thinkplot.Pdf(prior.MakePmf())
prior.Mean()
"""
Explanation: Part One
In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, $x$.
Based on previous tests, the distribution of $x$ in the population of designs is well-modeled by a beta distribution with parameters $\alpha=2$ and $\beta=3$. What is the average missile's probability of shooting down an alien?
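The mean of a beta distribution is $\alpha/(\alpha+\beta)$, so here we expect $2/(2+3)=0.4$; prior.Mean() returns the same value.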
End of explanation
"""
posterior = Beta(2, 3)  # start from the Beta(2, 3) prior
posterior.Update((2, 8))
posterior.MAP()
"""
Explanation: In its first test, the new Alien Blaster 9000 takes 10 shots and hits 2 targets. Taking into account this data, what is the posterior distribution of $x$ for this missile? What is the value in the posterior with the highest probability, also known as the MAP?
End of explanation
"""
from scipy import stats
class AlienBlaster(Suite):
def Likelihood(self, data, hypo):
"""Computes the likeliood of data under hypo.
data: number of shots they took
hypo: probability of a hit, p
"""
n = data
x = hypo
        # specific version for n=2 shots (kept for reference; overwritten by the general version below)
likes = [x**4, (1-x)**4, (2*x*(1-x))**2]
# general version for any n shots
likes = [stats.binom.pmf(k, n, x)**2 for k in range(n+1)]
return np.sum(likes)
"""
Explanation: Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: "The same number of targets were hit in the two tests, so we have reason to think this new design is consistent."
Write a class called AlienBlaster that inherits from Suite and provides a likelihood function that takes this data -- two shots and a tie -- and computes the likelihood of the data for each hypothetical value of $x$. If you would like a challenge, write a version that works for any number of shots.
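For two tests of $n$ shots each, a tie means both tests hit the same number of targets $k$, so the likelihood of the data is $\sum_{k=0}^{n} \left[\binom{n}{k} x^k (1-x)^{n-k}\right]^2$; this is what the general version of Likelihood computes.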
End of explanation
"""
pmf = Beta(1, 1).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
"""
Explanation: If we start with a uniform prior, we can see what the likelihood function looks like:
End of explanation
"""
pmf = Beta(2, 3).MakePmf()
blaster = AlienBlaster(pmf)
blaster.Update(2)
thinkplot.Pdf(blaster)
"""
Explanation: A tie is most likely if they are both terrible shots or both very good.
Is this data good or bad; that is, does it increase or decrease your estimate of $x$ for the Alien Blaster 10K?
Now let's run it with the specified prior and see what happens when we multiply the concave (unimodal) prior and the convex (U-shaped) likelihood:
End of explanation
"""
prior.Mean(), blaster.Mean()
prior.MAP(), blaster.MAP()
"""
Explanation: The posterior mean and MAP are lower than in the prior.
End of explanation
"""
k = 3
n = 10
x1 = 0.3
x2 = 0.4
0.3 * stats.binom.pmf(k, n, x1) + 0.7 * stats.binom.pmf(k, n, x2)
"""
Explanation: So if we learn that the new design is "consistent", it is more likely to be consistently bad (in this case).
Part Two
Suppose we
have a stockpile of 3 Alien Blaster 9000s and 7 Alien
Blaster 10Ks. After extensive testing, we have concluded that
the AB9000 hits the target 30% of the time, precisely, and the
AB10K hits the target 40% of the time.
If I grab a random weapon from the stockpile and shoot at 10 targets,
what is the probability of hitting exactly 3? Again, you can write a
number, mathematical expression, or Python code.
End of explanation
"""
def flip(p):
return np.random.random() < p
def simulate_shots(n, p):
return np.random.binomial(n, p)
ks = []
for i in range(1000):
if flip(0.3):
k = simulate_shots(n, x1)
else:
k = simulate_shots(n, x2)
ks.append(k)
"""
Explanation: The answer is a value drawn from the mixture of the two distributions.
Continuing the previous problem, let's estimate the distribution
of k, the number of successful shots out of 10.
Write a few lines of Python code to simulate choosing a random weapon and firing it.
Write a loop that simulates the scenario and generates random values of k 1000 times.
Store the values of k you generate and plot their distribution.
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
len(ks), np.mean(ks)
"""
Explanation: Here's what the distribution looks like.
End of explanation
"""
xs = np.random.choice(a=[x1, x2], p=[0.3, 0.7], size=1000)
Hist(xs)
"""
Explanation: The mean should be near 3.7. We can run this simulation more efficiently using NumPy. First we generate a sample of xs:
End of explanation
"""
ks = np.random.binomial(n, xs)
"""
Explanation: Then for each x we generate a k:
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
"""
Explanation: And the results look similar.
End of explanation
"""
from thinkbayes2 import MakeBinomialPmf
pmf1 = MakeBinomialPmf(n, x1)
pmf2 = MakeBinomialPmf(n, x2)
metapmf = Pmf({pmf1:0.3, pmf2:0.7})
metapmf.Print()
"""
Explanation: One more way to do the same thing is to make a meta-Pmf, which contains the two binomial Pmf objects:
End of explanation
"""
ks = [metapmf.Random().Random() for _ in range(1000)]
"""
Explanation: Here's how we can draw samples from the meta-Pmf:
End of explanation
"""
pmf = Pmf(ks)
thinkplot.Hist(pmf)
np.mean(ks)
"""
Explanation: And here are the results, one more time:
End of explanation
"""
from thinkbayes2 import MakeMixture
mix = MakeMixture(metapmf)
thinkplot.Hist(mix)
mix.Mean()
"""
Explanation: This result, which we have estimated three ways, is a predictive distribution, based on our uncertainty about x.
We can compute the mixture analytically using thinkbayes2.MakeMixture:
def MakeMixture(metapmf, label='mix'):
"""Make a mixture distribution.
Args:
metapmf: Pmf that maps from Pmfs to probs.
label: string label for the new Pmf.
Returns: Pmf object.
"""
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for k, p2 in pmf.Items():
mix[k] += p1 * p2
return mix
The outer loop iterates through the Pmfs; the inner loop iterates through the items.
So p1 is the probability of choosing a particular Pmf; p2 is the probability of choosing a value from the Pmf.
In the example, each Pmf is associated with a value of x (probability of hitting a target). The inner loop enumerates the values of k (number of targets hit after 10 shots).
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/27d6cff3f645408158cdf4f3f05a21b6/30_eeg_erp.ipynb | bsd-3-clause | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import mne
root = mne.datasets.sample.data_path() / 'MEG' / 'sample'
raw_file = root / 'sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_file, preload=False)
events_file = root / 'sample_audvis_filt-0-40_raw-eve.fif'
events = mne.read_events(events_file)
raw.crop(tmax=90) # in seconds (happens in-place)
# discard events >90 seconds (not strictly necessary, but avoids some warnings)
events = events[events[:, 0] <= raw.last_samp]
"""
Explanation: EEG analysis - Event-Related Potentials (ERPs)
This tutorial shows how to perform standard ERP analyses in MNE-Python. Most of
the material here is covered in other tutorials too, but for convenience the
functions and methods most useful for ERP analyses are collected here, with
links to other tutorials where more detailed information is given.
As usual we'll start by importing the modules we need and loading some example
data. Instead of parsing the events from the raw data's :term:stim channel
(like we do in this tutorial <tut-events-vs-annotations>), we'll load
the events from an external events file. Finally, to speed up computations
we'll crop the raw data from ~4.5 minutes down to 90 seconds.
End of explanation
"""
raw.pick(['eeg', 'eog']).load_data()
raw.info
"""
Explanation: The file that we loaded has already been partially processed: 3D sensor
locations have been saved as part of the .fif file, the data have been
low-pass filtered at 40 Hz, and a common average reference is set for the
EEG channels, stored as a projector (see section-avg-ref-proj in the
tut-set-eeg-ref tutorial for more info about when you may want to do
this). We'll discuss how to do each of these below.
Since this is a combined EEG/MEG dataset, let's start by restricting the data
to just the EEG and EOG channels. This will cause the other projectors saved
in the file (which apply only to magnetometer channels) to be removed. By
looking at the measurement info we can see that we now have 59 EEG channels
and 1 EOG channel.
End of explanation
"""
channel_renaming_dict = {name: name.replace(' 0', '').lower()
for name in raw.ch_names}
_ = raw.rename_channels(channel_renaming_dict) # happens in-place
"""
Explanation: Channel names and types
In practice it is quite common to have some channels labeled as EEG that are
actually EOG channels. :class:~mne.io.Raw objects have a
:meth:~mne.io.Raw.set_channel_types method that can be used to change a
channel that is mislabeled as eeg to eog.
You can also rename channels using :meth:~mne.io.Raw.rename_channels.
Detailed examples of both of these methods can be found in the tutorial
tut-raw-class.
In our data set, all channel types are already correct. Therefore, we'll only
remove a space and a leading zero in the channel names and convert to
lowercase:
End of explanation
"""
raw.plot_sensors(show_names=True)
fig = raw.plot_sensors('3d')
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>The assignment to a temporary name ``_`` (the ``_ =`` part) is included
here to suppress automatic printing of the ``raw`` object. You do not
have to do this in your interactive analysis.</p></div>
Channel locations
The tutorial tut-sensor-locations describes how sensor locations are
handled in great detail. To briefly summarize: MNE-Python distinguishes
:term:montages <montage> (which contain 3D sensor locations x, y,
and z, in meters) from :term:layouts <layout> (which define 2D sensor
arrangements for plotting schematic sensor location diagrams). Additionally,
montages may specify idealized sensor locations (based on, e.g., an
idealized spherical head model), or they may contain realistic sensor
locations obtained by digitizing the 3D locations of the sensors when placed
on a real person's head.
This dataset has realistic digitized 3D sensor locations saved as part of the
.fif file, so we can view the sensor locations in 2D or 3D using the
:meth:~mne.io.Raw.plot_sensors method:
End of explanation
"""
for proj in (False, True):
with mne.viz.use_browser_backend('matplotlib'):
fig = raw.plot(n_channels=5, proj=proj, scalings=dict(eeg=50e-6),
show_scrollbars=False)
fig.subplots_adjust(top=0.9) # make room for title
ref = 'Average' if proj else 'No'
fig.suptitle(f'{ref} reference', size='xx-large', weight='bold')
"""
Explanation: If you're working with a standard montage like the 10–20
system, you can add sensor locations to the data with
raw.set_montage('standard_1020') (see tut-sensor-locations for
information on other standard montages included with MNE-Python).
If you have digitized realistic sensor locations, there are dedicated
functions for loading those digitization files into MNE-Python (see
reading-dig-montages for discussion and dig-formats for a list
of supported formats). Once loaded, the digitized sensor locations can be
added to the data by passing the loaded montage object to
:meth:~mne.io.Raw.set_montage.
Setting the EEG reference
As mentioned, this data already has an EEG common average reference
added as a :term:projector. We can view the effect of this projector on the
raw data by plotting it with and without the projector applied:
End of explanation
"""
raw.filter(l_freq=0.1, h_freq=None)
"""
Explanation: The referencing scheme can be changed with the function
:func:mne.set_eeg_reference (which by default operates on a copy of the
data) or the :meth:raw.set_eeg_reference() <mne.io.Raw.set_eeg_reference>
method (which always modifies the data in-place). The tutorial
tut-set-eeg-ref shows several examples.
Filtering
MNE-Python has extensive support for different ways of filtering data. For a
general discussion of filter characteristics and MNE-Python defaults, see
disc-filtering. For practical examples of how to apply filters to your
data, see tut-filter-resample. Here, we'll apply a simple high-pass
filter for illustration:
End of explanation
"""
np.unique(events[:, -1])
"""
Explanation: Evoked responses: epoching and averaging
The general process for extracting evoked responses from continuous data is
to use the :class:~mne.Epochs constructor, and then average the resulting
epochs to create an :class:~mne.Evoked object. In MNE-Python, events are
represented as a :class:NumPy array <numpy.ndarray> containing event
latencies (in samples) and integer event codes. The event codes are stored in
the last column of the events array:
End of explanation
"""
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'buttonpress': 32}
"""
Explanation: The tut-event-arrays tutorial discusses event arrays in more detail.
Integer event codes are mapped to more descriptive text using a Python
:class:dictionary <dict> usually called event_id. This mapping is
determined by your experiment (i.e., it reflects which event codes you chose
to represent different experimental events or conditions). The
sample-dataset data uses the following mapping:
End of explanation
"""
epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.3, tmax=0.7,
preload=True)
fig = epochs.plot(events=events)
"""
Explanation: Now we can proceed to epoch the continuous data. An interactive plot allows
us to click on epochs to mark them as "bad" and drop them from the
analysis (it is not interactive on this documentation website, but will be
when you run epochs.plot() <mne.Epochs.plot> in a Python console).
End of explanation
"""
reject_criteria = dict(eeg=100e-6, eog=200e-6) # 100 µV, 200 µV
epochs.drop_bad(reject=reject_criteria)
"""
Explanation: It is also possible to automatically drop epochs (either when first creating
them or later on) by providing maximum peak-to-peak signal value thresholds
(passed to :class:~mne.Epochs as the reject parameter; see
tut-reject-epochs-section for details). You can also do this after
the epochs are already created using :meth:~mne.Epochs.drop_bad:
End of explanation
"""
epochs.plot_drop_log()
"""
Explanation: Next, we generate a barplot of which channels contributed most to epochs
getting rejected. If one channel is responsible for many epoch rejections,
it may be worthwhile to mark that channel as "bad" in the
:class:~mne.io.Raw object and then re-run epoching (fewer channels with
more good epochs may be preferable to keeping all channels but losing many
epochs). See tut-bad-channels for more information.
End of explanation
"""
l_aud = epochs['auditory/left'].average()
l_vis = epochs['visual/left'].average()
"""
Explanation: Epochs can also be dropped automatically if the event around which the epoch
is created is too close to the start or end of the :class:~mne.io.Raw
object (e.g., if the epoch would extend past the end of the recording; this
is the cause for the "TOO_SHORT" entry in the
:meth:~mne.Epochs.plot_drop_log plot).
Epochs may also be dropped automatically if the :class:~mne.io.Raw object
contains :term:annotations that begin with either bad or edge
("edge" annotations are automatically inserted when concatenating two or more
:class:~mne.io.Raw objects). See tut-reject-data-spans for more
information on annotation-based epoch rejection.
Now that we've dropped all bad epochs, let's look at our evoked responses for
some conditions we care about. Here, the :meth:~mne.Epochs.average method
will create an :class:~mne.Evoked object, which we can then plot. Notice
that we select which condition we want to average using square-bracket
indexing (like for a :class:dictionary <dict>). This returns a subset with
only the desired epochs, which we then average:
End of explanation
"""
fig1 = l_aud.plot()
fig2 = l_vis.plot(spatial_colors=True)
"""
Explanation: These :class:~mne.Evoked objects have their own interactive plotting method
(though again, it won't be interactive on the documentation website).
Clicking and dragging a span of time will generate a topography of scalp
potentials for the selected time segment. Here, we also demonstrate built-in
color-coding the channel traces by location:
End of explanation
"""
l_aud.plot_topomap(times=[-0.2, 0.1, 0.4], average=0.05)
"""
Explanation: Scalp topographies can also be obtained non-interactively with the
:meth:~mne.Evoked.plot_topomap method. Here, we display topomaps of the
average evoked potential in 50 ms time windows centered at -200 ms, 100 ms,
and 400 ms.
End of explanation
"""
l_aud.plot_joint()
"""
Explanation: Considerable customization of these plots is possible, see the docstring of
:meth:~mne.Evoked.plot_topomap for details.
There is also a built-in method for combining butterfly plots of the signals
with scalp topographies called :meth:~mne.Evoked.plot_joint. Like in
:meth:~mne.Evoked.plot_topomap, you can specify times for the scalp
topographies or you can let the method choose times automatically as shown
here:
End of explanation
"""
for evk in (l_aud, l_vis):
evk.plot(gfp=True, spatial_colors=True, ylim=dict(eeg=[-12, 12]))
"""
Explanation: Global field power (GFP)
Global field power :footcite:Lehmann1980,Lehmann1984,Murray2008 is,
generally speaking, a measure of agreement of the signals picked up by all
sensors across the entire scalp: if all sensors have the same value at a
given time point, the GFP will be zero at that time point. If the signals
differ, the GFP will be non-zero at that time point. GFP
peaks may reflect "interesting" brain activity, warranting further
investigation. Mathematically, the GFP is the population standard
deviation across all sensors, calculated separately for every time point.
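In symbols, $\mathrm{GFP}(t) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(V_i(t)-\bar{V}(t)\right)^2}$ for $N$ sensors with potentials $V_i(t)$ and mean $\bar{V}(t)$ across sensors.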
You can plot the GFP using evoked.plot(gfp=True) <mne.Evoked.plot>. The GFP
trace will be black if spatial_colors=True and green otherwise. The EEG
reference does not affect the GFP:
End of explanation
"""
l_aud.plot(gfp='only')
"""
Explanation: To plot the GFP by itself, you can pass gfp='only' (this makes it easier
to read off the GFP data values, because the scale is aligned):
End of explanation
"""
gfp = l_aud.data.std(axis=0, ddof=0)
# Reproducing the MNE-Python plot style seen above
fig, ax = plt.subplots()
ax.plot(l_aud.times, gfp * 1e6, color='lime')
ax.fill_between(l_aud.times, gfp * 1e6, color='lime', alpha=0.2)
ax.set(xlabel='Time (s)', ylabel='GFP (µV)', title='EEG')
"""
Explanation: The GFP is the population standard deviation of the signal
across channels. To compute it manually, we can leverage the fact that
evoked.data <mne.Evoked.data> is a :class:NumPy array <numpy.ndarray>,
and verify by plotting it using plain Matplotlib commands:
End of explanation
"""
left = ['eeg17', 'eeg18', 'eeg25', 'eeg26']
right = ['eeg23', 'eeg24', 'eeg34', 'eeg35']
left_ix = mne.pick_channels(l_aud.info['ch_names'], include=left)
right_ix = mne.pick_channels(l_aud.info['ch_names'], include=right)
"""
Explanation: Averaging across channels with regions of interest
Since our sample data contains responses to left and right auditory and
visual stimuli, we may want to compare left versus right regions of interest
(ROIs). To average across channels in a given ROI, we first find the relevant
channel indices. Revisiting the 2D sensor plot above, we might choose the
following channels for left and right ROIs, respectively:
End of explanation
"""
roi_dict = dict(left_ROI=left_ix, right_ROI=right_ix)
roi_evoked = mne.channels.combine_channels(l_aud, roi_dict, method='mean')
print(roi_evoked.info['ch_names'])
roi_evoked.plot()
"""
Explanation: Now we can create a new Evoked object with two virtual channels (one for each
ROI):
End of explanation
"""
evokeds = dict(auditory=l_aud, visual=l_vis)
picks = [f'eeg{n}' for n in range(10, 15)]
mne.viz.plot_compare_evokeds(evokeds, picks=picks, combine='mean')
"""
Explanation: Comparing conditions
If we wanted to contrast auditory to visual stimuli, a useful function is
:func:mne.viz.plot_compare_evokeds. By default, this function will combine
all channels in each evoked object using GFP (or RMS for MEG channels); here
instead we specify to combine by averaging, and restrict it to a subset of
channels by passing picks:
End of explanation
"""
evokeds = dict(auditory=list(epochs['auditory/left'].iter_evoked()),
visual=list(epochs['visual/left'].iter_evoked()))
mne.viz.plot_compare_evokeds(evokeds, combine='mean', picks=picks)
"""
Explanation: We can also generate confidence intervals by treating each epoch as a
separate observation using :meth:~mne.Epochs.iter_evoked. A confidence
interval across subjects could also be obtained by passing a list of
:class:~mne.Evoked objects (one per subject) to the
:func:~mne.viz.plot_compare_evokeds function.
End of explanation
"""
aud_minus_vis = mne.combine_evoked([l_aud, l_vis], weights=[1, -1])
aud_minus_vis.plot_joint()
"""
Explanation: We can also compare conditions by subtracting one :class:~mne.Evoked object
from another using the :func:mne.combine_evoked function (this function
also supports pooling of epochs without subtraction).
End of explanation
"""
grand_average = mne.grand_average([l_aud, l_vis])
print(grand_average)
"""
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>The code above yields an **equal-weighted difference**. If you have
different numbers of epochs per condition, you might want to equalize the
number of events per condition first by using
`epochs.equalize_event_counts() <mne.Epochs.equalize_event_counts>`
before averaging.</p></div>
Grand averages
To compute grand averages across conditions (or subjects), you can pass a
list of :class:~mne.Evoked objects to :func:mne.grand_average. The result
is another :class:~mne.Evoked object.
End of explanation
"""
list(event_dict)
"""
Explanation: For combining conditions it is also possible to make use of :term:HED
tags in the condition names when selecting which epochs to average. For
example, we have the condition names:
End of explanation
"""
epochs['auditory'].average()
"""
Explanation: We can select the auditory conditions (left and right together) by passing:
End of explanation
"""
# Define a function to print out the channel (ch) containing the
# peak latency (lat; in msec) and amplitude (amp, in µV), with the
# time range (tmin and tmax) that was searched.
# This function will be used throughout the remainder of the tutorial.
def print_peak_measures(ch, tmin, tmax, lat, amp):
print(f'Channel: {ch}')
print(f'Time Window: {tmin * 1e3:.3f} - {tmax * 1e3:.3f} ms')
print(f'Peak Latency: {lat * 1e3:.3f} ms')
print(f'Peak Amplitude: {amp * 1e6:.3f} µV')
# Get peak amplitude and latency from a good time window that contains the peak
good_tmin, good_tmax = 0.08, 0.12
ch, lat, amp = l_vis.get_peak(ch_type='eeg', tmin=good_tmin, tmax=good_tmax,
mode='pos', return_amplitude=True)
# Print output from the good time window that contains the peak
print('** PEAK MEASURES FROM A GOOD TIME WINDOW **')
print_peak_measures(ch, good_tmin, good_tmax, lat, amp)
"""
Explanation: See tut-section-subselect-epochs for more details on that.
The tutorials tut-epochs-class and tut-evoked-class have many
more details about working with the :class:~mne.Epochs and
:class:~mne.Evoked classes.
Amplitude and latency measures
It is common in ERP research to extract measures of amplitude or latency to
compare across different conditions. There are many measures that can be
extracted from ERPs, and many of these are detailed (including the respective
strengths and weaknesses) in chapter 9 of Luck :footcite:Luck2014 (also see
the Measurement Tool in the ERPLAB Toolbox
:footcite:Lopez-CalderonLuck2014).
This part of the tutorial will demonstrate how to extract three common
measures:
Peak latency
Peak amplitude
Mean amplitude
Peak latency and amplitude
The most common measures of amplitude and latency are peak measures.
Peak measures are basically the maximum amplitude of the signal in a
specified time window and the time point (or latency) at which the peak
amplitude occurred.
Peak measures can be obtained using the :meth:~mne.Evoked.get_peak method.
There are two important things to point out about
:meth:~mne.Evoked.get_peak. First, it finds the strongest peak
looking across all channels of the selected type that are available in
the :class:~mne.Evoked object. As a consequence, if you want to restrict
the search to a group of channels or a single channel, you
should first use the :meth:~mne.Evoked.pick or
:meth:~mne.Evoked.pick_channels methods. Second, the
:meth:~mne.Evoked.get_peak method can find different types of peaks using
the mode argument. There are three options:
mode='pos': finds the peak with a positive voltage (ignores
negative voltages)
mode='neg': finds the peak with a negative voltage (ignores
positive voltages)
mode='abs': finds the peak with the largest absolute voltage
regardless of sign (positive or negative)
The following example demonstrates how to find the first positive peak in the
ERP (i.e., the P100) for the left visual condition (i.e., the
l_vis :class:~mne.Evoked object). The time window used to search for
the peak ranges from 0.08 to 0.12 s. This time window was selected because it
is when P100 typically occurs. Note that all 'eeg' channels are submitted
to the :meth:~mne.Evoked.get_peak method.
End of explanation
"""
# First, return a copy of l_vis to select the channel from
l_vis_roi = l_vis.copy().pick('eeg59')
# Get the peak and latency measure from the selected channel
ch_roi, lat_roi, amp_roi = l_vis_roi.get_peak(
tmin=good_tmin, tmax=good_tmax, mode='pos', return_amplitude=True)
# Print output
print('** PEAK MEASURES FOR ONE CHANNEL FROM A GOOD TIME WINDOW **')
print_peak_measures(ch_roi, good_tmin, good_tmax, lat_roi, amp_roi)
"""
Explanation: The output shows that channel eeg55 had the maximum positive peak in
the chosen time window from all of the 'eeg' channels searched.
In practice, one might want to pull out the peak for
an a priori region of interest or a single channel depending on the study.
This can be done by combining the :meth:~mne.Evoked.pick
or :meth:~mne.Evoked.pick_channels methods with the
:meth:~mne.Evoked.get_peak method.
Here, let's assume we believe the effects of interest will occur
at eeg59.
End of explanation
"""
# Get BAD peak measures
bad_tmin, bad_tmax = 0.095, 0.135
ch_roi, bad_lat_roi, bad_amp_roi = l_vis_roi.get_peak(
mode='pos', tmin=bad_tmin, tmax=bad_tmax, return_amplitude=True)
# Print output
print('** PEAK MEASURES FOR ONE CHANNEL FROM A BAD TIME WINDOW **')
print_peak_measures(ch_roi, bad_tmin, bad_tmax, bad_lat_roi, bad_amp_roi)
"""
Explanation: While the peak latencies are the same in channels eeg55 and eeg59,
the peak amplitudes differ. This approach can also be applied to virtual
channels created with the :func:~mne.channels.combine_channels function and
difference waves created with the :func:mne.combine_evoked function (see
aud_minus_vis in section Comparing conditions_ above).
Peak measures are very susceptible to high frequency noise in the
signal (for discussion, see :footcite:Luck2014). Specifically, high
frequency noise positively biases peak amplitude measures. This bias can
confound comparisons across conditions where ERPs differ in the level of high
frequency noise, such as when the conditions differ in the number of trials
contributing to the ERP. One way to avoid this is to apply a non-causal
low-pass filter to the ERP. Low-pass filters reduce the contribution of high
frequency noise by smoothing out fast (i.e., high frequency) fluctuations in
the signal (see disc-filtering). While this can reduce the positive
bias in peak amplitude measures caused by high frequency noise, low-pass
filtering the ERP can introduce challenges in interpreting peak latency
measures for effects of interest :footcite:Rousselet2012,VanRullen2011.
If using peak measures, it is critical to visually inspect the data to
make sure the selected time window actually contains a peak. The
:meth:~mne.Evoked.get_peak method detects the maximum or minimum voltage in
the specified time range and returns the latency and amplitude of this peak.
There is no guarantee that this method will return an actual peak. Instead,
it may return a value on the rising or falling edge of a peak we are trying
to find.
The following example demonstrates why visual inspection is crucial. Below,
we use a known bad time window (0.095 to 0.135 s) to search for a peak in
channel eeg59.
End of explanation
"""
fig, axs = plt.subplots(nrows=2, ncols=1, layout='tight')
words = (('Bad', 'missing'), ('Good', 'finding'))
times = (np.array([bad_tmin, bad_tmax]), np.array([good_tmin, good_tmax]))
colors = ('C1', 'C0')
for ix, ax in enumerate(axs):
title = '{} time window {} peak'.format(*words[ix])
l_vis_roi.plot(axes=ax, time_unit='ms', show=False, titles=title)
ax.plot(lat_roi * 1e3, amp_roi * 1e6, marker='*', color='C6')
ax.axvspan(*(times[ix] * 1e3), facecolor=colors[ix], alpha=0.3)
ax.set_xlim(-50, 150) # Show zoomed in around peak
"""
Explanation: If all we had were the above values, it would be unclear if they are truly
identifying a peak in the ERP. In fact, the 0.095 to 0.135 s time window
actually does not contain the true peak, which is shown in the top panel
below. The bad time window (highlighted in orange) does not contain the true
peak (the pink star). In contrast, the time window defined initially (0.08 to
0.12 s; highlighted in blue) returns an actual peak instead of just a
maximum or minimum in the searched time window. Visual inspection will always
help you to convince yourself that the returned values are actual peaks.
End of explanation
"""
# Select all of the channels and crop to the time window
channels = ['eeg54', 'eeg57', 'eeg55', 'eeg59']
hemisphere = ['left', 'left', 'right', 'right']
l_vis_mean_roi = l_vis.copy().pick(channels).crop(
tmin=good_tmin, tmax=good_tmax)
# Extract mean amplitude in µV over time
mean_amp_roi = l_vis_mean_roi.data.mean(axis=1) * 1e6
# Store the data in a data frame
mean_amp_roi_df = pd.DataFrame({
'ch_name': l_vis_mean_roi.ch_names,
    'hemisphere': hemisphere,
'mean_amp': mean_amp_roi
})
# Print the data frame
print(mean_amp_roi_df.groupby('hemisphere').mean())
"""
Explanation: Mean Amplitude
Another common practice in ERP studies is to define a component (or effect)
as the mean amplitude within a specified time window. One advantage of this
approach is that it is less sensitive to high frequency noise (compared to
peak amplitude measures), because averaging over a time window acts as a
low-pass filter (see discussion in the previous section
Peak latency and amplitude_).
When using mean amplitude measures, selecting the time window based on
the effect of interest (e.g., the difference between two conditions) can
inflate the likelihood of finding false positives in your
results :footcite:LuckGaspelin2017. There are other, and
better, ways to identify a time window to use for extracting mean amplitude
measures. First, you can use an a priori time window based on prior
research.
A second option is to define a time window from an independent condition or
set of trials not used in the analysis (e.g., a "localizer"). A third
approach is
to define a time window using the across-condition grand average. This latter
approach is not circular because the across-condition mean and condition
difference are independent of one another. The issues discussed above also
apply to selecting channels used for analysis.
The following example demonstrates how to pull out the mean amplitude
from the left visual condition (i.e., the l_vis :class:~mne.Evoked
object) from selected channels and time windows. Stimulating the
left visual field increases neural activity of visual cortex in the
contralateral (i.e., right) hemisphere. We can test this by examining the
amplitude of the ERP for left visual field stimulation over right
(contralateral) and left (ipsilateral) channels. The channels used for this
analysis are eeg54 and eeg57 (left hemisphere), and eeg59 and
eeg55 (right hemisphere). The time window used is 0.08 (good_tmin)
to 0.12 s (good_tmax) as it corresponds to when the P100 typically
occurs.
The P100 is sensitive to left and right visual field stimulation. The mean
amplitude is extracted from the above four channels and stored in a
:class:pandas.DataFrame.
End of explanation
"""
# Extract mean amplitude for all channels in l_vis (including `eog`)
l_vis_cropped = l_vis.copy().crop(tmin=good_tmin, tmax=good_tmax)
mean_amp_all = l_vis_cropped.data.mean(axis=1) * 1e6
mean_amp_all_df = pd.DataFrame({
'ch_name': l_vis_cropped.info['ch_names'],
'mean_amp': mean_amp_all
})
mean_amp_all_df['tmin'] = good_tmin
mean_amp_all_df['tmax'] = good_tmax
mean_amp_all_df['condition'] = 'Left/Visual'
with pd.option_context('display.max_columns', None):
print(mean_amp_all_df.head())
print(mean_amp_all_df.tail())
"""
Explanation: As demonstrated in this example, the mean amplitude was higher and
positive in right compared to left hemisphere channels. It should be
reiterated that both spatial and temporal windows used in the analysis should
be determined in an independent manner (e.g., defined a priori from prior
research, a "localizer" or another independent condition) and not based
on the data you will use to test your hypotheses.
The example can be modified to extract the mean amplitude
from all channels and store the resulting output in a
:class:pandas.DataFrame. This can be useful for statistical analyses
conducted in other programming languages.
End of explanation
"""
|
adityaka/misc_scripts | python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/04_01/Final/Create.ipynb | bsd-3-clause | import pandas as pd
import numpy as np
"""
Explanation: Creating Data Frames
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html
DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it
like a spreadsheet or SQL table, or a dict of Series objects.
You can create a data frame using:
- Dict of 1D ndarrays, lists, dicts, or Series
- 2-D numpy.ndarray
- Structured or record ndarray
- A Series
- Another DataFrame
Data Frame attributes
| Attribute | Description |
|-----------|-------------|
| T | Transpose index and columns |
| at | Fast label-based scalar accessor |
| axes | Return a list with the row axis labels and column axis labels as the only members. |
| blocks | Internal property, property synonym for as_blocks() |
| dtypes | Return the dtypes in this object. |
| empty | True if NDFrame is entirely empty [no items], meaning any of the axes are of length 0. |
| ftypes | Return the ftypes (indication of sparse/dense and dtype) in this object. |
| iat | Fast integer location scalar accessor. |
| iloc | Purely integer-location based indexing for selection by position. |
| is_copy | |
| ix | A primarily label-location based indexer, with integer position fallback. |
| loc | Purely label-location based indexer for selection by label. |
| ndim | Number of axes / array dimensions |
| shape | Return a tuple representing the dimensionality of the DataFrame. |
| size | Number of elements in the NDFrame |
| style | Property returning a Styler object containing methods for building a styled HTML representation of the DataFrame. |
| values | Numpy representation of NDFrame |
End of explanation
"""
my_dictionary = {'a' : 45., 'b' : -19.5, 'c' : 4444}
print(my_dictionary.keys())
print(my_dictionary.values())
my_dictionary_df = pd.DataFrame(my_dictionary, index=['first', 'again'])
my_dictionary_df
"""
Explanation: Creating data frames from various data types
documentation: http://pandas.pydata.org/pandas-docs/stable/dsintro.html
cookbook: http://pandas.pydata.org/pandas-docs/stable/cookbook.html
create data frame from Python dictionary
End of explanation
"""
cookbook_df = pd.DataFrame({'AAA' : [4,5,6,7], 'BBB' : [10,20,30,40],'CCC' : [100,50,-30,-50]})
cookbook_df
"""
Explanation: constructor without explicit index
End of explanation
"""
series_dict = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
series_df = pd.DataFrame(series_dict)
series_df
"""
Explanation: constructor contains dictionary with Series as values
End of explanation
"""
produce_dict = {'veggies': ['potatoes', 'onions', 'peppers', 'carrots'],
'fruits': ['apples', 'bananas', 'pineapple', 'berries']}
produce_dict
pd.DataFrame(produce_dict)
"""
Explanation: dictionary of lists
End of explanation
"""
data2 = [{'a': 1, 'b': 2}, {'a': 5, 'b': 10, 'c': 20}]
pd.DataFrame(data2)
"""
Explanation: list of dictionaries
End of explanation
"""
pd.DataFrame({('a', 'b'): {('A', 'B'): 1, ('A', 'C'): 2},
('a', 'a'): {('A', 'C'): 3, ('A', 'B'): 4},
('a', 'c'): {('A', 'B'): 5, ('A', 'C'): 6},
('b', 'a'): {('A', 'C'): 7, ('A', 'B'): 8},
('b', 'b'): {('A', 'D'): 9, ('A', 'B'): 10}})
"""
Explanation: dictionary of tuples, with multi index
End of explanation
"""
|
vortex-exoplanet/VIP | docs/source/tutorials/06_fm_disk.ipynb | mit | %matplotlib inline
from hciplot import plot_frames, plot_cubes
from matplotlib.pyplot import *
from matplotlib import pyplot as plt
import numpy as np
from packaging import version
"""
Explanation: 6. ADI forward modeling of disks
Author: Julien Milli
Last update: 23/03/2022
Suitable for VIP v1.0.0 onwards.
Table of contents
6.1. Introduction
6.1.1. Overview
6.1.2. Parametrisation of the density distribution of dust
6.2. Examples of disks
6.2.1. Symmetric pole-on disk
6.2.2. Inclined symmetric disk
6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
6.2.3.2. Double Henyey-Greenstein phase function
6.2.3.3. Custom phase function
6.2.3.4. Representing a polarised phase function
6.2.4. Asymmetric disk
6.3. Forward modeling of disks
This tutorial shows:
how to generate different models of synthetic (debris) disks;
how to inject model disks in ADI cubes, for forward modeling.
Let's first import a couple of external packages needed in this tutorial:
End of explanation
"""
import vip_hci as vip
vvip = vip.__version__
print("VIP version: ", vvip)
if version.parse(vvip) < version.parse("1.0.0"):
msg = "Please upgrade your version of VIP"
msg+= "It should be 1.0.0 or above to run this notebook."
raise ValueError(msg)
elif version.parse(vvip) <= version.parse("1.0.3"):
from vip_hci.conf import time_ini, timing
from vip_hci.medsub import median_sub
from vip_hci.metrics import cube_inject_fakedisk, ScatteredLightDisk
else:
from vip_hci.config import time_ini, timing
from vip_hci.fm import cube_inject_fakedisk, ScatteredLightDisk
from vip_hci.psfsub import median_sub
# common to all versions:
from vip_hci.var import create_synth_psf
"""
Explanation: In the following box we import all the VIP routines that will be used in this tutorial.
The path to some routines has changed between versions 1.0.3 and 1.1.0, which saw a major revamp of the modular architecture, hence the if statements.
End of explanation
"""
pixel_scale=0.01225 # pixel scale in arcsec/px
dstar= 80 # distance to the star in pc
nx = 200 # number of pixels of your image in X
ny = 200 # number of pixels of your image in Y
"""
Explanation: 6.1. Introduction
6.1.1. Overview
The functions implemented in vip_hci for disks are located in the scattered_light_disk module (under vip_hci.metrics for VIP <= 1.0.3 and vip_hci.fm for later versions, as reflected in the imports above). It contains the definition of a class called ScatteredLightDisk which can produce a synthetic image of a disk, and also utility functions to create cubes of images where a synthetic disk has been injected at specific position angles to simulate a real observation.
Currently there is no utility function to do forward modelling and try to find the best disk matching a given dataset as this is usually specific to each dataset.
Keep in mind that ScatteredLightDisk is only a ray-tracing approach and does not contain any physics in it (no radiative transfer, no particle cross-section). It assumes the particle number density around a star follows the mathematical prescription given in section 6.1.2 and uses a unity scattering cross-section for all particles (no particle size distribution or size-dependent cross-section), so the flux of the synthetic disk cannot be converted into physical units (e.g. Jy).
6.1.2. Parametrisation of the density distribution of dust
The density distribution of dust particles is parametrized in a cylindrical coordinate system $\rho(r,\theta,z)$ and is described by the equation:
$\rho(r,\theta,z) = \rho_0 \times \left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2a_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2a_{out}} }\right)^{1/2} \times e^{\left[ -\left( \frac{z}{H(r) }\right)^\gamma \right]}$
where $R(\theta)$ is called the reference radius. It is simply the radius of the disk $a$ if the dust distribution is centrally symmetric (no eccentricity). If the disk is eccentric, then $R(\theta)$ depends on $\theta$ and is given by the equation of an ellipse in polar coordinates: $R(\theta) = \frac{a(1-e^2)}{1+e \cos{\theta}}$
This equation for $\rho(r,\theta,z)$ is the product of 3 terms:
1. a constant $\rho_0$, which is the surface density of the dust in the midplane, at the reference radius $R(\theta)$.
2. the density distribution in the midplane $z=0$ defined as $\left( \frac{2}{\left( \frac{r}{R(\theta)} \right)^{-2a_{in}} + \left( \frac{r}{R(\theta)} \right)^{-2a_{out}} }\right)^{1/2}$. Such a function ensures that when $r\ll R(\theta)$ then the term is $\propto r^{\alpha_{in}}$ (and we typically use $\alpha_{in}>0$) and when $r\gg R(\theta)$ then the term is $\propto r^{\alpha_{out}}$ (and we typically use $\alpha_{out}<0$).
3. the vertical profile $e^{\left[ -\left( \frac{z}{H(r) }\right)^\gamma \right]}$ is parametrized by an exponential decay of exponent $\gamma$ and scale height $H(r)$. If $\gamma=2$, the vertical profile is Gaussian ($H(r)$ is then proportional to the $\sigma$ or FWHM of the Gaussian, but not strictly equal to either). The scale height is further defined as $H(r)=\xi_0 \times \left( \frac{r}{R(\theta)} \right)^\beta$ where $\xi_0$ is the reference scale height at the reference radius $R(\theta)$ and $\beta$ is the flaring coefficient ($\beta=1$ means a linear flaring: the scale height increases linearly with radius).
6.2. Examples of disks
Let's assume we want to create a synthetic image of 200x200 pixels, containing a disk around a star located at 80 pc, observed with SPHERE/IRDIS (pixel scale 12.25 mas).
End of explanation
"""
itilt = 0. # inclination of your disk in degrees
a = 70. # semi-major axis of the disk in au
ksi0 = 3. # reference scale height at the semi-major axis of the disk
gamma = 2. # exponent of the vertical exponential decay
alpha_in = 12
alpha_out = -12
beta = 1
"""
Explanation: 6.2.1. Symmetric pole-on disk
For a pole-on disk, $i_\text{tilt}=0^\circ$.
For a symmetric disk, $e=0$ and the position angle (pa) and argument of pericenter ($\omega$) have no impact.
We choose a semi-major axis of 70 a.u., a vertical profile with a Gaussian distribution ($\gamma=2$), a reference scale height of 3 a.u. at the semi-major axis of the disk, and inner and outer exponents $\alpha_{in}=12$ and $\alpha_{out}=-12$.
End of explanation
"""
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws','ain':alpha_in,'aout':alpha_out,
'a':a,'e':0.0,'ksi0':ksi0,'gamma':gamma,'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
"""
Explanation: Then create your disk model
End of explanation
"""
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
"""
Explanation: The method compute_scattered_light returns the synthetic image of the disk.
End of explanation
"""
fake_disk1.print_info()
"""
Explanation: You can print some info on the geometrical properties of the model, the dust distribution parameters, the numerical integration parameters and the phase function parameters (detailed later).
This can be useful because, in addition to listing all the parameters used in the model, it also computes some properties such as the radial FWHM of the disk.
End of explanation
"""
fake_disk1 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':-3,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk1_map = fake_disk1.compute_scattered_light()
plot_frames(fake_disk1_map, grid=False, size_factor=6)
fake_disk1.print_info()
"""
Explanation: As a side note, if $\alpha_{in} \ne \alpha_{out}$, then the peak surface density of the disk is not located at the reference radius $a$.
End of explanation
"""
itilt = 76 # inclination of your disk in degrees
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':0., 'polar':False},
flux_max=1.)
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
"""
Explanation: 6.2.2. Inclined symmetric disk
End of explanation
"""
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
"""
Explanation: The position angle of the disk is 0 (i.e. north). The phase function is isotropic here ($g=0$); the north and south ansae nonetheless appear brighter because the disk is not flat: it has a certain scale height and there is more dust intercepted along the line of sight in the ansae.
Note that we decided here to normalize the disk to a maximum brightness of 1, using the option flux_max=1. This is not the only option available: you can decide to parametrize $\rho_0$ instead, using the keyword dens_at_r0 which directly specifies $\rho_0$.
End of explanation
"""
fake_disk2 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=90, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':2, 'gamma':gamma, 'beta':beta,
'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':False})
fake_disk2_map = fake_disk2.compute_scattered_light()
plot_frames(fake_disk2_map, grid=False, size_factor=6)
"""
Explanation: Warning ! The code does not handle perfectly edge-on disks. There is a maximum inclination close to edge-on beyond which it cannot create an image. In practice this is not a limitation as the convolution by the PSF always makes it impossible to disentangle between a close to edge-on disk and a perfectly edge-on disk.
End of explanation
"""
g=0.4
fake_disk3 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
"""
Explanation: 6.2.3. Inclined symmetric disk with anisotropy of scattering
6.2.3.1. Simple Henyey-Greenstein phase function
We parametrize the phase function by a Henyey Greenstein phase function, with an asymmetry parameter g. An isotropic phase function has $g=0$, forward scattering is represented by $0<g\leq1$ and backward scattering is represented by $-1\leq g<0$
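For reference, the Henyey-Greenstein phase function has the standard form $\frac{1}{4\pi}\frac{1-g^2}{\left(1+g^2-2g\cos\theta\right)^{3/2}}$, where $\theta$ is the scattering angle.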
End of explanation
"""
fake_disk3.phase_function.plot_phase_function()
fake_disk3_map = fake_disk3.compute_scattered_light()
plot_frames(fake_disk3_map, grid=False, size_factor=6)
"""
Explanation: You can plot what the phase function looks like:
End of explanation
"""
g1=0.6
g2=-0.4
weight1=0.7
fake_disk4 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'DoubleHG', 'g':[g1,g2], 'weight':weight1,
'polar':False},
flux_max=1)
fake_disk4.phase_function.plot_phase_function()
fake_disk4_map = fake_disk4.compute_scattered_light()
plot_frames(fake_disk4_map, grid=False, size_factor=6)
"""
Explanation: The forward side is brighter.
6.2.3.2. Double Henyey-Greenstein phase function
A double Henyey-Greenstein (HG) phase function is simply a linear combination of 2 simple HG phase functions. It is therefore parametrized by $g_1$ and $g_2$, the asymmetry parameters of the two HG components, and the weight (between 0 and 1) of the first HG phase function. Typically a double HG is used to represent a combination of forward scattering ($g_1>0$) and backward scattering ($g_2<0$).
End of explanation
"""
kind='cubic' #kind must be either "linear", "nearest", "zero", "slinear", "quadratic" or "cubic"
spf_dico = dict({'phi':[0, 60, 90, 120, 180],
'spf':[1, 0.4, 0.3, 0.3, 0.5],
'name':'interpolated', 'polar':False, 'kind':kind})
fake_disk5 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico=spf_dico, flux_max=1)
fake_disk5.phase_function.plot_phase_function()
fake_disk5_map = fake_disk5.compute_scattered_light()
plot_frames(fake_disk5_map, grid=False, size_factor=6)
"""
Explanation: 6.2.3.3. Custom phase function
In some cases, an HG phase function (simple or double) cannot adequately represent the behaviour of the dust. The code is modular: you can propose new prescriptions for the phase function if you need to, or you can create a custom phase function.
End of explanation
"""
fake_disk6 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=0, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':0.0, 'ksi0':ksi0, 'gamma':gamma,
'beta':beta, 'dens_at_r0':1e6},
spf_dico={'name':'HG', 'g':0, 'polar':True})
fake_disk6.phase_function.plot_phase_function()
fake_disk6_map = fake_disk6.compute_scattered_light()
plot_frames(fake_disk6_map, grid=False, size_factor=6)
"""
Explanation: 6.2.3.4. Representing a polarised phase function
If you are trying to reproduce the polarised intensity of a disk (for instance Stokes $Q_\phi$ image), you may want to add on top of the scattering phase function, a modulation representing the degree of linear polarisation.
This can be done by setting the polar keyword to True and in this case, the model assumes a Rayleigh-like degree of linear polarisation parametrized by $(1-(\cos \phi)^2) / (1+(\cos \phi)^2)$ where $\phi$ is the scattering angle.
End of explanation
"""
e=0.4 # eccentricity (dimensionless)
omega=30 # argument of pericenter
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=0, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
"""
Explanation: You can combine this Rayleigh-like degree of linear polarisation with any phase function (simple HG, double HG or custom type).
6.2.4. Asymmetric disk
Be careful here!
There is no consensus in the community on how to parametrize an eccentric dust distribution, so keep in mind that the convention described in section 1.2 is only one way to do so; it does not mean the dust density distribution of an eccentric disk actually follows this prescription. For instance, around pericenter particle velocities are higher and one expects more collisions to happen, which can create an overdensity of particles compared to other regions of the disk. Conversely, particles spend more time near apocenter because they move more slowly there (Kepler's second law), which means one could also expect a higher density at apocenter... None of these physical phenomena are described in this model.
Let's start with a pole-on disk to be insensitive to phase function effects.
End of explanation
"""
fake_disk7 = ScatteredLightDisk(nx=nx, ny=ny, distance=dstar,
itilt=itilt, omega=omega, pxInArcsec=pixel_scale, pa=0,
density_dico={'name':'2PowerLaws', 'ain':alpha_in, 'aout':alpha_out,
'a':a, 'e':e, 'ksi0':ksi0, 'gamma':gamma, 'beta':beta},
spf_dico={'name':'HG', 'g':g, 'polar':False},
flux_max=1.)
fake_disk7_map = fake_disk7.compute_scattered_light()
plot_frames(fake_disk7_map, grid=False, size_factor=6)
"""
Explanation: The brightness asymmetry here is entirely due to the fact that the brightness at one point in the disk is inversely proportional to the squared distance to the star.
Once you incline the disk, you start seeing the competing effects of the phase function and the eccentricity.
End of explanation
"""
plot_frames(fake_disk3_map, grid=False, size_factor=6)
nframes = 30
# we assume we have 60º of parallactic angle rotation centered around meridian
parang_amplitude = 60
derotation_angles = np.linspace(-parang_amplitude/2, parang_amplitude/2, nframes)
start = time_ini()
cube_fake_disk3 = cube_inject_fakedisk(fake_disk3_map, -derotation_angles, imlib='vip-fft')
timing(start)
"""
Explanation: 6.3. Forward modeling of disks
Let's start from our inclined, simple-HG, symmetric disk fake_disk3_map and assume we observe this disk as part of an ADI sequence of 30 images.
End of explanation
"""
cube_fake_disk3.shape
"""
Explanation: cube_fake_disk3 is now a cube of 30 frames, where the disk has been injected at the correct position angle.
End of explanation
"""
plot_frames((cube_fake_disk3[0], cube_fake_disk3[nframes//2], cube_fake_disk3[nframes-1]),
grid=False, size_factor=3)
"""
Explanation: Let's visualize the first, middle and last image of the cube.
End of explanation
"""
cadi_fake_disk3 = median_sub(cube_fake_disk3, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3), grid=False, size_factor=4)
"""
Explanation: We can now process this cube with median-ADI for instance:
End of explanation
"""
psf = create_synth_psf(model='gauss', shape=(11, 11), fwhm=4.)
plot_frames(psf, grid=True, size_factor=2)
"""
Explanation: The example above shows a typical bias that can be induced by ADI on extended disk signals (Milli et al. 2012).
So far we have not dealt with convolution effects. In practice the image of a disk is convolved by the instrumental PSF.
Let's assume here an instrument having a Gaussian PSF with FWHM = 4 px, and create a synthetic PSF using the create_synth_psf function:
End of explanation
"""
cube_fake_disk3_convolved = cube_inject_fakedisk(fake_disk3_map, -derotation_angles,
psf=psf, imlib='vip-fft')
cadi_fake_disk3_convolved = median_sub(cube_fake_disk3_convolved, derotation_angles, imlib='vip-fft')
plot_frames((fake_disk3_map, cadi_fake_disk3, cadi_fake_disk3_convolved), grid=False, size_factor=4)
"""
Explanation: Then we inject the disk into the cube and convolve each frame with the PSF:
End of explanation
"""
|
wzxiong/DAVIS-Machine-Learning | homeworks/HW1-soln.ipynb | mit | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import LeaveOneOut
from sklearn import linear_model, neighbors
%matplotlib inline
plt.style.use('ggplot')
# dataset path
data_dir = "."
sample_data = pd.read_csv(data_dir+"/hw1.csv", delimiter=',')
sample_data.head()
"""
Explanation: STA 208: Homework 1
This is based on the material in Chapters 2, 3 of 'Elements of Statistical Learning' (ESL), in addition to lectures 1-4.
Instructions
We use a script that extracts your answers by looking for cells in between the cells containing the exercise statements (beginning with Exercise X.X). So you
MUST add cells in between the exercise statements and add answers within them and
MUST NOT modify the existing cells, particularly not the problem statement
To make markdown, please switch the cell type to markdown (from code) - you can hit 'm' when you are in command mode - and use the markdown language. For a brief tutorial see: https://daringfireball.net/projects/markdown/syntax
1. Conceptual Exercises
In the following exercises you should provide an explanation, with math when necessary, for any answers. When answering with math you should use basic LaTeX, as in
$$E(Y|X=x) = \int_{\mathcal{Y}} f_{Y|X}(y|x) dy = \int_{\mathcal{Y}} \frac{f_{Y,X}(y,x)}{f_{X}(x)} dy$$
for displayed equations, and $R_{i,j} = 2^{-|i-j|}$ for inline equations. (To see the contents of this cell in markdown, double click on it or hit Enter in escape mode.) To see a list of latex math symbols see here: http://web.ift.uib.no/Teori/KURS/WRK/TeX/symALL.html
Exercise 1.1. (5 pts) Recall that the Hamming loss for binary classification ($y \in \{0,1\}$) is
$$l(y,\hat y) = \mathbf{1}\{y \ne \hat y\} = (y - \hat y)^2$$
as long as $\hat y \in \{0,1\}$.
This loss can be extended to multiclass classification where there are $K$ possible values that $y$ can take (for example 'dog','cat','squirrel' or 1-5 stars). Explain how you can re-encode $y$ and $\hat y$ to be a $K-1$ dimensional vector that generalizes binary classification, and rewrite the loss using vector operations.
If we encode $\hat{y}$ as a $K$-dimensional vector, then $\hat{y} = e_i$, where $e_i$ is a vector with $K-1$ zeros and a 1 at the $i$th index. The corresponding loss function is $l(y,\hat{y}) = \|y - \hat{y}\|_2^2$.
It is also possible to encode $\hat{y}$ with a $(K-1)$-dimensional vector and still use the quadratic loss: for the first $K-1$ classes, we still encode $\hat{y} = e_i$; for class $K$, we encode $\hat{y} = [\alpha, \alpha, ..., \alpha]$, a vector whose elements are all $\alpha$, where $\alpha$ is the solution of $(1-\alpha)^2 + (K-2)\alpha^2 = 4$.
Exercise 1.2 (5 pts) Ex. 2.7 in ESL
(a)
For convenience, we denote $[1, x]$ as $x$.
For linear regression, $\hat{f}(x_0) = x_0^T \hat{\beta}$, where $\hat{\beta} = (X^TX)^{-1}X^TY$,
so $\hat{f}(x_0) = x_0^T (X^TX)^{-1}X^TY = \sum_{i=1}^n x_0^T (X^TX)^{-1} x_i\, y_i$, i.e.
$l_i(x_0; X) = x_0^T (X^TX)^{-1} x_i$
For k-nearest-neighbour regression, $\hat{f}(x_0) = \frac{1}{k} \sum_{i \in N_k(x_0)}y_i = \sum_{i=1}^n \frac{1}{k}I(x_i \in N_k(x_0))\, y_i$, i.e.
$l_i(x_0; X) = \frac{1}{k} I(x_i \in N_k(x_0))$
(b)
$\mathbb{E}_{Y|X}(f(x_0)-\hat{f}(x_0))^2 = \mathbb{E}_{Y|X}\big(f(x_0) - \mathbb{E}_{Y|X}\hat{f}(x_0) + \mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)\big)^2 = [f(x_0)-\mathbb{E}_{Y|X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)]^2 = \mathrm{bias}_{Y|X}^2 + \mathrm{var}_{Y|X}(\hat{f}(x_0))$, where the cross term $2[f(x_0)-\mathbb{E}_{Y|X}\hat{f}(x_0)]\,\mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)]$ vanishes because $\mathbb{E}_{Y|X}[\mathbb{E}_{Y|X}\hat{f}(x_0) - \hat{f}(x_0)] = 0$.
(c)
$\mathbb{E}_{Y,X}(f(x_0)-\hat{f}(x_0))^2 = \mathbb{E}_{Y,X}\big(f(x_0) - \mathbb{E}_{Y,X}\hat{f}(x_0) + \mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)\big)^2 = [f(x_0)-\mathbb{E}_{Y,X}\hat{f}(x_0)]^2 + \mathbb{E}_{Y,X}[\mathbb{E}_{Y,X}\hat{f}(x_0) - \hat{f}(x_0)]^2 = \mathrm{bias}_{Y,X}^2 + \mathrm{var}_{Y,X}(\hat{f}(x_0))$, again because the cross term vanishes.
(d)
By Adam's law.
$\mathbb{E}_{X}[\mathrm{bias}_{Y|X}^2] = \mathbb{E}_{X}[f(x_0) - \mathbb{E}_{Y,X}(\hat{f}(x_0)) + \mathbb{E}_{Y,X}(\hat{f}(x_0)) - \mathbb{E}_{Y|X}(\hat{f}(x_0))]^2 = \mathrm{bias}_{Y,X}^2 + \mathrm{var}_X(\mathbb{E}_{Y|X}(\hat{f}(x_0)))$
By Eve's law.
$\mathrm{var}_{Y,X}(\hat{f}(x_0)) = \mathbb{E}_{X}(\mathrm{var}_{Y|X}(\hat{f}(x_0))) + \mathrm{var}_{X}(\mathbb{E}_{Y|X}(\hat{f}(x_0)))$
Exercise 1.3 (5 pts, 1 for each part) Recall that the true risk for a prediction function, $f$, a loss function, $\ell$, and a joint distribution for $Y,X$ is
$$R(f) = E \ell(y,f(x))$$
For a training set $\{x_i,y_i\}_{i=1}^n$, the empirical risk is
$$R_n = \frac{1}{n} \sum_{i=1}^n \ell(y_i,f(x_i)).$$
Let $y = x^\top \beta + \epsilon$ be a linear model for $Y|X$, where $x,\beta$ are $p$-dimensional such that $\epsilon$ is Gaussian with mean 0 and variance $\sigma^2$ (independent of X).
Let $\ell(y,\hat y) = (y - \hat y)^2$ be square error loss.
Show that $f^\star(x) = x^\top \beta$ gives the smallest true risk (also known as the Bayes rule).
Why can't we use this prediction in practice?
Recall that OLS is the empirical risk minimizer for linear functions. Why does this tell us the following:
$$ E R_n (\hat f) \le R(f^\star)$$
How do we know that $E R_n (\hat f) \le R(\hat f)$? and use this to answer Ex. 2.9 in ESL.
What about this was specific to OLS and least squares loss (can this be generalized)? What is the most general statement that you can think of that you can prove in this way?
(1)
We know that $\arg \min_{\hat{Y}} \mathbb{E}(Y-\hat{Y})^2 = \mathbb{E}(Y|X)$
$f^\star(x)$ is the minimizer of the true risk $\mathbb{E}(Y-\hat{Y})^2$.
So $f^\star(x) = \mathbb{E}(Y|X=x) = \mathbb{E}(x^T \beta + \epsilon) = \mathbb{E}(x^T\beta) = x^T\beta$
(2)
We don't know $\beta$ in practice.
(3) Solution 1
$R_n(\hat f) \le R_n(f^\star)$ and
$$E R_n(f^\star) = E \left( \frac 1n \sum_{i=1}^n \ell(y_i,f^\star(x_i)) \right) = \frac 1n \sum_{i=1}^n R(f^\star) = R(f^\star)$$
Hence, $E R_n(\hat f) \le R(f^\star)$.
Solution 2
(Recall $R(f^\star) = \mathbb{E}(Y - x^T\beta)^2 = \mathbb{E}(\epsilon^2) = \sigma^2$.)
$\mathbb{E}[l(y_i, \hat{f}(x_i))] = \mathbb{E}_X\big[\, \mathbb{E}(l(y_i, \hat{f}(x_i)) \mid X) \,\big]$
We calculate the conditional expectation first (treating $X$ as a fixed matrix, and writing $e_i$ for the $i$th standard basis vector in $\mathbb{R}^n$):
$\mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] = \mathbb{E}(x_i^T\beta + \epsilon_i - x_i^T (X^TX)^{-1}X^TY)^2 = \mathbb{E}\big(x_i^T\beta + \epsilon_i - x_i^T (X^TX)^{-1}X^T(X\beta + \boldsymbol{\epsilon})\big)^2 = \mathbb{E}\big(\epsilon_i - x_i^T(X^TX)^{-1}X^T\boldsymbol{\epsilon}\big)^2 = \mathbb{E}\big((e_i - X(X^TX)^{-1}x_i)^T \boldsymbol{\epsilon}\big)^2 = (e_i - X(X^TX)^{-1}x_i)^T (\sigma^2 I) (e_i - X(X^TX)^{-1}x_i) = \sigma^2\big( 1 + x_i^T(X^TX)^{-1}x_i - 2x_i^T(X^TX)^{-1}x_i \big) = \sigma^2\big(1-x_i^T(X^TX)^{-1}x_i\big) \leq \sigma^2$
$\qquad$ (since $x_i^T(X^TX)^{-1}x_i \geq 0$ because $(X^TX)^{-1}$ is positive definite)
We have proved that $\mathbb{E}[l(y_i, \hat{f}(x_i)) \mid X] \leq \sigma^2$ for every $X$ such that $X^TX$ has full rank. So $\mathbb{E}(l(y_i, \hat{f}(x_i))) = \mathbb{E}_X[ \mathbb{E}(l(y_i, \hat{f}(x_i)) \mid X) ] \leq \mathbb{E}(\sigma^2) = \sigma^2$
So $\mathbb{E}(R_n(\hat{f})) = E(\frac{1}{n}\sum_{i=1}^nl(y_i,\hat{f}(x_i))) = \frac{1}{n}\sum_{i=1}^n \mathbb{E}(l(y_i, \hat{f}(x_i))) \leq \sigma^2 = R(f^*)$
(4) Solution 1
Based on (1) we have that $R(f^\star) \le R(\hat f)$. Hence, we have that
$$E R_n (\hat f) \le R(\hat f).$$
Therefore, the expected test error is greater than or equal to the expected training error.
Solution 2
For a newly observed $x_0$ and $y_0$, denote by $\boldsymbol{\epsilon}_t$ the $\epsilon$ of the training set. We know that $\epsilon_0$ and $\boldsymbol{\epsilon}_t$ are independent.
$R(\hat{f}) = \mathbb{E}(y_0 - x_0^T(X^TX)^{-1}X^TY)^2 = \mathbb{E}(x_0^T\beta + \epsilon_0 - x_0^T(X^TX)^{-1}X^T(X\beta + \boldsymbol{\epsilon}_t))^2 = \mathbb{E}(\epsilon_0 - x_0^T(X^TX)^{-1}X^T\boldsymbol{\epsilon}_t)^2 = \sigma^2 + \mathbb{E}(x_0^T(X^TX)^{-1}X^T\boldsymbol{\epsilon}_t)^2 - 2\,\mathbb{E}(\epsilon_0)\,\mathbb{E}(x_0^T(X^TX)^{-1}X^T\boldsymbol{\epsilon}_t) = \sigma^2 + \mathbb{E}(x_0^T(X^TX)^{-1}X^T\boldsymbol{\epsilon}_t)^2 + 0 \geq \sigma^2 = R(f^\star)$
(5) If we refer to Solution 1, we see that the only place where the Gaussian model was used was in (1). So, the most general statement is...
Let $f^\star$ be the minimizer of $R(f)$ the true risk, and let $\hat f$ be the minimizer of $R_n$. Then $E R_n(\hat f) \le R(\hat f)$.
Exercise 1.4 Ex. 3.5 in ESL
$$\min_{\beta^c}\left\{ \sum_{i=1}^N \Big[y_i - \beta_0^c - \sum_{j=1}^p (x_{ij} - \bar{x}_j)\beta_j^c\Big]^2 + \lambda \sum_{j=1}^p (\beta_j^{c})^2 \right\} = \min_{\beta^c}\left\{ \sum_{i=1}^N \Big[y_i - \Big(\beta_0^c - \sum_{j=1}^p\bar{x}_j\beta_j^c\Big) - \sum_{j=1}^p x_{ij}\beta_j^c\Big]^2 + \lambda \sum_{j=1}^p (\beta_j^{c})^2\right\} = \min_{\beta}\left\{ \sum_{i=1}^N \Big[y_i - \beta_0 - \sum_{j=1}^p x_{ij}\beta_j\Big]^2 + \lambda \sum_{j=1}^p \beta_j^{2}\right\}$$
Where $\beta_j = \beta_j^c$ for $1 \leq j \leq p$ and $\beta_0 = \beta_0^c - \sum_{j=1}^p\bar{x}_j\beta_j^c$
For Lasso, the proof is similar.
Exercise 1.5 Ex 3.9 in ESL
$X = QR$, where $Q$ is orthogonal matrix and $R$ is upper trangular matrix, then $\hat{y} = QQ^T y$.
We add a new feature and denote the augmented orthogonal basis by $Q_{new} = [Q, q]$, where $q$ is the new column after orthogonalization against $Q$.
$RSS = y^T(I - Q_{new} Q_{new}^T) y = y^T(I - QQ^T - qq^T)y = \|r\|_2^2 - (y^Tq)^2$, where $r = (I-QQ^T)y$ is the current residual.
So we want to find the candidate whose $q$ maximizes $(y^Tq)^2$.
To make the algorithm efficient, we don't want to recompute the QR decomposition of the augmented data again and again. (A small NumPy sketch of this selection step is included right after this text block.)
The detailed algorithm:
for i = p+1,p+2,...,q:
$\qquad$ Calculate $q_i$ by Gram-Schmidt: $q_i = x_i - \sum_{j=1}^p (q_j^T x_i)\, q_j$, and normalize it: $q_i = q_i/\|q_i\|_2$
$\qquad$ Calculate $(y^Tq_i)^2$
Output the $i$ that maximizes $(y^Tq_i)^2$
HW1 Wine Data Analysis
Instructions
You will be graded based on several criteria, and each is on a 5 point scale (5 is excellent - A - 1 is poor - C - 0 is not answered - D/F). You should strive to 'impress us' if you want a 5. This means excellent code, well explained conclusions, well annotated plots, correct answers, etc.
We will be grading you on several criteria:
Conclusions: Conclusions should be consistent with the evidence provided, the conclusion should be well justified, the principles of machine learning that you have learned should be respected (such as overfitting and underfitting etc.)
Correctness of calculations: code should be correct and reflect the principles learned in this course, the logic should be sound, the methods should match the setting and context, you should try many applicable methods that you have learned as long as they apply.
Code, Figures, and Text: Code should be annotated and easy to follow, with docstrings on the functions; captions, titles, for figures
Exercise 2 You should run the following code cells to import the code and reduce the variable set. Address the questions after the code.
End of explanation
"""
X = np.array(sample_data.iloc[:,range(1,5)])
y = np.array(sample_data.iloc[:,0])
def loo_risk(X,y,regmod):
"""
Construct the leave-one-out square error risk for a regression model
Input: design matrix, X, response vector, y, a regression model, regmod
Output: scalar LOO risk
"""
loo = LeaveOneOut()
loo_losses = []
for train_index, test_index in loo.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
regmod.fit(X_train,y_train)
y_hat = regmod.predict(X_test)
loss = np.sum((y_hat - y_test)**2)
loo_losses.append(loss)
return np.mean(loo_losses)
def emp_risk(X,y,regmod):
"""
Return the empirical risk for square error loss
Input: design matrix, X, response vector, y, a regression model, regmod
Output: scalar empirical risk
"""
regmod.fit(X,y)
y_hat = regmod.predict(X)
return np.mean((y_hat - y)**2)
"""
Explanation: The response variable is quality.
End of explanation
"""
lin1 = linear_model.LinearRegression(fit_intercept=True)
print('LOO Risk: '+ str(loo_risk(X,y,lin1)))
print('Emp Risk: ' + str(emp_risk(X,y,lin1)))
"""
Explanation: Exercise 2.1 (5 pts) Compare the leave-one-out risk with the empirical risk for linear regression, on this dataset.
End of explanation
"""
LOOs = []
MSEs = []
K=60
Ks = range(1,K+1)
for k in Ks:
knn = neighbors.KNeighborsRegressor(n_neighbors=k)
LOOs.append(loo_risk(X,y,knn))
MSEs.append(emp_risk(X,y,knn))
plt.plot(Ks,LOOs,'r',label="LOO risk")
plt.title("Risks for kNN Regression")
plt.plot(Ks,MSEs,'b',label="Emp risk")
plt.legend()
_ = plt.xlabel('k')
min(LOOs)
print('optimal k: ' + str(Ks[LOOs.index(min(LOOs))]))
"""
Explanation: Exercise 2.2 (10 pts) Perform kNN regression and compare the leave-one-out risk with the empirical risk for k from 1 to 50. Remark on the tradeoff between bias and variance for this dataset and compare against linear regression.
End of explanation
"""
n,p = X.shape
rem = set(range(p))
supp = []
LOOs = []
while len(supp) < p:
rem = list(set(range(p)) - set(supp))
ERMs = [emp_risk(X[:,supp+[j]],y,linear_model.LinearRegression(fit_intercept=True)) for j in rem]
jmin = rem[np.argmin(ERMs)]
supp.append(jmin)
LOOs.append(loo_risk(X[:,supp],y,linear_model.LinearRegression(fit_intercept=True)))
for i,s,loo in zip(range(p),supp,LOOs):
print("Step {} added variable {} with LOO: {}".format(i,s,loo))
"""
Explanation: Conclusion Comparing the performance of kNN and linear regression, we see that 16-nearest neighbors achieves a LOO risk of 233.2 which is lower than that for linear regression (243.5).
Exercise 2.3 (10 pts) Implement forward stepwise regression (ESL section 3.3.2) for the linear model and compare the LOO risk for each stage. Recall that at each step forward stepwise regression will select a new variable that most improves the empirical risk and include that in the model (starting with the intercept).
End of explanation
"""
|
Bio204-class/bio204-notebooks | 2016-04-25-Parallels-Regression-and-ANOVA.ipynb | cc0-1.0 | n = 25
x = np.linspace(-5, 5, n) + stats.norm.rvs(loc=0, scale=1, size=n)
a, b = 1, 0.75
# I've chosen values to make yind and ydep have about the same variance
yind = a + stats.norm.rvs(loc=0, scale=np.sqrt(8), size=n)
ydep = a + b*x + stats.norm.rvs(loc=0, scale=1, size=n)
# create two different data frames for ease of use with statsmodels
data_ind = pd.DataFrame(dict(x = x, y = yind))
data_dep = pd.DataFrame(dict(x = x, y = ydep))
"""
Explanation: Regression as sum-of-squares decomposition
Example data sets
We set up two synthetic data sets -- one where $Y$ is independent of $X$, and a second where $Y$ is dependent on $X$.
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,4), sharex=True, sharey=True)
ax1.scatter(x, yind, s=60, alpha=0.75, color='steelblue')
ax2.scatter(x, ydep, s=60, alpha=0.75, color='steelblue')
ax1.set_xlabel("X",fontsize=15)
ax1.set_ylabel("Y",fontsize=15)
ax1.set_title("Y independent of X", fontsize=18)
ax2.set_xlabel("X",fontsize=15)
ax2.set_title("Y dependent on X", fontsize=18)
pass
"""
Explanation: And we plot the data sets
End of explanation
"""
fit_ind = smf.ols('y ~ x', data_ind).fit()
fit_dep = smf.ols('y ~ x', data_dep).fit()
"""
Explanation: Fit the regressions with statsmodels
End of explanation
"""
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,4), sharex=True, sharey=True)
ax1.scatter(x, yind, s=60, alpha=0.75, color='steelblue')
ax1.plot(x, fit_ind.predict(), color='firebrick', alpha=0.5)
ax2.scatter(x, ydep, s=60, alpha=0.75, color='steelblue')
ax2.plot(x, fit_dep.predict(), color='firebrick', alpha=0.5)
ax1.set_xlabel("X",fontsize=15)
ax1.set_ylabel("Y",fontsize=15)
ax1.set_title("Y independent of X", fontsize=18)
ax2.set_xlabel("X",fontsize=15)
ax2.set_title("Y dependent on X", fontsize=18)
pass
"""
Explanation: Plot the regressions using info returned by statsmodels
End of explanation
"""
def sum_squares(x):
x = np.asarray(x)
return np.sum((x - np.mean(x))**2)
def bivariate_regression_table(fit):
""" A function to create an ANOVA-like table for a bivariate regression"""
df_model = fit.df_model
df_resid = fit.df_resid
df_total = df_model + df_resid
SStotal = sum_squares(fit.model.endog)
SSmodel = sum_squares(fit.predict())
SSresid = sum_squares(fit.resid)
MSmodel = SSmodel / df_model
MSresid = SSresid / df_resid
Fstat = MSmodel / MSresid
Pval = stats.f.sf(Fstat, df_model, df_resid)
Ftable = pd.DataFrame(index=["Model","Residuals","Total"],
columns=["df", "SS", "MS", "F", "Pval"],
data = dict(df = [df_model, df_resid, df_total],
SS = [SSmodel, SSresid, SStotal],
MS = [MSmodel, MSresid, ""],
F = [Fstat, "", ""],
Pval = [Pval, "", ""]))
return Ftable
"""
Explanation: Functions to decompose sums of squares
End of explanation
"""
Ftable_ind = bivariate_regression_table(fit_ind)
Ftable_ind
Ftable_dep = bivariate_regression_table(fit_dep)
Ftable_dep
"""
Explanation: ANOVA-like tables for each regression
End of explanation
"""
Y1 = stats.norm.rvs(loc=0, scale=1, size=10)
Y2 = stats.norm.rvs(loc=1, scale=1, size=10)
Y = np.concatenate([Y1,Y2])
groups = [-1]*10 + [1]*10 # setup dummy variable to represent grouping
data = pd.DataFrame(dict(Y = Y,
group = groups))
data.head(3)
data.tail(3)
data.corr()
sbn.stripplot(x="group", y="Y", hue="group", data=data,s=10)
pass
"""
Explanation: Two-group one-way ANOVA as a bivariate regression
To set up ANOVA for two groups as a regression problem, we use "dummy coding", where we incorporate the group information into a predictor variable.
End of explanation
"""
fit_data = smf.ols('Y ~ group', data).fit()
"""
Explanation: Fit regression model
End of explanation
"""
plt.scatter(data.group[data.group == -1], data.Y[data.group == -1], s=60, alpha=0.75, color='steelblue')
plt.scatter(data.group[data.group == 1], data.Y[data.group == 1], s=60, alpha=0.75, color='forestgreen')
groups = [-1,1]
predicted = fit_data.predict(dict(group=groups))
plt.plot(groups, predicted, color='firebrick', alpha=0.75)
plt.xticks([-1,1])
plt.xlabel("Group",fontsize=15)
plt.ylabel("Y", fontsize=15)
pass
"""
Explanation: Plot regression
End of explanation
"""
fit_data.fvalue, fit_data.f_pvalue
stats.f_oneway(data.Y[data.group == -1], data.Y[data.group == 1])
"""
Explanation: Compare regression F-statistic and corresponding p-value to that from ANOVA
End of explanation
"""
iris = pd.read_csv("http://roybatty.org/iris.csv")
iris.Species.unique()
iris.columns = iris.columns.str.replace('.','')
iris.columns
sbn.violinplot(x="Species", y="SepalLength", data=iris)
effect1 = []
effect2 = []
for s in iris.Species:
if s == 'setosa':
effect1.append(1)
effect2.append(0)
elif s == 'versicolor':
effect1.append(0)
effect2.append(1)
else:
effect1.append(-1)
effect2.append(-1)
print(effect1)
print(effect2)
# add effect variables to iris data frame
iris['effect1'] = effect1  # assign as columns so the data frame actually gains the new variables
iris['effect2'] = effect2
iris_fit = smf.ols("SepalLength ~ effect1 + effect2", iris).fit()
iris_fit.fvalue, iris_fit.f_pvalue
stats.f_oneway(iris.SepalLength[iris.Species == "setosa"],
iris.SepalLength[iris.Species == "versicolor"],
iris.SepalLength[iris.Species == "virginica"])
"""
Explanation: Multi-group one-way ANOVA as a multiple regression
When we want to consider multiple groups, we have to extend the idea of dummy coding to allow for more groups. We can set up simple dummy variables for $g-1$ groups, or we can use a slight variant called "effect coding". The results from effect coding are usually easier to interpret, and that's what we'll use here.
The two links below contrast dummy and effect coding:
Dummy coding: http://www.ats.ucla.edu/stat/mult_pkg/faq/general/dummy.htm
Effect coding: http://www.ats.ucla.edu/stat/mult_pkg/faq/general/effect.htm
End of explanation
"""
iris_fit2 = smf.ols('SepalLength ~ C(Species)', iris).fit()
"""
Explanation: If using statsmodels you don't need to explicitly create the effect coding variables like we did above. You can specify that a variable is categorical in the formula itself.
End of explanation
"""
anova.anova_lm(iris_fit2)
"""
Explanation: You can then pass the model fit results to statsmodels.stats.anova.anova_lm to get an appropriate ANOVA table.
End of explanation
"""
|
ML4DS/ML4all | U1.KMeans/KMeans_student.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.spatial.distance import cdist
from fig_code import plot_kmeans_interactive
from sklearn.datasets import make_blobs, load_digits, load_sample_image
from sklearn.decomposition import PCA
from sklearn.metrics import confusion_matrix
from sklearn.cluster import KMeans
# use seaborn plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: The $K$-means clustering algorithm
<small><i>This notebook is a modified version of the one created by Jake Vanderplas for PyCon 2015.
Source and license info of the original notebook are on GitHub.</i></small>
End of explanation
"""
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
plt.axis('equal')
plt.show()
"""
Explanation: 1. Clustering algorithms
Clustering algorithms try to split a set of data points $\mathcal{S} = \{{\bf x}_0,\ldots,{\bf x}_{L-1}\}$, into mutually exclusive clusters or groups, $\mathcal{G}_0,\ldots, \mathcal{G}_{K-1}$, such that every sample in $\mathcal{S}$ is assigned to one and only one group.
Clustering algorithms belong to the more general family of unsupervised methods: clusters are constructed using the data attributes alone. No labels or target values are used. This makes the difference between a clustering algorithm and a supervised classification algorithm.
There is not a unique formal definition of the clustering problem. Different algorithms group data into clusters following different criteria. The appropriate choice of the clustering algorithm may depend on the particular application scenario.
The image below, taken from the scikit-learn site, shows that different algorithms follow different grouping criteria, clustering the same datasets in different forms.
<img src="http://scikit-learn.org/stable/_images/sphx_glr_plot_cluster_comparison_001.png" width=800>
In any case, all clustering algorithms share a set of common characteristics. A clustering algorithm makes use of some distance or similarity measure between data points to group data in such a way that:
Points in some cluster should lie close to each other
Points in different clusters should be far away
Clusters should be separated by regions of low density of points
Clusters may preserve some kind of connectivity
Clusters may get represented by a representative or centroid
2. The $K$-means algorithm
$K$-Means is a proximity-based clustering algorithm. It searches for cluster centers or centroids which are representative of all points in a cluster. Representativenes is measured by proximity: "good" clusters are those such that all data points are close to its centroid.
Given a dataset $\mathcal{S} = {{\bf x}0,\ldots,{\bf x}{L-1}}$, $K$-means tries to minimize the following distortion function:
$$D = \sum_{k=0}^{K-1} \sum_{{\bf x} \in {\cal{G}}_k}\|{\bf x}-\boldsymbol{\mu}_k\|_2^2$$
where $\boldsymbol{\mu}_k$ is the centroid of cluster $\mathcal{G}_k$.
Note that, in this notebook, we will use $k$ as the index to count groups and centroids, and $K$ for the number of centroids. To avoid any confusion, we will index data samples as ${\bf x}_\ell$ when needed, and the number of samples will be denoted as $L$.
The minimization should be carried out over both the partition $\{{\cal G}_0,\ldots, {\cal G}_{K-1}\}$ of ${\cal S}$ (i.e., the assignment problem) and the respective centroids $\{\boldsymbol{\mu}_0,\ldots,\boldsymbol{\mu}_{K-1}\}$ (i.e., the estimation problem). This joint assignment-estimation problem is what makes the optimization difficult (it is an <a href=https://es.wikipedia.org/wiki/NP-hard>NP-hard</a> problem).
The $K$-means algorithm is based on the fact that, once one of the two problems is solved, the solution to the other is straightforward:
Assignment: For fixed centroids $\boldsymbol{\mu}_0,\ldots,\boldsymbol{\mu}_{K-1}$, the optimal partition is given by
$${\cal G}_k^* = \left\{{\bf x} \, \left| \, k \in \arg\min_{k'} \|{\bf x}-\boldsymbol{\mu}_{k'}\|^2\right. \right\}$$
(i.e. each sample is assigned to the group with the closest centroid).
Estimation: For a fixed partition $\{{\cal G}_0,\ldots, {\cal G}_{K-1}\}$, the optimal centroids can be computed easily by differentiation:
\begin{equation}
\boldsymbol{\mu}_k^* = \frac{1}{\left|{\cal G}_k\right|} \sum_{{\bf x} \in {\cal{G}}_k} {\bf x}
\end{equation}
where $\left|{\cal G}_k\right|$ is the cardinality of ${\cal G}_k$.
$K$-means is a kind of <a href=https://en.wikipedia.org/wiki/Coordinate_descent>coordinate descent</a> algorithm that applies the estimation and assignment steps cyclically and iteratively, at each step fixing the solution of the previous optimization.
Exercise: Derive the equation for the optimal centroids.
Solution:
2.1. Steps of the algorithm
After initialization of centroids:
1. Assignment: Assign each data point to the closest centroid
2. Estimation: Recalculate the centroid positions
3. Go back to 1 until there are no further changes or the maximum number of iterations is reached
(A minimal NumPy sketch of these two alternating steps is included right after this text block, as a complement to the scikit-learn implementation used in this notebook.)
2.1.1. Initializations
$K$-means convergence is guaranteed ... but just to a local minimum of $D$.
Different initialization possibilities:
1. $K$-means$++$: To maximize inter-centroid distance
2. Random among training points
3. User-selected
Typically, different runs are executed, and the best one is kept.
Check out <a href=http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html> the Scikit-Learn site</a> for parameters, attributes, and methods.
2.1.2. Stopping.
Since (1) the total number of possible assignments is finite, and (2) each step of the $K$-means algorithm reduces (or, at least, does not increase) the value of the distortion function, the algorithm will eventually converge to a fixed distortion value.
2.1.3. Local convergence
Unfortunately, there is no guarantee that the final distortion is the global minimum. The quality of the solution obtained by the algorithm may critically depend on the initialization.
2.2. Example
Let's look at how KMeans operates on a synthetic example. To emphasize that this is unsupervised, we do not plot the colors of the clusters:
End of explanation
"""
est = KMeans(5)  # 5 clusters (note: the toy data was generated with 4 centers)
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
plt.axis('equal')
plt.show()
"""
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, the $K$-Means algorithm implemented in Scikit-learn provides a much more convenient solution.
<b>Exercise:</b>
The following fragment of code runs the $K$-means method on the toy example you just created. Modify it, so that you can try other settings for the parameter options implemented by the method. In particular:
Reduce the number of runs to check the consequences of a bad initialization
Test different kinds of initializations (k-means++ vs random)
Provide a user-generated initialization that you expect to result in very suboptimal performance
Test other choices for the number of clusters
Include in the plot the location of the centroid of each cluster
End of explanation
"""
# WARNING: This command may fail (interactivity not working properly) depending on the python version.
plot_kmeans_interactive(min_clusters=2, max_clusters=6)
plt.show()
"""
Explanation: 2.3. The K-Means Algorithm: Interactive visualization
The following fragment of code allows you to study the evolution of cluster centroids on one run of the algorithm, and to modify also the number of centroids.
End of explanation
"""
digits = load_digits()
print('Input data and label number are provided in the following two variables:')
print("digits['images']: {0}".format(digits['images'].shape))
print("digits['target']: {0}".format(digits['target'].shape))
"""
Explanation: 2.4. Determining the number of clusters
If the number of clusters, $K$, is not known, selecting the appropriate value becomes a major issue. Since the overall distortion $D$ decreases with $K$, the selection of the number of clusters cannot be based on the overall distortion alone.
The best value of $K$ may be application dependent. Though we will not discuss specific algorithms in detail, we point out some possible solutions:
Penalization functions: instead of minimizing $D$, we can train the clustering algorithm in order to minimize the functional $D' = D + \lambda f(K)$, where $f$ is an increasing function penalizing large values of $K$, and $\lambda$ is an hyperparameter. For instance, we can take
$$f(K)=\log(K)$$
$$f(K)=K$$
$$f(K)=K^2,$$
etc.
Cluster-based metrics, like
Average <a href=http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html#sklearn.metrics.silhouette_score>silhouette coefficient</a>. The Silhouette Coefficient is calculated using the mean intra-cluster distance (a) and the mean nearest-cluster distance (b) for each sample. The Silhouette Coefficient for a sample is (b - a) / max(a, b).
<a href= http://scikit-learn.org/stable/modules/generated/sklearn.metrics.calinski_harabaz_score.html#sklearn.metrics.calinski_harabaz_score> Calinski-Harabaz score </a>. It is defined as the ratio of the between-clusters dispersion mean and the within-cluster dispersion:
\begin{align}
s(K) = \frac{\mathrm{Trace}({\bf B}_K)}{\mathrm{Trace}({\bf W}_K)} \times \frac{L - K}{K - 1}
\end{align}
where ${\bf W}_K$ is the within-cluster dispersion matrix defined by
$$
{\bf W}_K = \sum_{k=0}^{K-1} \sum_{{\bf x} \in {\cal G}_k} ({\bf x} - \boldsymbol{\mu}_k) ({\bf x} - \boldsymbol{\mu}_k)^T
$$
and ${\bf B}_K$ is the between-group dispersion matrix, defined by
$$
{\bf B}_K = \sum_{k=0}^{K-1} \left|{\cal G}_k\right| (\boldsymbol{\mu}_k - \boldsymbol{\mu}) (\boldsymbol{\mu}_k - \boldsymbol{\mu})^T
$$
where $L$ is the number of points in our data and $\boldsymbol{\mu}$ is the average of all data points.
<b>Exercise:</b> Select the number of clusters using any of the above metrics for the dataset in the previous examples. (A short silhouette-based sketch is included right after this text block.)
3. Application of KMeans to Digits
For a closer-to-real-world example, let us take a look at a digit recognition dataset. Here we'll use KMeans to automatically cluster the data in 64 dimensions, and then look at the cluster centers to see what the algorithm has found.
End of explanation
"""
est = KMeans(n_clusters=10)
clusters = est.fit_predict(digits.data)
est.cluster_centers_.shape
fig = plt.figure(figsize=(8, 3))
for i in range(10):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.imshow(est.cluster_centers_[i].reshape((8, 8)), cmap=plt.cm.binary)
"""
Explanation: Next, we cluster the data into 10 groups, and plot the representatives (centroids of each group). As with the toy example, you could modify the initialization settings to study the impact of initialization on the performance of the method.
End of explanation
"""
X = PCA(2).fit_transform(digits.data)
kwargs = dict(cmap = plt.cm.get_cmap('rainbow', 10),
edgecolor='none', alpha=0.6)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].scatter(X[:, 0], X[:, 1], c=est.labels_, **kwargs)
ax[0].set_title('learned cluster labels')
ax[1].scatter(X[:, 0], X[:, 1], c=digits.target, **kwargs)
ax[1].set_title('true labels');
"""
Explanation: We see that even without the labels, KMeans is able to find clusters whose means are recognizable digits (with apologies to the number 8)!
3.1. Visualization via Dimensionality Reduction
The following fragment of code projects the data into the two "most representative" dimensions, so that we can somehow visualize the result of the clustering (note that we can not visualize the data in the original 64 dimensions). In order to do so, we use a method known as Principal Component Analysis (PCA). This is a method that allows you to obtain a 2-D representation of multidimensional data: we extract the two most relevant features (using PCA) and look at the true cluster labels and $K$-means cluster labels:
End of explanation
"""
conf = confusion_matrix(digits.target, est.labels_)
print(conf)
plt.imshow(conf, cmap='Blues', interpolation='nearest')
plt.colorbar()
plt.grid(False)
plt.ylabel('true')
plt.xlabel('Group index');
#And compute the number of right guesses if each identified group were assigned to the right class
print('Percentage of patterns that would be correctly classified: {0}'.format(
np.sum(np.max(conf,axis=1)) * 100. / np.sum(conf)))
"""
Explanation: 3.2. Classification performance
Just for kicks, let us see how accurate our $K$-means classifier is with no label information. In order to do so, we can work on the confusion matrix:
End of explanation
"""
china = load_sample_image("china.jpg")
plt.imshow(china)
plt.grid(False);
"""
Explanation: This is above 80% classification accuracy for an entirely unsupervised estimator which knew nothing about the labels.
4. Example: KMeans for Color Compression
One interesting application of clustering is in color image compression. For example, imagine you have an image with millions of colors. In most images, a large number of the colors will be unused, and conversely a large number of pixels will have similar or identical colors.
Scikit-learn has a number of images that you can play with, accessed through the datasets module. For example:
End of explanation
"""
print('The image dimensions are {0}'.format(china.shape))
print('The RGB values of pixel 2 x 2 are {0}'.format(china[2,2,:]))
"""
Explanation: The image itself is stored in a 3-dimensional array, of size (height, width, RGB). For each pixel three values are necessary, each in the range 0 to 255. This means that each pixel is stored using 24 bits.
End of explanation
"""
X = (china / 255.0).reshape(-1, 3)
print(X.shape)
"""
Explanation: We can envision this image as a cloud of points in a 3-dimensional color space. We'll rescale the colors so they lie between 0 and 1, then reshape the array to be a typical scikit-learn input:
End of explanation
"""
# reduce the size of the image for speed. Only for the K-means algorithm
image = china[::3, ::3]
n_colors = 128
X = (image / 255.0).reshape(-1, 3)
model = KMeans(n_colors)
model.fit(X)
labels = model.predict((china / 255.0).reshape(-1, 3))
#print labels.shape
colors = model.cluster_centers_
new_image = colors[labels].reshape(china.shape)
new_image = (255 * new_image).astype(np.uint8)
#For comparison purposes, we pick 64 colors at random
perm = np.random.permutation(range(X.shape[0]))[:n_colors]
colors = X[perm,:]
labels = np.argmin(cdist((china / 255.0).reshape(-1, 3),colors),axis=1)
new_image_rnd = colors[labels].reshape(china.shape)
new_image_rnd = (255 * new_image_rnd).astype(np.uint8)
# create and plot the new image
with sns.axes_style('white'):
plt.figure()
plt.imshow(china)
plt.title('Original image')
plt.figure()
plt.imshow(new_image)
plt.title('{0} colors'.format(n_colors))
plt.figure()
plt.imshow(new_image_rnd)
plt.title('{0} colors'.format(n_colors) + ' (random selection)')
"""
Explanation: We now have 273,280 points in 3 dimensions.
Our task is to use KMeans to compress the $256^3$ colors into a smaller number (say, 64 colors). Basically, we want to find $N_{color}$ clusters in the data, and create a new image where the true input color is replaced by the color of the closest cluster. Compressing data in this way, each pixel will be represented using only 6 bits (25 % of the original image size)
End of explanation
"""
|
eblur/AstroHackWeek2015 | day3-machine-learning/07 - Grid Searches for Hyper Parameters.ipynb | gpl-2.0 | from sklearn.grid_search import GridSearchCV
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
"""
Explanation: Grid Searches
<img src="figures/grid_search_cross_validation.svg" width=100%>
Grid-Search with built-in cross validation
End of explanation
"""
import numpy as np
param_grid = {'C': 10. ** np.arange(-3, 3),
'gamma' : 10. ** np.arange(-5, 0)}
np.set_printoptions(suppress=True)
print(param_grid)
grid_search = GridSearchCV(SVC(), param_grid, verbose=3)
"""
Explanation: Define parameter grid:
End of explanation
"""
grid_search.fit(X_train, y_train)
grid_search.predict(X_test)
grid_search.score(X_test, y_test)
grid_search.best_params_
# We extract just the scores
scores = [x.mean_validation_score for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(6, 5)
plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(5), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C']);
"""
Explanation: A GridSearchCV object behaves just like a normal classifier.
End of explanation
"""
from sklearn.cross_validation import cross_val_score
cross_val_score(GridSearchCV(SVC(), param_grid),
digits.data, digits.target)
"""
Explanation: Nested Cross-validation in scikit-learn:
End of explanation
"""
# %load solutions/grid_search_k_neighbors.py
from sklearn.neighbors import KNeighborsClassifier
KNeighborsClassifier?
KNC_grid = {'n_neighbors': range(1,40,2),
'leaf_size' : range(1,40,2)}
grid_search = GridSearchCV(KNeighborsClassifier(), KNC_grid, verbose=3)
grid_search.fit(X_train, y_train)
grid_search.predict(X_test)
grid_search.score(X_test, y_test)
scores = [x.mean_validation_score for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(len(KNC_grid['n_neighbors']), len(KNC_grid['leaf_size']))
plt.matshow(scores)
plt.xlabel('n_neighbors')
plt.ylabel('leaf_size')
plt.colorbar()
plt.xticks(range(len(KNC_grid['n_neighbors'])), KNC_grid['n_neighbors'])
plt.yticks(range(len(KNC_grid['leaf_size'])), KNC_grid['leaf_size']);
"""
Explanation: Exercises
Use GridSearchCV to adjust n_neighbors of KNeighborsClassifier.
End of explanation
"""
|
armandosrz/UdacityNanoMachine | student_intervention/student_intervention.ipynb | apache-2.0 | # Import libraries
import numpy as np
import pandas as pd
from time import time
from sklearn.metrics import f1_score
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
"""
Explanation: Machine Learning Engineer Nanodegree
Supervised Learning
Project: Building a Student Intervention System
Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
Question 1 - Classification vs. Regression
Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?
Answer:
The type of supervised learning problem is derived from the kind of output we are expecting. In regression we match inputs to continuous outputs, or in other words we predict the actual numeric value a specific input will have. On the other hand, in classification inputs are mapped into discrete outputs (Boolean values for example).
For this specific problem we need to detect those students who are projected to fail and need intervention. In order to reach this goal, the final dataset will be divided into two sets: Those who need intervention and does who do not. Making the problem a boolean variable and hence a classification problem
Exploring the Data
Run the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, 'passed', will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student.
End of explanation
"""
# TODO: Calculate number of students
n_students = len(student_data)
# TODO: Calculate number of features
n_features = len(student_data.columns[:-1])
# TODO: Calculate passing students
n_passed = len(student_data[student_data['passed'] == 'yes'])
# TODO: Calculate failing students
n_failed = n_students - n_passed
# TODO: Calculate graduation rate
grad_rate = (float(n_passed) / n_students) * 100
# Print the results
print "Total number of students: {}".format(n_students)
print "Number of features: {}".format(n_features)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
"""
Explanation: Implementation: Data Exploration
Let's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:
- The total number of students, n_students.
- The total number of features for each student, n_features.
- The number of those students who passed, n_passed.
- The number of those students who failed, n_failed.
- The graduation rate of the class, grad_rate, in percent (%).
End of explanation
"""
# Extract feature columns
feature_cols = list(student_data.columns[:-1])
# Extract target column 'passed'
target_col = student_data.columns[-1]
# Show the list of columns
print "Feature columns:\n{}".format(feature_cols)
print "\nTarget column: {}".format(target_col)
# Separate the data into feature data and target data (X_all and y_all, respectively)
X_all = student_data[feature_cols]
y_all = student_data[target_col]
from IPython.display import display
# Show the feature information by printing the first five rows
print "\nFeature values:"
# Added pretty table display
display(X_all.head())
"""
Explanation: Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Run the code cell below to separate the student data into feature and target columns to see if any features are non-numeric.
End of explanation
"""
def preprocess_features(X):
''' Preprocesses the student data and converts non-numeric binary variables into
binary (0/1) variables. Converts categorical variables into dummy variables. '''
# Initialize new output DataFrame
output = pd.DataFrame(index = X.index)
# Investigate each feature column for the data
for col, col_data in X.iteritems():
# If data type is non-numeric, replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# If data type is categorical, convert to dummy variables
if col_data.dtype == object:
# Example: 'school' => 'school_GP' and 'school_MS'
col_data = pd.get_dummies(col_data, prefix = col)
# Collect the revised columns
output = output.join(col_data)
return output
X_all = preprocess_features(X_all)
print "Processed feature columns ({} total features):\n{}".format(len(X_all.columns), list(X_all.columns))
%matplotlib inline
import seaborn as sns
sns.factorplot("failures", col="goout", data=student_data, hue='passed', kind="count");
"""
Explanation: Preprocess Feature Columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section.
End of explanation
"""
# TODO: Import any additional functionality you may need here
# TODO: Set the number of training points
num_train = 300
# Set the number of testing points
num_test = X_all.shape[0] - num_train
# TODO: Shuffle and split the dataset into the number of training and testing points above
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, stratify=y_all,train_size=num_train, random_state=37)
# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
print "\nTrain set 'yes' pct = {:.2f}%".format(100 * (y_train == 'yes').mean())
print "Test set 'yes' pct = {:.2f}%".format(100 * (y_test == 'yes').mean())
"""
Explanation: Implementation: Training and Testing Data Split
So far, we have converted all categorical features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:
- Randomly shuffle and split the data (X_all, y_all) into training and testing subsets.
- Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).
- Set a random_state for the function(s) you use, if provided.
- Store the results in X_train, X_test, y_train, and y_test.
End of explanation
"""
def train_classifier(clf, X_train, y_train):
''' Fits a classifier to the training data. '''
# Start the clock, train the classifier, then stop the clock
start = time()
clf.fit(X_train, y_train)
end = time()
# Print the results
print "Trained model in {:.4f} seconds".format(end - start)
def predict_labels(clf, features, target):
''' Makes predictions using a fit classifier based on F1 score. '''
# Start the clock, make predictions, then stop the clock
start = time()
y_pred = clf.predict(features)
end = time()
# Print and return results
print "Made predictions in {:.4f} seconds.".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
''' Train and predict using a classifer based on F1 score. '''
# Indicate the classifier and the training set size
print "Training a {} using a training set size of {}. . .".format(clf.__class__.__name__, len(X_train))
# Train the classifier
train_classifier(clf, X_train, y_train)
f1_training = predict_labels(clf, X_train, y_train)
f1_test = predict_labels(clf, X_test, y_test)
# Print the results of prediction for both training and testing
print "F1 score for training set: {:.4f}.".format(f1_training)
print "F1 score for test set: {:.4f}.".format(f1_test)
return [f1_training, f1_test]
"""
Explanation: Training and Evaluating Models
In this section, you will choose 3 supervised learning models that are appropriate for this problem and available in scikit-learn. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.
The following supervised learning models are currently available in scikit-learn that you may choose from:
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression
Question 2 - Model Application
List three supervised learning models that are appropriate for this problem. For each model chosen
- Describe one real-world application in industry where the model can be applied. (You may need to do a small bit of research for this — give references!)
- What are the strengths of the model; when does it perform well?
- What are the weaknesses of the model; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?
Answer:
Decision Tree:
Real-world Application: Decision trees are not only used to derive mathematical knowledge; humans use them every day. The folks at iBoske made it simple for users to create their own decision trees to help people know which camera or car to buy, for example.
Strengths of model: Easy to understand and implement, since little data preparation is necessary. Fast performance on datasets that have many dimensions, when some of those dimensions just add noise and should not be considered.
Weakness: The creation of extremely complex trees that do not properly generalize the data (overfitting). It can also produce a biased tree when one class dominates the others.
Good Candidate: In our dataset, we have 395 data points with 30 features each. Besides this, we do not possess any extra domain knowledge on the subject. Since we are seeking performance and want to mitigate the curse of dimensionality that would come with other methods, a Decision Tree is a good candidate to return results quickly and reliably. The outcomes will depend on how the attributes relate to each other and the splits performed on them.
Support Vector Machines (SVM):
Real-world Application: According to Wikipedia, SVMs are used to recognize handwritten characters and to classify images.
Strengths of model: Support Vector Machines are highly effective on datasets with a high number of dimensions (like ours) and are designed for cases where the data is linearly separable. They also offer the advantage of being tunable through different kernel functions.
Weakness: It does not directly provide probability estimates; these have to be obtained through the expensive process of cross-validation, so predictions tend to be slower.
Good Candidate: Due to the high number of dimensions in the dataset and a classification problem that is close to linearly separable, Support Vector Machines are a great candidate for our problem. They also give us plenty of room for tuning, looking to increase performance.
Ensemble Gradient Boosting:
Real-world Application: Currently used at Yahoo to improve their learning-to-rank of web pages returned as query results from a web search.
Strengths of model: With Ensemble Gradient Boosting we can detect and reduce the impact of outliers in the dataset. It has great predictive power, since it builds a sequence of trees, each one trying to improve on the previous solution.
Weakness: It tends to be slower due to the sequential building of the trees, so training takes more time.
Good Candidate: According to my online sources, this model tends to perform somewhat better than the ones mentioned above. It is also known for handling a large number of features well.
Setup
Run the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:
- train_classifier - takes as input a classifier and training data and fits the classifier to the data.
- predict_labels - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.
- train_predict - takes as input a classifier, and the training and testing data, and performs train_classifier and predict_labels.
- This function will report the F<sub>1</sub> score for both the training and testing data separately.
End of explanation
"""
# TODO: Import the three supervised learning models from sklearn
from sklearn import tree
from sklearn import svm
from sklearn.ensemble import GradientBoostingClassifier
#from sklearn.ensemble import RandomForestClassifier
# TODO: Initialize the three models
rand_state = 37
clf_A = tree.DecisionTreeClassifier(random_state=rand_state)
clf_B = svm.SVC(random_state=rand_state)
clf_C = GradientBoostingClassifier(random_state=rand_state)
models = [clf_A, clf_B, clf_C]
# TODO: Execute the 'train_predict' function for each classifier and each training set size
# train_predict(clf, X_train, y_train, X_test, y_test)
results = []
for model in models:
# TODO: Set up the training set sizes
print '************************************************************\n'
train = []
test = []
for set_size in (100,200,300):
print '------------------------------------------------------------'
train_values = train_predict(model, X_train[:set_size], y_train[:set_size], X_test, y_test)
train.append(train_values[0])
test.append(train_values[1])
results.append(train)
results.append(test)
"""
Explanation: Implementation: Model Performance Metrics
With the predefined functions above, you will now import the three supervised learning models of your choice and run the train_predict function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:
- Import the three supervised learning models you've discussed in the previous section.
- Initialize the three models and store them in clf_A, clf_B, and clf_C.
- Use a random_state for each model you use, if provided.
- Note: Use the default settings for each model — you will tune one specific model in a later section.
- Create the different training set sizes to be used to train each model.
- Do not reshuffle and resplit the data! The new training points should be drawn from X_train and y_train.
- Fit each model with each training set size and make predictions on the test set (9 in total).
Note: Three tables are provided after the following code cell which can be used to store your results.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
# Training-set F1 scores are stored at even indices (0, 2, 4) of 'results'.
for a in range(3):
    plt.plot([100,200,300], results[2*a], '-o')
plt.legend(['y = Tree_train', 'y = SVM_train', 'y = EGB_train'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0)
plt.ylabel('Score')
plt.title('Training Scores')
plt.show()
# Test-set F1 scores are stored at odd indices (1, 3, 5) of 'results'.
for a in range(1,6,2):
    plt.plot([100,200,300], results[a], '-o')
plt.legend(['y = Tree_test', 'y = SVM_test', 'y = EGB_test'],
bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0)
plt.ylabel('Score')
plt.title('Testing Scores')
plt.show()
"""
Explanation: Tabular Results
Edit the cell below to see how a table can be designed in Markdown. You can record your results from above in the tables provided.
Classifier 1 - Decision Tree Classifier
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0012 | 0.0004 | 1.00 | 0.6552 |
| 200 | 0.0015 | 0.0003 | 1.00 | 0.7231 |
| 300 | 0.0022 | 0.0003 | 1.00 | 0.7244 |
Classifier 2 - Support Vector Machines (SVM)
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0010 | 0.009 | 0.8444 | 0.7287 |
| 200 | 0.0042 | 0.0016 | 0.8525 | 0.7755 |
| 300 | 0.0085 | 0.0021 | 0.8720 | 0.8212 |
Classifier 3 - Ensemble Gradient Boosting
| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| 100 | 0.0647 | 0.0005 | 1.0000 | 0.6723 |
| 200 | 0.0808 | 0.0004 | 0.9962 | 0.7669 |
| 300 | 0.0933 | 0.0004 | 0.9756 | 0.7536 |
End of explanation
"""
# TODO: Import 'GridSearchCV' and 'make_scorer'
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
# TODO: Create the parameters list you wish to tune
parameters = {'kernel':['linear', 'rbf', 'poly','sigmoid'],
'C': [0.6, 1, 1.5, 3],
'probability': [True, False],
'tol': [1e-6,1e-5, 1e-4],
'random_state': [37]
}
# TODO: Initialize the classifier
clf = svm.SVC()
# TODO: Make an f1 scoring function using 'make_scorer'
def f1_metrics(y_true, y_pred):
f1 = f1_score(y_true, y_pred, pos_label='yes')
return f1
f1_scorer = make_scorer(f1_metrics)
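# (Hedged aside, not from the original notebook) make_scorer can also forward
# keyword arguments straight to the metric, so the small wrapper above could be
# replaced with a one-liner such as:
# f1_scorer = make_scorer(f1_score, pos_label='yes')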
# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method
grid_obj = GridSearchCV(clf, param_grid=parameters, scoring=f1_scorer)
# TODO: Fit the grid search object to the training data and find the optimal parameters
grid_obj = grid_obj.fit(X_train, y_train)
# Get the estimator
clf = grid_obj.best_estimator_
# Report the final F1 score for training and testing after parameter tuning
print clf
print "Tuned model has a training F1 score of {:.4f}.".format(predict_labels(clf, X_train, y_train))
print "Tuned model has a testing F1 score of {:.4f}.".format(predict_labels(clf, X_test, y_test))
from IPython.display import display
#display(pd.DataFrame(grid_obj.grid_scores_))
grid_results = pd.DataFrame(grid_obj.grid_scores_)
grid_ = [[x[0]['C'], x[0]['tol'], x[1]] for index, x in list(grid_results.iterrows())]
#sns.heatmap( grid_, annot=True, fmt="d", linewidths=.5)
# tried but couldn't make it a heatmap
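# (Hedged sketch, not part of the original notebook) One way the heatmap attempt
# above could work: average the scores over the other tuned parameters, pivot the
# [C, tol, score] rows into a 2-D grid, and use a float format instead of "d".
heat_df = pd.DataFrame(grid_, columns=['C', 'tol', 'mean_f1'])
heat_pivot = heat_df.pivot_table(index='C', columns='tol', values='mean_f1')
# Assumes seaborn is imported as sns, as the commented-out line above implies.
#sns.heatmap(heat_pivot, annot=True, fmt=".3f", linewidths=.5)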
"""
Explanation: Choosing the Best Model
In this final section, you will choose from the three supervised learning models the best model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (X_train and y_train) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score.
Question 3 - Choosing the Best Model
Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
Answer:
Dear Board of supervisors,
Given all the previous results, we will be implementing the Support Vector Machine model to predict which students need tutoring in order to prevent them from failing. In terms of time, the Decision Tree gives us the fastest option for both training and prediction. However, its perfect score on the training set is not ideal, because it is an indication of overfitting; this shows up in the testing results, where it obtains the worst outcome of the three algorithms. Compared with Gradient Boosting, the SVM is slower at prediction but much faster to train. Gradient Boosting getting close to a perfect score on the training set is not necessarily an indication of overfitting, due to the sequential building of trees the algorithm uses, although that same process has a large impact on the time spent training the data.
In conclusion, SVM proves to be the best option in terms of quality, since it provides the best score while remaining time-efficient in both training and prediction. It is also the algorithm that offers the best tuning options, through the ability to use different kernels to improve performance. I am confident that at the end of the tuning a prediction accuracy of more than 80% will be achieved.
Question 4 - Model in Layman's Terms
In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.
Answer:
Dear Board of Directors,
We are currently facing a classification problem, which means we need to use our data to determine which category an element belongs to based on its attributes. For example, when determining whether a vehicle is a car or a motorcycle we can count the number of wheels; if 2 wheels are present then it is a motorcycle, otherwise it is a car. Our goal is to find out whether a student will pass or fail the class based on the given parameters.
This is precisely what a Support Vector Machine does. To explain it, let us think about a football game. During the game we have 22 players on the field, of which 11 belong to Ohio State and 11 belong to Michigan. Before each play starts there is an imaginary line, known as the line of scrimmage, that separates the starting points of the two teams. What the SVM does is try to find this line of scrimmage in our data, so that if another player is added on the Michigan side we can confidently predict that he is in fact a Michigan player (and an illegal formation penalty for them, of course).
In SVM we might have some wayward players roaming on the wrong side of the field, so the line of scrimmage would not give us a perfectly clean separation. It will, however, give us the line that maximizes the margin between the two categories. SVM also includes a trick play, known as the kernel trick, that allows us to add new dimensions to our data in order to make better separations.
Implementation: Model Tuning
Fine tune the chosen model. Use grid search (GridSearchCV) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import sklearn.grid_search.GridSearchCV and sklearn.metrics.make_scorer.
- Create a dictionary of parameters you wish to tune for the chosen model.
- Example: parameters = {'parameter' : [list of values]}.
- Initialize the classifier you've chosen and store it in clf.
- Create the F<sub>1</sub> scoring function using make_scorer and store it in f1_scorer.
- Set the pos_label parameter to the correct value!
- Perform grid search on the classifier clf using f1_scorer as the scoring method, and store it in grid_obj.
- Fit the grid search object to the training data (X_train, y_train), and store it in grid_obj.
End of explanation
"""
students = X_train[:10]
display(students)
"""
Explanation: Question 5 - Final F<sub>1</sub> Score
What is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?
Answer:
| Model | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |
| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |
| Un-tuned | 0.0085 | 0.0021 | 0.8650 | 0.8212 |
| Tuned | 0.0056 | 0.0020 | 0.8358 | 0.8205 |
Scores: In terms of scores, the tuned model achieves a similar performance on the testing set but is about 3.5% worse on the training set.
Time: The big performance increase occurred in terms of time. We were able to reduce the time spent during training by 0.0029 seconds, and the prediction time was also reduced.
Overall, by tuning the model the testing score stayed essentially the same, but we made a big improvement in terms of time efficiency.
Extra Exploration
The following students are going to be further analysed:
End of explanation
"""
# Each prediction is the model's predicted label for the corresponding student.
for i, prediction in enumerate(clf.predict(students)):
    print "Is the Student {} predicted to pass the year: {}".format(i+1, prediction)
X_all.iloc[164]
"""
Explanation: Prediction for them:
End of explanation
"""
|
phoebe-project/phoebe2-docs | 2.1/tutorials/limb_darkening.ipynb | gpl-3.0 | !pip install -I "phoebe>=2.1,<2.2"
"""
Explanation: Limb Darkening
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
"""
Explanation: We'll just add an 'lc' dataset
End of explanation
"""
print b['ld_func_bol@primary']
print b['ld_func_bol@primary'].choices
print b['ld_coeffs_bol@primary']
print b['ld_func@lc01']
print b['ld_func@lc01@primary'].choices
"""
Explanation: Relevant Parameters
End of explanation
"""
b['ld_func@lc01@primary'] = 'logarithmic'
print b['ld_coeffs@lc01@primary']
"""
Explanation: Note that ld_coeffs isn't visible (relevant) if ld_func=='interp'
End of explanation
"""
b.run_compute(model='mymodel')
afig, mplfig = b['lc01@mymodel'].plot(show=True)
"""
Explanation: Influence on Light Curves (fluxes)
End of explanation
"""
|
newworldnewlife/TensorFlow-Tutorials | 18_TFRecords_Dataset_API.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.image import imread
import tensorflow as tf
import numpy as np
import sys
import os
"""
Explanation: TensorFlow Tutorial #18
TFRecords & Dataset API
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
In the previous tutorials we used a so-called feed-dict for inputting data to the TensorFlow graph. It is a fairly simple input method but it is also a performance bottleneck because the data is read sequentially between training steps. This makes it hard to use the GPU at 100% efficiency because the GPU has to wait for new data to work on.
Instead we want to read data in a parallel thread so new training data is always available whenever the GPU is ready. This used to be done with so-called QueueRunners in TensorFlow which was a very complicated system. Now it can be done with the Dataset API and a binary file-format called TFRecords, as described in this tutorial.
This builds on Tutorial #17 for the Estimator API.
Imports
End of explanation
"""
tf.__version__
"""
Explanation: This was developed using Python 3.6 (Anaconda) and TensorFlow version:
End of explanation
"""
import knifey
"""
Explanation: Load Data
End of explanation
"""
from knifey import img_size, img_size_flat, img_shape, num_classes, num_channels
"""
Explanation: The data dimensions have already been defined in the knifey module, so we just need to import the ones we need.
End of explanation
"""
# knifey.data_dir = "data/knifey-spoony/"
"""
Explanation: Set the directory for storing the data-set on your computer.
End of explanation
"""
knifey.maybe_download_and_extract()
"""
Explanation: The Knifey-Spoony data-set is about 22 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
"""
dataset = knifey.load()
"""
Explanation: Now load the data-set. This scans the sub-directories for all *.jpg images and puts the filenames into two lists for the training-set and test-set. This does not actually load the images.
End of explanation
"""
class_names = dataset.class_names
class_names
"""
Explanation: Get the class-names.
End of explanation
"""
image_paths_train, cls_train, labels_train = dataset.get_training_set()
"""
Explanation: Training and Test-Sets
This function returns the file-paths for the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.
In this tutorial we will actually use the integer class-numbers and call them labels. This may be a little confusing but you can always add print-statements to see what the data actually is.
End of explanation
"""
image_paths_train[0]
"""
Explanation: Print the first image-path to see if it looks OK.
End of explanation
"""
image_paths_test, cls_test, labels_test = dataset.get_test_set()
"""
Explanation: Get the test-set.
End of explanation
"""
image_paths_test[0]
"""
Explanation: Print the first image-path to see if it looks OK.
End of explanation
"""
print("Size of:")
print("- Training-set:\t\t{}".format(len(image_paths_train)))
print("- Test-set:\t\t{}".format(len(image_paths_test)))
"""
Explanation: The Knifey-Spoony data-set has now been loaded and consists of 4700 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.
End of explanation
"""
def plot_images(images, cls_true, cls_pred=None, smooth=True):
assert len(images) == len(cls_true)
# Create figure with sub-plots.
fig, axes = plt.subplots(3, 3)
# Adjust vertical spacing.
if cls_pred is None:
hspace = 0.3
else:
hspace = 0.6
fig.subplots_adjust(hspace=hspace, wspace=0.3)
# Interpolation type.
if smooth:
interpolation = 'spline16'
else:
interpolation = 'nearest'
for i, ax in enumerate(axes.flat):
# There may be less than 9 images, ensure it doesn't crash.
if i < len(images):
# Plot image.
ax.imshow(images[i],
interpolation=interpolation)
# Name of the true class.
cls_true_name = class_names[cls_true[i]]
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true_name)
else:
# Name of the predicted class.
cls_pred_name = class_names[cls_pred[i]]
xlabel = "True: {0}\nPred: {1}".format(cls_true_name,
cls_pred_name)
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
"""
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
"""
def load_images(image_paths):
# Load the images from disk.
images = [imread(path) for path in image_paths]
# Convert to a numpy array and return it.
return np.asarray(images)
"""
Explanation: Helper-function for loading images
This dataset does not load the actual images, instead it has a list of the images in the training-set and another list for the images in the test-set. This helper-function loads some image-files.
End of explanation
"""
# Load the first images from the test-set.
images = load_images(image_paths=image_paths_test[0:9])
# Get the true classes for those images.
cls_true = cls_test[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true, smooth=True)
"""
Explanation: Plot a few images to see if data is correct
End of explanation
"""
path_tfrecords_train = os.path.join(knifey.data_dir, "train.tfrecords")
path_tfrecords_train
"""
Explanation: Create TFRecords
TFRecords is the binary file-format used internally in TensorFlow which allows for high-performance reading and processing of datasets.
For this small dataset we will just create one TFRecords file for the training-set and another for the test-set. But if your dataset is very large then you can split it into several TFRecords files called shards. This will also improve the random shuffling, because the Dataset API only shuffles from a smaller buffer of e.g. 1024 elements loaded into RAM. So if you have e.g. 100 TFRecords files, then the randomization will be much better than for a single TFRecords file.
File-path for the TFRecords file holding the training-set.
End of explanation
"""
path_tfrecords_test = os.path.join(knifey.data_dir, "test.tfrecords")
path_tfrecords_test
"""
Explanation: File-path for the TFRecords file holding the test-set.
End of explanation
"""
def print_progress(count, total):
# Percentage completion.
pct_complete = float(count) / total
# Status-message.
# Note the \r which means the line should overwrite itself.
msg = "\r- Progress: {0:.1%}".format(pct_complete)
# Print it.
sys.stdout.write(msg)
sys.stdout.flush()
"""
Explanation: Helper-function for printing the conversion progress.
End of explanation
"""
def wrap_int64(value):
return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
"""
Explanation: Helper-function for wrapping an integer so it can be saved to the TFRecords file.
End of explanation
"""
def wrap_bytes(value):
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
"""
Explanation: Helper-function for wrapping raw bytes so they can be saved to the TFRecords file.
End of explanation
"""
def convert(image_paths, labels, out_path):
# Args:
# image_paths List of file-paths for the images.
# labels Class-labels for the images.
# out_path File-path for the TFRecords output file.
print("Converting: " + out_path)
# Number of images. Used when printing the progress.
num_images = len(image_paths)
# Open a TFRecordWriter for the output-file.
with tf.python_io.TFRecordWriter(out_path) as writer:
# Iterate over all the image-paths and class-labels.
for i, (path, label) in enumerate(zip(image_paths, labels)):
# Print the percentage-progress.
print_progress(count=i, total=num_images-1)
# Load the image-file using matplotlib's imread function.
img = imread(path)
# Convert the image to raw bytes.
img_bytes = img.tostring()
# Create a dict with the data we want to save in the
# TFRecords file. You can add more relevant data here.
data = \
{
'image': wrap_bytes(img_bytes),
'label': wrap_int64(label)
}
# Wrap the data as TensorFlow Features.
feature = tf.train.Features(feature=data)
# Wrap again as a TensorFlow Example.
example = tf.train.Example(features=feature)
# Serialize the data.
serialized = example.SerializeToString()
# Write the serialized data to the TFRecords file.
writer.write(serialized)
"""
Explanation: This is the function for reading images from disk and writing them along with the class-labels to a TFRecords file. This loads and decodes the images to numpy-arrays and then stores the raw bytes in the TFRecords file. If the original image-files are compressed e.g. as jpeg-files, then the TFRecords file may be many times larger than the original image-files.
It is also possible to save the compressed image files directly in the TFRecords file because it can hold any raw bytes. We would then have to decode the compressed images when the TFRecords file is being read later in the parse() function below.
End of explanation
"""
convert(image_paths=image_paths_train,
labels=cls_train,
out_path=path_tfrecords_train)
"""
Explanation: Note the 4 function calls required to write the data-dict to the TFRecords file. In the original code-example from the Google Developers, these 4 function calls were actually nested. The design-philosophy for TensorFlow generally seems to be: If one function call is good, then 4 function calls are 4 times as good, and if they are nested then it is exponential goodness!
Of course, this is quite poor API design because the last function writer.write() should just be able to take the data-dict directly and then call the 3 other functions internally.
Convert the training-set to a TFRecords-file. Note how we use the integer class-numbers as the labels instead of the One-Hot encoded arrays.
End of explanation
"""
convert(image_paths=image_paths_test,
labels=cls_test,
out_path=path_tfrecords_test)
"""
Explanation: Convert the test-set to a TFRecords-file:
End of explanation
"""
def parse(serialized):
# Define a dict with the data-names and types we expect to
# find in the TFRecords file.
# It is a bit awkward that this needs to be specified again,
# because it could have been written in the header of the
# TFRecords file instead.
features = \
{
'image': tf.FixedLenFeature([], tf.string),
'label': tf.FixedLenFeature([], tf.int64)
}
# Parse the serialized data so we get a dict with our data.
parsed_example = tf.parse_single_example(serialized=serialized,
features=features)
# Get the image as raw bytes.
image_raw = parsed_example['image']
# Decode the raw bytes so it becomes a tensor with type.
image = tf.decode_raw(image_raw, tf.uint8)
# The type is now uint8 but we need it to be float.
image = tf.cast(image, tf.float32)
# Get the label associated with the image.
label = parsed_example['label']
# The image and label are now correct TensorFlow types.
return image, label
"""
Explanation: Input Functions for the Estimator
The TFRecords files contain the data in a serialized binary format which needs to be converted back to images and labels of the correct data-type. We use a helper-function for this parsing:
End of explanation
"""
def input_fn(filenames, train, batch_size=32, buffer_size=2048):
# Args:
# filenames: Filenames for the TFRecords files.
# train: Boolean whether training (True) or testing (False).
# batch_size: Return batches of this size.
# buffer_size: Read buffers of this size. The random shuffling
# is done on the buffer, so it must be big enough.
# Create a TensorFlow Dataset-object which has functionality
# for reading and shuffling data from TFRecords files.
dataset = tf.data.TFRecordDataset(filenames=filenames)
# Parse the serialized data in the TFRecords files.
# This returns TensorFlow tensors for the image and labels.
dataset = dataset.map(parse)
if train:
# If training then read a buffer of the given size and
# randomly shuffle it.
dataset = dataset.shuffle(buffer_size=buffer_size)
# Allow infinite reading of the data.
num_repeat = None
else:
# If testing then don't shuffle the data.
# Only go through the data once.
num_repeat = 1
# Repeat the dataset the given number of times.
dataset = dataset.repeat(num_repeat)
# Get a batch of data with the given size.
dataset = dataset.batch(batch_size)
# Create an iterator for the dataset and the above modifications.
iterator = dataset.make_one_shot_iterator()
# Get the next batch of images and labels.
images_batch, labels_batch = iterator.get_next()
# The input-function must return a dict wrapping the images.
x = {'image': images_batch}
y = labels_batch
return x, y
"""
Explanation: Helper-function for creating an input-function that reads from TFRecords files for use with the Estimator API.
End of explanation
"""
def train_input_fn():
return input_fn(filenames=path_tfrecords_train, train=True)
"""
Explanation: This is the input-function for the training-set for use with the Estimator API:
End of explanation
"""
def test_input_fn():
return input_fn(filenames=path_tfrecords_test, train=False)
"""
Explanation: This is the input-function for the test-set for use with the Estimator API:
End of explanation
"""
some_images = load_images(image_paths=image_paths_test[0:9])
"""
Explanation: Input Function for Predicting on New Images
An input-function is also needed for predicting the class of new data. As an example we just use a few images from the test-set.
You could load any images you want here. Make sure they are the same dimensions as expected by the TensorFlow model, otherwise you need to resize the images.
End of explanation
"""
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
x={"image": some_images.astype(np.float32)},
num_epochs=1,
shuffle=False)
"""
Explanation: These images are now stored as numpy arrays in memory, so we can use the standard input-function for the Estimator API. Note that the images are loaded as uint8 data but it must be input to the TensorFlow graph as floats so we do a type-cast.
End of explanation
"""
some_images_cls = cls_test[0:9]
"""
Explanation: The class-numbers are actually not used in the input-function as it is not needed for prediction. However, the true class-number is needed when we plot the images further below.
End of explanation
"""
feature_image = tf.feature_column.numeric_column("image",
shape=img_shape)
"""
Explanation: Pre-Made / Canned Estimator
When using a pre-made Estimator, we need to specify the input features for the data. In this case we want to input images from our data-set which are numeric arrays of the given shape.
End of explanation
"""
feature_columns = [feature_image]
"""
Explanation: You can have several input features which would then be combined in a list:
End of explanation
"""
num_hidden_units = [512, 256, 128]
"""
Explanation: In this example we want to use a 3-layer DNN with 512, 256 and 128 units respectively.
End of explanation
"""
model = tf.estimator.DNNClassifier(feature_columns=feature_columns,
hidden_units=num_hidden_units,
activation_fn=tf.nn.relu,
n_classes=num_classes,
model_dir="./checkpoints_tutorial18-1/")
"""
Explanation: The DNNClassifier then constructs the neural network for us. We can also specify the activation function and various other parameters (see the docs). Here we just specify the number of classes and the directory where the checkpoints will be saved.
End of explanation
"""
model.train(input_fn=train_input_fn, steps=200)
"""
Explanation: Training
We can now train the model for a given number of iterations. This automatically loads and saves checkpoints so we can continue the training later.
End of explanation
"""
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
"""
Explanation: Evaluation
Once the model has been trained, we can evaluate its performance on the test-set.
End of explanation
"""
predictions = model.predict(input_fn=predict_input_fn)
cls = [p['classes'] for p in predictions]
cls_pred = np.array(cls, dtype='int').squeeze()
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
"""
Explanation: Predictions
The trained model can also be used to make predictions on new data.
Note that the TensorFlow graph is recreated and the checkpoint is reloaded every time we make predictions on new data. If the model is very large then this could add a significant overhead.
It is unclear why the Estimator is designed this way, possibly because it will always use the latest checkpoint and it can also be distributed easily for use on multiple computers.
End of explanation
"""
predictions = model.predict(input_fn=test_input_fn)
cls = [p['classes'] for p in predictions]
cls_pred = np.array(cls, dtype='int').squeeze()
"""
Explanation: Predictions for the Entire Test-Set
It appears that the model may be classifying all images as 'spoony'. So let us see the predictions for the entire test-set. We can do this simply by using its input-function:
End of explanation
"""
np.sum(cls_pred == 2)
"""
Explanation: The test-set contains 530 images in total and they have all been predicted as class 2 (spoony). So this model does not work at all for classifying the Knifey-Spoony dataset.
End of explanation
"""
def model_fn(features, labels, mode, params):
# Args:
#
# features: This is the x-arg from the input_fn.
# labels: This is the y-arg from the input_fn.
# mode: Either TRAIN, EVAL, or PREDICT
# params: User-defined hyper-parameters, e.g. learning-rate.
# Reference to the tensor named "image" in the input-function.
x = features["image"]
# The convolutional layers expect 4-rank tensors
# but x is a 2-rank tensor, so reshape it.
net = tf.reshape(x, [-1, img_size, img_size, num_channels])
# First convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv1',
filters=32, kernel_size=3,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Second convolutional layer.
net = tf.layers.conv2d(inputs=net, name='layer_conv2',
filters=32, kernel_size=3,
padding='same', activation=tf.nn.relu)
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
# Flatten to a 2-rank tensor.
net = tf.contrib.layers.flatten(net)
# Eventually this should be replaced with:
# net = tf.layers.flatten(net)
# First fully-connected / dense layer.
# This uses the ReLU activation function.
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
# Second fully-connected / dense layer.
# This is the last layer so it does not use an activation function.
net = tf.layers.dense(inputs=net, name='layer_fc_2',
units=num_classes)
# Logits output of the neural network.
logits = net
# Softmax output of the neural network.
y_pred = tf.nn.softmax(logits=logits)
# Classification output of the neural network.
y_pred_cls = tf.argmax(y_pred, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
# If the estimator is supposed to be in prediction-mode
# then use the predicted class-number that is output by
# the neural network. Optimization etc. is not needed.
spec = tf.estimator.EstimatorSpec(mode=mode,
predictions=y_pred_cls)
else:
# Otherwise the estimator is supposed to be in either
# training or evaluation-mode. Note that the loss-function
# is also required in Evaluation mode.
# Define the loss-function to be optimized, by first
# calculating the cross-entropy between the output of
# the neural network and the true labels for the input data.
# This gives the cross-entropy for each image in the batch.
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels,
logits=logits)
# Reduce the cross-entropy batch-tensor to a single number
# which can be used in optimization of the neural network.
loss = tf.reduce_mean(cross_entropy)
# Define the optimizer for improving the neural network.
optimizer = tf.train.AdamOptimizer(learning_rate=params["learning_rate"])
# Get the TensorFlow op for doing a single optimization step.
train_op = optimizer.minimize(
loss=loss, global_step=tf.train.get_global_step())
# Define the evaluation metrics,
# in this case the classification accuracy.
metrics = \
{
"accuracy": tf.metrics.accuracy(labels, y_pred_cls)
}
# Wrap all of this in an EstimatorSpec.
spec = tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
train_op=train_op,
eval_metric_ops=metrics)
return spec
"""
Explanation: New Estimator
If you cannot use one of the built-in Estimators, then you can create an arbitrary TensorFlow model yourself. To do this, you first need to create a function which defines the following:
The TensorFlow model, e.g. a Convolutional Neural Network.
The output of the model.
The loss-function used to improve the model during optimization.
The optimization method.
Performance metrics.
The Estimator can be run in three modes: Training, Evaluation, or Prediction. The code is mostly the same, but in Prediction-mode we do not need to setup the loss-function and optimizer.
This is another aspect of the Estimator API that is poorly designed and resembles how we did ANSI C programming using structs in the old days. It would probably have been more elegant to split this into several functions and sub-classed the Estimator-class.
End of explanation
"""
params = {"learning_rate": 1e-4}
"""
Explanation: Create an Instance of the Estimator
We can specify hyper-parameters e.g. for the learning-rate of the optimizer.
End of explanation
"""
model = tf.estimator.Estimator(model_fn=model_fn,
params=params,
model_dir="./checkpoints_tutorial18-2/")
"""
Explanation: We can then create an instance of the new Estimator.
Note that we don't provide feature-columns here as it is inferred automatically from the data-functions when model_fn() is called.
It is unclear from the TensorFlow documentation why it is necessary to specify the feature-columns when using DNNClassifier in the example above, when it is not needed here.
End of explanation
"""
model.train(input_fn=train_input_fn, steps=200)
"""
Explanation: Training
Now that our new Estimator has been created, we can train it.
End of explanation
"""
result = model.evaluate(input_fn=test_input_fn)
result
print("Classification accuracy: {0:.2%}".format(result["accuracy"]))
"""
Explanation: Evaluation
Once the model has been trained, we can evaluate its performance on the test-set.
End of explanation
"""
predictions = model.predict(input_fn=predict_input_fn)
cls_pred = np.array(list(predictions))
cls_pred
plot_images(images=some_images,
cls_true=some_images_cls,
cls_pred=cls_pred)
"""
Explanation: Predictions
The model can also be used to make predictions on new data.
End of explanation
"""
predictions = model.predict(input_fn=test_input_fn)
cls_pred = np.array(list(predictions))
cls_pred
"""
Explanation: Predictions for the Entire Test-Set
To get the predicted classes for the entire test-set, we just use its input-function:
End of explanation
"""
np.sum(cls_pred == 0)
np.sum(cls_pred == 1)
np.sum(cls_pred == 2)
"""
Explanation: The Convolutional Neural Network predicts different classes for the images, although most have just been classified as 0 (forky), so the accuracy is horrible.
End of explanation
"""
|
BartKeulen/drl | notebooks/2-Link arm.ipynb | mit | import numpy as np
from scipy.integrate import ode
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation
%matplotlib notebook
"""
Explanation: 2-Link Arm
Implementation of a 2-link arm.
$q = \left[\theta_1, \theta_2, \dot{\theta}_1, \dot{\theta}_2\right] \rightarrow \dot{q} = \left[q_3, q_4, \ddot{q}_1, \ddot{q}_2\right]$
ODE is defined as:
$\ddot{q} = B^{-1}(q)(-C(\dot{q}, q) - g(q) + F)$
With:
$B(q) = \begin{bmatrix}
(m_1 + m_2)l_1^2 + m_2l_2^2 + 2m_2l_1l_2\cos(q_2) & m_2l_2^2 + m_2l_1l_2\cos(q_2) \\
m_2l_2^2 + m_2l_1l_2\cos(q_2) & m_2l_2^2
\end{bmatrix}\\
C(\dot{q}, q) = \begin{bmatrix}
-m_2l_1l_2\sin(q_2)(2q_3q_4 + q_4^2) \\
-m_2l_1l_2\sin(q_2)q_3q_4
\end{bmatrix}\\
g(q) = \begin{bmatrix}
-(m_1+m_2)gl_1\sin(q_1) - m_2gl_2\sin(q_1+q_2) \\
-m_2gl_2\sin(q_1+q_2)
\end{bmatrix}$
End of explanation
"""
def eom(t, q, params, ths, Kpd):
m1 , m2, l1, l2, g = params
q1, q2, q3, q4, q5, q6 = q
b11 = (m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*np.cos(q2)
b12 = m2*l2**2 + m2*l1*l2*np.cos(q2)
b21 = m2*l2**2 + m2*l1*l2*np.cos(q2)
b22 = m2*l2**2
B = np.array([[b11, b12], [b21, b22]])
c1 = -m2*l1*l2*np.sin(q2)*(2*q3*q4 + q4**2)
c2 = -m2*l1*l2*np.sin(q2)*q3*q4
C = np.array([[c1], [c2]])
g1 = -(m1 + m2)*g*l1*np.sin(q1) - m2*g*l2*np.sin(q1 + q2)
g2 = -m2*g*l2*np.sin(q1 + q2)
g = np.array([[g1], [g2]])
Kp1, Kd1, Ki1, Kp2, Kd2, Ki2 = Kpd
th1s, th2s = ths
f1 = Kp1*(th1s - q1) - Kd1*q3 + Ki1*q5
f2 = Kp2*(th2s - q2) - Kd2*q4 + Ki2*q6
Fhat = np.array([[f1], [f2]])
F = np.dot(B, Fhat)
qdotdot = np.dot(np.linalg.inv(B), -C - g + F)
qdot = [q3, q4] + qdotdot.T[0].tolist() + [th1s-q1, th2s-q2]
return qdot
"""
Explanation: Equations of motion
End of explanation
"""
q0 = [-np.pi/2, np.pi/2, 0., 0., 0., 0.]
ths = (np.pi/2, -np.pi/2)
Ts = 20
m1 = 1
m2 = 1
l1 = 1
l2 = 1
g = 9.81
params = (m1, m2, l1, l2, g)
Kp1 = 15
Kd1 = 7
Ki1 = 10
Kp2 = 15
Kd2 = 10
Ki2 = 10
Kpd = (Kp1, Kd1, Ki1, Kp2, Kd2, Ki2)
r = ode(eom).set_integrator('dopri5')
r.set_initial_value(q0, 0).set_f_params(params, ths, Kpd)
dt = 0.1
sol = []
while r.successful() and r.t < Ts:
q = r.integrate(r.t + dt)
sol.append(q)
sol = np.array(sol)
"""
Explanation: Solver
Solve the system with initial position $q_0 = \left[-\pi/2, \pi/2, 0., 0.\right]$ and desired position $\theta_s = \left[\pi/2, -\pi/2\right]$. Parameters are set to $m_1 = m_2 = l_1 = l_2 = 1$ and $g = 9.81$.
Solve with simple PID controller, with $K_p = \left[15, 15\right]$ and $K_d = \left[7, 10\right]$ and $K_i = \left[10, 10\right]$.
End of explanation
"""
t = np.linspace(0, Ts, Ts*10)
q1d = np.ones(len(t))*ths[0]
q2d = np.ones(len(t))*ths[1]
plt.figure(figsize=(9,4))
plt.subplot(221)
plt.plot(t, sol[:, 0], 'b', label='Actual')
plt.plot(t, q1d, 'g--', label='Desired')
plt.legend(loc='best')
plt.xlabel('t')
plt.ylabel(r'$\theta_1$')
plt.grid()
plt.subplot(222)
plt.plot(t, q1d-sol[:, 0])
plt.xlabel('t')
plt.ylabel('error')
plt.grid()
plt.subplot(223)
plt.plot(t, sol[:, 1], 'b', label='Actual')
plt.plot(t, q2d, 'g--', label='Desired')
plt.legend(loc='best')
plt.xlabel('t')
plt.ylabel(r'$\theta_2$')
plt.grid()
plt.subplot(224)
plt.plot(t, q2d-sol[:, 1])
plt.xlabel('t')
plt.ylabel('error')
plt.grid()
"""
Explanation: Results
End of explanation
"""
x1 = np.sin(sol[:, 0])*l1
y1 = np.cos(sol[:, 0])*l1
x2 = x1 + np.sin(sol[:, 0] + sol[:, 1])*l2
y2 = y1 + np.cos(sol[:, 0] + sol[:, 1])*l2
x1d = np.sin(ths[0])*l1
y1d = np.cos(ths[0])*l1
x2d = x1d + np.sin(ths[0] + ths[1])*l2
y2d = y1d + np.cos(ths[0] + ths[1])*l2
fig, ax = plt.subplots()
ax.set_xlim((-l1-l2, l1+l2))
ax.set_ylim((-l1-l2, l1+l2))
desire_position = ax.plot(x1d, y1d, 'r*')
desire_position = ax.plot(x2d, y2d, 'r*')
line1, = ax.plot([], [], 'b')
line2, = ax.plot([], [], 'b')
lines = [line1, line2]
def init():
line1.set_data([], [])
line2.set_data([], [])
return [line1, line2]
def animate(i):
line1.set_data([0, x1[i]], [0, y1[i]])
line2.set_data([x1[i], x2[i]], [y1[i], y2[i]])
return [line1, line2]
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=len(x1), blit=True)
"""
Explanation: Animation
End of explanation
"""
def eom_new(t, q, u, params):
m1 , m2, l1, l2, g = params
u1, u2 = u
q1, q2, q3, q4 = q
b11 = (m1 + m2)*l1**2 + m2*l2**2 + 2*m2*l1*l2*np.cos(q2)
b12 = m2*l2**2 + m2*l1*l2*np.cos(q2)
b21 = m2*l2**2 + m2*l1*l2*np.cos(q2)
b22 = m2*l2**2
B = np.array([[b11, b12], [b21, b22]])
c1 = -m2*l1*l2*np.sin(q2)*(2*q3*q4 + q4**2)
c2 = -m2*l1*l2*np.sin(q2)*q3*q4
C = np.array([[c1], [c2]])
g1 = -(m1 + m2)*g*l1*np.sin(q1) - m2*g*l2*np.sin(q1 + q2)
g2 = -m2*g*l2*np.sin(q1 + q2)
g = np.array([[g1], [g2]])
Fhat = np.array([[u1], [u2]])
F = np.dot(B, Fhat)
qdotdot = np.dot(np.linalg.inv(B), -C - g + F)
qdot = [q3, q4] + qdotdot.T[0].tolist()
return qdot
q0 = [-np.pi/2, np.pi/2, 0., 0.]
ths = (np.pi/2, -np.pi/2)
Ts = 20
m1 = 1
m2 = 1
l1 = 1
l2 = 1
g = 9.81
params = (m1, m2, l1, l2, g)
Kp1 = 15
Kd1 = 7
Ki1 = 10
Kp2 = 15
Kd2 = 10
Ki2 = 10
def get_input(q, x_int):
q1, q2, q3, q4 = q
u1 = Kp1*(ths[0] - q1) - Kd1*q3 + Ki1*x_int[0]
u2 = Kp2*(ths[1] - q2) - Kd2*q4 + Ki2*x_int[1]
return (u1, u2)
x_int = [0., 0.]
u0 = get_input(q0, x_int)
f_old = np.array([ths[0]-q0[0], ths[1]-q0[1]])
f_int = np.array([0., 0.])
r = ode(eom_new).set_integrator('dopri5')
r.set_initial_value(q0, 0).set_f_params(u0, params)
dt = 0.1
sol = []
while r.successful() and r.t < Ts:
q = r.integrate(r.t + dt)
sol.append(q)
# Calculate integral of error numerically
f_new = np.array([ths[0]-q[0], ths[1]-q[1]])
f_int = f_int + (f_old + f_new) * dt / 2.
f_old = f_new
x_int = [f_int[0], f_int[1]]
u = get_input(q, x_int)
r.set_f_params(u, params)
sol = np.array(sol)
t = np.linspace(0, Ts, Ts*10)
q1d = np.ones(len(t))*ths[0]
q2d = np.ones(len(t))*ths[1]
plt.figure(figsize=(9,4))
plt.subplot(221)
plt.plot(t, sol[:, 0], 'b', label='Actual')
plt.plot(t, q1d, 'g--', label='Desired')
plt.legend(loc='best')
plt.xlabel('t')
plt.ylabel(r'$\theta_1$')
plt.grid()
plt.subplot(222)
plt.plot(t, q1d-sol[:, 0])
plt.xlabel('t')
plt.ylabel('error')
plt.grid()
plt.subplot(223)
plt.plot(t, sol[:, 1], 'b', label='Actual')
plt.plot(t, q2d, 'g--', label='Desired')
plt.legend(loc='best')
plt.xlabel('t')
plt.ylabel(r'$\theta_2$')
plt.grid()
plt.subplot(224)
plt.plot(t, q2d-sol[:, 1])
plt.xlabel('t')
plt.ylabel('error')
plt.grid()
"""
Explanation: Solve with calculating controls outside of EOM
End of explanation
"""
|
mlhy/ResNet-50-for-Cats.Vs.Dogs | Oxford-Pet/Preprocessing train dataset ox.ipynb | apache-2.0 | from sklearn.model_selection import train_test_split
import seaborn as sns
import os
import shutil
import pandas as pd
%matplotlib inline
df = pd.read_csv('list.txt', sep=' ')
df.ix[2000:2005]
"""
Explanation: Preprocessing train dataset
Divide the train folder into two folders mytrain_ox and myvalid_ox
End of explanation
"""
train_cat = df[df['SPECIES'] == 1]
train_dog = df[df['SPECIES'] == 2]
x = ['cat', 'dog']
y = [len(train_cat), len(train_dog)]
ax = sns.barplot(x=x, y=y)
"""
Explanation: Visualize the size of the original train dataset.
End of explanation
"""
mytrain, myvalid = train_test_split(df, test_size=0.1)
print len(mytrain), len(myvalid)
"""
Explanation: Shuffle and split the train filenames
End of explanation
"""
mytrain_cat = mytrain[mytrain['SPECIES'] == 1]
mytrain_dog = mytrain[mytrain['SPECIES'] == 2]
myvalid_cat = myvalid[myvalid['SPECIES'] == 1]
myvalid_dog = myvalid[myvalid['SPECIES'] == 2]
x = ['mytrain_cat', 'mytrain_dog', 'myvalid_cat', 'myvalid_dog']
y = [len(mytrain_cat), len(mytrain_dog), len(myvalid_cat), len(myvalid_dog)]
ax = sns.barplot(x=x, y=y)
"""
Explanation: Visualize the size of the processed train dataset
End of explanation
"""
def remove_and_create_class(dirname):
if os.path.exists(dirname):
shutil.rmtree(dirname)
os.mkdir(dirname)
os.mkdir(dirname+'/cat')
os.mkdir(dirname+'/dog')
remove_and_create_class('mytrain_ox')
remove_and_create_class('myvalid_ox')
for filename in mytrain_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/cat/'+filename+'.jpg')
for filename in mytrain_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'mytrain_ox/dog/'+filename+'.jpg')
for filename in myvalid_cat['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/cat/'+filename+'.jpg')
for filename in myvalid_dog['IMAGE']:
os.symlink('../../images/'+filename+'.jpg', 'myvalid_ox/dog/'+filename+'.jpg')
"""
Explanation: Create symbolic link of images
End of explanation
"""
|
jnarhan/Breast_Cancer | src/models/Youqing_SVM_Model2.ipynb | mit | import datetime
import gc
import numpy as np
import os
import random
from scipy import misc
import string
import time
import sys
import sklearn.metrics as skm
import collections
from sklearn.svm import SVC
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt
from sklearn import metrics
import dwdii_bc_model_helper as bc
random.seed(20275)
np.set_printoptions(precision=2)
"""
Explanation: 2 Classes (normal, abnormal) Prediction with SVM
A support vector classification machine with the RBF Kernel (C=1 and gamma=0.0001) was built here. And two sets of image data were tested with the model.
For Raw DDSM images, SVM model had an overall accuracy of 65.8%.
For Threshold images, SVM model had an overall accuracy of 62.1%.
End of explanation
"""
imagePath = "png"
trainImagePath = imagePath
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
categories = bc.bcNormVsAbnormNumerics()
imgResize = (150, 150)
normalVsAbnormal=True
os.listdir('data')
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True,
normalVsAbnormal=True)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, trainImagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
normalVsAbnormal=True)
print X_data.shape
print Y_data.shape
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
normalVsAbnormal=True)
print X_test.shape
print Y_test.shape
X_train = X_data
Y_train = Y_data
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
def yDist(y):
bcCounts = collections.defaultdict(int)
for a in range(0, y.shape[0]):
bcCounts[y[a][0]] += 1
return bcCounts
print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))
X_train_s = X_train.reshape((2893,-1))
X_test_s = X_test.reshape((726,-1))
Y_train_s = Y_train.ravel()
model = SVC(C=1.0, gamma=0.0001, kernel='rbf')
model.fit(X_train_s,Y_train_s)
predicted = model.predict(X_test_s)
expected = Y_test
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix
print metrics.accuracy_score(expected,predicted)
numBC = bc.reverseDict(categories)
class_names = numBC.values()
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names,
title='Confusion Matrix without normalization')
plt.savefig('raw_class2_o_norm.png')
from IPython.display import Image
Image(filename='raw_class2_o_norm.png')
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='Confusion Matrix with normalization')
plt.savefig('raw_class2_norm.png')
# Load the image we just saved
from IPython.display import Image
Image(filename='raw_class2_norm.png')
"""
Explanation: Raw DDSM images
End of explanation
"""
imagePath = "DDSM_threshold"
trainImagePath = imagePath
trainDataPath = "data/ddsm_train.csv"
testDataPath = "data/ddsm_test.csv"
categories = bc.bcNormVsAbnormNumerics()
imgResize = (150, 150)
normalVsAbnormal=True
os.listdir('data')
metaData, meta2, mCounts = bc.load_training_metadata(trainDataPath, balanceViaRemoval=True, verbose=True,
normalVsAbnormal=True)
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_data, Y_data = bc.load_data(trainDataPath, trainImagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
normalVsAbnormal=True)
print X_data.shape
print Y_data.shape
# Actually load some representative data for model experimentation
maxData = len(metaData)
X_test, Y_test = bc.load_data(testDataPath, imagePath,
categories=categories,
maxData = maxData,
verboseFreq = 50,
imgResize=imgResize,
normalVsAbnormal=True)
print X_test.shape
print Y_test.shape
X_train = X_data
Y_train = Y_data
print X_train.shape
print X_test.shape
print Y_train.shape
print Y_test.shape
def yDist(y):
bcCounts = collections.defaultdict(int)
for a in range(0, y.shape[0]):
bcCounts[y[a][0]] += 1
return bcCounts
print "Y_train Dist: " + str(yDist(Y_train))
print "Y_test Dist: " + str(yDist(Y_test))
X_train_s = X_train.reshape((2742,-1))
X_test_s = X_test.reshape((691,-1))
Y_train_s = Y_train.ravel()
model = SVC(C=1.0, gamma=0.0001, kernel='rbf')
model.fit(X_train_s,Y_train_s)
predicted = model.predict(X_test_s)
expected = Y_test
svm_matrix = skm.confusion_matrix(Y_test, predicted)
svm_matrix
print metrics.accuracy_score(expected,predicted)
numBC = bc.reverseDict(categories)
class_names = numBC.values()
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names,
title='Confusion Matrix without normalization')
plt.savefig('threshold_class2_o_norm.png')
from IPython.display import Image
Image(filename='threshold_class2_o_norm.png')
plt.figure()
bc.plot_confusion_matrix(svm_matrix, classes=class_names, normalize=True,
title='Confusion Matrix with normalization')
plt.savefig('threshold_class2_norm.png')
# Load the image we just saved
from IPython.display import Image
Image(filename='threshold_class2_norm.png')
"""
Explanation: Threshold Images
End of explanation
"""
|
FavioVazquez/Interact.jl | doc/notebooks/01-Introduction.ipynb | mit | using Reactive, Interact
"""
Explanation: Introduction to Interact.jl
End of explanation
"""
s = slider(0:.1:1,label="Slider X:")
signal(s)
"""
Explanation: Interact.jl provides interactive widgets for IJulia. Interaction relies on Reactive.jl reactive programming package. Reactive provides the type Signal which represent time-varying values. For example, a Slider widget can be turned into a "signal of numbers". Execute the following two cells, and then move the slider. You will see that the value of signal(slider) changes accordingly.
End of explanation
"""
display(typeof(s));
isa(s, Widget)
display(typeof(signal(s)));
isa(signal(s), Signal{Float64})
"""
Explanation: Let us now inspect the types of these entities.
End of explanation
"""
s
"""
Explanation: You can have many instances of the same widget in a notebook, and they stay in sync:
End of explanation
"""
xsquared = lift(x -> x*x, signal(s))
"""
Explanation: Using Widget Signals
A slider is useless if you cannot do more with it than just watch its value. Thankfully we can transform one signal into another, which means we can transform the signal of values that the slider takes into, say, a signal of its squares:
End of explanation
"""
using Color
lift(x -> RGB(x, 0.5, 0.5), signal(s))
"""
Explanation: Go ahead and vary the slider to see this in action.
You can transform a signal into pretty much anything else. Let's use the Color package to produce different saturations of red:
End of explanation
"""
r = slider(0:0.01:1, label="R")
g = slider(0:0.01:1, label="G")
b = slider(0:0.01:1, label="B")
map(display, [r,g,b]);
color = lift((x, y, z) -> RGB(x, y, z), r, g, b)
"""
Explanation: You can of course use several inputs as arguments to lift enabling you to combine many signals. Let's create a full color-picker.
End of explanation
"""
color = @lift RGB(r, g, b)
"""
Explanation: the @lift macro provides useful syntactic sugar to do this:
End of explanation
"""
@lift html(string("<div style='color:#", hex(color), "'>Hello, World!</div>"))
"""
Explanation: We can use the HTML widget to write some text you can change the color of.
End of explanation
"""
@manipulate for r = 0:.05:1, g = 0:.05:1, b = 0:.05:1
html(string("<div style='color:#", hex(RGB(r,g,b)), "'>Color me</div>"))
end
"""
Explanation: The @manipulate Macro
The @manipulate macro lets you play with any expression using widgets. We could have, for example, used @manipulate to make a color picker along with our HTML output in one line of code:
End of explanation
"""
x = slider(0:.1:2pi, label="x")
s = @lift slider(-1:.05:1, value=sin(2x), label="sin(2x)")
c = @lift slider(-1:.05:1, value=cos(2x), label="cos(2x)")
map(display, [x,s,c]);
"""
Explanation: Signal of Widgets
You can in fact create a signal of other widgets to update them reactively. We have seen one case with HTML above. Let us now create a signal of Slider:
End of explanation
"""
fx = Input(0.0) # A float input
x = slider(0:.1:2pi, label="x")
y = lift(v -> slider(-1:.05:1, value=sin(v), signal=fx, label="f(x)"), x)
map(display, (x,y));
"""
Explanation: Now vary the x slider to see sin(2x) and cos(2x) get set to their appropriate values.
But in the above case, you cannot also use the sin(2x) and cos(2x) sliders as input values. To do this, we will have to create a separate Input signal and pass it as an argument to lift. Unfortunately, we cannot use the @lift macro here because of ambiguity in parsing. Example:
End of explanation
"""
|
SimonBiggs/electronfactors | historical_exploration_and_measurement/measurements/101 Measurements 2015-03-26.ipynb | agpl-3.0 | zOnR50 = np.concatenate((np.array([0.02]), np.arange(0.05,1.25,0.05)))
R50of45 = np.array([0.997,1,1.004,1.008,1.012,1.017,1.021,1.026,1.03,
1.035,1.04,1.045,1.051,1.056,1.062,1.067,1.073,1.08,
1.086,1.092,1.099,1.106,1.113,1.120,1.128])
R50of50 = np.array([0.991,0.994,0.998,1.002,1.006,1.011,1.016,1.02,1.025,
1.03,1.035,1.041,1.046,1.052,1.058,1.064,1.07,1.076,
1.083,1.09,1.097,1.104,1.112,1.119,1.128])
R50of47_5 = np.mean([R50of45,R50of50], axis=0)
stopRatio = interp1d(zOnR50 * 47.5,R50of47_5)
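# Quick sanity check (added as an illustration; not part of the original measurements):
# the interpolator should evaluate cleanly inside its range, e.g. at 25 mm, the depth
# used for the reference reading further below.
print('Stopping power ratio at 25 mm: %0.4f' % float(stopRatio(25)))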
"""
Explanation: Details
Below are the recorded measurements for the first batch of cutout factor measurements.
Ionisation conversion
The following cell initialises the ionisation-to-dose conversion function. The data is extracted from Table 20 within TRS398. R50 of the 12 MeV beam is $4.75~g/cm^2$.
End of explanation
"""
def calc_display(**kwargs):
    # Calculate and display the cutout factor from ionisation readings.
    # Expects 'depth' (mm), 'ionisation' and 'reference' keyword arguments and
    # converts the readings to dose using the stopping power ratio defined above.
    depth = np.array(kwargs['depth'])
    ionisation = np.array(kwargs['ionisation'])
    reference = kwargs['reference']
if len(ionisation) == 1:
factor = (
reference / ionisation *
(stopRatio(25) / stopRatio(depth[0]))
)
else:
stop_ratio_corrected = stopRatio(depth) * ionisation
plt.scatter(depth,stop_ratio_corrected)
plt.ylabel('Stopping power ratio corrected')
plt.xlabel('Depth (mm)')
plt.title('Relative dose measurements')
plt.show()
index_of_max = np.argmax(stop_ratio_corrected)
cutout_reading = ionisation[index_of_max]
factor = (
(reference / cutout_reading) *
(stopRatio(25) / stopRatio(depth[index_of_max]))
)
print(
"Cutout factor = %0.3f | %0.1f%%" %
(factor, (factor - 1) * 100)
)
return factor
"""
Explanation: Measurements
These measurements were done on Harry 2694, with a Markus chamber set to +300 V. The sensitivity was $1.398 \times 10^9$. All measurements were done at 100 SSD with a 12 MeV beam and a $10\times10$ cm applicator. Below are the readings recorded in chronological order.
Readings
Output function definition
End of explanation
"""
data = dict()
# Standard insert
np.mean([1.033, 1.033])
def new_reading(**kwargs):
data = kwargs['data']
key = kwargs['key']
ionisation = kwargs['ionisation']
depth = kwargs['depth']
data[key]['depth'].append(depth)
data[key]['ionisation'].append(np.mean(ionisation))
return data
key = 'concave_cutout'
data[key] = dict()
data[key]['depth'] = []
data[key]['ionisation'] = []
data[key]['reference'] = 1.033
data = new_reading(
key=key, data=data,
ionisation=[1.007, 1.006, 1.006],
depth=25
)
data = new_reading(
key=key, data=data,
ionisation=[1.009, 1.009],
depth=24
)
data = new_reading(
key=key, data=data,
ionisation=[1.011, 1.011],
depth=23
)
data = new_reading(
key=key, data=data,
ionisation=[1.013],
depth=22
)
data = new_reading(
key=key, data=data,
ionisation=[1.013],
depth=21
)
data[key]['factor'] = calc_display(**data[key])
key = 'concave_ellipse'
data[key] = dict()
data[key]['depth'] = []
data[key]['ionisation'] = []
data[key]['reference'] = 1.033
data = new_reading(
key=key, data=data,
ionisation=[1.001, 1.001],
depth=25
)
data = new_reading(
key=key, data=data,
ionisation=[1.004, 1.004],
depth=24
)
data = new_reading(
key=key, data=data,
ionisation=[1.008, 1.007, 1.007],
depth=23
)
data = new_reading(
key=key, data=data,
ionisation=[1.009, 1.009],
depth=22
)
data = new_reading(
key=key, data=data,
ionisation=[1.009],
depth=21
)
data[key]['factor'] = calc_display(**data[key])
# Standard insert
np.mean([1.033, 1.033])
"""
Explanation: Cutout readings
End of explanation
"""
|
JuBra/cobrapy | documentation_builder/deletions.ipynb | lgpl-2.1 | import pandas
from time import time
import cobra.test
from cobra.flux_analysis import \
single_gene_deletion, single_reaction_deletion, \
double_gene_deletion, double_reaction_deletion
cobra_model = cobra.test.create_test_model("textbook")
ecoli_model = cobra.test.create_test_model("ecoli")
"""
Explanation: Simulating Deletions
End of explanation
"""
growth_rates, statuses = single_gene_deletion(cobra_model)
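# Illustrative follow-up (not part of the original notebook): count how many single
# knockouts are predicted to abolish growth. This assumes growth_rates maps gene
# identifiers to optimal growth values, as it is used for the DataFrame below.
essential = pandas.Series(growth_rates) < 1e-6
print("%d of %d single gene knockouts abolish growth" % (essential.sum(), len(essential)))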
"""
Explanation: Single Deletions
Perform all single gene deletions on a model
End of explanation
"""
gr, st = single_gene_deletion(cobra_model,
cobra_model.genes[:20])
pandas.DataFrame.from_dict({"growth_rates": gr,
"status": st})
"""
Explanation: These can also be done for only a subset of genes
End of explanation
"""
gr, st = single_reaction_deletion(cobra_model,
cobra_model.reactions[:20])
pandas.DataFrame.from_dict({"growth_rates": gr,
"status": st})
"""
Explanation: This can also be done for reactions
End of explanation
"""
double_gene_deletion(cobra_model, cobra_model.genes[-5:],
return_frame=True)
"""
Explanation: Double Deletions
Double deletions run in a similar way. Passing in return_frame=True will cause them to format the results as a pandas DataFrame.
End of explanation
"""
start = time() # start timer()
double_gene_deletion(ecoli_model, ecoli_model.genes[:200],
number_of_processes=2)
t1 = time() - start
print("Double gene deletions for 200 genes completed in "
"%.2f sec with 2 cores" % t1)
start = time() # start timer()
double_gene_deletion(ecoli_model, ecoli_model.genes[:200],
number_of_processes=1)
t2 = time() - start
print("Double gene deletions for 200 genes completed in "
"%.2f sec with 1 core" % t2)
print("Speedup of %.2fx" % (t2 / t1))
"""
Explanation: By default, the double deletion function will automatically use multiprocessing, splitting the task over up to 4 cores if they are available. The number of cores can also be specified manually. Restricting the run to a single core disables use of the multiprocessing library, which often aids debugging.
End of explanation
"""
double_reaction_deletion(cobra_model,
cobra_model.reactions[2:7],
return_frame=True)
"""
Explanation: Double deletions can also be run for reactions
End of explanation
"""
|
desihub/desisim | doc/nb/transient-models.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
from astropy import units as u
from desisim.transients import transients
"""
Explanation: Transient Models
Example of randomly grabbing transient models by type from the desisim.transients module.
Transient models can also be accessed by name, which is demonstrated below.
End of explanation
"""
def plot_transient(tr, ax):
wlmin = np.maximum(2500., tr.minwave().value)
wlmax = np.minimum(9500., tr.maxwave().value)
wl = np.arange(wlmin, wlmax, 1.) * u.Angstrom
t = 0*u.day
    ax.plot(wl, tr.flux(t, wl),
label='{} ({})'.format(tr.model, tr.type))
ax.set(xlabel=r'wavelength [$\AA$]',
ylabel=r'flux [arb. units]')
ax.legend(fontsize=10)
"""
Explanation: Plotting Script
Given a transient model and a matplotlib Axes object, plot the flux vs. wavelength at peak flux, always defined to be at time 0.
End of explanation
"""
print(transients)
fig, ax = plt.subplots(1,1, figsize=(6,4), tight_layout=True)
model = transients.get_model('hsiao')
print(model.mintime(), model.maxtime())
plot_transient(model, ax)
"""
Explanation: Available Models
List transient models. Then pick one and plot it.
End of explanation
"""
types = ['Ia', 'Ib', 'Ib/c', 'Ic', 'IIn', 'IIP', 'IIL', 'IIL/P', 'II-pec']
ntype = len(types)
ncol = 3
nrow = int(np.ceil(ntype / ncol))
fig, axes = plt.subplots(nrow, ncol, figsize=(4*ncol, 3*nrow), tight_layout=True)
axes = axes.flatten()
for t, ax in zip(types, axes):
s = transients.get_type(t)
print(vars(s))
plot_transient(s, ax)
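# Hypothetical illustration (not in the original notebook): asking for a type that is
# not registered raises an exception, as noted in the text. The exact exception class
# is not documented here, so catch broadly for the demonstration.
try:
    transients.get_type('not-a-real-type')
except Exception as err:
    print('Unknown transient type:', err)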
"""
Explanation: Model Random Access
You can grab a model by type. The desisim.transients module will randomly select one of the available models of that type. Types are just strings, and if you try one that is not available you'll just get an exception.
Below, loop over several types, randomly select a model from each, and plot the spectrum from t0 (max light) for that model.
End of explanation
"""
|
peakrisk/peakrisk | posts/weather-station.ipynb | gpl-3.0 | # Tell matplotlib to plot in line
%matplotlib inline
# import pandas
import pandas
# seaborn magically adds a layer of goodness on top of Matplotlib
# mostly this is just changing matplotlib defaults, but it does also
# provide some higher level plotting methods.
import seaborn
# Tell seaborn to set things up
seaborn.set()
# just check where I am
!pwd
infile = '../files/light.csv'
!scp 192.168.0.133:Adafruit_Python_BMP/light.csv .
!mv light.csv ../files
data = pandas.read_csv(infile, index_col='date', parse_dates=['date'])
data.describe()
# Lets look at the temperature data
data.temp.plot()
"""
Explanation: Work in Progress -- starting to add commentary and tidy up
I connected a BMP180 temperature and pressure centre to a
raspberry pi and have it running in my study.
I have been using this note book to look at the data as it is generated.
The code uses the Adafruit python library to extract data from the sensor.
I find plotting the data is a good way to take an initial look at it.
So, time for some pandas and matplotlib.
End of explanation
"""
data[:4500].plot(subplots=True)
"""
Explanation: Looks like we have some bad data here. For the first few days things look OK, though.
To start, let's look at the good bit of the data.
End of explanation
"""
data.temp.plot()
# All the good temperature readings appear to be in the 25C - 32C range,
# so lets filter out the rest.
data.temp[(data.temp < 50.0) & (data.temp > 15.0)].plot()
"""
Explanation: That looks good. So for the first 4500 samples the data looks clean.
The pressure and sealevel_pressure plots have the same shape.
The sealevel_pressure is just the pressure reading adjusted for altitude.
Actually, I am not telling the software what my altitude is.
It is a bit of a mystery what is causing the bad data after this.
One possibility is that a separate process I am running in a console (just so I can see the current figures) is also talking to the sensor.
I am running this with the Linux watch command. I used the default parameters, so it is polling every 2 seconds.
I am wondering if the sensor code, or the hardware itself, has some bugs that show up if the code polls the sensor whilst it is already being probed.
I am now (11am BDA time July 3rd) running the monitor script with watch -n 600 so it only polls every 10 minutes. We will see if that improves things.
So, let's see if we can filter out the bad data.
End of explanation
"""
def spot_outliers(series):
    """ Compares the change in value between consecutive samples to the standard deviation.
    If the change is bigger than that, assume it is an outlier.
    Note that there will be two bad deltas, since the sample after the
    bad one will be bad too.
    """
    delta = series - series.shift()
    return delta.abs() > series.std()  # use the std of whatever was passed in
outliers = spot_outliers(data)
# Plot temperature
data[~outliers].temp.plot()
data[~outliers].altitude.plot()
data[~outliers].plot(subplots=True)
data[~outliers].sealevel_pressure.plot()
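# Optional aside (not from the original post): with the outliers masked out, hourly
# means give a smoother picture. Assumes a recent pandas and that the 'date' index
# parsed above is a DatetimeIndex.
data[~outliers].temp.resample('H').mean().plot()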
def smooth(data, thresh=None):
means = data.mean()
if thresh is None:
sds = data.std()
else:
sds = thresh
delta = data - data.shift()
good = delta[abs(delta) < sds]
print(good.describe())
    return delta.where(good.notnull(), 0.0)  # zero out the large jumps
smooth(data).temp.cumsum().plot()
smooth(data).describe()
start = data[['temp', 'altitude']].irow(0)
(smooth(data, 5.0).cumsum()[['temp', 'altitude']] + start).plot(subplots=True)
"""
Explanation: That looks good. You can see 8 days of temperatures rising through the day and then falling at night. There is only a couple of degrees' difference here in Bermuda at present.
On the third day, where there is a dip in temperature, I believe there was a thunderstorm or two which cooled things off temporarily.
I really need to get a humidity sensor working to go with this.
Now let's see if we can spot the outliers and filter them out.
End of explanation
"""
|
mitchshack/data_analysis_with_python_and_pandas | 4 - pandas Basics/4-3 pandas Series NaNs, Reindexing, filling, mutating and copies, basic mapping.ipynb | apache-2.0 | %matplotlib inline
import sys
print(sys.version)
import numpy as np
print(np.__version__)
import pandas as pd
print(pd.__version__)
import matplotlib.pyplot as plt
"""
Explanation: pandas Series Reindexing, filling, mutating, copying, and maps
End of explanation
"""
np_array = np.array([1,2,3,np.nan])
np_array
np_array.mean()
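# Aside (not in the original): numpy can also skip NaNs, but only if you reach for the
# nan-aware functions explicitly.
np.nanmean(np_array)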
pd_series = pd.Series([1,2,3,np.nan])
pd_series
"""
Explanation: Now NaN values are treated differently in numpy than in pandas. In numpy, as we saw earlier, if you've got an array with a NaN value, things like summary statistics are calculated as NaN.
End of explanation
"""
pd_series.mean()
np.random.seed(567)
"""
Explanation: A pandas Series treats them differently: it just ignores the empty value. We'll cover filling in those empty values at a later time.
End of explanation
"""
s1 = pd.Series(np.random.randn(5))
s1
s2 = pd.Series(np.random.randn(5))
s2
"""
Explanation: Sometimes you're going to have to make some new indexes. For example we've got two Series.
End of explanation
"""
combo = pd.concat([s1, s2])
combo
"""
Explanation: Now at times you’re going to want to reindex a Series. What does this mean? Basically that you want to destroy the index you have currently and reset it. Let’s walk through a practical example.
End of explanation
"""
combo[0]
combo.index = range(combo.count())
combo
"""
Explanation: When we concatenate them, we can see we've got repeated index values. We can still query by these index values just like we normally would, but in all likelihood we'll want to replace the index with a new one.
End of explanation
"""
new_combo = combo.reindex([0,2,15,21])
new_combo
"""
Explanation: However, this is rather limited in what you can achieve. It just overwrites the index we have now. What happens if we're looking to fill in missing data with NaN values? We have to use reindex, which will return a new Series.
End of explanation
"""
combo.reindex([0,2,15,21], fill_value=0)
new_combo
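# Aside (not in the original): the method parameter mentioned below can do the filling
# during the reindex itself, here by carrying the last known value forward.
combo.reindex([0, 2, 15, 21], method='ffill')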
"""
Explanation: We can specify how to handle NaN values with fill_value, or we can specify a method by which they should be filled. This can be performed during the reindexing using the method parameter (like we did with fill_value), or we can do it after the fact.
End of explanation
"""
new_combo.ffill()
"""
Explanation: Here’s an example of fill which is forward fill
End of explanation
"""
new_combo.bfill()
new_combo[21] = 5
new_combo
new_combo.bfill()
new_combo
"""
Explanation: and bfill, or backward fill:
End of explanation
"""
new_combo.fillna(12)
"""
Explanation: Fillna just fills the blanks with whatever value you specify.
End of explanation
"""
s1
s2
s1 + s2
"""
Explanation: Now, lastly, I want to cover how we can merge different Series on certain values and perform simple arithmetic operations.
When s1 and s2 have the same index it's easy to, say, add them together and get what we expect.
End of explanation
"""
s2.index = list(range(3,8))
s2
s1 + s2
s1.reindex(range(10),fill_value=0) + s2.reindex(range(10),fill_value=0)
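# Aside (not in the original): pandas can do this alignment-with-a-default in one step
# via the add method's fill_value argument.
s1.add(s2, fill_value=0)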
s2.index = range(5)
s1 = pd.Series(range(1,4), index= ['a','a','c'])
s1
s2 = pd.Series(range(1,4), index=['a','a','b'])
s2
"""
Explanation: However, things get more complicated when they have different indices. Now when we try to add them, it only does so on the overlapping index labels. Oftentimes this may be what we want when we're analyzing data, but other times it's not. In order to handle that, we've got to do some reindexing and use fill values.
End of explanation
"""
s1 * s2
"""
Explanation: Finally, when we have multiple identical labels in an index and we try to bring these Series together with some sort of operation, we're going to get multiple results. For example, multiplying them is equivalent to performing a cartesian product of the two Series on those specific labels, in this example 'a'.
End of explanation
"""
s1 + s2
s1
"""
Explanation: Adding them together means each value with a given label in one Series is added to each value with that label in the other.
End of explanation
"""
s1_copy = s1.copy()
s1_copy['a'] = 3
s1_copy
s1
"""
Explanation: Lastly, sometimes you're going to want to experiment with modifications to a Series or DataFrame. That can be done with the copy method, which returns a copy of the data and makes it easy to experiment safely.
End of explanation
"""
s1.map(lambda x: x ** 2)
"""
Explanation: There are a couple more methods I want to touch on, most specifically map.
End of explanation
"""
s1.map({1:2,2:3,3:12})
"""
Explanation: Maps are going to feel familiar from our raw Python section, except we can do something a bit more special with the pandas Series version. We can map it to a dictionary as well. This will perform a lookup in the dictionary and return whatever is there.
End of explanation
"""
s1.map({2:3,3:12})
"""
Explanation: If it doesn't find the value there, it will return NaN
End of explanation
"""
|
quanhua92/learning-notes | libs/pytorch/01_introduction/train neural networks with backpropagation.ipynb | apache-2.0 | # Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
# Download and load the training data
trainset = datasets.MNIST("MNIST_data/", download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
"""
Explanation: Prepare data
End of explanation
"""
# Define a feed-forward network
model = nn.Sequential(
nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10) # logits, instead of output of the softmax
)
# Define the loss
criterion = nn.CrossEntropyLoss()
# Get data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# forward pass to get the logits
logits = model(images)
# pass the logits to criterion to get the loss
loss = criterion(logits, labels)
print(loss)
"""
Explanation: Define network
End of explanation
"""
# Define a feed-forward network
model = nn.Sequential(
nn.Linear(784, 128),
nn.ReLU(),
nn.Linear(128, 64),
nn.ReLU(),
nn.Linear(64, 10),
nn.LogSoftmax(dim=1) # dim=1 to calc softmax across columns instead of rows
)
# Define the loss
criterion = nn.NLLLoss()
# Get data
images, labels = next(iter(trainloader))
# Flatten images
images = images.view(images.shape[0], -1)
# forward pass to get the log-probabilities
log_probs = model(images)
# pass the log-probabilities to criterion to get the loss
loss = criterion(log_probs, labels)
print(loss)
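# Quick check (added illustration, not in the original notebook): exponentiating the
# log-probabilities recovers proper probabilities -- each row sums to 1.
probs = torch.exp(log_probs)
print(probs.sum(dim=1)[:5])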
"""
Explanation: Use nn.LogSoftmax and nn.NLLLoss
It is more convenient to build the model with a log-softmax output instead of raw logits from the final Linear layer, and to use the negative log-likelihood loss.
Then, you can get the actual probabilities by taking torch.exp(output) instead of applying a softmax function.
End of explanation
"""
x = torch.randn(2, 2, requires_grad=True)
print(x)
y = x ** 2
print(y)
print(y.grad_fn)
z = y.mean()
print(z)
print(x.grad)
"""
Explanation: Use Autograd to perform backpropagation
End of explanation
"""
z.backward()
print("grad: ", x.grad)
print("x:", x)
print("x/2: ", x / 2)  # mathematically, the gradient equals x / 2
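# Extra illustration (not in the original notebook): gradients accumulate in .grad on
# every backward() call, which is why training loops call optimizer.zero_grad().
w = torch.ones(1, requires_grad=True)
(3 * w).sum().backward()
print(w.grad)  # tensor([3.])
(3 * w).sum().backward()
print(w.grad)  # tensor([6.]) -- the second call added to the first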
"""
Explanation: calculate the gradients
End of explanation
"""
print("Before backward pass: \n", model[0].weight.grad)
loss.backward()
print("After backward pass: \n", model[0].weight.grad)
"""
Explanation: Try to perform backward pass and get the gradients
End of explanation
"""
from torch import optim
# Optimizers require the parameters to optimize and the learning rate
optimizer = optim.SGD(model.parameters(), lr=0.01)
"""
Explanation: Training the network
Use an optimizer from the PyTorch optim package to update the weights with the gradients. For example, stochastic gradient descent is optim.SGD.
End of explanation
"""
print("Initial weights: \n", model[0].weight)
images, labels = next(iter(trainloader))
images = images.view(64, 784)
# !Important: Clear the gradients; otherwise, the gradients will be accumulated
optimizer.zero_grad()
# Forward pass
output = model.forward(images)
loss = criterion(output, labels)
# Backward pass
loss.backward()
print("Gradient: \n", model[0].weight.grad)
# Take an update step with the optimizer
optimizer.step()
print("Updated weights: \n", model[0].weight)
"""
Explanation: Try the optimizer to update weights
The general process with PyTorch:
- Make a forward pass
- Calculate the loss
- Perform a backward pass with loss.backward()
- Take a step with the optimizer to update the weights
End of explanation
"""
epochs = 5
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# Flatten
images = images.view(images.shape[0], -1)
# !Important: Clear the gradients; otherwise, the gradients will be accumulated
optimizer.zero_grad()
# Forward pass
output = model.forward(images)
loss = criterion(output, labels)
# Backward pass
loss.backward()
# Take an update step with the optimizer
optimizer.step()
running_loss += loss.item()
else:
print(f"Training loss: {running_loss/len(trainloader)}")
"""
Explanation: Actually training
End of explanation
"""
images, labels = next(iter(trainloader))
img = images[0].view(1, -1)
# turn off gradients to speed up
with torch.no_grad():
probs = model.forward(img)
output = torch.exp(probs)
plt.imshow(img.view(1, 28, 28).squeeze(), cmap='gray')
print(output.numpy())
print("predict:", np.argmax(output))
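# Optional extra (not in the original): torch.topk returns the k most likely digits
# together with their probabilities, which is handy for inspecting near-misses.
top_p, top_class = torch.topk(output, 3, dim=1)
print("top-3 classes:", top_class.numpy(), "probs:", top_p.numpy())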
"""
Explanation: Get the predictions
End of explanation
"""
|
YihaoLu/pyfolio | pyfolio/examples/bayesian.ipynb | apache-2.0 | %matplotlib inline
import pyfolio as pf
"""
Explanation: Bayesian performance analysis example in pyfolio
There are also a few more advanced (and still experimental) analysis methods in pyfolio based on Bayesian statistics.
The main benefit of these methods is uncertainty quantification. All the values you saw above, like the Sharpe ratio, are just single numbers. These estimates are noisy because they have been computed over a limited number of data points. So how much can you trust these numbers? You don't know, because there is no sense of uncertainty. That is where Bayesian statistics helps: instead of single values, we are dealing with probability distributions that assign degrees of belief to all possible parameter values.
Let's create the Bayesian tear sheet. Under the hood this runs MCMC sampling in PyMC3 to estimate the posteriors, which can take quite a while (that's the reason why we don't generate this by default in create_full_tear_sheet()).
Import pyfolio
End of explanation
"""
stock_rets = pf.utils.get_symbol_rets('FB')
"""
Explanation: Fetch the daily returns for a stock
End of explanation
"""
out_of_sample = stock_rets.index[-40]
pf.create_bayesian_tear_sheet(stock_rets, live_start_date=out_of_sample, stoch_vol=True)
"""
Explanation: Create Bayesian tear sheet
End of explanation
"""
help(pf.bayesian.run_model)
"""
Explanation: Let's go through these row by row:
The first one is the Bayesian cone plot that is the result of a summer internship project of Sepideh Sadeghi here at Quantopian. It's similar to the cone plot you already saw in the tear sheet above but has two critical additions: (i) it takes uncertainty into account (i.e. a short backtest length will result in a wider cone), and (ii) it does not assume normality of returns but instead uses a Student-T distribution with heavier tails.
The next row compares mean returns of the in-sample (backtest) and out-of-sample or OOS (forward) period. As you can see, mean returns are not a single number but a (posterior) distribution that gives us an indication of how certain we can be in our estimates. The green distribution on the left side is much wider, representing our increased uncertainty due to having less OOS data. We can then calculate the difference between these two distributions as shown on the right side. The grey lines denote the 2.5% and 97.5% percentiles. Intuitively, if the right grey line is lower than 0 you can say that with probability > 97.5% the OOS mean returns are below what is suggested by the backtest. The model used here is called BEST and was developed by John Kruschke.
The next couple of rows follow the same pattern but are an estimate of annual volatility, Sharpe ratio and their respective differences.
The 5th row shows the effect size, or the difference of means normalized by the standard deviation, and gives you a general sense of how far apart the two distributions are. Intuitively, even if the means are significantly different, it may not be very meaningful if the standard deviation is huge, amounting to only a tiny difference between the two return distributions.
The 6th row shows predicted returns (based on the backtest) for tomorrow, and 5 days from now. The blue line indicates the probability of losing more than 5% of your portfolio value and can be interpreted as a Bayesian VaR estimate.
The 7th row shows a Bayesian estimate of annual alpha and beta. In addition to uncertainty estimates, this model, like all above ones, assumes returns to be T-distributed which leads to more robust estimates than a standard linear regression would.
The 8th row shows Bayesian estimates for log(sigma) and log(nu), two parameters of the stochastic volatility model.
The last row shows the volatility measured by the stochastic volatility model, overlaid on the absolute value of the returns.
Note that the last two rows are only shown when stoch_vol=True. By default, stoch_vol=False because running the stochastic volatility model is computationally expensive.
Finally, only the most recent 400 days of returns are used when computing the stochastic volatility model. This is to minimize computational time.
Running models directly
You can also run individual models. All models can be found in pyfolio.bayesian and run via the run_model() function.
End of explanation
"""
# Run model that assumes returns to be T-distributed
trace = pf.bayesian.run_model('t', stock_rets)
"""
Explanation: For example, to run a model that assumes returns to be Student-T distributed, you can call:
End of explanation
"""
# Check what frequency of samples from the sharpe posterior are above 0.
print('Probability of Sharpe ratio > 0 = {:3}%'.format((trace['sharpe'] > 0).mean() * 100))
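# A further illustration (not in the original notebook): the same posterior samples give
# a 95% credible interval for the Sharpe ratio. numpy is imported here just for percentile.
import numpy as np
sharpe_low, sharpe_high = np.percentile(trace['sharpe'], [2.5, 97.5])
print('95% credible interval for Sharpe ratio: [{:.2f}, {:.2f}]'.format(sharpe_low, sharpe_high))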
"""
Explanation: The returned trace object can be queried directly. For example, we might ask what the probability of the Sharpe ratio being larger than 0 is by checking what percentage of posterior samples of the Sharpe ratio are > 0:
End of explanation
"""
import pymc3 as pm
pm.traceplot(trace);
"""
Explanation: But we can also interact with it like with any other pymc3 trace:
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/hammoz-consortium/cmip6/models/mpiesm-1-2-ham/toplevel.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'mpiesm-1-2-ham', 'toplevel')
"""
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: MPIESM-1-2-HAM
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat convervation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
"""
|
nohmapp/acme-for-now | essential_algorithms/Sorting and Searching.ipynb | mit | def sort(array):
if len(array) <= 1:
return array
else:
pivot = len(array) // 2
arr1 = sort(array[:pivot])
arr2 = sort(array[pivot:])
return merge(arr1, arr2)
def merge(left, right):
l_index, r_index = 0, 0
result = []
while l_index < len(left) and r_index < len(right):
if left[l_index] < right[r_index]:
result.append(left[l_index])
l_index += 1
else:
result.append(right[r_index])
r_index += 1
    # append whatever remains of the list that was not exhausted
    result += left[l_index:]
    result += right[r_index:]
return result
test1 = [54,26,93,17,77,31,44,55,20]
test2 = []
test3 = [89,34, 23, 342,234,67, 1, 4]
print(sort(test1))
print(sort(test2))
print(sort(test3))
"""
Explanation: Merge Sort
Worst Case: O(n log n)
Best Case: O(n log n)
A divide-and-conquer algorithm invented by John von Neumann. It works very well on linked lists, but is clumsier with arrays because the merge step needs O(n) extra memory.
End of explanation
"""
def quickSort(alist, left=0, right=None):
    # compute the default for right at call time; evaluating len(alist) in the
    # def line would raise a NameError because alist does not exist yet
    if right is None:
        right = len(alist) - 1
    if left < right:
        split = partitionHoare(alist, left, right)
        quickSort(alist, left, split)
        quickSort(alist, split + 1, right)
    return alist

def swap(alist, i, j):
    alist[i], alist[j] = alist[j], alist[i]
    return alist

def partitionLomuto(alist, left, right):
    # Lomuto scheme: the rightmost element is the pivot; with this scheme the
    # recursion would be quickSort(alist, left, p - 1) and quickSort(alist, p + 1, right)
    pivotvalue = alist[right]
    partition = left
    for idx in range(left, right):
        if alist[idx] <= pivotvalue:
            swap(alist, partition, idx)
            partition += 1
    swap(alist, partition, right)   # move the pivot into its final position
    return partition

def partitionHoare(alist, left, right):
    # Hoare scheme: take the pivot *value* from the middle element and move
    # two pointers towards each other, swapping out-of-place pairs
    pivotvalue = alist[(left + right) // 2]
    left -= 1
    right += 1
    while True:
        left += 1
        while alist[left] < pivotvalue:
            left += 1
        right -= 1
        while alist[right] > pivotvalue:
            right -= 1
        if left >= right:
            return right
        swap(alist, left, right)
test1 = [54,26,93,17,77,31,44,55,20]
test2 = []
test3 = [89,34, 23, 342,234,67, 1, 4]
print(quickSort(test1))
#print(quickSort(test2))
print(quickSort(test3))
def quickSort(alist):
quickSortHelper(alist,0,len(alist)-1)
def quickSortHelper(alist,first,last):
if first<last:
splitpoint = partition(alist,first,last)
quickSortHelper(alist,first,splitpoint-1)
quickSortHelper(alist,splitpoint+1,last)
def partition(alist,first,last):
pivotvalue = alist[first]
leftmark = first+1
rightmark = last
done = False
while not done:
while leftmark <= rightmark and alist[leftmark] <= pivotvalue:
leftmark = leftmark + 1
while alist[rightmark] >= pivotvalue and rightmark >= leftmark:
rightmark = rightmark -1
if rightmark < leftmark:
done = True
else:
temp = alist[leftmark]
alist[leftmark] = alist[rightmark]
alist[rightmark] = temp
temp = alist[first]
alist[first] = alist[rightmark]
alist[rightmark] = temp
return rightmark
alist = [54,26,93,17,77,31,44,55,20]
quickSort(alist)
print(alist)
'''
Binary Search
Runtime Complexity - O(log n) (the input list must be sorted)
Memory Complexity - O(1)
'''
def binary_search(a, key):
low = 0
high = len(a) -1
while low <= high:
mid = low + (high -low)//2
        if a[mid] == key:
            return mid
        elif a[mid] > key:
            high = mid - 1
        else:
            low = mid + 1
    return -1
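# A quick sanity check for binary_search; the sorted list below is our own
# example data, not from the original notebook.
sorted_list = [1, 4, 23, 34, 67, 89, 234, 342]
print(binary_search(sorted_list, 67))   # expected index: 4
print(binary_search(sorted_list, 5))    # expected: -1 (key not present)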
"""
Explanation: Quicksort
Worst Case: O(n^2) when a bad pivot is chosen repeatedly
Average Performance: O(n log n)
Choosing the pivot is the most important decision; a consistently bad pivot can be disastrous. Picking a random element is a safe default, and the median-of-three rule is another option, although it only pays off on very large lists. Quicksort is a great in-place sort.
There are two common partition schemes, Hoare's and Lomuto's. Hoare's is the more efficient scheme because it does about three times fewer swaps on average and handles duplicate values particularly well, but Lomuto's is easier to implement.
End of explanation
"""
|
ngautam0/keras-pro-bar | examples/keras_progress_bars.ipynb | mit | from mnist_model import mnist_model
from keras_tqdm import TQDMCallback, TQDMNotebookCallback
"""
Explanation: Keras Progress Bars
The following examples show two different keras_tqdm progress bars.
* TQDM notebook widget
* TQDM console output
To use keras_tqdm progress bars in your own code, just add TQDMCallback or TQDMNotebookCallback to the callbacks passed to model.fit. Set verbose=0 to suppress the built-in progress bars.
MNIST Example
In this experiment, we train a Keras model on the MNIST dataset.
End of explanation
"""
mnist_model(0, [TQDMNotebookCallback()])
"""
Explanation: TQDM Progress Bar (ipywidget)
End of explanation
"""
mnist_model(0, [TQDMCallback()])
"""
Explanation: TQDM Progress Bar (text)
End of explanation
"""
|
ShibataLabPrivate/GPyWorkshop | Experiments/Notebook1.ipynb | mit | # import python modules
import GPy
import numpy as np
from matplotlib import pyplot as plt
# call matplotlib with the inline command to make plots appear within the browser
%matplotlib inline
"""
Explanation: Lab session 1: Gaussian Process models with GPy
Source: Gaussian Process Summer School 2015
The aim of this lab session is to get us started with the GPy library. The current draft of the online documentation of GPy is available from this page. We will focus on three aspects of GPs: the kernel, the random sample paths and the GP regression model.
Requirements:
* GPy: Installation instructions available on the homepage.
* Scipy Stack: This includes numpy, matplotlib and Ipython. Installation can be done using pip:
(sudo) pip install numpy --upgrade
(sudo) pip install jupyter --upgrade
(sudo) pip install matplotlib --upgrade
(sudo) pip install ipython[all] --upgrade
The sudo prefix is only needed for a system-wide installation on Linux. It should not be used with Anaconda or on Windows.
Anaconda: Necessary for Windows, optional for Linux.
For Windows Operating System:
* Install Anaconda by downloading from this link.
* Install GPy by opening the Command Prompt window and typing the following command:
pip install GPy
* All the dependencies required for running the code are either available in Anaconda or installed using GPy.
End of explanation
"""
# The documentation to use the RBF function. There are several advanced options such as useGPU which are
# important for practical applications. The "?" symbol can be used with any function or class to view its
# documentation
GPy.kern.RBF?
# input dimension
d = 1
# variance
var = 1.
# lengthscale
length = 0.2
# define the kernel
k = GPy.kern.RBF(d, variance=var, lengthscale=length)
"""
Explanation: 1 Covariance Functions
GPy supports a wide range of covariance functions, each suited to particular applications. In this section, we work with the covariance functions in GPy. Let's start by defining a squared exponential (RBF) covariance function in one dimension:
$$
k(\mathbf{x},\mathbf{x'}) = \sigma^2 \exp \left( - \frac{\|\mathbf{x} - \mathbf{x'}\|^2}{2l^2} \right)
$$
End of explanation
"""
# view the parameters of the covariance function
print k
"""
Explanation: A summary of the kernel can be obtained using the command print k.
End of explanation
"""
# plot the covariance function
k.plot()
"""
Explanation: It is also possible to plot the kernel as a function of one of its inputs (whilst fixing the other) with k.plot(). Use "?" to view the properties of the plot function of the kern instance.
End of explanation
"""
# by default, all the parameters are set to 1. for the RBF kernel
k = GPy.kern.RBF(d)
# we experiment with different length scale parameter values here
theta = np.asarray([0.2,0.5,1.,2.,4.])
# create an instance of a figure
fig = plt.figure()
ax = plt.subplot(111)
# iterate over the lengthscales
for t in theta:
k.lengthscale=t
# plot in the same figure with a different color
k.plot(ax=ax, color=np.random.rand(3,), plot_limits=[-10.0,10.0])
plt.legend(theta)
"""
Explanation: Setting Covariance Function Parameters
The value of the covariance function parameters can be accessed and modified using k.* where the * can refer to the parameter name as it appears in print k. We'll now set the lengthscale of the covariance to different values, and then plot the resulting covariance using the k.plot() method.
End of explanation
"""
# by default, all the parameters are set to 1. for the RBF kernel
k = GPy.kern.RBF(d)
# we experiment with different length scale parameter values here
var = np.asarray([0.2,0.5,1.,2.,4.])
# create an instance of a figure
fig = plt.figure()
ax = plt.subplot(111)
# iterate over the lengthscales
for v in var:
k.variance = v
# plot in the same figure with a different color
k.plot(ax=ax, color=np.random.rand(3,))
plt.legend(var)
"""
Explanation: Exercise 1
a) What is the effect of the lengthscale parameter on the covariance function?
b) Now change the code used above for plotting the covariances associated with the lengthscale to see the influence of the variance parameter. What is the effect of the variance parameter on the covariance function?
End of explanation
"""
# look for the kernel documentation
GPy.kern.Matern32?
# input dim
d = 1
# create the Matern32 kernel
k = GPy.kern.Matern32(d)
# view the kernel
print k
k.plot()
"""
Explanation: c) Instead of rbf, try constructing and plotting the following covariance functions: exponential, Matern32, Matern52, Brownian, linear, bias, rbfcos, periodic_Matern32, etc. Use the tab key to look for the functions in the GPy.kern submodule or search the GPy documentation.
End of explanation
"""
# input data: 50x2 matrix of iid uniform samples on [0,1)
X = np.random.rand(50,2)
# create the matern52 kernel
k = GPy.kern.Matern52(input_dim=2)
# compute the kernel matrix
C = k.K(X,X)
# computes eigenvalues of matrix
eigvals = np.linalg.eigvals(C)
# plot the eigen values
plt.bar(np.arange(len(eigvals)), eigvals)
plt.title('Eigenvalues of the Matern 5/2 Covariance')
"""
Explanation: Computing the kernel matrix given input data, $\mathbf{X}$
Let $\mathbf{X}$ be an $n \times d$ numpy array. Given a kernel $k$, the covariance matrix associated with
$\mathbf{X}$ is obtained with C = k.K(X,X). The positive semi-definiteness of $k$ ensures that C
is a positive semi-definite (psd) matrix regardless of the initial points $\mathbf{X}$. This can be
checked numerically by looking at the eigenvalues:
End of explanation
"""
# define rbf and matern52 kernels
kern1 = GPy.kern.RBF(1, variance=1., lengthscale=2.)
kern2 = GPy.kern.Matern52(1, variance=2., lengthscale=4.)
# combine both kernels
kern = kern1 + kern2
print kern
kern.plot(plot_limits=[-7,9])
"""
Explanation: Combining Covariance Functions
In GPy you can easily combine covariance functions you have created using the sum and product operators, + and *. For example, we can combine an exponentiated quadratic covariance with a Matern 5/2 as follows:
End of explanation
"""
kern = kern1*kern2
print kern
kern.plot(plot_limits=[-6,8])
"""
Explanation: It is also possible to multiply two kernel functions:
End of explanation
"""
# define RBF kernel
k = GPy.kern.RBF(input_dim=1,lengthscale=0.2)
# define X to be 500 points evenly spaced over [0,1]
X = np.linspace(0.,1.,500)
# make the numpy array to 2D array
X = X[:,None]
# set mean function i.e. 0 everywhere
mu = np.zeros((500))
# compute covariance matrix associated with inputs X
C = k.K(X,X)
# Generate 20 separate samples paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,20)
# open a new plotting window
fig = plt.figure()
for i in range(20):
plt.plot(X[:],Z[i,:])
"""
Explanation: 2 Sampling from a Gaussian Process
A Gaussian process provides a prior over functions and is defined by a mean function and a covariance function. Calling kern.K(X, X) computes the covariance matrix between the function values that correspond to the input locations in the matrix X. Using this matrix we can draw sample paths from the Gaussian process:
$$
\mathbf{f} \sim \mathcal{N}(\mathbf{0},C)
$$
End of explanation
"""
plt.matshow(C)
"""
Explanation: We can see the structure of the covariance matrix we are sampling from by visualizing C.
End of explanation
"""
# Define input points and mean function
X = np.atleast_2d(np.linspace(0.,1.,500)).T
mu = np.zeros((500))
# sample paths for RBF kernel with different parameters
k = GPy.kern.RBF(input_dim=1,lengthscale=0.05)
C = k.K(X,X)
# Generate 20 separate samples paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,10)
# open a new plotting window
fig = plt.figure()
for i in range(10):
plt.plot(X[:],Z[i,:])
# sample paths for Matern32 kernel with different parameters
k = GPy.kern.PeriodicMatern52(input_dim=1, lengthscale=0.01, period=1)
C = k.K(X,X)
# Generate 20 separate samples paths from a Gaussian with mean mu and covariance C
Z = np.random.multivariate_normal(mu,C,10)
# open a new plotting window
fig = plt.figure()
for i in range(10):
plt.plot(X[:],Z[i,:])
"""
Explanation: Exercise 2
a) Try a range of different covariance functions and parameter values, and plot the corresponding sample paths for each using the same approach as above.
End of explanation
"""
# input data points
X = np.linspace(0.05,0.95,10)[:,None]
# generate observations through function f
Y = -np.cos(np.pi*X) + np.sin(4*np.pi*X) + np.random.normal(loc=0.0, scale=0.1, size=(10,1))
# plot the generated data
plt.figure()
plt.plot(X,Y,'kx',mew=1.5)
"""
Explanation: b) Can you tell the covariance structures that have been used for generating the
sample paths shown in the figure below?
<br>
<center>
<img src="http://ml.dcs.shef.ac.uk/gpss/gpws14/figa.png" alt="Figure a" style="width: 30%;">
<img src="http://ml.dcs.shef.ac.uk/gpss/gpws14/figb.png" alt="Figure b" style="width: 30%;">
<img src="http://ml.dcs.shef.ac.uk/gpss/gpws14/figd.png" alt="Figure d" style="width: 30%;">
</center>
3 Gaussian Process Regression Model
We will combine the Gaussian process prior with data to form a GP regression model with GPy. We will generate data from the function $f ( x ) = − \cos(\pi x ) + \sin(4\pi x )$ over $[0, 1]$, adding some noise to give $y(x) = f(x) + \epsilon$, with the noise being Gaussian distributed, $\epsilon \sim \mathcal{N}(0, 0.01)$.
End of explanation
"""
# create instance of kernel
k = GPy.kern.RBF(input_dim=1, variance=1., lengthscale=1.)
# create instance of GP regression model
m = GPy.models.GPRegression(X,Y,k)
# view model parameters
print m
# visualize posterior mean and variances
m.plot()
"""
Explanation: A GP regression model is built by first specifying the covariance function for the analysis. An instance of the model is then created with a default set of parameters. It is then possible to view the parameters using print m and to visualize the posterior mean prediction and variances using m.plot():
End of explanation
"""
# obtain 5 test points
Xstar = np.linspace(0.01,0.99,5)[:,None]
# predict the output for the test points
Ystar, Vstar = m.predict(Xstar)
# print results
print Ystar
"""
Explanation: The actual predictions of the model for a set of points Xstar can be computed using m.predict(Xstar):
End of explanation
"""
# set the noise parameter of the model using desired SNR
SNR = 10.0
m.Gaussian_noise.variance = m.rbf.variance/SNR
# check the model parameters and plot the model
print m
m.plot()
"""
Explanation: Exercise 3
a) What do you think about this first fit? Does the prior given by the GP seem well suited to the data?
b) The parameters of the model can be modified using a regular expression matching the parameter names (for example m['noise'] = 0.001). Change the values of the parameters to obtain a better fit.
End of explanation
"""
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
"""
Explanation: c) Random sample paths from the conditional GP can be obtained using np.random.multivariate_normal(mu[:,0],C), where the mean vector and covariance matrix mu, C are obtained through the predict function mu, C = m.predict(Xp,full_cov=True). Obtain 10 samples from the posterior and plot them alongside the data below.
End of explanation
"""
m.constrain_positive()
"""
Explanation: Covariance Function Parameter Estimation
The parameter values can be estimated by maximizing the likelihood of the observations. Since we don't want any of the variances to become negative during the optimization, we constrain all parameters to be positive before running the optimization.
End of explanation
"""
m.optimize()
m.plot()
print m
"""
Explanation: We can optimize the hyperparameters of the model using the m.optimize() method.
End of explanation
"""
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
"""
Explanation: Exercise 4
a) Generate random function samples using the optmized model and plot alongside the input data
End of explanation
"""
# create instance of kernel and GP Regression model
k = GPy.kern.Matern32(input_dim=1, variance=1., lengthscale=1.)
m = GPy.models.GPRegression(X,Y,k)
# optimize the model
m.constrain_positive()
m.optimize()
# view model parameters
print m
m.plot()
# get desired input points where model needs to be evaluated
Xp = np.linspace(0.0,1.0,100)[:,None]
# obtain posterior mean and variances
mu, C = m.predict(Xp, full_cov=True)
# generate 10 random paths of the distribution
nPaths = 10
paths = np.random.multivariate_normal(mu[:,0], C, nPaths)
# plot the dataset and the generated paths
plt.figure()
plt.plot(X,Y,'kx',mew=5)
for i in range(nPaths):
plt.plot(Xp[:],paths[i,:])
"""
Explanation: b) Modify the kernel used for building the model to investigate its influence on the model
End of explanation
"""
# load marathon timing dataset
data = np.genfromtxt('marathon.csv', delimiter=',')
# set the input and output data
X = data[:, 0:1]
Y = data[:, 1:2]
# plot the timings
plt.plot(X, Y, 'bx')
plt.xlabel('year')
plt.ylabel('marathon pace min/km')
"""
Explanation: 4 A Running Example
Now we will consider a small example with real-world data: data giving the pace of all marathons run at the Olympics:
End of explanation
"""
# create the covariance function
kern = GPy.kern.RBF(1) + GPy.kern.Bias(1)
# create the GP Regression model
model = GPy.models.GPRegression(X, Y, kern)
# optimize the model
model.optimize()
model.plot()
"""
Explanation: Exercise 5
a) Build a Gaussian process model for the olympic data set using a combination of an RBF and a bias covariance function. Fit the covariance function parameters and the noise to the data. Plot the fit and error bars from 1870 to 2030. Do you think the predictions are reasonable? If not why not?
End of explanation
"""
# create the covariance function and set the desired parameter values
kern2 = GPy.kern.RBF(1, variance=0.5) + GPy.kern.Bias(1)
model2 = GPy.models.GPRegression(X, Y, kern2)
# optimize and plot the model
model2.optimize()
model2.plot()
# compare the log likelihoods
print model.log_likelihood(), model2.log_likelihood()
"""
Explanation: b) Fit the same model, but this time initialize the lengthscale of the RBF kernel to 0.5. What has happened? Which model has the higher log likelihood, this one or the one from (a)?
Hint: use model.log_likelihood() for computing the log likelihood.
End of explanation
"""
# create the covariance function and set the desired parameter values
kern3 = GPy.kern.RBF(1, lengthscale=80.0) + GPy.kern.Matern32(1, lengthscale=10.0) + GPy.kern.Bias(1)
model3 = GPy.models.GPRegression(X, Y, kern3)
# optimize and plot the model
model3.optimize()
model3.plot()
# compare the log likelihoods
print model.log_likelihood(), model3.log_likelihood()
"""
Explanation: c) Modify your model by including two covariance functions. Initialize a covariance function with an exponentiated quadratic part, a Matern 3/2 part and a bias covariance. Set the initial lengthscale of the exponentiated quadratic to 80 years and the initial lengthscale of the Matern 3/2 to 10 years. Optimize the new model and plot the fit again. How does it compare with the previous model?
End of explanation
"""
# create the covariance function and set the desired parameter values
kern4 = GPy.kern.RBF(1, lengthscale=20.0) + GPy.kern.Matern32(1, lengthscale=20.0) + GPy.kern.Bias(1)
model4 = GPy.models.GPRegression(X, Y, kern4)
# optimize and plot the model
model4.optimize()
model4.plot()
# compare the log likelihoods
print model3.log_likelihood(), model4.log_likelihood()
"""
Explanation: d) Repeat part c) but now initialize both of the covariance functions' lengthscales to 20 years. Check the model parameters, what happens now?
End of explanation
"""
# create the covariance function and set the desired parameter values
kern5 = GPy.kern.RBF(1, lengthscale=5.0, variance=5.0)*GPy.kern.Linear(1)
model5 = GPy.models.GPRegression(X, Y, kern5)
# optimize and plot the model
model5.optimize()
model5.plot()
# get the model parameters
print model5
# compare the log likelihoods
print model.log_likelihood(), model5.log_likelihood()
"""
Explanation: e) Now model the data with a product of an exponentiated quadratic covariance function and a linear covariance function. Fit the covariance function parameters. Why are the variance parameters of the linear part so small? How could this be fixed?
End of explanation
"""
|
SylvainCorlay/bqplot | examples/Interactions/Interaction Layer.ipynb | apache-2.0 | ## First we define a Figure
dt_x_fast = DateScale()
lin_y = LinearScale()
x_ax = Axis(label='Index', scale=dt_x_fast)
x_ay = Axis(label=(symbol + ' Price'), scale=lin_y, orientation='vertical')
lc = Lines(x=dates_actual, y=prices, scales={'x': dt_x_fast, 'y': lin_y}, colors=['orange'])
lc_2 = Lines(x=dates_actual[50:], y=prices[50:] + 2, scales={'x': dt_x_fast, 'y': lin_y}, colors=['blue'])
## Next we define the type of selector we would like
intsel_fast = FastIntervalSelector(scale=dt_x_fast, marks=[lc, lc_2])
## Now, we define a function that will be called when the FastIntervalSelector is interacted with
def fast_interval_change_callback(change):
db_fast.value = 'The selected period is ' + str(change.new)
## Now we connect the selectors to that function
intsel_fast.observe(fast_interval_change_callback, names=['selected'])
## We use the HTML widget to see the value of what we are selecting and modify it when an interaction is performed
## on the selector
db_fast = HTML()
db_fast.value = 'The selected period is ' + str(intsel_fast.selected)
fig_fast_intsel = Figure(marks=[lc, lc_2], axes=[x_ax, x_ay], title='Fast Interval Selector Example',
interaction=intsel_fast) #This is where we assign the interaction to this particular Figure
VBox([db_fast, fig_fast_intsel])
"""
Explanation: Line Chart Selectors
Fast Interval Selector
End of explanation
"""
db_index = HTML(value='[]')
## Now we try a selector made to select all the y-values associated with a single x-value
index_sel = IndexSelector(scale=dt_x_fast, marks=[lc, lc_2])
## Now, we define a function that will be called when the selectors are interacted with
def index_change_callback(change):
db_index.value = 'The selected date is ' + str(change.new)
index_sel.observe(index_change_callback, names=['selected'])
fig_index_sel = Figure(marks=[lc, lc_2], axes=[x_ax, x_ay], title='Index Selector Example',
interaction=index_sel)
VBox([db_index, fig_index_sel])
"""
Explanation: Index Selector
End of explanation
"""
from datetime import datetime as py_dtime
dt_x_index = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))
lin_y2 = LinearScale()
lc2_index = Lines(x=dates_actual, y=prices,
scales={'x': dt_x_index, 'y': lin_y2})
x_ax1 = Axis(label='Date', scale=dt_x_index)
x_ay2 = Axis(label=(symbol + ' Price'), scale=lin_y2, orientation='vertical')
intsel_date = FastIntervalSelector(scale=dt_x_index, marks=[lc2_index])
db_date = HTML()
db_date.value = str(intsel_date.selected)
## Now, we define a function that will be called when the selectors are interacted with - a callback
def date_interval_change_callback(change):
db_date.value = str(change.new)
## Notice here that we call the observe on the Mark lc2_index rather than on the selector intsel_date
lc2_index.observe(date_interval_change_callback, names=['selected'])
fig_date_mark = Figure(marks=[lc2_index], axes=[x_ax1, x_ay2],
title='Fast Interval Selector Selected Indices Example', interaction=intsel_date)
VBox([db_date, fig_date_mark])
"""
Explanation: Returning indexes of selected values
End of explanation
"""
## Defining a new Figure
dt_x_brush = DateScale(min=np.datetime64(py_dtime(2006, 6, 1)))
lin_y2_brush = LinearScale()
lc3_brush = Lines(x=dates_actual, y=prices,
scales={'x': dt_x_brush, 'y': lin_y2_brush})
x_ax_brush = Axis(label='Date', scale=dt_x_brush)
x_ay_brush = Axis(label=(symbol + ' Price'), scale=lin_y2_brush, orientation='vertical')
db_brush = HTML(value='[]')
brushsel_date = BrushIntervalSelector(scale=dt_x_brush, marks=[lc3_brush], color='FireBrick')
## Now, we define a function that will be called when the selectors are interacted with - a callback
def date_brush_change_callback(change):
db_brush.value = str(change.new)
lc3_brush.observe(date_brush_change_callback, names=['selected'])
fig_brush_sel = Figure(marks=[lc3_brush], axes=[x_ax_brush, x_ay_brush],
title='Brush Selector Selected Indices Example', interaction=brushsel_date)
VBox([db_brush, fig_brush_sel])
"""
Explanation: Brush Selector
We can do the same with any type of selector
End of explanation
"""
date_fmt = '%m-%d-%Y'
sec2_data = price_data[symbol2].values
dates = price_data.index.values
sc_x = LinearScale()
sc_y = LinearScale()
scatt = Scatter(x=prices, y=sec2_data,
scales={'x': sc_x, 'y': sc_y})
sc_xax = Axis(label=(symbol), scale=sc_x)
sc_yax = Axis(label=(symbol2), scale=sc_y, orientation='vertical')
br_sel = BrushSelector(x_scale=sc_x, y_scale=sc_y, marks=[scatt], color='red')
db_scat_brush = HTML(value='[]')
## call back for the selector
def brush_callback(change):
db_scat_brush.value = str(br_sel.selected)
br_sel.observe(brush_callback, names=['brushing'])
fig_scat_brush = Figure(marks=[scatt], axes=[sc_xax, sc_yax], title='Scatter Chart Brush Selector Example',
interaction=br_sel)
VBox([db_scat_brush, fig_scat_brush])
"""
Explanation: Scatter Chart Selectors
Brush Selector
End of explanation
"""
sc_brush_dt_x = DateScale(date_format=date_fmt)
sc_brush_dt_y = LinearScale()
scatt2 = Scatter(x=dates_actual, y=sec2_data,
scales={'x': sc_brush_dt_x, 'y': sc_brush_dt_y})
br_sel_dt = BrushSelector(x_scale=sc_brush_dt_x, y_scale=sc_brush_dt_y, marks=[scatt2])
db_brush_dt = HTML(value=str(br_sel_dt.selected))
## call back for the selector
def brush_dt_callback(change):
db_brush_dt.value = str(br_sel_dt.selected)
br_sel_dt.observe(brush_dt_callback, names=['brushing'])
sc_xax = Axis(label=(symbol), scale=sc_brush_dt_x)
sc_yax = Axis(label=(symbol2), scale=sc_brush_dt_y, orientation='vertical')
fig_brush_dt = Figure(marks =[scatt2], axes=[sc_xax, sc_yax], title='Brush Selector with Dates Example',
interaction=br_sel_dt)
VBox([db_brush_dt, fig_brush_dt])
"""
Explanation: Brush Selector with Date Values
End of explanation
"""
## call back for selectors
def interval_change_callback(name, value):
db3.value = str(value)
## call back for the selector
def brush_callback(change):
if(not br_intsel.brushing):
db3.value = str(br_intsel.selected)
returns = np.log(prices[1:]) - np.log(prices[:-1])
hist_x = LinearScale()
hist_y = LinearScale()
hist = Hist(sample=returns, scales={'sample': hist_x, 'count': hist_y})
br_intsel = BrushIntervalSelector(scale=hist_x, marks=[hist])
br_intsel.observe(brush_callback, names=['selected'])
br_intsel.observe(brush_callback, names=['brushing'])
db3 = HTML()
db3.value = str(br_intsel.selected)
h_xax = Axis(scale=hist_x, label='Returns', grids='off', set_ticks=True, tick_format='0.2%')
h_yax = Axis(scale=hist_y, label='Freq', orientation='vertical', grid_lines='none')
fig_hist = Figure(marks=[hist], axes=[h_xax, h_yax], title='Histogram Selection Example', interaction=br_intsel)
VBox([db3, fig_hist])
"""
Explanation: Histogram Selectors
End of explanation
"""
def multi_sel_callback(change):
if(not multi_sel.brushing):
db4.value = str(multi_sel.selected)
line_x = LinearScale()
line_y = LinearScale()
line = Lines(x=np.arange(100), y=np.random.randn(100), scales={'x': line_x, 'y': line_y})
multi_sel = MultiSelector(scale=line_x, marks=[line])
multi_sel.observe(multi_sel_callback, names=['selected'])
multi_sel.observe(multi_sel_callback, names=['brushing'])
db4 = HTML()
db4.value = str(multi_sel.selected)
h_xax = Axis(scale=line_x, label='Returns', grid_lines='none')
h_yax = Axis(scale=hist_y, label='Freq', orientation='vertical', grid_lines='none')
fig_multi = Figure(marks=[line], axes=[h_xax, h_yax], title='Multi-Selector Example',
interaction=multi_sel)
VBox([db4, fig_multi])
# changing the names of the intervals.
multi_sel.names = ['int1', 'int2', 'int3']
"""
Explanation: Multi Selector
This selector provides the ability to have multiple brush selectors on the same graph.
The first brush works like a regular brush.
Ctrl + click creates a new brush, which works like the regular brush.
The active brush has a Green border while all the inactive brushes have a Red border.
Shift + click deactivates the current active brush. Now, click on any inactive brush to make it active.
Ctrl + Alt + Shift + click clears and resets all the brushes.
End of explanation
"""
def multi_sel_dt_callback(change):
if(not multi_sel_dt.brushing):
db_multi_dt.value = str(multi_sel_dt.selected)
line_dt_x = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
line_dt_y = LinearScale()
line_dt = Lines(x=dates_actual, y=sec2_data, scales={'x': line_dt_x, 'y': line_dt_y}, colors=['red'])
multi_sel_dt = MultiSelector(scale=line_dt_x)
multi_sel_dt.observe(multi_sel_dt_callback, names=['selected'])
multi_sel_dt.observe(multi_sel_dt_callback, names=['brushing'])
db_multi_dt = HTML()
db_multi_dt.value = str(multi_sel_dt.selected)
h_xax_dt = Axis(scale=line_dt_x, label='Returns', grid_lines='none')
h_yax_dt = Axis(scale=line_dt_y, label='Freq', orientation='vertical', grid_lines='none')
fig_multi_dt = Figure(marks=[line_dt], axes=[h_xax_dt, h_yax_dt], title='Multi-Selector with Date Example',
interaction=multi_sel_dt)
VBox([db_multi_dt, fig_multi_dt])
"""
Explanation: Multi Selector with Date X
End of explanation
"""
lasso_sel = LassoSelector()
xs, ys = LinearScale(), LinearScale()
data = np.arange(20)
line_lasso = Lines(x=data, y=data, scales={'x': xs, 'y': ys})
scatter_lasso = Scatter(x=data, y=data, scales={'x': xs, 'y': ys}, colors=['skyblue'])
bar_lasso = Bars(x=data, y=data/2., scales={'x': xs, 'y': ys})
xax_lasso, yax_lasso = Axis(scale=xs, label='X'), Axis(scale=ys, label='Y', orientation='vertical')
fig_lasso = Figure(marks=[scatter_lasso, line_lasso, bar_lasso], axes=[xax_lasso, yax_lasso],
title='Lasso Selector Example', interaction=lasso_sel)
lasso_sel.marks = [scatter_lasso, line_lasso]
fig_lasso
scatter_lasso.selected, line_lasso.selected
"""
Explanation: Lasso Selector
End of explanation
"""
xs_pz = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
ys_pz = LinearScale()
line_pz = Lines(x=dates_actual, y=sec2_data, scales={'x': xs_pz, 'y': ys_pz}, colors=['red'])
panzoom = PanZoom(scales={'x': [xs_pz], 'y': [ys_pz]})
xax = Axis(scale=xs_pz, label='Date', grids='off')
yax = Axis(scale=ys_pz, label='Price', orientation='vertical', grid_lines='none')
Figure(marks=[line_pz], axes=[xax, yax], interaction=panzoom)
"""
Explanation: Pan Zoom
End of explanation
"""
xs_hd = DateScale(min=np.datetime64(py_dtime(2007, 1, 1)))
ys_hd = LinearScale()
line_hd = Lines(x=dates_actual, y=sec2_data, scales={'x': xs_hd, 'y': ys_hd}, colors=['red'])
handdraw = HandDraw(lines=line_hd)
xax = Axis(scale=xs_hd, label='Date', grid_lines='none')
yax = Axis(scale=ys_hd, label='Price', orientation='vertical', grid_lines='none')
Figure(marks=[line_hd], axes=[xax, yax], interaction=handdraw)
"""
Explanation: Hand Draw
End of explanation
"""
dt_x = DateScale(date_format=date_fmt, min=py_dtime(2007, 1, 1))
lc1_x = LinearScale()
lc2_y = LinearScale()
lc2 = Lines(x=np.linspace(0.0, 10.0, len(prices)), y=prices * 0.25,
scales={'x': lc1_x, 'y': lc2_y},
display_legend=True,
labels=['Security 1'])
lc3 = Lines(x=dates_actual, y=sec2_data,
scales={'x': dt_x, 'y': lc2_y},
colors=['red'],
display_legend=True,
labels=['Security 2'])
lc4 = Lines(x=np.linspace(0.0, 10.0, len(prices)), y=sec2_data * 0.75,
scales={'x': LinearScale(min=5, max=10), 'y': lc2_y},
colors=['green'], display_legend=True,
labels=['Security 2 squared'])
x_ax1 = Axis(label='Date', scale=dt_x)
x_ax2 = Axis(label='Time', scale=lc1_x, side='top', grid_lines='none')
x_ay2 = Axis(label=(symbol + ' Price'), scale=lc2_y, orientation='vertical')
fig = Figure(marks=[lc2, lc3, lc4], axes=[x_ax1, x_ax2, x_ay2])
## declaring the interactions
multi_sel = MultiSelector(scale=dt_x, marks=[lc2, lc3])
br_intsel = BrushIntervalSelector(scale=lc1_x, marks=[lc2, lc3])
index_sel = IndexSelector(scale=dt_x, marks=[lc2, lc3])
int_sel = FastIntervalSelector(scale=dt_x, marks=[lc3, lc2])
hd = HandDraw(lines=lc2)
hd2 = HandDraw(lines=lc3)
pz = PanZoom(scales={'x': [dt_x], 'y': [lc2_y]})
deb = HTML()
deb.value = '[]'
## Call back handler for the interactions
def test_callback(change):
deb.value = str(change.new)
multi_sel.observe(test_callback, names=['selected'])
br_intsel.observe(test_callback, names=['selected'])
index_sel.observe(test_callback, names=['selected'])
int_sel.observe(test_callback, names=['selected'])
from collections import OrderedDict
selection_interacts = ToggleButtons(options=OrderedDict([('HandDraw1', hd), ('HandDraw2', hd2), ('PanZoom', pz),
('FastIntervalSelector', int_sel), ('IndexSelector', index_sel),
('BrushIntervalSelector', br_intsel), ('MultiSelector', multi_sel),
('None', None)]))
link((selection_interacts, 'value'), (fig, 'interaction'))
VBox([deb, fig, selection_interacts], align_self='stretch')
# Set the scales of lc4 to the ones of lc2 and check if panzoom pans the two.
lc4.scales = lc2.scales
"""
Explanation: Unified Figure with All Interactions
End of explanation
"""
|
artem-oppermann/Udacity-Data-Analyst-Nanodegree-Projects | Project 2- Data Analysis with Python/titanic_project.ipynb | gpl-2.0 | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from collections import Counter
titanic=pd.read_csv("titanic-data.csv")
titanic.head()
"""
Explanation: Titanic Data Analysis
1. Introduction
In this project I will perform a data analysis on the sample Titanic dataset. The dataset contains
demographics and passenger information of 891 out of the 2224 passengers and crew members on board the Titanic.
The data was obtained at https://www.kaggle.com/c/titanic/data.
In my analysis I will examine the factors that may have increased the chances of survival. I will particularly focus on the following questions:
Did the gender determine the chances of survival?
Did the socio-economic status determine the chances of survival?
Did the age considering the gender determine the chances of survival?
Did age, regardless of gender, determine your chances of survival?
Did the number of children aboard per passenger determine the chances of survival?
2. Data Wrangling
2.1 Data Discription
The data contains following information:
survival: Survival (0 = No; 1 = Yes)
pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
name: Name
sex: Gender
age: Age
sibsp: Number of Siblings/Spouses Aboard
parch: Number of Parents/Children Aboard
ticket: Ticket Number
fare: Passenger Fare
cabin: Cabin
embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
Here are the first five rows of the dataframe to give an simple overview over the data.
End of explanation
"""
# Create new dataset without unwanted columns
titanic=titanic.drop(['Name','Ticket','Cabin','Embarked','SibSp'], axis=1)
titanic.head()
"""
Explanation: 2.2 Data Cleanup
The data cleanup procedure consists of the following steps:
1. Removal of unnecessary data
2. Removal of duplicates
3. Determine missing values in the data
Based on the questions I want to answer in this project, some of the columns in the dataset will not be important for further examination; therefore they will be removed. These columns are:
Name
Ticket
Cabin
Embarked
sibsp
The following code removes the mentioned columns and shows the first five rows of the dataframe to confirm that the unnecessary data was removed.
End of explanation
"""
# Identify whether duplicates exist in the data
titanic_duplicates = titanic.duplicated()
print('Number of duplicates: {}'.format(titanic_duplicates.sum()))
"""
Explanation: Here I want to identify duplicates in the data.
End of explanation
"""
# Calculating the number of missing values
titanic.isnull().sum()
# Determine the number of males and females with missing age in the dataset
missing_age_male = titanic[pd.isnull(titanic['Age'])]['Sex'] == 'male'
missing_age_female = titanic[pd.isnull(titanic['Age'])]['Sex'] == 'female'
print('Number of male passengers with missing age: {}'.format(missing_age_male.sum()))
print('Number of female passengers with missing age: {}'.format(missing_age_female.sum()))
"""
Explanation: There are no duplicate rows in the dataset.
2.3 Missing Values
In this subsection I want to find out how many missing values there are in the dataframe.
End of explanation
"""
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(17, 5))
#Age distribution
ages=titanic["Age"].dropna()
#plt.figure(figsize=(7,7))
axes[0].hist(ages, bins=80, color='#377eb8',edgecolor = "Black")
axes[0].set_xlabel("Age/Years")
axes[0].set_ylabel('Count')
axes[0].set_title('Age distribution of the passengers')
#plt.show()
# Count of male and female passengers
gender=titanic['Sex']
counts = Counter(gender)
common = counts.most_common()
gender = [item[0] for item in common]
count = [item[1] for item in common]
axes[1].bar(np.arange(2), count, tick_label=gender, width=0.4, color='#377eb8',edgecolor = "Black")
axes[1].set_ylabel('Count')
axes[1].set_title('Number of female and male passengers.')
# Count of dead and survived passengers
survival=titanic['Survived']
counts=Counter(survival)
common = counts.most_common()
label=["Dead", "Survived"]
count=[item[1] for item in common]
axes[2].bar(np.arange(2), count, tick_label=label, width=0.4, color='#377eb8',edgecolor = "Black")
axes[2].set_ylabel('Count')
axes[2].set_title('Number of survived and dead passengers.')
plt.show()
"""
Explanation: We can see that roughly 20% of the passengers in the dataset do not have a stated age, especially the male passengers. This should be kept in mind when examining whether age, considering gender, determined the chances of survival.
3. Data Analysis
First I want to gain a basic overview of the age distribution, the number of female and male passengers, and the number of survivors and deaths.
End of explanation
"""
# Group the PassengerId by gender and survival
g=titanic.groupby(["Survived","Sex"])["PassengerId"]
# Count how many passengers died or survived dependent on the gender
survived_men=g.get_group((1,"male")).count()
survived_women=g.get_group((1,"female")).count()
dead_men=g.get_group((0,"male")).count()
dead_women=g.get_group((0,"female")).count()
# Group the PassengerId by gender and count the Id's dependent on the gender to find out the total number of women and men
g=titanic.groupby("Sex")["PassengerId"]
men_sum=float(g.get_group(("male")).count())
women_sum=float(g.get_group(("female")).count())
# Normalize the counts of dead and survived passengers by gender
p2=survived=[survived_men/men_sum, survived_women/women_sum]
p1=dead=[dead_men/men_sum, dead_women/women_sum]
# Plot the survival by gender ration
plt.figure(figsize=(7,7))
N=2
ind = np.arange(N)
width = 0.35
bar1 = plt.bar(ind, survived, width,color='#377eb8', edgecolor = "Black")
bar2 = plt.bar(ind+width, dead, width,color='#e41a1c', edgecolor = "Black")
plt.ylabel('Ratio of passengers')
plt.title('Survival by Gender')
plt.xticks(ind+width/2, ['Men', "Female"])
plt.legend((bar2, bar1), ('Dead', 'Survived'))
plt.figure(num=None, figsize=(1, 1), dpi=80, facecolor='w', edgecolor='k')
plt.show()
"""
Explanation: The age distribution can be approximated by a Gaussian distribution with a mean around 25-30 years.
In the second plot it can be seen that there were almost twice as many male as female passengers. The third plot shows that there were more deaths than survivals.
3.1 Did the gender determine the chances of survival?
End of explanation
"""
################################ Survival regarding the fare ##########################
# Create a dataframe of all fares
fares_df=titanic[["Fare", "Survived"]]
fares=titanic["Fare"]
# Create 20 fare ranges from 0 $ - 300 $ and count how many fares from fares_df belong to each range
num_bins=20
bar_width=300/float(num_bins)
fare_ranges_all=[]
for i in np.arange(0,num_bins,1):
fare_ranges_all.append(len([x for x in fares if i*bar_width <= x < (i+1)*bar_width]))
# Create a dataframe with fares of passengers who survived
survived_fares=fares_df.ix[(fares_df["Survived"]==1)]["Fare"]
# Determine how many fares of passengers who survived belong in each of the 20 ranges
fare_ranges_survived=[]
for i in np.arange(0,num_bins,1):
fare_ranges_survived.append(len([x for x in survived_fares if i*bar_width <= x < (i+1)*bar_width]))
# Handle the case in which a fare range does not contain any counts (to avoid devide by null error during normalization)
for n,i in enumerate(fare_ranges_all):
if i==0:
fare_ranges_all[n]=1
################################ Survival regarding the class ##########################
# Get the Groupby object
g=titanic.groupby(["Survived","Pclass"])
# Count the passengers from each class who not have survived
dead_class_1=g.get_group((0,1))["PassengerId"].count()
dead_class_2=g.get_group((0,2))["PassengerId"].count()
dead_class_3=g.get_group((0,3))["PassengerId"].count()
# Count the passengers from each class who have survived
survived_class_1=g.get_group((1,1))["PassengerId"].count()
survived_class_2=g.get_group((1,2))["PassengerId"].count()
survived_class_3=g.get_group((1,3))["PassengerId"].count()
# Get the Groupby object
g=titanic.groupby(["Pclass"])
# Count the passengers in each class
passengers_class1=float(g.get_group((1))["PassengerId"].count())
passengers_class2=float(g.get_group((2))["PassengerId"].count())
passengers_class3=float(g.get_group((3))["PassengerId"].count())
# Plot the fare-survival relation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
fare_ranges_all=np.array(fare_ranges_all).astype(float)
#Normalize the number of fares in each range
normed_survived_fares=np.array(fare_ranges_survived)/fare_ranges_all
axes[0].bar(np.arange(0, 300,bar_width)+bar_width/2.0,normed_survived_fares,width=bar_width, align="center", edgecolor = "Black")
axes[0].set_title("Survival by Fare")
axes[0].set_xlabel("Fare price / $")
axes[0].set_ylabel("Ratio of passengers")
# Plot the class-survival relation
y1, y2, y3=dead_class_1/passengers_class1, dead_class_2/passengers_class2, dead_class_3/passengers_class3
z1, z2, z3=survived_class_1/passengers_class1, survived_class_2/passengers_class2, survived_class_3/passengers_class3
dead=[y1, y2, y3]
survived=[z1, z2, z3]
width=0.25
ind = np.arange(3)
bar1 = axes[1].bar(ind, survived, width,color='#377eb8', edgecolor = "Black")
bar2 = axes[1].bar(ind+width, dead, width,color='#e41a1c', edgecolor = "Black")
axes[1].legend((bar2, bar1), ('Dead', 'Survived'))
axes[1].set_title("Survival by Class")
axes[1].set_xlabel("Class")
axes[1].set_ylabel("Ratio of passengers")
axes[1].set_xticks(ind+width/2)
axes[1].set_xticklabels(["1", "2", "3"])
plt.show()
"""
Explanation: According to the plot it is obvious that male passengers had a significantly lower chance of survival than female passengers.
3.2 Did the socio-economic status determine the chances of survival?
End of explanation
"""
# Count the PassengerId grouped by the age and save it as dataframe
df=pd.DataFrame({'count' : titanic.groupby("Age")["PassengerId"].count()}).reset_index()
# Make a dictionary out of df, where age is the key and count is the value
passengers_by_age = dict(zip(df["Age"], df["count"]))
# Count the PassengerId grouped by survival and age and save it in a dataframe
df=pd.DataFrame({'count' : titanic.groupby( ["Survived","Age"])["PassengerId"].count()}).reset_index()
# New dataframe where all passengers survived
df2 = df.loc[df['Survived'] == 1]
# Make a dictionary where keys are the passenger age group and the values the normalized count of passengers in this age group
age_survived_norm={}
for index, row in df2.iterrows():
age_survived_norm.update(({row["Age"]:row["count"]/float(passengers_by_age[row["Age"]])}))
# Plot the results
plt.figure(figsize=(13,7))
plt.bar(list(age_survived_norm.keys()), list(age_survived_norm.values()), align='center', color="#377eb8")
plt.xlabel('Age / years')
plt.ylabel('Ratio of survived passengers')
plt.title('Survival of passengers by age')
plt.show()
"""
Explanation: The left plot shows the tendency that a lower fare price decreases the survival ratio. In the right plot, a steady decline of the survival ratio with lower passenger class can be observed. In summary, socio-economic status did have an impact on the chances of survival: passengers with a higher socio-economic status had a higher chance of surviving.
3.3 Did age, regardless of gender, determine the chances of survival?
End of explanation
"""
# Count the PassengerId grouped by age and gender and save as a dataframe
df=pd.DataFrame({'count' : titanic.groupby(["Age", "Sex"])["PassengerId"].count()}).reset_index()
# Take from df values which belong to men and save as new dataframe
df_male=df.loc[df['Sex']=='male']
# Take from df values which belong to women and save as new dataframe
df_women=df.loc[df['Sex']=='female']
# Create dictionary with age of men as the key and the count of men in this age group as value
male_by_age=dict(zip(df_male["Age"], df_male["count"]))
# Create dictionary with age of women as the key and the count of women in this age group as value
female_by_age=dict(zip(df_women["Age"], df_women["count"]))
#Count the PassengerId grouped by survival, age and gender and save as dataframe
df=pd.DataFrame({'count' : titanic.groupby( ["Survived","Age", "Sex"])["PassengerId"].count()}).reset_index()
#Create two dictionaries in which the keys are the age of men/women and the normalized count of men/women in this age group as value
male_by_age_survived={}
female_by_age_survived={}
for index, row in df.iterrows():
if row["Survived"]==1:
if row["Sex"]=="male":
male_by_age_survived.update(({row["Age"]:row["count"]/float(male_by_age[row["Age"]])}))
else:
female_by_age_survived.update(({row["Age"]:row["count"]/float(female_by_age[row["Age"]])}))
# Plot the results
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(20, 5))
axes[0].bar(list(male_by_age_survived.keys()), list(male_by_age_survived.values()), align='center')
axes[0].set_xlabel('Age / years')
axes[0].set_ylabel('Ratio of survived men')
axes[0].set_title('Survival of male passengers by age')
axes[1].bar(list(female_by_age_survived.keys()), list(female_by_age_survived.values()), align='center')
axes[1].set_xlabel('Age / years')
axes[1].set_ylabel('Ratio of survived women')
axes[1].set_title('Survival of female passengers by age')
plt.show()
"""
Explanation: It can be observed that younger and older passengers, regardless of their gender, had a higher survival ratio than middle-aged passengers.
3.4 Did age, considering gender, determine the chances of survival?
End of explanation
"""
# How many passengers have how many children aboard? Create a new dataframe and transform its columns into a dictionary
parch_count=pd.DataFrame({'count' : titanic.groupby("Parch")["PassengerId"].count()}).reset_index()
parch_count=dict(zip(parch_count["Parch"],parch_count["count"]))
# Same dataframe as above but also grouped by survival
df=pd.DataFrame({'count' : titanic.groupby(["Survived","Parch"])["PassengerId"].count()}).reset_index()
# Calculate the survival ratio per children aboard
survival_ratio={}
for index, row in df.iterrows():
    if not row["Survived"]:
        survival_ratio[row["Parch"]]=1-(row["count"]/float(parch_count[row["Parch"]]))
plt.figure(figsize=(10,6))
plt.bar(list(survival_ratio.keys()),list(survival_ratio.values()),color='#377eb8', edgecolor = "Black")
plt.title("Survival ratio per children aboard")
plt.xlabel("Number of children per passenger")
plt.ylabel("Ratio of survival")
plt.show()
"""
Explanation: If we take gender into account, it can be seen that middle-aged male passengers had a much lower survival ratio than women, while male children still had a good chance of survival.
In contrast, female passengers had a higher survival ratio than male passengers across all age groups.
3.5 Did the number of children aboard per passenger determine the chances of survival?
End of explanation
"""
|
IIPBC/Material | machine_learning_Nina/Exercise1-3.ipynb | mit |
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# Create an array with N numbers.
# Each of these numbers is an example x
# Then extend the examples: x ---> (1,x)
N = 14
x = np.array([0.2, 0.5, 1, 1.1, 1.2, 1.8, 2, 4.3, 4.4, 5.7, 6.9, 7.5, 8, 8.2])
X = np.column_stack((np.ones(N), x))
print('Dimension of array X:', X.shape)
# Assume the examples in the first half are negative and the rest are
# positive
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1])
print('Dimension of array y:', y.shape)
# show elements in distinct colors to discriminate negative from positive ones
for i in range(N):
    if y[i]==1:
        plt.plot(X[i,1], y[i], 'bo') # blue circles
    else:
        plt.plot(X[i,1], y[i], 'ro') # red circles
plt.ylim(-1,2)
plt.xlabel('x')
plt.ylabel('y (classe)')
plt.show()
"""
Explanation: Exercise 1-3: Classification
<b>Objective:</b> Understand how to apply linear regression and logistic regression.
1. How to apply regression to a classification problem
The label $y$ (taking the value 0 or 1 in our case) can be thought of as the value we want to predict for any observation $\mathbf{x}$.<br>
<br>
So the idea is exactly that. Here we will use the same data as in Exercise 0-1.<br>
In the plot that follows, the positive and negative examples appear at different heights in the graph (unlike the form seen in Exercise 0-1)
End of explanation
"""
# We assume the file funcoes.py has already been created
from funcoes import gradientDescent, computeCost
# guess some initial weights and compute the initial cost
w = np.zeros(2)
initialCost = computeCost(X, y, w)
print('Initial cost: ', initialCost)
# Some gradient descent settings
iterations = 1500
alpha = 0.01
# run gradient descent
w, J_history = gradientDescent(X, y, w, alpha, iterations)
finalCost = computeCost(X, y, w)
print('Final cost: ', finalCost)
# plot the result
print('Weight w found by gradient descent: (%f, %f)' % (w[0], w[1]))
# Plot the linear fit
plt.plot(X[:7,1], y[:7], 'ro')
plt.plot(X[7:,1], y[7:], 'bo')
plt.plot(X[:,1], X.dot(w), '-')
plt.ylim(-1,2)
plt.xlabel('x')
plt.ylabel('y (classe)')
plt.show()
"""
Explanation: 2. Linear regression
Let's apply linear regression, using the gradient descent method.<br>
If you do not remember the method, go back to Exercise 1-1<br>
At this point, you should already have created the file <tt>funcoes.py</tt>
End of explanation
"""
from funcoes2 import sigmoid, gradientDescent2, computeCost2
# guess some initial weights and compute the initial cost
w = np.zeros(2)
initialCost = computeCost2(X, y, w)
print('Initial cost: ', initialCost)
# Some gradient descent settings
iterations = 1000
alpha = 0.005
# run gradient descent
w, J_history = gradientDescent2(X, y, w, alpha, iterations)
finalCost = computeCost2(X, y, w)
print('Final cost: ', finalCost)
print(w)
R = X.dot(w)
plt.plot(X[:,1], X.dot(w), '-')
for i in range(N):
if R[i] > 0.5:
plt.plot(X[i,1], y[i], 'bo')
else:
plt.plot(X[i,1], y[i], 'ro')
plt.xlabel('x')
plt.ylabel('y (class)')
plt.show()
"""
Explanation: Hmmm, something looks strange. Or does it?
Think for a moment. Can you explain this result?
3. Logistic regression
Let's apply logistic regression. (Remember: despite the name, logistic regression does not aim to "fit" a function to the observations)<br>
In logistic regression, the linear combination $\sum_{j=0}^{n} w_j\,x_j$ is passed through the sigmoid function $s(z) = \frac{1}{1+e^{-z}}$<br> That is, we compute:
$$
g(\mathbf{x}) = s(h(\mathbf{x})) = s(\sum_{j=0}^{n} w_j\,x_j)
$$
and compare $g(\mathbf{x})$ with $y$. The idea is that $g(\mathbf{x})$ approximates the posterior probability $P(y=1|\mathbf{x})$.<br>
For this part, you will need the file <tt>funcoes2.py</tt> (which is part of the kit)
End of explanation
"""
|
dacb/elvizCluster | ipython_notebooks/depreciated/160330 Investigate samples with lots of Order Burkholderiales.ipynb | bsd-3-clause | import matplotlib as mpl
% matplotlib inline
import pandas as pd
import seaborn as sns
from IPython.display import IFrame
import elviz_utils
"""
Explanation: The goal of this notebook is to investigate/justify why some samples "look" weird when summing across contigs.
End of explanation
"""
reduced = pd.read_csv('../results/reduced_data--all_phylogeny_remains.csv')
sample_info = elviz_utils.read_sample_info('../')
sample_info.head()
"""
Explanation: Load Data
End of explanation
"""
IFrame('./plot_copies/160330_Order-Burkholderiales_Methylophilales_Methylococcales--Phylum-Bacteroidetes--rep.pdf',
width=800, height=300)
ls "../plots/mixed_phylogeny/"
sample_info[(sample_info['rep'] == 1) &
(sample_info['oxy'] == 'High') &
(sample_info['week'].isin([8, 10]))]
"""
Explanation: Look into samples that have a "too high" share of Order Burkholderiales.
See Rep 1 High O2 weeks 8, 10
End of explanation
"""
reduced[(reduced['Order']== 'Burkholderiales') &(reduced['ID']=='55_HOW8')]
"""
Explanation: Link to Elviz Data for 55_HOW8 (High O2 Rep 1 week 8):
http://genome.jgi.doe.gov/viz/plot?jgiProjectId=1056121
End of explanation
"""
reduced[(reduced['Order']== 'Burkholderiales') &(reduced['ID']=='79_HOW10')]
"""
Explanation: Link to Elviz Data for 79_HOW10 (High O2 Rep 1 week 10):
http://genome.jgi.doe.gov/viz/plot?jgiProjectId=1056169
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.20/_downloads/063df3a44a4ac9d23978d7b307e69a4e/plot_read_evoked.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
from mne import read_evokeds
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
# Reading
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0),
proj=True)
"""
Explanation: Reading and writing an evoked file
This script shows how to read and write evoked datasets.
End of explanation
"""
evoked.plot(exclude=[], time_unit='s')
# Show result as a 2D image (x: time, y: channels, color: amplitude)
evoked.plot_image(exclude=[], time_unit='s')
"""
Explanation: Show result as a butterfly plot:
By using exclude=[] bad channels are not excluded and are shown in red
End of explanation
"""
|
mastertrojan/Udacity | intro-to-rnns/.ipynb_checkpoints/Anna KaRNNa-checkpoint.ipynb | mit | import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
"""
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
"""
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
"""
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
"""
text[:100]
"""
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
"""
chars[:100]
"""
Explanation: And we can see the characters encoded as integers.
End of explanation
"""
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of sequences in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
"""
Explanation: Making training and validation batches
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
"""
train_x, train_y, val_x, val_y = split_data(chars, 10, 50)
train_x.shape
"""
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
"""
train_x[:,:50]
"""
Explanation: Looking at the size of this array, we see that we have rows equal to the batch size. When we want to get a batch out of here, we can grab a subset of this array that contains all the rows but has a width equal to the number of steps in the sequence. The first batch looks like this:
End of explanation
"""
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
"""
Explanation: I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
End of explanation
"""
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# One-hot encoding the input and target characters
x_one_hot = tf.one_hot(inputs, num_classes)
y_one_hot = tf.one_hot(targets, num_classes)
### Build the RNN layers
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
### Run the data through the RNN layers
    # This makes a list where each element is one step in the sequence
rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one output row for each step for each batch
seq_output = tf.concat(outputs, axis=1)
output = tf.reshape(seq_output, [-1, lstm_size])
    # Now connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(num_classes))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and batch
logits = tf.matmul(output, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
preds = tf.nn.softmax(logits, name='predictions')
# Reshape the targets to match the logits
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
cost = tf.reduce_mean(loss)
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
# NOTE: I'm using a namedtuple here because I think they are cool
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
"""
Explanation: Building the model
Below is a function where I build the graph for the network.
End of explanation
"""
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
keep_prob = 0.5
"""
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to write it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and I'm running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
"""
epochs = 20
# Save every N iterations
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/i{}_l{}_v{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
"""
Explanation: Training
Time for training which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}_v{validation loss}.ckpt
End of explanation
"""
tf.train.get_checkpoint_state('checkpoints')
"""
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
"""
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
"""
Explanation: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
"""
checkpoint = "checkpoints/____.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
"""
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
"""
|
osunderdog/PythonLearning | pandas_timeseries.ipynb | gpl-2.0 | from datetime import datetime, date, time
import sys
sys.version
import pandas as pd
from pandas import Series, DataFrame, Panel
pd.__version__
import numpy as np
np.__version__
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rc('figure', figsize=(10, 8))
mpl.__version__
"""
Explanation: Timeseries with pandas
Working with time-series data is an important part of data analysis.
Starting with v0.8, the pandas library has included a rich API for time-series manipulations.
The pandas time-series API includes:
Creating date ranges
From files
From scratch
Manipulations: Shift, resample, filter
Field accessors (e.g., hour of day)
Plotting
Time zones (localization and conversion)
Dual representations (point-in-time vs interval)
End of explanation
"""
import os.path
os.path.exists('data/data.csv')
with open('data/data.csv', 'r') as fh:
print(fh.readline()) # headers
print(fh.readline()) # first row
"""
Explanation: Example using tick data
Sample trade ticks from 2011-11-01 to 2011-11-03 for a single security
End of explanation
"""
data = pd.read_csv('data/data.csv',
parse_dates={'Timestamp': ['Date', 'Time']},
index_col='Timestamp')
data.head()
"""
Explanation: parse_dates: use a list or dict for flexible (possibly multi-column) date parsing
End of explanation
"""
ticks = data.ix[:, ['Price', 'Volume']]
ticks.head()
type(data)
"""
Explanation: Narrow Data down to just Timestamp (key), Price and Volume.
head shows the first few rows.
End of explanation
"""
ticks.count()
bars = ticks.Price.resample('1min').ohlc()
bars
bars.describe()
minute_range = bars.high - bars.low
minute_range.describe()
minute_return = bars.close / bars.open - 1
minute_return.describe()
"""
Explanation: resample: regularization and frequency conversion
End of explanation
"""
volume = ticks.Volume.resample('1min').sum()
value = ticks.prod(axis=1).resample('1min').sum()
vwap = value / volume
"""
Explanation: Compute a VWAP using resample
End of explanation
"""
vwap.ix['2011-11-01 09:27':'2011-11-01 09:32']
"""
Explanation: Convenient indexing for time series data
End of explanation
"""
bars.open.at_time('9:30')
bars.close.at_time('16:00')
"""
Explanation: at_time: same (b)at_time (same bat channel)
End of explanation
"""
filtered = vwap.between_time('10:00', '16:00')
filtered.head(20)
vol = volume.between_time('10:00', '16:00')
vol.head(20)
"""
Explanation: between_time: intraday time range
End of explanation
"""
filtered.ix['2011-11-03':'2011-11-04'].head(20)
filled = filtered.fillna(method='pad', limit=1)
filled.ix['2011-11-03':'2011-11-04'].head(20)
vol = vol.fillna(0.)
vol.head(20)
"""
Explanation: fillna: handling missing data
End of explanation
"""
filled.ix['2011-11-03':'2011-11-04'].plot()
plt.ylim(103.5, 104.5)
vwap.ix['2011-11-03':'2011-11-04'].plot()
plt.ylim(103.5, 104.5)
vol.ix['2011-11-03':'2011-11-04'].plot(secondary_y=True, style='r')
"""
Explanation: Simple plotting
End of explanation
"""
ticks.head()
"""
Explanation: Lead/lag
End of explanation
"""
ticks.shift(1).head()
ticks.shift(-1).head()
"""
Explanation: shift realigns values
End of explanation
"""
ticks.tshift(1, 'min').head()
"""
Explanation: tshift manipulates index values
End of explanation
"""
minute_return.head()
mr = minute_return.between_time('9:30', '16:00')
mr.head()
lagged = mr.shift(1)
lagged.head()
"""
Explanation: SSS: stupidly simple strategy
End of explanation
"""
lagged.at_time('9:30')
mr.at_time('16:00')
lagged = minute_return.tshift(1, 'min').between_time('9:30', '16:00')
lagged.at_time('9:30')
"""
Explanation: We shouldn't use shift here because it shifts values within the already-filtered intraday index, so each day's 9:30 bar would pick up the previous day's 16:00 return instead of a missing value:
End of explanation
"""
pd.ols(y=mr, x=lagged)
mr = vwap / bars.open - 1
mr = mr.between_time('9:30', '16:00')
lagged = mr.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged)
inter = mr * vol
inter = inter.between_time('9:30', '16:00')
lagged_inter = inter.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged_inter)
"""
Explanation: Let's play
End of explanation
"""
vol = vol.groupby(vol.index.day).transform(lambda x: x/x.sum())
vol.head()
"""
Explanation: Convert to percentage volume
End of explanation
"""
vol.resample('D', how='sum')
inter = mr * vol
inter = inter.between_time('9:30', '16:00')
lagged_inter = inter.tshift(1, 'min').between_time('9:30', '16:00')
pd.ols(y=mr, x=lagged_inter)
"""
Explanation: Verify
End of explanation
"""
hour = vol.index.hour
hourly_volume = vol.groupby(hour).mean()
hourly_volume.plot(kind='bar')
"""
Explanation: Vivaldi FTW
End of explanation
"""
hourly = vol.resample('H')
def calc_mean(hr):
hr = time(hour=hr)
data = hourly.at_time(hr)
return pd.expanding_mean(data)
df = pd.concat([calc_mean(hr) for hr in range(10, 16)])
df = df.sort_index()
df
"""
Explanation: Expanding window of hourly means for volume
End of explanation
"""
clean_vol = vol.between_time('10:00', '15:59')
dev = clean_vol - df.reindex(clean_vol.index, method='pad') # be careful over day boundaries
dev
inter = mr * dev
inter = inter.between_time('10:00', '15:59')
pd.ols(y=mr, x=inter.tshift(1, 'min'))
"""
Explanation: Compute deviations from the hourly means
End of explanation
"""
rng = pd.date_range('2005', '2012', freq='M')
rng
pd.date_range('2005', periods=7*12, freq='M')
pd.date_range(end='2012', periods=7*12, freq='M')
"""
Explanation: Date range creation
pd.date_range
End of explanation
"""
pd.date_range('2005', periods=4, freq='Q')
pd.date_range('2005', periods=4, freq='Q-NOV')
"""
Explanation: Frequency constants
<table>
<tr><td>Name</td><td>Description</td></tr>
<tr><td>D</td><td>Calendar day</td></tr>
<tr><td>B</td><td>Business day</td></tr>
<tr><td>M</td><td>Calendar end of month</td></tr>
<tr><td>MS</td><td>Calendar start of month</td></tr>
<tr><td>BM</td><td>Business end of month</td></tr>
<tr><td>BMS</td><td>Business start of month</td></tr>
<tr><td>W-{MON, TUE,...}</td><td>Week ending on Monday, Tuesday, ...</td></tr>
<tr><td>Q-{JAN, FEB,...}</td><td>Quarter end with year ending January, February...</td></tr>
<tr><td>QS-{JAN, FEB,...}</td><td>Quarter start with year ending January, February...</td></tr>
<tr><td>BQ-{JAN, FEB,...}</td><td>Business quarter end with year ending January, February...</td></tr>
<tr><td>BQS-{JAN, FEB,...}</td><td>Business quarter start with year ending January, February...</td></tr>
<tr><td>A-{JAN, FEB, ...}</td><td>Year end (December)</td></tr>
<tr><td>AS-{JAN, FEB, ...}</td><td>Year start (December)</td></tr>
<tr><td>BA-{JAN, FEB, ...}</td><td>Business year end (December)</td></tr>
<tr><td>BAS-{JAN, FEB, ...}</td><td>Business year start (December)</td></tr>
<tr><td>H</td><td>Hour</td></tr>
<tr><td>T</td><td>Minute</td></tr>
<tr><td>s</td><td>Second</td></tr>
<tr><td>L, ms</td><td>Millisecond</td></tr>
<tr><td>U</td><td>Microsecond</td></tr>
</table>
Anchored offsets
End of explanation
"""
wkrng = pd.date_range('2012-10-25', periods=3, freq='W')
wkrng
wkrng[0].dayofweek
"""
Explanation: Week anchor indicates end of week
End of explanation
"""
pd.date_range('2005', periods=3, freq='A-JUN')
"""
Explanation: Year anchor indicates year ending month
End of explanation
"""
isinstance(rng, pd.Index)
rng[2:4]
"""
Explanation: DatetimeIndex is a subclass of Index
End of explanation
"""
from numpy.random import randn
s = Series(randn(len(rng)), rng)
s.head()
df = DataFrame(randn(len(rng), 3), rng, ['X', 'Y', 'Z'])
df.head()
"""
Explanation: Use it for Series/DataFrame labelling
End of explanation
"""
s[datetime(2005, 1, 31) : datetime(2006, 12, 31)] #slice end inclusive
df['2005-1-31':'2006-12-31']
"""
Explanation: Label indexing
End of explanation
"""
s['2005':'2006']
"""
Explanation: Partial indexing
End of explanation
"""
df[:2] # slice end exclusive
"""
Explanation: positional indexing still works
End of explanation
"""
elm = rng[0]
elm
isinstance(elm, datetime)
"""
Explanation: Elements of DatetimeIndex
Elements boxed as Timestamp (subclass of datetime.datetime)
End of explanation
"""
elm.nanosecond
"""
Explanation: Why do we need this subclass?
End of explanation
"""
val = rng.values
type(val)
val.dtype
"""
Explanation: Implemented internally using numpy.datetime64 (dtype='M8[ns]')
End of explanation
"""
val[0]
"""
Explanation: Upgrade Numpy to 1.7b to fix repr issue
End of explanation
"""
rng.asobject.values[0]
"""
Explanation: Or use DatetimeIndex.asobject for workaround
End of explanation
"""
rng.asobject
rng.to_pydatetime()
rng.to_pydatetime()[0]
"""
Explanation: Other views
End of explanation
"""
type(rng.asi8)
rng.asi8.dtype
rng.asi8[0]
"""
Explanation: Integer representation
End of explanation
"""
s.index.freqstr
s.resample('30D').head(10)
s.resample('30D', fill_method='ffill').head(10)
"""
Explanation: More fun with resampling and asfreq
End of explanation
"""
s.ix[:3].resample('W')
s.ix[:3].resample('W', fill_method='ffill')
"""
Explanation: Upsampling
End of explanation
"""
s.asfreq('Q').head()
s.resample('Q', 'last').head()
"""
Explanation: asfreq
End of explanation
"""
s.resample('Q').head()
s.ix[3:6].mean()
s.resample('Q', closed='left').head()
s.ix[2:5].mean()
"""
Explanation: closed: 'left' or 'right' bin edge is closed (default is 'right')
End of explanation
"""
s.resample('Q').head()
s.resample('Q', label='left').head()
"""
Explanation: label: label the bin with 'left' or 'right' edge (default is 'right')
End of explanation
"""
s.resample('Q', label='left', loffset='-1D').head()
"""
Explanation: loffset: shift the result index
End of explanation
"""
rng.tz
d = rng[0]
d
d.tz
localized = rng.tz_localize('US/Eastern')
"""
Explanation: Time zones
Localization
End of explanation
"""
localized[0]
localized.asi8[0]
rng.asi8[0]
d_utc = d.tz_localize('UTC')
d_utc
d_utc.tz_localize('US/Eastern')
"""
Explanation: Localization assumes naive time is local (and not UTC)
End of explanation
"""
localized.tz_convert('UTC')
d_ny = d_utc.tz_convert('US/Eastern')
d_ny
rng.tz_convert('US/Eastern')
"""
Explanation: TZ conversions
End of explanation
"""
p = pd.Period('2005', 'A')
p
pd.Period('2006Q1', 'Q-MAR')
pd.Period('2007-1-1', 'B')
"""
Explanation: Period representation
A lot of time series data is better represented as intervals of time rather than points in time.
This is represented in pandas as Period and PeriodIndex
Creating periods
End of explanation
"""
pd.Period('2005', 'AS')
"""
Explanation: No xxx-start frequencies
End of explanation
"""
pd.period_range('2005', '2012', freq='A')
prng = pd.period_range('2005', periods=7, freq='A')
prng
"""
Explanation: PeriodRange
End of explanation
"""
p
p.to_timestamp()
p.to_timestamp('M', 's')
p.to_timestamp('M', 'e')
prng.to_timestamp(how='e')
prng.to_timestamp('M', 'e')
rng
rng.to_period()
rng.to_period('D')
"""
Explanation: Converting between representations
End of explanation
"""
p
p.end_time
datetime(2005, 12, 31, 10, 0, 0) < p.end_time # WAT?!
"""
Explanation: Bugggg
End of explanation
"""
|
peastman/deepchem | examples/tutorials/Creating_Models_with_TensorFlow_and_PyTorch.ipynb | mit | !pip install --pre deepchem
"""
Explanation: Creating Models with TensorFlow and PyTorch
In the tutorials so far, we have used standard models provided by DeepChem. This is fine for many applications, but sooner or later you will want to create an entirely new model with an architecture you define yourself. DeepChem provides integration with both TensorFlow (Keras) and PyTorch, so you can use it with models from either of these frameworks.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
End of explanation
"""
import deepchem as dc
import tensorflow as tf
keras_model = tf.keras.Sequential([
tf.keras.layers.Dense(1000, activation='relu'),
tf.keras.layers.Dropout(rate=0.5),
tf.keras.layers.Dense(1)
])
model = dc.models.KerasModel(keras_model, dc.models.losses.L2Loss())
"""
Explanation: There are actually two different approaches you can take to using TensorFlow or PyTorch models with DeepChem. It depends on whether you want to use TensorFlow/PyTorch APIs or DeepChem APIs for training and evaluating your model. For the former case, DeepChem's Dataset class has methods for easily adapting it to use with other frameworks. make_tf_dataset() returns a tensorflow.data.Dataset object that iterates over the data. make_pytorch_dataset() returns a torch.utils.data.IterableDataset that iterates over the data. This lets you use DeepChem's datasets, loaders, featurizers, transformers, splitters, etc. and easily integrate them into your existing TensorFlow or PyTorch code.
But DeepChem also provides many other useful features. The other approach, which lets you use those features, is to wrap your model in a DeepChem Model object. Let's look at how to do that.
KerasModel
KerasModel is a subclass of DeepChem's Model class. It acts as a wrapper around a tensorflow.keras.Model. Let's see an example of using it. For this example, we create a simple sequential model consisting of two dense layers.
End of explanation
"""
tasks, datasets, transformers = dc.molnet.load_delaney(featurizer='ECFP', splitter='random')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=50)
metric = dc.metrics.Metric(dc.metrics.pearson_r2_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
"""
Explanation: For this example, we used the Keras Sequential class. Our model consists of a dense layer with ReLU activation, 50% dropout to provide regularization, and a final layer that produces a scalar output. We also need to specify the loss function to use when training the model, in this case L<sub>2</sub> loss. We can now train and evaluate the model exactly as we would with any other DeepChem model. For example, let's load the Delaney solubility dataset. How does our model do at predicting the solubilities of molecules based on their extended-connectivity fingerprints (ECFPs)?
End of explanation
"""
import torch
pytorch_model = torch.nn.Sequential(
torch.nn.Linear(1024, 1000),
torch.nn.ReLU(),
torch.nn.Dropout(0.5),
torch.nn.Linear(1000, 1)
)
model = dc.models.TorchModel(pytorch_model, dc.models.losses.L2Loss())
model.fit(train_dataset, nb_epoch=50)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
"""
Explanation: TorchModel
TorchModel works just like KerasModel, except it wraps a torch.nn.Module. Let's use PyTorch to create another model just like the previous one and train it on the same data.
End of explanation
"""
class ClassificationModel(tf.keras.Model):
def __init__(self):
super(ClassificationModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(1000, activation='relu')
self.dense2 = tf.keras.layers.Dense(1)
def call(self, inputs, training=False):
y = self.dense1(inputs)
if training:
y = tf.nn.dropout(y, 0.5)
logits = self.dense2(y)
output = tf.nn.sigmoid(logits)
return output, logits
keras_model = ClassificationModel()
output_types = ['prediction', 'loss']
model = dc.models.KerasModel(keras_model, dc.models.losses.SigmoidCrossEntropy(), output_types=output_types)
"""
Explanation: Computing Losses
Now let's see a more advanced example. In the above models, the loss was computed directly from the model's output. Often that is fine, but not always. Consider a classification model that outputs a probability distribution. While it is possible to compute the loss from the probabilities, it is more numerically stable to compute it from the logits.
To do this, we create a model that returns multiple outputs, both probabilities and logits. KerasModel and TorchModel let you specify a list of "output types". If a particular output has type 'prediction', that means it is a normal output that should be returned when you call predict(). If it has type 'loss', that means it should be passed to the loss function in place of the normal outputs.
Sequential models do not allow multiple outputs, so instead we use a subclassing style model.
End of explanation
"""
tasks, datasets, transformers = dc.molnet.load_bace_classification(featurizer='ECFP', splitter='scaffold')
train_dataset, valid_dataset, test_dataset = datasets
model.fit(train_dataset, nb_epoch=100)
metric = dc.metrics.Metric(dc.metrics.roc_auc_score)
print('training set score:', model.evaluate(train_dataset, [metric]))
print('test set score:', model.evaluate(test_dataset, [metric]))
"""
Explanation: We can train our model on the BACE dataset. This is a binary classification task that tries to predict whether a molecule will inhibit the enzyme BACE-1.
End of explanation
"""
|
llooker/public-datasets-pipelines | samples/tutorial.ipynb | apache-2.0 | %%capture
# Installing the required libraries:
!pip install matplotlib pandas scikit-learn tensorflow pyarrow tqdm
!pip install google-cloud-bigquery google-cloud-bigquery-storage
!pip install flake8 pycodestyle pycodestyle_magic
# Python Builtin Libraries
from datetime import datetime
# Third Party Libraries
from google.cloud import bigquery
# Configurations
%matplotlib inline
"""
Explanation: Overview
Add a brief description of this tutorial here.
End of explanation
"""
try:
from google.colab import auth
print("Authenticating in Colab")
auth.authenticate_user()
print("Authenticated")
except: # noqa
print("This notebook is not running on Colab.")
print("Please make sure to follow the authentication steps.")
"""
Explanation: Authentication
In order to run this tutorial successfully, we need to be authenticated first.
Depending on where we are running this notebook, the authentication steps may vary:
| Runner | Authentiction Steps |
| ----------- | ----------- |
| Local Computer | Use a service account, or run the following command: <br><br>gcloud auth login |
| Colab | Run the following python code and follow the instructions: <br><br>from google.colab import auth <br> auth.authenticate_user() |
| Vertext AI (Workbench) | Authentication is provided by Workbench |
End of explanation
"""
# ENTER THE GCP PROJECT HERE
gcp_project = "YOUR-GCP-PROJECT"
print(f"gcp_project is set to {gcp_project}")
def helper_function():
"""
Add a description about what this function does.
"""
return None
"""
Explanation: Configurations
Let's make sure we enter the name of our GCP project in the next cell.
End of explanation
"""
query = """
SELECT
created_date, category, complaint_type, neighborhood, latitude, longitude
FROM
`bigquery-public-data.san_francisco_311.311_service_requests`
LIMIT 1000;
"""
bqclient = bigquery.Client(project=gcp_project)
dataframe = bqclient.query(query).result().to_dataframe()
"""
Explanation: Data Preparation
Query the Data
End of explanation
"""
print(dataframe.shape)
dataframe.head()
"""
Explanation: Check the Dataframe
End of explanation
"""
# Convert the datetime to date
dataframe['created_date'] = dataframe['created_date'].apply(datetime.date)
"""
Explanation: Process the Dataframe
End of explanation
"""
|
paulmorio/grusData | basics/SupportVectorMachines.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
"""
Explanation: Support Vector Machines
Support vector machines (SVMs) are a particularly powerful and flexible class of supervised algorithms for both classification and regression. In this section, we will develop the intuition behind support vector machines and their use in classification problems.
We begin with the standard imports:
End of explanation
"""
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
"""
Explanation: Motivating Support Vector Machines
As part of our disussion of Bayesian classification (see In Depth: Naive Bayes Classification), we learned a simple model describing the distribution of each underlying class, and used these generative models to probabilistically determine labels for new points. That was an example of generative classification; here we will consider instead discriminative classification: rather than modeling each class, we simply find a line or curve (in two dimensions) or manifold (in multiple dimensions) that divides the classes from each other.
As an example of this, consider the simple case of a classification task, in which the two classes of points are well separated:
End of explanation
"""
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.plot([0.6], [2.1], 'x', color='red', markeredgewidth=2, markersize=10)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
"""
Explanation: A linear discriminative classifier would attempt to draw a straight line separating the two sets of data, and thereby create a model for classification. For two dimensional data like that shown here, this is a task we could do by hand. But immediately we see a problem: there is more than one possible dividing line that can perfectly discriminate between the two classes!
We can draw them as follows:
End of explanation
"""
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
yfit = m * xfit + b
plt.plot(xfit, yfit, '-k')
plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
"""
Explanation: These are three very different separators which, nevertheless, perfectly discriminate between these samples. Depending on which you choose, a new data point (e.g., the one marked by the "X" in this plot) will be assigned a different label! Evidently our simple intuition of "drawing a line between classes" is not enough, and we need to think a bit deeper.
Support Vector Machines: Maximizing the Margin
Support vector machines offer one way to improve on this. The intuition is this: rather than simply drawing a zero-width line between the classes, we can draw around each line a margin of some width, up to the nearest point. Here is an example of how this might look:
End of explanation
"""
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
"""
Explanation: In support vector machines, the line that maximizes this margin is the one we will choose as the optimal model. Support vector machines are an example of such a maximum margin estimator.
Fitting the Support Vector Machine
Let's see the result of an actual fit to this data: we will use Scikit-Learn's support vector classifier to train an SVM model on this data. For the time being, we will use a linear kernel and set the C parameter to a very large number (we'll discuss the meaning of these in more depth momentarily).
End of explanation
"""
def plot_svc_decision_function(model, ax=None, plot_support=True):
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model);
"""
Explanation: To better visualize what is going on, let's make a function that plots the decision function for us.
End of explanation
"""
model.support_vectors_
"""
Explanation: This is the dividing line that maximizes the margin between the two sets of points. Notice that a few of the training points just touch the margin: they are indicated by the black circles in this figure. These points are the pivotal elements of this fit, and are known as the support vectors, and give the algorithm its name. In Scikit-Learn, the identity of these points are stored in the support_vectors_ attribute of the classifier:
End of explanation
"""
def plot_svm(N=10, ax=None):
X, y = make_blobs(n_samples=200, centers=2,
random_state=0, cluster_std=0.60)
X = X[:N]
y = y[:N]
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
ax = ax or plt.gca()
ax.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
ax.set_xlim(-1, 4)
ax.set_ylim(-1, 6)
plot_svc_decision_function(model, ax)
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, N in zip(ax, [60, 120]):
plot_svm(N, axi)
axi.set_title('N = {0}'.format(N))
from ipywidgets import interact, fixed
interact(plot_svm, N=[10, 200], ax=fixed(None));
"""
Explanation: A key to this classifier's success is that for the fit, only the position of the support vectors matter; any points further from the margin which are on the correct side do not modify the fit! Technically, this is because these points do not contribute to the loss function used to fit the model, so their position and number do not matter so long as they do not cross the margin.
We can see this, for example, if we plot the model learned from the first 60 points and first 120 points of this dataset:
End of explanation
"""
from sklearn.datasets import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf, plot_support=False);
"""
Explanation: Beyond Linear Boundaries: Kernel SVMs
Where SVM becomes extremely powerful is when it is combined with kernels. We have seen a version of kernels before, in the basis function regressions of In Depth: Linear Regression. There we projected our data into higher-dimensional space defined by polynomials and Gaussian basis functions, and thereby were able to fit for nonlinear relationships with a linear classifier.
In SVM models, we can use a version of the same idea. To motivate the need for kernels, let's look at some data that is not linearly separable:
End of explanation
"""
r = np.exp(-(X ** 2).sum(1))
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30, X=X, y=y):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='autumn')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180),
         X=fixed(X), y=fixed(y));
"""
Explanation: It is clear that no linear discrimination will ever be able to separate this data. But we can draw a lesson from the basis function regressions in In Depth: Linear Regression, and think about how we might project the data into a higher dimension such that a linear separator would be sufficient. For example, one simple projection we could use would be to compute a radial basis function centered on the middle clump:
End of explanation
"""
clf = SVC(kernel='rbf', C=1E6)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
"""
Explanation: We can see that with this additional dimension, the data becomes trivially linearly separable, by drawing a separating plane at, say, r=0.7.
Here we had to choose and carefully tune our projection: if we had not centered our radial basis function in the right location, we would not have seen such clean, linearly separable results. In general, the need to make such a choice is a problem: we would like to somehow automatically find the best basis functions to use.
One strategy to this end is to compute a basis function centered at every point in the dataset, and let the SVM algorithm sift through the results. This type of basis function transformation is known as a kernel transformation, as it is based on a similarity relationship (or kernel) between each pair of points.
A potential problem with this strategy—projecting $N$ points into $N$ dimensions—is that it might become very computationally intensive as $N$ grows large. However, because of a neat little procedure known as the kernel trick, a fit on kernel-transformed data can be done implicitly—that is, without ever building the full $N$-dimensional representation of the kernel projection! This kernel trick is built into the SVM, and is one of the reasons the method is so powerful.
In Scikit-Learn, we can apply kernelized SVM simply by changing our linear kernel to an RBF (radial basis function) kernel, using the kernel model hyperparameter:
End of explanation
"""
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=1.2)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
"""
Explanation: Using this kernelized support vector machine, we learn a suitable nonlinear decision boundary. This kernel transformation strategy is used often in machine learning to turn fast linear methods into fast nonlinear methods, especially for models in which the kernel trick can be used.
Tuning the SVM: Softening Margins
Our discussion thus far has centered around very clean datasets, in which a perfect decision boundary exists. But what if your data has some amount of overlap? For example, you may have data like this:
End of explanation
"""
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=0.8)
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, C in zip(ax, [10.0, 0.1]):
model = SVC(kernel='linear', C=C).fit(X, y)
axi.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model, axi)
axi.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
axi.set_title('C = {0:.1f}'.format(C), size=14)
"""
Explanation: To handle this case, the SVM implementation has a bit of a fudge-factor which "softens" the margin: that is, it allows some of the points to creep into the margin if that allows a better fit. The hardness of the margin is controlled by a tuning parameter, most often known as $C$. For very large $C$, the margin is hard, and points cannot lie in it. For smaller $C$, the margin is softer, and can grow to encompass some points.
The plot shown below gives a visual picture of how a changing $C$ parameter affects the final fit, via the softening of the margin:
End of explanation
"""
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
"""
Explanation: The optimal value of the $C$ parameter will depend on your dataset, and should be tuned using cross-validation or a similar procedure (refer back to Hyperparameters and Model Validation).
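A minimal sketch of that tuning step (not part of the original text; the candidate C values and fold count are arbitrary choices) could use cross_val_score on the data:
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in newer releases
for C in [0.1, 1.0, 10.0]:
    scores = cross_val_score(SVC(kernel='linear', C=C), X, y, cv=5)
    print(C, round(scores.mean(), 3))
You would then keep the C value with the best cross-validated score, or hand the whole search over to a utility like GridSearchCV, as we do in the next example.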
Example: Face Recognition
As an example of support vector machines in action, let's take a look at the facial recognition problem. We will use the Labeled Faces in the Wild dataset, which consists of several thousand collated photos of various public figures. A fetcher for the dataset is built into Scikit-Learn:
End of explanation
"""
fig, ax = plt.subplots(3, 5)
for i, axi in enumerate(ax.flat):
axi.imshow(faces.images[i], cmap='bone')
axi.set(xticks=[], yticks=[],
xlabel=faces.target_names[faces.target[i]])
"""
Explanation: Let's plot a few of the faces to see what we are working with.
End of explanation
"""
from sklearn.svm import SVC
from sklearn.decomposition import RandomizedPCA
from sklearn.pipeline import make_pipeline
pca = RandomizedPCA(n_components=150, whiten=True, random_state=42)
svc = SVC(kernel='rbf', class_weight='balanced')
model = make_pipeline(pca, svc)
"""
Explanation: Each image contains [62×47] or nearly 3,000 pixels. We could proceed by simply using each pixel value as a feature, but often it is more effective to use some sort of preprocessor to extract more meaningful features; here we will use a principal component analysis (see In Depth: Principal Component Analysis) to extract 150 fundamental components to feed into our support vector machine classifier. We can do this most straightforwardly by packaging the preprocessor and the classifier into a single pipeline:
End of explanation
"""
from sklearn.cross_validation import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(faces.data, faces.target,
random_state=42)
"""
Explanation: For the sake of testing our classifier output, we will split the data into a training and testing set:
End of explanation
"""
from sklearn.grid_search import GridSearchCV
param_grid = {'svc__C': [1, 5, 10, 50],
'svc__gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid = GridSearchCV(model, param_grid)
%time grid.fit(Xtrain, ytrain)
print(grid.best_params_)
"""
Explanation: Finally, we can use a grid search cross-validation to explore combinations of parameters. Here we will adjust C (which controls the margin hardness) and gamma (which controls the size of the radial basis function kernel), and determine the best model:
End of explanation
"""
model = grid.best_estimator_
yfit = model.predict(Xtest)
"""
Explanation: The optimal values fall toward the middle of our grid; if they fell at the edges, we would want to expand the grid to make sure we have found the true optimum.
Now with this cross-validated model, we can predict the labels for the test data, which the model has not yet seen:
End of explanation
"""
fig, ax = plt.subplots(4, 6)
for i, axi in enumerate(ax.flat):
axi.imshow(Xtest[i].reshape(62, 47), cmap='bone')
axi.set(xticks=[], yticks=[])
axi.set_ylabel(faces.target_names[yfit[i]].split()[-1],
color='black' if yfit[i] == ytest[i] else 'red')
fig.suptitle('Predicted Names; Incorrect Labels in Red', size=14);
"""
Explanation: Let's take a look at a few of the test images along with their predicted values:
End of explanation
"""
from sklearn.metrics import classification_report
print(classification_report(ytest, yfit,
target_names=faces.target_names))
"""
Explanation: Out of this small sample, our optimal estimator mislabeled only a single face (Bush’s face in the bottom row was mislabeled as Blair). We can get a better sense of our estimator's performance using the classification report, which lists recovery statistics label by label:
End of explanation
"""
from sklearn.metrics import confusion_matrix
mat = confusion_matrix(ytest, yfit)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=faces.target_names,
yticklabels=faces.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
"""
Explanation: We might also display the confusion matrix between these classes:
End of explanation
"""
|
walterst/qiime | examples/ipynb/Fungal-ITS-analysis.ipynb | gpl-2.0 | !(wget ftp://ftp.microbio.me/qiime/tutorial_files/its-soils-tutorial.tgz || curl -O ftp://ftp.microbio.me/qiime/tutorial_files/its-soils-tutorial.tgz)
!(wget ftp://ftp.microbio.me/qiime/tutorial_files/its_12_11_otus.tgz || curl -O ftp://ftp.microbio.me/qiime/tutorial_files/its_12_11_otus.tgz)
"""
Explanation: Fungal ITS QIIME analysis tutorial
In this tutorial we illustrate steps for analyzing fungal ITS amplicon data using the QIIME/UNITE reference OTUs (alpha version 12_11) to compare the composition of 9 soil communities using open-reference OTU picking. More recent ITS reference databases based on UNITE are available on the QIIME resources page. The steps in this tutorial can be generalized to work with other marker genes, such as 18S.
We recommend working through the Illumina Overview Tutorial before working through this tutorial, as it provides more detailed annotation of the steps in a QIIME analysis. This tutorial is intended to highlight the differences that are necessary to work with a database other than QIIME's default reference database. For ITS, we won't build a phylogenetic tree and therefore use nonphylogenetic diversity metrics. Instructions are included for how to build a phylogenetic tree if you're sequencing a non-16S, phylogenetically-informative marker gene (e.g., 18S).
First, we obtain the tutorial data and reference database:
End of explanation
"""
!tar -xzf its-soils-tutorial.tgz
!tar -xzf its_12_11_otus.tgz
!gunzip ./its_12_11_otus/rep_set/97_otus.fasta.gz
!gunzip ./its_12_11_otus/taxonomy/97_otu_taxonomy.txt.gz
"""
Explanation: Now unzip these files.
End of explanation
"""
from IPython.display import FileLink, FileLinks
FileLinks('its-soils-tutorial')
"""
Explanation: You can then view the files in each of these directories by passing the directory name to the FileLinks function.
End of explanation
"""
!cat its-soils-tutorial/params.txt
"""
Explanation: The params.txt file modifies some of the default parameters of this analysis. You can review those by clicking the link or by catting the file.
End of explanation
"""
!pick_open_reference_otus.py -i its-soils-tutorial/seqs.fna -r its_12_11_otus/rep_set/97_otus.fasta -o otus/ -p its-soils-tutorial/params.txt --suppress_align_and_tree
"""
Explanation: The parameters that differentiate ITS analysis from analysis of other amplicons are the two assign_taxonomy parameters, which are pointing to the reference collection that we just downloaded.
We're now ready to run the pick_open_reference_otus.py workflow. Discussion of these methods can be found in Rideout et al. (2014).
Note that we pass -r to specify a non-default reference database. We're also passing --suppress_align_and_tree because we know that trees generated from ITS sequences are generally not phylogenetically informative.
End of explanation
"""
FileLink('otus/index.html')
"""
Explanation: Note: If you would like to build a phylogenetic tree (e.g., if you're using a phylogentically-informative marker gene such as 18S instead of ITS), you should remove the --suppress_align_and_tree parameter from the above command and add the following lines to the parameters file:
align_seqs:template_fp <path to reference alignment>
filter_alignment:suppress_lane_mask_filter True
filter_alignment:entropy_threshold 0.10
After that completes (it will take a few minutes) we'll have the OTU table with taxonomy. You can review all of the files that are created by passing the path to the index.html file in the output directory to the FileLink function.
End of explanation
"""
!biom summarize-table -i otus/otu_table_mc2_w_tax.biom
"""
Explanation: You can then pass the OTU table to biom summarize-table to view a summary of the information in the OTU table.
End of explanation
"""
!core_diversity_analyses.py -i otus/otu_table_mc2_w_tax.biom -o cdout/ -m its-soils-tutorial/map.txt -e 353 --nonphylogenetic_diversity
"""
Explanation: Next, we run several core diversity analyses, including alpha/beta diversity and taxonomic summarization. We will use an even sampling depth of 353 based on the results of biom summarize-table above. Since we did not built a phylogenetic tree, we'll pass the --nonphylogenetic_diversity flag, which specifies to compute Bray-Curtis distances instead of UniFrac distances, and to use only nonphylogenetic alpha diversity metrics.
End of explanation
"""
FileLink('cdout/index.html')
"""
Explanation: You may see a warning issued above; this is safe to ignore.
Note: If you built a phylogenetic tree, you should pass the path to that tree via -t and not pass --nonphylogenetic_diversity.
You can view the output of core_diversity_analyses.py using FileLink.
End of explanation
"""
FileLinks("its-soils-tutorial/precomputed-output/")
"""
Explanation: Precomputed results
In case you're having trouble running the steps above, for example because of a broken QIIME installation, all of the output generated above has been precomputed. You can access this by running the cell below.
End of explanation
"""
|
AaronCWong/phys202-2015-work | assignments/midterm/InteractEx06.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import Image
from IPython.html.widgets import interact, interactive, fixed
"""
Explanation: Interact Exercise 6
Imports
Put the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.
End of explanation
"""
Image('fermidist.png')
"""
Explanation: Exploring the Fermi distribution
In quantum statistics, the Fermi-Dirac distribution is related to the probability that a particle will be in a quantum state with energy $\epsilon$. The equation for the distribution $F(\epsilon)$ is:
End of explanation
"""
def fermidist(energy, mu, kT):
    # Fermi-Dirac distribution; np.exp handles both scalars and arrays,
    # so no loops or type checks are needed.
    return 1.0 / (np.exp((energy - mu) / kT) + 1.0)
assert np.allclose(fermidist(0.5, 1.0, 10.0), 0.51249739648421033)
assert np.allclose(fermidist(np.linspace(0.0,1.0,10), 1.0, 10.0),
np.array([ 0.52497919, 0.5222076 , 0.51943465, 0.5166605 , 0.51388532,
0.51110928, 0.50833256, 0.50555533, 0.50277775, 0.5 ]))
"""
Explanation: In this equation:
$\epsilon$ is the single particle energy.
$\mu$ is the chemical potential, which is related to the total number of particles.
$k$ is the Boltzmann constant.
$T$ is the temperature in Kelvin.
In the cell below, typeset this equation using LaTeX:
\begin{equation}
F(\epsilon) = \frac{1}{e^{(\epsilon-\mu)/kT}+1}
\end{equation}
Define a function fermidist(energy, mu, kT) that computes the distribution function for a given value of energy, chemical potential mu and temperature kT. Note here, kT is a single variable with units of energy. Make sure your function works with an array and don't use any for or while loops in your code.
End of explanation
"""
def plot_fermidist(mu, kT):
    # Plot F(energy) over [0, 10] for the given mu and kT
    energy = np.linspace(0.0, 10.0, 100)
    plt.plot(energy, fermidist(energy, mu, kT), lw=2, color='steelblue')
    plt.xlabel(r'Energy $\epsilon$')
    plt.ylabel(r'$F(\epsilon)$')
    plt.title('Fermi distribution vs. energy')
    plt.xlim(0, 10.0)
    plt.ylim(0, 1.05)
    plt.grid(True)
    axis = plt.gca()
    axis.spines['top'].set_visible(False)
    axis.spines['right'].set_visible(False)
    axis.get_xaxis().tick_bottom()
    axis.get_yaxis().tick_left()
plot_fermidist(4.0, 1.0)
assert True # leave this for grading the plot_fermidist function
"""
Explanation: Write a function plot_fermidist(mu, kT) that plots the Fermi distribution $F(\epsilon)$ as a function of $\epsilon$ as a line plot for the parameters mu and kT.
Use energies over the range $[0,10.0]$ and a suitable number of points.
Choose an appropriate x and y limit for your visualization.
Label your x and y axis and the overall visualization.
Customize your plot in 3 other ways to make it effective and beautiful.
End of explanation
"""
interact(plot_fermidist, mu=(0.0, 5.0, 0.1), kT=(0.1, 10.0, 0.1));
"""
Explanation: Use interact with plot_fermidist to explore the distribution:
For mu use a floating point slider over the range $[0.0,5.0]$.
for kT use a floating point slider over the range $[0.1,10.0]$.
End of explanation
"""
|
mne-tools/mne-tools.github.io | dev/_downloads/00ac060e49528fd74fda09b97366af98/3d_to_2d.ipynb | bsd-3-clause | # Authors: Christopher Holdgraf <[email protected]>
# Alex Rockhill <[email protected]>
#
# License: BSD-3-Clause
from mne.io.fiff.raw import read_raw_fif
import numpy as np
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage # noqa: F401
from mne.viz import (plot_alignment, snapshot_brain_montage, set_3d_view)
misc_path = mne.datasets.misc.data_path()
ecog_data_fname = op.join(misc_path, 'ecog', 'sample_ecog_ieeg.fif')
subjects_dir = op.join(misc_path, 'ecog')
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: How to convert 3D electrode positions to a 2D image
Sometimes we want to convert a 3D representation of electrodes into a 2D
image. For example, if we are using electrocorticography it is common to create
scatterplots on top of a brain, with each point representing an electrode.
In this example, we'll show two ways of doing this in MNE-Python. First,
if we have the 3D locations of each electrode then we can use PyVista to
take a snapshot of a view of the brain. If we do not have these 3D locations,
and only have a 2D image of the electrodes on the brain, we can use the
:class:mne.viz.ClickableImage class to choose our own electrode positions
on the image.
End of explanation
"""
raw = read_raw_fif(ecog_data_fname)
raw.pick_channels([f'G{i}' for i in range(1, 257)]) # pick just one grid
# Since we loaded in the ecog data from FIF, the coordinates
# are in 'head' space, but we actually want them in 'mri' space.
# So we will apply the head->mri transform that was used when
# generating the dataset (the estimated head->mri transform).
montage = raw.get_montage()
trans = mne.coreg.estimate_head_mri_t('sample_ecog', subjects_dir)
montage.apply_trans(trans)
"""
Explanation: Load data
First we will load a sample ECoG dataset which we'll use for generating
a 2D snapshot.
End of explanation
"""
fig = plot_alignment(raw.info, trans=trans, subject='sample_ecog',
subjects_dir=subjects_dir, surfaces=dict(pial=0.9))
set_3d_view(figure=fig, azimuth=20, elevation=80)
xy, im = snapshot_brain_montage(fig, montage)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in raw.ch_names])
# Compute beta power to visualize
raw.load_data()
beta_power = raw.filter(20, 30).apply_hilbert(envelope=True).get_data()
beta_power = beta_power.max(axis=1) # take maximum over time
# This allows us to use matplotlib to create arbitrary 2d scatterplots
fig2, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
cmap = ax.scatter(*xy_pts.T, c=beta_power, s=100, cmap='coolwarm')
cbar = fig2.colorbar(cmap)
cbar.ax.set_ylabel('Beta Power')
ax.set_axis_off()
# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage
"""
Explanation: Project 3D electrodes to a 2D snapshot
Because we have the 3D location of each electrode, we can use the
:func:mne.viz.snapshot_brain_montage function to return a 2D image along
with the electrode positions on that image. We use this in conjunction with
:func:mne.viz.plot_alignment, which visualizes electrode positions.
End of explanation
"""
# This code opens the image so you can click on it. Commented out
# because we've stored the clicks as a layout file already.
# # The click coordinates are stored as a list of tuples
# im = plt.imread('./brain.png')
# click = ClickableImage(im)
# click.plot_clicks()
# # Generate a layout from our clicks and normalize by the image
# print('Generating and saving layout...')
# lt = click.to_layout()
# lt.save(op.join(layout_path, layout_name)) # save if we want
# # We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
x = lt.pos[:, 0] * float(im.shape[1])
y = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position
fig, ax = plt.subplots()
ax.imshow(im)
ax.scatter(x, y, s=80, color='r')
fig.tight_layout()
ax.set_axis_off()
"""
Explanation: Manually creating 2D electrode positions
If we don't have the 3D electrode positions then we can still create a
2D representation of the electrodes. Assuming that you can see the electrodes
on the 2D image, we can use :class:mne.viz.ClickableImage to open the image
interactively. You can click points on the image and the x/y coordinate will
be stored.
We'll open an image file, then use ClickableImage to
return 2D locations of mouse clicks (or load a file already created).
Then, we'll return these xy positions as a layout for use with plotting topo
maps.
End of explanation
"""
|
GuillaumeDec/machine-learning | Deep Neural Network Application Image Classification/Deep+Neural+Network+-+Application+v3.ipynb | gpl-3.0 | import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v2 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
"""
Explanation: Deep Neural Network for Image Classification: Application
When you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course!
You will use the functions you implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation.
After this assignment you will be able to:
- Build and apply a deep neural network to supervised learning.
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- PIL and scipy are used here to test your model with your own picture at the end.
- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
"""
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
"""
Explanation: 2 - Dataset
You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform a better!
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labelled as cat (1) or non-cat (0)
- a test set of m_test images labelled as cat and non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).
Let's get more familiar with the dataset. Load the data by running the cell below.
End of explanation
"""
# Example of a picture
index = 13
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
"""
Explanation: The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
End of explanation
"""
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
"""
Explanation: As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.
<img src="images/imvectorkiank.png" style="width:450px;height:300px;">
<caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
End of explanation
"""
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(X, W1, b1, "relu")
A2, cache2 = linear_activation_forward(A1, W2, b2, "sigmoid")
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(A2, Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: $12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector.
3 - Architecture of your model
Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.
You will build two different models:
- A 2-layer neural network
- An L-layer deep neural network
You will then compare the performance of these models, and also try out different values for $L$.
Let's look at the two architectures.
3.1 - 2-layer neural network
<img src="images/2layerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption>
<u>Detailed Architecture of figure 2</u>:
- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$.
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.
- You then repeat the same process.
- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias).
- Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat.
3.2 - L-layer deep neural network
It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation:
<img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;">
<caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption>
<u>Detailed Architecture of figure 3</u>:
- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).
- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.
- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.
- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat.
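To make the dimensions concrete (assuming the usual convention from the helper functions, where $W^{[l]}$ has shape $(n^{[l]}, n^{[l-1]})$ and $b^{[l]}$ has shape $(n^{[l]}, 1)$): with the 5-layer configuration used below, layers_dims = [12288, 20, 7, 5, 1], the weights have shapes $W^{[1]}: (20, 12288)$, $W^{[2]}: (7, 20)$, $W^{[3]}: (5, 7)$ and $W^{[4]}: (1, 5)$, with biases of shapes $(20, 1)$, $(7, 1)$, $(5, 1)$ and $(1, 1)$.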
3.3 - General methodology
As usual you will follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop)
4. Use trained parameters to predict labels
Let's now implement those two models!
4 - Two-layer neural network
Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
"""
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
"""
Explanation: Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
"""
predictions_train = predict(train_x, train_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.6930497356599888 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.6464320953428849 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.048554785628770206 </td>
</tr>
</table>
Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
End of explanation
"""
predictions_test = predict(test_x, test_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 1.0 </td>
</tr>
</table>
End of explanation
"""
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 5-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (number of examples, num_px * num_px * 3)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization.
### START CODE HERE ###
parameters = initialize_parameters_deep(layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X, parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL, Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL, Y, caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(parameters, grads, learning_rate)
### END CODE HERE ###
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Accuracy**</td>
<td> 0.72 </td>
</tr>
</table>
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting.
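As a rough sketch of that idea (this is not part of the assignment; it reuses the deep-network helper functions introduced in the next section, and X_dev/Y_dev stand for an assumed extra split carved out of the training data), early stopping simply keeps the parameters with the lowest held-out cost:
import copy
def train_with_early_stopping(X, Y, X_dev, Y_dev, layers_dims, learning_rate = 0.0075, num_iterations = 2500):
    parameters = initialize_parameters_deep(layers_dims)
    best_cost, best_parameters = np.inf, copy.deepcopy(parameters)
    for i in range(num_iterations):
        AL, caches = L_model_forward(X, parameters)
        grads = L_model_backward(AL, Y, caches)
        parameters = update_parameters(parameters, grads, learning_rate)
        if i % 100 == 0:  # check the held-out cost every 100 iterations
            dev_cost = compute_cost(L_model_forward(X_dev, parameters)[0], Y_dev)
            if dev_cost < best_cost:
                best_cost, best_parameters = dev_cost, copy.deepcopy(parameters)
    return best_parameters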
Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
5 - L-layer Neural Network
Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are:
python
def initialize_parameters_deep(layer_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
End of explanation
"""
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
"""
Explanation: You will now train the model as a 5-layer neural network.
Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
End of explanation
"""
pred_train = predict(train_x, train_y, parameters)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Cost after iteration 0**</td>
<td> 0.771749 </td>
</tr>
<tr>
<td> **Cost after iteration 100**</td>
<td> 0.672053 </td>
</tr>
<tr>
<td> **...**</td>
<td> ... </td>
</tr>
<tr>
<td> **Cost after iteration 2400**</td>
<td> 0.092878 </td>
</tr>
</table>
End of explanation
"""
pred_test = predict(test_x, test_y, parameters)
"""
Explanation: <table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.985645933014
</td>
</tr>
</table>
End of explanation
"""
print_mislabeled_images(classes, test_x, test_y, pred_test)
"""
Explanation: Expected Output:
<table>
<tr>
<td> **Test Accuracy**</td>
<td> 0.8 </td>
</tr>
</table>
Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is good performance for this task. Nice job!
Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course).
6) Results Analysis
First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
End of explanation
"""
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
"""
Explanation: A few types of images the model tends to do poorly on include:
- Cat body in an unusual position
- Cat appears against a background of a similar color
- Unusual cat color and species
- Camera Angle
- Brightness of the picture
- Scale variation (cat is very large or small in image)
7) Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation
"""
|
nslatysheva/data_science_blogging | polished_prediction/scanning_hyperspace.ipynb | gpl-3.0 | import wget
import pandas as pd
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/wine/winequality-red.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=";")
"""
Explanation: Scanning hyperspace: how to tune machine learning models
Introduction
When doing machine learning using Python's scikit-learn library, you can often get reasonable predictive performance by using out-of-the-box settings for your models. However, the payoff can be huge if you invest at least some time into tuning models to your specific problem and dataset. In the previous post, we explored the concepts of overfitting, cross-validation, and the bias-variance tradeoff. These ideas turn out to be central to doing a good job at optimizing the hyperparameters (roughly, the settings) of algorithms. In this post, we will explore the concepts behind hyperparameter optimization and demonstrate the process of tuning and training a random forest classifier.
You'll be working with the famous (well, machine learning famous!) wine dataset, which contains features of different quality wines, like the acidity and sugar content, as well as a quality rating. Our goal is to tune and apply a random forest to these features in order to predict whether a given wine is nice or not.
The steps we'll cover in this blog post can be summarized as follows:
Let's get cracking.
Loading and exploring the dataset
We start off by collecting the dataset. It can be found both online and in our GitHub repository, so we can also just fetch it with wget (note: make sure you first type pip install wget into your terminal since wget is not a preinstalled Python library). This command will download a copy of the dataset to your current working directory.
End of explanation
"""
# Take a peek at the first few columns of the data
first_5_columns = dataset.columns[0:5]
dataset[first_5_columns].head()
"""
Explanation: If you're interested in getting to know the wine dataset graphically, check out a previous post on using the plotly library to make interactive plots of the wine features here.
Let's have a brief look here:
End of explanation
"""
# Examine shape of dataset and the column names
print (dataset.shape)
print (dataset.columns.values)
"""
Explanation: You can examine the dimensions of the dataset and the column names:
End of explanation
"""
# Summarise feature values
dataset.describe()[first_5_columns]
"""
Explanation: So, it looks like you have a dozen features to play with, and just under 1600 data points. Get some summary statistics on the features using describe():
End of explanation
"""
# using a lambda function to bin quality scores
dataset['quality_is_high'] = dataset.quality.apply(lambda x: 1 if x >= 6 else 0)
"""
Explanation: The distribution of the outcome variable quality is a bit funky - the values are mostly 5 and 6 (how would you check this?). This could get a bit irksome later on, so go ahead and recode the quality scores into something more convenient. One idea would be to label wines as being either high quality (e.g. if their score is 6 or higher) or low quality (if the score is 5 or lower). You could encode this with a 1 representing high quality and 0 representing low quality, like so:
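As a quick aside to the question above (this check is not in the original post), you can count the scores directly:
print(dataset.quality.value_counts().sort_index())
which lists how many wines received each rating.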
End of explanation
"""
import numpy as np
# Convert the dataframe to a numpy array and split the
# data into an input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-2].astype(float)
y = npArray[:,-1]
"""
Explanation: Now convert the pandas dataframe into a numpy array and isolate the outcome variable you'd like to predict ('quality_is_high'). This conversion is needed to feed the data into the machine learning pipeline:
End of explanation
"""
from sklearn.cross_validation import train_test_split
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
"""
Explanation: Now that you have the dataset set up, the machine learning can begin. First, you have to split the dataset into a training and test set (see previous post for an explanation of why this is a good idea):
End of explanation
"""
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),4))
"""
Explanation: Setting up a Random Forest
You are now going to try to predict whether a wine is high quality using a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to the theory behind classification trees, bagged trees, and random forests. It's worth a read if you have time.
Briefly, random forests build a collection of classification trees, where each tree tries to classify data points into classes by recursively splitting the data on the features (and feature values) that separate the classes best. Each tree is trained on bootstrapped data, and each bifurcation point is only allowed to 'see' a subset of the available variables when deciding on the best split. So, these two elements of randomness are introduced when constructing each tree, which means that a variety of different trees are built. The random forest then ensembles these base learners together, i.e. it combines these trees into an aggregated model. The end result of all of this is that when you want to classify a new data point, the individual trees each make their individual predictions, and the random forest surveys these opinions and accepts the majority position. This approach often leads to improved accuracy, generalizability, and stability in the predictions.
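As a rough sketch of that voting procedure (this snippet is not from the original post; the number of trees and the seeds are arbitrary), you could mimic a tiny random forest by hand with bootstrapped decision trees:
from sklearn.tree import DecisionTreeClassifier
from sklearn import metrics
votes = []
for seed in range(25):
    # bootstrap sample of the training rows
    rows = np.random.RandomState(seed).choice(len(XTrain), size=len(XTrain), replace=True)
    # each tree is only allowed a random subset of the features at every split
    tree = DecisionTreeClassifier(max_features='sqrt', random_state=seed)
    tree.fit(XTrain[rows], yTrain[rows])
    votes.append(tree.predict(XTest))
# majority vote across the trees (this shortcut works because the labels are 0/1)
majority = (np.vstack(votes).mean(axis=0) >= 0.5).astype(float)
print(round(metrics.accuracy_score(yTest, majority), 4))
The built-in RandomForestClassifier does essentially this, plus a number of refinements, and is what we use below.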
Predicting wine quality with a random forest
Out of the box, scikit's random forest classifier performs reasonably well on the wine dataset:
End of explanation
"""
# Create a default random forest classifier and print its parameters
rf_default = RandomForestClassifier()
print(rf_default.get_params)
"""
Explanation: The model has an overall accuracy around the 0.79 mark (there is some variability in this value - try rerunning the code block several times, or setting different seeds using random_state). This means that 79% of the time, your model is able to predict the right class when given the test data. In other words, it can distinguish pretty well between a good and a bad wine based on their chemical properties.
Next up, you are going to learn how to pick the best values for the hyperparameters of the random forest algorithm in order to get better models with (hopefully!) even higher accuracy than this baseline.
Better modelling through hyperparameter optimization
We've glossed over what a hyperparameter actually is. Let's explore the topic now. Often, when setting out to train a machine learning algorithm on your dataset of interest, you must first specify a number of arguments or hyperparameters (HPs). An HP is just a variable that influences the performance of your model, but isn't directly tuned during model training.
For example, when using the random forest algorithm to do classification, you have to set the value of the hyperparameter n_estimators ahead of time, before training commences. n_estimators controls the number of individual trees in the random forest ensemble. The more the better (with diminishing returns), but more trees come at the expense of longer training time.
As mentioned above, scikit-learn generally provides reasonable hyperparameter default values, such that it is possible to quickly build an e.g. a random forest classifier by simply typing RandomForestClassifier() and then fitting it to your data. We can can get the documentation on what hyperparameter values that the classifier has automatically assumed, but you can also examine models directly using get_params:
End of explanation
"""
# manually specifying some HP values
hp_combinations = [
{"n_estimators": 2, "max_features": None}, # all features are considered
{"n_estimators": 5, "max_features": 'log2'},
{"n_estimators": 9, "max_features": 'sqrt'}
]
"""
Explanation: You can see that n_estimators takes on a default value of 10. Other hyperparameters include max_features, which controls the size of the random selection of features the algorithm is allowed to consider when splitting a node. The default is max_features='auto', where the auto refers to sqrt of the number of all features in classification problems. So for instance, if we have 16 features in total, then trees are restricted to considering only 4 features at each bifurcation point (instead of searching all features for the best split). Other important HPs include the max_depth, which restricts the depth of the trees you grow, and criterion, which dictates how trees calculate the class purity resulting from splits.
As we saw above, the default settings for random forests do a good job, but it is worth trying to improve your learning algorithm's performance further. So how do you know what values to set the hyperparameters to in order to get the best performance from your learning algorithms?
You optimize hyperparameters in exactly the way that you might expect - you try different values and see what works best. However, some care is needed when deciding how exactly to measure if certain values work well, and which strategy to use to systematically explore hyperparameter space.
Tuning your random forest
In order to build the best possible model that does a good job at describing the underlying trends in a dataset, we need to pick the right HP values. The most basic strategy to do this would be just to test different possible values for the HPs and see how the model performs.
Let's try out some random HP values:
End of explanation
"""
# test out different HP combinations
for hp_combn in hp_combinations:
# Train and output accuracies
rf = RandomForestClassifier(
n_estimators=hp_combn["n_estimators"],
max_features=hp_combn["max_features"]
)
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print ('When n_estimators is {} and max_features is {}, test set accuracy is {}'.format(
hp_combn["n_estimators"],
hp_combn["max_features"],
round(metrics.accuracy_score(yTest, rf_predictions),2))
)
"""
Explanation: We can manually write a small loop to test out how well the different combinations of these potential HP values fare (later, you'll see better ways to do this):
End of explanation
"""
import itertools
n_estimators = [2, 5, 9]
max_features = [None, 'log2', 'sqrt']
hp_combinations = list(itertools.product(n_estimators, max_features))
print (hp_combinations)
print ("The number of HP combinations is: {}".format(len(hp_combinations)))
"""
Explanation: Looks like the last combinations of HPs might be doing better. However, manually searching for the best HPs in this way is not efficient, it's a bit random, and liable to miss good combinations. There is however a solution, i.e. grid search.
Grid search
Traditionally and perhaps most intuitively, scanning for good HPs values can be done with grid search (also called parameter sweep). This strategy exhaustively searches through some manually prespecified HP values and reports the best option. It is common to try to optimize multiple HPs simultaneously - grid search tries each combination of HPs in turn and reports the best one, hence the name 'grid'. This is a more convenient and complete way of searching through hyperparameter space than manually specifying combinations.
For instance, you could build a grid like this:
Using these commands (see also itertools.product for getting all combinations of variables):
End of explanation
"""
from sklearn.grid_search import GridSearchCV
# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = list(np.arange(10, 60, 20))
max_features = [None, 'sqrt', 'log2'] # None means that all of the features are considered at each split
hyperparameters = {
'n_estimators': n_estimators,
'max_features': max_features
}
# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
"""
Explanation: However there is a massive pitfall here! Scanning through all possible combinations of HPs to build models and evaluating them on the test set will certainly turn up one combination of parameters that does best, but this result might not generalise well. This approach is less misguided than trying to optimize models by evaluating them on the training set, but it is still not ideal. The problem is that during repeated evaluation on the test dataset, knowledge of the test set can leak into the model building phase. You are at risk of inadvertently learning something about the test set, and hence are susceptible to overfitting. How does one get around these issues?
Grid search with k-fold cross validation for hyperparameter tuning
Enter k-fold cross-validation, which is a handy technique for measuring a model's performance using only the training set. k-fold CV is a general method (see an explanation here), and is not specific to hyperparameter optimization, but is very useful for that purpose. We simply try out different HP values, get several different estimates of model performance for each HP value (or combination of HP values), and choose the model with the lowest CV error. With 10-fold CV, the process looks like this:
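As a rough sketch of that procedure for a single hyperparameter (this snippet is not in the original post and the candidate values are arbitrary), you could ask for a 10-fold CV accuracy estimate per value and keep the best one:
from sklearn.cross_validation import cross_val_score  # sklearn.model_selection in newer releases
for n in [10, 30, 50]:
    cv_scores = cross_val_score(RandomForestClassifier(n_estimators=n), XTrain, yTrain, cv=10)
    print(n, round(cv_scores.mean(), 4))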
In the context of HP optimization, we perform k-fold cross validation together with grid search to get a more robust estimate of the model performance associated with specific HP values. The combination of grid search and k-fold cross validation is very popular for finding the models with good performance and generalisability. So, in HP optimisation we are actually trying to do two things:
Find the combination of HPs that improves model performance (e.g. accuracy)
Make sure that this choice of HPs will generalize well to new data
The CV is there to address the second concern. scikit-learn makes grid search with k-fold CV very easy and slick to do, and even supports parallel distributing of the search (via the n_jobs argument). The set-up looks like this:
End of explanation
"""
# Perform grid search with 10-fold CV
gridCV.fit(XTrain, yTrain)
# Identify optimal hyperparameter values
best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
print("The best performing n_estimators value is: {}".format(best_n_estim))
print("The best performing max_features value is: {}".format(best_max_features))
"""
Explanation: Next, you tell the model to use the training data to perform the grid search with 10 fold CV. You can then collect the best combination of HP values with the model attribute best_params_:
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
# fetch scores, reshape into a grid
scores = [x[1] for x in gridCV.grid_scores_]
scores = np.array(scores).reshape(len(n_estimators), len(max_features))
scores = np.transpose(scores)
# Make heatmap from grid search results
plt.figure(figsize=(12, 6))
plt.imshow(scores, interpolation='nearest', origin='lower', cmap='jet_r')
plt.xticks(np.arange(len(n_estimators)), n_estimators)
plt.yticks(np.arange(len(max_features)), max_features)
plt.xlabel('Number of decision trees')
plt.ylabel('Max features')
plt.colorbar().set_label('Classification Accuracy', rotation=270, labelpad=20)
plt.show()
"""
Explanation: You can visually represent the results from this grid search (which can be found in gridCV.grid_scores_) with a heatmap:
End of explanation
"""
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from gridCV.best_estimator_
rf = RandomForestClassifier(n_estimators=best_n_estim,
max_features=best_max_features)
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),2))
"""
Explanation: Finally, you can now train a new random forest on the wine dataset using what you have learned from the grid search:
End of explanation
"""
|
tensorflow/probability | tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb | apache-2.0 | #@title ##### Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
"""
import functools
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
# Globally Enable XLA.
# tf.config.optimizer.set_jit(True)
try:
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn
"""
Explanation: VIB + DoSE
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/vib_dose.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this example, we train a deep variational information bottleneck model (VIB) on the MNIST dataset. We then use density of states estimation to turn our VIB model into an Out-of-distribution (OOD) detector. Our current implementation achieves near-SOTA performance on both OOD detection and classification simultaneously without any exposure to OOD data during training.
References
The VIB paper (Alemi et al. 2016) can be found Here
The DoSE paper (Morningstar et al. 2020) can be found Here
1 Imports
End of explanation
"""
[train_dataset, eval_dataset], datasets_info = tfds.load(
name='mnist',
split=['train', 'test'],
with_info=True,
shuffle_files=True)
def _preprocess(sample):
return (tf.cast(sample['image'], tf.float32) * 2 / 255. - 1.,
tf.cast(sample['label'], tf.int32))
train_size = datasets_info.splits['train'].num_examples
batch_size = 32
train_dataset = tfn.util.tune_dataset(
train_dataset,
batch_size=batch_size,
shuffle_size=int(train_size / 7),
preprocess_fn=_preprocess)
eval_dataset = tfn.util.tune_dataset(
eval_dataset,
repeat_count=1,
preprocess_fn=_preprocess)
x = next(iter(eval_dataset.batch(10)))[0]
tfn.util.display_imgs(x)
"""
Explanation: 2 Load Dataset
End of explanation
"""
input_shape = datasets_info.features['image'].shape
encoded_size = 16
base_depth = 32
prior = tfd.MultivariateNormalDiag(
loc=tf.zeros(encoded_size),
scale_diag=tf.ones(encoded_size))
Conv = functools.partial(
tfn.Convolution,
init_bias_fn=tf.zeros_initializer(),
init_kernel_fn=tf.initializers.he_uniform()) # Better for leaky_relu.
encoder = tfn.Sequential([
lambda x: 2. * tf.cast(x, tf.float32) - 1., # Center.
Conv(1, 1 * base_depth, 5, strides=1, padding='same'),
tf.nn.leaky_relu,
Conv(1 * base_depth, 1 * base_depth, 5, strides=2, padding='same'),
tf.nn.leaky_relu,
Conv(1 * base_depth, 2 * base_depth, 5, strides=1, padding='same'),
tf.nn.leaky_relu,
Conv(2 * base_depth, 2 * base_depth, 5, strides=2, padding='same'),
tf.nn.elu,
Conv(2 * base_depth, 4 * encoded_size, 7, strides=1, padding='valid'),
tf.nn.leaky_relu,
tfn.util.flatten_rightmost(ndims=3),
tfn.Affine(4*encoded_size, encoded_size + encoded_size * (encoded_size + 1) // 2),
lambda x: tfd.MultivariateNormalTriL(
loc=x[..., :encoded_size],
scale_tril=tfb.FillScaleTriL()(x[..., encoded_size:]))
], name='encoder')
print(encoder.summary())
DeConv = functools.partial(
tfn.ConvolutionTranspose,
init_kernel_fn=tf.initializers.he_uniform()) # Better for leaky_relu.
Affine = functools.partial(
tfn.Affine,
init_kernel_fn=tf.initializers.he_uniform())
decoder = tfn.Sequential([
Affine(encoded_size, 10),
lambda x: tfd.Categorical(logits=x)])
print(decoder.summary())
"""
Explanation: 3 Define Model
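The encoder ends in an `Affine` layer sized to parameterize a full-covariance Gaussian posterior: with `encoded_size = 16` it emits 16 + 16*17/2 = 152 values, of which 16 set the location and the remaining 136 fill the lower-triangular scale consumed by `FillScaleTriL`. The decoder is deliberately small -- a single affine map from the 16-dimensional code to 10 class logits, wrapped in a `Categorical` distribution.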
End of explanation
"""
def compute_loss(x, y, beta=1.):
q = encoder(x)
z = q.sample()
p = decoder(z)
kl = tf.reduce_mean(q.log_prob(z) - prior.log_prob(z), axis=-1)
# Note: we could use exact KL divergence, eg:
# kl = tf.reduce_mean(tfd.kl_divergence(q, prior))
# however we generally find that using the Monte Carlo approximation has
# lower variance.
nll = -tf.reduce_mean(p.log_prob(y), axis=-1)
loss = nll + beta * kl
return loss, (nll, kl), (q, z, p)
train_iter = iter(train_dataset)
def loss():
x, y = next(train_iter)
loss, (nll, kl), _ = compute_loss(x, y, beta=0.075)
return loss, (nll, kl)
opt = tf.optimizers.Adam(learning_rate=1e-3, decay=0.00005)
fit = tfn.util.make_fit_op(
loss,
opt,
decoder.trainable_variables + encoder.trainable_variables,
grad_summary_fn=lambda gs: tf.nest.map_structure(tf.norm, gs))
eval_iter = iter(eval_dataset.batch(5000).repeat())
@tfn.util.tfcompile
def eval():
x, y = next(eval_iter)
loss, (nll, kl), _ = compute_loss(x, y, beta=0.05)
return loss, (nll, kl)
"""
Explanation: 4 Loss / Eval
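As a reminder of what `compute_loss` evaluates (written in the notation of the code, averaged over the batch):

$$\mathcal{L}(x, y) = -\,\mathbb{E}_{z \sim q(z \mid x)}\big[\log p(y \mid z)\big] \;+\; \beta \, \mathrm{KL}\big(q(z \mid x) \,\big\|\, p(z)\big)$$

Both terms are estimated with a single Monte Carlo sample `z = q.sample()`; training uses `beta = 0.075` and the held-out evaluation uses `beta = 0.05` rather than the default `beta = 1`.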
End of explanation
"""
DEBUG_MODE = False
tf.config.experimental_run_functions_eagerly(DEBUG_MODE)
num_train_epochs = 25. # @param { isTemplate: true}
num_evals = 200 # @param { isTemplate: true}
dur_sec = dur_num = 0
num_train_steps = int(num_train_epochs * train_size // batch_size)
for i in range(num_train_steps):
start = time.time()
trn_loss, (trn_nll, trn_kl), g = fit()
stop = time.time()
dur_sec += stop - start
dur_num += 1
if i % int(num_train_steps / num_evals) == 0 or i == num_train_steps - 1:
tst_loss, (tst_nll, tst_kl) = eval()
f, x = zip(*[
('it:{:5}', opt.iterations),
('ms/it:{:6.4f}', dur_sec / max(1., dur_num) * 1000.),
('trn_loss:{:6.4f}', trn_loss),
('tst_loss:{:6.4f}', tst_loss),
('tst_nll:{:6.4f}', tst_nll),
('tst_kl:{:6.4f}', tst_kl),
('sum_norm_grad:{:6.4f}', sum(g)),
])
print(' '.join(f).format(*[getattr(x_, 'numpy', lambda: x_)()
for x_ in x]))
sys.stdout.flush()
dur_sec = dur_num = 0
# if i % 1000 == 0 or i == maxiter - 1:
# encoder.save('/tmp/encoder.npz')
# decoder.save('/tmp/decoder.npz')
"""
Explanation: 5 Train
End of explanation
"""
def evaluate_accuracy(dataset, encoder, decoder):
"""Evaluate the accuracy of your model on a dataset.
"""
this_it = iter(dataset)
num_correct = 0
num_total = 0
attempts = 0
for xin, xout in this_it:
e = encoder(xin)
z = e.sample(10000) # 10K samples should have low variance.
d = decoder(z)
yhat = d.sample()
confidence = tf.reduce_mean(d.probs_parameter(), axis=0)
most_likely = tf.cast(tf.math.argmax(confidence, axis=-1), tf.int32)
num_correct += np.sum(most_likely == xout, axis=0)
num_total += xout.shape[0]
attempts +=1
return num_correct, num_total
nc, nt = evaluate_accuracy(eval_dataset.batch(100), encoder, decoder)
print("Accuracy: %.4f"%(nc/nt))
"""
Explanation: 6 Evaluate Classification Accuracy
End of explanation
"""
def get_statistics(encoder, decoder, prior):
"""Setup a function to evaluate statistics given model components.
Args:
encoder: Callable neural network which takes in an image and
returns a tfp.distributions.Distribution object.
decoder: Callable neural network which takes in a vector and
returns a tfp.distributions.Distribution object.
prior: A tfp.distributions.Distribution object which operates
on the same spaces as the encoder.
Returns:
T: A function, which takes in a tensor containing an image (or
batch of images) and evaluates statistics on the model.
Optionally it also returns the prediction, under the assumption
that the DoSE model will only dress an actual classifier.
"""
def T(x, return_pred=False):
"""Evaluate statistics on an input image or batch of images.
Given an input tensor `x` containing either an image or a batch of
images, this function evaluates 4 statistics on a VIB model; the
kl-divergence between the posterior and prior, the expected entropy
of the decoder computed using samples from the posterior, the
    posterior entropy, and the cross-entropy between the posterior and
the prior. We also allow for the prediction to be optionally
returned.
Args:
x: rank 4 tensor containing a batch of images
return_pred: Bool indicating whether to return the model
prediction.
Returns:
tf.tensor containing the 4 statistics evaluated on the input.
pred (optional): The prediction of the model.
"""
pzgx = encoder(x)
z = pzgx.sample(100, seed=42) # Seed is fixed for determinism.
pxgz = decoder(z)
kl = pzgx.kl_divergence(prior)[tf.newaxis,...]
dent = tf.reduce_mean(pxgz.entropy(), axis=0)[tf.newaxis,...]
eent = pzgx.entropy()[tf.newaxis,...]
xent = pzgx.cross_entropy(prior)[tf.newaxis,...]
if return_pred:
pred = tf.math.argmax(
          tf.reduce_mean(pxgz.probs_parameter(), axis=0),
axis=-1)
return tf.concat([kl, dent, eent, xent], axis=0), pred
else:
return tf.concat([kl, dent, eent, xent], axis=0)
return T
T = get_statistics(encoder, decoder, prior)
"""
Explanation: The accuracy of one training run with this particular model and training setup was 99.15%, which is within half a percent of the state of the art, and comparable to the MNIST accuracy reported in Alemi et al. (2016).
OOD detection using DoSE
From the previous section, we have trained a variational classifier. However, this classifier was trained assuming that all of the inputs are from the distribution which generated the training set. In general, we may not always receive images drawn from this distribution. In these situations, our model prediction is unreliable. We want to be able to identify when this may be the case to avoid serving these flawed predictions.
In this section, we turn the VIB classifier into an OOD detector using DoSE.
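Concretely, DoSE summarizes each input with a small vector of model-derived statistics T(x), fits a density to those statistics on the training set, and declares an input out-of-distribution when its statistics are improbable under that density. A minimal sketch of the decision rule (the names `dose_density` and `log_threshold` are illustrative; the helpers defined in this section implement the full version):

```python
stats = T(x)   # e.g. [KL, decoder entropy, posterior entropy, cross-entropy]
is_ood = dose_density.log_prob(stats) < log_threshold  # low density under training statistics => OOD
```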
1 Get statistics
End of explanation
"""
def get_DoSE_KDE(T, dataset):
"""Get a distribution and decision rule for OOD detection using DoSE.
Given a tensor of statistics tx, compute a Kernel Density Estimate (KDE) of
the statistics. This uses a quantiles trick to cut down the number of
samples used in the KDE (to lower the cost of evaluating a trial point).
Args:
T: A function which takes an input image and returns a vector of
statistics evaluated using the model.
dataset: A tensorflow_datasets `Dataset` which will be used to evaluate
statistics to construct the estimator.
Returns:
is_ood: A function which takes a new point `x` and `threshold`, and
computes the decision rule KDE.log_prob(T(x)) < threshold
dose_kde: A tfd.MixtureSameFamily object. The distribution used as the KDE
from which the log_prob of a batch of statistics can be computed.
"""
# First we should evaluate the statistics on the training set.
it = iter(dataset)
for x, y in it:
if not "tx" in locals():
tx = T(x)
else:
tx = tf.concat([tx, T(x)], axis=-1)
n = tf.cast(tf.shape(tx)[-1], tx.dtype)
num_quantiles = int(25)
q = tfp.stats.quantiles(tx, num_quantiles, axis=-1)
q = tf.transpose(q, tf.roll(tf.range(tf.rank(q)), shift=-1, axis=0))
# Scott's Rule:
h = 3.49 * tf.math.reduce_std(tx, axis=-1, keepdims=True) * (n)**(-1./3.)
h *= n / num_quantiles
dose_kde = tfd.Independent(
tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=tf.zeros(num_quantiles + 1)),
components_distribution=tfd.Normal(loc=q, scale=h)),
reinterpreted_batch_ndims=1)
is_ood = lambda x, threshold: dose_kde.log_prob(tf.transpose(T(x), [1, 0])) < tf.math.log(threshold)
dose_log_prob = lambda x: dose_kde.log_prob(tf.transpose(T(x), [1, 0]))
# T(x) returns shape [T, N], but dose_kde works on shape [N, T]
return is_ood, dose_kde, dose_log_prob, tx
class DoSE_administrator(object):
def __init__(self, T, train_dataset, eval_dataset):
"""Administrate DoSE for model evaluation in a more efficient way.
This high level object just calls the lower level DoSE methods, but
also evaluates the DoSE log-probabilities on the evaluation dataset.
Using these, we can do things like compute auroc much more efficiently.
"""
dose_build = get_DoSE_KDE(T, train_dataset)
# Call DoSE on an image/batch x for lp threshold `threshold`
self.is_ood = dose_build[0]
# Actual dose distribution
self.dose_dist = dose_build[1]
self.dose_lp = lambda t: self.dose_dist.log_prob(tf.transpose(t, [1, 0]))
# Get the log-probability of a batch from dose
self.dose_log_prob = dose_build[2]
# This helps us evaluate auroc more reliably.
self.training_stats = dose_build[3]
# Get training_log probs efficiently
train_size = self.training_stats.shape[-1]
bs = train_size // 1000
for i in range(1000):
tlp = self.dose_lp(self.training_stats[..., bs*i:bs*(i+1)])
if not hasattr(self, 'training_lp'):
self.training_lp = tlp
else:
self.training_lp = tf.concat([self.training_lp, tlp], axis=0)
# Get log_probs, images, labels, and statistics
# on the evaluation dataset.
eval_it = iter(eval_dataset)
for x, y in eval_it:
if not hasattr(self, 'eval_lp'):
self.eval_stats = T(x)
self.eval_lp = self.dose_lp(self.eval_stats)
self.eval_label = y
self.eval_ims = x
else:
tx = T(x)
self.eval_stats = tf.concat([self.eval_stats,
tx],
axis=0)
self.eval_lp = tf.concat([self.eval_lp,
self.dose_lp(tx)],
axis=0)
self.eval_label = tf.concat([self.eval_label, y], axis=0)
self.eval_ims = tf.concat([self.eval_ims, x], axis=0)
def get_acc(self, threshold):
"""Evaluate the OOD accuracy for a certain threshold probability.
This computes the decision rule: `log q(x) < tf.math.log(thresh)`
on the eval dataset. It uses this decision rule to evaluate the
number of correct predictions, along with the 4 components of the
confusion matrix.
Args:
threshold: A threshold on the DoSE probability density.
Returns:
nc: Number of correct predictions
nt: Number of total predictions
tp: Number of true positives
tn: Number of true negatives
fp: Number of false positives
fn: Number of false negatives
"""
yhat = self.eval_lp < tf.math.log(threshold)
fp = tf.reduce_sum(tf.cast(
tf.logical_and(tf.math.not_equal(yhat, self.eval_label),
tf.equal(self.eval_label, False)),
tf.int32),axis=0)
fn = tf.reduce_sum(tf.cast(
tf.logical_and(tf.math.not_equal(yhat, self.eval_label),
tf.equal(self.eval_label, True)),
tf.int32),axis=0)
tp = tf.reduce_sum(tf.cast(
tf.logical_and(tf.equal(yhat, self.eval_label),
tf.equal(self.eval_label, True)),
tf.int32),axis=0)
tn = tf.reduce_sum(tf.cast(
tf.logical_and(tf.equal(yhat, self.eval_label),
tf.equal(self.eval_label, False)),
tf.int32),axis=0)
nc = tp+tn
nt = tf.cast(tf.size(self.eval_label), tf.float32)
return nc, nt, tp, tn, fp, fn
def roc_curve(self, nbins):
"""Get the roc curve for the model."""
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, 0.))))
fpr = [fp.numpy() / (fp.numpy()+tn.numpy())]
tpr = [tp.numpy()/ (tp.numpy() + fn.numpy())]
for i in range(1, nbins+1):
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, i/float(nbins)*100.))))
fpr.append(fp.numpy()/ (fp.numpy() + tn.numpy()))
tpr.append(tp.numpy()/ (tp.numpy() + fn.numpy()))
return fpr, tpr
def precision_recall_curve(self, nbins):
"""Get the precision-recall curve for the model."""
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, 0.))))
precision = [tp.numpy()/ (tp.numpy() + fp.numpy())]
recall = [tp.numpy() / (tp.numpy() + fn.numpy())]
for i in range(1, nbins+1):
nc, nt, tp, tn, fp, fn = self.get_acc(
np.float32(np.exp(np.percentile(self.eval_lp, i/float(nbins)*100.))))
precision.append(tp.numpy()/ (tp.numpy() + fp.numpy()))
recall.append(tp.numpy() / (tp.numpy() + fn.numpy()))
return precision, recall
"""
Explanation: 2 Define DoSE helper classes and functions
End of explanation
"""
# For evaluating statistics on the training set, we need to perform a
# pass through the dataset.
train_one_pass = tfds.load('mnist')['train']
train_one_pass = tfn.util.tune_dataset(train_one_pass,
batch_size=1000,
repeat_count=None,
preprocess_fn=_preprocess)
# OOD dataset is Fashion_MNIST
ood_data = tfds.load('fashion_mnist')['test'].map(_preprocess).map(
lambda x, y: (x, tf.ones_like(y, dtype=tf.bool)))
# In-distribution data is the MNIST test set.
ind_data = tfds.load('mnist')['test'].map(_preprocess).map(
lambda x, y: (x, tf.zeros_like(y, dtype=tf.bool)))
# Our trial dataset is a 50-50 split of the two.
hybrid_data = ind_data.concatenate(ood_data)
hybrid_data = tfn.util.tune_dataset(hybrid_data, batch_size=100,
shuffle_size=20000,repeat_count=None)
"""
Explanation: 3 Setup OOD dataset
End of explanation
"""
DoSE_admin = DoSE_administrator(T, train_one_pass, hybrid_data)
"""
Explanation: 4 Administer DoSE
End of explanation
"""
fp, tp = DoSE_admin.roc_curve(10000)
precision, recall = DoSE_admin.precision_recall_curve(10000)
plt.figure(figsize=[10,5])
plt.subplot(121)
plt.plot(fp, tp, 'b-')
plt.xlim(0, 1.)
plt.ylim(0., 1.)
plt.xlabel('FPR', fontsize=12)
plt.ylabel('TPR', fontsize=12)
plt.title("AUROC: %.4f"%np.trapz(tp, fp), fontsize=12)
plt.subplot(122)
plt.plot(recall, precision, 'b-')
plt.xlim(0, 1.)
plt.ylim(0., 1.)
plt.xlabel('Recall', fontsize=12)
plt.ylabel('Precision', fontsize=12)
plt.title("AUPRC: %.4f"%np.trapz(precision[1:], recall[1:]), fontsize=12)
Sorted_ims = tf.gather(DoSE_admin.eval_ims, tf.argsort(DoSE_admin.eval_lp))
Sorted_labels = tf.gather(DoSE_admin.eval_label, tf.argsort(DoSE_admin.eval_lp))
sorted_ind = tf.gather(Sorted_ims, tf.where(Sorted_labels == False))[:,0]
sorted_ood = tf.gather(Sorted_ims, tf.where(Sorted_labels == True))[:,0]
print("Most False Positive")
tfn.util.display_imgs(sorted_ind[:20])
print("Most True Negative")
tfn.util.display_imgs(sorted_ind[-20:])
print("Most False Negative")
tfn.util.display_imgs(sorted_ood[-20:])
print("Most True Positive")
tfn.util.display_imgs(sorted_ood[:20])
"""
Explanation: 5 Evaluate OOD performance
End of explanation
"""
|
ShubhamDebnath/Coursera-Machine-Learning | Course 4/Residual Networks v2.ipynb | mit | import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import tensorflow as tf
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
"""
Explanation: Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by He et al., allow you to train much deeper networks than were previously practically feasible.
In this assignment, you will:
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
End of explanation
"""
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1, 1), padding ='same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1, 1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
"""
Explanation: 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
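A toy calculation makes the scale of the problem concrete (the per-layer factor of 0.9 below is an arbitrary illustrative assumption, not a property of any particular network):

```python
# If each of 100 layers shrinks the backpropagated signal by a factor of ~0.9,
# almost no gradient reaches the earliest layers.
print(0.9 ** 100)  # ~2.7e-05
```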
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Vanishing gradient <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
You are now going to solve this problem by building a Residual Network!
2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : A ResNet block showing a skip-connection <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
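Before looking at the two block types, here is a minimal sketch of a skip connection written with the same Keras layers imported earlier (the helper name is ours and this is not one of the graded functions; it also assumes the input already has `filters` channels so that the addition is shape-compatible):

```python
def tiny_residual_block(X, filters):
    X_shortcut = X                                   # save the input for the skip connection
    X = Conv2D(filters, (3, 3), padding='same')(X)   # main path, first conv
    X = Activation('relu')(X)
    X = Conv2D(filters, (3, 3), padding='same')(X)   # main path, second conv
    X = Add()([X, X_shortcut])                       # the shortcut "skips over" both convs
    return Activation('relu')(X)
```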
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Identity block. Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 4 </u><font color='purple'> : Identity block. Skip connection "skips over" 3 layers.</center></caption>
Here're the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2a'. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: See reference
- To implement BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: Activation('relu')(X)
- To add the value passed forward by the shortcut: See reference
End of explanation
"""
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1, 1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1, 1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed = 0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(filters = F3, kernel_size = (1, 1), strides = (s, s), padding = 'valid', name = conv_name_base + '1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 4 </u><font color='purple'> : Convolutional block </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role to the matrix $W_s$ discussed in lecture.) For example, to reduce the activation dimensions' height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '2a'.
- The first BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2a'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be conv_name_base + '2b'.
- The second BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2b'.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be conv_name_base + '2c'.
- The third BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '2c'. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be conv_name_base + '1'.
- The BatchNorm is normalizing the channels axis. Its name should be bn_name_base + '1'.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Exercise: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- Conv Hint
- BatchNorm Hint (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: Activation('relu')(X)
- Addition Hint
End of explanation
"""
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f= 3, filters = [128,128,512], stage = 3, block = 'a' , s = 2)
X = identity_block(X, 3, [128,128,512], stage = 3, block = 'b')
X = identity_block(X, 3, [128,128,512], stage = 3, block = 'c')
X = identity_block(X, 3, [128,128,512], stage = 3, block = 'd')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f= 3, filters = [256, 256, 1024], stage = 4, block = 'a' , s = 2)
X = identity_block(X, 3, [256, 256, 1024], stage = 4, block = 'b')
X = identity_block(X, 3, [256, 256, 1024], stage = 4, block = 'c')
X = identity_block(X, 3, [256, 256, 1024], stage = 4, block = 'd')
X = identity_block(X, 3, [256, 256, 1024], stage = 4, block = 'e')
X = identity_block(X, 3, [256, 256, 1024], stage = 4, block = 'f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, f= 3, filters = [512, 512, 2048], stage = 5, block = 'a' , s = 2)
X = identity_block(X, 3, [512, 512, 2048], stage = 5, block = 'b')
X = identity_block(X, 3, [512, 512, 2048], stage = 5, block = 'c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D((2, 2), name = 'avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> Figure 5 </u><font color='purple'> : ResNet-50 model </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
    - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
    - BatchNorm is applied to the channels axis of the input.
    - MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
    - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
    - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
    - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
    - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).
Exercise: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling see reference
Here're some other functions we used in the code below:
- Conv2D: See reference
- BatchNorm: See reference (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: See reference
- Max pooling: See reference
- Fully connected layer: See reference
- Addition: See reference
End of explanation
"""
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
"""
Explanation: Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running model.fit(...) below.
End of explanation
"""
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
"""
Explanation: As seen in the Keras Tutorial Notebook, prior training a model, you need to configure the learning process by compiling the model.
End of explanation
"""
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
"""
Explanation: The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> Figure 6 </u><font color='purple'> : SIGNS dataset </center></caption>
End of explanation
"""
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
"""
Explanation: Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
End of explanation
"""
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
End of explanation
"""
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
"""
Explanation: Expected Output:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model only for two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
End of explanation
"""
img_path = 'images/horns.jpeg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
"""
Explanation: ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation
"""
model.summary()
"""
Explanation: You can also print a summary of your model by running the following code.
End of explanation
"""
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
"""
Explanation: Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples | notebooks/community/sdk/sdk_automl_text_classification_batch.ipynb | apache-2.0 | import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex SDK: AutoML training text classification model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_batch.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to create text classification models and do batch prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Happy Moments dataset from Kaggle Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML text classification model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
"""
Explanation: Install the latest GA version of the google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "us-central1" # @param {type: "string"}
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION $BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import google.cloud.aiplatform as aip
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "gs://cloud-ml-data/NL-classification/happiness.csv"
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML text classification model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
This tutorial uses a version of the Happy Moments dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
"""
dataset = aip.TextDataset.create(
display_name="Happy Moments" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aip.schema.dataset.ioformat.text.single_label_classification,
)
print(dataset.resource_name)
"""
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the TextDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The data labeling schema for the data items.
This operation may take several minutes.
End of explanation
"""
dag = aip.AutoMLTextTrainingJob(
display_name="happydb_" + TIMESTAMP,
prediction_type="classification",
multi_label=False,
)
print(dag)
"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLTextTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type task to train the model for.
classification: A text classification model.
sentiment: A text sentiment analysis model.
extraction: A text entity extraction model.
multi_label: If a classification task, whether single (False) or multi-labeled (True).
sentiment_max: If a sentiment analysis task, the maximum sentiment value.
The instantiated object is the DAG (directed acyclic graph) for the training pipeline.
End of explanation
"""
model = dag.run(
dataset=dataset,
model_display_name="happydb_" + TIMESTAMP,
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
)
"""
Explanation: Run the training pipeline
Next, you run the DAG to start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
validation_fraction_split: The percentage of the dataset to use for validation.
The run method when completed returns the Model resource.
The execution of the training pipeline will take upto 20 minutes.
End of explanation
"""
# Get model resource ID
models = aip.Model.list(filter="display_name=happydb_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aip.gapic.ModelServiceClient(client_options=client_options)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you trained the model, or you can list all of the models in your project.
End of explanation
"""
test_items = ! gsutil cat $IMPORT_FILE | head -n2
if len(test_items[0]) == 3:
_, test_item_1, test_label_1 = str(test_items[0]).split(",")
_, test_item_2, test_label_2 = str(test_items[1]).split(",")
else:
test_item_1, test_label_1 = str(test_items[0]).split(",")
test_item_2, test_label_2 = str(test_items[1]).split(",")
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
"""
Explanation: Send a batch prediction request
Send a batch prediction request to your trained model.
Get test item(s)
Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
import json
import tensorflow as tf
gcs_test_item_1 = BUCKET_NAME + "/test1.txt"
with tf.io.gfile.GFile(gcs_test_item_1, "w") as f:
f.write(test_item_1 + "\n")
gcs_test_item_2 = BUCKET_NAME + "/test2.txt"
with tf.io.gfile.GFile(gcs_test_item_2, "w") as f:
f.write(test_item_2 + "\n")
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": gcs_test_item_1, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
data = {"content": gcs_test_item_2, "mime_type": "text/plain"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
"""
Explanation: Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can only be in JSONL format. For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:
content: The Cloud Storage path to the file with the text item.
mime_type: The content type. In our example, it is a text file.
For example:
{'content': '[your-bucket]/file1.txt', 'mime_type': 'text'}
End of explanation
"""
batch_predict_job = model.batch_predict(
job_display_name="happydb_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
import json
import tensorflow as tf
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
"""
Explanation: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
ids: The internal assigned unique identifiers for each prediction request.
displayNames: The class names for each class label.
confidences: The predicted confidence, between 0 and 1, per class label.
End of explanation
"""
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
rdempsey/web-scraping-data-mining-course | week7/2_data_exploration/3. Generate Summary Statistics.ipynb | mit | # Import the libraries we need
import pandas as pd
# Import the dataset from the CSV file
accidents_data_file = '/Users/robert.dempsey/Dropbox/Private/Art of Skill Hacking/' \
'Books/Python Business Intelligence Cookbook/Data/Stats19-Data1979-2004/Accidents7904.csv'
accidents = pd.read_csv(accidents_data_file,
sep=',',
header=0,
index_col=False,
parse_dates=['Date'],
dayfirst=True,
tupleize_cols=False,
error_bad_lines=True,
warn_bad_lines=True,
skip_blank_lines=True,
low_memory=False
)
accidents.head()
"""
Explanation: Generate Summary Statistics
In this notebook we are going to generate summary statistics for our data.
Generate Summary Statistics for the Entire Dataset
End of explanation
"""
# Use the describe function to generate summary stats for the entire dataset
accidents.describe()
# Transpose the results provided by describe()
accidents.describe().transpose()
"""
Explanation: Generate Summary Statistics for the Entire Dataset
End of explanation
"""
# By default describe() restricts the summary to numeric columns. Use the following to include object columns
accidents.describe(include=['object'])
"""
Explanation: Generate Summary Statistics for Object Type Columns
End of explanation
"""
# Show the mode of each column and transpose it so we can read everything in iPython Notebook
accidents.mode().transpose()
"""
Explanation: Get the Mode of the Entire Dataset
End of explanation
"""
accidents['Weather_Conditions'].describe()
"""
Explanation: Generate Summary Statistics for a Single Column
End of explanation
"""
# Get the count of each unique value in the Date column.
pd.value_counts(accidents['Date'])
"""
Explanation: Get a Count of Unique Values for a Single Column
End of explanation
"""
# Get the minimum and maximum values of the Number_of_Vehicles column.
print("Min Value: {}".format(accidents['Number_of_Vehicles'].min()))
print("Max Value: {}".format(accidents['Number_of_Vehicles'].max()))
"""
Explanation: Get the Minimum and Maximum of a Single Column
End of explanation
"""
accidents['Number_of_Vehicles'].quantile([.05, .1, .25, .5, .75, .9, .99])
"""
Explanation: Generate Quantiles for a Single Column
End of explanation
"""
# Mean: the average
# Median: the middle value
# Mode: the value that occurs most often
# Range: the difference between the minimum and maximum values
print("Mean: {}".format(accidents['Number_of_Vehicles'].mean()))
print("Median: {}".format(accidents['Number_of_Vehicles'].median()))
print("Mode: {}".format(accidents['Number_of_Vehicles'].mode()))
print("Range: {}".format(
range(accidents['Number_of_Vehicles'].min(),
accidents['Number_of_Vehicles'].max()
)
))
"""
Explanation: Get the Mean, Median, Mode and Range for a Single Column
End of explanation
"""
|
jasontlam/snorkel | tutorials/cdr/CDR_Tutorial_2.ipynb | apache-2.0 | %load_ext autoreload
%autoreload 2
%matplotlib inline
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.models import candidate_subclass
ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])
train_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()
dev_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()
"""
Explanation: Chemical-Disease Relation (CDR) Tutorial
In this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.
Let's start by reloading from the last notebook.
End of explanation
"""
import bz2
from six.moves.cPickle import load
with bz2.BZ2File('data/ctd.pkl.bz2', 'rb') as ctd_f:
ctd_unspecified, ctd_therapy, ctd_marker = load(ctd_f)
def cand_in_ctd_unspecified(c):
return 1 if c.get_cids() in ctd_unspecified else 0
def cand_in_ctd_therapy(c):
return 1 if c.get_cids() in ctd_therapy else 0
def cand_in_ctd_marker(c):
return 1 if c.get_cids() in ctd_marker else 0
def LF_in_ctd_unspecified(c):
return -1 * cand_in_ctd_unspecified(c)
def LF_in_ctd_therapy(c):
return -1 * cand_in_ctd_therapy(c)
def LF_in_ctd_marker(c):
return cand_in_ctd_marker(c)
"""
Explanation: Part III: Writing LFs
This tutorial features some more advanced LFs than the intro tutorial, with more focus on distant supervision and dependencies between LFs.
Distant supervision approaches
We'll use the Comparative Toxicogenomics Database (CTD) for distant supervision. The CTD lists chemical-condition entity pairs under three categories: therapy, marker, and unspecified. Therapy means the chemical treats the condition, marker means the chemical is typically present with the condition, and unspecified is...unspecified. We can write LFs based on these categories.
End of explanation
"""
import re
from snorkel.lf_helpers import (
get_tagged_text,
rule_regex_search_tagged_text,
rule_regex_search_btw_AB,
rule_regex_search_btw_BA,
rule_regex_search_before_A,
rule_regex_search_before_B,
)
# List to parenthetical
def ltp(x):
return '(' + '|'.join(x) + ')'
def LF_induce(c):
return 1 if re.search(r'{{A}}.{0,20}induc.{0,20}{{B}}', get_tagged_text(c), flags=re.I) else 0
causal_past = ['induced', 'caused', 'due']
def LF_d_induced_by_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + '.{0,9}(by|to).{0,50}', 1)
def LF_d_induced_by_c_tight(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + ' (by|to) ', 1)
def LF_induce_name(c):
return 1 if 'induc' in c.chemical.get_span().lower() else 0
causal = ['cause[sd]?', 'induce[sd]?', 'associated with']
def LF_c_cause_d(c):
return 1 if (
re.search(r'{{A}}.{0,50} ' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
) else 0
treat = ['treat', 'effective', 'prevent', 'resistant', 'slow', 'promise', 'therap']
def LF_d_treat_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d(c):
return rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_treat_d(c):
return rule_regex_search_before_B(c, ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d_wide(c):
return rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)
def LF_c_d(c):
return 1 if ('{{A}} {{B}}' in get_tagged_text(c)) else 0
def LF_c_induced_d(c):
return 1 if (
('{{A}} {{B}}' in get_tagged_text(c)) and
(('-induc' in c[0].get_span().lower()) or ('-assoc' in c[0].get_span().lower()))
) else 0
def LF_improve_before_disease(c):
return rule_regex_search_before_B(c, 'improv.*', -1)
pat_terms = ['in a patient with ', 'in patients with']
def LF_in_patient_with(c):
return -1 if re.search(ltp(pat_terms) + '{{B}}', get_tagged_text(c), flags=re.I) else 0
uncertain = ['combin', 'possible', 'unlikely']
def LF_uncertain(c):
return rule_regex_search_before_A(c, ltp(uncertain) + '.*', -1)
def LF_induced_other(c):
return rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)
def LF_far_c_d(c):
return rule_regex_search_btw_AB(c, '.{100,5000}', -1)
def LF_far_d_c(c):
return rule_regex_search_btw_BA(c, '.{100,5000}', -1)
def LF_risk_d(c):
return rule_regex_search_before_B(c, 'risk of ', 1)
def LF_develop_d_following_c(c):
return 1 if re.search(r'develop.{0,25}{{B}}.{0,25}following.{0,25}{{A}}', get_tagged_text(c), flags=re.I) else 0
procedure, following = ['inject', 'administrat'], ['following']
def LF_d_following_c(c):
return 1 if re.search('{{B}}.{0,50}' + ltp(following) + '.{0,20}{{A}}.{0,50}' + ltp(procedure), get_tagged_text(c), flags=re.I) else 0
def LF_measure(c):
return -1 if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else 0
def LF_level(c):
return -1 if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else 0
def LF_neg_d(c):
return -1 if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else 0
WEAK_PHRASES = ['none', 'although', 'was carried out', 'was conducted',
'seems', 'suggests', 'risk', 'implicated',
'the aim', 'to (investigate|assess|study)']
WEAK_RGX = r'|'.join(WEAK_PHRASES)
def LF_weak_assertions(c):
return -1 if re.search(WEAK_RGX, get_tagged_text(c), flags=re.I) else 0
"""
Explanation: Text pattern approaches
Now we'll use some LF helpers to create LFs based on indicative text patterns. We came up with these rules by using the viewer to examine training candidates and noting frequent patterns.
End of explanation
"""
def LF_ctd_marker_c_d(c):
return LF_c_d(c) * cand_in_ctd_marker(c)
def LF_ctd_marker_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_marker(c)
def LF_ctd_therapy_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_therapy(c)
def LF_ctd_unspecified_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_unspecified(c)
def LF_ctd_unspecified_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_unspecified(c)
"""
Explanation: Composite LFs
The following LFs take some of the strongest distant supervision and text pattern LFs, and combine them to form more specific LFs. These LFs introduce some obvious dependencies within the LF set, which we will model later.
End of explanation
"""
def LF_closer_chem(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical closer than @dist/2 in either direction
sent = c.get_parent()
closest_other_chem = float('inf')
for i in range(dis_end, min(len(sent.words), dis_end + dist // 2)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
for i in range(max(0, dis_start - dist // 2), dis_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
return 0
def LF_closer_dis(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find a disease closer than @dist/8 in either direction
sent = c.get_parent()
for i in range(chem_end, min(len(sent.words), chem_end + dist // 8)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
for i in range(max(0, chem_start - dist // 8), chem_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
return 0
"""
Explanation: Rules based on context hierarchy
These last two rules will make use of the context hierarchy. The first checks if there is a chemical mention much closer to the candidate's disease mention than the candidate's chemical mention. The second does the analog for diseases.
End of explanation
"""
LFs = [
LF_c_cause_d,
LF_c_d,
LF_c_induced_d,
LF_c_treat_d,
LF_c_treat_d_wide,
LF_closer_chem,
LF_closer_dis,
LF_ctd_marker_c_d,
LF_ctd_marker_induce,
LF_ctd_therapy_treat,
LF_ctd_unspecified_treat,
LF_ctd_unspecified_induce,
LF_d_following_c,
LF_d_induced_by_c,
LF_d_induced_by_c_tight,
LF_d_treat_c,
LF_develop_d_following_c,
LF_far_c_d,
LF_far_d_c,
LF_improve_before_disease,
LF_in_ctd_therapy,
LF_in_ctd_marker,
LF_in_patient_with,
LF_induce,
LF_induce_name,
LF_induced_other,
LF_level,
LF_measure,
LF_neg_d,
LF_risk_d,
LF_treat_d,
LF_uncertain,
LF_weak_assertions,
]
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
%time L_train = labeler.apply(split=0)
L_train
L_train.lf_stats(session)
"""
Explanation: Running the LFs on the training set
End of explanation
"""
from snorkel.learning.structure import DependencySelector
ds = DependencySelector()
deps = ds.select(L_train, threshold=0.1)
len(deps)
"""
Explanation: Part IV: Training the generative model
As mentioned above, we want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies. We can see that these match up with our prior knowledge. For example, it identified a "reinforcing" dependency between LF_c_induced_d and LF_ctd_marker_induce. Recall that we constructed the latter using the former.
End of explanation
"""
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel(lf_propensity=True)
gen_model.train(
L_train, deps=deps, decay=0.95, step_size=0.1/L_train.shape[0], reg_param=0.0
)
train_marginals = gen_model.marginals(L_train)
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
gen_model.learned_lf_stats()
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
"""
Explanation: Now we'll train the generative model, using the deps argument to account for the learned dependencies. We'll also model LF propensity here, unlike the intro tutorial. In addition to learning the accuracies of the LFs, this also learns their likelihood of labeling an example.
End of explanation
"""
from load_external_annotations import load_external_labels
load_external_labels(session, ChemicalDisease, split=1, annotator='gold')
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_dev
L_dev = labeler.apply_existing(split=1)
_ = gen_model.error_analysis(session, L_dev, L_gold_dev)
L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
"""
Explanation: Checking performance against development set labels
Finally, we'll run the labeler on the development set, load in some external labels, then evaluate the LF performance. The external labels are applied via a small script for convenience. It maps the document-level relation annotations found in the CDR file to mention-level labels. Note that these will not be perfect, although they are pretty good. If we wanted to keep iterating, we could use snorkel.lf_helpers.test_LF against the dev set, or look at some false positive and false negative candidates.
End of explanation
"""
|
pascal-schetelat/Slope | slopeGraphs.ipynb | mit | from plotSlope import slope
"""
Explanation: E. Tufte Slope Graphs contest
So here is my entry for the slope Graph contest. (You can find the initial bounty description here )
Installation
Dependencies
This script is written in Python and relies on NumPy, Pandas and Matplotlib.
The easiest way to get a clean and robust install is to download one of the great scientific Python distributions, namely:
Anaconda
Canopy
Python(x,y)
Everything you'll need is included. I personally use Anaconda from the guys at Continuum Analytics. All of them should work on Linux, Windows and Mac.
Download the sources
Go grab the sources at https://github.com/pascal-schetelat/Slope
If you want slope to be available anywhere on your system :
bash
python setup.py install
Else, just set the working directory where plotSlope.py is.
Then you are good to go. Import it in a Python interpreter (for instance Spyder, the scientific Python IDE bundled with Anaconda, or a Jupyter notebook):
python
from plotSlope import slope
Usage
Import :
End of explanation
"""
import os
import pandas as pd
data = pd.read_csv(os.path.join('data','EU_GDP_2007_2013.csv'),index_col=0,na_values='-')
(data/1000).head()
"""
Explanation: Load data from file into a data frame :
End of explanation
"""
f = slope(data/1000,kind='interval',height= 12,width=20,font_size=12,dpi=150,savename='EU_interval.png',title = u'title')
color = {"France":'b','Germany':'r','Ireland':'chocolate','United Kingdom': 'purple'}
f = slope(data/1000, title = u'European GDP until 2010 and forecasts at market prices (billions of Euro) source : EUROSTAT',
kind='interval',height= 12,width=22,font_size=15,
savename='test.png',color=color,dpi=200)
f = slope(data/1000, title = u'European GDP until 2010 and forecasts at market prices (billions of Euro) source : EUROSTAT',
kind='interval',height= 12,width=30,font_size=20,
savename=None,color=color)
"""
Explanation: Plot it :
End of explanation
"""
import numpy as np
df = pd.DataFrame(np.random.normal(loc=np.ones(shape=[20,30])*np.arange(30)))
df.rename(columns = lambda el : str(el),index =lambda el : str(el),inplace=True)
f = slope(df.T,width =10,height= 8,kind='ordinal',savename=None,dpi=200,color={'10':'red','27':'blue'},marker=None)
"""
Explanation: Other example : Random data
End of explanation
"""
|
bigdata-i523/hid335 | project/BDA-Project-Data-Visualization.ipynb | gpl-3.0 | import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('~/project-data.csv')
df.drop(df.columns[[0,1]], axis=1, inplace=True)
df.shape
"""
Explanation: Big Data Applications and Analytics: Term Project
Sean M. Shiverick Fall 2017
Data Visualization
Resources:
'Python for Data Analysis' by Wes McKinney: https://github.com/wesm/pydata-book
'Python Data Science Handbook': https://jakevdp.github.io/PythonDataScienceHandbook/
Dataset: 2015 NSDUH
1. Import modules and Load the data
Import python modules
load data file and save as DataFrame object
Subset dataframe by column
End of explanation
"""
df.columns
df.info()
"""
Explanation: Explore data frames to check headers
Look at columns headers, variable information, type, etc.
End of explanation
"""
sns.set(style='ticks')
sns.lmplot(y='PRLMISAB',x='HEROINUSE',data=df)
"""
Explanation: Explore data frames to check headers and data types
AGECAT 57146 non-null int64
SEX 57146 non-null int64
MARRIED 57146 non-null float64
EDUCAT 57146 non-null int64
EMPLOY18 57146 non-null float64
CTYMETRO 57146 non-null int64
HEALTH 57146 non-null float64
MENTHLTH 57146 non-null float64
SUICATT 57146 non-null float64
PRLMISEVR 57146 non-null int64
PRLMISAB 57146 non-null float64
PRLANY 57146 non-null int64
HEROINEVR 57146 non-null int64
HEROINUSE 57146 non-null int64
HEROINFQY 57146 non-null float64
TRQLZRS 57146 non-null int64
SEDATVS 57146 non-null int64
COCAINE 57146 non-null int64
AMPHETMN 57146 non-null int64
TRTMENT 57146 non-null float64
MHTRTMT 57146 non-null float64
First plot: scatterplot with linear correlation
Compare prescription pain reliever misuse (PRLMISAB) with heroin use (HEROINUSE).
Pass PRLMISAB as the Y variable and HEROINUSE as the X variable to seaborn's lmplot (linear model plot).
It plots the points, axes, and regression line, and also shades the confidence band around the fit. Super handy!
End of explanation
"""
sns.lmplot(y='PRLMISAB',x='HEROINUSE',hue='CTYMETRO',data=df)
p = sns.lmplot(y='PRLMISAB',x='HEROINUSE',hue='CTYMETRO',data=df)
p.savefig('fancy-regression-chart.png')
"""
Explanation: Check how PRLMISAB relates to HEROINUSE, controlling for CTYMETRO.
No real hypothesis here, just to show how we can do this.
Here the hue variable CTYMETRO distinguishes the metro/city categories of residence.
Use the command below to save this plot.
End of explanation
"""
sns.factorplot(x='HEROINEVR', hue='PRLMISEVR',col='SEX',kind='count',data=df)
"""
Explanation: Third Plot: Factorplot
Compare the interaction of HEROINEVR, PRLMISEVR, and SEX using count bar charts.
End of explanation
"""
# Reference list of the available columns:
# 'AGECAT', 'SEX', 'MARRIED', 'EDUCAT', 'EMPLOY18', 'CTYMETRO', 'HEALTH',
# 'MENTHLTH', 'SUICATT', 'PRLMISEVR', 'PRLMISAB', 'PRLANY', 'HEROINEVR',
# 'HEROINUSE', 'HEROINFQY', 'TRQLZRS', 'SEDATVS', 'COCAINE', 'AMPHETMN',
# 'TRTMENT', 'MHTRTMT'
df1 = df[['MENTHLTH','PRLMISAB','HEROINUSE','CTYMETRO']]
sns.pairplot(df1, hue = 'CTYMETRO',size=2.5);
plt.savefig('Figure3.png', bbox_inches='tight')
df1 = df[['AGECAT','SEX','PRLMISAB','HEROINUSE']]
sns.pairplot(df1, hue = 'SEX',size=2.5);
"""
Explanation: Fourth Plot: Pairplots
To understand the distribution of each variable and to plot it against all the other variables to understand their relationships.
The graph can be split by different values of a chosen 'hue' variable.
End of explanation
"""
|
fastai/course-v3 | nbs/dl1/lesson2-download.ipynb | apache-2.0 | from fastai.vision import *
"""
Explanation: Creating your own dataset from Google Images
by: Francisco Ingham and Jeremy Howard. Inspired by Adrian Rosebrock
In this tutorial we will see how to easily create an image dataset through Google Images. Note: You will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).
End of explanation
"""
folder = 'black'
file = 'urls_black.csv'
folder = 'teddys'
file = 'urls_teddys.csv'
folder = 'grizzly'
file = 'urls_grizzly.csv'
"""
Explanation: Get a list of URLs
Search and scroll
Go to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.
Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.
It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:
"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis
You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.
Download into file
Now you must run some Javascript code in your browser which will save the URLs of all the images you want for you dataset.
In Google Chrome press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>j</kbd> on Windows/Linux and <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>j</kbd> on macOS, and a small window the javascript 'Console' will appear. In Firefox press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>k</kbd> on Windows/Linux or <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>k</kbd> on macOS. That is where you will paste the JavaScript commands.
You will need to get the urls of each of the images. Before running the following commands, you may want to disable ad blocking extensions (uBlock, AdBlockPlus etc.) in Chrome. Otherwise the window.open() command doesn't work. Then you can run the following commands:
javascript
urls=Array.from(document.querySelectorAll('.rg_i')).map(el=> el.hasAttribute('data-src')?el.getAttribute('data-src'):el.getAttribute('data-iurl'));
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
Create directory and upload urls file into your server
Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.
End of explanation
"""
path = Path('data/bears')
dest = path/folder
dest.mkdir(parents=True, exist_ok=True)
path.ls()
"""
Explanation: You will need to run this cell once per each category.
End of explanation
"""
classes = ['teddys','grizzly','black']
download_images(path/file, dest, max_pics=200)
# If you have problems downloading, try `max_workers=0` to see the exceptions:
download_images(path/file, dest, max_pics=20, max_workers=0)
"""
Explanation: Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.
Download images
Now you will need to download your images from their respective urls.
fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder and this function will download and save all images that can be opened. If they have some problem in being opened, they will not be saved.
Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.
You will need to run this line once for every category.
End of explanation
"""
for c in classes:
print(c)
verify_images(path/c, delete=True, max_size=500)
"""
Explanation: Then we can remove any images that can't be opened:
End of explanation
"""
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
# If you already cleaned your data, run this cell instead of the one before
# np.random.seed(42)
# data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
# ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
"""
Explanation: View data
End of explanation
"""
data.classes
data.show_batch(rows=3, figsize=(7,8))
data.classes, data.c, len(data.train_ds), len(data.valid_ds)
"""
Explanation: Good! Let's take a look at some of our pictures then.
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
learn.lr_find()
# If the plot is not showing try to give a start and end learning rate
# learn.lr_find(start_lr=1e-5, end_lr=1e-1)
learn.recorder.plot()
learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')
"""
Explanation: Train model
End of explanation
"""
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
"""
Explanation: Interpretation
End of explanation
"""
from fastai.widgets import *
"""
Explanation: Cleaning Up
Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.
Using the ImageCleaner widget from fastai.widgets we can prune our top losses, removing photos that don't belong.
End of explanation
"""
db = (ImageList.from_folder(path)
.split_none()
.label_from_folder()
.transform(get_transforms(), size=224)
.databunch()
)
# If you already cleaned your data using indexes from `from_toplosses`,
# run this cell instead of the one before to proceed with removing duplicates.
# Otherwise all the results of the previous step would be overwritten by
# the new run of `ImageCleaner`.
# db = (ImageList.from_csv(path, 'cleaned.csv', folder='.')
# .split_none()
# .label_from_df()
# .transform(get_transforms(), size=224)
# .databunch()
# )
"""
Explanation: First we need to get the file paths from our top_losses. We can do this with .from_toplosses. We then feed the top losses indexes and corresponding dataset to ImageCleaner.
Notice that the widget will not delete images directly from disk but it will create a new csv file cleaned.csv from where you can create a new ImageDataBunch with the corrected labels to continue training your model.
In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demonstrated the use of the ds_type param which no longer has any effect. See the thread for more details.
End of explanation
"""
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');
ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
"""
Explanation: Then we create a new learner to use our new databunch with all the images.
End of explanation
"""
# Don't run this in google colab or any other instances running jupyter lab.
# If you do run this on Jupyter Lab, you need to restart your runtime and
# runtime state including all local variables will be lost.
ImageCleaner(ds, idxs, path)
"""
Explanation: Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via /tree, not /lab. Running the ImageCleaner widget in Jupyter Lab is not currently supported.
End of explanation
"""
ds, idxs = DatasetFormatter().from_similars(learn_cln)
ImageCleaner(ds, idxs, path, duplicates=True)
"""
Explanation: If the code above does not show any GUI (images and buttons) rendered by the widgets but only text output, that may be caused by an ipywidgets configuration problem. Try the solution in this link to solve it.
Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. ImageCleaner will show you a new row of images until there are no more to show. In this case, the widget will show you images until there are none left from top_losses: ImageCleaner(ds, idxs)
You can also find duplicates in your dataset and delete them! To do this, you need to run .from_similars to get the potential duplicates' ids and then run ImageCleaner with duplicates=True. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left.
Make sure to recreate the databunch and learn_cln from the cleaned.csv file. Otherwise the file would be overwritten from scratch, losing all the results from cleaning the data from toplosses.
End of explanation
"""
learn.export()
"""
Explanation: Remember to recreate your ImageDataBunch from your cleaned.csv to include the changes you made in your data!
Putting your model in production
First thing first, let's export the content of our Learner object for production:
End of explanation
"""
defaults.device = torch.device('cpu')
img = open_image(path/'black'/'00000021.jpg')
img
"""
Explanation: This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used).
You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:
End of explanation
"""
learn = load_learner(path)
pred_class,pred_idx,outputs = learn.predict(img)
pred_class.obj
"""
Explanation: We create our Learner in the production environment like this; just make sure that path contains the file 'export.pkl' from before.
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(1, max_lr=0.5)
"""
Explanation: So you might create a route something like this (thanks to Simon Willison for the structure of this code):
python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
bytes = await get_bytes(request.query_params["url"])
img = open_image(BytesIO(bytes))
_,_,losses = learner.predict(img)
return JSONResponse({
"predictions": sorted(
zip(cat_learner.data.classes, map(float, losses)),
key=lambda p: p[1],
reverse=True
)
})
(This example is for the Starlette web app toolkit.)
Things that can go wrong
Most of the time things will train fine with the defaults
There's not much you really need to tune (despite what you've heard!)
Most likely are
Learning rate
Number of epochs
Learning rate (LR) too high
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
"""
Explanation: Learning rate (LR) too low
End of explanation
"""
learn.fit_one_cycle(5, max_lr=1e-5)
learn.recorder.plot_losses()
"""
Explanation: Previously we had this result:
Total time: 00:57
epoch train_loss valid_loss error_rate
1 1.030236 0.179226 0.028369 (00:14)
2 0.561508 0.055464 0.014184 (00:13)
3 0.396103 0.053801 0.014184 (00:13)
4 0.316883 0.050197 0.021277 (00:15)
End of explanation
"""
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
"""
Explanation: As well as taking a really long time, it's getting too many looks at each image, so may overfit.
Too few epochs
End of explanation
"""
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32,
ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0
),size=224, num_workers=4).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0)
learn.unfreeze()
learn.fit_one_cycle(40, slice(1e-6,1e-4))
"""
Explanation: Too many epochs
End of explanation
"""
|
python-control/python-control | examples/pvtol-lqr-nested.ipynb | bsd-3-clause | from numpy import * # Grab all of the NumPy functions
from matplotlib.pyplot import * # Grab MATLAB plotting functions
from control.matlab import * # MATLAB-like functions
%matplotlib inline
"""
Explanation: Vertical takeoff and landing aircraft
This notebook demonstrates the use of the python-control package for analysis and design of a controller for a vectored thrust aircraft model that is used as a running example through the text Feedback Systems by Astrom and Murray. This example makes use of MATLAB compatible commands.
Additional information on this system is available at
http://www.cds.caltech.edu/~murray/wiki/index.php/Python-control/Example:_Vertical_takeoff_and_landing_aircraft
System Description
This example uses a simplified model for a (planar) vertical takeoff and landing aircraft (PVTOL), as shown below:
The position and orientation of the center of mass of the aircraft is denoted by $(x,y,\theta)$, $m$ is the mass of the vehicle, $J$ the moment of inertia, $g$ the gravitational constant and $c$ the damping coefficient. The forces generated by the main downward thruster and the maneuvering thrusters are modeled as a pair of forces $F_1$ and $F_2$ acting at a distance $r$ below the aircraft (determined by the geometry of the thrusters).
Letting $z=(x,y,\theta, \dot x, \dot y, \dot\theta)$, the equations can be written in state space form as:
$$
\frac{dz}{dt} = \begin{bmatrix}
z_4 \
z_5 \
z_6 \
-\frac{c}{m} z_4 \
-g- \frac{c}{m} z_5 \
0
\end{bmatrix}
+
\begin{bmatrix}
0 \
0 \
0 \
\frac{1}{m} \cos \theta F_1 + \frac{1}{m} \sin \theta F_2 \
\frac{1}{m} \sin \theta F_1 + \frac{1}{m} \cos \theta F_2 \
\frac{r}{J} F_1
\end{bmatrix}
$$
LQR state feedback controller
This section demonstrates the design of an LQR state feedback controller for the vectored thrust aircraft example. This example is pulled from Chapter 6 (Linear Systems, Example 6.4) and Chapter 7 (State Feedback, Example 7.9) of Astrom and Murray. The Python code listed here is contained in the file pvtol-lqr.py.
To execute this example, we first import the libraries for SciPy, MATLAB plotting and the python-control package:
End of explanation
"""
m = 4 # mass of aircraft
J = 0.0475 # inertia around pitch axis
r = 0.25 # distance to center of force
g = 9.8 # gravitational constant
c = 0.05 # damping factor (estimated)
"""
Explanation: The parameters for the system are given by
End of explanation
"""
# State space dynamics
xe = [0, 0, 0, 0, 0, 0] # equilibrium point of interest
ue = [0, m*g] # (note these are lists, not matrices)
# Dynamics matrix (use matrix type so that * works for multiplication)
# Note that we write A and B here in full generality in case we want
# to test different xe and ue.
A = matrix(
[[ 0, 0, 0, 1, 0, 0],
[ 0, 0, 0, 0, 1, 0],
[ 0, 0, 0, 0, 0, 1],
[ 0, 0, (-ue[0]*sin(xe[2]) - ue[1]*cos(xe[2]))/m, -c/m, 0, 0],
[ 0, 0, (ue[0]*cos(xe[2]) - ue[1]*sin(xe[2]))/m, 0, -c/m, 0],
[ 0, 0, 0, 0, 0, 0 ]])
# Input matrix
B = matrix(
[[0, 0], [0, 0], [0, 0],
[cos(xe[2])/m, -sin(xe[2])/m],
[sin(xe[2])/m, cos(xe[2])/m],
[r/J, 0]])
# Output matrix
C = matrix([[1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]])
D = matrix([[0, 0], [0, 0]])
"""
Explanation: Choosing equilibrium inputs to be $u_e = (0, mg)$, the dynamics of the system $\frac{dz}{dt}$, and their linearization $A$ about equilibrium point $z_e = (0, 0, 0, 0, 0, 0)$ are given by
$$
\frac{dz}{dt} = \begin{bmatrix}
z_4 \
z_5 \
z_6 \
-g \sin z_3 -\frac{c}{m} z_4 \
g(\cos z_3 - 1)- \frac{c}{m} z_5 \
0
\end{bmatrix}
\qquad
A = \begin{bmatrix}
0 & 0 & 0 &1&0&0\
0&0&0&0&1&0 \
0&0&0&0&0&1 \
0&0&-g&-c/m&0&0 \
0&0&0&0&-c/m&0 \
0&0&0&0&0&0
\end{bmatrix}
$$
End of explanation
"""
Qx1 = diag([1, 1, 1, 1, 1, 1])
Qu1a = diag([1, 1])
(K, X, E) = lqr(A, B, Qx1, Qu1a); K1a = matrix(K)
"""
Explanation: To compute a linear quadratic regulator for the system, we write the cost function as
$$ J = \int_0^\infty (\xi^T Q_\xi \xi + v^T Q_v v) dt,$$
where $\xi = z - z_e$ and $v = u - u_e$ represent the local coordinates around the desired equilibrium point $(z_e, u_e)$. We begin with diagonal matrices for the state and input costs:
End of explanation
"""
# Our input to the system will only be (x_d, y_d), so we need to
# multiply it by this matrix to turn it into z_d.
Xd = matrix([[1,0,0,0,0,0],
[0,1,0,0,0,0]]).T
# Closed loop dynamics
H = ss(A-B*K,B*K*Xd,C,D)
# Step response for the first input
x,t = step(H,input=0,output=0,T=linspace(0,10,100))
# Step response for the second input
y,t = step(H,input=1,output=1,T=linspace(0,10,100))
plot(t,x,'-',t,y,'--')
plot([0, 10], [1, 1], 'k-')
ylabel('Position')
xlabel('Time (s)')
title('Step Response for Inputs')
legend(('Yx', 'Yy'), loc='lower right')
show()
"""
Explanation: This gives a control law of the form $v = -K \xi$, which can then be used to derive the control law in terms of the original variables:
$$u = v + u_e = - K(z - z_d) + u_d.$$
where $u_e = (0, mg)$ and $z_d = (x_d, y_d, 0, 0, 0, 0)$
The way we setup the dynamics above, $A$ is already hardcoding $u_d$, so we don't need to include it as an external input. So we just need to cascade the $-K(z-z_d)$ controller with the PVTOL aircraft's dynamics to control it. For didactic purposes, we will cheat in two small ways:
First, we will only interface our controller with the linearized dynamics. Using the nonlinear dynamics would require the NonlinearIOSystem functionalities, which we leave to another notebook to introduce.
Second, as written, our controller requires full state feedback ($K$ multiplies full state vectors $z$), which we do not have access to because our system, as written above, only returns $x$ and $y$ (because of $C$ matrix). Hence, we would need a state observer, such as a Kalman Filter, to track the state variables. Instead, we assume that we have access to the full state.
The following code implements the closed loop system:
End of explanation
"""
# Look at different input weightings
Qu1a = diag([1, 1])
K1a, X, E = lqr(A, B, Qx1, Qu1a)
H1ax = H = ss(A-B*K1a,B*K1a*Xd,C,D)
Qu1b = (40**2)*diag([1, 1])
K1b, X, E = lqr(A, B, Qx1, Qu1b)
H1bx = H = ss(A-B*K1b,B*K1b*Xd,C,D)
Qu1c = (200**2)*diag([1, 1])
K1c, X, E = lqr(A, B, Qx1, Qu1c)
H1cx = ss(A-B*K1c,B*K1c*Xd,C,D)
[Y1, T1] = step(H1ax, T=linspace(0,10,100), input=0,output=0)
[Y2, T2] = step(H1bx, T=linspace(0,10,100), input=0,output=0)
[Y3, T3] = step(H1cx, T=linspace(0,10,100), input=0,output=0)
plot(T1, Y1.T, 'b-', T2, Y2.T, 'r-', T3, Y3.T, 'g-')
plot([0 ,10], [1, 1], 'k-')
title('Step Response for Inputs')
ylabel('Position')
xlabel('Time (s)')
legend(('Y1','Y2','Y3'),loc='lower right')
axis([0, 10, -0.1, 1.4])
show()
"""
Explanation: The plot above shows the $x$ and $y$ positions of the aircraft when it is commanded to move 1 m in each direction. The following shows the $x$ motion for control weights $\rho = 1, 10^2, 10^4$. A higher weight of the input term in the cost function causes a more sluggish response. It is created using the code:
End of explanation
"""
from matplotlib.pyplot import * # Grab MATLAB plotting functions
from control.matlab import * # MATLAB-like functions
# System parameters
m = 4 # mass of aircraft
J = 0.0475 # inertia around pitch axis
r = 0.25 # distance to center of force
g = 9.8 # gravitational constant
c = 0.05 # damping factor (estimated)
# Transfer functions for dynamics
Pi = tf([r], [J, 0, 0]) # inner loop (roll)
Po = tf([1], [m, c, 0]) # outer loop (position)
"""
Explanation: Lateral control using inner/outer loop design
This section demonstrates the design of a loop shaping controller for the vectored thrust aircraft example. This example is pulled from Chapter 11 (Frequency Domain Design) of Astrom and Murray.
To design a controller for the lateral dynamics of the vectored thrust aircraft, we make use of an "inner/outer" loop design methodology. We begin by representing the dynamics using the block diagram
<img src=https://murray.cds.caltech.edu/images/murray.cds/3/3f/Pvtol-lateraltf.png>
The controller is constructed by splitting the process dynamics and controller into two components: an inner loop consisting of the roll dynamics $P_i$ and control $C_i$ and an outer loop consisting of the lateral position dynamics $P_o$ and controller $C_o$.
<img src=https://murray.cds.caltech.edu/images/murray.cds/f/f1/Pvtol-nested-1.png>
The closed inner loop dynamics $H_i$ control the roll angle of the aircraft using the vectored thrust while the outer loop controller $C_o$ commands the roll angle to regulate the lateral position.
The following code imports the libraries that are required and defines the dynamics:
End of explanation
"""
k = 200
a = 2
b = 50
Ci = k*tf([1, a], [1, b]) # lead compensator
Li = Pi*Ci
"""
Explanation: For the inner loop, use a lead compensator
End of explanation
"""
Hi = parallel(feedback(Ci, Pi), -m*g*feedback(Ci*Pi, 1))
"""
Explanation: The closed loop dynamics of the inner loop, $H_i$, are given by
End of explanation
"""
# Now design the lateral control system
a = 0.02
b = 5
K = 2
Co = -K*tf([1, 0.3], [1, 10]) # another lead compensator
Lo = -m*g*Po*Co
"""
Explanation: Finally, we design the lateral compensator using another lead compensator
End of explanation
"""
L = Co*Hi*Po
S = feedback(1, L)
T = feedback(L, 1)
t, y = step(T, T=linspace(0,10,100))
plot(y, t)
title("Step Response")
grid()
xlabel("time (s)")
ylabel("y(t)")
show()
"""
Explanation: The performance of the system can be characterized using the sensitivity function and the complementary sensitivity function:
End of explanation
"""
bode(L)
show()
nyquist(L, (0.0001, 1000))
show()
gangof4(Hi*Po, Co)
"""
Explanation: The frequency response and Nyquist plot for the loop transfer function are computed using the commands
End of explanation
"""
|
piyueh/PoissonTest | PetAmgXTest/Report.ipynb | gpl-2.0 | omg=numpy.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1])
tPCG = numpy.array([5.72, 4.54, 3.78, 3.14, 2.71, 2.38, 2.06, 1.95, 2.49, 10.15])
tPCGF = numpy.array([2.48, 2.14, 2.03, 2.6, 10.7])
tPBICGSTAB = numpy.array([2.79, 2.58, 2.48, 3, 12.1])
pyplot.plot(omg, tPCG, label="PCG")
pyplot.plot(omg[5:], tPCGF, label="PCGF")
pyplot.plot(omg[5:], tPBICGSTAB, label="PBICGSTAB")
pyplot.xlabel("Relaxation factor")
pyplot.ylabel("Time for solve")
pyplot.legend(loc=0);
"""
Explanation: 2D Poisson Problem
We solve the following problem:
\begin{equation}
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = -8\pi^2\cos{(2\pi x)}\cos{(2\pi y)}
\end{equation}
with boundary conditions:
\begin{equation}
\left.\frac{\partial u}{\partial x}\right|_{x=0}=\left.\frac{\partial u}{\partial x}\right|_{x=1}=\left.\frac{\partial u}{\partial y}\right|_{y=0}=\left.\frac{\partial u}{\partial y}\right|_{y=1}=0
\end{equation}
The exact solution is
\begin{equation}
u(x, y) = \cos{(2\pi x)}\cos{(2\pi y)}
\end{equation}
Test set 1:
Number of GPU: 1 (K40)
Machine: Theo
<font color='red'>Top solver: </font>
<font color='red'>Preconditioned CG (PCG)</font>
<font color='red'>Flexible Preconditioned CG (PCGF)</font>
<font color='red'>Preconditioned Stable BiCG (PBICGSTAB)</font>
Tolerance: absolute residual reaches $10^{-12}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
Cycle: V
Pre-sweep: 1
Post-Sweep: 1
Coarsest sweep: 1
Maximum size of coarsest grid: 100
<font color='red'>Relaxation factor of the block Jacobi: from 0.1 to 1 </font>
Grid size: 3750 $\times$ 3750
End of explanation
"""
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([5.152, 3.822, 4.314]),
"W": numpy.array([5.568, 5.89, 6.39]),
"F": numpy.array([5.886, 4.232, 4.53])}
errL24 = {"V": numpy.array([0.052, 0.152, 2.004]),
"W": numpy.array([0.008, 0.03, 0.01]),
"F": numpy.array([2.766, 0.002, 0.23])}
errU24 = {"V": numpy.array([0.018, 0.078, 1.986]),
"W": numpy.array([0.012, 0.04, 0.02]),
"F": numpy.array([3.174, 0.008, 0.89])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.248, 5.238, 4.15]),
"W": numpy.array([7.382, 5.53, 7.456]),
"F": numpy.array([5.672, 5.58, 4.24])}
errL12 = {"V": numpy.array([0.008, 0.368, 0]),
"W": numpy.array([0.002, 0, 1.656]),
"F": numpy.array([0.992, 1.22, 0])}
errU12 = {"V": numpy.array([0.002, 1.472, 0]),
"W": numpy.array([0.008, 0, 0.424]),
"F": numpy.array([0.658, 1.83, 0])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
"""
Explanation: Test set 2:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reaches $10^{-10}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
"""
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.102, 2.444, 2.166]),
"W": numpy.array([3.716, 4.376, 5.4]),
"F": numpy.array([2.872, 3.31, 3.78])}
errL24 = {"V": numpy.array([0.032, 0.044, 0.006]),
"W": numpy.array([0.066, 0.316, 0.99]),
"F": numpy.array([0.012, 0.49, 0.88])}
errU24 = {"V": numpy.array([0.058, 0.016, 0.004]),
"W": numpy.array([0.074, 1.214, 0.67]),
"F": numpy.array([0.008, 0.74, 0.23])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.238, 4.42, 3.7]),
"W": numpy.array([5.272, 5.174, 5.396]),
"F": numpy.array([4.23, 3.974, 4.58])}
errL12 = {"V": numpy.array([0.608, 0, 0]),
"W": numpy.array([0.402, 0.004, 0.156]),
"F": numpy.array([0.05, 0.004, 0.61])}
errU12 = {"V": numpy.array([2.422, 0, 0]),
"W": numpy.array([1.608, 0.006, 0.044]),
"F": numpy.array([0.08, 0.016, 0.92])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
"""
Explanation: Test set 3:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reaches <font color='red'>$10^{-8}$</font>
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
"""
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([3.06, 2.422, 2.39]),
"W": numpy.array([3.802, 4.376, 5.406]),
"F": numpy.array([2.878, 3.382, 5.568])}
errL24 = {"V": numpy.array([0.05, 0.022, 0.23]),
"W": numpy.array([0.002, 0.306, 1.006]),
"F": numpy.array([0.008, 0.552, 0.668])}
errU24 = {"V": numpy.array([0.02, 0.038, 0.91]),
"W": numpy.array([0.008, 1.214, 0.674]),
"F": numpy.array([0.012, 0.988, 0.452])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([6.23, 4.376, 3.702]),
"W": numpy.array([5.266, 5.174, 5.43]),
"F": numpy.array([4.208, 4.288, 4.572])}
errL12 = {"V": numpy.array([0.65, 0.126, 0.002]),
"W": numpy.array([0.406, 0.004, 0.01]),
"F": numpy.array([0.028, 0.318, 0.602])}
errU12 = {"V": numpy.array([2.42, 0.044, 0.008]),
"W": numpy.array([1.614, 0.006, 0.01]),
"F": numpy.array([0.112, 1.272, 0.908])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
"""
Explanation: Test set 4:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reaches $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: <font color='red'>ALL</font>
Smoother: Block Jacobi
Coarsest solver: Block Jacobi
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
"""
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3])
time24 = {"V": numpy.array([4.512, 4.508, 5.152]),
"W": numpy.array([5.626, 5.63, 5.622]),
"F": numpy.array([5.268, 5.286, 6.822])}
errL24 = {"V": numpy.array([0.332, 0.338, 0.962]),
"W": numpy.array([0.026, 0.02, 0.022]),
"F": numpy.array([1.088, 1.026, 0.012])}
errU24 = {"V": numpy.array([1.278, 1.292, 0.638]),
"W": numpy.array([0.034, 0.02, 0.028]),
"F": numpy.array([1.562, 1.534, 0.008])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([4.81, 5.212, 6.016]),
"W": numpy.array([7.402, 7.406, 7.4]),
"F": numpy.array([5.584, 5.53, 5.554])}
errL12 = {"V": numpy.array([0, 0.402, 1.206]),
"W": numpy.array([0.002, 0.006, 0]),
"F": numpy.array([0.084, 0.03, 0.054])}
errU12 = {"V": numpy.array([0, 1.608, 0.804]),
"W": numpy.array([0.008, 0.014, 0]),
"F": numpy.array([0.056, 0.1, 0.086])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iteration on coarsest grid")
pyplot.xlim(0, 4)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(2, 10)
pyplot.legend(loc=0)
"""
Explanation: Test set 5:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reaches $10^{-10}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: <font color='red'>LU Decomposition</font>
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
"""
cycle = ["V", "W", "F"]
sweep = numpy.array([1, 2, 3, 4, 5])
time24 = {"V": numpy.array([2.75, 2.186, 1.958, 1.832, 1.782]),
"W": numpy.array([6.9, 8.3, 8.236, 9.762, 15.764]),
"F": numpy.array([4.204, 5.106, 6.574, 5.782, 6.68])}
errL24 = {"V": numpy.array([0.06, 0.066, 0.058, 0.002, 0.002]),
"W": numpy.array([1.61, 1.95, 0.016, 0.012, 0.064]),
"F": numpy.array([0.774, 1.066, 1.554, 0.272, 0.04])}
errU24 = {"V": numpy.array([0.04, 0.044, 0.042, 0.008, 0.008]),
"W": numpy.array([0.41, 1.27, 0.014, 0.038, 0.076]),
"F": numpy.array([1.046, 0.704, 0.426, 0.078, 0.02])}
err24 = {cyc: numpy.array(numpy.vstack((errL24[cyc], errU24[cyc]))) for cyc in cycle}
time12 = {"V": numpy.array([2.396, 2.18, 8.164, 1.98, 2.174]),
"W": numpy.array([8.316, 13.444, 16.048, 23.508, 18.928]),
"F": numpy.array([9.094, 8.608, 6.818, 8.416, 9.832])}
errL12 = {"V": numpy.array([0.006, 0, 6.134, 0, 0.174]),
"W": numpy.array([0.006, 0.044, 0.658, 3.685, 0.048]),
"F": numpy.array([1.624, 0.018, 0.128, 1.126, 1.832])}
errU12 = {"V": numpy.array([0.004, 0, 24.486, 0, 0.696]),
"W": numpy.array([0.014, 0.056, 2.532, 2.462, 0.062]),
"F": numpy.array([0.416, 0.012, 0.042, 1.674, 1.838])}
err12 = {cyc: numpy.array(numpy.vstack((errL12[cyc], errU12[cyc]))) for cyc in cycle}
pyplot.figure(figsize=(16, 4))
pyplot.subplot(1, 2, 1)
pyplot.title("12 Devices")
pyplot.errorbar(sweep, time12["V"], yerr = err12["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time12["W"], yerr = err12["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time12["F"], yerr = err12["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on the coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
pyplot.subplot(1, 2, 2)
pyplot.title("24 Devices")
pyplot.errorbar(sweep, time24["V"], yerr = err24["V"], fmt='ks-', label="V")
pyplot.errorbar(sweep, time24["W"], yerr = err24["W"], fmt='r^-', label="W")
pyplot.errorbar(sweep, time24["F"], yerr = err24["F"], fmt='bo-', label="F")
pyplot.xlabel("Number of iterations on the coarsest grid")
pyplot.xlim(0, 6)
pyplot.ylabel("Time for solve (sec)")
pyplot.ylim(1, 30)
pyplot.legend(loc=0)
"""
Explanation: Test set 6:
<font color='red'>Number of GPU: 12, 24 (K20)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: <font color='red'>Gauss-Seidel</font>
<font color='red'>Cycle: V, W, F</font>
Pre-sweep: 1
Post-Sweep: 1
<font color='red'>Coarsest sweep: 1, 2, 3, 4, 5</font>
Maximum size of coarsest grid: 10
Relaxation factor of the block Jacobi: 0.8
Grid size: 9000 $\times$ 9000 (matrix size: 81M $\times$ 81M)
End of explanation
"""
N_1GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_1GPU = numpy.array([0.024, 0.017, 0.10, 0.2, 1.6])
err_1GPU = numpy.array([0., 0., 0., 0., 0.])
N_2GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_2GPU = numpy.array([0.04, 0.11, 0.09, 0.17, 1.19])
err_2GPU = numpy.array([0., 0., 0., 0., 0.])
N_4GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_4GPU = numpy.array([0.11, 0.10, 0.09, 0.57, 0.69])
err_4GPU = numpy.array([0., 0., 0., 0., 0.])
N_8GPU = numpy.array([1000, 8000, 64000, 512000, 4096000])
Time_8GPU = numpy.array([0.09, 0.09, 0.53, 0.4, 0.44])
err_8GPU = numpy.array([0., 0., 0., 0., 0.])
nGPU = numpy.array([1, 2, 4, 8])
Time_N1K_GPU = numpy.array([0.024, 0.04, 0.11, 0.09])
Time_N8K_GPU = numpy.array([0.017, 0.11, 0.10, 0.09])
Time_N64K_GPU = numpy.array([0.10, 0.09, 0.09, 0.53])
Time_N512K_GPU = numpy.array([0.2, 0.17, 0.57, 0.4])
Time_N4M_GPU = numpy.array([1.6, 1.19, 0.69, 0.44])
N_1CPU = numpy.array([64000, 512000, 4096000])
Time_1CPU = numpy.array([0.23, 3.06, 45.37])
err_1CPU = numpy.array([0., 0., 0.])
N_2CPU = numpy.array([64000, 512000, 4096000])
Time_2CPU = numpy.array([0.17, 3.12, 39.05])
err_2CPU = numpy.array([0., 0., 0.])
N_4CPU = numpy.array([64000, 512000, 4096000])
Time_4CPU = numpy.array([0.09, 1.65, 21.88])
err_4CPU = numpy.array([0., 0., 0.])
N_8CPU = numpy.array([64000, 512000, 4096000])
Time_8CPU = numpy.array([0.05, 1.22, 18.3])
err_8CPU = numpy.array([0., 0., 0.])
nCPU = numpy.array([1, 2, 4, 8])
Time_N64K_CPU = numpy.array([0.23, 0.17, 0.09, 0.05])
Time_N512K_CPU = numpy.array([3.06, 3.12, 1.65, 1.22])
Time_N4M_CPU = numpy.array([45.37, 39.05, 21.88, 18.3])
#pyplot.figure(figsize=(16,8), dpi=400)
#pyplot.subplot(1, 2, 1)
#pyplot.title("Weak Scaling")
#ax = pyplot.gca()
#ax.set_xscale("log", nonposx='clip')
#ax.set_yscale("log", nonposx='clip')
#pyplot.errorbar(N_1GPU, Time_1GPU, yerr = err_1GPU, fmt='ks-', label="1 GPU")
#pyplot.errorbar(N_2GPU, Time_2GPU, yerr = err_2GPU, fmt='r^-', label="2 GPU")
#pyplot.errorbar(N_4GPU, Time_4GPU, yerr = err_4GPU, fmt='gx-', label="4 GPU")
#pyplot.errorbar(N_8GPU, Time_8GPU, yerr = err_8GPU, fmt='bo-', label="8 GPU")
#
#pyplot.errorbar(N_1CPU, Time_1CPU, yerr = err_1CPU, fmt='ks--', label="1 CPU")
#pyplot.errorbar(N_2CPU, Time_2CPU, yerr = err_2CPU, fmt='r^--', label="2 CPU")
#pyplot.errorbar(N_4CPU, Time_4CPU, yerr = err_4CPU, fmt='gx--', label="4 CPU")
#pyplot.errorbar(N_8CPU, Time_8CPU, yerr = err_8CPU, fmt='bo--', label="8 CPU")
#pyplot.xlabel("Number of total grid points")
#pyplot.ylabel("Wall time for solve (sec)")
#pyplot.legend(loc=0)
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Weak Scaling")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')  # y-axis log scale takes nonposy, not nonposx
#pyplot.plot(nGPU, Time_N1K_GPU, 'ks-', label="GPU, 10x10x10")
#pyplot.plot(nGPU, Time_N8K_GPU, 'r^-', label="GPU, 20x20x20")
pyplot.plot(nGPU, Time_N64K_GPU, 'rx-', label="GPU, 40x40x40")
pyplot.plot(nGPU, Time_N512K_GPU, 'go-', label="GPU, 80x80x80")
pyplot.plot(nGPU, Time_N4M_GPU, 'b>-', label="GPU, 160x160x160")
pyplot.plot(nCPU, Time_N64K_CPU, 'rx--', label="CPU, 40x40x40")
pyplot.plot(nCPU, Time_N512K_CPU, 'go--', label="CPU, 80x80x80")
pyplot.plot(nCPU, Time_N4M_CPU, 'b>--', label="CPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPUs")
pyplot.ylabel("Wall time for solve (sec)")
#pyplot.ylim(0, 4)
pyplot.legend(loc=0)
"""
Explanation: 3D Poisson Problem
Weak Scaling Test
<font color='red'>Number of GPU: 1, 2, 4, 8 (K20m)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Gauss-Seidel
Cycle: V
Pre-sweep: 1
Post-Sweep: 1
Coarsest sweep: 4
Maximum size of coarsest grid: 2
Relaxation factor of the block Jacobi: 0.8
<font color='red'>Grid size: 10x10x10, 20x20x20, 40x40x40, 80x80x80, 160x160x160</font>
End of explanation
"""
N_4M_GPU = numpy.array([1, 2, 4, 8, 16, 32])
Time_4M_GPU_Raw = numpy.array([[1.04, 1.11, 0.86, 0.5, 3.7, 3.49],
[1.04, 1.11, 0.86, 0.49, 3.72, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.51],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.47],
[1.04, 1.11, 0.86, 0.49, 3.69, 3.48]])
Time_4M_GPU = numpy.average(Time_4M_GPU_Raw, axis=0)
N_8M_GPU = numpy.array([4, 8, 16, 32])
Time_8M_GPU_Raw = numpy.array([[1.37, 0.81, 0.57, 2.1],
[1.44, 0.81, 0.58, 2.09],
[1.37, 0.81, 0.58, 2.09],
[1.37, 0.82, 0.58, 2.09],
[1.37, 0.81, 0.59, 2.09]])
Time_8M_GPU = numpy.average(Time_8M_GPU_Raw, axis=0)
N_4M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_4M_CPU = numpy.array([9.53, 4.72, 3.06, 2.19, 1.74, 1.53, 1.31, 1.13])
N_8M_CPU = numpy.array([1, 2, 3, 4, 5, 6, 7, 8])
Time_8M_CPU = numpy.array([20.65, 10.33, 6.65, 4.92, 3.9, 3.27, 2.82, 2.45])
N_4M_GPU_OPT = numpy.array([1, 2, 4, 8, 16])
Time_4M_GPU_OPT = numpy.array([0.81, 0.67, 0.42, 0.31, 0.26])
pyplot.figure(figsize=(16,8), dpi=400)
pyplot.title("Strong Scaling (GPU)")
ax = pyplot.gca()
ax.set_xscale("log", nonposx='clip')
ax.set_yscale("log", nonposy='clip')  # y-axis log scale takes nonposy, not nonposx
pyplot.plot(N_4M_GPU, Time_4M_GPU, 'ks-', label="GPU, 200x200x100")
pyplot.plot(N_8M_GPU, Time_8M_GPU, 'rx-', label="GPU, 200x200x200")
pyplot.plot(N_4M_CPU, Time_4M_CPU, 'ks--', label="CPU, 200x200x100")
pyplot.plot(N_8M_CPU, Time_8M_CPU, 'rx--', label="CPU, 200x200x200")
pyplot.plot(N_4M_GPU_OPT, Time_4M_GPU_OPT, 'ks-.', label="GPU, 160x160x160")
pyplot.xlabel("Number of GPUs / CPU-Nodes (12 CPUs per node)")
pyplot.ylabel("Wall time for solve (sec)")
pyplot.legend(loc=0)
"""
Explanation: Strong Scaling Test
<font color='red'>Number of GPU: 1, 2, 4, 8, 16, 32 (K20m)</font>
Machine: ivygpu-noecc
Top solver: Preconditioned CG (PCG)
Tolerance: absolute residual reach $10^{-8}$
Preconditioner: AMG
AMG algorithm: Classical
Selector: PMIS
Interpolation: D2
Strength: AHAT
Smoother: Block Jacobi
Coarsest solver: Gauss-Seidel
Cycle: V
Pre-sweep: 1
Post-Sweep: 1
Coarsest sweep: 4
Maximum size of coarsest grid: 2
Relaxation factor of the block Jacobi: 0.8
<font color='red'>Grid size: 200x200x100, 200x200x200</font>
End of explanation
"""
|
robertoalotufo/ia898 | src/sobel.ipynb | mit | import numpy as np
def sobel(f):
from pconv import pconv
Sx = np.array([[1.,2.,1.],
[0.,0.,0.],
[-1.,-2.,-1.]])
Sy = np.array([[1.,0.,-1.],
[2.,0.,-2.],
[1.,0.,-1.]])
fx = pconv(f, Sx)
fy = pconv(f, Sy)
mag = np.abs(fx + fy*1j)
theta = np.arctan2(fy,fx)
return mag,theta
"""
Explanation: Function sobel
Synopse
Sobel edge detection.
mag,theta = sobel(f)
mag,theta: Magnitude and angle.
f: input image.
End of explanation
"""
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python sobel.ipynb
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
import matplotlib.image as mpimg
"""
Explanation: Description
Computes edge detection using the Sobel operator, returning the gradient magnitude and angle at each pixel.
Examples
End of explanation
"""
if testing:
f = np.array([[0,1,0,0],
[0,0,0,0],
[0,0,0,0]],dtype='uint8')
m,t = ia.sobel(f)
print('m:\n',m)
print('t:\n',t)
"""
Explanation: Numerical Example
End of explanation
"""
if testing:
f = mpimg.imread('../data/cameraman.tif')
(g,a) = ia.sobel(f)
nb = ia.nbshow(2)
nb.nbshow(ia.normalize(g),title='Sobel')
nb.nbshow(ia.normalize(np.log(g+1)),title='Log of sobel')
nb.nbshow()
"""
Explanation: Image examples
Example 1.
End of explanation
"""
if testing:
f = ia.circle([200,300], 90, [100,150])
m,t = ia.sobel(f)
dt = np.select([m > 2], [t])
nb = ia.nbshow(3)
nb.nbshow(f,title='Image f')
nb.nbshow(ia.normalize(m), title='Magnitude of Sobel filtering')
nb.nbshow(ia.normalize(dt), title='Angle of edges with magnitude above 2')
nb.nbshow()
"""
Explanation: Example 2.
End of explanation
"""
|
cliburn/sta-663-2017 | homework/06_Making_Python_Faster_2_Solutions.ipynb | mit | import requests
from bs4 import BeautifulSoup
def listFD(url, ext=''):
page = requests.get(url).text
soup = BeautifulSoup(page, 'html.parser')
return [url + node.get('href') for node in soup.find_all('a')
if node.get('href').endswith(ext)]
site = 'http://people.duke.edu/~ccc14/misc/'
ext = 'png'
for i, file in enumerate(listFD(site, ext)):
if i == 5:
break
print(file)
def download_one(url, path):
r = requests.get(url, stream=True)
img = r.raw.read()
with open(path, 'wb') as f:
f.write(img)
%%time
for url in listFD(site, ext):
filename = os.path.split(url)[-1]
download_one(url, filename)
%%time
from concurrent.futures import ThreadPoolExecutor
args = [(url, os.path.split(url)[-1])
for url in listFD(site, ext)]
with ThreadPoolExecutor(max_workers=4) as pool:
pool.map(lambda x: download_one(x[0], x[1]), args)
%%time
from multiprocessing import Pool
args = [(url, os.path.split(url)[-1])
for url in listFD(site, ext)]
with Pool(processes=4) as pool:
pool.starmap(download_one, args)
"""
Explanation: Parallel Processing and C++
1. (25 points) Accelerating network bound procedures.
Print the names of the first 5 PNG images on the URL http://people.duke.edu/~ccc14/misc/. (10 points)
Write a function that uses a for loop to download all images and time how long it takes (5 points)
Write a function that uses concurrent.futures and a thread pool to download all images and time how long it takes (5 points)
Write a function that uses multiprocessing and a process pool to download all images and time how long it takes (5 points)
End of explanation
"""
n = 100
p = 10
xs = np.random.random((n, p))
# This is the only version necessary.
# The numba and numpy versions are just for education.
def buffon():
"""Simulate dropping of one needle."""
center = np.random.random()
angle = 2*np.pi*np.random.random()
offset = 0.5 * np.sin(angle)
if (center + offset > 1) or (center - offset < 0):
return 1
else:
return 0
def buffon_python(n):
"""Calcualte π using Buffon's needle method."""
crosses = 0
for i in range(n):
crosses += buffon()
return n/crosses
def buffon_numpy(n):
"""Calcualte π using Buffon's needle method."""
centers = np.random.uniform(0, 1, n)
angles = np.random.uniform(0, 2*np.pi, n)
offset = 0.5 * np.sin(angles)
crosses = np.sum((centers + offset > 1) | (centers - offset < 0))
return n/crosses
import numba
@numba.jit(nopython=True)
def buffon_():
"""Simulate dropping of one needle."""
center = np.random.random()
angle = 2*np.pi*np.random.random()
offset = 0.5 * np.sin(angle)
if (center + offset > 1) or (center - offset < 0):
return 1
else:
return 0
@numba.jit(nopython=True)
def buffon_numba(n):
"""Calcualte π using Buffon's needle method."""
crosses = 0
for i in range(n):
crosses += buffon_()
return n/crosses
%%time
n = int(1e6)
print(buffon_python(n))
# force JIT compilation before timing
print(buffon_numba(100))
%%time
n = int(1e6)
print(buffon_numba(n))
%%time
n = int(1e6)
print(buffon_numpy(n))
from concurrent.futures import ProcessPoolExecutor
def buffon_pool(n, f, k):
with ProcessPoolExecutor(max_workers=k) as pool:
return np.mean(list(pool.map(f, [n//k] * k)))
%%time
n = int(1e6)
k = 4
print([n/k] * k)
print(buffon_pool(n, buffon_python, 4))
"""
Explanation: 2. (25 points) Accelerating CPU bound procedures
Use the insanely slow Buffon's needle algorithm to estimate $\pi$. Suppose the needle is of length 1, and the lines are also 1 unit apart. Write a function to simulate the dropping of a pin with a random position and random angle, and return 0 if it does not cross a line and 1 if it does. Since the problem is periodic, you can assume that the bottom of the pin falls within (0, 1) and check if it crosses the line y=0 or y=1. (10 points)
Calculate pi from dropping n=10^6 pins and time it (10 points)
Use concurrent.futures and a process pool to parallelize your solution and time it.
End of explanation
"""
%%file hw6_ex3.cpp
#include <iostream>
#include <fstream>
#include <armadillo>
using std::cout;
using std::ofstream;
int main()
{
using namespace arma;
vec x = linspace<vec>(10.0,15.0,10);
vec eps = 10*randn<vec>(10);
vec y = 3*x%x - 7*x + 2 + eps;
cout << "x:\n" << x << "\n";
cout << "y:\n" << y << "\n";
cout << "Lenght of x is: " << norm(x) << "\n";
cout << "Lenght of y is: " << norm(y) << "\n";
cout << "Distance(x, y) is: " << norm(x -y) << "\n";
cout << "Correlation(x, y) is: " << cor(x, y) << "\n";
mat A = join_rows(ones<vec>(10), x);
A = join_rows(A, x%x);
cout << "A:\n" << A << "\n";
vec b = solve(A, y);
cout << "b:\n" << b << "\n";
ofstream fout1("x.txt");
x.print(fout1);
ofstream fout2("y.txt");
y.print(fout2);
ofstream fout3("b.txt");
b.print(fout3);
}
%%bash
g++ -std=c++11 hw6_ex3.cpp -o hw6_ex3 -larmadillo
./hw6_ex3
"""
Explanation: 3. (25 points) Use C++ to
Generate 10 $x$-coordinates linearly spaced between 10 and 15
Generate 10 random $y$-values as $y = 3x^2 − 7x + 2 + \epsilon$ where $\epsilon∼10N(0,1)$
Find the norm of $x$ and $y$ considered as length 10-vectors
Find the Euclidean distance between $x$ and $y$
Solve the linear system to find a quadratic fit for this data
You may wish to use armadillo or eigen to solve this exercise.
End of explanation
"""
n = 10
x = np.linspace(0, 10, n)
y = 3*x**2 - 7*x + 2 + np.random.normal(0, 10, n)
X = np.c_[np.ones(n), x, x**2]
beta = np.linalg.lstsq(X, y)[0]
beta
plt.scatter(x, y)
plt.plot(x, X @ beta, 'red')
pass
%%file wrap.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['./eigen']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/eigen.h>
#include <Eigen/LU>
namespace py = pybind11;
// Note: This direct translation is not the most stable or efficient way to solve this
Eigen::VectorXd least_squares(Eigen::MatrixXd X, Eigen::VectorXd y) {
auto XtX = X.transpose() * X;
auto Xty = X.transpose() * y;
return XtX.inverse() * Xty;
}
PYBIND11_PLUGIN(wrap) {
pybind11::module m("wrap", "auto-compiled c++ extension");
m.def("least_squares", &least_squares);
return m.ptr();
}
n = 10
x = np.linspace(0, 10, n)
y = 3*x**2 - 7*x + 2 + np.random.normal(0, 10, n)
X = np.c_[np.ones(n), x, x**2]
import cppimport
m = cppimport.imp("wrap")
beta = m.least_squares(X, y)
beta
plt.scatter(x, y)
plt.plot(x, X @ beta, 'red')
pass
"""
Explanation: 4. (25 points) 4. Write a C++ function that uses the eigen library to solve the least squares linear problem
$$
\beta = (X^TX)^{-1}X^Ty
$$
for a matrix $X$ and vector $y$ and returns the vector of coefficients $\beta$. Wrap the function for use in Python and call it like so
beta <- least_squares(X, y)
where $X$ and $y$ are given below.
Wrap the function so that it can be called from Python and compare with the np.linalg.lstsq solution shown.
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.19/_downloads/cfc20c17238f93690fc049d714cab718/plot_read_inverse.ipynb | bsd-3-clause | # Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator
from mne.viz import set_3d_view
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fname_trans = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
inv_fname = data_path
inv_fname += '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inv = read_inverse_operator(inv_fname)
print("Method: %s" % inv['methods'])
print("fMRI prior: %s" % inv['fmri_prior'])
print("Number of sources: %s" % inv['nsource'])
print("Number of channels: %s" % inv['nchan'])
src = inv['src'] # get the source space
# Get access to the triangulation of the cortex
print("Number of vertices on the left hemisphere: %d" % len(src[0]['rr']))
print("Number of triangles on left hemisphere: %d" % len(src[0]['use_tris']))
print("Number of vertices on the right hemisphere: %d" % len(src[1]['rr']))
print("Number of triangles on right hemisphere: %d" % len(src[1]['use_tris']))
"""
Explanation: Reading an inverse operator
The inverse operator's source space is shown in 3D.
End of explanation
"""
fig = mne.viz.plot_alignment(subject='sample', subjects_dir=subjects_dir,
trans=fname_trans, surfaces='white', src=src)
set_3d_view(fig, focalpoint=(0., 0., 0.06))
"""
Explanation: Show result on 3D source space
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nasa-giss/cmip6/models/sandbox-2/ocnbgchem.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-2', 'ocnbgchem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-2
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
"""
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
"""
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
"""
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
"""
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
"""
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
"""
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
"""
|
ffmmjj/intro_to_data_science_workshop | 03-Delimitação de grupos de flores.ipynb | apache-2.0 | import pandas as pd
iris = pd.read_csv('datasets/iris_without_classes.csv')  # Load 'datasets/iris_without_classes.csv'
iris.head()  # Show the first five rows with head() to check that the "Class" column is gone
"""
Explanation: Suppose we did not know how many different species are present in the iris dataset. How could we discover that information, at least approximately, from the data alone?
One possible approach would be to plot the data in a scatterplot and try to identify distinct groups visually. The iris dataset, however, has four feature dimensions, so it cannot be visualized all at once (only one pair of features at a time).
To visualize the full dataset as a 2D scatterplot, we can use dimensionality-reduction techniques to compress it down to two dimensions while losing little structural information.
Loading the data
End of explanation
"""
from sklearn.decomposition import PCA
RANDOM_STATE=1234
pca_model = PCA(n_components=2, random_state=RANDOM_STATE)  # Create a PCA object with two components
iris_2d = pca_model.fit_transform(iris)  # Use fit_transform() to reduce the dataset to two dimensions
import matplotlib.pyplot as plt
%matplotlib inline
plt.scatter(iris_2d[:, 0], iris_2d[:, 1])  # Scatterplot of the reduced dataset
plt.show()  # Display the plot
"""
Explanation: Dimensionality reduction
We will use scikit-learn's PCA algorithm to reduce the dataset to two dimensions.
End of explanation
"""
# Create two KMeans models: one with two clusters and another with three clusters
# Store the cluster labels predicted by each model
from sklearn.cluster import KMeans
model2 = KMeans(n_clusters=2, random_state=RANDOM_STATE).fit(iris)  # KMeans object expecting two clusters
labels2 = model2.predict(iris)  # Cluster label of each example in the dataset, using predict()
model3 = KMeans(n_clusters=3, random_state=RANDOM_STATE).fit(iris)  # KMeans object expecting three clusters
labels3 = model3.predict(iris)  # Cluster label of each example in the dataset, using predict()
# Scatterplot of the reduced dataset, coloring each point according to the cluster
# it belongs to under the two-cluster KMeans
plt.scatter(iris_2d[:, 0], iris_2d[:, 1], c=labels2)
plt.show()
# Scatterplot of the reduced dataset, coloring each point according to the cluster
# it belongs to under the three-cluster KMeans
plt.scatter(iris_2d[:, 0], iris_2d[:, 1], c=labels3)
plt.show()
"""
Explanation: How many distinct groups can you identify?
Discovering clusters with K-Means
The problem described above can be framed as a clustering problem. Clustering finds groups of examples that are similar to other examples in the same group but different from examples belonging to other groups.
In this example, we will use scikit-learn's KMeans algorithm to find clusters in the dataset.
One limitation of KMeans is that it must receive the expected number of clusters as an argument, so you either need some domain knowledge to guess a reasonable number of groups, or you can try different numbers of clusters and see which one gives the best result.
End of explanation
"""
|
rdhyee/dlab-finance | basic-taq/Generator examples.ipynb | isc | from glob import glob
import raw_taq
import pandas as pd
import numpy as np
from statistics import mode
def print_stats(chunk):
#find the max bid price
max_price = max(chunk['Bid_Price'])
#find the min bid price
min_price = min(chunk['Bid_Price'])
#find the mean of bid price
avg_price = np.mean(chunk['Bid_Price'])
#find the mod of bid price
try:
mod_price = mode(chunk['Bid_Price'])
except StatisticsError:
mod_price = np.nan
#find the sd of bid price
sd_price = np.std(chunk['Bid_Price'])
print("Max bid price: ", max_price, "\n", "Min bid price: ", min_price, "\n",
"Mean bid price: ", avg_price, "\n", "Mod bid price: ", mod_price, "\n",
"Standard deviation bid price: ", sd_price)
# You can run this if you update the raw_taq.py file
from importlib import reload
reload(raw_taq)
"""
Explanation: Using generators to get numpy chunks out of TAQ data
End of explanation
"""
# I grab the [0]'th fname in the glob
fname = glob('../local_data/EQY_US_ALL_BBO_*.zip')[0]
test_run = raw_taq.TAQ2Chunks(fname)
chunk_gen = test_run.convert_taq(20)
type(chunk_gen)
# You can get one chunk this way
chunk = next(chunk_gen)
chunk[0]
# If you want just the type
chunk.dtype
# Numpy record arrays support string indexing to get columns
print(chunk['Bid_Price'])
print(chunk["Ask_Price"])
# Numeric indexing gives a row
chunk[0]
# And you can do both
chunk['Bid_Price'][6]
# Or
chunk[6]['Bid_Price']
"""
Explanation: Here, we grab whatever BBO file we can find
End of explanation
"""
chunk_df = pd.DataFrame(chunk)
chunk_df
# note that time is not correctly parsed yet:
chunk_df.Time
"""
Explanation: You can also easily convert numpy record arrays to pandas DataFrames
End of explanation
"""
chunk.dtype
fname = glob('../local_data/EQY_US_ALL_BBO_*.zip')[0]
local_taq = raw_taq.TAQ2Chunks(fname)
chunk_gen = local_taq.convert_taq(20)
first_chunk = next(chunk_gen)
curr_symbol = first_chunk['Symbol_root'][0]
accum = pd.DataFrame(first_chunk)
processed_symbols = 0
for chunk in chunk_gen:
    where_symbol = curr_symbol == chunk['Symbol_root']
    if where_symbol.all():
        accum = accum.append(pd.DataFrame(chunk))  # DataFrame.append returns a new frame, so re-assign
    else:
        same = chunk[where_symbol]
        accum = accum.append(pd.DataFrame(same))
        # Compute the stats
        print('Current symbol:', curr_symbol, len(accum), 'records')
print_stats(accum)
processed_symbols += 1
if processed_symbols > 3:
break
diff = chunk[~where_symbol]
accum = pd.DataFrame(diff)
curr_symbol = accum.Symbol_root[0]
b'AA ' == b'AA '
"""
Explanation: Goal: Compute some summary statistics across a few securities in the TAQ file
Processing an entire TAQ file will take a long time. So, maybe just run through the chunks for the first two securities (you can then exit out of a loop once you see the third security / symbol).
A complete approach
End of explanation
"""
def simple_fun(l):
for item in l:
yield item
simple_gen = simple_fun(['a', 'b', 1, 2])
type(simple_gen)
next(simple_gen)
for item in simple_fun(['a', 'b', 1, 2]):
print(item)
"""
Explanation: some simple examples of how generator functions work
End of explanation
"""
|
cesans/mapache | features.ipynb | bsd-3-clause | ciudadanos = mapache.Party('Ciudadanos',
logo_url = 'https://www.ciudadanos-cs.org/var/public/sections/page-imagen-del-partido/logo-ciudadanos.jpg',
short_name = 'C\'s',
full_name = 'Ciudadanos - Partido de la Ciudadanía')
ciudadanos.show()
"""
Explanation: Managing Parties
A party is created from its name(s) and the party logo, which will be used to represent the party.
End of explanation
"""
ciudadanos = mapache.parseutils.party_from_wiki('https://en.wikipedia.org/wiki/Citizens_(Spanish_political_party)')
ciudadanos.show()
"""
Explanation: Getting party information from wikipedia
The wiki page of the party can be used to extract the information:
End of explanation
"""
wiki_url = "https://en.wikipedia.org/wiki/Opinion_polling_for_the_Spanish_general_election,_2016"
tables = mapache.parseutils.tables_from_wiki(wiki_url)
print('{0} tables found'.format(len(tables)))
"""
Explanation: We retrieve the tables with all the polls for the 2016 Spanish general election; these contain the parties taking part.
End of explanation
"""
def parties_from_wikitable(table):
# A PartySet contains a group of parties
spanish_parties = mapache.PartySet()
# In the the opinion polls tables the first row corresponds to the parties,
# each cell except the first two and the last three correspond to a party
row = mapache.parseutils.wikitable_get_rows(table)[0]
cells = mapache.parseutils.wikitable_get_cells(row)
for c in cells[2:-3]:
# From each cell we recover:
# - The url of the party
# - The name of the party (in the url name)
# - The small logo of the party
url, name = mapache.parseutils.wikitable_get_url(c)
small_logo = mapache.parseutils.wikitable_get_imgurl(c)
# The party wiki page is fetched to get the full name and full logo
party = mapache.parseutils.party_from_wiki(url, name)
party.set_thumbnail(small_logo)
spanish_parties.add(party)
return spanish_parties
spanish_parties = parties_from_wikitable(tables[0])
display(HTML(spanish_parties.show_parties()))
display(HTML(spanish_parties.show_parties(small=True)))
"""
Explanation: The first row of the table is parsed to get the parties and their names. Party logos are extracted from their Wikipedia pages.
This function should be easy to adapt to any other table containing a list of parties
End of explanation
"""
spanish_parties_old = parties_from_wikitable(tables[1])
display(HTML(spanish_parties_old.show_parties(small=True)))
spanish_parties_old.parties
coalition_party_names = ['POD', 'IU-UPEC']
for p in coalition_party_names:
spanish_parties['UP'].add_to_coalition(spanish_parties_old[p])
spanish_parties['UP'].show()
"""
Explanation: The second table includes older opinion polls from before the coalition 'Unidos Podemos' was formed; we can add the coalition's member parties so that they are recognized in all polls.
End of explanation
"""
from dateutil.parser import parse as dateparser
votes = np.random.random(len(spanish_parties.parties))
votes /= sum(votes)
votes *= 100
pollster = 'FakePollster'
date = dateparser('18 November 2015')
party_votes = {}
for party, vote in zip(spanish_parties.parties, votes):
party_votes[party] = vote
poll = mapache.Poll(parties=party_votes, date=date, pollster='Fake poll')
print(poll)
"""
Explanation: Managing polls
We create a fake poll:
End of explanation
"""
polls = mapache.parseutils.poll_from_table(tables[0], date_column=1, party_columns=(2,-3), poll_rows=(2,-1), name="Opinion Polls",
error_column=-3, pollster_column=0,)
print('{0} polls loaded'.format(len(polls.polls)))
print(polls.polls[4])
"""
Explanation: Parsing polls
We parse the Wikipedia page to extract each poll and create a PollsList of Polls
End of explanation
"""
old_polls = mapache.parseutils.poll_from_table(tables[1], date_column=1, party_columns=(2,-3), poll_rows=(2,-1), name="Opinion Polls",
error_column=-3, pollster_column=0,)
print('{0} polls loaded'.format(len(old_polls.polls)))
polls.add(old_polls)
"""
Explanation: We add the old polls as well (from a different table):
End of explanation
"""
name = 'ciutadans'
matched = spanish_parties.match(name)
print('\'{0}\' matched to \'{1}\', which includes the names: {2}'.format(name, matched.name,
matched.get_all_names()))
"""
Explanation: Matching polls to parties
As it is likely that not all polls will use the same name for a party, mapache can match a name to the closest party
End of explanation
"""
spanish_parties.match('unidad popular')
print(old_polls.polls[1])
"""
Explanation: Using name matching it is possible to get all poll results of a party
End of explanation
"""
ciudadanos.show_color()
"""
Explanation: After matching the polls it is possible to plot them. The color corresponding to each party is automatically generated from the logo
TODO: Solve clashes between parties with similar colours
End of explanation
"""
plt.rcParams['figure.figsize'] = (12, 6)
fig, ax = plt.subplots()
parties = ['pp', 'psoe', 'up', 'cs']
for k in parties:
party_polls = polls.get_party(spanish_parties.match(k), join_coalitions=False)
dates = [x[0] for x in party_polls]
votes = [x[1] for x in party_polls]
plt.scatter(dates, votes, color=spanish_parties.match(k).color, s=40)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.grid()
plt.ylim((0,40));
"""
Explanation: Simple plots can be easily created with the usual matplotlib functions. Note that the match function does not require a specific name for the party
End of explanation
"""
plt.rcParams['figure.figsize'] = (12, 6)
fig, ax = plt.subplots()
parties = ['pp', 'psoe', 'up', 'cs']
for k in parties:
party_polls = polls.get_party(spanish_parties.match(k), join_coalitions=True)
dates = [x[0] for x in party_polls]
votes = [x[1] for x in party_polls]
plt.scatter(dates, votes, color=spanish_parties.match(k).color, s=40)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.grid()
plt.ylim((0,40));
"""
Explanation: If join_coalitions=False, the votes of the parties that are part of a coalition are not summed together; to aggregate them we can set it to True.
End of explanation
"""
main_parties = spanish_parties.extract(['PP', 'PSOE', 'Unidos Podemos', 'Cs'])
display(HTML(main_parties.show_parties(small=True)))
"""
Explanation: Mapache visualization tools
We will only visualize the four main parties, so we extract them from the party set
TODO: Automatically select the top N parties
End of explanation
"""
poll=polls.polls[4]
elections = polls.polls[-1]
"""
Explanation: Visualizing a poll
End of explanation
"""
mapache.vis.SingleBars(poll, main_parties);
"""
Explanation: Bar plot
A simple plot of the parties selected (sorted by votes)
End of explanation
"""
mapache.vis.SingleBars(poll, main_parties, elections=elections);
#Add label to the line!
"""
Explanation: An indication of the result in a different Poll (eg. the elections) can be easily added:
End of explanation
"""
wiki_2015 = "https://en.wikipedia.org/wiki/Opinion_polling_for_the_Spanish_general_election,_2015"
tables_2015 = mapache.parseutils.tables_from_wiki(wiki_2015)
parties_2015 = parties_from_wikitable(tables_2015[0])
polls_2015 = mapache.parseutils.poll_from_table(tables_2015[0], date_column=1, party_columns=(2,-3), poll_rows=(2,-1), name="Opinion Polls",
error_column=-3, pollster_column=0,)
polls_2015._name = 'Opinion Polls'
parties_2015 = parties_2015.extract(['PP', 'PSOE', 'Podemos', 'Cs'])
display(HTML(parties_2015.show_parties(small=True)))
# The last row corresponds to the election results!
elections = polls_2015.polls[0]
elections20D = mapache.PollsList()
elections20D.add(elections)
elections20D._name = 'Elections 20D'
del polls_2015.polls[0]
import imp
imp.reload(mapache.vis)
ts = mapache.vis.TimeSeries(parties_2015)
# A column with all the polls (add gps to arg)
ts.add_column(polls_2015)
# A column with the election results
ts.add_column(elections20D)
ts.show()
"""
Explanation: Visualizing several polls
Time series
A more complex visualization including many polls.
In this case we will load the polls previous to the last election.
End of explanation
"""
|
PyLCARS/PythonUberHDL | myHDL_DigitalSignalandSystems/ComplexMultiplier.ipynb | bsd-3-clause | from myhdl import *
from myhdlpeek import Peeker
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from sympy import *
init_printing()
import random
"""
Explanation: \title{myHDL Two Word Complex Multiplier}
\author{Steven K Armour}
\maketitle
This notebook/program is a walkthrough of how to design and construct a two-number complex multiplier unit in myHDL, based on the example by Guenter Dannoritzer
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc" style="margin-top: 1em;"><ul class="toc-item"><li><span><a href="#Refrances" data-toc-modified-id="Refrances-1"><span class="toc-item-num">1 </span>Refrances</a></span></li><li><span><a href="#Nonstandard-Libraries-and-tools-utilized" data-toc-modified-id="Nonstandard-Libraries-and-tools-utilized-2"><span class="toc-item-num">2 </span>Nonstandard Libraries and tools utilized</a></span></li><li><span><a href="#Complex-Multiplication-Review" data-toc-modified-id="Complex-Multiplication-Review-3"><span class="toc-item-num">3 </span>Complex Multiplication Review</a></span></li><li><span><a href="#Encoding-Real-and-Complex-in-a-Single-Input" data-toc-modified-id="Encoding-Real-and-Complex-in-a-Single-Input-4"><span class="toc-item-num">4 </span>Encoding Real and Complex in a Single Input</a></span><ul class="toc-item"><li><span><a href="#Issue:" data-toc-modified-id="Issue:-4.1"><span class="toc-item-num">4.1 </span>Issue:</a></span></li><li><span><a href="#!-Negative-number-Issue-in-MyHDL-with-concat" data-toc-modified-id="!-Negative-number-Issue-in-MyHDL-with-concat-4.2"><span class="toc-item-num">4.2 </span>! Negative number Issue in MyHDL with <code>concat</code></a></span></li></ul></li><li><span><a href="#Algorithm-test" data-toc-modified-id="Algorithm-test-5"><span class="toc-item-num">5 </span>Algorithm test</a></span><ul class="toc-item"><li><span><a href="#-" data-toc-modified-id="--5.1"><span class="toc-item-num">5.1 </span> </a></span></li><li><span><a href="#-" data-toc-modified-id="--5.2"><span class="toc-item-num">5.2 </span> </a></span></li><li><span><a href="#-" data-toc-modified-id="--5.3"><span class="toc-item-num">5.3 </span> </a></span></li><li><span><a href="#-" data-toc-modified-id="--5.4"><span class="toc-item-num">5.4 </span> </a></span></li><li><span><a href="#-" data-toc-modified-id="--5.5"><span class="toc-item-num">5.5 </span> </a></span></li></ul></li><li><span><a href="#myHDL-Implementation" data-toc-modified-id="myHDL-Implementation-6"><span class="toc-item-num">6 </span>myHDL Implementation</a></span><ul class="toc-item"><li><span><a href="#Concat-issue" data-toc-modified-id="Concat-issue-6.1"><span class="toc-item-num">6.1 </span>Concat issue</a></span></li></ul></li><li><span><a href="#Testing" data-toc-modified-id="Testing-7"><span class="toc-item-num">7 </span>Testing</a></span></li><li><span><a href="#Conversion-from-myHDL-to-verilog/vhdl" data-toc-modified-id="Conversion-from-myHDL-to-verilog/vhdl-8"><span class="toc-item-num">8 </span>Conversion from myHDL to verilog/vhdl</a></span><ul class="toc-item"><li><span><a href="#!!-Concat-issue-in-synthesis" data-toc-modified-id="!!-Concat-issue-in-synthesis-8.1"><span class="toc-item-num">8.1 </span>!! Concat issue in synthesis</a></span></li></ul></li></ul></div>
Refrances
Guenter Dannoritzer, Complex Math, http://old.myhdl.org/doku.php/projects:cplx_math [3-29-18]
Nonstandard Libraries and tools utilized
myHDL, [myhdlpeek by xesscorp](https://github.com/xesscorp/myhdlpeek) (using the experimental pandas branch), draw.io
End of explanation
"""
A, B=symbols('A, B')
a, b, c, d=symbols('a, b, c, d', real=True)
Multi=A*B; Multi
simplify(Multi.subs({A:1+1j, B:1+1j}))
Multi=Multi.subs({A:(a+1j*b), B:(c+1j*d)})
Multi
Multi=expand(Multi); Multi
re(Multi), im(Multi)
"""
Explanation: Complex Multiplication Review
End of explanation
"""
BitWidth=32
WordSize=BitWidth
ReWordLen=WordSize//2; ImWordLen=WordSize//2
print(f'The Re WordLen is {ReWordLen} bits and the Im WordLen is {ImWordLen} bits')
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
ReMax, ReMin, ImMax, ImMin
"""
Explanation: Encoding Real and Complex in a Single Input
To encode the real word and the imaginary word into a single input, we must treat the signal not as a number but as a word, and then concatenate the two words into a single word based on a well-posed concatenation rule.
The concatenation rule used here is that both the real and the imaginary part of the complex number are 2's complement words, which we concatenate with the first word as the $Re$ part and the second word as the $Im$ part, such that we have
$$Re(\text{word})+Im(\text{word})j\Rightarrow Re(\text{word})Im(\text{word})=\mathbb{Z}(\text{word} ) $$
Issue:
There is an ongoing issue with the concat function in myHDL [issue reopened 3-29-18]: https://github.com/myhdl/myhdl/issues/128#issuecomment-377370353
When that issue is resolved this notebook will be updated accordingly. But for the moment, an attempt to develop this algorithm is shown and the problem with concat is highlighted so this notebook/program can be used as a benchmark for getting concat working properly.
End of explanation
"""
R=(2**ReWordLen)//4; I=(2**ImWordLen)//8
print(f'Re: {R}, Im: {I}')
RN=intbv(R, min=ReMin, max=ReMax); RN
RNBin=''.join([str(int(i)) for i in RN])
bin(R, ReWordLen), RNBin, bin(R, ReWordLen)==RNBin
IN=intbv(I, min=ImMin, max=ImMax); IN
INBin=''.join([str(int(i)) for i in IN])
bin(I, ImWordLen), INBin, bin(I, ImWordLen)==INBin
AN=concat(RN, IN)
ANBin=''.join([str(int(i)) for i in AN])
bin(AN, WordSize), ANBin, bin(AN, WordSize)==ANBin
"""
Explanation: Test Targets
End of explanation
"""
RNBack=AN[:ReWordLen]; INBack=AN[ImWordLen:]
RNBack, RN, RNBack==RN,INBack, IN, INBack==IN
"""
Explanation: So now we can split AN to get back RN and IN
End of explanation
"""
TestNegNum=-26
print(f"""Target: {TestNegNum}
Absolote Bin: {bin(abs(TestNegNum), 8)},
Signed Bin: {bin(TestNegNum, 8)}""")
TestNegNumBV=intbv(TestNegNum)[8:]
TestNegNumBV, TestNegNumBV.signed()
R=-R; I=-I
print(f'Re: {R}, Im: {I}')
RN=intbv(R, min=ReMin, max=ReMax); RN
RNBin=''.join([str(int(i)) for i in RN])
RN.signed(), bin(R, ReWordLen), RNBin, bin(R, ReWordLen)==RNBin
IN=intbv(I, min=ImMin, max=ImMax); IN
INBin=''.join([str(int(i)) for i in IN])
IN.signed(), bin(I, ImWordLen), INBin, bin(I, ImWordLen)==INBin
"""
Explanation: But since myHDL implements 2's complement, we need to test for negative numbers, where the leading bit is the sign bit
End of explanation
"""
AN=concat(RN, IN).signed()
ANBin=''.join([str(int(i)) for i in AN])
bin(AN, WordSize), ANBin, bin(AN, WordSize)==ANBin
AN
RNBack=AN[:ReWordLen]; INBack=AN[ImWordLen:]
RNBack.signed(), RN.signed(), RNBack.signed()==RN.signed(), INBack.signed(), IN.signed(), INBack.signed()==IN.signed()
"""
Explanation: ! Negative number Issue in MyHDL with concat
At the moment concat does not handle a negative number as its leading term, because it sets the returned intbv's min to 0
End of explanation
"""
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ReMax, ReMin
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
ImMax, ImMin
"""
Explanation: Algorithm test
Here we prototype the algorithm using myHDL types, without the explicit HDL code that will be developed in the final module.
Calculate the min/max for the real and imaginary numbers in 2's complement, based on the allowed word sizes for the real and imaginary parts
End of explanation
"""
AVal=43-78j; AVal
"""
Explanation: Create a Test A number
End of explanation
"""
a=intbv(int(np.real(AVal)), min=ReMin, max=ReMax); a.signed()
b=intbv(int(np.imag(AVal)), min=ImMin, max=ImMax); b.signed()
A=concat(a, b); A, A.signed()
a=intbv(A[:ReWordLen].signed(), min=ReMin, max=ReMax)
b=intbv(A[ImWordLen:].signed(), min=ImMin, max=ImMax)
a, b,
"""
Explanation: Create the $a$ and $b$ parts of $A$, concat $a$ and $b$, and confirm that separating $A$ yields the original $a$ and $b$
End of explanation
"""
BVal=1+123j; BVal
c=intbv(int(np.real(BVal)), min=ReMin, max=ReMax); c
d=intbv(int(np.imag(BVal)), min=ImMin, max=ImMax); d
B=concat(c, d); B, B.signed()
c=intbv(B[:ReWordLen].signed(), min=ReMin, max=ReMax)
d=intbv(B[ImWordLen:].signed(), min=ImMin, max=ImMax)
c, d
"""
Explanation: Perform the same action as above but on $B$
End of explanation
"""
ac=intbv(a*c, min=ReMin, max=ReMax)
np.real(AVal)*np.real(BVal), ac, ac.signed()
bd=intbv(b.signed()*d.signed(), min=ImMin, max=ImMax)
np.imag(AVal)*np.imag(BVal), bd, bd.signed()
ad=intbv(a.signed()*d.signed(), min=min(ReMin, ImMin), max=max(ReMax, ImMax))
np.real(AVal)*np.imag(BVal), ad, ad.signed()
bc=intbv(b.signed()*c.signed(), min=min(ReMin, ImMin), max=max(ReMax, ImMax))
np.imag(AVal)*np.real(BVal), bc, bc.signed()
"""
Explanation: Use the separated $a, b, c, d$ from $A, B$ to find the sub-products of $AB=(ac−bd)+i(ad+bc)$
$$ac, bd \in Re$$ $$ ad, bc \in Im$$
End of explanation
"""
Re=intbv(ac-bd, min=min(ac.min, bd.min), max=max(ac.max, bd.max))
np.real(AVal)*np.real(BVal)-np.imag(AVal)*np.imag(BVal), Re.signed()
Im=intbv(ad+bc, min=min(ad.min, bc.min), max=max(ad.max, bc.max))
np.real(AVal)*np.imag(BVal)+np.imag(AVal)*np.real(BVal), Im, Im.signed()
AVal*BVal
"""
Explanation: Find the difference of the real and the sum of the imaginary products
End of explanation
"""
Result=concat(Re, Im)
bin(Re, Re._nrbits)+bin(Im, Im._nrbits), ''.join([str(int(i)) for i in Result])
"""
Explanation: Concat the $Re$ and $Im$ according to the concatenation rule into a single number
End of explanation
"""
@block
def CompMulti(A, B, C, ReWordLen=ReWordLen, ImWordLen=ImWordLen):
"""
Module to implement the complex multiplication of two complex numbers
Inputs:
A: A 2's `concat` of the input complex number where
the input word is a ` concat` according to ReIm
B: A 2's `concat` of the input complex number where
input word is a `concat` according to ReIm
Outputs:
C: A 2's `concat` of the output complex number product
where output world is a `concat` according to ReIm
Conversion Parameters;
ReWordLen: The word len of the real part of `concat` word
ImWordLen: The word len of the imag part of `concat` word
"""
#calc the min/max based on the ReWordLen and ImWordLen
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
#create the registers to hold the product results
ac=Signal(intbv(0, min=ReMin, max=ReMax))
bd=Signal(intbv(0, min=ImMin, max=ImMax))
ad=Signal(intbv(0, min=min(ReMin, ImMin), max=max(ReMax, ImMax)))
bc=Signal(intbv(0, min=min(ReMin, ImMin), max=max(ReMax, ImMax)))
@always_comb
def SepMulti():
"""
The combinational logic to separate input words into components and
find multiplication products
"""
ac.next=A[:ReWordLen] *B[:ReWordLen]
bd.next=A[ImWordLen:]*B[ImWordLen:]
ad.next=A[:ReWordLen]*B[ImWordLen:]
bc.next=A[ImWordLen:]*B[:ReWordLen]
#will fix when Concat is working properly
#@always_comb
#def AddSupConcat():
# """
#    Perform the real-part subtraction, the imaginary-part addition, and the
#    concat based on ReIm to form the final output
# """
#
# C.next=concat(ac-bd, ad+bc)
return instances()
"""
Explanation: myHDL Implementation
Concat issue
The 'concat' is not seeing the inputs as Signal intbv
<p align='center'>
<img src='ComplexMultiplierDrawings.png'>
End of explanation
"""
#calc Bounds
Max=(2**(WordSize-1)-1); Min=-2**(WordSize-1)
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
#Create Testing Data
Refrance=pd.DataFrame(columns=['a', 'b', 'c', 'd'])
#Generate Testing data and bind values to DF
for i in range(20):
a=random.sample(range(0, ReMax), 1)[0]
b=random.sample(range(ImMin, ImMax), 1)[0]
c=random.sample(range(0, ReMax), 1)[0]
d=random.sample(range(ImMin, ImMax), 1)[0]
Refrance.loc[Refrance.shape[0]]=[a, b, c, d]
#force the stored value in DF to be ints
Refrance=Refrance.astype(int)
Refrance
#calc exspected results
RefranceResults=Refrance.copy()
RefranceResults['A']=RefranceResults['a']+1j*RefranceResults['b']
RefranceResults['B']=RefranceResults['c']+1j*RefranceResults['d']
RefranceResults['C']=RefranceResults['A']*RefranceResults['B']
RefranceResults
Peeker.clear()
Max=(2**(WordSize-1)-1); Min=-2**(WordSize-1)
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
A=Signal(intbv(0, min=Min, max=Max)); Peeker(A, 'A')
B=Signal(intbv(0, min=Min, max=Max)); Peeker(B, 'B')
C=Signal(intbv(0, min=Min, max=Max)); Peeker(C, 'C')
a=Signal(intbv(0, min=ReMin, max=ReMax)); Peeker(a, 'a')
b=Signal(intbv(0, min=ImMin, max=ImMax)); Peeker(b, 'b')
c=Signal(intbv(0, min=ReMin, max=ReMax)); Peeker(c, 'c')
d=Signal(intbv(0, min=ImMin, max=ImMax)); Peeker(d, 'd')
DUT=CompMulti(A, B, C, ReWordLen=ReWordLen, ImWordLen=ImWordLen)
def CompMulti_TB():
@instance
def Test():
for _, j in Refrance.iterrows():
a.next, b.next, c.next, d.next =j
print(a, b, c, d)
#A.next=concat(a, b); B.next=concat(c, d)
yield delay(1)
raise StopSimulation
return instances()
sim = Simulation(DUT, CompMulti_TB(), *Peeker.instances()).run()
Peeker.to_wavedrom()
"""
Explanation: Testing
Generate random numbers within the word-size bounds for testing
End of explanation
"""
@block
def CompMulti(A, B, C, ReWordLen=ReWordLen, ImWordLen=ImWordLen):
"""
Module to implement the complex multiplication of two complex numbers
Inputs:
A: A 2's complement concat of the input complex number where
the input word is a concat according to ReIm
B: A 2's complement concat of the input complex number where
the input word is a concat according to ReIm
Outputs:
C: A 2's complement concat of the output complex number product
where the output word is a concat according to ReIm
Conversion Parameters:
ReWordLen: The word len of the real part of the concat word
ImWordLen: The word len of the imag part of the concat word
"""
#calc the min/max based on the ReWordLen and ImWordLen
ReMax=(2**(ReWordLen-1)-1); ReMin=-2**(ReWordLen-1)
ImMax=(2**(ImWordLen-1)-1); ImMin=-2**(ImWordLen-1)
#create the registers to hold the product results
ac=Signal(intbv(0, min=ReMin, max=ReMax))
bd=Signal(intbv(0, min=ImMin, max=ImMax))
ad=Signal(intbv(0, min=min(ReMin, ImMin), max=max(ReMax, ImMax)))
bc=Signal(intbv(0, min=min(ReMin, ImMin), max=max(ReMax, ImMax)))
@always_comb
def SepMulti():
"""
Combinational logic to separate the input words into components and
find the multiplication products
"""
ac.next=A[:ReWordLen] *B[:ReWordLen]
bd.next=A[ImWordLen:]*B[ImWordLen:]
ad.next=A[:ReWordLen]*B[ImWordLen:]
bc.next=A[ImWordLen:]*B[:ReWordLen]
@always_comb
def AddSupConcat():
"""
Perform the real-part subtraction, the imaginary-part addition, and the
concat based on ReIm to form the final output
"""
C.next=concat(ac-bd, ad+bc)
return instances()
#helper functions to read in the .v and .vhd generated files into python
def VerilogTextReader(loc, printresult=True):
with open(f'{loc}.v', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***Verilog module from {loc}.v***\n\n', VerilogText)
return VerilogText
def VHDLTextReader(loc, printresult=True):
with open(f'{loc}.vhd', 'r') as vText:
VerilogText=vText.read()
if printresult:
print(f'***VHDL module from {loc}.vhd***\n\n', VerilogText)
return VerilogText
A=Signal(intbv(0, min=Min, max=Max))
B=Signal(intbv(0, min=Min, max=Max))
C=Signal(intbv(0, min=Min, max=Max))
DUT=CompMulti(A, B, C, ReWordLen=ReWordLen, ImWordLen=ImWordLen)
"""
Explanation: Conversion from myHDL to verilog/vhdl
End of explanation
"""
DUT.convert()
_=VerilogTextReader('CompMulti', True)
DUT.convert(hdl='VHDL')
_=VHDLTextReader('CompMulti', True)
"""
Explanation: !! Concat issue in synthesis
End of explanation
"""
|
gabrielusvicente/data-science-playground | develop/gs-ISL_advertising.ipynb | mit | #define numerical examples
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
"""
Explanation: Model evaluation metrics for regression
Evaluation metrics for classification problems, such as accuracy, are not useful for regression.
Let's create some example numeric predictions and calculate three common evaluation metrics for regression.
End of explanation
"""
# calculate MAE using scikit-learn
from sklearn import metrics
print metrics.mean_absolute_error(true, pred)
"""
Explanation: Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac{1}{n} \sum_{i=1}^{n} | y_i - \hat{y}_i| $$
End of explanation
"""
# calculate MSE using scikit-learn
print metrics.mean_squared_error(true, pred)
"""
Explanation: Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac{1}{n} \sum_{i=1}^{n} ( y_i - \hat{y}_i)^2 $$
End of explanation
"""
# calculate RMSE using scikit-learn
import numpy as np
print np.sqrt(metrics.mean_squared_error(true, pred))
"""
Explanation: Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac{1}{n} \sum_{i=1}^{n} ( y_i - \hat{y}_i)^2}$$
End of explanation
"""
# make predictions on the testing subset
y_pred = linreg.predict(X_test)
# RMSE
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
RMSE for our sales predictions:
End of explanation
"""
# select first two features
feature_cols = ['TV', 'Radio']
# select subset
X = data[feature_cols]
#y = data.Sales
y = data[response_cols]
# split into training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# fit the model
linreg.fit(X_train, y_train)
# make predictions
y_pred = linreg.predict(X_test)
# compute RMSE
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
"""
Explanation: Feature Selection
End of explanation
"""
|
statsmodels/statsmodels | examples/notebooks/tsa_arma_1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from statsmodels.graphics.tsaplots import plot_predict
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA
np.random.seed(12345)
"""
Explanation: Autoregressive Moving Average (ARMA): Artificial data
End of explanation
"""
arparams = np.array([0.75, -0.25])
maparams = np.array([0.65, 0.35])
"""
Explanation: Generate some data from an ARMA process:
End of explanation
"""
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
"""
Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated.
End of explanation
"""
dates = pd.date_range("1980-1-1", freq="M", periods=nobs)
y = pd.Series(y, index=dates)
arma_mod = ARIMA(y, order=(2, 0, 2), trend="n")
arma_res = arma_mod.fit()
print(arma_res.summary())
y.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10, 8))
fig = plot_predict(arma_res, start="1999-06-30", end="2001-05-31", ax=ax)
legend = ax.legend(loc="upper left")
"""
Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
End of explanation
"""
|
shareactorIO/pipeline | gpu.ml/notebooks/08_Optimize_Model_CPU.ipynb | apache-2.0 | %%bash
which summarize_graph
%%bash
## TODO: /root/models/linear/cpu/metagraph
## ls -l /root/models/optimize_me/
ls -l /root/models/linear/cpu/unoptimized
%%bash
freeze_graph
import os
from tensorflow.python.tools import freeze_graph
# Adapted from the TensorFlow freeze_graph test snippet; the directory below is
# a placeholder based on the paths used elsewhere in this notebook, and the node
# names come from the TF example (for the linear model here the output node is 'add').
model_dir = "/root/models/linear/cpu/unoptimized"
checkpoint_prefix = os.path.join(model_dir, "saved_checkpoint")
checkpoint_state_name = "checkpoint_state"
input_graph_name = "input_graph.pb"
output_graph_name = "output_graph.pb"
input_graph_path = os.path.join(model_dir,
input_graph_name)
input_saver_def_path = ""
input_binary = False
output_node_names = "output_node"
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_graph_path = os.path.join(model_dir, output_graph_name)
clear_devices = False
freeze_graph.freeze_graph(input_graph_path,
input_saver_def_path,
input_binary,
checkpoint_prefix,
output_node_names,
restore_op_name,
filename_tensor_name,
output_graph_path,
clear_devices, "")
%%bash
## TODO: /root/models/linear/cpu/unoptimized/metagraph.pb
## summarize_graph --in_graph=/root/models/optimize_me/unoptimized_cpu.pb
summarize_graph --in_graph=/root/models/linear/cpu/unoptimized/metagraph.pb
"""
Explanation: Optimize Trained CPU Model
Types of Optimizations Applied for Inference
Remove training-only operations (checkpoint saving, drop out)
Strip out unused nodes
Remove debug operations
Fold batch normalization ops into weights (super cool)
Round weights
Quantize weights
Graph Transform Tool
https://petewarden.com/2016/12/30/rewriting-tensorflow-graphs-with-the-gtt/
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms
Optimize Models
Summarize Graph Utility
End of explanation
"""
%%bash
# TODO: shuffle_batch?? x_observed_batch??
transform_graph \
--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \
--out_graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='
strip_unused_nodes'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/strip_unused_optimized_cpu.pb --input_layer=weights,bias,x_observed --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Strip Unused Nodes
End of explanation
"""
%%bash
transform_graph \
--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \
--out_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='
fold_constants(ignore_errors=true)'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Fold Constants
End of explanation
"""
%%bash
transform_graph \
--in_graph=/root/models/optimize_me/fold_constants_optimized_cpu.pb \
--out_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='
fold_batch_norms
fold_old_batch_norms'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Fold Batch Normalizations
Must run Fold Constants first!
End of explanation
"""
%%bash
transform_graph \
--in_graph=/root/models/optimize_me/fold_batch_norms_optimized_cpu.pb \
--out_graph=/root/models/optimize_me/quantized_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='quantize_weights'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/quantized_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/quantized_optimized_cpu.pb --input_layer=x_observed,bias,weights --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Quantize Weights
Should run Fold Batch Norms first!
End of explanation
"""
%%bash
transform_graph \
--in_graph=/root/models/optimize_me/unoptimized_cpu.pb \
--out_graph=/root/models/optimize_me/fully_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='
add_default_attributes
remove_nodes(op=Identity, op=CheckNumerics)
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
quantize_weights
quantize_nodes
strip_unused_nodes
obfuscate_names'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/fully_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/fully_optimized_cpu.pb --input_layer=weights,x_observed,bias --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Perform All Common Optimizations
End of explanation
"""
%%bash
transform_graph \
--in_graph=/root/models/optimize_me/fully_optimized_cpu.pb \
--out_graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb \
--inputs='x_observed,weights,bias' \
--outputs='add' \
--transforms='
sort_by_execution_order'
%%bash
ls -l /root/models/optimize_me/
%%bash
summarize_graph --in_graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb
%%bash
benchmark_model --graph=/root/models/optimize_me/sort_by_execution_order_optimized_cpu.pb --input_layer=weights,x_observed,bias --input_layer_type=float,float,float --input_layer_shape=:: --output_layer=add
"""
Explanation: Sort by Execution Order (DAG Topological Order)
Minimizes inference overhead
Inputs for a node guaranteed to be available
End of explanation
"""
|
duncanwp/python_for_climate_scientists | course_content/notebooks/exception_handling.ipynb | gpl-3.0 | n = int(input("Enter an integer: "))
print("Hello " * n)
"""
Explanation: Exception handling
You will have noticed that when something goes wrong in a Python program you see an error message. This is called an exception, and you can handle them explicitly to prevent your program from aborting and printing an unhelpful traceback.
For example, take the following code that asks the user to enter an integer, then prints "Hello" that number of times:
End of explanation
"""
try:
n = int(input("Enter an integer: "))
print("Hello " * n)
except ValueError:
print("That wasn't an integer!")
"""
Explanation: This failed when we provided input that could not be converted to an integer.
We can re-write this so that we catch the exception before it gets to the user, and print a helpful message instead:
End of explanation
"""
while True:
try:
n = int(input("Enter an integer: "))
print("Hello " * n)
break
except ValueError:
print("That wasn't an integer! Try again...")
except KeyboardInterrupt:
print('bye')
break
"""
Explanation: You can handle errors in any way that might be appropriate for your program.
We could take this one step further by continually asking the user for input until we get an integer:
End of explanation
"""
|
yashdeeph709/Algorithms | PythonBootCamp/Complete-Python-Bootcamp-master/Functions and Methods Homework.ipynb | apache-2.0 | def vol(rad):
pass
"""
Explanation: Functions and Methods Homework
Complete the following questions:
Write a function that computes the volume of a sphere given its radius.
End of explanation
"""
def ran_check(num,low,high):
pass
"""
Explanation: Write a function that checks whether a number is in a given range (Inclusive of high and low)
End of explanation
"""
def ran_bool(num,low,high):
pass
ran_bool(3,1,10)
"""
Explanation: If you only wanted to return a boolean:
End of explanation
"""
def up_low(s):
pass
"""
Explanation: Write a Python function that accepts a string and calculates the number of upper case letters and lower case letters.
Sample String : 'Hello Mr. Rogers, how are you this fine Tuesday?'
Expected Output :
No. of Upper case characters : 4
No. of Lower case Characters : 33
If you feel ambitious, explore the Collections module to solve this problem!
End of explanation
"""
def unique_list(l):
pass
unique_list([1,1,1,1,2,2,3,3,3,3,4,5])
"""
Explanation: Write a Python function that takes a list and returns a new list with unique elements of the first list.
Sample List : [1,1,1,1,2,2,3,3,3,3,4,5]
Unique List : [1, 2, 3, 4, 5]
End of explanation
"""
def multiply(numbers):
pass
multiply([1,2,3,-4])
"""
Explanation: Write a Python function to multiply all the numbers in a list.
Sample List : [1, 2, 3, -4]
Expected Output : -24
End of explanation
"""
def palindrome(s):
pass
palindrome('helleh')
"""
Explanation: Write a Python function that checks whether a passed string is a palindrome or not.
Note: A palindrome is a word, phrase, or sequence that reads the same backward as forward, e.g., madam or nurses run.
End of explanation
"""
import string
def ispangram(str1, alphabet=string.ascii_lowercase):
pass
ispangram("The quick brown fox jumps over the lazy dog")
string.ascii_lowercase
"""
Explanation: Hard:
Write a Python function to check whether a string is a pangram or not.
Note: Pangrams are words or sentences containing every letter of the alphabet at least once.
For example : "The quick brown fox jumps over the lazy dog"
Hint: Look at the string module
End of explanation
"""
|
thempel/adaptivemd | examples/rp/3_example_adaptive.ipynb | lgpl-2.1 | import sys, os
# stop RP from printing logs until severe
# verbose = os.environ.get('RADICAL_PILOT_VERBOSE', 'REPORT')
os.environ['RADICAL_PILOT_VERBOSE'] = 'ERROR'
from adaptivemd import (
Project,
Event, FunctionalEvent,
File
)
# We need this to be part of the imports. You can only restore known objects
# Once these are imported you can load these objects.
from adaptivemd.engine.openmm import OpenMMEngine
from adaptivemd.analysis.pyemma import PyEMMAAnalysis
"""
Explanation: AdaptiveMD
Example 3 - Running an adaptive loop
End of explanation
"""
project = Project('test')
"""
Explanation: Let's open our test project by its name. If you completed the first examples this should all work out of the box.
End of explanation
"""
print project.files
print project.generators
print project.models
"""
Explanation: Open all connections to the MongoDB and Session so we can get started.
An interesting thing to note here is that, since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in the last example, then come back here and check how the contents of the project have changed.
Let's see where we are. These numbers will depend on whether you run this notebook for the first time or just continue again. Unless you delete your project it will accumulate models and files over time, as is our ultimate goal.
End of explanation
"""
engine = project.generators['openmm']
modeller = project.generators['pyemma']
pdb_file = project.files['initial_pdb']
"""
Explanation: Now restore our old ways to generate tasks by loading the previously used generators.
End of explanation
"""
def strategy():
# create a new scheduler
with project.get_scheduler(cores=2) as local_scheduler:
for loop in range(10):
tasks = local_scheduler(project.new_ml_trajectory(
length=100, number=10))
yield tasks.is_done()
task = local_scheduler(modeller.execute(list(project.trajectories)))
yield task.is_done
"""
Explanation: Run simulations
Now we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage. RP takes care of most of this very elegantly, and hence a Task is designed to cover these capabilities in a somewhat simpler and more pythonic way.
For example there is an RPC Python Call Task that allows you to execute a function remotely and pull back the results.
Functional Events
We want to first look into a way to run python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists).
If the function should pause, write yield {condition_to_continue}. This will suspend your function until the yielded condition returns True when called.
End of explanation
"""
ev = FunctionalEvent(strategy())
"""
Explanation: To turn your function into a generator, pass strategy() (and not strategy) to the FunctionalEvent
End of explanation
"""
project.add_event(ev)
"""
Explanation: and execute the event inside your project
End of explanation
"""
import time
from IPython.display import clear_output
try:
while True:
clear_output(wait=True)
print '# of files %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories))
print '# of models %8d : %s' % (len(project.models), '#' * len(project.models))
sys.stdout.flush()
time.sleep(1)
except KeyboardInterrupt:
pass
"""
Explanation: after some time you will have 10 more trajectories. Just like that.
Let's see how our project is growing
End of explanation
"""
trajs = project.trajectories
q = {}
ins = {}
for f in trajs:
source = f.frame if isinstance(f.frame, File) else f.frame.trajectory
ind = 0 if isinstance(f.frame, File) else f.frame.index
ins[source] = ins.get(source, []) + [ind]
"""
Explanation: And some analysis
End of explanation
"""
scheduler = project.get_scheduler(cores=2)
def strategy1():
for loop in range(10):
tasks = scheduler(project.new_ml_trajectory(
length=100, number=10))
yield tasks.is_done()
def strategy2():
for loop in range(10):
num = len(project.trajectories)
task = scheduler(modeller.execute(list(project.trajectories)))
yield task.is_done
yield project.on_ntraj(num + 5)
project._events = []
project.add_event(FunctionalEvent(strategy1))
project.add_event(FunctionalEvent(strategy2))
project.close()
"""
Explanation: Event
End of explanation
"""
scheduler = project.get_scheduler(cores=2) # get the default scheduler using 2 cores
"""
Explanation: Tasks
To actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks or more precise you can use it to submit tasks which will be converted to ComputeUnitDescriptions and executed on the cluster previously chosen.
End of explanation
"""
trajs = project.new_trajectory(pdb_file, 100, 4)
"""
Explanation: Now we are good to go and can run a first simulation
This works by creating a Trajectory object with a filename, a length and an initial frame. Then the engine will take this information and create a real trajectory with exactly this name, this initial frame and the given length.
Since this is such a common task you can also submit just a Trajectory without the need to convert it to a Task first (which the engine can also do).
Our project can create new names automatically, and so we want 4 new trajectories of length 100, starting at the existing pdb_file we used to initialize the engine.
End of explanation
"""
scheduler.submit(trajs)
"""
Explanation: Let's submit and see
End of explanation
"""
scheduler.wait()
"""
Explanation: Once the trajectories exist these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually work with these trajectories like referencing even before they exist.
This would allow us to write a function that triggers when the trajectory comes into existence. But we are not doing this right now.
Wait is dangerous since it is blocking and you cannot do anything until all tasks are finished. Normally you do not need it. Especially in interactive sessions.
End of explanation
"""
print '# of files', len(project.files)
"""
Explanation: Look at all the files our project now contains.
End of explanation
"""
t = modeller.execute(list(project.trajectories))
scheduler(t)
scheduler.wait()
"""
Explanation: Great! That was easy (I hope you agree).
Next we want to run a simple analysis.
End of explanation
"""
print project.models.last.data.keys()
"""
Explanation: Let's look at the model we generated
End of explanation
"""
print project.models.last.data['msm']['P']
"""
Explanation: And pick some information
End of explanation
"""
def task_generator():
return [
engine.task_run_trajectory(traj) for traj in
project.new_ml_trajectory(100, 4)]
task_generator()
"""
Explanation: The next example will demonstrate how to write a full adaptive loop
Events
A new concept. Tasks are great and do work for us. But so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with some of this, events exist. These are basically a task_generator coupled with conditions on when to be executed.
Let's write a little task generator (in essence a function that returns tasks)
End of explanation
"""
ev = Event().on(project.on_ntraj(range(20,22,2))).do(task_generator)
"""
Explanation: Now create an event.
End of explanation
"""
def hello():
print 'DONE!!!'
return [] # todo: allow for None here
finished = Event().on(ev.on_done).do(hello)
scheduler.add_event(ev)
scheduler.add_event(finished)
"""
Explanation: .on specifies when something should be executed. In our case, when the project has 20 trajectories. This is not yet the case, so this event will not do anything unless we simulate more trajectories.
.do specifies the function to be called.
The concept is borrowed from event-based languages, as often used in JavaScript.
You can build quite complex execution patterns with this. An event for example also knows when it is finished and this can be used as another trigger.
End of explanation
"""
print '# of files', len(project.files)
"""
Explanation: All events and tasks run in parallel, or at least get submitted and queued for execution in parallel. RP takes care of the actual execution.
End of explanation
"""
ev1 = Event().on(project.on_ntraj(range(30, 70, 4))).do(task_generator)
ev2 = Event().on(project.on_ntraj(38)).do(lambda: modeller.execute(list(project.trajectories))).repeat().until(ev1.on_done)
scheduler.add_event(ev1)
scheduler.add_event(ev2)
len(project.trajectories)
len(project.models)
"""
Explanation: So for now let's run more trajectories and schedule the computation of models at regular intervals.
End of explanation
"""
print project.files
"""
Explanation: .repeat means to redo the same task when the last is finished (it will just append an infinite list of conditions to keep on running).
.until specifies a termination condition. The event will not be executed once this condition is met. Makes most sense if you use .repeat or if the trigger condition and stopping should be independent. You might say, run 100 times unless you have a good enough model.
End of explanation
"""
project.close()
"""
Explanation: Strategies (aka the brain)
The brain is just a collection of events. This makes it reusable and easy to extend.
End of explanation
"""
|
Brunel-Visualization/Brunel | python/examples/Brunel Cars.ipynb | apache-2.0 | import pandas as pd
import brunel
cars = pd.read_csv("data/cars.csv")
cars.head(6)
"""
Explanation: Demo of Brunel on Cars Data
The Data
We read the data into a pandas data frame. In this case we are grabbing some data that represents cars.
We read it in and take a quick look at the first few rows to check that the column names are usable
End of explanation
"""
%brunel data('cars') x(mpg) y(horsepower) color(origin) filter(horsepower) :: width=800, height=300
%brunel bar data('cars') x(origin) y(mpg) mean(mpg) animate(year:6) :: width=800, height=300
%brunel data('cars') edge yrange(origin, year) chord size(#count) color(origin) :: width=500, height=400
%brunel data('cars') treemap x(origin, year, cylinders) color(mpg) mean(mpg) size(#count) label(cylinders) tooltip(#all):: width=900, height=600
"""
Explanation: Basics
We import the Brunel module and create a couple of simple scatterplots.
We use the brunel magic to do so
The basic format of each call to Brunel is simple; whether it is a single line or a set of lines (a cell magic),
they are concatenated together, and the result interpreted as one command.
This command must start with an ACTION, but may have a set of options at the end specified as ACTION :: OPTIONS.
ACTION is the Brunel action string; OPTIONS are key=value pairs:
* data defines the pandas dataframe to use. If not specified, the pandas data that best fits the action command will be used
* width and height may be supplied to set the resulting size
For details on the Brunel Action languages, see the Online Docs on Bluemix
End of explanation
"""
def identify(x, search):
for y in search:
if y.lower() in x.lower(): return y
return None
cars['Type'] = cars.name.map(lambda x: identify(x, ["Ford", "Buick"]))
%%brunel data('cars') x(engine) y(mpg) color(Type) style('size:50%; fill:#eee') +
x(engine) y(mpg) color(Type) text style('text {font-size:14; font-weight:bold; fill:darker}')
:: width=800, height=800
"""
Explanation: Using the Dataframe
Since Brunel uses the data frame, we can modify or add to that object to show data in different ways. In the following example we apply a function that takes a name and sees if it matches one of a set of sub-strings. We map this function to the car names to create a new column consisting of the names that match either "Ford" or "Buick", and use that in our Brunel action.
Because the Brunel action is long (we are adding some CSS styling), we split it into two parts for convenience.
End of explanation
"""
|
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn | doc/notebooks/Expressions.ipynb | gpl-3.0 | import vcsn
import pandas as pd
pd.options.display.max_colwidth = 0
"""
Explanation: Expressions
Rational expressions, or expressions for short, denote (rational) languages in a compact way. Since Vcsn supports weighted expressions, they can actually denote rational series.
This page documents the syntax and the transformations (called identities) that are applied to every expression. The list of available algorithms using expressions is on the Algorithms page.
Syntax
The syntax for rational expressions is as follows (with increasing precedence):
- \z, the empty expression.
- \e, the empty word.
- a, the letter a.
- 'l', the label l (useful, for instance, when labels are words, or to denote a letter which is an operator: '+' denotes the + letter).
- [abcd], letter class, equivalent to (a+b+c+d).
- [a-d], one letter of the current alphabet between a and d. If the alphabet is ${a, d, e}$, [a-d] denotes [ad], not [abcd].
- [^a-dz], one letter of the current alphabet that is not part of [a-dz].
- [^], any letter of the current alphabet ("any letter other that none").
- (e), e.
- e+f, the addition (disjunction, union) of e and f (note the use of +, | is not accepted).
- e&f, the conjunction (intersection) of e and f.
- e:f, the shuffle product (interleaving) of e and f.
- e&:f, the infiltration of e and f.
- ef and e.f, the multiplication (concatenation) of e and f.
- <k>e, the left exterior product (left-scalar product) of e by k.
- e<k>, the right exterior product (right-scalar product) of e by k.
- e* and e{*}, any number of repetitions of e (the Kleene closure of e).
- e{n}, the power (repeated multiplication) of e n times: ee ... e.
- e{n,m}, any repetition of e between n and m, i.e., the sum of the powers of e between n and m: e{n}+e{n+1}+ ... +e{m}.
- e{n,}, the sum of powers of e at least n times: e{n}e*.
- e{,m}, at most m repetitions of e: e{0,m}.
- e{+}, at least one e: e{1,}.
- e?, e{?}, e optional: e{0,1}.
- e{c}, the complement of e.
where e and f denote expressions, a a label, k a weight, and n and m natural numbers.
Please note that contrary to "regexps" (as in grep, perl, etc.):
- spaces are ignored
- + denotes the choice, not |
- . denotes the concatenation, use [^] to mean "any letter"
Identities
Some rewriting rules are applied on the expressions to "simplify" them. The strength of this simplification is graduated.
none: no identities at all. Some algorithms, such as derived_term, might not terminate.
trivial: the trivial identities only are applied.
associative: the associative identities are added.
linear: the linear identities are added. This is the default.
distributive: the distributive identities are added.
Trivial Identities
$$
\newcommand{\eword}{\varepsilon}
\newcommand{\lmul}[2]{\bra{#1}{#2}}
\newcommand{\rmul}[2]{#1\bra{#2}}
\newcommand{\lmulq}[2]{\bra{#1}^?{#2}}
\newcommand{\rmulq}[2]{#1\bra{#2}^?}
\newcommand{\bra}[1]{\langle#1\rangle}
\newcommand{\K}{\mathbb{K}}
\newcommand{\zed}{\mathsf{0}}
\newcommand{\und}{\mathsf{1}}
\newcommand{\zeK}{0_{\K}}
\newcommand{\unK}{1_{\K}}
\newcommand{\Ed}{\mathsf{E}}
\newcommand{\Fd}{\mathsf{F}}
\newcommand{\Gd}{\mathsf{G}}
\begin{gather}
% \tag{add}
\Ed+\zed \Rightarrow \Ed
\quad
\zed+\Ed \Rightarrow \Ed
\[.7ex] %\tag{kmul}
\begin{aligned}[t]
\lmul{\zeK}{\Ed} & \Rightarrow \zed &
\lmul{\unK}{\Ed} & \Rightarrow \Ed &
\lmul{k}{\zed} & \Rightarrow \zed &
\lmul{k}{\lmul{h}{\Ed}} &\Rightarrow \lmul{kh}{\Ed}
\
\rmul{\Ed}{\zeK} & \Rightarrow \zed &
\rmul{\Ed}{\unK} & \Rightarrow \Ed &
\rmul{\zed}{k} & \Rightarrow \zed &
\rmul{\rmul{\Ed}{k}}{h} &\Rightarrow \rmul{\Ed}{kh}
\end{aligned}\
\rmul{(\lmul{k}{\Ed})}{h} \Rightarrow \lmul{k}{(\rmul{\Ed}{h})} \quad
\rmul{\ell}{k} \Rightarrow \lmul{k}{\ell}
\ %\tag{mul}
\Ed \cdot \zed \Rightarrow \zed \quad
\zed \cdot \Ed \Rightarrow \zed
\
(\lmulq{k}{\und}) \cdot \Ed \Rightarrow \lmulq{k}{\Ed}
\quad
\Ed \cdot (\lmulq{k}{\und}) \Rightarrow \rmulq{\Ed}{k}
\ %\tag{star}
\zed^\star \Rightarrow \und
\
\zed^c \& \Ed \Rightarrow \Ed
\quad
\Ed \& \zed^c \Rightarrow \Ed
\
(\lmul{k}{\Ed})^{c} \Rightarrow \Ed^{c} \quad (\rmul{\Ed}{k})^{c} \Rightarrow \Ed^{c}
\
{\Ed^c}^c \Rightarrow \Ed \text{ if the weights are Boolean ($\mathbb{B}$ or $\mathbb{F}_2$)}
\end{gather}
$$
where $\Ed$ stands for any rational expression, $a \in A$~is any letter,
$\ell\in A \cup {\eword}$, $k, h\in \K$ are weights, and $\lmulq{k}{\ell}$
denotes either $\lmul{k}{\ell}$, or $\ell$ in which case $k = \unK$ in the
right-hand side. Any subexpression of a form listed to the left of a
'$\Rightarrow$' is rewritten as indicated on the right.
Associative Identities
In addition to the trivial identities, the binary operators (addition, conjunction, multiplication) are made associative. Actually, they become variadic instead of being strictly binary.
$$
\begin{align}
\Ed+(\Fd+\Gd) & \Rightarrow \Ed+\Fd+\Gd\
\Ed(\Fd\Gd) & \Rightarrow \Ed\Fd\Gd\
\Ed\&(\Fd\&\Gd) & \Rightarrow \Ed\&\Fd\&\Gd\
\end{align}
$$
Linear Identities
In addition to the associative identities, the addition is made commutative. Actually, members of sums are now sorted, and weights of equal terms are added. Some identities requires the weightset to be a commutative semiring (i.e., the product of scalars is commutative).
$$
\begin{align}
\Fd+\Ed & \Rightarrow \Ed+\Fd && \text{if $\Ed < \Fd$} \
\lmul{k}{\Ed}+\lmul{h}{\Ed} & \Rightarrow \lmul{k+h}{\Ed}\
\rmul{\Ed}{k} & \Rightarrow \lmul{k}{\Ed} && \text{if commutative} \
\lmul{k}{\Ed}\lmul{h}{\Fd} & \Rightarrow \lmul{kh}{(\Ed\Fd)} && \text{if commutative} \
\end{align}
$$
Distributive Identities
In addition to the linear identities, the multiplication and multiplication by a scalar are distributed on the sum.
$$
\begin{gather}
\lmul{k}{(\Ed+\Fd)} \Rightarrow \lmul{k}{\Ed} + \lmul{k}{\Fd} \
\Ed(\Fd+\Gd) \Rightarrow \Ed\Fd + \Ed\Gd \qquad
(\Ed+\Fd)\Gd \Rightarrow \Ed\Gd + \Fd\Gd \
\end{gather}
$$
Examples
End of explanation
"""
ids = ['trivial', 'associative', 'linear', 'distributive']
ctx = vcsn.context('lal_char(a-z), b')
def example(*es):
res = []
for e in es:
res.append([e] + ['$' + ctx.expression(e, id).format('latex') + '$' for id in ids])
return pd.DataFrame(res, columns=['Input'] + list(map(str.title, ids)))
example('a', 'a+b+c', 'a+(b+c)', 'a+b+c+d', 'b+a', '[ab][ab]')
"""
Explanation: The following helper routine takes a list of expressions as text (*es), converts them into genuine expression objects (ctx.expression(e, id)) for each id, formats them into LaTeX, and puts them in a DataFrame for display.
End of explanation
"""
ctx = vcsn.Q
example('a', 'a+a+a', 'a+a+b', 'a+b+a', '<2>(a+b)', '([ab]+[ab]){2}', '<2>ab<3>cd<5>')
"""
Explanation: A few more examples, with weights in $\mathbb{Q}$:
End of explanation
"""
from ipywidgets import interact_manual
from IPython.display import display
es = []
@interact_manual
def interactive_example(expression = "[ab]{3,}"):
es.append(expression)
display(example(*es))
"""
Explanation: Try it!
The following piece of code defines an interactive function for you to try your own expression. Enter an expression in the text area, then click on the "Run" button.
End of explanation
"""
|
elmaso/tno-ai | aind2-dl-master/Student_Admissions.ipynb | gpl-3.0 | import pandas as pd
data = pd.read_csv('student_data.csv')
data
"""
Explanation: Predicting Student Admissions
In this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:
- GRE Scores (Test)
- GPA Scores (Grades)
- Class rank (1-4)
The dataset originally came from here: http://www.ats.ucla.edu/
Note: Thanks Adam Uccello, for helping us debug!
1. Load and visualize the data
To load the data, we will use a very useful data package called Pandas. You can read the Pandas documentation here:
End of explanation
"""
import matplotlib.pyplot as plt
import numpy as np
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
plot_points(data)
plt.show()
"""
Explanation: Let's plot the data and see how it looks.
End of explanation
"""
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
"""
Explanation: The data, based on only GRE and GPA scores, doesn't seem very separable. Maybe if we make a plot for each of the ranks, the boundaries will be more clear.
End of explanation
"""
import keras
from keras.utils import np_utils
# remove NaNs
data = data.fillna(0)
# One-hot encoding the rank
processed_data = pd.get_dummies(data, columns=['rank'])
# Normalizing the gre and the gpa scores to be in the interval (0,1)
processed_data["gre"] = processed_data["gre"]/800
processed_data["gpa"] = processed_data["gpa"]/4
# Splitting the data input into X, and the labels y
X = np.array(processed_data)[:,1:]
X = X.astype('float32')
y = keras.utils.to_categorical(data["admit"],2)
# Checking that the input and output look correct
print("Shape of X:", X.shape)
print("\nShape of y:", y.shape)
print("\nFirst 10 rows of X")
print(X[:10])
print("\nFirst 10 rows of y")
print(y[:10])
"""
Explanation: These plots look a bit more linearly separable, although not completely. But it seems that using a multi-layer perceptron with the rank, gre, and gpa as inputs, may give us a decent solution.
2. Process the data
We'll do the following steps to clean up the data for training:
- One-hot encode the rank
- Normalize the gre and the gpa scores, so they'll be in the interval (0,1)
- Split the data into the input X, and the labels y.
End of explanation
"""
# break training set into training and validation sets
(X_train, X_test) = X[50:], X[:50]
(y_train, y_test) = y[50:], y[:50]
# print shape of training set
print('x_train shape:', X_train.shape)
# print number of training, validation, and test images
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
"""
Explanation: 3. Split the data into training and testing sets
End of explanation
"""
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
# Note that filling out the empty rank as "0", gave us an extra column, for "Rank 0" students.
# Thus, our input dimension is 7 instead of 6.
model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(7,)))
model.add(Dropout(.2))
model.add(Dense(64, activation='relu'))
model.add(Dropout(.1))
model.add(Dense(2, activation='softmax'))
# Compiling the model
model.compile(loss = 'categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
"""
Explanation: 4. Define the model architecture
End of explanation
"""
# Training the model
model.fit(X_train, y_train, epochs=200, batch_size=100, verbose=0)
"""
Explanation: 5. Train the model
End of explanation
"""
# Evaluating the model on the training and testing set
score = model.evaluate(X_train, y_train)
print("\n Training Accuracy:", score[1])
score = model.evaluate(X_test, y_test)
print("\n Testing Accuracy:", score[1])
"""
Explanation: 6. Score the model
End of explanation
"""
|
lucasb-eyer/go-colorful | doc/LinearRGB Approximations.ipynb | mit | %matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib as mpl
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import numpy as np
from sympy import *
init_printing()
"""
Explanation: Taylor approximations to color conversion
This notebook shows how to come up with all these magic constants that appear in the approximations to LinearRgb in my go-colorful library in order to speed them up at almost no loss in accuracy.
The gist is to compute a Taylor expansion up to a degree that gives enough accuracy.
Taylor expansions work well for relatively linear-ish functions, as LinearRgb is.
Doing this is especially easy thanks to the SymPy library which has symbolic Taylor expansion built-in!
End of explanation
"""
def linear_rgb(x):
return ((x+0.055)/1.055)**2.4
"""
Explanation: The following is the conversion from RGB to linear RGB (aka. gamma-correction), where I'm dropping the conditional part for very small values of x as a first approximation.
End of explanation
"""
x = Symbol('x', real=True)
series(linear_rgb(x), x, x0=0.5, n=4)
"""
Explanation: Now, we can use SymPy to create a symbolic version of that equation, and compute a symbolic Taylor expansion around $0.5$ (the middle of our target range) up to the fourth degree:
End of explanation
"""
fast_linear_rgb = lambdify([x], series(linear_rgb(x), x, x0=0.5, n=4).removeO())
"""
Explanation: In order to use it numerically, we will "drop the O", which means do the actual approximation, and "lambdify" the function, which turns a symbolic function into a NumPy function:
End of explanation
"""
X = np.linspace(0,1,1001)
ref = linear_rgb(X) # The (almost) correct implementation.
fast = fast_linear_rgb(X) # The Taylor approximation
square = X*X # The approximation by squaring.
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,4))
ax1.plot(X, ref, label='linear')
ax1.plot(X, fast, label='fast (max/avg err: {:.4f} / {:.4f})'.format(np.max(np.abs(ref - fast)),
np.mean(np.abs(ref - fast))))
ax1.plot(X, square, label='square (max/avg err: {:.4f} / {:.4f})'.format(np.max(np.abs(ref - square)),
np.mean(np.abs(ref - square))))
ax2.plot(X, ref)
ax2.plot(X, fast)
ax2.plot(X, square)
ax2.set_xlim(0, 0.1)
ax2.set_ylim(0, 0.1)
ax2.set_title("Left end")
ax3.plot(X, ref)
ax3.plot(X, fast, ls=':')
ax3.plot(X, square)
ax3.set_xlim(0.45, 0.55)
ax3.set_ylim(0.15, 0.25)
ax3.set_title("Middle")
ax4.plot(X, ref)
ax4.plot(X, fast)
ax4.plot(X, square)
ax4.set_xlim(0.9, 1)
ax4.set_ylim(0.9, 1)
ax4.set_title("Right end")
ax1.legend();
"""
Explanation: As additional heuristic approximations, we'll include simply squaring the values, which should also be very fast, but quite wrong. Then, plot all these functions in order to see their behaviour, and compute errors:
End of explanation
"""
def delinear_rgb(x):
return 1.055*(x**(1.0/2.4)) - 0.055
fast_delinear_rgb_part1 = lambdify([x], series(delinear_rgb(x), x, x0=0.015, n=6).removeO())
fast_delinear_rgb_part2 = lambdify([x], series(delinear_rgb(x), x, x0=0.03, n=6).removeO())
fast_delinear_rgb_part3 = lambdify([x], series(delinear_rgb(x), x, x0=0.6, n=6).removeO())
ref = delinear_rgb(X)
fast1 = fast_delinear_rgb_part1(X)
fast2 = fast_delinear_rgb_part2(X)
fast3 = fast_delinear_rgb_part3(X)
sqrt = np.sqrt(X)
def plot(ax):
ax.plot(X, ref, label='linear')
l, = ax.plot(X, fast1, label='fast, part1', ls=':')
ax.plot(X, fast2, label='fast, part2', c=l.get_color(), ls='--')
ax.plot(X, fast3, label='fast, part3', c=l.get_color(), ls='-')
ax.plot(X, sqrt, label='sqrt')
fig, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,4))
plot(ax1)
ax1.set_ylim(0, 1)
plot(ax2)
ax2.set_xlim(0, 0.05)
ax2.set_ylim(0, 0.25)
ax2.set_title("Left end")
plot(ax3)
ax3.set_xlim(0.45, 0.55)
ax3.set_ylim(0.65, 0.75)
ax3.set_title("Middle")
plot(ax4)
ax4.set_xlim(0.95, 1)
ax4.set_ylim(0.95, 1)
ax4.set_title("Right end")
ax1.legend()
"""
Explanation: The inverse function (for Lab->RGB)
The inverse function is significantly more difficult, because its left part is highly non-linear and changes much faster than the rest of the function.
So what we do here, in order to keep reasonable accuracy, is split it into three parts with three different approximations. You will notice that the leftmost part has quite large coefficients, which hints at the approximation being worse/"harder".
End of explanation
"""
|
mrcinv/matpy | oma/kolokviji/OMA, 2. kolokvij, 2011_2012.ipynb | gpl-2.0 | f = lambda x: x**4 + 2*x**3 - 2*x**2 + 1
x = sympy.Symbol('x', real=True)
"""
Explanation: 2nd midterm exam 2011/2012, solutions
Problem 1
Find the largest and smallest values attained by the function
$$f(x) = x^4 + 2x^3 - 2x^2 + 1.$$
End of explanation
"""
eq = Eq(f(x).diff(), 0)
eq
critical_points = sympy.solve(eq)
critical_points
end_points = [-3, 1]
points = [(y, f(y)) for y in critical_points + end_points]
points
min(points, key=lambda point: point[1]), max(points, key=lambda point: point[1])
"""
Explanation: The candidates for extrema are the stationary points and the endpoints of our interval. We find the stationary points by solving the equation $f'(x)=0$.
End of explanation
"""
sympy.solvers.reduce_inequalities([f(x).diff(x) > 0])
"""
Explanation: Also determine the intervals on which the function is increasing and decreasing.
The function is increasing where its first derivative is positive.
End of explanation
"""
sympy.solvers.reduce_inequalities([f(x).diff(x) < 0])
"""
Explanation: The function is decreasing where its first derivative is negative.
End of explanation
"""
a = sympy.Symbol('a', real=True, positive=True)
b = sympy.Symbol('b', real=True, positive=True)
v = lambda a, b: a**2*b*sympy.sqrt(3)/2
p = lambda a, b: a**2*sympy.sqrt(3) + 3*a*b
b = solve(Eq(v(a,b), 1), b)[0]
b
"""
Explanation: Problem 2
A coffee house called Kava is opening. The owners want to sell their coffee blend in neat tin boxes shaped like a triangular prism with volume 1. The base is an equilateral triangle with side $a$, and the height of the prism is $b$. Help them find the optimal box size: what should $a$ and $b$ be so that as little tin as possible is used?
End of explanation
"""
val_a = sympy.solve(p(a, b).diff())[0]
val_b = b.subs(a, val_a)
val_a, val_b
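# Sanity check (added, not part of the original solution): the second derivative
# of the surface area at val_a should be positive, confirming that the
# stationary point is a minimum.
print(sympy.N(p(a, b).diff(a, 2).subs(a, val_a)))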
"""
Explanation: We substitute the expression for $b$ into the formula for the surface area, differentiate it, and solve the equation $p'(a) = 0.$
End of explanation
"""
x = sympy.Symbol('x')
f = lambda x: (1+sympy.log(x))**2/x
g = lambda x: (x**2 - 2)*sympy.exp(x)
sympy.integrate(f(x))
sympy.integrate(g(x))
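# Quick check (added, not part of the original solution): differentiating the
# computed antiderivatives should give back the integrands.
print(sympy.simplify(sympy.diff(sympy.integrate(f(x), x), x) - f(x)))  # expect 0
print(sympy.simplify(sympy.diff(sympy.integrate(g(x), x), x) - g(x)))  # expect 0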
"""
Explanation: Problem 3
Compute the indefinite integrals
$$\int \frac{(1+\log(x))^2}{x}dx$$
and
$$ \int (x^2 -2)e^xdx .$$
End of explanation
"""
import numpy
from matplotlib import pyplot

pyplot.ylim([-1, 5])
pyplot.xlim([-1, 5])
x = numpy.linspace(0, 2, 100)
pyplot.plot([-1, 5], [0, 6], color='g')
# sample_function is a helper defined elsewhere in this repository; it appears
# to sample (x, f(x)) pairs of the given function between the given bounds.
[xs, ys] = sample_function(lambda x: 2.0/x + 2, 0.01, 5, 0.01)
pyplot.plot(xs, ys, color='r')
pyplot.axvline(0, color='y')
pyplot.show()
"""
Explanation: Problem 4
End of explanation
"""
x = sympy.Symbol('x', real=True)
t = sympy.Symbol('t', real=True)  # integration variable used in F below
f = lambda x: sympy.log(1+x**2)/x
F = lambda x: sympy.integrate(f(t), (t, -1, x))
f(x)
"""
Explanation: The intersection points in the figure above are at $x_1=1$ and $x_2=2$.
The area of the region equals $ \log(4) - 0.5 $. The easiest way to compute it is to split the region into two parts by the $x$-coordinate: the part between $0$ and $1$ and the part between $1$ and $2$.
Problem 5
The function $F$ is defined by
$$F(x) = \int_{-1}^x \frac{\log(1+t^2)}{t}dt.$$
Find the derivative of the function $F$.
Its derivative is simply equal to the integrand.
End of explanation
"""
%matplotlib inline
from math import log
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
pylab.ylim([-1, 3])
pylab.xlim([-4, 4])
[xs, ys] = sample_function(f, -5, 5, 0.1)
pyplot.plot(xs, ys, color='r')
[xs, ys] = sample_function(lambda x: F(x), -5, 5, 0.2)
pyplot.plot(xs, ys, color='g')
pyplot.show()
"""
Explanation: Sketch the graph of the derivative $F'$, and then add the graph of $F$ to the same figure.
The derivative is drawn in red, and the graph of the function $F$ in green.
End of explanation
"""
|
turbomanage/training-data-analyst | courses/machine_learning/deepdive2/text_classification/labs/automl_for_text_classification.ipynb | apache-2.0 | import os
from google.cloud import bigquery
import pandas as pd
%load_ext google.cloud.bigquery
"""
Explanation: AutoML for Text Classification
Learning Objectives
Learn how to create a text classification dataset for AutoML using BigQuery
Learn how to train AutoML to build a text classification model
Learn how to evaluate a model trained with AutoML
Learn how to predict on new test data with AutoML
Introduction
In this notebook, we will use AutoML for Text Classification to train a text model to recognize the source of article titles: New York Times, TechCrunch or GitHub.
In the first step, we will query a public dataset on BigQuery taken from Hacker News (an aggregator that displays tech-related headlines from various sources) to create our training set.
In the second step, we will use the AutoML UI to upload our dataset, train a text model on it, and evaluate the model we have just trained.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
"""
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
SEED = 0
%%bash
gsutil mb gs://$BUCKET
"""
Explanation: Replace the variable values in the cell below:
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
# TODO: Your code goes here.
FROM
# TODO: Your code goes here.
WHERE
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
LIMIT 10
"""
Explanation: Create a Dataset from BigQuery
Hacker News headlines are available as a BigQuery public dataset. The dataset contains all headlines from the site's inception in October 2006 until October 2015.
Lab Task 1a:
Complete the query below to create a sample dataset containing the url, title, and score of articles from the public dataset bigquery-public-data.hacker_news.stories. Use a WHERE clause to restrict to only those articles with
* title length greater than 10 characters
* score greater than 10
* url length greater than 0 characters
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.'))[OFFSET(1)] AS source,
# TODO: Your code goes here.
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '.*://(.[^/]+)/'), '.com$')
# TODO: Your code goes here.
GROUP BY
# TODO: Your code goes here.
ORDER BY num_articles DESC
LIMIT 100
"""
Explanation: Let's do some regular expression parsing in BigQuery to get the source of the newspaper article from the URL. For example, if the url is http://mobile.nytimes.com/...., I want to be left with <i>nytimes</i>
Lab task 1b:
Complete the query below to count the number of titles within each 'source' category. Note that to grab the 'source' of the article we use a regex on the url of the article. To count the number of articles you'll use a GROUP BY in SQL, and we'll also restrict our attention to only those articles whose title has more than 10 characters.
End of explanation
"""
regex = '.*://(.[^/]+)/'
sub_query = """
SELECT
title,
ARRAY_REVERSE(SPLIT(REGEXP_EXTRACT(url, '{0}'), '.'))[OFFSET(1)] AS source
FROM
`bigquery-public-data.hacker_news.stories`
WHERE
REGEXP_CONTAINS(REGEXP_EXTRACT(url, '{0}'), '.com$')
AND LENGTH(title) > 10
""".format(regex)
query = """
SELECT
LOWER(REGEXP_REPLACE(title, '[^a-zA-Z0-9 $.-]', ' ')) AS title,
source
FROM
({sub_query})
WHERE (source = 'github' OR source = 'nytimes' OR source = 'techcrunch')
""".format(sub_query=sub_query)
print(query)
"""
Explanation: Now that we have good parsing of the URL to get the source, let's put together a dataset of source and titles. This will be our labeled dataset for machine learning.
End of explanation
"""
bq = bigquery.Client(project=PROJECT)
title_dataset = bq.query(query).to_dataframe()
title_dataset.head()
"""
Explanation: For ML training, we usually need to split our dataset into training and evaluation datasets (and perhaps an independent test dataset if we are going to do model or feature selection based on the evaluation dataset). AutoML however figures out on its own how to create these splits, so we won't need to do that here.
End of explanation
"""
print("The full dataset contains {n} titles".format(n=len(title_dataset)))
"""
Explanation: AutoML for text classification requires that
* the dataset be in CSV form with
* the first column containing the texts to classify or a GCS path to the text
* the last column containing the text labels
The dataset we pulled from BigQuery satisfies these requirements.
End of explanation
"""
title_dataset.source.value_counts()
"""
Explanation: Let's make sure we have roughly the same number of examples for each of our three labels:
End of explanation
"""
DATADIR = './data/'
if not os.path.exists(DATADIR):
os.makedirs(DATADIR)
FULL_DATASET_NAME = 'titles_full.csv'
FULL_DATASET_PATH = os.path.join(DATADIR, FULL_DATASET_NAME)
# Let's shuffle the data before writing it to disk.
title_dataset = title_dataset.sample(n=len(title_dataset))
title_dataset.to_csv(
FULL_DATASET_PATH, header=False, index=False, encoding='utf-8')
"""
Explanation: Finally we will save our data, which is currently in-memory, to disk.
We will create a csv file containing the full dataset and another containing only 1000 articles for development.
Note: It may take a long time to train AutoML on the full dataset, so we recommend to use the sample dataset for the purpose of learning the tool.
End of explanation
"""
sample_title_dataset = # TODO: Your code goes here.
# TODO: Your code goes here.
"""
Explanation: Now let's sample 1000 articles from the full dataset and make sure we have enough examples for each label in our sample dataset (see here for further details on how to prepare data for AutoML).
Lab Task 1c:
Use .sample to create a sample dataset of 1,000 articles from the full dataset. Use .value_counts to see how many articles are contained in each of the three source categories.
End of explanation
"""
SAMPLE_DATASET_NAME = 'titles_sample.csv'
SAMPLE_DATASET_PATH = os.path.join(DATADIR, SAMPLE_DATASET_NAME)
sample_title_dataset.to_csv(
SAMPLE_DATASET_PATH, header=False, index=False, encoding='utf-8')
sample_title_dataset.head()
%%bash
gsutil cp data/titles_sample.csv gs://$BUCKET
"""
Explanation: Let's write the sample datatset to disk.
End of explanation
"""
|
google-research/vision_transformer | lit.ipynb | apache-2.0 | # Installs the vit_jax package from Github.
!pip install -q git+https://github.com/google-research/vision_transformer
import jax
import jax.numpy as jnp
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow_datasets as tfds
import tqdm
from vit_jax import models
# Currently available LiT models
[name for name in models.model_configs.MODEL_CONFIGS if name.startswith('LiT')]
model_name = 'LiT-B16B'
lit_model = models.get_model(model_name)
# Loading the variables from cloud can take a while the first time...
lit_variables = lit_model.load_variables()
# Creating tokens from freeform text (see next section).
tokenizer = lit_model.get_tokenizer()
# Resizing images & converting value range to -1..1 (see next section).
image_preprocessing = lit_model.get_image_preprocessing()
# Preprocessing op for use in tfds pipeline (see last section).
pp = lit_model.get_pp()
"""
Explanation: See code at https://github.com/google-research/vision_transformer/
This Colab is about the paper
LiT: Zero-Shot Transfer with Locked-image text Tuning: https://arxiv.org/abs/2111.07991
For ViT, MLP Mixer etc see the other Colab
https://colab.research.google.com/github/google-research/vision_transformer/blob/main/vit_jax.ipynb
Load model
End of explanation
"""
# Let's load some sample images from tfds.
# Alternatively you can also load these images from the internet / your Drive.
ds = tfds.load('imagenette', split='train')
images_list = [
example['image'].numpy()
for _, example in zip(range(5), ds)
]
# Note that this is a list of images with different shapes, not a four
# dimensional tensor.
[image.shape for image in images_list]
# Note that our preprocessing converts to floats ranging from -1..1 !
images = image_preprocessing(images_list)
images.shape, images.min(), images.max()
plt.figure(figsize=(15, 4))
plt.imshow(np.hstack(images) * .5 + .5)
plt.axis('off');
texts = [
'itap of a cd player',
'a photo of a truck',
'gas station',
'chainsaw',
'a bad photo of colorful houses',
]
tokens = tokenizer(texts)
tokens.shape
# Embed both texts and images with a single model call.
# See next section for embedding images/texts separately.
zimg, ztxt, out = lit_model.apply(lit_variables, images=images, tokens=tokens)
plt.imshow(ztxt @ zimg.T)
probs = np.array(jax.nn.softmax(out['t'] * ztxt @ zimg.T, axis=1))
pd.DataFrame(probs, index=texts).style.background_gradient('Greens', vmin=0, vmax=1).format('{:.2%}')
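# A small follow-up sketch (not in the original notebook): for each of the five
# images, print the prompt with the highest image-text similarity.
sims_txt_img = np.array(ztxt) @ np.array(zimg).T  # shape (num_texts, num_images)
for img_idx, txt_idx in enumerate(sims_txt_img.argmax(axis=0)):
  print(f'image {img_idx}: best prompt -> "{texts[txt_idx]}"')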
"""
Explanation: Use model
End of explanation
"""
# Load dataset and create array of class names.
builder = tfds.builder('cifar100')
builder.download_and_prepare()
ds_test = builder.as_dataset('test')
info = builder.info
classnames = [
info.features['label'].int2str(id_)
for id_ in range(info.features['label'].num_classes)
]
classnames[:10]
# "best prompts" from CLIP paper (https://arxiv.org/abs/2103.00020)
PROMPTS = [
'itap of a {}.',
'a bad photo of the {}.',
'a origami {}.',
'a photo of the large {}.',
'a {} in a video game.',
'art of the {}.',
'a photo of the small {}.',
'{}',
]
texts = [
prompt.format(classname)
for classname in classnames
for prompt in PROMPTS
]
len(texts)
# Tokenize the texts using numpy like before.
tokens = tokenizer(texts)
tokens.shape
_, ztxt, _ = lit_model.apply(lit_variables, tokens=tokens)
ztxt.shape
# `pp` from above (section "Load model") is a TensorFlow graph that can
# efficiently be added to the input pre-processing.
imgs = next(iter(ds_test.map(pp).batch(4)))['image']
# Note that `pp` would also tokenize "texts" to "tokens", if such a feature was
# present in the dataset (which is not the case for cifar).
plt.figure(figsize=(15, 4))
plt.imshow(np.hstack(imgs) * .5 + .5)
plt.axis('off');
# JIT-compile image embedding function because there are lots of images.
@jax.jit
def embed_images(variables, images):
zimg, _, _ = lit_model.apply(variables, images=images)
return zimg
# Compute all images embeddings & collect correct labels.
zimgs = []
labels = []
for batch in tqdm.tqdm(ds_test.map(lit_model.get_pp()).batch(500)):
labels += list(batch['label'].numpy())
zimg = embed_images(lit_variables, batch['image'].numpy())
zimgs.append(np.array(zimg))
zimgs = np.concatenate(zimgs)
zimgs.shape
# Compute similarities ...
sims = zimgs @ ztxt.reshape([len(classnames), len(PROMPTS), -1]).mean(axis=1).T
sims.shape
# ... and use most similar embedding to predict label.
(sims.argmax(axis=1) == np.array(labels)).mean()
# Expected accuracy for model "LiT-B16B" : 79.19
"""
Explanation: tfds zero-shot evaluation
End of explanation
"""
|
google-research/google-research | pairwise_fairness/monotone.ipynb | apache-2.0 | import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Tensorflow modules.
import tensorflow as tf
# Install Tensorflow Lattice and Tensorflow Constrained Optimization libraries.
!pip install tensorflow_lattice
!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
# Tensor flow lattice modules.
import tensorflow_lattice as tfl
# Tensorflow constrained optimization modules.
import tensorflow_constrained_optimization as tfco
"""
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
"""
np.random.seed(123456)
# Generate 100 examples. Each example has a score, label and a binary group.
num_examples = 100
# Generate labels from a uniform distribution, and
# the associated protected groups from a Bernoulli(0.5).
labels = np.random.rand(num_examples)
groups = (np.random.rand(num_examples) > 0.5) * 1
# Generate scores by introducing Gaussian noise in the labels,
# and scale them to [0, 1].
scores = labels + np.random.normal(loc=0, scale=0.05, size=num_examples)
scores = (scores - scores.min()) / (scores.max() - scores.min())
# Additional noise for group 1 examples.
# Add uniform random noise to the group scores in the range [0.2, 0.8].
noise_low = 0.2
noise_high = 0.8
noise_indices = (groups == 1) & (scores >= noise_low) & (scores <= noise_high)
scores[noise_indices] = noise_low + (
np.random.rand(sum(noise_indices)) * (noise_high - noise_low))
"""
Explanation: Overview
In this notebook, we'll train a one-dimensional monotonic function with pairwise constraints for fairness.
Problem Setup: We will consider a simulated ranking task consisting of a set of examples represented by real-valued scores ${x_1, \ldots, x_n}$ and a set of "ground truth" labels ${y_1, \ldots, y_n}$ (with higher implying better). Each example is also associated with a binary protected group. The goal is to learn a real-valued function $f: \mathbb{R} \rightarrow \mathbb{R}$ on the scores that ranks examples with higher labels above those with lower labels. Additionally, we will impose a fairness goal on $f$ (loosely speaking, we will require $f$ to perform equally well on examples from both groups).
<br><br>
Pairwise Fairness: For measuring fairness, we will adopt the "pairwise" fairness criteria proposed in:
Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Serena Wang, "Pairwise Fairness for Ranking and Regression", AAAI 2020.
Henceforth, we will refer to this paper as [NCGW19].
<br><br>
The fairness criteria in [NCGW19] are defined in terms of the group-dependent pairwise accuracy for $f$:
$$
Acc_{G_i > G_j}(f) \,=\, \mathbb{P}( f(x) > f(x') \mid y > y', (x,y) \in G_i, (x',y') \in G_j ).
$$
We will use a slight variant of the above definition that assigns a value of 1/2 for ties:
$$
Acc_{G_i > G_j}(f) \,=\, \mathbb{P}( f(x) > f(x') \mid y > y', (x,y) \in G_i, (x',y') \in G_j ) \,+\,
\frac{1}{2}\mathbb{P}( f(x) = f(x') \mid y > y', (x,y) \in G_i, (x',y') \in G_j ).
$$
We will also be interested in the overall pairwise accuracy of $f$:
$$
A(f) \,=\, \mathbb{P}( f(x) > f(x') \mid y > y' ) \,+\, \frac{1}{2}\mathbb{P}( f(x) = f(x') \mid y > y' ).
$$
<br>
Monotone Function: We will restrict our attention to ranking functions $f$ that are monotonic in the scores $x$. While the learned monotone function cannot change the underlying ordering of examples, it can introduce ties in the ordering. As we will see, this can be helpful in enforcing our fairness goal.
Throughout this notebook, we will refer to $x$ as the original score for an example, and $f(x)$ as the prediction for the example.
Simulated Data
We generate a dataset with two groups. We generate the scores in such a way that they are more accurate for one of the groups.
End of explanation
"""
def plot_scores(x, y, groups, ax, xlabel, ylabel):
# Plots x vs. y using different markers for the group0 and group1 examples.
ax.plot(x[groups == 0], y[groups == 0], "ro", label="Group 0")
ax.plot(x[groups == 1], y[groups == 1], "bx", label="Group 1")
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_title(ylabel + " vs. " + xlabel)
ax.legend(loc="best")
ff.tight_layout()
# Plot scores as a function of labels.
ff, ax = plt.subplots(1, 1, figsize=(5, 5))
plot_scores(labels, scores, groups, ax, "Labels", "Scores")
"""
Explanation: Let us plot the generated data.
End of explanation
"""
# Create a pandas DataFrame with the data contents.
examples_df = pd.DataFrame()
examples_df = examples_df.assign(scores=scores, labels=labels, groups=groups,
merge_key=0)
# We have an additional merge_key column, which we will use to merge the
# data frame with itself, and enumerate all pairs of examples.
paired_df = examples_df.merge(examples_df.copy(), on="merge_key", how="outer",
suffixes=("_high", "_low"))
# Only retain pairs where labels_high > labels_low.
paired_df = paired_df[paired_df.labels_high > paired_df.labels_low]
# Create 2-d NumPy array containing the scores for the higher label examples
# in the first column and the scores for lower label examples in the second
# column. Similarly, create a 2-d NumPy array for the groups.
paired_scores = np.stack([paired_df.scores_high.values,
paired_df.scores_low.values], axis=1)
paired_labels = np.stack([paired_df.labels_high.values,
paired_df.labels_low.values], axis=1)
paired_groups = np.stack([paired_df.groups_high.values,
paired_df.groups_low.values], axis=1)
"""
Explanation: Notice that the original scores order the group 0 examples accurately, but are unreliable for the group 1 examples in the range [0.2, 0.8].
Can we learn a monotone transform on these scores that performs equally well on both groups? For example, would we be able to improve the performance on the group 1 examples by flattening the scores in the [0.2, 0.8] range, while preserving their relative ordering in other regions?
Formulate Pairs
Having generated the data, we enumerate all pairs of examples $(x, x')$ where the labels $y > y'$. By creating ordered example pairs, we can now treat the ranking problem as a classification problem on pairs.
End of explanation
"""
tf.reset_default_graph()
# We use a 1-d calibrator with 100 keypoints. A monotone function is constructed by
# linearly interpolating the values at these keypoints.
num_keypoints = 100
kp_inits = tfl.uniform_keypoints_for_signal(
num_keypoints=num_keypoints,
input_min=0.0,
input_max=1.0,
output_min=0,
output_max=1.0)
# Placeholder tensor for holding the input scores.
scores_tensor = tf.placeholder(tf.float32, shape=(None,), name="scores")
# We pass 1-d input array to the calibrator. Recall that we will eventually need
# to compute scores on pairs of examples (x, x'), i.e. on the paired_scores
# array. To do so, we will flatten paired_scores and
# pass it as a 1-d array (where the first half contains the "x" scores, and the
# second half contains the "x'" scores).
# Predictions from the calibrator on the input scores.
(predictions_tensor, projection_op, _) = tfl.calibration_layer(
uncalibrated_tensor=scores_tensor,
num_keypoints=num_keypoints,
monotonic=+1,
keypoints_initializers=kp_inits)
# Setting monotonic=+1 enforces that the calibrator is monotonic.
# Note that the returned projection_op is the projection operation for
# enforcing monotonicity.
# Since we will feed in a flattened array of scores as input, the
# predictions_tensor will also be one dimensional. The first half of this tensor
# will contain the scores for the "x" examples and the second half of the tensor
# will contain the scores for the "x'" examples. We will slice the
# predictions_tensor into two halves and compute the element-wise differences
# in scores between the first half and the second half, i.e. f(x) - f(x').
num_pairs = tf.cast(tf.shape(scores_tensor)[0] / 2, tf.int32)
prediction_diffs_tensor = (predictions_tensor[:num_pairs] -
predictions_tensor[num_pairs:])
"""
Explanation: Note that paired_scores and paired_groups are two-dimensional arrays, where each row represents an ordered pair of example. The first column in paired_scores contains the scores for the 'high label' examples in the pairs, and the second column contains the scores for the 'low label' examples in the pairs. Similarly, paired_groups contains the corresponding protected groups for the 'high label' and 'low label' examples in the pairs.
Monotone Model
We next model the ranking function $f$ as a monotone, one-dimensional calibrator. We use the calibration layer provided in the TF Lattice package to construct the monotone function, and compute the difference in function values $f(x) - f(x')$ on each pair of examples $(x, x')$. The pairwise accuracies can then be computed as classification rates on the differences in scores.
For more details on 1-D calibrators, please see the following <a href="http://jmlr.org/papers/v17/15-243.html"> paper</a>.
End of explanation
"""
target_labels_tensor = tf.ones(dtype=tf.float32, shape=(num_pairs,),
name="target_labels")
"""
Explanation: By creating ordered pairs of examples $(x, x')$, we can now frame the ranking problem as a classification task, with the goal of maximizing the fraction of pairs where the difference $f(x) - f(x')$ is positive. To this end, we define an all 1's tensor that holds the target label for each pair.
End of explanation
"""
subset0_predicate = tf.placeholder(tf.bool, shape=(None,), name="subset0")
subset1_predicate = tf.placeholder(tf.bool, shape=(None,), name="subset1")
"""
Explanation: We will also need placeholder tensors to identify subsets of example pairs on which we wish to impose fairness constraints. For this tutorial, we will impose constraints on two subsets of example pairs. We create tensors for holding boolean predicates for identifying these subsets.
End of explanation
"""
def group_pairwise_accuracy(prediction_diffs, paired_groups):
"""Returns the group-dependent pairwise accuracies.
Returns the group-dependent pairwise accuracies Acc_{G_i > G_j} for each pair
of groups G_i \in {0, 1} and G_j \in {0, 1}.
Args:
prediction_diffs: NumPy array of shape (#num_pairs,) containing the
differences in scores for each ordered pair of examples.
paired_groups: NumPy array of shape (#num_pairs, 2) containing the protected
groups for the better and worse example in each pair.
Returns:
A NumPy array of shape (2, 2) containing the pairwise accuracies, where the
ij-th entry contains Acc_{G_i > G_j}.
"""
accuracy_matrix = np.zeros((2, 2))
for group_high in [0, 1]:
for group_low in [0, 1]:
# Predicate for pairs where the better example is from group_high
# and the worse example is from group_low.
predicate = ((paired_groups[:, 0] == group_high) &
(paired_groups[:, 1] == group_low))
# Parwise accuracy Acc_{group_high > group_low}.
accuracy_matrix[group_high][group_low] = (
np.mean(prediction_diffs[predicate] > 0) +
0.5 * np.mean(prediction_diffs[predicate] == 0))
return accuracy_matrix
def overall_pairwise_accuracy(prediction_diffs):
# Returns overall pairwise accuracy for pairwise differences in predictions.
overall_accuracy = (np.mean(prediction_diffs > 0) +
0.5 * np.mean(prediction_diffs == 0))
return overall_accuracy
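# Illustrative sketch (not in the original notebook): on four toy pairs with
# prediction differences [0.3, 0.0, -0.2, 0.1], the pairwise accuracy is
# mean(diff > 0) + 0.5 * mean(diff == 0) = 0.5 + 0.5 * 0.25 = 0.625.
print(overall_pairwise_accuracy(np.array([0.3, 0.0, -0.2, 0.1])))  # 0.625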
"""
Explanation: Baseline: Original Scores
Before proceeding to train our model, let us first evaluate the performance of the original, untransformed scores.
Below, we provide functions for evaluating the group-dependent pairwise accuracy and the overall accuracy given the score differences on pairs of examples.
End of explanation
"""
prediction_diffs = paired_scores[:, 0] - paired_scores[:, 1]
overall_accuracy = overall_pairwise_accuracy(prediction_diffs)
print("Baseline: Overall pairwise accuracy = %.3f" % overall_accuracy)
pairwise_accuracy = group_pairwise_accuracy(prediction_diffs, paired_groups)
print("Baseline: Group-dependent pairwise accuracies"
"(rows=better, columns=worse)")
pairwise_accuracy_df = pd.DataFrame(
pairwise_accuracy,
columns=["Group 0", "Group 1"],
index=["Group 0", "Group 1"]).round(decimals=3)
print(pairwise_accuracy_df)
"""
Explanation: We evaluate the performance of ranking with the original scores.
End of explanation
"""
# We set up the constrained optimization problem using the TF constrained
# optimization library.
# Set up context object for the entire data
# (for evaluating performance based on the pairwise prediction differences and
# and the target labels).
context_overall = tfco.rate_context(prediction_diffs_tensor,
target_labels_tensor)
# Set up context objects for the subsets:
# G0>G1 pairs denoted as subset0 and G1>G0 pairs denoted as subset1.
context_subset0 = context_overall.subset(subset0_predicate)
context_subset1 = context_overall.subset(subset1_predicate)
# The subset predicates will be fed in during training.
# Set up the objective and constraints in terms of error rates.
# (while the definitions in the notebook used "accuracy rates", we will find
# it convenient in the implementation to instead use "error rates")
# The objective is to minimize the error rate on all pairs.
objective = tfco.error_rate(context_overall)
# Since the target labels are all 1's, minimizing the error rate on the
# prediction differences is the same as maximizing overall pairwise accuracy.
constraints = [
tfco.error_rate(context_subset0) <= tfco.error_rate(context_subset1) + 0.01,
tfco.error_rate(context_subset1) <= tfco.error_rate(context_subset0) + 0.01]
# We constrain the difference between the error rate on the G0>G1 pairs and the
# error rate on the G1>G0 pairs to be within 0.01
# (this is equivalent to constraining the accuracy rates on the two subsets).
# Set up a rate minimization problem.
problem = tfco.RateMinimizationProblem(objective, constraints)
# Set up the optimizer and get `train_op` for gradient updates.
solver = tf.train.AdamOptimizer(learning_rate=0.1)
optimizer = tfco.ProxyLagrangianOptimizerV1(optimizer=solver)
train_op = optimizer.minimize(problem)
"""
Explanation: Note that $Acc_{G0 > G1}$ is significantly higher than $Acc_{G1 > G0}$. This implies that the original scores are more effective in ranking the "better" examples from group 0 above the "worse" examples from group 1, when compared to ranking the "better" examples from group 1 above the worse examples from group 0.
In the following, we will seek to remove this discrepancy by learning a monotone function on the scores under pairwise constraints for fairness.
Proposed Approach: Constrained Optimization
We seek to maximize the overall pairwise accuracy subject to the constraint that $Acc_{G1 > G0}$ and $Acc_{G0 > G1}$ can differ by at most 0.01.
$$
\max_f \;\; A(f) \quad \text{s.t.} \quad |Acc_{G1 > G0}(f) \,-\, Acc_{G0 > G1}(f)| \,\leq\, 0.01
$$
The constraint that we enforce here is a relaxation of the cross-group pairwise equal opportunity criteria in [NCGW19].
End of explanation
"""
# Start TF session and initialize variables.
session = tf.Session()
tf.set_random_seed(654321) # Set random seed for reproducibility.
session.run(tf.global_variables_initializer())
# Dictionary of values to be fed to the placeholder tensors.
feed_dict = {
scores_tensor: paired_scores.T.reshape(-1,),
subset0_predicate: (paired_groups[:, 0] == 0) & (paired_groups[:, 1] == 1),
subset1_predicate: (paired_groups[:, 0] == 1) & (paired_groups[:, 1] == 0)
}
# Scores: As mentioned earlier, we flatten the paired_scores so that the scores
# for the higher label (better) examples in the pairs are arranged first, and
# those for the lower label (worse) examples in the pairs come next.
# Predicates: subset0_predicate select pairs where the better example
# is from group 0 and the worse example is from group 1. subset1_predicate
# select pairs where the better example is from group 1 and the worse
# example is from group 0.
# We maintain a list of objectives, constraint violations, predictions and
# overall accuracies, and pairwise accuracies during the course of training.
objectives = []
violations = []
predictions = []
overall_accuracies = []
pairwise_accuracies = []
# Perform 250 full gradient updates.
for ii in range(250):
# Gradient updates.
session.run(train_op, feed_dict=feed_dict)
# Projection step.
session.run(projection_op)
# Checkpoint once in 10 iterations.
if ii % 10 == 0:
# Objective and constraint violations.
objective, violation = session.run(
(problem.objective(), problem.constraints()), feed_dict=feed_dict)
objectives.append(objective)
violations.append(violation)
# Pairwise prediction differences and overall and group pairwise accuracies.
prediction_diffs = session.run(
prediction_diffs_tensor,
feed_dict={scores_tensor: paired_scores.T.reshape(-1,)})
# Note that we feed in the "paired" scores, flattened to a 1-d array.
overall_acc = overall_pairwise_accuracy(prediction_diffs)
overall_accuracies.append(overall_acc)
pairwise_acc = group_pairwise_accuracy(prediction_diffs, paired_groups)
pairwise_accuracies.append(pairwise_acc)
# Predictions on individual examples
# (needed later for plotting the trained monotone function).
prediction = session.run(predictions_tensor,
feed_dict={scores_tensor: scores})
# Note that we feed in the individual scores (not the paired ones).
predictions.append(prediction)
session.close()
# Use the recorded objectives and constraints to find the best iterate.
best_iterate = tfco.find_best_candidate_index(np.array(objectives),
np.array(violations))
print("Constrained Opt: Overall pairwise accuracy = %.3f"
% overall_accuracies[best_iterate])
print("Constrained Opt: Group-dependent pairwise accuracies"
"(rows=better, columns=worse)")
pairwise_accuracies_df = pd.DataFrame(
pairwise_accuracies[best_iterate],
columns=["Group 0", "Group 1"],
index=["Group 0", "Group 1"]).round(decimals=3)
print(pairwise_accuracies_df)
"""
Explanation: We are now ready to train our model (this may take a few seconds to run).
End of explanation
"""
ff, ax = plt.subplots(1, 2, figsize=(10, 5))
plot_scores(scores, predictions[best_iterate], groups, ax[0],
"Scores", "Predictions")
plot_scores(labels, predictions[best_iterate], groups, ax[1],
"Labels", "Predictions")
ff.tight_layout()
"""
Explanation: By imposing explicit constraints, we are able to ensure that the cross-group pairwise accuracies are within 0.01 of each other. The trained monotone function achieves this by improving on the accuracy for the $G_1 > G_0$ pairs but at the cost of lowering the accuracy for the $G_0 > G_1$ pairs. The overall accuracy is similar to that of the original untransformed scores.
We plot the learned monotone function $f(x)$ as a function of the scores $x$ and as a function of the ground-truth labels $y$.
End of explanation
"""
|
quantumlib/Cirq | docs/tutorials/variational_algorithm.ipynb | apache-2.0 | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2020 The Cirq Developers
End of explanation
"""
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
import cirq
"""
Explanation: Quantum variational algorithm
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/variational_algorithm"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/variational_algorithm.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
In this tutorial, we use the variational quantum eigensolver (VQE) in Cirq to optimize a simple Ising model.
End of explanation
"""
# define the length and width of the grid.
length = 3
# define qubits on the grid.
qubits = cirq.GridQubit.square(length)
print(qubits)
"""
Explanation: Background: Variational Quantum Algorithm
The variational method in quantum theory is a classical method for finding low energy states of a quantum system. The rough idea of this method is that one defines a trial wave function (sometimes called an ansatz) as a function of some parameters, and then one finds the values of these parameters that minimize the expectation value of the energy with respect to these parameters. This minimized ansatz is then an approximation to the lowest energy eigenstate, and the expectation value serves as an upper bound on the energy of the ground state.
In the last few years (see arXiv:1304.3061 and arXiv:1507.08969, for example), it has been realized that quantum computers can mimic the classical technique and that a quantum computer does so with certain advantages. In particular, when one applies the classical variational method to a system of $n$ qubits, an exponential number (in $n$) of complex numbers is necessary to generically represent the wave function of the system. However, with a quantum computer, one can directly produce this state using a parameterized quantum circuit with less than exponential parameters, and then use repeated measurements to estimate the expectation value of the energy.
This idea has led to a class of algorithms known as variational quantum algorithms. Indeed this approach is not just limited to finding low energy eigenstates, but minimizing any objective function that can be expressed as a quantum observable. It is an open question to identify under what conditions these quantum variational algorithms will succeed, and exploring this class of algorithms is a key part of the research for noisy intermediate scale quantum computers.
The classical problem we will focus on is the 2D +/- Ising model with transverse field (ISING). This problem is NP-complete. It is highly unlikely that quantum computers will be able to efficiently solve it across all instances because it is generally believed that quantum computers cannot solve all NP-complete problems in polynomial time. Yet this type of problem is illustrative of the general class of problems that Cirq is designed to tackle.
Let's define the problem. Consider the energy function
$E(s_1,\dots,s_n) = \sum_{\langle i,j \rangle} J_{i,j}s_i s_j + \sum_i h_i s_i$
where here each $s_i, J_{i,j}$, and $h_i$ are either +1 or -1. Here each index i is associated with a bit on a square lattice, and the $\langle i,j \rangle$ notation means sums over neighboring bits on this lattice. The problem we would like to solve is, given $J_{i,j}$, and $h_i$, find an assignment of $s_i$ values that minimize $E$.
How does a variational quantum algorithm work for this? One approach is to consider $n$ qubits and associate them with each of the bits in the classical problem. This maps the classical problem onto the quantum problem of minimizing the expectation value of the observable
$H=\sum_{\langle i,j \rangle} J_{i,j} Z_i Z_j + \sum_i h_iZ_i$
Then one defines a set of parameterized quantum circuits, i.e., a quantum circuit where the gates (or more general quantum operations) are parameterized by some values. This produces an ansatz state
$|\psi(p_1, p_2, \dots, p_k)\rangle$
where $p_i$ are the parameters that produce this state (here we assume a pure state, but mixed states are of course possible).
The variational algorithm then works by noting that one can obtain the value of the objective function for a given ansatz state by
Prepare the ansatz state.
Make a measurement which samples from some terms in H.
Goto 1.
Note that one cannot always measure $H$ directly (without the use of quantum phase estimation). So one often relies on the linearity of expectation values to measure parts of $H$ in step 2. One always needs to repeat the measurements to obtain an estimate of the expectation value. How many measurements are needed to achieve a given accuracy is beyond the scope of this tutorial, but Cirq can help investigate this question.
The above shows that one can use a quantum computer to obtain estimates of the objective function for the ansatz. This can then be used in an outer loop to try to obtain parameters for the lowest value of the objective function. For these best parameters, one can then use the best ansatz to produce samples of solutions to the problem, which hopefully give a good approximation of the lowest possible value of the objective function.
Create a circuit on a Grid
To build the above variational quantum algorithm using Cirq, one begins by building the appropriate circuit. Because the problem we have defined has a natural structure on a grid, we will use Cirq’s built-in cirq.GridQubits as our qubits. We will demonstrate some of how this works in an interactive Python environment, the following code can be run in series in a Python environment where you have Cirq installed. For more about circuits and how to create them, see the Tutorial or the Circuits page.
End of explanation
"""
circuit = cirq.Circuit()
circuit.append(cirq.H(q) for q in qubits if (q.row + q.col) % 2 == 0)
circuit.append(cirq.X(q) for q in qubits if (q.row + q.col) % 2 == 1)
print(circuit)
"""
Explanation: Here we see that we've created a bunch of cirq.GridQubits, which have a row and column, indicating their position on a grid.
Now that we have some qubits, let us construct a cirq.Circuit on these qubits. For example, suppose we want to apply the Hadamard gate cirq.H to every qubit whose row index plus column index is even, and an cirq.X gate to every qubit whose row index plus column index is odd. To do this, we write:
End of explanation
"""
def rot_x_layer(length, half_turns):
"""Yields X rotations by half_turns on a square grid of given length."""
# Define the gate once and then re-use it for each Operation.
rot = cirq.XPowGate(exponent=half_turns)
# Create an X rotation Operation for each qubit in the grid.
for i in range(length):
for j in range(length):
yield rot(cirq.GridQubit(i, j))
# Create the circuit using the rot_x_layer generator
circuit = cirq.Circuit()
circuit.append(rot_x_layer(2, 0.1))
print(circuit)
"""
Explanation: Creating the Ansatz
One convenient pattern is to use a python Generator for defining sub-circuits or layers in our algorithm. We will define a function that takes in the relevant parameters and then yields the operations for the sub-circuit, and then this can be appended to the cirq.Circuit:
End of explanation
"""
import random
def rand2d(rows, cols):
return [[random.choice([+1, -1]) for _ in range(cols)] for _ in range(rows)]
def random_instance(length):
# transverse field terms
h = rand2d(length, length)
# links within a row
jr = rand2d(length - 1, length)
# links within a column
jc = rand2d(length, length - 1)
return (h, jr, jc)
h, jr, jc = random_instance(3)
print(f'transverse fields: {h}')
print(f'row j fields: {jr}')
print(f'column j fields: {jc}')
"""
Explanation: Another important concept here is that the rotation gate is specified in half turns ($ht$). For a rotation about X, the gate is:
$\cos(ht * \pi) I + i \sin(ht * \pi) X$
There is a lot of freedom defining a variational ansatz. Here we will do a variation on a QAOA strategy and define an ansatz related to the problem we are trying to solve.
First, we need to choose how the instances of the problem are represented. These are the values $J$ and $h$ in the Hamiltonian definition. We represent them as two-dimensional arrays (lists of lists). For $J$ we use two such lists, one for the row links and one for the column links.
Here is a snippet that we can use to generate random problem instances:
End of explanation
"""
def prepare_plus_layer(length):
for i in range(length):
for j in range(length):
yield cirq.H(cirq.GridQubit(i, j))
"""
Explanation: In the code above, the actual values will be different for each individual run because they are using random.choice.
Given this definition of the problem instance, we can now introduce our ansatz. It will consist of one step of a circuit made up of:
Apply an initial mixing step that puts all the qubits into the $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ state.
End of explanation
"""
def rot_z_layer(h, half_turns):
"""Yields Z rotations by half_turns conditioned on the field h."""
gate = cirq.ZPowGate(exponent=half_turns)
for i, h_row in enumerate(h):
for j, h_ij in enumerate(h_row):
if h_ij == 1:
yield gate(cirq.GridQubit(i, j))
"""
Explanation: Apply a cirq.ZPowGate for the same parameter for all qubits where the transverse field term $h$ is $+1$.
End of explanation
"""
def rot_11_layer(jr, jc, half_turns):
"""Yields rotations about |11> conditioned on the jr and jc fields."""
cz_gate = cirq.CZPowGate(exponent=half_turns)
for i, jr_row in enumerate(jr):
for j, jr_ij in enumerate(jr_row):
q = cirq.GridQubit(i, j)
q_1 = cirq.GridQubit(i + 1, j)
if jr_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
yield cz_gate(q, q_1)
if jr_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
for i, jc_row in enumerate(jc):
for j, jc_ij in enumerate(jc_row):
q = cirq.GridQubit(i, j)
q_1 = cirq.GridQubit(i, j + 1)
if jc_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
yield cz_gate(q, q_1)
if jc_ij == -1:
yield cirq.X(q)
yield cirq.X(q_1)
"""
Explanation: Apply a cirq.CZPowGate for the same parameter between all qubits where the coupling field term $J$ is $+1$. If the field is $-1$, apply cirq.CZPowGate conjugated by $X$ gates on all qubits.
End of explanation
"""
def initial_step(length):
yield prepare_plus_layer(length)
def one_step(h, jr, jc, x_half_turns, h_half_turns, j_half_turns):
length = len(h)
yield rot_z_layer(h, h_half_turns)
yield rot_11_layer(jr, jc, j_half_turns)
yield rot_x_layer(length, x_half_turns)
h, jr, jc = random_instance(3)
circuit = cirq.Circuit()
circuit.append(initial_step(len(h)))
circuit.append(one_step(h, jr, jc, 0.1, 0.2, 0.3))
print(circuit)
"""
Explanation: Apply a cirq.XPowGate for the same parameter for all qubits. This is the method rot_x_layer we have written above.
Putting it all together, we can create a step that uses just three parameters. Below is the code, which uses the generator for each of the layers (note to advanced Python users: this code does not contain a bug in using yield due to the auto flattening of the OP_TREE concept. Typically, one would want to use yield from here, but this is not necessary):
End of explanation
"""
simulator = cirq.Simulator()
circuit = cirq.Circuit()
circuit.append(initial_step(len(h)))
circuit.append(one_step(h, jr, jc, 0.1, 0.2, 0.3))
circuit.append(cirq.measure(*qubits, key='x'))
results = simulator.run(circuit, repetitions=100)
print(results.histogram(key='x'))
"""
Explanation: Here we see that we have chosen particular parameter values $(0.1, 0.2, 0.3)$.
Simulation
In Cirq, the simulators make a distinction between a run and a simulation. A run only allows for a simulation that mimics the actual quantum hardware. For example, it does not allow for access to the amplitudes of the wave function of the system, since that is not experimentally accessible. Simulate commands, however, are broader and allow different forms of simulation. When prototyping small circuits, it is useful to execute simulate methods, but one should be wary of relying on them when running against actual hardware.
End of explanation
"""
import numpy as np
def energy_func(length, h, jr, jc):
def energy(measurements):
# Reshape measurement into array that matches grid shape.
meas_list_of_lists = [measurements[i * length:(i + 1) * length]
for i in range(length)]
# Convert true/false to +1/-1.
pm_meas = 1 - 2 * np.array(meas_list_of_lists).astype(np.int32)
tot_energy = np.sum(pm_meas * h)
for i, jr_row in enumerate(jr):
for j, jr_ij in enumerate(jr_row):
tot_energy += jr_ij * pm_meas[i, j] * pm_meas[i + 1, j]
for i, jc_row in enumerate(jc):
for j, jc_ij in enumerate(jc_row):
tot_energy += jc_ij * pm_meas[i, j] * pm_meas[i, j + 1]
return tot_energy
return energy
print(results.histogram(key='x', fold_func=energy_func(3, h, jr, jc)))
"""
Explanation: Note that we have run the simulation 100 times and produced a histogram of the counts of the measurement results. What are the keys in the histogram counter? Note that we have passed in the order of the qubits. This ordering is then used to translate the order of the measurement results to a register using a big endian representation.
For our optimization problem, we want to calculate the value of the objective function for a given result run. One way to do this is using the raw measurement data from the result of simulator.run. Another way to do this is to provide to the histogram a method to calculate the objective: this will then be used as the key for the returned Counter.
End of explanation
"""
def obj_func(result):
energy_hist = result.histogram(key='x', fold_func=energy_func(3, h, jr, jc))
return np.sum([k * v for k,v in energy_hist.items()]) / result.repetitions
print(f'Value of the objective function {obj_func(results)}')
"""
Explanation: One can then calculate the expectation value over all repetitions:
End of explanation
"""
import sympy
circuit = cirq.Circuit()
alpha = sympy.Symbol('alpha')
beta = sympy.Symbol('beta')
gamma = sympy.Symbol('gamma')
circuit.append(initial_step(len(h)))
circuit.append(one_step(h, jr, jc, alpha, beta, gamma))
circuit.append(cirq.measure(*qubits, key='x'))
print(circuit)
"""
Explanation: Parameterizing the Ansatz
Now that we have constructed a variational ansatz and shown how to simulate it using Cirq, we can think about optimizing the value.
On quantum hardware, one would most likely want to have the optimization code as close to the hardware as possible. As the classical hardware that is allowed to inter-operate with the quantum hardware becomes better specified, this language will be better defined. Without this specification, however, Cirq also provides a useful concept for optimizing the looping in many optimization algorithms. This is the fact that many of the values in the gate sets can, instead of being specified by a float, be specified by a sympy.Symbol, and this sympy.Symbol can be substituted for a value specified at execution time.
Luckily for us, we have written our code so that using parameterized values is as simple as passing sympy.Symbol objects where we previously passed float values.
End of explanation
"""
resolver = cirq.ParamResolver({'alpha': 0.1, 'beta': 0.3, 'gamma': 0.7})
resolved_circuit = cirq.resolve_parameters(circuit, resolver)
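# A short follow-up sketch (not in the original tutorial): run the resolved
# circuit to check the objective value at these fixed parameters.
resolved_results = simulator.run(resolved_circuit, repetitions=100)
print(f'Objective value at alpha=0.1, beta=0.3, gamma=0.7: {obj_func(resolved_results)}')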
"""
Explanation: Note now that the circuit's gates are parameterized.
Parameters are specified at runtime using a cirq.ParamResolver, which is just a dictionary from Symbol keys to runtime values.
For instance:
End of explanation
"""
sweep = (cirq.Linspace(key='alpha', start=0.1, stop=0.9, length=5)
* cirq.Linspace(key='beta', start=0.1, stop=0.9, length=5)
* cirq.Linspace(key='gamma', start=0.1, stop=0.9, length=5))
results = simulator.run_sweep(circuit, params=sweep, repetitions=100)
for result in results:
print(result.params.param_dict, obj_func(result))
"""
Explanation: resolves the parameters to actual values in the circuit.
Cirq also has the concept of a sweep. A sweep is a collection of parameter resolvers. This runtime information is very useful when one wants to run many circuits for many different parameter values. Sweeps can be created to specify values directly (this is one way to get classical information into a circuit), or through a variety of helper methods. For example, suppose we want to evaluate our circuit over an equally spaced grid of parameter values. We can easily create this using cirq.Linspace.
End of explanation
"""
sweep_size = 10
sweep = (cirq.Linspace(key='alpha', start=0.0, stop=1.0, length=sweep_size)
* cirq.Linspace(key='beta', start=0.0, stop=1.0, length=sweep_size)
* cirq.Linspace(key='gamma', start=0.0, stop=1.0, length=sweep_size))
results = simulator.run_sweep(circuit, params=sweep, repetitions=100)
min = None
min_params = None
for result in results:
value = obj_func(result)
if min is None or value < min:
min = value
min_params = result.params
print(f'Minimum objective value is {min}.')
"""
Explanation: Finding the Minimum
Now we have all the code, we do a simple grid search over values to find a minimal value. Grid search is not the best optimization algorithm, but is here simply illustrative.
End of explanation
"""
|
tensorflow/docs-l10n | site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb | apache-2.0 | !pip install -q opencv-python
import os
import tensorflow.compat.v2 as tf
import tensorflow_hub as hub
import numpy as np
import cv2
from IPython import display
import math
"""
Explanation: Text-to-Video retrieval with S3D MIL-NCE
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/text_to_video_retrieval_with_s3d_milnce"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/hub/tutorials/text_to_video_retrieval_with_s3d_milnce.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
End of explanation
"""
# Load the model once from TF-Hub.
hub_handle = 'https://tfhub.dev/deepmind/mil-nce/s3d/1'
hub_model = hub.load(hub_handle)
def generate_embeddings(model, input_frames, input_words):
"""Generate embeddings from the model from video frames and input words."""
# Input_frames must be normalized in [0, 1] and of the shape Batch x T x H x W x 3
vision_output = model.signatures['video'](tf.constant(tf.cast(input_frames, dtype=tf.float32)))
text_output = model.signatures['text'](tf.constant(input_words))
return vision_output['video_embedding'], text_output['text_embedding']
# @title Define video loading and visualization functions { display-mode: "form" }
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(video_url, max_frames=32, resize=(224, 224)):
path = tf.keras.utils.get_file(os.path.basename(video_url)[-128:], video_url)
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
frames = np.array(frames)
if len(frames) < max_frames:
n_repeat = int(math.ceil(max_frames / float(len(frames))))
frames = frames.repeat(n_repeat, axis=0)
frames = frames[:max_frames]
return frames / 255.0
def display_video(urls):
html = '<table>'
html += '<tr><th>Video 1</th><th>Video 2</th><th>Video 3</th></tr><tr>'
for url in urls:
html += '<td>'
html += '<img src="{}" height="224">'.format(url)
html += '</td>'
html += '</tr></table>'
return display.HTML(html)
def display_query_and_results_video(query, urls, scores):
"""Display a text query and the top result videos and scores."""
sorted_ix = np.argsort(-scores)
html = ''
html += '<h2>Input query: <i>{}</i> </h2><div>'.format(query)
html += 'Results: <div>'
html += '<table>'
html += '<tr><th>Rank #1, Score:{:.2f}</th>'.format(scores[sorted_ix[0]])
html += '<th>Rank #2, Score:{:.2f}</th>'.format(scores[sorted_ix[1]])
html += '<th>Rank #3, Score:{:.2f}</th></tr><tr>'.format(scores[sorted_ix[2]])
for i, idx in enumerate(sorted_ix):
url = urls[sorted_ix[i]];
html += '<td>'
html += '<img src="{}" height="224">'.format(url)
html += '</td>'
html += '</tr></table>'
return html
# @title Load example videos and define text queries { display-mode: "form" }
video_1_url = 'https://upload.wikimedia.org/wikipedia/commons/b/b0/YosriAirTerjun.gif' # @param {type:"string"}
video_2_url = 'https://upload.wikimedia.org/wikipedia/commons/e/e6/Guitar_solo_gif.gif' # @param {type:"string"}
video_3_url = 'https://upload.wikimedia.org/wikipedia/commons/3/30/2009-08-16-autodrift-by-RalfR-gif-by-wau.gif' # @param {type:"string"}
video_1 = load_video(video_1_url)
video_2 = load_video(video_2_url)
video_3 = load_video(video_3_url)
all_videos = [video_1, video_2, video_3]
query_1_video = 'waterfall' # @param {type:"string"}
query_2_video = 'playing guitar' # @param {type:"string"}
query_3_video = 'car drifting' # @param {type:"string"}
all_queries_video = [query_1_video, query_2_video, query_3_video]
all_videos_urls = [video_1_url, video_2_url, video_3_url]
display_video(all_videos_urls)
"""
Explanation: Importing the TF-Hub model
This tutorial demonstrates how to use the S3D MIL-NCE model from TensorFlow Hub to perform text-to-video retrieval, i.e. to find the videos most similar to a given text query.
The model has two signatures: one for generating video embeddings and one for generating text embeddings. We will use these embeddings to find nearest neighbors (NN) in the embedding space.
End of explanation
"""
# Prepare video inputs.
videos_np = np.stack(all_videos, axis=0)
# Prepare text input.
words_np = np.array(all_queries_video)
# Generate the video and text embeddings.
video_embd, text_embd = generate_embeddings(hub_model, videos_np, words_np)
# Scores between video and text is computed by dot products.
all_scores = np.dot(text_embd, tf.transpose(video_embd))
# Display results.
html = ''
for i, words in enumerate(words_np):
html += display_query_and_results_video(words, all_videos_urls, all_scores[i, :])
html += '<br>'
display.HTML(html)
"""
Explanation: Demonstrating text-to-video retrieval
End of explanation
"""
|
crowd-course/datascience | 4-regression/4.3 - Regularization and Model Evaluation.ipynb | mit | import pandas as pd
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (10, 10)
"""
Explanation: Optimizing the Models
Welcome to the practical section of module 4.3. Here we'll continue with the advertising-sales dataset to investigate the ideas of regularization and model evaluation. We'll continue with the multivariate regression model we built in the previous module; we'll look into tuning the regularization parameter to achieve the most accurate model, and we'll evaluate this accuracy using better metrics than the MSE we have been using in the previous modules.
End of explanation
"""
def scale_features(X, scalar=None):
if(len(X.shape) == 1):
X = X.reshape(-1, 1)
if scalar == None:
scalar = StandardScaler()
scalar.fit(X)
return scalar.transform(X), scalar
# get the advertising data set
dataset = pd.read_csv('../datasets/Advertising.csv')
dataset = dataset[["TV", "Radio", "Newspaper", "Sales"]] # filtering the Unamed index column out of the dataset
dataset_size = len(dataset)
training_size = np.floor(dataset_size * 0.6).astype(int)
validation_size = np.floor(dataset_size * 0.2).astype(int)
# First we split the dataset into three parts: training, validation and test
X_training = dataset[["TV", "Newspaper"]][:training_size]
y_training = dataset["Sales"][:training_size]
X_validation = dataset[["TV", "Newspaper"]][training_size:training_size + validation_size]
y_validation = dataset["Sales"][training_size:training_size + validation_size]
X_test = dataset[["TV", "Newspaper"]][training_size + validation_size:]
y_test = dataset["Sales"][training_size + validation_size:]
# Second we apply feature scaling on X_training and X_test
X_training, training_scalar = scale_features(X_training)
X_validation,_ = scale_features(X_validation, scalar=training_scalar)
X_test,_ = scale_features(X_test, scalar=training_scalar)
model = SGDRegressor(loss='squared_loss')
model.fit(X_training, y_training)
w0 = model.intercept_
w1 = model.coef_[0] # Notice that model.coef_ is a list now not a single number
w2 = model.coef_[1]
print "Trained model: y = %0.2f + %0.2fx₁ + %0.2fx₂" % (w0, w1, w2)
MSE = np.mean((y_test - model.predict(X_test)) ** 2)
print "The Test Data MSE is: %0.3f" % (MSE)
"""
Explanation: Building the Model
In the following you'll see the same code (without visualization) we wrote in the previous module for the regression model using both TV and Newspaper data, so it's nothing new, except for the part where we prepare our data. We'll be splitting the dataset into three parts now instead of two:
* Training Set: we'll train the model on this
* Validation Set: we'll be tuning hyperparameters on this (more on that later)
* Test Set: we'll be evaluating our model on this
End of explanation
"""
model = SGDRegressor(loss='squared_loss', alpha=1)
model.fit(X_training, y_training)
w0 = model.intercept_
w1 = model.coef_[0] # Notice that model.coef_ is a list now not a single number
w2 = model.coef_[1]
print "Trained model: y = %0.2f + %0.2fx₁ + %0.2fx₂" % (w0, w1, w2)
MSE = np.mean((y_test - model.predict(X_test)) ** 2)
print "The Test Data MSE is: %0.3f" % (MSE)
"""
Explanation: L2 Regularization
From the videos, we learned that regularization is introduced to prevent the model from overfitting to the data points by adding a penalty for large weight values. This penalty is expressed mathematically as the second term of the cost function:
$$ J(W) = \sum_{i=1}^{m} (h_w(X^{(i)}) - y^{(i)})^2 + \lambda \sum_{j=1}^{n} w_j^2 $$
This is called L2 Regularization, and $\lambda$ is called the Regularization Parameter. How can we implement it with scikit-learn for our models?
Well, no worries: scikit-learn implements it for us, and we have been using it all along.
The SGDRegressor constructor has two arguments that define the behavior of the penalty:
* penalty: a string specifying the type of penalty to use (defaults to 'l2')
* alpha: the value of $\lambda$ in the equation above
Now let's play with the value of alpha and see how it affects our model's accuracy. Let's set alpha to a large number, say 1. In this case we give the weights a very harsh penalty, so they'll end up smaller than they should be and the accuracy should be worse!
End of explanation
"""
alphas = [0.00025, 0.00005, 0.0001, 0.0002, 0.0004]
best_alpha = alphas[0]
least_mse = float("inf") #initialized to infinity
for possible_alpha in alphas:
model = SGDRegressor(loss='squared_loss', alpha=possible_alpha)
model.fit(X_training, y_training)
mse = np.mean((y_validation - model.predict(X_validation)) ** 2)
if mse <= least_mse:
least_mse = mse
best_alpha = possible_alpha
print "The Best alpha is: %.4f" % (best_alpha)
best_model = SGDRegressor(loss='squared_loss', alpha=best_alpha)
best_model.fit(X_training, y_training)
MSE = np.mean((y_test - best_model.predict(X_test)) ** 2) # evaluating the best model on test data
print "The Test Data MSE is: %0.3f" % (MSE)
"""
Explanation: The effect that the value of the regularization parameter has on the model's accuracy makes it a very good candidate for tuning. We can use the validation set we created for that purpose: we create a list of possible values for the regularization parameter, train the model with each of these values, and evaluate each model on the validation set. The value with the best evaluation (lowest MSE) is the best value for the regularization parameter.
End of explanation
"""
model = SGDRegressor(loss='squared_loss', eta0=0.02)
model.fit(X_training, y_training)
w0 = model.intercept_
w1 = model.coef_[0] # Notice that model.coef_ is a list now not a single number
w2 = model.coef_[1]
print "Trained model: y = %0.2f + %0.2fx₁ + %0.2fx₂" % (w0, w1, w2)
R2 = model.score(X_test, y_test)
print "The Model's R² on Test Data is %0.2f" % (R2)
"""
Explanation: There's a better way to tune the regularization parameter, and possibly multiple parameters at the same time: scikit-learn's GridSearchCV. We'll not be working with it in detail here (a brief sketch appears at the end of this notebook), but you're encouraged to read the documentation and user guides and try for yourself how it could be done. Once you get the hang of it, you can try tuning the learning rate and the regularization parameter at the same time!
The R-squared Metric
The last thing we have here is to see how we can evaluate our model using the $R^2$ metric. We learned in the videos that the $R^2$ metric measures how close the data points are to our regression line (or plane). We also learned that there's an adjusted version of that metric, denoted by $\overline{R^2}$, which penalizes the extra features we add to the model that don't help it be more accurate. These metrics can be calculated using the following formulas:
$$R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - f_i)^2}{\sum_{i=1}^{n}(y_i - \overline{y})^2}$$
where $f_i$ is our model's prediction and $\overline{y}$ is the mean of all $n$ values $y_i$. And for the adjusted version:
$$\overline{R^2} = R^2 - \frac{k - 1}{n - k}(1 - R^2)$$
where $k$ is the number of features and $n$ is the number of data samples. Both $R^2$ and $\overline{R^2}$ take a value less than or equal to 1. The closer the value is to one, the better our model is.
Fortunately, we don't have to compute $R^2$ by hand with scikit-learn: the model's score method takes the test Xs and ys and returns $R^2$. The adjusted $\overline{R^2}$ can then be computed from it using the formula above.
End of explanation
"""
|
buruzaemon/svd | 01_SVD_visualizing_data.ipynb | bsd-3-clause | iris = sklearn.datasets.load_iris()
df_iris = pd.DataFrame(iris.data, columns=iris.feature_names)
print('Iris dataset has {} rows and {} columns\n'.format(*df_iris.shape))
print('Here are the first 5 rows of the data:\n\n{}\n'.format(df_iris.head(5)))
print('Some simple statistics on the Iris dataset:\n\n{}\n'.format(df_iris.describe()))
"""
Explanation: Singular Value Decomposition for Data Visualization
Displaying high-dimensional data using reduced-rank matrices
A data visualization goes a long way in helping you understand the underlying dataset. If the data is highly dimensional, you can use Singular Value Decomposition (SVD) to find a reduced-rank approximation of the data that can be visualized easily.
Example 1: the Iris dataset
We start off with the Iris flower dataset. The data is multivariate, with 150 measurements of 4 features (length and width cm of both sepal and petal) on 3 distinct Iris species. Of the 150 measurements, there are 50 measurements each for Iris setosa, Iris versicolor, and Iris virginica.
Scikit Learn's datasets includes the Iris dataset, so let's load that up and start exploring.
End of explanation
"""
U_iris, S_iris, Vt_iris = np.linalg.svd(df_iris)
"""
Explanation: As we are exploring the dataset, it would be nice to view the data in order to get an idea of how the 3 species might be distributed with respect to one another in terms of their features. Perhaps we are interested in finding clusters, or maybe we would like to find a way to make class predictions?
However, since the data has 4 dimensions, we would be hard-pressed to come up with a good way to graph the data in 4D that we could easily understand.
But what if we could reduce or compress the data so that we could work in 3 dimensions or less?
Singular value decomposition lets us do just that.
Singular value decomposition
Singular value decomposition factorizes an $\mathbb{R}^{m \times n}$ matrix $X$ into
matrix $U \in \mathbb{R}^{m \times m}$, whose columns are the left-singular vectors of $X$, i.e. the set of orthonormal eigenvectors of $X \, X^{\intercal}$
diagonal matrix $\Sigma$ with entries $\sigma \in \mathbb{R}$ that are the non-negative singular values of $X$
matrix $V \in \mathbb{R}^{n \times n}$, whose columns are the right-singular vectors of $X$, i.e. the set of orthonormal eigenvectors of $X^{\intercal} \, X$
such that
\begin{align}
X &= U \, \Sigma \, V^{\intercal}
\end{align}
We can use numpy.linalg.svd to factorize the Iris data matrix into three components $U$, $\Sigma$, and $V^{\intercal}$.
End of explanation
"""
print('matrix U has {} rows, {} columns\n'.format(*U_iris.shape))
print('{}'.format(pd.DataFrame(U_iris).head(5)))
"""
Explanation: $U$: left-singular vectors of $X$
The rows of $U$ correspond to the rows of the original data matrix $X$, while the columns are the set of ordered, orthonormal eigenvectors of $X \, X^{\intercal}$.
End of explanation
"""
print('matrix Vt has {} rows, {} columns\n'.format(*Vt_iris.shape))
print('{}'.format(pd.DataFrame(Vt_iris).head()))
"""
Explanation: $V$: right-singular vectors of $X$
numpy.linalg.svd actually returns $V^{\intercal}$ instead of $V$, so it is the columns of $V^{\intercal}$ that correspond to the columns of the original data matrix $X$. Hence, the rows are the set of ordered, orthonormal eigenvectors of $X^{\intercal} \, X$.
End of explanation
"""
num_sv_iris = np.arange(1, S_iris.size+1)
cum_var_explained_iris = [np.sum(np.square(S_iris[0:n])) / np.sum(np.square(S_iris)) for n in num_sv_iris]
"""
Explanation: $\Sigma$: singular values of $X$
The elements $\sigma_{i}$ of diagonal matrix $\Sigma$ are the non-zero singular values of matrix $X$, which are really just the square roots of the non-zero eigenvalues of $X^{\intercal} \, X$ (and also for $X \, X^{\intercal}$). These singular values can be used to determine the amount of variance $X^{\prime}$ explains of the original data matrix $X$ when reducing the dimensions to find a lower rank approximation.
\begin{align}
X^{\prime}_{k} &= U_{k} \, \Sigma_{k} \, V^{\intercal}_{k} \
&\approx X_{r} & \text{ where } rank(X^{\prime}) = k \lt rank(X) = r
\end{align}
The amount of variance that the reduced rank approximation $X^{\prime}_{k}$ explains of $X_{r}$ is
\begin{align}
\text{cum. variance explained} &= \frac{\sum_{j=1}^{k} \sigma_{j}^{2}}{\sum_{i=1}^{r} \sigma_{i}^{2}}
\end{align}
NOTE: numpy.linalg.svd actually returns a $\Sigma$ that is not a diagonal matrix, but a list of the entries on the diagonal.
End of explanation
"""
fig = plt.figure(figsize=(7.0,5.5))
ax = fig.add_subplot(111)
plt.plot(num_sv_iris,
cum_var_explained_iris,
color='#2171b5',
label='variance explained',
alpha=0.65,
zorder=1000)
plt.scatter(num_sv_iris,
sklearn.preprocessing.normalize(S_iris.reshape((1,-1))),
color='#fc4e2a',
label='singular values (normalized)',
alpha=0.65,
zorder=1000)
plt.legend(loc='center right', scatterpoints=1, fontsize=8)
ax.set_xticks(num_sv_iris)
ax.set_xlim(0.8, 4.1)
ax.set_ylim(0.0, 1.1)
ax.set_xlabel(r'Number of singular values used')
ax.set_ylabel('Variance in data explained')
ax.set_title('Iris dataset: cumulative variance explained & singular values',
fontsize=14,
y=1.03)
ax.set_facecolor('0.98')
plt.grid(alpha=0.8, zorder=1)
plt.tight_layout()
"""
Explanation: Let's have a look at the cumulative variance explained visually as a function of the number of singular values used when reducing rank to find a lower-ranked matrix $X^{\prime}$ to approximate $X$. This will inform us as to how many dimensions we should use.
End of explanation
"""
idx_setosa = np.where(iris.target==0)[0]
idx_versicolor = np.where(iris.target==1)[0]
idx_virginica = np.where(iris.target==2)[0]
setosa_x = U_iris[idx_setosa, 0]
setosa_y = U_iris[idx_setosa, 1]
versicolor_x = U_iris[idx_versicolor, 0]
versicolor_y = U_iris[idx_versicolor, 1]
virginica_x = U_iris[idx_virginica, 0]
virginica_y = U_iris[idx_virginica, 1]
"""
Explanation: Dimension reduction
Judging from the curve representing cumulative variance explained in the figure above, we can see that
with 1 singular value, about 96.5% of the variance of $X$ can be explained
with 2 singular values, that number goes up to approximately 99.8%
Since graphing the Iris dataset in 1D wouldn't be all that interesting (just dots on a line segment), let's try using the first 2 singular values to represent the data on the $x$- and $y$-axes, respectively.
End of explanation
"""
fig = plt.figure(figsize=(7.0,5.5))
ax = fig.add_subplot(111)
plt.scatter(setosa_x,
setosa_y,
marker='o',
color='#66c2a5',
label='Iris-setosa',
zorder=1000)
plt.scatter(versicolor_x,
versicolor_y,
marker='D',
color='#fc8d62',
label='Iris-versicolor',
zorder=1000)
plt.scatter(virginica_x,
virginica_y,
marker='^',
color='#8da0cb',
label='Iris-virginica',
zorder=1000)
plt.legend(loc='upper left', scatterpoints=1, fontsize=8)
ax.set_xlabel(r'singular value $\sigma_{1}$')
ax.set_ylabel(r'singular value $\sigma_{2}$')
ax.set_title('2D plot of Iris dataset',
fontsize=14,
y=1.03)
ax.set_facecolor('0.98')
plt.grid(alpha=0.6, zorder=1)
plt.tight_layout()
"""
Explanation: We will use different marker shapes and colors to represent the three Iris species on our 2D graph.
End of explanation
"""
fin = os.path.join(os.getcwd(),
'data',
'country_language.csv')
df_countries = pd.read_csv(fin, dtype='category')
print("data has {} measurements for {} variables\n".format(*df_countries.shape))
print("Here are the first 10 rows...\n\n{}\n...".format(df_countries.head(10)))
"""
Explanation: There!
Now that we are viewing the originally 4D data in 2 dimensions, using the first 2 columns of the left-singular-vector matrix $U$, we can see that there should be a very clear separation between the Iris setosa class and the others. On the other hand, the demarcation between Iris versicolor and Iris virginica might not be as clear cut.
Nevertheless, since this 2D reduced-rank matrix representation $X^{\prime}$ explains nearly 99.8% of the variance in the original dataset, we can be pretty certain that clustering and classification should be possible.
Example 2: Countries and Primary Languages
Or, What to do when your data is categorical?
In the previous example, we were exploring the Iris dataset which is a matrix $X \in \mathbb{R}^{150 \times 4}$. Singular value decomposition helped us to find a reduced-rank matrix $X^{\prime} \in \mathbb{R}^{150 \times 2}$ that accurately approximated the original data matrix and let us visualize the 4-dimensional data using 2 dimensions.
Let's now consider another dataset where the values are not in $\mathbb{R}$, but are categorical.
For this example, we explore a fictional survey of 1000 respondents from each of five countries (Canada, USA, England, Italy and Switzerland), asking them what their primary language is (among English, French, Spanish, German and Italian). So in our data we have categories for both country and language.
We read in the data from file using pandas.dataframe.read_csv.
End of explanation
"""
countries = ['Canada', 'USA', 'England', 'Italy', 'Switzerland']
languages = ['English', 'French', 'Spanish', 'German', 'Italian']
F = pd.crosstab(df_countries.country, df_countries.language, margins=False)
F.index = countries
F.columns = languages
print("{}".format(F))
"""
Explanation: Correspondence Analysis
Contingency table $F$
Our next step in exploring the data is to break out the data in terms of the 2 categories.
Here we convert the raw observations into a contingency table $F$, with the countries as rows and the languages as columns. pandas.crosstab will do just that.
End of explanation
"""
P = F / F.sum().sum()
print('correspondence matrix P:\n\n{}'.format(P))
"""
Explanation: Now say that we are interested in seeing the relation between the countries and the languages.
However, we cannot blindly apply singular value decomposition to contingency table $F$ above.
Since we are working with 2 distinct categories, we can apply correspondence analysis to transform the contingency table into a form where we can use singular value decomposition. At that point, we should be able to find a reduced-rank matrix that approximates the original data, and that in turn would let us graphically represent the the relations beween countries and languages.
The idea is to use the $\chi^{2}$ distance between rows and columns as our basis for singular value decomposition, as the $\chi^{2}$ distribution lets us calculate the independence of qualitative variables.
Correspondence matrix $P$
We start by first calculating the correspondence matrix $P$ where
\begin{align}
P &= \left[ \frac{f_{ij}}{\sum_{i=1}^{I} \sum_{j=1}^{J} f_{ij}} \right] \text{ for } f_{ij} \in F
\end{align}
End of explanation
"""
row_centroid = P.sum(axis=1)
print('row centroid (marginal frequency distribution over countries):\n\n{}'.format(row_centroid))
"""
Explanation: Row centroid $p_{i+}$
Using correspondence matrix $P$, we next obtain the row centroid $p_{i+}$. The row centroid can be interpreted as the marginal frequency distribution over the sum of the countries (rows), and reflects the fact that there were equally 1000 respondents per country in our fictional study.
The row centroid $p_{i+}$ is derived from correspondence matrix $P$ with
\begin{align}
p_{i+} &= \sum_{j=1}^{J} p_{ij}
\end{align}
End of explanation
"""
col_centroid = P.sum(axis=0)
print('column centroid (marginal frequency distribution over languages):\n\n{}'.format(col_centroid))
"""
Explanation: Column centroid $p_{+j}$
Similarly, we obtain the column centroid $p_{+j}$ from $P$. The column centroid can be interpreted as the marginal frequency distribution over the sum of the languages (columns).
The column centroid $p_{+j}$ is derived from correspondence matrix $P$ with
\begin{align}
p_{+j} &= \sum_{i=1}^{I} p_{ij}
\end{align}
End of explanation
"""
Mu_ij = row_centroid.values.reshape((P.index.size,1)) * col_centroid.values.reshape((1,P.columns.size))
Lambda = (P - Mu_ij) / np.sqrt(Mu_ij)
print('inertia Lambda:\n\n{}'.format(Lambda))
"""
Explanation: $\chi^{2}$ distances between countries and languages
So rather than using the contingency table $F$ as the basis for singular value decomposition, we can look at the $\chi^{2}$ distances between rows and columns for visualizing the relation between countries and languages.
The $\chi^{2}$ statistic is given by
\begin{align}
\chi^{2} &= \sum_{i=1}^{I} \sum_{j=1}^{J} \frac{(O_{ij} - E_{ij})^{2}}{E_{ij}} \
&= N \, \sum_{i=1}^{I} \sum_{j=1}^{J} \frac{(p_{ij} - \mu_{ij})^{2}}{\mu_{ij}} \
&= N \, \Lambda^{2} \
\
\Rightarrow \Lambda^{2} &= \frac{\chi^{2}}{N} \
\Lambda &= \left[ \frac{p_{ij} - \mu_{ij}}{\sqrt{\mu_{ij}}} \right] = \left[ \frac{p_{ij} - p_{i+}\,p_{+j}}{\sqrt{p_{i+}\,p_{+j}}} \right]
\end{align}
Inertia matrix $\Lambda$ measures the distribution of the individual profiles (rows/columns) around the average profile (centroids). Thus inertia represents the observed deviation from independence.
Through its relation with the statistic $\chi^{2}$, inertia $\Lambda$ (a matrix of standardized residuals) provides the basis for using singular value decomposition.
End of explanation
"""
U_lambda, S_lambda, Vt_lambda = np.linalg.svd(Lambda)
num_sv_lambda = np.arange(1, S_lambda.size+1)
cum_var_explained_lambda = [np.sum(np.square(S_lambda[0:n])) / np.sum(np.square(S_lambda)) for n in num_sv_lambda]
"""
Explanation: Factorizing inertia matrix $\Lambda$ with singular value decomposition
Now that we have transformed contingency table $F$ into inertia matrix $\Lambda$ where element $\lambda_{ij} \in \mathbb{R}$, we can use singular value decomposition to factorize $\Lambda$ instead of the original raw data matrix.
End of explanation
"""
fig = plt.figure(figsize=(7.0, 5.5))
ax = fig.add_subplot(111)
plt.plot(num_sv_lambda,
cum_var_explained_lambda,
color='#2171b5',
label='variance explained',
alpha=0.65,
zorder=1000)
plt.scatter(num_sv_lambda,
sklearn.preprocessing.normalize(S_lambda.reshape((1,-1))),
color='#fc4e2a',
label='singular values (normalized)',
alpha=0.65,
zorder=1000)
plt.legend(loc='lower left', scatterpoints=1, fontsize=8)
ax.set_xticks(num_sv_lambda)
ax.set_xlim(0.9, 5.1)
ax.set_ylim(-0.1, 1.1)
ax.set_xlabel(r'Number of singular values used')
ax.set_ylabel('Variance in data explained')
ax.set_title('Countries/languages dataset: cumulative var. explained & singular values',
fontsize=14,
y=1.03)
ax.set_facecolor('0.98')
plt.grid(alpha=0.8, zorder=1)
plt.tight_layout()
"""
Explanation: Once again, we look at the cumulative variance explained visually as a function of the number of singular values used when reducing rank to find a lower-ranked inertia matrix $\Lambda^{\prime}$ to approximate $\Lambda$.
End of explanation
"""
country_x = U_lambda[:, 0]
country_y = U_lambda[:, 1]
country_z = U_lambda[:, 2]
lang_x = Vt_lambda[0, :]
lang_y = Vt_lambda[1, :]
lang_z = Vt_lambda[2, :]
"""
Explanation: Dimension reduction
Judging from the curve representing cumulative variance explained with respect to the number of singular values used, we see that
with 1 singular value, about 50.6% of the variance of inertia matrix $\Lambda$ can be explained
with 2 singular values, that number goes up to 91.6%
with 3 singular values, we have 98.7%
To mix things up a bit, let's try visualizing the countries and languages in 3D.
For countries, we will take the first 3 columns of $U$ as the $x$-, $y$- and $z$-coordinates.
But since numpy.linalg.svd returns $V^{\intercal}$ instead of $V$, we will take the first 3 rows of $V^{\intercal}$ for the $x$-, $y$- and $z$-coordinates for the languages.
End of explanation
"""
import pylab
from mpl_toolkits.mplot3d import Axes3D, proj3d
fig = pylab.figure(figsize=(7.5,5.5))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(country_x, country_y, country_z, marker='s', s=50, c='#2171b5')
cntry_labels = []
for i,(x,y,z) in enumerate(zip(country_x, country_y, country_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
label = pylab.annotate(Lambda.index[i],
xy=(x2,y2),
xytext=(-2,2),
textcoords='offset points',
ha='right',
va='bottom',
color='#2171b5',
alpha=0.9)
cntry_labels.append(label)
ax.scatter(lang_x, lang_y, lang_z, marker='o', s=50, c='#fc4e2a')
lang_labels = []
for i,(x,y,z) in enumerate(zip(lang_x, lang_y, lang_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
label = pylab.annotate(Lambda.columns[i],
xy=(x2,y2),
xytext=(-2,2),
textcoords='offset points',
ha='right',
va='bottom',
color='#fc4e2a',
alpha=0.4)
lang_labels.append(label)
def update_position(e):
for i,(x,y,z) in enumerate(zip(country_x, country_y, country_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
cntry_labels[i].xy = x2, y2
for i,(x,y,z) in enumerate(zip(lang_x, lang_y, lang_z)):
x2, y2, _ = proj3d.proj_transform(x,y,z, ax.get_proj())
lang_labels[i].xy = x2, y2
fig.canvas.draw()
fig.canvas.mpl_connect('button_release_event', update_position)
ax.set_xlabel(r'singular value $\sigma_{1}$')
ax.set_xticks([-0.5, 0.0, 0.5])
ax.set_ylabel(r'singular value $\sigma_{2}$')
ax.set_yticks([-0.5, 0.0, 0.4])
ax.set_zlabel(r'singular value $\sigma_{3}$')
ax.set_zticks([-0.5, 0.0, 0.5])
ax.set_title('3D plot of Countries/Languages dataset',
fontsize=16,
y=1.1)
plt.tight_layout()
pylab.show()
"""
Explanation: Visualizing the relation between countries and languages
That was a bit of work moving from raw data to contingency table, to correspondence matrix, to the inertia matrix, and then finally to singular value decomposition, but we are now ready to see how the categories of country and language relate to one another in 3 dimensions.
End of explanation
"""
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(iris.data)
# don't forget to mean-center the data before SVD
X = iris.data - np.mean(iris.data, axis=0)
U, S, Vt = np.linalg.svd(X)
"""
Explanation: And there we are!
You can see how the anglophone countries Canada, USA and England are in close proximity to English and to each other, with Spanish being close to the USA while French is closer to Canada. German is close to Switzerland, with French somewhat nearby. Italian, however, sits out close to Italy, with both located largely in isolation from the other countries and languages.
This should match up with your intuition from contingency table $F$.
Helpful Resources
Making sense of principal component analysis, eigenvectors & eigenvalues
Correspondence analysis is a useful tool to uncover the relationships among categorical variables
Appendix A
PCA and SVD
Principal components analysis (PCA) and singular value decomposition are closely related, and you may often hear both these terms used in the same breath.
Here is a quick mathematical treatment explaining how PCA and SVD are related.
Consider data matrix $X \in \mathbb{R}^{m \times n}$ where $m > n$, and all $x_{ij}$ are centered about the column means. With principal components analysis, we have
\begin{align}
\text{covariance matrix } C &= \frac{1}{m} \, X^{\intercal} \, X & \text{from PCA} \
&= \frac{1}{m} \, V \, \Gamma \, V^{\intercal} & \text{by eigendecomposition of } X^{\intercal} \, X \
\
\text{ but } X &= U \, \Sigma V^{\intercal} & \text{from SVD} \
\
\Rightarrow C &= \frac{1}{m} \, V \, \Sigma \, U^{\intercal} \, U \, \Sigma V^{\intercal} \
&= \frac{1}{m} \, V \, \Sigma^{2} \, V^{\intercal} \
\end{align}
So we see that:
the singular values in $\Sigma$ obtained via SVD are really just the square roots of the eigenvalues in $\Gamma$ of $X^{\intercal} \, X$ (that is, of $m$ times the covariance matrix used in PCA).
if you mean-center your raw data matrix $X$ and then calculate SVD, you are doing the same thing as PCA.
the above example shows covariance of $X$ with respect to its columns ($X^{\intercal} \, X$); it also applies for covariance of $X$ with respect to rows ($X \, X^{\intercal}$).
Iris dataset: PCA & SVD
End of explanation
"""
Cov_pca = pca.get_covariance()
print('eigenvalues from PCA:\n{}\n'.format(np.linalg.eigvals(Cov_pca * X.shape[0])))
print('squared singular values from SVD:\n{}'.format(np.square(S)))
"""
Explanation: Compare the eigenvalues of $\Gamma$ derived from PCA with the singular values from $\Sigma$ derived with SVD: $\Gamma = \Sigma^{2}$?
End of explanation
"""
print('covariance matrix C derived from PCA:\n{}\n'.format(Cov_pca))
Cov_svd = (1. / X.shape[0]) * Vt.T.dot(np.diag(np.square(S))).dot(Vt)
print('covariance matrix using S and Vt from SVD:\n{}\n'.format(Cov_svd))
allclose = np.allclose(Cov_pca, Cov_svd, atol=1e-1)
print('Are these matrices equivalent (element-wise closeness comparison)?\n{}'.format(allclose))
"""
Explanation: Can we obtain the covariance matrix $C$ derived with PCA, but using $\frac{1}{m} \, V \, \Sigma^{2} \, V^{\intercal}$ from SVD?
End of explanation
"""
|
feroda/lessons-python4beginners | .ipynb_checkpoints/P4B - Capitolo 1-Copy1-checkpoint.ipynb | agpl-3.0 | # This is hello_who.py
def hello(who):
print("Hello {}!".format(who))
if __name__ == "__main__":
hello("mamma")
"""
Explanation: Python2 for beginners (P4B)
<p style="text-align: center;">Luca Ferroni <[email protected]></p>
<p style="text-align: center;">http://www.befair.it<br />**Software Libero per i territori**</p>
CHAPTER 1: basics
Installation on Windows
Python via Anaconda (see the official Python site), which gives you:
the python interpreter and the pip package installer
the interactive IPython shell
A text editor; recommended choices:
Notepad++
Eclipse
PyCharm
Atom
Visual Code
Python's mantras
A practical language
Readability counts
One obvious way to do each thing
Imperative, object-oriented and functional
Improvements are introduced through PEPs
Always keep the Python library under your pillow!
WARNING
Python does not refer to the snake
But to the Monty Python Flying Circus
And now... "The hello tour!"
My first Python code
End of explanation
"""
# This is hello_who.py # <-- i commenti iniziano con `#`
# possono essere all'inizio o a fine riga
def hello(who): # <-- la funzione di definisce con `def`
print("Hello {}!".format(who)) # <-- la stampa con `print` e le stringhe con `format`
if __name__ == "__main__": # <-- [verifica di esecuzione e non di import](https://docs.python.org/2/library/__main__.html)
hello("mamma") # <-- invoco la funzione con il valore
"""
Explanation: IPython as a development companion
$ ipython
Tip: discover the Zen of Python, PEP 20
End of explanation
"""
# creazione
l = [1,2,3,10,"a", -12.333, 1024, 768, "pippo"]
# concatenazione
l += ["la", "concatenazione", "della", "lista"]
# aggiunta elementi in fondo
l.append(32)
l.append(3)
print(u"la lista è {}".format(l))
l.remove(3) # rimuove la prima occorrenza
print(u"la lista è {}".format(l))
i = l.index(10) # restituisce l'indice della prima occorrenza del valore 10
print(u"l'indice di 10 è {}".format(i))
print(u"il valore all'indice 3 è {}".format(l[3]))
print(u"** vediamo come funziona lo SLICING delle liste")
print(u"Ecco i primi 3 valori della lista {}".format(l[:3]))
print(u"e poi i valori dal 3o al penultimo {}".format(l[3:-1]))
print(u"e poi i valori dal 3o al penultimo, ma ogni 2 {}".format(l[3:-1:2]))
print("\n***FUNZIONI RANGE e XRANGE***\n")
l2 = range(1000) # questi sono i primi 1000 valori da 0 a 999
print(u"ecco la lista ogni 50 elementi di n <=1000: {}".format(l2[::50]))
# LA FUNZIONE xrange è comoda per ottenere un oggetto tipo (ma non = a ) un generatore
# da cui i numeri vengono appunto generati al momento dell'accesso all'elemento stesso
# della sequenza
# Il codice di prima dà errore
try:
l2 = xrange(1000) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM
print(u"ecco la lista ogni 50 elementi di n <= 1000: {}".format(l2[::50]))
except Exception as e:
print("ECCEZIONE {}: {}".format(type(e), e))
# Il codice che funziona con lo slice valuta xrange in una lista quindi
# risulta inutile
l2 = list(xrange(1000)) # questi sono i primi 1000 valori da 0 a 999 ma senza occupare RAM
print(u"ecco la lista ogni 50 elementi di n <= 1000: {}\n".format(l2[::50]))
## ma si può fare direttamente con range o xrange!
print(u"[OK] lista ogni 50 elementi <= 1000: {}".format(range(0,1000,50)))
"""
Explanation: Readability counts
The style guidelines are in PEP 8
every company may have its own variants
The classic ones are:
Indent with 4 spaces, not <TAB>: configure your editor!
Line length <= 79 characters
Spaces around operators
The docstring conventions are described in PEP 257
in particular:
If single-line, write everything on one line
If multi-line, the closing delimiter (""") must be on a separate line.
My conventions:
CamelCase for classes, lowercase with _ for functions and variables
code is written in English, in particular variable names
Data types
numbers
strings
tuples: (1, 2, "a", "prova")
sets: { 2, 4, 6, "a", 123 }
lists: [1, 2, 3, 10, "a", -12.333]
dictionaries: { "nome": "Luca", "cognome": "Ferroni" } -> a key:value table
LISTS: Operations and methods
End of explanation
"""
print("***PER FARE UN CICLO FOR CON INDICE INCREMENTALE SI USA XRANGE!")
for el in xrange(1,21):
print("numero {}".format(el))
print("***PER NUMERARE GLI ELEMENTI DI UNA LISTA SI USA ENUMERATE!")
for i, el in enumerate(l, start=10): # numero partendo da 10, se start non specificato parto da 0
print("Il contenuto {} si trova all'indice {}".format(el, i))
"""
Explanation: Iterating over lists and index-based for loops
End of explanation
"""
# definizione
d = {"nome": "Luca", "cognome": "Ferroni", "classe": 1980}
# aggiornamento
d.update({
"professioni" : ["docente", "lavoratore autonomo"]
})
# recupero valore per chiave certa (__getitem__)
print(u"Il nome del personaggio è {}".format(d["nome"]))
# sfrutto il mini-formato di template per le stringhe
# https://docs.python.org/2.7/library/string.html#formatspec
print(u"Il personaggio è {nome} {cognome} nato nel {classe}".format(**d))
# Recupero di un valore per una chiave opzionale
print(u"'nome' è una chiave che esiste con valore = {}, 'codiceiban' invece non esiste = {}".format(
d.get('nome'), d.get('codiceiban')))
print(u"Se avessi usato la __getitem__ avrei avuto un KeyError")
# rimozione di una chiave dal dizionario
print(u"Rimuovo il nome dal dizionario con d.pop('nome')")
d.pop('nome')
print(u"'nome' ora non esiste con valore = {}, come 'codiceiban' = {}".format(
d.get('nome'), d.get('codiceiban')))
print(u"Allora, se non trovi la chiave 'nome' allora dimmi 'Pippo'. Cosa dici?")
print(d.get('nome', 'Pippo'))
"""
Explanation: DICTIONARIES: Operations and methods
End of explanation
"""
print("\n***PER ITERARE SU TUTTI GLI ELEMENTI DI UN DIZIONARIO SI USA .iteritems()***\n")
for key, value in d.iteritems():
print("Alla chiave {} corrisponde il valore {}".format(key,value))
print("\n***DIZIONARI E ORDINAMENTO***\n")
data_input = [('a', 1), ('b', 2), ('l', 10), ('c', 3)]
d1 = dict(data_input)
import collections
d2_ord = collections.OrderedDict(data_input)
print("input = {}".format(data_input))
print("dizionario non ordinato = {}".format(d1))
print("dizionario ordinato = {}".format(d2_ord))
print("lista di coppie da diz NON ordinato = {}".format(d1.items()))
print("lista di coppie da diz ordinato = {}".format(d2_ord.items()))
"""
Explanation: Iterating over dictionaries
WARNING: The contents of a dictionary are not ordered! There is no guarantee about ordering. To get that guarantee you need to use the collections.OrderedDict class
End of explanation
"""
def foo(bar):
bar.append(42)
print(bar)
# >> [42]
answer_list = []
foo(answer_list)
print(answer_list)
# >> [42]
def foo(bar):
bar = 'new value'
print (bar)
# >> 'new value'
answer_list = 'old value'
foo(answer_list)
print(answer_list)
# >> 'old value'
"""
Explanation: Features of the Python data model
"Mutable" and "immutable" data types
Python Data Model
Every object has:
* an identity -> it never changes and can be thought of as its address in memory
* a type -> it never changes and represents the operations the object supports
* a value -> it can change if the type is mutable, but not if it is immutable
Immutable data types include:
integers
strings
tuples
frozensets
Mutable data types include:
lists
dictionaries
sets
Strong and dynamic typing
From http://stackoverflow.com/questions/11328920/is-python-strongly-typed/11328980#11328980 (see also the comments)
Python is strongly, dynamically typed.
Strong typing means that the type of a value doesn't suddenly change. A string containing only digits doesn't magically become a number, as may happen in Perl. Every change of type requires an explicit conversion.
Dynamic typing means that runtime objects (values) have a type, as opposed to static typing where variables have a type.
As for example
bob = 1
bob = "bob"
This works because the variable does not have a type; it can name any object. After bob=1, you'll find that type(bob) returns int, but after bob="bob", it returns str. (Note that type is a regular function, so it evaluates its argument, then returns the type of the value.)
Parameter passing: by value or by reference?
Neither! See https://jeffknupp.com/blog/2012/11/13/is-python-callbyvalue-or-callbyreference-neither/
Call by object, or call by object reference.
Basic concept: in Python a variable is just a name for an object (= the triple id, type, value)
In essence, the behavior depends on whether the objects named by the variables are mutable or immutable.
Examples follow:
End of explanation
"""
# -*- coding: utf-8 -*-
# This is hello_who_3.py
import sys # <-- importo un modulo
def compose_hello(who, force=False): # <-- valore di default
"""
Get the hello message.
"""
try: # <-- gestione eccezioni `Duck Typing`
message = "Hello " + who + "!"
except TypeError: # <-- eccezione specifica
# except TypeError as e: # <-- eccezione specifica su parametro e
print("[WARNING] Il parametro `who` dovrebbe essere una stringa")
if force: # <-- controllo "if"
message = "Hello {}!".format(who)
else:
raise # <-- solleva eccezione originale
except Exception:
print("Verificatasi eccezione non prevista")
else:
print("nessuna eccezione")
finally:
print("Bye")
return message
def hello(who='world'): # <-- valore di default
print(compose_hello(who))
if __name__ == "__main__":
hello("mamma")
hello("pippo")
hello(1)
ret = compose_hello(1, force=True)
print("Ha composto {}".format(ret))
try:
hello(1)
except TypeError as e:
print("{}: {}".format(type(e).__name__, e))
print("Riprova")
"""
Explanation: Functions with positional, keyword, and arbitrary parameters
A function is defined with def <functionname>([parameters]), where the parameters can be:
positional. E.g.: def hello(who)
keyword. E.g.: def hello(who='') or who=None or who='default'
both, but the keyword ones must come after the positional ones. E.g.: def hello(who, say="How are you?")
arbitrary, either positional with the * symbol or keyword with **. By convention the names args and kw or kwargs are used. E.g.: def hello(who, say="How are you?", *args, **kw)
The * and ** symbols denote, respectively, the expansion of a list into a sequence of elements, and of a dictionary into a sequence of <key>=<value> parameters
Variable scope
http://www.saltycrane.com/blog/2008/01/python-variable-scope-notes/
and remember that:
for i in [1,2,3]:
print(i)
print("I am outside the loop and can still see that i={}".format(i))
Namespaces
Namespaces in Python are containers of names and can be implicit or explicit. The __builtin__ and __main__ namespaces are implicit. Classes, objects, functions and, in particular, modules are explicit namespaces.
I can import a module, which then constitutes a namespace, with import <modulename>, and access all the top-level symbols defined in the module as <modulename>.<symbol>.
Importing single symbols from a module into another namespace can be done with from <modulename> import <symbol>. What you should not do is import all the symbols of a module into another one in the form from <modulename> import *. Don't do it, unless it is strictly necessary.
The exception hierarchy and exception handling
The hierarchy of builtin exceptions, i.e. those already included in the Python language, is at the link: https://docs.python.org/2/library/exceptions.html#exception-hierarchy
By deriving from them you can easily define your own.
Exception handling is done in blocks:
try:
...
except [exception] [as variable]:
...
else:
...
finally:
...
Practice Duck Typing!
« If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. »
Below is an example of composing the greeting with exception handling:
End of explanation
"""
fib(0) --> 0
fib(1) --> 1
fib(2) --> 1
fib(3) --> 2
fib(n) --> fib(n-1) + fib(n-2)
import pytest
from myprogram import fib
def test_fib_ok_small():
assert fib(0) == 0
assert fib(1) == 1
assert fib(2) == 1
assert fib(3) == 2
def test_fib_raise_if_string():
with pytest.raises(TypeError):
fib("a")
def test_fib_raises_lt_zero():
with pytest.raises(ValueError):
fib(-1)
"""
Explanation: The Fibonacci function
End of explanation
"""
|
google/starthinker | colabs/drive_copy.ipynb | apache-2.0 | !pip install git+https://github.com/google/starthinker
"""
Explanation: Drive Copy
Copy a drive document.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'source':'', # Name or URL of document to copy from.
'destination':'', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter Drive Copy Recipe Parameters
Specify a source URL or document name.
Specify a destination name.
If destination does not exist, source will be copied.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'drive':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'user','description':'Credentials used for reading data.'}},
'copy':{
'source':{'field':{'name':'source','kind':'string','order':1,'default':'','description':'Name or URL of document to copy from.'}},
'destination':{'field':{'name':'destination','kind':'string','order':2,'default':'','description':'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute Drive Copy
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
ISosnovik/UVA_AML17 | week_2/2.Experiments.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import Image
from IPython.core.display import HTML
"""
Explanation: Assignment 1
Experiments
Seems like you've already implemented all the building blocks of the neural networks. Now we will conduct several experiments.
Note: These experiments will not be evaluated.
Table of contents
0. Circles Classification Task
1. Digits Classification Task
0. Circles Classification Task
End of explanation
"""
%%capture
%run 1.Blocks.ipynb
"""
Explanation: We will import the functions from the "Blocks" IPython notebook and use them to train a network for a classification task.
End of explanation
"""
# Generate some data
N = 100
phi = np.linspace(0.0, np.pi * 2, 100)
X1 = 1.1 * np.array([np.sin(phi), np.cos(phi)])
X2 = 3.0 * np.array([np.sin(phi), np.cos(phi)])
Y = np.concatenate([np.ones(N), -1.0 * np.ones(N)]).reshape((-1, 1))
X = np.hstack([X1,X2]).T
plt.scatter(X[:,0], X[:,1], c=Y.ravel(), edgecolors= 'none')
"""
Explanation: In this first task we will classify two circles by training a neural network.
The main purpose of this task is to understand the importance of network design and parameter tuning (number of layers, number of hidden units, etc.). First we will generate and visualize the data in the following cell.
End of explanation
"""
##Training the network ##
model = SequentialNN()
# model.add(....)
# model.add(....)
###YOUR CODE FOR DESIGNING THE NETWORK ###
loss = Hinge()
weight_decay = 0.01
sgd = SGD(model, lr=0.01, weight_decay=weight_decay)
for i in range(100):
# get the predictions
y_pred = model.forward(X)
# compute the loss value + L_2 term
loss_value = loss.forward(y_pred, Y) + l2_regularizer(weight_decay, model.get_params())
# log the current loss value
print('Step: {}, \tLoss = {:.2f}'.format(i+1, loss_value))
# get the gradient of the loss functions
loss_grad = loss.backward(y_pred, Y)
# backprop the gradients
model.backward(X, loss_grad)
# perform the updates
sgd.update_params()
##Testing the network ##
y_pred = model.forward(X) > 0
plt.scatter(X[:,0], X[:,1], c = y_pred.ravel(), edgecolors= 'none')
"""
Explanation: As you have already written the code blocks in the Blocks file, we will just call those functions and train the network to classify the two circles.
For this task we have provided the code for training and testing the network in the following blocks. Students are asked to design the network with different configurations and observe the outputs.
* Single Layer Neural Network
* Multiple Layer Neural Network
* Different number of Hidden units
* With and without activation function
End of explanation
"""
import sklearn.datasets
# We load the dataset
digits = sklearn.datasets.load_digits()
# Here we load up the images and labels and print some examples
images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:10]):
plt.subplot(2, 5, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Training: {}'.format(label), y=1.1)
"""
Explanation: 1. Digits Classification Task
In this task you will implement a neural network for the classification of handwritten digits. You can use the blocks of code which you implemented in the first part of this assignment to complete this task.
We will use the digits dataset for this task. This dataset consists of 1797 8x8 images. Further information about the dataset can be found here.
End of explanation
"""
n_objects = digits.images.shape[0]
train_test_split = 0.7
train_size = int(n_objects * train_test_split)
indices = np.arange(n_objects)
np.random.shuffle(indices)
train_indices, test_indices = indices[:train_size], indices[train_size:]
train_images, train_targets = digits.images[train_indices], digits.target[train_indices]
test_images, test_targets = digits.images[test_indices], digits.target[test_indices]
"""
Explanation: Next we will divide the images and labels data into two parts i.e. training data and test data.
End of explanation
"""
train_images = train_images.reshape((-1, 64))
test_images = test_images.reshape((-1, 64))
"""
Explanation: The images in the dataset are $8 \times 8$, and each pixel takes an integer value between 0 and 16. Before giving the images as input to the neural network, we will reshape each of them into a 1-by-64 one-dimensional vector, as shown in the figure below.
End of explanation
"""
### YOUR CODE FOR TRAINING THE NETWORK###
#Specify the input size for the network
#specify the output size for the network
#specify the inputs for the network
#specify the outputs for the network
#num_input=
#num_output=
#X=
#Y=
###
model = SequentialNN()
model.add(Dense(num_input,num_output))
loss = Hinge()
weight_decay = 0.01
sgd = SGD(model, lr=0.01, weight_decay=weight_decay)
for i in range(100):
# get the predictions
y_pred = model.forward(X)
# compute the loss value + L_2 term
loss_value = loss.forward(y_pred, Y) + l2_regularizer(weight_decay, model.get_params())
# log the current loss value
print('Step: {}, \tLoss = {:.2f}'.format(i+1, loss_value))
# get the gradient of the loss functions
loss_grad = loss.backward(y_pred, Y)
# backprop the gradients
model.backward(X, loss_grad)
# perform the updates
sgd.update_params()
"""
Explanation: The basic units of the neural network are perceptrons. A perceptron consists of a cell with at least two inputs. The cell multiplies the inputs by its weights and computes an output from them. The basic diagram of a cell is shown below.
For the image dataset which we will be using in this task, the perceptron will have 64 inputs (one per pixel of the $8 \times 8$ image) and 64 weights.
As the digits dataset consists of 10 classes (0 to 9), we will need 10 neurons to predict the target class. As can be seen from the image, each neuron gives an output, and the neuron with the highest output value determines the predicted class.
Now, in order to perform the classification task for images, you will use the functions which you implemented in the first task of the assignment.
In the following lines of code we will train a complete neural network by giving the images as input and the labels as targets. First we design the network by setting its parameters via the SequentialNN() constructor; then we forward propagate the inputs through the network and calculate the loss; the error between the predicted output and the target output is back-propagated through the network; finally, the parameters of the network are updated in the direction that reduces the prediction error.
End of explanation
"""
#Testing the network
###YOUR CODE FOR TESTING THE NETWORK ###
"""
Explanation: After training, the network should be tested on test data (images, in this task). At test time, unlabeled inputs are given to the network and, using the weights learned during training, the output classes for those inputs are predicted. The figure below shows the difference between training and testing the network.
In the following cell, implement the code for testing the network.
End of explanation
"""
|
NathanYee/ThinkBayes2 | code/chap03mine.ipynb | gpl-2.0 | from __future__ import print_function, division
% matplotlib inline
import thinkplot
from thinkbayes2 import Hist, Pmf, Suite, Cdf
"""
Explanation: Think Bayes: Chapter 3
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
class Dice(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
"""
Explanation: The Dice problem
Suppose I have a box of dice that contains a 4-sided die, a 6-sided
die, an 8-sided die, a 12-sided die, and a 20-sided die.
Suppose I select a die from the box at random, roll it, and get a 6.
What is the probability that I rolled each die?
The Dice class inherits Update and provides Likelihood
End of explanation
"""
suite = Dice([4, 6, 8, 12, 20])
suite.Update(6)
suite.Print()
"""
Explanation: Here's what the update looks like:
End of explanation
"""
for roll in [6, 8, 7, 7, 5, 4]:
suite.Update(roll)
suite.Print()
"""
Explanation: And here's what it looks like after more data:
End of explanation
"""
class Train(Suite):
def Likelihood(self, data, hypo):
if hypo < data:
return 0
else:
return 1/hypo
"""
Explanation: The train problem
The Train problem has the same likelihood as the Dice problem.
End of explanation
"""
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
"""
Explanation: But there are many more hypotheses
End of explanation
"""
thinkplot.Pdf(suite)
"""
Explanation: Here's what the posterior looks like
End of explanation
"""
def Mean(suite):
total = 0
for hypo, prob in suite.Items():
total += hypo * prob
return total
Mean(suite)
"""
Explanation: And here's how we can compute the posterior mean
End of explanation
"""
suite.Mean()
"""
Explanation: Or we can just use the method
End of explanation
"""
def MakePosterior(high, dataset, constructor=Train):
"""Solves the train problem.
high: int maximum number of trains
dataset: sequence of observed train numbers
constructor: function used to construct the Train object
returns: Train object representing the posterior suite
"""
hypos = range(1, high+1)
suite = constructor(hypos)
for data in dataset:
suite.Update(data)
return suite
"""
Explanation: Sensitivity to the prior
Here's a function that solves the train problem for different priors and data
End of explanation
"""
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset)
print(high, suite.Mean())
"""
Explanation: Let's run it with the same dataset and several uniform priors
End of explanation
"""
class Train2(Train):
def __init__(self, hypos, alpha=1.0):
Pmf.__init__(self)
for hypo in hypos:
self[hypo] = hypo**(-alpha)
self.Normalize()
"""
Explanation: The results are quite sensitive to the prior, even with several observations.
Power law prior
Now let's try it with a power law prior.
End of explanation
"""
high = 100
hypos = range(1, high+1)
suite1 = Train(hypos)
suite2 = Train2(hypos)
thinkplot.Pdf(suite1)
thinkplot.Pdf(suite2)
"""
Explanation: Here's what a power law prior looks like, compared to a uniform prior
End of explanation
"""
dataset = [60]
high = 1000
thinkplot.PrePlot(num=2)
constructors = [Train, Train2]
labels = ['uniform', 'power law']
for constructor, label in zip(constructors, labels):
suite = MakePosterior(high, dataset, constructor)
suite.label = label
thinkplot.Pmf(suite)
thinkplot.Config(xlabel='Number of trains',
ylabel='Probability')
"""
Explanation: Now let's see what the posteriors look like after observing one train.
End of explanation
"""
dataset = [30, 60, 90]
for high in [500, 1000, 2000]:
suite = MakePosterior(high, dataset, Train2)
print(high, suite.Mean())
"""
Explanation: The power law gives less prior probability to high values, which yields lower posterior means, and less sensitivity to the upper bound.
End of explanation
"""
hypos = range(1, 1001)
suite = Train(hypos)
suite.Update(60)
suite.Percentile(5), suite.Percentile(95)
"""
Explanation: Credible intervals
To compute credible intervals, we can use the Percentile method on the posterior.
End of explanation
"""
cdf = Cdf(suite)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Number of trains',
ylabel='Cumulative Probability',
legend=False)
"""
Explanation: If you have to compute more than a few percentiles, it is more efficient to compute a CDF.
Also, a CDF can be a better way to visualize distributions.
End of explanation
"""
cdf.Percentile(5), cdf.Percentile(95)
"""
Explanation: Cdf also provides Percentile
End of explanation
"""
# Solution goes here
"""
Explanation: Exercises
Exercise: To write a likelihood function for the locomotive problem, we had
to answer this question: "If the railroad has N locomotives, what
is the probability that we see number 60?"
The answer depends on what sampling process we use when we observe the
locomotive. In this chapter, I resolved the ambiguity by specifying
that there is only one train-operating company (or only one that we
care about).
But suppose instead that there are many companies with different
numbers of trains. And suppose that you are equally likely to see any
train operated by any company.
In that case, the likelihood function is different because you
are more likely to see a train operated by a large company.
As an exercise, implement the likelihood function for this variation
of the locomotive problem, and compare the results.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise: Suppose I capture and tag 10 rock hyraxes. Some time later, I capture another 10 hyraxes and find that two of them are already tagged. How many hyraxes are there in this environment?
As always with problems like this, we have to make some modeling assumptions.
1) For simplicity, you can assume that the environment is reasonably isolated, so the number of hyraxes does not change between observations.
2) And you can assume that each hyrax is equally likely to be captured during each phase of the experiment, regardless of whether it has been tagged. In reality, it is possible that tagged animals would avoid traps in the future, or possible that the same behavior that got them caught the first time makes them more likely to be caught again. But let's start simple.
I suggest the following notation:
N: total population of hyraxes
K: number of hyraxes tagged in the first round
n: number of hyraxes caught in the second round
k: number of hyraxes in the second round that had been tagged
So N is the hypothesis and (K, n, k) make up the data. The probability of the data, given the hypothesis, is the probability of finding k tagged hyraxes out of n if (in the population) K out of N are tagged.
If you are familiar with the hypergeometric distribution, you can use the hypergeometric PMF to compute the likelihood function. Otherwise, you can figure it out using combinatorics.
End of explanation
"""
|
WNoxchi/Kaukasos | FAI_old/lesson2/lesson2_LM_SGD_Optz_codealong.ipynb | mit | %matplotlib inline
import numpy as np
from numpy.random import random
# from matplotlib import pyplot as plt, animation
from matplotlib import pyplot as plt, rcParams, animation, rc
rc('animation', html='html5')
rcParams['figure.figsize'] = 3, 3 # sets plot window size
%precision 4
np.set_printoptions(precision=4, linewidth=60)
def lin(a, b, x): return a*x + b
a = 3.
b = 8.
n = 30
x = random(n)
y = lin(a, b, x)
x
y
plt.scatter(x,y)
def sse(y, y_pred): return ((y - y_pred)**2).sum()
def loss(y, a, b, x): return sse(y, lin(a,b,x))
def avg_loss(y, a,b,x): return np.sqrt(loss(y,a,b,x)/n)
"""
Explanation: Lesson 2 SGD/Optimization Tutorial Code Along
See the notebook -- FAI1 - Practical Deep Learning I
Follow Lecture 2 @ around [1:10:00]
End of explanation
"""
a_guess = -1
b_guess = 1
avg_loss(y, a_guess, b_guess, x)
Lr = 0.01 # below thanks to Wolfram Alpha
# d[(y - y_pred)**2,b] = d[(y - (a*x+b))**2, b] = 2*(b + a*x - y)
# d[(y - y_pred)**2,a] = d[(y - (a*x+b))**2, a] = 2*x*(b+a*x-y) = x * dy/db
"""
Explanation: Let's start with a line y = ax + b, where a = 3 and b = 8. Pretending we don't know a and b, we start with guesses for both and use a linear model to recover them.
End of explanation
"""
def update():
global a_guess, b_guess
y_pred = lin(a_guess, b_guess, x)
dydb = 2*(y_pred - y)
dyda = x*dydb
a_guess -= Lr * dyda.mean() # new guess is minus deriv * (a little bit)
b_guess -= Lr * dydb.mean()
"""
Explanation: Next we come up with an update function that moves our guesses for a and b a little closer to their true values each time it is called.
End of explanation
"""
fig = plt.figure(figsize=(5,4), dpi=100)
plt.scatter(x, y)
line, = plt.plot(x, lin(a_guess, b_guess, x))
plt.close()
def animate(i):
line.set_ydata(lin(a_guess, b_guess, x))
    for _ in range(10): update()
return line,
ani = animation.FuncAnimation(fig, animate, np.arange(0, 40), interval=100)
ani
"""
Explanation: Confirm that our line eventually, actually, fits our data, via animation.
End of explanation
"""
|
steinam/teacher | jup_notebooks/datenbanken/Versicherung_11FI3_On_Paper.ipynb | mit | %load_ext sql
%sql mysql://steinam:steinam@localhost/versicherung_complete
"""
Explanation: Versicherung on Paper
End of explanation
"""
%%sql
-- my solution
select distinct(Land) from Fahrzeughersteller;
%%sql
-- your solution
select fahrzeughersteller.Land
from fahrzeughersteller
group by fahrzeughersteller.Land
;
"""
Explanation: We want a list of the manufacturers' countries without duplicates. (3 points)
End of explanation
"""
%%sql
-- my solution
select fahrzeugtyp.Bezeichnung, count(fahrzeug.iD) as Anzahl
from fahrzeugtyp left join fahrzeug
on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id
group by fahrzeugtyp.bezeichnung
having Anzahl > 2
%%sql
select *, (select count(*) from fahrzeug
where fahrzeug.fahrzeugtyp_id = fahrzeugtyp.id) as Fahrzeuge
from fahrzeugtyp
having Fahrzeuge > 2
order by fahrzeugtyp.bezeichnung;
"""
Explanation: List all vehicle types and the number of vehicles of each type, but only if more than 2 vehicles of that type exist. Sort the output by vehicle type. (4 points)
End of explanation
"""
%%sql
-- my solution
-- select ID from Abteilung where Abteilung.Ort = 'Dortmund' or abteilung.Ort = 'Bochum'
select Name, vorname, Bezeichnung, Abteilung.ID, Mitarbeiter.Abteilung_ID,
Abteilung.Ort from Mitarbeiter inner join Abteilung
on Mitarbeiter.Abteilung_ID = Abteilung.ID
where Abteilung.Ort in('Dortmund', 'Bochum')
order by Name
%%sql
-- your solution
select mitarbeiter.Name, mitarbeiter.Vorname,
(select abteilung.bezeichnung
from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Abteilung,
(select abteilung.ort
from abteilung where abteilung.id = mitarbeiter.abteilung_id) as Standort
from mitarbeiter having Standort = "Dortmund" or Standort = "Bochum";
"""
Explanation: Find the last and first names of the employees, including the department name, whose department is located in Dortmund or Bochum.
End of explanation
"""
%%sql
-- my solution
select fahrzeughersteller.id, year(datum) as Jahr,
min(zuordnung_sf_fz.schadenshoehe),
max(zuordnung_sf_fz.Schadenshoehe),
(max(zuordnung_sf_fz.schadenshoehe) - min(zuordnung_sf_fz.schadenshoehe)) as Differenz
from fahrzeughersteller left join fahrzeugtyp
on fahrzeughersteller.id = fahrzeugtyp.hersteller_ID
inner join fahrzeug on fahrzeugtyp.id = fahrzeug.fahrzeugtyp_id
inner join zuordnung_sf_fz
on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id
inner join schadensfall on schadensfall.id = zuordnung_sf_fz.schadensfall_id
group by fahrzeughersteller.id, year(datum)
%%sql
-- Wortmann's revised version, works
select
fahrzeughersteller.Name,
(select min(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz
where zuordnung_sf_fz.fahrzeug_id in(
select fahrzeug.id from fahrzeug
where fahrzeug.fahrzeugtyp_id in(
select fahrzeugtyp.id from fahrzeugtyp
where fahrzeugtyp.hersteller_id = fahrzeughersteller.id
)
)
) as Kleinste,
(select max(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz
where zuordnung_sf_fz.fahrzeug_id in(
select fahrzeug.id from fahrzeug
where fahrzeug.fahrzeugtyp_id in(
select fahrzeugtyp.id from fahrzeugtyp
where fahrzeugtyp.hersteller_id = fahrzeughersteller.id
)
)
) as `Groesste`
from fahrzeughersteller;
"""
Explanation: For each vehicle manufacturer (the ID is sufficient) and each year, find the smallest and largest claim amount.
If possible, also include the difference between the two values in the same result set; otherwise, write a separate SQL statement for this part. (5 points)
End of explanation
"""
%%sql
select Mitarbeiter.Name, dienstwagen.Kennzeichen
from Mitarbeiter inner join dienstwagen
on mitarbeiter.id = dienstwagen.Mitarbeiter_id
inner join fahrzeugtyp
on dienstwagen.fahrzeugtyp_Id = fahrzeugtyp.id
inner join fahrzeughersteller
on fahrzeugtyp.hersteller_id = fahrzeughersteller.id
where Fahrzeughersteller.NAme = 'Opel'
%%sql
select * from mitarbeiter
where mitarbeiter.id in(
select dienstwagen.mitarbeiter_id from dienstwagen
where
dienstwagen.mitarbeiter_id = mitarbeiter.id
and dienstwagen.fahrzeugtyp_id in(
select fahrzeugtyp.id from fahrzeugtyp
where fahrzeugtyp.hersteller_id in(
select fahrzeughersteller.id from fahrzeughersteller
where fahrzeughersteller.name = "Opel"
)
)
)
"""
Explanation: Show all employees, together with their license plates, who drive an Opel as their company car.
(4 points)
End of explanation
"""
%%sql
select fahrzeug.kennzeichen, sum(schadenshoehe)
from fahrzeug inner join zuordnung_sf_fz
on fahrzeug.id = zuordnung_sf_fz.fahrzeug_id
group by fahrzeug.kennzeichen
having sum(schadenshoehe) > (select avg(schadenshoehe) from zuordnung_sf_fz)
%%sql
-- your solution (Wortmann)
/*
select * from fahrzeug having fahrzeug.id in(
select zuordnung_sf_zf.fahrzeugtyp_id from zuordnung_sf_zf
where zuordnung_sf_zf.schadenhoehe > ((select sum(zuordnung_sf_zf.schadenhoehe) from zuordnung_sf_zf)) / (select count(*) from zuordnung_sf_zf))
*/
select * from fahrzeug having fahrzeug.id in(
select zuordnung_sf_fz.fahrzeug_id from zuordnung_sf_fz
where zuordnung_sf_fz.schadenshoehe > ((select sum(zuordnung_sf_fz.schadenshoehe) from zuordnung_sf_fz)) / (select count(*) from zuordnung_sf_fz))
"""
Explanation: Which vehicles have caused claims whose total amount is higher than the average claim amount? (5 points)
End of explanation
"""
%%sql
select Mitarbeiter.Name, Mitarbeiter.Geburtsdatum
from Mitarbeiter
where Geburtsdatum < (select avg(Geburtsdatum) from Mitarbeiter ma)
order by Mitarbeiter.Name
%%sql
-- also works
select ma.Name, ma.Geburtsdatum
from Mitarbeiter ma
where (now() - ma.Geburtsdatum) > (now() - (select avg(geburtsdatum) from mitarbeiter))
order by ma.Name;
%%sql
-- your solution (Wortmann)
select * from mitarbeiter
having mitarbeiter.geburtsdatum < (select sum(mitarbeiter.geburtsdatum) from mitarbeiter) / (select count(*) from mitarbeiter)
"""
Explanation: Which employees are older than the average age of all employees? (4 points)
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub | notebooks/nuist/cmip6/models/sandbox-3/atmos.ipynb | gpl-3.0 | # DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nuist', 'sandbox-3', 'atmos')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NUIST
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
"""
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
"""
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
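# Illustrative sketch only (hypothetical answer): a BOOLEAN property with
# cardinality 1.1 takes a single un-quoted Python boolean, e.g.
#     DOC.set_value(True)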
"""
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
"""
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
"""
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
"""
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
"""
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
"""
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
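# Illustrative sketch only (hypothetical value): a FLOAT property takes an
# un-quoted number, e.g. a typical modern total solar irradiance of ~1361 W m-2:
#     DOC.set_value(1361.0)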
"""
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
"""
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation
"""
|
pycircle/presentations | wprowadzenie_2.ipynb | apache-2.0 | help([1, 2, 3])
dir([1, 2, 3])
sum??
"""
Explanation: <img src='http://pycircle.org/static/pycircle_big.png' style="margin-left:auto; margin-right:auto; height:70%; width:70%">
Introduction, part 2
End of explanation
"""
all([1==1, True, 10, -1, False, 3*5==1]), all([1==5, True, 10, -1])
any([False, True]), any([False, False])
bin(12), oct(12), hex(12), int('12'), float(12.)
ord('A'), chr(65)
raw_input(u"Podaj liczbę: ")
zip([1,2,3, 3], [2, 3, 4, 10])
sorted([8, 3, 12, 9, 3]), reversed(range(10)), list(reversed(range(10)))
len([3, 2, 1]), len([[1, 2], [3, 4, 5]])
list(), dict(), set(), tuple()
"""
Explanation: Built-in functions
End of explanation
"""
A = (1, 2, 3)
B = [1, 2, 3]
A == B
"""
Explanation: Tuples
End of explanation
"""
A = set()
A.add(2)
A.add(3)
A.add(4)
A
A.add(3)
A
B = set((4, 5, 6))
A.difference(B)
A.symmetric_difference(B)
A.intersection(B)
A.union(B)
"""
Explanation: How does a tuple differ from a list? (A short example is added below.)
Sets
End of explanation
"""
pow(2, 10), divmod(10, 3), sum([1, 2, 3])
round(0.5), round(0.2), round(0.9)
min([1, 2, 3]), max([1, 2, 3])
abs(10), abs(-10)
24 % 5, 24 % 2
"""
Explanation: Simple math
End of explanation
"""
f = lambda x: x+1
f(3)
f = lambda a, b: a+b**3
f(2, 3)
map(lambda x: x+10, [0, 2, 5, 234])
[x+10 for x in [0, 2]]
map(chr, [80, 121, 67, 105, 114, 99, 108, 101])
[chr(x) for x in [80, 121, 67, 105, 114, 99, 108, 101]]
filter(lambda x: x > 0, [-1, 0, 4, -3, 2])
[x for x in [-1, 0, 4, -3, 2] if x > 0]
reduce(lambda a, b: a - b, [2, 3, 4])
2 - 3 - 4
"""
Explanation: A bit of functional programming
map, filter, reduce
the lambda expression $\lambda$
End of explanation
"""
%ls -l
fp = open("pycircle.txt", "w")
%ls -l
fp.write("Hello world\n")
fp.close()
%cat pycircle.txt
with open("pycircle.txt") as fp:
print fp.read(),
"""
Explanation: More information about the built-in functions at https://docs.python.org/2/library/functions.html
Exercises 1
1. Write code that builds a list of the numbers in $[0, 100]$ that are divisible by 3 but not by 9
2. Write code that returns the unique elements of a given list
3. Write code that finds the maximum of the values in a dictionary
(One possible solution sketch is added after this cell.)
Files
End of explanation
"""
def fun1(a):
a.append(9)
return a
def fun2(a=[]):
a.append(9)
return a
lista1 = [1, 2, 3]
lista2 = [3, 4, 5]
fun1(lista1), fun2(lista2)
def fun2(a=[]):
a.append(9)
return a
fun2()
fun2()
fun2()
"""
Explanation: Functions
End of explanation
"""
def show_local():
x = 23
print("Local: %s" % x)
show_local()
def show_enclosing(a):
def enclosing():
print("Enclosing: %s" % a)
enclosing()
show_enclosing(5)
x = 43
def show_global():
print("Global %s" % x)
show_global()
def show_built():
print("Built-in: %s" % abs)
show_built()
x = 43
def what_x():
print(x)
x = 4
what_x()
x = 43
def encl_x():
x = 23
def enclosing():
print("Enclosing: %s" % x)
enclosing()
encl_x()
x = 43
def what_about_globals():
global x
x = 37
print("In function %s" % x)
what_about_globals()
print("After function %s" % x)
"""
Explanation: LEGB
<img src="http://sandeeps.in/_images/python_legb.png" style="margin-left:auto; margin-right:auto;">
End of explanation
"""
def f(x):
f.l += x
print "x: ", x
print "f.l: ", f.l
f.l = 10
f(2)
f(14)
"""
Explanation: Functions are objects too!
End of explanation
"""
def powerer(power):
def nested(number):
return number ** power
return nested
f = powerer(3)
f(2), f(10)
def licznik(start):
def nested(label):
print(label, nested.state)
nested.state += 1
nested.state = start
return nested
f = licznik(0)
f('a')
f('b')
f('c')
"""
Explanation: Function factories
End of explanation
"""
' '.join(['a', 'b', 'c'])
def my_join(joining_str, list_of_str):
    return reduce(lambda a, b: a + joining_str + b, list_of_str)
my_join(" ", ['a', 'b', 'c'])
' '.join(['a', 'b', 'c'])
"""
Explanation: Exercises 2
1. Write a function that creates a file with the square roots of the integers in $[0, 100]$, each on a separate line
2. Write a function that reads the square roots from the file from the previous exercise, computes their sum, and appends it to the file
3. Write a function that behaves like ''.join() using reduce (see the completed my_join above)
(A possible sketch for tasks 1 and 2 is added below.)
End of explanation
"""
|
mne-tools/mne-tools.github.io | 0.14/_downloads/plot_sensor_regression.ipynb | bsd-3-clause | # Authors: Tal Linzen <[email protected]>
# Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.stats.regression import linear_regression
print(__doc__)
data_path = sample.data_path()
"""
Explanation: Sensor space least squares regression
Predict single trial activity from a continuous variable.
A single-trial regression is performed in each sensor and timepoint
individually, resulting in an Evoked object which contains the
regression coefficient (beta value) for each combination of sensor
and timepoint. Example also shows the T statistics and the associated
p-values.
Note that this example is for educational purposes and that the data used
here do not contain any significant effect.
(See Hauk et al. (2006). The time course of visual word recognition as
revealed by linear regression analysis of ERP data. Neuroimage.)
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, aud_r=2)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False,
eog=False, exclude='bads')
# Reject some epochs based on amplitude
reject = dict(mag=5e-12)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
"""
Explanation: Set parameters and read data
End of explanation
"""
names = ['intercept', 'trial-count']
intercept = np.ones((len(epochs),), dtype=np.float)
design_matrix = np.column_stack([intercept, # intercept
np.linspace(0, 1, len(intercept))])
# also accepts source estimates
lm = linear_regression(epochs, design_matrix, names)
def plot_topomap(x, unit):
x.plot_topomap(ch_type='mag', scale=1, size=1.5, vmax=np.max,
unit=unit, times=np.linspace(0.1, 0.2, 5))
trial_count = lm['trial-count']
plot_topomap(trial_count.beta, unit='z (beta)')
plot_topomap(trial_count.t_val, unit='t')
plot_topomap(trial_count.mlog10_p_val, unit='-log10 p')
plot_topomap(trial_count.stderr, unit='z (error)')
"""
Explanation: Run regression
End of explanation
"""
|
napjon/ds-nd | p3-wrangling/project.osm/01-documentation.ipynb | mit | pipeline = [{'$match': {'address.street':{'$exists':1}}},
{'$project': {'_id': '$address.street'}},
{'$limit' : 5}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
"""
Explanation: OpenStreetMap is an open project, which means it's free and everyone can use it and edit it as they like. OpenStreetMap is a direct competitor of Google Maps. How can OpenStreetMap compete with the giant, you ask? It depends completely on crowdsourcing: there are lots of people around the world willingly updating the map, most of them fixing the map of their own country.
OpenStreetMap is powerful, but it relies heavily on human input. That strength is also its downfall: wherever there is human input, there is human error, so the data is very error prone. I chose the whole area of Jakarta, the capital of Indonesia. This dataset is huge, over 250,000 examples. It's my hometown, and I want to help the community.
<!-- TEASER_END -->
Problems Encountered in the Map
When I opened the OpenStreetMap dataset, I noticed the following issues:
Street type abbreviations
Inconsistent phone number format
Street Type Abbreviations
Take the name of the street for example. People like to abbreviate the type of the street.
Street becomes St. or st. In Indonesia, 'Jalan' (street) is also abbreviated as Jln, jln, jl, or Jln.
This may seem like a minor issue, but a data scientist or web developer expects street names to follow a generic format.
'Jalan Sudirman' -> Jalan <name> -> name = Sudirman
'Jln Sudirman' -> Jalan <name> -> ERROR!
Some users also enter the street name in two different forms, a street address and a full address. I consolidated all address names into the street address, which results in the following:
End of explanation
"""
pipeline = [{'$match': {'phone':{'$exists':1}}},
{'$project': {'_id': '$phone'}},
{'$limit' : 5}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
"""
Explanation: Inconsistent phone number format
We also have inconsistent phone numbers:
{u'_id': u'021-720-0981209'}
{u'_id': u'(021) 7180317'}
{u'_id': u'081807217074'}
{u'_id': u'+62 857 4231 9136'}
This makes it difficult for any developer to parse the numbers into a common format. Jakarta is in Indonesia, which has country code +62. We see that some users prefer a dash or spaces as separators, and some even wrap the country code or city code (Jakarta: 21) in parentheses. We also see numbers prefixed with 0, which works within Indonesia but not internationally.
So we have to convert these numbers into a common format. The numbers benefit from incorporating spaces, so that a developer using the data can extract the country code, the city code, and the rest of the number. Since mobile numbers don't have a city code, we can leave that part out. We also can't simply prefix every number with the country code, since operator numbers, like McDonald's, don't need one. After solving all of these issues, the results are:
End of explanation
"""
!ls -lh dataset/jakarta*
"""
Explanation: Overview of the data
You can see the file sizes of the dataset below.
End of explanation
"""
pipeline = [
{'$match': {'created.user':{'$exists':1}}},
{'$group': {'_id':'$created.user',
'count':{'$sum':1}}},
{'$sort': {'count':-1}},
{'$limit' : 5}
]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
"""
Explanation: Show the top 5 contributing users
We can also find the top 5 contributing users, counted by the number of points they created in the map and sorted in descending order.
End of explanation
"""
pipeline = [{'$match': {'amenity':'restaurant',
'name':{'$exists':1},
'cuisine':{'$exists':1},
'phone':{'$exists':1}}},
{'$project':{'_id':'$name',
'cuisine':'$cuisine',
'contact':'$phone'}}]
result = db.jktosm.aggregate(pipeline)['result']
pprint.pprint(result)
"""
Explanation: Show the restaurant's name, the food they serve, and contact number
End of explanation
"""
|
dgergel/VIC | samples/notebooks/example_plotting_vic_outputs.ipynb | gpl-2.0 | %matplotlib inline
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
# input files for example:
asci_fname = '/Users/jhamman/workdir/VIC_tests_20160531/examples/Example-Classic-Stehekin-fewb/results/fluxes_48.1875_-120.6875.txt'
nc_fname = '/Users/jhamman/workdir/VIC_tests_20160331/examples/Example-Image-Stehekin-base-case/results/Stehekin.history.nc'
"""
Explanation: Tutorial: Plotting VIC Model Output
This Jupyter Notebook outlines one approach to plotting VIC output from the classic and image drivers. The tools used here are all freely available.
End of explanation
"""
# Use the pandas read_table function to read/parse the VIC output file
df = pd.read_table(asci_fname, comment='#', sep=r"\s*", engine='python',
parse_dates=[[0, 1, 2]], index_col='YEAR_MONTH_DAY')
df.head()
"""
Explanation: Plotting Classic Driver Output
Reading VIC ASCII data:
We'll use pandas to parse the ASCII file.
End of explanation
"""
# Select the precipitation, evapotranspiration, and runoff variables and plot their timeseries.
df[['OUT_PREC', 'OUT_EVAP', 'OUT_RUNOFF']].plot()
"""
Explanation: Plot 1: Time Series of Classic Driver Variables
Here we'll use pandas' built in plotting to plot 3 of the variables in the dataframe (df).
End of explanation
"""
# Open the dataset
ds = xr.open_dataset(nc_fname)
ds
"""
Explanation: Plotting Image Driver Output
The image driver outputs netCDF files. Here we'll use the xarray package to open the dataset, and make a few plots.
End of explanation
"""
ds['OUT_EVAP'].sel(time='1949-01-04-00').plot()
"""
Explanation: Plot 2: Time slice of image driver output
Quick and simple, select a time slice of the EVAP variable and plot it.
End of explanation
"""
ds['OUT_SWE'].resample('1D', dim='time', how='mean').plot(col='time', col_wrap=4, levels=10)
"""
Explanation: Plot 3: Multiple time slices of image driver output
Xarray allows you to plot multiple time periods at once; here we plot the daily average SWE.
End of explanation
"""
ds['OUT_SOIL_TEMP'].sel(time='1949-01-04').resample(
'3h', dim='time', how='mean').plot(
col='time', row='nlayer', levels=10)
"""
Explanation: Plot 4: Multiple time slices of 4d image driver output
For 4d variables, we can again use the xarray facet grid; now time is along the x axis and soil layer is along the y axis.
End of explanation
"""
fig, ax = plt.subplots(1, 1, subplot_kw=dict(projection=ccrs.Mercator()))
ds['OUT_EVAP'].mean(dim='time').plot.pcolormesh('lon', 'lat', ax=ax,
levels=10, vmin=0, vmax=0.01,
transform=ccrs.PlateCarree())
# Configure the map
gl = ax.gridlines(crs=ccrs.PlateCarree(), draw_labels=True,
linewidth=2, color='gray', alpha=0.5, linestyle='--')
ax.set_extent([-125, -118, 47, 49], ccrs.Geodetic())
ax.coastlines('10m')
gl.xlabels_top = False
gl.ylabels_right = False
"""
Explanation: Plot 5: Using Cartopy to project VIC Image Driver Output
Often, we want to plot maps of VIC output that are georeferenced and include things like coastlines and political boundaries. Here we use xaray plotting along with cartopy to plot the temporal mean evapotranspiration.
End of explanation
"""
ds['OUT_SWDOWN'].mean(dim=('lon', 'lat')).to_dataframe().plot()
"""
Explanation: Plot 6: Plotting domain mean timeseries from VIC Image Driver Output
Here, we'll take the domain mean of the downward shortwave radiation and will use pandas to plot the data.
End of explanation
"""
ds['OUT_ALBEDO'].isel(lat=2, lon=2).plot()
plt.ylim(0, 1)
plt.close('all')
"""
Explanation: Plot 7: Plotting timeseries at a point from VIC Image Driver Output
Here, we'll take a single point of the surface albedo variable and will use pandas to plot the data.
End of explanation
"""
|
ml-ensemble/ml-ensemble.github.io | info/_downloads/layer.ipynb | mit | from mlens.parallel import Layer, Group, make_group, run
from mlens.utils.dummy import OLS, Scale
from mlens.index import FoldIndex
indexer = FoldIndex(folds=2)
group = make_group(indexer, [OLS(1), OLS(2)], None)
"""
Explanation: .. currentmodule:: mlens.parallel
Layer Mechanics
ML-Ensemble is designed to provide an easy user interface. But it is also designed
to be extremely flexible, all the while providing maximum concurrency at minimal
memory consumption. The lower-level API that builds the ensemble and manages the
computations is constructed in as modular a fashion as possible.
The low-level API introduces a computational graph-like environment that you can
directly exploit to gain further control over your ensemble. In fact, building
your ensemble through the low-level API is almost as straightforward as using
the high-level API. In this tutorial, we will walk through how to use the
:class:Group and :class:Layer classes to fit several learners.
Suppose we want to fit several learners. The learner tutorial showed us how to
fit a single learner, and so one approach would be to simply
iterate over our learners and fit them one at a time. This however is a very slow
approach since we don't exploit the fact that learners can be trained in parallel.
Moreover, any type of aggregation, like putting all predictions into an array, would
have to be done manually.
The Layer API
^^^^^^^^^^^^^
To parallelize the implementation, we can use the :class:Layer class. A layer is
a handle that will run any number of :class:Group instances attached to it in parallel. Each
group in turn is a wrapper around an indexer-transformers-estimators triplet.
Basics
So, to fit our two learners in parallel, we first need a :class:Group object to
handle them.
End of explanation
"""
import numpy as np
np.random.seed(2)
X = np.arange(20).reshape(10, 2)
y = np.random.rand(10)
layer = Layer(stack=group)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: This group object is now a complete description of how to fit our two
learners using the prescribed indexing method.
To train the estimators, we need to feed the group to a :class:Layer instance:
End of explanation
"""
group = make_group(indexer, [OLS(1), OLS(2)], [Scale()])
layer = Layer(stack=group)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: To use some preprocessing before fitting the estimators, we can use the
transformers argument when creating our group:
End of explanation
"""
group = make_group(
indexer,
{'case-1': [OLS(1)], 'case-2': [OLS(2)]},
{'case-1': [Scale()], 'case-2': []}
)
layer = Layer(stack=group)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: Multitasking
If we want our estimators two have different preprocessing, we can easily
achieve this either by specifying different cases when making the group,
or by making two separate groups. In the first case:
End of explanation
"""
groups = [
make_group(indexer, OLS(1), Scale()), make_group(indexer, OLS(2), None)
]
layer = Layer(stack=groups)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: In the latter case:
End of explanation
"""
groups = [
make_group(FoldIndex(2), OLS(1), Scale()),
make_group(FoldIndex(4), OLS(2), None)
]
layer = Layer(stack=groups)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: Which method to prefer depends on the application, but generally, it is
preferable to put all transformers and all estimators belonging to a
given indexing strategy into one group instance: it is easier to separate
groups by indexer and to use cases to distinguish between
different preprocessing pipelines.
Now, suppose we want to do something more exotic, like using different
indexing strategies for different estimators. This can easily be achieved
by creating groups for each indexing strategy we want:
End of explanation
"""
from mlens.index import BlendIndex
groups = [
make_group(FoldIndex(2), OLS(1), None),
make_group(BlendIndex(0.5), OLS(1), None)
]
layer = Layer(stack=groups)
print(
run(layer, 'fit', X, y, return_preds=True)
)
"""
Explanation: Some care needs to be taken here: if indexing strategies do not return the
same number of rows, the output array will be zero-padded.
End of explanation
"""
layer = Layer()
group = make_group(FoldIndex(4), OLS(), None)
layer.push(group)
"""
Explanation: Note that even if mlens indexers output different shapes, they preserve
row indexing to ensure predictions are consistently mapped to their respective
input. If you build a custom indexer, make sure that it uses a strictly
sequential (with respect to row indexing) partitioning strategy.
Layer features
A layer does not have to be specified all in one go; you can instantiate
a layer and push and pop to its stack.
End of explanation
"""
run(layer, 'fit', X, y)
group = make_group(FoldIndex(2), OLS(1), None)
layer.push(group)
try:
run(layer, 'predict', X, y)
except Exception as exc:
print("Error: %s" % str(exc))
"""
Explanation: Note:
If you push or pop to the stack, you must call fit before you can
use the layer for prediction.
End of explanation
"""
from mlens.metrics import rmse
layer = Layer()
group1 = make_group(
indexer,
{'case-1': [OLS(1)], 'case-2': [OLS(2)]},
{'case-1': [Scale()], 'case-2': []},
learner_kwargs={'scorer': rmse}
)
layer.push(group1)
run(layer, 'fit', X, y, return_preds=True)
print()
print("Collected data:")
print(layer.data)
"""
Explanation: The :class:Layer class can print the progress of a job, as well as inspect
data collected during the job. Note that the
printouts of the layer do not take group membership into account.
End of explanation
"""
|
caganze/wisps | notebooks/.ipynb_checkpoints/lsstdsf_pca-checkpoint.ipynb | mit | features=list(hst3d.columns)
features.remove('name')
"""
Explanation: Create a training set, a test set and a set to predict for
End of explanation
"""
import seaborn as sns
#plt.xscale('log')
sns.pairplot(spex[features], hue=None)
good_features=['H_2O-1/J-Cont', 'CH_4/H-Cont', 'H_2O-2/J-Cont']
from sklearn.decomposition import PCA
pca = PCA(n_components=2, svd_solver='full')
pca.fit(spex[good_features].values)
spex_pcaed=pca.transform(spex[good_features].values)
proj_sample=pca.transform(hst3d[good_features].values)
colors=an.color_from_spts(spex.spt.values, cmap='viridis')
plt.scatter(proj_sample[:,0],proj_sample[:,1], alpha=0.6,color='k')
plt.scatter(spex_pcaed[:,0], spex_pcaed[:, 1], color=colors)
plt.xlabel('axis-1', fontsize=18)
plt.ylabel('axis-2', fontsize=18)
plt.xlim([-1.5, 1.5])
plt.ylim([-.3, 1.5])
sns.distplot(spex.spt)
"""
Explanation: Inspect the features. I know these features (at least the spectral indices) are correlated but also have high variance, so I could pick my favorite features and use those instead.
End of explanation
"""
|