DL/Python+Basics+With+Numpy+v3.ipynb | ###Markdown
Python Basics with Numpy (optional assignment)

Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with the functions we'll need.

**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work will not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.

**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code

Let's get started!

About iPython Notebooks

iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the START CODE HERE and END CODE HERE comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.

**Exercise**: Set `test` to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
###Code
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
###Output
test: Hello World
###Markdown
**Expected output**:test: Hello World **What you need to remember**:- Run your cells using SHIFT+ENTER (or "Run cell")- Write code in the designated areas using Python 3 only- Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.**Reminder**:$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
###Code
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
###Output
_____no_output_____
###Markdown
**Expected Output**: ** basic_sigmoid(3) ** 0.9525741268224334 Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
###Code
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
###Output
_____no_output_____
###Markdown
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
###Code
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
###Output
[ 2.71828183 7.3890561 20.08553692]
###Markdown
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
###Code
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
###Output
[4 5 6]
###Markdown
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.**Exercise**: Implement the sigmoid function using numpy. **Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\\end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\\end{pmatrix}\tag{1} $$
###Code
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
###Output
_____no_output_____
###Markdown
**Expected Output**: **sigmoid([1,2,3])** array([ 0.73105858, 0.88079708, 0.95257413]) 1.2 - Sigmoid gradientAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$You often code this function in two steps:1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.2. Compute $\sigma'(x) = s(1-s)$
###Code
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = 1/(1+np.exp(-x))
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
###Output
sigmoid_derivative(x) = [ 0.19661193 0.10499359 0.04517666]
###Markdown
**Expected Output**: **sigmoid_derivative([1,2,3])** [ 0.19661193 0.10499359 0.04517666]

1.3 - Reshaping arrays

Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.

**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2]))  # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
###Code
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape(image.shape[0]*image.shape[1]*image.shape[2], 1)
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
###Output
image2vector(image) = [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]
###Markdown
**Expected Output**: **image2vector(image)** [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]] 1.4 - Normalizing rowsAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
###Code
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, axis = 1, keepdims = True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
###Output
normalizeRows(x) = [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]
###Markdown
**Expected Output**: **normalizeRows(x)** [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]] **Note**:In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! 1.5 - Broadcasting and the softmax function A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). **Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.**Instructions**:- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $ - $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}\end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}\end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \\\end{pmatrix} $$
###Code
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
###Output
softmax(x) = [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]
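###Markdown
The note below points out that `x_sum` has shape (2, 1) while `x_exp` and `s` have shape (2, 5). As a small, hedged sketch of how that division broadcasts (the array values here are only illustrative, not from the assignment):
```python
a = np.ones((2, 5))           # stands in for x_exp, shape (2, 5)
b = np.array([[2.], [4.]])    # stands in for x_sum, shape (2, 1)
print((a / b).shape)          # (2, 5): the single column of b is stretched across all 5 columns
```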
###Markdown
**Expected Output**: **softmax(x)** [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]] **Note**:- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning. **What you need to remember:**- np.exp(x) works for any np.array x and applies the exponential function to every coordinate- the sigmoid function and its gradient- image2vector is commonly used in deep learning- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. - numpy has efficient built-in functions- broadcasting is extremely useful 2) Vectorization In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
###Code
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
###Output
dot = 278
----- Computation time = 0.16485399999988104ms
outer = [[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[63 14 14 63 0 63 14 35 0 0 63 14 35 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
----- Computation time = 0.14641400000003912ms
elementwise multiplication = [81 4 10 0 0 63 10 0 0 0 81 4 25 0 0]
----- Computation time = 0.11114999999994879ms
gdot = [ 17.40101806 28.49563761 24.77820737]
----- Computation time = 0.3951330000000475ms
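###Markdown
The note below distinguishes `np.dot()` from element-wise multiplication; here is a minimal illustrative sketch of that difference on two small vectors (values chosen only for the example):
```python
u = np.array([1, 2, 3])
v = np.array([4, 5, 6])
print(np.dot(u, v))   # 32 -> a single number, the inner product
print(u * v)          # [ 4 10 18] -> element-wise, the same as np.multiply(u, v)
```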
###Markdown
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. **Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication. 2.1 Implement the L1 and L2 loss functions**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.**Reminder**:- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.- L1 loss is defined as:$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
###Code
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
###Output
L1 = 1.1
###Markdown
**Expected Output**: **L1** 1.1 **Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$. - L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
###Code
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum((y - yhat)*(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
###Output
L2 = 0.43
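###Markdown
As the exercise text above suggests, `np.dot()` gives another way to write the L2 loss. A quick sketch using a hypothetical helper `L2_dot` (not part of the graded code), reusing the `yhat` and `y` arrays defined above; it should produce the same value of 0.43:
```python
def L2_dot(yhat, y):
    diff = y - yhat
    return np.dot(diff, diff)   # dot of the difference with itself = sum of squared differences

print("L2 = " + str(L2_dot(yhat, y)))
```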
|
testing/mpl_pydata_workshop-master/.ipynb_checkpoints/customizing_axes-checkpoint.ipynb | ###Markdown
Customizing the Axes Dealing with the too-many-tick-labels problem
###Code
import matplotlib.pyplot as plt
import numpy as np
# often when you create lots-o-subplots, the tick labels overlap
fig, axes = plt.subplots(3,3)
for ax in axes.flat: ax.plot(np.random.rand(10))
axes.flat
###Output
_____no_output_____
###Markdown
Using sharex and sharey to turn off redundant labels If the axes share the same x-axis and y-axis, by definition the yticklabels in all but the first column are redundant, and the xticklabels in all but the last row are redundant. `plt.subplots` is aware of this, and setting *sharex* and *sharey* to True will turn off the unnecessary tick labels.
###Code
fig, axes = plt.subplots(3,3, sharex=True, sharey=True)
for ax in axes.flat: ax.plot(np.random.rand(10))
###Output
_____no_output_____
###Markdown
Using tight layout to prevent overlap of labels *between* axes If you want to preserve all the tick labels, but reduce overlap between axes, you can use the "tight layout" option of the figure. This will prevent axis tick labels from overlapping other axes, but will not help with tick crowding on a single axis. In the example below, we create x and y data that are not on the same scale, so we want to preserve the individual tick labels. We'll deal with the "too many ticks on a single axis" problem below.
###Code
fig, axes = plt.subplots(3,3)
for i, ax in enumerate(axes.flat):
ax.plot(np.arange(10)**(i+1), (i+1)*np.random.rand(10))
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Use fewer ticks to prevent the overlap of ticks *within* an axes The default matplotlib tick locator (more on tick locators below) is the [`matplotlib.ticker.MaxNLocator`](http://matplotlib.sf.net/api/ticker_api.html#matplotlib.ticker.MaxNLocator). The `MaxNLocator` will create at most N ticks, and try to place them in intelligent locations regardless of the scale of your data. When creating lots-o-subplots, usually the number of ticks is too large, and you want to dial down the maximum number of ticks. Sometimes just 3 or 4 is enough for plots with many subplots. `ax.locator_params` is a convenience method for customizing the default tick locator, and only works with the `MaxNLocator` (if you have set a custom locator instance this method will not work). Some of the useful options to this command:

*axis* : 'both' | 'x' | 'y'. Apply changes to the x-axis, y-axis or both.

*nbins* : integer. Use at most *nbins*+1 ticks.

*tight* : True|False. Set the view limits of the axis equal to the data limits (no excess white-space). When tight is False (the default) the axis limits will be expanded to a min/max that are nice, round numbers. For example, if your data fall in the range [0.5, 0.95], the axis view limits will be [0, 1] if *tight* is False and [0.05, 0.95] if *tight* is True.
###Code
fig, axes = plt.subplots(3,3)
for i, ax in enumerate(axes.flat):
ax.plot(np.arange(10)**(i+1), (i+1)*np.random.rand(10))
ax.locator_params(axis='both', nbins=4) # axis can be 'x', 'y' or 'both'
fig.tight_layout()
###Output
_____no_output_____
###Markdown
The example above is *almost* usable, except that the tick labels are too long for the ticks with an order of magnitude of 1e3. The default matplotlib tick formatter is a [`matplotlib.ticker.ScalarFormatter`](http://matplotlib.sf.net/api/ticker_api.html#matplotlib.ticker.ScalarFormatter) which will fall back to scientific notation if the tick location values fall outside of a certain range. By default this is [1e-7, 1e7], but this range can be controlled in code, or the defaults can be changed in the [matplotlibrc file](http://matplotlib.sf.net/users/customizing.html) using the `axes.formatter.limits` rc parameter. Below, we will set the scalar formatting limits to [1e-3, 1e3] so that the tick label representations will be more compact.
###Code
fig, axes = plt.subplots(3,3)
for i, ax in enumerate(axes.flat):
ax.plot(np.arange(10)**(i+1), (i+1)*np.random.rand(10))
ax.locator_params(axis='both', nbins=4) # axis can be 'x', 'y' or 'both'
ax.ticklabel_format(scilimits=(-3,3)) # use scientific limits above 1e3
fig.tight_layout()
###Output
_____no_output_____
###Markdown
ggplot styles Show how to make a plot layout that resembles the R ggplot style
###Code
import matplotlib.ticker as ticker  # needed below for the ScalarFormatter subclass
fig, axes = plt.subplots(3, 3, sharex=True, sharey=True, figsize=(8,8))
gray = (0.9, 0.9, 0.9)
class DropFormatter(ticker.ScalarFormatter):
def __call__(self, x, pos=None):
if pos==0: return ''
return ticker.ScalarFormatter.__call__(self, x, pos=None)
for ax in axes.flat:
ax.locator_params(nbins=5)
ax.patch.set_facecolor(gray)
ax.patch.set_edgecolor(gray)
ax.grid()
ax.hist(np.random.randn(200), 20, facecolor='k')
ax.xaxis.grid(color='white', linestyle='-', linewidth=1.5)
ax.yaxis.grid(color='white', linestyle='-', linewidth=1.5)
ax.xaxis.set_major_formatter(DropFormatter())
ax.yaxis.set_major_formatter(DropFormatter())
ax.set_axisbelow(True)
for line in ax.xaxis.get_ticklines() + ax.yaxis.get_ticklines():
line.set_color(gray)
fig.subplots_adjust(wspace=0.05, hspace=0.05)
###Output
_____no_output_____
###Markdown
*tight* : True|False. Set the view limits of the axis equal to the data limits (no excess white-space). When tight is False (the default) the axis limits will be expanded to a min/max that are nice, round numbers. For example, if your data fall in the range [0.5, 0.95], the axis view limits will be [0, 1] if *tight* is False and [0.05, 0.95] if *tight* is True.
###Code
fig, axes = plt.subplots(2)
axes[0].plot([0.05, 0.95], [0.05, 0.95])
axes[0].locator_params(tight=False)
axes[0].set_title('view limits tight=False')
axes[0].grid(True)
axes[1].plot([0.05, 0.95], [0.05, 0.95])
axes[1].locator_params(tight=True)
axes[1].set_title('view limits tight=True')
axes[1].grid(True)
# note that tight layout on a figure is a different
# concept that tight viewlimits. setting the figure
# to tight_layout prevents overlapping text between
# axes
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Controlling tick visibility manually In the examples above, we use [`plt.subplots`](http://matplotlib.sf.net/api/pyplot_api.html#matplotlib.pyplot.subplots) to automatically create aligned subplots with shared axes and redundant tick labels turned off. When placing axes manually ourselves, we need to use API calls to replicate that functionality.
###Code
fig = plt.figure()
# create two axes of different heights ,one over the other
ax1 = fig.add_axes([0.1, 0.3, 0.8, 0.5])
ax2 = fig.add_axes([0.1, 0.1, 0.8, 0.15], sharex=ax1)
ax1.grid(True)
ax2.grid(True)
t = np.linspace(0, 10, 100)
ax1.plot(t, np.exp(-t/5) * np.sin(2*np.pi*t))
ax2.plot(t, np.random.randn(len(t)))
###Output
_____no_output_____
###Markdown
You can access the x or y ticklabel instances with `ax.get_xticklabels` and `ax.get_yticklabels`. These methods return a list of `matplotlib.text.Text` instances on which you can use API methods to tweak everything from their visibility state or rotation to their font style, size, and color. In the example below, we just turn off the visibility of the x-tick labels of the upper panel by using the `Artist.set_visible` method (every object in an mpl `Figure` has this method).
###Code
# turn off the redundant tick labels in the upper panel
for label in ax1.get_xticklabels():
label.set_visible(False)
display(fig)
# reduce the number of labels in the lower y-axis
ax2.locator_params(nbins=3, axis='y')
display(fig)
###Output
_____no_output_____
###Markdown
Additionally, the axes has some helper methods when working with a grid of subplots to selectively handle the first or last column or row. Here we'll get a little crazy, rotating the first-column text 45 degrees and making it red, and enlarging the last-row labels and making them blue.
###Code
fig, axes = plt.subplots(3,3)
fig.tight_layout()
for ax in axes.flat:
ax.locator_params(nbins=4)
if ax.is_first_col():
# red and rotated
for label in ax.get_yticklabels():
label.set_rotation(45)
label.set_color('red')
else:
# invisible
for label in ax.get_yticklabels():
label.set_visible(False)
if ax.is_last_row():
# big and blue
for label in ax.get_xticklabels():
label.set_size(14)
label.set_color('blue')
else:
for label in ax.get_xticklabels():
label.set_visible(False)
###Output
_____no_output_____
###Markdown
Packing axes with VBox and Hbox
###Code
# the random data
x, y = np.random.randn(2, 1000)
fig, axScatter = plt.subplots(1)
# the scatter plot:
axScatter = plt.subplot(111)
axScatter.scatter(x, y)
axScatter.set_aspect(1.)
# create new axes on the right and on the top of the current axes
# The first argument of the new_vertical(new_horizontal) method is
# the height (width) of the axes to be created in inches.
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(axScatter)
axHistx = divider.append_axes("top", 1.2, pad=0.1, sharex=axScatter)
# now determine nice limits by hand:
binwidth = 0.25
xymax = np.max( [np.max(np.fabs(x)), np.max(np.fabs(y))] )
lim = ( int(xymax/binwidth) + 1) * binwidth
bins = np.arange(-lim, lim + binwidth, binwidth)
axHistx.hist(x, bins=bins)
# turn off redundant x tick labels
for tl in axHistx.get_xticklabels():
tl.set_visible(False)
axHistx.set_yticks([0, 50, 100])
display(fig)
axHisty = divider.append_axes("right", 1.2, pad=0.1, sharey=axScatter)
axHisty.hist(y, bins=bins, orientation='horizontal')
# the xaxis of axHistx and yaxis of axHisty are shared with axScatter,
# thus there is no need to manually adjust the xlim and ylim of these
# axis.
for tl in axHisty.get_yticklabels():
tl.set_visible(False)
axHisty.set_xticks([0, 50, 100])
display(fig)
###Output
_____no_output_____
###Markdown
Sophisticated axes layout with gridspec [gridspec](http://matplotlib.sf.net/users/gridspec.html) is a sophisticated layout tool for matplotlib axes that goes far beyond the standard subplot model, which forces your axes into a grid of rows by columns. Gridspec allows you to create axes that span multiple rows and columns, and even place sophisticated layouts within a single axes region. There are two ways to use gridspec: with the pyplot helper function [`plt.subplot2grid`](http://matplotlib.sf.net/api/pyplot_api.html#matplotlib.pyplot.subplot2grid) or using the [`matplotlib.gridspec`](http://matplotlib.sf.net/api/gridspec_api.html) API. The basic syntax of `subplot2grid` is like the standard `subplot` creation, except that the indexing is zero based and is two dimensional. For example, to create the equivalent of `subplot(221)` with gridspec, you would write:

    ax = plt.subplot2grid((2,2), (0, 0))

The second argument means the axes refers to the first row and column of a 2x2 grid of axes. What makes gridspec useful is its ability to support spans. For example, we could imagine the 2-D histogram above with the marginal 1D x and y histograms as a 3x3 grid, where the 2D panel spans rows 2 and 3 and columns 1 and 2, and the marginal densities occupy the remaining top row and right-hand column.
###Code
# this is a helper utility function for prettifying and labeling the axes
# generated by the gridspec examples below
def make_ticklabels_invisible(fig):
'turn off tick labels and label the axes w/ text'
for i, ax in enumerate(fig.axes):
ax.text(0.5, 0.5, "ax%d" % (i+1), va="center", ha="center", size=20)
for tl in ax.get_xticklabels() + ax.get_yticklabels():
tl.set_visible(False)
fig = plt.figure()
# create the layout above using gridspec
# ax1 starts at row 1 and column 0 and spans two rows and two columns
ax1 = plt.subplot2grid((3,3), (1, 0), colspan=2, rowspan=2)
# ax2 starts at row 0 and column 0 and spans two columns
ax2 = plt.subplot2grid((3,3), (0, 0), colspan=2)
# ax3 starts at row 1 and column 2 and spans 2 rows
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
make_ticklabels_invisible(fig)
###Output
_____no_output_____
###Markdown
The gridspec API object also supports numpy-style array indexing and slicing, so you can slice across a gridspec object to indicate a row or column span. Here is a more sophisticated example: you create a GridSpec object, which supports array slicing and indexing, and you can pass these slices into the `plt.subplot` command to create your axes.
###Code
import matplotlib.gridspec as gridspec
fig = plt.figure()
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
# identical to ax1 = plt.subplot(gs.new_subplotspec((0,0), colspan=3))
ax2 = plt.subplot(gs[1,:-1])
ax3 = plt.subplot(gs[1:, -1])
ax4 = plt.subplot(gs[-1,0])
ax5 = plt.subplot(gs[-1,-2])
make_ticklabels_invisible(fig)
###Output
_____no_output_____
###Markdown
Custom tick formatting and locating
###Code
import matplotlib.ticker as ticker
# a random walk
prices = (1 + 0.01*np.random.randn(30)).cumprod() * 10
fig, ax = plt.subplots(1)
ax.plot(prices)
ax.grid()
# format the y tick labels as dollars
#ax.yaxis.set_major_formatter(ticker.FormatStrFormatter('$%.2f'))
import matplotlib.ticker as ticker
x = np.linspace(0, 10*np.pi, 100)
y = np.random.randn(len(x)).cumsum()
fig, ax = plt.subplots(1)
ax.plot(x, y)
# place x-ticks on the integer multiples of pi
class PiLocator(ticker.Locator):
def __call__(self):
vmin, vmax = self.axis.get_view_interval()
imin = np.ceil(vmin/np.pi)
imax = np.floor(vmax/np.pi)
return np.arange(imin, imax + 1)*np.pi  # imax + 1 so the last in-view multiple of pi is included
class PiFormatter(ticker.Formatter):
def __call__(self, x, pos=None):
i = int(x/np.pi)
return r'$%d\pi$'%i
# uncomment for custom locator and formatter
#ax.xaxis.set_major_locator(PiLocator())
#ax.xaxis.set_major_formatter(PiFormatter())
#ax.fmt_xdata = lambda x: '%.4f'%x
###Output
_____no_output_____ |
0911_estimation/05_motifs-done.ipynb | ###Markdown
Data framesIn addition to the `Series`, Pandas also provides a `DataFrame` which has rows and columns, like a table or a spreadsheet. They're similar to (and based on) data frames in the statistics programming language R.We can build a data frame from a dictionary where the _columns_ are entries in a dictionary. Each dictionary _key_ is a column header, and the associated _value_ is a list. The `pd.DataFrame()` function creates a data frame.```nucls = pd.DataFrame({'letter': [ 'A', 'C', 'G', 'T' ], 'name': ['adenine', 'cytosine', 'guanine', 'thymine'], 'ring': ['purine', 'pyrimidine', 'purine', 'pyrimidine']})```
###Code
import pandas as pd
nucls = pd.DataFrame({ 'letter': [ 'A', 'C', 'G', 'T'],
'name': ['adenine', 'cytosine', 'guanine', 'thymine'],
'ring': ['purine', 'pyrimidine', 'purine', 'pyrimidine']})
nucls
###Output
_____no_output_____
###Markdown
We can extract one column of a `DataFrame` as a `Series` using square brackets to index it by the name of the column:```nucls['name']```
###Code
nucls['letter']
###Output
_____no_output_____
###Markdown
We can then index by row into the `Series` with a second set of square brackets```nucls['letter'][2]```
###Code
nucls['letter'][2]
nucls[2:4]
###Output
_____no_output_____
###Markdown
Here is some Python code to create a data frame with observed nucleotide counts from 389 TATA boxes taken from eukaryotic promoters (Bucher, J Mol Biol (1990) 212, 563-578).```tata_counts = pd.DataFrame({'A': [ 16, 352, 3, 354, 268, 360, 222, 155], 'C': [ 46, 0, 10, 0, 0, 3, 2, 44], 'G': [ 18, 2, 2, 5, 0, 20, 44, 157], 'T': [ 309, 35, 374, 30, 121, 6, 121, 33]})```Each row is a position in the TATA motif, and each column is a nucleotide. It's possible to read off the consensus sequence of TATA(A/T)A(A/T)(A/G), sometimes written TATAWAWR, just from looking at the counts in the table.
###Code
tata_counts = pd.DataFrame({'A': [ 16, 352, 3, 354, 268, 360, 222, 155],
'C': [ 46, 0, 10, 0, 0, 3, 2, 44],
'G': [ 18, 2, 2, 5, 0, 20, 44, 157],
'T': [ 309, 35, 374, 30, 121, 6, 121, 33]})
tata_counts
###Output
_____no_output_____
###Markdown
Data frames have many useful methods. For instance, we can use the .sum() method to take the sums across rows or columns. The argument `0` will calculate column sums and the argument `1` will calculate row sums.
###Code
tata_counts.sum(1)
###Output
_____no_output_____
###Markdown
We can then turn these counts into probabilities by dividing each nucleotide count by the total number of sequences counted. That is if 35 out of 389 TATA-box sequences have a `T` at the second position, then the probability of a `T` at position 1 in a random TATA-box sequence is 35/389, just under 10%.```tata_counts / 389```will make a new data frame dividing each individual entry in our data frame by 389. We'll use this to make a new `tata_probs` data frame with the _probabilities_ of each nucleotide.
###Code
tata_probs = tata_counts / 389
tata_probs
###Output
_____no_output_____
###Markdown
We can now look up, e.g., the probability of a `T` at the second position, which is position 1 in Python counting```tata_probs['T'][1]```
###Code
tata_probs['T'][1]
###Output
_____no_output_____
###Markdown
We're most of the way to a probabilistic model of a TATA box. We will assume that each of the nucleotides in the TATA box is independent, so we can multiply these probabilities together:

$$P(\;\mathtt{TATAAAAG}\;|\;\mathrm{TATA-box}\;) = P(\;\mathtt{T}\mathrm{\,at\,0\;}) \times P(\;\mathtt{A}\mathrm{\,at\,1\;}) \times P(\;\mathtt{T}\mathrm{\,at\,2\;}) \times P(\;\mathtt{A}\mathrm{\,at\,3\;}) \times P(\;\mathtt{A}\mathrm{\,at\,4\;}) \times P(\;\mathtt{A}\mathrm{\,at\,5\;}) \times P(\;\mathtt{A}\mathrm{\,at\,6\;}) \times P(\;\mathtt{G}\mathrm{\,at\,7\;})$$

We need to keep track of which position is which, because $P(\;\mathtt{T}\mathrm{\,at\,0\;}) \neq P(\;\mathtt{T}\mathrm{\,at\,1\;})$. The `enumerate()` function lets us keep track of a position when we're iterating over a sequence.
```
for position, nt in enumerate(sequ):
    print('position = ' + str(position) + ', nt = ' + str(nt))
```
###Code
for position, nt in enumerate('TATAAAAG'):
print(position, nt)
###Output
0 T
1 A
2 T
3 A
4 A
5 A
6 A
7 G
###Markdown
Now, we'll write a `for` loop to iterate over the positions in a sequence and compute a running probability. We'll start with probability 1
```
prob = 1
```
and then multiply the probability for each independent position
```
for position, nt in enumerate(sequ):
    p = tata_probs[nt][position]
    prob = prob * p
    print(position, nt, p, prob)
```
We can use this to compute the probability of a "very good" TATA-box like `TATATATA`. We can also try the worst possible TATA box, `ACGCGCCT`.
###Code
sequ = 'ACGCGCCT'
prob = 1
for position, nt in enumerate(sequ):
# P( nt at position | TATA-box )
p = tata_probs[nt][position]
prob = prob * p
print(position, nt, p, prob)
prob
###Output
0 A 0.04113110539845758 0.04113110539845758
1 C 0.0 0.0
2 G 0.005141388174807198 0.0
3 C 0.0 0.0
4 G 0.0 0.0
5 C 0.007712082262210797 0.0
6 C 0.005141388174807198 0.0
7 T 0.08483290488431877 0.0
###Markdown
Our final probability is 0! While $P(\;\mathtt{ACGCGCCT}\;|\;\textrm{TATA-box}\;)$ is definitely very small, it's probably not 0. We see zero `C` nucleotides at position 1 out of 389 TATA-boxes, but what if we counted 389,000? Would we find 100, 10, or 1? We often handle these situations by adding a _pseudocount_ to our data. We add a fake count for each nucleotide, at each position, in order to eliminate zeros. The impact of this pseudocount depends on the number of real counts. If we add a pseudocount with 9 real observations, it represents 10% of our overall counts, but if we add a pseudocount with 999 real observations, it's only 0.1%.We can just add 1 to every entry and use this table with pseudocounts to make our new data.```tata_counts_pseudo = tata_counts + 1```
###Code
tata_counts_pseudo = tata_counts + 1
tata_counts_pseudo
###Output
_____no_output_____
###Markdown
Now we can use the new tata_probs to compute the probability of the best TATA-box, which is pretty similar. We can also compute the worst TATA-box, which is very low but not zero.
###Code
tata_counts_pseudo.sum(1)
tata_probs = tata_counts_pseudo / 393
sequ = 'ACGCGCCT'
prob = 1
for position, nt in enumerate(sequ):
p = tata_probs[nt][position]
prob = prob * p
print(position, nt, p, prob)
prob
###Output
0 A 0.043256997455470736 0.043256997455470736
1 C 0.002544529262086514 0.00011006869581544718
2 G 0.007633587786259542 8.402190520263142e-07
3 C 0.002544529262086514 2.1379619644435476e-09
4 G 0.002544529262086514 5.440106779754575e-12
5 C 0.010178117048346057 5.5370043559843004e-14
6 C 0.007633587786259542 4.2267208824307635e-16
7 T 0.08651399491094147 3.656705089125851e-17
###Markdown
It's getting tedious to write the same for loop every time we want to try a different sequence. We can write our own function, `likelihood_tata()`, that will compute the likelihood of a sequence under our TATA-box probability model. We define a function with def followed by the function name. The arguments to the function are named in parentheses, and inside the function, these become variables that take on a different value each time we use the function. The `return` keyword gives the computed "value" for the function.
```
def likelihood_tata(sequ):
    prob = 1
    for position, nt in enumerate(sequ):
        p = tata_probs[nt][position]
        prob = prob * p
        print(position, nt, p, prob)
    return(prob)
```
###Code
# likelihood_tata('TATAAAAG')
def likelihood_tata(sequ):
prob = 1
for position, nt in enumerate(sequ):
p = tata_probs[nt][position]
prob = prob * p
return(prob)
likelihood_tata('TATAAAAG')
###Output
_____no_output_____
###Markdown
Now we can easily use our function to compute the likelihood of some other possible TATA-box sequences. For example, the three sequences below are "very good" TATA-boxes that differ from the "best" TATA box at one of the three "degenerate" positions in the motif. Notice that the overall probability of getting one of these three imperfect motifs is substantially higher than the probability of the perfect TATA-box. In fact, although the TATA-box is a strong motif, fewer than 10% of the sequences generated according to our model will actually match the "best" sequence.
```
TATATAAG
TATAAATG
TATAAAAA
```
###Code
likelihood_tata('TATATAAG') + likelihood_tata('TATAAATG') + likelihood_tata('TATAAAAA')
###Output
_____no_output_____
###Markdown
If we want to use our Bayesian framework to think about TATA-boxes, we need some additional information. What is $P(\;\mathtt{TATAAAAG}\;|\;\textit{not}\,\textrm{TATA-box}\;)$? We need a model for all the other sequences in the genome, often called a "background" model.The easy background model is independent nucleotides, with probabilities determined by the overall composition of the genome. We just counted the overall number of `A`s etc in the yeast genome. A rough estimate is```background = pd.Series({'A': 0.31, 'C': 0.19, 'G': 0.19, 'T': 0.31})```
###Code
background = pd.Series({ 'A': 0.31, 'C': 0.19, 'G': 0.19, 'T': 0.31})
background
background['C']
###Output
_____no_output_____
###Markdown
_Exercise_ Use the `background` defined above to write a `likelihood_background()` function that calculates the likelihood of generating a given sequence under the model of random yeast genome.
###Code
def likelihood_background(sequ):
prob = 1
# for position, nt in enumerate(sequ)
for nt in sequ:
p = background[nt]
prob = prob * p
return prob
likelihood_background('TATAAAAG')
likelihood_background('ACGCGCCT')
###Output
_____no_output_____
###Markdown
Since the "worst" TATA-box is GC-rich and the "best" TATA-box is AT-rich, the odds of getting the "best" TATA-box by chance in random sequence is somewhat higher. Of course, the chance of getting the "best" sequence under our TATA-box probabilistic model is dramatically higher than the chance of getting the "worst" sequence. We can use the _ratio of the likelihoods_ as a measure of how well two different models fit a given sequence.Below, we compute the likelihood ratios for the "best" sequence TATAAAAG, the "worst" sequence ACGCGCCT, and getting any one of the three very-good sequences TATAAATG and TATAAAAA.```print(likelihood_tata('TATAAAAG') / likelihood_background('TATAAAAG'))print(likelihood_tata('ACGCGCCT') / likelihood_background('ACGCGCCT'))print( (likelihood_tata('TATATAAG') + likelihood_tata('TATAAATG') + likelihood_tata('TATAAAAA')) / (likelihood_background('TATATAAG') + likelihood_background('TATAAATG') + likelihood_background('TATAAAAA')) )```
###Code
likelihood_tata('ACGCGCCT') / likelihood_background('ACGCGCCT')
###Output
_____no_output_____
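###Markdown
The cell above only evaluates the "worst" sequence; as a small follow-up sketch, the other ratios described in the text can be computed the same way, simply reusing the two functions already defined:
```python
print(likelihood_tata('TATAAAAG') / likelihood_background('TATAAAAG'))
print((likelihood_tata('TATATAAG') + likelihood_tata('TATAAATG') + likelihood_tata('TATAAAAA'))
      / (likelihood_background('TATATAAG') + likelihood_background('TATAAATG') + likelihood_background('TATAAAAA')))
```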
###Markdown
We can go one step further and turn this likelihood ratio into a function
```
def likelihood_ratio(sequ):
    return(likelihood_tata(sequ) / likelihood_background(sequ))
```
###Code
def likelihood_ratio(sequ):
return( likelihood_tata(sequ) / likelihood_background(sequ) )
likelihood_ratio('TATATATA')
###Output
_____no_output_____
###Markdown
We might want to scan a whole promoter to find a TATA-box. Here is the promoter region for the yeast _CDC19_ gene.```cdc19_prm = 'TATGATGCTAGGTACCTTTAGTGTCTTCCTAAAAAAAAAAAAAGGCTCGCCATCAAAACGATATTCGTTGGCTTTTTTTTCTGAATTATAAATACTCTTTGGTAACTTTTCATTTCCAAGAACCTCTTTTTTCCAGTTATATCATG'```We need to extract 8-nucleotide chunks out of the promoter. Square brackets can extract a _range_ of values from a string or a list. To do this, we do `[start:end]` where the start is _included_ and the end is _excluded_.```alphabet = 'abcdefghijklmnopqrstuvwxyz'alphabet[2:6]```This code goes from index 2 (the 3rd entry, `c`) to index 5 (`f`) and does not include index 6 (`g`).
###Code
cdc19_prm = 'TATGATGCTAGGTACCTTTAGTGTCTTCCTAAAAAAAAAAAAAGGCTCGCCATCAAAACGATATTCGTTGGCTTTTTTTTCTGAATTATAAATACTCTTTGGTAACTTTTCATTTCCAAGAACCTCTTTTTTCCAGTTATATCATG'
len(cdc19_prm)
cdc19_prm[0:8]
cdc19_prm[1:9]
###Output
_____no_output_____
###Markdown
We can use this to run
```
likelihood_ratio(cdc19_prm[0:8])
likelihood_ratio(cdc19_prm[1:9])
```
###Code
likelihood_ratio(cdc19_prm[1:9])
###Output
_____no_output_____
###Markdown
Now we can loop over each starting position in `cdc19_prm` and compute its likelihood. We start at position 0 and we run until the _end_ of our 8-position window is at the end of the promoter. This happens when `start+8 = len(cdc19_prm)` or equivalently `start = len(cdc19_prm) - 8`. The `range(start, end)` function creates a series of numbers, and its `end` is excluded, so to include that last window we loop over `range(0, len(cdc19_prm) - 7)`. To start, we can write the loop
```
for start in range(0, len(cdc19_prm) - 7):
    print(str(start) + ' ' + cdc19_prm[start:start+8])
```
and if all of that looks good we can add in a `likelihood_ratio()`. Then we can build a _list_ of these likelihoods and convert it into a Pandas `Series`.
###Code
scores = []
for start in range(0, len(cdc19_prm) - 7):
print(str(start), cdc19_prm[start:start+8], likelihood_ratio(cdc19_prm[start:start+8]))
scores.append(likelihood_ratio(cdc19_prm[start:start+8]))
scores
###Output
0 TATGATGC 0.08402037003121614
1 ATGATGCT 1.5243007104870045e-05
2 TGATGCTA 7.856685684214901e-07
3 GATGCTAG 0.0005434620620826783
4 ATGCTAGG 8.394749243658516e-05
5 TGCTAGGT 0.00040021996619650475
6 GCTAGGTA 0.00014760985830600325
7 CTAGGTAC 4.1651000215254796e-07
8 TAGGTACC 0.001710047300323067
9 AGGTACCT 7.60088191631401e-08
10 GGTACCTT 1.8383645356791617e-05
11 GTACCTTT 7.1095678908228174e-09
12 TACCTTTA 0.0010731331632946833
13 ACCTTTAG 1.5610378954175922e-05
14 CCTTTAGT 0.0032948800433311625
15 CTTTAGTG 0.1916779419022246
16 TTTAGTGT 0.0008727613759418125
17 TTAGTGTC 0.00033714539424310264
18 TAGTGTCT 6.502912112563353e-07
19 AGTGTCTT 2.0787467771005432e-05
20 GTGTCTTC 3.569503832491865e-07
21 TGTCTTCC 5.870894461335304e-06
22 GTCTTCCT 1.6952310264786767e-06
23 TCTTCCTA 2.4552142763171573e-05
24 CTTCCTAA 1.3827684211247231e-05
25 TTCCTAAA 0.010316576568886387
26 TCCTAAAA 0.019587871586508113
27 CCTAAAAA 1.1593876079163083
28 CTAAAAAA 0.2728674834631414
29 TAAAAAAA 10.816325127584454
30 AAAAAAAA 0.5931533134481795
31 AAAAAAAA 0.5931533134481795
32 AAAAAAAA 0.5931533134481795
33 AAAAAAAA 0.5931533134481795
34 AAAAAAAA 0.5931533134481795
35 AAAAAAAA 0.5931533134481795
36 AAAAAAAG 0.9801838492811008
37 AAAAAAGG 0.32271807168919886
38 AAAAAGGC 0.008723663365821753
39 AAAAGGCT 1.6335098688582658e-06
40 AAAGGCTC 4.618124342853626e-07
41 AAGGCTCG 8.538449699044403e-08
42 AGGCTCGC 5.8769349542515984e-08
43 GGCTCGCC 3.494660525632878e-08
44 GCTCGCCA 5.184481351834513e-09
45 CTCGCCAT 8.068532940612602e-07
46 TCGCCATC 4.3910563018749164e-07
47 CGCCATCA 3.256225564713397e-07
48 GCCATCAA 0.0001839171974220322
49 CCATCAAA 6.5500788998123935e-06
50 CATCAAAA 1.1528558467449483
51 ATCAAAAC 0.12774136459071334
52 TCAAAACG 0.0018133355954903436
53 CAAAACGA 0.015925776482590748
54 AAAACGAT 7.442130445540292e-05
55 AAACGATA 9.046018698468444e-06
56 AACGATAT 0.0003101586477232918
57 ACGATATT 0.00018142057602749402
58 CGATATTC 1.6175353747023177e-05
59 GATATTCG 0.03234554435608385
60 ATATTCGT 3.107896820845402e-06
61 TATTCGTT 0.0060781251116620374
62 ATTCGTTG 2.77132827323192e-06
63 TTCGTTGG 0.0006530313327232485
64 TCGTTGGC 3.563323943585197e-05
65 CGTTGGCT 8.364462750233774e-07
66 GTTGGCTT 3.7285139878563274e-06
67 TTGGCTTT 5.219919582998856e-07
68 TGGCTTTT 8.844863737859177e-07
69 GGCTTTTT 6.1619217373752235e-06
70 GCTTTTTT 4.2916610340956486e-05
71 CTTTTTTT 0.002342416280545108
72 TTTTTTTT 0.009469342410714267
73 TTTTTTTC 0.020448502574220435
74 TTTTTTCT 0.0003799175341658442
75 TTTTTCTG 0.06693852338080648
76 TTTTCTGA 0.0003496830169059324
77 TTTCTGAA 0.02045908851170494
78 TTCTGAAT 0.00057133626017267
79 TCTGAATT 0.015410781881917697
80 CTGAATTA 0.0035421499936925545
81 TGAATTAT 0.00028746639514684884
82 GAATTATA 0.02343557385399689
83 AATTATAA 0.09415912543764504
84 ATTATAAA 2.572015510307696
85 TTATAAAT 0.020993999768842225
86 TATAAATA 554.7610701647853
87 ATAAATAC 0.0005520540528692657
88 TAAATACT 0.023467460046603706
89 AAATACTC 0.00024110829936030786
90 AATACTCT 3.128698337530953e-05
91 ATACTCTT 2.7180172913486677e-07
92 TACTCTTT 5.943055664736215e-05
93 ACTCTTTG 9.391723592619285e-06
94 CTCTTTGG 0.0005115412106332113
95 TCTTTGGT 0.0012641992740075355
96 CTTTGGTA 0.0007035376908556549
97 TTTGGTAA 0.00033539489363450725
98 TTGGTAAC 0.007945320747292884
99 TGGTAACT 7.66683074470556e-05
100 GGTAACTT 0.003030929400598772
101 GTAACTTT 1.5469043685032228e-06
102 TAACTTTT 5.212753015176824e-05
103 AACTTTTC 0.0005262479935387388
104 ACTTTTCA 4.332392933470155e-06
105 CTTTTCAT 0.003991895414127533
106 TTTTCATT 0.006530963325422371
107 TTTCATTT 0.0010989012547377645
108 TTCATTTC 0.011207214393309588
109 TCATTTCC 3.9661153694354077e-07
110 CATTTCCA 0.003942044521778511
111 ATTTCCAA 5.4302030397967824e-05
112 TTTCCAAG 0.004763815566455206
113 TTCCAAGA 0.007489344340368922
114 TCCAAGAA 0.02128992311961189
115 CCAAGAAC 3.530280378694432e-05
116 CAAGAACC 0.0007622114584165181
117 AAGAACCT 6.277345067469619e-05
118 AGAACCTC 2.3221446766473617e-07
119 GAACCTCT 2.7969514462638084e-09
120 AACCTCTT 1.1958183311824309e-05
121 ACCTCTTT 1.5063656821308432e-08
122 CCTCTTTT 5.58748112472287e-06
123 CTCTTTTT 0.00011210722128643957
124 TCTTTTTT 0.00042916610340956485
125 CTTTTTTC 0.0050583137946755815
126 TTTTTTCC 0.0008204101550485334
127 TTTTTCCA 0.0016251934500009048
128 TTTTCCAG 0.001636324460499849
129 TTTCCAGT 0.00020686376274838125
130 TTCCAGTT 0.0002574290341015602
131 TCCAGTTA 1.4432963923253654e-05
132 CCAGTTAT 6.536436935413506e-07
133 CAGTTATA 0.07093967681150998
134 AGTTATAT 0.0002845583781906328
135 GTTATATC 1.2076395610134654
136 TTATATCA 4.099738837347602e-05
137 TATATCAT 1.812064955122101
138 ATATCATG 2.896539523976798e-05
|
TASK 3/number_of_clusters_prediction.ipynb | ###Markdown
Task 3 : Prediction Using Unsupervised ML
* The task is to use the given ***iris*** dataset, predict the **optimum number of clusters**, and represent it visually.

1. Importing Libraries
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot
###Output
_____no_output_____
###Markdown
2. Exploring and Analysing the data
###Code
# loading dataset
data = pd.read_csv('Iris.csv')
data.head(10)
# shape of dataset(rows and columns)
data.shape
# to get to know the names of columns
data.columns
# descriptive statistics of dataset
data.describe()
# more info on dataset
data.info()
# to get to know unique species of iris flower dataset
data.Species
data.Species.unique()
###Output
_____no_output_____
###Markdown
3. K-means Algo for Clusters
###Code
X = data.iloc[:,[1,2,3,4]].values
# importing kmeans
from sklearn.cluster import KMeans
# defining elbow method for estimating number of clusters
def elbowmethod(num_of_clusters,inertias):
pyplot.plot(num_of_clusters,inertias)
pyplot.title('Elbow Method')
pyplot.xlabel('Number of Clusters')
pyplot.ylabel('Inertias')
pyplot.show()
# list
inertias = []
clusters = range(1,11)
for i in clusters:
kmeans = KMeans(n_clusters=i,init='k-means++',max_iter=100,n_init=10,random_state=42)
kmeans.fit(X)
inertias.append(kmeans.inertia_)
elbowmethod(clusters,inertias)
###Output
_____no_output_____
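###Markdown
To make the elbow reading in the observation below a little more concrete, here is an optional sketch (reusing the `inertias` list computed above) that prints how much the inertia drops with each additional cluster; the drop should shrink sharply after the elbow:
```python
drops = [inertias[i - 1] - inertias[i] for i in range(1, len(inertias))]
for k, drop in zip(range(2, 11), drops):
    print("k =", k, "inertia drop =", round(drop, 2))
```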
###Markdown
Observation
* The optimum number of clusters is where the elbow occurs.
* From the above plot, we can observe that the optimum number of clusters to choose is **3**.
###Code
kmeans = KMeans(n_clusters=3,init='k-means++',max_iter=300,n_init=10,random_state=42)
y_kmeans = kmeans.fit_predict(X)
kmeans.cluster_centers_
###Output
_____no_output_____
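###Markdown
As an optional sanity check (a sketch, not part of the original task), the three predicted clusters can be cross-tabulated against the known `Species` labels to see how well they line up:
```python
pd.crosstab(data['Species'], y_kmeans)
```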
###Markdown
4. Visualizing the Clusters
###Code
pyplot.figure(figsize=(12,10))
# visualizing the clusters
pyplot.scatter(X[y_kmeans == 0,0],X[y_kmeans == 0,1],s = 100,c='red',label='Iris-setosa')
pyplot.scatter(X[y_kmeans == 1,0],X[y_kmeans == 1,1],s = 100,c='blue',label='Iris-versicolour')
pyplot.scatter(X[y_kmeans == 2,0],X[y_kmeans == 2,1],s = 100,c='green',label = 'Iris-virginica')
# plotting the centroids of the clusters
pyplot.scatter(kmeans.cluster_centers_[:,0],kmeans.cluster_centers_[:,1],s=100,c='yellow',label='centroids')
pyplot.legend()
pyplot.show()
###Output
_____no_output_____ |
language_model/training_on_twitterclasses-2.5m.ipynb | ###Markdown
TRANSFER LEARNING HERE
###Code
def test_model_tl(generator,
train_sentences,
devLabels,
number_of_tests,
number_of_epochs,
filename_to_log,
filename_to_save_weigths,
batch_size,
train_file:'filepath for training',
f1_measure:'binary/macro etc',
pos_label:'only if binary f1',
load_model_weights=False,
model_weights_file:'give filepath as str'=None,
tokenize=True,
nb_sequence_length = nb_sequence_length,
nb_embedding_dims= nb_embedding_dims,
check_for_generator=None,
):
f = open(filename_to_log,"w")
total_f1=0
total_prec=0
total_acc=0
total_recall=0
for test_number in range(number_of_tests):
print("Test %d/%d" %(test_number+1, number_of_tests))
model = compile_model(500)
# transfer learning
if load_model_weights and model_weights_file:
model.load_weights(model_weights_file)
print("removing top layer")
model.layers.pop()
output = Dense(2, activation = 'softmax')(model.layers[-1].output)
final_model = Model(inputs=model.input, outputs=[output])
final_model.compile(loss='categorical_crossentropy', optimizer='nadam', metrics=['accuracy'])
# for layer in final_model.layers:
# print(layer.name)
samples_per_epoch = len(train_sentences)
epochs = number_of_epochs
batch_size = batch_size
steps_per_epoch = math.ceil(samples_per_epoch / batch_size)
# checkpoint = ModelCheckpoint(filename_to_save_weigths, monitor='val_acc',save_best_only = True,
# save_weights_only = True)
max_f1=0
max_p=0
max_r=0
max_a=0
for epoch in range(epochs):
print("Epoch: %d" %(epoch+1))
final_model.fit_generator(
generator(filename = train_file, batch_size = batch_size, check = check_for_generator,
labels2Idx= labels2Idx,tokenize= tokenize),
steps_per_epoch= steps_per_epoch, epochs=1,
# validation_data = generator(filename ='/home/jindal/notebooks/twitter_data/twitter_classes_500k_dev.csv',
# batch_size = batch_size, check = check_for_generator,
# labels2Idx = labels2Idx, tokenize = tokenize),
# validation_steps = math.ceil(len(dev_labels) / batch_size),
# callbacks = [checkpoint]
)
testset_features = np.zeros((len(dev_sentences), nb_sequence_length, nb_embedding_dims))
for i in range(len(dev_sentences)):
testset_features[i] = process_features(dev_sentences[i], nb_sequence_length, nb_embedding_dims)
results = final_model.predict(testset_features)
# idx2Label = {0 : "OTHER", 1 : "OFFENSIVE"}
predLabels = results.argmax(axis=-1)
devLabels = devLabels
f1 = f1_score(devLabels, predLabels, average=f1_measure, pos_label=pos_label) # offensive is the major class. So other is minor
r = recall_score(devLabels, predLabels, average=f1_measure, pos_label=pos_label)
p = precision_score(devLabels, predLabels, average=f1_measure, pos_label=pos_label)
a = accuracy_score(devLabels, predLabels)
if max_f1 < f1:
print("model saved. F1 is %f" %(f1))
final_model.save(filename_to_save_weigths)
max_f1 = f1
max_p = p
max_r = r
max_a = a
text = "prec: "+ str(p)+" rec: "+str(r) +" f1: "+str(f1) +" acc: "+str(a)+" \n"
print("Test-Data: Prec: %.3f, Rec: %.3f, F1: %.3f, Acc: %.3f" % (p, r, f1, a))
to_write= "prec: "+ str(max_p)+" rec: "+str(max_r) +" f1: "+str(max_f1) +" acc: "+str(max_a)+" \n"
print(to_write)
f.write(to_write)
total_f1+=max_f1
total_prec+=max_p
total_acc+=max_a
total_recall+=max_r
print("*****************************************************************************")
final_text = "avg_prec: " +str(total_prec/number_of_tests)+" total_rec: "+str(total_recall/number_of_tests) +" total_f1: "+str(total_f1/number_of_tests) +" total_acc: "+str(total_acc/number_of_tests)+" \n"
print(final_text)
f.write(final_text)
f.close()
n_labels =2
train_sentences, train_labels, dev_sentences, dev_labels, labels2Idx = train_dev_sentences(filetrain='/home/gwiedemann/notebooks/OffLang/sample_train.txt',
filedev='/home/gwiedemann/notebooks/OffLang/sample_dev.txt', check=3)
print(dev_sentences[0])
print(dev_labels[:20])
generator = sequential_generator
train_sentences = train_sentences
devLabels = dev_labels
number_of_tests = 5
number_of_epochs = 50
twitterclasses_tl_log = '/home/jindal/notebooks/jindal/NER/language_model/results_tl_twitterclasses.txt'
twitterclasses_tl_save_weigths='/home/jindal/notebooks/jindal/NER/language_model/classification_model_tl_twitterclasses.h5'
batch_size=32
twitterclasses_tl_train_file='/home/gwiedemann/notebooks/OffLang/sample_train.txt'
f1_measure='binary'
pos_label=1
load_model_weights=True
model_weights_file = '/home/jindal/notebooks/jindal/NER/language_model/model_pretrained_twitterclasses.h5'
nb_sequence_length = nb_sequence_length
nb_embedding_dims= nb_embedding_dims
check_for_generator=3
test_model_tl(generator=generator,
train_sentences=train_sentences,
devLabels=devLabels,
number_of_tests= number_of_tests,
number_of_epochs=number_of_epochs,
filename_to_log=twitterclasses_tl_log,
filename_to_save_weigths=twitterclasses_tl_save_weigths,
batch_size=batch_size,
train_file=twitterclasses_tl_train_file,
f1_measure=f1_measure,
pos_label=pos_label,
load_model_weights=load_model_weights,
model_weights_file = model_weights_file,
nb_sequence_length=nb_sequence_length,
nb_embedding_dims=nb_embedding_dims,
check_for_generator= check_for_generator)
###Output
Test 1/5
removing top layer
Epoch: 1
Epoch 1/1
132/132 [==============================] - 34s 258ms/step - loss: 0.6235 - acc: 0.6664
model saved. F1 is 0.527273
Test-Data: Prec: 0.671, Rec: 0.434, F1: 0.527, Acc: 0.743
Epoch: 2
Epoch 1/1
132/132 [==============================] - 31s 231ms/step - loss: 0.5196 - acc: 0.7431
model saved. F1 is 0.625954
Test-Data: Prec: 0.638, Rec: 0.614, F1: 0.626, Acc: 0.757
Epoch: 3
Epoch 1/1
132/132 [==============================] - 31s 234ms/step - loss: 0.4744 - acc: 0.7663
model saved. F1 is 0.650980
Test-Data: Prec: 0.683, Rec: 0.622, F1: 0.651, Acc: 0.780
Epoch: 4
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.4515 - acc: 0.7808
model saved. F1 is 0.677043
Test-Data: Prec: 0.704, Rec: 0.652, F1: 0.677, Acc: 0.795
Epoch: 5
Epoch 1/1
132/132 [==============================] - 30s 227ms/step - loss: 0.4209 - acc: 0.8054
model saved. F1 is 0.694656
Test-Data: Prec: 0.708, Rec: 0.682, F1: 0.695, Acc: 0.802
Epoch: 6
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.3849 - acc: 0.8151
Test-Data: Prec: 0.714, Rec: 0.655, F1: 0.684, Acc: 0.800
Epoch: 7
Epoch 1/1
132/132 [==============================] - 30s 226ms/step - loss: 0.3667 - acc: 0.8286
model saved. F1 is 0.707224
Test-Data: Prec: 0.718, Rec: 0.697, F1: 0.707, Acc: 0.809
Epoch: 8
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.3463 - acc: 0.8416
Test-Data: Prec: 0.703, Rec: 0.655, F1: 0.678, Acc: 0.795
Epoch: 9
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.3172 - acc: 0.8582
Test-Data: Prec: 0.693, Rec: 0.685, F1: 0.689, Acc: 0.796
Epoch: 10
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.2923 - acc: 0.8684
Test-Data: Prec: 0.717, Rec: 0.693, F1: 0.705, Acc: 0.808
Epoch: 11
Epoch 1/1
132/132 [==============================] - 30s 228ms/step - loss: 0.2659 - acc: 0.8807
Test-Data: Prec: 0.721, Rec: 0.599, F1: 0.654, Acc: 0.791
Epoch: 12
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.2578 - acc: 0.8916
Test-Data: Prec: 0.692, Rec: 0.682, F1: 0.687, Acc: 0.795
Epoch: 13
Epoch 1/1
132/132 [==============================] - 31s 233ms/step - loss: 0.2384 - acc: 0.8982
Test-Data: Prec: 0.718, Rec: 0.685, F1: 0.701, Acc: 0.807
Epoch: 14
Epoch 1/1
132/132 [==============================] - 31s 233ms/step - loss: 0.2221 - acc: 0.9067
model saved. F1 is 0.714556
Test-Data: Prec: 0.721, Rec: 0.708, F1: 0.715, Acc: 0.813
Epoch: 15
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.2059 - acc: 0.9086
Test-Data: Prec: 0.713, Rec: 0.708, F1: 0.711, Acc: 0.809
Epoch: 16
Epoch 1/1
132/132 [==============================] - 30s 228ms/step - loss: 0.1998 - acc: 0.9143
Test-Data: Prec: 0.743, Rec: 0.618, F1: 0.675, Acc: 0.803
Epoch: 17
Epoch 1/1
132/132 [==============================] - 30s 228ms/step - loss: 0.1785 - acc: 0.9266
Test-Data: Prec: 0.743, Rec: 0.618, F1: 0.675, Acc: 0.803
Epoch: 18
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.1785 - acc: 0.9226
Test-Data: Prec: 0.682, Rec: 0.700, F1: 0.691, Acc: 0.793
Epoch: 19
Epoch 1/1
132/132 [==============================] - 30s 230ms/step - loss: 0.1563 - acc: 0.9380
Test-Data: Prec: 0.698, Rec: 0.674, F1: 0.686, Acc: 0.796
Epoch: 20
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.1510 - acc: 0.9389
Test-Data: Prec: 0.690, Rec: 0.708, F1: 0.699, Acc: 0.798
Epoch: 21
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.1358 - acc: 0.9489
Test-Data: Prec: 0.713, Rec: 0.659, F1: 0.685, Acc: 0.800
Epoch: 22
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.1334 - acc: 0.9470
Test-Data: Prec: 0.676, Rec: 0.640, F1: 0.658, Acc: 0.780
Epoch: 23
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.1228 - acc: 0.9508
Test-Data: Prec: 0.717, Rec: 0.625, F1: 0.668, Acc: 0.795
Epoch: 24
Epoch 1/1
132/132 [==============================] - 30s 230ms/step - loss: 0.1255 - acc: 0.9512
Test-Data: Prec: 0.709, Rec: 0.622, F1: 0.663, Acc: 0.791
Epoch: 25
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.1222 - acc: 0.9543
Test-Data: Prec: 0.691, Rec: 0.663, F1: 0.677, Acc: 0.791
Epoch: 26
Epoch 1/1
132/132 [==============================] - 30s 230ms/step - loss: 0.1008 - acc: 0.9619
Test-Data: Prec: 0.706, Rec: 0.655, F1: 0.680, Acc: 0.796
Epoch: 27
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.1096 - acc: 0.9524
Test-Data: Prec: 0.657, Rec: 0.697, F1: 0.676, Acc: 0.780
Epoch: 28
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.0973 - acc: 0.9621
Test-Data: Prec: 0.683, Rec: 0.622, F1: 0.651, Acc: 0.780
Epoch: 29
Epoch 1/1
132/132 [==============================] - 30s 230ms/step - loss: 0.0870 - acc: 0.9669
Test-Data: Prec: 0.682, Rec: 0.674, F1: 0.678, Acc: 0.788
Epoch: 30
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.0838 - acc: 0.9645
Test-Data: Prec: 0.694, Rec: 0.603, F1: 0.645, Acc: 0.781
Epoch: 31
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.0948 - acc: 0.9607
Test-Data: Prec: 0.636, Rec: 0.727, F1: 0.678, Acc: 0.772
Epoch: 32
Epoch 1/1
132/132 [==============================] - 30s 230ms/step - loss: 0.0904 - acc: 0.9680
Test-Data: Prec: 0.703, Rec: 0.648, F1: 0.674, Acc: 0.793
Epoch: 33
Epoch 1/1
132/132 [==============================] - 31s 235ms/step - loss: 0.0696 - acc: 0.9751
Test-Data: Prec: 0.700, Rec: 0.622, F1: 0.659, Acc: 0.787
Epoch: 34
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.0668 - acc: 0.9749
Test-Data: Prec: 0.674, Rec: 0.704, F1: 0.689, Acc: 0.790
Epoch: 35
Epoch 1/1
132/132 [==============================] - 31s 231ms/step - loss: 0.0806 - acc: 0.9680
Test-Data: Prec: 0.636, Rec: 0.708, F1: 0.670, Acc: 0.770
Epoch: 36
Epoch 1/1
132/132 [==============================] - 31s 233ms/step - loss: 0.0717 - acc: 0.9770
Test-Data: Prec: 0.671, Rec: 0.633, F1: 0.651, Acc: 0.776
Epoch: 37
Epoch 1/1
132/132 [==============================] - 30s 231ms/step - loss: 0.0625 - acc: 0.9756
Test-Data: Prec: 0.691, Rec: 0.629, F1: 0.659, Acc: 0.785
Epoch: 38
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.0668 - acc: 0.9744
Test-Data: Prec: 0.704, Rec: 0.614, F1: 0.656, Acc: 0.787
Epoch: 39
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.0668 - acc: 0.9756
Test-Data: Prec: 0.662, Rec: 0.644, F1: 0.653, Acc: 0.774
Epoch: 40
Epoch 1/1
132/132 [==============================] - 31s 233ms/step - loss: 0.0668 - acc: 0.9751
Test-Data: Prec: 0.708, Rec: 0.625, F1: 0.664, Acc: 0.791
Epoch: 41
Epoch 1/1
132/132 [==============================] - 32s 239ms/step - loss: 0.0693 - acc: 0.9742
Test-Data: Prec: 0.694, Rec: 0.629, F1: 0.660, Acc: 0.786
Epoch: 42
Epoch 1/1
132/132 [==============================] - 31s 236ms/step - loss: 0.0638 - acc: 0.9766
Test-Data: Prec: 0.657, Rec: 0.697, F1: 0.676, Acc: 0.780
Epoch: 43
Epoch 1/1
132/132 [==============================] - 31s 233ms/step - loss: 0.0542 - acc: 0.9796
Test-Data: Prec: 0.678, Rec: 0.670, F1: 0.674, Acc: 0.786
Epoch: 44
Epoch 1/1
132/132 [==============================] - 31s 235ms/step - loss: 0.0559 - acc: 0.9794
Test-Data: Prec: 0.670, Rec: 0.670, F1: 0.670, Acc: 0.782
Epoch: 45
Epoch 1/1
132/132 [==============================] - 30s 229ms/step - loss: 0.0518 - acc: 0.9822
Test-Data: Prec: 0.670, Rec: 0.693, F1: 0.681, Acc: 0.786
Epoch: 46
Epoch 1/1
132/132 [==============================] - 31s 235ms/step - loss: 0.0599 - acc: 0.9768
Test-Data: Prec: 0.651, Rec: 0.663, F1: 0.657, Acc: 0.771
Epoch: 47
Epoch 1/1
132/132 [==============================] - 31s 237ms/step - loss: 0.0475 - acc: 0.9827
Test-Data: Prec: 0.669, Rec: 0.674, F1: 0.672, Acc: 0.782
Epoch: 48
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.0453 - acc: 0.9841
Test-Data: Prec: 0.658, Rec: 0.719, F1: 0.687, Acc: 0.783
Epoch: 49
Epoch 1/1
132/132 [==============================] - 31s 232ms/step - loss: 0.0523 - acc: 0.9818
Test-Data: Prec: 0.654, Rec: 0.659, F1: 0.657, Acc: 0.772
Epoch: 50
Epoch 1/1
132/132 [==============================] - 33s 247ms/step - loss: 0.0451 - acc: 0.9860
Test-Data: Prec: 0.665, Rec: 0.655, F1: 0.660, Acc: 0.777
prec: 0.7213740458015268 rec: 0.7078651685393258 f1: 0.7145557655954632 acc: 0.8131188118811881
*****************************************************************************
Test 2/5
removing top layer
Epoch: 1
Epoch 1/1
132/132 [==============================] - 37s 281ms/step - loss: 0.6204 - acc: 0.6714
model saved. F1 is 0.533917
Test-Data: Prec: 0.642, Rec: 0.457, F1: 0.534, Acc: 0.736
Epoch: 2
Epoch 1/1
50/132 [==========>...................] - ETA: 20s - loss: 0.5329 - acc: 0.7412 |
visualizing_numeric_data.ipynb | ###Markdown
<img src="../../media/decartes.jpg"alt="DeCART Icon" width="128" height="171">DeCART Summer SchoolforBiomedical Data Science<imgsrc="../../media/U_Health_stacked_png_red.png" alt="Utah HealthLogo" width="128" height="134"> Visualizing Numeric Data
###Code
import os
import glob
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import pandas as pd
DATADIR = os.path.join(os.path.expanduser("~"),"DATA")
###Output
_____no_output_____
###Markdown
[Matplotlib](http://matplotlib.org)Matplotlib is the aging yet competent graphing package that is part of the "scipy stack." Matplotlib focuses on 2D visualizations but it does come with a 3D visualization module that can be used interactively (but not through the notebook). NetworkX and Pandas both use matplotlib as their default drawing package for quick visualization of data. Matplotlib plots can be customized with a fairly flexible API. For the most part, the best way to learn Matplotlib is to find an example from the [gallery](http://matplotlib.org/gallery.html) The following cell is an example of a **notebook magic**. This particular magic tells the matplotlib package to draw in the notebook rather than trying to open a separate Python window to draw in.
###Code
%matplotlib inline
# tell matplotlib to draw graphs inline in the notebook rather than in a separate window
###Output
_____no_output_____
###Markdown
Using [glob](https://docs.python.org/3/library/glob.html) and [os.listdir](https://docs.python.org/3/library/os.html)It is desirable to make our code as platform independent as possible and also to be able to build file paths and list directory contents programmatically rather than hard-coding them.
###Code
HRDIR = os.path.join(DATADIR,"Numerics", "mimic2", "hr", "subjects") ## os function to interact with the operating system
BPDIR = os.path.join(DATADIR,"Numerics","mimic2","bp", "subjects")
hr_files = os.listdir(HRDIR)
bp_files = os.listdir(BPDIR)
###Output
_____no_output_____
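###Markdown
As an aside, the same listing can be built with ``glob`` (already imported above); unlike ``os.listdir`` it returns full paths and lets us filter by a pattern. A small sketch, assuming the heart rate files end in ``.txt``:
###Code
# glob returns full paths matching a pattern (assumption: the files are named *.txt)
hr_paths = glob.glob(os.path.join(HRDIR, "*.txt"))
print(len(hr_paths))
###Output
_____no_output_____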
###Markdown
list (collection) lengthPython has a built-in function [``len``](https://docs.python.org/3/library/functions.html#len) that returns the length of a list or any other collection. This is something we will use frequently.In this case, we use it to see how many heart rate files we read in.
###Code
print(len(hr_files))
print(hr_files[0])
hr = pd.read_table(os.path.join(HRDIR, hr_files[0]), header=None, names=["heart rate"])
print(hr.shape)
hr.head()
help(hr.plot)
###Output
Help on FramePlotMethods in module pandas.tools.plotting object:
class FramePlotMethods(BasePlotMethods)
| DataFrame plotting accessor and method
|
| Examples
| --------
| >>> df.plot.line()
| >>> df.plot.scatter('x', 'y')
| >>> df.plot.hexbin()
|
| These plotting methods can also be accessed by calling the accessor as a
| method with the ``kind`` argument:
| ``df.plot(kind='line')`` is equivalent to ``df.plot.line()``
|
| Method resolution order:
| FramePlotMethods
| BasePlotMethods
| pandas.core.base.PandasObject
| pandas.core.base.StringMixin
| builtins.object
|
| Methods defined here:
|
| __call__(self, x=None, y=None, kind='line', ax=None, subplots=False, sharex=None, sharey=False, layout=None, figsize=None, use_index=True, title=None, grid=None, legend=True, style=None, logx=False, logy=False, loglog=False, xticks=None, yticks=None, xlim=None, ylim=None, rot=None, fontsize=None, colormap=None, table=False, yerr=None, xerr=None, secondary_y=False, sort_columns=False, **kwds)
| Make plots of DataFrame using matplotlib / pylab.
|
| *New in version 0.17.0:* Each plot kind has a corresponding method on the
| ``DataFrame.plot`` accessor:
| ``df.plot(kind='line')`` is equivalent to
| ``df.plot.line()``.
|
| Parameters
| ----------
| data : DataFrame
| x : label or position, default None
| y : label or position, default None
| Allows plotting of one column versus another
| kind : str
| - 'line' : line plot (default)
| - 'bar' : vertical bar plot
| - 'barh' : horizontal bar plot
| - 'hist' : histogram
| - 'box' : boxplot
| - 'kde' : Kernel Density Estimation plot
| - 'density' : same as 'kde'
| - 'area' : area plot
| - 'pie' : pie plot
| - 'scatter' : scatter plot
| - 'hexbin' : hexbin plot
| ax : matplotlib axes object, default None
| subplots : boolean, default False
| Make separate subplots for each column
| sharex : boolean, default True if ax is None else False
| In case subplots=True, share x axis and set some x axis labels to
| invisible; defaults to True if ax is None otherwise False if an ax
| is passed in; Be aware, that passing in both an ax and sharex=True
| will alter all x axis labels for all axis in a figure!
| sharey : boolean, default False
| In case subplots=True, share y axis and set some y axis labels to
| invisible
| layout : tuple (optional)
| (rows, columns) for the layout of subplots
| figsize : a tuple (width, height) in inches
| use_index : boolean, default True
| Use index as ticks for x axis
| title : string
| Title to use for the plot
| grid : boolean, default None (matlab style default)
| Axis grid lines
| legend : False/True/'reverse'
| Place legend on axis subplots
| style : list or dict
| matplotlib line style per column
| logx : boolean, default False
| Use log scaling on x axis
| logy : boolean, default False
| Use log scaling on y axis
| loglog : boolean, default False
| Use log scaling on both x and y axes
| xticks : sequence
| Values to use for the xticks
| yticks : sequence
| Values to use for the yticks
| xlim : 2-tuple/list
| ylim : 2-tuple/list
| rot : int, default None
| Rotation for ticks (xticks for vertical, yticks for horizontal plots)
| fontsize : int, default None
| Font size for xticks and yticks
| colormap : str or matplotlib colormap object, default None
| Colormap to select colors from. If string, load colormap with that name
| from matplotlib.
| colorbar : boolean, optional
| If True, plot colorbar (only relevant for 'scatter' and 'hexbin' plots)
| position : float
| Specify relative alignments for bar plot layout.
| From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
| layout : tuple (optional)
| (rows, columns) for the layout of the plot
| table : boolean, Series or DataFrame, default False
| If True, draw a table using the data in the DataFrame and the data will
| be transposed to meet matplotlib's default layout.
| If a Series or DataFrame is passed, use passed data to draw a table.
| yerr : DataFrame, Series, array-like, dict and str
| See :ref:`Plotting with Error Bars <visualization.errorbars>` for
| detail.
| xerr : same types as yerr.
| stacked : boolean, default False in line and
| bar plots, and True in area plot. If True, create stacked plot.
| sort_columns : boolean, default False
| Sort column names to determine plot ordering
| secondary_y : boolean or sequence, default False
| Whether to plot on the secondary y-axis
| If a list/tuple, which columns to plot on secondary y-axis
| mark_right : boolean, default True
| When using a secondary_y axis, automatically mark the column
| labels with "(right)" in the legend
| kwds : keywords
| Options to pass to matplotlib plotting method
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| Notes
| -----
|
| - See matplotlib documentation online for more on this subject
| - If `kind` = 'bar' or 'barh', you can specify relative alignments
| for bar plot layout by `position` keyword.
| From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
| - If `kind` = 'scatter' and the argument `c` is the name of a dataframe
| column, the values of that column are used to color each point.
| - If `kind` = 'hexbin', you can control the size of the bins with the
| `gridsize` argument. By default, a histogram of the counts around each
| `(x, y)` point is computed. You can specify alternative aggregations
| by passing values to the `C` and `reduce_C_function` arguments.
| `C` specifies the value at each `(x, y)` point and `reduce_C_function`
| is a function of one argument that reduces all the values in a bin to
| a single number (e.g. `mean`, `max`, `sum`, `std`).
|
| area(self, x=None, y=None, **kwds)
| Area plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| bar(self, x=None, y=None, **kwds)
| Vertical bar plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| barh(self, x=None, y=None, **kwds)
| Horizontal bar plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| box(self, by=None, **kwds)
| Boxplot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| by : string or sequence
| Column in the DataFrame to group by.
| \*\*kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| density = kde(self, **kwds)
|
| hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwds)
| Hexbin plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| C : label or position, optional
| The value at each `(x, y)` point.
| reduce_C_function : callable, optional
| Function of one argument that reduces all the values in a bin to
| a single number (e.g. `mean`, `max`, `sum`, `std`).
| gridsize : int, optional
| Number of bins.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| hist(self, by=None, bins=10, **kwds)
| Histogram
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| by : string or sequence
| Column in the DataFrame to group by.
| bins: integer, default 10
| Number of histogram bins to be used
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| kde(self, **kwds)
| Kernel Density Estimate plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| line(self, x=None, y=None, **kwds)
| Line plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| pie(self, y=None, **kwds)
| Pie chart
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| y : label or position, optional
| Column to plot.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| scatter(self, x, y, s=None, c=None, **kwds)
| Scatter plot
|
| .. versionadded:: 0.17.0
|
| Parameters
| ----------
| x, y : label or position, optional
| Coordinates for each point.
| s : scalar or array_like, optional
| Size of each point.
| c : label or position, optional
| Color of each point.
| **kwds : optional
| Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
|
| Returns
| -------
| axes : matplotlib.AxesSubplot or np.array of them
|
| ----------------------------------------------------------------------
| Methods inherited from BasePlotMethods:
|
| __init__(self, data)
| Initialize self. See help(type(self)) for accurate signature.
|
| ----------------------------------------------------------------------
| Methods inherited from pandas.core.base.PandasObject:
|
| __dir__(self)
| Provide method name lookup and completion
| Only provide 'public' methods
|
| __sizeof__(self)
| Generates the total memory usage for a object that returns
| either a value or Series of values
|
| __unicode__(self)
| Return a string representation for a particular object.
|
| Invoked by unicode(obj) in py2 only. Yields a Unicode String in both
| py2/py3.
|
| ----------------------------------------------------------------------
| Methods inherited from pandas.core.base.StringMixin:
|
| __bytes__(self)
| Return a string representation for a particular object.
|
| Invoked by bytes(obj) in py3 only.
| Yields a bytestring in both py2/py3.
|
| __repr__(self)
| Return a string representation for a particular object.
|
| Yields Bytestring in Py2, Unicode String in py3.
|
| __str__(self)
| Return a string representation for a particular Object
|
| Invoked by str(df) in both py2/py3.
| Yields Bytestring in Py2, Unicode String in py3.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from pandas.core.base.StringMixin:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
###Markdown
[``plot()``](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html)
###Code
hr["heart rate"].plot()
###Output
_____no_output_____
###Markdown
[Matplotlib Color](http://matplotlib.org/api/colors_api.html)Unless we tell it otherwise, Pandas will use default colors for the plot. However, we can specify the colors ourselves. Matplotlib has a small set of named colors that can be specified by name or a single letter, for example ``'r'`` (``'red'``) or ``'k'`` (``'black'``). Matplotlib also recognizes [HTML named colors](https://www.w3schools.com/colors/colors_names.asp).
###Code
hr["heart rate"].plot(color='y')
hr["heart rate"].plot(color="LightSalmon") ## HDML colors
###Output
_____no_output_____
###Markdown
[RGB$\alpha$ Color](https://en.wikipedia.org/wiki/RGBA_color_space)There are many ways to represent color numerically. One of the simplest models of color is the **R**ed, **G**reen, **B**lue (**RGB**) model. This color model uses numbers to represent the portion of red, green, and blue light that forms the color. Matplotlib uses a tuple of three numbers between zero and one to represent a color: 1 is the maximum amount of that color and zero is the minimum. (1,1,1) is white, (0,0,0) is black, and (1,0,0) is red, for example. We can add a fourth number to represent the transparency of the color: 1 is fully opaque and 0 is fully transparent. ExamplePlot the heart rate with the color red and an $\alpha=0.25$.
###Code
hr["heart rate"].plot(color=(1,0,0, 1)) ## a tuple of red, green, blue, and the transparancy, this is red color
fig1, ax1 = plt.subplots(1)
hr["heart rate"].plot(color=(1,0,0, 0.75), ax=ax1)
ax1.set_xlabel("Time Point")
ax1.set_ylabel("Heart Rate (beats/minute)")
ax1.set_ylim((40, 150))
###Output
_____no_output_____
###Markdown
Visualizing Blood PressuresRead in the blood pressure data for patient ``1000.txt`` and plot the systolic values.
###Code
bp = pd.read_table(os.path.join(BPDIR, '1000.txt'), header=None, names=["systolic", "diastolic"], na_values=['None'])
bp["systolic"].plot()
###Output
_____no_output_____
###Markdown
What went wrong? How can our data not be numeric?In a terminal open the data file with ``vim`` and go to line 753. What do you find? Dealing with Missing Values
###Code
bp = pd.read_table(os.path.join(BPDIR, bp_files[0]), header=None, names=["systolic", "diastolic"], na_values=["None"])# tell python what you defined as missing value
bp["systolic"].plot()
###Output
_____no_output_____
###Markdown
We can plot the whole DataFrame
###Code
bp.plot()
###Output
_____no_output_____
###Markdown
Using an axisWe can create an axis and pass this as a keyword argument to our plotting methods. We can then use the axis object to configure the graph.
###Code
fig2, ax2 = plt.subplots(1)
bp["systolic"].plot(color=(1,0,0, 0.25), ax=ax2)
bp["diastolic"].plot(color=(0,1,0, 0.25), ax=ax2) ## use one axis to plot all the columns
###Output
_____no_output_____
###Markdown
ExerciseCreate a [histogram](https://pandas.pydata.org/pandas-docs/stable/visualization.htmlvisualization-hist) of the systolic and diastolic blood pressures. Create an axis. Use the axis to set the label for the x axis and to draw a legend. Pick interesting colors.Compare your plot to drawing a histogram of the DataFrame.
###Code
fig3, ax3 = plt.subplots(1)
bp['systolic'].plot.hist(color=(0,1,0, 0.75), bins = 20, ax = ax3, label='systolic')
bp['diastolic'].plot.hist(color=(1,0,0, 0.75), bins = 20, ax = ax3, label='diastolic')
ax3.set_xlabel("Blood pressure")
ax3.set_title('BP HIST')
ax3.set_ylabel("Count")
ax3.set_ylim((0, 420))
ax3.legend()
fig3.savefig("blood pressure")
help(ax3.set_xlabel)
help(bp.plot.hist)
###Output
Help on method hist in module pandas.tools.plotting:
hist(by=None, bins=10, **kwds) method of pandas.tools.plotting.FramePlotMethods instance
Histogram
.. versionadded:: 0.17.0
Parameters
----------
by : string or sequence
Column in the DataFrame to group by.
bins: integer, default 10
Number of histogram bins to be used
**kwds : optional
Keyword arguments to pass on to :py:meth:`pandas.DataFrame.plot`.
Returns
-------
axes : matplotlib.AxesSubplot or np.array of them
###Markdown
[Seaborn](https://stanford.edu/~mwaskom/software/seaborn/)We have been using matplotlib for our visualization. This is the workhorse of Python visualization although there are important alternatives, including [Bokeh](http://bokeh.pydata.org/en/latest/), [cairo](http://cairographics.org/), [datashader](http://datashader.readthedocs.io/en/latest/), [graphite (time series data)](http://graphite.readthedocs.io/en/latest/). [Seaborn](https://stanford.edu/~mwaskom/software/seaborn/) is a statistical visualization program that is built on top of Matplotlib. Since statistical exploration of our data is a primary task in data science, seaborn can be a very useful tool in this domain. Seaborn seems to work best with [Pandas](http://pandas.pydata.org/), but can be used directly with numpy arrays. Here we provide an example using our blood pressure data.
###Code
import seaborn as sns
sns.jointplot(bp["systolic"], bp["diastolic"], kind='kde')
###Output
_____no_output_____
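###Markdown
Seaborn also has convenient one-liners for surveying a whole DataFrame. As a small sketch, ``pairplot`` draws histograms on the diagonal and scatter plots off the diagonal; ``dropna()`` simply avoids the rows with missing readings.
###Code
sns.pairplot(bp.dropna())
###Output
_____no_output_____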
###Markdown
List Comprehension
###Code
all_bp = [pd.read_table(os.path.join(BPDIR, f),
header=None,
names=["systolic", "diastolic"],
na_values=["None"]) for f in bp_files]
all_bp[0].head()
len(all_bp)
summary_data = [(bp["systolic"].mean(), bp["diastolic"].mean(), bp["systolic"].std(), bp["diastolic"].std())
for bp in all_bp]
len(summary_data)
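# (sketch) the per-patient summary tuples can be collected into a labeled DataFrame
# for further inspection or plotting
summary_df = pd.DataFrame(summary_data,
                          columns=["systolic_mean", "diastolic_mean",
                                   "systolic_std", "diastolic_std"])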
###Output
_____no_output_____ |
notebooks/wigner_function/py-tftb.ipynb | ###Markdown
Testing the py-tftb toolbox for calculating the Wigner distribution function (WDF) of a signal.
###Code
import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as scisig
from tftb.generators import anapulse
from tftb.processing import WignerVilleDistribution
N = 250
sig = np.zeros(N)
x = np.linspace(-1,1,N)
sig = scisig.sawtooth(20*x)
sig[:int(N/4)]=0
sig[int(3*N/4):]=0
plt.plot(x,sig)
plt.ylabel('sig')
plt.xlabel('x')
plt.title('Signal')
plt.show()
wvd = WignerVilleDistribution(sig)
wvd.run()
wvd.plot(kind="contour", scale="log")
plt.contour(np.abs(np.fft.fftshift(wvd.tfr,axes=0)))
plt.show()
sig_ft = np.fft.fft(sig)
wvd = WignerVilleDistribution(sig_ft)
wvd.run()
wvd.plot(kind="contour", scale="log")
plt.contour(np.abs(np.fft.fftshift(wvd.tfr,axes=0)))
plt.show()
###Output
_____no_output_____ |
archive/2018/demo2.ipynb | ###Markdown
SVMs
###Code
X, Y = load_dataset_up_down(1000)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap=plt.cm.Spectral);
clf = sklearn.svm.LinearSVC()
clf.fit(X, Y);
w = clf.coef_[0]
a = -w[0] / w[1]
b = - clf.intercept_[0] / w[1]
xx = np.linspace(min(X[:,0]) - 1, max(X[:,0]) + 1)
yy = a * xx + b
plt.plot(xx, yy, 'k-', label="non weighted div")
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap=plt.cm.Spectral);
print (("y = %.4f * x + %.4f") % (a, b))
clf.coef_
###Output
_____no_output_____
###Markdown
In the perceptron demo, we added some extra points and the accuracy decreased... what would happen now?
###Code
X, Y = load_dataset_up_down(800)
utils.plot_decision_boundary(lambda x: clf.predict(x), X.T, Y.T)
predictions = clf.predict(X)
print ('Accuracy: %d ' % ((np.sum(Y == predictions))/float(Y.size)*100))
###Output
Accuracy: 100
###Markdown
Now, what if the training data has some noise?
###Code
X, Y = load_dataset_up_down(80, 1)
some_noise = np.random.binomial(1, .03, Y.shape[0])
Y = np.logical_xor(Y, some_noise).astype(np.int8)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap=plt.cm.Spectral);
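# (sketch) refit a linear SVM on the noisy labels and measure training accuracy;
# with a few flipped labels the classes are no longer perfectly separable,
# so this should come out below the 100% seen earlier
clf_noisy = sklearn.svm.LinearSVC()
clf_noisy.fit(X, Y)
noisy_accuracy = np.sum(Y == clf_noisy.predict(X)) / float(Y.size) * 100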
###Output
_____no_output_____ |
ShannoProt/runCAIR/Calc_Proteomes_CAIR.ipynb | ###Markdown
Calculating Proteomes CAIR* This module reads CSV-format files, groups entries by organism, and calculates the CAIR for each UniProt Organism ID. Two output files will be generated:> A) "All species CAIRs.csv" containing all Organism IDs, their corresponding CAIRs, and overall residue frequencies.> B) "Complete proteome CAIRs.csv" containing non-redundant proteomes and their corresponding CAIRs.
###Code
import pandas as pd
from numpy import log2
def entry_to_species(sprot_input_file='Entries sprot.csv', trembl_input_file='Entries trembl.csv',
outfile='All species residues.csv', chunksize=8000000, merge='True'): # refer to runCAIR
    out = pd.DataFrame() # creating an empty dataframe
data = pd.read_csv(trembl_input_file, chunksize=chunksize) # reading data in chunks (to avoid RAM insufficiency)
for chunk in data:
chunk = chunk.drop(columns=['CAIR']) # removing CAIRs for each protein (not needed anymore)
group = chunk.groupby('Organism_ID').sum() # grouping by organisms for each chunk
out = out.append(group) # filling out the dataframe
if merge == 'True': # whether to merge the Swiss-Prot file to the prepared TrEMBL file or not
data2 = pd.read_csv(sprot_input_file).drop(columns=['CAIR']) # reading the Swiss-Prot file
data2 = data2.groupby('Organism_ID').sum() # grouping by organisms
out = pd.concat([out, data2]) # adding the Swiss-Prot to the TrEMBL
out = out.groupby('Organism_ID').sum() # grouping by organisms
out.to_csv(outfile) # writing the CSV file
def species_cair(input_file='All species residues.csv', outfile='Complete proteome CAIRs.csv', proteomes_file="proteomes-redundant_no.tab"): # refer to runCAIR
data = pd.DataFrame(pd.read_csv(input_file, dtype=float)) # reading the input
length = data.Len # defining lengths for frequency calculations
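    # the expression below is the Shannon entropy of each organism's residue
    # composition, H = -sum(p_i * log2(p_i)) over the 22 residue symbols
    # (20 standard amino acids plus U and O), normalised by log2(22) so that
    # CAIR falls between 0 and 1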
cair = pd.DataFrame((log2((data["A"] / length) ** (-data["A"] / length)) + log2(
(data["C"] / length) ** (-data["C"] / length)) + log2(
(data["D"] / length) ** (-data["D"] / length)) + log2(
(data["E"] / length) ** (-data["E"] / length)) + log2((data["F"] / length) ** (-data["F"] / length)) + log2(
(data["G"] / length) ** (-data["G"] / length)) + log2(
(data["H"] / length) ** (-data["H"] / length)) + log2((data["I"] / length) ** (-data["I"] / length)) + log2(
(data["K"] / length) ** (-data["K"] / length)) + log2(
(data["L"] / length) ** (-data["L"] / length)) + log2((data["M"] / length) ** (-data["M"] / length)) + log2(
(data["N"] / length) ** (-data["N"] / length)) + log2(
(data["O"] / length) ** (-data["O"] / length)) + log2((data["P"] / length) ** (-data["P"] / length)) + log2(
(data["Q"] / length) ** (-data["Q"] / length)) + log2(
(data["R"] / length) ** (-data["R"] / length)) + log2((data["S"] / length) ** (-data["S"] / length)) + log2(
(data["T"] / length) ** (-data["T"] / length)) + log2(
(data["U"] / length) ** (-data["U"] / length)) + log2((data["V"] / length) ** (-data["V"] / length)) + log2(
(data["W"] / length) ** (-data["W"] / length)) + log2((data["Y"] / length) ** (-data["Y"] / length))) / log2(22)) # calculating CAIRs
cair.columns = ["CAIR"] # assigning the column name
Organism_ID = pd.DataFrame(data.Organism_ID)
Organism_CAIR = Organism_ID.join(cair)
Proteomes = pd.read_csv(proteomes_file, sep="\t") # reading the proteomes list file
Sp_CAIR = Organism_CAIR.merge(Proteomes, left_on="Organism_ID", right_on="Organism ID").drop_duplicates('Organism_ID',
keep='last') # merging taxonomy data with species CAIRs
Sp_CAIR['First_hierarchy'] = Sp_CAIR['Taxonomic lineage'].str.split(', ', expand=True)[0] # extracting first hierarchy of organisms (superkingdoms)
Sp_CAIR['Taxonomic lineage'] = Sp_CAIR['Taxonomic lineage'].str.split(', ', expand=True)[1] # extracting second hierarchy of organisms (phyla in most cases)
Sp_CAIR = Sp_CAIR[Sp_CAIR.First_hierarchy != 'Viruses'].drop(columns=['First_hierarchy', 'Proteome ID', 'Organism ID']).rename(
columns={'Taxonomic lineage': 'Second_hierarchy'}).reset_index(drop=True) # filtering viruses out and dropping unnecessary columns and renaming
code = [] # creating an empty list for analysis codes
    for row in Sp_CAIR['Second_hierarchy']: # assigning codes according to the tree of life
if row == 'Proteobacteria':
code.append('1')
elif row == 'Candidatus Hydrogenedentes':
code.append('2')
elif row == 'Candidatus Abyssubacteria':
code.append('3')
elif row == 'Spirochaetes':
code.append('4')
elif row == 'Deferribacteres':
code.append('5')
elif row == 'Chrysiogenetes':
code.append('6')
elif row == 'Acidobacteria':
code.append('7')
elif row == 'Thermodesulfobacteria':
code.append('8')
elif row == 'Nitrospirae':
code.append('9')
elif row == 'Nitrospinae/Tectomicrobia group':
code.append('10')
elif row == 'Elusimicrobia':
code.append('11')
elif row == 'Candidatus Omnitrophica':
code.append('12')
elif row == 'Planctomycetes':
code.append('13')
elif row == 'Chlamydiae':
code.append('14')
elif row == 'Lentisphaerae':
code.append('15')
elif row == 'Candidatus Aureabacteria':
code.append('16')
elif row == 'Kiritimatiellaeota':
code.append('17')
elif row == 'Verrucomicrobia':
code.append('18')
elif row == 'Candidatus Aegiribacteria':
code.append('19')
elif row == 'Candidatus Latescibacteria':
code.append('20')
elif row == 'Gemmatimonadetes':
code.append('21')
elif row == 'Candidatus Fermentibacteria':
code.append('22')
elif row == 'Candidatus Marinimicrobia':
code.append('23')
elif row == 'candidate division LCP-89':
code.append('24')
elif row == 'Calditrichaeota':
code.append('25')
elif row == 'Rhodothermaeota':
code.append('26')
elif row == 'Balneolaeota':
code.append('27')
elif row == 'Ignavibacteriae':
code.append('28')
elif row == 'Candidatus Kryptonia':
code.append('29')
elif row == 'Chlorobi':
code.append('30')
elif row == 'Bacteroidetes':
code.append('31')
elif row == 'Candidatus Kapabacteria':
code.append('32')
elif row == 'Candidatus Cloacimonetes':
code.append('33')
elif row == 'Fibrobacteres':
code.append('34')
elif row == 'Synergistetes':
code.append('35')
elif row == 'Fusobacteria':
code.append('36')
elif row == 'Deinococcus-Thermus':
code.append('37')
elif row == 'Coprothermobacterota':
code.append('38')
elif row == 'Thermotogae':
code.append('39')
elif row == 'Aquificae':
code.append('40')
elif row == 'Caldiserica/Cryosericota group':
code.append('41')
elif row == 'Dictyoglomi':
code.append('42')
elif row == 'Firmicutes':
code.append('43')
elif row == 'Tenericutes':
code.append('44')
elif row == 'Candidatus Eremiobacteraeota':
code.append('45')
elif row == 'Abditibacteriota':
code.append('46')
elif row == 'Armatimonadetes':
code.append('47')
elif row == 'Thermobaculum':
code.append('48')
elif row == 'Chloroflexi':
code.append('49')
elif row == 'Candidatus Dormibacteraeota':
code.append('50')
elif row == 'Actinobacteria':
code.append('51')
elif row == 'Cyanobacteria':
code.append('52')
elif row == 'Candidatus Melainabacteria':
code.append('53')
elif row == 'Candidatus Margulisbacteria':
code.append('54')
elif row == 'Candidatus Saganbacteria':
code.append('55')
elif row == 'Candidatus Saccharibacteria' or\
row == 'candidate division SR1' or\
row == 'Candidatus Dependentiae' or\
row == 'Candidatus Gracilibacteria' or\
row == 'Candidatus Atribacteria' or\
row == 'Candidatus Parcubacteria' or\
row == 'Candidatus Bipolaricaulota' or\
row == 'Candidatus Poribacteria' or\
row == 'unclassified Parcubacteria group' or\
row == 'Candidatus Aminicenantes' or\
row == 'Candidatus Coatesbacteria' or\
row == 'Candidatus Eisenbacteria' or\
row == 'Candidatus Aerophobetes' or\
row == 'Candidatus Riflebacteria' or\
row == 'Candidatus Wolfebacteria' or\
row == 'Candidatus Nomurabacteria' or\
row == 'Candidatus Roizmanbacteria' or\
row == 'Candidatus Uhrbacteria' or\
row == 'Candidatus Yanofskybacteria' or\
row == 'Candidatus Levybacteria' or\
row == 'Candidatus Colwellbacteria' or\
row == 'candidate division WWE3' or\
row == 'Candidatus Sungbacteria' or\
row == 'Candidatus Woykebacteria' or\
row == 'Candidatus Komeilibacteria' or\
row == 'Candidatus Falkowbacteria' or\
row == 'Candidatus Moranbacteria' or\
row == 'Candidatus Curtissbacteria' or\
row == 'Candidatus Giovannonibacteria' or\
row == 'Candidatus Dojkabacteria' or\
row == 'Candidatus Harrisonbacteria' or\
row == 'Candidatus Magasanikbacteria' or\
row == 'Candidatus Beckwithbacteria' or\
row == 'Candidatus Fraserbacteria' or\
row == 'Candidatus Kaiserbacteria' or\
row == 'Candidatus Niyogibacteria' or\
row == 'Candidatus Yonathbacteria' or\
row == 'Candidatus Azambacteria' or\
row == 'Candidatus Portnoybacteria' or\
row == 'Candidatus Berkelbacteria' or\
row == 'Candidatus Doudnabacteria' or\
row == 'Candidatus Zambryskibacteria' or\
row == 'Candidatus Staskawiczbacteria' or\
row == 'Candidatus Woesebacteria' or\
row == 'Candidatus Lloydbacteria' or\
row == 'Candidatus Nealsonbacteria' or\
row == 'Candidatus Microgenomates' or\
row == 'Candidatus Taylorbacteria' or\
row == 'Candidatus Vogelbacteria' or\
row == 'Candidatus Buchananbacteria' or\
row == 'Candidatus Gottesmanbacteria' or\
row == 'Candidatus Jorgensenbacteria' or\
row == 'Candidatus Rokubacteria' or\
row == 'Candidatus Peregrinibacteria' or\
row == 'Candidatus Dadabacteria' or\
row == 'Candidatus Kerfeldbacteria' or\
row == 'Candidatus Desantisbacteria' or\
row == 'Candidatus Ryanbacteria' or\
row == 'Candidatus Pacebacteria' or\
row == 'Candidatus Daviesbacteria' or\
row == 'Candidatus Amesbacteria' or\
row == 'Candidatus Tagabacteria' or\
row == 'Candidatus Shapirobacteria' or\
row == 'candidate division CPR1' or\
row == 'Candidatus Adlerbacteria' or\
row == 'Candidatus Spechtbacteria' or\
row == 'candidate division Kazan-3B-28' or\
row == 'Candidatus Terrybacteria' or\
row == 'Candidatus Wildermuthbacteria' or\
row == 'candidate division NC10' or\
row == 'Candidatus Campbellbacteria' or\
row == 'Candidatus Collierbacteria' or\
row == 'Candidatus Wirthbacteria' or\
row == 'Candidatus Brennerbacteria' or\
row == 'Candidatus Kuenenbacteria' or\
row == 'Candidatus Veblenbacteria' or\
row == 'candidate division KSB1' or\
row == 'Candidatus Glassbacteria' or\
row == 'Candidatus Firestonebacteria' or\
row == 'Candidatus Delongbacteria' or\
row == 'Candidatus Lindowbacteria' or\
row == 'candidate division TA06' or\
row == 'Candidatus Liptonbacteria' or\
row == 'Candidatus Jacksonbacteria' or\
row == 'Candidatus Blackburnbacteria' or\
row == 'Candidatus Abawacabacteria' or\
row == 'Candidatus Wallbacteria' or\
row == 'Candidatus Schekmanbacteria' or\
row == 'Candidatus Hydrothermae' or\
row == 'candidate division WOR-3' or\
row == 'Candidatus Sumerlaeota' or\
row == 'candidate division KSB3' or\
row == 'Candidatus Andersenbacteria' or\
row == 'candidate division WS5' or\
row == 'Candidatus Edwardsbacteria' or\
row == 'Candidatus Chisholmbacteria' or\
row == 'Candidatus Fischerbacteria' or\
row == 'candidate division KD3-62' or\
row == 'candidate division CPR3' or\
row == 'Candidatus Handelsmanbacteria' or\
row == 'candidate division CPR2' or\
row == 'Candidatus Cerribacteria' or\
row == 'Candidatus Raymondbacteria' or\
row == 'Candidatus Goldbacteria':
code.append('56') # Assigning one code for all Candidate Phyla Radiation(CPR)
elif row == 'Candidatus Hydrothermarchaeota':
code.append('57')
elif row == 'Candidatus Altiarchaeota':
code.append('58')
elif row == 'Candidatus Micrarchaeota':
code.append('59')
elif row == 'Candidatus Diapherotrites':
code.append('60')
elif row == 'Candidatus Aenigmarchaeota':
code.append('61')
elif row == 'Candidatus Huberarchaea':
code.append('62')
elif row == 'Nanoarchaeota':
code.append('63')
elif row == 'Candidatus Parvarchaeota':
code.append('64')
elif row == 'Candidatus Pacearchaeota':
code.append('65')
elif row == 'Candidatus Woesearchaeota':
code.append('66')
elif row == 'Euryarchaeota':
code.append('67')
elif row == 'Candidatus Bathyarchaeota':
code.append('68')
elif row == 'Thaumarchaeota':
code.append('69')
elif row == 'Candidatus Geothermarchaeota':
code.append('70')
elif row == 'Candidatus Korarchaeota':
code.append('71')
elif row == 'Candidatus Nezhaarchaeota':
code.append('72')
elif row == 'Candidatus Verstraetearchaeota':
code.append('73')
elif row == 'Candidatus Marsarchaeota':
code.append('74')
elif row == 'Crenarchaeota':
code.append('75')
elif row == 'Asgard group':
code.append('76')
elif row == 'Euglenozoa':
code.append('77')
elif row == 'Heterolobosea':
code.append('78')
elif row == 'Metamonada':
code.append('79')
elif row == 'Apusozoa':
code.append('80')
elif row == 'Rotosphaerida':
code.append('81')
elif row == 'Fungi':
code.append('82')
elif row == 'Ichthyosporea':
code.append('83')
elif row == 'Filasterea':
code.append('84')
elif row == 'Choanoflagellata':
code.append('85')
elif row == 'Metazoa':
code.append('86')
elif row == 'Amoebozoa':
code.append('87')
elif row == 'Haptista':
code.append('88')
elif row == 'Cryptophyceae':
code.append('89')
elif row == 'Sar':
code.append('90')
elif row == 'Viridiplantae':
code.append('91')
elif row == 'Rhodophyta':
code.append('92')
else:
code.append("0") # the following six phyla were not matched, thus excluded:
# Haloplasmatales, environmental samples, Candidatus Vecturithrix, Vampirococcus, Natronospirillum, unclassified DPANN group
Sp_CAIR['code'] = code # inserting the code column
Sp_CAIR.to_csv(outfile, index=False) # writing the CSV output file
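# usage sketch (assumes the default input files named in the signatures above are
# present in the working directory): run the two steps back to back
entry_to_species(sprot_input_file='Entries sprot.csv',
                 trembl_input_file='Entries trembl.csv',
                 outfile='All species residues.csv')
species_cair(input_file='All species residues.csv',
             outfile='Complete proteome CAIRs.csv',
             proteomes_file='proteomes-redundant_no.tab')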
###Output
_____no_output_____ |
04_Pipeline.ipynb | ###Markdown
Pipeline> Steps in feature extraction
###Code
#export
from car_speech.fname_processing import load_fnames
import pathlib
import os
import string
import numpy as np
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Shuffle data
###Code
#exports
def shuffle_data(filenames):
return tf.random.shuffle(filenames)
###Output
_____no_output_____
###Markdown
Train/Validation/Test Split using 80:10:10 ratio
###Code
#exports
def train_test_split(filenames):
TRAIN_PORTION = 0.8
VAL_PORTION = 0.1
TEST_PORTION = 0.1
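    # note: TEST_PORTION is implicit; the test split simply takes whatever
    # remains (about 10%) after the train and validation slices below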
num_samples = len(filenames)
train_end = int(num_samples*TRAIN_PORTION)
val_end = train_end + int(num_samples*VAL_PORTION)
train_files = filenames[:train_end]
val_files = filenames[train_end: val_end]
test_files = filenames[val_end:]
print('Training set size:', len(train_files))
print('Validation set size:', len(val_files))
print('Test set size:', len(test_files))
return [train_files, val_files, test_files]
###Output
_____no_output_____
###Markdown
Get waveforms
###Code
#exports
def decode_audio(audio_binary):
# audio --> tensor
audio, _ = tf.audio.decode_wav(audio_binary)
return tf.squeeze(audio, axis=-1)
def get_label(file_path):
parts = tf.strings.split(file_path, os.path.sep)
# be careful with data type here
# this function must return a tensor
label_tensor = tf.strings.substr(parts[-1], pos=9, len=1)
return label_tensor
def get_waveform_and_label(file_path):
label = get_label(file_path)
audio_binary = tf.io.read_file(file_path)
waveform = decode_audio(audio_binary)
return waveform, label
###Output
_____no_output_____
###Markdown
Get spectrograms
###Code
#exports
def get_spectrogram(waveform):
diff = [16000] - tf.shape(waveform)
waveform = tf.cast(waveform, tf.float32)
if diff >= 0:
# Padding for files with less than 16000 samples
zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
# Concatenate audio with padding so that all audio clips will be of the same length
equal_length = tf.concat([waveform, zero_padding], 0)
else:
# Cut the tail if audio > 1 second
equal_length = tf.slice(waveform, [0], [16000])
spectrogram = tf.signal.stft(
equal_length, frame_length=255, frame_step=128)
spectrogram = tf.abs(spectrogram)
return spectrogram
def get_spectrogram_and_label_id_digits(audio, label):
spectrogram = get_spectrogram(audio)
spectrogram = tf.expand_dims(spectrogram, -1)
label_strings = np.array([str(num) for num in range(0,10)])
label_id = tf.argmax(int(label == label_strings))
return spectrogram, label_id
def get_spectrogram_and_label_id_letters(audio, label):
spectrogram = get_spectrogram(audio)
spectrogram = tf.expand_dims(spectrogram, -1)
label_strings = np.array(list(string.ascii_uppercase))
label_id = tf.argmax(int(label == label_strings))
return spectrogram, label_id
def get_spectrogram_and_label_id_mixed(audio, label):
spectrogram = get_spectrogram(audio)
spectrogram = tf.expand_dims(spectrogram, -1)
label_strings = np.array([str(num) for num in range(0,10)] + list(string.ascii_uppercase))
label_id = tf.argmax(int(label == label_strings))
return spectrogram, label_id
###Output
_____no_output_____
###Markdown
Combined pipeline
###Code
#exports
def preprocess_dataset(files, dataset_type):
AUTOTUNE = tf.data.experimental.AUTOTUNE
files_ds = tf.data.Dataset.from_tensor_slices(files)
waveform_ds = files_ds.map(get_waveform_and_label, num_parallel_calls=AUTOTUNE)
if dataset_type == 'digits':
spectrogram_ds = waveform_ds.map(
get_spectrogram_and_label_id_digits, num_parallel_calls=AUTOTUNE)
elif dataset_type == 'letters':
spectrogram_ds = waveform_ds.map(
get_spectrogram_and_label_id_letters, num_parallel_calls=AUTOTUNE)
elif dataset_type == 'mixed':
spectrogram_ds = waveform_ds.map(
get_spectrogram_and_label_id_mixed, num_parallel_calls=AUTOTUNE)
return spectrogram_ds
###Output
_____no_output_____
###Markdown
Example of using the pipeline on digits data
###Code
# have to set type first
DATASET_TYPE = 'digits'
# load classified filenames
filenames = load_fnames('noise_levels/digit_noise_levels/35U.data')
print('number of files:', len(filenames))
# shuffle
filenames = shuffle_data(filenames)
# Train/Validation/Test Split
split_result = train_test_split(filenames)
train_files = split_result[0]
val_files = split_result[1]
test_files = split_result[2]
# Process data using the combined pipeline
train_ds = preprocess_dataset(train_files, DATASET_TYPE)
val_ds = preprocess_dataset(val_files, DATASET_TYPE)
test_ds = preprocess_dataset(test_files, DATASET_TYPE)
print("Completed")
###Output
number of files: 1590
Training set size: 1272
Validation set size: 159
Test set size: 159
Completed
|
Other notebooks/Follow along Hands-on ML book by A Geron/Chapter 3 classification problem by A Geron.ipynb | ###Markdown
This notebook is based on Chapter 3 of Aurelien Geron's book Hands-On Machine Learning with Scikit-Learn, Keras and TensorFlow (2nd edition). Book link: https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/ Geron's GitHub link for chapter 3: https://github.com/ageron/handson-ml2/blob/master/03_classification.ipynb Intro Using the MNIST dataset to learn about classification problems in ML Setup
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
MNIST
###Code
# download dataset
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version =1)
mnist.keys
mnist.keys()
# see size of data
X, y = mnist["data"], mnist["target"]
X.shape, y.shape
###Output
_____no_output_____
###Markdown
This means that there are 70,000 images and each image has 784 features. Each image has 784 features because it is 28 x 28 pixels (28 * 28 = 784).
###Code
# let's have a look at one
import matplotlib as mpl
import matplotlib.pyplot as plt
some_digit = X[0]
some_digit_image = some_digit.reshape(28, 28)
plt.imshow(some_digit_image, cmap=mpl.cm.binary)
plt.axis('off')
save_fig("some_digit_plot")
plt.show()
# let's check the label for the first data point
y[0]
# Note that the label is a string
# lets change them to numbers (integers)
y = y.astype(np.uint8)
###Output
_____no_output_____
###Markdown
plot more digits
###Code
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = mpl.cm.binary,
interpolation="nearest")
plt.axis("off")
def plot_digits(instances, images_per_row = 10,**options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap=mpl.cm.binary, interpolation = 'nearest')
plt.axis('off')
# plot the first 100 data
plt.figure(figsize=(9,9))
example_images = X[:100]
plot_digits(example_images, images_per_row = 10)
plt.show()
###Output
_____no_output_____
###Markdown
Split to train test dataset
###Code
# MNIST has already been split like this: the first 60k images are the training set, the last 10k are the test set
# The training set has also been shuffled (making sure all digits are well mixed)
# some learning algorithms are sensitive to the order of training instances; they perform poorly if they get many similar instances in a row
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
###Output
_____no_output_____
###Markdown
Start from a Binary Classifier Start with a simple classifier: we will create a "number 5 detector" that distinguishes 5 from not-5.
###Code
y_train_5 = (y_train == 5) # create true for all 5's and false otherwise
y_test_5 = (y_test == 5)
###Output
_____no_output_____
###Markdown
SGD classifier Stochastic Gradient Descent classifier.
###Code
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)
sgd_clf.fit(X_train, y_train_5) # note that the target variable is now whether the digit is a 5 or not
sgd_clf.predict([some_digit]) # note: we created some_digit earlier as X[0] which is 5
###Output
_____no_output_____
###Markdown
Accuracy via cross-validation
###Code
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
alternatively:
###Code
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = y_train_5[train_index]
X_test_fold = X_train[test_index]
y_test_fold = y_train_5[test_index]
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
###Output
0.95035
0.96035
0.9604
###Markdown
Either way, roughly 96% accuracy on the predictions
###Code
# let's build a dumb classifier that classifies every single image as not-5
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
Hmm, about 90% accuracy: if you always guess that an image is not a 5, you will be right roughly 90% of the time. This means accuracy is not the preferred performance measure for classifiers, especially with skewed datasets. Confusion matrix First we need some predictions, but we must not touch the test set. One option is to use cross_val_predict.
###Code
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
# now the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
# each cell compares the actual class (rows) against the predicted class (columns)
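# For the binary case scikit-learn lays the matrix out as:
# [[TN, FP],
#  [FN, TP]]   (rows = actual class, columns = predicted class)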
# now if we pretend we have a perfect model by using y_train_5 itself as the predictions
# we should see 0's in the off-diagonal cells (no false positives or false negatives)
y_train_perfect_conditions = y_train_5
confusion_matrix(y_train_5, y_train_perfect_conditions)
###Output
_____no_output_____
###Markdown
Precision & Recall
###Code
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred), recall_score(y_train_5, y_train_pred)
###Output
_____no_output_____
###Markdown
This suggests the detector isn't as good as it looked before: when it claims an image is a 5, it is correct only about 84% of the time, and it detects only about 65% of the 5s. F1 score The F1 score combines precision and recall (it is their harmonic mean).
###Code
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
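# Sanity check (illustrative, not in the original notebook): F1 is the harmonic mean of
# precision and recall, so it can be recomputed from the two scores above.
p, r = precision_score(y_train_5, y_train_pred), recall_score(y_train_5, y_train_pred)
2 * p * r / (p + r)  # should match f1_score(y_train_5, y_train_pred)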
# to see the threshold of the classifier decision function
y_scores = sgd_clf.decision_function([some_digit]) # note that some_digit is X[0] which is a 5
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
# the default SGD classifier uses a threshold of 0
# let's say we raise this to 8000
threshold = 8000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
###Output
_____no_output_____
###Markdown
When the threshold is 0, the classifier detects the 5 (which really is a 5). If we raise the threshold, it no longer detects it (the prediction becomes False). How do we decide on a threshold?
###Code
# let's do cross_val_predict again, but this time return the decision scores instead of the predictions
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,
method="decision_function")
# we can use precision_recall_curve to compute precision and recall for all possible thresholds
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
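# Note: precision_recall_curve returns len(thresholds) + 1 precision/recall values; the final
# pair (precision 1, recall 0) has no corresponding threshold, which is why the plotting helper
# below drops it with precisions[:-1] and recalls[:-1]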
# plot the precision recall curve
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.legend(loc="center right", fontsize=16) # Not shown in the book
plt.xlabel("Threshold", fontsize=16) # Not shown
plt.grid(True) # Not shown
plt.axis([-50000, 50000, 0, 1]) # Not shown
recall_90_precision = recalls[np.argmax(precisions >= 0.90)]
threshold_90_precision = thresholds[np.argmax(precisions >= 0.90)]
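# np.argmax on a boolean array returns the index of the first True value,
# i.e. the lowest threshold at which precision first reaches 90%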
plt.figure(figsize=(8, 4)) # Not shown
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.plot([threshold_90_precision, threshold_90_precision], [0., 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [0.9, 0.9], "r:") # Not shown
plt.plot([-50000, threshold_90_precision], [recall_90_precision, recall_90_precision], "r:")# Not shown
plt.plot([threshold_90_precision], [0.9], "ro") # Not shown
plt.plot([threshold_90_precision], [recall_90_precision], "ro") # Not shown
save_fig("precision_recall_vs_threshold_plot") # Not shown
plt.show()
recall_90_precision, threshold_90_precision
# alternatively, plot precision directly against recall
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.grid(True)
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
plt.plot([0.4368, 0.4368], [0., 0.9], "r:")
plt.plot([0.0, 0.4368], [0.9, 0.9], "r:")
plt.plot([0.4368], [0.9], "ro")
save_fig("precision_vs_recall_plot")
###Output
Saving figure precision_vs_recall_plot
###Markdown
We can see here that precision starts to drop sharply after about 70% recall. We want to select a precision/recall trade-off before this drop, say at around 60% recall.
###Code
# so if we want 90% precision, use the precision/recall vs threshold plot
recall_90_precision, threshold_90_precision
# Ex: make predictions on the training set using the threshold that gives 90% precision
y_train_pred_90 = (y_scores >= threshold_90_precision)
# and the precision and recall score are:
precision_score(y_train_5, y_train_pred_90), recall_score(y_train_5, y_train_pred_90)
###Output
_____no_output_____
###Markdown
Great, we now have a 90% precision classifier. ROC curve
###Code
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
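# fpr is the false positive rate, FP / (FP + TN); tpr is the true positive rate, i.e. recall, TP / (TP + FN)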
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr,tpr,linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--') # dashed diagonal
plt.axis([0, 1, 0, 1]) # Not shown in the book
plt.xlabel('False Positive Rate (Fall-Out)', fontsize=16) # Not shown
plt.ylabel('True Positive Rate (Recall)', fontsize=16) # Not shown
plt.grid(True)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr,tpr)
plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:") # Not shown
plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:") # Not shown
plt.plot([4.837e-3], [0.4368], "ro") # Not shown
save_fig("roc_curve_plot") # Not shown
plt.show()
###Output
Saving figure roc_curve_plot
###Markdown
The dashed diagonal represents a purely random classifier. One way to compare classifiers is to measure the area under the curve (AUC): a perfect classifier has an AUC of 1, a purely random one has an AUC of 0.5.
###Code
# compute the AUC of the curve above
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
###Output
_____no_output_____
###Markdown
When to use the Precision/Recall curve vs the ROC curve Use the PR curve when the positive class is rare or when you care more about false positives than false negatives; otherwise use the ROC curve. In this example the ROC AUC score is high, so the classifier looks good, but that is largely because there are very few positives (5s) compared to negatives (non-5s). In contrast, the PR curve makes it clear that there is room for improvement (the curve should be closer to the top-right corner). Random Forest Classifier Repeat for a random forest classifier. Note: RandomForestClassifier does not have a decision_function() -> it uses the predict_proba() method instead
###Code
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,
method="predict_proba")
# ROC needs label and score. Instead of score, we will give class probabilities.
# Lets use positive class probability as the score
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
###Output
_____no_output_____
###Markdown
ROC curve & AUC
###Code
#Plot the ROC For RF and SGD
plt.figure(figsize=(8, 6))
plt.plot(fpr,tpr,'b:',linewidth =2,label='SGD')
plot_roc_curve(fpr_forest,tpr_forest,label='Random Forest')
plt.plot([4.837e-3, 4.837e-3], [0., 0.4368], "r:")
plt.plot([0.0, 4.837e-3], [0.4368, 0.4368], "r:")
plt.plot([4.837e-3], [0.4368], "ro")
plt.plot([4.837e-3, 4.837e-3], [0., 0.9487], "r:")
plt.plot([4.837e-3], [0.9487], "ro")
plt.grid(True)
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
# Comparing the AUC score between SGD and RF
roc_auc_score_sgd = roc_auc_score(y_train_5, y_scores)
roc_auc_score_rf = roc_auc_score(y_train_5, y_scores_forest)
roc_auc_score_sgd, roc_auc_score_rf
###Output
_____no_output_____
###Markdown
Precision and recall
###Code
# for SGD earlier
prec_score_sgd = precision_score(y_train_5, y_train_pred)
recall_score_sgd = recall_score(y_train_5, y_train_pred)
prec_score_sgd, recall_score_sgd
# for RF
# First make cross val predict
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
prec_score_rf = precision_score(y_train_5, y_train_pred_forest)
recall_score_rf = recall_score(y_train_5, y_train_pred_forest)
prec_score_rf, recall_score_rf
###Output
_____no_output_____
###Markdown
It seems that the random forest performs better than SGD, both in terms of AUC score and precision/recall. Reflection Why did we use the ROC curve for the random forest? Earlier (4.1.7) we discussed that the PR curve is preferable when positives are rare. Multiclass classification Start with SVC
###Code
from sklearn.svm import SVC
svm_clf = SVC(gamma="auto", random_state=42)
svm_clf.fit(X_train[:1000], y_train[:1000]) # y_train, not y_train_5, because we now train on all ten digit classes
svm_clf.predict([some_digit])
# Call decision function to see the score for each class
some_digit_scores = svm_clf.decision_function([some_digit])
some_digit_scores
# highest score is for digit?
np.argmax(some_digit_scores)
# note that this is the position (index) of the class in classes_, not the digit 5 itself
# The highest score (9.297) is at index 5
svm_clf.classes_
svm_clf.classes_[5] #this is to find what digit refers to the 5th location
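# More generally (illustrative), map the argmax of the scores back to the class label
# without assuming the classes happen to be the sorted digits 0-9:
svm_clf.classes_[np.argmax(some_digit_scores)]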
# this shows how many classifiers it trains. Should show many classifiers
#len(ovr_clf.estimators_)
###Output
_____no_output_____
###Markdown
Forcing SVC through an OvR classifier Normally, SVC uses an OvO strategy, but you can force it to use OvR.
###Code
from sklearn.multiclass import OneVsRestClassifier
ovr_clf = OneVsRestClassifier(SVC(gamma="auto", random_state=42))
ovr_clf.fit(X_train[:1000], y_train[:1000])
ovr_clf.predict([some_digit])
# this shows how many classifiers it trains. Should show only 10 classifiers
len(ovr_clf.estimators_)
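# For comparison, the default OvO strategy would train one binary classifier per pair of
# classes, i.e. N * (N - 1) / 2 = 10 * 9 / 2 = 45 classifiers, versus 10 for OvR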
###Output
_____no_output_____
###Markdown
With SGD classifier
###Code
# note using subset of the data to speed up
sgd_clf.fit(X_train[:10000], y_train[:10000])
sgd_clf.predict([some_digit])
sgd_clf.decision_function([some_digit])
###Output
_____no_output_____
###Markdown
SGD is quite confident: almost all scores are strongly negative, and the highest score is at index 5 (around 82661).
###Code
# evaluate with cross validation
cross_val_score(sgd_clf, X_train[:1000], y_train[:1000], cv=3, scoring='accuracy')
###Output
_____no_output_____
###Markdown
Applying standard scaler
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled[:1000], y_train[:1000], cv=3, scoring="accuracy")
###Output
_____no_output_____
###Markdown
Standardisation slightly improves the results. Error analysis Checking the confusion matrix
###Code
sgd_clf = SGDClassifier(max_iter=1000, tol=1e-3, random_state=42)
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled[:5000], y_train[:5000], cv=3)
conf_mx = confusion_matrix(y_train[:5000], y_train_pred[:5000])
conf_mx
# You can even plot the matrix
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
# divide each value in the confusion matrix by the number of images in the corresponding class so you can compare error rates
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
# Fill the diagonal with zeros to keep only the errors
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the plot above, the column for class 8 is quite bright, which means many images get misclassified as 8; 3s and 5s often get confused too. Analysing this gives insight into how to improve your classifier. One good way to analyse individual errors is to plot examples; let's plot 3s and 5s.
###Code
cl_a, cl_b = 3, 5
X_aa = X_train[:5000][(y_train[:5000] == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[:5000][(y_train[:5000] == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[:5000][(y_train[:5000] == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[:5000][(y_train[:5000] == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
###Output
Saving figure error_analysis_digits_plot
###Markdown
Top left and bottom right are the images that were classified correctly. Top right and bottom left are the images that were classified incorrectly, and for some of them it is obvious the model is wrong. One reason is that we used a simple SGDClassifier, which is a linear model: all it does is assign a weight per class to each pixel, and when it sees a new image it sums up the weighted pixel intensities to get a score for each class. Since 3s and 5s differ only by a few pixels, this model easily confuses them. Multi-label classification Multi-label is when the classifier outputs multiple classes for each instance. In a face recognition program, if we train for persons A, B and C and then feed the model an image showing only A and C, we want it to output [1, 0, 1].
###Code
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7) #is the digit large?
y_train_odd = (y_train % 2 == 1) # is it odd? i.e. remainder is 1 when divided by 2
y_multilabel = np.c_[y_train_large, y_train_odd]
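# Each row now holds two boolean labels, e.g. for a 5: [False, True] (not >= 7, but odd)
y_multilabel[:5]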
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit]) # remember, some_digit is X[0], which is the digit 5
# this should output [[False, True]] (5 is not large, but it is odd)
###Output
_____no_output_____
###Markdown
Evaluate with F1
###Code
y_train_knn_pred = cross_val_predict(knn_clf, X_train_scaled[:5000], y_multilabel[:5000], cv=3)
f1_score(y_multilabel[:5000], y_train_knn_pred, average ='macro')
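# average='macro' gives every label equal weight; average='weighted' would instead weight
# each label's F1 by its support (useful when some labels are much rarer than others)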
###Output
_____no_output_____
###Markdown
Multi-output classification Each label can itself be multiclass. The example here is cleaning a noisy digit image: the classifier's output is multi-label (one label per pixel), and each label can take many values (pixel intensities).
###Code
noise = np.random.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = np.random.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 0
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
###Output
Saving figure cleaned_digit_example_plot
###Markdown
Attempt to predict on the test set
###Code
X_test.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Assume SGD is our best model
###Code
y_final_predictions = sgd_clf.predict(X_test)
###Output
_____no_output_____
###Markdown
Compute confusion matrix
###Code
conf_mx_final = confusion_matrix(y_test, y_final_predictions)  # confusion_matrix expects (y_true, y_pred)
conf_mx_final
# plot
#normalised, fill the diagonal with 0 and plot
row_sums = conf_mx_final.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx_final / row_sums
# Fill the diagonal with zeros to keep only the errors
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.xlabel("Predicted", fontsize=16)
plt.ylabel("Actual", fontsize=16)
plt.show()
# Compute the precision and recall scores (for multiclass targets an averaging strategy is required)
#precision_score(y_test, y_final_predictions, average='macro'), recall_score(y_test, y_final_predictions, average='macro')
###Output
_____no_output_____ |
model_layer/Models/RNN/RNN_Model.ipynb | ###Markdown
0 Loading Data
###Code
seed_everything(xfinai_config.seed)
future_index= 'ic'
# try a bigger learning rate
params = {
"epochs": 10,
"batch_size": 64,
"hidden_size": 128,
"fc_size": 128,
"seq_length": 32,
"weight_decay": 0.03699014272607559,
"num_layers": 2,
"learning_rate": 0.006264079267383521,
"dropout_prob": 0.0049846528896436
}
# Load data
train_data = pd.read_pickle(f"{xfinai_config.featured_data_path}/{future_index}_train_data.pkl")
val_data = pd.read_pickle(f"{xfinai_config.featured_data_path}/{future_index}_val_data.pkl")
test_data = pd.read_pickle(f"{xfinai_config.featured_data_path}/{future_index}_test_data.pkl")
class RNN(nn.Module):
def __init__(self, input_size, hidden_size, num_layers, fc_size, output_size, batch_size, dropout_prob, device):
super().__init__()
self.name = 'RNN'
self.input_size = input_size
self.num_layers = num_layers
self.hidden_size = hidden_size
self.fc_size = fc_size
self.device = device
self.state_dim = (self.num_layers, batch_size, self.hidden_size)
self.rnn = nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size, num_layers=self.num_layers, batch_first=True, dropout=dropout_prob)
self.hidden = torch.zeros(self.state_dim).to(self.device)
self.dropout = nn.Dropout(dropout_prob)
self.fc1 = nn.Linear(hidden_size, self.fc_size)
self.fc2 = nn.Linear(self.fc_size, output_size)
def forward(self, x):
x, h = self.rnn(x, self.hidden)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x[:, -1, :]
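# Quick shape check (illustrative sketch with dummy sizes, not part of the original notebook):
# with batch_first=True the RNN expects input of shape (batch, seq_length, input_size), and the
# model returns only the last time step, i.e. a tensor of shape (batch, output_size).
_demo = RNN(input_size=8, hidden_size=16, num_layers=2, fc_size=16, output_size=1,
            batch_size=4, dropout_prob=0.0, device=torch.device('cpu'))
_demo(torch.randn(4, 32, 8)).shape  # -> torch.Size([4, 1])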
###Output
_____no_output_____
###Markdown
1 Define Evaluation & Model-Saving Helpers
###Code
def eval_model(model, dataloader, data_set_name, future_name, params):
with torch.no_grad():
y_real_list = np.array([])
y_pred_list = np.array([])
for idx, (x_batch, y_batch) in enumerate(dataloader):
# Convert to Tensors
x_batch = x_batch.float().to(model.device)
y_batch = y_batch.float().to(model.device)
y_pred = model(x_batch)
y_real_list = np.append(y_real_list, y_batch.squeeze(1).cpu().numpy())
y_pred_list = np.append(y_pred_list, y_pred.squeeze(1).cpu().numpy())
plt.figure(figsize=[15, 3], dpi=100)
plt.plot(y_real_list, label=f'{data_set_name}_real')
plt.plot(y_pred_list, label=f'{data_set_name}_pred')
plt.legend()
plt.title(f"Inference On {data_set_name} Set - {model.name} {future_name.upper()}")
plt.xlabel('Time')
plt.ylabel('Return')
plt.subplots_adjust(bottom=0.15)
result_dir = path_wrapper.wrap_path(f"{xfinai_config.inference_result_path}/{future_name}/{model.name}")
plt.savefig(f"{result_dir}/{data_set_name}.png")
def save_model(model, future_name):
dir_path = path_wrapper.wrap_path(f"{xfinai_config.model_save_path}/{future_name}")
save_path = f"{dir_path}/{model.name}.pth"
glog.info(f"Starting save model state, save_path: {save_path}")
torch.save(model.state_dict(), save_path)
###Output
_____no_output_____
###Markdown
2 Create Training Func
###Code
def train(train_data_loader, model, criterion, optimizer, params):
glog.info(f"Start Training Model")
# Set to train mode
model.train()
running_train_loss = 0.0
# Begin training
for idx, (x_batch, y_batch) in enumerate(train_data_loader):
optimizer.zero_grad()
# Convert to Tensors
x_batch = x_batch.float().to(model.device)
y_batch = y_batch.float().to(model.device)
# Make prediction
y_pred = model(x_batch)
# Calculate loss
loss = criterion(y_pred, y_batch)
loss.backward()
running_train_loss += loss.item()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
optimizer.step()
glog.info(f"End Training Model")
train_loss_average = running_train_loss / len(train_data_loader)
return model, train_loss_average
def validate(val_data_loader, model, criterion, params):
# Set to eval mode
model.eval()
running_val_loss = 0.0
with torch.no_grad():
for idx, (x_batch, y_batch) in enumerate(val_data_loader):
# Convert to Tensors
x_batch = x_batch.float().to(model.device)
y_batch = y_batch.float().to(model.device)
y_pred = model(x_batch)
val_loss = criterion(y_pred, y_batch)
running_val_loss += val_loss.item()
val_loss_average = running_val_loss / len(val_data_loader)
return val_loss_average
###Output
_____no_output_____
###Markdown
3 Run Training
###Code
# Transfer to accelerator
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
# Create dataset & data loader
train_dataset = FuturesDatasetRecurrent(data=train_data, label=xfinai_config.label, seq_length=params['seq_length'])
val_dataset = FuturesDatasetRecurrent(data=val_data, label=xfinai_config.label, seq_length=params['seq_length'])
test_dataset = FuturesDatasetRecurrent(data=test_data, label=xfinai_config.label, seq_length=params['seq_length'])
train_loader = DataLoader(dataset=train_dataset, **xfinai_config.data_loader_config,
batch_size=params['batch_size'])
val_loader = DataLoader(dataset=val_dataset, **xfinai_config.data_loader_config,
batch_size=params['batch_size'])
test_loader = DataLoader(dataset=test_dataset, **xfinai_config.data_loader_config,
batch_size=params['batch_size'])
# create model instance
model = RNN(
input_size=len(train_dataset.features_list),
hidden_size=params['hidden_size'],
num_layers=params['num_layers'],
fc_size=params['fc_size'],
output_size=xfinai_config.model_config['rnn']['output_size'],
batch_size=params['batch_size'],
dropout_prob=params['dropout_prob'],
device=device
).to(device)
criterion = nn.MSELoss()
optimizer = optim.AdamW(model.parameters(),
lr=params['learning_rate'],
weight_decay=params['weight_decay'])
epochs = params['epochs']
print(model)
train_losses = []
val_losses = []
# train the model
for epoch in range(epochs):
trained_model, train_score = train(train_data_loader=train_loader, model=model, criterion=criterion,
optimizer=optimizer,
params=params)
val_score = validate(val_data_loader=val_loader, model=trained_model, criterion=criterion, params=params)
# report intermediate result
print(f"Epoch :{epoch} train_score {train_score} val_score {val_score}")
train_losses.append(train_score)
val_losses.append(val_score)
# # save the model
# save_model(trained_model, future_index)
# plot losses
plotter.plot_loss(train_losses, epochs, 'Train_Loss', trained_model.name, future_index)
plotter.plot_loss(val_losses, epochs, 'Val_Loss', trained_model.name, future_index)
# eval model on 3 datasets
for dataloader, data_set_name in zip([train_loader, val_loader, test_loader],
['Train', 'Val', 'Test']):
eval_model(model=trained_model, dataloader=dataloader, data_set_name=data_set_name,
future_name=future_index, params=params)
save_model(model,future_index)
###Output
I0326 17:07:47.895898 19208 2272598857.py:32] Starting save model state, save_path: D:/projects/XFinAI/model_layer/trained_models/ic/RNN.pth
|
Alphabet_CNN.ipynb | ###Markdown
Setup
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Get Data
###Code
data = pd.read_csv('/content/drive/MyDrive/Projects/Alphabet Classification/A_Z Handwritten Data.csv')
data.info()
class_names = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M", "N", "O",
               "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
sample = data.iloc[10].values
sample_label = sample[0]
sample = sample[1:].reshape(28,28)
plt.imshow(sample, cmap="binary")
plt.axis('off')
plt.title(class_names[sample_label])
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Separate labels
labels = data['0'].values.astype('uint8')
X = data.drop('0', axis=1)
X.shape
labels
# Reshape data
X = np.array(X).reshape(372450, 28, 28, 1)
X.shape
# split between train and validation sets
X_train, X_valid, y_train, y_valid = train_test_split(X, labels, test_size = 0.3, random_state = 42)
# scale images to [0, 1] range
X_train = X_train.astype("float32") / 255
X_valid = X_valid.astype("float32") / 255
# Check image shape
print("x_train shape:", X_train.shape)
print(X_train.shape[0], "train samples")
print(X_valid.shape[0], "test samples")
###Output
x_train shape: (260715, 28, 28, 1)
260715 train samples
111735 test samples
###Markdown
Image Augmentation
###Code
# Slight image augmentation
data_augmentation = tf.keras.Sequential([
layers.experimental.preprocessing.RandomRotation(0.05),
layers.experimental.preprocessing.RandomContrast(0.10),
layers.experimental.preprocessing.RandomZoom(height_factor=(0.05,0.05))
])
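# Optional sanity check (illustrative, not part of the original notebook): run one training image
# through the augmentation pipeline and plot it. training=True keeps the random layers active
# outside of fit().
augmented = data_augmentation(X_train[:1], training=True)
plt.imshow(augmented[0].numpy().squeeze(), cmap="binary")
plt.axis("off")
plt.show()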
###Output
_____no_output_____
###Markdown
Modeling
###Code
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint
keras.backend.clear_session()
tf.random.set_seed(12)
np.random.seed(12)
num_classes = 26
epochs = 15
# Callbacks
checkpoint_filepath = '/tmp/checkpoint'
model_checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_filepath,
save_weights_only=True,
monitor='val_accuracy',
mode='max',
save_best_only=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
patience=3, min_lr=0.001)
es_callback = tf.keras.callbacks.EarlyStopping(
monitor="val_accuracy",
min_delta=0.0002,
patience=4,
verbose=0,
mode="auto",
baseline=None,
restore_best_weights=True,
)
model = keras.models.Sequential([
data_augmentation,
keras.layers.Conv2D(128, kernel_size=3, padding="same", activation="relu"),
keras.layers.MaxPool2D(),
keras.layers.Conv2D(64, kernel_size=3, padding="same", activation="relu"),
keras.layers.MaxPool2D(),
keras.layers.Flatten(),
keras.layers.Dropout(0.5),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dropout(0.5),
keras.layers.Dense(num_classes, activation="softmax")
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="nadam",
metrics = ["accuracy"])
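# Optional (illustrative): build the model with the expected input shape so layer output shapes
# and parameter counts can be inspected before training.
model.build(input_shape=(None, 28, 28, 1))
model.summary()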
history = model.fit(X_train, y_train, epochs=epochs, callbacks= [reduce_lr, model_checkpoint_callback, es_callback], validation_data=(X_valid, y_valid))
model.evaluate(X_valid, y_valid)
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.title("CNN - Alphabet")
plt.show()
model.save('alphabet_cnn.h5')
###Output
_____no_output_____ |
Course 1 - Neural Networks and Deep Learning/NoteBooks/Planar_data_classification_with_onehidden_layer_v6c.ipynb | ###Markdown
Updates to Assignment If you were working on the older version:* Please click on the "Coursera" icon in the top right to open up the folder directory. * Navigate to the folder: Week 3/ Planar data classification with one hidden layer. You can see your prior work in version 6b: "Planar data classification with one hidden layer v6b.ipynb" List of bug fixes and enhancements* Clarifies that the classifier will learn to classify regions as either red or blue.* compute_cost function fixes np.squeeze by casting it as a float.* compute_cost instructions clarify the purpose of np.squeeze.* compute_cost clarifies that "parameters" parameter is not needed, but is kept in the function definition until the auto-grader is also updated.* nn_model removes extraction of parameter values, as the entire parameter dictionary is passed to the invoked functions. Planar data classification with one hidden layerWelcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression. **You will learn how to:**- Implement a 2-class classification neural network with a single hidden layer- Use units with a non-linear activation function, such as tanh - Compute the cross entropy loss - Implement forward and backward propagation 1 - Packages Let's first import all the packages that you will need during this assignment.- [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis. - [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.- testCases provides some test examples to assess the correctness of your functions- planar_utils provide various useful functions used in this assignment
###Code
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
###Output
_____no_output_____
###Markdown
2 - Dataset First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
###Code
X, Y = load_planar_dataset()
###Output
_____no_output_____
###Markdown
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data. In other words, we want the classifier to define regions as either red or blue.
###Code
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____
###Markdown
You have: - a numpy-array (matrix) X that contains your features (x1, x2) - a numpy-array (vector) Y that contains your labels (red:0, blue:1).Lets first get a better sense of what our data is like. **Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`? **Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
###Code
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = Y.shape[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
###Output
The shape of X is: (2, 400)
The shape of Y is: (1, 400)
I have m = 400 training examples!
###Markdown
**Expected Output**: **shape of X** (2, 400) **shape of Y** (1, 400) **m** 400 3 - Simple Logistic RegressionBefore building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
###Code
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
###Output
_____no_output_____
###Markdown
You can now plot the decision boundary of these models. Run the code below.
###Code
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
###Output
Accuracy of logistic regression: 47 % (percentage of correctly labelled datapoints)
###Markdown
**Expected Output**: **Accuracy** 47% **Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now! 4 - Neural Network modelLogistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.**Here is our model**:**Mathematically**:For one example $x^{(i)}$:$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$ $$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$Given the predictions on all the examples, you can also compute the cost $J$ as follows: $$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$**Reminder**: The general methodology to build a Neural Network is to: 1. Define the neural network structure ( of input units, of hidden units, etc). 2. Initialize the model's parameters 3. Loop: - Implement forward propagation - Compute loss - Implement backward propagation to get the gradients - Update parameters (gradient descent)You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data. 4.1 - Defining the neural network structure **Exercise**: Define three variables: - n_x: the size of the input layer - n_h: the size of the hidden layer (set this to 4) - n_y: the size of the output layer**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
###Code
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
###Output
The size of the input layer is: n_x = 5
The size of the hidden layer is: n_h = 4
The size of the output layer is: n_y = 2
###Markdown
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded). **n_x** 5 **n_h** 4 **n_y** 2 4.2 - Initialize the model's parameters **Exercise**: Implement the function `initialize_parameters()`.**Instructions**:- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.- You will initialize the weights matrices with random values. - Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).- You will initialize the bias vectors as zeros. - Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x)*0.01
b1 = np.zeros(shape=(n_h, 1))
W2 = np.random.randn(n_y, n_h)*0.01
b2 = np.zeros(shape=(n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]]
b1 = [[ 0.]
[ 0.]
[ 0.]
[ 0.]]
W2 = [[-0.01057952 -0.00909008 0.00551454 0.02292208]]
b2 = [[ 0.]]
###Markdown
**Expected Output**: **W1** [[-0.00416758 -0.00056267] [-0.02136196 0.01640271] [-0.01793436 -0.00841747] [ 0.00502881 -0.01245288]] **b1** [[ 0.] [ 0.] [ 0.] [ 0.]] **W2** [[-0.01057952 -0.00909008 0.00551454 0.02292208]] **b2** [[ 0.]] 4.3 - The Loop **Question**: Implement `forward_propagation()`.**Instructions**:- Look above at the mathematical representation of your classifier.- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.- You can use the function `np.tanh()`. It is part of the numpy library.- The steps you have to implement are: 1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`. 2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
###Output
0.262818640198 0.091999045227 -1.30766601287 0.212877681719
###Markdown
**Expected Output**: 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.**Instructions**:- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{[2](i)})$:```pythonlogprobs = np.multiply(np.log(A2),Y)cost = - np.sum(logprobs) no need to use a for loop!```(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`). Note that if you use `np.multiply` followed by `np.sum` the end result will be a type `float`, whereas if you use `np.dot`, the result will be a 2D numpy array. We can use `np.squeeze()` to remove redundant dimensions (in the case of single float, this will be reduced to a zero-dimension array). We can cast the array as a type `float` using `float()`.
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
[Note that the parameters argument is not used in this function,
but the auto-grader currently expects this parameter.
Future version of this notebook will fix both the notebook
and the auto-grader so that `parameters` is not needed.
For now, please include `parameters` in the function signature,
and also when invoking this function.]
Returns:
cost -- cross-entropy cost given equation (13)
"""
m = Y.shape[1] # number of example
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = np.multiply(np.log(A2),Y) + np.multiply((1-Y), np.log(1-A2))
cost = -np.sum(logprobs)/m
### END CODE HERE ###
cost = float(np.squeeze(cost)) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
###Output
cost = 0.6930587610394646
###Markdown
**Expected Output**: **cost** 0.693058761... Using the cache computed during forward propagation, you can now implement backward propagation.**Question**: Implement the function `backward_propagation()`.**Instructions**:Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation. <!--$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$- Note that $*$ denotes elementwise multiplication.- The notation you will use is common in deep learning coding: - dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$ - db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$ - dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$ - db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$ !-->- Tips: - To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute $g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters['W1']
W2 = parameters['W2']
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache['A1']
A2 = cache['A2']
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = 1/m * (np.dot(dZ2, A1.T))
db2 = 1/m * (np.sum(dZ2, axis = 1, keepdims = True))
dZ1 = np.multiply(np.dot(W2.T, dZ2), 1 - np.power(A1, 2))
dW1 = 1/m * (np.dot(dZ1, X.T))
db1 = 1/m * (np.sum(dZ1, axis = 1, keepdims = True))
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
###Output
dW1 = [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]]
db1 = [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]]
dW2 = [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]]
db2 = [[-0.16655712]]
###Markdown
**Expected output**: **dW1** [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] **db1** [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] **dW2** [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] **db2** [[-0.16655712]] **Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
###Code
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads['dW1']
db1 = grads['db1']
dW2 = grads['dW2']
db2 = grads['db2']
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - learning_rate * dW1
b1 = b1 - learning_rate * db1
W2 = W2 - learning_rate * dW2
b2 = b2 - learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
W1 = [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]
b1 = [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]
W2 = [[-0.01041081 -0.04463285 0.01758031 0.04747113]]
b2 = [[ 0.00010457]]
###Markdown
**Expected Output**: **W1** [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]] **b1** [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]] **W2** [[-0.01041081 -0.04463285 0.01758031 0.04747113]] **b2** [[ 0.00010457]] 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() **Question**: Build your neural network model in `nn_model()`.**Instructions**: The neural network model has to use the previous functions in the right order.
###Code
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x, n_h, n_y)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads, learning_rate = 1.2)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
Cost after iteration 0: 0.692739
Cost after iteration 1000: 0.000218
Cost after iteration 2000: 0.000107
Cost after iteration 3000: 0.000071
Cost after iteration 4000: 0.000053
Cost after iteration 5000: 0.000042
Cost after iteration 6000: 0.000035
Cost after iteration 7000: 0.000030
Cost after iteration 8000: 0.000026
Cost after iteration 9000: 0.000023
W1 = [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]
b1 = [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]]
W2 = [[-2.45566237 -3.27042274 2.00784958 3.36773273]]
b2 = [[ 0.20459656]]
###Markdown
**Expected Output**: **cost after iteration 0** 0.692739 $\vdots$ $\vdots$ **W1** [[-0.65848169 1.21866811] [-0.76204273 1.39377573] [ 0.5792005 -1.10397703] [ 0.76773391 -1.41477129]] **b1** [[ 0.287592 ] [ 0.3511264 ] [-0.2431246 ] [-0.35772805]] **W2** [[-2.45566237 -3.27042274 2.00784958 3.36773273]] **b2** [[ 0.20459656]] 4.5 Predictions**Question**: Use your model to predict by building predict().Use forward propagation to predict results.**Reminder**: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases} 1 & \text{if}\ activation > 0.5 \\ 0 & \text{otherwise} \end{cases}$ As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
###Code
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = np.round(A2)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
###Output
predictions mean = 0.666666666667
###Markdown
**Expected Output**: **predictions mean** 0.666666666667 It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
###Code
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
###Output
Cost after iteration 0: 0.693048
Cost after iteration 1000: 0.288083
Cost after iteration 2000: 0.254385
Cost after iteration 3000: 0.233864
Cost after iteration 4000: 0.226792
Cost after iteration 5000: 0.222644
Cost after iteration 6000: 0.219731
Cost after iteration 7000: 0.217504
Cost after iteration 8000: 0.219471
Cost after iteration 9000: 0.218612
###Markdown
**Expected Output**: **Cost after iteration 9000** 0.218607
###Code
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
###Output
Accuracy: 90%
###Markdown
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. 4.6 - Tuning hidden layer size (optional/ungraded exercise) Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
###Code
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
Accuracy for 1 hidden units: 67.5 %
Accuracy for 2 hidden units: 67.25 %
Accuracy for 3 hidden units: 90.75 %
Accuracy for 4 hidden units: 90.5 %
Accuracy for 5 hidden units: 91.25 %
Accuracy for 20 hidden units: 90.0 %
Accuracy for 50 hidden units: 90.25 %
###Markdown
**Interpretation**:- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data. - The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fits the data well without also incurring noticeable overfitting.- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting. **Optional questions**:**Note**: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right. Some optional/ungraded questions that you can explore if you wish: - What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?- Play with the learning_rate. What happens?- What if we change the dataset? (See part 5 below!) **You've learnt to:**- Build a complete neural network with a hidden layer- Make a good use of a non-linear unit- Implemented forward propagation and backpropagation, and trained a neural network- See the impact of varying the hidden layer size, including overfitting. Nice work! 5) Performance on other datasets If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
###Code
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
###Output
_____no_output_____ |
Boosting/XG Boost Classifier using Python/xg_boost_samrat.ipynb | ###Markdown
XGBoost Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Training XGBoost on the Training set
###Code
from xgboost import XGBClassifier
classifier = XGBClassifier()
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[84 3]
[ 0 50]]
###Markdown
Applying k-Fold Cross Validation
###Code
from sklearn.model_selection import cross_val_score
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10)
print("Accuracy: {:.2f} %".format(accuracies.mean()*100))
print("Standard Deviation: {:.2f} %".format(accuracies.std()*100))
###Output
Accuracy: 96.53 %
Standard Deviation: 2.07 %
|
0-newbooks/faceswap-GAN/temp/faceswap_GAN_keras.ipynb | ###Markdown
Code borrowed from [eriklindernoren](https://github.com/eriklindernoren) and [fchollet](https://github.com/fchollet)https://github.com/eriklindernoren/Keras-GAN/blob/master/aae/adversarial_autoencoder.pyhttps://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/8.5-introduction-to-gans.ipynb
###Code
class GANModel():
img_size = 64
channels = 3
img_shape = (img_size, img_size, channels)
encoded_dim = 1024
def __init__(self):
optimizer = Adam(1e-4, 0.5)
# Build and compile the discriminator
self.netDA, self.netDB = self.build_discriminator()
self.netDA.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
self.netDB.compile(loss='mse', optimizer=optimizer, metrics=['accuracy'])
# Build and compile the generator
self.netGA, self.netGB = self.build_generator()
self.netGA.compile(loss=['mae', 'mse'], optimizer=optimizer)
self.netGB.compile(loss=['mae', 'mse'], optimizer=optimizer)
img = Input(shape=self.img_shape)
alphaA, reconstructed_imgA = self.netGA(img)
alphaB, reconstructed_imgB = self.netGB(img)
# For the adversarial_autoencoder model we will only train the generator
self.netDA.trainable = False
self.netDB.trainable = False
def one_minus(x): return 1 - x
# masked_img = alpha * reconstructed_img + (1 - alpha) * img
masked_imgA = add([multiply([alphaA, reconstructed_imgA]), multiply([Lambda(one_minus)(alphaA), img])])
masked_imgB = add([multiply([alphaB, reconstructed_imgB]), multiply([Lambda(one_minus)(alphaB), img])])
out_discriminatorA = self.netDA(concatenate([masked_imgA, img], axis=-1))
out_discriminatorB = self.netDB(concatenate([masked_imgB, img], axis=-1))
# The adversarial_autoencoder model (stacked generator and discriminator) takes
# img as input => generates encoded represenation and reconstructed image => determines validity
self.adversarial_autoencoderA = Model(img, [reconstructed_imgA, out_discriminatorA])
self.adversarial_autoencoderB = Model(img, [reconstructed_imgB, out_discriminatorB])
self.adversarial_autoencoderA.compile(loss=['mae', 'mse'],
loss_weights=[1, 0.5],
optimizer=optimizer)
self.adversarial_autoencoderB.compile(loss=['mae', 'mse'],
loss_weights=[1, 0.5],
optimizer=optimizer)
def build_generator(self):
def conv_block(input_tensor, f):
x = input_tensor
x = Conv2D(f, kernel_size=3, strides=2, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def res_block(input_tensor, f):
x = input_tensor
x = Conv2D(f, kernel_size=3, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
x = Conv2D(f, kernel_size=3, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same")(x)
x = add([x, input_tensor])
x = LeakyReLU(alpha=0.2)(x)
return x
def upscale_ps(filters, use_norm=True):
def block(x):
x = Conv2D(filters*4, kernel_size=3, use_bias=False,
kernel_initializer=RandomNormal(0, 0.02), padding='same' )(x)
x = LeakyReLU(0.1)(x)
x = PixelShuffler()(x)
return x
return block
def Encoder(img_shape):
inp = Input(shape=img_shape)
x = Conv2D(64, kernel_size=5, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same")(inp)
x = conv_block(x,128)
x = conv_block(x,256)
x = conv_block(x,512)
x = conv_block(x,1024)
x = Dense(1024)(Flatten()(x))
x = Dense(4*4*1024)(x)
x = Reshape((4, 4, 1024))(x)
out = upscale_ps(512)(x)
return Model(inputs=inp, outputs=out)
def Decoder_ps(img_shape):
nc_in = 512
input_size = img_shape[0]//8
inp = Input(shape=(input_size, input_size, nc_in))
x = inp
x = upscale_ps(256)(x)
x = upscale_ps(128)(x)
x = upscale_ps(64)(x)
x = res_block(x, 64)
x = res_block(x, 64)
alpha = Conv2D(1, kernel_size=5, padding='same', activation="sigmoid")(x)
rgb = Conv2D(3, kernel_size=5, padding='same', activation="tanh")(x)
return Model(inp, [alpha, rgb])
encoder = Encoder(self.img_shape)
decoder_A = Decoder_ps(self.img_shape)
decoder_B = Decoder_ps(self.img_shape)
x = Input(shape=self.img_shape)
netGA = Model(x, decoder_A(encoder(x)))
netGB = Model(x, decoder_B(encoder(x)))
try:
netGA.load_weights("models/netGA.h5")
netGB.load_weights("models/netGB.h5")
print ("Generator models loaded.")
except:
print ("Generator weights files not found.")
pass
return netGA, netGB,
def build_discriminator(self):
def conv_block_d(input_tensor, f, use_instance_norm=True):
x = input_tensor
x = Conv2D(f, kernel_size=4, strides=2, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def Discriminator(img_shape):
inp = Input(shape=(img_shape[0], img_shape[1], img_shape[2]*2))
x = conv_block_d(inp, 64, False)
x = conv_block_d(x, 128, False)
x = conv_block_d(x, 256, False)
out = Conv2D(1, kernel_size=4, kernel_initializer=RandomNormal(0, 0.02),
use_bias=False, padding="same", activation="sigmoid")(x)
return Model(inputs=[inp], outputs=out)
netDA = Discriminator(self.img_shape)
netDB = Discriminator(self.img_shape)
try:
netDA.load_weights("models/netDA.h5")
netDB.load_weights("models/netDB.h5")
print ("Discriminator models loaded.")
except:
print ("Discriminator weights files not found.")
pass
return netDA, netDB
def load(self, swapped):
if swapped:
print("swapping not supported on GAN")
pass
def save_weights(self):
self.netGA.save_weights("models/netGA.h5")
self.netGB.save_weights("models/netGB.h5" )
self.netDA.save_weights("models/netDA.h5")
self.netDB.save_weights("models/netDB.h5")
print ("Models saved.")
class Train():
random_transform_args = {
'rotation_range': 20,
'zoom_range': 0.05,
'shift_range': 0.05,
'random_flip': 0.5,
}
def __init__(self, model, fn_A, fn_B, batch_size=8):
self.model = model
self.train_batchA = minibatchAB(fn_A, batch_size, self.random_transform_args)
self.train_batchB = minibatchAB(fn_B, batch_size, self.random_transform_args)
self.batch_size = batch_size
self.use_mixup = True
self.mixup_alpha = 0.2
def train_one_step(self, gen_iter, t0):
# ---------------------
# Train Discriminators
# ---------------------
# Select a random half batch of images
epoch, warped_A, target_A = next(self.train_batchA)
epoch, warped_B, target_B = next(self.train_batchB)
# Generate a half batch of new images
gen_alphasA, gen_imgsA = self.model.netGA.predict(warped_A)
gen_alphasB, gen_imgsB = self.model.netGB.predict(warped_B)
#gen_masked_imgsA = gen_alphasA * gen_imgsA + (1 - gen_alphasA) * warped_A
#gen_masked_imgsB = gen_alphasB * gen_imgsB + (1 - gen_alphasB) * warped_B
gen_masked_imgsA = np.array([gen_alphasA[i] * gen_imgsA[i] + (1 - gen_alphasA[i]) * warped_A[i]
for i in range(self.batch_size)])
gen_masked_imgsB = np.array([gen_alphasB[i] * gen_imgsB[i] + (1 - gen_alphasB[i]) * warped_B[i]
for i in range (self.batch_size)])
valid = np.ones((self.batch_size, ) + self.model.netDA.output_shape[1:])
fake = np.zeros((self.batch_size, ) + self.model.netDA.output_shape[1:])
concat_real_inputA = np.array([np.concatenate([target_A[i], warped_A[i]], axis=-1)
for i in range(self.batch_size)])
concat_real_inputB = np.array([np.concatenate([target_B[i], warped_B[i]], axis=-1)
for i in range(self.batch_size)])
concat_fake_inputA = np.array([np.concatenate([gen_masked_imgsA[i], warped_A[i]], axis=-1)
for i in range(self.batch_size)])
concat_fake_inputB = np.array([np.concatenate([gen_masked_imgsB[i], warped_B[i]], axis=-1)
for i in range(self.batch_size)])
if self.use_mixup:
lam = np.random.beta(self.mixup_alpha, self.mixup_alpha)
mixup_A = lam * concat_real_inputA + (1 - lam) * concat_fake_inputA
mixup_B = lam * concat_real_inputB + (1 - lam) * concat_fake_inputB
# Train the discriminators
#print ("Train the discriminators.")
if self.use_mixup:
d_lossA = self.model.netDA.train_on_batch(mixup_A, lam * valid)
d_lossB = self.model.netDB.train_on_batch(mixup_B, lam * valid)
else:
d_lossA = self.model.netDA.train_on_batch(np.concatenate([concat_real_inputA, concat_fake_inputA], axis=0),
np.concatenate([valid, fake], axis=0))
d_lossB = self.model.netDB.train_on_batch(np.concatenate([concat_real_inputB, concat_fake_inputB], axis=0),
np.concatenate([valid, fake], axis=0))
# ---------------------
# Train Generators
# ---------------------
# Train the generators
#print ("Train the generators.")
g_lossA = self.model.adversarial_autoencoderA.train_on_batch(warped_A, [target_A, valid])
g_lossB = self.model.adversarial_autoencoderB.train_on_batch(warped_B, [target_B, valid])
print('[%d/%s][%d] Loss_DA: %f Loss_DB: %f Loss_GA: %f Loss_GB: %f time: %f'
% (epoch, "num_epochs", gen_iter, d_lossA[0],
d_lossB[0], g_lossA[0], g_lossB[0], time.time()-t0))
return None
def show_sample(self):
_, wA, tA = self.train_batchA.send(14)
_, wB, tB = self.train_batchB.send(14)
self.showG(tA, tB)
def showG(self, test_A, test_B):
def display_fig(figure_A, figure_B):
figure = np.concatenate([figure_A, figure_B], axis=0 )
figure = figure.reshape((4,7) + figure.shape[1:])
figure = stack_images(figure)
figure = np.clip((figure + 1) * 255 / 2, 0, 255).astype('uint8')
figure = cv2.cvtColor(figure, cv2.COLOR_BGR2RGB)
display(Image.fromarray(figure))
out_test_A_netGA = self.model.netGA.predict(test_A)
out_test_A_netGB = self.model.netGB.predict(test_A)
out_test_B_netGA = self.model.netGA.predict(test_B)
out_test_B_netGB = self.model.netGB.predict(test_B)
figure_A = np.stack([
test_A,
out_test_A_netGA[0] * out_test_A_netGA[1] + (1 - out_test_A_netGA[0]) * test_A,
out_test_A_netGB[0] * out_test_A_netGB[1] + (1 - out_test_A_netGB[0]) * test_A,
], axis=1 )
figure_B = np.stack([
test_B,
out_test_B_netGB[0] * out_test_B_netGB[1] + (1 - out_test_B_netGB[0]) * test_B,
out_test_B_netGA[0] * out_test_B_netGA[1] + (1 - out_test_B_netGA[0]) * test_B,
], axis=1 )
print ("Masked results:")
display_fig(figure_A, figure_B)
figure_A = np.stack([
test_A,
out_test_A_netGA[1],
out_test_A_netGB[1],
], axis=1 )
figure_B = np.stack([
test_B,
out_test_B_netGB[1],
out_test_B_netGA[1],
], axis=1 )
print ("Raw results:")
display_fig(figure_A, figure_B)
figure_A = np.stack([
test_A,
np.tile(out_test_A_netGA[0],3) * 2 - 1,
np.tile(out_test_A_netGB[0],3) * 2 - 1,
], axis=1 )
figure_B = np.stack([
test_B,
np.tile(out_test_B_netGB[0],3) * 2 - 1,
np.tile(out_test_B_netGA[0],3) * 2 - 1,
], axis=1 )
print ("Alpha masks:")
display_fig(figure_A, figure_B)
img_dirA = './faceA/*.*'
img_dirB = './faceB/*.*'
def read_image(fn, random_transform_args):
image = cv2.imread(fn)
image = cv2.resize(image, (256,256)) / 255 * 2 - 1
image = random_transform(image, **random_transform_args )
warped_img, target_img = random_warp(image)
return warped_img, target_img
def minibatch(data, batchsize, args):
length = len(data)
epoch = i = 0
tmpsize = None
shuffle(data)
while True:
size = tmpsize if tmpsize else batchsize
if i+size > length:
shuffle(data)
i = 0
epoch+=1
rtn = np.float32([read_image(data[j], args) for j in range(i,i+size)])
i+=size
tmpsize = yield epoch, rtn[:,0,:,:,:], rtn[:,1,:,:,:]
def minibatchAB(dataA, batchsize, args):
batchA = minibatch(dataA, batchsize, args)
tmpsize = None
while True:
ep1, warped_img, target_img = batchA.send(tmpsize)
tmpsize = yield ep1, warped_img, target_img
def load_data(file_pattern):
return glob.glob(file_pattern)
def launch_training(max_iters, batch_size=8, save_interval=100):
train_A = load_data(img_dirA)
train_B = load_data(img_dirB)
assert len(train_A), "No image found in " + str(img_dirA) + "."
assert len(train_B), "No image found in " + str(img_dirB) + "."
gan = GANModel()
trainer = Train(gan, train_A, train_B, batch_size)
print ("Training starts...")
t0 = time.time()
gen_iterations = 0
while gen_iterations < max_iters:
#print ("iter: " + str(gen_iterations))
_ = trainer.train_one_step(gen_iterations, t0)
gen_iterations += 1
# If at save interval => save models & show results
if (gen_iterations) % save_interval == 0:
clear_output()
# Save models
gan.save_weights()
# Show results
trainer.show_sample()
launch_training(max_iters=2e4, batch_size=8, save_interval=20)
###Output
_____no_output_____ |
grad_boosting.ipynb | ###Markdown
Gradient boosting by hand **Attention:** the assignment text has changed - the number of trees is now 50, the rule for changing the step size in task 3 is different, and a `random_state` parameter has been added to the decision tree. The correct answers have not changed, but they are now easier to obtain. A typo in the `gbm_predict` function has also been fixed. This assignment uses the `boston` dataset from `sklearn.datasets`. Keep the last 25% of the objects for quality control by splitting `X` and `y` into `X_train`, `y_train` and `X_test`, `y_test`. The goal of the assignment is to implement a simple variant of gradient boosting over regression trees for the case of a squared loss function.
###Code
import numpy as np
import math
from sklearn import datasets, model_selection, metrics
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
import xgboost as xgb
import warnings
warnings.filterwarnings('ignore')
%pylab inline
X, y = datasets.load_boston(return_X_y = True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, shuffle=False)
###Output
_____no_output_____
###Markdown
Task 1. As you already know from the lectures, **boosting** is a method of building compositions of base algorithms by sequentially adding a new algorithm to the current composition with some coefficient. Gradient boosting trains each new algorithm so that it approximates the anti-gradient of the error with respect to the composition's predictions on the training set. Just as when minimizing a function with gradient descent, in gradient boosting we correct the composition by changing the algorithm in the direction of the error's anti-gradient. Use the formula from the lectures that defines the training-set targets on which the new algorithm should be trained (in fact, this is just the error gradient written out in slightly more detail), and derive its particular case when the loss function `L` is the squared deviation of the composition's prediction `a(x)` from the correct answer `y` at a given `x`. If you have not computed a derivative by hand for a while, a table of derivatives of elementary functions (easy to find online) and the chain rule will help. After differentiating the square you will get a factor of 2 - since we will be choosing the coefficient with which the new base algorithm is added anyway, ignore this factor when building the algorithm further.
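For reference, a short derivation of that particular case (my own working, added for clarity, using the notation above): with the squared loss $L(y, a(x)) = (a(x) - y)^2$, the anti-gradient with respect to the composition's prediction is $$-\frac{\partial L}{\partial a(x)} = -2\,(a(x) - y) = 2\,(y - a(x)),$$ so once the constant factor 2 is dropped, each new tree is simply fitted to the residuals $y - a(x)$.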
###Code
%%writefile 'grad_boosting_1.txt'
2(a(x) - y)
###Output
Overwriting grad_boosting_1.txt
###Markdown
Task 2. Create a list for `DecisionTreeRegressor` objects (we will use them as base algorithms) and a list of real numbers (these will be the coefficients in front of the base algorithms). In a loop, sequentially train 50 decision trees with the parameters `max_depth=5` and `random_state=42` (leave the other parameters at their defaults). Boosting often uses hundreds or thousands of trees, but we limit ourselves to 50 so that the algorithm runs faster and is easier to debug (the goal of this task is to understand how the method works). Each tree must be trained on the same set of objects, but the targets that the tree learns to predict change according to the rule obtained in task 1. To start, try always taking the coefficient equal to 0.9. Usually a much smaller coefficient is justified, around 0.05 or 0.1, but since our toy example on a standard dataset has only 50 trees, we take a larger step for now. While implementing the training you will need a function that computes the prediction of the composition of trees built so far on the sample `X`:```def gbm_predict(X): return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X](here base_algorithms_list is the list of base algorithms and coefficients_list is the list of coefficients in front of them)```The same function will help you obtain predictions on the hold-out sample and evaluate the quality of your algorithm with `mean_squared_error` from `sklearn.metrics`. Raise the result to the power 0.5 to obtain `RMSE`. The resulting `RMSE` value is the **answer to part 2**.
###Code
size = 50
coef = 0.9
base_algorithms_list = []
coefficients_list = [coef]*size
def gbm_predict(X):
return [sum([coeff * algo.predict([x])[0] for algo, coeff in zip(base_algorithms_list, coefficients_list)]) for x in X]
for i in range(size):
dtr = DecisionTreeRegressor(max_depth = 5, random_state = 42)
if i == 0:
dtr.fit(X_train, y_train)
else:
dtr.fit(X_train, (y_train - gbm_predict(X_train)))
base_algorithms_list.append(dtr)
RMSE = math.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
with open("grad_boosting_2.txt", "w") as fout:
fout.write(str(RMSE))
###Output
_____no_output_____
###Markdown
Task 3. You may also be concerned that, when moving with a constant step, the predictions on the training set change too sharply near the error minimum and jump over it. Try decreasing the weight in front of each algorithm at every iteration using the formula `0.9 / (1.0 + i)`, where `i` is the iteration number (from 0 to 49). Use the resulting quality of the algorithm as the **answer to part 3**. In practice the following step-selection strategy is often used: once an algorithm has been chosen, its coefficient is tuned with a numerical optimization method so that the deviation from the correct answers is minimal. We will not ask you to implement this for the assignment, but we recommend working through this strategy and implementing it for yourself when you get the chance.
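As a hedged sketch of that step-selection strategy (not required for the assignment; it assumes the `gbm_predict` helper and the `numpy` import used in this notebook, and `fit_coefficient` is a hypothetical name):

```python
from scipy.optimize import minimize_scalar

def fit_coefficient(new_tree, X, y):
    """Pick the weight for an already-fitted tree by 1-D numerical optimization."""
    current_pred = np.array(gbm_predict(X))   # prediction of the composition built so far
    tree_pred = new_tree.predict(X)           # prediction of the candidate tree
    def loss(c):
        return np.mean((y - (current_pred + c * tree_pred)) ** 2)
    return minimize_scalar(loss, bounds=(0.0, 1.0), method='bounded').x
```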
###Code
coefficients_list = [coef/(1.0 + i) for i in range(size)]
base_algorithms_list = []
for i in range(size):
dtr = DecisionTreeRegressor(max_depth = 5, random_state = 42)
if i == 0:
dtr.fit(X_train, y_train)
else:
dtr.fit(X_train, (y_train - gbm_predict(X_train)))
base_algorithms_list.append(dtr)
RMSE = math.sqrt(metrics.mean_squared_error(y_test, gbm_predict(X_test)))
with open("grad_boosting_3.txt", "w") as fout:
fout.write(str(RMSE))
###Output
_____no_output_____
###Markdown
Task 4. The method you have implemented, gradient boosting over trees, is very popular in machine learning. It is available both in the `sklearn` library itself and in the third-party `XGBoost` library, which has its own Python interface. In practice `XGBoost` works noticeably better than `GradientBoostingRegressor` from `sklearn`, but for this task you may use either implementation. Investigate whether gradient boosting overfits as the number of iterations grows (and think about why), and also as the depth of the trees grows. Based on your observations, write out, separated by spaces and in increasing order, the numbers of the correct statements among those given below (this will be the **answer to part 4**): 1. As the number of trees increases, starting from some point, the quality of gradient boosting does not change significantly. 2. As the number of trees increases, starting from some point, gradient boosting begins to overfit. 3. As the tree depth increases, starting from some point, the quality of gradient boosting on the test set begins to deteriorate. 4. As the tree depth increases, starting from some point, the quality of gradient boosting stops changing significantly.
###Code
n_trees = range(1, 50, 5)
max_depths = range(1, 30, 3)
scoring = []
for n_tree in n_trees:
estimator = xgb.XGBRegressor(n_estimators=n_tree)
score = cross_val_score(estimator, X, y, cv = 3)
scoring.append(score)
scoring = np.asmatrix(scoring)
pylab.plot(n_trees, scoring.mean(axis = 1), marker='.')
pylab.grid(True)
pylab.xlabel('trees')
pylab.ylabel('score')
pylab.show()
scoring = []
for max_depth in max_depths:
estimator = xgb.XGBRegressor(max_depth=max_depth, n_estimators=50)
score = cross_val_score(estimator, X, y, cv = 3)
scoring.append(score)
scoring = np.asmatrix(scoring)
pylab.plot(max_depths, scoring.mean(axis = 1), marker='.')
pylab.grid(True)
pylab.xlabel('max_depth')
pylab.ylabel('score')
pylab.show()
%%writefile 'grad_boosting_4.txt'
2 3
###Output
Overwriting grad_boosting_4.txt
###Markdown
Task 5. Compare the quality obtained with gradient boosting to the quality of linear regression. To do this, train `LinearRegression` from `sklearn.linear_model` (with default parameters) on the training set and compute the `RMSE` of the resulting algorithm's predictions on the test set. The resulting quality is the **answer to part 5**. In this example the simple model should turn out to be worse, but keep in mind that this is not always the case. In the assignments of this course you will also encounter an example of the opposite situation.
###Code
lr = LinearRegression()
lr.fit(X_train, y_train)
RMSE = math.sqrt(metrics.mean_squared_error(y_test, lr.predict(X_test)))
with open('grad_boosting_5.txt', "w") as fout:
fout.write(str(RMSE))
###Output
_____no_output_____ |
photometer/.ipynb_checkpoints/photometer_data-checkpoint.ipynb | ###Markdown
Flow through photometer data First dataset (E. coli)
###Code
import matplotlib.pyplot as plt
import os
import pandas as pd
import numpy as np
data=pd.read_csv("ecoli_180920.csv", header=None, names = ["time", "cell", "dark", "light", "diff"], index_col=False)
data[0:5]
ref=np.mean(data["diff"][0:5])
ref
data["OD"]=-np.log10(data["diff"]/ref)
data[0:5]
data["time"] = pd.to_datetime(data["time"], format="%Y-%m-%d_%H:%M:%S")
x=data["time"]
y=data["OD"]
plt.plot(x, y)
plt.gcf().autofmt_xdate()
plt.show()
###Output
_____no_output_____
###Markdown
Second dataset (P. syringae)
###Code
data2=pd.read_csv("pst_181016.csv", header=None, names = ["time", "signal", "temp"], index_col=False)
data2[0:5]
ref2=np.mean(data2["signal"][0:50])
ref2
data2["OD"]=-np.log10(data2["signal"]/ref2)
data2[0:5]
data2["time"] = pd.to_datetime(data2["time"], format="%Y-%m-%d_%H:%M:%S")
x=data2["time"]
y=data2["OD"]
plt.plot(x, y)
plt.gcf().autofmt_xdate()
plt.show()
-np.log10(800/7900)
###Output
_____no_output_____
###Markdown
Third dataset (P. syringae)
###Code
data3=pd.read_csv("pst_181017.csv", header=None, names = ["time", "signal", "temp"], index_col=False)
data3[0:68]
ref3=np.mean(data3["signal"][0:68])
ref3
data3["OD"]=-np.log10(data3["signal"]/ref3)
data3[0:30]
data3["time"] = pd.to_datetime(data3["time"], format="%Y-%m-%d_%H:%M:%S")
x=data3["time"]
y=data3["OD"]
plt.figure(figsize=(10,12))
plt.plot(x, y)
plt.gcf().autofmt_xdate()
plt.show()
data4=pd.read_csv("pst_181022.csv", header=None, names = ["time", "signal", "temp"], index_col=False)
data4[17:46]
ref4=np.mean(data4["signal"][17:46])
ref4
data4["OD"]=-np.log10(data4["signal"]/ref4)
data4[0:30]
data4["time"] = pd.to_datetime(data4["time"], format="%Y-%m-%d_%H:%M:%S")
x=data4["time"]
y=data4["OD"]
plt.figure(figsize=(10,12))
plt.plot(x, y)
plt.gcf().autofmt_xdate()
plt.show()
len(data4)
data4["time"] = pd.to_datetime(data4["time"], format="%Y-%m-%d_%H:%M:%S")
x=data4["time"][0:8000]
y=data4["OD"][0:8000]
plt.figure(figsize=(10,12))
plt.plot(x, y)
plt.gcf().autofmt_xdate()
plt.show()
###Output
_____no_output_____ |
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/01_06/Begin/.ipynb_checkpoints/Beautiful Mathematics Typesetting-checkpoint.ipynb | ###Markdown
Beautiful Mathematics Typesetting[TeX](https://en.wikipedia.org/wiki/TeX)[LaTeX](https://www.latex-project.org/)[Motivating Examples](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Typesetting%20Equations.html) The Lorenz Equations
###Code
\begin{align}
\dot{x} & = \sigma(y-x) \\
\dot{y} & = \rho x - y - xz \\
\dot{z} & = -\beta z + xy
\end{align}
###Output
_____no_output_____
###Markdown
The Cauchy-Schwarz Inequality
###Code
\begin{equation*}
\left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right)
\end{equation*}
###Output
_____no_output_____
###Markdown
Cross Product Formula
###Code
\begin{equation*}
\mathbf{V}_1 \times \mathbf{V}_2 = \begin{vmatrix}
\mathbf{i} & \mathbf{j} & \mathbf{k} \\
\frac{\partial X}{\partial u} & \frac{\partial Y}{\partial u} & 0 \\
\frac{\partial X}{\partial v} & \frac{\partial Y}{\partial v} & 0
\end{vmatrix}
\end{equation*}
###Output
_____no_output_____
###Markdown
Probability of getting (k) heads when flipping (n) coins
###Code
\begin{equation*}
P(E) = {n \choose k} p^k (1-p)^{ n-k}
\end{equation*}
###Output
_____no_output_____
###Markdown
Identity of Ramanujan[Srinivasa Ramanujan](https://en.wikipedia.org/wiki/Srinivasa_Ramanujan)Self-taught, no formal training in mathematics, made contributions to:- mathematical analysis- number theory- infinite series- continued fractions
###Code
\begin{equation*}
\frac{1}{\Bigl(\sqrt{\phi \sqrt{5}}-\phi\Bigr) e^{\frac25 \pi}} =
1+\frac{e^{-2\pi}} {1+\frac{e^{-4\pi}} {1+\frac{e^{-6\pi}}
{1+\frac{e^{-8\pi}} {1+\ldots} } } }
\end{equation*}
###Output
_____no_output_____
###Markdown
Maxwell’s Equations
###Code
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\ \nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
###Output
_____no_output_____ |
src/RasaNLU_W2V_Approach.ipynb | ###Markdown
Entity Extraction
###Code
class Embedding(object):
def __init__(self,vocab_file,vectors_file, vocab_flag=True):
if vocab_flag:
words = []
with open(vocab_file, 'r') as f:
lines = [x.rstrip().split('\n') for x in f.readlines()]
lines = [x[0] for x in lines]
for line in lines:
current_words = line.split(' ')
words = list(set(words) | set(current_words))
with open(vectors_file, 'r') as f:
vectors = {}
for line in f:
vals = line.rstrip().split(' ')
vectors[vals[0]] = [float(x) for x in vals[1:]]
if not vocab_flag:
words = vectors.keys()
vocab_size = len(words)
vocab = {w: idx for idx, w in enumerate(words)}
ivocab = {idx: w for idx, w in enumerate(words)}
vector_dim = len(vectors[ivocab[0]])
W = np.zeros((vocab_size, vector_dim))
for word, v in vectors.items():
if (word == '<unk>') | (word not in vocab):
continue
W[vocab[word], :] = v
# normalize each word vector to unit variance
W_norm = np.zeros(W.shape)
d = (np.sum(W ** 2, 1) ** (0.5))
W_norm = (W.T / d).T
if vocab_flag:
for i in range(W.shape[0]):
x = W[i, :]
if sum(x) == 0:
W_norm[i, :] = W[i, :]
self.W = W_norm
self.vocab = vocab
self.ivocab = ivocab
def find_similar_words(embed,text,refs):
C = np.zeros((len(refs),embed.W.shape[1]))
for idx, term in enumerate(refs):
if term in embed.vocab:
C[idx,:] = embed.W[embed.vocab[term], :]
tokens = text.split(' ')
scores = [0.] * len(tokens)
for idx, term in enumerate(tokens):
if term in embed.vocab:
vec = embed.W[embed.vocab[term], :]
cosines = np.dot(C,vec.T)
score = np.mean(cosines)
scores[idx] = score
print(scores)
return tokens[np.argmax(scores)]
examples = ["i am looking for a place in the north of town",
"looking for indian restaurants",
"Indian wants to go to an italian restaurant",
"show me chinese restaurants",
"show me chines restaurants in the north",
"show me a mexican place in the centre",
"i am looking for an indian spot called olaolaolaolaolaola",
"search for restaurants",
"anywhere in the west",
"anywhere near 18328",
"I am looking for asian fusion food",
"I am looking a restaurant in 29432",
"I am looking for mexican indian fusion",
"central indian restaurant"]
examples = [x.lower() for x in examples]
fn = open(OUT_PATH+'vocabulary_file_w2v.txt', 'w')
for example in examples:
fn.write(example)
fn.write('\n')
fn.close()
embed = Embedding(VOCAB_FILE, GLOVE_INP_FN, False)
print(embed.W.shape)
print(len(embed.vocab))
test_example1 = 'looking for spanish restaurants'
test_example2 = 'looking for indian restaurants'
test_example3 = 'looking for south indian restaurants'
test_example4 = 'I want to find a chettinad restaurant'
test_example5 = 'chinese man looking for a indian restaurant'
refs = ["mexican","chinese","french","british","american"]
threshold = 0.2
# With stopwords
for example in [test_example1, test_example2, test_example3,
test_example4, test_example5]:
example = example.lower()
print('text: ', example)
print(find_similar_words(embed,example,refs))
print('\n')
# With stopwords
stop = set(stopwords.words('english'))
for example in [test_example1, test_example2, test_example3,
test_example4, test_example5]:
print('text: ', example)
example = " ".join([x.lower() for x in nltk.word_tokenize(example)
if x not in stop])
print(find_similar_words(embed,example,refs))
print('\n')
find_similar_words(embed, 'fish food', refs)
###Output
[0.3736672532256827, 0.45149458923102903]
###Markdown
Intent Detection
###Code
import numpy as np
def sum_vecs(embed,text):
tokens = text.split(' ')
vec = np.zeros(embed.W.shape[1])
for idx, term in enumerate(tokens):
if term in embed.vocab:
vec = vec + embed.W[embed.vocab[term], :]
return vec
def get_centroid(embed,examples):
C = np.zeros((len(examples),embed.W.shape[1]))
for idx, text in enumerate(examples):
C[idx,:] = sum_vecs(embed,text)
centroid = np.mean(C,axis=0)
assert centroid.shape[0] == embed.W.shape[1]
return centroid
def get_intent(embed,text):
intents = ['deny', 'inform', 'greet']
vec = sum_vecs(embed,text)
scores = np.array([ np.linalg.norm(vec-data[label]["centroid"]) for label in intents ])
return intents[np.argmin(scores)]
data={
"greet": {
"examples" : ["hello","hey there","howdy","hello","hi","hey","hey ho"],
"centroid" : None
},
"inform": {
"examples" : [
"i'd like something asian",
"maybe korean",
"what mexican options do i have",
"what italian options do i have",
"i want korean food",
"i want german food",
"i want vegetarian food",
"i would like chinese food",
"i would like indian food",
"what japanese options do i have",
"korean please",
"what about indian",
"i want some vegan food",
"maybe thai",
"i'd like something vegetarian",
"show me french restaurants",
"show me a cool malaysian spot"
],
"centroid" : None
},
"deny": {
"examples" : [
"nah",
"any other places ?",
"anything else",
"no thanks"
"not that one",
"i do not like that place",
"something else please",
"no please show other options"
],
"centroid" : None
}
}
intents = ['greet', 'inform', 'deny']
examples = []
for intent in intents:
examples = list(set(examples) | set(data[intent]['examples']))
examples
fn = open(VOCAB_FILE, 'w')
for example in examples:
fn.write(example)
fn.write('\n')
fn.close()
embed = Embedding(VOCAB_FILE, GLOVE_INP_FN, False)
for label in data.keys():
data[label]["centroid"] = get_centroid(embed,data[label]["examples"])
data
for text in ["hey you","i am looking for chinese food","not for me"]:
print("text : '{0}', predicted_label : '{1}'".format(text,get_intent(embed,text)))
text = "how do you do"
print("text : '{0}', predicted_label : '{1}'".format(text,get_intent(embed,text)))
###Output
text : 'how do you do', predicted_label : 'deny'
|
NLP/Word_Analogy_Debiasing/Word_analogy_debiasing.ipynb | ###Markdown
Word Analogy and Debiasing---This notebook contains word analogy, debiasing, and equalizing tasks. With the help of modern word embeddings (e.g. GloVe, word2vec), we are able to make use of word vectors and accomplish these tasks.1. **Word Analogy:** Compute word analogies. For example, 'China' is to 'Mandarin' as 'France' is to 'French'.2. **Debiasing:** The dataset used to train the word embeddings can reflect some of the bias in human language. Gender bias is a significant one. 3. **Equalizing:** Some words are gender-specific. For example, we may assume gender is the only difference between 'girl' and 'boy'. Therefore, they should be equidistant along the other dimensions. Acknowledgement:Some ideas come from the [Deep Learning Course on Coursera](https://www.deeplearning.ai/deep-learning-specialization/) (e.g., the debiasing and equalizing equations) and the [paper](https://arxiv.org/abs/1607.06520). 1. Load Word EmbeddingsThe pre-trained word vectors are downloaded from [GloVe](https://nlp.stanford.edu/projects/glove/). The file I used contains 400k words and 50 dimensions.
###Code
import numpy as np
# Read the GloVe text file and return the words.
def read_glove(name):
"""Given the path/name of the glove file, return the words(set) and word2vec_map(a python dict)
"""
file = open(name, 'r')
# Create set for words and a dictionary for words and their corresponding
words = set()
word2vec_map = {}
data = file.readlines()
for line in data:
# add word to the words set.
word = line.split()[0]
words.add(word)
word2vec_map[word] = np.array(line.split()[1:], dtype = np.float64)
return words, word2vec_map
words, word2vec_map = read_glove('glove.6B.50d.txt')
# length of vocab
print('length of vocab:',len(words))
# dimension of word
print('dimension of word:',word2vec_map['hello'].shape)
###Output
length of vocab: 400000
dimension of word: (50,)
###Markdown
2. Word Analogy 2.1 Define similarityCosine similarity is used to measure the similarity of two vectors. $$\text{Cosine Similarity(a, b)} = \frac {a . b} {||a||_2 ||b||_2} = cos(\theta)$$
###Code
def cosine_sim(a,b):
"""Given vector a and b, compute the cosine similarity of these two vectors.
"""
# Compute the dot product of a,b
dot = np.dot(a,b)
# compute the cosine similarity of a,b
sim = dot/(np.linalg.norm(a)*np.linalg.norm(b))
return sim
print(cosine_sim(word2vec_map['man'], word2vec_map['woman']))
###Output
0.8860337718495819
###Markdown
2.2 Find word analogyIf word a is to b as c is to d, then we have $e_b - e_a \approx e_d - e_c$. Iterate over the vocabulary to find the best word analogy given three words.
###Code
def word_analogy(word_a, word_b, word_c, words, word2vec):
"""word_a is to word_b as word_c is to __.
Find the word given the words and word vectors.
"""
# Make sure the inputs are in lower case.
word_a, word_b, word_c = word_a.lower(), word_b.lower(), word_c.lower()
a,b,c = word2vec[word_a], word2vec[word_b], word2vec[word_c]
best_sim = -100
best_word = None
for word in words:
if word in [word_a, word_b, word_c]:
continue
# compute the current similarity
sim = cosine_sim(a-b, c-word2vec[word])
if sim > best_sim:
best_sim = sim
best_word = word
return best_word
triads_to_try = [('italy', 'italian', 'spain'), ('india', 'delhi', 'japan'), ('man', 'woman', 'boy'), ('small', 'smaller', 'good')]
for triad in triads_to_try:
print ('{} -> {} :: {} -> {}'.format( *triad, word_analogy(*triad,words, word2vec_map)))
###Output
italy -> italian :: spain -> spanish
india -> delhi :: japan -> tokyo
man -> woman :: boy -> girl
small -> smaller :: good -> better
###Markdown
2. DebiasingSome words should be neutral with respect to gender, but pre-trained word vectors often are not, which reflects the bias present in the language data they were trained on. 2.1 Define the gender vector
###Code
g1 = word2vec_map['man'] - word2vec_map['woman']
g2 = word2vec_map['father'] - word2vec_map['mother']
g3 = word2vec_map['boy'] - word2vec_map['girl']
# Average the subtractions.
g = (g1+g2+g3)/3
print(cosine_sim(word2vec_map['technology'], g))
print(cosine_sim(word2vec_map['flower'], g))
###Output
0.16192108462558177
-0.0939532553641572
###Markdown
2.2 Neutralize the wordsHere is the equation to neutralize the words. $$e^{bias\_component} = \frac{e \cdot g}{||g||_2^2} * g$$$$e^{debiased} = e - e^{bias\_component}$$Where: $g$: The gender vector. $e$: The original word vector
###Code
def neutralize(word, gender, word2vec):
"""Given the word to neutralize, gender vector and the word vectors, neutralize the word.
"""
e = word2vec[word]
e_bias = (np.dot(e,gender)/(np.linalg.norm(gender)**2))*gender
e_unbiased = e - e_bias
return e_unbiased
###Output
_____no_output_____
###Markdown
After neutralizing words:
###Code
print(cosine_sim(g,neutralize('technology', g, word2vec_map) ))
print(cosine_sim(g,neutralize('flower', g, word2vec_map) ))
###Output
1.8444594232094444e-17
-8.244955165656526e-18
###Markdown
3. Equalizing Some gender-specific words should be equidistant from non-gender dimensions(axis). Major equations:$$ \mu = \frac{e_{w1} + e_{w2}}{2}$$ $$ \mu_{B} = \frac {\mu \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}$$ $$\mu_{\perp} = \mu - \mu_{B} $$$$ e_{w1B} = \frac {e_{w1} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}$$ $$ e_{w2B} = \frac {e_{w2} \cdot \text{bias_axis}}{||\text{bias_axis}||_2^2} *\text{bias_axis}$$$$e_{w1B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w1B}} - \mu_B} {|(e_{w1} - \mu_{\perp}) - \mu_B)|} $$$$e_{w2B}^{corrected} = \sqrt{ |{1 - ||\mu_{\perp} ||^2_2} |} * \frac{e_{\text{w2B}} - \mu_B} {|(e_{w2} - \mu_{\perp}) - \mu_B)|} $$$$e_1 = e_{w1B}^{corrected} + \mu_{\perp} $$$$e_2 = e_{w2B}^{corrected} + \mu_{\perp} $$
###Code
def equalize(pair, bias_axis, word2vec_map):
"""Given the word pairs, the bias axis and the word vectors,
make the word pairs equidistant from unbiased axis.
"""
w1, w2 = pair
e_w1, e_w2 = word2vec_map[w1], word2vec_map[w2]
# Compute the mean of e_w1 and e_w2
mu = (e_w1+e_w2)/2
# Compute the projections of mu over the bias axis and the orthogonal axis
mu_B = np.dot(mu,bias_axis)/(np.square(np.linalg.norm(bias_axis)))*bias_axis
mu_orth = mu - mu_B
# Compute e_w1B and e_w2B
e_w1B = np.dot(e_w1,bias_axis)/(np.square(np.linalg.norm(bias_axis)))*bias_axis
e_w2B = np.dot(e_w2,bias_axis)/(np.square(np.linalg.norm(bias_axis)))*bias_axis
# Adjust the Bias part of e_w1B and e_w2B
corrected_e_w1B = np.sqrt(np.abs(1-np.square(np.linalg.norm(mu_orth))))*(e_w1B-mu_B)/np.linalg.norm((e_w1-mu_orth)-mu_B)
corrected_e_w2B = np.sqrt(np.abs(1-np.square(np.linalg.norm(mu_orth))))*(e_w2B-mu_B)/np.linalg.norm((e_w2-mu_orth)-mu_B)
# Debias by equalizing e1 and e2 to the sum of their corrected projections
e1 = corrected_e_w1B + mu_orth
e2 = corrected_e_w2B + mu_orth
return e1, e2
print("cosine similarities before equalizing:")
print("cosine_similarity(word_to_vec_map[\"man\"], gender) = ", cosine_sim(word2vec_map["man"], g))
print("cosine_similarity(word_to_vec_map[\"woman\"], gender) = ", cosine_sim(word2vec_map["woman"], g))
print()
e1, e2 = equalize(("man", "woman"), g, word2vec_map)
print("cosine similarities after equalizing:")
print("cosine_similarity(e1, gender) = ", cosine_sim(e1, g))
print("cosine_similarity(e2, gender) = ", cosine_sim(e2, g))
###Output
cosine similarities before equalizing:
cosine_similarity(word_to_vec_map["man"], gender) = 0.02435875412347579
cosine_similarity(word_to_vec_map["woman"], gender) = -0.3979047171251496
cosine similarities after equalizing:
cosine_similarity(e1, gender) = 0.6624273110383183
cosine_similarity(e2, gender) = -0.6624273110383184
|
Notebooks/TP1.POC/TP1.reg2-Alt.ipynb | ###Markdown
Something is needed to indicate which environment we are going to work with. Import what is needed.
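A minimal sketch of what that environment note could look like (a suggestion only; it assumes the libraries imported below, and the printed versions will vary by machine):

```python
import sys
import pandas as pd
import numpy as np
import seaborn as sns

print("Python:", sys.version.split()[0])
print("pandas:", pd.__version__, "| numpy:", np.__version__, "| seaborn:", sns.__version__)
```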
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import re
data_url = "../Data/properatti.csv"
data = pd.read_csv(data_url, encoding="utf-8")
# drop the rows with NaN in the price column
data = data.dropna(axis=0, how='any', subset=['price_aprox_usd'])
# function to remove outliers.
def borrar_outliers(data, columnas):
    """Only accepts columns with numeric values.
    The columns are passed as a tuple."""
cols_limpiar = columnas
mask=np.ones(shape=(data.shape[0]), dtype=bool)
for i in cols_limpiar:
        # compute quartiles and cutoff values
Q1=data[i].quantile(0.25)
Q3=data[i].quantile(0.75)
RSI=Q3-Q1
max_value=Q3+1.5*RSI
min_value=Q1-1.5*RSI
        # adjust min_value by hand... it cannot be negative.
min_value=10
        # filter by max and min
mask=np.logical_and(mask, np.logical_and(data[i]>=min_value, data[i]<=max_value))
return data[mask]
def regex_to_bool(col, reg) :
u"""Returns a series with boolean mask result of apply the regular expresion to the column
col : column where to apply regular expresion
reg : regular expresion compiled
"""
serie = col.apply(lambda x : x if x is np.NaN else reg.search(x))
serie = serie.apply(lambda x : x is not None)
return serie
def regex_to_ones(col, reg, fill = 0) :
u"""Returns a series with ones or other value result of apply the regular expresion to the column
the value of one will be when the regular expression search() method found a match
the fill value (default to 0) will be when the regular expression serach() method did not found a match
col : column where to apply regular expresion
reg : regular expresion compiled
"""
serie = col.apply(lambda x : x if x is np.NaN else reg.search(x))
serie = serie.apply(lambda x : 1 if x is not None else fill)
return serie
def regex_to_tags(col, reg, match, not_match = np.NaN) :
u"""Returns a series with 'match' values result of apply the regular expresion to the column
the 'match' value will be when the regular expression search() method found a match
the 'not_match' value will be when the regular expression serach() method did not found a match
col : column where to apply regular expresion
reg : regular expresion compiled
"""
serie = col.apply(lambda x : x if x is np.NaN else reg.search(x))
serie = serie.apply(lambda x : match if x is not None else not_match)
return serie
_pattern = 'cochera|garage|auto'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_tags(data['description'], _express, 'cochera', '')
data['cochera'] = work
_pattern = 'piscina|pileta'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_tags(data['description'], _express, 'pileta', '')
data['pileta'] = work
_pattern = 'parrilla'
_express = re.compile(_pattern, flags = re.IGNORECASE)
work = regex_to_tags(data['description'], _express, 'parrilla', '')
data['parrilla'] = work
# Create a category by concatenating the ones found
data['amenities'] = data['cochera'] +' '+ data['pileta'] +' '+ data['parrilla']
data[['cochera', 'pileta', 'parrilla', 'amenities']]
data['amenities'].describe()
data['amenities'].value_counts()
###Output
_____no_output_____ |
03-ANN/tensor_operation.ipynb | ###Markdown
3.1 Tensors and Autograd 3.1.2 Operations and matrix multiplication with tensors
###Code
import torch
w = torch.randn(5,3, dtype=torch.float)
x = torch.tensor([[1.0,2.0], [3.0,4.0], [5.0,6.0]])
print("w size:", w.size())
print("x size:", x.size())
print("w:", w)
print("x:", x)
b = torch.randn(5,2, dtype=torch.float)
print("b:", b.size())
print("b:", b)
wx = torch.mm(w,x) # w has 5 rows and x has 2 columns, so the shape is [5, 2].
print("wx size:", wx.size())
print("wx:", wx)
result = wx + b
print("result size:", result.size())
print("result:", result)
###Output
result size: torch.Size([5, 2])
result: tensor([[-3.7656, -4.9200],
[-2.7057, -5.4891],
[ 2.1144, 0.2841],
[ 1.4777, 0.0384],
[-3.6158, -3.2231]])
|
src/model/pytorch/21-RL/DeepRL-Tutorials/04.Dueling_DQN.ipynb | ###Markdown
Dueling Deep Q Network Imports
###Code
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from IPython.display import clear_output
from matplotlib import pyplot as plt
%matplotlib inline
from timeit import default_timer as timer
from datetime import timedelta
import math
from utils.wrappers import *
from agents.DQN import Model as DQN_Agent
from utils.ReplayMemory import ExperienceReplayMemory
from utils.hyperparameters import Config
###Output
_____no_output_____
###Markdown
Hyperparameters
###Code
config = Config()
config.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#epsilon variables
config.epsilon_start = 1.0
config.epsilon_final = 0.01
config.epsilon_decay = 30000
config.epsilon_by_frame = lambda frame_idx: config.epsilon_final + (config.epsilon_start - config.epsilon_final) * math.exp(-1. * frame_idx / config.epsilon_decay)
#misc agent variables
config.GAMMA=0.99
config.LR=1e-4
#memory
config.TARGET_NET_UPDATE_FREQ = 1000
config.EXP_REPLAY_SIZE = 100000
config.BATCH_SIZE = 32
#Learning control variables
config.LEARN_START = 10000
config.MAX_FRAMES=1000000
#Nstep controls
config.N_STEPS=1
###Output
_____no_output_____
###Markdown
Network
###Code
class DuelingDQN(nn.Module):
def __init__(self, input_shape, num_outputs):
super(DuelingDQN, self).__init__()
self.input_shape = input_shape
self.num_actions = num_outputs
self.conv1 = nn.Conv2d(self.input_shape[0], 32, kernel_size=8, stride=4)
self.conv2 = nn.Conv2d(32, 64, kernel_size=4, stride=2)
self.conv3 = nn.Conv2d(64, 64, kernel_size=3, stride=1)
self.adv1 = nn.Linear(self.feature_size(), 512)
self.adv2 = nn.Linear(512, self.num_actions)
self.val1 = nn.Linear(self.feature_size(), 512)
self.val2 = nn.Linear(512, 1)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = F.relu(self.conv3(x))
x = x.view(x.size(0), -1)
adv = F.relu(self.adv1(x))
adv = self.adv2(adv)
val = F.relu(self.val1(x))
val = self.val2(val)
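        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean(A).
        # Note: adv.mean() below averages over the whole batch; a per-state variant would use adv.mean(1, keepdim=True).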
return val + adv - adv.mean()
def feature_size(self):
return self.conv3(self.conv2(self.conv1(torch.zeros(1, *self.input_shape)))).view(1, -1).size(1)
def sample_noise(self):
#ignore this for now
pass
###Output
_____no_output_____
###Markdown
Agent
###Code
class Model(DQN_Agent):
def __init__(self, static_policy=False, env=None, config=None):
super(Model, self).__init__(static_policy, env, config)
def declare_networks(self):
self.model = DuelingDQN(self.env.observation_space.shape, self.env.action_space.n)
self.target_model = DuelingDQN(self.env.observation_space.shape, self.env.action_space.n)
###Output
_____no_output_____
###Markdown
Plot Results
###Code
def plot(frame_idx, rewards, losses, sigma, elapsed_time):
clear_output(True)
plt.figure(figsize=(20,5))
plt.subplot(131)
plt.title('frame %s. reward: %s. time: %s' % (frame_idx, np.mean(rewards[-10:]), elapsed_time))
plt.plot(rewards)
if losses:
plt.subplot(132)
plt.title('loss')
plt.plot(losses)
if sigma:
plt.subplot(133)
plt.title('noisy param magnitude')
plt.plot(sigma)
plt.show()
###Output
_____no_output_____
###Markdown
Training Loop
###Code
start=timer()
env_id = "PongNoFrameskip-v4"
env = make_atari(env_id)
env = wrap_deepmind(env, frame_stack=False)
env = wrap_pytorch(env)
model = Model(env=env, config=config)
episode_reward = 0
observation = env.reset()
for frame_idx in range(1, config.MAX_FRAMES + 1):
epsilon = config.epsilon_by_frame(frame_idx)
action = model.get_action(observation, epsilon)
prev_observation=observation
observation, reward, done, _ = env.step(action)
observation = None if done else observation
model.update(prev_observation, action, reward, observation, frame_idx)
episode_reward += reward
if done:
model.finish_nstep()
model.reset_hx()
observation = env.reset()
model.save_reward(episode_reward)
episode_reward = 0
if np.mean(model.rewards[-10:]) > 19:
        plot(frame_idx, model.rewards, model.losses, model.sigma_parameter_mag, timedelta(seconds=int(timer()-start)))
break
if frame_idx % 10000 == 0:
plot(frame_idx, model.rewards, model.losses, model.sigma_parameter_mag, timedelta(seconds=int(timer()-start)))
model.save_w()
env.close()
###Output
_____no_output_____ |
benchmarks/benchmark5-hackathon.ipynb | ###Markdown
Benchmark Problem 5: Stokes Flow
###Code
from IPython.display import HTML
HTML('''{% include jupyter_benchmark_table.html num="[5]" revision=0 %}''')
###Output
_____no_output_____ |
Scikit-Learn/13-Logistic-Regression/01-Logistic Regression with Python.ipynb | ###Markdown
___ ___ Logistic Regression with PythonFor this lecture we will be working with the [Titanic Data Set from Kaggle](https://www.kaggle.com/c/titanic). This is a very famous data set and very often a student's first step in machine learning! We'll be trying to predict a classification: survival or deceased.Let's begin our understanding of implementing Logistic Regression in Python for classification.We'll use a "semi-cleaned" version of the titanic data set; if you use the data set hosted directly on Kaggle, you may need to do some additional cleaning not shown in this lecture notebook. Import LibrariesLet's import some libraries to get started!
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
The DataLet's start by reading in the titanic_train.csv file into a pandas dataframe.
###Code
train = pd.read_csv('titanic_train.csv')
train.head()
###Output
_____no_output_____
###Markdown
Exploratory Data AnalysisLet's begin some exploratory data analysis! We'll start by checking out missing data! Missing DataWe can use seaborn to create a simple heatmap to see where we are missing data!
###Code
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
###Output
_____no_output_____
###Markdown
Roughly 20 percent of the Age data is missing. The proportion of Age missing is likely small enough for reasonable replacement with some form of imputation. Looking at the Cabin column, it looks like we are just missing too much of that data to do something useful with at a basic level. We'll probably drop this later, or change it to another feature like "Cabin Known: 1 or 0". Let's continue on by visualizing some more of the data! Check out the video for full explanations of these plots; this code is just here to serve as a reference.
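As a small, hedged aside, that "Cabin Known" idea could look something like the line below (a hypothetical extra feature, not used in the rest of this notebook):

```python
# 1 if a cabin was recorded for the passenger, 0 otherwise
train['Cabin_Known'] = train['Cabin'].notnull().astype(int)
```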
###Code
sns.set_style('whitegrid')
sns.countplot(x='Survived',data=train,palette='RdBu_r')
sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Sex',data=train,palette='RdBu_r')
sns.set_style('whitegrid')
sns.countplot(x='Survived',hue='Pclass',data=train,palette='rainbow')
sns.distplot(train['Age'].dropna(),kde=False,color='darkred',bins=30)
train['Age'].hist(bins=30,color='darkred',alpha=0.7)
sns.countplot(x='SibSp',data=train)
train['Fare'].hist(color='green',bins=40,figsize=(8,4))
###Output
_____no_output_____
###Markdown
____ Cufflinks for plots___ Let's take a quick moment to show an example of cufflinks!
###Code
import cufflinks as cf
cf.go_offline()
train['Fare'].iplot(kind='hist',bins=30,color='green')
###Output
_____no_output_____
###Markdown
___ Data CleaningWe want to fill in missing age data instead of just dropping the missing age data rows. One way to do this is by filling in the mean age of all the passengers (imputation).However we can be smarter about this and check the average age by passenger class. For example:
###Code
plt.figure(figsize=(12, 7))
sns.boxplot(x='Pclass',y='Age',data=train,palette='winter')
###Output
_____no_output_____
###Markdown
We can see the wealthier passengers in the higher classes tend to be older, which makes sense. We'll use these average age values to impute based on Pclass for Age.
###Code
def impute_age(cols):
Age = cols[0]
Pclass = cols[1]
if pd.isnull(Age):
if Pclass == 1:
return 37
elif Pclass == 2:
return 29
else:
return 24
else:
return Age
###Output
_____no_output_____
###Markdown
Now apply that function!
###Code
train['Age'] = train[['Age','Pclass']].apply(impute_age,axis=1)
###Output
_____no_output_____
###Markdown
Now let's check that heat map again!
###Code
sns.heatmap(train.isnull(),yticklabels=False,cbar=False,cmap='viridis')
###Output
_____no_output_____
###Markdown
Great! Let's go ahead and drop the Cabin column and the row in Embarked that is NaN.
###Code
train.drop('Cabin',axis=1,inplace=True)
train.head()
train.dropna(inplace=True)
###Output
_____no_output_____
###Markdown
Converting Categorical Features We'll need to convert categorical features to dummy variables using pandas! Otherwise our machine learning algorithm won't be able to directly take in those features as inputs.
###Code
train.info()
sex = pd.get_dummies(train['Sex'],drop_first=True)
embark = pd.get_dummies(train['Embarked'],drop_first=True)
train.drop(['Sex','Embarked','Name','Ticket'],axis=1,inplace=True)
train = pd.concat([train,sex,embark],axis=1)
train.head()
###Output
_____no_output_____
###Markdown
Great! Our data is ready for our model! Building a Logistic Regression modelLet's start by splitting our data into a training set and test set (there is another test.csv file that you can play around with in case you want to use all this data for training). Train Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(train.drop('Survived',axis=1),
train['Survived'], test_size=0.30,
random_state=101)
###Output
_____no_output_____
###Markdown
Training and Predicting
###Code
from sklearn.linear_model import LogisticRegression
logmodel = LogisticRegression()
logmodel.fit(X_train,y_train)
predictions = logmodel.predict(X_test)
###Output
_____no_output_____
###Markdown
Let's move on to evaluate our model! Evaluation We can check precision, recall, and f1-score using the classification report!
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test,predictions))
###Output
precision recall f1-score support
0 0.81 0.93 0.86 163
1 0.85 0.65 0.74 104
accuracy 0.82 267
macro avg 0.83 0.79 0.80 267
weighted avg 0.82 0.82 0.81 267
|
frameworkHandsOn/2. intro_to_pytorch.ipynb | ###Markdown
Intro[PyTorch](https://pytorch.org/) is a very powerful machine learning framework. Central to PyTorch are [tensors](https://pytorch.org/docs/stable/tensors.html), a generalization of matrices to higher ranks. One intuitive example of a tensor is an image with three color channels: A 3-channel (red, green, blue) image which is 64 pixels wide and 64 pixels tall is a $3\times64\times64$ tensor. You can access the PyTorch framework by writing `import torch` near the top of your code, along with all of your other import statements.This guide will help introduce you to the functionality of PyTorch, but don't worry too much about memorizing it: the assignments will link to relevant documentation where necessary.
###Code
import torch
###Output
_____no_output_____
###Markdown
Why PyTorch?One important question worth asking is, why is PyTorch being used for this course? There is a great breakdown by [the Gradient](https://thegradient.pub/state-of-ml-frameworks-2019-pytorch-dominates-research-tensorflow-dominates-industry/) looking at the state of machine learning frameworks today. In part, as highlighted by the article, PyTorch is generally more pythonic than alternative frameworks, easier to debug, and is the most-used language in machine learning research by a large and growing margin. While PyTorch's primary alternative, Tensorflow, has attempted to integrate many of PyTorch's features, Tensorflow's implementations come with some inherent limitations highlighted in the article.Notably, while PyTorch's industry usage has grown, Tensorflow is still (for now) a slight favorite in industry. In practice, the features that make PyTorch attractive for research also make it attractive for education, and the general trend of machine learning research and practice to PyTorch makes it the more proactive choice. Tensor PropertiesOne way to create tensors from a list or an array is to use `torch.Tensor`. It'll be used to set up examples in this notebook, but you'll never need to use it in the course - in fact, if you find yourself needing it, that's probably not the correct answer.
###Code
example_tensor = torch.Tensor(
[
[[1, 2], [3, 4]],
[[5, 6], [7, 8]],
[[9, 0], [1, 2]]
]
)
###Output
_____no_output_____
###Markdown
You can view the tensor in the notebook by simply printing it out (though some larger tensors will be cut off)
###Code
example_tensor
###Output
_____no_output_____
###Markdown
Tensor Properties: DeviceOne important property is the device of the tensor - throughout this notebook you'll be sticking to tensors which are on the CPU. However, throughout the course you'll also be using tensors on GPU (that is, a graphics card which will be provided for you to use for the course). To view the device of the tensor, all you need to write is `example_tensor.device`. To move a tensor to a new device, you can write `new_tensor = example_tensor.to(device)` where device will be either `cpu` or `cuda`.
###Code
example_tensor.device
###Output
_____no_output_____
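As a small hedged example of the move described above (written with `'cpu'` so that it runs anywhere; you could pass `'cuda'` instead on a GPU machine):

```python
new_tensor = example_tensor.to('cpu')  # or example_tensor.to('cuda') when a GPU is available
print(new_tensor.device)
```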
###Markdown
Tensor Properties: ShapeAnd you can get the number of elements in each dimension by printing out the tensor's shape, using `example_tensor.shape`, something you're likely familiar with if you've used numpy. For example, this tensor is a $3\times2\times2$ tensor, since it has 3 elements, each of which are $2\times2$.
###Code
example_tensor.shape
###Output
_____no_output_____
###Markdown
You can also get the size of a particular dimension $n$ using `example_tensor.shape[n]` or equivalently `example_tensor.size(n)`
###Code
print("shape[0] =", example_tensor.shape[0])
print("size(1) =", example_tensor.size(1))
###Output
shape[0] = 3
size(1) = 2
###Markdown
Finally, it is sometimes useful to get the number of dimensions (rank) or the number of elements, which you can do as follows
###Code
print("Rank =", len(example_tensor.shape))
print("Number of elements =", example_tensor.numel())
###Output
Rank = 3
Number of elements = 12
###Markdown
Indexing TensorsAs with numpy, you can access specific elements or subsets of elements of a tensor. To access the $n$-th element, you can simply write `example_tensor[n]` - as with Python in general, these dimensions are 0-indexed.
###Code
example_tensor[1]
###Output
_____no_output_____
###Markdown
In addition, if you want to access the $j$-th dimension of the $i$-th example, you can write `example_tensor[i, j]`
###Code
example_tensor[1, 1, 0]
###Output
_____no_output_____
###Markdown
Note that if you'd like to get a Python scalar value from a tensor, you can use `example_scalar.item()`
###Code
example_tensor[1, 1, 0].item()
###Output
_____no_output_____
###Markdown
In addition, you can index into the ith element of a column by using `x[:, i]`. For example, if you want the top-left element of each element in `example_tensor`, which is the `0, 0` element of each matrix, you can write:
###Code
example_tensor[:, 0, 0]
###Output
_____no_output_____
###Markdown
Initializing TensorsThere are many ways to create new tensors in PyTorch, but in this course, the most important ones are: [`torch.ones_like`](https://pytorch.org/docs/master/generated/torch.ones_like.html): creates a tensor of all ones with the same shape and device as `example_tensor`.
###Code
torch.ones_like(example_tensor)
###Output
_____no_output_____
###Markdown
[`torch.zeros_like`](https://pytorch.org/docs/master/generated/torch.zeros_like.html): creates a tensor of all zeros with the same shape and device as `example_tensor`
###Code
torch.zeros_like(example_tensor)
###Output
_____no_output_____
###Markdown
[`torch.randn_like`](https://pytorch.org/docs/stable/generated/torch.randn_like.html): creates a tensor with every element sampled from a [Normal (or Gaussian) distribution](https://en.wikipedia.org/wiki/Normal_distribution) with the same shape and device as `example_tensor`
###Code
torch.randn_like(example_tensor)
###Output
_____no_output_____
###Markdown
Sometimes (though less often than you'd expect), you might need to initialize a tensor knowing only the shape and device, without a tensor for reference for `ones_like` or `randn_like`. In this case, you can create a $2\times2$ tensor as follows:
###Code
torch.randn(2, 2, device='cpu') # Alternatively, for a GPU tensor, you'd use device='cuda'
###Output
_____no_output_____
###Markdown
Basic FunctionsThere are a number of basic functions that you should know to use PyTorch - if you're familiar with numpy, all commonly-used functions exist in PyTorch, usually with the same name. You can perform element-wise multiplication / division by a scalar $c$ by simply writing `c * example_tensor`, and element-wise addition / subtraction by a scalar by writing `example_tensor + c`Note that most operations are not in-place in PyTorch, which means that they don't change the original variable's data (However, you can reassign the same variable name to the changed data if you'd like, such as `example_tensor = example_tensor + 1`)
###Code
(example_tensor - 5) * 2
###Output
_____no_output_____
###Markdown
You can calculate the mean or standard deviation of a tensor using [`example_tensor.mean()`](https://pytorch.org/docs/stable/generated/torch.mean.html) or [`example_tensor.std()`](https://pytorch.org/docs/stable/generated/torch.std.html).
###Code
print("Mean:", example_tensor.mean())
print("Stdev:", example_tensor.std())
###Output
Mean: tensor(4.)
Stdev: tensor(2.9848)
###Markdown
You might also want to find the mean or standard deviation along a particular dimension. To do this you can simply pass the number corresponding to that dimension to the function. For example, if you want to get the average $2\times2$ matrix of the $3\times2\times2$ `example_tensor` you can write:
###Code
example_tensor.mean(0)
# Equivalently, you could also write:
# example_tensor.mean(dim=0)
# example_tensor.mean(axis=0)
# torch.mean(example_tensor, 0)
# torch.mean(example_tensor, dim=0)
# torch.mean(example_tensor, axis=0)
###Output
_____no_output_____
###Markdown
PyTorch has many other powerful functions but these should be all of the PyTorch functions you need for this course outside of its neural network module (`torch.nn`). PyTorch Neural Network Module (`torch.nn`)PyTorch has a lot of powerful classes in its `torch.nn` module (usually imported as simply `nn`). These classes allow you to create a new function which transforms a tensor in a specific way, often retaining information when called multiple times.
###Code
import torch.nn as nn
###Output
_____no_output_____
###Markdown
`nn.Linear`To create a linear layer, you need to pass it the number of input dimensions and the number of output dimensions. The linear object initialized as `nn.Linear(10, 2)` will take in a $n\times10$ matrix and return an $n\times2$ matrix, where all $n$ elements have had the same linear transformation performed. For example, you can initialize a linear layer which performs the operation $Ax + b$, where $A$ and $b$ are initialized randomly when you generate the [`nn.Linear()`](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) object.
###Code
linear = nn.Linear(10, 2)
example_input = torch.randn(3, 10)
example_output = linear(example_input)
example_output
###Output
_____no_output_____
###Markdown
`nn.ReLU`[`nn.ReLU()`](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) will create an object that, when receiving a tensor, will perform a ReLU activation function. This will be reviewed further in lecture, but in essence, a ReLU non-linearity sets all negative numbers in a tensor to zero. In general, the simplest neural networks are composed of a series of linear transformations, each followed by an activation function.
###Code
relu = nn.ReLU()
relu_output = relu(example_output)
relu_output
###Output
_____no_output_____
###Markdown
`nn.BatchNorm1d`[`nn.BatchNorm1d`](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html) is a normalization technique that will rescale a batch of $n$ inputs to have a consistent mean and standard deviation between batches. As indicated by the `1d` in its name, this is for situations where you expect a set of inputs, where each of them is a flat list of numbers. In other words, each input is a vector, not a matrix or higher-dimensional tensor. For a set of images, each of which is a higher-dimensional tensor, you'd use [`nn.BatchNorm2d`](https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html), discussed later on this page.`nn.BatchNorm1d` takes an argument of the number of input dimensions of each object in the batch (the size of each example vector).
###Code
batchnorm = nn.BatchNorm1d(2)
batchnorm_output = batchnorm(relu_output)
batchnorm_output
###Output
_____no_output_____
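###Markdown
As an aside, the 2d variant mentioned above works the same way but expects image-shaped batches; a minimal sketch (the shapes here are invented purely for illustration):
###Code
batchnorm2d = nn.BatchNorm2d(3)          # one argument: the number of channels
image_batch = torch.randn(4, 3, 8, 8)    # (batch, channels, height, width)
batchnorm2d(image_batch).shape
###Output
_____no_output_____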
###Markdown
`nn.Sequential`[`nn.Sequential`](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html) creates a single operation that performs a sequence of operations. For example, you can write a neural network layer with a batch normalization as
###Code
mlp_layer = nn.Sequential(
nn.Linear(5, 2),
nn.BatchNorm1d(2),
nn.ReLU()
)
test_example = torch.randn(5,5) + 1
print("input: ")
print(test_example)
print("output: ")
print(mlp_layer(test_example))
###Output
input:
tensor([[ 2.4688, 1.0009, 1.6414, 0.2143, 0.3103],
[-0.3540, 0.5270, 0.5286, -2.4275, 1.4398],
[ 0.8454, -0.1692, 1.5860, 0.7589, 2.2362],
[-0.6587, 1.5780, 1.2099, -0.4110, -0.5216],
[ 1.3398, 0.9296, 1.7896, -1.2495, 0.1408]])
output:
tensor([[0.0000, 0.0000],
[1.6839, 0.0315],
[0.0000, 1.6012],
[0.0000, 0.4329],
[0.6200, 0.0000]], grad_fn=<ReluBackward0>)
###Markdown
OptimizationOne of the most important aspects of essentially any machine learning framework is its automatic differentiation library. OptimizersTo create an optimizer in PyTorch, you'll need to use the `torch.optim` module, often imported as `optim`. [`optim.Adam`](https://pytorch.org/docs/stable/optim.htmltorch.optim.Adam) corresponds to the Adam optimizer. To create an optimizer object, you'll need to pass it the parameters to be optimized and the learning rate, `lr`, as well as any other parameters specific to the optimizer.For all `nn` objects, you can access their parameters as a list using their `parameters()` method, as follows:
###Code
import torch.optim as optim
adam_opt = optim.Adam(mlp_layer.parameters(), lr=1e-1)
###Output
_____no_output_____
###Markdown
Training LoopA (basic) training step in PyTorch consists of four basic parts:1. Set all of the gradients to zero using `opt.zero_grad()`2. Calculate the loss, `loss`3. Calculate the gradients with respect to the loss using `loss.backward()`4. Update the parameters being optimized using `opt.step()`That might look like the following code (and you'll notice that if you run it several times, the loss goes down):
###Code
train_example = torch.randn(100,5) + 1
adam_opt.zero_grad()
# We'll use a simple loss function of mean distance from 1
# torch.abs takes the absolute value of a tensor
cur_loss = torch.abs(1 - mlp_layer(train_example)).mean()
cur_loss.backward()
adam_opt.step()
print(cur_loss)
###Output
tensor(0.7660, grad_fn=<MeanBackward0>)
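###Markdown
For example, wrapping those four steps in a loop, reusing the objects defined above, shows the loss going down over iterations (a minimal sketch, not a full training setup):
###Code
for i in range(5):
    adam_opt.zero_grad()
    cur_loss = torch.abs(1 - mlp_layer(train_example)).mean()
    cur_loss.backward()
    adam_opt.step()
    print(cur_loss.item())
###Output
_____no_output_____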
###Markdown
`requires_grad_()`You can also tell PyTorch that it needs to calculate the gradient with respect to a tensor that you created by saying `example_tensor.requires_grad_()`, which will change it in-place. This means that even if PyTorch wouldn't normally store a grad for that particular tensor, it will for that specified tensor. `with torch.no_grad():`PyTorch will usually calculate the gradients as it proceeds through a set of operations on tensors. This can often use unnecessary computation and memory, especially if you're performing an evaluation. However, you can wrap a piece of code with `with torch.no_grad()` to prevent the gradients from being calculated in that piece of code. `detach():`Sometimes, you want to calculate and use a tensor's value without calculating its gradients. For example, if you have two models, A and B, and you want to directly optimize the parameters of A with respect to the output of B, without calculating the gradients through B, then you could feed the detached output of B to A. There are many reasons you might want to do this, including efficiency or cyclical dependencies (i.e. A depends on B depends on A). New `nn` ClassesYou can also create new classes which extend the `nn` module. For these classes, all class attributes, as in `self.layer` or `self.param`, will automatically be treated as parameters if they are themselves `nn` objects or if they are tensors wrapped in `nn.Parameter` which are initialized with the class. The `__init__` function defines what will happen when the object is created. The first line of the init function of a class, for example, `WellNamedClass`, needs to be `super(WellNamedClass, self).__init__()`. The `forward` function defines what runs if you create that object `model` and pass it a tensor `x`, as in `model(x)`. If you choose the function signature `(self, x)`, then each call of the forward function gets two pieces of information: `self`, which is a reference to the object with which you can access all of its parameters, and `x`, which is the current tensor for which you'd like to return `y`.One class might look like the following:
###Code
class ExampleModule(nn.Module):
def __init__(self, input_dims, output_dims):
super(ExampleModule, self).__init__()
self.linear = nn.Linear(input_dims, output_dims)
self.exponent = nn.Parameter(torch.tensor(1.))
def forward(self, x):
x = self.linear(x)
# This is the notation for element-wise exponentiation,
# which matches python in general
x = x ** self.exponent
return x
example_model = ExampleModule(10, 2)
list(example_model.parameters())
###Output
_____no_output_____
###Markdown
And you can print out their names too, as follows:
###Code
list(example_model.named_parameters())
###Output
_____no_output_____
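###Markdown
The notes above on `torch.no_grad()` and `detach()` can be illustrated with this model (a minimal sketch using the `example_model` defined above):
###Code
eval_input = torch.randn(2, 10)
with torch.no_grad():
    # No gradient information is tracked inside this block
    eval_output = example_model(eval_input)
detached_output = example_model(eval_input).detach()  # value kept, gradient history dropped
print(eval_output.requires_grad, detached_output.requires_grad)
###Output
_____no_output_____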
###Markdown
And here's an example of the class in action:
###Code
input = torch.randn(2, 10)
example_model(input)
###Output
_____no_output_____ |
Week6/AdvML_Week6_ex2.ipynb | ###Markdown
Attributes of the fitted classifier that can be inspected: `clf.classes_`, `clf.feature_importances_`, `clf.max_features_`, `clf.n_classes_`, `clf.n_features_`, `clf.n_outputs_`, `clf.tree_`
###Code
# Randomly select the samples and features for the tree
def sample(n, k, x_train, t_train):
idx = np.random.randint(x_train.shape[0], size=n)
fidx = np.random.randint(x_train.shape[1], size=k)
x = x_train[idx, :]
x = x[:, fidx]
y = t_train[idx]
return x, y, idx, fidx
#print("Rows: ", idx, ", features ", fidx)
#print(x.shape)
#print(y.shape)
def trainTree(x_train, t_train):
clf = DecisionTreeClassifier(random_state=0)
clf = clf.fit(x_train, t_train)
return clf
#cross_val_score(clf, x_train, t_train, cv=10)
def ensureAllClasses(newPred, clf):
for i in range(10):
if i not in clf.classes_:
newPred = np.insert(newPred, i, 0, axis=1)
return newPred
# Main loop
def main(M, n, k):
pred = np.zeros(shape = (endTestIx - startTestIx, 10), dtype = 'float32')
for m in range(M):
x, y, idx, fidx = sample(n, k, x_train, t_train)
clf = trainTree(x, y)
newPred = clf.predict_proba(x_test[startTestIx:endTestIx,fidx])
newPred = ensureAllClasses(newPred, clf)
pred = np.add(pred, newPred)
pred_classes = np.argmax(pred, axis=1)
correct = pred_classes == t_test[startTestIx:endTestIx]
acc = sum(correct)/len(correct)
#print(pred_classes)
#print (acc)
return acc
Mmax = 100
n = 1000
k = 20
accs = list()
for m in range(1, Mmax):
accs.append(main(m, n, k))
plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
sns.lineplot(range(1,Mmax), accs)
plt.xlabel('Number of trees (M)')
plt.ylabel('Accuracy of predictions (%)')
plt.title('Number of trees vs. accuracy, n = {0}, k = {1}'.format(n, k))
plt.show()
M = 100
n = 1000
kmax = 200
accs = list()
for k in range(1, kmax, 10):
accs.append(main(M, n, k))
plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
sns.lineplot(range(1,kmax,10), accs)
plt.xlabel('Number of features per tree (k)')
plt.ylabel('Accuracy of predictions (%)')
plt.title('Number of features per tree vs. accuracy, M = {0}, n = {1}'.format(M, n))
plt.show()
M = 100
nmax = 5000
k = 50
accs = list()
for n in range(1, nmax, 100):
accs.append(main(M, n, k))
plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k')
sns.lineplot(range(1, nmax, 100), accs)
plt.xlabel('Number of samples per tree (n)')
plt.ylabel('Accuracy of predictions (%)')
plt.title('Number of samples per tree vs. accuracy, M = {0}, k = {1}'.format(M, k))
plt.show()
M = 100
n = 1000
k = 50
repeats = 50
accs = list()
for i in range(50):
accs.append(main(M, n, k))
avAcc = sum(accs)/len(accs)
print(avAcc)
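# For comparison, a minimal sketch using scikit-learn's built-in random forest; this
# assumes x_train, t_train, x_test, t_test, startTestIx and endTestIx from earlier cells.
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=100, max_features=20, random_state=0)
rf_clf.fit(x_train, t_train)
rf_acc = rf_clf.score(x_test[startTestIx:endTestIx], t_test[startTestIx:endTestIx])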
###Output
0.9221999999999996
|
clase_5_pandas/4_checkpoint.ipynb | ###Markdown
Pandas documentationhttps://pandas.pydata.org/pandas-docs/stable/index.html DatasetThe dataset we will use is a very condensed version of data from the Encuesta Permanente de Hogares (a survey carried out by INDEC). It is a continuous survey whose main goal is to generate information about how the labor market works.We will only use a few variables (age, education level, number of hours worked, task qualification and labor income) and only some cases (the employed, i.e. those who worked at least one hour in the week before the survey).This is the same dataset we will use in the in-person class; in these exercises we aim to get familiar with it and review a few topics. We import the pandas library and assign pd as its alias:
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Exercise 1Let's look up the syntax of the `read_csv` method in the pandas documentation and read the data from the file /M1/CLASE_04/Data/data_filt.csv into a `DataFrame` called data. This file has some numeric columns and others of string type. The columns are:* ch06: int, age* nivel_ed: string, education level* htot: int, total hours worked in the period* calif: string, task qualification* p47t: int, income
###Code
location = '../Data/data_filt.csv'
data = pd.read_csv(location, encoding='latin1')
data
###Output
_____no_output_____
###Markdown
Exercise 2Let's review the concept of the index and columns of a `DataFrame`Let's access the index (row names) of the `DataFrame` dataLet's access the column names of the `DataFrame` data
###Code
print(data.index)
print(data.columns)
###Output
RangeIndex(start=0, stop=23448, step=1)
Index(['ch06', 'nivel_ed', 'htot', 'calif', 'p47t'], dtype='object')
###Markdown
Now let's modify the index of data so that the index values no longer match the positions, which will let us notice the differences in the exercises that follow.
###Code
data.index = data.index + 7
print(data.index)
data.head(5)
###Output
RangeIndex(start=7, stop=23455, step=1)
###Markdown
Exercise 3Let's review the use of `loc` and `iloc`* `loc` lets us access an element by its index label* `iloc` lets us access an element by its positionLet's read the fourth row of data with `loc` and with `iloc`How do we access the index value of the fourth row?
###Code
print(data.loc[10])
print('----------')
print(data.iloc[3])
###Output
ch06 52
nivel_ed 1_H/Sec inc
htot 90
calif 2_Op./No calif.
p47t 11000
Name: 10, dtype: object
----------
ch06 52
nivel_ed 1_H/Sec inc
htot 90
calif 2_Op./No calif.
p47t 11000
Name: 10, dtype: object
###Markdown
Exercise 4Let's review the use of `loc` combined with boolean masks.We want to build a `DataFrame` object with the records whose age is less than 15 or greater than or equal to 70.
###Code
data.describe()
edades_extremas = data[(data['ch06'] < 15) | (data['ch06'] > 70)]
edades_extremas.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 234 entries, 10 to 23332
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 ch06 234 non-null int64
1 nivel_ed 234 non-null object
2 htot 234 non-null int64
3 calif 234 non-null object
4 p47t 234 non-null float64
dtypes: float64(1), int64(2), object(2)
memory usage: 11.0+ KB
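###Markdown
The statement of this exercise mentions `loc` combined with boolean masks; the same filter written with `.loc` explicitly looks like this (a minimal equivalent sketch):
###Code
edades_extremas_loc = data.loc[(data['ch06'] < 15) | (data['ch06'] > 70)]
edades_extremas_loc.shape
###Output
_____no_output_____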
###Markdown
Exercise 5Let's compute some metrics on a `DataFrame` object* Average age* Maximum hours worked* Median income All of these values could be seen directly and more compactly with `data.describe()`, but below they are computed with the specific methods
###Code
print('Promedio de edad:')
round(data['ch06'].mean(), 2)
print('Máximo de horas trabajadas:')
data['htot'].max()
print('Mediana de ingresos:')
data['p47t'].median()
###Output
Mediana de ingresos:
|
Chapter_1/clase_2_loops.ipynb | ###Markdown
LOGIC AND LOOPSNow that we have talked a bit about organized data structures (such as vectors and lists) and about the use of functions, needs arise that cannot be met with what has been learned so far. Python has tools to solve problems that, as we will see, come from the more complex needs of programmers. Tools for carrying out long traversals automatically or for defining conditions in the course of a process. LogicIt is to be expected that programming involves logic, above all because computing is founded on it. As far as Python is concerned, the concept used is [**Boolean algebra**](https://es.wikipedia.org/wiki/%C3%81lgebra_de_Boole), or **Boolean logic**. This is nothing more than a mathematical framework for defining the logic of operators and gates in computing.For the inexperienced programmer, Boolean logic amounts to interpreting any statement that can be reduced to true (**True**) or false (**False**). These are values we have already seen earlier in the course, corresponding to variables of type *boolean*. As an example, let's see what happens in Python when, instead of doing arithmetic, we try to compare elements or make assertions:
###Code
1>2 # 1 mayor que 2
500 < 5*10 # 500 menor que 5*10 = 50
100 > 10 # 100 mayor que 10
###Output
_____no_output_____
###Markdown
Here the *greater than* (>) and *less than* (<) operators are used in **logical assertions** that are answered with a boolean value. When a false assertion is stated, the value returned is *False*, and likewise when a true assertion is stated the answer is *True*. There are more comparison operators, among them:
###Code
100 == 200 #100 es igual a 200
100 == 10*10 #100 = 10*10
300 >= 50*10 #300 mayor o igual que 50*10
300 <= 301 #300 menor o igual a 301
100 != 200 #100 diferente a 200
###Output
_____no_output_____
###Markdown
In each of these cases a single statement is evaluated, i.e. only one **condition** is checked. But what happens if we want to evaluate more than one statement? And if we could evaluate more than one... how would the truth value of each one be *combined* to determine the overall truth value we want?Enter two more logical operators: **or** and **and**. Logical OROR refers to the *condition that is true if at least one of its parts is true*; that is, it is true *if and only if* some of the statements that make it up is true. An everyday example: *"Eating apples or pears makes me happy."* My hypothetical *happiness* holds if I receive apples **OR** pears. It holds if at least one is true. In more... computational terms:
###Code
(100 > 200) or (50 > 100) #100 mayor que 200 O 50 mayor que 100 (ambas son falsas)
(100 > 200) or (500 == 5*100) #100 mayor que 200 O 500 igual que 5*100 (una verdadera una falsa)
100 == 10*10 or 4<10 # 100 igual que 10*10 o 4 menor que 10 (ambas verdaderas)
###Output
_____no_output_____
###Markdown
**[GP]** Although the parentheses around each statement are not required, it is better to keep them for code organization and to avoid future confusion. The OR *logic gate* can also be written with a vertical bar:
###Code
(100 > 200) | (500 == 5*100) #100 mayor que 200 O 500 igual que 5*100 (una verdadera una falsa)
###Output
_____no_output_____
###Markdown
**IMPORTANT**: remember that boolean values can be represented in binary notation and vice versa using type converters.
###Code
bool(0) #valor booleano de 0
bool(1) #valor booleano de 1
int(True) #valor entero de True
int(False) #valor entero de False
###Output
_____no_output_____
###Markdown
Logical ANDUnlike OR, AND refers to the *condition that is true if all of its parts are true*; that is, it is true *if and only if* all of the statements that make it up are true.Example: *"Eating apples and pears makes me happy."* My hypothetical happiness holds if I receive apples AND pears. It holds only if both are true.
###Code
(100 > 200) and (50 > 100) #100 mayor que 200 Y 50 mayor que 100 (ambas son falsas)
(100 > 200) and (500 == 5*100) #100 mayor que 200 Y 500 igual que 5*100 (una verdadera una falsa)
100 == 10*10 and 4<10 # 100 igual que 10*10 Y 4 menor que 10 (ambas verdaderas)
###Output
_____no_output_____
###Markdown
The AND logic gate can also be written with an '&':
###Code
(100 == 10*10) & (4<10)
###Output
_____no_output_____
###Markdown
Checking more kinds of statementsNot only comparisons between quantities can be checked; other assertions can also be made and verified as true or not:
###Code
lista_rock = ["Iggy Pop","Axel Rose","Jim Morrison","Jimmy Hendrix"] #lista de prueba
"Iggy Pop" in lista_rock #Iggy está en la lista
###Output
_____no_output_____
###Markdown
**What do you think I just asked Python?** Indeed! I checked whether an element exists inside the list. This is a tool that will be useful to us later on. The **in** keyword refers to the existence of an element inside an *iterable*, which for now we will think of as an object with a **countable** number of values. Lists, arrays, dictionaries and *map* objects are examples of iterables.
###Code
"Kurt Cobain" in lista_rock #Kurt no está en la lista
dict_elementos = {"Hidrogeno":1, "Helio":2, "Litio":3, "Berilio":4, "Boro":5} #dicionario de elementos
"Boro" in dict_elementos #Boro es un elemento en el diccionario
2 in dict_elementos #a pesar de que 2 es un valor asociado a un elemento del diccionario
#no es un elemento de diccionario en si mismo
###Output
_____no_output_____
###Markdown
Another statement we can check is an assertion about a variable's type:
###Code
type(5) is int #se introuce la afirmacion is
type(5)==int #que al parecer....es equivalente a ==
###Output
_____no_output_____
###Markdown
The **is** operator asserts that two names point to the same object in memory. **==** asserts that two values are equal. Recall the slice notation we saw when lists were introduced:
###Code
lista_A = [2,4,6,8] #se define la lista A
lista_B = lista_A #La lista B se define como la misma list A
lista_C = lista_A[:] #la lista C se define como un slice de principio a fin de la lista A
###Output
_____no_output_____
###Markdown
that is, list C is defined as a list **different** from list A (a copy), even though it has the same values:
###Code
print(lista_A)
print(lista_B)
print(lista_C)
lista_A is lista_B
lista_A == lista_B
###Output
_____no_output_____
###Markdown
In the case of list B, which is defined as the very same list A, both statements are equivalent. Now let's see what happens with list C:
###Code
lista_A is lista_C
lista_A == lista_C
###Output
_____no_output_____
###Markdown
**REMEMBER** that slicing creates **partial copies** of the list being *sliced*. This means a new object is created in memory even when the *whole* list is copied, as in this case. It is important to know when this happens, because defining one variable as equal to another often makes it refer to the same object in memory, as with lists A and B. This sometimes causes problems: if you want to modify list B, you may end up modifying list A, since they are the same object, whereas modifying list C only changes C because it is a different object. Logical NOTLogical statements can also be negated; for that there is the logical operator **not**, whose only job is to negate the statement in question.
###Code
not True
a = 5
b = 10
a > b # a es mayor a b, es decir 5 > 10 (Falso)
not a > b # negar 5 > 10
###Output
_____no_output_____
###Markdown
ConditionalsAs their name suggests, **conditionals** check that a condition is met before running a specific part of the process. In programming, the truth *(True or False)* of a statement is checked, and if it holds, a subordinate block is executed. The IF statementThis kind of conditional **sets apart** a block of code as a subordinate process and runs it **if and only if** the stated condition is met. An everyday example: *"If I take too long leaving the house, I will miss the bus."* The event of *missing the bus* depends solely on the condition *taking too long to leave home*. If I missed the bus it is because I left late, and likewise, if I leave late then I will miss the bus.To use the *"if"* conditional, you write the word if followed by the condition to be met. The subordinate code starts on the following line, and the whole block is **indented**, precisely because it is subordinate:
###Code
#se escribe la condicion, se finaliza con DOS PUNTOS para cerrar la sentencia IF
if(4 < 5): # 4 es menor a 5
print("4 es menor a 5") # si la condicion se cumple , se imprime el mensaje
#se escribe la condicion, se finaliza con DOS PUNTOS para cerrar la sentencia IF
# se puede omitir el parentesis
if 4 > 5: # 4 es mayor a 5
print("4 es mayor a 5") # si la condicion se cumple , se imprime el mensaje
###Output
_____no_output_____
###Markdown
As you can see, the subordinate code (in this case the print) runs only if the condition is met.**[GP]** It is better to use parentheses when declaring the statement, to keep things tidy. As we will see shortly, conditional assertions can be compound, and keeping track of the order of operations in those cases is harder without parentheses. Parentheses delimit each statement separately and make interpretation easier.**IMPORTANT**: do NOT forget the colon at the end of the statement, and indent the subordinate lines of code. It is also worth noting that writing Python is almost like writing instructions in English. The readability of Python code is a gift from the god of every religion.There can also be the case of *"nested ifs"*:
###Code
lista_prueba = [2,4,6,8,0]
if(2 in lista_prueba):
#los siguientes 2 if estan subordinados al if de la linea anterior (note la indentacion hacia la derecha)
if(4 in lista_prueba):
print("estan 2 y 4") #esta linea esta subordinada al if de 4, que a su vez esta subordinado al if de 2
if(3 in lista_prueba):
#este if NO esta subordinado al if de 4, pero si al de 2. el de 4 y 3 estan al mismo nivel
print("estan 2 y 3")
###Output
estan 2 y 4
###Markdown
For the subordinate code of the IF to run, it is enough for its condition to be true, even if the condition is compound:
###Code
if ( 100 == 10*10 and 6 in lista_prueba): # 100 = 10*10 y 6 en lista_prueba
print("ambas se cumplen!")
if(4<2 or 0 in lista_prueba): #4 menor a 2 o 0 en lista_prueba
print("alguna se cumle!")
###Output
alguna se cumle!
###Markdown
Let's look at a more complex compound statement:
###Code
if ( ( 4 > 2 ) and ((6 in lista_prueba) or (5 in lista_prueba)) ) :
print("...algo se cumple")
###Output
...algo se cumple
###Markdown
And if there are more cases...What happens when you want to evaluate two or more cases? Or when you want to do something when the condition is not met? Well, that is what **else** is for. This element is used when you also want to perform an operation or task when the logical condition of the *if* does not hold:
###Code
def funcion_A (n):
if (n < 10):
print("menor a 10")
else:
print("mayor a 10 ")
funcion_A(8) # se cumple la condicion del if, entonces se ejecuta el codigo subordinado
funcion_A(19) # NO se cumple la condición del if, entonces se cumple el codigo subordinado de else.
###Output
mayor a 10
###Markdown
The function above has a big problem that you may already have noticed, namely the following:
###Code
funcion_A(10)
###Output
mayor a 10
###Markdown
When 10 is passed as the argument, it is classified as greater than 10, because the condition for being classified as "less than 10" is only that the number be strictly less. Any other case (greater or equal) is classified as "greater than 10". So there are two options: the first is to use a *nested if*:
###Code
def funcion_A (n):
if (n < 10):
print("menor a 10")
else:
if(n == 10):
print("es 10")
else:
print("mayor a 10 ")
funcion_A(8)
funcion_A(10)
funcion_A(12)
###Output
menor a 10
es 10
mayor a 10
###Markdown
The *nested if* seems to work wonderfully, but it is messy for such a simple task. The second option is to collapse the *nesting* caused by the *if* subordinate to the *else* into what is called an *else if* conditional or, using Python's contraction for this statement, the **elif** conditional:
###Code
def funcion_A (n):
if (n < 10): # si n es menor a 10 entonces ...
print("menor a 10")
elif (n==10): # si no se cumple, entonces miremos si es igual a 10 . . .
print("es 10")
else: # y si no se cumple ninguna, pues . , ,
print("mayor a 10")
funcion_A(8)
funcion_A(10)
funcion_A(12)
###Output
menor a 10
es 10
mayor a 10
###Markdown
**IMPORTANT**: keep in mind that you cannot have an *elif* or *else* in the code without having declared an if before it, since these conditionals are auxiliaries to the *if* for the case in which it does not hold.You can chain as many dependent *elif*s in cascade as you like.
###Code
def funcion_A (n):
if (n < 10): # si n es menor a 10 entonces ...
print("menor a 10")
elif (n==10): # si no se cumple, entonces miremos si es igual a 10 . . .
print("es 10")
elif(n==12): # si no se han cumplido las anteriores, miremos si es igual a 12...
print("es 12")
elif(n==19): # si no se han cumplido las anteriores, miremos si es igual a 19...
print("es 19")
else: # y si no se cumple ninguna, pues . , ,
print("mayor a 10")
funcion_A(8)
funcion_A(10)
funcion_A(12)
funcion_A(19)
funcion_A(22)
###Output
menor a 10
es 10
es 12
es 19
mayor a 10
###Markdown
Another important point is to always know whether what you want is an *elif* or simply another *if*. The difference lies in exclusivity. With *if-elif*, the *elif* condition is evaluated **only** when the previous conditions have failed. With another *if*, that condition is evaluated regardless of whether the previous *if*s failed or not. For example:
###Code
def funcion_A(n):
if(n < 10): #si n es menor a 10
print("menor a 10")
elif(2*n < 30): #si no es asi, mirar si el doble de n es menor a 30
print("doble menor a 30")
else: #si no se cumple nada, enonces ...
print("numero grande")
def funcion_B(n):
if(n < 10): #si n es menor a 10
print("menor a 10")
if(2*n < 30): #si el doble de n es menor a 30
print("doble menor a 30")
else: #si no se cumple, enonces ...
print("numero grande")
funcion_A(8)
funcion_A(12)
funcion_A(18)
funcion_B(8) # este caso va a pasar por los dos if (n<10 y 2*n<30)
funcion_B(12)
funcion_B(18)
###Output
menor a 10
doble menor a 30
doble menor a 30
numero grande
###Markdown
As you can see, there is indeed a difference between using one or the other. This is important to stress, since many beginner mistakes come from not knowing the level of **exclusivity** between conditionals and the level of separation desired for each case.**IMPORTANT**: remember that the *elif* conditions do NOT necessarily have to be defined on the same variable as the *if* they respond to. The factorial functionThe factorial of an integer is the product of all the integers less than or equal to that number and greater than zero. It is written with an exclamation mark next to the integer in question. That is,$5! = 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 120$ Also, $0! = 1$With this in mind, one way to write the general function is: $n! = \prod_{i = 1}^{n} i$If you look closely at the factorial function, in particular the case of $5!$, it is easy to see that:$5! = 5 \cdot 4! = 5 \cdot 4 \cdot 3! =$. . .Or, stated more formally:$n! = n \cdot (n-1)!$Functions that can be defined in terms of themselves are called **recursive functions**, and in programming they are an extremely powerful tool: **recursion**. Continuing with the factorial example, and having already mentioned in the previous section the possibility of calling a function inside the function itself, let's see how this can be combined with logic to build a useful **recursive** function.
###Code
def factorial(n):
if( n == 0):
return 1 #el factorial de 0 es 1
else:
return n * factorial(n-1) # si n es diferente de 0, retorne n por el factorial de (n-1)
###Output
_____no_output_____
###Markdown
Now, let's test the function:
###Code
print(factorial(0)) # 0! = 1
print(factorial(4)) # 4! = 24
print(factorial(5)) # 0! = 120
###Output
1
24
120
###Markdown
LOOPSIt is finally time to learn what is, in my opinion, Python's most powerful tool: **loops**.This tool makes it possible to *iterate* over different values in an orderly way. What do I mean by 'iterate'? Well, it means carrying out a task several times. Again and again and again.Just as with logic, the lines of code written for loops read a lot like coherent English sentences, which makes programming much easier when formulating loop statements.But if a loop performs a task repeatedly, when should the iteration stop? At what point should it halt? That depends on **logical conditions** defined in the loop declaration, which **must** hold for the iteration to continue. This is discussed in depth in the following definitions. The while loopAs its English name says, **while** is a loop that keeps running **as long as** a condition holds. As already mentioned, these conditions are logical statements that must be true (*True*) for the task to keep running. Let's look at a basic example:
###Code
i = 0 #Variable i
while i<10 : #mientras i siga siendo menor a 10
print(i) #imprimir i en cada iteracion
i = i+1 #y sumarle 1 a i en cada iteracion
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
As we can see, the *while* statement stopped the moment *i* became equal to 10. At that point, i no longer satisfies the logical statement *i < 10* and the loop stops.Keep in mind that the loop condition can use **any** kind of logical statement. A slightly more bizarre example:```pythonwhile True: print(10)```Think carefully about what I am proposing here. The logical statement in this case is **True**, which is obviously always true. This implies an infinite loop that would print the number 10 to the console indefinitely. If you want to try it on your own computer, it is at your own risk.**Remember** that processes can be *killed* from the console with **Ctrl + C** (Linux)Other examples of *while* loops are:
###Code
lista_prueba = [0,2,4,6,8,10]
i = 0
#en cada iteracion se aumenta i en uno y se verifica si su doble esta en la lista
while (2*i in lista_prueba):
print("El doble de ", i , "esta en la lista.")
i = i+1
i = 0
string_prueba = "El eclipse lunar"
while ( i < len(string_prueba) ):
print(string_prueba[i:])
i=i+1
###Output
El eclipse lunar
l eclipse lunar
eclipse lunar
eclipse lunar
clipse lunar
lipse lunar
ipse lunar
pse lunar
se lunar
e lunar
lunar
lunar
unar
nar
ar
r
###Markdown
**[GP]** It is also convenient to use parentheses in the loop condition. Again, this avoids confusion and keeps the code a bit tidier. The for loopThe **for** loop is a little more complex than the while loop, since its stopping condition is already built into the statement. It is generally used to traverse **iterables** or containers in an orderly way or following some logic. The first useful example is iterating over numeric values; for that we will use the **range** *iterable*:
###Code
for i in range(10,15):
print(i)
###Output
10
11
12
13
14
###Markdown
**range** defines a range of integers from the first argument (inclusive) to the second argument (exclusive). If the first value is not given, it is assumed to be 0:
###Code
for i in range(10):
print(i)
###Output
0
1
2
3
4
5
6
7
8
9
###Markdown
**IMPORTANT**: remember that the *range* iterable is of class *range* and is therefore not such an easy object to manipulate, unless it is converted to a list or used with iteration constructs such as *while* or *for*:
###Code
print(range(10))
print( list(range(10)) )
###Output
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
**for** is used to iterate over containers or arrays:
###Code
lista_prueba = ["a","b","c","d","e"]
for letra in lista_prueba: #para cada elemento de la lista, se hace ...
print(letra)
import numpy as np
espacio_lineal = np.linspace(1,2,15) #espacio lineal de 15 valores distribuidos de 1 a 2
for num in espacio_lineal:
print(2*num)
###Output
2.0
2.142857142857143
2.2857142857142856
2.4285714285714284
2.571428571428571
2.7142857142857144
2.857142857142857
3.0
3.142857142857143
3.2857142857142856
3.4285714285714284
3.571428571428571
3.7142857142857144
3.8571428571428568
4.0
###Markdown
The *for* loop can be used to generate values based on an arbitrary mathematical rule, which is an enormously useful tool when dealing with complex numerical problems. It can also be used to traverse arrays by index.
###Code
lista_prueba = []
for i in range(50): #recorre el loop desde 0 (incluyendole) hasta 50 (sin incluirle)
lista_prueba.append( 3*(i+1)-2) # se le agrega el valor de 3*(i-1) - 2 a la lista para cada i entre 0 y 49
print(lista_prueba)
###Output
[1, 4, 7, 10, 13, 16, 19, 22, 25, 28, 31, 34, 37, 40, 43, 46, 49, 52, 55, 58, 61, 64, 67, 70, 73, 76, 79, 82, 85, 88, 91, 94, 97, 100, 103, 106, 109, 112, 115, 118, 121, 124, 127, 130, 133, 136, 139, 142, 145, 148]
###Markdown
*Loops* can also be **nested** inside other loops, creating *nested loops*. These have many uses, but the most common ones involve **traversing arrays, matrices and other data structures**:
###Code
MAT = np.zeros([5,5]) #matriz de ceros de 5 x 5
print ("ANTES = ")
print(MAT)
for i in range(5):
for j in range(5): # el loop sobre j se cumple de 0 a 5 para cada iteracion del loop sobre i
MAT[i,j] = i+j #para cada coordenada i,j de la matriz, se asigna el valor i+j
print("DESPUES = ")
print(MAT)
###Output
ANTES =
[[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0.]]
DESPUES =
[[0. 1. 2. 3. 4.]
[1. 2. 3. 4. 5.]
[2. 3. 4. 5. 6.]
[3. 4. 5. 6. 7.]
[4. 5. 6. 7. 8.]]
|
chap5/chapter_05_exercise5.ipynb | ###Markdown
Use Matplotlib's function hist along with NumPy's functions random.rand and random.randn to create the histogram graphs shown in Fig. "Histograms of random numbers". Random numbers`np.random.rand(num)` creates an array of `num` floats **uniformly** distributed on the interval from 0 to 1.`np.random.randn(num)` produces a **normal (Gaussian)** distribution of `num` random numbers with a mean of 0 and a standard deviation of 1. They are distributed according to$$P(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$$`np.random.randint(low, high, num)` produces a **uniform** random distribution of `num` integers between `low` (inclusive) and `high` (exclusive).
###Code
import numpy as np
import matplotlib.pyplot as plt
N_points = 10000
n_bins = 10
x1 = np.random.randn(N_points)
x2 = np.random.rand(N_points)
# create plot
plt.figure(1, figsize=(10, 6))
n1, bins1, patches1 = plt.hist(x1,
n_bins * 4,
density=True,  # 'normed' was removed in newer Matplotlib; 'density' is its replacement
facecolor='g',
alpha=0.5,
edgecolor='k')
n2, bins2, patches2 = plt.hist(x2,
n_bins,
density=True,
facecolor='b',
alpha=0.5,
edgecolor='k')
plt.xlabel('x')
plt.ylabel('P(x)')
plt.grid(linestyle=':', linewidth=0.5)
# display plot on screen
plt.show()
? plt.hist
? plt.grid
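# np.random.randint, described above, is not used in the plot; a minimal illustration of it:
np.random.randint(0, 10, size=5)  # five uniform integers drawn from [0, 10)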
###Output
_____no_output_____ |
More-DL/pure_Tensorflow_2.0/tensorflow_v2/notebooks/3_NeuralNetworks/recurrent_network.ipynb | ###Markdown
Recurrent Neural Network ExampleBuild a recurrent neural network (LSTM) with TensorFlow 2.0.- Author: Aymeric Damien- Project: https://github.com/aymericdamien/TensorFlow-Examples/ RNN OverviewReferences:- [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997. MNIST Dataset OverviewThis example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).To classify images using a recurrent neural network, we consider every image row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.More info: http://yann.lecun.com/exdb/mnist/
###Code
from __future__ import absolute_import, division, print_function
# Import TensorFlow v2.
import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np
# MNIST dataset parameters.
num_classes = 10 # total classes (0-9 digits).
num_features = 784 # data features (img shape: 28*28).
# Training Parameters
learning_rate = 0.001
training_steps = 1000
batch_size = 32
display_step = 100
# Network Parameters
# MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.
num_input = 28 # number of sequences.
timesteps = 28 # timesteps.
num_units = 32 # number of neurons for the LSTM layer.
# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Reshape images into sequences of 28 rows with 28 pixels each (28*28).
x_train, x_test = x_train.reshape([-1, 28, 28]), x_test.reshape([-1, 28, 28])
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.
# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
# Create LSTM Model.
class LSTM(Model):
# Set layers.
def __init__(self):
super(LSTM, self).__init__()
# RNN (LSTM) hidden layer.
self.lstm_layer = layers.LSTM(units=num_units)
self.out = layers.Dense(num_classes)
# Set forward pass.
def call(self, x, is_training=False):
# LSTM layer.
x = self.lstm_layer(x)
# Output layer (num_classes).
x = self.out(x)
if not is_training:
# tf cross entropy expect logits without softmax, so only
# apply softmax when not training.
x = tf.nn.softmax(x)
return x
# Build LSTM model.
lstm_net = LSTM()
# Cross-Entropy Loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
# Convert labels to int 64 for tf cross-entropy function.
y = tf.cast(y, tf.int64)
# Apply softmax to logits and compute cross-entropy.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
# Average loss across the batch.
return tf.reduce_mean(loss)
# Accuracy metric.
def accuracy(y_pred, y_true):
# Predicted class is the index of highest score in prediction vector (i.e. argmax).
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
# Adam optimizer.
optimizer = tf.optimizers.Adam(learning_rate)
# Optimization process.
def run_optimization(x, y):
# Wrap computation inside a GradientTape for automatic differentiation.
with tf.GradientTape() as g:
# Forward pass.
pred = lstm_net(x, is_training=True)
# Compute loss.
loss = cross_entropy_loss(pred, y)
# Variables to update, i.e. trainable variables.
trainable_variables = lstm_net.trainable_variables
# Compute gradients.
gradients = g.gradient(loss, trainable_variables)
# Update weights following gradients.
optimizer.apply_gradients(zip(gradients, trainable_variables))
# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
# Run the optimization to update W and b values.
run_optimization(batch_x, batch_y)
if step % display_step == 0:
pred = lstm_net(batch_x, is_training=True)
loss = cross_entropy_loss(pred, batch_y)
acc = accuracy(pred, batch_y)
print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
###Output
step: 100, loss: 1.663173, accuracy: 0.531250
step: 200, loss: 1.034144, accuracy: 0.750000
step: 300, loss: 0.775579, accuracy: 0.781250
step: 400, loss: 0.840327, accuracy: 0.781250
step: 500, loss: 0.344379, accuracy: 0.937500
step: 600, loss: 0.884484, accuracy: 0.718750
step: 700, loss: 0.569674, accuracy: 0.875000
step: 800, loss: 0.401931, accuracy: 0.906250
step: 900, loss: 0.530193, accuracy: 0.812500
step: 1000, loss: 0.265871, accuracy: 0.968750
|
notebooks/laugh_detection/laugh-vggish.ipynb | ###Markdown
Laugh Detection (VGGish)Based on https://labs.ideo.com/2018/06/15/how-to-build-your-own-laugh-detector/ Dependencies
###Code
!git clone https://github.com/ideo/LaughDetection.git
!ls
!cd LaughDetection ; pip install -r requirements.txt
import os
os.chdir('/content/LaughDetection')
!pwd
!ls audioset
!wget https://storage.googleapis.com/audioset/vggish_model.ckpt
!mv vggish_model.ckpt audioset/vggish_model.ckpt
import keras
import numpy as np
import tensorflow as tf
import glob
from audioset import vggish_embeddings
tf.reset_default_graph()
audio_embedder = vggish_embeddings.VGGishEmbedder(None)
from google.colab import drive
drive.mount('/content/drive')
processed_embedding = audio_embedder.convert_audio_to_embedding('/content/drive/My Drive/cs231n-project/datasets/emotiw/val/Audio/325_27.wav')
embedding_final = np.expand_dims(processed_embedding, axis=0)
model = keras.models.load_model('Models/LSTM_ThreeLayer_100Epochs.h5')
prediction = model.predict(embedding_final)
prediction
!pip install pydub
from pydub import AudioSegment
newAudio = AudioSegment.from_wav("/content/drive/My Drive/cs231n-project/datasets/emotiw/val/Audio/325_27.wav")
newAudio = newAudio[-1000:]
newAudio.export('newSong.wav', format="wav")
!cp /content/drive/'My Drive'/cs231n-project/datasets/emotiw/Val_labels.txt .
processed_embedding = audio_embedder.convert_audio_to_embedding('newSong.wav')
embedding_final = np.expand_dims(processed_embedding, axis=0)
model = keras.models.load_model('Models/LSTM_ThreeLayer_100Epochs.h5')
prediction = model.predict(embedding_final)
prediction
!grep "289_34" Val_labels.txt
processed_embedding = audio_embedder.convert_audio_to_embedding('/content/drive/My Drive/cs231n-project/datasets/emotiw/val/Audio/289_34.wav')
embedding_final = np.expand_dims(processed_embedding, axis=0)
model = keras.models.load_model('Models/LSTM_ThreeLayer_100Epochs.h5')
prediction = model.predict(embedding_final)
prediction
###Output
_____no_output_____
###Markdown
Get the Data
###Code
!wget 'https://storage.googleapis.com/cs231n-emotiw/data/train-full.zip'
!unzip -q train-full.zip
!cp /content/drive/'My Drive'/cs231n-project/datasets/emotiw/Train_labels.txt .
vids = []
train_map = {}
with open("Train_labels.txt", "r") as f:
i = 0
for line in f:
if i > 0:
vid, label = line.strip().split(" ")
vids.append(vid)
train_map[vid] = int(label)
i += 1
from tqdm import tqdm
model = keras.models.load_model('Models/LSTM_ThreeLayer_100Epochs.h5')
total_preds = []
for vid in tqdm(vids):
processed_embedding = audio_embedder.convert_audio_to_embedding(f"/content/drive/My Drive/cs231n-project/datasets/emotiw/train/audio/{vid}.wav")
embedding_final = np.expand_dims(processed_embedding, axis=0)
prediction = model.predict(embedding_final)
total_preds.append(prediction)
len(total_preds)
actual_preds = [float(total_preds[x][0][0]) for x in range(len(total_preds))]
actual_preds[10]
for i in range(10):
print(f"{i / 10} {np.count_nonzero(np.array(actual_preds) > i / 10)}")
pred_labels = []
actual_labels = []
for i in range(len(vids)):
pred_label = actual_preds[i] > 0.8
pred_labels.append(pred_label)
actual_label = train_map[vids[i]] == 1
actual_labels.append(actual_label)
pred_labels = np.array(pred_labels)
actual_labels = np.array(actual_labels)
num_correct = np.count_nonzero(pred_labels == actual_labels)
print(f"{num_correct/len(pred_labels)}")
import pickle
with open('train_laugh_prob.pkl', 'wb') as handle:
pickle.dump({
"actual_preds": actual_preds,
"vids": vids
}, handle)
!cp train_laugh_prob.pkl /content/drive/'My Drive'/cs231n-project/datasets/emotiw/
val_vids = []
val_map = {}
with open("Val_labels.txt", "r") as f:
i = 0
for line in f:
if i > 0:
vid, label = line.strip().split(" ")
val_vids.append(vid)
val_map[vid] = int(label)
i += 1
!pip install moviepy
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_audio
total_val_preds = []
for vid in tqdm(val_vids):
ffmpeg_extract_audio(f"/content/drive/My Drive/cs231n-project/datasets/emotiw/val/{vid}.mp4", f"{vid}.wav")
processed_embedding = audio_embedder.convert_audio_to_embedding(f"{vid}.wav")
embedding_final = np.expand_dims(processed_embedding, axis=0)
prediction = model.predict(embedding_final)
total_val_preds.append(prediction)
total_val_preds
actual_val_preds = [float(total_val_preds[x][0][0]) for x in range(len(total_val_preds))]
len(actual_val_preds)
import pickle
with open('val_laugh_prob.pkl', 'wb') as handle:
pickle.dump({
"actual_preds": actual_val_preds,
"vids": val_vids
}, handle)
!cp val_laugh_prob.pkl /content/drive/'My Drive'/cs231n-project/datasets/emotiw/
!cp /content/drive/'My Drive'/cs231n-project/datasets/emotiw/TEST.zip .
!unzip TEST.zip
!cp /content/drive/'My Drive'/cs231n-project/datasets/emotiw/Test_labels.txt .
test_vids = []
test_map = {}
with open("Test_labels.txt", "r") as f:
i = 0
for line in f:
if i > 0:
vid, label = line.strip().split(" ")
test_vids.append(vid)
test_map[vid] = int(label)
i += 1
from moviepy.video.io.ffmpeg_tools import ffmpeg_extract_audio
total_test_preds = []
for vid in tqdm(test_vids):
ffmpeg_extract_audio(f"Test/{vid}.mp4", f"{vid}.wav")
processed_embedding = audio_embedder.convert_audio_to_embedding(f"{vid}.wav")
embedding_final = np.expand_dims(processed_embedding, axis=0)
prediction = model.predict(embedding_final)
total_test_preds.append(prediction)
actual_test_preds = [float(total_test_preds[x][0][0]) for x in range(len(total_test_preds))]
len(total_test_preds)
import pickle
with open('test_laugh_prob.pkl', 'wb') as handle:
pickle.dump({
"actual_preds": actual_test_preds,
"vids": test_vids
}, handle)
!cp test_laugh_prob.pkl /content/drive/'My Drive'/cs231n-project/datasets/emotiw/
###Output
_____no_output_____ |
src/数据清洗篇/工具介绍/pandas/pandas的函数操作.ipynb | ###Markdown
Function operations in pandasThanks to Python's native support for functional programming, and to the excellent vectorized computation of NumPy on which pandas is built, pandas can evaluate expressions in a vectorized, Universal-Function-like way. The examples in this article again use the [iris]() dataset
###Code
import pandas as pd
iris_data = pd.read_csv("source/iris.csv")
iris_data[:5]
###Output
_____no_output_____
###Markdown
> Example: compute the information entropy of the three iris classes
###Code
import scipy as sp
slogs = lambda x:sp.log(x)*x
entropy = lambda x:sp.exp((slogs(x.sum())-x.map(slogs).sum())/x.sum())
iris_data.groupby("class").agg(entropy)
###Output
_____no_output_____
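###Markdown
Depending on the SciPy version, the top-level aliases `scipy.log`/`scipy.exp` used above may be deprecated or unavailable; the same entropy calculation can also be written with NumPy directly (a minimal sketch):
###Code
import numpy as np
slogs_np = lambda x: np.log(x) * x
entropy_np = lambda x: np.exp((slogs_np(x.sum()) - x.map(slogs_np).sum()) / x.sum())
iris_data.groupby("class").agg(entropy_np)
###Output
_____no_output_____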
###Markdown
BroadcastingBroadcasting refers to an operation between a vector and a scalar, where every element of the vector receives the same operation; pandas supports this kind of operation
###Code
data1 = iris_data[:10].copy()
data1
data1*10
data1["sepal_length"]*10
###Output
_____no_output_____
###Markdown
Using NumPy universal functions
###Code
import numpy as np
np.exp(data1["sepal_length"])
f_npexp = np.frompyfunc(lambda x :np.exp(x)+1,1,1)
f_npexp(data1["sepal_length"])
###Output
_____no_output_____ |
archive/.ipynb_checkpoints/newegg_webscrape_042020-checkpoint.ipynb | ###Markdown
WebScraping NewEgg.com
###Code
# Import dependencies
from bs4 import BeautifulSoup as soup
import requests
import pymongo
import numpy as np
import pandas as pd
# Initialize PyMongo to work with MongoDBs
# conn = 'mongodb://localhost:27017'
# client = pymongo.MongoClient(conn)
# Define database
# db = client.newegg_laptops_db
# # Define collection
# collection = db.apple
# URL of page to be scraped
#apple_url = 'https://www.newegg.com/p/pl?N=100006740+50001759&Order=RELEASE'
laptops_home_url = 'https://www.newegg.com/p/pl?N=100006740&page=1&order=RELEASE'
# Retrieve page with the requests module
response = requests.get(laptops_home_url)
response
# Create BeautifulSoup object; parse with 'lxml'
#page_soup = soup(response.text, 'lxml')
page_soup = soup(response.text, 'lxml')
# grabs each individual apple product on the page (had to use 'inspect' from chrome broswer)
#title = page_soup.title
# This one is suppose to be better for looping thru because targeting the exact containers
containers = page_soup.find_all("div", class_="item-container")
#this just targets the entire HTML
#class_item_titles = page_soup.find_all("a", class_="item-title")
# pull title from the head
#print(title)
# Check how many objects it found (the total number of containers)
print(len(containers))
# for x in containers:
# print(x)
#print(class_item_titles[0].text)
# Here you learned how to target one specific thing and grab it
# item_names = []
# counter = 0
# for item in class_item_titles:
# counter += 1
# item_names.append(item)
# #print(f"{counter}) | {item.text}")
# This information on how the strucutre of HTML will help set up the loop
contain = containers[0]
contain
contain.a
contain.a.div
contain.a.img
for item in contain.a.children:
print(item)
###Output
_____no_output_____
###Markdown
Good practice of using: '.contents' - turns tags and nested tags into a list
###Code
# Contents will turn it into a list, enabling you to target and iterate thru
# use ctrl f to find ','
contain.contents
# using .children turns it into an iterable list
contain.contents[5].children
# for i in contain.contents[5].children:
# print(i)
# goal is target the "title='Apple'" within the img tag.
#contain.contents[5].a.img
contain.contents[5].a
contain.find_all('a', class_='item-brand')
# key lesson: targeting within a tag, then treat it like a dictionary
#contain.contents[5].a.img["title"]
# tried this below to extract the title, may not work.
# contain.contents[5].find_all('img').contents # this doesn't work because you can't
# treat a string as a list
contain.find_all('a', class_='item-brand')[0].img['title']
# Key lesson: is using contents twice, and counting indexes to target the item you want
# to scrape
contain.contents[5].contents[7]
# practice
contain.contents[5].contents[7]['class']
# practice
contain.contents[5].contents[7]['href']
contain.find_all('a', class_="item-title")
# Target the text and store this one
#contain.contents[5].contents[7].text
contain.find_all('a', class_="item-title")[0].text
contain.contents[5].contents[17].contents[3].contents[5].contents
# Extract Dollars
#contain.contents[5].contents[17].contents[3].contents[5].contents[3].text
#dollars = contain.contents[5].contents[17].contents[3].contents[5].contents[3].text
contain.find_all('li', class_="price-current")[0].text.split()[0]
# for x in li:
# print(x.text)
# Extract Cents
# contain.contents[5].contents[17].contents[3].contents[5].contents[4].text
# ignore this because cap
#cents = contain.contents[5].contents[17].contents[3].contents[5].contents[4].text
# Extract Shipping - do strip() to eliminate white spaces
#contain.contents[5].contents[17].contents[3].contents[11].text.strip()
#shipping = contain.contents[5].contents[17].contents[3].contents[11].text.strip()
# Shipping
contain.find_all('li', class_='price-ship')[0].text.strip()
# use tools in BS4 to create a loop; figure out how, do a few SQL problems
# figure out how to scrape images into Mongo DB
contain.find_all('a', class_="item-brand")#[0].img["title"]
# Build the loop to extract: brand, Title of the product, price-dollar, price-cents,
# shipping
#apple_sales_dict = {}
#counter = 0
laptop_brands = []
laptop_models = []
laptop_prices = []
laptop_shipping = []
for con in containers:
brand_name = con.find_all('a', class_="item-brand")[0].img["title"]
laptop_brands.append(brand_name)
#print(brand_name)
prd_title = con.find_all('a', class_="item-title")[0].text
laptop_models.append(prd_title)
price = con.find_all('li', class_="price-current")[0].text.split()[0]
laptop_prices.append(price)
    shipping = con.find_all('li', class_='price-ship')[0].text.strip()
laptop_shipping.append(shipping)
df = pd.DataFrame({
'brand': laptop_brands,
'model_listing': laptop_models,
'price': laptop_prices,
'shipping': laptop_shipping
})
#df.to_csv(f'')
counter = 0
df
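# As a possible next step, the scraped table could be persisted to CSV or pushed into
# the MongoDB collection whose setup is commented out near the top of this notebook.
# Left commented here since the collection is not actually created; the CSV filename
# is only a placeholder:
# df.to_csv('newegg_laptops.csv', index=False)
# collection.insert_many(df.to_dict('records'))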
contain.contents[5].a.img["title"]
print(len(containers))
print(len(laptop_models))
print(len(laptop_brands))
# laptops = {'brand_name': [], 'prd_title': []}
# for con in containers:
# # Extract Brand
# brand_name = con.contents[5].a.img["title"]
# laptops['brand_name'] = brand_name
# prd_title = con.find_all('a', class_="item-title")[0].text
# laptops['prd_title'] = prd_title
# for k,v in laptops.items():
# print(k + ' | ' + v)
###Output
_____no_output_____ |
examples/molecule-sampling/notebooks/fit_gap_potential_maml.ipynb | ###Markdown
Fit a GAP Potential. Explore how we can fit a [GAP](https://arxiv.org/pdf/1502.01366.pdf) potential using [MAML](https://github.com/materialsvirtuallab/maml) and [QUIP](https://libatoms.github.io/GAP/index.html)
###Code
%matplotlib inline
from matplotlib import pyplot as plt
from mcdemo.lfa.gap.quippy import fit_gap
from sklearn.model_selection import train_test_split
from functools import partial
from tqdm import tqdm
from ase import Atoms
import pandas as pd
import numpy as np
import json
import os
###Output
_____no_output_____
###Markdown
Load in the Dataset. Get the dataset from the previous example.
###Code
data = pd.read_pickle('atoms.pkl.gz')
print(f'Loaded {len(data)} training examples')
###Output
Loaded 256 training examples
###Markdown
Make a train-test split
###Code
train_data, test_data = train_test_split(data, train_size=0.9, shuffle=True)
###Output
_____no_output_____
###Markdown
Test Model with Some Default Parameters. Use a 90/10 train/test split.
###Code
%%time
p = fit_gap(train_data['atoms'], train_data['energy'], None, cutoff=6, n_sparse=256, n_max=8, l_max=8, use_forces=False)
%%time
pred_y = test_data['atoms'].apply(p.get_potential_energy)
print(f'MAE: {(pred_y - test_data["energy"]).abs().mean()} Ha')
fig, ax = plt.subplots(figsize=(3.5, 3.5))
ax.scatter(test_data['energy'] - test_data['energy'].min(), pred_y - test_data['energy'].min())
lims = ax.get_xlim()
ax.set_xlim(lims)
#ax.set_ylim(lims)
ax.plot(lims, lims, 'k--')
ax.set_xlabel('$\Delta E$, DFT (Ha)')
ax.set_ylabel('$\Delta E$, ML (Ha)')
###Output
_____no_output_____
###Markdown
Optimize the Hyperparameters. We have a few key parameters to fit, including the complexity of the SOAP integrals (defined by $n_{max}$ and $l_{max}$) and the GAP model (defined by the number of points)
###Code
subtrain_data, val_data = train_test_split(train_data, test_size=0.1)
n_max = [2, 4, 6, 8]
l_max = [2, 4, 6, 8]
n_grid, l_grid = np.meshgrid(n_max, l_max)
n_grid = n_grid.flatten()
l_grid = l_grid.flatten()
grid_mae = []
for n, l in tqdm(zip(n_grid.flatten(), l_grid.flatten())):
p = fit_gap(subtrain_data['atoms'], subtrain_data['energy'], subtrain_data['forces'].values,
n_sparse=256, n_max=n, l_max=l, cutoff=6)
pred_y = val_data['atoms'].apply(p.get_potential_energy)
grid_mae.append(np.abs(pred_y - val_data['energy']).mean())
fig, ax = plt.subplots()
l = ax.scatter(n_grid, l_grid, c=grid_mae)
fig.colorbar(l)
###Output
_____no_output_____
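###Markdown
A quick follow-up sketch, using only the variables already defined above, to read off the best grid point from the search:
###Code
# index of the lowest validation MAE over the (n_max, l_max) grid
best = int(np.argmin(grid_mae))
print('best (n_max, l_max):', n_grid[best], l_grid[best], '| validation MAE:', grid_mae[best])
###Output
_____no_output_____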
###Markdown
Just cranking the $n$ and $l$ seems to do the trick. Make a Learning Curve: Plot the accuracy as a function of the number of training set entries.
###Code
results = []
for ts in tqdm([2, 8, 32, 128]):
subset = train_data.sample(ts)
# Fit
p = fit_gap(subset['atoms'], subset['energy'], subset['forces'].values, cutoff=6, n_sparse=256, n_max=8, l_max=8)
#
pred_y = test_data['atoms'].apply(p.get_potential_energy)
mae = (pred_y - test_data["energy"]).abs().mean()
results.append({
'train_size': ts,
'mae': mae,
})
###Output
100%|██████████| 4/4 [01:19<00:00, 19.84s/it]
###Markdown
Plot the performance
###Code
results = pd.DataFrame(results)
fig, ax = plt.subplots()
ax.loglog(results['train_size'], results['mae'], '--o')
ax.set_xlabel('N Train')
ax.set_ylabel('MAE (Ha)')
###Output
_____no_output_____ |
AM207_HW8_2018.ipynb | ###Markdown
Homework 8
**Harvard University**
**Spring 2018**
**Instructors: Rahul Dave**
**Due Date:** Friday, March 30th, 2018 at 11:00am
**Instructions:**
- Upload your iPython notebook containing all work to Canvas.
- Structure your notebook and your work to maximize readability.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pymc3 as pm
import seaborn as sns
import pandas as pd
###Output
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
C:\Users\Shaan Desai\Anaconda3\envs\am207\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Problem 1: Understanding Yelp Review Data As a HumanIn this course, we've spent a lot of time learning algorithms for performing inference on complex models and we've spent time using these models to make decisions regarding our data. But in nearly every assignment, the model for the data is specified in the problem statement. In real life, the creative and, arguably, much more difficult task is to start with a broadly defined goal and then to customize or create a model which will meet this goal in some way. Problem 1 is atypical in that it does not involve any programming or (necessarily) difficult mathematics/statistics. The process of answering these questions *seriously* will however give you an idea of how one might create or select a model for a particular application and your answers will help you with formalizing the model if and when you're called upon to do so.***Grading:*** *We want you to make a genuine effort to mold an ambiguous and broad real-life question into a concrete data science or machine learning problem without the pressure of getting the "right answer". As such, we will grade your answer of Problem 1 on a pass/fail basis. Any reasonable answer that demonstrates actual effort will be given a full grade.*We've compiled for you a fairly representative selection of [Yelp reviews](./yelp_reviews.zip) for a (now closed) sushi restaurant called Ino's Sushi in San Francisco. Read the reviews and form an opinion regarding the various qualities of Ino's Sushi. Answer the following:1. If the task is to summarize the quality of a restaurant in a simple and intuitive way, what might be problematic with simply classifying this restaurant as simply "good" or "bad"? Justify your answers with specific examples from the dataset.2. For Ino's Sushi, categorize the food and the service, separately, as "good" or "bad" based on all the reviews in the dataset. Be as systematic as you can when you do this. (**Hint:** Begin by summarizing each review. For each review, summarize the reviewer's opinion on two aspects of the restaurant: food and service. That is, generate a classification ("good" or "bad") for each aspect based on what the reviewer writes.) 3. Identify statistical weaknesses in breaking each review down into an opinion on the food and an opinion on the service. That is, identify types of reviews that make your method of summarizing the reviewer's optinion on the quality of food and service problemmatic, if not impossible. Use examples from your dataset to support your argument. 4. Identify all the ways in which the task in 2 might be difficult for a machine to accomplish. That is, break down the classification task into simple self-contained subtasks and identify how each subtask can be accomplished by a machine (i.e. which area of machine learning, e.g. topic modeling, sentiment analysis etc, addressess this type of task).5. Describe a complete pipeline for processing and transforming the data to obtain a classification for both food and service for each review. Answers1.The problem with rating the restaurant as good or bad means simplifying the model. We need to ask 'what is good/bad?' is it the service, the quality of food, the environment etc. Some examples of this:- Surya G. has given the restaurant a poor rating. He complains about the service BUT, the 'liver was succulent' and 'tuna was very good' indicating that the food is good.- Karen L. has given the restaurant a 5star. 
She says the food is excellent but the service makes you uncomfortable.- Tony L gives it a poor rating, not even because of service or food but rather pricing.- Ling C even says her review isn't about the food just about the service.In essence, there are many factors that make up 'good' and 'bad'.2.
###Code
thoughts = [['Surya G','Good','Bad'],['Karen L','Good','Bad'],['Tony L','Bad','Bad'],['Kristen B','Good','Bad'],['Sylvia L','Good','Good'],['Youna K','Good','Bad'],['Alison C','Good','Good'],['Michael L','Good','Good'],['Ling C','Bad','Bad'],['Maile N','Good','Good']]
df = pd.DataFrame(np.array(thoughts),columns=['Reviewer','Food','Service'])
df
###Output
_____no_output_____
###Markdown
3.In some instances people didn't even eat at the place. In others, when they did eat there was a mixed review. In others the 'bad' service is almost argued as being part of the experience. Here are some examples:-Ling C - her review doesn't even address the quality of the food. This makes it hard to classify it as good or bad.-Youna K - says 'if your water is running low accept for what it is' suggests the service might not be good but isn't conclusive.-Tony L - doesn't even eat at the restaurant but says the service is bad 4.After carefully reading some online sources such as: https://www.researchgate.net/publication/252067764_Classification_of_Customer_Reviews_based_on_Sentiment_Analysis, I have found that this is essentially an NLP problem which has been solved before.The framework begins by topic modeling and then finding word counts and 'vectorizing' within specific topics, then splitting this into a train and test set. The model would be a classification with some sort of binomial e.g. MultinomialNB from sklearn.Many issues would arise in the classification:- Reviews aren't consistent so you might get some reviews with food and service as topics in the model and you might get food and price with topics in other models.- Assuming we have the same topics we could get issues with frequency counts. For example, there might be many counts of the word bad indicating that both food and service are bad even though the reviewer might be referencing one case.- Within each topic it is possible that the author is neutral or reserved and it is unclear how our model will perform in this situation. 5.- topic model- find how much the topic dominates a sentence (e.g. 60% or more)- if so, run sentiment analysis - do this for all sentences- tally the ratings- if topic modeling doesn't return the classes we are interested in then set the type to unknown (something we should have done in q2)Something similar to this would be great: https://blog.insightdatascience.com/topic-modeling-and-sentiment-analysis-to-pinpoint-the-perfect-doctor-6a8fdd4a3904 Problem 2: My Sister-In-Law's Baby Cousin Tracy ...Wikipedia describes the National Annenberg Election Survey as follows -- "National Annenberg Election Survey (NAES) is the largest academic public opinion survey conducted during the American presidential elections. It is conducted by the Annenberg Public Policy Center at the University of Pennsylvania." In the file [survey.csv](./survey.csv) we provide the following data from the 2004 National Annenberg Election Survey: `age` -- the age of the respondents, `numr` -- the number of responses, and `knowlgbtq` -- the number of people at the given age who have at least one LGBTQ acquaintance. We want you to model how age influences likelihood of interaction with members of the LGBTQ community in three ways. 1. Using pymc3, create a bayesian regression model (either construct the model directly or use the glm module) with the same feature and dependent variable. Plot the mean predictions for ages 0-100, with a 2-sigma envelope.2. Using pymc3, create a 1-D Gaussian Process regression model with the same feature and dependent variables. Use a squared exponential covariance function. Plot the mean predictions for ages 0-100, with a 2-sigma envelope.3. How do the models compare? Does age influence likelihood of acquaintance with someone LGBTQ? 
For Bayesian Linear Regression and GP Regression, how does age affect the variance of the estimates? For GP Regression, we can model the likelihood of knowing someone LGBTQ as a product of binomials -- one binomial distribution per age group. $$p(y_a | \theta_a, n_a) = Binom( y_a, n_a, \theta_a)$$ where $y_a$ (i.e. `knowlgbtq`) is the observed number of respondents who know someone lgbtq at age $a$, $n_a$ (i.e. `numr`) is the number of trials and $\theta_a$ is the rate parameter for having an lgbtq acquaintance at age $a$. Using the Gaussian approximation (http://en.wikipedia.org/wiki/Binomial_distribution#Normal_approximation) to approximate the Binomial since `numr` is large, you can simply use a GP posterior with the error for each measurement to be given using this approximation.
###Code
df = pd.read_csv('survey.csv')
# plot of the data
plt.scatter(df.iloc[:,0],df.iloc[:,2]/df.iloc[:,1])
plt.xlabel('age')
plt.ylabel('likelihood of knowlgbtq')
###Output
_____no_output_____
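###Markdown
As a side note on the Gaussian approximation mentioned in the problem statement, the per-age rate and its approximate measurement noise can be computed directly from the observed counts; a minimal sketch using the `knowlgbtq` and `numr` columns described above:
###Code
# Gaussian approximation to the Binomial: y_a/n_a ~ N(theta_a, theta_a*(1-theta_a)/n_a)
theta_hat = df['knowlgbtq'] / df['numr']                      # observed rate per age group
sd_hat = np.sqrt(theta_hat * (1 - theta_hat) / df['numr'])    # per-measurement noise for the GP
###Output
_____no_output_____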
###Markdown
Bayesian Regression
###Code
x = df.iloc[:,0]
y = df.iloc[:,2]/df.iloc[:,1]
data = dict(x=x, y=y)
with pm.Model() as br:
    # specify glm and pass in data. The resulting linear model, its likelihood,
    # and all its parameters are automatically added to our model.
pm.GLM.from_formula('y ~ x', data)
trace = pm.sample(10000,tune=1000, stepper = pm.NUTS) # draw posterior samples using NUTS sampling
pm.traceplot(trace);
burnin = 1000
thin = 2
pm.autocorrplot(trace[burnin::thin]);
n_ppredsamps=1000
agegrid = np.arange(0,100)
#meanage = df.age.mean()
ppc_samples=np.zeros((len(agegrid), n_ppredsamps))
for j in range(n_ppredsamps):
k=np.random.randint(2*len(trace))#samples with replacement from both chains
musamps = trace['Intercept'][k] + trace['x'][k] * (agegrid)
sigmasamp = trace['sd'][k]
ppc_samples[:,j] = np.random.normal(musamps, sigmasamp)
ppc_samples_hpd = pm.hpd(ppc_samples.T)
#scatter data points
plt.scatter(df.age, data['y'], c='b', alpha=0.9)
plt.plot(agegrid,np.mean(ppc_samples,axis=1))
plt.fill_between(agegrid, ppc_samples_hpd[:,0], ppc_samples_hpd[:,1], color='green', alpha=0.2)
plt.xlabel('age')
plt.ylabel('ratio')
plt.title('Posterior Predictive Envelope')
###Output
_____no_output_____
###Markdown
Gaussian Process
###Code
with pm.Model() as model1:
# priors on the covariance function hyperparameters
#l = pm.Gamma('l', alpha=2, beta=1)
l = pm.Uniform('l', 0., 10.)
# uninformative prior on the function variance
s2_f = pm.HalfCauchy('s2_f', beta=10)
# uninformative prior on the noise variance
s2_n = pm.HalfCauchy('s2_n', beta=10)
# pval = data['y'].values
# s2_n = pval*(1-pval)/df.numr.values
# # covariance functions for the function f and the noise
f_cov = s2_f**2 * pm.gp.cov.ExpQuad(1, l)
mgp = pm.gp.Marginal(cov_func=f_cov)
y_obs = mgp.marginal_likelihood('y_obs', X=data['x'].values.reshape(-1,1), y=data['y'].values, noise=s2_n, is_observed=True)
data['y'].values
with model1:
#step=pm.Metropolis()
trace = pm.sample(5000, tune=2000, nuts_kwargs={'target_accept':0.85})
#trace = pm.sample(10000, tune=2000, step=step)
x_pred = np.linspace(0,100,1000)
with model1:
fpred = mgp.conditional("fpred", Xnew = x_pred.reshape(-1,1), pred_noise=False)
ypred = mgp.conditional("ypred", Xnew = x_pred.reshape(-1,1), pred_noise=True)
gp_samples = pm.sample_ppc(trace, vars=[fpred, ypred], samples=200)
gp_samples['fpred'].shape
meanpred = gp_samples['fpred'].mean(axis=0)
mu_hpd = pm.hpd(gp_samples['fpred'])
with sns.plotting_context("poster"):
[plt.plot(x_pred, y, color="gray", alpha=0.02) for y in gp_samples['fpred'][::5,:]]
# overlay the observed data
[plt.plot(df.age.values, data['y'].values, 'ok', ms=5, label="train pts")]
[plt.plot(x_pred, meanpred, 'b', ms=10, label="predicted")]
[plt.fill_between(x_pred, mu_hpd[:,0], mu_hpd[:,1], color='g', alpha=0.5)]
plt.xlabel("x");
plt.ylabel("f(x)");
plt.title("Posterior predictive distribution");
plt.xlim(0,100);
plt.ylim(0,1)
plt.legend();
###Output
_____no_output_____
###Markdown
Yes it is inversely correlated, namely the higher the age the lower the likelihood of knowing someone lgbtq. We can see that both models capture this trend. The gaussian process fits the points really well. This is because of the underlying GP and selection of functions that fit through the given data points. We can see that with GP's, the further you are from your dataset the more variance you get. With our simple linear regression we are getting the same variance at all ages. Problem 3: Like a Punch to the Kidneys In this problem we will work with the US Kidney Cancer Dataset (by county), a dataset of kidney cancer frequencies across the US over 5 years on a per county basis. The kidney cancer data can be found [here](./kcancer.csv).A casual inspection of the data might suggest that we independently model cancer rates for each of the provided counties. Our experience in past homeworks/labs/lectures (in particular when we delved into the Rat Tumors problem) suggests potential drawbacks of conclusions based on raw cancer rates. Addressing these drawbacks, let's look use a Bayesian model for our analysis of the data. In particular you will implement an Empircal Bayes model to examine the adjusted cancer rates per county.Let $N$ be the number of counties; let $y_j$ the number of kidney cancer case for the $j$-th county, $n_j$ the population of the $j$-th county and $\theta_j$ the underlying kidney cancer rate for that county. We can construct a Bayesian model for our data as follows:\begin{aligned}y_j &\sim Poisson(5 \cdot n_j \cdot \theta_j), \quad j = 1, \ldots, N\\\theta_j &\sim Gamma(\alpha, \beta), \quad j = 1, \ldots, N\end{aligned}where $\alpha, \beta$ are hyper-parameters of the model.- (1) Implement Empirical Bayes via moment matching as described as follows. Consider the **prior-predictive** distribution (also called the evidence i.e. the denominator normalization in bayes theorem) of the model: $p(y) = \int p(y \vert \theta) p(\theta) d \theta$. Why the prior-predictive? Because technically we "haven't seen" individual county data yet. For this model, the prior-predictive is a negative binomial. Matching the mean and the variance of the negative binomial to that from the data, you can find appropriate expressions for $\alpha$ and $\beta$. (Hint: You need to be careful with the $5n_j$ multiplier.) - (2) Produce a scatter plot of the raw cancer rates (pct mortality) vs the county population size. Highlight the top 300 raw cancer rates in red. Highlight the bottom 300 raw cancer rates in blue. Finally, on the same plot add a scatter plot visualization of the posterior mean cancer rate estimates (pct mortality) vs the county population size, highlight these in green.- (3) Using the above scatter plot, explain why using the posterior means from our model to estimate cancer rates is preferable to studying the raw rates themselves.(**Hint:** You might also find it helpful to follow the Rat Tumor example.)(**Note:** Up until now we've had primarily thought about the posterior predictive: $\int p( y \vert \theta) p(\theta \vert D) d\theta$. The posterior predictive and the prior predictive can be somewhat connected. In conjugate models such as ours, the two distributions have the same form.) Question 1
###Code
df = pd.read_csv('kcancer.csv')
###Output
_____no_output_____
###Markdown
So I looked up the Gamma-Poisson mixture to figure out what the parameters of the negative binomial would be. We want to scale the gamma distribution by $5n_j$ to account for it in our model. We can do this because $Y = kX$ is also gamma-distributed; thus the scaled version has: $\beta$ is $\beta$, $\alpha$ is $5n_j \alpha$ (original). Therefore, our gamma distribution has params:
$$\text{shape} = \alpha = r, \qquad \beta = \text{rate} = \frac{5n_j(1-p)}{p}, \qquad \lambda = 5 n_j \theta$$
Now the mean and variance of the negative binomial are:
$$ \mu = \frac{rp}{1-p}, \qquad \sigma^2 = \frac{rp}{(1-p)^2} $$
but we know $ r = \alpha$ and $ p = \frac{5n_j}{5n_j+\beta} $, therefore:
$$ \mu = \frac{5 n_j \alpha}{5n_j+\beta} \cdot \frac{5n_j+\beta}{\beta} = \frac{5n_j\alpha}{\beta}$$
$$ \sigma^2 = \frac{5n_j\alpha}{\beta} \cdot \frac{5n_j + \beta}{\beta} = \frac{25n_j^2\alpha}{\beta^2} + \frac{5n_j\alpha}{\beta} $$
We can divide by $n$:
$$ \mu = \frac{5\alpha}{\beta}, \qquad \sigma^2 = \frac{25\alpha}{\beta^2} + \frac{5\alpha}{n \beta} = \frac{\mu^2}{\alpha} + \frac{\mu}{n_{avg}} $$
These values are at a per-county level since they depend on $n$. If we treat all counties as coming from the same distribution of populations, then we can find the average $n$ of these and use the $\mu$ and $\sigma$ from all counties as well. We can do another treatment and divide by $5n_j$, which gives a per-county rate. If we take the mean of this population, we can replace our $n$ by the mean rather than the county-level $n$'s.
###Code
y = df['dc']
npop = df['pop']
mu = np.mean(y/(npop))
var = np.var(y/(npop))
n_avg = df['pop'].mean()
alpha = mu**2/(var-mu/n_avg)
beta = 5*alpha/mu
beta
###Output
_____no_output_____
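###Markdown
(For reference, the posterior means computed in Question 2 below follow from Gamma-Poisson conjugacy: $\theta_j \mid y_j \sim Gamma(\alpha + y_j,\ \beta + 5n_j)$, so $E[\theta_j \mid y_j] = \frac{\alpha + y_j}{\beta + 5n_j}$, and the 5-year posterior mortality estimate is $5 \cdot \frac{\alpha + y_j}{\beta + 5n_j}$.)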
###Markdown
Question 2
###Code
newdf = df.sort_values(by=['pct_mortality'],ascending=False)
newdf1 = df.sort_values(by=['pct_mortality'],ascending=True)
post_means = 5*(alpha + df['dc'].values)/(beta+5*(df['pop'].values))
plt.figure(figsize=(15,8))
plt.scatter(df['pop'].values,df['pct_mortality'].values,alpha=0.01,c='gray')
plt.scatter(newdf['pop'].values[:300],newdf['pct_mortality'].values[:300],alpha=0.5,c='r')
plt.scatter(newdf1['pop'].values[:300],newdf1['pct_mortality'].values[:300],alpha=0.5,c='b')
plt.scatter(df['pop'].values,post_means,alpha=0.5,c='g')
plt.ylim([-0.00001,0.00029])
# plt.xlim([-100000,2000000])
plt.xscale('log')
plt.xlabel('logged population')
plt.ylabel('pct mortality')
###Output
_____no_output_____
###Markdown
Question 3 The mean rates give us a a better sense of the underlying trend in the rates. Namely that smaller populations have higher rates than larger ones. Furthermore, it helps us avoid some of the inherent biases of the data. Problem 4: In the Blink of a Bayesian IrisWe've done classification before, but the goal of this problem is to introduce you to the idea of classification using Bayesian inference. Consider the famous *Fisher flower Iris data set* a multivariate data set introduced by Sir Ronald Fisher (1936) as an example of discriminant analysis. The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, you will build a model to predict the species. For this problem only consider two classes: **virginica** and **not-virginica**. The iris data can be obtained [here](./iris.csv).Let $(X, Y )$ be our dataset, where $X=\{\vec{x}_1, \ldots \vec{x}_n\}$ and $\vec{x}_i$ is the standard feature vector corresponding to an offset 1 and the four components explained above. $Y \in \{0,1\}$ are the scalar labels of a class. In other words the species labels are your $Y$ data (virginica = 0 and virginica=1), and the four features -- petal length, petal width, sepal length and sepal width -- along with the offset make up your $X$ data. The goal is to train a classifier, that will predict an unknown class label $\hat{y}$ from a new data point $x$. Consider the following glm (logistic model) for the probability of a class:$$ p(y) = \frac{1}{1+e^{-x^T \beta}} $$(or $logit(p) = x^T \beta$ in more traditional glm form)where $\beta$ is a 5D parameter to learn. Then given $p$ at a particular data point $x$, we can use a bernoulli likelihood to get 1's and 0's. This should be enough for you to set up your model in pymc3. (Other Hints: also use theano.tensor.exp when you define the inverse logit to go from $\beta$ to $p$, and you might want to set up $p$ as a deterministic explicitly so that pymc3 does the work of giving you the trace).Use a 60-40 stratified (preserving class membership) split of the dataset into a training set and a test set. (Feel free to take advantage of scikit-learn's `train_test_split`).1. Choose a prior for $\beta \sim N(0, \sigma^2 I) $ and write down the formula for the normalized posterior $p(\beta| Y,X)$. Since we dont care about regularization here, just use the mostly uninformative value $\sigma = 10$.2. Find the MAP and mean estimate for the posterior on the training set.3. Implement a sampler to sample from this posterior of $\beta$. Generate samples of $\beta$ and plot the sequence of $\beta$'s and histograms for each $\beta$ component.
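(For reference, with the Gaussian prior and Bernoulli likelihood above, the normalized posterior asked for in part 1 takes the form $$p(\beta \mid Y, X) = \frac{\mathcal{N}(\beta; 0, \sigma^2 I)\,\prod_{i=1}^{n} p_i^{y_i}(1-p_i)^{1-y_i}}{\int \mathcal{N}(\beta; 0, \sigma^2 I)\,\prod_{i=1}^{n} p_i^{y_i}(1-p_i)^{1-y_i}\, d\beta}, \qquad p_i = \frac{1}{1+e^{-x_i^T \beta}}.$$)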
###Code
df = pd.read_csv('iris.csv')
df['class'] = (df['class'] == ' Iris-virginica')*1
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:4], df.iloc[:,4], test_size=0.4, random_state=42,stratify=df.iloc[:,4])
X_train['intercept'] = 1
X_test['intercept']= 1
X_train.shape
from pymc3 import Normal, Bernoulli, sample, Model # Import relevant distributions
from pymc3.math import invlogit
with Model() as iris:
beta = pm.Normal('beta', 0, sd=10,shape=5)
# Calculate probabilities of death
pvals = pm.Deterministic('pvals',var=pm.math.sigmoid( pm.math.dot(beta,X_train.values.T)))
# Data likelihood
flower_type = pm.Bernoulli('flower_type', p=pvals, observed=y_train.values)
trace = pm.sample(10000)
###Output
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
C:\Users\Shaan Desai\Anaconda3\envs\am207\lib\site-packages\pymc3\model.py:384: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
if not np.issubdtype(var.dtype, float):
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [beta]
###Markdown
MAP Estimates
###Code
map_estimate = pm.find_MAP(model=iris)
map_estimate['beta']
###Output
logp = -22.181, ||grad|| = 0.0036647: 100%|███████████████████████████████████████████| 43/43 [00:00<00:00, 827.58it/s]
###Markdown
Trace plots with Histograms
###Code
pm.traceplot(trace)
###Output
_____no_output_____ |
doc/jupyter_execute/examples/drift-detection/nvidia-triton-cifar10/cifar10_drift.ipynb | ###Markdown
Cifar10 Drift Detection with NVIDIA TritonIn this example we will deploy an image classification model along with a drift detector trained on the same dataset. For in depth details on creating a drift detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/cd_ks_cifar10.html) as well.Prequisites: * [Knative eventing installed](https://knative.dev/docs/install/) * Ensure the istio-ingressgateway is exposed as a loadbalancer (no auth in this demo) * [Seldon Core installed](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html) * Ensure you install for istio, e.g. for the helm chart `--set istio.enabled=true` * **A cluster with 2 NVIDIA GPUs available compatible with Triton Inference Server.** * Tested with P100 GPUs on GKE (eu-west-1d) Tested on GKE 1.18 K8S Knative 0.21 and Istio
###Code
!pip install -r requirements_notebook.txt
###Output
_____no_output_____
###Markdown
Ensure gateway installed
###Code
!kubectl apply -f ../../../notebooks/resources/seldon-gateway.yaml
###Output
_____no_output_____
###Markdown
Setup Resources
###Code
!kubectl create namespace cifar10drift
%%writefile broker.yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: default
namespace: cifar10drift
!kubectl apply -f broker.yaml
%%writefile event-display.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-display
namespace: cifar10drift
spec:
replicas: 1
selector:
matchLabels: &labels
app: hello-display
template:
metadata:
labels: *labels
spec:
containers:
- name: event-display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display
---
kind: Service
apiVersion: v1
metadata:
name: hello-display
namespace: cifar10drift
spec:
selector:
app: hello-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
!kubectl apply -f event-display.yaml
###Output
deployment.apps/hello-display unchanged
service/hello-display unchanged
###Markdown
Create the SeldonDeployment image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
###Code
%%writefile cifar10.yaml
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: triton-cifar10
namespace: cifar10drift
spec:
predictors:
- componentSpecs:
- metadata: {}
spec:
containers:
- image: nvcr.io/nvidia/tritonserver:21.08-py3
name: cifar10
resources:
limits:
cpu: "1"
memory: 20Gi
nvidia.com/gpu: "1"
requests:
cpu: "1"
memory: 10Gi
nvidia.com/gpu: "1"
graph:
implementation: TRITON_SERVER
logger:
mode: all
url: http://broker-ingress.knative-eventing.svc.cluster.local/cifar10drift/default
modelUri: gs://seldon-models/triton/tf_cifar10
name: cifar10
type: MODEL
name: default
replicas: 1
protocol: kfserving
!kubectl apply -f cifar10.yaml
###Output
seldondeployment.machinelearning.seldon.io/triton-cifar10 unchanged
###Markdown
Create the pretrained Drift Detector. We forward replies to the message-dumper we started. Notice the `drift_batch_size`. The drift detector will wait until `drift_batch_size` number of requests are received before making a drift prediction.
###Code
%%writefile cifar10cd.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: drift-detector
namespace: cifar10drift
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server-gpu:1.7.0-dev
imagePullPolicy: Always
args:
- --model_name
- cifar10cd
- --http_port
- '8080'
- --protocol
- kfserving.http
- --storage_uri
- gs://seldon-models/alibi-detect/cd/mmd/cifar10_mmd_torch
- --reply_url
- http://hello-display.cifar10drift
- --event_type
- io.seldon.serving.inference.drift
- --event_source
- io.seldon.serving.cifar10cd
- DriftDetector
- --drift_batch_size
- '5000'
resources:
limits:
cpu: "1"
memory: 20Gi
nvidia.com/gpu: "1"
requests:
cpu: "1"
memory: 10Gi
nvidia.com/gpu: "1"
!kubectl apply -f cifar10cd.yaml
###Output
service.serving.knative.dev/drift-detector configured
###Markdown
Create a Knative trigger to forward logging events to our Outlier Detector.
###Code
%%writefile trigger.yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: drift-trigger
namespace: cifar10drift
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: drift-detector
namespace: cifar10drift
!kubectl apply -f trigger.yaml
###Output
trigger.eventing.knative.dev/drift-trigger unchanged
###Markdown
Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
###Code
CLUSTER_IPS = !(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP = CLUSTER_IPS[0]
print(CLUSTER_IP)
###Output
104.155.57.161
###Markdown
If you are using Kind or Minikube you will need to port-forward to the istio ingressgateway and uncomment the following
###Code
# CLUSTER_IP="localhost:8004"
###Output
_____no_output_____
###Markdown
Optionally add an authorization token here if you need one. Acquiring this token will be dependent on your auth setup.
###Code
TOKEN = "Bearer <token>"
SERVICE_HOSTNAMES = !(kubectl get ksvc -n cifar10drift drift-detector -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_CD = SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_CD)
import json
import matplotlib.pyplot as plt
import numpy as np
import requests
import tensorflow as tf
tf.keras.backend.clear_session()
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype("float32") / 255
X_test = X_test.astype("float32") / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = (
"plane",
"car",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
)
def show(X):
plt.imshow(X.reshape(32, 32, 3))
plt.axis("off")
plt.show()
def predict(X):
formData = {
"inputs": [
{
"name": "input_1",
"datatype": "FP32",
"shape": [X.shape[0], 32, 32, 3],
"data": X.flatten().tolist(),
}
]
}
headers = {
"Authorization": "Bearer " + TOKEN,
"X-Auth-Token": TOKEN,
"Content-Type": "application/json",
}
res = requests.post(
"http://"
+ CLUSTER_IP
+ "/seldon/cifar10drift/triton-cifar10/v2/models/cifar10/infer",
json=formData,
headers=headers,
)
if res.status_code == 200:
j = res.json()
y = np.array(j["outputs"][0]["data"])
y.shape = tuple(j["outputs"][0]["shape"])
return [classes[x.argmax()] for x in y]
else:
print("Failed with ", res.status_code)
return []
def drift(X):
formData = {
"inputs": [
{
"name": "input_1",
"datatype": "FP32",
"shape": [1, 32, 32, 3],
"data": X.flatten().tolist(),
}
]
}
headers = {}
headers = {
"ce-namespace": "default",
"ce-modelid": "cifar10drift",
"ce-type": "io.seldon.serving.inference.request",
"ce-id": "1234",
"ce-source": "localhost",
"ce-specversion": "1.0",
}
headers["Host"] = SERVICE_HOSTNAME_CD
headers["X-Auth-Token"] = TOKEN
headers["Authorization"] = "Bearer " + TOKEN
res = requests.post("http://" + CLUSTER_IP + "/", json=formData, headers=headers)
if res.status_code == 200:
od = res.json()
return od
else:
print("Failed with ", res.status_code)
return []
###Output
(50000, 32, 32, 3) (50000, 1) (10000, 32, 32, 3) (10000, 1)
###Markdown
Normal Prediction
###Code
idx = 1
X = X_train[idx : idx + 1]
show(X)
predict(X)
###Output
_____no_output_____
###Markdown
Test Drift We need to accumulate a large enough batch size so no drift will be tested as yet. We will now send 5000 requests to the model in batches. The drift detector will run at the end of this as we set the `drift_batch_size` to 5000 in our yaml above.
###Code
from tqdm.notebook import tqdm
for i in tqdm(range(1, 5000, 500)):
X = X_train[i : i + 500]
predict(X)
###Output
_____no_output_____
###Markdown
Let's check the message dumper and extract the first drift result.
###Code
res = !kubectl logs -n cifar10drift $(kubectl get pod -n cifar10drift -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
if res[i] == "Data,":
data.append(res[i + 1])
j = json.loads(json.loads(data[0]))
print("Drift", j["data"]["is_drift"] == 1)
###Output
Drift False
###Markdown
Now, let's create some CIFAR10 examples with motion blur.
###Code
from alibi_detect.datasets import corruption_types_cifar10c, fetch_cifar10c
corruption = ["motion_blur"]
X_corr, y_corr = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
X_corr = X_corr.astype("float32") / 255
show(X_corr[0])
show(X_corr[1])
show(X_corr[2])
###Output
_____no_output_____
###Markdown
Send these examples to the predictor.
###Code
from tqdm.notebook import tqdm
for i in tqdm(range(0, 5000, 500)):
X = X_corr[i : i + 500]
predict(X)
###Output
_____no_output_____
###Markdown
Now when we check the message dump we should find a new drift response.
###Code
res = !kubectl logs -n cifar10drift $(kubectl get pod -n cifar10drift -l app=hello-display -o jsonpath='{.items[0].metadata.name}')
data = []
for i in range(0, len(res)):
if res[i] == "Data,":
data.append(res[i + 1])
j = json.loads(json.loads(data[-1]))
print("Drift", j["data"]["is_drift"] == 1)
###Output
Drift True
###Markdown
Tear Down
###Code
!kubectl delete ns cifar10drift
###Output
_____no_output_____ |
chap03/03-textbook-classification-05.ipynb | ###Markdown
Chapter 3 - Classification Multioutput Classification
###Code
import pickle
import pandas as pd
import numpy as np
import numpy.random as rnd
import matplotlib
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split, cross_val_score, cross_val_predict
from sklearn.metrics import (precision_score,
recall_score,
classification_report,
confusion_matrix, f1_score,
precision_recall_curve, roc_curve, roc_auc_score)
def load(fname):
import pickle
mnist = None
try:
with open(fname, 'rb') as f:
mnist = pickle.load(f)
return mnist
except FileNotFoundError:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
with open(fname, 'wb') as f:
mnist = pickle.dump(mnist, f)
return mnist
###Output
_____no_output_____
###Markdown
Another type of problem is the multioutput classification problem where each label can be multiclass.
###Code
# Ingest
mnsit = load('mnist.data.pkl')
mnsit_X, mnsit_y = mnsit['data'], mnsit['target']
X_train, X_test, y_train, y_test = train_test_split(mnsit_X, mnsit_y, test_size=0.15, random_state=0)
y_train, y_test = y_train.astype(int), y_test.astype(int)
###Output
_____no_output_____
###Markdown
In this case, predict the correct cleaned image from the noisy image.
###Code
noise_train = rnd.randint(0, 100, (len(X_train), 784))
noise_test = rnd.randint(0, 100, (len(X_test), 784))
X_train_mod = X_train + noise_train
X_test_mod = X_test + noise_test
y_train_mod = X_train
y_test_mod = X_test
X_train_mod_sample = X_train_mod[:100]
y_train_mod_sample = y_train_mod[:100]
# Train a kNN classifier
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train_mod_sample, y_train_mod_sample)
i = 4
clean_digit = knn_clf.predict([X_test_mod[4]])
print(clean_digit)
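# As a quick visual check, the flattened 784-vector printed below is easier to judge
# by reshaping it back to a 28x28 image (left commented so the recorded output stays
# the plain print above):
# plt.imshow(clean_digit.reshape(28, 28), cmap='binary'); plt.axis('off'); plt.show()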
###Output
[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 197. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
|
model-inference/decisionTree/experiments/hummingbird/notebooks/.ipynb_checkpoints/tvm_and_pyt_graph-checkpoint.ipynb | ###Markdown
If you haven't installed Hummingbird or matplotlib, do that first, by uncommenting the lines below. ** Note: This notebook requires TVM built with LLVM support. Install instructions [here](https://tvm.apache.org/docs/install/index.html) **
###Code
#! pip install hummingbird_ml matplotlib
###Output
_____no_output_____
###Markdown
Import necessary libraries
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_breast_cancer
from hummingbird.ml import convert
###Output
_____no_output_____
###Markdown
Create and fit the model
###Code
# Create and train a RandomForestClassifier model
X, y = load_breast_cancer(return_X_y=True)
skl_model = RandomForestClassifier(n_estimators=500, max_depth=7)
skl_model.fit(X, y)
###Output
_____no_output_____
###Markdown
Time scikit-learn
###Code
skl_time = %timeit -o skl_model.predict(X)
###Output
49.9 ms ± 293 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Convert SKL model to PyTorch
###Code
model = convert(skl_model, 'torch')
###Output
_____no_output_____
###Markdown
Time PyTorch - CPU
###Code
pred_cpu_hb = %timeit -o model.predict(X)
###Output
9.14 ms ± 108 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Switch PyTorch from CPU to GPU
###Code
%%capture
model.to('cuda')
###Output
_____no_output_____
###Markdown
Time PyTorch - GPU
###Code
pred_gpu_hb = %timeit -o model.predict(X)
###Output
1.21 ms ± 16.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Convert SKL model to TVM (CPU)
###Code
model_tvm = convert(skl_model, 'tvm', X)
###Output
_____no_output_____
###Markdown
Time TVM - CPU
###Code
pred_cpu_tvm = %timeit -o model_tvm.predict(X)
###Output
3.95 ms ± 77.1 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
Convert SKL model to TVM (GPU)
###Code
model_tvm = convert(skl_model, 'tvm', X, 'cuda')
###Output
_____no_output_____
###Markdown
Time TVM - GPU
###Code
pred_gpu_tvm = %timeit -o model_tvm.predict(X)
###Output
493 µs ± 2.73 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
###Markdown
Plot the results
###Code
def plot(title, skl_time, pred_cpu_hb, pred_gpu_hb, pred_cpu_tvm, pred_gpu_tvm):
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import cm
fig = plt.figure()
x = ['skl','pyt-cpu','pyt-gpu','tvm-cpu','tvm-gpu']
height = [skl_time.average,pred_cpu_hb.average,pred_gpu_hb.average,pred_cpu_tvm.average,pred_gpu_tvm.average]
width = 1.0
plt.ylabel('time in seconds')
plt.xlabel(title)
rects = plt.bar(x, height, width, color=cm.rainbow(np.linspace(0,1,5)))
def autolabel(rects):
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%.4f' % (height),
ha='center', va='bottom')
autolabel(rects)
plt.show()
plot("RandomForestClassifier: Breast Cancer Dataset",skl_time, pred_cpu_hb, pred_gpu_hb, pred_cpu_tvm, pred_gpu_tvm)
###Output
_____no_output_____ |
Starting_with_Numpy.ipynb | ###Markdown
Wojciech Pragłowski Numpy basics
Guidelines:
1. Write a function that will check whether the given matrix contains a given value.
2. Write a function that will return the mean, median, and mode.
3. Write a function that will generate a matrix of given sizes with the same numerical value.
4. Write a function that changes all negative values in the matrix to 0.
5. Write a function that will check if the given matrix X with size 2x2 is a solution of the matrix equation ax^2 + bx + c = 0, where a, b, c are given numerical values.
6. Write a function that will calculate the nth Fibonacci number using matrix exponentiation of [[1,1],[1,0]] (https://pl.wikipedia.org/wiki/Ci%C4%85g_Fibonacciego#Macierze_liczb_Fibonacciego).
7. Write a function that will calculate the nth Fibonacci number using the Binet formula (https://pl.wikipedia.org/wiki/Ci%C4%85g_Fibonacciego#Wz%C3%B3r_Bineta).
8. Write a function that will generate a matrix with random elements (integers from a given range) with a predetermined value of the matrix trace.
9. Write a function that will generate a matrix with random elements (integers from a given range) with predetermined eigenvalues. It is possible to use the Cayley-Hamilton theorem.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Exercise 1
###Code
def find_value(value, given_array):
check = np.isin(given_array, value)
print(check)
arr = np.arange(6).reshape((2,3))
find_value(2, arr)
###Output
[[False False True]
[False False False]]
###Markdown
Exercise 2
###Code
def maths_returning(given_array):
values, counts = np.unique(given_array, return_counts = True)
index = np.argmax(counts)
print("Średnia:", np.mean(given_array),"\nMediana: ", np.median(given_array), "\nDominanta: ", values[index])
arr = np.array([[1,2,5],[2,5,5],[9,-1,-5]])
arr2 = np.arange(9).reshape(3,3)
maths_returning(arr)
###Output
Mean: 2.5555555555555554 
Median:  2.0 
Mode:  5
###Markdown
Exercise 3
###Code
def generate_array(height,width, value):
new_array = np.empty(shape=(height,width), dtype=int)
for i in range(height):
for j in range(width):
new_array[i] = value
return new_array
generate_array(5, 7, 6)
###Output
_____no_output_____
###Markdown
Exercise 4
###Code
def change_negative(given_array):
new_array = np.where(given_array<0, 0, given_array)
return new_array
arr = np.array([[1,2,3], [-1,-2,-3], [1,-2,3]])
change_negative(arr)
###Output
_____no_output_____
###Markdown
Exercise 5
###Code
def check_equation(given_array, a, b, c):
result = (a*given_array**2) + b*given_array + c
    if np.all(result == 0):
        return print("The given matrix IS a solution of the equation!\n", result)
    else:
        return print("Unfortunately, the given matrix does not solve the above equation...\n", result)
arr = np.array([[1,1], [1,1]])
check_equation(arr, 1, -1, 0)
###Output
The given matrix IS a solution of the equation!
[[0 0]
[0 0]]
###Markdown
Exercise 6
###Code
def array_fibonacci(n):
begin_array = np.array([[1, 1], [1, 0]])
values, vectors = np.linalg.eig(begin_array)
Fn = vectors @ np.diag(values ** n) @ vectors.T
return int(np.rint(Fn[0, 1]))
array_fibonacci(15)
###Output
_____no_output_____
###Markdown
Exercise 7
###Code
def binet_fib(n):
result = 1/np.sqrt(5)*(((1+np.sqrt(5))/2)**n)
return round(result,2)
binet_fib(10)
###Output
_____no_output_____
###Markdown
Exercise 8
###Code
def random_trace(size, trace_value):
diag = []
for i in trace_value:
if i == 0:
diag.append([0 for i in range(size)])
continue
total = i
temp = []
for i in range(size-1):
rand_value = np.random.randint(0, total)
temp.append(rand_value)
total -= rand_value
temp.append(total)
diag.append(temp)
new_array = np.diagflat(diag)
final_array = np.where(new_array == 0, np.random.randint(9), new_array)
return final_array
trace_value = [15]
random_trace(4, trace_value)
###Output
_____no_output_____
###Markdown
Exercise 9
###Code
def array_det(size, det_value):
diag = []
for i in det_value:
if i == 0:
diag.append([0 for i in range(size)])
continue
total = i
temp = []
for i in range(size-1):
rand_value = np.random.randint(0, total)
temp.append(rand_value)
total -= rand_value
temp.append(total)
diag.append(temp)
new_array = np.array(diag).reshape(2,2)
return new_array
array_det(4,[8])
###Output
_____no_output_____ |
prj_code/recommend/ALS Implementation.ipynb | ###Markdown
ALS Implementation
###Code
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
1. Initialize parameters - r_lambda: regularization parameter - alpha: confidence level - nf: dimension of the latent vector of each user and item - the initialized values (40, 200, 40) are the best parameters from the paper
###Code
r_lambda = 40
nf = 200
alpha = 40
###Output
_____no_output_____
###Markdown
2. rating matrix
###Code
import numpy as np
# sample rating matrix
R = np.array([[0, 0, 0, 4, 4, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 1, 0, 4, 0],
[0, 3, 4, 0, 3, 0, 0, 2, 2, 0, 0],
[0, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 5, 0, 0, 5, 0],
[0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 5],
[0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 4],
[0, 0, 0, 0, 0, 0, 5, 0, 0, 5, 0],
[0, 0, 0, 3, 0, 0, 0, 0, 4, 5, 0]])
R.shape
###Output
_____no_output_____
###Markdown
3. Initialize user and item latent factor matrices: fill the matrices with very small random values -> this allows one matrix to be held fixed as constants while the other is learned - nu : number of users (10) - ni : number of items (11) - nf : dimension of latent vector
###Code
nu = R.shape[0]
ni = R.shape[1]
# initialize X and Y with very small values
X = np.random.rand(nu, nf) * 0.01 # 10x200
Y = np.random.rand(ni, nf) * 0.01 # 11x200
X, Y
###Output
_____no_output_____
###Markdown
4. Initialize Binary Rating Matrix (the preference matrix P): convert the given training rating table into a binary rating matrix P made of 0s and 1s - Convert original rating matrix R into P - Pui = 1 if Rui > 0 - Pui = 0 if Rui = 0
###Code
P = np.copy(R)
P[P > 0] = 1
P
###Output
_____no_output_____
###Markdown
5. Initialize Confidence Matrix (the confidence matrix C): compute C by applying the confidence level to the given training rating table - using the confidence matrix, entries with no rating can still be used in the analysis, just under low confidence - Initialize Confidence Matrix C - Cui = 1 + alpha * R
###Code
C = 1 +alpha * R
print(C)
C.shape
i = 0
Ci = np.diag(C[:, i])
Ci
###Output
_____no_output_____
###Markdown
6. Define the Loss Function - C: confidence matrix - P: binary rating matrix - X: user latent matrix - Y: item latent matrix - r_lambda: regularization lambda - xTy: predict matrix - Total_loss = (confidence_level * predict loss) + regularization loss
###Code
def loss_function(C, P, xTy, X, Y, r_lambda) :
    predict_error = np.square(P-xTy) # (observed - predicted)^2
    confidence_error = np.sum(C*predict_error) # a large error counts for little if its confidence is low; a small error counts for a lot if its confidence is high
regularization = r_lambda * (np.sum(np.square(X)) + np.sum(np.square(Y)))
total_loss = confidence_error + regularization
return np.sum(predict_error), confidence_error, regularization, total_loss
###Output
_____no_output_____
###Markdown
7. Optimization Function for user and Item- X[u] = (yT Cu Y + lambda*I)^-1 yT Cu p(u)- Y[i] = (xT Ci X + lambda*I)^-1 xT Ci p(i)- the two formulas are the same when X is swapped with Y and u with i
###Code
def optimize_user(X, Y, C, P, nu, nf, r_lambda) :
"""
optimize user matrix
"""
yT = np.transpose(Y)
for u in range(nu):
        Cu = np.diag(C[u]) # np.diag: make a diagonal matrix so each user's row of C can be multiplied in turn
        yT_Cu_y = np.matmul(np.matmul(yT, Cu), Y)
        lI = np.dot(r_lambda, np.identity(nf)) # regularization
yT_Cu_pu = np.matmul(np.matmul(yT, Cu), P[u])
X[u] = np.linalg.solve(yT_Cu_y + lI, yT_Cu_pu)
def optimize_item(X, Y, C, P, ni, nf, r_lambda):
"""
optimize item matrix
"""
xT = np.transpose(X)
for i in range(ni):
Ci = np.diag(C[:, i])
xT_Ci_x = np.matmul(np.matmul(xT, Ci), X)
lI = np.dot(r_lambda, np.identity(nf))
xT_Ci_pi = np.matmul(np.matmul(xT, Ci), P[:, i])
Y[i] = np.linalg.solve(xT_Ci_x + lI, xT_Ci_pi)
xT = np.transpose(X)
for i in range(ni):
Ci = np.diag(C[:, i])
xT_Ci_x = np.matmul(np.matmul(xT, Ci), X)
lI = np.dot(r_lambda, np.identity(nf))
xT_Ci_pi = np.matmul(np.matmul(xT, Ci), P[:, i])
Y[i] = np.linalg.solve(xT_Ci_x + lI, xT_Ci_pi)
xT_Ci_x.shape
lI.shape
xT_Ci_pi.shape
P[:, i].shape
###Output
_____no_output_____
###Markdown
8. Train - usually the ALS algorithm repeats the training step 10~15 times
###Code
predict_errors = []
confidence_errors = []
regularization_list = []
total_losses = []
for i in range(15):
if i!=0:
optimize_user(X, Y, C, P, nu, nf, r_lambda)
optimize_item(X, Y, C, P, ni, nf, r_lambda)
predict = np.matmul(X, np.transpose(Y))
predict_error, confidence_error, regularization, total_loss = loss_function(C, P, predict, X, Y, r_lambda)
predict_errors.append(predict_error)
confidence_errors.append(confidence_error)
regularization_list.append(regularization)
total_losses.append(total_loss)
print('----------------step %d----------------' % i)
print("predict error: %f" % predict_error)
print("confidence error: %f" % confidence_error)
print("regularization: %f" % regularization)
print("total loss: %f" % total_loss)
predict = np.matmul(X, np.transpose(Y))
print('final predict')
print([predict])
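# As a quick illustration of using the result: top-N recommendations for a user can be
# read off the predicted preference row after masking items that user has already rated.
# User index 0 is just an example choice.
user = 0
scores = predict[user].copy()
scores[R[user] > 0] = -np.inf   # hide items user 0 has already interacted with
print('top-3 recommended item indices for user 0:', np.argsort(scores)[::-1][:3])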
from matplotlib import pyplot as plt
%matplotlib inline
plt.subplots_adjust(wspace=100.0, hspace=20.0)
fig = plt.figure()
fig.set_figheight(10)
fig.set_figwidth(10)
predict_error_line = fig.add_subplot(2, 2, 1)
confidence_error_line = fig.add_subplot(2, 2, 2)
regularization_error_line = fig.add_subplot(2, 2, 3)
total_loss_line = fig.add_subplot(2, 2, 4)
predict_error_line.set_title("Predict Error")
predict_error_line.plot(predict_errors)
confidence_error_line.set_title("Confidence Error")
confidence_error_line.plot(confidence_errors)
regularization_error_line.set_title("Regularization")
regularization_error_line.plot(regularization_list)
total_loss_line.set_title("Total Loss")
total_loss_line.plot(total_losses)
plt.show()
###Output
_____no_output_____ |
notebooks/project/P2_OOP_in_TicTacToe.ipynb | ###Markdown
Python Foundations Project Part 2: Object Oriented Programming**Instructor**: Wesley Beckner**Contact**: [email protected] part II of our tic-tac-toe and AI journey, we're going to take all the functions we've defined so far and make them object oriented!--- 2.0 Preparing Environment and Importing Data[back to top](top) 2.0.1 Import Packages[back to top](top)
###Code
def visualize_board(board_values):
"""
Visualizes the board during gameplay
Parameters
----------
board_values : list
The values ('X', 'O', or ' ' at each board location)
Returns
-------
None
"""
print(
"|{}|{}|{}|\n|{}|{}|{}|\n|{}|{}|{}|\n".format(*board_values)
)
def init_board():
"""
Initializes an empty board for the start of gameplay
Parameters
----------
None
Returns
-------
board : dict
a dictionary with keys 1-9 and single space (' ') string as values
"""
return {1: ' ',
2: ' ',
3: ' ',
4: ' ',
5: ' ',
6: ' ',
7: ' ',
8: ' ',
9: ' ',}
# the keys on the game board where, if filled completely with X's or O's a
# winner has occurred
win_patterns = [[1,2,3], [4,5,6], [7,8,9],
[1,4,7], [2,5,8], [3,6,9],
[1,5,9], [7,5,3]]
def check_winning(board):
"""
Checks if the game has a winner
Parameters
----------
board : dict
the tictactoe board as a dictionary
Returns
-------
win_statement : str
defaults to an empty string if no winner. Otherwise 'X' Won! or 'O' Won!
"""
for pattern in win_patterns:
values = [board[i] for i in pattern]
if values == ['X', 'X', 'X']:
return "'X' Won!"
elif values == ['O', 'O', 'O']:
return "'O' Won!"
return ''
def tic_tac_toe():
"""
The tictactoe game engine. Runs the while loop that handles the game
Parameters
----------
None
Returns
-------
None
"""
print("'X' will go first!")
board = init_board()
while True:
for player in (['X', 'O']):
visualize_board(board.values())
move = int(input("{}, what's your move?".format(player)))
if board[move] != ' ':
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
else:
board[move] = player
winner = check_winning(board)
if winner == '':
continue
else:
print(winner)
break
if winner != '':
break
###Output
_____no_output_____
###Markdown
2.1 OOP[back to top](top)Notice how we have so many functions with calls to our main object `board`. This is one flag that should notify us: "hey, this might be a good place to switch from functions, to an object with methods!"Let's try to organize this into a more object oriented scheme. 🙋 Question 1We'll also want to write a function that does _what_> hint: what is our `check_winning` function missing? 2.1.1 Thinking in ObjectsIt's helpful to think of how our code can be divided into useful segments that can then be extended, interfaced, used elsewhere, etc.It's just like we had when we were playing with our Poke ball and pokemon objects. In that case, it made sense to make two separate objects one for pokemon and one for Poke balls. 🙋 Question 2Can you think of any way that would make sense to divide our code into objects? I can think of two. 2.1.2 class TicTacToethe first object will be one that handles our board and all of its methods and attributes. In this class called `TicTacToe` we will have the attributes: * `winner`, initialized as an empty string, and updates at the conclusion of a game with 'X', 'O', or 'Stalemate' * `start_player` initialized as an empty string and updates at the start of a game with 'X' or 'O' * `board` initialized as our empty board dictionary * `win_patterns` the list of lists containing the winning patterns of the gameand then we will have three different methods, each of which takes one parameter, `self`* `visualize_board`* `check_winning`* `check_stalemate` : a new function. Returns "It's a stalemate!" and sets `self.winner = "Stalemate"` (note there is a bug 🐞 in the way this is currently written, we will move along for now and work through a debugging tutorial later on!) Q1 Attributes of TicTacToeWithin class TicTacToe, define the attributes described above
###Code
class TicTacToe:
# create winner and start_player parameters with default values as empty
# strings within __init__
def __init__(self):
##################################
########### Attributes ###########
##################################
# set self.winner and self.start_player with the parameters from __init__
# set self.board as a dictionary with ' ' as values and 1-9 as keys
# set self.win_patterns with the 8 winning patterns (a list of lists)
###Output
_____no_output_____
###Markdown
Q2 Methods of TicTacToe
Now we will define the methods of `TicTacToe`. Paste your attributes from the cell above into the cell below so that your changes carry over.
###Code
class TicTacToe:
# create winner and start_player parameters with default values as empty
# strings within __init__
def __init__(self):
##################################
########### Attributes ###########
##################################
# set self.winner and self.start_player with the parameters from __init__
# set self.board as a dictionary with ' ' as values and 1-9 as keys
# set self.win_patterns with the 8 winning patterns (a list of lists)
###############################
########### METHODS ###########
###############################
# the other functions are now passed self
# define visualize_board and update the board
# object with self.board (and maybe self.board.values() depending on how your
# visualize_board function is written)
# define check_winning and similarly update win_patterns,
# board, and winner to be accessed via the self. Be sure to update the
# attribute self.winner with the appropriate winner in the function
# here the definition of check_stalemate is given
def check_stalemate(self):
if ' ' not in self.board.values():
self.winner = 'Stalemate'
return "It's a stalemate!"
###Output
_____no_output_____
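###Markdown
A possible reference solution for Q1 and Q2 (a sketch only: one way to fill in the blanks, reusing the function bodies from the pre-OOP version above; your own answers may differ).
###Code
class TicTacToe:

    # winner and start_player default to empty strings within __init__
    def __init__(self, winner='', start_player=''):

        ##################################
        ########### Attributes ###########
        ##################################
        self.winner = winner
        self.start_player = start_player
        self.board = {1: ' ', 2: ' ', 3: ' ',
                      4: ' ', 5: ' ', 6: ' ',
                      7: ' ', 8: ' ', 9: ' '}
        self.win_patterns = [[1, 2, 3], [4, 5, 6], [7, 8, 9],
                             [1, 4, 7], [2, 5, 8], [3, 6, 9],
                             [1, 5, 9], [7, 5, 3]]

    ###############################
    ########### METHODS ###########
    ###############################

    def visualize_board(self):
        print(
            "|{}|{}|{}|\n|{}|{}|{}|\n|{}|{}|{}|\n".format(*self.board.values())
        )

    def check_winning(self):
        for pattern in self.win_patterns:
            values = [self.board[i] for i in pattern]
            if values == ['X', 'X', 'X']:
                self.winner = 'X'
                return "'X' Won!"
            elif values == ['O', 'O', 'O']:
                self.winner = 'O'
                return "'O' Won!"
        return ''

    def check_stalemate(self):
        if ' ' not in self.board.values():
            self.winner = 'Stalemate'
            return "It's a stalemate!"
###Output
_____no_output_____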
###Markdown
2.1.3 The Game Engine (just a function for now)
Next we'll create a function that runs gameplay using TicTacToe as an object that it passes around. I've already done the heavy lifting of replacing references to attributes (board, win_patterns) and methods (visualize_board, check_winning) so that they pass through the `TicTacToe` object. I also added the option for the user to quit the game by typing `'q'` at the input line.

Q3 Add Condition for Stalemate
###Code
def play_game():
print("'X' will go first!")
tic_tac_toe = TicTacToe()
while True:
for player in (['X', 'O']):
tic_tac_toe.visualize_board()
move = input("{}, what's your move?".format(player))
####################################################################
# we're going to allow the user to quit the game from the input line
####################################################################
if move in ['q', 'quit']:
tic_tac_toe.winner = 'F'
                print('quitting the game')
break
move = int(move)
if tic_tac_toe.board[move] != ' ':
while True:
move = input("{}, that position is already taken! "\
"What's your move?".format(player))
move = int(move)
if tic_tac_toe.board[move] != ' ':
continue
else:
break
tic_tac_toe.board[move] = player
            # the winner variable will now be checked within the board object
tic_tac_toe.check_winning()
##############################
# CALL check_stalemate() BELOW
##############################
if tic_tac_toe.winner == '':
clear_output()
continue
##########################################################################
# write an elif statement that checks if self.winner is 'Stalemate' and
# subsequently visualizes the board and breaks out of the while loop
            # also print out check_stalemate so the returned string is shown to the
# user
##########################################################################
else:
print(tic_tac_toe.check_winning())
tic_tac_toe.visualize_board()
break
if tic_tac_toe.winner != '':
break
###Output
_____no_output_____
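###Markdown
One possible completion of the Q3 blanks in `play_game` (a sketch; it assumes the TicTacToe class above and IPython's `clear_output`, which the original cell also uses). It keeps the ordering hinted at in the comments, so the stalemate bug flagged in section 2.1.2 is still present, and it reproduces the recorded output of the test cell below.
###Code
def play_game():
    print("'X' will go first!")
    tic_tac_toe = TicTacToe()
    while True:
        for player in ['X', 'O']:
            tic_tac_toe.visualize_board()
            move = input("{}, what's your move?".format(player))
            # allow the user to quit the game from the input line
            if move in ['q', 'quit']:
                tic_tac_toe.winner = 'F'
                print('quitting the game')
                break
            move = int(move)
            if tic_tac_toe.board[move] != ' ':
                while True:
                    move = int(input("{}, that position is already taken! "
                                     "What's your move?".format(player)))
                    if tic_tac_toe.board[move] == ' ':
                        break
            tic_tac_toe.board[move] = player
            tic_tac_toe.check_winning()
            # Q3: declare a stalemate once the board is full and nobody has won
            tic_tac_toe.check_stalemate()
            if tic_tac_toe.winner == '':
                clear_output()
                continue
            elif tic_tac_toe.winner == 'Stalemate':
                print(tic_tac_toe.check_stalemate())
                tic_tac_toe.visualize_board()
                break
            else:
                print(tic_tac_toe.check_winning())
                tic_tac_toe.visualize_board()
                break
        if tic_tac_toe.winner != '':
            break
###Output
_____no_output_____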
###Markdown
Let's test our new module
###Code
play_game()
###Output
|X|O|X|
|O|O|X|
|X| |O|
X, what's your move?8
It's a stalemate!
|X|O|X|
|O|O|X|
|X|X|O|
|
05_RadioOccultation_xarray.ipynb | ###Markdown
Open the Radio Occultation Temperature Dataset from data.ccca.ac.at
Use xarray to open the remote dataset.
###Code
import xarray as xr
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

ro_temp = xr.open_dataset("https://data.ccca.ac.at/thredds/dodsC/ckan/316/c04/ab-e974-4bca-84fb-ae552fb03b71")
print(ro_temp)
###Output
<xarray.Dataset>
Dimensions: (Latitude: 71, Longitude: 144, Pressure: 102, Time: 3653, nv: 2)
Coordinates:
* Latitude (Latitude) float64 -87.5 -85.0 -82.5 -80.0 -77.5 ...
* Pressure (Pressure) float64 5.819e+03 5.988e+03 6.162e+03 ...
* Longitude (Longitude) float64 -180.0 -177.5 -175.0 -172.5 ...
* Time (Time) datetime64[ns] 2006-09-01T12:00:00 ...
Dimensions without coordinates: nv
Data variables:
Time_bounds (Time, nv) datetime64[ns] ...
Longitude_bounds (Longitude, nv) float64 ...
Latitude_bounds (Latitude, nv) float64 ...
Temperature (Time, Longitude, Latitude, Pressure) float64 ...
Temperature__Count (Time, Longitude, Latitude, Pressure) float32 ...
Attributes:
_NCProperties: version=1|netcdflibversion=4.6.1|hdf5...
ProcessingCenter: WEGC
Level1bProcessorId: OPSv5.6.2
Level1bProcessingCenter: WEGC
Conventions: CF-1.5
ProcessorId: OPSv5.6.2
Version: EGOPS 5.6, Revision tree: egops/branc...
EarthFigureModel: EARTH_EGM96
SpecificHumidity_RetrievalQuality: -1
Temperature_RetrievalQuality: 0
comment: Time information not indicated explic...
description: Dataset processed by EGOPS 5.6, (C) I...
title: Radio Occultation Gridded Data
institution: Wegener Center for Climate and Global...
source: radio occultation
references: Brunner et al. 2017 AMT
history: calculate_climatologies.py Revision: ...
###Markdown
Plot
Create an orthographic plot of one time frame at pressure level index 12.
###Code
ax = plt.axes(projection=ccrs.Orthographic(20, 35))
ro_temp.Temperature[0,:,:,12].T.plot.contourf(ax=ax, transform=ccrs.PlateCarree());
ax.set_global(); ax.coastlines();
###Output
_____no_output_____
###Markdown
Area average
Reduce the area and select the pressure level nearest to 850 hPa.
###Code
bbox = (9.47996951665, 46.4318173285, 16.9796667823, 49.0390742051)
da_ro_temp = ro_temp.Temperature.sel(
Latitude=slice(bbox[1],bbox[3]),
Longitude=slice(bbox[0],bbox[2])
).sel(Pressure=85000, method='nearest')
###Output
_____no_output_____
###Markdown
Plot time series
###Code
da_ro_temp.mean(dim=('Longitude','Latitude')).plot()
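# A latitude-weighted alternative (a sketch, not part of the original notebook): grid
# cells cover less area toward the poles, so an area mean is often weighted by
# cos(latitude). DataArray.weighted requires xarray >= 0.15.1.
import numpy as np
weights = np.cos(np.deg2rad(da_ro_temp.Latitude))
da_ro_temp.weighted(weights).mean(dim=('Longitude', 'Latitude')).plot()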
###Output
_____no_output_____ |
Training/NER_DiseaeName_Transfer_Learning.ipynb | ###Markdown
###Code
print("Hello colab")
from google.colab import drive
drive.mount('/content/drive')
!pip install -r '/content/drive/My Drive/Colab Notebooks/requirements.txt'
pip install pytorch-pretrained-bert==0.4.0
pip install scispacy
pip install '/content/drive/My Drive/Colab Notebooks/en_ner_bc5cdr_md-0.2.4.tar.gz'
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import tensorflow_hub as hub
from keras.preprocessing.sequence import pad_sequences
from keras.utils import to_categorical
from keras.models import Model, Input
from keras.layers import LSTM, Embedding, Dense
from keras.layers import TimeDistributed, Dropout, Bidirectional
from tqdm import tqdm, trange
# Defining Constants
# Maximum length of text sentences
MAXLEN = 180
# batch size
BS=48
data_test = pd.read_csv("/content/drive/My Drive/NER_Disease/train.csv", encoding="latin1")
data1 = data_test.sample(n=15000,random_state=2020)
data1.tail(10)
# data1=data.sample(n=25000,random_state=2020)
# data1.head(10)
words = list(set(data1["Word"].values))
words.append("ENDPAD")
n_words = len(words); n_words
#print(words[1:10])
tags = list(set(data1["tag"].values))
n_tags = len(tags); n_tags
print(tags[0:n_tags])
class SentenceGetter(object):
def __init__(self, data):
self.n_sent = 1
self.data = data
self.empty = False
agg_func = lambda s: [(w, t) for w, t in zip(s["Word"].values.tolist(),
s["tag"].values.tolist())]
self.grouped = self.data.groupby("Sent_ID").apply(agg_func)
self.sentences = [s for s in self.grouped]
def get_next(self):
try:
s = self.grouped["Sentence: {}".format(self.n_sent)]
self.n_sent += 1
return s
except:
return None
getter = SentenceGetter(data1)
sentences = [" ".join([s[0] for s in sent]) for sent in getter.sentences]
sentences[0]
labels = [[s[1] for s in sent] for sent in getter.sentences]
print(len(labels))
tags_vals = list(set(data1["tag"].values))
tag2idx = {t: i for i, t in enumerate(tags_vals)}
print(tag2idx)
import torch
from torch.optim import Adam
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from pytorch_pretrained_bert import BertTokenizer, BertConfig
from pytorch_pretrained_bert import BertForTokenClassification, BertAdam
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count()
device
###Output
_____no_output_____
###Markdown
The Bert implementation comes with a pretrained tokenizer and a defined vocabulary. We load the one for the smallest pre-trained model, bert-base-uncased (a cased variant is generally better suited for NER, but the uncased model is used here).
###Code
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True)
###Output
_____no_output_____
###Markdown
Now we tokenize all sentences
###Code
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print(tokenized_texts[0])
###Output
_____no_output_____
###Markdown
Next, we cut and pad the token and label sequences to our desired length. We also need to convert each token in each sentence to an id from the tokenizer vocabulary. If a token is not present in the vocabulary, the tokenizer will use the special [UNK] token and its id. Refer: https://towardsdatascience.com/bert-to-the-rescue-17671379687f
###Code
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts],
maxlen=MAXLEN, dtype="long", truncating="post", padding="post")
input_ids[0]
tags = pad_sequences([[tag2idx.get(l) for l in lab] for lab in labels],
maxlen=MAXLEN, value=tag2idx["O"], padding="post",
dtype="long", truncating="post")
len(tags)
###Output
_____no_output_____
###Markdown
The Bert model supports something called attention_mask, which is similar to the masking in keras. So here we create the mask to ignore the padded elements in the sequences.
###Code
attention_masks = [[float(i>0) for i in ii] for ii in input_ids]
#attention_masks
data1["tag"].values
###Output
_____no_output_____
###Markdown
Now we split the dataset, holding out 5% to validate the model.
###Code
tr_inputs, val_inputs, tr_tags, val_tags = train_test_split(input_ids, tags,
random_state=2018,test_size=0.05)
tr_masks, val_masks, _, _ = train_test_split(attention_masks, input_ids,
random_state=2018,test_size=0.05)
###Output
_____no_output_____
###Markdown
Since we’re operating in pytorch, we have to convert the dataset to torch tensors.
###Code
tr_inputs = torch.tensor(tr_inputs).to(torch.int64)
val_inputs = torch.tensor(val_inputs).to(torch.int64)
tr_tags = torch.tensor(tr_tags).to(torch.int64)
val_tags = torch.tensor(val_tags).to(torch.int64)
tr_masks = torch.tensor(tr_masks).to(torch.int64)
val_masks = torch.tensor(val_masks).to(torch.int64)
###Output
_____no_output_____
###Markdown
The last step is to define the dataloaders. We shuffle the data at training time with the RandomSampler and at test time we just pass them sequentially with the SequentialSampler.
###Code
train_data = TensorDataset(tr_inputs, tr_masks, tr_tags)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=BS)
valid_data = TensorDataset(val_inputs, val_masks, val_tags)
valid_sampler = SequentialSampler(valid_data)
valid_dataloader = DataLoader(valid_data, sampler=valid_sampler, batch_size=BS)
###Output
_____no_output_____
###Markdown
Setup the Bert model for finetuning
The pytorch-pretrained-bert package provides a BertForTokenClassification class for token-level predictions. BertForTokenClassification is a fine-tuning model that wraps BertModel and adds a token-level classifier on top of it. The token-level classifier is a linear layer that takes the last hidden state of the sequence as input. We load the pre-trained bert-base-uncased model and provide the number of possible labels.
###Code
model = BertForTokenClassification.from_pretrained("bert-base-uncased", num_labels=len(tag2idx))
# print("Model's state_dict:")
# for param_tensor in model.state_dict():
# print(param_tensor, "\t", model.state_dict()[param_tensor].size())
import os
try:
corpus = "/content/drive/My Drive/Colab Notebooks/bert_epoch_state.pt"
checkpoint = torch.load(corpus)
model.load_state_dict(checkpoint['model_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
except:
pass
###Output
/content/drive/My Drive/Colab Notebooks/bert_epoch_state.pt
###Markdown
Now we have to pass the model parameters to the CPU.
###Code
model.cpu()
###Output
_____no_output_____
###Markdown
Before we can start the fine-tuning process, we have to setup the optimizer and add the parameters it should update. A common choice is the Adam optimizer. We also add some weight_decay as regularization to the main weight matrices. If you have limited resources, you can also try to just train the linear classifier on top of Bert and keep all other weights fixed. This will still give you a good performance.
###Code
FULL_FINETUNING = True
if FULL_FINETUNING:
param_optimizer = list(model.named_parameters())
#print(param_optimizer)
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
else:
param_optimizer = list(model.classifier.named_parameters())
optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)
###Output
_____no_output_____
###Markdown
Finetune Bert
First we define some metrics we want to track while training. We use the f1_score from the seqeval package, and simple token-level accuracy comparable to the accuracy in keras.
###Code
pip install seqeval
from seqeval.metrics import f1_score
def flat_accuracy(preds, labels):
pred_flat = np.argmax(preds, axis=2).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
model.cuda(device)
###Output
_____no_output_____
###Markdown
Finally, we can fine-tune the model. A few epochs should be enough; the paper suggests 3-4 epochs.
###Code
epochs = 5
max_grad_norm = 1.0
model.cuda(device)
for epoch in trange(epochs, desc="Epoch"):
# TRAIN loop
model.train()
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(train_dataloader):
# add batch to gpu
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
# forward pass
loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
# backward pass
loss.backward()
# track train loss
tr_loss += loss.item()
nb_tr_examples += b_input_ids.size(0)
nb_tr_steps += 1
# gradient clipping
torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
# update parameters
optimizer.step()
model.zero_grad()
# print train loss per epoch
print("Train loss: {}".format(tr_loss/nb_tr_steps))
# VALIDATION on validation set
model.eval()
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
predictions , true_labels = [], []
for batch in valid_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask)
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
eval_loss = eval_loss/nb_eval_steps
print("Validation loss: {}".format(eval_loss))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
pred_tags = [tags_vals[p_i] for p in predictions for p_i in p]
valid_tags = [tags_vals[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
torch.save(model.state_dict(), '/content/drive/My Drive/NER_Disease/my_NER_model.h5')
model.load_state_dict(torch.load('/content/drive/My Drive/Colab Notebooks/bert_state_dict_partial.pt'))
checkpoint = torch.load('/content/drive/My Drive/Colab Notebooks/bert_epoch_state_full.pt')
model.load_state_dict(checkpoint['model_state_dict'])
epoch = checkpoint['epoch']
loss = checkpoint['loss']
def preprocess_test_data(data):
words = list(set(data["Word"].values))
words.append("ENDPAD")
n_words = len(words); n_words
#print(words[1:10])
tags = list(set(data["tag"].values))
n_tags = len(tags); n_tags
print(tags[0:n_tags])
getter = SentenceGetter(data)
sentences = [" ".join([s[0] for s in sent]) for sent in getter.sentences]
sentences[23]
labels = [[s[1] for s in sent] for sent in getter.sentences]
print(len(labels))
tags_vals = list(set(data["tag"].values))
tag2idx = {t: i for i, t in enumerate(tags_vals)}
print(tag2idx)
tokenized_texts = [tokenizer.tokenize(sent) for sent in sentences]
print(tokenized_texts)
input_ids = pad_sequences([tokenizer.convert_tokens_to_ids(txt) for txt in tokenized_texts],
maxlen=MAXLEN, dtype="long", truncating="post", padding="post")
input_ids[0]
tags = pad_sequences([[tag2idx.get(l) for l in lab] for lab in labels],
maxlen=MAXLEN, value=tag2idx["O"], padding="post",
dtype="long", truncating="post")
len(tags)
attention_masks = [[float(i>0) for i in ii] for ii in input_ids]
#attention_masks
tr_inputs, val_inputs, tr_tags, val_tags = train_test_split(input_ids, tags,
random_state=2019, test_size=0.9)
tr_masks, val_masks, _, _ = train_test_split(attention_masks, input_ids,
random_state=2019, test_size=0.9)
tr_inputs = torch.tensor(tr_inputs).to(torch.int64)
val_inputs = torch.tensor(val_inputs).to(torch.int64)
tr_tags = torch.tensor(tr_tags).to(torch.int64)
val_tags = torch.tensor(val_tags).to(torch.int64)
tr_masks = torch.tensor(tr_masks).to(torch.int64)
val_masks = torch.tensor(val_masks).to(torch.int64)
test_data = TensorDataset(val_inputs, val_masks, val_tags)
test_sampler = SequentialSampler(test_data)
    test_dataloader = DataLoader(test_data, sampler=test_sampler, batch_size=BS)
return test_dataloader
data_test = pd.read_csv("/content/drive/My Drive/NER_Disease/train.csv", encoding="latin1")
data_test = data_test.fillna(method="ffill")
data_test.tail(10)
data_sample=data_test.sample(n=1000,random_state=1234)
data_sample.tail(10)
test_dataloader= preprocess_test_data(data_sample)
###Output
['B-indications', 'O', 'I-indications']
995
{'B-indications': 0, 'O': 1, 'I-indications': 2}
[['significantly'], ['0', '.', '64'], ['and'], ['def', '##ae', '##cation'], ['abnormal', '##ity'], [','], ['.'], ['hybrid', '-', 'cell'], ['at'], ['of'], ['differential'], ['organ', '##ome', '##tal', '##lic'], ['a'], ['field'], ['paint', '##brush'], ['neuroscience'], ['these'], ['fl', '##a', '.'], ['complex'], ['species'], ['be'], ['had'], ['('], ['ve', '##rte', '##brates'], ['lines'], ['p'], ['also'], ['('], [','], ['of'], ['('], ['homosexuality'], ['nucleus'], ['cellular'], ['overall'], ['do'], ['in'], ['.'], ['were'], [','], ['have'], ['to'], ['treatment'], ['and'], ['pl', '##oid', '##y'], ['with'], ['also'], ['acid'], ['group'], ['fertile'], ['may'], ['without'], ['2018'], ['find'], ['der'], ['yet'], ['both'], ['.'], ['at'], ['and'], ['self', '-', 'administration'], ['poly', '##p'], ['up'], ['er', '##ite', '##mat', '##oso'], ['do', '##pa', '##mine', '##rg', '##ic'], ['multiple'], ['item', '-', 'scale'], ['amounts'], ['uses'], ['and'], ['tests'], ['using'], ['in'], ['amp', '##c'], ['in'], ['im', '##mis', '##ci', '##ble'], [')'], ['looking'], ['behavioral'], ['('], ['who'], ['dissemination'], ['tuberculosis'], ['of'], ['transformed'], ['of'], [','], ['ultra', '##sonic', '/', 'p', '##ne', '##umatic'], ['baby', '-', 'feeding'], ['interviewed'], ['on'], ['is'], ["'", 's'], ['as'], ['do', '##bu', '##tam', '##ine'], ['album', '##in', '##uria'], ['wind'], ['reconstruction'], ['rear', '##rang', '##ement', '##s'], ['this'], [','], ['ap', '##o'], [')'], ['elevated'], ['cyclic'], ['of'], ['the'], ['reported'], ['('], ['location'], ['0', ',', '85'], ['ge', '##rmin', '##al'], ['disease'], ['pregnant'], ['of'], ['as'], ['hyper', '##al', '##ges', '##ia'], ['research'], ['of'], ['study'], ['media', '##ting'], ['cells'], ['35', '/', '69'], ['motif'], ['2'], ['carried'], ['to'], ['experimental', '##ly'], [','], ['proton'], ['of'], ['cancer'], ['cellular'], [','], ['.'], ['ac', '##l', '-', 'intact'], ['phases'], ['.'], [','], ['en', '##cam', '##in', '##ham', '##ent', '##o'], ['however'], ['between'], ['treatment'], ['technique'], ['b'], ['career'], ['used'], ['analysis'], ['que'], ['different'], ['that'], ['0', '.', '03'], [','], ['intervention', '##al', '-', 'based'], ['2001'], ['ec', '##top', '##ic'], ['might'], ['.'], ['the'], ['por', '##cine'], ['review'], ['('], ['mc', '##od'], ['blast', '##ula'], ['role'], [']'], ['material'], ['grade'], ['these'], ['a'], ['and'], ['induced'], ['an'], ['undergoing'], ['he', '##pa', '##to', '##cellular'], ['of'], ['contributing'], ['the'], ['for'], ['at'], ['significantly'], ['the'], ['rec', '##urrent'], ['and'], [','], ['models'], ['that'], ['of'], ['check', '##off'], [','], ['.'], ['group'], ['supporting'], ['national'], ['order'], ['european'], ['with'], ['the'], [')'], [','], ['available'], ['however'], ['effectiveness'], ['in'], [','], ['from'], ['('], ['7', '.', '1'], ['syn', '##ap', '##tic'], ['%'], ['ga', '##ting'], [')'], ['au', '##reus'], ['9', '.', '1'], ['gel', '##ech', '##iidae'], ['stopping'], ['de'], ['which'], ['mg'], ['syn', '##m'], ['to'], ['dc'], ['telephone'], ['matched'], ['surface'], ['.'], ['practical'], ['at'], ['survival'], [')'], ['.'], ['('], ['detector'], ['phases'], ['co', '##fa', '##ctor'], ['their'], ['`', '`'], ['formation'], ['in'], ['on'], ['of'], ['used'], ['was'], ['-', 'ad', '##t', '##n'], ['we'], ['numb'], ['a'], ['.'], ['the'], ['message'], ['cells'], ['new'], ['tr', '##af', '##imo', '##w'], ['of'], ['and'], ['a'], ['trial'], ['carried'], ['a'], ['compliance'], ['inhibitors'], [')'], ['long', '-', 'term'], ['we'], ['and'], 
['.'], ['treatment'], ['.'], ['with'], ['drugs'], ['the'], ['.'], ['('], ['nigeria'], ['improvement'], ['these'], ['dental'], ['corn', '##ea'], ['this'], ['the'], ['ass', '##ay'], ['lines'], ['show'], ['dental'], ['were'], ['rec', '##a'], ['('], ['pi', '##3', '##k'], [','], ['efficient'], ['.'], ['in'], ['experiments'], ['volumes'], ['.'], ['bio', '##ps', '##ies'], ['and'], ['the'], ['conclude'], ['from'], ['the'], ['10'], ['to'], ['.'], ['dependence'], ['was'], ['of'], ['investigating'], ['p', '##yr', '##id', '##yla', '##minated'], ['on'], ['type'], ['more'], ['camp'], ['am', '##bula', '##tory'], ['and'], ['the'], [','], ['h', '##la', '-', 'dr'], ['in', '##fect'], ['typing'], ['on'], ['b'], ['activity'], ['discussed'], ['a'], ['of'], ['immediate'], ['risk'], ['intra', '##per', '##ito', '##nea', '##l'], ['emphasis'], ['with'], ['are'], ['in'], ['of'], ['.'], ['.'], ['12'], ['permanent'], ['au', '##c'], ['.'], ['my', '##oca', '##rdial'], ['theories'], ['receptors'], ['rye', '##grass'], ['designed'], [','], ['a'], ['first', '-', 'order'], ['of'], ['in'], ['were'], ['a'], ['a'], ['completely'], ['indicated'], ['this'], ['and'], ['the'], ['from'], ['pit'], ['('], ['%'], ['.'], ['situ', '##aa', '§', 'a', '##µ', '##es'], ['p'], ['subunit', '##s'], ['similar'], ['and'], ['these'], ['allows'], ['ur', '##ina', '##ry'], [','], ['strongly'], ['95'], ['pa', '##pi', '##llo', '##ma', '##virus'], ['2003'], ['hearing'], ['the'], ['cord'], ['mono', '-'], ['a'], ['and'], ['role'], [','], ['all'], ['into'], ['similar'], ['['], ['of'], ['('], ['neither'], ['the'], [')'], ['ul', '##cer'], ['thy', '##ro', '##xin'], ['61'], ['tu', '##mour'], ['registers'], ['is'], ['infections'], ['taken'], ['profile'], ['gr', '##iff'], ['l', '##hr', '##h', '-', 'r'], [';'], ['be'], ['foot'], ['was'], ['properties'], ['.'], ['cc', '##aa', '##t'], ['anti', '##sul', '##fat', '##ide'], ['qualities'], ['os', '##si', '##fication'], ['with'], ['u', '##u', '##u'], ['chinese'], ['development'], ['a'], ['completely'], ['ce', '##ti', '##l'], ['was'], ['and'], [','], ['presented'], ['resistance'], ['supports'], ['are'], ['rice'], ['light'], ['biology'], ['d', '##ys', '##pha', '##gia'], ['20'], ['ser', '##one', '##gative'], ['reversed'], ['a'], ['after'], ['although'], ['of'], ['a'], ['cells'], ['the'], ['purposes'], ['a'], ['building'], ['older'], ['the'], ['scale'], ['do', '##rso', '##lateral'], ['age'], ['elevated'], ['or'], ['.'], ['ph', '##arm', '##aco', '##logical'], ['the'], ['when'], ['any'], ['importance'], ['organization'], ['patients'], [')'], ['inserted'], ['1'], ['the'], ['2', '+'], [','], ['maize'], ['measured'], [','], ['%'], ['tal', '##ar'], ['protein'], ['relief'], ['gi', '##sts'], [')'], ['level'], ['isolated'], ['base', '##plate'], ['with'], ['is'], ['the'], ['pre', '##nat', '##al'], ['list', "'", "'"], ['practice'], ['0', '.', '57', '##3'], ['%'], ['in'], ['absorption'], ['reviewed'], ['evaluation'], ['was'], ['outcome'], ['.'], ['men'], ['in'], ['semi', '-', 'quantitative'], ['interaction'], ['.'], ['using'], ['sea', '##col', '##e'], ['has'], ['mc', '##i'], ['in'], [')'], ['fluid'], ['citrus'], ['200'], [','], ['auto', '##im', '##mun', '##ity'], ['state', 'and'], ['since'], [')'], [':'], ['es', '##rd'], ['inhibition'], ['in'], ['levels'], ['and'], [','], ['universally'], ['explore'], ['surrounding'], ['original'], ['on'], ['release'], ['the'], ['word'], ['survival'], [';'], ['sp', '##ut', '##um'], ['odds'], ['fibre'], ['suggests'], ['trans', '##du', '##cer'], ['6'], ['.'], ['into'], ['investigated'], ['microscopy'], 
['up'], ['by'], ['aa', '##©', '##rea'], ['aspects'], ['reference'], ['is'], ['current'], ['of'], ['and'], [','], ['gen', '##omic'], ['24'], ['administered'], ['b'], ['through'], ['normal'], ['with'], ['frequency'], ['-', 'cat', '##aly', '##zed'], ['predict', '##or'], ['not'], ['2', '-', 'fold'], ['processes'], ['directly'], ['disability'], ['based'], ['oxide'], ['of'], ['viruses'], ['the'], [','], ['.'], ['micro', '##tub', '##ule'], ['effect'], ['on'], ['positive'], [')'], ['.'], ['of'], ['distributed'], ['factors'], ['discussions'], ['learning', '-', 'related'], ['sperm', '##ato', '##genic'], ['patients'], ['rec', '##ur', '##rence'], ['and'], ['he', '##xa', '##ch', '##lor', '##obe', '##nz', '##ene'], ['153'], ['cap', '##illa', '##ry'], ['to'], ['glucose'], ['reviewed'], ['pd', '##gf', '-', 'b'], ['prevent'], ['una'], ['increased'], ['pre', '##tre', '##ated'], ['effect'], ['dc', '##g'], ['subjected'], ['students'], ['.'], ['of'], ['reflex', '##a'], ['exclusion'], ['at'], ['this'], ['drinking'], ['measure'], ['cell'], ['in'], ['animals'], ['on'], ['in'], ['particles'], ['os', '##te', '##omy', '##eli', '##tis'], ['day', '-', 'time'], ['xi'], ['appropriate'], ['residents'], ['concentrations'], ['secondary'], ['real', '##ign', '##ing'], ['the'], ['of'], ['of'], ['our'], ['reporter'], ['statistical'], ['of'], ['cl', '##in'], [')'], ['also'], ['fitness'], ['bleeding'], ['therapy'], ['identification'], ['aroma', '##tase'], ['samples'], ['dona', '##ting'], ['per', '##me', '##ability'], ['%'], ['challenges'], ['factors'], ['in'], ['it'], ['24'], ['yang'], ['and'], ['.'], ['d', '##2'], ['boys'], ['our'], ['of'], ['in'], ['the'], [','], ['and'], ['might'], ['ramp'], ['are'], [')'], ['in'], ['hc', '##v'], ['services'], ['that'], ['rub', '##ella'], ['a'], ['the'], ['vascular'], ['swelling'], ['developed'], [')'], ['these'], ['both'], ['also'], ['i', '.', 'e', '.'], ['('], ['efficiency'], ['instability'], ['of'], ['advanced', '/', 'rec', '##urrent'], ['transplant'], ['black'], ['assessed'], ['the'], ['by'], ['.'], ['32'], ['should'], ['damage'], ['opinion'], ['studying'], ['quasi', '-', 'elastic'], ['65', '.', '4'], ['ps', '##s'], ['regions'], ['and'], ['we'], ['hoped'], ['homo', '##tet', '##ram', '##er'], ['vi', '##ment', '##in'], ['in'], ['comparisons'], ['review'], ['potent'], ['titanium'], ['simple'], ['.'], ['the'], ['hs', '##c'], ['.'], ['diagnosis'], ['ratio'], ['.'], ['.'], ['more'], ['est', '##imating'], ['presented'], ['their'], ['signal', '-', 'regulated'], ['ligand'], ['('], ['per', '##me', '##ability'], ['of'], ['the'], ['migration'], [','], ['rate'], ['%'], ['video', '##dis', '##c'], ['valley'], ['complement'], ['concern'], ['exposure'], ['administered'], ['and'], ['.'], ['bitter'], ['in'], ['outcomes'], ['preference'], ['labelled'], ['.'], ['that'], ['modification'], [','], [','], ['for'], ['cam'], ['the'], ['intro', '##n'], ['german'], ['to'], ['at'], ['a'], ['.'], ['challenge'], ['using'], ['rose'], ['micro', '##dis', '##se', '##cted'], [','], [','], ['.'], ['base'], ['cells'], ['predominantly'], ['this'], ['are'], ['de'], ['for'], ['4', '-', 'y', '##r'], ['chang'], ['a'], ['in'], ['displaced'], ['increase'], ['expression'], ['other'], ['short'], ['were'], ['hr'], ['that'], ['vent', '##ric', '##ular'], ['the'], ['%'], ['the'], ['from'], ['of'], ['tissues'], ['lip', '##osa', '##rco', '##ma'], ['com', '##t', 'with'], ['and'], ['the'], ['are'], ['cho', '##les', '##tat', '##ic'], ['slightly'], ['infection'], ['of'], ['125'], ['('], ['.'], ['did'], ['survival'], ['pro', '##kin', '##etic'], 
['of'], ['sustained'], ['annual'], ['partly'], ['serum'], ['fluorescent'], ['flu', '##ma', '##zen', '##il'], [')'], ['not'], ['.'], ['e'], ['relation'], ['sociological'], ['not'], ['('], ['se', '##dent', '##ary'], ['the'], ['('], ['4'], ['('], ['from'], ['markers'], ['from'], ['following'], ['on'], ['development'], [','], ['data'], [','], ['8', '.', '5', '/', '100', ',', '000', '/', 'y', '##r'], ['a'], [','], ['when'], ['complete'], ['usually'], ['our'], ['mice'], ['the'], ['understanding'], ['however'], ['an'], ['especially'], ['auto', '##im', '##mun', '##e'], ['newborn', '##s'], ['mimic', '##king'], ['en', '##tre'], ['effects'], ['showing'], ['5'], ['in'], ['from'], ['['], ['pregnancy'], ['ii'], ['inhibit', '##ing'], ['by'], ['and'], ['to'], ['are'], ['produced', 'of'], ['a'], ['programme'], ['in'], ['injury'], ['were'], ['double'], ['blocks'], ['for'], ['cells'], ['non', '-', 'st', '-', 'elevation'], ['per', '##se', '##cu', '##cia', '##³', '##n'], ['which'], ['was'], ['.', '.', '.'], ['how'], ['either'], ['inform'], ['143'], ['three'], ['quality'], ['recommended'], ['lines'], ['of'], ['secondary'], ['major'], ['was'], ['or'], ['oxidation'], ['the'], ['with'], ['agent', 'use'], ['and'], ['response'], ['fault'], ['re', '##ani', '##mation'], ['we'], ['15'], ['met', '##hani', '##mini', '##um'], ['parameters'], ['and'], ['is'], [','], ['a'], ['than'], ['indicated'], [')'], ['actions'], ['architecture'], ['detected'], ['gut'], ['resonance'], [';'], ['the'], ['function'], ['assign', '##ing'], ['.'], ['was'], ['to'], ['with'], ['1', '-'], ['the'], ['dia', '##bet', '##ic'], [','], ['e', '/', 'at'], ['pro', '##te', '##omic'], ['.'], ['negatively'], ['and'], ['detect', '##able'], ['.'], ['of'], ['-'], ['of'], ['weeks'], ['rabbits'], ['developed'], ['brass', '##ica'], ['g', '##2', '/', 'm'], ['patients'], [','], ['palm'], ['fe', '##rti', '##lization'], ['suggested'], ['in'], ['helped'], ['potentially'], [','], ['-', '3'], [','], ['lens'], ['cells'], ['include'], ['in'], [','], ['anti', '##co', '##ag', '##ula', '##nts'], ['and'], ['provides'], ['poly'], ['found'], ['cross', '-', 'linked'], ['system'], ['experiment'], ['d', '##pp', '##1', '/', 'cat', '##he', '##ps', '##in'], ['prescription'], ['conducted'], ['ve', '##sic', '##ular'], ['of'], ['un'], [','], ['single', '-', 'centre'], ['pixels'], ['blend'], ['0', '.', '01'], ['generic'], ['does'], ['of'], ['was'], ['as'], ['of'], ['multi', '##mers'], ['sequence'], ['into'], ['.'], ['cad'], ['some'], ['pc', '##c'], ['ng', '/', 'ml'], ['scalp'], ['and'], ['before']]
###Markdown
Evaluate The Model
###Code
model.eval()
predictions = []
true_labels = []
eval_loss, eval_accuracy = 0, 0
nb_eval_steps, nb_eval_examples = 0, 0
for batch in test_dataloader:
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
with torch.no_grad():
tmp_eval_loss = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask, labels=b_labels)
logits = model(b_input_ids, token_type_ids=None,
attention_mask=b_input_mask)
logits = logits.detach().cpu().numpy()
predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
label_ids = b_labels.to('cpu').numpy()
true_labels.append(label_ids)
tmp_eval_accuracy = flat_accuracy(logits, label_ids)
eval_loss += tmp_eval_loss.mean().item()
eval_accuracy += tmp_eval_accuracy
nb_eval_examples += b_input_ids.size(0)
nb_eval_steps += 1
pred_tags = [[tags_vals[p_i] for p_i in p] for p in predictions]
valid_tags = [[tags_vals[l_ii] for l_ii in l_i] for l in true_labels for l_i in l ]
print("Validation loss: {}".format(eval_loss/nb_eval_steps))
print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
print("Validation F1-Score: {}".format(f1_score(pred_tags, valid_tags)))
#print(predictions)
###Output
_____no_output_____ |
Chapter 5- Encryption/Vignerre Cipher.ipynb | ###Markdown
Vigenère Cipher
###Code
alphabet="ABCDEFGHIJKLMNOPQRSTUVWXYZABCDEFGHIJKLMNOPQRSTUVWXYZ"
stringtoencrypt= input("Please enter message ")
stringtoencrypt=stringtoencrypt.upper()
keyword= input("Please enter keyword ")
keyword=keyword.upper()
int(len(keyword))
0%int(len(keyword))
encryptedstring=''
keywordstring=''
count=0
for currentcharacter in stringtoencrypt:
print('---- ENCRYPTING Letter', count)
position = alphabet.find(currentcharacter)
shiftamount = alphabet.find(keyword[count%int(len(keyword))])
newposition = position + shiftamount
keywordstring = keywordstring + keyword[count%int(len(keyword))]
encryptedstring = encryptedstring + alphabet[newposition]
print('Original:', stringtoencrypt[0:count+1])
print('Keyword: ',keywordstring)
    print('Encrypted:',encryptedstring)
count=count+1
encryptedstring
decryptedstring=""
keywordstring=''
count=0
for currentcharacter in encryptedstring:
print('---- DECRYPTING Letter', count)
position = alphabet.find(currentcharacter)
shiftamount = alphabet.find(keyword[count%int(len(keyword))])
newposition = position - shiftamount
keywordstring = keywordstring + keyword[count%int(len(keyword))]
decryptedstring = decryptedstring + alphabet[newposition]
print('Original:', encryptedstring[0:count+1])
print('Keyword: ',keywordstring)
    print('Decrypted:',decryptedstring)
count=count+1
decryptedstring
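
# A compact helper (a sketch, not part of the original walkthrough) that consolidates
# the two loops above; like them, it assumes the message contains only the letters A-Z.
def vigenere(message, keyword, decrypt=False):
    base = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    message, keyword = message.upper(), keyword.upper()
    result = ""
    for count, letter in enumerate(message):
        shift = base.find(keyword[count % len(keyword)])
        if decrypt:
            shift = -shift
        result += base[(base.find(letter) + shift) % 26]
    return result

# Round-trip check against the worked example in the recorded output below
# (silent when the asserts pass):
assert vigenere("JEOPARDY", "TOM") == "CSAIODWM"
assert vigenere("CSAIODWM", "TOM", decrypt=True) == "JEOPARDY"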
###Output
---- DECRYPTING Letter 0
Original: C
Keyword: T
Enrypted: J
---- DECRYPTING Letter 1
Original: CS
Keyword: TO
Enrypted: JE
---- DECRYPTING Letter 2
Original: CSA
Keyword: TOM
Enrypted: JEO
---- DECRYPTING Letter 3
Original: CSAI
Keyword: TOMT
Enrypted: JEOP
---- DECRYPTING Letter 4
Original: CSAIO
Keyword: TOMTO
Enrypted: JEOPA
---- DECRYPTING Letter 5
Original: CSAIOD
Keyword: TOMTOM
Enrypted: JEOPAR
---- DECRYPTING Letter 6
Original: CSAIODW
Keyword: TOMTOMT
Enrypted: JEOPARD
---- DECRYPTING Letter 7
Original: CSAIODWM
Keyword: TOMTOMTO
Enrypted: JEOPARDY
|
notebooks/D6_L2_Filtering/04_image_interest.ipynb | ###Markdown
Finding points of interest in an image
###Code
import numpy as np
import matplotlib.pyplot as plt
import skimage
import skimage.feature as sf
%matplotlib inline
def show(img, cmap=None):
cmap = cmap or plt.cm.gray
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.imshow(img, cmap=cmap)
ax.set_axis_off()
return ax
img = plt.imread('data/child.png')
show(img)
;
corners = sf.corner_harris(img[:, :, 0])
show(corners)
;
peaks = sf.corner_peaks(corners)
ax = show(img)
ax.plot(peaks[:, 1], peaks[:, 0], 'or', ms=4)
;
# The median defines the approximate position of
# the corner points.
ym, xm = np.median(peaks, axis=0)
# The standard deviation gives an estimation
# of the spread of the corner points.
ys, xs = 2 * peaks.std(axis=0)
xm, ym = int(xm), int(ym)
xs, ys = int(xs), int(ys)
show(img[ym - ys:ym + ys, xm - xs:xm + xs])
;
###Output
_____no_output_____ |
image/cassava-leaf-disease-classification/.ipynb_checkpoints/003_albumentations_smoothing-checkpoint.ipynb | ###Markdown
Data Augmentation
###Code
def get_train_transforms():
return Compose([
RandomResizedCrop(CFG.size, CFG.size),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
ShiftScaleRotate(p=0.5),
# JpegCompression(p=0.5),
HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
CoarseDropout(p=0.5),
Cutout(p=0.5),
ToTensorV2(p=1.0),
], p=1.)
def get_valid_transforms():
return Compose([
CenterCrop(CFG.size, CFG.size, p=1.),
Resize(CFG.size, CFG.size),
# HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
# RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
###Output
_____no_output_____
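###Markdown
A quick shape check of the augmentation pipeline on a random dummy image (a sketch, not part of the original notebook; it assumes numpy is available and that the CFG object with CFG.size is defined in the notebook's setup cell, which is not shown here).
###Code
import numpy as np
dummy = (np.random.rand(600, 800, 3) * 255).astype(np.uint8)
augmented = get_train_transforms()(image=dummy)
# ToTensorV2 returns a channels-first tensor of shape [3, CFG.size, CFG.size]
print(augmented['image'].shape)
###Output
_____no_output_____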
###Markdown
Data Loader
###Code
class ImageData(Dataset):
def __init__(self, df, data_dir, transform, output_label=True):
super().__init__()
self.df = df
self.data_dir = data_dir
self.transform = transform
self.output_label = output_label
def __len__(self):
return len(self.df)
def __getitem__(self, index):
img_name = self.df.iloc[index,0]
if self.output_label:
label = self.df.iloc[index,1]
img_path = os.path.join(self.data_dir, img_name)
image = plt.imread(img_path)
image = self.transform(image=image)
        # label smoothing is applied later in the loss function, not here
if self.output_label == True:
return image, label
else:
return image
###Output
_____no_output_____
###Markdown
CrossEntropyLoss
###Code
class SmoothCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=CFG.weights, reduction='mean', smoothing=CFG.smoothing):
super().__init__(weight=weight, reduction=reduction)
self.smoothing = smoothing
self.weight = weight
self.reduction = reduction
@staticmethod
def _smooth_one_hot(targets:torch.Tensor, n_classes:int, smoothing=CFG.smoothing):
assert 0 <= smoothing < 1
with torch.no_grad():
targets = torch.empty(size=(targets.size(0), n_classes),
device=targets.device) \
.fill_(smoothing /(n_classes-1)) \
.scatter_(1, targets.data.unsqueeze(1), 1.-smoothing)
return targets
def forward(self, inputs, targets):
targets = SmoothCrossEntropyLoss._smooth_one_hot(targets, inputs.size(-1),
self.smoothing)
lsm = F.log_softmax(inputs, -1)
if self.weight is not None:
lsm = lsm * self.weight.unsqueeze(0)
loss = -(targets * lsm).sum(-1)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
###Output
_____no_output_____
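###Markdown
A quick sanity check of the label-smoothing target construction (a sketch, not part of the original notebook; it assumes torch and the CFG object are defined in the missing setup cell, so that the class above can be constructed).
###Code
# With smoothing=0.1 and 3 classes, the true class gets 0.9 and the other
# two classes share the remaining 0.1 equally (0.05 each).
targets = torch.tensor([0, 2])
SmoothCrossEntropyLoss._smooth_one_hot(targets, n_classes=3, smoothing=0.1)
# expected:
# tensor([[0.9000, 0.0500, 0.0500],
#         [0.0500, 0.0500, 0.9000]])
###Output
_____no_output_____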
###Markdown
Helper Function
###Code
def Plot_Model_History(loss_tra_li,acc_tra_li,loss_val_li,acc_val_li):
plt.figure(figsize=(12, 5))
plt.subplot(2, 2, 1)
plt.plot(loss_tra_li, label="train_loss")
plt.plot(loss_val_li, label="val_loss")
plt.legend()
plt.subplot(2, 2, 2)
plt.plot(acc_tra_li, label="train_acc")
plt.plot(acc_val_li, label="val_acc")
plt.legend()
plt.show()
def Save_histroy(init=False):
filename = "./log/" + directory_name
if init:
with open(filename+'.txt','w') as f:
f.write("filename: 003_albumentations_smoothing.ipynb, model: {}, lr: {}, weights: {}. batchsize: {}, kfold: {}, epoch: {}, weght_decay: {}, smoothing: {}\n"
.format(CFG.model_name,CFG.learning_rate,CFG.weights, CFG.batch_size,CFG.n_split,CFG.num_epochs,CFG.weight_decay,CFG.smoothing))
else:
with open(filename+'.txt',mode='a') as f:
f.write('\nkfold: {}, epoch: {}. train_loss: {}, train_acc: {}. val_loss: {}, val_acc: {}'
.format(fold_index+1, epoch + 1, train_loss, train_acc, val_loss, val_acc))
def Confusion_Matrix(train_true_li,train_pred_li,val_true_li,val_pred_li):
plt.figure(figsize=(10,5))
train_cm = confusion_matrix(train_true_li,train_pred_li)
val_cm = confusion_matrix(val_true_li,val_pred_li)
plt.subplot(1,2,1)
sns.heatmap(train_cm, annot=True, cmap='Blues', cbar=False,fmt="3d")
plt.xlabel("taget")
plt.ylabel("predict")
plt.title("train")
plt.subplot(1,2,2)
sns.heatmap(val_cm, annot=True, cmap='Blues', cbar=False,fmt="3d")
plt.title("val")
plt.xlabel("taget")
plt.ylabel("predict")
plt.show()
def Define_Model():
m = EfficientNet.from_pretrained(CFG.model_name)
num_ftrs = m._fc.in_features
m._fc = nn.Linear(num_ftrs, CFG.target_size)
return m.to(device)
# n_splitsでKの数を指定
folds = StratifiedKFold(n_splits=CFG.n_split).split(np.arange(df_train.shape[0]), df_train["label"].values)
Save_histroy(init=True)
for fold_index, (train_index,val_index) in enumerate(folds):
model = Define_Model()
scaler = GradScaler()
optimizer = optim.Adam(model.parameters(), lr=CFG.learning_rate)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=CFG.T_0, T_mult=1, eta_min=CFG.min_lr, last_epoch=-1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer,gamma=0.5)
train = df_train.iloc[train_index].reset_index(drop=True)
train_data = ImageData(df = train, data_dir = train_dir, transform = get_train_transforms())
train_loader = DataLoader(dataset = train_data, batch_size = CFG.batch_size, num_workers=CFG.num_workers, pin_memory=True,shuffle=True)
train_criterion = SmoothCrossEntropyLoss().to(device)
val = df_train.iloc[val_index,:].reset_index(drop=True)
val_data = ImageData(df = val, data_dir = train_dir, transform = get_valid_transforms())
val_loader = DataLoader(dataset = val_data, batch_size = CFG.batch_size, num_workers=CFG.num_workers, pin_memory=True,shuffle=True)
val_criterion = SmoothCrossEntropyLoss().to(device)
train_epoch_log,val_epoch_log = [],[]
train_acc_log,val_acc_log = [],[]
train_taget_li,val_taget_li = [],[]
train_pred_li,val_pred_li = [],[]
torch.backends.cudnn.benchmark = True
for epoch in range(CFG.num_epochs):
train_total = 0
train_correct = 0
train_loss_sum = 0
val_total = 0
val_correct = 0
val_loss_sum = 0
# train
model.train()
for idx, (data, target) in enumerate(train_loader):
data, target = data['image'].to(device).float(), target.to(device).long()
with autocast():
output = model(data)
loss = train_criterion(output, target)
loss.backward()
# scaler.scale(loss).backward()
train_loss_sum += loss.item()
train_total += target.size(0)
_,predicted = output.max(1)
train_correct += predicted.eq(target).sum().item()
train_taget_li.extend(target.to('cpu').detach().numpy().copy().tolist())
train_pred_li.extend(predicted.to('cpu').detach().numpy().copy().tolist())
if ((idx + 1) % CFG.accum_iter == 0) or ((idx + 1) == len(train_loader)):
optimizer.step()
# scaler.step(optimizer)
# scaler.update()
optimizer.zero_grad()
scheduler.step()
train_loss = train_loss_sum / len(train_loader)
train_epoch_log.append(train_loss)
train_acc = 100.0 * train_correct/train_total
train_acc_log.append(train_acc)
# val
model.eval()
with torch.no_grad():
for idx, (data, target) in enumerate(val_loader):
data, target = data['image'].to(device), target.to(device)
output = model(data)
loss = val_criterion(output, target)
val_loss_sum += loss.item()
val_total += target.size(0)
_,predicted = output.max(1)
val_correct += (predicted == target).sum().item()
val_taget_li.extend(target.to('cpu').detach().numpy().copy().tolist())
val_pred_li.extend(predicted.to('cpu').detach().numpy().copy().tolist())
val_loss = val_loss_sum / len(val_loader)
val_epoch_log.append(val_loss)
val_acc = 100.0 * val_correct/val_total
val_acc_log.append(val_acc)
print('Kfold: {} - Epoch: {} - Train_Loss: {:.6f} - Train_Acc: {:.4f} - Val_Loss: {:.6f} - Val_Acc: {:4f}'.format(fold_index+1, epoch + 1, train_loss, train_acc, val_loss, val_acc))
Save_histroy(init=False)
if (val_loss < CFG.checkpoint_thres_loss) & (val_acc > CFG.checkpoint_thres_acc):
CFG.checkpoint_thres_loss = val_loss
CFG.checkpoint_thres_acc = val_acc
path = create_directory + "./"+ directory_name + '_'+CFG.model_name + "_kfold_"+str(fold_index+1)+"_epoch"+str(epoch+1)+ "_Acc_" + str(val_acc) + '.pth'
torch.save(model.state_dict(), path)
Plot_Model_History(train_epoch_log,train_acc_log,val_epoch_log,val_acc_log)
Confusion_Matrix(train_taget_li,train_pred_li,val_taget_li,val_pred_li)
###Output
Loaded pretrained weights for efficientnet-b5
Kfold: 1 - Epoch: 1 - Train_Loss: 1.212992 - Train_Acc: 62.5210 - Val_Loss: 1.329435 - Val_Acc: 60.549558
Kfold: 1 - Epoch: 2 - Train_Loss: 1.114438 - Train_Acc: 66.1105 - Val_Loss: 1.219338 - Val_Acc: 67.068555
Kfold: 1 - Epoch: 3 - Train_Loss: 1.091065 - Train_Acc: 68.0454 - Val_Loss: 1.234240 - Val_Acc: 64.320763
Kfold: 1 - Epoch: 4 - Train_Loss: 1.071190 - Train_Acc: 69.3073 - Val_Loss: 1.188114 - Val_Acc: 67.489135
Kfold: 1 - Epoch: 5 - Train_Loss: 1.067993 - Train_Acc: 69.5597 - Val_Loss: 1.203699 - Val_Acc: 65.498388
Kfold: 1 - Epoch: 6 - Train_Loss: 1.062166 - Train_Acc: 69.4195 - Val_Loss: 1.169408 - Val_Acc: 68.680779
Kfold: 1 - Epoch: 7 - Train_Loss: 1.068991 - Train_Acc: 69.4125 - Val_Loss: 1.173906 - Val_Acc: 67.797561
Kfold: 1 - Epoch: 8 - Train_Loss: 1.064754 - Train_Acc: 70.0505 - Val_Loss: 1.171145 - Val_Acc: 68.007851
Kfold: 1 - Epoch: 9 - Train_Loss: 1.059110 - Train_Acc: 70.0154 - Val_Loss: 1.193629 - Val_Acc: 66.465723
Kfold: 1 - Epoch: 10 - Train_Loss: 1.063731 - Train_Acc: 69.8191 - Val_Loss: 1.192090 - Val_Acc: 66.423665
Kfold: 1 - Epoch: 11 - Train_Loss: 1.061387 - Train_Acc: 69.8191 - Val_Loss: 1.179616 - Val_Acc: 67.433058
Kfold: 1 - Epoch: 12 - Train_Loss: 1.063997 - Train_Acc: 69.8962 - Val_Loss: 1.175827 - Val_Acc: 67.685406
Kfold: 1 - Epoch: 13 - Train_Loss: 1.058099 - Train_Acc: 70.0996 - Val_Loss: 1.188266 - Val_Acc: 66.732090
Kfold: 1 - Epoch: 14 - Train_Loss: 1.056910 - Train_Acc: 69.8191 - Val_Loss: 1.193049 - Val_Acc: 66.255432
Kfold: 1 - Epoch: 15 - Train_Loss: 1.059773 - Train_Acc: 70.0014 - Val_Loss: 1.175779 - Val_Acc: 67.531193
Kfold: 1 - Epoch: 16 - Train_Loss: 1.057949 - Train_Acc: 69.7560 - Val_Loss: 1.195973 - Val_Acc: 65.806813
Kfold: 1 - Epoch: 17 - Train_Loss: 1.061180 - Train_Acc: 70.0294 - Val_Loss: 1.175075 - Val_Acc: 67.895696
Kfold: 1 - Epoch: 18 - Train_Loss: 1.061859 - Train_Acc: 69.5878 - Val_Loss: 1.203178 - Val_Acc: 65.596523
Kfold: 1 - Epoch: 19 - Train_Loss: 1.060325 - Train_Acc: 69.7070 - Val_Loss: 1.167426 - Val_Acc: 68.274218
Kfold: 1 - Epoch: 20 - Train_Loss: 1.066245 - Train_Acc: 69.6859 - Val_Loss: 1.200698 - Val_Acc: 65.638581
Kfold: 1 - Epoch: 21 - Train_Loss: 1.064705 - Train_Acc: 69.7771 - Val_Loss: 1.184355 - Val_Acc: 67.419038
Kfold: 1 - Epoch: 22 - Train_Loss: 1.062950 - Train_Acc: 69.9103 - Val_Loss: 1.204139 - Val_Acc: 65.947007
Kfold: 1 - Epoch: 23 - Train_Loss: 1.064365 - Train_Acc: 70.1416 - Val_Loss: 1.183981 - Val_Acc: 67.461096
|
0018/DynamicalSystem/Revised.ipynb | ###Markdown
https://discourse.julialang.org/t/solving-difference-equation-part-2/67057
###Code
using DynamicalSystems, DelimitedFiles, .Threads, BenchmarkTools, ProgressMeter
## Components of a test DiscreteDynamicalSystem
function dds_constructor(u0 = [0.5, 0.7]; r=1.0, k=2.0)
return DiscreteDynamicalSystem(dds_rule, u0, [r, k], dds_jac)
end
## equations of motion:
function dds_rule(x, par, n)
r, k = par
a, mu, d = 5.0, 0.5, 0.2
dx = x[1]*exp(r*(1-x[1]/k)-x[2]/(a+x[1]^2))
dy = x[2]*exp(mu*x[1]/(a+x[1]^2)-d)
return @SVector [dx, dy]
end
## Jacobian:
function dds_jac(x, par, n)
r, k = par;
a = 5.0; mu = 0.5; d = 0.2;
J11 = exp(- x[2]/(x[1]^2 + a) - r*(x[1]/k - 1)) - x[1]*exp(- x[2]/(x[1]^2 + a) - r*(x[1]/k - 1))*(r/k - (2*x[1]*x[2])/(x[1]^2 + a)^2)
J12 = -(x[1]*exp(- x[2]/(x[1]^2 + a) - r*(x[1]/k - 1)))/(x[1]^2 + a)
J21 = x[2]*exp((mu*x[1])/(x[1]^2 + a) - d)*(mu/(x[1]^2 + a) - (2*mu*x[1]^2)/(x[1]^2 + a)^2)
J22 = exp((mu*x[1])/(x[1]^2 + a) - d)
return @SMatrix [J11 J12; J21 J22]
end
function traj!(tr, f, p, x0, T; Ttr=0)
x = x0
for i in 1:Ttr
x = f(x, p, i-1)
end
@inbounds tr[1] = x
for i in 1:T
@inbounds tr[i+1] = f(tr[i], p, Ttr+i-1)
end
tr
end
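# Quick check of traj! (illustrative parameter values, not from the original): it fills a
# preallocated vector with the T+1 states of the map after Ttr transient steps.
traj!(Vector{SVector{2,Float64}}(undef, 21), dds_rule, (1.0, 2.0), SVector(0.5, 0.7), 20; Ttr = 100)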
function traj2d!(tr1, tr2, f, p, x0, T; Ttr=0)
x = x0
for i in 1:Ttr
x = f(x, p, i-1)
end
@inbounds tr1[1], tr2[1] = x
for i in 1:T
@inbounds tr1[i+1], tr2[i+1] = f((tr1[i], tr2[i]), p, Ttr+i-1)
end
tr1, tr2
end
traj2d!(zeros(21), zeros(21), ((a, b), p, t) -> (b, a+b), nothing, (0, 1), 20)
function seqper_new(x; tol=1e-3) # function to calculate periodicity
n = length(x)
@inbounds for k in 2:(n ÷ 2 + 1)
if abs(x[k] - x[1]) ≤ tol
all(j -> abs(x[j] - x[j-k+1]) ≤ tol, k:n) && return k - 1
end
end
return n
end
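# Sanity check of seqper_new (illustrative values, not from the original): a sequence that
# repeats every 2 steps reports period 2, while a non-repeating one falls back to its length.
@show seqper_new([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
@show seqper_new([1.0, 2.0, 3.0, 4.0, 5.0])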
function meshh(x,y) # create a meshgrid of x and y
len_x = length(x);
len_y = length(y);
xmesh = Float64[]
ymesh = Float64[]
for i in 1:len_x
for j in 1:len_y
push!(xmesh, x[i])
push!(ymesh, y[j])
end
end
return xmesh, ymesh
end
x = 1:3
y = 1:4
@show meshh(x, y)
meshh(x, y) .== vec.(reim(complex.(x', y)))
function meshh!(xmesh, ymesh, x, y) # create a meshgrid of x and y
len_x, len_y = length(x), length(y)
for i in 1:len_x
for j in 1:len_y
xmesh[len_y*(i-1) + j] = x[i]
ymesh[len_y*(i-1) + j] = y[j]
end
end
end
x = 1:3
y = 1:4
xmesh = Vector{Float64}(undef, length(x)*length(y))
ymesh = Vector{Float64}(undef, length(x)*length(y))
meshh!(xmesh, ymesh, x, y)
@show xmesh, ymesh
meshh(x, y) .== (xmesh, ymesh)
function isoperiodic_test_org(ds, par_area::Matrix{Float64}, NIter::Int64,
nxblock::Int64, nyblock::Int64, NTr::Int64, xpts::Int64, ypts::Int64)
p1_st = par_area[1]; # beginning of first parameter
p1_nd = par_area[2]; # end of first parameter
p2_st = par_area[3]; # beginning of second parameter
p2_nd = par_area[4]; # end of second parameter
p1_block = range(p1_st, p1_nd, length=nxblock + 1); # divide par_area in blocks
p2_block = range(p2_st, p2_nd, length=nyblock + 1); # divide par_area in blocks
l_bipar = xpts * ypts; # total number of pts in each block
total_blocks = nxblock * nyblock # total number of blocks
sol_last = Array{Float64,2}(undef, NIter + 1, l_bipar); # stores x values of solution for l_bipar (r,k) paris
dss = [deepcopy(ds) for _ in 1:nthreads()]
periods = Vector{Int}(undef, l_bipar * total_blocks)
par1mesh = Vector{Float64}(undef, l_bipar * total_blocks)
par2mesh = Vector{Float64}(undef, l_bipar * total_blocks)
prog = Progress(total_blocks)
for ii in 1:nxblock
par1range = range(p1_block[ii], p1_block[ii+1], length=xpts+1)[1:end-1]
for jj in 1:nyblock
par2range = range(p2_block[jj], p2_block[jj+1], length=ypts+1)[1:end-1]
Threads.@threads for i = 1:xpts
tid = threadid()
for j = 1:ypts
set_parameter!(dss[tid], [par1range[i], par2range[j]]) # change the parameter values
tr = trajectory(dss[tid], NIter; Ttr=NTr) # find the solution
sol_last[:, ypts*(i-1)+j] = tr[:,1] # store x-values
end
end
a = l_bipar*(nyblock*(ii-1)+jj-1)
periods[a+1:a+l_bipar] = seqper_new.(eachcol(sol_last), tol=0.001) # seqper_new calculates periodicity
par1mesh[a+1:a+l_bipar], par2mesh[a+1:a+l_bipar] = meshh(par1range, par2range) # meshgrid of parameter values
next!(prog)
end
end
par1mesh, par2mesh, periods
end
function isoperiodic_test_rev1(ds, par_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
p1_st = par_area[1] # beginning of first parameter
p1_nd = par_area[2] # end of first parameter
p2_st = par_area[3] # beginning of second parameter
p2_nd = par_area[4] # end of second parameter
p1_block = range(p1_st, p1_nd, length = nxblock + 1) # divide par_area in blocks
p2_block = range(p2_st, p2_nd, length = nyblock + 1) # divide par_area in blocks
l_bipar = xpts * ypts # total number of pts in each block
total_blocks = nxblock * nyblock # total number of blocks
sol_last = Matrix{Float64}(undef, NIter + 1, l_bipar) # stores x values of solution for l_bipar (r,k) paris
param = [zeros(2) for _ in 1:nthreads()]
dss = [deepcopy(ds) for _ in 1:nthreads()]
periods = Vector{Int}(undef, l_bipar * total_blocks)
par1mesh = Vector{Float64}(undef, l_bipar * total_blocks)
par2mesh = Vector{Float64}(undef, l_bipar * total_blocks)
prog = Progress(total_blocks)
@inbounds for ii in 1:nxblock
par1range = range(p1_block[ii], p1_block[ii+1], length=xpts+1)[1:end-1]
for jj in 1:nyblock
par2range = range(p2_block[jj], p2_block[jj+1], length=ypts+1)[1:end-1]
Threads.@threads for i = 1:xpts
tid = threadid()
@inbounds for j = 1:ypts
param[tid] .= (par1range[i], par2range[j])
set_parameter!(dss[tid], param[tid]) # change the parameter values
tr = trajectory(dss[tid], NIter; Ttr=NTr) # find the solution
sol_last[:, ypts*(i-1)+j] .= first.(tr.data) # store x-values
end
end
a = l_bipar*(nyblock*(ii-1)+jj-1)
periods[a+1:a+l_bipar] .= seqper_new.(eachcol(sol_last), tol=0.001) # seqper_new calculates periodicity
@views meshh!(par1mesh[a+1:a+l_bipar], par2mesh[a+1:a+l_bipar], par1range, par2range) # meshgrid of parameter values
next!(prog)
end
end
par1mesh, par2mesh, periods
end
function isoperiodic_test_rev2_old(init, par_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
u0 = SVector{2}(init)
p1_st = par_area[1] # beginning of first parameter
p1_nd = par_area[2] # end of first parameter
p2_st = par_area[3] # beginning of second parameter
p2_nd = par_area[4] # end of second parameter
p1_block = range(p1_st, p1_nd, length = nxblock + 1) # divide par_area in blocks
p2_block = range(p2_st, p2_nd, length = nyblock + 1) # divide par_area in blocks
l_bipar = xpts * ypts # total number of pts in each block
total_blocks = nxblock * nyblock # total number of blocks
sol_last = Matrix{Float64}(undef, NIter + 1, l_bipar) # stores x values of solution for l_bipar (r,k) paris
periods = Vector{Int}(undef, l_bipar * total_blocks)
par1mesh = Vector{Float64}(undef, l_bipar * total_blocks)
par2mesh = Vector{Float64}(undef, l_bipar * total_blocks)
tr = [Vector{SVector{2, Float64}}(undef, NIter + 1) for _ in 1:nthreads()]
prog = Progress(total_blocks)
@inbounds for ii in 1:nxblock
par1range = range(p1_block[ii], p1_block[ii+1], length=xpts+1)[1:end-1]
for jj in 1:nyblock
par2range = range(p2_block[jj], p2_block[jj+1], length=ypts+1)[1:end-1]
Threads.@threads for i = 1:xpts
tid = threadid()
@inbounds for j = 1:ypts
traj!(tr[tid], dds_rule, (par1range[i], par2range[j]), u0, NIter; Ttr=NTr) # find the solution
sol_last[:, ypts*(i-1)+j] .= first.(tr[tid]) # store x-values
end
end
a = l_bipar*(nyblock*(ii-1)+jj-1)
periods[a+1:a+l_bipar] .= seqper_new.(eachcol(sol_last), tol=0.001) # seqper_new calculates periodicity
@views meshh!(par1mesh[a+1:a+l_bipar], par2mesh[a+1:a+l_bipar], par1range, par2range) # meshgrid of parameter values
next!(prog)
end
end
par1mesh, par2mesh, periods
end
function isoperiodic_test_rev2(init, par_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
u0 = SVector{2}(init)
p1_st = par_area[1] # beginning of first parameter
p1_nd = par_area[2] # end of first parameter
p2_st = par_area[3] # beginning of second parameter
p2_nd = par_area[4] # end of second parameter
p1_block = range(p1_st, p1_nd, length = nxblock + 1) # divide par_area in blocks
p2_block = range(p2_st, p2_nd, length = nyblock + 1) # divide par_area in blocks
l_bipar = xpts * ypts # total number of pts in each block
total_blocks = nxblock * nyblock # total number of blocks
periods = Vector{Int}(undef, l_bipar * total_blocks)
par1mesh = Vector{Float64}(undef, l_bipar * total_blocks)
par2mesh = Vector{Float64}(undef, l_bipar * total_blocks)
tr1 = [Vector{Float64}(undef, NIter + 1) for _ in 1:nthreads()]
tr2 = [Vector{Float64}(undef, NIter + 1) for _ in 1:nthreads()]
prog = Progress(total_blocks)
@inbounds for ii in 1:nxblock
par1range = range(p1_block[ii], p1_block[ii+1], length=xpts+1)[1:end-1]
for jj in 1:nyblock
par2range = range(p2_block[jj], p2_block[jj+1], length=ypts+1)[1:end-1]
a = l_bipar*(nyblock*(ii-1)+jj-1)
Threads.@threads for i = 1:xpts
tid = threadid()
@inbounds for j = 1:ypts
traj2d!(tr1[tid], tr2[tid],
dds_rule, (par1range[i], par2range[j]), u0, NIter; Ttr=NTr) # find the solution
periods[a + ypts*(i-1)+j] = seqper_new(tr1[tid], tol=0.001)
end
end
@views meshh!(par1mesh[a+1:a+l_bipar], par2mesh[a+1:a+l_bipar], par1range, par2range) # meshgrid of parameter values
next!(prog)
end
end
par1mesh, par2mesh, periods
end
## Test
parameter_area = [1.0 5.0 2.0 5.0]
nxblock = 1
nyblock = 1
xpts = 100
ypts = 100
NIter = 2000
NTr = 50000
init = [0.4, 0.5]
ds = dds_constructor(init)
##
println("********** Minor correction of the original:")
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 1:")
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 2:")
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Confirmation of equivalence:")
@show result_org .== result_rev1;
@show result_rev1 .== result_rev2;
## Test
parameter_area = [1.0 5.0 2.0 5.0]
nxblock = 2
nyblock = 2
xpts = 100
ypts = 100
NIter = 2000
NTr = 50000
init = [0.4, 0.5]
ds = dds_constructor(init)
##
println("********** Minor correction of the original:")
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 1:")
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 2:")
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Confirmation of equivalence:")
@show result_org .== result_rev1;
@show result_rev1 .== result_rev2;
## Test
parameter_area = [1.0 5.0 2.0 5.0]
nxblock = 10
nyblock = 10
xpts = 10
ypts = 10
NIter = 2000
NTr = 50000
init = [0.4, 0.5]
ds = dds_constructor(init)
##
println("********** Minor correction of the original:")
result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_org = @time isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 1:")
result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_rev1 = @time isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Revised 2:")
result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
#result_rev2 = @time isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
println("********** Confirmation of equivalence:")
@show result_org .== result_rev1;
@show result_rev1 .== result_rev2;
using Plots
xoxp1(x) = x/(x+1)
r, k, period = @time isoperiodic_test_rev2([0.4, 0.5], [1.0 5.0 2.0 5.0], 2000, 1, 1, 50000, 80, 60)
r = reshape(r, 60, 80)
k = reshape(k, 60, 80)
period = reshape(period, 60, 80)
heatmap(vec(r[1,:]), vec(k[:,1]), xoxp1.(period); xlabel="r", ylabel="k", title="period/(period + 1)")
@btime isoperiodic_test_rev2($([0.4, 0.5]), $([1.0 5.0 2.0 5.0]), 2000, 1, 1, 50000, 80, 60)
@code_warntype isoperiodic_test_org(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
@code_warntype isoperiodic_test_rev1(ds, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
@code_warntype isoperiodic_test_rev2(init, parameter_area, NIter, nxblock, nyblock, NTr, xpts, ypts)
###Output
Variables
  #self#::Core.Const(isoperiodic_test_rev2)
  init::Vector{Float64}
  par_area::Matrix{Float64}
  NIter::Int64
  nxblock::Int64
  nyblock::Int64
  NTr::Int64
  xpts::Int64
  ypts::Int64
  @_10::Union{Nothing, Tuple{Int64, Int64}}
  val::Nothing
  #21::var"#21#23"{Int64}
  #20::var"#20#22"{Int64}
  prog::Progress
  tr2::Vector{Vector{Float64}}
  tr1::Vector{Vector{Float64}}
  par2mesh::Vector{Float64}
  par1mesh::Vector{Float64}
  periods::Vector{Int64}
  total_blocks::Int64
  l_bipar::Int64
  p2_block::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}
  p1_block::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}
  p2_nd::Float64
  p2_st::Float64
  p1_nd::Float64
  p1_st::Float64
  u0::SVector{2, Float64}
  @_29::Union{Nothing, Tuple{Int64, Int64}}
  ii::Int64
  par1range::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}
  threadsfor_fun::var"#116#threadsfor_fun#24"{Int64, Int64, Int64, Vector{Vector{Float64}}, Vector{Vector{Float64}}, Vector{Int64}, SVector{2, Float64}, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, Int64, StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}, UnitRange{Int64}}
  jj::Int64
  a::Int64
  par2range::StepRangeLen{Float64, Base.TwicePrecision{Float64}, Base.TwicePrecision{Float64}}
  range::UnitRange{Int64}
  @_37::Bool

Body::Tuple{Vector{Float64}, Vector{Float64}, Vector{Int64}}
⋮
  %170 = Core.tuple(par1mesh, par2mesh, periods)::Tuple{Vector{Float64}, Vector{Float64}, Vector{Int64}}
  return %170
|
Part 3 - Analyzing the Data.ipynb | ###Markdown
Part 3 - Analyzing the Data Kiran TIRUMALE LAKSHMANA RAO Importing the Necessary Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns #install seaborn using 'pip install seaborn'
%matplotlib inline
plt.style.use('seaborn-whitegrid')
###Output
_____no_output_____
###Markdown
Loading the Cleaned Dataset
###Code
paris = pd.read_csv(r"C:\Users\ktirumalelakshmana\Desktop\Paris_Airbnb_Final.csv")
paris.head()
# Checking the data types etc
paris.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 280 entries, 0 to 279
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Hotel_Type_with_Location 280 non-null object
1 Hote_Type 280 non-null object
2 Location_in_Paris 280 non-null object
3 Hotel_Name 280 non-null object
4 Ratings 280 non-null float64
5 No_of_Reviews 280 non-null int64
6 Previous_Price 280 non-null int64
7 Discounted_Price 280 non-null int64
8 Discount 280 non-null int64
9 No_of_Guests 280 non-null int64
10 No_of_Bedrooms 280 non-null int64
11 Type_Bed_Studio 280 non-null object
12 No_of_Beds 280 non-null int64
dtypes: float64(1), int64(7), object(5)
memory usage: 28.6+ KB
###Markdown
Analyzing the Data The Airbnb data for the Paris location was scraped for the dates `01-Dec-2020` to `31-Dec-2020`, filtered to stays for 2 or more people. Different locations where accommodations are available
###Code
Locations = paris['Location_in_Paris'].unique()
Locations.sort()
Locations
# Total number of locations in Paris
len(Locations)
###Output
_____no_output_____
###Markdown
Visualizations Ratings and Discount
###Code
# Defining the x and y axis variables
x = paris['Discount']
y = paris['Ratings']
r_colors = paris['No_of_Reviews']
# Plotting using matplotlib
plt.scatter(x, y, alpha=0.2, c=r_colors, cmap='viridis')
# Adding Title
plt.title('Discount vs Ratings')
# Adding X-Label
plt.xlabel('Discount')
# Adding Y-Label
plt.ylabel('Ratings')
# Displaying the Colorbar
plt.colorbar();
###Output
_____no_output_____
###Markdown
Price and Ratings
###Code
# Defining the x and y axis variables
x = paris['Previous_Price']
y = paris['Ratings']
r_colors = paris['Discount']
# Plotting using matplotlib
plt.scatter(x, y, alpha=0.2, c=r_colors, cmap='viridis')
# Adding Title
plt.title('Price vs Ratings')
# Adding X-Label
plt.xlabel('Previous Price')
# Adding Y-Label
plt.ylabel('Ratings')
# Displaying the Colorbar
plt.colorbar();
###Output
_____no_output_____
###Markdown
Ratings and No. of Reviews
###Code
# Defining the x and y axis variables
x = paris['No_of_Reviews']
y = paris['Ratings']
# Plotting using matplotlib
plt.scatter(x, y, alpha=0.2, cmap='viridis')
# Adding Title
plt.title('Ratings vs No. of Reviews')
# Adding X-Label
plt.xlabel('No. of Reviews')
# Adding Y-Label
plt.ylabel('Ratings');
###Output
_____no_output_____
###Markdown
Discount and Price Correlation between `Discounted_Price` and `Previous_Price`
###Code
#Scatter plot of Discounted_Price vs Previous_Price with a fitted trend line, using matplotlib
x = paris['Discounted_Price']
y = paris['Previous_Price']
plt.scatter(x, y, alpha=0.2, cmap='viridis')
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x+b)
plt.title('Discount vs Previous Price - Trend')
plt.xlabel('Discounted Price')
plt.ylabel('Previous Price')
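# The heading above mentions correlation; as a quick numeric check (added for illustration),
# compute the Pearson correlation between the discounted and previous prices.
print(paris['Discounted_Price'].corr(paris['Previous_Price']))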
#Plotting using seaborn scatterplot and hue
sns.scatterplot(x = 'Discounted_Price', y = 'Previous_Price',data=paris,
hue='Type_Bed_Studio', alpha=0.5)
#Adding Trendline
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x+b)
# Set title
plt.title('Discounted Price vs Previous Price - Trend')
# Set x-axis label
plt.xlabel('Discounted Price')
# Set y-axis label
plt.ylabel('Previous Price')
plt.show()
###Output
_____no_output_____
###Markdown
Correlation between `Discount` and `Previous_Price`
###Code
#Plotting using seaborn scatterplot and hue
sns.scatterplot(x = 'Discount', y = 'Previous_Price',data=paris, hue='Type_Bed_Studio', alpha=0.5)
#Adding Trendline (recompute x and y so the fit matches this plot's axes)
x = paris['Discount']
y = paris['Previous_Price']
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x+b)
# Set title
plt.title('Discount vs Previous Price - Trend')
# Set x-axis label
plt.xlabel('Discount')
# Set y-axis label
plt.ylabel('Previous Price')
plt.show()
###Output
_____no_output_____
###Markdown
Discount and Price Distributions
###Code
# Plot histogram
sns.distplot(paris['Discount'],kde = False, color='g')
# Set title
plt.title('Distribution of Discount')
plt.show()
# Plot histogram distribution
sns.distplot(paris['Previous_Price'],kde = False)
# Set title
plt.title('Previous Price Distribution')
plt.xlabel('Previous Price')
plt.show()
sns.distplot(paris['Discounted_Price'],kde = False, color='y')
# Set title
plt.title('Discounted Price')
plt.xlabel('Discounted Price Distribution')
plt.show()
# Overlaying the above 2 histogram distribution
sns.distplot(paris['Previous_Price'],kde = False)
sns.distplot(paris['Discounted_Price'],kde = False, color='y')
# Set title
plt.title('Previous Price & Discounted Price Distributions')
plt.xlabel('Price')
plt.show()
###Output
_____no_output_____
###Markdown
Location wise information
###Code
location = paris.groupby('Location_in_Paris').mean()
location
# Location - lineplot for Discounted Price
sns.set_style("whitegrid", {'axes.grid' : False})
g1 = sns.lineplot(x = location.index.values, y = 'Discounted_Price', data = location,
palette = 'hls',
alpha = 0.5
)
g1.set(xticklabels=[])
g1.set(title='Discounted Price over Location')
g1.set(xlabel='Location')
g1.set(ylabel='Discounted Price')
# Location - lineplot for Ratings
sns.set_style("whitegrid", {'axes.grid' : False})
g1 = sns.lineplot(x = location.index.values, y = 'Ratings', data = location,
palette = 'hls',
alpha = 0.5,
color = 'g'
)
g1.set(xticklabels=[])
g1.set(title='Ratings over Location')
g1.set(xlabel='Location')
g1.set(ylabel='Ratings')
# Location - Discount Price
sns.set_style("whitegrid", {'axes.grid' : False})
g1 = sns.lineplot(x = location.index.values, y = 'Previous_Price', data = location,
palette = 'hls',
alpha = 0.5,
color = 'g')
g1.set(xticklabels=[])
g1.set(title='Previous Price over Location')
g1.set(xlabel='Location')
g1.set(ylabel='Previous Price')
# Location - overlaying Previous Price and Discount Price
sns.set_style("whitegrid", {'axes.grid' : False})
g1 = sns.lineplot(x = location.index.values, y = 'Previous_Price', data = location,
palette = 'hls',
alpha = 0.5,
color = 'g')
g1 = sns.lineplot(x = location.index.values, y = 'Discounted_Price', data = location,
palette = 'hls',
alpha = 0.5
)
g1.set(xticklabels=[])
g1.set(title='Discounted Price vs Previous Price')
g1.set(xlabel='Location')
g1.set(ylabel='Discounted vs Previous Price')
###Output
_____no_output_____
###Markdown
It would have been useful to plot the Ratings and Discount on a map of Paris; however, the dataset does not include geographic coordinates, so this is left for future work. `Type_Bed_Studio` wise Information
###Code
Bed_Studio = paris[['Type_Bed_Studio', 'Ratings', 'No_of_Reviews', 'Previous_Price', 'Discount', 'No_of_Guests']].groupby('Type_Bed_Studio').mean()
Bed_Studio
###Output
_____no_output_____
###Markdown
Studio vs Bedroom - Discount
###Code
# Plotting a barplot
sns.set_style("whitegrid", {'axes.grid' : False})
sns.barplot(x = Bed_Studio.index.values, y = 'Discount', data = Bed_Studio,
palette = 'hls',
order = ['Studio', 'Bedroom'],
alpha = 0.5
)
plt.title('Studio vs Bedroom - Discount')
plt.plot()
###Output
_____no_output_____
###Markdown
Studio vs Bedroom - Ratings
###Code
# Plotting a barplot
sns.set_style("whitegrid", {'axes.grid' : False})
sns.barplot(x = Bed_Studio.index.values, y = 'Ratings', data = Bed_Studio,
palette = 'hls',
order = ['Studio', 'Bedroom'],
alpha = 0.5
)
plt.title('Studio vs Bedroom - Ratings')
plt.plot()
###Output
_____no_output_____
###Markdown
Studio vs Bedroom - Previous Price
###Code
# Plotting a barplot
sns.set_style("whitegrid", {'axes.grid' : False})
sns.barplot(x = Bed_Studio.index.values, y = 'Previous_Price', data = Bed_Studio,
palette = 'hls',
order = ['Studio', 'Bedroom'],
alpha = 0.5
)
plt.title('Studio vs Bedroom - Price')
plt.plot()
###Output
_____no_output_____
###Markdown
`No_of_Guests` wise Information
###Code
guests = paris.groupby('No_of_Guests').mean()
guests
###Output
_____no_output_____
###Markdown
Guests vs Ratings
###Code
# Plotting a barplot for guests vs ratings
sns.set_style("whitegrid", {'axes.grid' : False})
sns.barplot(x = guests.index.values, y = 'Ratings', data = guests,
palette = 'PuBu',
alpha = 0.5
)
plt.title('Guests vs Ratings')
plt.plot()
###Output
_____no_output_____
###Markdown
Guests vs Previous Price
###Code
# Plotting a lineplot for guests vs previous price
sns.set_style("whitegrid", {'axes.grid' : False})
sns.lineplot(x = guests.index.values, y = 'Previous_Price', data = guests,
palette = 'PuBu',
alpha = 0.5
)
plt.title('Guests vs Previous Price')
plt.plot()
###Output
_____no_output_____
###Markdown
Guests vs Discount
###Code
# Plotting a lineplot for guests vs discount
sns.set_style("whitegrid", {'axes.grid' : False})
sns.lineplot(x = guests.index.values, y = 'Discount', data = guests,
color = 'g',
alpha = 0.5
)
plt.title('Guests vs Discount')
plt.plot()
###Output
_____no_output_____
###Markdown
Guests vs Discounted Price
###Code
# Plotting a lineplot for guests vs discounted price
sns.set_style("whitegrid", {'axes.grid' : False})
sns.lineplot(x = guests.index.values, y = 'Discounted_Price', data = guests,
color = 'r',
alpha = 0.5
)
plt.title('Guests vs Discounted Price')
plt.plot()
###Output
_____no_output_____
###Markdown
Overlaying above 3 graphs
###Code
# Plotting a lineplot of above graphs - overlaying them together
sns.set_style("whitegrid", {'axes.grid' : False})
sns.lineplot(x = guests.index.values, y = 'Previous_Price', data = guests,
palette = 'PuBu',
alpha = 0.5
)
sns.lineplot(x = guests.index.values, y = 'Discount', data = guests,
color = 'g',
alpha = 0.5
)
sns.lineplot(x = guests.index.values, y = 'Discounted_Price', data = guests,
color = 'r',
alpha = 0.5
)
plt.title('Guests vs Previous Price, Discount & Discounted_Price')
plt.plot()
###Output
_____no_output_____ |
assignment_7/.ipynb_checkpoints/Nguyen_Bank_Marketing_kFold-checkpoint.ipynb | ###Markdown
I. DATA Processinghttps://www.kaggle.com/henriqueyamahata/bank-marketing
###Code
# Imports required by the functions and cells in this notebook
import time
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics, tree
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import precision_recall_curve
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from xgboost import XGBClassifier

def remove_duplicated_row(df):
df = df.drop(df[df.duplicated()].index).reset_index(drop=True)
return(df)
def remove_features(df,col_lst):
for col in col_lst:
df.pop(col)
return(df)
def replace_missing_by_value(df,column,replaced_value,missing_value='unknown'):
df[column] = df[column].apply(lambda val: replaced_value if val == missing_value else val)
return df
def replace_outlier_by_quantile(df, column, quantile_thresh = 0.95, replaced_value = None):
thresh_value = df[column].quantile(quantile_thresh)
if (replaced_value == None):
replaced_value = thresh_value
df[column] = df[column].apply(lambda val: replaced_value if val > thresh_value else val)
return df
def replace_outlier_by_value(df, column, value_thresh, replaced_value = None):
if (replaced_value == None):
replaced_value = value_thresh
df[column] = df[column].apply(lambda val: replaced_value if val > value_thresh else val)
return df
def replace_missing_by_mode(df,column,missing_value='unknown'):
replaced_value = df[column].mode().values.tolist()[0]
df[column] = df[column].apply(lambda val: replaced_value if val == missing_value else val)
return df
def replace_missing_by_median(df,column,missing_value='unknown'):
    replaced_value = df[column].median()  # Series.median() already returns a scalar
df[column] = df[column].apply(lambda val: replaced_value if val == missing_value else val)
return df
def transform_pdays(val):
transform_dict = {999:'not_previously_contacted',7: 'over_a_week',0:'within_a_week'}
for key in transform_dict.keys():
if (val >= key):
return transform_dict[key]
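# e.g. transform_pdays(999) -> 'not_previously_contacted', transform_pdays(3) -> 'within_a_week' (illustrative values)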
def eval_class(true, predicted):
acc = metrics.accuracy_score(true, predicted)
precision = metrics.precision_score(true, predicted)
recall = metrics.recall_score(true, predicted)
f1 = metrics.f1_score(true, predicted)
log_loss = metrics.log_loss(true, predicted)
auc = metrics.roc_auc_score(true, predicted)
return acc, precision, recall, f1, log_loss, auc
def create_evaluation_df(model_name, y_train,y_train_pred, y_test, y_test_pred):
eval_clm_metrics = ['Accuracy', 'Precision', 'Recall', 'F1', 'Log_loss','AUC']
eval_clm_train = [m + '_train' for m in eval_clm_metrics]
eval_clm_test = [m + '_test' for m in eval_clm_metrics]
dis_clm = ['Model','Accuracy_train'] + eval_clm_test + ['diff_Acc_train_test']
dis_clm_1 = ['Model','Accuracy_train','Accuracy_test','Precision_test','Recall_test','F1_test']
res_clm = pd.DataFrame(data=[[model_name,*eval_class(y_train,y_train_pred),
*eval_class(y_test, y_test_pred)]],
columns=['Model'] + eval_clm_train + eval_clm_test)
res_clm['diff_Acc_train_test'] = res_clm.apply(lambda x: (x.Accuracy_test - x.Accuracy_train)/x.Accuracy_train, axis=1)
return(res_clm[dis_clm_1])
def init_evaluation_df():
eval_clm_metrics = ['Accuracy', 'Precision', 'Recall', 'F1', 'Log_loss','AUC']
eval_clm_train = [m + '_train' for m in eval_clm_metrics]
eval_clm_test = [m + '_test' for m in eval_clm_metrics]
dis_clm = ['Model','Accuracy_train'] + eval_clm_test + ['diff_Acc_train_test']
dis_clm_1 = ['Model','Accuracy_train','Accuracy_test','Precision_test','Recall_test','F1_test']
res_clm = pd.DataFrame( columns=['Model'] + eval_clm_train + eval_clm_test + ['diff_Acc_train_test'])
return(res_clm[dis_clm_1])
def data_processing_pipeline(df):
# remove duplicated rows
df = remove_duplicated_row(df)
# remove duration and nr.employed
# remove_cols =['duration', 'nr.employed']
# df = remove_features(df,remove_cols)
# edu_unknown = 'unknown'
column = 'education'
replaced_value = df[column].mode().values.tolist()[0]
df = replace_missing_by_value(df,column,replaced_value)
# housing_unknown = 'unknown'
column = 'housing'
replaced_value = 'yes' #df[column].mode().values.tolist()[0]
df = replace_missing_by_value(df,column,replaced_value)
# loan_unknown = 'unknown'
column = 'loan'
replaced_value = 'no' # df[column].mode().values.tolist()[0]
df = replace_missing_by_value(df,column,replaced_value)
# marital_unknown = 'unknown'
column = 'marital'
replaced_value = 'single' # df[column].mode().values.tolist()[0]
df = replace_missing_by_value(df,column,replaced_value)
# job_unknown = 'unknown'
column = 'job'
replaced_value = 'student' # df[column].mode().values.tolist()[0]
df = replace_missing_by_value(df,column,replaced_value)
## OUTlier
# age
value_thresh = 65
column = 'age'
df = replace_outlier_by_value(df,column,value_thresh)
# duration
column = 'duration'
df = replace_outlier_by_quantile(df, column)# replace by quantile_95
# campain
value_thresh = 6
column = 'campaign'
df = replace_outlier_by_value(df,column,value_thresh)
#previous
remove_thresh = float(0.95)
column = 'previous'
df = replace_outlier_by_quantile(df,column)
#cons.conf.idx'
remove_thresh = float(0.95)
column = 'cons.conf.idx'
df = replace_outlier_by_quantile(df,column)
    ### RE-CATEGORIZE VARIABLES
# pdays
column = 'pdays'
df[column] = df[column].map(transform_pdays)
return df
def label_encode_pipeline(df, cat_col_lst):
labelencoder = LabelEncoder()
for column in cat_col_lst:
df[column] = labelencoder.fit_transform(df[column])
return(df)
def run_model(name,model, X_train, y_train, X_test, y_test):
# model_eval_df : evaluation dataframe of model
    # y_test_pred_proba: np.array - used to plot the ROC curve
    train_model_time = 0 # measures how long the model takes to train
model_eval_df = pd.DataFrame() # evaluation dataframe
start_time = time.time()
model.fit(X_train,y_train)
end_time = time.time()
y_test_pred = model.predict(X_test)
    y_test_pred_proba = model.predict_proba(X_test)[:,1] # take the predicted probability of class 1
y_train_pred = model.predict(X_train)
model_eval_df = create_evaluation_df(name, y_train,y_train_pred, y_test, y_test_pred)
train_model_time = end_time - start_time
return(model_eval_df,y_test_pred_proba)
def run_model_lst(name_lst,model_lst, X_train, y_train, X_test, y_test):
evalutation_df = init_evaluation_df() # evaluation dataframe
y_test_proba_df = pd.DataFrame() # y_test_proba for ROC curve
for model,name in zip(model_lst,name_lst):
model_eval_df,y_test_pred_proba = run_model(name,model, X_train, y_train, X_test, y_test)
evalutation_df = evalutation_df.append(model_eval_df, ignore_index = True)
y_test_proba_df[name] = y_test_pred_proba
return(evalutation_df,y_test_proba_df)
### ROC CURVE
def visualize_ROC_curves(y_test,y_pred_proba_df):
plt.figure(figsize = (15,6))
plt.plot([0, 1], [0, 1], 'k--')
# Generate ROC curve values: fpr, tpr, thresholds
for col in y_pred_proba_df.columns:
fpr1, tpr1, thresholds1 = metrics.roc_curve(y_test, y_pred_proba_df[col])
plt.plot(fpr1, tpr1)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve: Successful Client Classifiers')
plt.legend(['Base line']+ y_pred_proba_df.columns.tolist(), loc='lower right')
plt.show()
### Recall - Precision CURVE
def visualize_RR_curves(y_test,y_pred_proba_df):
plt.figure(figsize = (15,6))
# plt.plot([0, 1], [0, 1], 'k--')
for col in y_pred_proba_df.columns:
precision, recall, thresholds = precision_recall_curve(y_test, y_pred_proba_df[col])
plt.plot(recall,precision)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall: Successful Client Classifier')
    plt.legend(y_pred_proba_df.columns.tolist(), loc='lower right')
plt.show()
### Calculate ROI
def calculate_roi(call_cnt, sale_cnt,cost_per_call, roi_per_success):
return roi_per_success * sale_cnt - cost_per_call * call_cnt
def get_real_roi(y_test, cost_per_call = 10, roi_per_success = 20):
sale_cnt = (y_test == 1).sum()
call_cnt = len(y_test)
real_roi = calculate_roi(call_cnt, sale_cnt, cost_per_call, roi_per_success)
return real_roi
def get_pred_roi(y_test, y_test_pred,cost_per_call = 10, roi_per_success = 20):
sale_cnt = ((y_test == 1) & (y_test_pred == 1)).sum()
call_cnt = sum((y_test_pred == 1))
pred_roi = calculate_roi(call_cnt, sale_cnt, cost_per_call, roi_per_success)
return pred_roi
def under_resample_data(data, target = 'y'):
X = data.iloc[:, data.columns != target]
y = data.iloc[:, data.columns == target]
# Number of data points in the minority class
number_records_yes = len(data[data.y == 1])
yes_indices = np.array(data[data.y == 1].index)
# Picking the indices of the normal classes
no_indices = data[data.y == 0].index
# Out of the indices we picked, randomly select "x" number (number_records_fraud)
random_no_indices = np.random.choice(no_indices, number_records_yes, replace = False)
random_no_indices = np.array(random_no_indices)
# Appending the 2 indices
under_sample_indices = np.concatenate([yes_indices,random_no_indices])
# Under sample dataset
under_sample_data = data.iloc[under_sample_indices,:]
return( under_sample_data)
###Output
_____no_output_____
###Markdown
L O A D data
###Code
#### L O A D Data
file_path = "data/bank-additional-full.csv"
marketing_df = pd.read_csv(file_path,sep = ";")
###Output
_____no_output_____
###Markdown
P R O C E S S I N G data
###Code
process_mkt_df = marketing_df.copy()
test_size = 0.2
target = 'y'
## Processing data
process_mkt_df = data_processing_pipeline(process_mkt_df)
cat_cols = process_mkt_df.dtypes[process_mkt_df.dtypes == 'object'].index
num_cols = process_mkt_df.dtypes[process_mkt_df.dtypes != 'object'].index
## label encoding
process_mkt_df = label_encode_pipeline(process_mkt_df, cat_cols)
## list of models
models = [LogisticRegression(max_iter = 300),
# GaussianNB(),
DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0),
# RandomForestClassifier(n_estimators=1000, max_depth=3),
GradientBoostingClassifier(n_estimators=1000, learning_rate=0.003),
XGBClassifier(n_estimators=1000, learning_rate=0.003, use_label_encoder = False)
]
names = [ 'Logistic Regressor',
# 'Naive Bayes',
'Decision Tree Classifier',
# 'Random Forest Classifier',
'Gradient Boost Classifier',
'XGBoost Classifier'
]
###Output
_____no_output_____
###Markdown
R U N with the whole data set
###Code
## split train set and test_set
X_train, X_test, y_train, y_test = train_test_split(process_mkt_df.drop('y',axis=1), process_mkt_df['y'],
test_size=test_size, random_state = 101)
## standardize data
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
## init data frame of evalutation and y_pred_proba
evalutation_df = pd.DataFrame() # evaluation dataframe
y_test_pred_proba_df = pd.DataFrame() # y_test_proba for ROC curve
## run list of Models to choose the optimal model
evalutation_df,y_test_pred_proba_df = run_model_lst(names, models, X_train, y_train, X_test, y_test)
# evalutation_df
# ## write the scaler to a pickle file
# with open("model/pkl_scaler.pkl","wb") as f:
# pickle.dump(scaler,f)
###Output
_____no_output_____
###Markdown
U N D E R S A M P L E D Dataset
###Code
# main data set for training
under_mkt_df = under_resample_data(process_mkt_df, target = 'y')
# Showing ratio
print("Percentage of no clients: ", len(under_mkt_df[under_mkt_df[target] == 0])/len(under_mkt_df))
print("Percentage of yes clients: ", len(under_mkt_df[under_mkt_df[target] == 1])/len(under_mkt_df))
print("Total number of clients in resampled data: ", len(under_mkt_df))
###Output
Percentage of no clients: 0.5
Percentage of yes clients: 0.5
Total number of clients in resampled data: 9278
###Markdown
R U N with the under resampled data set
###Code
## Split train and test undersampled dataset
X_under_train, X_under_test, y_under_train, y_under_test = train_test_split(under_mkt_df.drop(target,axis=1),under_mkt_df[target],
test_size=test_size, random_state = 101)
print("")
print("Number transactions train dataset: ", len(X_under_train))
print("Number transactions test dataset: ", len(X_under_test))
print("Total number of transactions: ", len(X_under_train)+len(X_under_test))
## Standardize data
X_under_train = scaler.fit_transform(X_under_train)
X_under_test = scaler.transform(X_under_test)
## Run model list
suffix = ' with under resampled data'
under_names = [name + suffix for name in names ]
## init data frame of evalutation and y_pred_proba
under_evalutation_df = pd.DataFrame() # evaluation dataframe
y_under_test_pred_proba_df = pd.DataFrame() # y_test_proba for ROC curve
under_evalutation_df,y_under_test_pred_proba_df= run_model_lst(under_names, models, X_under_train, y_under_train
, X_under_test, y_under_test)
# under_evalutation_df
###Output
Number transactions train dataset: 7422
Number transactions test dataset: 1856
Total number of transactions: 9278
[12:01:25] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
###Markdown
C O M P A R E when training on the whole dataset and the under- resampled dataset
###Code
evaluations = pd.DataFrame()
evaluations = evalutation_df.append(under_evalutation_df, ignore_index = True)
evaluations
###Output
_____no_output_____
###Markdown
ROC curve in case of undersampled dataset Because we need the model's rate of missed successful clients to be low, we will train on the undersampled dataset even though the accuracy score obtained on the whole dataset is higher: the undersampled case gives better Precision, Recall and F1 scores on the successful label, and its accuracy scores on both the train and test sets are quite promising (>80%).* The model's results could be improved further with k-fold validation; this will be addressed in a later phase.
###Code
visualize_ROC_curves(y_under_test,y_under_test_pred_proba_df)
###Output
_____no_output_____
###Markdown
K F O L D validation to avoid overfitting and improve the models' effectiveness on the undersampled data (Logistic Regression, Decision Tree, Gradient Boosting, XGBoost)
###Code
# # scaler on the whole dataset
# scaler = pickle.load(open("model/pkl_scaler.pkl", 'rb'))
## init kfold
target = 'y'
num_fold = 5
kfold = KFold(n_splits=num_fold, shuffle=True)
# fold_scaler = StandardScaler()
X = scaler.transform(under_mkt_df.drop('y',axis=1))
y = under_mkt_df['y']
fold_idx = 1
fold_evalutation_df = pd.DataFrame()
fold_prob_df = pd.DataFrame()
for train_ids, val_ids in kfold.split(X, y):
X_train = X[train_ids]
X_val = X[val_ids]
y_train = y.iloc[train_ids].values
y_val = y.iloc[val_ids].values
eval_df, y_prob_df= run_model_lst(names,models, X_train, y_train, X_val, y_val)
eval_df['k_th_fold'] = fold_idx
fold_evalutation_df= fold_evalutation_df.append(eval_df, ignore_index = True)
    # new_names = [col+'_'+str(fold_idx) for col in y_prob_df.columns.tolist()] # rename the columns of y_prob_df
# y_prob_df.columns = new_names
# fold_prob_df = pd.concat([fold_prob_df, y_prob_df],axis=1)
    # Move on to the next fold
fold_idx = fold_idx + 1
###Output
[12:02:56] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[12:03:16] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[12:03:35] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[12:03:56] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
[12:04:18] WARNING: C:/Users/Administrator/workspace/xgboost-win64_release_1.4.0/src/learner.cc:1095: Starting in XGBoost 1.3.0, the default evaluation metric used with the objective 'binary:logistic' was changed from 'error' to 'logloss'. Explicitly set eval_metric if you'd like to restore the old behavior.
###Markdown
E V A L U A T E to choose the optimal model with the best average metrics
###Code
# fold_evalutation_df.to_csv('fold_evalutation_df.csv')
# fold_evalutation_df
t = fold_evalutation_df.groupby(['Model']).mean().iloc[:,:-1].sort_values(['Recall_test','Accuracy_test'], ascending=False)
t
# Evaluation metrics for 4 models
###Output
_____no_output_____
###Markdown
O P T I M A L Model
###Code
## EVALUATION METRICS of the optimal model = mean(EVALUATION METRICS) of k times folding the under_resampled dataset
# The optimal model:XGBoost Classifier
optimal_evaluation_df = t.head(1)
optimal_name =t.head(1).index.values[0]
optimal_model = models[names.index(optimal_name)]
print('The optimal model is '+str(optimal_name))
optimal_evaluation_df
# optimal_evaluation_df.to_csv("model/model_evaluation_df.csv")
###Output
The optimal model is XGBoost Classifier
###Markdown
O P T I M A L FIT : train on the whole undersampled dataset to obtain the final model
###Code
## Train the optimal model on the whole under resampled dataset
target = 'y'
X_train = scaler.transform(under_mkt_df.drop(target, axis = 1))
y_train = under_mkt_df[target]
optimal_model.fit(X_train,y_train)
# ## write the fitted model to a pickle file
# with open("model/pkl_model.pkl","wb") as f:
#     pickle.dump(optimal_model,f)
###Output
_____no_output_____
###Markdown
O P T I M A L Feature Importances
###Code
## Feature importances of the optimal model
features = [i for i in under_mkt_df.columns.values.tolist() if i!= target]
opt_model_importances = pd.Series(data = optimal_model.feature_importances_, index = features, name = optimal_name)
plt.figure(figsize = (14,8))
sns.barplot(x = opt_model_importances.sort_values(ascending = False).values , y = opt_model_importances.sort_values(ascending = False).index)
plt.title('The feature importances of model')
plt.show()
## Visualize feature importances of the optimal model
###Output
_____no_output_____
###Markdown
O P T I M A L ROI on the whole dataset
###Code
### Real R O I on the whole dataset
## ROI = roi_per_success * # of sales - cost_per_call * # of calls
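# Worked example with hypothetical numbers: 100 calls yielding 30 sales gives ROI = 20*30 - 10*100 = -400$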
cost_per_call = 10
roi_per_success = 20
target = 'y'
# X_opt_test = process_mkt_df.drop(targer,axis = 1)
X_opt_test = scaler.transform(process_mkt_df.drop(target,axis = 1))
y_opt_test = process_mkt_df[target]
y_pred = optimal_model.predict(X_opt_test)
number_client = len(y_opt_test)
real_roi = get_real_roi(y_opt_test,cost_per_call,roi_per_success)
pred_roi = get_pred_roi(y_opt_test, y_pred)
print('The assumed cost_per_call is {}$ and roi_per_success is:{}$'.format(cost_per_call,roi_per_success))
print('The number of predicted clients is {}'.format(number_client))
print('The real revenue: '+str(real_roi)+"$")
print('The predicted revenue: '+str(pred_roi)+"$")
print('The PROFIT: '+str(pred_roi - real_roi)+'$')
## visualize roi:
rois = [real_roi, pred_roi]
type_roi = ['Real profit', 'Predicted profit']
sns.barplot(y = rois , x = type_roi)
plt.title('Profit predicting on about {} clients'.format(number_client))
plt.show()
###Output
_____no_output_____
###Markdown
R E S E A R C H Decision tree
###Code
## init the research model
tree_model = DecisionTreeClassifier(criterion='gini', max_depth=3, random_state=0)
tree_name = 'Decision Tree Classifier'
X_tree_train, X_tree_test, y_tree_train, y_tree_test = train_test_split(under_mkt_df.drop(target,axis=1),under_mkt_df[target],
test_size=test_size, random_state = 101)
## Standardize data
X_tree_train = scaler.transform(X_tree_train)
X_tree_test = scaler.transform(X_tree_test)
tree_model.fit(X_tree_train,y_tree_train)
y_tree_test_pred = tree_model.predict(X_tree_test)
# # write the decision tree model to a pickle file
# with open("model/pkl_decisionT_model.pkl","wb") as f:
#     pickle.dump(tree_model,f)
## Feature importances of the decision tree
features = [i for i in process_mkt_df.columns.values.tolist() if i!= target]
tree_model_importances = pd.Series(data = tree_model.feature_importances_, index = features, name = tree_name)
tree_model_importances = tree_model_importances[tree_model_importances > 0]
## The non-zero feature importances of the tree model
## Visualize feature importances of the tree model
plt.figure(figsize = (8,5))
sns.barplot(x = tree_model_importances.sort_values(ascending = False).values , y = tree_model_importances.sort_values(ascending = False).index)
plt.title('The feature importances of model')
plt.show()
###Output
_____no_output_____
###Markdown
V I S U A L I Z E Decision Tree
###Code
# from sklearn import tree
plt.figure(figsize=(20,15))
tree.plot_tree(tree_model,feature_names = features,rounded=True, filled = True);
plt.title("Decision Tree on Banking Tele-marketing dataset");
###Output
_____no_output_____ |
notebooks/Day3_2-Model-selection-and-validation.ipynb | ###Markdown
Model Selection and ValidationModel selection and validation are fundamental steps in statistical learning applications. In particular, we wish to select the model that performs optimally, both with respect to the training data and to external data. A model's performance on external data is known as its generalization performance. Depending on the type of learning method we use, we may be interested in one or more of the following:* how many variables should be included in the model?* what hyperparameter values should be used in fitting the model?* how many groups should we use to cluster our data?We will almost almost always use a model's generalization performance to answer these questions.[Givens and Hoeting (2012)](references) includes a dataset for salmon spawning success. We can use this data to fix ideas. If we plot the number of recruits against the number of spawners, we see a distinct positive relationship, as we would expect. The question is, *what sort of polynomial relationship best describes the relationship?*
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
import warnings
warnings.simplefilter("ignore")
salmon = pd.read_table("../data/salmon.dat", sep=r'\s+', index_col=0)
salmon.plot(x='spawners', y='recruits', kind='scatter')
###Output
_____no_output_____
###Markdown
On the one extreme, a linear relationship is underfit; on the other, we see that including a very large number of polynomial terms is clearly overfitting the data.
###Code
fig, axes = plt.subplots(1, 2, figsize=(14,6))
xvals = np.arange(salmon.spawners.min(), salmon.spawners.max())
fit1 = np.polyfit(salmon.spawners, salmon.recruits, 1)
p1 = np.poly1d(fit1)
axes[0].plot(xvals, p1(xvals))
axes[0].scatter(x=salmon.spawners, y=salmon.recruits)
fit15 = np.polyfit(salmon.spawners, salmon.recruits, 15)
p15 = np.poly1d(fit15)
axes[1].plot(xvals, p15(xvals))
axes[1].scatter(x=salmon.spawners, y=salmon.recruits)
###Output
_____no_output_____
###Markdown
We can select an appropriate polynomial order for the model using **cross-validation**, in which we hold out a testing subset from our dataset, fit the model to the remaining data, and evaluate its performance on the held-out subset.
###Code
from sklearn.model_selection import train_test_split
xtrain, xtest, ytrain, ytest = train_test_split(salmon.spawners,
salmon.recruits, test_size=0.3, random_state=42)
###Output
_____no_output_____
###Markdown
A natural criterion to evaluate model performance is root mean square error.$$L\left(Y, \hat{f}(X) \right) = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{f}(x_i)\right)^2}$$
###Code
from sklearn.metrics import mean_squared_error
def rmse(x, y, coefs):
yfit = np.polyval(coefs, x)
return mean_squared_error(y, yfit) ** .5
###Output
_____no_output_____
###Markdown
We can now evaluate the model at varying polynomial degrees, and compare their fit.
###Code
degrees = np.arange(11)
train_err = np.zeros(len(degrees))
validation_err = np.zeros(len(degrees))
for i, d in enumerate(degrees):
p = np.polyfit(xtrain, ytrain, d)
train_err[i] = rmse(xtrain, ytrain, p)
validation_err[i] = rmse(xtest, ytest, p)
fig, ax = plt.subplots()
ax.plot(degrees, validation_err, lw=2, label = 'cross-validation error')
ax.plot(degrees, train_err, lw=2, label = 'training error')
ax.legend(loc=0)
ax.set_xlabel('degree of fit')
ax.set_ylabel('rms error')
###Output
_____no_output_____
###Markdown
In the cross-validation above, notice that the error is high for both very low and very high polynomial values, while training error declines monotonically with degree. The cross-validation error (sometimes referred to as test error or generalization error) is composed of two components: **bias** and **variance**. When a model is underfit, bias is low but variance is high, while when a model is overfit, the reverse is true.One can show that the MSE decomposes into a sum of the bias (squared) and variance of the estimator:$$\begin{aligned}\text{Var}(\hat{\theta}) &= E[\hat{\theta} - \theta]^2 - (E[\hat{\theta} - \theta])^2 \\\Rightarrow E[\hat{\theta} - \theta]^2 &= \text{Var}(\hat{\theta}) + \text{Bias}(\hat{\theta})^2\end{aligned}$$The training error, on the other hand, does not have this tradeoff; it will always decrease (or at least, never increase) as variables (polynomial terms) are added to the model. Information-theoretic Model SelectionOne approach to model selection relies on the in-sample prediction error. One popular approach uses an information-theoretic criterion to identify the most appropriate model. Akaike (1973) found a formal relationship between Kullback-Leibler information (a dominant paradigm in information and coding theory) and likelihood theory. Akaike's Information Criterion (AIC) is an estimator of expected relative K-L information based on the maximized log-likelihood function, corrected for asymptotic bias. $$\text{AIC} = -2 \log(L(\theta|data)) + 2K$$AIC balances the fit of the model (in terms of the likelihood) with the number of parameters required to achieve that fit. We can easily calculate AIC from the residual sums of squares as:$$\text{AIC} = n \log(\text{RSS}/n) + 2k$$where $k$ is the number of parameters in the model. Notice that as the number of parameters increase, the residual sum of squares goes down, but the second term (a penalty) increases.To apply AIC to a model selection problem, we choose the model that has the lowest AIC value.[AIC can be shown to be equivalent to leave-one-out cross-validation](http://www.jstor.org/stable/2984877).
###Code
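# Quick numeric check of the bias-variance decomposition discussed above (illustrative simulation,
# not part of the original analysis): a deliberately biased sample-mean estimator of theta = 0
# should satisfy MSE ~= Var + Bias^2.
rng = np.random.RandomState(0)
theta = 0.0
estimates = rng.normal(theta, 1, size=(10000, 20)).mean(axis=1) + 0.1  # the +0.1 introduces bias
mse = np.mean((estimates - theta) ** 2)
print(mse, estimates.var() + (estimates.mean() - theta) ** 2)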
def aic(rss, n, k):
return n * np.log(float(rss) / n) + 2 * k
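# Illustrative call with made-up RSS values (not from the salmon fit): the lower-RSS model is
# only preferred if its likelihood gain outweighs the 2k penalty for extra parameters.
print(aic(rss=1200.0, n=40, k=2), aic(rss=1100.0, n=40, k=6))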
###Output
_____no_output_____
###Markdown
We can use AIC to select the appropriate polynomial degree.
###Code
aic_values = np.zeros(len(degrees))
params = np.zeros((len(degrees), len(degrees)))
for i, d in enumerate(degrees):
p, residuals, rank, singular_values, rcond = np.polyfit(
salmon.spawners, salmon.recruits, d, full=True)
aic_values[i] = aic((residuals).sum(), len(salmon.spawners), d+1)
params[i, :(d+1)] = p
plt.plot(degrees, aic_values, lw=2)
plt.xlabel('degree of fit')
plt.ylabel('AIC')
###Output
_____no_output_____
###Markdown
For ease of interpretation, AIC values can be transformed into model weights via:$$p_i = \frac{e^{-\frac{1}{2} \text{AIC}_i}}{\sum_j e^{-\frac{1}{2} \text{AIC}_j}}$$
###Code
aic_trans = np.exp(-0.5 * aic_values)
aic_probs = aic_trans / aic_trans.sum()
aic_probs.round(2)
###Output
_____no_output_____
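###Markdown
A numerically safer way to compute the same weights is to use AIC differences ($\Delta_i = \text{AIC}_i - \text{AIC}_{min}$), which avoids overflow or underflow in the exponential when AIC values are large; the resulting weights are identical.
###Code
delta_aic = aic_values - aic_values.min()
aic_weights = np.exp(-0.5 * delta_aic)
aic_weights /= aic_weights.sum()
aic_weights.round(2)
###Output
_____no_output_____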
###Markdown
Metrics for Classification

For classifiers such as decision trees and random forests, we may judge model performance a little differently than above.

First, let's describe four concepts used to evaluate a classifier:

- A **true positive** (TP) occurs when we correctly predict the positive class.
- A **true negative** (TN) occurs when we correctly predict the negative class.
- A **false positive** (FP) occurs when we incorrectly predict the positive class.
- A **false negative** (FN) occurs when we incorrectly predict the negative class.

These concepts can be combined to produce many different aspects of a classifier that we may care about.

**accuracy** - overall, how often the classifier is right: `sklearn.metrics.accuracy_score`

$$\frac{TP + TN}{TP + TN + FP + FN}$$

**precision**: `sklearn.metrics.precision_score`

$$\frac{TP}{TP + FP}$$

**recall** (sensitivity): `sklearn.metrics.recall_score`

$$\frac{TP}{TP + FN}$$

**roc_auc** - the area under the receiver operating characteristic (ROC) curve, a measure of the trade-off between the true positive rate (TPR) and the false positive rate (FPR) as the classification threshold is varied. Equivalently, it is the probability that a randomly chosen positive example is ranked above a randomly chosen negative example: `sklearn.metrics.roc_auc_score`

**f1** - the harmonic mean of recall and precision: `sklearn.metrics.f1_score`

$$\frac{2TP}{2TP + FP + FN}$$

Exercise

Consider the following generated data. Compute the above measures, using any classifier we have seen so far. Use `sklearn.metrics.confusion_matrix` to calculate the quantities by hand. Confirm that they are correct by using the functions noted above.

* What is the accuracy of this classifier?
* The precision and recall?
* The ROC AUC score?
* The f1-measure?
###Code
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.linear_model import LogisticRegression
X, y = make_classification(
n_samples=1000,
n_features=50,
n_informative=25,
n_redundant=0,
random_state=123
)
X_train = X[:750]
y_train = y[:750]
X_test = X[750:]
y_test = y[750:]
# Write answer here
###Output
_____no_output_____
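###Markdown
Before tackling the exercise, here is a toy illustration (on made-up labels, not the generated data above) of how the confusion matrix relates to the formulas: each metric computed by hand agrees with its scikit-learn counterpart.
###Code
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score
y_true_toy = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred_toy = [1, 0, 0, 1, 0, 1, 1, 0]
tn, fp, fn, tp = confusion_matrix(y_true_toy, y_pred_toy).ravel()
print('accuracy :', (tp + tn) / (tp + tn + fp + fn), accuracy_score(y_true_toy, y_pred_toy))
print('precision:', tp / (tp + fp), precision_score(y_true_toy, y_pred_toy))
print('recall   :', tp / (tp + fn), recall_score(y_true_toy, y_pred_toy))
print('f1       :', 2 * tp / (2 * tp + fp + fn), f1_score(y_true_toy, y_pred_toy))
###Output
_____no_output_____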
###Markdown
K-fold Cross-validation

As introduced above, cross-validation is probably the most widely used method for estimating generalization error. In particularly data-rich settings, we may split our sample into a training set, a testing set, and a validation set that is completely set aside until we have performed model selection.

**K-fold cross-validation** is the next best thing. In k-fold cross-validation, the training set is split into *k* smaller sets. Then, for each of the *k* "folds":

1. train the model on the other *k-1* folds
2. validate the model on the remaining fold, using an appropriate metric

The performance measure reported by k-fold CV is then the average of the *k* computed values. This approach can be computationally expensive, but it does not waste much data, which is an advantage over holding out a fixed test subset.
###Code
from sklearn.model_selection import cross_val_score, KFold
nfolds = 3
kf = KFold(n_splits=nfolds, shuffle=True)
fig, axes = plt.subplots(1, nfolds, figsize=(14,4))
for i, fold in enumerate(kf.split(salmon.values)):
training, validation = fold
y, x = salmon.values[training].T
axes[i].plot(x, y, 'ro', label='training')
y, x = salmon.values[validation].T
axes[i].plot(x, y, 'bo', label='validation')
axes[i].legend(fontsize='large')
fig.tight_layout()
k = 5
degrees = np.arange(8)
k_fold_err = np.empty(len(degrees))
for i, d in enumerate(degrees):
error = np.empty(k)
for j, fold in enumerate(KFold(n_splits=k).split(salmon.values)):
training, validation = fold
y_train, x_train = salmon.values[training].T
y_test, x_test = salmon.values[validation].T
p = np.polyfit(x_train, y_train, d)
error[j] = rmse(x_test, y_test, p)
k_fold_err[i] = error.mean()
fig, ax = plt.subplots()
ax.plot(degrees, k_fold_err, lw=2)
ax.set_xlabel('degree of fit')
ax.set_ylabel('average rms error')
###Output
_____no_output_____
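###Markdown
The same k-fold estimate can be obtained more compactly with `cross_val_score` and a polynomial pipeline; a sketch (assuming the `salmon` DataFrame loaded earlier, with its `spawners` and `recruits` columns).
###Code
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
X_salmon = salmon.spawners.values[:, None]
y_salmon = salmon.recruits.values
for d in range(1, 5):
    model = make_pipeline(PolynomialFeatures(degree=d), LinearRegression())
    mse_scores = -cross_val_score(model, X_salmon, y_salmon,
                                  scoring='neg_mean_squared_error', cv=5)
    print(d, np.sqrt(mse_scores.mean()))
###Output
_____no_output_____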
###Markdown
If the model shows high **bias**, the following actions might help:

- **Add more features**. In our example of predicting home prices, it may be helpful to make use of information such as the neighborhood the house is in, the year the house was built, the size of the lot, etc. Adding these features to the training and test sets can improve a high-bias estimator.
- **Use a more sophisticated model**. Adding complexity to the model can help improve on bias. For a polynomial fit, this can be accomplished by increasing the degree d. Each learning technique has its own methods of adding complexity.
- **Decrease regularization**. Regularization is a technique used to impose simplicity in some machine learning models, by adding a penalty term that depends on the characteristics of the parameters. If a model has high bias, decreasing the effect of regularization can lead to better results.

If the model shows **high variance**, the following actions might help:

- **Use fewer features**. Using a feature selection technique may be useful, and decrease the over-fitting of the estimator.
- **Use a simpler model**. Model complexity and over-fitting go hand-in-hand.
- **Use more training samples**. Adding training samples can reduce the effect of over-fitting, and lead to improvements in a high-variance estimator.
- **Increase regularization**. Regularization is designed to prevent over-fitting. In a high-variance model, increasing regularization can lead to better results.

Bootstrap aggregating regression

Splitting datasets into training, cross-validation and testing subsets is inefficient, particularly when the original dataset is not large. As an alternative, we can use bootstrapping to both develop and validate our model without dividing our dataset. One algorithm to facilitate this is **bootstrap aggregation** (or *bagging*).

The **bootstrap** is a tool for assessing statistical accuracy. It involves sampling (training, target) pairs *with replacement* from the original dataset $B$ times. Each sample is the same size as the original dataset.

A bagging regressor is an **ensemble meta-estimator** that fits base regressors on bootstrapped random subsets of the original dataset and then aggregates their individual predictions (either by voting or by averaging) to form a final prediction.
###Code
from sklearn.ensemble import BaggingRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
X,y = salmon.values.T
br = BaggingRegressor(LinearRegression(), oob_score=True, random_state=20090425)
X2 = PolynomialFeatures(degree=2).fit_transform(X[:, None])
br.fit(X2, y)
###Output
_____no_output_____
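###Markdown
For intuition, here is a minimal sketch of what bagging does under the hood (assuming `X2`, `y` and `LinearRegression` from the cell above): draw bootstrap resamples, fit a base regressor on each, and average the predictions.
###Code
B = 50
rng = np.random.RandomState(42)
bootstrap_predictions = []
for b in range(B):
    idx = rng.randint(0, len(y), len(y))          # sample rows with replacement
    fit = LinearRegression().fit(X2[idx], y[idx])
    bootstrap_predictions.append(fit.predict(X2))
bagged_prediction = np.mean(bootstrap_predictions, axis=0)  # aggregate by averaging
###Output
_____no_output_____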
###Markdown
In order to evaluate a particular model, the samples that were not selected for a particular resampled dataset (the **out-of-bag** sample) can be used to estimate the generalization error.
###Code
br.oob_score_
###Output
_____no_output_____
###Markdown
Scikit-learn includes a convenient facility called a **pipeline**, which can be used to chain two or more estimators into a single function. We will hear more about using pipelines later. Here, we will use it to join the bagging regressor, polynomial feature selection, and linear regression into a single function.
###Code
from sklearn.pipeline import make_pipeline
def polynomial_bagging_regression(degree, **kwargs):
return make_pipeline(PolynomialFeatures(degree=degree),
BaggingRegressor(LinearRegression(), **kwargs))
scores = []
for d in degrees:
print('fitting', d)
pbr = polynomial_bagging_regression(d, oob_score=True)
pbr.fit(X[:, None], y)
scores.append(pbr.score(X[:, None], y))
plt.plot(scores)
###Output
_____no_output_____
###Markdown
Regularization

The `scikit-learn` package includes a built-in dataset of diabetes progression, taken from [Efron *et al.* (2003)](http://arxiv.org/pdf/math/0406456.pdf), which includes a set of 10 normalized predictors.
###Code
from sklearn import datasets
# Predictors: "age" "sex" "bmi" "map" "tc" "ldl" "hdl" "tch" "ltg" "glu"
diabetes = datasets.load_diabetes()
###Output
_____no_output_____
###Markdown
Let's examine how a linear regression model performs across a range of sample sizes.
###Code
diabetes['data'].shape
from sklearn import model_selection
def plot_learning_curve(estimator, label=None):
scores = list()
    train_sizes = np.linspace(10, 200, 10).astype(int)  # use the builtin int; np.int is deprecated in newer NumPy
for train_size in train_sizes:
test_error = model_selection.cross_val_score(estimator, diabetes['data'], diabetes['target'],
cv=model_selection.ShuffleSplit(train_size=train_size,
test_size=200,
random_state=0)
)
scores.append(test_error)
plt.plot(train_sizes, np.mean(scores, axis=1), label=label or estimator.__class__.__name__)
plt.ylim(0, 1)
plt.ylabel('Explained variance on test set')
plt.xlabel('Training set size')
plt.legend(loc='best', fontsize='x-large')
plot_learning_curve(LinearRegression())
###Output
_____no_output_____
###Markdown
Notice that linear regression is not defined for scenarios where the number of features/parameters exceeds the number of observations, and it performs poorly as long as the number of samples is not several times the number of features. One approach for dealing with overfitting is to **regularize** the regression model.

The **ridge estimator** is a simple, computationally efficient regularization for linear regression.

$$\hat{\beta}^{ridge} = \text{argmin}_{\beta}\left\{\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^k \beta_j^2 \right\}$$

Typically, we are not interested in shrinking the mean, and the predictors are **standardized** to have zero mean and unit L2 norm. Hence,

$$\hat{\beta}^{ridge} = \text{argmin}_{\beta} \sum_{i=1}^N (y_i - \sum_{j=1}^k x_{ij} \beta_j)^2$$

$$\text{subject to } \sum_{j=1}^k \beta_j^2 < \lambda$$

Note that this is *equivalent* to a Bayesian model $y \sim N(X\beta, I)$ with a Gaussian prior on the $\beta_j$:

$$\beta_j \sim \text{N}(0, \lambda)$$

The estimator for the ridge regression model is:

$$\hat{\beta}^{ridge} = (X'X + \lambda I)^{-1}X'y$$
###Code
from sklearn import preprocessing
from sklearn.linear_model import Ridge
k = diabetes['data'].shape[1]
alphas = np.linspace(0, 4)
params = np.zeros((len(alphas), k))
for i,a in enumerate(alphas):
X = preprocessing.scale(diabetes['data'])
y = diabetes['target']
fit = Ridge(alpha=a, normalize=True).fit(X, y)
params[i] = fit.coef_
fix, ax = plt.subplots(figsize=(14,6))
for param in params.T:
ax.plot(alphas, param)
ax.set_xlabel("$\\alpha$")
ax.set_ylabel("$\\beta$", rotation=0)
plot_learning_curve(LinearRegression())
plot_learning_curve(Ridge())
###Output
_____no_output_____
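###Markdown
The closed-form expression above can be checked directly with linear algebra; a quick sketch (assuming the scaled `X` and `y` from the previous cell). It will not match `Ridge` exactly, since scikit-learn also fits an intercept and scales the penalty, but it illustrates the formula.
###Code
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
beta_ridge.round(2)
###Output
_____no_output_____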
###Markdown
Notice that at very small sample sizes, the ridge estimator outperforms the unregularized model. The regularization of the ridge is a **shrinkage**: the learned coefficients are shrunk towards zero.

The amount of regularization is set via the `alpha` parameter of the ridge, which is tunable. The `RidgeCV` class in `scikit-learn` automatically tunes this parameter via cross-validation.
###Code
for a in [0.001, 0.01, 0.1, 1, 10]:
plot_learning_curve(Ridge(a), a)
###Output
_____no_output_____
###Markdown
scikit-learn's `RidgeCV` class automatically tunes the L2 penalty using Generalized Cross-Validation.
###Code
from sklearn.linear_model import RidgeCV
plot_learning_curve(LinearRegression())
plot_learning_curve(Ridge())
plot_learning_curve(RidgeCV())
###Output
_____no_output_____
###Markdown
In contrast to the ridge estimator, the **lasso estimator** is useful for imposing sparsity on the coefficients. In other words, it is to be preferred if we believe that many of the features are not relevant.

$$\hat{\beta}^{lasso} = \text{argmin}_{\beta}\left\{\frac{1}{2}\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + \lambda \sum_{j=1}^k |\beta_j| \right\}$$

or, equivalently:

$$\hat{\beta}^{lasso} = \text{argmin}_{\beta} \frac{1}{2}\sum_{i=1}^N (y_i - \sum_{j=1}^k x_{ij} \beta_j)^2$$

$$\text{subject to } \sum_{j=1}^k |\beta_j| < \lambda$$

Note that this is *equivalent* to a Bayesian model $y \sim N(X\beta, I)$ with a **Laplace** prior on the $\beta_j$:

$$\beta_j \sim \text{Laplace}(\lambda) = \frac{\lambda}{2}\exp(-\lambda|\beta_j|)$$

Note how the lasso imposes sparseness on the parameter coefficients:
###Code
from sklearn.linear_model import Lasso
k = diabetes['data'].shape[1]
alphas = np.linspace(0.1, 3)
params = np.zeros((len(alphas), k))
for i,a in enumerate(alphas):
X = preprocessing.scale(diabetes['data'])
y = diabetes['target']
fit = Lasso(alpha=a, normalize=True).fit(X, y)
params[i] = fit.coef_
plt.figure(figsize=(14,6))
for param in params.T:
plt.plot(alphas, param)
plot_learning_curve(RidgeCV())
plot_learning_curve(Lasso(0.05))
###Output
_____no_output_____
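###Markdown
To make the sparsity concrete, we can count how many lasso coefficients remain non-zero as the penalty grows (assuming `params` and `alphas` from the cell above).
###Code
n_nonzero = (np.abs(params) > 1e-10).sum(axis=1)
for a, n in zip(alphas[::10], n_nonzero[::10]):
    print('alpha = {:.2f}: {} non-zero coefficients'.format(a, n))
###Output
_____no_output_____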
###Markdown
In this example, the ridge estimator performs better than the lasso, but when there are fewer observations, the lasso matches its performance. Otherwise, the variance-reducing effect of the lasso regularization is unhelpful relative to the increase in bias.

With the lasso too, we must tune the regularization parameter for good performance. There is a corresponding `LassoCV` function in `scikit-learn`, but it is computationally expensive. To speed it up, we can reduce the number of values explored for the alpha parameter.
###Code
from sklearn.linear_model import LassoCV
plot_learning_curve(RidgeCV())
plot_learning_curve(LassoCV(n_alphas=10, max_iter=5000))
###Output
_____no_output_____
###Markdown
Can't decide? **ElasticNet** is a compromise between lasso and ridge regression.$$\hat{\beta}^{elastic} = \text{argmin}_{\beta}\left\{\frac{1}{2}\sum_{i=1}^N (y_i - \beta_0 - \sum_{j=1}^k x_{ij} \beta_j)^2 + (1 - \alpha) \sum_{j=1}^k \beta^2_j + \alpha \sum_{j=1}^k |\beta_j| \right\}$$where $\alpha = \lambda_1/(\lambda_1 + \lambda_2)$. Its tuning parameter $\alpha$ (`l1_ratio` in `scikit-learn`) controls this mixture: when set to 0, ElasticNet is a ridge regression, when set to 1, it is a lasso. The sparser the coefficients, the higher we should set $\alpha$. Note that $\alpha$ can also be set by cross-validation, though it is computationally costly.
###Code
from sklearn.linear_model import ElasticNetCV
plot_learning_curve(RidgeCV())
plot_learning_curve(ElasticNetCV(l1_ratio=.7, n_alphas=10))
###Output
_____no_output_____
###Markdown
Using Cross-validation for Parameter Tuning
###Code
lasso = Lasso()
alphas = np.logspace(-4, -1, 20)
scores = np.empty(len(alphas))
scores_std = np.empty(len(alphas))
for i,alpha in enumerate(alphas):
lasso.alpha = alpha
s = model_selection.cross_val_score(lasso, diabetes.data, diabetes.target, n_jobs=-1)
scores[i] = s.mean()
scores_std[i] = s.std()
plt.semilogx(alphas, scores)
plt.semilogx(alphas, np.array(scores) + np.array(scores_std)/20, 'b--')
plt.semilogx(alphas, np.array(scores) - np.array(scores_std)/20, 'b--')
plt.yticks(())
plt.ylabel('CV score')
plt.xlabel('alpha')
plt.axhline(np.max(scores), linestyle='--', color='.5')
plt.text(5e-2, np.max(scores)+1e-4, str(np.max(scores).round(3)))
###Output
_____no_output_____
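###Markdown
An alternative sketch of the same tuning using `GridSearchCV`, which refits the best model automatically (reusing the `alphas` grid defined above; `max_iter` is raised to help convergence at small penalties).
###Code
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(Lasso(max_iter=10000), {'alpha': alphas}, cv=5)
grid.fit(diabetes.data, diabetes.target)
print(grid.best_params_, grid.best_score_)
###Output
_____no_output_____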
###Markdown
Model Checking using Learning Curves

A useful way of checking model performance (in terms of bias and/or variance) is to plot learning curves, which illustrate the learning process as your model is exposed to more data. When the dataset is small, it is easier for a model of a particular complexity to be made to fit the training data well. As the dataset grows, we expect the training error to increase (model accuracy decreases). Conversely, a relatively small dataset means the model will not generalize well, and hence the cross-validation score will be lower, on average.
###Code
from sklearn.model_selection import learning_curve
train_sizes, train_scores, test_scores = learning_curve(lasso,
diabetes.data, diabetes.target,
train_sizes=[50, 70, 90, 110, 130], cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
###Output
_____no_output_____
###Markdown
For models with high bias, training and cross-validation scores will tend to converge at a low value (high error), indicating that adding more data will not improve performance. For models with high variance, there may be a gap between the training and cross-validation scores, suggesting that model performance could be improved with additional information.
###Code
X,y = salmon.values.T
X2 = PolynomialFeatures(degree=2).fit_transform(X[:, None])
train_sizes, train_scores, test_scores = learning_curve(LinearRegression(), X2, y,
train_sizes=[10, 15, 20, 30], cv=5)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
###Output
_____no_output_____
###Markdown
Exercise: Very low birthweight infants

Compare logistic regression models (using the `linear_model.LogisticRegression` interface) with varying degrees of regularization for the VLBW infant database. Evaluate them with a relevant metric such as the Brier score:

$$B = \frac{1}{n} \sum_{i=1}^n (\hat{p}_i - y_i)^2$$
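For reference, a toy sketch of the Brier score itself (on made-up values, not the exercise answer): the hand computation matches `sklearn.metrics.brier_score_loss`.
###Code
from sklearn.metrics import brier_score_loss
y_true_toy = np.array([0, 1, 1, 0, 1])
p_hat_toy = np.array([0.1, 0.8, 0.6, 0.4, 0.9])
print(np.mean((p_hat_toy - y_true_toy) ** 2))   # by hand
print(brier_score_loss(y_true_toy, p_hat_toy))  # scikit-learn equivalent
###Output
_____no_output_____
###Markdown
The VLBW data for the exercise is loaded below.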
###Code
vlbw = pd.read_csv("../data/vlbw.csv", index_col=0)
vlbw = vlbw.replace(
{
'inout': {
'born at Duke': 0,
'transported': 1
},
'delivery': {
'abdominal':0,
'vaginal':1
},
'ivh': {
'absent': 0,
'present': 1,
'possible': 1,
'definite': 1
},
'sex': {
'female': 0,
'male': 1
}
}
)
vlbw = vlbw[['birth', 'exit', 'hospstay', 'lowph', 'pltct',
'bwt', 'gest', 'meth', 'toc', 'delivery', 'apg1',
'vent', 'pneumo', 'pda', 'cld', 'ivh']].dropna()
# Write your answer here
###Output
_____no_output_____ |
1-dl-project/dl-9-baseline-model-analysis.ipynb | ###Markdown
Import Libraries & Define Functions
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import glob
sns.set(style='whitegrid')
def frame_it(path):
csv_files = glob.glob(path + '/*.csv')
df_list = []
for filename in csv_files:
df = pd.read_csv(filename, index_col='Unnamed: 0', header=0)
df_list.append(df)
return pd.concat(df_list, axis=1)
def show_values_on_bars(axs, h_v="v", space=0.4,pct=False,neg=False):
def _show_on_single_plot(ax):
if h_v == "v":
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
if pct == True:
value = '{:.2%}'.format(p.get_height())
else:
value = '{:.2f}'.format(p.get_height())
ax.text(_x, _y, value, ha="center")
elif h_v == "h":
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
if pct == True:
value = '{:.2%}'.format(p.get_width())
else:
value = '{:.2f}'.format(p.get_width())
if neg == True:
ax.text(_x, _y, value, ha="right")
else:
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
###Output
_____no_output_____
###Markdown
Analysis
###Code
# MODIFY!
df = frame_it('./baseline-err')
# transpose the data frame (rows become models), due to the way the CSVs were exported
df = df.T
df_rmse = df.sort_values('RMSE')
df_rmse
notebook_name = 'dl-9-baseline-model-analysis'  # assumed from this notebook's filename; adjust if needed
df_rmse.to_csv(f'./analysis/{notebook_name}.csv')
###Output
_____no_output_____
###Markdown
ERR Values [MBit/s] and [(MBit/s)^2]
###Code
df_rmse.style.highlight_min(color = 'lightgrey', axis = 0).set_table_styles([{'selector': 'tr:hover','props': [('background-color', '')]}])
###Output
_____no_output_____
###Markdown
RMSE Performance Decline based on Best Performance [%]
###Code
df_rmse_min = df_rmse.apply(lambda value : -((value/df.min())-1),axis=1)
df_rmse_min = df_rmse_min.sort_values('RMSE',ascending=False)
df_rmse_min.to_csv(f'./analysis/{notebook_name}-min.csv')
df_rmse_min.style.highlight_max(color = 'lightgrey', axis = 0).set_table_styles([{'selector': 'tr:hover','props': [('background-color', '')]}]).format('{:.2%}')
###Output
_____no_output_____
###Markdown
RMSE Performance Increment based on Worst Performance [%]
###Code
df_rmse_max = df.apply(lambda value : abs((value/df.max())-1),axis=1)
df_rmse_max = df_rmse_max.sort_values('RMSE',ascending=False)
df_rmse_max.to_csv(f'./analysis/{notebook_name}-max.csv')
df_rmse_max.style.highlight_max(color = 'lightgrey', axis = 0).set_table_styles([{'selector': 'tr:hover','props': [('background-color', '')]}]).format('{:.2%}')
# the information in this table is not that meaningful / useful
###Output
_____no_output_____
###Markdown
Visualization
###Code
ax = sns.barplot(data=df_rmse, x='RMSE',y=df_rmse.index, palette='mako')
show_values_on_bars(ax, "h", 0.1)
ax.set(ylabel='Model',xlabel='RMSE [MBit/s]')
ax.tick_params(axis=u'both', which=u'both',length=0)
ax.set_title('Baseline Model RMSE');
ax = sns.barplot(data=df_rmse_min,x='RMSE',y=df_rmse_min.index,palette='mako')
ax.set(ylabel='Model',xlabel='RMSE Performance Decline [%]')
ax.yaxis.set_label_position("right")
ax.yaxis.set_ticks_position("right")
ax.tick_params(axis=u'both', which=u'both',length=0)
show_values_on_bars(ax,"h",0.001,True,True)
ax.set_title('Baseline Model RMSE Performance Decline based on Best Performance');
ax = sns.barplot(data=df_rmse_max,x='RMSE',y=df_rmse_max.index,palette='mako')
show_values_on_bars(ax,"h",0.001,True)
ax.tick_params(axis=u'both', which=u'both',length=0)
ax.set(ylabel='Model',xlabel='RMSE Performance Increment [%]')
ax.set_title('Baseline Model RMSE Performance Increment based on Worst Performance');
###Output
_____no_output_____ |
003-forward-and-back-props/Backward Propagation.ipynb | ###Markdown
Network Initializer

What is a neuron? Feed-forward neural networks are inspired by the information processing of one or more neural cells, called neurons. A neuron accepts input signals via its dendrites, which pass the electrical signal down to the cell body. The axon carries the signal out to synapses, which are the connections of a cell's axon to other cells' dendrites.
###Code
from random import random, seed
def initialize_network(n_inputs, n_hidden, n_outputs):
network = list()
# Creating hidden layers according to the number of inputs
hidden_layer = [{'weights': [random() for i in range(n_inputs + 1)]} for i in range(n_hidden)]
network.append(hidden_layer)
# Creating output layer according to the number of hidden layers
output_layer = [{'weights': [random() for i in range(n_hidden + 1)]} for i in range(n_outputs)]
network.append(output_layer)
return network
# It is good practice to initialize the network weights to small random numbers.
# In this case, will we use random numbers in the range of 0 to 1.
# To achieve that we seed random with 1
seed(1)
# 2 input units, 1 hidden unit and 2 output units
network = initialize_network(2, 1, 2)
# You can see the hidden layer has one neuron with 2 input weights plus the bias.
# The output layer has 2 neurons, each with 1 weight plus the bias.
for layer in network:
print(layer)
###Output
[{'weights': [0.7887233511355132, 0.0938595867742349, 0.02834747652200631]}]
[{'weights': [0.8357651039198697, 0.43276706790505337]}, {'weights': [0.762280082457942, 0.0021060533511106927]}]
###Markdown
Forward propagate

We can calculate an output from a neural network by propagating an input signal through each layer until the output layer outputs its values.

We can break forward propagation down into three parts:

1. Neuron Activation.
2. Neuron Transfer.
3. Forward Propagation.

1. Neuron Activation

The first step is to calculate the activation of one neuron given an input. Neuron activation is calculated as the weighted sum of the inputs, much like linear regression:

activation = sum(weight_i * input_i) + bias

Where weight is a network weight, input is an input, i is the index of a weight or an input, and bias is a special weight that has no input to multiply with (or you can think of the input as always being 1.0).
###Code
# Implementation
def activate(weights, inputs):
activation = weights[-1]
for i in range(len(weights) - 1):
activation += weights[i] * inputs[i]
return activation
###Output
_____no_output_____
###Markdown
2. Neuron Transfer

Once a neuron is activated, we need to transfer the activation to see what the neuron output actually is.

Different transfer functions can be used. It is traditional to use the *sigmoid activation function*, but you can also use the *tanh* (hyperbolic tangent) function to transfer outputs. More recently, the *rectifier transfer function* has been popular with large deep learning networks.

Sigmoid formula:

output = 1 / (1 + e^(-activation))
###Code
from math import exp
def transfer(activation):
return 1.0 / (1.0 + exp(-activation))
###Output
_____no_output_____
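###Markdown
The alternative transfer functions mentioned above can be written the same way; a quick sketch (they are not used in the rest of this notebook, which sticks with the sigmoid).
###Code
def transfer_tanh(activation):
    return math.tanh(activation)
def transfer_relu(activation):
    return max(0.0, activation)
###Output
_____no_output_____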
###Markdown
3. Forward propagate
###Code
# Foward propagate is self-explanatory
def forward_propagate(network, row):
inputs = row
for layer in network:
new_inputs = []
for neuron in layer:
activation = activate(neuron['weights'], inputs)
neuron['output'] = transfer(activation)
new_inputs.append(neuron['output'])
inputs = new_inputs
return inputs
inputs = [1, 0, None]
output = forward_propagate(network, inputs)
# Running the example propagates the input pattern [1, 0] and produces an output value that is printed.
# Because the output layer has two neurons, we get a list of two numbers as output.
output
###Output
_____no_output_____
###Markdown
Backpropagation

What is it?

1. Error is calculated between the expected outputs and the outputs forward propagated from the network.
2. These errors are then propagated backward through the network from the output layer to the hidden layer, assigning blame for the error and updating weights as they go.

This part is broken down into two sections:

- Transfer Derivative
- Error Backpropagation

Transfer Derivative

Given an output value from a neuron, we need to calculate its *slope*:

derivative = output * (1.0 - output)
###Code
# Calulates the derivation from an neuron output
def transfer_derivative(output):
return output * (1.0 - output)
###Output
_____no_output_____
###Markdown
Error Backpropagation

1. Calculate the error for each output neuron; this gives us the error signal (input) to propagate backwards through the network.

error = (expected - output) * transfer_derivative(output)

expected: expected output value for the neuron
output: output value for the neuron

----

The back-propagated error signal is accumulated and then used to determine the error for a neuron in the hidden layer, as follows:

error = (weight_k * error_j) * transfer_derivative(output)

error_j: the error signal from the jth neuron in the output layer
weight_k: the weight that connects the kth neuron to the current neuron
output: the output for the current neuron
###Code
def backward_propagate_error(network, expected):
for i in reversed(range(len(network))):
layer = network[i]
errors = list()
if i != len(network) - 1:
for j in range(len(layer)):
error = 0.0
for neuron in network[i + 1]:
error += (neuron['weights'][j] * neuron['delta'])
errors.append(error)
else:
for j in range(len(layer)):
neuron = layer[j]
errors.append(expected[j] - neuron['output'])
for j in range(len(layer)):
neuron = layer[j]
neuron['delta'] = errors[j] * transfer_derivative(neuron['output'])
expected = [0, 1]
backward_propagate_error(network, expected)
# delta: error value
for layer in network:
print(layer)
###Output
[{'weights': [0.7887233511355132, 0.0938595867742349, 0.02834747652200631], 'output': 0.6936142046010635, 'delta': -0.011477619712406795}]
[{'weights': [0.8357651039198697, 0.43276706790505337], 'output': 0.7335023968859138, 'delta': -0.1433825771158816}, {'weights': [0.762280082457942, 0.0021060533511106927], 'output': 0.6296776889933221, 'delta': 0.08635312555373359}]
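###Markdown
The deltas computed above are what a training loop would use to adjust the weights. A minimal sketch of that update step, assuming the conventional rule weight += learning_rate * delta * input (the update itself is not part of the original notebook).
###Code
def update_weights(network, row, l_rate):
    for i in range(len(network)):
        inputs = row[:-1]   # last element of the row is the class label placeholder
        if i != 0:
            # inputs to this layer are the outputs of the previous layer
            inputs = [neuron['output'] for neuron in network[i - 1]]
        for neuron in network[i]:
            for j in range(len(inputs)):
                neuron['weights'][j] += l_rate * neuron['delta'] * inputs[j]
            # the last weight is the bias, whose input is fixed at 1
            neuron['weights'][-1] += l_rate * neuron['delta']
###Output
_____no_output_____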
|
crime_stats_compute_aea.ipynb | ###Markdown
Notation:- SAL- small area- PP- police precinct- AEA- Albers Equal Area Conic- CPS- crime per SAL
###Code
from random import shuffle, randint
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Polygon, Point, MultiPoint, MultiPolygon, LineString, mapping, shape
from descartes import PolygonPatch
import random
import fiona
import numpy as np
import csv
from fiona import collection
import geopandas as gpd
from geopandas.tools import sjoin # rtree index in-build, used with inner, intersection
import pandas as pd
from collections import defaultdict
###Output
_____no_output_____
###Markdown
For reference, the signature and docstring of `geopandas.tools.sjoin`:

    def sjoin(left_df, right_df, how='inner', op='intersects', lsuffix='left', rsuffix='right', **kwargs):
        """Spatial join of two GeoDataFrames.
        left_df, right_df are GeoDataFrames
        how: type of join
            left -> use keys from left_df; retain only left_df geometry column
            right -> use keys from right_df; retain only right_df geometry column
            inner -> use intersection of keys from both dfs; retain only left_df geometry column
        op: binary predicate {'intersects', 'contains', 'within'}
            see http://toblerity.org/shapely/manual.html#binary-predicates
        lsuffix: suffix to apply to overlapping column names (left GeoDataFrame)
        rsuffix: suffix to apply to overlapping column names (right GeoDataFrame)
        """
###Code
def find_intersections(o):
from collections import defaultdict
paired_ind = [o.pp_index, o.sal_index]
d_over_ind = defaultdict(list)
    # creating a dictionary that has precincts as keys and associated small areas as values
for i in range(len(paired_ind[0].values)):
if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection
d_over_ind[paired_ind[0].values[i]].append(paired_ind[1].values[i])
# get rid of the pol precincts with no small areas associated to them- not the most efficient way
d_temp = {}
for l in d_over_ind:
if len(d_over_ind[l]):
d_temp[l] = d_over_ind[l]
return d_temp
def calculate_join_indices(g1_reind, g2_reind):
# A: region of the police data with criminal record
# C: small area with population data
    # for each police precinct A_j we find all small areas C_i intersecting it, calculate the fraction of overlap,
    # and scale the population accordingly: area(A_j intersected with C_i)/area(C_i) * popul(C_i)
# the actual indexing:
out = sjoin(g1_reind, g2_reind, how ="inner", op = "intersects")
    out.drop('index_right', axis=1, inplace=True) # there is a double index for small areas, so we drop one
    #out_sorted = out.sort(columns='polPrecincts_index', ascending=True) # sorting is not necessary, because we
    # use dictionaries at later stages
#dict_over_ind = find_intersections(out_sorted)
# output retains only 1 area (left or right join), and gives no intersection area.
# so we create an array with paired indices: police precincts with associated small areas
# we use it in a loop in a function below
dict_over_ind = find_intersections(out)
return dict_over_ind
def calculate_inclusion_indices(g1_reind, g2_reind):
out = sjoin(g1_reind, g2_reind, op = "contains") ## PP contains SAL
out.drop('index_right', axis=1, inplace=True)
dict_over_ind = find_intersections(out)
return dict_over_ind
def calculate_join(dict_over_ind, g1_reind, g2_reind):
area_total = 0
data_aggreg = []
# note to self: make sure to import shapely Polygon
for index1, crim in g1_reind.iterrows():
try:
index1 = crim.pp_index
sals_found = dict_over_ind[index1]
for sal in range(len(sals_found)):
pom = g2_reind[g2_reind.sal_index == sals_found[sal]]['geometry']
#if pom.intersects(crim['geometry']).values[0]:
area_int = pom.intersection(crim['geometry']).area.values[0]
if area_int>0:
area_total += area_int
area_crim = crim['geometry'].area
area_popu = pom.values[0].area
popu_count = g2_reind[g2_reind.sal_index == sals_found[sal]]['PPL_CNT'].values[0]
murd_count = crim['murd_cnt']
pol_province = crim['province']
popu_frac = (area_int / area_popu) * popu_count# fraction of the pop area contained inside the crim
#print(popu_frac)
extra_info_col_names = ['DC_NAME','MN_NAME','MP_NAME','PR_NAME','SP_NAME']
extra_info_col_codes = ['MN_CODE','MP_CODE','PR_CODE','SAL_CODE','SP_CODE']
extra_names = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_names]#.filter(regex=("NAME"))
extra_codes = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_codes]#.filter(regex=("NAME"))
data_aggreg.append({'geometry': pom.intersection(crim['geometry']).values[0], 'id1': index1,\
'id2': sals_found[sal] ,'area_pp': area_crim,'area_sal': area_popu,\
'area_inter': area_int, 'popu_inter' : popu_frac, 'popu_sal': popu_count,\
'murd_cnt': murd_count,'province': pol_province,
'DC_NAME': extra_names.DC_NAME.values[0],\
'MN_NAME': extra_names.MN_NAME.values[0], 'MP_NAME': extra_names.MP_NAME.values[0],\
'PR_NAME': extra_names.PR_NAME.values[0],'SP_NAME': extra_names.SP_NAME.values[0],\
'MN_CODE': extra_codes.MN_CODE.values[0],'MP_CODE': extra_codes.MP_CODE.values[0],\
'PR_CODE': extra_codes.PR_CODE.values[0],'SAL_CODE': extra_codes.SAL_CODE.values[0],\
'SP_CODE': extra_codes.SP_CODE.values[0]} )
except:
pass
df_t = gpd.GeoDataFrame(data_aggreg,columns=['geometry', 'id1','id2','area_pp',\
'area_sal','area_inter', 'popu_inter',\
'popu_sal', 'murd_cnt','province','DC_NAME',\
'MN_NAME','MP_NAME','PR_NAME','SP_NAME',\
'MN_CODE','MP_CODE','PR_CODE','SAL_CODE','SP_CODE'])
#df_t.to_file(out_name)
return df_t, area_total, data_aggreg
# this function adds the remaining columns, calculates fractions etc
def compute_final_col(df_temp):
# add population data per police percinct to the main table
# id1- PP, id2 - SAL
temp = df_temp.groupby(by=['id1'])['popu_inter'].sum().reset_index()
data_with_population = pd.merge(df_temp, temp, on='id1', how='outer')\
.rename(columns={'popu_inter_y':'popu_frac_per_pp', 'popu_inter_x':'popu_inter'})
# finally, update the murder rate per SAL : id2 is sal's id
data_with_population['murd_est_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']\
* data_with_population['murd_cnt']
data_mur_per_int = data_with_population.groupby(by=['id2'])['murd_est_per_int'].sum().reset_index()
data_mur_per_sal = data_mur_per_int.rename(columns={'murd_est_per_int':'murd_est_per_sal'})
    data_with_population['ratio_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']
data_complete = pd.merge(data_with_population, data_mur_per_sal, on='id2', how='outer')\
.rename(columns={'id1':'index_PP', 'id2':'index_SAL'})
return data_complete
###Output
_____no_output_____
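###Markdown
A toy illustration of the area-weighting idea implemented above: a small area's population is allocated to a police precinct in proportion to the fraction of the small area that the precinct covers (made-up boxes, not the real data).
###Code
from shapely.geometry import box
sal_toy = box(0, 0, 2, 2)   # a "small area" with, say, 100 people
pp_toy = box(1, 0, 3, 2)    # a "police precinct" overlapping half of it
frac = sal_toy.intersection(pp_toy).area / sal_toy.area
frac * 100                  # 50.0 people allocated to this precinct
###Output
_____no_output_____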
###Markdown
Main functions to find intersection. Files loaded in are the AEA projected shapefiles.
###Code
salSHP_upd = 'shapefiles/updated/sal_population_aea.shp'
polSHP_upd = 'shapefiles/updated/polPrec_murd2015_prov_aea.shp'
geo_pol = gpd.GeoDataFrame.from_file(polSHP_upd)
geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd)
geo_pol_reind = geo_pol.reset_index().rename(columns={'index':'pp_index'})
geo_sal_reind = geo_sal.reset_index().rename(columns={'index':'sal_index'})
#dict_int = calculate_join_indices(geo_pol_reind,geo_sal_reind)
###Output
_____no_output_____
###Markdown
test on a subset:
###Code
gt1= geo_pol_reind[geo_pol.province=="Free State"].head(n=2)
gt2 = geo_sal_reind[geo_sal_reind.PR_NAME=="Free State"].reset_index()
d = calculate_join_indices(gt1, gt2)
###Output
_____no_output_____
###Markdown
Running the intersections on pre-computed indices:
###Code
from timeit import default_timer as timer
#start = timer()
#df_inc, sum_area_inc, data_inc = calculate_join(dict_inc, geo_pol_reind, geo_sal_reind)
#end = timer()
#print("1st", end - start)
start = timer()
df_int, sum_area_int, data_int = calculate_join(dict_int, geo_pol_reind, geo_sal_reind)
end = timer()
print("2nd", end - start)
###Output
_____no_output_____
###Markdown
find pol precincts within WC boundary
###Code
za_province = gpd.read_file('za-provinces.topojson',driver='GeoJSON')#.set_index('id')
za_province.crs={'init': '27700'}
wc_boundary = za_province.ix[8].geometry # WC
#pp_WC = geo_pol[geo_pol.geometry.within(wc_boundary)]
pp_WC_in = geo_pol[geo_pol.geometry.intersects(wc_boundary)]
#.unary_union, sal_wc_union_bound = sal_WC_in.unary_union
pp_WC_overlaps = pp_WC_in[pp_WC_in.province!="Western Cape"]
pp_WC_pol_annot = pp_WC_in[pp_WC_in.province=="Western Cape"]
#pp_test = pp_WC_in[pp_WC_in['compnt_nm'].isin(['atlantis','philadelphia','kraaifontein','brackenfell','kuilsriver','kleinvleveerste river','macassar','somerset west','fish hoek'])]
#pp_test = pp_WC_in[pp_WC_in['compnt_nm'].isin(['beaufort west','doring bay','murraysburg', 'strandfontein','nuwerus','lutzville'])]
%matplotlib inline
#pp_WC_overlaps.plot()
###Output
_____no_output_____
###Markdown
Adding final columns:
###Code
# There are 101,546 intersections
df_int_aea = compute_final_col(df_int) # add final calculations
df_int_aea.to_csv('data/pp_int_intersections2.csv')
###Output
_____no_output_____
###Markdown
Some intersections are multipolygons (PP and SAL intersect in multiple areas):
###Code
df_int_aea.head(n=3).values[2][0]
###Output
_____no_output_____
###Markdown
There are curious cases of intersections which form polygons. For example, a Free State police precinct, 'dewetsdorp', with a murder count of 1 (yet a high rate of stock theft: 52 in 2014), intersects SAL 4990011 (part of SP Mangaung NU) in two lines:
###Code
geo_sal_reind[geo_sal_reind.sal_index==28532].geometry.values[0]
geo_pol_reind[geo_pol_reind.pp_index ==358].geometry.values[0]
a = geo_pol_reind[geo_pol_reind.pp_index ==358].geometry.values[0]
b= geo_sal_reind[geo_sal_reind.sal_index==28532].geometry.values[0]
c = [geo_pol_reind[geo_pol_reind.pp_index ==358].geometry.values[0],geo_sal_reind[geo_sal_reind.sal_index==28532].geometry.values[0]]
cascaded_union(c)
from shapely.ops import cascaded_union
cascaded_union(b)
geo_sal_reind[geo_sal_reind.sal_index==28532]
df_int_aea.to_file('data/pp_int_intersections.shp')
# When reading from a file"
import pandas as pd
df_int_aea = pd.read_csv('data/pp_int_intersections.csv')
# when reading from file a column Unnamed is added. Needs to be removed.
cols = [c for c in df_int_aea.columns if c.lower()[:7] != 'unnamed']
df_int_aea=df_int_aea[cols]
df_int_aea.head(n=2)
data_prov = df_int_aea[['PR_NAME','province','murd_est_per_int']]
data_prov.groupby('province')['murd_est_per_int'].sum()
data_prov.groupby('PR_NAME')['murd_est_per_int'].sum()
# check over small areas- sum of all the crimes should be 17482
pom = {}
for ind, row in df_inc_aea.iterrows():
pom[row['index_SAL']] = row['murd_est_per_sal']
s=0
for key in pom:
s = s + pom[key]
print(s)
###Output
_____no_output_____
###Markdown
Measuring the error of the 'CPS' estimate

Computing the lower (LB) and upper (UB) bounds, wherever possible, is done the following way:

UB: base the calculation of population per PP on all SALs included entirely within the PP. If this is not possible, set the bound to NaN.

LB: find all SALs intersecting a given PP, but base the PP population estimate on the population of the entire SAL, not the population of the intersection.

As a result, each intersection will have a triplet of values associated with it: (LB, actual estimate, UB/NaN). The bounds are not additive- that is, the estimate applies only at the level of the SAL area and will not be maintained when summed over, e.g., SP or MN.

For modifying/selecting entries for bound estimation, we discard the last 4 columns with precomputed values.
###Code
df_int=df_int_aea.ix[:,:20]
# this function adds the remaining columns, calculates fractions etc
def compute_final_col_bounds(df_aea):
#recalculate pop frac per PP
temp = df_aea.groupby(by=['index_PP'])['popu_inter'].sum().reset_index()
data_with_population = pd.merge(df_aea, temp, on='index_PP', how='outer')\
.rename(columns={'popu_inter_y':'popu_frac_per_pp', 'popu_inter_x':'popu_inter'})
data_with_population['murd_est_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']\
* data_with_population['murd_cnt']
data_mur_per_int = data_with_population.groupby(by=['index_SAL'])['murd_est_per_int'].sum().reset_index()
data_mur_per_sal = data_mur_per_int.rename(columns={'murd_est_per_int':'murd_est_per_sal'})
    data_with_population['ratio_per_int'] = data_with_population['popu_inter']/data_with_population['popu_frac_per_pp']
data_complete = pd.merge(data_with_population, data_mur_per_sal, on='index_SAL', how='outer')
#\ .rename(columns={'id1':'index_PP', 'id2':'index_SAL'})
return data_complete
###Output
_____no_output_____
###Markdown
create new tables for the LB and UB
###Code
list_lb =[]
list_ub = []
for i,entry in df_int.iterrows():#f_inc_aea:
if (entry.area_inter/entry.area_sal==1): # select those included 'completely'
list_ub.append(entry)
entry.popu_inter = entry.popu_sal # this is actually already true for the above if() case
list_lb.append(entry)
df_int_aea_ub_p=gpd.GeoDataFrame(list_ub)
df_int_aea_lb_p=gpd.GeoDataFrame(list_lb)
df_int_aea_lb = compute_final_col_bounds(df_int_aea_lb_p)\
.rename(columns={'murd_est_per_int':'murd_est_per_int_lb',\
'ratio_per_int':'ratio_per_int_lb','murd_est_per_sal':'murd_est_per_sal_lb'})
# complete
df_int_aea_ub = compute_final_col_bounds(df_int_aea_ub_p)\
.rename(columns={'murd_est_per_int':'murd_est_per_int_ub',\
'ratio_per_int':'ratio_per_int_ub','murd_est_per_sal':'murd_est_per_sal_ub'})
#check if numbers add up per province level (invariant for inclusion):
data_prov = df_int_aea_ub[['PR_NAME','province','murd_est_per_int_ub']]
data_prov.groupby('province')['murd_est_per_int_ub'].sum()
temp_ub = df_int_aea_ub.groupby(by=['SP_CODE'])['murd_est_per_int_ub'].sum().reset_index()
temp_lb = df_int_aea_lb.groupby(by=['SP_CODE'])['murd_est_per_int_lb'].sum().reset_index()
temp_est = df_int_aea.groupby(by=['SP_CODE'])['murd_est_per_int'].sum().reset_index()
temp = pd.merge(temp_lb, temp_est, on='SP_CODE', how='outer')
df_bounds = pd.merge(temp, temp_ub, on='SP_CODE', how='outer')
###Output
_____no_output_____
###Markdown
At the level of SP (and probably others) some bounds are inverted... UB < LB (2,242 out of 21,589)
###Code
#mn_bounds_def = mn_bounds[~mn_bounds.UB_murder.isnull()]
df_inv_bounds = df_bounds[df_bounds.murd_est_per_int_ub<df_bounds.murd_est_per_int_lb]
df_inv_bounds.tail()
temp_ub = df_int_aea_ub.groupby(by=['SAL_CODE'])['murd_est_per_int_ub'].sum().reset_index()
temp_lb = df_int_aea_lb.groupby(by=['SAL_CODE'])['murd_est_per_int_lb'].sum().reset_index()
temp_est = df_int_aea.groupby(by=['SAL_CODE'])['murd_est_per_int'].sum().reset_index()
# .rename(columns={'popu_inter_y':'popu_frac_per_pp', 'popu_inter_x':'popu_inter'})
temp = pd.merge(temp_lb, temp_est, on='SAL_CODE', how='outer')
df_bounds = pd.merge(temp, temp_ub, on='SAL_CODE', how='outer')
mn_names_set = set(df_int_aea_lb.MN_NAME)
mn_names = []
for s in mn_names_set:
mn_names.append(s)
df_bounds.head(n=2)
df_bound_nonan = df_bounds[~df_bounds.murd_est_per_int_ub.isnull()&df_bounds.murd_est_per_int>0].sort(['murd_est_per_int'])
###Output
_____no_output_____
###Markdown
Plotting the lower and upper bounds:
###Code
import warnings
warnings.filterwarnings('ignore')
import mpld3
from mpld3 import plugins
from mpld3.utils import get_id
#import numpy as np
import collections
from mpld3 import enable_notebook
enable_notebook()
def make_labels_points(dataf):
L = len(dataf)
x = np.array(dataf['murd_est_per_int_lb'])
y = np.array(dataf['murd_est_per_int_ub'])
z = np.array(dataf['murd_est_per_int'])
l = np.array(dataf['SAL_CODE'])
d = y-x # error
s = " "
sc = ", err: "
seq = []
seqc = []
t = [seq.append(s.join((str(l[i]), str(z[i])))) for i in range(L)]
t = [seqc.append(sc.join((seq[i], str(d[i])))) for i in range(L)]
return seqc, L
def make_scatter(dataf, outname, outtitle):
l = np.array(dataf['SAL_CODE'])
x = np.array(dataf['murd_est_per_int_lb'])
y = np.array(dataf['murd_est_per_int_ub'])
z = np.array(dataf['murd_est_per_int'])
d = y-x # error
# build a rectangle in axes coords
left, width = .15, .7
bottom, height = .09, .75
right = left + width
top = bottom + height
fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE'))
N=len(dataf)
scatter = ax.scatter(range(1,N+1),z,c=100*d,s=1000*d,alpha=0.3, cmap=plt.cm.jet, color='blue', label='...')
ax.set_title(outtitle, size=15)
seqc, L = make_labels_points(dataf)
labels12 = ['(SAL id, est: {0}'.format(seqc[i]) for i in range(L)]
tooltip = plugins.PointLabelTooltip(scatter, labels=labels12)
plugins.connect(fig, tooltip)
ax.set_xlabel('SAL')
ax.set_ylabel('murder rate', labelpad = 20)
html_str = mpld3.fig_to_html(fig)
Html_file= open(outname,"w")
Html_file.write(html_str)
Html_file.close()
make_scatter(df_bound_nonan.head(n=8000), 'bounds.html', "SAL estimation bounds")
df_bound_nonan[df_bound_nonan.SAL_CODE==3760001]
df_int_aea_ub[df_int_aea_ub.SAL_CODE==3760001]
df_int_aea_lb[df_int_aea_lb.SAL_CODE==3760001]
df_int_aea_lb[df_int_aea_lb.index_PP==551]
df_int_aea[df_int_aea.index_PP==551]
###Output
_____no_output_____
###Markdown
Add gender data:
###Code
full_pop = pd.read_csv('data/sal_pop.csv')
def get_ratio(i,full_pop):
try:
x = int(full_pop.iloc[i,].Female)/(int(full_pop.iloc[i,].Male)+int(full_pop.iloc[i,].Female))
except:
x =0
return x
wom_ratio = [get_ratio(i,full_pop) for i in range(len(full_pop))]
full_pop['wom_ratio'] = wom_ratio
full_pop.drop('Male', axis=1, inplace=True)
data_full = pd.merge(df_int_aea, full_pop, on='SAL_CODE')
data_full.head()
###Output
_____no_output_____
###Markdown
WARDS:
###Code
wardsShp =gpd.GeoDataFrame.from_file('../maps/data/Wards2011_aea.shp')
wardsShp.head(n=2)
za_province = gpd.GeoDataFrame.from_file('../south_africa_adm1.shp')#.set_index('id')
%matplotlib inline
#import matplotlib.pyplot as plt
from matplotlib.collections import PatchCollection
from descartes import PolygonPatch
import fiona
from shapely.geometry import Polygon, MultiPolygon, shape
# Extract the provincial, ward, and SAL boundaries as MultiPolygons
mp = MultiPolygon(
[shape(pol['geometry']) for pol in fiona.open('../south_africa_adm1.shp')])
mpW = MultiPolygon(
[shape(pol['geometry']) for pol in fiona.open('../wards_delimitation/Wards_demarc/Wards2011.shp')])
mpS = MultiPolygon(
[shape(pol['geometry']) for pol in fiona.open('shapefiles/oryginal/SAL_SA_2013.shp')])
# define map extent
lllon = 21
lllat = -18
urlon = 34
urlat = -8
# set up Basemap instance
m = Basemap(
projection = 'merc',
llcrnrlon = lllon, llcrnrlat = lllat, urcrnrlon = urlon, urcrnrlat = urlat,
resolution='h')
# We can now do GIS-ish operations on each polygon!
# we could randomize this by dumping the polygons into a list and shuffling it
# or we could define a random colour using fc=np.random.rand(3,)
# available colour maps are here: http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
cm = plt.get_cmap('RdBu')
num_colours = len(mpW)
fig = plt.figure(figsize=(16, 16))
ax = fig.add_subplot(111)
minx, miny, maxx, maxy = mp.bounds
w, h = maxx - minx, maxy - miny
ax.set_xlim(minx - 0.2 * w, maxx + 0.2 * w)
ax.set_ylim(miny - 0.2 * h, maxy + 0.2 * h)
ax.set_aspect(1)
patches = []
for idx, p in enumerate(mp):
#colour = cm(1. * idx / num_colours)
patches.append(PolygonPatch(p, alpha=1., zorder=1))
for idx, p in enumerate(mpW):
colour = cm(1. * idx / num_colours)
patches.append(PolygonPatch(p, ec='#4C4C4C', alpha=1., zorder=1))
for idx, p in enumerate(mpS):
colour = cm(1. * idx / num_colours)
patches.append(PolygonPatch(p, ec='#4C4C4C', alpha=1., zorder=1))
ax.add_collection(PatchCollection(patches, match_original=True))
ax.set_xticks([])
ax.set_yticks([])
plt.title("SAL on Wards")
#plt.savefig('data/london_from_shp.png', alpha=True, dpi=300)
plt.show()
# define map extent
lllon = 15
lllat = -35
urlon = 33
urlat = -22
# set up Basemap instance
m = Basemap(
projection = 'merc',
llcrnrlon = lllon, llcrnrlat = lllat, urcrnrlon = urlon, urcrnrlat = urlat,
resolution='h')
fig = plt.figure(figsize=(16, 16))
m.drawmapboundary(fill_color=None, linewidth=0)
m.drawcoastlines(color='#4C4C4C', linewidth=0.5)
m.drawcountries()
m.fillcontinents(color='#F2E6DB',lake_color='#DDF2FD')
#m.readshapefile('../wards_delimitation/Wards_demarc/Wards2011.sbh','Wards',drawbounds=False)
m.readshapefile('../maps/data/test','wards',drawbounds=False)
from itertools import chain
shp = fiona.open('../maps/data/test.shp')
bds = shp.bounds
shp.close()
extra = 0.01
ll = (bds[0], bds[1])
ur = (bds[2], bds[3])
coords = list(chain(ll, ur))
w, h = coords[2] - coords[0], coords[3] - coords[1]
m = Basemap(
projection='tmerc',
lon_0=24.000,
lat_0=-24.0000,
ellps = 'WGS84',
llcrnrlon=coords[0] - extra * w,
llcrnrlat=coords[1] - extra + 0.01 * h,
urcrnrlon=coords[2] + extra * w,
urcrnrlat=coords[3] + extra + 0.01 * h,
lat_ts=0,
resolution='i',
suppress_ticks=True)
m.readshapefile(
'../maps/data/test',
'wards',
color='none',
zorder=2)
###Output
_____no_output_____
###Markdown
clean the utf problems
###Code
from unidecode import unidecode
with fiona.open(
'../maps/data/wards_sel.shp', 'r') as source:
# Create an output shapefile with the same schema,
# coordinate systems. ISO-8859-1 encoding.
with fiona.open(
'../maps/data/wards_sel_cleaned.shp', 'w',
**source.meta) as sink:
# Identify all the str type properties.
str_prop_keys = [
k for k, v in sink.schema['properties'].items()
if v.startswith('str')]
for rec in source:
# Transliterate and update each of the str properties.
for key in str_prop_keys:
val = rec['properties'][key]
if val:
rec['properties'][key] = unidecode(val)
# Write out the transformed record.
sink.write(rec)
salSHP = 'shapefiles/updated/sal_population_4326.shp'
warSHP = '../wards_delimitation/Wards_demarc/Wards2011.shp'
geo_war = gpd.GeoDataFrame.from_file(warSHP)
geo_sal = gpd.GeoDataFrame.from_file(salSHP)
import pyepsg
pyepsg.get(geo_war.crs['init'].split(':')[1])
pyepsg.get(geo_sal.crs['init'].split(':')[1])
###Output
_____no_output_____
###Markdown
To plot the data on a folium map, we need to convert to a geographic coordinate system with the WGS84 datum (EPSG:4326). We also need to create a GeoJSON object out of the GeoDataFrame. AND! as it turns out (many hours of tripping over the problem), we need to SIMPLIFY the geometries- they are too big for web maps.
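A sketch of those three steps on a hypothetical GeoDataFrame `gdf` (the tolerance value is illustrative):
###Code
# assumed example, not part of the original analysis
gdf_wgs84 = gdf.to_crs(epsg=4326)                  # reproject to WGS84 (EPSG:4326)
gdf_wgs84['geometry'] = gdf_wgs84.simplify(0.001)  # simplify geometries for the web map
gjson = gdf_wgs84.to_json()                        # GeoJSON string for folium
###Output
_____no_output_____
###Markdown
The cells below work through these steps on the ward and SAL layers.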
###Code
warSHP = '../maps/data/Wards2011.shp'
geo_war = gpd.GeoDataFrame.from_file(warSHP)
#geo_sal = gpd.GeoDataFrame.from_file(salSHP_upd)
geo_war.head(n=2)
geo_war_sub = geo_war.iloc[:,[2,3,7,8,9]].reset_index().head(n=2)
#g = geo_war_sub.simplify(0.05, preserve_topology=False)
geo_war_sub.head(n=3)
geo_war_sub.to_file('../maps/data/wards_sel.shp')
geo_war_sub['geometry'].replace(g,inplace=True)
#data['index_rank'].replace(index_dict, inplace=True)
geo_war_sub_sim.head(n=2)
salSHP = 'shapefiles/updated/sal_population.shp'
geo_sal = gpd.GeoDataFrame.from_file(salSHP)
#geo_sal.head(n=2)
geo_sal_sub = geo_sal.iloc[:,[7,11,15,16,20,23]].reset_index()#.head()
geo_sal_sub.to_file('../maps/data/sal_sub.shp')
#gjsonSal = geo_sal.to_crs(epsg='4326').to_json()# no need to convert, as it already is in 4326
#gjsonSal = geo_sal.to_json()
#gjsonWar = geo_war.to_json()
gj = g.to_json()
import folium
#import pandas as pd
lllon = 15
lllat = -35
urlon = 33
urlat = -22
#state_geo = r'shapefiles/updated/sal_population.json'
#ward_path = r'../maps/data/test.geojson'
#state_geo = r'shapefiles/oryginal/SAL_SA_2013.json'
state_geo = r'../maps/data/sal.json'
#state_geo = r'temp_1E-7.topojson'
#Let Folium determine the scale
map = folium.Map(location=[(lllat+urlat)/2, (lllon+urlon)/2], tiles='Mapbox Bright',zoom_start=6)
#, tiles='cartodbpositron')
#map.geo_json(geo_path=state_geo)
#map.geo_json(geo_path=state_geoW)
#map.geo_json(geo_path=ward_path)
map.create_map(path='test.html')
state_geo
lllon = 15
lllat = -35
urlon = 33
urlat = -22
import folium
#map = folium.Map(location=[-33.9249, 18.4241], zoom_start=10)
mapa = folium.Map([(lllat+urlat)/2, (lllon+urlon)/2],
zoom_start=7,
tiles='cartodbpositron')
#pSal = folium.features.GeoJson(gjsonSal)
#pWae = folium.features.GeoJson(gjsonWar)
#mapa.add_children(pSal)
#mapa.add_children(pWar)
#mapa.geo_json(gj)
#test = folium.folium.Map.geo_json(gj)
#ice_map.geo_json(geo_path=topo_path, topojson='objects.antarctic_ice_shelf')
#mapa.add_children(test)
mapa.create_map(path='test.html')
testshp = '../maps/data/test.shp'
geo_test = gpd.GeoDataFrame.from_file(testshp)
import pyepsg
pyepsg.get(geo_test.crs['init'].split(':')[1])
gjson = geo_test.to_json()
import folium
geo_path = r'../maps/data/test.json'
map_osm = folium.Map(location=[-24.5236, 24.6750],zoom_start=6)
map_osm.geo_json(geo_path=geo_path)
map_osm.create_map(path='osm.html')
###Output
_____no_output_____
###Markdown
analytics based on intersections:
###Code
def find_intersections(o):
from collections import defaultdict
paired_ind = [o.pp_index, o.sal_index]
d_over_ind = defaultdict(list)
    # creating a dictionary that has precincts as keys and associated small areas as values
for i in range(len(paired_ind[0].values)):
if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection
d_over_ind[paired_ind[0].values[i]].append(paired_ind[1].values[i])
# get rid of the pol precincts with no small areas associated to them- not the most efficient way
d_temp = {}
for l in d_over_ind:
if len(d_over_ind[l]):
d_temp[l] = d_over_ind[l]
return d_temp
def calculate_join_indices(g1_reind, g2_reind):
out = sjoin(g1_reind, g2_reind, how ="inner", op = "intersects")
out.drop('index_right', axis=1, inplace=True)
dict_over_ind = find_intersections(out)
return dict_over_ind
#warSHP = '../maps/data/Wards2011_aea.shp'
#geo_war = gpd.GeoDataFrame.from_file(warSHP)
#salSHP = 'shapefiles/updated/sal_population_aea.shp'
#geo_sal = gpd.GeoDataFrame.from_file(salSHP)
#geo_sal = geo_sal.reset_index()
#geo_war_sub = geo_war.iloc[:,[2,3,7,8,9]].reset_index()#.head(n=2)
out = sjoin(geo_war_sub, geo_sal, how ="inner", op = "intersects")
out_sub = out.iloc[:,[2,3,5,6,15,23,24,28]].reset_index().rename(columns={'index':'index_ward','index_right':'index_sal'})
geo_war_sub = geo_war_sub.rename(columns={'index':'index_ward'})#head(n=2)
#head(n=2)
geo_sal_sub = geo_sal.iloc[:,[5,11,16,17,19,21,24]].reset_index().rename(columns={'index':'index_sal'})
from collections import defaultdict
paired_ind = [out_sub.index_ward, out_sub.index_sal]
dict_temp = defaultdict(list)
# creating a dictionary that has prescints as keys and associated small areas as values
for i in range(len(paired_ind[0].values)):
if not paired_ind[0].values[i]==paired_ind[1].values[i]: # it shows itself as intersection
dict_temp[paired_ind[0].values[i]].append(paired_ind[1].values[i])
dict_int_ward = {}
for l in dict_temp:
if len(dict_temp[l]):
dict_int_ward[l] = dict_temp[l]
#dict_int_ward
def calculate_join_ward_sal(dict_over_ind, g1_reind, g2_reind):
area_total = 0
data_aggreg = []
# note to self: make sure to import shapely Polygon
for index1, row in g1_reind.iterrows():
#print(index1, row.index_ward)
try:
index1 = row.index_ward
sals_found = dict_over_ind[index1]
for sal in range(len(sals_found)):
pom = g2_reind[g2_reind.index_sal == sals_found[sal]]['geometry']
area_int = pom.intersection(row['geometry']).area.values[0]
area_sal = pom.values[0].area
int_percent = area_int/area_sal
#popu_count = g2_reind[g2_reind.sal_index == sals_found[sal]]['PPL_CNT'].values[0]
extra_info_col = ['MP_NAME','PR_NAME','SAL_CODE','SP_NAME']
extra_names = g2_reind[g2_reind.index_sal == sals_found[sal]][extra_info_col]#.filter(regex=("NAME"))
#extra_names = g2_reind[g2_reind.sal_index == sals_found[sal]][extra_info_col_names]#.filter(regex=("NAME"))
data_aggreg.append({'geometry': pom.intersection(row['geometry']).values[0],\
'id1': index1,'ward_id': row.WARD_ID,'id2': sals_found[sal] ,'area_int': area_int,\
'area_sal': area_sal,'int_percent': int_percent,\
'MP_NAME': extra_names.MP_NAME.values[0],\
'PR_NAME': extra_names.PR_NAME.values[0],'SAL_CODE': extra_names.SAL_CODE.values[0],\
'SP_NAME': extra_names.SP_NAME.values[0]} )
except:
pass
cols=['geometry', 'id1','ward_id','id2','area_int','area_sal','int_percent','MP_NAME','PR_NAME','SAL_CODE','SP_NAME']
df_t = gpd.GeoDataFrame(data_aggreg,columns=cols)
#df_t.to_file('shapefiles/sal_ward.shp')
return df_t
from timeit import default_timer as timer
start = timer()
df = calculate_join_ward_sal(dict_int_ward,geo_war_sub, geo_sal_sub)
end = timer()
print("time: ", end - start)
df.head()
df.to_csv('df.csv')
df_nc = df[df.int_percent<1]
#df.groupby(by=['ward_id']).sum()
s = df_nc.groupby(by=['PR_NAME','ward_id'])
type(s)
#There are 4277 wards
len(geo_war)
# all wards have intersections
len(set(df_nc.ward_id))
#84907 SAL areas
len(geo_sal_sub)
# half of the intersect
len(set(df_nc.SAL_CODE))
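# Sketch: fraction of SAL areas whose borders are crossed by a ward boundary
# (uses df_nc and geo_sal_sub defined above)
print("share of SALs intersecting ward borders:", len(set(df_nc.SAL_CODE)) / len(geo_sal_sub))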
###Output
_____no_output_____
###Markdown
40515 out of 84907 SALs intersect ward borders. Let's see whether the intersections generated from PP and SAL fit better.
###Code
#trying the intersections
geo_int_p = pd.read_csv('data/pp_int_intersections.csv')
geo_war_sub.crs
#geo_int.head(n=2)
geo_int = gpd.GeoDataFrame(geo_int_p, crs=geo_war_sub.crs)
#geo_int.head(n=2)
cols = [c for c in geo_int.columns if c.lower()[:7] != 'unnamed']
geo_int = geo_int[cols]
geo_int.head(n=2)
geo_int_sub = geo_int.iloc[:,[1,2,0]].reset_index().rename(columns={'index':'index_int'})
geo_sal_sub.head(n=1)
geo_int_sub.geometry.head()
geo_war_sub.head(n=2)
out = sjoin(geo_war_sub.head(n=1), geo_int_sub, how ="inner", op = "intersects")
geo_war_sub.head(n=2)
type(geo_int)
geo_int.crs
test = gpd.GeoDataFrame(pd.read_csv('data/pp_test2.csv'))
geo_war_sub.to_csv('auch.csv')
test.plot()
f,ax = plt.subplots(1)
gpd.plotting.plot_multipolygon(ax, df_int.head(n=2).geometry.values[0], linewidth = 0.1, edgecolor='grey')
plt.show()
df_int.head(n=2).geometry.values[0]
###Output
_____no_output_____ |
LeNet-5.ipynb | ###Markdown
Get the MNIST dataset
###Code
# imports required by this notebook (standalone Keras API)
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 100
# input image dimensions
img_rows, img_cols = 28, 28
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(6, (5, 5), activation='relu', input_shape = input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(16, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(120, activation='relu'))
model.add(Dense(84, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
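# Optional sketch: print the LeNet-5 layer shapes and parameter counts
model.summary()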
###Output
x_train shape: (60000, 28, 28, 1)
60000 train samples
10000 test samples
###Markdown
Visualize the model
###Code
from IPython.display import SVG
from keras.utils.vis_utils import plot_model
plot_model(model, show_shapes=True, show_layer_names=True)
###Output
_____no_output_____
###Markdown
 Train the model
###Code
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
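# Optional sketch: persist the trained weights for later reuse (file name is arbitrary)
model.save('lenet5_mnist.h5')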
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/100
60000/60000 [==============================] - 2s - loss: 0.3232 - acc: 0.9029 - val_loss: 0.1030 - val_acc: 0.9701
Epoch 2/100
60000/60000 [==============================] - 1s - loss: 0.0855 - acc: 0.9744 - val_loss: 0.0740 - val_acc: 0.9774
Epoch 3/100
60000/60000 [==============================] - 2s - loss: 0.0620 - acc: 0.9802 - val_loss: 0.0505 - val_acc: 0.9835
Epoch 4/100
60000/60000 [==============================] - 2s - loss: 0.0477 - acc: 0.9847 - val_loss: 0.0426 - val_acc: 0.9853
Epoch 5/100
60000/60000 [==============================] - 1s - loss: 0.0397 - acc: 0.9878 - val_loss: 0.0396 - val_acc: 0.9864
Epoch 6/100
60000/60000 [==============================] - 2s - loss: 0.0362 - acc: 0.9884 - val_loss: 0.0385 - val_acc: 0.9876
Epoch 7/100
60000/60000 [==============================] - 2s - loss: 0.0284 - acc: 0.9909 - val_loss: 0.0376 - val_acc: 0.9879
Epoch 8/100
60000/60000 [==============================] - 2s - loss: 0.0269 - acc: 0.9912 - val_loss: 0.0330 - val_acc: 0.9894
Epoch 9/100
60000/60000 [==============================] - 2s - loss: 0.0240 - acc: 0.9921 - val_loss: 0.0315 - val_acc: 0.9900
Epoch 10/100
60000/60000 [==============================] - 2s - loss: 0.0197 - acc: 0.9935 - val_loss: 0.0352 - val_acc: 0.9883
Epoch 11/100
60000/60000 [==============================] - 2s - loss: 0.0174 - acc: 0.9941 - val_loss: 0.0337 - val_acc: 0.9895
Epoch 12/100
60000/60000 [==============================] - 2s - loss: 0.0159 - acc: 0.9947 - val_loss: 0.0352 - val_acc: 0.9894
Epoch 13/100
60000/60000 [==============================] - 2s - loss: 0.0139 - acc: 0.9953 - val_loss: 0.0368 - val_acc: 0.9896
Epoch 14/100
60000/60000 [==============================] - 1s - loss: 0.0140 - acc: 0.9954 - val_loss: 0.0314 - val_acc: 0.9909
Epoch 15/100
60000/60000 [==============================] - 2s - loss: 0.0117 - acc: 0.9961 - val_loss: 0.0393 - val_acc: 0.9881
Epoch 16/100
60000/60000 [==============================] - 2s - loss: 0.0108 - acc: 0.9963 - val_loss: 0.0395 - val_acc: 0.9894
Epoch 17/100
60000/60000 [==============================] - 2s - loss: 0.0098 - acc: 0.9965 - val_loss: 0.0418 - val_acc: 0.9897
Epoch 18/100
60000/60000 [==============================] - 2s - loss: 0.0105 - acc: 0.9965 - val_loss: 0.0430 - val_acc: 0.9881
Epoch 19/100
60000/60000 [==============================] - 1s - loss: 0.0076 - acc: 0.9974 - val_loss: 0.0401 - val_acc: 0.9897
Epoch 20/100
60000/60000 [==============================] - 1s - loss: 0.0071 - acc: 0.9975 - val_loss: 0.0427 - val_acc: 0.9890
Epoch 21/100
60000/60000 [==============================] - 1s - loss: 0.0088 - acc: 0.9972 - val_loss: 0.0362 - val_acc: 0.9904
Epoch 22/100
60000/60000 [==============================] - 1s - loss: 0.0073 - acc: 0.9977 - val_loss: 0.0449 - val_acc: 0.9886
Epoch 23/100
60000/60000 [==============================] - 1s - loss: 0.0082 - acc: 0.9972 - val_loss: 0.0437 - val_acc: 0.9891
Epoch 24/100
60000/60000 [==============================] - 1s - loss: 0.0049 - acc: 0.9983 - val_loss: 0.0361 - val_acc: 0.9908
Epoch 25/100
60000/60000 [==============================] - 1s - loss: 0.0050 - acc: 0.9982 - val_loss: 0.0376 - val_acc: 0.9905
Epoch 26/100
60000/60000 [==============================] - 2s - loss: 0.0090 - acc: 0.9969 - val_loss: 0.0546 - val_acc: 0.9871
Epoch 27/100
60000/60000 [==============================] - 2s - loss: 0.0047 - acc: 0.9983 - val_loss: 0.0450 - val_acc: 0.9904
Epoch 28/100
60000/60000 [==============================] - 1s - loss: 0.0055 - acc: 0.9980 - val_loss: 0.0429 - val_acc: 0.9886
Epoch 29/100
60000/60000 [==============================] - 1s - loss: 0.0039 - acc: 0.9989 - val_loss: 0.0528 - val_acc: 0.9877
Epoch 30/100
60000/60000 [==============================] - 2s - loss: 0.0056 - acc: 0.9980 - val_loss: 0.0477 - val_acc: 0.9891
Epoch 31/100
60000/60000 [==============================] - 1s - loss: 0.0044 - acc: 0.9984 - val_loss: 0.0498 - val_acc: 0.9888
Epoch 32/100
60000/60000 [==============================] - 1s - loss: 0.0044 - acc: 0.9985 - val_loss: 0.0501 - val_acc: 0.9897
Epoch 33/100
60000/60000 [==============================] - 1s - loss: 0.0043 - acc: 0.9984 - val_loss: 0.0493 - val_acc: 0.9895
Epoch 34/100
60000/60000 [==============================] - 1s - loss: 0.0029 - acc: 0.9991 - val_loss: 0.0530 - val_acc: 0.9896
Epoch 35/100
60000/60000 [==============================] - 1s - loss: 0.0053 - acc: 0.9984 - val_loss: 0.0445 - val_acc: 0.9908
Epoch 36/100
60000/60000 [==============================] - 1s - loss: 0.0054 - acc: 0.9983 - val_loss: 0.0502 - val_acc: 0.9902
Epoch 37/100
60000/60000 [==============================] - 1s - loss: 0.0049 - acc: 0.9984 - val_loss: 0.0449 - val_acc: 0.9907
Epoch 38/100
60000/60000 [==============================] - 1s - loss: 0.0048 - acc: 0.9986 - val_loss: 0.0483 - val_acc: 0.9900
Epoch 39/100
60000/60000 [==============================] - 1s - loss: 0.0021 - acc: 0.9994 - val_loss: 0.0576 - val_acc: 0.9892
Epoch 40/100
60000/60000 [==============================] - 2s - loss: 0.0025 - acc: 0.9992 - val_loss: 0.0535 - val_acc: 0.9900
Epoch 41/100
60000/60000 [==============================] - 1s - loss: 0.0060 - acc: 0.9982 - val_loss: 0.0673 - val_acc: 0.9869
Epoch 42/100
60000/60000 [==============================] - 2s - loss: 0.0040 - acc: 0.9987 - val_loss: 0.0417 - val_acc: 0.9912
Epoch 43/100
60000/60000 [==============================] - 1s - loss: 0.0026 - acc: 0.9991 - val_loss: 0.0498 - val_acc: 0.9902
Epoch 44/100
60000/60000 [==============================] - 2s - loss: 0.0022 - acc: 0.9993 - val_loss: 0.0545 - val_acc: 0.9899
Epoch 45/100
60000/60000 [==============================] - 2s - loss: 0.0057 - acc: 0.9982 - val_loss: 0.0477 - val_acc: 0.9906
Epoch 46/100
60000/60000 [==============================] - 2s - loss: 0.0023 - acc: 0.9991 - val_loss: 0.0565 - val_acc: 0.9900
Epoch 47/100
60000/60000 [==============================] - 2s - loss: 0.0039 - acc: 0.9987 - val_loss: 0.0538 - val_acc: 0.9907
Epoch 48/100
60000/60000 [==============================] - 1s - loss: 0.0012 - acc: 0.9996 - val_loss: 0.0528 - val_acc: 0.9901
Epoch 49/100
60000/60000 [==============================] - 1s - loss: 0.0066 - acc: 0.9981 - val_loss: 0.0478 - val_acc: 0.9909
Epoch 50/100
60000/60000 [==============================] - 1s - loss: 0.0011 - acc: 0.9996 - val_loss: 0.0493 - val_acc: 0.9913
Epoch 51/100
60000/60000 [==============================] - 2s - loss: 0.0011 - acc: 0.9997 - val_loss: 0.0486 - val_acc: 0.9907
Epoch 52/100
60000/60000 [==============================] - 2s - loss: 0.0061 - acc: 0.9981 - val_loss: 0.0626 - val_acc: 0.9892
Epoch 53/100
60000/60000 [==============================] - 1s - loss: 0.0043 - acc: 0.9988 - val_loss: 0.0609 - val_acc: 0.9886
Epoch 54/100
60000/60000 [==============================] - 2s - loss: 0.0024 - acc: 0.9992 - val_loss: 0.0521 - val_acc: 0.9908
Epoch 55/100
60000/60000 [==============================] - 2s - loss: 0.0020 - acc: 0.9994 - val_loss: 0.0532 - val_acc: 0.9915
Epoch 56/100
60000/60000 [==============================] - 2s - loss: 0.0025 - acc: 0.9993 - val_loss: 0.0577 - val_acc: 0.9893
Epoch 57/100
60000/60000 [==============================] - 2s - loss: 0.0047 - acc: 0.9985 - val_loss: 0.0550 - val_acc: 0.9896
Epoch 58/100
60000/60000 [==============================] - 1s - loss: 0.0026 - acc: 0.9993 - val_loss: 0.0436 - val_acc: 0.9912
Epoch 59/100
60000/60000 [==============================] - 2s - loss: 5.6958e-04 - acc: 0.9998 - val_loss: 0.0433 - val_acc: 0.9922
Epoch 60/100
60000/60000 [==============================] - 2s - loss: 4.2636e-04 - acc: 0.9999 - val_loss: 0.0440 - val_acc: 0.9922
Epoch 61/100
60000/60000 [==============================] - 1s - loss: 4.6596e-05 - acc: 1.0000 - val_loss: 0.0429 - val_acc: 0.9933
Epoch 62/100
60000/60000 [==============================] - 1s - loss: 1.4470e-05 - acc: 1.0000 - val_loss: 0.0430 - val_acc: 0.9934
Epoch 63/100
60000/60000 [==============================] - 1s - loss: 1.0095e-05 - acc: 1.0000 - val_loss: 0.0432 - val_acc: 0.9933
Epoch 64/100
|
nbs/30_traceable_edit_in_flask.ipynb | ###Markdown
01 A logged editable table> Traceable editable table in flask
###Code
from flask import Flask
app = Flask(__name__)
@app.route('/')
def hello_world():
return 'Hello, World!'
###Output
_____no_output_____
###Markdown
Run a simple application
###Code
# default_exp editable
# export
import pandas as pd
from datetime import datetime
import json
from sqlalchemy import create_engine as ce
from sqlalchemy import text
from jinja2 import Template
# export
from pathlib import Path
def get_static():
import forgebox
return Path(forgebox.__path__[0])/"static"
# export
def edit_js():
with open(get_static()/"edit.js","r") as f:
return f"<script>{f.read()}</script>"
class DefaultTemp(Template):
"""
    Jinja template with some default render config
"""
def render(self,dt):
dt.update(dict(type=type,now = datetime.now()))
return super().render(dt)
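# Quick sketch: DefaultTemp injects 'now' (and 'type') into every render call
print(DefaultTemp("rendered at {{ now }}").render({}))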
###Output
_____no_output_____
###Markdown
Create sample data
###Code
con = ce("sqlite:///sample.db")
sample_df = pd.DataFrame(dict(name=["Darrow","Virginia","Sevro",]*20,
house =["Andromedus","Augustus","Barca"]*20,
age=[20,18,17]*20))
sample_df.to_sql("sample_table",index_label="id",
index=True,
con = con, method='multi',
if_exists="replace")
# export
from flask import request
from flask import g
from datetime import datetime
class Editable:
def __init__(self,name,app,table_name,con,id_col,
log_con,log_table="editable_log",columns = None):
"""
name: route name for url path,
        also it will be the task title appearing on the frontend
app:flask app
table_name: table to edit
        con: sqlalchemy connection, created by: con = sqlalchemy.create_engine
id_col: a column with unique value
        log_con: sqlalchemy connection, for storing the change log
"""
self.name = name
self.app = app
self.table_name = table_name
self.con = con
self.log_con = log_con
self.columns = ",".join(columns) if columns!=None else "*"
self.id_col = id_col
self.t_workspace = self.load_temp(get_static()/"workspace.html")
self.t_table = self.load_temp(get_static()/"table.html")
self.assign()
def assign(self):
self.app.route(f"/{self.name}")(self.workspace)
self.app.route(f"/{self.name}/df_api")(self.read_df)
self.app.route(f"/{self.name}/save_api",
methods=["POST"])(self.save_data)
def workspace(self):
return self.t_workspace.render(dict(title=self.name,
pk=self.id_col,
edit_js = edit_js()))
def save_data(self):
data = json.loads(request.data)
# update change and save log
changes = data["changes"]
log_df = pd.DataFrame(list(self.single_row(change) for change in changes))
log_df["idx"] = log_df.idx.apply(str)
log_df["original"] = log_df.original.apply(str)
log_df["changed"] = log_df.changed.apply(str)
log_df.to_sql(f"editable_log",con = self.log_con,index=False, if_exists="append")
print(log_df)
# return updated table
query = data["query"]
page = query["page"]
where = query["where"]
return self.data_table(page,where)
def settype(self,k):
if k[:3] == "int": return int
elif "float" in k: return float
elif k=="str":return str
elif k=="list":return list
elif k=="dict":return dict
else: return eval(k)
def single_row(self,row):
row["ip"]= request.remote_addr
row["table_name"] = self.table_name
row["ts"] = datetime.now()
if row["original"]==row["changed"]:
row['sql'] = ""
return row
else:
col = row["col"]
val = row["changed"]
val = f"'{val}'" if 'str' in row["valtype"] else val
idx = row["idx"]
idx = f"'{idx}'" if type(idx)==str else idx
set_clause = f"SET {col}={val}"
sql = f"""UPDATE {self.table_name}
{set_clause} WHERE {self.id_col}={idx}
"""
row['sql'] = sql
self.con.execute(sql)
return row
def read_df(self):
page = request.args.get('page')
where = request.args.get('where')
return self.data_table(page,where)
def data_table(self,page,where):
where_clause = "" if where.strip() == "" else f"WHERE {where} "
sql = f"""SELECT {self.columns} FROM {self.table_name} {where_clause}
ORDER BY {self.id_col} ASC LIMIT {page},20
"""
print(sql)
df = pd.read_sql(sql,self.con)
df = df.set_index(self.id_col)
return self.t_table.render(dict(df = df))
def load_temp(self,path):
with open(path, "r") as f:
return DefaultTemp(f.read())
###Output
_____no_output_____
###Markdown
Testing editable frontend
###Code
app = Flask(__name__)
# Create Editable pages around sample_table
Editable("table1", # route/task name
app, # flask app to wrap around
table_name="sample_table", # target table name
id_col="id", # unique column
con = con,
log_con=con
)
app.run(host="0.0.0.0",port = 4242,debug=False)
###Output
_____no_output_____
###Markdown
Retrieve the log
###Code
from forgebox.df import PandasDisplay
with PandasDisplay(max_colwidth = 0,max_rows=100):
display(pd.read_sql('editable_log',con = con))
###Output
_____no_output_____ |
frozenYoghourt/Qbraid - Implementing Improved Multiple Controlled Toffoli.ipynb | ###Markdown
Table of Contents
Linear Algebra Tools
1. Operator Matrices - Pauli: I, X, Y, Z - Hadamard: H - Phase: P - Sqrt(X): SX - Sqrt(Z): S - Sqrt(H): SH - 4th root (Z): T - X root: Xrt(s) - H root: Hrt(s) - Rotation Matrices: Rx($\theta$), Ry($\theta$), Rz($\theta$) - U3 Matrix: U3($\theta, \phi, \lambda$) - Controlled-Not: CX
2. Common Statevectors - $|0\rangle$: zero - $|1\rangle$: one - $|+\rangle$: plus - $|-\rangle$: minus - $| \uparrow \rangle$: up - $| \downarrow \rangle$: down - Bell States: B00, B01, B10, B11
3. Lambda Methods - ndarray to list: to_list(array) - tensor: *initial_state - matmul: *initial_state
4. Full Methods - Calculate Hermitian Conjugate: dagger(mat) - Build CU matrix: cu_matrix(no_qubits, control, target, U, little_edian) - Find RX, RY for arbitrary U3: angles_from_state_vectors(output_statevector)
5. Visualizations - view(mat, rounding = 10)
Qiskit Tools
1. Linear Algebra - Short-hand QC: q(*regs, name=None, global_phase=0) - Multi-controlled Unitary: control_unitary(circ, unitary, *controls, target) - Control Phase: control_phase(circ, angle, control_bit, target_bit, recip=True, pi_on=True)
2. Visualizations - Draw Circuit: milk(circ) - Draw Transpiled Circuit: dtp(circ, print_details = True, visual = True, return_values = False) - Get Unitary / Statevector Function: get(circ, types = 'unitary', nice = True) - Displaying Histogram / Bloch / Counts: sim(circ, visual = 'hist')
3. Toffoli Optimization Specific - Unitary Checker: unitary_check(test_unitary) - Multi-Hadamard Composition: h_relief(n, no_h)
Import
###Code
import numpy as np
import sympy as sp
from sympy.solvers.solveset import linsolve
import matplotlib
import matplotlib.pyplot as plt
matplotlib.use('Agg')
from sympy import Matrix, init_printing
import qiskit
from qiskit import *
from qiskit.aqua.circuits import *
# Representing Data
from qiskit.providers.aer import QasmSimulator, StatevectorSimulator, UnitarySimulator
from qiskit.tools.visualization import plot_histogram, plot_state_city, plot_bloch_multivector
# Monitor Job on Real Machine
from qiskit.tools.monitor import job_monitor
from functools import reduce # perform sucessive tensor product
# Calculating cost
from sklearn.metrics import mean_squared_error
# Generating random unitary matrix
from scipy.stats import unitary_group
# Measure run time
import time
# Almost Equal
from numpy.testing import assert_almost_equal as aae
###Output
Duplicate key in file '/Users/minhpham/.matplotlib/matplotlibrc' line #2.
Duplicate key in file '/Users/minhpham/.matplotlib/matplotlibrc' line #3.
###Markdown
Linear Algebra Tools
###Code
# Matrices
I = np.array([[1, 0], [0, 1]])
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
H = 1/np.sqrt(2)*np.array([[1, 1], [1, -1]])
P = lambda theta: np.array([[1, 0], [0, np.exp(1j*theta)]])
# sqrt(X)
SX = 1/2 * np.array([[1+1j, 1-1j], [1-1j, 1+1j]])
# sqrt(Z)
S = np.array([[1, 0], [0, 1j]])
# sqrt(H)
SH = (1j/4-1/4)*np.array([[np.sqrt(2) + 2j, np.sqrt(2)], [np.sqrt(2), -np.sqrt(2)+2j]])
# 4th root of Z
T = np.array([[1, 0], [0, 1/np.sqrt(2) + 1/np.sqrt(2)*1j]])
# X power
Xp = lambda t: 1/2 * np.array([[1, 1], [1, 1]]) + np.exp(1j*np.pi*t)/(2) * np.array([[1, -1], [-1, 1]])
# H power
Hp = lambda t: np.exp(-1j*np.pi*t/2) * np.array([[np.cos(np.pi*t/2) + 1j/np.sqrt(2)* np.sin(np.pi*t/2), 1j/np.sqrt(2) * np.sin(np.pi*t/2)],
[1j/np.sqrt(2) * np.sin(np.pi*t/2), np.cos(np.pi*t/2)-1j/np.sqrt(2)* np.sin(np.pi*t/2)]])
CX = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
# Rn Matrix Function
Rx = lambda theta: np.array([[np.cos(theta/2), -1j*np.sin(theta/2)], [-1j*np.sin(theta/2), np.cos(theta/2)]])
Ry = lambda theta: np.array([[np.cos(theta/2), -np.sin(theta/2)], [np.sin(theta/2), np.cos(theta/2)]])
Rz = lambda theta: np.array([[np.exp(-1j*theta/2), 0], [0, np.exp(1j*theta/2)]])
# U3 Matrix
U3 = lambda theta, phi, lam: np.array([[np.cos(theta/2), -np.exp(1j*lam)*np.sin(theta/2)],
[np.exp(1j*phi)*np.sin(theta/2), np.exp(1j*lam + 1j*phi)*np.cos(theta/2)]])
# Eigenvectors of Pauli Matrices
zero = np.array([[1], [0]]) # Z plus basis state
one = np.array([[0], [1]]) # Z plus basis state
plus = np.array([[1], [1]])/np.sqrt(2) # X plus basis state
minus = np.array([[1], [-1]])/np.sqrt(2) # X minus basis state
up = np.array([[1], [1j]])/np.sqrt(2) # Y plus basis state
down = np.array([[1], [-1j]])/np.sqrt(2) # Y plus basis state
# Bell States
B00 = np.array([[1], [0], [0], [1]])/np.sqrt(2) # Bell of 00
B01 = np.array([[1], [0], [0], [-1]])/np.sqrt(2) # Bell of 01
B10 = np.array([[0], [1], [1], [0]])/np.sqrt(2) # Bell of 10
B11 = np.array([[0], [-1], [1], [0]])/np.sqrt(2) # Bell of 11
# ndarray to list
to_list = lambda array: list(np.squeeze(array))
# Tensor Product of 2+ matrices/ vectors
tensor = lambda *initial_state: reduce(lambda x, y: np.kron(x, y), initial_state)
# Matrix Multiplicaton of 2+ matrices / vectors
mat_mul = lambda *initial_state: reduce(lambda x, y: np.dot(x, y), initial_state)
###Output
_____no_output_____
###Markdown
Calculate Hermitian Conjugate
###Code
def dagger(mat):
# Calculate Hermitian conjugate
mat_dagger = np.conj(mat.T)
# Assert Hermitian identity
aae(np.dot(mat_dagger, mat), np.identity(mat.shape[0]))
return mat_dagger
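# Example sketch: the Hadamard matrix is Hermitian, so its dagger equals itself
aae(dagger(H), H)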
###Output
_____no_output_____
###Markdown
CU Matrix
###Code
def cu_matrix(no_qubits, control, target, U, little_edian = True):
"""
Manually build the unitary matrix for non-adjacent CX gates
Parameters:
-----------
no_qubits: int
Number of qubits in the circuit
control: int
Index of the control qubit (1st qubit is index 0)
target: int
Index of the target qubit (1st qubit is index 0)
U: ndarray
Target unitary matrix
edian: bool (True: qiskit convention)
Qubits order convention
Returns:
--------
cx_out:
Unitary matrix for CU gate
"""
left = [I]*no_qubits
right = [I]*no_qubits
left[control] = np.dot(zero, zero.T)
right[control] = np.dot(one, one.T)
right[target] = U
if little_edian:
cx_out = tensor(*reversed(left)) + tensor(*reversed(right))
else:
cx_out = tensor(*left) + tensor(*right)
    # This returns a unitary in qiskit 'little endian'; to switch back, simply swap the target for the control
return cx_out
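# Quick check sketch: control on qubit 0, X on qubit 1 reproduces the CX matrix defined above
aae(cu_matrix(2, 0, 1, X), CX)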
###Output
_____no_output_____
###Markdown
Angles from Statevector
###Code
def angles_from_statevectors(output_statevector):
"""
Calculate correct x, y rotation angles from an arbitrary output statevector
Paramters:
----------
output_statevector: ndarray
Desired output state
Returns:
--------
phi: float
Angle to rotate about the y-axis [0, 2pi)
theta: float
Angle to rotate about the x-axis [0, 2pi)
"""
# Extract the components
x, z = output_statevector.real
y, w = output_statevector.imag
# Calculate the correct angles
phi = 2*np.arctan2(z,x)[0]
theta = 2*np.arctan2(y,z)[0]
print(f'phi: {phi}')
print(f'theta: {theta}')
return phi, theta
###Output
_____no_output_____
###Markdown
View Matrix
###Code
def view(mat, rounding = 10):
display(Matrix(np.round(mat, rounding)))
###Output
_____no_output_____
###Markdown
Qiskit Tools Short-hand Qiskit Circuit
###Code
q = lambda *regs, name=None, global_phase=0: QuantumCircuit(*regs, name=None, global_phase=0)
###Output
_____no_output_____
###Markdown
Controlled Unitary
###Code
def control_unitary(circ, unitary, controls, target):
"""
Composed a multi-controlled single unitary target gate
Parameters:
-----------
circ: QuantumCircuit
Qiskit circuit of appropriate size, no less qubit than the size of the controlled gate
unitary: ndarray of (2, 2)
Unitary operator for the target qubit
controls: list
Indices of controlled qubit on the original circuit
target: int
Index of target bit
Returns:
--------
new_circ: QuantumCircuit
Composed circuit with unitary target
"""
# Get info about circuit parameters
no_controls = len(controls)
unitary_size = np.log2(len(unitary))
# Build unitary circuit
qc = QuantumCircuit(unitary_size)
qc.unitary(unitary, range(int(unitary_size)))
qc = qc.control(no_controls)
# Composed the control part in the circuit
new_circ = circ.compose(qc, (*controls, target))
return new_circ
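# Usage sketch: a Toffoli built by multi-controlling a single-qubit X unitary
toffoli_circ = control_unitary(q(3), X, [0, 1], 2)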
###Output
_____no_output_____
###Markdown
Controlled Phase
###Code
def control_phase(circ, angle, control_bit, target_bit, recip = True, pi_on = True):
"""
Add a controlled-phase gate
Parameters:
-----------
circ: QuantumCircuit
Inputted circuit
angle: float
Phase Angle
control_bit: int
Index of control bit
target_bit: int
Index of target bit
recip: bool (True)
Take the reciprocal of the angle
pi_on: bool (True)
Multiply pi to the angle
Returns:
--------
circ: QuantumCircuit
Circuit with built-in CP
"""
if recip:
angle = 1/angle
if pi_on:
angle *=np.pi
circ.cp(angle, control_bit, target_bit)
return circ
###Output
_____no_output_____
###Markdown
Draw Circuit
###Code
def milk(circ):
return circ.draw('mpl')
###Output
_____no_output_____
###Markdown
Draw Transpiled Circuit
###Code
def dtp(circ, print_details = True, nice = True, return_values = False):
"""
Draw and/or return information about the transpiled circuit
Parameters:
-----------
circ: QuantumCircuit
QuantumCircuit to br transpiled
print_details: bool (True)
Print the number of u3 and cx gates used
nice: bool (True)
Show the transpiled circuit
    return_values: bool (False)
Return the number of u3 and cx gates used
Returns:
--------
no_cx: int
Number of cx gates used
no_u3: int
Number of u3 gates used
"""
# Transpile Circuit
circ = transpile(circ, basis_gates= ['u3', 'cx'], optimization_level=3)
# Count operations
gates = circ.count_ops()
# Compute cost
try:
no_u3 = gates['u3']
except:
no_u3 = 0
try:
no_cx = gates['cx']
except:
no_cx = 0
cost = no_u3 + 10*no_cx
if print_details:
# Print Circuit Details
print(f'cx: {no_cx}')
print(f'u3: {no_u3}')
print(f'Total cost: {cost}')
if nice:
return circ.draw('mpl')
if return_values:
return no_cx, no_u3
###Output
_____no_output_____
###Markdown
Get Unitary/StateVector Function
###Code
def get(circ, types = 'unitary', nice = True):
"""
This function return the statevector or the unitary of the inputted circuit
Parameters:
-----------
circ: QuantumCircuit
Inputted circuit without measurement gate
types: str ('unitary')
Get 'unitary' or 'statevector' option
nice: bool
Display the result nicely option or just return unitary/statevector as ndarray
Returns:
--------
out: ndarray
Outputted unitary of statevector
"""
if types == 'statevector':
backend = BasicAer.get_backend('statevector_simulator')
out = execute(circ, backend).result().get_statevector()
else:
backend = BasicAer.get_backend('unitary_simulator')
out = execute(circ, backend).result().get_unitary()
if nice:
display(Matrix(np.round(out, 10)))
else:
return out
###Output
_____no_output_____
###Markdown
Displaying Histogram / Bloch / Counts
###Code
def sim(circ, visual = 'hist'):
"""
Displaying output of quantum circuit
Parameters:
-----------
circ: QuantumCircuit
QuantumCircuit with or without measurement gates
visual: str ('hist')
'hist' (counts on histogram) or 'bloch' (statevectors on Bloch sphere) or None (get counts only)
Returns:
--------
counts: dict
Counts of each CBS state
"""
# Simulate circuit and display counts on a histogram
if visual == 'hist':
simulator = Aer.get_backend('qasm_simulator')
results = execute(circ, simulator).result()
counts = results.get_counts(circ)
plot_histogram(counts)
return counts
# Get the statevector and display on a Bloch sphere
elif visual == 'bloch':
backend = BasicAer.get_backend('statevector_simulator')
statevector = execute(circ, backend).result().get_statevector()
get(circ)
plot_bloch_multivector(statevector)
# Just get counts
else:
simulator = Aer.get_backend('qasm_simulator')
results = execute(circ, simulator).result()
counts = results.get_counts(circ)
return counts
###Output
_____no_output_____
###Markdown
Unitary Checker
###Code
def unitary_check(test_unitary, perfect = False):
"""
Check if the CnX unitary is correct
Parameters:
-----------
test_unitary: ndarray
Unitary generated by the circuit
    perfect: bool
        If True, compare the unitary exactly instead of taking absolute values first (accounts for phase differences)
"""
# Get length of unitary
if not perfect:
test_unitary = np.abs(test_unitary)
size = test_unitary.shape[0]
cx_theory = np.identity(size)
# Change all the difference
cx_theory[int(size/2) - 1, size - 1] = 1
cx_theory[size - 1, int(size/2) - 1] = 1
cx_theory[int(size/2) -1, int(size/2) -1] = 0
cx_theory[size - 1, size - 1] = 0
# Assert Similarity
aae(cx_theory, test_unitary)
print('Unitary is correct')
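# Self-test sketch: the plain 4x4 CX matrix defined earlier should pass the check
unitary_check(CX)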
###Output
_____no_output_____
###Markdown
Task: Implementing Improved Multiple Controlled Toffoli
Abstract
Multiple controlled Toffoli gates are crucial in the implementation of modular exponentiation [4], like that used in Shor's algorithm. In today's practical realm of devices with small numbers of qubits, there is a real need for efficient realization of multiple controlled Toffoli gates for 6 to 10 controls. Shende and Markov proved that the implementation of the $n$-qubit analogue of the $TOFFOLI$ requires at least $2n \ CNOT$ gates [1]. Currently, the best known upper bound, outlined by Maslov, stands at $6n-12$ with the use of $\lceil \frac{n-3}{2} \rceil$ ancilla bits [2]. For implementation without ancillae, we look at the technique outlined in Corollary 7.6, which has $\Theta(n^2)$ complexity [3]. The above-mentioned technique, however, still has a high implementation cost for relatively low numbers of controls. This is due to the high coefficient of the $n^2$ term. Note that in this notebook, $n$-control Toffoli gates will simply be referred to as $CnX$ gates, where $n$ is the number of control bits.
For this project, we outline a technique for building $CnX$ gates with modulo phase shift whose unitary satisfies $UU = I$. For a few examples from $n = 2$ to $n = 15$, we provide some values to compare and contrast our circuit cost versus that of qiskit. We then postulate with high confidence that the complexity of the technique is $O(2^{\frac{n}{2}})$. Comparing this to the quadratic technique in Corollary 7.6 of [3], we find that our circuits are superior for $n = 7, 8, ..., 11$. At the end, we offer some possible implementation cases for our technique.
Motivating the General Circuit
The general $CnX$ gate takes in $n+1$ qubits as inputs ($n$ controls, $1$ target). Its action on a set of qubits $\{q_i\}_{i = 0}^{n}$ is defined as follows.
$$CnX(\{q_i\}_{i = 0}^{n}) = \big{(} \bigwedge_{i = 0}^{n-1} q_i \big{)} \oplus q_n$$
Simply stated, the gate flips the target bit if all the controls are $1$s. For example, for $n = 2$, we have the well-known Toffoli gate
###Code
circ = q(3)
circ.ccx(0, 1, 2)
milk(circ)
###Output
_____no_output_____
###Markdown
And for higher $n$, $6$ for example, the circuit would take this form.
###Code
circ = q(7)
circ.mct(list(range(6)), 6)
milk(circ)
###Output
_____no_output_____
###Markdown
The costs for the Qiskit implementation of $CnX$ gates from $n = 2$ to $n = 11$ are listed below in terms of the basic operations ($CX$ and $U3$). Note that the general cost is defined as $10CX + U3$.
n | CX | U3 | General Cost
--- | --- | --- | ---
2 | 6 | 8 | 68
3 | 20 | 22 | 222
4 | 44 | 46 | 486
5 | 92 | 94 | 1014
6 | 188 | 190 | 2070
7 | 380 | 382 | 4182
8 | 764 | 766 | 8406
9 | 1532 | 1534 | 16854
10 | 3068 | 3070 | 33750
11 | 6140 | 6142 | 67542
As outlined in Corollary 7.1 [3], the number of $CX$ grows by $3\cdot 2^{n-1} - 4$, and $U3$ grows by $3\cdot 2^{n-1} - 2$. Overall, we see an $O(2^n)$ complexity of the general cost.
Our technique takes advantage of the superposition identity
$$H Z H = X$$
For an arbitrary $CnX$, we split the controls into two groups (one controlled by $H$, and one controlled by $Z$). If we define the number of control bits on the $H$ gates as $a$, we have the circuit $C(a)H - C(n-a)Z - C(a)H$. An example of $n = 7, a = 3$ is shown below.
###Code
circ = q(8)
circ = control_unitary(circ, H, [0, 1, 2], 7)
circ = control_unitary(circ, Z, [3, 4, 5, 6], 7)
circ = control_unitary(circ, H, [0, 1, 2], 7)
milk(circ)
###Output
_____no_output_____
###Markdown
The two outermost gates are $C3H$, and the middle gate is $C4Z$. Together they create $C7X$ with a negative phase in 7 columns of the unitary. In general, the number of negative phases in the unitary has the form $2^a - 1$. Although $a$ can be varied, for each $n$ there exists a unique value of $a$ that is optimal for the respective circuit. We ran and tested all the different combinations of $n$ and $a$, and generated the set of optimal combinations shown below.
n | H-a | CX | U3 | General Cost
--- | --- | --- | --- | ---
2 | 1 | 3 | 4 | 34
3 | 1 | 6 | 7 | 67
4 | 1 | 20 | 25 | 225
5 | 2 | 34 | 53 | 393
6 | 2 | 50 | 72 | 572
7 | 3 | 70 | 101 | 801
8 | 4 | 102 | 143 | 1163
9 | 4 | 146 | 196 | 1656
10 | 4 | 222 | 286 | 2506
11 | 5 | 310 | 395 | 3495
Implementing the General Circuit
The circuit will be implemented recursively using three base cases. When $n = 1$, we have the $CX$ gate. When $n = 2$, we have the structure below.
###Code
milk(CnX(2))
###Output
_____no_output_____
###Markdown
$n = 3$
###Code
dtp(CnX(3))
###Output
cx: 6
u3: 7
Total cost: 67
###Markdown
We sketch the following for the general circuit of $CnX$. We also provide the qiskit code implementation for the general $CnX$ below. At the end is the list of the best implementations for each $CnX$ gate. To use one, simply assign ```best[n]``` to an object and use it like a normal QuantumCircuit. Note that $n$ represents the number of controls in the desired $CnX$. CnX/CnP (Multiple-controlled Not modulo phase shift circuit)
###Code
def CnX(n, control_list = None, target = None, circ = None, theta = 1):
"""
Create a CnX modulo phase shift gate
Parameters:
-----------
n: int
Number of control bits
control_list: list
Index of control bits on inputted circuit (if any)
target: int
Index of control bits on inputted circuit (if any)
circ: QuantumCircuit
Inputted circuit to compose CnX on
theta: int
1/theta power X n-bit controlled circuit
Returns:
--------
circ: QuantumCircuit
CnX modulo phase shift gate
"""
# Build New Circuit
if circ == None:
circ = q(n+1)
control_list = list(range(n))
target = n
# Base Case
if n == 1:
circ.cx(*control_list, target)
return circ
if n==2:
circ.ch(control_list[0], target)
circ.cz(control_list[1], target)
circ.ch(control_list[0], target)
return circ
if n == 3:
circ.rcccx(*control_list, target)
return circ
# New Case
# CH
circ.ch(control_list[0], target)
# CP2
circ = control_phase(circ, theta*2, control_list[-1], target)
# C(n-2)X
circ = CnX(n-2, control_list[1:-1], control_list[-1], circ)
# -CP2
circ = control_phase(circ, -theta*2, control_list[-1], target)
# C(n-2)X
circ = CnX(n-2, control_list[1:-1], control_list[-1], circ)
# CnP
circ = CnP(n-2, control_list[1:-1], target, circ, theta*2)
# CH
circ.ch(control_list[0], target)
return circ
def CnP(n, control_list = None, target = None, circ = None, theta = 1):
"""
Create a CnP modulo phase shift gate
Parameters:
-----------
n: int
Number of control bits
control_list: list
Index of control bits on inputted circuit (if any)
target: int
Index of control bits on inputted circuit (if any)
circ: QuantumCircuit
Inputted circuit to compose CnP on
theta: int
1/theta power Z n-bit controlled circuit
Returns:
--------
circ: QuantumCircuit
CnP modulo phase shift gate
"""
# Build New Circuit
if circ == None:
circ = q(n+1)
control_list = list(range(n))
target = n
# Base Case
if n ==1:
circ = control_phase(circ, theta, control_list, target)
return circ
# New Case
# CP
circ = control_phase(circ, theta*2, control_list[-1], target)
# C(n-1)X
circ = CnX(n-1, control_list[:-1], control_list[-1], circ)
# -CP
circ = control_phase(circ, -theta*2, control_list[-1], target)
# C(n-1)X
circ = CnX(n-1, control_list[:-1], control_list[-1], circ)
# C(n-1)P
circ = CnP(n-1, control_list[:-1], target, circ, theta*2)
return circ
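# Cost sketch: transpile a 5-control relative-phase CnX and report its CX/U3 counts
dtp(CnX(5), nice=False)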
###Output
_____no_output_____
###Markdown
CnH / Multi-Hadamard Composition
###Code
def CnH(n, control_list = None, target = None, circ = None, theta = 1):
"""
Create a CnH modulo phase shift gate
Parameters:
-----------
n: int
Number of control bits
control_list: list
Index of control bits on inputted circuit (if any)
target: int
Index of control bits on inputted circuit (if any)
circ: QuantumCircuit
Inputted circuit to compose CnH on
theta: int
1/theta power H n-bit controlled circuit
Returns:
--------
circ: QuantumCircuit
CnH modulo phase shift gate
"""
# Build New Circuit
if circ == None:
circ = q(n+1)
control_list = list(range(n))
target = n
# Base Case
if n ==1 and theta ==1:
circ.ch(control_list, target)
return circ
if n ==1:
circ.unitary(cu_matrix(2, 0, 1, Hp(1/theta)), [control_list, target])
return circ
# New Case
# CH
circ.unitary(cu_matrix(2, 0, 1, Hp(1/(theta*2))), [control_list[-1], target])
# C(n-1)X
circ = CnX(n-1, control_list[:-1], control_list[-1], circ)
# CH
circ.unitary(cu_matrix(2, 0, 1, Hp(-1/(theta*2))), [control_list[-1], target])
# C(n-1)X
circ = CnX(n-1, control_list[:-1], control_list[-1], circ)
# C(n-1)P
circ = CnH(n-1, control_list[:-1], target, circ, theta*2)
return circ
def h_relief(n, no_h, return_circ = False):
"""
Implementing the general CaH-C(n-a)Z-CaH architecture
Paramters:
----------
n: int
Total number of control bits
no_h: int
Total number of control bits for the CnH gate
return_circ: bool
Return circuit as a QuantumCircuit object
Returns:
--------
circ: QuantumCircuit
Circuit with CnX and Hadamard Relief
"""
# n is the number of control qubit
# no_h is the number of control qubit on the side hadamard
circ = q(n+1)
circ= CnH(no_h, list(range(no_h)), n, circ)
circ = CnP(n-no_h, list(range(no_h, n)), n, circ)
circ= CnH(no_h, list(range(no_h)), n, circ)
'''# Test for accuracy
test = get(circ, nice = False)
unitary_check(test)'''
if return_circ:
return circ
dtp(circ, nice = False)
### List of opimal combinations
best = [None, None, CnX(2), CnX(3), CnX(4), h_relief(5, 2, return_circ = True), h_relief(6, 2, return_circ = True),
h_relief(7, 3, return_circ = True), h_relief(8, 4, return_circ = True), h_relief(9, 4, return_circ = True),
h_relief(10, 4, return_circ = True), h_relief(11, 5, return_circ = True), h_relief(12, 6, return_circ = True)]
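# Usage sketch: pull the pre-built optimal C8X from the list and report its transpiled cost
dtp(best[8], nice=False)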
###Output
_____no_output_____
###Markdown
Postulate for Complexity of the General Cost We have two lists below showing the number of $U3$ and $CX$ used for the qiskit technique and our technique
###Code
## Qiskit
cx_q = np.array([6, 20, 44, 92, 188, 380, 764, 1532, 3068, 6140])
u3_q = np.array([8, 22, 46, 94, 190, 382, 766, 1534, 3070, 6142])
## Our
cx_o = np.array([3, 6, 20, 34, 50, 70, 102, 146, 222, 310])
u3_o = np.array([4, 7, 25, 53, 72, 101, 143, 196, 286, 395])
###Output
_____no_output_____
###Markdown
We find the common ratios by taking $a_{n+1}/a_n$, and taking the average of these ratios when $n > 3$ to mitigate the impact of the additive factor.
###Code
## Qiskit
rat_1 = cx_q[1:] / cx_q[:-1]
rat_1 = np.mean(rat_1[3:])
rat_2 = u3_q[1:] / u3_q[:-1]
rat_2 = np.mean(rat_2[3:])
## Our
rat_3 = cx_o[1:] / cx_o[:-1]
rat_3 = np.mean(rat_3[3:])
rat_4 = u3_o[1:] / u3_o[:-1]
rat_4 = np.mean(rat_4[3:])
rat_1, rat_2, rat_3, rat_4
###Output
_____no_output_____
###Markdown
We see that the geometric ratio of our technique is superior to that of qiskit. In base $2$, we can roughly see the following complexity.
$$CX \approx O(1.446^n) \approx O(2^{\frac{n}{2}})$$
$$U3 \approx O(1.380^n) \approx O(2^{\frac{n}{2}})$$
Compare and Contrast with the $O(n^2)$ technique in Corollary 7.6 of [3]
Lemma 7.5 shows an example of $C8X$ built using 2 $C7X$ and 1 $C7V$. For our purposes, we can assume that the cost of $C7V$ is equal to that of $C7X$. In actuality, the cost of any $CnU$ gate is much greater than that of $CnX$ gates, so this assumption gives us a lower bound on the cost of the circuit. Previous lemmas and corollaries show that these gates can be broken down further into smaller $C2X$ and $C3X$ gates.
$$\begin{align}C5X &= 12 \ C2X = 12\cdot34 = 408 \\ C7X &= 2 \ C5X + 2 \ C3X = 2\cdot408 + 2\cdot67 = 950 \\ C8X &= 3 \ C7X \end{align}$$
If we use our implementation of $C2X$ and $C3X$, then we would have a general cost of $C8X = 2850$. However, as our circuits allow for the use of phase differences, this circuit can also be used to build bigger examples, like the one shown below.
###Code
circ = q(10)
circ = control_unitary(circ, H, [0, 1], 9)
circ.h(9)
circ.mct([2, 3, 4, 5, 6, 7, 8], 9)
circ.h(9)
circ = control_unitary(circ, H, [0, 1], 9)
milk(circ)
###Output
_____no_output_____
###Markdown
The $3$ middle gates have the effect of $C8Z$, and the two outer gates are $C2Z$. This leads to $C10X$ with a phase difference. Now we make one last modification to the implementation of Lemma 7.5. If we look back at the table from before, we can see that our implementation of $C7X$ has a cost lower than $950$. Because the phase difference does not affect the control operation, we can replace the paper's $C7X$ with ours.
###Code
print(1)
dtp(CnH(1), nice = False)
print('\n')
print(2)
dtp(CnH(2), nice = False)
print('\n')
print(3)
dtp(CnH(3), nice = False)
###Output
1
cx: 1
u3: 2
Total cost: 12
2
cx: 8
u3: 16
Total cost: 96
3
cx: 18
u3: 31
Total cost: 211
|
nbs/.ipynb_checkpoints/01_rmath-checkpoint.ipynb | ###Markdown
Math Some extra math functions.
###Code
%nbdev_export
def smooth( newVal, oldVal, weight) :
"An exponential smoothing function. The weight is the smoothing factor applied to the old value."
return newVal * (1 - weight) + oldVal * weight;
smooth(2, 10, 0.9)
assert smooth(2, 10, 0.9)==9.2
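# Worked check of the arithmetic: smooth(2, 10, 0.9) = 2*(1 - 0.9) + 10*0.9 = 0.2 + 9.0 = 9.2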
#hide
from nbdev import *
notebook2script()
###Output
Converted 00_core.ipynb.
Converted 01_rmath.ipynb.
Converted 02_functions.ipynb.
Converted 03_nodes.ipynb.
Converted 04_hierarchy.ipynb.
Converted index.ipynb.
|
dataAnalysis/ETCClassifier.ipynb | ###Markdown
Project led by Nikolas Papastavrou Code developed by Varun Bopardikar Data Analysis conducted by Selina Ho, Hana Ahmed
###Code
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn import metrics
from datetime import datetime
from sklearn.naive_bayes import GaussianNB
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
###Output
_____no_output_____
###Markdown
Load Data
###Code
def gsev(val):
"""
Records whether or not a number is greater than 7.
"""
if val <= 7:
return 0
else:
return 1
df = pd.read_csv('../../fservice.csv')
df['Just Date'] = df['Just Date'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d'))
df['Seven'] = df['ElapsedDays'].apply(gsev, 0)
###Output
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3057: DtypeWarning: Columns (10,33) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Parameters
###Code
c = ['Anonymous','AssignTo', 'RequestType', 'RequestSource','CD','Direction', 'ActionTaken', 'APC' ,'AddressVerified']
d = ['Latitude', 'Longitude']
###Output
_____no_output_____
###Markdown
Feature Cleaning
###Code
#Put desired columns into dataframe, drop nulls.
dfn = df.filter(items = c + d + ['ElapsedDays'] + ['Seven'])
dfn = dfn.dropna()
#Separate data into explanatory and response variables
XCAT = dfn.filter(items = c).values
XNUM = dfn.filter(items = d).values
y = dfn['ElapsedDays'] <= 7
#Encode cateogrical data and merge with numerical data
labelencoder_X = LabelEncoder()
for num in range(len(c)):
XCAT[:, num] = labelencoder_X.fit_transform(XCAT[:, num])
onehotencoder = OneHotEncoder()
XCAT = onehotencoder.fit_transform(XCAT).toarray()
X = np.concatenate((XCAT, XNUM), axis=1)
###Output
/Users/varunbopardikar/anaconda3/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py:415: FutureWarning: The handling of integer data will change in version 0.22. Currently, the categories are determined based on the range [0, max(values)], while in the future they will be determined based on the unique values.
If you want the future behaviour and silence this warning, you can specify "categories='auto'".
In case you used a LabelEncoder before this OneHotEncoder to convert the categories to integers, then you can now use the OneHotEncoder directly.
warnings.warn(msg, FutureWarning)
###Markdown
Algorithms and Hyperparameters
###Code
##Used Random Forest in Final Model
gnb = GaussianNB()
dc = tree.DecisionTreeClassifier(criterion = 'entropy', max_depth = 20)
rf = RandomForestClassifier(n_estimators = 50, max_depth = 20)
lr = LogisticRegression()
###Output
_____no_output_____
###Markdown
Validation Set
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size = 0.2, random_state = 0)
#Train Model
classifier = rf
classifier.fit(X_train, y_train)
#Test model
y_vpred = classifier.predict(X_val)
#Print Accuracy Function results
print("Accuracy:",metrics.accuracy_score(y_val, y_vpred))
print("Precision, Recall, F1Score:",metrics.precision_recall_fscore_support(y_val, y_vpred, average = 'binary'))
###Output
Accuracy: 0.9385983549336814
Precision, Recall, F1Score: (0.946896616482519, 0.9893259382317161, 0.9676463908853341, None)
###Markdown
Test Set
###Code
#Train Model
#Test model
y_tpred = classifier.predict(X_test)
#Print Accuracy Function results
print("Accuracy:",metrics.accuracy_score(y_test, y_tpred))
print("Precision, Recall, F1Score:",metrics.precision_recall_fscore_support(y_test, y_tpred, average = 'binary'))
###Output
Accuracy: 0.9387186223709323
Precision, Recall, F1Score: (0.9468199376863904, 0.9895874917412928, 0.9677314319565967, None)
|
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb | ###Markdown
Classes and subclasses
In this notebook, I will show you the basics of classes and subclasses in Python. As you've seen in the lectures from this week, `Trax` uses layer classes as building blocks for deep learning models, so it is important to understand how classes and subclasses behave in order to be able to build custom layers when needed.
By completing this notebook, you will:
- Be able to define classes and subclasses in Python
- Understand how inheritance works in subclasses
- Be able to work with instances
Part 1: Parameters, methods and instances
First, let's define a class `My_Class`.
###Code
class My_Class: #Definition of My_class
x = None
###Output
_____no_output_____
###Markdown
`My_Class` has one parameter `x` without any value. You can think of parameters as the variables that every object assigned to a class will have. So, at this point, any object of class `My_Class` would have a variable `x` equal to `None`. To check this, I'll create two instances of that class and get the value of `x` for both of them.
###Code
instance_a= My_Class() #To create an instance from class "My_Class" you have to call "My_Class"
instance_b= My_Class()
print('Parameter x of instance_a: ' + str(instance_a.x)) #To get a parameter 'x' from an instance 'a', write 'a.x'
print('Parameter x of instance_b: ' + str(instance_b.x))
###Output
Parameter x of instance_a: None
Parameter x of instance_b: None
###Markdown
For an existing instance you can assign new values for any of its parameters. In the next cell, assign a value of `5` to the parameter `x` of `instance_a`.
###Code
### START CODE HERE (1 line) ###
instance_a.x = 5
### END CODE HERE ###
print('Parameter x of instance_a: ' + str(instance_a.x))
###Output
Parameter x of instance_a: 5
###Markdown
1.1 The `__init__` method When you want to assign values to the parameters of your class when an instance is created, it is necessary to define a special method: `__init__`. The `__init__` method is called when you create an instance of a class. It can have multiple arguments to initialize the parameters of your instance. In the next cell I will define `My_Class` with an `__init__` method that takes the instance (`self`) and an argument `y` as inputs.
###Code
class My_Class:
def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
self.x = y # Sets parameter x to be equal to y
###Output
_____no_output_____
###Markdown
In this case, the parameter `x` of an instance from `My_Class` would take the value of an argument `y`. The argument `self` is used to pass information from the instance being created to the method `__init__`. In the next cell, create an instance `instance_c`, with `x` equal to `10`.
###Code
### START CODE HERE (1 line) ###
instance_c = My_Class(10)
### END CODE HERE ###
print('Parameter x of instance_c: ' + str(instance_c.x))
###Output
Parameter x of instance_c: 10
###Markdown
Note that in this case, you had to pass the argument `y` from the `__init__` method to create an instance of `My_Class`. 1.2 The `__call__` method Another important method is the `__call__` method. It is performed whenever you call an initialized instance of a class. It can have multiple arguments and you can define it to do whatever you want like- Change a parameter, - Print a message,- Create new variables, etc.In the next cell, I'll define `My_Class` with the same `__init__` method as before and with a `__call__` method that adds `z` to parameter `x` and prints the result.
###Code
class My_Class:
def __init__(self, y): # The __init__ method takes as input the instance to be initialized and a variable y
self.x = y # Sets parameter x to be equal to y
def __call__(self, z): # __call__ method with self and z as arguments
self.x += z # Adds z to parameter x when called
print(self.x)
###Output
_____no_output_____
###Markdown
Let’s create `instance_d` with `x` equal to 5.
###Code
instance_d = My_Class(5)
###Output
_____no_output_____
###Markdown
And now, see what happens when `instance_d` is called with argument `10`.
###Code
instance_d(10)
###Output
15
###Markdown
Now, you are ready to complete the following cell so any instance from `My_Class`:- Is initialized taking two arguments `y` and `z` and assigns them to `x_1` and `x_2`, respectively. And, - When called, takes the values of the parameters `x_1` and `x_2`, sums them, prints and returns the result.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
### START CODE HERE (2 lines) ###
self.x_1 = y
self.x_2 = z
### END CODE HERE ###
def __call__(self): #When called, adds the values of parameters x_1 and x_2, prints and returns the result
### START CODE HERE (1 line) ###
result = self.x_1 + self.x_2
### END CODE HERE ###
print("Addition of {} and {} is {}".format(self.x_1,self.x_2,result))
return result
###Output
_____no_output_____
###Markdown
Run the next cell to check your implementation. If everything is correct, you shouldn't get any errors.
###Code
instance_e = My_Class(10,15)
def test_class_definition():
assert instance_e.x_1 == 10, "Check the value assigned to x_1"
assert instance_e.x_2 == 15, "Check the value assigned to x_2"
assert instance_e() == 25, "Check the __call__ method"
print("\033[92mAll tests passed!")
test_class_definition()
###Output
Addition of 10 and 15 is 25
[92mAll tests passed!
###Markdown
1.3 Custom methods In addition to the `__init__` and `__call__` methods, your classes can have custom-built methods to do whatever you want when called. To define a custom method, you have to indicate its input arguments, the instructions that you want it to perform and the values to return (if any). In the next cell, `My_Class` is defined with `my_method` that multiplies the values of `x_1` and `x_2`, sums that product with an input `w`, and returns the result.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = y
self.x_2 = z
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = self.x_1 - 2*self.x_2
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
result = self.x_1*self.x_2 + w
return result
###Output
_____no_output_____
###Markdown
Create an instance `instance_f` of `My_Class` with any integer values that you want for `x_1` and `x_2`. For that instance, see the result of calling `My_method`, with an argument `w` equal to `16`.
###Code
### START CODE HERE (1 line) ###
instance_f = My_Class(1,10)
### END CODE HERE ###
print("Output of my_method:",instance_f.my_method(16))
###Output
Output of my_method: 26
###Markdown
As you can corroborate in the previous cell, to call a custom method `m`, with arguments `args`, for an instance `i` you must write `i.m(args)`. With that in mind, methods can call others within a class. In the following cell, try to define `new_method` which calls `my_method` with `v` as input argument. Try to do this on your own in the cell given below.
###Code
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = None
self.x_2 = None
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = None
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
b = None
return b
def new_method(self, v): #Calls My_method with argument v
### START CODE HERE (1 line) ###
result = None
### END CODE HERE ###
return result
###Output
_____no_output_____
###Markdown
SPOILER ALERT Solution:
###Code
# hidden-cell
class My_Class:
def __init__(self, y, z): #Initialization of x_1 and x_2 with arguments y and z
self.x_1 = y
self.x_2 = z
def __call__(self): #Performs an operation with x_1 and x_2, and returns the result
a = self.x_1 - 2*self.x_2
return a
def my_method(self, w): #Multiplies x_1 and x_2, adds argument w and returns the result
b = self.x_1*self.x_2 + w
return b
def new_method(self, v): #Calls My_method with argument v
result = self.my_method(v)
return result
instance_g = My_Class(1,10)
print("Output of my_method:",instance_g.my_method(16))
print("Output of new_method:",instance_g.new_method(16))
###Output
Output of my_method: 26
Output of new_method: 26
###Markdown
Part 2: Subclasses and Inheritance `Trax` uses classes and subclasses to define layers. The base class in `Trax` is `layer`, which means that every layer from a deep learning model is defined as a subclass of the `layer` class. In this part of the notebook, you are going to see how subclasses work. To define a subclass `sub` from class `super`, you have to write `class sub(super):` and define any method and parameter that you want for your subclass. In the next cell, I define `sub_c` as a subclass of `My_Class` with only one method (`additional_method`).
###Code
class sub_c(My_Class): #Subclass sub_c from My_class
def additional_method(self): #Prints the value of parameter x_1
print(self.x_1)
###Output
_____no_output_____
###Markdown
2.1 Inheritance When you define a subclass `sub`, every method and parameter is inherited from `super` class, including the `__init__` and `__call__` methods. This means that any instance from `sub` can use the methods defined in `super`. Run the following cell and see for yourself.
###Code
instance_sub_a = sub_c(1,10)
print('Parameter x_1 of instance_sub_a: ' + str(instance_sub_a.x_1))
print('Parameter x_2 of instance_sub_a: ' + str(instance_sub_a.x_2))
print("Output of my_method of instance_sub_a:",instance_sub_a.my_method(16))
###Output
Parameter x_1 of instance_sub_a: 1
Parameter x_2 of instance_sub_a: 10
Output of my_method of instance_sub_a: 26
###Markdown
As you can see, `sub_c` does not have an initialization method `__init__`, it is inherited from `My_class`. However, you can overwrite any method you want by defining it again in the subclass. For instance, in the next cell define a class `sub_c` with a redefined `my_Method` that multiplies `x_1` and `x_2` but does not add any additional argument.
###Code
class sub_c(My_Class): #Subclass sub_c from My_class
def my_method(self): #Multiplies x_1 and x_2 and returns the result
### START CODE HERE (1 line) ###
b = self.x_1*self.x_2
### END CODE HERE ###
return b
###Output
_____no_output_____
###Markdown
To check your implementation run the following cell.
###Code
test = sub_c(3,10)
assert test.my_method() == 30, "The method my_method should return the product between x_1 and x_2"
print("Output of overridden my_method of test:",test.my_method()) #notice we didn't pass any parameter to call my_method
#print("Output of overridden my_method of test:",test.my_method(16)) #try to see what happens if you call it with 1 argument
###Output
Output of overridden my_method of test: 30
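###Markdown
 A related pattern, not covered by the original exercise: instead of fully replacing a method you can extend it, reusing the parent implementation through `super()`. The class below is a hypothetical illustration.
###Code
# Sketch only: extend (rather than replace) the parent's my_method via super()
class sub_c_extended(My_Class):
    def my_method(self, w):
        b = super().my_method(w)  # parent computes x_1*x_2 + w
        return b + 100            # subclass adds its own step
instance_ext = sub_c_extended(3, 10)
print("Output of extended my_method:", instance_ext.my_method(16))  # (3*10 + 16) + 100 = 146
###Output
_____no_output_____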
###Markdown
In the next cell, two instances are created, one of `My_Class` and another one of `sub_c`. The instances are initialized with equal `x_1` and `x_2` parameters.
###Code
y,z= 1,10
instance_sub_a = sub_c(y,z)
instance_a = My_Class(y,z)
print('My_method for an instance of sub_c returns: ' + str(instance_sub_a.my_method()))
print('My_method for an instance of My_Class returns: ' + str(instance_a.my_method(10)))
###Output
My_method for an instance of sub_c returns: 10
My_method for an instance of My_Class returns: 20
|
Pandas Lesson.ipynb | ###Markdown
Exploring tabular data with pandas. In this notebook, we will explore a time series of water levels at the Point Atkinson lighthouse using pandas. This is a basic introduction to pandas and we touch on the following topics: * Reading a csv file * Simple plots * Indexing and subsetting * DatetimeIndex * Grouping * Time series methods. **Getting started**: You will need to have the python libraries pandas, numpy and matplotlib installed. These are all available through the Anaconda distribution of python: https://store.continuum.io/cshop/anaconda/ **Resources**: There is a wealth of information in the pandas documentation: http://pandas.pydata.org/pandas-docs/stable/ Water level data (7795-01-JAN-2000_slev.csv) is from Fisheries and Oceans Canada and is available at this website: http://www.isdm-gdsi.gc.ca/isdm-gdsi/twl-mne/index-eng.htm
###Code
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read the data It is helpful to understand the structure of your dataset before attempting to read it with pandas.
###Code
!head 7795-01-JAN-2000_slev.csv
###Output
Station_Name,Point Atkinson, B.C.
Station_Number,7795
Latitude_Decimal_Degrees,49.337
Longitude_Decimal_Degrees,123.253
Datum,CD
Time_zone,UTC
SLEV=Observed Water Level
Obs_date,SLEV(metres)
2000/01/01 08:00,2.95,
2000/01/01 09:00,3.34,
###Markdown
This dataset contains comma separated values. It has a few rows of metadata (station name, longitude, latitude, etc.). The actual data begins with timestamps and water level records at row 9. We can read this data with the pandas function read_csv(). read_csv() has many arguments to help customize the reading of many different csv files. For this file, we will: * skip the first 8 rows * use index_col=False so that the first column is treated as data and not an index * tell pandas to read the first column as dates (parse_dates=[0]) * name the columns 'date' and 'wlev'.
###Code
data = pd.read_csv('7795-01-JAN-2000_slev.csv', skiprows = 8,
index_col=False, parse_dates=[0], names=['date','wlev'])
###Output
_____no_output_____
###Markdown
data is a DataFrame object
###Code
type(data)
###Output
_____no_output_____
###Markdown
Let's take a quick peek at the dataset.
###Code
data.head()
data.tail()
data.describe()
###Output
_____no_output_____
###Markdown
Notice that pandas did not apply the summary statistics to the date column. **Simple Plots**: pandas has support for some simple plotting features, like line plots, scatter plots, box plots, etc. For a full overview of plots visit http://pandas.pydata.org/pandas-docs/stable/visualization.html Plotting is really easy. pandas even takes care of labels and legends.
###Code
data.plot('date','wlev')
data.plot(kind='hist')
data.plot(kind='box')
###Output
_____no_output_____
###Markdown
Indexing and Subsetting. We can index and subset the data in different ways. **By row number**: For example, grab the first two rows.
###Code
data[0:2]
###Output
_____no_output_____
###Markdown
Note that accessing a single row by the row number doesn't work!
###Code
data[0]
###Output
_____no_output_____
###Markdown
In that case, I would recommend using .iloc or slice for one row.
###Code
data.iloc[0]
data[0:1]
###Output
_____no_output_____
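###Markdown
 A small clarification that may help here (not in the original lesson): `.iloc` selects by integer position, while `.loc` selects by index label. With the default integer index the two happen to coincide.
###Code
# .loc selects by index label; with the default RangeIndex this matches .iloc[0]
data.loc[0]
###Output
_____no_output_____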
###Markdown
**By column**: For example, print the first few lines of the wlev column.
###Code
data['wlev'].head()
###Output
_____no_output_____
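###Markdown
 Not in the original lesson, but useful to know: passing a list of column names selects several columns at once, and the double brackets return a DataFrame rather than a Series.
###Code
# Select several columns at once with a list of names
data[['date', 'wlev']].head()
###Output
_____no_output_____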
###Markdown
**By a condition**: For example, subset the data with date greater than Jan 1, 2008. We pass our condition into the square brackets of data.
###Code
data_20082009 = data[data['date']>datetime.datetime(2008,1,1)]
data_20082009.plot('date','wlev')
###Output
_____no_output_____
###Markdown
**Multiple conditions**: For example, look for extreme water level events, that is, instances where the water level is above 5 m or below 0 m. Don't forget to put brackets () around each part of the condition.
###Code
data_extreme = data[(data['wlev']>5) | (data['wlev']<0)]
data_extreme.head()
###Output
_____no_output_____
###Markdown
**Exercise**: What was the maximum water level in 2006? Bonus: when? **Solution**: Isolate the year 2006. Use describe to look up the max water level.
###Code
data_2006 = data[(data['date']>=datetime.datetime(2006,1,1)) & (data['date'] < datetime.datetime(2007,1,1))]
data_2006.describe()
###Output
_____no_output_____
###Markdown
The max water level is 5.49m. Use a condition to determine the date.
###Code
date_max = data_2006[data_2006['wlev']==5.49]['date']
print(date_max)
###Output
53399 2006-02-04 17:00:00
Name: date, dtype: datetime64[ns]
###Markdown
Manipulating dates In the above example, it would have been convenient if we could access only the year part of the time stamp. But this doesn't work:
###Code
data['date'].year
###Output
_____no_output_____
###Markdown
We can use the pandas DatetimeIndex class to make this work. The DatetimeIndex allows us to easily access properties, like year, month, and day of each timestamp. We will use this to add new Year, Month, Day, Hour and DayOfYear columns to the dataframe.
###Code
date_index = pd.DatetimeIndex(data['date'])
print(date_index)
data['Day'] = date_index.day
data['Month'] = date_index.month
data['Year'] = date_index.year
data['Hour'] = date_index.hour
data['DayOfYear'] = date_index.dayofyear
data.head()
data.describe()
###Output
_____no_output_____
###Markdown
Notice that now pandas applies the describe function to these new columns because it sees them as numerical data. Now, we can access a single year with a simpler conditional.
###Code
data_2006 = data[data['Year']==2006]
data_2006.head()
###Output
_____no_output_____
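###Markdown
 As an aside before moving on (not part of the original lesson): newer pandas versions also expose these datetime properties directly on a datetime column through the `.dt` accessor, which avoids building a separate DatetimeIndex.
###Code
# Equivalent approach with the .dt accessor (assumes a reasonably recent pandas)
data['Year_alt'] = data['date'].dt.year
data[['date', 'Year', 'Year_alt']].head()
###Output
_____no_output_____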
###Markdown
Grouping. Sometimes, it is convenient to group data with similar characteristics. We can do this with the groupby() method. For example, we might want to group by year.
###Code
data_annual = data.groupby(['Year'])
data_annual['wlev'].describe().head(20)
###Output
_____no_output_____
###Markdown
Now the data is organized into groups based on the year of the observation. **Aggregating**: Once the data is grouped, we may want to summarize it in some way. We can do this with the apply() function. The argument of apply() is a function that we want to apply to each group. For example, we may want to calculate the mean sea level of each year.
###Code
annual_means = data_annual['wlev'].apply(np.mean)
print(annual_means)
###Output
Year
2000 3.067434
2001 3.057653
2002 3.078112
2003 3.112990
2004 3.104097
2005 3.127036
2006 3.142052
2007 3.095614
2008 3.070757
2009 3.080533
Name: wlev, dtype: float64
###Markdown
It is also really easy to plot the aggregated data.
###Code
annual_means.plot()
###Output
_____no_output_____
###Markdown
**Multiple aggregations**: We may also want to apply multiple aggregations, like the mean, max, and min. We can do this with the agg() method and pass a list of aggregation functions as the argument.
###Code
annual_summary = data_annual['wlev'].agg([np.mean,np.max,np.min])
print(annual_summary)
annual_summary.plot()
###Output
_____no_output_____
###Markdown
**Iterating over groups**: In some instances, we may want to iterate over each group. Each group is identified by a key. If we know the group's key, then we can access that group with the get_group() method. For example, for each year print the mean sea level.
###Code
for year in data_annual.groups.keys():
data_year = data_annual.get_group(year)
    print(year, data_year['wlev'].mean())
###Output
2000 3.06743417303
2001 3.05765296804
2002 3.07811187215
2003 3.11298972603
2004 3.1040974832
2005 3.12703618873
2006 3.14205230699
2007 3.0956142955
2008 3.07075714448
2009 3.08053287593
###Markdown
We had calculated the annual mean sea level earlier, but this is another way to achieve a similar result. **Exercise**: For each year, plot the monthly mean water level. **Solution**:
###Code
for year in data_annual.groups.keys():
data_year = data_annual.get_group(year)
month_mean = data_year.groupby('Month')['wlev'].apply(np.mean)
month_mean.plot(label=year)
plt.legend()
###Output
_____no_output_____
###Markdown
**Multiple groups**: We can also group by multiple columns. For example, we might want to group by year and month. That is, a year/month combo defines the group.
###Code
data_yearmonth = data.groupby(['Year','Month'])
means = data_yearmonth['wlev'].apply(np.mean)
means.plot()
###Output
_____no_output_____
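###Markdown
 An optional extra view (not in the original lesson): because the grouped result has a (Year, Month) MultiIndex, unstack() reshapes it into a Year-by-Month table, which can be easier to scan than the long line plot.
###Code
# Pivot-style view of the same year/month means
means.unstack()
###Output
_____no_output_____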
###Markdown
Time Series. The x-labels on the plot above are a little bit awkward. A different approach would be to resample the data at a monthly frequency. This can be accomplished by setting the date column as an index. Then we can resample the data at a desired frequency. The resampling method is flexible, but a common choice is the average. First, we will need to set the index as a DatetimeIndex. Recall the date_index variable we had assigned earlier. We will add this to the dataframe and make it into the dataframe index.
###Code
data['date_index'] = date_index
data.set_index('date_index', inplace=True)
###Output
_____no_output_____
###Markdown
Now we can resample at a monthly frequency and plot.
###Code
data_monthly = data['wlev'].resample('M').mean()  # modern pandas syntax; older versions used resample('M', how='mean')
data_monthly.plot()
###Output
_____no_output_____ |
notebooks/analyses_reports/2019-03-15_to_03-19_ab3_node2vec_i_loved.ipynb | ###Markdown
A/B test 3 - loved journeys, control vs node2vec. This related links B/C test (ab3) was conducted from 15th to 20th March 2019. The data used in this report cover 15-19 March 2019, because the test was ended on 20 March. The test compared the existing related links (where available) to links generated using the node2vec algorithm. Import
###Code
%load_ext autoreload
%autoreload 2
import os
import pandas as pd
import numpy as np
import ast
import re
# z test
from statsmodels.stats.proportion import proportions_ztest
# bayesian bootstrap and vis
import matplotlib.pyplot as plt
import seaborn as sns
import bayesian_bootstrap.bootstrap as bb
from astropy.utils import NumpyRNGContext
# progress bar
from tqdm import tqdm, tqdm_notebook
from scipy import stats
from collections import Counter
import sys
sys.path.insert(0, '../../src' )
import analysis as analysis
# set up the style for our plots
sns.set(style='white', palette='colorblind', font_scale=1.3,
rc={'figure.figsize':(12,9),
"axes.facecolor": (0, 0, 0, 0)})
# instantiate progress bar goodness
tqdm.pandas(tqdm_notebook)
pd.set_option('max_colwidth',500)
# the number of bootstrap means used to generate a distribution
boot_reps = 10000
# alpha - false positive rate
alpha = 0.05
# number of tests
m = 4
# Correct alpha for multiple comparisons
alpha = alpha / m
# The Bonferroni correction can be used to adjust confidence intervals also.
# If one establishes m confidence intervals, and wishes to have an overall confidence level of 1-alpha,
# each individual confidence interval can be adjusted to the level of 1-(alpha/m).
# reproducible
seed = 1337
###Output
_____no_output_____
###Markdown
File/dir locations Processed journey data
###Code
DATA_DIR = os.getenv("DATA_DIR")
filename = "full_sample_loved_947858.csv.gz"
filepath = os.path.join(
DATA_DIR, "sampled_journey", "20190315_20190319",
filename)
filepath
VARIANT_DICT = {
'CONTROL_GROUP':'B',
'INTERVENTION_GROUP':'C'
}
# read in processed sampled journey with just the cols we need for related links
df = pd.read_csv(filepath, sep ="\t", compression="gzip")
# convert from str to list
df['Event_cat_act_agg']= df['Event_cat_act_agg'].progress_apply(ast.literal_eval)
df['Page_Event_List'] = df['Page_Event_List'].progress_apply(ast.literal_eval)
df['Page_List'] = df['Page_List'].progress_apply(ast.literal_eval)
# drop dodgy rows, where the page variant is not one of the two variants in this test (B = control, C = intervention)
CONTROL_GROUP = VARIANT_DICT['CONTROL_GROUP']
INTERVENTION_GROUP = VARIANT_DICT['INTERVENTION_GROUP']
df = df.query('ABVariant in [@CONTROL_GROUP, @INTERVENTION_GROUP]')
df[['Occurrences', 'ABVariant']].groupby('ABVariant').sum()
df['Page_List_Length'] = df['Page_List'].progress_apply(len)
###Output
100%|██████████| 740885/740885 [00:00<00:00, 766377.92it/s]
###Markdown
Nav type of page lookup - is it a finding page? if not it's a thing page
###Code
filename = "document_types.csv.gz"
# created a metadata dir in the DATA_DIR to hold this data
filepath = os.path.join(
DATA_DIR, "metadata",
filename)
print(filepath)
df_finding_thing = pd.read_csv(filepath, sep="\t", compression="gzip")
df_finding_thing.head()
thing_page_paths = df_finding_thing[
df_finding_thing['is_finding']==0]['pagePath'].tolist()
finding_page_paths = df_finding_thing[
df_finding_thing['is_finding']==1]['pagePath'].tolist()
###Output
_____no_output_____
###Markdown
**Outliers**: Some rows should be removed before analysis, for example rows with journey lengths of 500 or very high related link click rates. This process might have to happen once features have been created. **Derive variables**: **journey_click_rate**. The null hypothesis is that there is no difference in the proportion of journeys using at least one related link (journey_click_rate) between page variant A and page variant B. \begin{equation*}\frac{\text{total number of journeys including at least one click on a related link}}{\text{total number of journeys}}\end{equation*}
###Code
# get the number of related links clicks per Sequence
df['Related Links Clicks per seq'] = df['Event_cat_act_agg'].map(analysis.sum_related_click_events)
# map across the Sequence variable, which includes pages and Events
# we want to pass all the list elements to a function one-by-one and then collect the output.
df["Has_Related"] = df["Related Links Clicks per seq"].map(analysis.is_related)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']
df.head(3)
###Output
_____no_output_____
###Markdown
**Count of clicks on navigation elements**. The null hypothesis is that there is no statistically significant difference in the count of clicks on navigation elements per journey between page variant A and page variant B. \begin{equation*}{\text{total number of navigation element click events from content pages}}\end{equation*} Related link counts
###Code
# get the total number of related links clicks for that row (clicks per sequence multiplied by occurrences)
df['Related Links Clicks row total'] = df['Related Links Clicks per seq'] * df['Occurrences']
###Output
_____no_output_____
###Markdown
Navigation events
###Code
def count_nav_events(page_event_list):
"""Counts the number of nav events from a content page in a Page Event List."""
content_page_nav_events = 0
for pair in page_event_list:
if analysis.is_nav_event(pair[1]):
if pair[0] in thing_page_paths:
content_page_nav_events += 1
return content_page_nav_events
# needs finding_thing_df read in from document_types.csv.gz
df['Content_Page_Nav_Event_Count'] = df['Page_Event_List'].progress_map(count_nav_events)
def count_search_from_content(page_list):
search_from_content = 0
for i, page in enumerate(page_list):
if i > 0:
if '/search?q=' in page:
if page_list[i-1] in thing_page_paths:
search_from_content += 1
return search_from_content
df['Content_Search_Event_Count'] = df['Page_List'].progress_map(count_search_from_content)
# count of nav or search clicks
df['Content_Nav_or_Search_Count'] = df['Content_Page_Nav_Event_Count'] + df['Content_Search_Event_Count']
# occurrences is accounted for by the group by bit in our bayesian boot analysis function
df['Content_Nav_Search_Event_Sum_row_total'] = df['Content_Nav_or_Search_Count'] * df['Occurrences']
# required for journeys with no nav later
df['Has_No_Nav_Or_Search'] = df['Content_Nav_Search_Event_Sum_row_total'] == 0
###Output
_____no_output_____
###Markdown
Temporary df file in case of crash Save
###Code
df.to_csv(os.path.join(
DATA_DIR,
"ab3_loved_temp.csv.gz"), sep="\t", compression="gzip", index=False)
df = pd.read_csv(os.path.join(
DATA_DIR,
"ab3_loved_temp.csv.gz"), sep="\t", compression="gzip")
###Output
_____no_output_____
###Markdown
Frequentist statistics Statistical significance
###Code
# help(proportions_ztest)
has_rel = analysis.z_prop(df, 'Has_Related', VARIANT_DICT)
has_rel
has_rel['p-value'] < alpha
###Output
_____no_output_____
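###Markdown
 For readers unfamiliar with the `analysis.z_prop` helper, the underlying test is a two-proportion z-test, which statsmodels exposes directly. The sketch below uses made-up counts purely for illustration; it is not derived from the journey data above.
###Code
# Illustrative only: a direct two-proportion z-test with statsmodels (made-up counts)
example_successes = [5200, 7500]    # journeys with at least one related-link click, A then B
example_totals = [340000, 345000]   # total journeys in each variant
z_stat, p_value = proportions_ztest(count=example_successes, nobs=example_totals)
print(z_stat, p_value)
###Output
_____no_output_____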
###Markdown
Practical significance - uplift
###Code
# Due to multiple testing we used the Bonferroni correction for alpha
ci_low,ci_upp = analysis.zconf_interval_two_samples(has_rel['x_a'], has_rel['n_a'],
has_rel['x_b'], has_rel['n_b'], alpha = alpha)
print(' difference in proportions = {0:.2f}%'.format(100*(has_rel['p_b']-has_rel['p_a'])))
print(' % relative change in proportions = {0:.2f}%'.format(100*((has_rel['p_b']-has_rel['p_a'])/has_rel['p_a'])))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
.format(100*ci_low, 100*ci_upp))
###Output
difference in proportions = 1.53%
% relative change in proportions = 44.16%
95% Confidence Interval = ( 1.46% , 1.61% )
###Markdown
Bayesian statistics. Based on [this](https://medium.com/@thibalbo/coding-bayesian-ab-tests-in-python-e89356b3f4bd) blog post. Still to be developed: a Bayesian approach can provide a simpler interpretation. Bayesian bootstrap
###Code
analysis.compare_total_searches(df, VARIANT_DICT)
fig, ax = plt.subplots()
plot_df_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby(
'Content_Nav_or_Search_Count').sum().iloc[:, 0]
plot_df_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby(
'Content_Nav_or_Search_Count').sum().iloc[:, 0]
ax.set_yscale('log')
width =0.4
ax = plot_df_B.plot.bar(label='B', position=1, width=width)
ax = plot_df_A.plot.bar(label='A', color='salmon', position=0, width=width)
plt.title("loved journeys")
plt.ylabel("Log(number of journeys)")
plt.xlabel("Number of uses of search/nav elements in journey")
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.savefig('nav_counts_loved_bar.png', dpi = 900, bbox_inches = 'tight')
a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Content_Nav_or_Search_Count', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
np.array(a_bootstrap).mean()
np.array(a_bootstrap).mean() - (0.05 * np.array(a_bootstrap).mean())
np.array(b_bootstrap).mean()
print("A relative change of {0:.2f}% from control to intervention".format((np.array(b_bootstrap).mean()-np.array(a_bootstrap).mean())/np.array(a_bootstrap).mean()*100))
# ratio is vestigial but we keep it here for convenience
# it's actually a count but considers occurrences
ratio_stats = analysis.bb_hdi(a_bootstrap, b_bootstrap, alpha=alpha)
ratio_stats
ax = sns.distplot(b_bootstrap, label='B')
ax.errorbar(x=[ratio_stats['b_ci_low'], ratio_stats['b_ci_hi']], y=[2, 2], linewidth=5, c='teal', marker='o',
label='95% HDI B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.errorbar(x=[ratio_stats['a_ci_low'], ratio_stats['a_ci_hi']], y=[5, 5], linewidth=5, c='salmon', marker='o',
label='95% HDI A')
ax.set(xlabel='mean search/nav count per journey', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True, bbox_to_anchor=(0.75, 1), loc='best')
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("loved journeys")
plt.savefig('nav_counts_loved.png', dpi = 900, bbox_inches = 'tight')
# calculate the posterior for the difference between A's and B's ratio
# ypa prefix is vestigial from blog post
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)
# the mean of the posterior
print('mean:', ypa_diff.mean())
print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)
ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Content_Nav_or_Search_Count', ylabel='Density',
title='The difference between B\'s and A\'s mean counts times occurrences')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
# We count the number of values greater than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# greater than 0, could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]
# We count the number of values less than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# less than 0, could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0]
(ypa_diff>0).sum()
(ypa_diff<0).sum()
###Output
_____no_output_____
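###Markdown
 To make the bootstrap mechanics above more concrete, here is a small self-contained sketch on synthetic data (it is not part of the original analysis): a Bayesian bootstrap replaces resampling-with-replacement by Dirichlet weights over the observations, and each weighted mean is one draw from the posterior of the mean.
###Code
# Toy illustration of the Bayesian bootstrap idea on synthetic counts
rng = np.random.RandomState(seed)
toy_counts = rng.poisson(lam=0.2, size=1000)           # stand-in for per-journey nav/search counts
toy_posterior = []
for _ in range(1000):
    weights = rng.dirichlet(np.ones(len(toy_counts)))  # one Dirichlet draw = one weighted "resample"
    toy_posterior.append(np.dot(weights, toy_counts))  # weighted mean under those weights
toy_low, toy_high = bb.highest_density_interval(toy_posterior)
print("posterior mean:", np.mean(toy_posterior), "95% HDI:", (toy_low, toy_high))
###Output
_____no_output_____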
###Markdown
**Proportion of journeys with a page sequence including content and related links only**. The null hypothesis is that there is no statistically significant difference in the proportion of journeys with a page sequence including content and related links only (including loops) between page variant A and page variant B. \begin{equation*}\frac{\text{total number of journeys that only contain content pages and related links (i.e. no nav pages)}}{\text{total number of journeys}}\end{equation*} Overall
###Code
# if (Content_Nav_Search_Event_Sum == 0) that's our success
# Has_No_Nav_Or_Search == 1 is a success
# the problem is symmetrical so doesn't matter too much
sum(df.Has_No_Nav_Or_Search * df.Occurrences) / df.Occurrences.sum()
sns.distplot(df.Content_Nav_or_Search_Count.values);
###Output
_____no_output_____
###Markdown
Frequentist statistics Statistical significance
###Code
nav = analysis.z_prop(df, 'Has_No_Nav_Or_Search', VARIANT_DICT)
nav
###Output
_____no_output_____
###Markdown
Practical significance - uplift
###Code
# Due to multiple testing we used the Bonferroni correction for alpha
ci_low,ci_upp = analysis.zconf_interval_two_samples(nav['x_a'], nav['n_a'],
nav['x_b'], nav['n_b'], alpha = alpha)
diff = 100*(nav['x_b']/nav['n_b']-nav['x_a']/nav['n_a'])
print(' difference in proportions = {0:.2f}%'.format(diff))
print(' 95% Confidence Interval = ( {0:.2f}% , {1:.2f}% )'
.format(100*ci_low, 100*ci_upp))
print("There was a {0: .2f}% relative change in the proportion of journeys not using search/nav elements".format(100 * ((nav['p_b']-nav['p_a'])/nav['p_a'])))
###Output
There was a 0.18% relative change in the proportion of journeys not using search/nav elements
###Markdown
**Average Journey Length (number of page views)**. The null hypothesis is that there is no statistically significant difference in the average page list length of journeys (including loops) between page variant A and page variant B.
###Code
length_B = df[df.ABVariant == VARIANT_DICT['INTERVENTION_GROUP']].groupby(
'Page_List_Length').sum().iloc[:, 0]
lengthB_2 = length_B.reindex(np.arange(1, 501, 1), fill_value=0)
length_A = df[df.ABVariant == VARIANT_DICT['CONTROL_GROUP']].groupby(
'Page_List_Length').sum().iloc[:, 0]
lengthA_2 = length_A.reindex(np.arange(1, 501, 1), fill_value=0)
fig, ax = plt.subplots(figsize=(100, 30))
ax.set_yscale('log')
width = 0.4
ax = lengthB_2.plot.bar(label='B', position=1, width=width)
ax = lengthA_2.plot.bar(label='A', color='salmon', position=0, width=width)
plt.xlabel('length', fontsize=1)
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
###Output
_____no_output_____
###Markdown
Bayesian bootstrap for non-parametric hypotheses
###Code
# http://savvastjortjoglou.com/nfl-bayesian-bootstrap.html
# let's use mean journey length (could probably model parametrically but we use it for demonstration here)
# some journeys have length 500 and should probably be removed as they are likely bots or other weirdness
# exclude journeys longer than 500 as these could be automated traffic
df_short = df[df['Page_List_Length'] < 500]
print("The mean number of pages in a loved journey is {0:.3f}".format(sum(df.Page_List_Length*df.Occurrences)/df.Occurrences.sum()))
# for reproducibility, set the seed within this context
a_bootstrap, b_bootstrap = analysis.bayesian_bootstrap_analysis(df, col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
a_bootstrap_short, b_bootstrap_short = analysis.bayesian_bootstrap_analysis(df_short, col_name='Page_List_Length', boot_reps=boot_reps, seed = seed, variant_dict=VARIANT_DICT)
np.array(a_bootstrap).mean()
np.array(b_bootstrap).mean()
print("There's a relative change in page length of {0:.2f}% from A to B".format((np.array(b_bootstrap).mean()-np.array(a_bootstrap).mean())/np.array(a_bootstrap).mean()*100))
print(np.array(a_bootstrap_short).mean())
print(np.array(b_bootstrap_short).mean())
# Calculate a 95% HDI
a_ci_low, a_ci_hi = bb.highest_density_interval(a_bootstrap)
print('low ci:', a_ci_low, '\nhigh ci:', a_ci_hi)
ax = sns.distplot(a_bootstrap, color='salmon')
ax.plot([a_ci_low, a_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant A Mean Journey Length')
sns.despine()
plt.legend();
# Calculate a 95% HDI
b_ci_low, b_ci_hi = bb.highest_density_interval(b_bootstrap)
print('low ci:', b_ci_low, '\nhigh ci:', b_ci_hi)
ax = sns.distplot(b_bootstrap)
ax.plot([b_ci_low, b_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density', title='Page Variant B Mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
ax = sns.distplot(b_bootstrap, label='B')
ax = sns.distplot(a_bootstrap, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.title("loved journeys")
plt.savefig('journey_length_loved.png', dpi = 900, bbox_inches = 'tight')
ax = sns.distplot(b_bootstrap_short, label='B')
ax = sns.distplot(a_bootstrap_short, label='A', ax=ax, color='salmon')
ax.set(xlabel='Journey Length', ylabel='Density')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
###Output
_____no_output_____
###Markdown
We can also measure the uncertainty in the difference between the Page Variants' Journey Length by subtracting their posteriors.
###Code
# calculate the posterior for the difference between A's and B's YPA
ypa_diff = np.array(b_bootstrap) - np.array(a_bootstrap)
# get the hdi
ypa_diff_ci_low, ypa_diff_ci_hi = bb.highest_density_interval(ypa_diff)
# the mean of the posterior
ypa_diff.mean()
print('low ci:', ypa_diff_ci_low, '\nhigh ci:', ypa_diff_ci_hi)
ax = sns.distplot(ypa_diff)
ax.plot([ypa_diff_ci_low, ypa_diff_ci_hi], [0, 0], linewidth=10, c='k', marker='o',
label='95% HDI')
ax.set(xlabel='Journey Length', ylabel='Density',
title='The difference between B\'s and A\'s mean Journey Length')
sns.despine()
legend = plt.legend(frameon=True)
frame = legend.get_frame()
frame.set_facecolor('white')
plt.show();
###Output
_____no_output_____
###Markdown
We can actually calculate the probability that B's mean Journey Length was greater than A's mean Journey Length by measuring the proportion of values greater than 0 in the above distribution.
###Code
# We count the number of values greater than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# greater than 0, could act a bit like a p-value
(ypa_diff > 0).sum() / ypa_diff.shape[0]
# We count the number of values less than 0 and divide by the total number
# of observations
# which returns us the proportion of values in the distribution that are
# less than 0, could act a bit like a p-value
(ypa_diff < 0).sum() / ypa_diff.shape[0]
###Output
_____no_output_____ |
agreg/Crime_parfait.ipynb | ###Markdown
Table of Contents 1 Texte d'oral de modélisation - Agrégation Option Informatique1.1 Préparation à l'agrégation - ENS de Rennes, 2016-171.2 À propos de ce document1.3 Implémentation1.3.1 Une bonne structure de donnée pour des intervalles et des graphes d'intervales1.3.2 Algorithme de coloriage de graphe d'intervalles1.3.3 Algorithme pour calculer le stable maximum d'un graphe d'intervalles1.4 Exemples1.4.1 Qui a tué le Duc de Densmore ?1.4.1.1 Comment résoudre ce problème ?1.4.1.2 Solution1.4.2 Le problème des frigos1.4.3 Le problème du CSA1.4.4 Le problème du wagon restaurant1.4.4.1 Solution via l'algorithme de coloriage de graphe d'intervalles1.5 Bonus ?1.5.1 Visualisation des graphes définis dans les exemples1.6 Conclusion Texte d'oral de modélisation - Agrégation Option Informatique Préparation à l'agrégation - ENS de Rennes, 2016-17- *Date* : 3 avril 2017- *Auteur* : [Lilian Besson](https://GitHub.com/Naereen/notebooks/)- *Texte*: Annale 2006, "Crime Parfait" À propos de ce document- Ceci est une *proposition* de correction, partielle et probablement non-optimale, pour la partie implémentation d'un [texte d'annale de l'agrégation de mathématiques, option informatique](http://Agreg.org/Textes/).- Ce document est un [notebook Jupyter](https://www.Jupyter.org/), et [est open-source sous Licence MIT sur GitHub](https://github.com/Naereen/notebooks/tree/master/agreg/), comme les autres solutions de textes de modélisation que [j](https://GitHub.com/Naereen)'ai écrite cette année.- L'implémentation sera faite en OCaml, version 4+ :
###Code
Sys.command "ocaml -version";;
###Output
The OCaml toplevel, version 4.04.2
###Markdown
---- ImplémentationLa question d'implémentation était la question 2) en page 7.> « Proposer une structure de donnée adaptée pour représenter un graphe d'intervalles dont une représentation sous forme de famille d’intervalles est connue.> Implémenter de manière efficace l’algorithme de coloriage de graphes d'intervalles et illustrer cet algorithme sur une application bien choisie citée dans le texte. »Nous allons donc d'abord définir une structure de donnée pour une famille d'intervalles ainsi que pour un graphe d'intervalle, ainsi qu'une fonction convertissant l'un en l'autre.Cela permettra de facilement définr les différents exemples du texte, et de les résoudre. Une bonne structure de donnée pour des intervalles et des graphes d'intervales- Pour des **intervalles** à valeurs réelles, on se restreint par convénience à des valeurs entières.
###Code
type intervalle = (int * int);;
type intervalles = intervalle list;;
###Output
_____no_output_____
###Markdown
- Pour des **graphes d'intervalles**, on utilise une simple représentation sous forme de liste d'adjacence, plus facile à mettre en place en OCaml qu'une représentation sous forme de matrice. Ici, tous nos graphes ont pour sommets $0 \dots n - 1$.
###Code
type sommet = int;;
type voisins = sommet list;;
type graphe_intervalle = voisins list;;
###Output
_____no_output_____
###Markdown
> *Note:* j'ai préféré garder une structure très simple, pour les intervalles, les graphes d'intervalles et les coloriages, mais on perd un peu en lisibilité dans la fonction coloriage.> > Implicitement, dès qu'une liste d'intervalles est fixée, de taille $n$, ils sont numérotés de $0$ à $n-1$. Le graphe `g` aura pour sommet $0 \dots n-1$, et le coloriage sera un simple tableau de couleurs `c` (i.e., d'entiers), donnant en `c[i]` la couleur de l'intervalle numéro `i`.>> Une solution plus intelligente aurait été d'utiliser des tables d'association, cf. le module [Map](http://caml.inria.fr/pub/docs/manual-ocaml/libref/Map.html) de OCaml, et le code proposé par Julien durant son oral. - On peut rapidement écrire une fonction qui va convertir une liste d'intervalle (`intervalles`) en un graphe d'intervalle. On crée les sommets du graphes, via `index_intvls` qui associe un intervalle à son indice, et ensuite on ajoute les arêtes au graphe selon les contraintes définissant un graphe d'intervalle : $$ \forall I, I' \in V, (I,I') \in E \Leftrightarrow I \neq I' \;\text{and}\; I \cap I' \neq \emptyset $$ Donc avec des intervales $I = [x,y]$ et $I' = [a,b]$, cela donne : $$ \forall I = [x,y], I' = [a,b] \in V, (I,I') \in E \Leftrightarrow (x,y) \neq (a,b) \;\text{and}\; \neg (b < x \;\text{or}\; y < a) $$
###Code
let graphe_depuis_intervalles (intvls : intervalles) : graphe_intervalle =
let n = List.length intvls in (* Nomber de sommet *)
let array_intvls = Array.of_list intvls in (* Tableau des intervalles *)
let index_intvls = Array.to_list (
Array.init n (fun i -> (
array_intvls.(i), i) (* Associe un intervalle à son indice *)
)
) in
let gr = List.map (fun (a, b) -> (* Pour chaque intervalle [a, b] *)
List.filter (fun (x, y) -> (* On ajoute [x, y] s'il intersecte [a, b] *)
(x, y) <> (a, b) (* Intervalle différent *)
&& not ( (b < x) || (y < a) ) (* pas x---y a---b ni a---b x---y *)
) intvls
) intvls in
(* On transforme la liste de liste d'intervalles en une liste de liste d'entiers *)
List.map (fun voisins ->
List.map (fun sommet -> (* Grace au tableau index_intvls *)
List.assoc sommet index_intvls
) voisins
) gr
;;
###Output
_____no_output_____
###Markdown
Algorithme de coloriage de graphe d'intervalles> Étant donné un graphe $G = (V, E)$, on cherche un entier $n$ minimal et une fonction $c : V \to \{1, \cdots, n\}$ telle que si $(v_1 , v_2) \in E$, alors $c(v_1) \neq c(v_2)$.On suit les indications de l'énoncé pour implémenter facilement cet algorithme.> Une *heuristique* simple pour résoudre ce problème consiste à appliquer l’algorithme glouton suivant :> - tant qu'il reste reste des sommets non coloriés,> + en choisir un> + et le colorier avec le plus petit entier qui n’apparait pas dans les voisins déjà coloriés.> En choisissant bien le nouveau sommet à colorier à chaque fois, cette heuristique se révelle optimale pour les graphes d’intervalles.On peut d'abord définir un type de donnée pour un coloriage, sous la forme d'une liste de couple d'intervalle et de couleur.Ainsi, `List.assoc` peut être utilisée pour donner le coloriage de chaque intervalle.
###Code
type couleur = int;;
type coloriage = (intervalle * couleur) list;;
let coloriage_depuis_couleurs (intvl : intervalles) (c : couleur array) : coloriage =
Array.to_list (Array.init (Array.length c) (fun i -> (List.nth intvl i), c.(i)));;
let quelle_couleur (intvl : intervalle) (colors : coloriage) =
List.assoc intvl colors
;;
###Output
_____no_output_____
###Markdown
Ensuite, l'ordre partiel $\prec_i$ sur les intervalles est défini comme ça :$$ I = (a,b) \prec_i J=(x, y) \Longleftrightarrow a < x $$
###Code
let ordre_partiel ((a, _) : intervalle) ((x, _) : intervalle) =
a < x
;;
###Output
_____no_output_____
###Markdown
On a ensuite besoin d'une fonction qui va calculer l'inf de $\mathbb{N} \setminus \{x : x \in \mathrm{valeurs} \}$:
###Code
let inf_N_minus valeurs =
let res = ref 0 in (* Très important d'utiliser une référence ! *)
while List.mem !res valeurs do
incr res;
done;
!res
;;
###Output
_____no_output_____
###Markdown
On vérifie rapidement sur deux exemples :
###Code
inf_N_minus [0; 1; 3];; (* 2 *)
inf_N_minus [0; 1; 2; 3; 4; 5; 6; 10];; (* 7 *)
###Output
_____no_output_____
###Markdown
Enfin, on a besoin d'une fonction pour trouver l'intervalle $I \in V$, minimal pour $\prec_i$, tel que $c(I) = +\infty$.
###Code
let trouve_min_interval intvl (c : coloriage) (inf : couleur) =
let colorie inter = quelle_couleur inter c in
(* D'abord on extraie {I : c(I) = +oo} *)
let intvl2 = List.filter (fun i -> (colorie i) = inf) intvl in
(* Puis on parcourt la liste et on garde le plus petit pour l'ordre *)
let i0 = ref 0 in
for j = 1 to (List.length intvl2) - 1 do
if ordre_partiel (List.nth intvl2 j) (List.nth intvl2 !i0) then
i0 := j;
done;
List.nth intvl2 !i0;
;;
###Output
_____no_output_____
###Markdown
Et donc tout cela permet de finir l'algorithme, tel que décrit dans le texte :
###Code
let coloriage_intervalles (intvl : intervalles) : coloriage =
let n = List.length intvl in (* Nombre d'intervalles *)
let array_intvls = Array.of_list intvl in (* Tableau des intervalles *)
let index_intvls = Array.to_list (
Array.init n (fun i -> (
array_intvls.(i), i) (* Associe un intervalle à son indice *)
)
) in
let gr = graphe_depuis_intervalles intvl in
let inf = n + 10000 in (* Grande valeur, pour +oo *)
let c = Array.make n inf in (* Liste des couleurs, c(I) = +oo pour tout I *)
let maxarray = Array.fold_left max (-inf - 10000) in (* Initialisé à -oo *)
while maxarray c = inf do (* Il reste un I in V tel que c(I) = +oo *)
begin (* C'est la partie pas élégante *)
(* On récupère le coloriage depuis la liste de couleurs actuelle *)
let coloriage = (coloriage_depuis_couleurs intvl c) in
(* Puis la fonction [colorie] pour associer une couleur à un intervalle *)
let colorie inter = quelle_couleur inter coloriage in
(* On choisit un I, minimal pour ordre_partiel, tel que c(I) = +oo *)
let inter = trouve_min_interval intvl coloriage inf in
(* On trouve son indice *)
let i = List.assoc inter index_intvls in
(* On trouve les voisins de i dans le graphe *)
let adj_de_i = List.nth gr i in
(* Puis les voisins de I en tant qu'intervalles *)
let adj_de_I = List.map (fun j -> List.nth intvl j) adj_de_i in
(* Puis on récupère leurs couleurs *)
let valeurs = List.map colorie adj_de_I in
(* c(I) = inf(N - {c(J) : J adjacent a I} ) *)
c.(i) <- inf_N_minus valeurs;
end;
done;
coloriage_depuis_couleurs intvl c;
;;
###Output
_____no_output_____
###Markdown
Une fois qu'on a un coloriage, à valeurs dans $0,\dots,k$ on récupère le nombre de couleurs comme $1 + \max c$, i.e., $k+1$.
###Code
let max_valeurs = List.fold_left max 0;;
let nombre_chromatique (colorg : coloriage) : int =
1 + max_valeurs (List.map snd colorg)
;;
###Output
_____no_output_____
###Markdown
Algorithme pour calculer le *stable maximum* d'un graphe d'intervallesOn répond ici à la question 7.> « Proposer un algorithme efficace pour construire un stable maximum (i.e., un ensemble de sommets indépendants) d'un graphe d’intervalles dont on connaı̂t une représentation sous forme d'intervalles.> On pourra chercher à quelle condition l'intervalle dont l'extrémité droite est la plus à gauche appartient à un stable maximum. » **FIXME, je ne l'ai pas encore fait.** ---- ExemplesOn traite ici l'exemple introductif, ainsi que les trois autres exemples proposés. Qui a tué le Duc de Densmore ?> On ne rappelle pas le problème, mais les données :> - Ann dit avoir vu Betty, Cynthia, Emily, Felicia et Georgia.- Betty dit avoir vu Ann, Cynthia et Helen.- Cynthia dit avoir vu Ann, Betty, Diana, Emily et Helen.- Diana dit avoir vu Cynthia et Emily.- Emily dit avoir vu Ann, Cynthia, Diana et Felicia.- Felicia dit avoir vu Ann et Emily.- Georgia dit avoir vu Ann et Helen.- Helen dit avoir vu Betty, Cynthia et Georgia.Transcrit sous forme de graphe, cela donne :
###Code
(* On définit des entiers, c'est plus simple *)
let ann = 0
and betty = 1
and cynthia = 2
and diana = 3
and emily = 4
and felicia = 5
and georgia = 6
and helen = 7;;
let graphe_densmore = [
[betty; cynthia; emily; felicia; georgia]; (* Ann *)
[ann; cynthia; helen]; (* Betty *)
[ann; betty; diana; emily; helen]; (* Cynthia *)
[cynthia; emily]; (* Diana *)
[ann; cynthia; diana; felicia]; (* Emily *)
[ann; emily]; (* Felicia *)
[ann; helen]; (* Georgia *)
[betty; cynthia; georgia] (* Helen *)
];;
###Output
_____no_output_____
###Markdown
> Figure 1. Graphe d'intervalle pour le problème de l'assassinat du duc de Densmore. Avec les prénoms plutôt que des numéros, cela donne : > Figure 2. Graphe d'intervalle pour le problème de l'assassinat du duc de Densmore. Comment résoudre ce problème ?> Il faut utiliser la caractérisation du théorème 2 du texte, et la définition des graphes parfaits.- Définition + Théorème 2 (point 1) :On sait qu'un graphe d'intervalle est parfait, et donc tous ses graphes induits le sont aussi.La caractérisation via les cordes sur les cycles de taille $\geq 4$ permet de dire qu'un quadrilatère (cycle de taille $4$) n'est pas un graphe d'intervalle.Donc un graphe qui contient un graphe induit étant un quadrilatère ne peut être un graphe d'intervalle.Ainsi, sur cet exemple, comme on a deux quadrilatères $A B H G$ et $A G H C$, on en déduit que $A$, $G$, ou $H$ ont menti.- Théorème 2 (point 2) :Ensuite, si on enlève $G$ ou $H$, le graphe ne devient pas un graphe d'intervalle, par les considérations suivantes, parce que son complémentaire n'est pas un graphe de comparaison.En effet, par exemple si on enlève $G$, $A$ et $H$ et $D$ forment une clique dans le complémentaire $\overline{G}$ de $G$, et l'irréflexivité d'une éventuelle relation $R$ rend cela impossible. Pareil si on enlève $H$, avec $G$ et $B$ et $D$ qui formet une clique dans $\overline{G}$.Par contre, si on enlève $A$, le graphe devient triangulé (et de comparaison, mais c'est plus dur à voir !).Donc seule $A$ reste comme potentielle menteuse. > « Mais... Ça semble difficile de programmer une résolution automatique de ce problème ? »En fait, il suffit d'écrire une fonction de vérification qu'un graphe est un graphe d'intervalle, puis on essaie d'enlever chaque sommet, tant que le graphe n'est pas un graphe d'intervalle.Si le graphe devient valide en enlevant un seul sommet, et qu'il n'y en a qu'un seul qui fonctionne, alors il y a un(e) seul(e) menteur(se) dans le graphe, et donc un(e) seul(e) coupable ! SolutionC'est donc $A$, i.e., Ann l'unique menteuse et donc la coupable.> Ce n'est pas grave de ne pas avoir réussi à répondre durant l'oral !> Au contraire, vous avez le droit de vous détacher du problème initial du texte ! > Une solution bien expliquée peut être trouvée dans [cette vidéo](https://youtu.be/ZGhSyVvOelg) : Le problème des frigos> Dans un grand hopital, les réductions de financement public poussent le gestionnaire du service d'immunologie à faire des économies sur le nombre de frigos à acheter pour stocker les vaccins. A peu de chose près, il lui faut stocker les vaccins suivants :> | Numéro | Nom du vaccin | Température de conservation| :-----: | :------------ | -------------------------: || 0 | Rougeole-Rubéole-Oreillons (RRO) | $4 \cdots 12$ °C| 1 | BCG | $8 \cdots 15$ °C| 2 | Di-Te-Per | $0 \cdots 20$ °C| 3 | Anti-polio | $2 \cdots 3$ °C| 4 | Anti-hépatite B | $-3 \cdots 6$ °C| 5 | Anti-amarile | $-10 \cdots 10$ °C| 6 | Variole | $6 \cdots 20$ °C| 7 | Varicelle | $-5 \cdots 2$ °C| 8 | Antihaemophilus | $-2 \cdots 8$ °C> Combien le gestionaire doit-il acheter de frigos, et sur quelles températures doit-il les régler ?
###Code
let vaccins : intervalles = [
(4, 12);
(8, 15);
(0, 20);
(2, 3);
(-3, 6);
(-10, 10);
(6, 20);
(-5, 2);
(-2, 8)
]
###Output
_____no_output_____
###Markdown
Qu'on peut visualiser sous forme de graphe facilement :
###Code
let graphe_vaccins = graphe_depuis_intervalles vaccins;;
###Output
_____no_output_____
###Markdown
> Figure 3. Graphe d'intervalle pour le problème des frigos et des vaccins. Avec des intervalles au lieu de numéro : > Figure 4. Graphe d'intervalle pour le problème des frigos et des vaccins. On peut récupérer une coloriage minimal pour ce graphe :
###Code
coloriage_intervalles vaccins;;
###Output
_____no_output_____
###Markdown
La couleur la plus grande est `5`, donc le nombre chromatique de ce graphe est `6`.
###Code
nombre_chromatique (coloriage_intervalles vaccins);;
###Output
_____no_output_____
###Markdown
Par contre, la solution au problème des frigos et des vaccins réside dans le nombre de couverture de cliques, $k(G)$, pas dans le nombre chromatique $\chi(G)$.On peut le résoudre en répondant à la question 7, qui demandait de mettre au point un algorithme pour construire un *stable maximum* pour un graphe d'intervalle. Le problème du CSA> Le Conseil Supérieur de l’Audiovisuel doit attribuer de nouvelles bandes de fréquences d’émission pour la stéréophonie numérique sous-terraine (SNS).> Cette technologie de pointe étant encore à l'état expérimental, les appareils capables d'émettre ne peuvent utiliser que les bandes de fréquences FM suivantes :> | Bandes de fréquence | Intervalle (kHz) || :-----------------: | ---------: || 0 | $32 \cdots 36$ || 1 | $24 \cdots 30$ || 2 | $28 \cdots 33$ || 3 | $22 \cdots 26$ || 4 | $20 \cdots 25$ || 5 | $30 \cdots 33$ || 6 | $31 \cdots 34$ || 7 | $27 \cdots 31$ |> Quelles bandes de fréquences doit-on retenir pour permettre à le plus d'appareils possibles d'être utilisés, sachant que deux appareils dont les bandes de fréquences s'intersectent pleinement (pas juste sur les extrémités) sont incompatibles.
###Code
let csa : intervalles = [
(32, 36);
(24, 30);
(28, 33);
(22, 26);
(20, 25);
(30, 33);
(31, 34);
(27, 31)
];;
let graphe_csa = graphe_depuis_intervalles csa;;
###Output
_____no_output_____
###Markdown
> Figure 5. Graphe d'intervalle pour le problème du CSA. Avec des intervalles au lieu de numéro : > Figure 6. Graphe d'intervalle pour le problème du CSA. On peut récupérer une coloriage minimal pour ce graphe :
###Code
coloriage_intervalles csa;;
###Output
_____no_output_____
###Markdown
La couleur la plus grande est `3`, donc le nombre chromatique de ce graphe est `4`.
###Code
nombre_chromatique (coloriage_intervalles csa);;
###Output
_____no_output_____
###Markdown
Par contre, la solution au problème CSA réside dans le nombre de couverture de cliques, $k(G)$, pas dans le nombre chromatique $\chi(G)$.On peut le résoudre en répondant à la question 7, qui demandait de mettre au point un algorithme pour construire un *stable maximum* pour un graphe d'intervalle. Le problème du wagon restaurant> Le chef de train de l'Orient Express doit aménager le wagon restaurant avant le départ du train. Ce wagon est assez petit et doit être le moins encombré de tables possibles, mais il faut prévoir suffisemment de tables pour accueillir toutes personnes qui ont réservé :> | Numéro | Personnage(s) | Heures de dîner | En secondes || :----------------- | --------- | :---------: | :---------: || 0 | Le baron et la baronne Von Haussplatz | 19h30 .. 20h14 | $1170 \cdots 1214$| 1 | Le général Cook | 20h30 .. 21h59 | $1230 \cdots 1319$| 2 | Les époux Steinberg | 19h .. 19h59 | $1140 \cdots 1199$| 3 | La duchesse de Colombart | 20h15 .. 20h59 | $1215 \cdots 1259$| 4 | Le marquis de Carquamba | 21h .. 21h59 | $1260 \cdots 1319$| 5 | La Vociafiore | 19h15 .. 20h29 | $1155 \cdots 1229$| 6 | Le colonel Ferdinand | 20h .. 20h59 | $1200 \cdots 1259$> Combien de tables le chef de train doit-il prévoir ?
###Code
let restaurant = [
(1170, 1214);
(1230, 1319);
(1140, 1199);
(1215, 1259);
(1260, 1319);
(1155, 1229);
(1200, 1259)
];;
let graphe_restaurant = graphe_depuis_intervalles restaurant;;
###Output
_____no_output_____
###Markdown
> Figure 7. Graphe d'intervalle pour le problème du wagon restaurant. Avec des intervalles au lieu de numéro : > Figure 8. Graphe d'intervalle pour le problème du wagon restaurant.
###Code
coloriage_intervalles restaurant;;
###Output
_____no_output_____
###Markdown
La couleur la plus grande est `2`, donc le nombre chromatique de ce graphe est `3`.
###Code
nombre_chromatique (coloriage_intervalles restaurant);;
###Output
_____no_output_____
###Markdown
Solution via l'algorithme de coloriage de graphe d'intervallesPour ce problème là, la solution est effectivement donnée par le nombre chromatique.La couleur sera le numéro de table pour chaque passagers (ou couple de passagers), et donc le nombre minimal de table à installer dans le wagon restaurant est exactement le nombre chromatique.Une solution peut être la suivante, avec **3 tables** :| Numéro | Personnage(s) | Heures de dîner | Numéro de table || :----------------- | --------- | :---------: | :---------: || 0 | Le baron et la baronne Von Haussplatz | 19h30 .. 20h14 | 2| 1 | Le général Cook | 20h30 .. 21h59 | 1| 2 | Les époux Steinberg | 19h .. 19h59 | 0| 3 | La duchesse de Colombart | 20h15 .. 20h59 | 2| 4 | Le marquis de Carquamba | 21h .. 21h59 | 0| 5 | La Vociafiore | 19h15 .. 20h29 | 1| 6 | Le colonel Ferdinand | 20h .. 20h59 | 0On vérifie manuellement que la solution convient.Chaque passager devra quitter sa tableau à la minute près par contre ! On peut afficher la solution avec un graphe colorié.La table `0` sera rouge, `1` sera bleu et `2` sera jaune : > Figure 9. Solution pour le problème du wagon restaurant. ---- Bonus ? Visualisation des graphes définis dans les exemples- J'utilise une petite fonction facile à écrire, qui convertit un graphe (`int list list`) en une chaîne de caractère au format [DOT Graph](http://www.graphviz.org/doc/info/lang.html).- Ensuite, un appel `dot -Tpng ...` en ligne de commande convertit ce graphe en une image, que j'inclus ensuite manuellement.
###Code
(** Transforme un [graph] en une chaîne représentant un graphe décrit par le langage DOT,
voir http://en.wikipedia.org/wiki/DOT_language pour plus de détails sur ce langage.
@param graphname Donne le nom du graphe tel que précisé pour DOT
@param directed Vrai si le graphe doit être dirigé (c'est le cas ici) faux sinon. Change le style des arêtes ([->] ou [--])
@param verb Affiche tout dans le terminal.
@param onetoone Si on veut afficher le graphe en mode carré (échelle 1:1). Parfois bizarre, parfois génial.
*)
let graph_to_dotgraph ?(graphname = "graphname") ?(directed = false) ?(verb = false) ?(onetoone = false) (glist : int list list) =
let res = ref "" in
let log s =
if verb then print_string s; (* Si [verb] affiche dans le terminal le résultat du graphe. *)
res := !res ^ s
in
log (if directed then "digraph " else "graph ");
log graphname; log " {";
if onetoone then
log "\n size=\"1,1\";";
let g = Array.of_list (List.map Array.of_list glist) in
(* On affiche directement les arc, un à un. *)
for i = 0 to (Array.length g) - 1 do
for j = 0 to (Array.length g.(i)) - 1 do
if i < g.(i).(j) then
log ("\n \""
^ (string_of_int i) ^ "\" "
^ (if directed then "->" else "--")
^ " \"" ^ (string_of_int g.(i).(j)) ^ "\""
);
done;
done;
log "\n}\n// generated by OCaml with the function graphe_to_dotgraph.";
!res;;
(** Fonction ecrire_sortie : plus pratique que output. *)
let ecrire_sortie monoutchanel machaine =
output monoutchanel machaine 0 (String.length machaine);
flush monoutchanel;;
(** Fonction ecrire_dans_fichier : pour écrire la chaine dans le fichier à l'adresse renseignée. *)
let ecrire_dans_fichier ~chaine ~adresse =
let mon_out_channel = open_out adresse in
ecrire_sortie mon_out_channel chaine;
close_out mon_out_channel;;
let s_graphe_densmore = graph_to_dotgraph ~graphname:"densmore" ~directed:false ~verb:false graphe_densmore;;
let s_graphe_vaccins = graph_to_dotgraph ~graphname:"vaccins" ~directed:false ~verb:false graphe_vaccins;;
let s_graphe_csa = graph_to_dotgraph ~graphname:"csa" ~directed:false ~verb:false graphe_csa;;
let s_graphe_restaurant = graph_to_dotgraph ~graphname:"restaurant" ~directed:false ~verb:false graphe_restaurant;;
ecrire_dans_fichier ~chaine:s_graphe_densmore ~adresse:"/tmp/densmore.dot" ;;
(* Sys.command "fdp -Tpng /tmp/densmore.dot > images/densmore.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_vaccins ~adresse:"/tmp/vaccins.dot" ;;
(* Sys.command "fdp -Tpng /tmp/vaccins.dot > images/vaccins.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_csa ~adresse:"/tmp/csa.dot" ;;
(* Sys.command "fdp -Tpng /tmp/csa.dot > images/csa.png";; *)
ecrire_dans_fichier ~chaine:s_graphe_restaurant ~adresse:"/tmp/restaurant.dot" ;;
(* Sys.command "fdp -Tpng /tmp/restaurant.dot > images/restaurant.png";; *)
###Output
_____no_output_____ |
TRABALHO1GRAFOS.ipynb | ###Markdown
Libraries used. Press Play to initialize the libraries.
###Code
import networkx as nx
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Enter the number of vertices of your graph and press Enter.
###Code
n = input("entre com o numero de vertices:" )
###Output
entre com o numero de vertices:5
###Markdown
Press Play to convert your input into an integer.
###Code
num=int(str(n))
print(num)
###Output
5
###Markdown
Press Play to generate the list of vertices of your graph.
###Code
G = nx.path_graph(num)
list(G.nodes)
m = int(input("Entre com o número de arestas : "))
###Output
Entre com o número de arestas : 7
###Markdown
Enter your edges (which vertices are connected); press Enter after each edge.
###Code
# creating an empty list
lst = []
# iterating till the range
for i in range(0, m):
ele = str(input())
lst.append(ele) # adding the element
print(lst)
###Output
01
12
13
23
24
34
02
['01', '12', '13', '23', '24', '34', '02']
###Markdown
Press Play to generate a drawing of your graph in the plane.
###Code
G = nx.Graph(lst)
opts = { "with_labels": True, "node_color": 'y' }
nx.draw(G, **opts)
###Output
_____no_output_____
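###Markdown
 An extra check, not part of the original assignment: networkx can list the degree of each vertex, which you can later compare with the row sums of the adjacency matrix.
###Code
# Extra check: degree of each vertex of G
print(dict(G.degree()))
###Output
_____no_output_____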
###Markdown
Press Play to generate the entries of the adjacency matrix of your graph.
###Code
A = nx.adjacency_matrix(G)
print(A)
###Output
(0, 1) 1
(0, 2) 1
(1, 0) 1
(1, 2) 1
(1, 3) 1
(2, 0) 1
(2, 1) 1
(2, 3) 1
(2, 4) 1
(3, 1) 1
(3, 2) 1
(3, 4) 1
(4, 2) 1
(4, 3) 1
###Markdown
Now just press Play and your adjacency matrix is ready!
###Code
A = nx.adjacency_matrix(G).toarray()
print(A)
###Output
[[0 1 1 0 0]
[1 0 1 1 0]
[1 1 0 1 1]
[0 1 1 0 1]
[0 0 1 1 0]]
|
stable/_downloads/a68c968ba9eafa2b1315cbf9e139eee3/plot_phantom_4DBTi.ipynb | ###Markdown
============================================ 4D Neuroimaging/BTi phantom dataset tutorial ============================================ Here we read 4DBTi epochs data obtained with a spherical phantom using four different dipole locations. For each condition we compute evoked data and compute dipole fits. Data are provided by Jean-Michel Badier from the MEG center in Marseille, France.
###Code
# Authors: Alex Gramfort <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from mayavi import mlab
from mne.datasets import phantom_4dbti
import mne
###Output
_____no_output_____
###Markdown
Read data and compute a dipole fit at the peak of the evoked response
###Code
data_path = phantom_4dbti.data_path()
raw_fname = op.join(data_path, '%d/e,rfhp1.0Hz')
dipoles = list()
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.080)
t0 = 0.07 # peak of the response
pos = np.empty((4, 3))
for ii in range(4):
raw = mne.io.read_raw_bti(raw_fname % (ii + 1,),
rename_channels=False, preload=True)
raw.info['bads'] = ['A173', 'A213', 'A232']
events = mne.find_events(raw, 'TRIGGER', mask=4350, mask_type='not_and')
epochs = mne.Epochs(raw, events=events, event_id=8192, tmin=-0.2, tmax=0.4,
preload=True)
evoked = epochs.average()
evoked.plot(time_unit='s')
cov = mne.compute_covariance(epochs, tmax=0.)
dip = mne.fit_dipole(evoked.copy().crop(t0, t0), cov, sphere)[0]
pos[ii] = dip.pos[0]
###Output
_____no_output_____
###Markdown
Compute localisation errors
###Code
actual_pos = 0.01 * np.array([[0.16, 1.61, 5.13],
[0.17, 1.35, 4.15],
[0.16, 1.05, 3.19],
[0.13, 0.80, 2.26]])
actual_pos = np.dot(actual_pos, [[0, 1, 0], [-1, 0, 0], [0, 0, 1]])
errors = 1e3 * np.linalg.norm(actual_pos - pos, axis=1)
print("errors (mm) : %s" % errors)
###Output
_____no_output_____
###Markdown
Plot the dipoles in 3D
###Code
def plot_pos(pos, color=(0., 0., 0.)):
mlab.points3d(pos[:, 0], pos[:, 1], pos[:, 2], scale_factor=0.005,
color=color)
mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces=[])
# Plot the position of the actual dipole
plot_pos(actual_pos, color=(1., 0., 0.))
# Plot the position of the estimated dipole
plot_pos(pos, color=(1., 1., 0.))
###Output
_____no_output_____ |
notebooks/thesis_experiments/20200924_eMVFTS_Wind_Energy_Raw.ipynb | ###Markdown
Forecasting experiments for GEFCOM 2012 Wind Dataset Install Libs
###Code
!pip3 install -U git+https://github.com/PYFTS/pyFTS
!pip3 install -U git+https://github.com/cseveriano/spatio-temporal-forecasting
!pip3 install -U git+https://github.com/cseveriano/evolving_clustering
!pip3 install -U git+https://github.com/cseveriano/fts2image
!pip3 install -U hyperopt
!pip3 install -U pyts
import pandas as pd
import numpy as np
from hyperopt import hp
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
from google.colab import files
import matplotlib.pyplot as plt
import pickle
import math
from pyFTS.benchmarks import Measures
from pyts.decomposition import SingularSpectrumAnalysis
from google.colab import files
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import datetime
###Output
_____no_output_____
###Markdown
Aux Functions
###Code
def normalize(df):
mindf = df.min()
maxdf = df.max()
return (df-mindf)/(maxdf-mindf)
def denormalize(norm, _min, _max):
return [(n * (_max-_min)) + _min for n in norm]
def getRollingWindow(index):
    # builds one rolling split: roughly a three-week training window followed by a one-week test window
pivot = index
train_start = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=20)
train_end = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=1)
test_start = pivot.strftime('%Y-%m-%d')
pivot = pivot + datetime.timedelta(days=6)
test_end = pivot.strftime('%Y-%m-%d')
return train_start, train_end, test_start, test_end
def calculate_rolling_error(cv_name, df, forecasts, order_list):
cv_results = pd.DataFrame(columns=['Split', 'RMSE', 'SMAPE'])
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
for i in np.arange(len(forecasts)):
train_start, train_end, test_start, test_end = getRollingWindow(index)
test = df[test_start : test_end]
yhat = forecasts[i]
order = order_list[i]
rmse = Measures.rmse(test.iloc[order:], yhat[:-1])
smape = Measures.smape(test.iloc[order:], yhat[:-1])
res = {'Split' : index.strftime('%Y-%m-%d') ,'RMSE' : rmse, 'SMAPE' : smape}
cv_results = cv_results.append(res, ignore_index=True)
cv_results.to_csv(cv_name+".csv")
index = index + datetime.timedelta(days=7)
return cv_results
def get_final_forecast(norm_forecasts):
forecasts_final = []
for i in np.arange(len(norm_forecasts)):
f_raw = denormalize(norm_forecasts[i], min_raw, max_raw)
forecasts_final.append(f_raw)
return forecasts_final
from spatiotemporal.test import methods_space_oahu as ms
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
import numpy as np
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials
from hyperopt import space_eval
import traceback
from spatiotemporal.util import sampling  # absolute import; the original relative import fails outside a package
import pickle
def calculate_error(loss_function, test_df, forecast, offset):
error = loss_function(test_df.iloc[(offset):], forecast)
print("Error : "+str(error))
return error
def method_optimize(experiment, forecast_method, train_df, test_df, space, loss_function, max_evals):
def objective(params):
print(params)
try:
_output = list(params['output'])
forecast = forecast_method(train_df, test_df, params)
_step = params.get('step', 1)
offset = params['order'] + _step - 1
error = calculate_error(loss_function, test_df[_output], forecast, offset)
except Exception:
traceback.print_exc()
error = 1000
return {'loss': error, 'status': STATUS_OK}
print("Running experiment: " + experiment)
trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=max_evals, trials=trials)
print('best parameters: ')
print(space_eval(space, best))
pickle.dump(best, open("best_" + experiment + ".pkl", "wb"))
pickle.dump(trials, open("trials_" + experiment + ".pkl", "wb"))
def run_search(methods, data, train, loss_function, max_evals=100, resample=None):
if resample:
data = sampling.resample_data(data, resample)
train_df, test_df = sampling.train_test_split(data, train)
for experiment, method, space in methods:
method_optimize(experiment, method, train_df, test_df, space, loss_function, max_evals)
###Output
_____no_output_____
###Markdown
Load Dataset
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
from sklearn.metrics import mean_squared_error
#columns names
wind_farms = ['wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7']
# read raw dataset
import pandas as pd
df = pd.read_csv('https://query.data.world/s/3zx2jusk4z6zvlg2dafqgshqp3oao6', parse_dates=['date'], index_col=0)
df.index = pd.to_datetime(df.index, format="%Y%m%d%H")
interval = ((df.index >= '2009-07') & (df.index <= '2010-08'))
df = df.loc[interval]
#Normalize Data
# Save Min-Max for Denorm
min_raw = df.min()
max_raw = df.max()
# Perform Normalization
norm_df = normalize(df)
# Tuning split
tuning_df = norm_df["2009-07-01":"2009-07-31"]
norm_df = norm_df["2009-08-01":"2010-08-30"]
df = df["2009-08-01":"2010-08-30"]
###Output
_____no_output_____
###Markdown
Forecasting Methods Persistence
###Code
def persistence_forecast(train, test, step):
predictions = []
for t in np.arange(0,len(test), step):
yhat = [test.iloc[t]] * step
predictions.extend(yhat)
return predictions
def rolling_cv_persistence(df, step):
forecasts = []
lags_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
yhat = persistence_forecast(train, test, step)
lags_list.append(1)
forecasts.append(yhat)
return forecasts, lags_list
forecasts_raw, order_list = rolling_cv_persistence(norm_df, 1)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_persistence", norm_df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_persistence.csv')
###Output
_____no_output_____
###Markdown
VAR
###Code
from statsmodels.tsa.api import VAR  # DynamicVAR has been removed from statsmodels and is not used below
def evaluate_VAR_models(test_name, train, validation,target, maxlags_list):
var_results = pd.DataFrame(columns=['Order','RMSE'])
best_score, best_cfg, best_model = float("inf"), None, None
for lgs in maxlags_list:
model = VAR(train)
results = model.fit(maxlags=lgs, ic='aic')
order = results.k_ar
forecast = []
for i in range(len(validation)-order) :
forecast.extend(results.forecast(validation.values[i:i+order],1))
forecast_df = pd.DataFrame(columns=validation.columns, data=forecast)
rmse = Measures.rmse(validation[target].iloc[order:], forecast_df[target].values)
if rmse < best_score:
best_score, best_cfg, best_model = rmse, order, results
res = {'Order' : str(order) ,'RMSE' : rmse}
print('VAR (%s) RMSE=%.3f' % (str(order),rmse))
var_results = var_results.append(res, ignore_index=True)
var_results.to_csv(test_name+".csv")
print('Best VAR(%s) RMSE=%.3f' % (best_cfg, best_score))
return best_model
def var_forecast(train, test, params):
order = params['order']
step = params['step']
model = VAR(train.values)
results = model.fit(maxlags=order)
lag_order = results.k_ar
print("Lag order:" + str(lag_order))
forecast = []
for i in np.arange(0,len(test)-lag_order+1,step) :
forecast.extend(results.forecast(test.values[i:i+lag_order],step))
forecast_df = pd.DataFrame(columns=test.columns, data=forecast)
return forecast_df.values, lag_order
def rolling_cv_var(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Concat train & validation for test
yhat, lag_order = var_forecast(train, test, params)
forecasts.append(yhat)
order_list.append(lag_order)
return forecasts, order_list
params_raw = {'order': 4, 'step': 1}
forecasts_raw, order_list = rolling_cv_var(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_var", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_var.csv')
###Output
_____no_output_____
###Markdown
e-MVFTS
###Code
from spatiotemporal.models.clusteredmvfts.fts import evolvingclusterfts
def evolvingfts_forecast(train_df, test_df, params, train_model=True):
_variance_limit = params['variance_limit']
_defuzzy = params['defuzzy']
_t_norm = params['t_norm']
_membership_threshold = params['membership_threshold']
_order = params['order']
_step = params['step']
model = evolvingclusterfts.EvolvingClusterFTS(variance_limit=_variance_limit, defuzzy=_defuzzy, t_norm=_t_norm,
membership_threshold=_membership_threshold)
model.fit(train_df.values, order=_order, verbose=False)
forecast = model.predict(test_df.values, steps_ahead=_step)
forecast_df = pd.DataFrame(data=forecast, columns=test_df.columns)
return forecast_df.values
def rolling_cv_evolving(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
first_time = True
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Concat train & validation for test
yhat = list(evolvingfts_forecast(train, test, params, train_model=first_time))
        #yhat.append(yhat[-1])  # to keep the metrics vector in the expected format
forecasts.append(yhat)
order_list.append(params['order'])
first_time = False
return forecasts, order_list
params_raw = {'variance_limit': 0.001, 'order': 2, 'defuzzy': 'weighted', 't_norm': 'threshold', 'membership_threshold': 0.6, 'step':1}
forecasts_raw, order_list = rolling_cv_evolving(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_emvfts", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_emvfts.csv')
###Output
_____no_output_____
###Markdown
MLP
###Code
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Dropout
from keras.constraints import maxnorm
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.layers.normalization import BatchNormalization
# convert series to supervised learning
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
n_vars = 1 if type(data) is list else data.shape[1]
df = pd.DataFrame(data)
cols, names = list(), list()
# input sequence (t-n, ... t-1)
for i in range(n_in, 0, -1):
cols.append(df.shift(i))
names += [('var%d(t-%d)' % (j+1, i)) for j in range(n_vars)]
# forecast sequence (t, t+1, ... t+n)
for i in range(0, n_out):
cols.append(df.shift(-i))
if i == 0:
names += [('var%d(t)' % (j+1)) for j in range(n_vars)]
else:
names += [('var%d(t+%d)' % (j+1, i)) for j in range(n_vars)]
# put it all together
agg = pd.concat(cols, axis=1)
agg.columns = names
# drop rows with NaN values
if dropnan:
agg.dropna(inplace=True)
return agg
###Output
_____no_output_____
###Markdown
MLP Parameter Tuning
###Code
from spatiotemporal.util import parameter_tuning, sampling
from spatiotemporal.util import experiments as ex
from sklearn.metrics import mean_squared_error
from hyperopt import hp
import numpy as np
mlp_space = {'choice':
hp.choice('num_layers',
[
{'layers': 'two',
},
{'layers': 'three',
'units3': hp.choice('units3', [8, 16, 64, 128, 256, 512]),
'dropout3': hp.choice('dropout3', [0, 0.25, 0.5, 0.75])
}
]),
'units1': hp.choice('units1', [8, 16, 64, 128, 256, 512]),
'units2': hp.choice('units2', [8, 16, 64, 128, 256, 512]),
'dropout1': hp.choice('dropout1', [0, 0.25, 0.5, 0.75]),
'dropout2': hp.choice('dropout2', [0, 0.25, 0.5, 0.75]),
'batch_size': hp.choice('batch_size', [28, 64, 128, 256, 512]),
'order': hp.choice('order', [1, 2, 3]),
'input': hp.choice('input', [wind_farms]),
'output': hp.choice('output', [wind_farms]),
'epochs': hp.choice('epochs', [100, 200, 300])}
def mlp_tuning(train_df, test_df, params):
_input = list(params['input'])
_nlags = params['order']
_epochs = params['epochs']
_batch_size = params['batch_size']
nfeat = len(train_df.columns)
nsteps = params.get('step',1)
nobs = _nlags * nfeat
output_index = -nfeat*nsteps
train_reshaped_df = series_to_supervised(train_df[_input], n_in=_nlags, n_out=nsteps)
train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values
test_reshaped_df = series_to_supervised(test_df[_input], n_in=_nlags, n_out=nsteps)
test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values
# design network
model = Sequential()
model.add(Dense(params['units1'], input_dim=train_X.shape[1], activation='relu'))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Dense(params['units2'], activation='relu'))
model.add(Dropout(params['dropout2']))
model.add(BatchNormalization())
if params['choice']['layers'] == 'three':
model.add(Dense(params['choice']['units3'], activation='relu'))
model.add(Dropout(params['choice']['dropout3']))
model.add(BatchNormalization())
model.add(Dense(train_Y.shape[1], activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')
# includes the call back object
model.fit(train_X, train_Y, epochs=_epochs, batch_size=_batch_size, verbose=False, shuffle=False)
# predict the test set
forecast = model.predict(test_X, verbose=False)
return forecast
methods = []
methods.append(("EXP_OAHU_MLP", mlp_tuning, mlp_space))
train_split = 0.6
run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=30, resample=None)
###Output
Running experiment: EXP_OAHU_MLP
{'batch_size': 256, 'choice': {'layers': 'two'}, 'dropout1': 0, 'dropout2': 0.25, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 16, 'units2': 512}
Error : 0.11210207774258987
{'batch_size': 64, 'choice': {'dropout3': 0.75, 'layers': 'three', 'units3': 8}, 'dropout1': 0.75, 'dropout2': 0.75, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 64, 'units2': 8}
Error : 0.16887562719906232
{'batch_size': 512, 'choice': {'dropout3': 0.5, 'layers': 'three', 'units3': 128}, 'dropout1': 0.5, 'dropout2': 0.5, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 64, 'units2': 8}
Error : 0.16832074683739862
{'batch_size': 28, 'choice': {'dropout3': 0, 'layers': 'three', 'units3': 256}, 'dropout1': 0.25, 'dropout2': 0, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 64, 'units2': 64}
Error : 0.12007328735895494
{'batch_size': 28, 'choice': {'dropout3': 0.5, 'layers': 'three', 'units3': 256}, 'dropout1': 0.75, 'dropout2': 0.25, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 512, 'units2': 16}
Error : 0.11256583928262713
{'batch_size': 256, 'choice': {'dropout3': 0.25, 'layers': 'three', 'units3': 64}, 'dropout1': 0.5, 'dropout2': 0.5, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 512, 'units2': 16}
Error : 0.14391026899955472
{'batch_size': 256, 'choice': {'dropout3': 0.75, 'layers': 'three', 'units3': 64}, 'dropout1': 0, 'dropout2': 0, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 16, 'units2': 64}
Error : 0.11037676055120181
{'batch_size': 512, 'choice': {'dropout3': 0.75, 'layers': 'three', 'units3': 128}, 'dropout1': 0.25, 'dropout2': 0.25, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 256, 'units2': 512}
Error : 0.15784381475268033
{'batch_size': 512, 'choice': {'dropout3': 0.75, 'layers': 'three', 'units3': 256}, 'dropout1': 0.75, 'dropout2': 0, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 8}
Error : 0.16657000728035204
{'batch_size': 512, 'choice': {'layers': 'two'}, 'dropout1': 0.75, 'dropout2': 0.25, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 512, 'units2': 8}
Error : 0.26202963425973014
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 512, 'units2': 64}
Error : 0.08758667541932756
{'batch_size': 28, 'choice': {'dropout3': 0, 'layers': 'three', 'units3': 256}, 'dropout1': 0.5, 'dropout2': 0.75, 'epochs': 100, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 64, 'units2': 16}
Error : 0.139826483409004
{'batch_size': 128, 'choice': {'layers': 'two'}, 'dropout1': 0.5, 'dropout2': 0.75, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 16, 'units2': 256}
Error : 0.12880869981278525
{'batch_size': 128, 'choice': {'dropout3': 0.25, 'layers': 'three', 'units3': 8}, 'dropout1': 0, 'dropout2': 0.75, 'epochs': 100, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 512, 'units2': 64}
Error : 0.16604021900218402
{'batch_size': 128, 'choice': {'layers': 'two'}, 'dropout1': 0, 'dropout2': 0.5, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 256}
Error : 0.09555621269300194
{'batch_size': 256, 'choice': {'dropout3': 0.75, 'layers': 'three', 'units3': 64}, 'dropout1': 0.75, 'dropout2': 0.25, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 8, 'units2': 8}
Error : 0.1711557976639845
{'batch_size': 28, 'choice': {'layers': 'two'}, 'dropout1': 0.75, 'dropout2': 0, 'epochs': 100, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 3, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 8, 'units2': 64}
Error : 0.1638326118189065
{'batch_size': 256, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0, 'epochs': 100, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 8, 'units2': 512}
Error : 0.15831764665590864
{'batch_size': 256, 'choice': {'layers': 'two'}, 'dropout1': 0.5, 'dropout2': 0.75, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 256, 'units2': 256}
Error : 0.14529388682505784
{'batch_size': 64, 'choice': {'dropout3': 0.25, 'layers': 'three', 'units3': 512}, 'dropout1': 0.25, 'dropout2': 0.75, 'epochs': 300, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 1, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 8, 'units2': 8}
Error : 0.1414119809552915
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.09542121366565244
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08515883577119714
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.084967455912928
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08816597673392379
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08461966850490099
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08416671260635603
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08203448953925911
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.09141701084487909
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
Error : 0.08625258845773652
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 256, 'units2': 128}
Error : 0.0846710829000828
100%|██████████| 30/30 [02:15<00:00, 4.52s/trial, best loss: 0.08203448953925911]
best parameters:
{'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
###Markdown
MLP Forecasting
###Code
def mlp_multi_forecast(train_df, test_df, params):
nfeat = len(train_df.columns)
nlags = params['order']
nsteps = params.get('step',1)
nobs = nlags * nfeat
output_index = -nfeat*nsteps
train_reshaped_df = series_to_supervised(train_df, n_in=nlags, n_out=nsteps)
train_X, train_Y = train_reshaped_df.iloc[:, :nobs].values, train_reshaped_df.iloc[:, output_index:].values
test_reshaped_df = series_to_supervised(test_df, n_in=nlags, n_out=nsteps)
test_X, test_Y = test_reshaped_df.iloc[:, :nobs].values, test_reshaped_df.iloc[:, output_index:].values
# design network
model = designMLPNetwork(train_X.shape[1], train_Y.shape[1], params)
# fit network
model.fit(train_X, train_Y, epochs=500, batch_size=1000, verbose=False, shuffle=False)
forecast = model.predict(test_X)
# fcst = [f[0] for f in forecast]
fcst = forecast
return fcst
def designMLPNetwork(input_shape, output_shape, params):
model = Sequential()
model.add(Dense(params['units1'], input_dim=input_shape, activation='relu'))
model.add(Dropout(params['dropout1']))
model.add(BatchNormalization())
model.add(Dense(params['units2'], activation='relu'))
model.add(Dropout(params['dropout2']))
model.add(BatchNormalization())
if params['choice']['layers'] == 'three':
model.add(Dense(params['choice']['units3'], activation='relu'))
model.add(Dropout(params['choice']['dropout3']))
model.add(BatchNormalization())
model.add(Dense(output_shape, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam')
return model
def rolling_cv_mlp(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Perform forecast
yhat = list(mlp_multi_forecast(train, test, params))
        yhat.append(yhat[-1])  # to keep the metrics vector in the expected format
forecasts.append(yhat)
order_list.append(params['order'])
return forecasts, order_list
# Enter best params
params_raw = {'batch_size': 64, 'choice': {'layers': 'two'}, 'dropout1': 0.25, 'dropout2': 0.5, 'epochs': 200, 'input': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'order': 2, 'output': ('wp1', 'wp2', 'wp3', 'wp4', 'wp5', 'wp6', 'wp7'), 'units1': 128, 'units2': 128}
forecasts_raw, order_list = rolling_cv_mlp(norm_df, params_raw)
forecasts_final = get_final_forecast(forecasts_raw)
calculate_rolling_error("rolling_cv_wind_raw_mlp_multi", df, forecasts_final, order_list)
files.download('rolling_cv_wind_raw_mlp_multi.csv')
###Output
_____no_output_____
###Markdown
Granular FTS
###Code
from pyFTS.models.multivariate import granular
from pyFTS.partitioners import Grid, Entropy
from pyFTS.models.multivariate import variable
from pyFTS.common import Membership
from pyFTS.partitioners import Grid, Entropy
###Output
_____no_output_____
###Markdown
Granular Parameter Tuning
###Code
granular_space = {
'npartitions': hp.choice('npartitions', [100, 150, 200]),
'order': hp.choice('order', [1, 2]),
'knn': hp.choice('knn', [1, 2, 3, 4, 5]),
'alpha_cut': hp.choice('alpha_cut', [0, 0.1, 0.2, 0.3]),
'input': hp.choice('input', [['wp1', 'wp2', 'wp3']]),
'output': hp.choice('output', [['wp1', 'wp2', 'wp3']])}
def granular_tuning(train_df, test_df, params):
_input = list(params['input'])
_output = list(params['output'])
_npartitions = params['npartitions']
_order = params['order']
_knn = params['knn']
_alpha_cut = params['alpha_cut']
_step = params.get('step',1)
## create explanatory variables
exp_variables = []
for vc in _input:
exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc,
npart=_npartitions, func=Membership.trimf,
data=train_df, alpha_cut=_alpha_cut))
model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order,
knn=_knn)
model.fit(train_df[_input], num_batches=1)
if _step > 1:
forecast = pd.DataFrame(columns=test_df.columns)
length = len(test_df.index)
for k in range(0,(length -(_order + _step - 1))):
fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step)
forecast = forecast.append(fcst.tail(1))
else:
forecast = model.predict(test_df[_input], type='multivariate')
return forecast[_output].values
methods = []
methods.append(("EXP_WIND_GRANULAR", granular_tuning, granular_space))
train_split = 0.6
run_search(methods, tuning_df, train_split, Measures.rmse, max_evals=10, resample=None)
###Output
Running experiment: EXP_WIND_GRANULAR
{'alpha_cut': 0.1, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 1, 'npartitions': 100, 'order': 1, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.11669905532137337
{'alpha_cut': 0.2, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 1, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.08229067276531199
{'alpha_cut': 0.2, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 2, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.08140150942675548
{'alpha_cut': 0.1, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 1, 'npartitions': 200, 'order': 1, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.11527883387924612
{'alpha_cut': 0.2, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 1, 'npartitions': 150, 'order': 1, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.11642857063129212
{'alpha_cut': 0.2, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 3, 'npartitions': 100, 'order': 1, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.10363929653907107
{'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.07916522355127716
{'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 3, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.07938399286248478
{'alpha_cut': 0.1, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 2, 'npartitions': 150, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.08056469602939852
{'alpha_cut': 0.2, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 100, 'order': 1, 'output': ('wp1', 'wp2', 'wp3')}
Error : 0.09920669569870488
100%|██████████| 10/10 [00:09<00:00, 1.05trial/s, best loss: 0.07916522355127716]
best parameters:
{'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
###Markdown
Granular Forecasting
###Code
def granular_forecast(train_df, test_df, params):
_input = list(params['input'])
_output = list(params['output'])
_npartitions = params['npartitions']
_knn = params['knn']
_alpha_cut = params['alpha_cut']
_order = params['order']
_step = params.get('step',1)
## create explanatory variables
exp_variables = []
for vc in _input:
exp_variables.append(variable.Variable(vc, data_label=vc, alias=vc,
npart=_npartitions, func=Membership.trimf,
data=train_df, alpha_cut=_alpha_cut))
model = granular.GranularWMVFTS(explanatory_variables=exp_variables, target_variable=exp_variables[0], order=_order,
knn=_knn)
model.fit(train_df[_input], num_batches=1)
if _step > 1:
forecast = pd.DataFrame(columns=test_df.columns)
length = len(test_df.index)
for k in range(0,(length -(_order + _step - 1))):
fcst = model.predict(test_df[_input], type='multivariate', start_at=k, steps_ahead=_step)
forecast = forecast.append(fcst.tail(1))
else:
forecast = model.predict(test_df[_input], type='multivariate')
return forecast[_output].values
def rolling_cv_granular(df, params):
forecasts = []
order_list = []
limit = df.index[-1].strftime('%Y-%m-%d')
test_end = ""
index = df.index[0]
while test_end < limit :
print("Index: ", index.strftime('%Y-%m-%d'))
train_start, train_end, test_start, test_end = getRollingWindow(index)
index = index + datetime.timedelta(days=7)
train = df[train_start : train_end]
test = df[test_start : test_end]
# Perform forecast
yhat = list(granular_forecast(train, test, params))
        yhat.append(yhat[-1])  # to keep the metrics vector in the expected format
forecasts.append(yhat)
order_list.append(params['order'])
return forecasts, order_list
def granular_get_final_forecast(forecasts_raw, input):
forecasts_final = []
l_min = df[input].min()
l_max = df[input].max()
for i in np.arange(len(forecasts_raw)):
f_raw = denormalize(forecasts_raw[i], l_min, l_max)
forecasts_final.append(f_raw)
return forecasts_final
# Enter best params
params_raw = {'alpha_cut': 0.3, 'input': ('wp1', 'wp2', 'wp3'), 'knn': 5, 'npartitions': 200, 'order': 2, 'output': ('wp1', 'wp2', 'wp3')}
forecasts_raw, order_list = rolling_cv_granular(norm_df, params_raw)
forecasts_final = granular_get_final_forecast(forecasts_raw, list(params_raw['input']))
calculate_rolling_error("rolling_cv_wind_raw_granular", df[list(params_raw['input'])], forecasts_final, order_list)
files.download('rolling_cv_wind_raw_granular.csv')
###Output
_____no_output_____
###Markdown
Result Analysis
###Code
import pandas as pd
from google.colab import files
files.upload()
def createBoxplot(filename, data, xticklabels, ylabel):
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data, patch_artist=True)
## change outline color, fill color and linewidth of the boxes
for box in bp['boxes']:
# change outline color
box.set( color='#7570b3', linewidth=2)
# change fill color
box.set( facecolor = '#AACCFF' )
## change color and linewidth of the whiskers
for whisker in bp['whiskers']:
whisker.set(color='#7570b3', linewidth=2)
## change color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
## change color and linewidth of the medians
for median in bp['medians']:
median.set(color='#FFE680', linewidth=2)
## change the style of fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
## Custom x-axis labels
ax.set_xticklabels(xticklabels)
ax.set_ylabel(ylabel)
plt.show()
fig.savefig(filename, bbox_inches='tight')
var_results = pd.read_csv("rolling_cv_wind_raw_var.csv")
evolving_results = pd.read_csv("rolling_cv_wind_raw_emvfts.csv")
mlp_results = pd.read_csv("rolling_cv_wind_raw_mlp_multi.csv")
granular_results = pd.read_csv("rolling_cv_wind_raw_granular.csv")
metric = 'RMSE'
results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]]
xticks = ['e-MVFTS','VAR','MLP','FIG-FTS']
ylab = 'RMSE'
createBoxplot("e-mvfts_boxplot_rmse_solar", results_data, xticks, ylab)
pd.options.display.float_format = '{:.2f}'.format
metric = 'RMSE'
rmse_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS'])
rmse_df["e-MVFTS"] = evolving_results[metric]
rmse_df["VAR"] = var_results[metric]
rmse_df["MLP"] = mlp_results[metric]
rmse_df["FIG-FTS"] = granular_results[metric]
rmse_df.std()
metric = 'SMAPE'
results_data = [evolving_results[metric],var_results[metric], mlp_results[metric], granular_results[metric]]
xticks = ['e-MVFTS','VAR','MLP','FIG-FTS']
ylab = 'SMAPE'
createBoxplot("e-mvfts_boxplot_smape_solar", results_data, xticks, ylab)
metric = 'SMAPE'
smape_df = pd.DataFrame(columns=['e-MVFTS','VAR','MLP','FIG-FTS'])
smape_df["e-MVFTS"] = evolving_results[metric]
smape_df["VAR"] = var_results[metric]
smape_df["MLP"] = mlp_results[metric]
smape_df["FIG-FTS"] = granular_results[metric]
smape_df.std()
metric = "RMSE"
data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"])
data["VAR"] = var_results[metric]
data["Evolving"] = evolving_results[metric]
data["MLP"] = mlp_results[metric]
data["Granular"] = granular_results[metric]
ax = data.plot(figsize=(18,6))
ax.set(xlabel='Window', ylabel=metric)
fig = ax.get_figure()
#fig.savefig(path_images + exp_id + "_prequential.png")
x = np.arange(len(data.columns.values))
names = data.columns.values
values = data.mean().values
plt.figure(figsize=(5,6))
plt.bar(x, values, align='center', alpha=0.5, width=0.9)
plt.xticks(x, names)
#plt.yticks(np.arange(0, 1.1, 0.1))
plt.ylabel(metric)
#plt.savefig(path_images + exp_id + "_bars.png")
metric = "SMAPE"
data = pd.DataFrame(columns=["VAR", "Evolving", "MLP", "Granular"])
data["VAR"] = var_results[metric]
data["Evolving"] = evolving_results[metric]
data["MLP"] = mlp_results[metric]
data["Granular"] = granular_results[metric]
ax = data.plot(figsize=(18,6))
ax.set(xlabel='Window', ylabel=metric)
fig = ax.get_figure()
#fig.savefig(path_images + exp_id + "_prequential.png")
x = np.arange(len(data.columns.values))
names = data.columns.values
values = data.mean().values
plt.figure(figsize=(5,6))
plt.bar(x, values, align='center', alpha=0.5, width=0.9)
plt.xticks(x, names)
#plt.yticks(np.arange(0, 1.1, 0.1))
plt.ylabel(metric)
#plt.savefig(path_images + exp_id + "_bars.png")
###Output
_____no_output_____ |
code/algorithms/course_udemy_1/Stacks, Queues and Deques/Implementation of Stack.ipynb | ###Markdown
Implementation of Stack Stack Attributes and Methods Before we implement our own Stack class, let's review the properties and methods of a Stack. The stack abstract data type is defined by the following structure and operations. A stack is structured, as described above, as an ordered collection of items where items are added to and removed from the end called the “top.” Stacks are ordered LIFO. The stack operations are given below. * Stack() creates a new stack that is empty. It needs no parameters and returns an empty stack. * push(item) adds a new item to the top of the stack. It needs the item and returns nothing. * pop() removes the top item from the stack. It needs no parameters and returns the item. The stack is modified. * peek() returns the top item from the stack but does not remove it. It needs no parameters. The stack is not modified. * isEmpty() tests to see whether the stack is empty. It needs no parameters and returns a boolean value. * size() returns the number of items on the stack. It needs no parameters and returns an integer. ____ Stack Implementation
###Code
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
###Output
_____no_output_____
###Markdown
Let's try it out!
###Code
s = Stack()
print(s.isEmpty())
s.push(1)
s.push('two')
s.peek()
s.push(True)
s.size()
s.isEmpty()
s.pop()
s.pop()
s.size()
s.pop()
s.isEmpty()
###Output
_____no_output_____ |
outlierdetector_lib.ipynb | ###Markdown
Angle-based Outlier Detector (ABOD)
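The cells in this notebook assume that the detector classes, the feature matrix `X`, the ground-truth labels `y`, `outliers_fraction`, and `random_state` were prepared earlier in the workflow. A minimal, self-contained setup that would let the cells below run stand-alone might look like the sketch here; the synthetic data and the parameter values are illustrative assumptions, not part of the original notebook:

```python
import numpy as np
from pyod.models.abod import ABOD
from pyod.models.cblof import CBLOF
from pyod.models.feature_bagging import FeatureBagging
from pyod.models.hbos import HBOS
from pyod.models.iforest import IForest
from pyod.models.knn import KNN
from pyod.models.lof import LOF

random_state = 42
outliers_fraction = 0.05  # assumed value, adjust to the real setup

# Hypothetical data: a cloud of inliers plus a few scattered outliers.
rng = np.random.RandomState(random_state)
n_inliers, n_outliers = 475, 25
X = np.vstack([rng.randn(n_inliers, 2), rng.uniform(-6, 6, size=(n_outliers, 2))])
y = np.hstack([np.zeros(n_inliers), np.ones(n_outliers)])
```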
###Code
clf1=ABOD(contamination=outliers_fraction)
clf1.fit(X)
y_pred1=clf1.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred1)
###Output
_____no_output_____
###Markdown
Cluster-based Local Outlier Factor (CBLOF)
###Code
clf2=CBLOF(contamination=outliers_fraction,check_estimator=False, random_state=random_state)
clf2.fit(X)
y_pred2=clf2.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred2)
###Output
_____no_output_____
###Markdown
Feature Bagging
###Code
clf3=FeatureBagging(LOF(n_neighbors=35),contamination=outliers_fraction,check_estimator=False,random_state=random_state)
clf3.fit(X)
y_pred3=clf3.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred3)
###Output
_____no_output_____
###Markdown
Histogram-base Outlier Detection (HBOS)
###Code
clf4=HBOS(alpha=0.1, contamination=0.037, n_bins=10, tol=0.9)
clf4.fit(X)
y_pred4=clf4.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred4)
###Output
_____no_output_____
###Markdown
Isolation Forest
###Code
clf5=IForest(contamination=outliers_fraction,random_state=random_state)
clf5.fit(X)
y_pred5=clf5.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred5)
###Output
_____no_output_____
###Markdown
K Nearest Neighbors (KNN)
###Code
clf6=KNN(contamination=outliers_fraction)
clf6.fit(X)
y_pred6=clf6.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred6)
###Output
_____no_output_____
###Markdown
Average KNN
###Code
clf7=KNN(method='mean',contamination=outliers_fraction)
clf7.fit(X)
y_pred7=clf7.predict(X)
from sklearn.metrics import confusion_matrix
confusion_matrix(y, y_pred7)
###Output
_____no_output_____ |
webinar_1/Lesson 1.ipynb | ###Markdown
Competitive data analysis. The Kaggle platform Lesson 1. Introduction to competitive data analysis, Exploration Data Analysis Homework for Lesson 1 Link to the datasets: https://drive.google.com/file/d/1j8zuKbI-PW5qKwhybP4S0EtugbPqmeyX/view?usp=sharing Task 1 Perform a basic analysis of the data: print the dimensions of the datasets, compute basic statistics, analyze the missing values, and draw conclusions.
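A possible sketch for this task (the file names `train.csv`/`test.csv` and the `target` column are assumptions about the linked archive):

```python
import pandas as pd

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

# Dimensions of the datasets
print(train.shape, test.shape)

# Basic statistics of the features
print(train.describe().T.head(10))

# Missing-value analysis
print(train.isnull().sum().sum(), test.isnull().sum().sum())
```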
###Code
# In progress. Somehow everything piled up at once; I hope to catch up this week.
# I have looked through it. A very serious course with difficult topics. It was a mistake to schedule it outside the quarter.
###Output
_____no_output_____
###Markdown
Task 2 Perform a basic analysis of the target variable and draw conclusions;
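A possible sketch, reusing the `train` dataframe and the assumed `target` column from the Task 1 sketch:

```python
# Class balance of the target variable
print(train['target'].value_counts(normalize=True))
train['target'].value_counts().plot(kind='bar', title='target distribution');
```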
###Code
# In progress
###Output
_____no_output_____
###Markdown
Task 3 Plot the distribution of the features depending on the value of the target variable, and the distribution of the features for the training and test sets (if your machine cannot handle building the distribution for all features, do the task for the features var_0, var_1, var_2, var_5, var_9, var_10, var_13, var_20, var_26, var_40, var_55, var_80, var_106, var_109, var_139, var_175, var_184, var_196), and draw conclusions;
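A possible sketch for a subset of the listed features, under the same assumptions as the Task 1 sketch:

```python
import matplotlib.pyplot as plt
import seaborn as sns

feats = ['var_0', 'var_1', 'var_2', 'var_5', 'var_9', 'var_10', 'var_13', 'var_20']
fig, axes = plt.subplots(2, 4, figsize=(20, 8))
for ax, f in zip(axes.ravel(), feats):
    sns.kdeplot(train.loc[train['target'] == 0, f], ax=ax, label='target=0')
    sns.kdeplot(train.loc[train['target'] == 1, f], ax=ax, label='target=1')
    sns.kdeplot(test[f], ax=ax, label='test')
    ax.set_title(f)
plt.tight_layout()
```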
###Code
# In progress
###Output
_____no_output_____
###Markdown
Task 4 Plot the distribution of the basic statistics of the features (mean, standard deviation) split by the target variable, and the distribution of the basic statistics for the training and test sets, and draw conclusions;
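A possible sketch, reusing the imports and dataframes from the sketches above:

```python
feat_cols = [c for c in train.columns if c.startswith('var_')]

fig, axes = plt.subplots(1, 2, figsize=(16, 4))
sns.kdeplot(train.loc[train['target'] == 0, feat_cols].mean(axis=1), ax=axes[0], label='target=0')
sns.kdeplot(train.loc[train['target'] == 1, feat_cols].mean(axis=1), ax=axes[0], label='target=1')
axes[0].set_title('row-wise mean')
sns.kdeplot(train[feat_cols].std(axis=1), ax=axes[1], label='train')
sns.kdeplot(test[feat_cols].std(axis=1), ax=axes[1], label='test')
axes[1].set_title('row-wise std')
```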
###Code
# In progress
###Output
_____no_output_____
###Markdown
Task 5 Plot the distribution of the correlation coefficients between the features. Is there any dependence between the features (we will assume that there is no relationship between features if the correlation coefficient is < 0.2)?
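A possible sketch, under the same assumptions as the sketches above:

```python
import numpy as np

feat_cols = [c for c in train.columns if c.startswith('var_')]
corr = train[feat_cols].corr().abs().values
upper = corr[np.triu_indices_from(corr, k=1)]  # off-diagonal pairs only
plt.hist(upper, bins=50)
plt.xlabel('|correlation coefficient|')
print('share of pairs with |corr| >= 0.2:', (upper >= 0.2).mean())
```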
###Code
# In progress
###Output
_____no_output_____
###Markdown
Task 6 Identify the 10 features that have the strongest nonlinear relationship with the target variable.
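One possible approach is mutual information, which captures nonlinear dependence; a sketch under the same assumptions as above:

```python
from sklearn.feature_selection import mutual_info_classif

feat_cols = [c for c in train.columns if c.startswith('var_')]
mi = mutual_info_classif(train[feat_cols], train['target'], random_state=0)
top10 = pd.Series(mi, index=feat_cols).sort_values(ascending=False).head(10)
print(top10)
```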
###Code
# In progress
###Output
_____no_output_____
###Markdown
Task 7 Analyze whether the distributions of the features are identical in the training and test sets, and draw conclusions.
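A possible sketch using a two-sample Kolmogorov-Smirnov test per feature, under the same assumptions as above:

```python
from scipy.stats import ks_2samp

feat_cols = [c for c in train.columns if c.startswith('var_')]
ks = pd.DataFrame(
    [(f, *ks_2samp(train[f], test[f])) for f in feat_cols],
    columns=['feature', 'ks_stat', 'p_value']
).sort_values('ks_stat', ascending=False)
print(ks.head(10))  # features whose train/test distributions differ the most
```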
###Code
# In progress
###Output
_____no_output_____ |
01_Workshop-master/Chapter06/Exercise94/Exercise94.ipynb | ###Markdown
Configure through code. Restart the kernel here.
###Code
import logging
import sys
root_logger = logging.getLogger()
handler = logging.StreamHandler(sys.stdout)
formatter = logging.Formatter("%(levelname)s: %(message)s")
handler.setFormatter(formatter)
root_logger.addHandler(handler)
root_logger.setLevel("INFO")
logging.info("Hello logging world")
###Output
INFO: Hello logging world
###Markdown
Configure with dictConfig. Restart the kernel here.
###Code
import logging
from logging.config import dictConfig
dictConfig({
"version": 1,
"formatters": {
"short":{
"format": "%(levelname)s: %(message)s",
}
},
"handlers": {
"console": {
"class": "logging.StreamHandler",
"formatter": "short",
"stream": "ext://sys.stdout",
"level": "DEBUG",
}
},
"loggers": {
"": {
"handlers": ["console"],
"level": "INFO"
}
}
})
logging.info("Hello logging world")
###Output
INFO: Hello logging world
###Markdown
Configure with basicConfig. Restart the kernel here.
###Code
import sys
import logging
logging.basicConfig(
level="INFO",
format="%(levelname)s: %(message)s",
stream=sys.stdout
)
logging.info("Hello there!")
###Output
INFO: Hello there!
###Markdown
Configure with fileConfig. Restart the kernel here.
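The next cell reads a `logging-config.ini` file that is not shown in the notebook. Its real contents may differ, but a minimal configuration consistent with the `INFO: Hello there!` output could be created like this (writing the file from Python purely for illustration):

```python
config_text = """\
[loggers]
keys=root

[handlers]
keys=console

[formatters]
keys=short

[logger_root]
level=INFO
handlers=console

[handler_console]
class=StreamHandler
level=DEBUG
formatter=short
args=(sys.stdout,)

[formatter_short]
format=%(levelname)s: %(message)s
"""

with open("logging-config.ini", "w") as f:
    f.write(config_text)
```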
###Code
import logging
from logging.config import fileConfig
fileConfig("logging-config.ini")
logging.info("Hello there!")
###Output
INFO: Hello there!
|
Module_5_LeNet_on_MNIST (1).ipynb | ###Markdown
Graded Assessment In this assessment you will write a full end-to-end training process using gluon and MXNet. We will train the LeNet-5 classifier network on the MNIST dataset. The network will be defined for you, but you have to fill in code to prepare the dataset, train the network, and evaluate its performance on a held-out dataset.
###Code
#Check CUDA version
!nvcc --version
#Install appropriate MXNet version
'''
For eg if CUDA version is 10.0 choose mxnet cu100mkl
where cu adds CUDA GPU support
and mkl adds Intel CPU Math Kernel Library support
'''
!pip install mxnet-cu101mkl gluoncv
from pathlib import Path
from mxnet import gluon, metric, autograd, init, nd
import os
import mxnet as mx
#I downloaded the files from Coursera and hosted on my gdrive:
from google.colab import drive
drive.mount('/content/drive')
# M5_DATA = Path(os.getenv('DATA_DIR', '../../data'), 'module_5')
M5_DATA = Path('/content/drive/My Drive/CourseraWork/MXNetAWS/data/module_5')
M5_IMAGES = Path(M5_DATA, 'images')
###Output
_____no_output_____
###Markdown
--- Question 1 Prepare the data and construct the dataloader* First, get the MNIST dataset from `gluon.data.vision.datasets`.* Don't forget the ToTensor and normalize Transformations. Use `0.13` and `0.31` as the mean and standard deviation respectively.* Construct the dataloader with the batch size provided. Ensure that the train_dataloader is shuffled.**CAUTION!**: Although the notebook interface has internet connectivity, the **autograders are not permitted to access the internet**. We have already downloaded the correct models and data for you to use so you don't need access to the internet. Set the `root` parameter to `M5_IMAGES` when using a preset dataset. Usually, in the real world, you have internet access, so setting the `root` parameter isn't required (and it's set to `~/.mxnet` by default).
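For reference, the ToTensor/Normalize transforms mentioned above could be used directly as sketched below; this is only an assumed alternative, while the graded solution that follows applies an equivalent hand-written transform:

```python
from mxnet import gluon
from mxnet.gluon.data.vision import transforms

def get_mnist_data_with_transforms(batch=128, root=M5_IMAGES):
    transform = transforms.Compose([
        transforms.ToTensor(),             # HWC uint8 -> CHW float32 in [0, 1]
        transforms.Normalize(0.13, 0.31),  # mean and std from the instructions
    ])
    train = gluon.data.vision.datasets.MNIST(root=root, train=True).transform_first(transform)
    valid = gluon.data.vision.datasets.MNIST(root=root, train=False).transform_first(transform)
    return (gluon.data.DataLoader(train, batch_size=batch, shuffle=True),
            gluon.data.DataLoader(valid, batch_size=batch, shuffle=False))
```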
###Code
import os
from pathlib import Path
from mxnet.gluon.data.vision import transforms
import numpy as np
def get_mnist_data(batch=128):
"""
Should construct a dataloader with the MNIST Dataset with the necessary transforms applied.
:param batch: batch size for the DataLoader.
:type batch: int
:return: a tuple of the training and validation DataLoaders
:rtype: (gluon.data.DataLoader, gluon.data.DataLoader)
"""
def transformer(data, label):
data = data.flatten().expand_dims(0).astype(np.float32)/255
        data = (data - 0.13) / 0.31  # normalise with the mean and std given in the instructions
label = label.astype(np.float32)
return data, label
train_dataset = gluon.data.vision.datasets.MNIST(root=M5_IMAGES, train=True, transform=transformer)
validation_dataset = gluon.data.vision.datasets.MNIST(root=M5_IMAGES, train=False, transform=transformer)
train_dataloader = gluon.data.DataLoader(train_dataset, batch_size=batch, last_batch='keep',shuffle=True)
validation_dataloader = gluon.data.DataLoader(validation_dataset, batch_size=batch, last_batch='keep')
return train_dataloader, validation_dataloader
t, v = get_mnist_data()
assert isinstance(t, gluon.data.DataLoader)
assert isinstance(v, gluon.data.DataLoader)
d, l = next(iter(t))
assert d.shape == (128, 1, 28, 28) #check Channel First and Batch Size
assert l.shape == (128,)
assert nd.max(d).asscalar() <= 2.9 # check for normalization
assert nd.min(d).asscalar() >= -0.5 # check for normalization
###Output
_____no_output_____
###Markdown
--- Question 2 Write the training loop* Create the loss function. This should be a loss function suitable for multi-class classification.* Create the metric accumulator. This should compute and store the accuracy of the model during training.* Create the trainer with the `adam` optimizer and a learning rate of `0.002`.* Write the training loop
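As a sketch, the three objects the question asks for could be created as below (the solution in the next cell inlines the loss call instead of instantiating a loss object):

```python
def make_training_objects(network):
    """Sketch of the loss, metric and trainer requested by Question 2."""
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()     # multi-class classification loss
    accuracy = metric.Accuracy()                       # metric accumulator
    trainer = gluon.Trainer(network.collect_params(), 'adam',
                            {'learning_rate': 0.002})  # adam optimizer, lr 0.002
    return loss_fn, accuracy, trainer
```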
###Code
def train(network, training_dataloader, batch_size, epochs):
"""
Should take an initialized network and train that network using data from the data loader.
:param network: initialized gluon network to be trained
:type network: gluon.Block
:param training_dataloader: the training DataLoader provides batches for data for every iteration
:type training_dataloader: gluon.data.DataLoader
:param batch_size: batch size for the DataLoader.
:type batch_size: int
:param epochs: number of epochs to train the DataLoader
:type epochs: int
:return: tuple of trained network and the final training accuracy
:rtype: (gluon.Block, float)
"""
trainer = gluon.Trainer(network.collect_params(), 'adam',
{'learning_rate': 0.002})
metric = mx.metric.Accuracy()
for epoch in range(epochs):
train_loss =0.
for data,label in training_dataloader:
# print (data.shape)
# print (label.shape)
with autograd.record():
output = network(data)
loss=mx.ndarray.softmax_cross_entropy(output,label)
loss.backward()
trainer.step(batch_size)
train_loss += loss.mean().asscalar()
metric.update(label, output)
print (epoch , metric.get()[1])
training_accuracy = metric.get()[1]
return network, training_accuracy
###Output
_____no_output_____
###Markdown
Let's define and initialize a network to test the train function.
###Code
net = gluon.nn.Sequential()
net.add(gluon.nn.Conv2D(channels=6, kernel_size=5, activation='relu'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Conv2D(channels=16, kernel_size=3, activation='relu'),
gluon.nn.MaxPool2D(pool_size=2, strides=2),
gluon.nn.Flatten(),
gluon.nn.Dense(120, activation="relu"),
gluon.nn.Dense(84, activation="relu"),
gluon.nn.Dense(10))
net.initialize(init=init.Xavier())
n, ta = train(net, t, 128, 5)
assert ta >= .95
d, l = next(iter(v))
p = (n(d).argmax(axis=1))
assert (p.asnumpy() == l.asnumpy()).sum()/128.0 > .95
###Output
0 0.93415
1 0.9572583333333333
2 0.9668111111111111
3 0.972375
4 0.97606
###Markdown
--- Question 3 Write the validation loop* Create the metric accumulator. This should the compute and store the accuracy of the model on the validation set* Write the validation loop
###Code
def validate(network, validation_dataloader):
"""
Should compute the accuracy of the network on the validation set.
:param network: initialized gluon network to be trained
:type network: gluon.Block
:param validation_dataloader: the training DataLoader provides batches for data for every iteration
:type validation_dataloader: gluon.data.DataLoader
:return: validation accuracy
:rtype: float
"""
val_acc = mx.metric.Accuracy()
for data,label in validation_dataloader:
output = network(data)
val_acc.update(label,output)
print (val_acc.get()[1])
return val_acc.get()[1]
assert validate(n, v) > .95
###Output
_____no_output_____ |
matrix_two/day3.ipynb | ###Markdown
Import data
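The cells below use several names (`pd`, `np`, `DummyRegressor`, `mae`, `DecisionTreeRegressor`, `cross_val_score`, `eli5`, `PermutationImportance`, `sns`, `plt`, `GridSpec`) whose import cell is not shown here. A plausible set of imports is sketched below; the aliases, in particular `mae`, are assumptions:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from matplotlib.gridspec import GridSpec

from sklearn.dummy import DummyRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_absolute_error as mae

import eli5
from eli5.sklearn import PermutationImportance
```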
###Code
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Dummy Model
###Code
df.select_dtypes(np.number).columns
X = df['car_id']
y = df['price_value']
model = DummyRegressor()
model.fit(X, y)
y_pred = model.predict(X)
mae(y, y_pred)
[x for x in df.columns if 'price' in x]
df['price_currency'].value_counts()
df = df[ df.price_currency == 'PLN']
df.shape
###Output
_____no_output_____
###Markdown
Features
###Code
df.sample(5)
suffix_cat = '__cat'
for feat in df.columns:
if isinstance(df[feat][0], list):continue
factorized_values = df[feat].factorize()[0]
if suffix_cat in feat:
df[feat] = factorized_values
else:
df[feat+suffix_cat] = factorized_values
cat_feats = [x for x in df.columns if suffix_cat in x]
cat_feats = [x for x in cat_feats if 'price' not in x]
cat_feats
len(cat_feats)
X = df[cat_feats].values
y = df['price_value'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
np.mean(scores)
m = DecisionTreeRegressor(max_depth=5)
m.fit(X, y)
imp = PermutationImportance(m, random_state=0).fit(X, y)
eli5.show_weights(imp, feature_names=cat_feats)
df[['param_napęd', 'price_value']].groupby('param_napęd').agg(['mean', 'median', 'std', 'count'])
df['param_rok-produkcji'] = df['param_rok-produkcji'].astype(float)
fig = plt.figure(constrained_layout=True, figsize=(16,8))
gs = GridSpec(2, 4, figure=fig)
ax1 = fig.add_subplot(gs[0, :2])
ax2 = fig.add_subplot(gs[0, 2])
ax3 = fig.add_subplot(gs[0, 3])
ax4 = fig.add_subplot(gs[1, :])
sns.boxplot(data=df, x='param_napęd', y='price_value', ax=ax1)
sns.boxplot(data=df, x='param_faktura-vat__cat', y='price_value', ax=ax2)
sns.boxplot(data=df, x='param_stan', y='price_value', ax=ax3)
sns.scatterplot(x="param_rok-produkcji", y="price_value", data=df, alpha=0.1, linewidth=0, ax=ax4);
!git push origin master
###Output
Counting objects: 1
Counting objects: 4, done.
Delta compression using up to 2 threads.
Compressing objects: 25% (1/4)
Compressing objects: 50% (2/4)
Compressing objects: 75% (3/4)
Compressing objects: 100% (4/4)
Compressing objects: 100% (4/4), done.
Writing objects: 25% (1/4)
Writing objects: 50% (2/4)
Writing objects: 75% (3/4)
Writing objects: 100% (4/4)
Writing objects: 100% (4/4), 76.21 KiB | 5.86 MiB/s, done.
Total 4 (delta 1), reused 0 (delta 0)
remote: Resolving deltas: 100% (1/1), completed with 1 local object.[K
remote: This repository moved. Please use the new location:[K
remote: https://github.com/kmwolowiec/data_workshop.git[K
To https://github.com/ThePearsSon/data_workshop.git
874fb89..1c4aeef master -> master
|
docs/!ml/notebooks/Perceptron.ipynb | ###Markdown
Load data
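The notebook uses `pd`, `np`, and `plt` without showing the import cell; the assumed imports are:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```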
###Code
df = pd.DataFrame({
'x': [4.5, 4.9, 5.0, 4.8, 5.8, 5.6, 5.7, 5.8],
'y': [35, 38, 45, 49, 59, 65, 73, 82],
'z': [0, 0, 0, 0, 1, 1, 1, 1]
})
df
plt.scatter(df['x'], df['y'], c=df['z'])
###Output
_____no_output_____
###Markdown
Train model
###Code
def fit(X, y, max_epochs=500):
"""
X : numpy 2D array. Each row corresponds to one training example.
y : numpy 1D array. Label (0 or 1) of each example.
"""
n = X.shape[1]
# Initialize weights
weights = np.zeros((n, ))
bias = 0.0
for _ in range(max_epochs):
errors = 0
# Loop through the examples
for i, xi in enumerate(X):
predict_y = 1 if xi.dot(weights) + bias >= 0 else 0
error = y[i] - predict_y
# Update weights
if error != 0:
weights += error * xi
bias += error
errors += 1
# We converged
if errors == 0:
break
return (weights, bias)
X = df.drop('z', axis=1).values
y = df['z'].values
weights, bias = fit(X, y)
weights, bias
###Output
_____no_output_____
###Markdown
Plot predictions
###Code
def plot_decision_boundary():
# Draw points
plt.scatter(X[:,0], X[:,1], c=y)
a = -weights[0]/weights[1]
b = -bias/weights[1]
# Draw hyperplane with margin
_X = np.arange(X[:,0].min(), X[:,0].max()+1, .1)
_Y = _X * a + b
plt.plot(_X, _Y)
plot_decision_boundary()
def plot_contour():
# Draw points
plt.scatter(X[:,0], X[:,1], c=y)
x_min, x_max = plt.gca().get_xlim()
y_min, y_max = plt.gca().get_ylim()
# Draw contour
xx, yy = np.meshgrid(np.arange(x_min, x_max+.1, .1),
np.arange(y_min, y_max+.1, .1))
_X = np.c_[xx.ravel(), yy.ravel()]
Z = np.sign(_X.dot(weights) + bias) \
.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Pastel1, alpha=0.3)
plot_contour()
###Output
_____no_output_____
###Markdown
Compare with logistic regression
###Code
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(C=1e20, solver='liblinear', random_state=0)
model.fit(X, y)
weights = model.coef_[0]
bias = model.intercept_[0]
plot_decision_boundary()
###Output
_____no_output_____
###Markdown
Compare with SVM
###Code
from sklearn import svm
model = svm.SVC(kernel='linear', C=1.0)
model.fit(X, y)
weights = model.coef_[0]
bias = model.intercept_[0]
plot_decision_boundary()
###Output
_____no_output_____ |
process_f0.ipynb | ###Markdown
This notebook contains the code needed to process the data which tracks the POIS and diversity, as generated by the SOS framework. Because of the large size of these tables, not all in-between artifacts are provided. This code is part of the paper "The Importance of Being Restrained".
###Code
import numpy as np
import pickle
import pandas as pd
from functools import partial
import glob
import seaborn as sbs
import matplotlib.pyplot as plt
from scipy.stats import kendalltau, rankdata
font = {'size' : 20}
plt.rc('font', **font)
base_folder = "/mnt/e/Research/DE/" #Update to required folder
output_location = "Datatables/"
def get_merged_dt(cross, sdis, F, CR, popsize):
dt_large = pd.DataFrame()
files = glob.glob(f"{base_folder}Runs_only/DEro{cross}{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
for f in files:
dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
dt_large = dt_large.append(dt_temp)
dt_large['cross'] = cross
dt_large['sdis'] = sdis
dt_large['F'] = F
dt_large['CR'] = CR
dt_large['popsize'] = popsize
return dt_large
def get_full_dt():
dt_full = pd.DataFrame()
for cross in ['b','e']:
for sdis in ['c', 'h', 'm', 's', 't', 'u']:
for CR in ['005', '099']:
for F in ['0916', '005']:
for popsize in [5,20,100]:
dt_temp = get_merged_dt(cross, sdis, F, CR, popsize)
dt_full = dt_full.append(dt_temp)
return dt_full
def get_merged_dt_v2(cross, sdis, F, CR, popsize):
dt_large = pd.DataFrame()
files = glob.glob(f"{base_folder}CosineSimilarity-MoreData/CosineSimilarity-MoreData/7/DEro{cross}{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
if len(files) == 0:
return dt_large
for f in files:
dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
dt_large = dt_large.append(dt_temp)
dt_large['cross'] = cross
dt_large['sdis'] = sdis
dt_large['F'] = F
dt_large['CR'] = CR
dt_large['popsize'] = popsize
dt_large.columns = ['cosine', 'applied', 'accept', 'cross', 'sdis', 'F', 'CR', 'popsize']
return dt_large
for cross in ['b','e']:
for sdis in ['c', 'h', 'm', 's', 't', 'u']:
for CR in ['005','0285','052','0755','099']:
for F in ['005','0285','052','0755','099']:
for popsize in [5,20, 100]:
dt = get_merged_dt_v2(cross, sdis, F, CR, popsize)
dt.to_csv(f"{output_location}DEro{cross}_{sdis}_p{popsize}_F{F}CR{CR}_cosine.csv")
def get_merged_dt_v3(sdis, F, CR, popsize):
dt_large = pd.DataFrame()
files = glob.glob(f"{base_folder}CosineSimilarity-LookingCloser/7/DErob{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
if len(files) == 0:
return dt_large
for f in files:
dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
dt_large = dt_large.append(dt_temp)
dt_large['sdis'] = sdis
dt_large['F'] = F
dt_large['CR'] = CR
dt_large['popsize'] = popsize
dt_large.columns = ['cosine', 'nr_mut', 'nr_exceed', 'accept', 'sdis', 'F', 'CR', 'popsize']
return dt_large
for sdis in ['c', 'h', 'm', 's', 't', 'u']:
for popsize in [5, 20, 100]:
for idx_0, F in enumerate(['099','0755','052','0285','005']):
for idx_1, CR in enumerate(['0041','0081','0121','0161','0201']):
dt = get_merged_dt_v3(sdis, F, CR, popsize)
dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_cosine_v3.csv")
def get_merged_dt_v4(sdis, F, CR, popsize):
dt_large = pd.DataFrame()
files = glob.glob(f"{base_folder}Div_cos_sim/CosineSimilarity/7/DErob{sdis}p{popsize}D30f0*_F{F}Cr{CR}.txt")
if len(files) == 0:
return dt_large
for f in files:
dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
dt_large = dt_large.append(dt_temp)
dt_large['sdis'] = sdis
dt_large['F'] = F
dt_large['CR'] = CR
dt_large['popsize'] = popsize
dt_large.columns = ['cosine', 'nr_mut', 'nr_exceed', 'accept', 'sdis', 'F', 'CR', 'popsize']
return dt_large
for F in ['0285', '099', '052', '005']: #'0755',
for CR in ['0755', '0285', '099', '052', '005', '00891', '01283', '01675', '02067', '02458']:
for popsize in [5, 20, 100]:
for sdis in ['t', 'h', 'm', 's', 'c', 'u']:
dt = get_merged_dt_v4(sdis, F, CR, popsize)
dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_cosine_v4.csv")
def get_diversity_dt(sdis, F, CR, popsize):
dt_large = pd.DataFrame()
files = glob.glob(f"/mnt/e/Research/DE/Div_cos_sim/CosineSimilarity/7/Diversity-DErob{sdis}p{popsize}D30f0*_F{F}Cr{CR}.txt")
if len(files) == 0:
return dt_large
for f in files:
dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
dt_large = dt_large.append(dt_temp)
dt_large['sdis'] = sdis
dt_large['F'] = F
dt_large['CR'] = CR
dt_large['popsize'] = popsize
dt_large.columns = ['div0', 'div1', 'sdis', 'F', 'CR', 'popsize']
return dt_large
for F in ['0755','0285', '099', '052', '005']:
for CR in ['0755', '0285', '099', '052', '005', '00891', '01283', '01675', '02067', '02458']:
for popsize in [5, 20, 100]:
for sdis in ['t', 'h', 'm', 's', 'c', 'u']:
dt = get_diversity_dt(sdis, F, CR, popsize)
dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_diversity.csv")
###Output
_____no_output_____ |
3_core_core_analysis/6_explore_secretion_genes.ipynb | ###Markdown
Explore secretion system genes

[KEGG enrichment analysis](5_KEGG_enrichment_of_stable_genes.ipynb) found that genes associated with the ribosome, lipopolysaccharide (outer membrane) biosynthesis, and the citrate cycle are significantly conserved across strains. Indeed, functions that are essential seem to be significantly conserved across strains, as expected. However, there are also pathways like the secretion systems, which allow for inter-strain warfare, that we'd expect to vary across strains but were found to be conserved (T3SS significant but not T6SS).

This notebook examines the stability scores of the genes in the secretion systems to determine whether there is a subset of the secretion genes, related to the machinery, that is conserved while others, like the secretory proteins, are more variable.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import random
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scripts import annotations
random.seed(1)
###Output
/home/alexandra/anaconda3/envs/core_acc/lib/python3.7/site-packages/matplotlib/__init__.py:886: MatplotlibDeprecationWarning:
examples.directory is deprecated; in the future, examples will be found relative to the 'datapath' directory.
"found relative to the 'datapath' directory.".format(key))
###Markdown
Load data and metadata
###Code
# Input similarity scores and annotations filenames
# Since the results are similar we only need to look at the scores for one strain type
pao1_similarity_filename = "pao1_core_similarity_associations_final_spell.tsv"
# Import df
pao1_similarity = pd.read_csv(pao1_similarity_filename, sep="\t", index_col=0, header=0)
pao1_similarity.head()
# Load KEGG pathway data
pao1_pathway_filename = "https://raw.githubusercontent.com/greenelab/adage/7a4eda39d360b224268921dc1f2c14b32788ab16/Node_interpretation/pseudomonas_KEGG_terms.txt"
pao1_pathways = annotations.load_format_KEGG(pao1_pathway_filename)
print(pao1_pathways.shape)
pao1_pathways.head()
###Output
(169, 2)
###Markdown
Get genes related to secretion pathways
###Code
pao1_pathways.loc[
[
"KEGG-Module-M00334: Type VI secretion system",
"KEGG-Module-M00332: Type III secretion system",
"KEGG-Module-M00335: Sec (secretion) system",
]
]
# Get genes related to pathways
T6SS_genes = list(pao1_pathways.loc["KEGG-Module-M00334: Type VI secretion system", 2])
T3SS_genes = list(pao1_pathways.loc["KEGG-Module-M00332: Type III secretion system", 2])
secretion_genes = list(
pao1_pathways.loc["KEGG-Module-M00335: Sec (secretion) system", 2]
)
# Pull out genes related to T3SS
T6SS_similarity = pao1_similarity.reindex(T6SS_genes)
T3SS_similarity = pao1_similarity.reindex(T3SS_genes)
sec_similarity = pao1_similarity.reindex(secretion_genes)
T6SS_similarity.sort_values(by="Transcriptional similarity across strains")
T3SS_similarity.sort_values(by="Transcriptional similarity across strains")
# sec_similarity.sort_values(by="Transcriptional similarity across strains")
# Save T3SS and T6SS df for easier lookup
T3SS_similarity.to_csv("T3SS_core_similarity_associations_final_spell.tsv", sep="\t")
T6SS_similarity.to_csv("T6SS_core_similarity_associations_final_spell.tsv", sep="\t")
###Output
_____no_output_____
###Markdown
Plot
###Code
plt.figure(figsize=(10, 8))
sns.violinplot(
data=pao1_similarity,
x="Transcriptional similarity across strains",
palette="Blues",
inner=None,
)
sns.swarmplot(
data=T6SS_similarity,
x="Transcriptional similarity across strains",
color="k",
label="T6SS genes",
alpha=0.8,
)
sns.swarmplot(
data=T3SS_similarity,
x="Transcriptional similarity across strains",
color="r",
label="T3SS genes",
alpha=0.8,
)
# sns.swarmplot(
# data=sec_similarity,
# x="Transcriptional similarity across strains",
# color="yellow",
# label="secretion system genes",
# alpha=0.8,
# )
# Add text labels for least stable genes amongst the T3SS/T6SS
plt.text(
x=T3SS_similarity.loc[
T3SS_similarity["Name"] == "pscR", "Transcriptional similarity across strains"
],
y=0.02,
s="$pscR$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG6", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG6$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG3", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG3$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG4a", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG4a$",
)
plt.title("Stability of secretion system genes", fontsize=14)
plt.legend()
###Output
_____no_output_____ |
code/Day02_answer/Day02_3_iris_excercise(LGBM).ipynb | ###Markdown
Predicting Iris Species

Table of Contents
1 DataFrame
2 Splitting the data into Train/Test sets for training
3 Training and evaluating the model
4 Cross Validation
4.1 Types of cross validation
4.2 KFold
4.3 StratifiedKFold
4.4 LeaveOneOut
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import *
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
###Output
_____no_output_____
###Markdown
DataFrame
###Code
iris = load_iris()
iris_df = pd.DataFrame(data=iris.data,columns=iris.feature_names)
iris_df['label'] = iris.target
iris_df
iris_df.shape
###Output
_____no_output_____
###Markdown
Splitting the data into Train/Test sets for training
###Code
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
test_size = 0.3,
random_state = 100)
###Output
_____no_output_____
###Markdown
Training and evaluating the model

Model to use: LGBM

```python
from lightgbm import LGBMClassifier

model_lgbm = LGBMClassifier()    # define the model
model_lgbm.fit(???, ???)         # train the model
model_lgbm.score(???, ???)       # check the model score
model_lgbm.predict(???, ???)     # store the prediction results
```
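One possible completion is sketched below (an illustration added here, assuming the `lightgbm` package is installed and using the `X_train`/`X_test`, `y_train`/`y_test` splits created above; the variable name `pred_lgbm` is illustrative):

```python
from lightgbm import LGBMClassifier

model_lgbm = LGBMClassifier()              # define the model
model_lgbm.fit(X_train, y_train)           # train the model
print(model_lgbm.score(X_test, y_test))    # mean accuracy on the test split
pred_lgbm = model_lgbm.predict(X_test)     # store the predicted labels
```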
###Code
# Define the model
# Train the model
# Check the model score
# Save the model predictions
###Output
_____no_output_____ |
static_files/presentations/DC_Assignment1.ipynb | ###Markdown
Assignment 1

Student ID: *Double click here to fill the Student ID*

Name: *Double click here to fill the name*

Q1: Exploring the TensorFlow playground
http://playground.tensorflow.org/

(a) Execute the following steps first:
1. Change the dataset to the exclusive OR dataset (top-right dataset under the "DATA" panel).
2. Reduce the hidden layers to only one layer and change the activation function to "ReLU".
3. Run the model five times. Before each trial, hit the "Reset the network" button to get a new random initialization. (The "Reset the network" button is the circular reset arrow just to the left of the Play button.)
4. Let each trial run for at least 500 epochs to ensure convergence.

Make some comments about the role of initialization in this non-convex optimization problem. What is the minimum number of neurons required (keeping all other parameters unchanged) to ensure that it almost always converges to the global minimum (where the test loss is below 0.02)? Finally, paste the convergence results below.

* Note the convergence results should include all the settings and the model. An example is available [here](https://drive.google.com/file/d/15AXYZLNMNnpZj0kI0CgPdKnyP_KqRncz/view?usp=sharing)

(b) Execute the following steps first:
1. Change the dataset to be the spiral (bottom-right dataset under the "DATA" panel).
2. Increase the noise level to 50 and leave the training and test set ratio unchanged.
3. Train the best model you can, using just `X1` and `X2` as input features. Feel free to add or remove layers and neurons. You can also change learning settings like learning rate, regularization rate, activations and batch size. Try to get the test loss below 0.15.

How many parameters do you have in your models? Describe the model architecture and the training strategy you use. Finally, paste the convergence results below.

* You may need to train the model for enough epochs here and use learning rate scheduling manually.

(c) Use the same dataset as described above with the noise level set to 50. This time, feel free to add additional features or other transformations like `sin(X1)` and `sin(X2)`. Again, try to get the loss below 0.15. Compare the results with (b) and describe your observation. Describe the model architecture and the training strategy you use. Finally, paste the convergence results below.

Q2: Tackling MNIST with DNN

In this question, we will explore the behavior of the vanishing gradient problem (which we have tried to solve using feature engineering in Q1) and try to solve it. The dataset we use is the famous MNIST dataset, which contains ten different classes of handwritten digits. The MNIST database contains 60,000 training images and 10,000 testing images. In addition, each grayscale image is fit into a 28x28 pixel bounding box.
http://yann.lecun.com/exdb/mnist/

(a) Load the MNIST dataset (you may refer to `keras.datasets.mnist.load_data()`), and split it into a training set (48,000 images), a validation set (12,000 images) and a test set (10,000 images). Make sure to standardize the dataset first.
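One possible sketch for (a) is shown below (not an official solution; it assumes TensorFlow 2.x with the bundled Keras, uses scaling to [0, 1] as the standardization step, and the array names are illustrative):

```python
from tensorflow import keras

(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.mnist.load_data()

# Standardize: scale the 0-255 pixel values into [0, 1]
X_train_full = X_train_full.astype("float32") / 255.0
X_test = X_test.astype("float32") / 255.0

# 48,000 images for training, 12,000 for validation
X_train, X_valid = X_train_full[:48000], X_train_full[48000:]
y_train, y_valid = y_train_full[:48000], y_train_full[48000:]
```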
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(b) Build a sequential model with 30 hidden dense layers (60 neurons each, using ReLU as the activation function) plus an output layer (10 neurons using softmax as the activation function). Train it with the SGD optimizer (learning rate 0.001, momentum 0.9) for 10 epochs on the MNIST dataset. Try to manually calculate how many steps are in one epoch and compare it with the one reported by the program. Finally, plot the learning curves (loss vs epochs) and report the accuracy you get on the test set.
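A sketch of one way to build and train this network, assuming the `X_train`/`X_valid` splits from (a) (layer and variable names are illustrative):

```python
from tensorflow import keras

model = keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28, 28]))
# 30 hidden dense layers with 60 ReLU neurons each
for _ in range(30):
    model.add(keras.layers.Dense(60, activation="relu"))
# Output layer: 10 classes with softmax
model.add(keras.layers.Dense(10, activation="softmax"))

model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.SGD(learning_rate=0.001, momentum=0.9),
              metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=10,
                    validation_data=(X_valid, y_valid))
```

With the default batch size of 32, one epoch over 48,000 training images is 48,000 / 32 = 1,500 steps, which is the number Keras should report per epoch.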
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(c) Update the model in (b) to add a BatchNormalization (BN) layer after every hidden layer's activation function. How do the training time and the performance compare with (b)? Try to manually calculate how many non-trainable parameters are in your model and compare it with the one reported by the program. Finally, try moving the BN layers before the hidden layers' activation functions and compare the performance with BN layers after the activation function.
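A minimal sketch of the BN-after-activation variant (same assumptions and illustrative names as above):

```python
from tensorflow import keras

model_bn = keras.models.Sequential()
model_bn.add(keras.layers.Flatten(input_shape=[28, 28]))
for _ in range(30):
    model_bn.add(keras.layers.Dense(60, activation="relu"))
    model_bn.add(keras.layers.BatchNormalization())  # BN placed after the activation
model_bn.add(keras.layers.Dense(10, activation="softmax"))
```

With 60-unit layers, each BatchNormalization layer holds 4 × 60 = 240 parameters, of which 2 × 60 = 120 (the moving mean and variance) are non-trainable, so the 30 BN layers contribute 30 × 120 = 3,600 non-trainable parameters.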
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
Q3: High Accuracy CNN for CIFAR-10

When facing problems related to images like Q2, we can consider using a CNN instead of a DNN. The CIFAR-10 dataset is one of the most widely used datasets for machine learning research. It consists of 60,000 32x32 color images in 10 classes, with 6,000 images per class. In this problem, we will try to build our own CNN from scratch and achieve the highest possible accuracy on CIFAR-10.
https://www.cs.toronto.edu/~kriz/cifar.html

(a) Load the CIFAR10 dataset (you may refer to `keras.datasets.cifar10.load_data()`), and split it into a training set (40,000 images), a validation set (10,000 images) and a test set (10,000 images). Make sure the pixel values range from 0 to 1.
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(b) Build a Convolutional Neural Network using the following architecture:

|        | Type                | Maps    | Activation |
|--------|---------------------|---------|------------|
| Output | Fully connected     | 10      | Softmax    |
| S10    | Max Pooling         |         |            |
| B9     | Batch normalization |         |            |
| C8     | Convolution         | 64      | ReLU       |
| B7     | Batch normalization |         |            |
| C6     | Convolution         | 64      | ReLU       |
| S5     | Max Pooling         |         |            |
| B4     | Batch normalization |         |            |
| C3     | Convolution         | 32      | ReLU       |
| B2     | Batch normalization |         |            |
| C1     | Convolution         | 32      | ReLU       |
| In     | Input               | RGB (3) |            |

Train the model for 20 epochs with the NAdam optimizer (Adam with Nesterov momentum). Try to manually calculate the number of parameters in your model's architecture and compare it with the one reported by `summary()`. Finally, plot the learning curves and report the accuracy on the test set.
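A sketch of this architecture in Keras is shown below; the table does not fix a kernel size or padding, so 3x3 kernels with `"same"` padding and 2x2 max pooling are assumptions made here:

```python
from tensorflow import keras

model = keras.models.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", padding="same",
                        input_shape=[32, 32, 3]),                     # C1
    keras.layers.BatchNormalization(),                                # B2
    keras.layers.Conv2D(32, 3, activation="relu", padding="same"),    # C3
    keras.layers.BatchNormalization(),                                # B4
    keras.layers.MaxPooling2D(),                                      # S5
    keras.layers.Conv2D(64, 3, activation="relu", padding="same"),    # C6
    keras.layers.BatchNormalization(),                                # B7
    keras.layers.Conv2D(64, 3, activation="relu", padding="same"),    # C8
    keras.layers.BatchNormalization(),                                # B9
    keras.layers.MaxPooling2D(),                                      # S10
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),                     # Output
])
model.compile(loss="sparse_categorical_crossentropy",
              optimizer=keras.optimizers.Nadam(),
              metrics=["accuracy"])
```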
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(c) Looking at the learning curves, you can see that the model is overfitting. Add a data augmentation layer for the model in (b) as follows:
* Applies random horizontal flipping
* Rotates the input images by a random value in the range `[-18 degrees, +18 degrees]`
* Zooms in or out of the image by a random factor in the range `[-15%, +15%]`
* Randomly chooses a location to crop images down to a target size of `[30, 30]`
* Randomly adjusts the contrast of images so that the resulting images are `[0.9, 1.1]` times brighter or darker than the original one.

Fit your model for enough epochs (60, for instance) and compare its performance and learning curves with the previous model in (b). Finally, report the accuracy on the test set.
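One way to express these augmentations with Keras preprocessing layers is sketched below (assuming a recent TensorFlow where they live under `keras.layers`; in older versions they sit under `keras.layers.experimental.preprocessing`). Rotation and zoom factors are fractions, so ±18 degrees ≈ 18/360 = 0.05 and ±15% = 0.15:

```python
from tensorflow import keras

data_augmentation = keras.models.Sequential([
    keras.layers.RandomFlip("horizontal"),   # random horizontal flipping
    keras.layers.RandomRotation(0.05),       # about ±18 degrees
    keras.layers.RandomZoom(0.15),           # zoom in/out by up to 15%
    keras.layers.RandomCrop(30, 30),         # random crop to a 30x30 target size
    keras.layers.RandomContrast(0.1),        # contrast factor in [0.9, 1.1]
])
```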
###Code
# coding your answer here.
###Output
_____no_output_____
###Markdown
(d) Replace all the convolution layers in (b) with depthwise separable convolution layers (except the first convolution layer). Try to manually calculate the number of parameters in your model's architecture and compare it with the one reported by `summary()`. Fit your model and compare its performance with the previous model in (c). Finally, plot the learning curves and report the accuracy on the test set.
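A sketch of the depthwise-separable variant of the network from (b), keeping the first layer as a regular convolution (same 3x3/`"same"` assumptions as before):

```python
from tensorflow import keras

model_sep = keras.models.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", padding="same",
                        input_shape=[32, 32, 3]),  # first layer stays a regular convolution
    keras.layers.BatchNormalization(),
    keras.layers.SeparableConv2D(32, 3, activation="relu", padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPooling2D(),
    keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.SeparableConv2D(64, 3, activation="relu", padding="same"),
    keras.layers.BatchNormalization(),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])
```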
###Code
# coding your answer here.
###Output
_____no_output_____ |
Transfer Learning VGG-16.ipynb | ###Markdown
Splitting Train, Validation, Test Data
###Code
train_dir = 'training_data'
val_dir = 'validation_data'
test_dir = 'test_data'
train_files = np.concatenate([cat_train, dog_train])
validate_files = np.concatenate([cat_val, dog_val])
test_files = np.concatenate([cat_test, dog_test])
os.mkdir(train_dir) if not os.path.isdir(train_dir) else None
os.mkdir(val_dir) if not os.path.isdir(val_dir) else None
os.mkdir(test_dir) if not os.path.isdir(test_dir) else None
for fn in train_files:
shutil.copy(fn, train_dir)
for fn in validate_files:
shutil.copy(fn, val_dir)
for fn in test_files:
shutil.copy(fn, test_dir)
#!rm -r test_data/ training_data/ validation_data/
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
IMG_DIM = (150,150)
train_files = glob.glob('training_data/*')
train_imgs = [];train_labels = []
for file in train_files:
try:
train_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
train_labels.append(file.split('/')[1].split('_')[0])
except:
pass
train_imgs = np.array(train_imgs)
validation_files = glob.glob('validation_data/*')
validation_imgs = [];validation_labels = []
for file in validation_files:
try:
validation_imgs.append( img_to_array(load_img( file,target_size=IMG_DIM )) )
validation_labels.append(file.split('/')[1].split('_')[0])
except:
pass
train_imgs = np.array(train_imgs)
validation_imgs = np.array(validation_imgs)
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
# encode text category labels
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
###Output
_____no_output_____
###Markdown
Image Augmentation
###Code
train_datagen = ImageDataGenerator(rescale=1./255,
zoom_range=0.3,
rotation_range=50,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
val_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
###Output
_____no_output_____
###Markdown
Keras Model
###Code
from keras.layers import Flatten, Dense, Dropout
from keras.applications import VGG16
from keras.models import Model
from keras import optimizers
input_shape = (150, 150, 3)
vgg = VGG16(include_top=False, weights='imagenet',input_shape=input_shape)
vgg.trainable = False
for layer in vgg.layers[:-8]:
layer.trainable = False
vgg_output = vgg.layers[-1].output
fc1 = Flatten()(vgg_output)
fc1 = Dense(512, activation='relu')(fc1)
fc1_dropout = Dropout(0.3)(fc1)
fc2 = Dense(512, activation='relu')(fc1_dropout)
fc2_dropout = Dropout(0.3)(fc2)
output = Dense(1, activation='sigmoid')(fc2_dropout)
model = Model(vgg.input, output)
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['accuracy'])
model.summary()
import pandas as pd
layers = [(layer, layer.name, layer.trainable) for layer in model.layers]
pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable'])
from keras.callbacks import EarlyStopping, ModelCheckpoint
filepath="saved_models/vgg_transfer_learn_dogvscat.h5"
save_model_cb = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max')
# callback to stop the training if no improvement
early_stopping_cb = EarlyStopping(monitor='val_loss', patience=7, mode='min')
callbacks_list = [save_model_cb,early_stopping_cb]
history = model.fit_generator(train_generator, steps_per_epoch=100, epochs=100,
validation_data=val_generator, validation_steps=50,
verbose=2,callbacks=callbacks_list)
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/100
- 24s - loss: 0.6913 - acc: 0.5683 - val_loss: 0.5941 - val_acc: 0.7670
Epoch 00001: val_acc improved from -inf to 0.76700, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 2/100
- 21s - loss: 0.5215 - acc: 0.7513 - val_loss: 0.4317 - val_acc: 0.8000
Epoch 00002: val_acc improved from 0.76700 to 0.80000, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 3/100
- 18s - loss: 0.4392 - acc: 0.8050 - val_loss: 0.2326 - val_acc: 0.9060
Epoch 00003: val_acc improved from 0.80000 to 0.90600, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 4/100
- 18s - loss: 0.3877 - acc: 0.8360 - val_loss: 0.2522 - val_acc: 0.9080
Epoch 00004: val_acc improved from 0.90600 to 0.90800, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 5/100
- 18s - loss: 0.4047 - acc: 0.8420 - val_loss: 0.1939 - val_acc: 0.9210
Epoch 00005: val_acc improved from 0.90800 to 0.92100, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 6/100
- 20s - loss: 0.3546 - acc: 0.8580 - val_loss: 0.3172 - val_acc: 0.8950
Epoch 00006: val_acc did not improve from 0.92100
Epoch 7/100
- 19s - loss: 0.3495 - acc: 0.8640 - val_loss: 0.2710 - val_acc: 0.9080
Epoch 00007: val_acc did not improve from 0.92100
Epoch 8/100
- 18s - loss: 0.3236 - acc: 0.8613 - val_loss: 0.1848 - val_acc: 0.9270
Epoch 00008: val_acc improved from 0.92100 to 0.92700, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 9/100
- 18s - loss: 0.3220 - acc: 0.8723 - val_loss: 0.2740 - val_acc: 0.9320
Epoch 00009: val_acc improved from 0.92700 to 0.93200, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 10/100
- 19s - loss: 0.3034 - acc: 0.8746 - val_loss: 0.1948 - val_acc: 0.9310
Epoch 00010: val_acc did not improve from 0.93200
Epoch 11/100
- 18s - loss: 0.2919 - acc: 0.8820 - val_loss: 0.1839 - val_acc: 0.9400
Epoch 00011: val_acc improved from 0.93200 to 0.94000, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 12/100
- 18s - loss: 0.2987 - acc: 0.8816 - val_loss: 0.2716 - val_acc: 0.9030
Epoch 00012: val_acc did not improve from 0.94000
Epoch 13/100
- 18s - loss: 0.2879 - acc: 0.8870 - val_loss: 0.1975 - val_acc: 0.9200
Epoch 00013: val_acc did not improve from 0.94000
Epoch 14/100
- 18s - loss: 0.3175 - acc: 0.8716 - val_loss: 0.2850 - val_acc: 0.9240
Epoch 00014: val_acc did not improve from 0.94000
Epoch 15/100
- 19s - loss: 0.3048 - acc: 0.8863 - val_loss: 0.2069 - val_acc: 0.9200
Epoch 00015: val_acc did not improve from 0.94000
Epoch 16/100
- 19s - loss: 0.3155 - acc: 0.8766 - val_loss: 0.1515 - val_acc: 0.9390
Epoch 00016: val_acc did not improve from 0.94000
Epoch 17/100
- 18s - loss: 0.2923 - acc: 0.8963 - val_loss: 0.2877 - val_acc: 0.8920
Epoch 00017: val_acc did not improve from 0.94000
Epoch 18/100
- 18s - loss: 0.2936 - acc: 0.8880 - val_loss: 0.1676 - val_acc: 0.9330
Epoch 00018: val_acc did not improve from 0.94000
Epoch 19/100
- 19s - loss: 0.3047 - acc: 0.8813 - val_loss: 0.2313 - val_acc: 0.9230
Epoch 00019: val_acc did not improve from 0.94000
Epoch 20/100
- 18s - loss: 0.3235 - acc: 0.8896 - val_loss: 0.2459 - val_acc: 0.9170
Epoch 00020: val_acc did not improve from 0.94000
Epoch 21/100
- 18s - loss: 0.2898 - acc: 0.8893 - val_loss: 0.2059 - val_acc: 0.9380
Epoch 00021: val_acc did not improve from 0.94000
Epoch 22/100
- 18s - loss: 0.2759 - acc: 0.8943 - val_loss: 0.2351 - val_acc: 0.9220
Epoch 00022: val_acc did not improve from 0.94000
Epoch 23/100
- 18s - loss: 0.2981 - acc: 0.8783 - val_loss: 0.1400 - val_acc: 0.9450
Epoch 00023: val_acc improved from 0.94000 to 0.94500, saving model to saved_models/vgg_transfer_learn_dogvscat.h5
Epoch 24/100
- 19s - loss: 0.3097 - acc: 0.8903 - val_loss: 0.2690 - val_acc: 0.9260
Epoch 00024: val_acc did not improve from 0.94500
Epoch 25/100
- 18s - loss: 0.3076 - acc: 0.8913 - val_loss: 0.1940 - val_acc: 0.9110
Epoch 00025: val_acc did not improve from 0.94500
Epoch 26/100
- 18s - loss: 0.2969 - acc: 0.8953 - val_loss: 0.2101 - val_acc: 0.9300
Epoch 00026: val_acc did not improve from 0.94500
Epoch 27/100
- 18s - loss: 0.2720 - acc: 0.8883 - val_loss: 0.2825 - val_acc: 0.8800
Epoch 00027: val_acc did not improve from 0.94500
Epoch 28/100
- 19s - loss: 0.3663 - acc: 0.8792 - val_loss: 0.2281 - val_acc: 0.9450
Epoch 00028: val_acc did not improve from 0.94500
Epoch 29/100
- 18s - loss: 0.2987 - acc: 0.8820 - val_loss: 0.1870 - val_acc: 0.9220
Epoch 00029: val_acc did not improve from 0.94500
Epoch 30/100
- 18s - loss: 0.3263 - acc: 0.8983 - val_loss: 0.2469 - val_acc: 0.9280
Epoch 00030: val_acc did not improve from 0.94500
###Markdown
Model Performance
###Code
%matplotlib inline
import matplotlib.pyplot as plt
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Basic CNN Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = history.epoch
ax1.plot(epoch_list, history.history['acc'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_acc'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, epoch_list[-1], 3))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, epoch_list[-1], 3))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
if not os.path.exists('saved_models'): os.mkdir('saved_models')
model.save('saved_models/vgg transfer learning.h5')
###Output
_____no_output_____ |
code/api/python/flickr-download-theatre.ipynb | ###Markdown
This script is based on instructions given in [this lesson](https://github.com/HeardLibrary/digital-scholarship/blob/master/code/scrape/pylesson/lesson2-api.ipynb).

Import libraries and load the API key from a file

The API key should be the only item in a text file called `flickr_api_key.txt` located in the user's home directory. No trailing newline, and don't include the "secret".
###Code
from pathlib import Path
import requests
import json
import csv
from time import sleep
import webbrowser
# define some canned functions we need to use
# write a list of dictionaries to a CSV file
def write_dicts_to_csv(table, filename, fieldnames):
with open(filename, 'w', newline='', encoding='utf-8') as csv_file_object:
writer = csv.DictWriter(csv_file_object, fieldnames=fieldnames)
writer.writeheader()
for row in table:
writer.writerow(row)
home = str(Path.home()) #gets path to home directory; supposed to work for Win and Mac
key_filename = 'flickr_api_key.txt'
api_key_path = home + '/' + key_filename
try:
with open(api_key_path, 'rt', encoding='utf-8') as file_object:
api_key = file_object.read()
# print(api_key) # delete this line once the script is working; don't want the key as part of the notebook
except:
print(key_filename + ' file not found - is it in your home directory?')
###Output
_____no_output_____
###Markdown
Make a test API call to the account

We need to know the user ID. Go to flickr.com and search for vutheatre. The result is https://www.flickr.com/photos/123262983@N05, which tells us that the ID is 123262983@N05. There are many kinds of searches we can do; a list is [here](https://www.flickr.com/services/api/). Let's try `flickr.people.getPhotos` (described [here](https://www.flickr.com/services/api/flickr.people.getPhotos.html)). This method doesn't actually get the photos; it gets metadata about the photos for an account.

The main purpose of this query is to find out the number of photos that are available so that we know how to set up the next part. The number of photos is in `['photos']['total']`, so we can extract that from the response data.
###Code
user_id = '123262983@N05' # vutheatre's ID
endpoint_url = 'https://www.flickr.com/services/rest'
method = 'flickr.people.getPhotos'
filename = 'theatre-metadata.csv'
param_dict = {
'method' : method,
# 'tags' : 'kangaroo',
# 'extras' : 'url_o',
'per_page' : '1', # default is 100, maximum is 500. Use paging to retrieve more than 500.
'page' : '1',
'user_id' : user_id,
'oauth_consumer_key' : api_key,
'nojsoncallback' : '1', # this parameter causes the API to return actual JSON instead of its weird default string
'format' : 'json' # overrides the default XML serialization for the search results
}
metadata_response = requests.get(endpoint_url, params = param_dict)
# print(metadata_response.url) # uncomment this if testing is needed, again don't reveal key in notebook
data = metadata_response.json()
print(json.dumps(data, indent=4))
print()
number_photos = int(data['photos']['total']) # need to convert string to number
print('Number of photos: ', number_photos)
###Output
_____no_output_____
###Markdown
Test to see what kinds of useful metadata we can get

The instructions for the [method](https://www.flickr.com/services/api/flickr.people.getPhotos.html) say what kinds of "extras" you can request metadata about. Let's ask for everything that we care about and don't already know: `description,license,original_format,date_taken,original_format,geo,tags,machine_tags,media,url_t,url_o`

`url_t` is the URL for a thumbnail of the image and `url_o` is the URL to retrieve the original photo. The dimensions of these images will be given automatically when we request the URLs, so we don't need `o_dims`. There isn't any place to request the title, since it's automatically returned.
###Code
param_dict = {
'method' : method,
'extras' : 'description,license,original_format,date_taken,original_format,geo,tags,machine_tags,media,url_t,url_o',
'per_page' : '1', # default is 100, maximum is 500. Use paging to retrieve more than 500.
'page' : '1',
'user_id' : user_id,
'oauth_consumer_key' : api_key,
'nojsoncallback' : '1', # this parameter causes the API to return actual JSON instead of its weird default string
'format' : 'json' # overrides the default XML serialization for the search results
}
metadata_response = requests.get(endpoint_url, params = param_dict)
# print(metadata_response.url) # uncomment this if testing is needed, again don't reveal key in notebook
data = metadata_response.json()
print(json.dumps(data, indent=4))
print()
###Output
_____no_output_____
###Markdown
Create and test the function to extract the data we want
###Code
def extract_data(photo_number, data):
dictionary = {} # create an empty dictionary
# load the response data into a dictionary
dictionary['id'] = data['photos']['photo'][photo_number]['id']
dictionary['title'] = data['photos']['photo'][photo_number]['title']
dictionary['license'] = data['photos']['photo'][photo_number]['license']
dictionary['description'] = data['photos']['photo'][photo_number]['description']['_content']
# convert the stupid date format to ISO 8601 dateTime; don't know the time zone - maybe add later?
temp_time = data['photos']['photo'][photo_number]['datetaken']
dictionary['date_taken'] = temp_time.replace(' ', 'T')
dictionary['tags'] = data['photos']['photo'][photo_number]['tags']
dictionary['machine_tags'] = data['photos']['photo'][photo_number]['machine_tags']
dictionary['original_format'] = data['photos']['photo'][photo_number]['originalformat']
dictionary['latitude'] = data['photos']['photo'][photo_number]['latitude']
dictionary['longitude'] = data['photos']['photo'][photo_number]['longitude']
dictionary['thumbnail_url'] = data['photos']['photo'][photo_number]['url_t']
dictionary['original_url'] = data['photos']['photo'][photo_number]['url_o']
dictionary['original_height'] = data['photos']['photo'][photo_number]['height_o']
dictionary['original_width'] = data['photos']['photo'][photo_number]['width_o']
return dictionary
# test the function with a single row
table = []
photo_number = 0
photo_dictionary = extract_data(photo_number, data)
table.append(photo_dictionary)
# write the data to a file
fieldnames = photo_dictionary.keys() # use the keys from the last dictionary for column headers; assume all are the same
write_dicts_to_csv(table, filename, fieldnames)
print('Done')
###Output
_____no_output_____
###Markdown
Create the loops to do the paging

Flickr limits the number of photos that can be requested to 500. Since we have more than that, we need to request the data 500 photos at a time.
###Code
per_page = 5 # use 500 for full download, use smaller number like 5 for testing
pages = number_photos // per_page # the // operator returns the integer part of the division ("floor")
table = []
#for page_number in range(0, pages + 1): # need to add one to get the final partial page
for page_number in range(0, 1): # use this to do only one page for testing
print('retrieving page ', page_number + 1)
page_string = str(page_number + 1)
param_dict = {
'method' : method,
'extras' : 'description,license,original_format,date_taken,original_format,geo,tags,machine_tags,media,url_t,url_o',
'per_page' : str(per_page), # default is 100, maximum is 500.
'page' : page_string,
'user_id' : user_id,
'oauth_consumer_key' : api_key,
'nojsoncallback' : '1', # this parameter causes the API to return actual JSON instead of its weird default string
'format' : 'json' # overrides the default XML serialization for the search results
}
metadata_response = requests.get(endpoint_url, params = param_dict)
data = metadata_response.json()
# print(json.dumps(data, indent=4)) # uncomment this line for testing
# data['photos']['photo'] is the number of photos for which data was returned
for image_number in range(0, len(data['photos']['photo'])):
photo_dictionary = extract_data(image_number, data)
table.append(photo_dictionary)
# write the data to a file
# We could just do this for all the data at the end.
# But if the search fails in the middle, we will at least get partial results
fieldnames = photo_dictionary.keys() # use the keys from the last dictionary for column headers; assume all are the same
write_dicts_to_csv(table, filename, fieldnames)
sleep(1) # wait a second to avoid getting blocked for hitting the API to rapidly
print('Done')
###Output
_____no_output_____ |
decision_score.ipynb | ###Markdown
***UNIVERSIDADE FEDERAL DO MATO GROSSO DO SUL*** Data analysis to increase the satisfaction level of a marketplace's customers using a decision tree. **ASSIGNMENT 3 - ARTIFICIAL INTELLIGENCE 2021/1** __________________________________________________ **Student:** Name: **José Augusto Lajo Vieira Vital**
###Code
# START OF THE STUDY
###Output
_____no_output_____
###Markdown
**Importing the os module**
###Code
import os
###Output
_____no_output_____
###Markdown
**Setting the working directory**
###Code
os.chdir('/content/drive/MyDrive/datasets')
from google.colab import drive
drive.mount('/content/drive')
!pwd
!ls
###Output
olist_customers_dataset.csv olist_products_dataset.csv
olist_geolocation_dataset.csv olist_sellers_dataset.csv
olist_order_items_dataset.csv product_category_name_translation.csv
olist_order_payments_dataset.csv tabela_resulta.csv
olist_order_reviews_dataset.csv tabela_result.csv
olist_orders_dataset.csv
###Markdown
**Importing the Pandas and NumPy libraries**
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
**Reading the .csv tables**
###Code
tabela_cliente = pd.read_csv('olist_customers_dataset.csv')
tabela_localizacao = pd.read_csv('olist_geolocation_dataset.csv')
tabela_pedido = pd.read_csv('olist_order_items_dataset.csv')
tabela_pagamento = pd.read_csv('olist_order_payments_dataset.csv')
tabela_review = pd.read_csv('olist_order_reviews_dataset.csv')
tabela_entrega_pedido = pd.read_csv('olist_orders_dataset.csv')
tabela_descricao_produto = pd.read_csv('olist_products_dataset.csv')
tabela_vendedor = pd.read_csv('olist_sellers_dataset.csv')
tabela_categoria_traduzido = pd.read_csv('product_category_name_translation.csv')
###Output
_____no_output_____
###Markdown
**Checking the first 5 rows of each table**
###Code
tabela_cliente.head()
tabela_localizacao.head()
tabela_pedido.head()
tabela_pagamento.head()
tabela_review.head()
tabela_entrega_pedido.head()
tabela_descricao_produto.head()
tabela_vendedor.head()
tabela_categoria_traduzido.head()
###Output
_____no_output_____
###Markdown
**Start of the process of merging the 9 available tables in order to produce a resulting table that contains the most important elements for determining the review_score. In the first merge, we join the customers table with the corresponding order deliveries using each customer's individual id as the key.**
###Code
pd.merge(tabela_cliente, tabela_entrega_pedido, on=["customer_id"], how="inner")
###Output
_____no_output_____
###Markdown
**Merging with the remaining available tables**

**1 - (Customers, Deliveries)**
**2 - (1, Order items)**
**3 - (2, Payments)**
**4 - (3, Reviews)**
**5 - (4, Sellers)**
###Code
test = pd.merge(tabela_cliente, tabela_entrega_pedido, on=["customer_id"], how="inner")
test = pd.merge(test, tabela_pedido, on=["order_id"], how="inner")
test = pd.merge(test, tabela_pagamento, on=["order_id"], how="inner")
test = pd.merge(test, tabela_review, on=["order_id"], how="inner")
test = pd.merge(test, tabela_vendedor, on=["seller_id"], how="inner")
###Output
_____no_output_____
###Markdown
**Resulting table**

**Rows: 118315**
**Columns: 31**
###Code
test
###Output
_____no_output_____
###Markdown
**The second filtering step consists of removing attributes that have no relation to the review_score variable**
###Code
#test = test.drop(columns=["customer_unique_id"],axis=1)
#test = test.drop(columns=["customer_city"],axis=1)
#test = test.drop(columns=["customer_state"],axis=1)
#test = test.drop(columns=["order_status"],axis=1)
#test = test.drop(columns=["order_purchase_timestamp"],axis=1)
#test = test.drop(columns=["order_approved_at"],axis=1)
#test = test.drop(columns=["order_delivered_carrier_date"],axis=1)
#test = test.drop(columns=["order_delivered_customer_date"],axis=1)
#test = test.drop(columns=["order_estimated_delivery_date"],axis=1)
#test = test.drop(columns=["shipping_limit_date"],axis=1)
#test = test.drop(columns=["review_creation_date"],axis=1)
#test = test.drop(columns=["review_answer_timestamp"],axis=1)
#test = test.drop(columns=["seller_city"],axis=1)
#test = test.drop(columns=["seller_state"],axis=1)
#test = test.drop(columns=["review_comment_title"],axis=1)
#test = test.drop(columns=["review_comment_message"],axis=1)
###Output
_____no_output_____
###Markdown
**Resulting table after removing attributes that are not a priority for the customers' satisfaction level**

**Rows: 118315**
**Columns: 15**
###Code
test
###Output
_____no_output_____
###Markdown
**Putting each attribute of the resulting table into an array for easier data manipulation**
###Code
vetor_cliente = np.array(test.customer_id)
vetor_cepcliente = np.array(test.customer_zip_code_prefix)
vetor_pedido = np.array(test.order_id)
vetor_idpedido = np.array(test.order_item_id)
vetor_produto = np.array(test.product_id)
vetor_vendedor = np.array(test.seller_id)
vetor_preco_produto = np.array(test.price)
vetor_frete = np.array(test.freight_value)
vetor_parcela = np.array(test.payment_sequential)
vetor_tipopagamento = np.array(test.payment_type)
vetor_pay = np.array(test.payment_installments)
vetor_valorfinal = np.array(test.payment_value)
vetor_review = np.array(test.review_id)
vetor_score = np.array(test.review_score)
vetor_cepvendedor = np.array(test.seller_zip_code_prefix)
###Output
_____no_output_____
###Markdown
**Defining a new, empty dataframe**
###Code
df = pd.DataFrame()
df
###Output
_____no_output_____
###Markdown
**Defining the columns of the new dataframe and assigning to each column its respective data array recorded earlier.**
###Code
COLUNAS = [
'Cliente',
'CEP_Cliente',
'Pedido',
'id_Pedido',
'Produto',
'Vendedor',
'Preco_produto',
'Frete',
'Parcela',
'Tipo_pagamento',
'Installments',
'Valor_total',
'ID_Review',
'CEP_Vendedor',
'Score'
]
df = pd.DataFrame(columns =COLUNAS)
df.Cliente = vetor_cliente
df.CEP_Cliente = vetor_cepcliente
df.Pedido = vetor_pedido
df.id_Pedido = vetor_idpedido
df.Produto = vetor_produto
df.Vendedor = vetor_vendedor
df.Preco_produto = vetor_preco_produto
df.Frete = vetor_frete
df.Parcela = vetor_parcela
df.Tipo_pagamento = vetor_tipopagamento
df.Installments = vetor_pay
df.Valor_total = vetor_valorfinal
df.ID_Review = vetor_review
df.CEP_Vendedor = vetor_cepvendedor
df.Score = vetor_score
df
###Output
_____no_output_____
###Markdown
**Printing the customers column.**
###Code
df.Cliente
#for index, row in df.iterrows():
# if row['Score'] == 1:
# df.loc[index,'Classe'] = 'Pessimo'
# if row['Score'] == 2:
# df.loc[index,'Classe'] = 'Ruim'
# if row['Score'] == 3:
# df.loc[index,'Classe'] = 'Mediano'
# if row['Score'] == 4:
# df.loc[index,'Classe'] = 'Bom'
# if row['Score'] == 5:
# df.loc[index,'Classe'] = 'Otimo'
###Output
_____no_output_____
###Markdown
**Dataframe information**

**Attributes, non-null elements, and the type of each column's variables**
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 118315 entries, 0 to 118314
Data columns (total 15 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Cliente 118315 non-null object
1 CEP_Cliente 118315 non-null int64
2 Pedido 118315 non-null object
3 id_Pedido 118315 non-null int64
4 Produto 118315 non-null object
5 Vendedor 118315 non-null object
6 Preco_produto 118315 non-null float64
7 Frete 118315 non-null float64
8 Parcela 118315 non-null int64
9 Tipo_pagamento 118315 non-null object
10 Installments 118315 non-null int64
11 Valor_total 118315 non-null float64
12 ID_Review 118315 non-null object
13 CEP_Vendedor 118315 non-null int64
14 Score 118315 non-null int64
dtypes: float64(3), int64(6), object(6)
memory usage: 13.5+ MB
###Markdown
**Grouping the dataframe's elements by customer**
###Code
df.groupby(by='Cliente').size()
###Output
_____no_output_____
###Markdown
**Importing the decision tree methods**
###Code
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.model_selection import train_test_split
from sklearn import metrics
###Output
_____no_output_____
###Markdown
**To simplify the data and avoid creating dummies for this dataframe, we remove all non-numeric characters so that the model is able to run. This was the solution found to simplify the "object"-type attributes into numeric types.**
###Code
df['Cliente'] = df['Cliente'].str.replace(r'\D', '')
df['Pedido'] = df['Pedido'].str.replace(r'\D', '')
df['Produto'] = df['Produto'].str.replace(r'\D', '')
df['Vendedor'] = df['Vendedor'].str.replace(r'\D', '')
df['ID_Review'] = df['ID_Review'].str.replace(r'\D', '')
df
###Output
_____no_output_____
###Markdown
**We applied the non-numeric-character removal procedure to all object-type columns except the payment type, since the payment type comes down to only a few options. Therefore, we use the get_dummies function only for the payment type.**

**As a result, the Tipo_pagamento column is split into four boolean columns. The new columns are: Tipo_pagamento_boleto, Tipo_pagamento_credit_card, Tipo_pagamento_debit_card, Tipo_pagamento_voucher**
###Code
result_df = pd.get_dummies(df, columns=["Tipo_pagamento"])
###Output
_____no_output_____
###Markdown
**Final result of the dataframe**
###Code
result_df
###Output
_____no_output_____
###Markdown
**Creating a backup dataframe for possible conclusions**
###Code
reserva = result_df
reserva
###Output
_____no_output_____
###Markdown
**Removing all rows with satisfaction level 4 or 5. This way, we have one dataframe with all the data and another with only the data rated level 3, 2 or 1, i.e. mediocre, bad or terrible (elements whose level of dissatisfaction is interesting for the analysis).**
###Code
reserva = reserva.drop(reserva[reserva.Score > 3].index)
reserva
###Output
_____no_output_____
###Markdown
**Train/test split process**

**Established proportions:**
**70% training**
**30% test**
###Code
X_train, X_test, y_train, y_test = train_test_split(result_df.drop('Score', axis=1), result_df['Score'], test_size=0.3)
###Output
_____no_output_____
###Markdown
**Number of samples for each set**
###Code
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
**Number of targets for each set**
###Code
y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
**Creating the classifier**
###Code
cls = DecisionTreeClassifier()
###Output
_____no_output_____
###Markdown
**Training**
###Code
cls = cls.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
**Array with the importance of each attribute for determining the review_score**
###Code
cls.feature_importances_
df.head()
###Output
_____no_output_____
###Markdown
**To make the model more visual, we create a loop to print the weight of each attribute in determining the score**
###Code
for feature, importancia in zip(result_df.columns, cls.feature_importances_):
print("{}:{:.1f}%".format(feature,((importancia*100))))
###Output
Cliente:10.4%
CEP_Cliente:11.0%
Pedido:10.4%
id_Pedido:0.8%
Produto:9.4%
Vendedor:8.0%
Preco_produto:8.0%
Frete:8.1%
Parcela:0.3%
Installments:3.7%
Valor_total:8.6%
ID_Review:11.4%
CEP_Vendedor:7.7%
Score:0.8%
Tipo_pagamento_boleto:0.8%
Tipo_pagamento_credit_card:0.2%
Tipo_pagamento_debit_card:0.3%
###Markdown
**Array of predictions for checking what was learned**
###Code
result = cls.predict(X_test)
result
result_df.Score[118310]
###Output
_____no_output_____
###Markdown
**Precision metrics and averages of the model**
###Code
from sklearn import metrics
print(metrics.classification_report(y_test,result))
###Output
precision recall f1-score support
1 0.26 0.27 0.27 4602
2 0.12 0.13 0.12 1193
3 0.15 0.17 0.16 2975
4 0.24 0.25 0.24 6685
5 0.61 0.58 0.60 20040
accuracy 0.43 35495
macro avg 0.28 0.28 0.28 35495
weighted avg 0.44 0.43 0.44 35495
###Markdown
**Overall accuracy**
###Code
from sklearn.model_selection import cross_val_score
allScores = cross_val_score(cls, X_train, y_train , cv=10)
allScores.mean()
###Output
_____no_output_____
###Markdown
**Training using the backup dataframe (only the below-average satisfaction levels, score <= 3).**

**Split of the dataframe into train and test sets (70% and 30%, respectively)**
###Code
X_train, X_test, y_train, y_test = train_test_split(reserva.drop('Score', axis=1), reserva['Score'], test_size=0.3)
###Output
_____no_output_____
###Markdown
**Number of training samples**
###Code
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
**Classifier**
###Code
clf = DecisionTreeClassifier()
###Output
_____no_output_____
###Markdown
**Training**
###Code
clf = clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
**Importance of each attribute for determining the customers' satisfaction level**
###Code
clf.feature_importances_
for feature, importancia in zip(reserva.columns, clf.feature_importances_):
print("{}:{:.1f}%".format(feature,((importancia*100))))
###Output
Cliente:10.0%
CEP_Cliente:11.1%
Pedido:10.9%
id_Pedido:0.8%
Produto:9.5%
Vendedor:8.3%
Preco_produto:7.4%
Frete:8.3%
Parcela:0.2%
Installments:3.6%
Valor_total:8.7%
ID_Review:11.6%
CEP_Vendedor:7.6%
Score:0.7%
Tipo_pagamento_boleto:0.7%
Tipo_pagamento_credit_card:0.2%
Tipo_pagamento_debit_card:0.3%
|
chapters/05_object_recognition_and_classification/Chapter 5 - 03 Layers.ipynb | ###Markdown
Table of Contents
0.1 Common Layers
0.1.1 Convolution Layers
0.1.1.1 tf.nn.depthwise_conv2d
0.1.1.2 tf.nn.separable_conv2d
0.1.1.3 tf.nn.conv2d_transpose
0.1.2 Activation Functions
0.1.2.1 tf.nn.relu
0.1.2.2 tf.sigmoid
0.1.2.3 tf.tanh
0.1.2.4 tf.nn.dropout
0.1.3 Pooling Layers
0.1.3.1 tf.nn.max_pool
0.1.3.2 tf.nn.avg_pool
0.1.4 Normalization
0.1.4.1 tf.nn.local_response_normalization (tf.nn.lrn)
0.1.5 High Level Layers
0.1.5.1 tf.contrib.layers.convolution2d
0.1.5.2 tf.contrib.layers.fully_connected
0.1.5.3 Layer Input

Common Layers

For a neural network architecture to be considered a CNN, it requires at least one convolution layer (`tf.nn.conv2d`). There are practical uses for a single-layer CNN (edge detection); for image recognition and categorization it is common to use different layer types to support a convolution layer. These layers help reduce over-fitting, speed up training and decrease memory usage.

The layers covered in this chapter are focused on layers commonly used in a CNN architecture. A CNN isn't limited to only these layers; they can be mixed with layers designed for other network architectures.
###Code
# setup-only-ignore
import tensorflow as tf
import numpy as np
# setup-only-ignore
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Convolution Layers

One type of convolution layer has been covered in detail (`tf.nn.conv2d`) but there are a few notes which are useful to advanced users. The convolution layers in TensorFlow don't do a full convolution; details can be found in [the TensorFlow API documentation](https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.htmlconvolution). In practice, the difference between a convolution and the operation TensorFlow uses is performance. TensorFlow uses a technique to speed up the convolution operation in all the different types of convolution layers.

There are use cases for each type of convolution layer but `tf.nn.conv2d` is a good place to start. The other types of convolutions are useful but not required in building a network capable of object recognition and classification. A brief summary of each is included.

tf.nn.depthwise_conv2d

Used when attaching the output of one convolution to the input of another convolution layer. An advanced use case is using a `tf.nn.depthwise_conv2d` to create a network following the [inception architecture](http://arxiv.org/abs/1512.00567).

tf.nn.separable_conv2d

Similar to `tf.nn.conv2d` but not a replacement. For large models, it speeds up training without sacrificing accuracy. For small models, it will converge quickly with worse accuracy.

tf.nn.conv2d_transpose

Applies a kernel to a new feature map where each section is filled with the same values as the kernel. As the kernel strides over the new image, any overlapping sections are summed together. There is a great explanation on how `tf.nn.conv2d_transpose` is used for learnable upsampling in [Stanford's CS231n Winter 2016: Lecture 13](https://www.youtube.com/watch?v=ByjaPdWXKJ4&t=20m00s).

Activation Functions

These functions are used in combination with the output of other layers to generate a feature map. They're used to smooth (or differentiate) the results of certain operations. The goal is to introduce non-linearity into the neural network. Non-linearity means that the input is a curve instead of a straight line. Curves are capable of representing more complex changes in input. For example, non-linear input is capable of describing input which stays small for the majority of the time but periodically has a single point at an extreme. Introduction of non-linearity in a neural network allows it to train on the complex patterns found in data.

TensorFlow has [multiple activation functions](https://www.tensorflow.org/versions/r0.8/api_docs/python/nn.htmlactivation-functions) available. With CNNs, `tf.nn.relu` is primarily used because of its performance, although it sacrifices information. When starting out, using `tf.nn.relu` is recommended, but advanced users may create their own. When considering whether an activation function is useful, there are a few primary considerations.

1. The function is [**monotonic**](https://en.wikipedia.org/wiki/Monotonic_function), so its output should always be increasing or decreasing along with the input. This allows gradient descent optimization to search for local minima.
2. The function is [**differentiable**](https://en.wikipedia.org/wiki/Differentiable_function), so there must be a derivative at any point in the function's domain. This allows gradient descent optimization to properly work using the output from this style of activation function.

Any functions which satisfy those considerations could be used as activation functions. In TensorFlow there are a few worth highlighting which are common to see in CNN architectures.
A brief summary of each is included with a small sample of code illustrating their usage.

tf.nn.relu

A rectifier (rectified linear unit) is called a ramp function in some documentation and looks like a skateboard ramp when plotted. ReLU is linear and keeps the same input values for any positive numbers while setting all negative numbers to 0. It has the benefits that it doesn't suffer from [gradient vanishing](https://en.wikipedia.org/wiki/Vanishing_gradient_problem) and has a range of \\([0,+\infty)\\). A drawback of ReLU is that it can suffer from neurons becoming saturated when too high a learning rate is used.
###Code
features = tf.range(-2, 3)
# Keep note of the value for negative features
sess.run([features, tf.nn.relu(features)])
###Output
_____no_output_____
###Markdown
In this example, the input is a rank one tensor (vector) of integer values between \\([-2, 3]\\). A `tf.nn.relu` is run over the values; the output highlights that any value less than 0 is set to 0. The other input values are left untouched.

tf.sigmoid

A sigmoid function returns a value in the range of \\([0.0, 1.0]\\). Larger values sent into a `tf.sigmoid` will trend closer to 1.0 while smaller values will trend towards 0.0. The ability of sigmoids to keep values between \\([0.0, 1.0]\\) is useful in networks which train on probabilities, which are in the range of \\([0.0, 1.0]\\). The reduced range of output values can cause trouble with input becoming saturated and changes in input becoming exaggerated.
###Code
# Note, tf.sigmoid (tf.nn.sigmoid) is currently limited to float values
features = tf.to_float(tf.range(-1, 3))
sess.run([features, tf.sigmoid(features)])
###Output
_____no_output_____
###Markdown
In this example, a range of integers is converted to float values (`1` becomes `1.0`) and a sigmoid function is run over the input features. The result highlights that when a value of 0.0 is passed through a sigmoid, the result is 0.5, which is the midpoint of the sigmoid's range. It's useful to note that with 0.5 being the sigmoid's midpoint, negative values can be used as input to a sigmoid.

tf.tanh

A hyperbolic tangent function (tanh) is a close relative of `tf.sigmoid` with some of the same benefits and drawbacks. The main difference between `tf.sigmoid` and `tf.tanh` is that `tf.tanh` has a range of \\([-1.0, 1.0]\\). The ability to output negative values may be useful in certain network architectures.
###Code
# Note, tf.tanh (tf.nn.tanh) is currently limited to float values
features = tf.to_float(tf.range(-1, 3))
sess.run([features, tf.tanh(features)])
###Output
_____no_output_____
###Markdown
In this example, all the setup is the same as in the `tf.sigmoid` example but the output shows an important difference. In the output of `tf.tanh` the midpoint is 0.0, with negative values possible. This can cause trouble if the next layer in the network isn't expecting negative input or input of 0.0.

tf.nn.dropout

Sets the output to 0.0 based on a configurable probability. This layer performs well in scenarios where a little randomness helps training. An example scenario is when there are patterns being learned which are too tied to their neighboring features. This layer will add a little noise to the output being learned.

**NOTE**: This layer should only be used during training because the random noise it adds will give misleading results while testing.
###Code
features = tf.constant([-0.1, 0.0, 0.1, 0.2])
# Note, the output should be different on almost every execution. Your numbers won't match
# this output.
sess.run([features, tf.nn.dropout(features, keep_prob=0.5)])
###Output
_____no_output_____
###Markdown
In this example, the output has a 50% probability of being kept. Each execution of this layer will have different output (most likely; it's somewhat random). When an output is dropped, its value is set to 0.0.

Pooling Layers

Pooling layers reduce over-fitting and improve performance by reducing the size of the input. They're used to scale down input while keeping important information for the next layer. It's possible to reduce the size of the input using a `tf.nn.conv2d` alone, but these layers execute much faster.

tf.nn.max_pool

Strides over a tensor and chooses the maximum value found within a certain kernel size. Useful when the intensity of the input data is relevant to importance in the image.

The same example is modeled using example code below. The goal is to find the largest value within the tensor.
###Code
# Usually the input would be output from a previous layer and not an image directly.
batch_size=1
input_height = 3
input_width = 3
input_channels = 1
layer_input = tf.constant([
[
[[1.0], [0.2], [1.5]],
[[0.1], [1.2], [1.4]],
[[1.1], [0.4], [0.4]]
]
])
# The strides will look at the entire input by using the image_height and image_width
kernel = [batch_size, input_height, input_width, input_channels]
max_pool = tf.nn.max_pool(layer_input, kernel, [1, 1, 1, 1], "VALID")
sess.run(max_pool)
###Output
_____no_output_____
###Markdown
The `layer_input` is a tensor with a shape similar to the output of `tf.nn.conv2d` or an activation function. The goal is to keep only one value, the largest value in the tensor. In this case, the largest value of the tensor is `1.5` and is returned in the same format as the input. If the `kernel` were set to be smaller, it would choose the largest value in each kernel size as it strides over the image.

Max-pooling will commonly be done using a `2x2` receptive field (kernel with a height of 2 and width of 2), which is often written as a "2x2 max-pooling operation". One reason to use a `2x2` receptive field is that it's the smallest amount of downsampling which can be done in a single pass. If a `1x1` receptive field were used, then the output would be the same as the input.

tf.nn.avg_pool

Strides over a tensor and averages all the values at each depth found within a kernel size. Useful when reducing values where the entire kernel is important, for example, input tensors with a large width and height but small depth.

The same example is modeled using example code below. The goal is to find the average of all the values within the tensor.
###Code
batch_size=1
input_height = 3
input_width = 3
input_channels = 1
layer_input = tf.constant([
[
[[1.0], [1.0], [1.0]],
[[1.0], [0.5], [0.0]],
[[0.0], [0.0], [0.0]]
]
])
# The strides will look at the entire input by using the image_height and image_width
kernel = [batch_size, input_height, input_width, input_channels]
max_pool = tf.nn.avg_pool(layer_input, kernel, [1, 1, 1, 1], "VALID")
sess.run(max_pool)
###Output
_____no_output_____
###Markdown
Average pooling sums all the values in the tensor and divides by the number of scalars in the tensor:\\(\dfrac{1.0 + 1.0 + 1.0 + 1.0 + 0.5 + 0.0 + 0.0 + 0.0 + 0.0}{9.0}\\)This is exactly what the example code did above, but by reducing the size of the kernel it's possible to adjust the size of the output. Normalization: Normalization layers are not unique to CNNs and aren't used as often. When using `tf.nn.relu`, it is useful to consider normalization of the output. Since ReLU is unbounded, it's often useful to utilize some form of normalization to identify high-frequency features. tf.nn.local_response_normalization (tf.nn.lrn): Local response normalization is a function which shapes the output based on a summation operation best explained in [TensorFlow's documentation](https://www.tensorflow.org/versions/master/api_docs/python/nn.html#local_response_normalization).> ... Within a given vector, each component is divided by the weighted, squared sum of inputs within depth_radius.One goal of normalization is to keep the input in a range of acceptable numbers, for instance normalizing input to the range \\([0.0,1.0]\\), where the full range of possible values is represented by numbers greater than or equal to `0.0` and less than or equal to `1.0`. Local response normalization normalizes values while taking into account the significance of each value.[Cuda-Convnet](https://code.google.com/p/cuda-convnet/wiki/LayerParams) includes further details on why using local response normalization is useful in some CNN architectures. [ImageNet](https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf) uses this layer to normalize the output from `tf.nn.relu`.
###Code
# Create a range of 3 floats.
# TensorShape([batch, image_height, image_width, image_channels])
layer_input = tf.constant([
[[[ 1.]], [[ 2.]], [[ 3.]]]
])
lrn = tf.nn.local_response_normalization(layer_input)
sess.run([layer_input, lrn])
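# Sketch of the usage pattern described in this section: applying local response
# normalization to the output of tf.nn.relu so the unbounded activations are kept
# in an acceptable range.
relu_output = tf.nn.relu(layer_input)
lrn_after_relu = tf.nn.local_response_normalization(relu_output)
sess.run([relu_output, lrn_after_relu])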
###Output
_____no_output_____
###Markdown
In this example code, the layer input is in the format `[batch, image_height, image_width, image_channels]`. The normalization reduced the output to be in the range of \\([-1.0, 1.0]\\). For `tf.nn.relu`, this layer will reduce its unbounded output to be in the same range. High Level Layers: TensorFlow has introduced high level layers designed to make it easier to create fairly standard layer definitions. They aren't required, but they help avoid duplicate code while following best practices. While getting started, these layers add a number of non-essential nodes to the graph, so it's worth waiting until the basics are comfortable before using them. tf.contrib.layers.convolution2d: The `convolution2d` layer will do the same logic as `tf.nn.conv2d` while including weight initialization, bias initialization, trainable variable output, bias addition and adding an activation function. Many of these steps haven't been covered for CNNs yet but should be familiar. A kernel is a trainable variable (the CNN's goal is to train this variable), and weight initialization fills the kernel with values (`tf.truncated_normal`) on its first run. The rest of the parameters are similar to what has been used before, except they are reduced to a short-hand version. Instead of declaring the full kernel, now it's a simple tuple `(1,1)` for the kernel's height and width.
###Code
image_input = tf.constant([
[
[[0., 0., 0.], [255., 255., 255.], [254., 0., 0.]],
[[0., 191., 0.], [3., 108., 233.], [0., 191., 0.]],
[[254., 0., 0.], [255., 255., 255.], [0., 0., 0.]]
]
])
conv2d = tf.contrib.layers.convolution2d(
image_input,
num_outputs=4,
kernel_size=(1,1), # It's only the filter height and width.
activation_fn=tf.nn.relu,
stride=(1, 1), # Skips the stride values for image_batch and input_channels.
trainable=True)
# It's required to initialize the variables used in convolution2d's setup.
sess.run(tf.global_variables_initializer())
sess.run(conv2d)
###Output
_____no_output_____
###Markdown
This example sets up a full convolution against a batch containing a single image. All the parameters are based on the steps done throughout this chapter. The main difference is that `tf.contrib.layers.convolution2d` does a large amount of setup without having to write it all again. This can be a great time-saving layer for advanced users.**NOTE**: `tf.to_float` should not be used if the input is an image; instead use `tf.image.convert_image_dtype`, which will properly change the range of values used to describe colors. In this example code, float values of `255.` were used, which aren't what TensorFlow expects when it sees an image using float values. TensorFlow expects an image with colors described as floats to stay in the range of \\([0,1]\\). tf.contrib.layers.fully_connected: A fully connected layer is one where every input is connected to every output. This is a fairly common layer in many architectures, but for CNNs the last layer is quite often fully connected. The `tf.contrib.layers.fully_connected` layer offers a great short-hand to create this last layer while following best practices.Typical fully connected layers in TensorFlow are often in the format of `tf.matmul(features, weight) + bias` where `features`, `weight` and `bias` are all tensors. This short-hand layer will do the same thing while taking care of the intricacies involved in managing the `weight` and `bias` tensors.
###Code
features = tf.constant([
[[1.2], [3.4]]
])
fc = tf.contrib.layers.fully_connected(features, num_outputs=2)
# It's required to initialize all the variables first or there'll be an error about precondition failures.
sess.run(tf.global_variables_initializer())
sess.run(fc)
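# For reference, a minimal sketch of what the short-hand wraps: an explicit
# tf.matmul(features, weight) + bias layer. The shapes below are assumptions chosen
# to match the single-feature input above (1 input feature -> 2 outputs).
flat_features = tf.reshape(features, [-1, 1])        # collapse to [rows, features]
weight = tf.Variable(tf.truncated_normal([1, 2]))    # trainable weight matrix
bias = tf.Variable(tf.zeros([2]))                    # trainable bias vector
manual_fc = tf.matmul(flat_features, weight) + bias
sess.run(tf.global_variables_initializer())
sess.run(manual_fc)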
###Output
_____no_output_____ |
notebooks/utils/experiment_logger.ipynb | ###Markdown
Specify the experiment directory: pass this in as a command line argument.
###Code
# Imports needed by the code in this notebook
import os
import glob
import cv2
import imageio
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Specify the experiment directory
experiment_dir = '/home/justinvyu/ray_results/gym/DClaw/TurnFreeValve3ResetFreeSwapGoal-v0/2019-08-16T02-38-24-state_estimation_scaled_goal_condition'
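# Sketch (assumption): if this were run as a script rather than a notebook, the
# directory could come from the command line instead of being hard-coded, e.g.:
#   import argparse
#   parser = argparse.ArgumentParser()
#   parser.add_argument('experiment_dir', help='path to the ray_results experiment directory')
#   experiment_dir = parser.parse_args().experiment_dir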
###Output
_____no_output_____
###Markdown
What needs to be saved?1. Plots of whatever the user passes in ("observation_keys") - TODO: Split by whatever the experiment is being tuned over (like in viskit)2. (# of goals, resets/reset-free, domain/task, VICE/gtr, etc.)3. Gifs of the run4. Important parameters
###Code
def log_experiment(experiment_dir, observation_keys):
# Search for the seed directories
for seed in glob.iglob(os.path.join(experiment_dir, '*')):
if not os.path.isdir(seed):
continue
test = '/home/justinvyu/ray_results/gym/DClaw/TurnFreeValve3ResetFreeSwapGoal-v0/2019-08-16T02-38-24-state_estimation_scaled_goal_condition/id=9867fc30-seed=2007_2019-08-16_02-38-25c0jt87k7/progress.csv'
with open(test, newline='') as f:
df = pd.read_csv(f)
df.columns
observation_keys = [
'object_to_target_circle_distance-last-mean',
'object_to_target_position_distance-last-mean',
]
# evaluation_obs_path = 'evaluation/env_infos/obs/'
# training_obs_path = 'training/env_infos/obs/'
def contains_str_from_list(str_to_check, str_list):
return any(s in str_to_check for s in str_list)
all_obs_keys_to_record = [
col_name for col_name in df.columns
if contains_str_from_list(col_name, observation_keys)]
# all_obs_keys_to_record = np.concatenate([
# [path + observation_key for observation_key in observation_keys]
# for path in (evaluation_obs_path, training_obs_path)
# ])
all_obs_keys_to_record
record_data = df[all_obs_keys_to_record]
num_keys = len(all_obs_keys_to_record)
if num_keys % 2 != 0:
num_keys += 1
num_rows = num_keys // 2
num_cols = 2
curr_row, curr_col = 0, 0
fig, ax = plt.subplots(2, 2, figsize=(18, 9))
for i, col in enumerate(record_data):
num_data_points = len(record_data[col])
data = record_data[col]
# ax[i].subplot(num_rows, num_cols, i + 1)
row_index, col_index = i // num_rows, i % num_cols
ax[row_index, col_index].set_title(col)
ax[row_index, col_index].plot(data)
# plt.show()
def generate_plots(seed_dir, save_dir, observation_keys, fig=None, axes=None):
data_fn = os.path.join(seed_dir, 'progress.csv')
with open(data_fn, newline='') as f:
df = pd.read_csv(f)
def contains_str_from_list(str_to_check, str_list):
return any(s in str_to_check for s in str_list)
all_obs_keys_to_record = [
col_name for col_name in df.columns
if contains_str_from_list(col_name, observation_keys)
]
record_data = df[all_obs_keys_to_record]
num_keys = len(all_obs_keys_to_record)
# Set up the figure
if num_keys % 2 != 0:
num_keys += 1
num_rows = num_keys // 2
num_cols = 2
if fig is None and axes is None:
fig, axes = plt.subplots(num_cols, num_rows, figsize=(18, 9))
for i, col in enumerate(record_data):
num_data_points = len(record_data[col])
data = record_data[col]
row_index, col_index = i // num_rows, i % num_cols
axes[row_index, col_index].set_title(col)
axes[row_index, col_index].plot(data, alpha=0.9)
return fig, axes
video_save_frequency = 100
video_path = '/home/justinvyu/ray_results/gym/DClaw/TurnFreeValve3ResetFreeSwapGoal-v0/2019-08-16T02-38-24-state_estimation_scaled_goal_condition/id=9867fc30-seed=2007_2019-08-16_02-38-25c0jt87k7/videos'
for video_path in glob.iglob(os.path.join(video_path, '*00_0.mp4')):
print(video_path)
test_video = '/home/justinvyu/ray_results/gym/DClaw/TurnFreeValve3ResetFreeSwapGoal-v0/2019-08-16T15-46-37-two_policies_debug/id=b529c39e-seed=2542_2019-08-16_15-46-38m9pcum43/videos/training_path_0_0.mp4'
def extract_video_frames(video_path, img_size):
vidcap = cv2.VideoCapture(video_path)
success, image = vidcap.read()
images = []
while success:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, img_size)
images.append(image)
success, image = vidcap.read()
return images
def convert_images_to_gif(images, save_path):
imageio.mimsave(save_path, images)
def video_to_gif(video_path, output_path, img_size=(100, 100)):
    images = extract_video_frames(video_path, img_size)  # use the function argument, not the global test path
convert_images_to_gif(images, output_path)
def save_gifs(seed_dir, save_dir, save_frequency=100):
video_path = os.path.join(seed_dir, 'videos')
# TODO: Find the videos to save w.r.t save_frequency.
for path in glob.iglob(os.path.join(video_path, '*00_0.mp4')):
seed_name = seed_dir.split('seed=')[-1].split('_')[0]
output_fn = 'seed=' + seed_name + '_' + path.split('/')[-1].replace('mp4', 'gif')
output_path = os.path.join(save_dir, output_fn)
video_to_gif(path, output_path)
def log_experiment(experiment_dir, observation_keys):
if not os.path.exists(os.path.join(experiment_dir, 'log')):
os.mkdir(os.path.join(experiment_dir, 'log'))
save_dir = os.path.join(experiment_dir, 'log')
# Search for the seed directories
fig, axes = None, None
for seed_dir in glob.iglob(os.path.join(experiment_dir, '*')):
if not os.path.isdir(seed_dir) or seed_dir == save_dir:
continue
fig, axes = generate_plots(seed_dir, save_dir, observation_keys, fig=fig, axes=axes)
save_gifs(seed_dir, save_dir)
output_fn = os.path.join(save_dir, 'plots.png')
plt.savefig(output_fn)
plt.show()
log_experiment('/home/justinvyu/ray_results/gym/DClaw/TurnFreeValve3ResetFreeSwapGoal-v0/2019-08-16T02-38-24-state_estimation_scaled_goal_condition/',
observation_keys)
###Output
_____no_output_____ |
homework/14 Homework_Melissa.ipynb | ###Markdown
[Assignment goals]1. [Short answer] Compare the two DataFrames read in below: how do they differ, and what causes the difference?2. Use the Dcard API to get the information for all boards, convert it into a DataFrame, sort it by popularity, and save it as a csv file. Assignment
###Code
# Remember to import the required packages first
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
1. [Short answer] Compare the two DataFrames read in below: how do they differ, and what causes the difference?
###Code
df1 = pd.read_csv('https://raw.githubusercontent.com/dataoptimal/posts/master/data%20cleaning%20with%20python%20and%20pandas/property%20data.csv')
df1
df2 = pd.read_csv(
'https://raw.githubusercontent.com/dataoptimal/posts/master/data%20cleaning%20with%20python%20and%20pandas/property%20data.csv',
keep_default_na=True,
na_values=['na', '--']
)
df2
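# A quick way to see the difference (sketch): count the missing values recognised in
# each version. df2 should report more NaNs because 'na' and '--' are treated as
# missing values there, while df1 leaves them as ordinary strings.
df1.isnull().sum(), df2.isnull().sum()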
###Output
_____no_output_____
###Markdown
The former does not convert the missing-value markers, while the latter is explicitly configured to convert missing values ('na', '--') to NaN. 2. Use the Dcard API to get the information for all boards, convert it into a DataFrame, sort it by popularity, and save it as a csv file.
###Code
import requests
r = requests.get('https://www.dcard.tw/_api/forums')
response = r.text
import json
data = json.loads(response)
print(data)
df=pd.DataFrame(data)
df=df.sort_values(by=['subscriptionCount'], ascending = False)
df.head(5)
df.to_csv('./20210412_Dcaed.csv', index = False)
print(df)
###Output
alias availableLayouts canPost \
373 sex [classic] False
228 relationship [classic] False
224 dressup [classic, link] False
217 makeup [classic] False
233 meme [image] False
273 food [classic, link, image, video] False
270 horoscopes [classic] False
230 talk [classic, link] False
346 trending [classic, link] False
340 money [classic, link] False
231 funny [classic, link] False
234 girl [classic, link] False
229 mood [classic] False
212 youtuber [classic, link] False
287 netflix [classic, link] False
261 pet [classic, link] False
327 fitness [classic] False
328 weight_loss [classic, link] False
447 stock [classic, link] False
347 job [classic] False
257 house [classic, link] False
342 savemoney [classic, link] False
274 cooking [classic, link] False
284 movie [classic, link] False
232 joke [classic, link] False
283 travel [classic, link, image, video] False
238 rainbow [classic, link] False
338 apple [classic, link] False
226 buyonline [classic, link] False
307 acg [classic] False
.. ... ... ...
469 jp_meiji [classic] False
510 love_of_unknown [classic, image, video, link] False
484 epic_war_thrones [classic, video, image, link] False
380 hksponsored [classic] False
379 test_hk [classic] False
465 jp_univoftokyo [classic] False
466 jp_agu [classic] False
468 jp_keio [classic] False
502 ragnarokx_nextgeneration [classic, image, video, link] False
474 jp_sophia [classic] False
476 jp_rikkyo [classic] False
473 jp_hosei [classic] False
470 jp_chuo [classic] False
491 sponsored [classic] False
472 jp_gakushuin [classic] False
97 delete [classic] False
471 jp_hitotsubashi [classic] False
475 jp_ochanomizu [classic] False
169 info [classic] False
5 bugreport [classic] False
493 show_sexgoods [ecSharing] False
419 nba_test [classic] False
381 dcardaddemo [classic] False
168 infotest [classic] False
172 athlete [classic, link] False
207 hkbeauty [classic, image, video, link] False
208 hktrending [classic, image, video, link] False
182 hkmacdaily [classic, image, video, link] False
159 mkc [classic, link] False
209 hkacg [classic, image, video, link] False
canUseNickname createdAt \
373 True 2020-02-04T07:52:53.573Z
228 True 2020-02-04T07:28:43.573Z
224 True 2020-02-04T07:28:03.573Z
217 True 2020-02-04T07:26:53.573Z
233 True 2020-02-04T07:29:33.573Z
273 True 2020-02-04T07:36:13.573Z
270 True 2020-02-04T07:35:43.573Z
230 True 2020-02-04T07:29:03.573Z
346 True 2020-02-04T07:48:23.573Z
340 True 2020-02-04T07:47:23.573Z
231 True 2020-02-04T07:29:13.573Z
234 True 2020-02-04T07:29:43.573Z
229 True 2020-02-04T07:28:53.573Z
212 True 2020-02-04T07:26:03.573Z
287 True 2020-02-04T07:38:33.573Z
261 True 2020-02-04T07:34:13.573Z
327 True 2020-02-04T07:45:13.573Z
328 True 2020-02-04T07:45:23.573Z
447 True 2020-11-19T09:17:31.052Z
347 True 2020-02-04T07:48:33.573Z
257 True 2020-02-04T07:33:33.573Z
342 True 2020-02-04T07:47:43.573Z
274 True 2020-02-04T07:36:23.573Z
284 True 2020-02-04T07:38:03.573Z
232 True 2020-02-04T07:29:23.573Z
283 True 2020-02-04T07:37:53.573Z
238 True 2020-02-04T07:30:23.573Z
338 True 2020-02-04T07:47:03.573Z
226 True 2020-02-04T07:28:23.573Z
307 True 2020-02-04T07:41:53.573Z
.. ... ...
469 True 2021-01-29T00:40:42.548Z
510 True 2021-07-09T03:38:05.681Z
484 True 2021-04-13T08:58:30.139Z
380 True 2020-03-05T04:28:23.785Z
379 True 2020-02-25T10:01:27.581Z
465 True 2021-01-29T00:14:12.269Z
466 True 2021-01-29T00:30:34.293Z
468 False 2021-01-29T00:35:34.922Z
502 True 2021-06-24T06:12:55.144Z
474 True 2021-01-29T00:49:40.367Z
476 True 2021-01-29T01:04:09.768Z
473 True 2021-01-29T00:48:04.122Z
470 True 2021-01-29T00:42:23.942Z
491 True 2021-05-31T06:52:36.123Z
472 True 2021-01-29T00:46:07.776Z
97 True 2016-05-23T02:15:15.879Z
471 True 2021-01-29T00:44:16.800Z
475 True 2021-01-29T00:53:43.579Z
169 True 2017-02-25T06:52:03.772Z
5 True 2016-05-18T07:20:35.140Z
493 True 2021-06-02T07:15:42.935Z
419 True 2020-07-29T16:27:34.696Z
381 True 2020-03-09T04:40:44.327Z
168 True 2017-02-25T06:52:03.772Z
172 True 2017-08-22T05:22:03.772Z
207 True 2020-01-02T03:21:28.406Z
208 True 2020-01-02T03:22:36.962Z
182 True 2018-10-03T03:41:18.556Z
159 True 2016-09-23T09:35:46.370Z
209 True 2020-01-02T03:23:17.450Z
description enablePrivateMessage \
373 西斯板(Sex)提供男女私密話題分享或性教育等情慾議題討論,若有性方面相關問題也可在此發問。... False
228 無論是遠距離戀愛、情侶間的有趣互動、分手後的藕斷絲連等...都可以在感情板分享你們的愛情故事... False
224 穿搭板提供各種服裝搭配、包鞋、飾品配件等相關話題討論。\n歡迎分享自己的日常穿搭,或任何潮流... False
217 不管你喜歡開架彩妝還是專櫃彩妝,美妝板提供各種最新彩妝開箱評比、粉底色號、唇膏試色、眼影試色... False
233 梗圖=有梗的圖 False
273 美食板歡迎分享各種吃貨食記心得,或提供手搖飲料、校園美食、美食情報等文章! False
270 星座版提供各種星座運勢、心理測驗、星座感情分享,或是有任何塔羅占卜相關的專業知識也可在此發文討論! False
230 閒聊板提供各種生活周遭大小事的討論,無論是半夜睡不著想找同好,甚至是戴牙套遇到的困擾等...... False
346 時事板歡迎針對國內外議題、國家政策、即時新聞等討論,也可在此分享時事議題的社論。 False
340 理財板提供分享各種省錢小撇步、信用卡經驗、虛擬貨幣、股票投資心得等,歡迎你和大家交流各種不錯... False
231 有趣板歡迎發表任何自己或親友的耍笨事蹟!各種好笑、傻眼、母湯的生活趣事或笑話大全通通都可以在... False
234 專屬女孩的討論版,提供和女生有關的話題討論。也能在這裡匿名分享、抒發、詢問遇到的困擾,就像有... False
229 提供分享生活情緒、抒發心情或交流各種情緒處理的經歷故事。在這裡你可以安心匿名,用無壓力的書寫... False
212 只要有手機你就是Youtuber,一起將你的作品分享給全世界吧! False
287 希望大家能一起創造友善小天地\n分享我們對於Netflix 的熱愛\nENJOY❤️\n**... False
261 寵物板無論是貓狗、毛小孩或任何養其他寵物的經驗都可以在此討論,另外像是寵物協尋或動物醫院的分... False
327 請看版規!!!看完歡迎在此發表健身相關話題,例如:重訓技巧、健身飲食、健身房評比、體脂控制等... False
328 本板供大家討論減肥上的任何問題和困難,互相扶持。\n減肥的路上常覺得很孤單、路很長,可以上來... False
447 本板為股票專門討論板,討論內容不侷限台灣股市,貼文必須有股市相關點,並符合板規規範,若貼文內... False
347 本板提供分享面試經驗、職場心得、打工或實習經驗等相關工作話題。(徵才的職務刊登前請務必詳細閱... False
257 居家生活板以家或個人空間出發,舉凡室內設計、空間風格、裝潢、what’s in my roo... False
342 歡迎大家交流各種優惠訊息與省錢方法討論。\n發文前,記得把標題分類唷! True
274 歡迎大家分享以下內容:\n1. 自己的手做料理\n2. 料理問題提問\n料理提問請具備足夠條... False
284 注意:本板嚴禁標題爆雷,內文如有爆雷內容\n1. 請於標題最前面加上 #有雷\n2. 請在內... False
232 歡迎分享各種類型的笑話、梗圖、meme,不管是好笑的、冷場的、能讓人引發思考的,或者是諷刺社... False
283 旅遊板歡迎分享你的旅行紀錄或是國內外自由行、背包客心得、打工度假、機票購買等經驗,或是有什麼... False
238 Love Wins!專屬彩虹(LGBT)們的討論板,在這裡可以用最無壓力的方式分享你們的故事。 False
338 請務必看完版規再發文及討論\n在此版發文及討論視同同意版規 False
226 網路購物板主要提供線上購物之經驗分享與網購教學討論。\n或是在網購前中後遇到問題也能在此發文... False
307 動漫板提供各種輕小說、動畫討論、新番推薦、公仔模型、同人二創或Cosplay分享,動漫周邊或... False
.. ... ...
469 明治大学掲示板へようこそ!!\n明治大学掲示板では明治大学に関することならなんでも投稿できま... False
510 歡迎大家來到《未生逆行》板,希望各位小主播能在這裡交流,互相交換心動資訊呦σ ゚∀ ゚) ゚... False
484 《鴻圖之下》看板提供玩家們討論戰術攻略、情報分享、聯盟交友、玩家問答總匯等鴻圖之下相關話題。 False
380 Dcard HK 官方提供各項優惠資訊的看板 False
379 False
465 東京大学掲示板へようこそ!!\n東京大学掲示板では東京大学に関することならなんでも投稿できま... False
466 青学掲示板へようこそ!!\n青学掲示板では青山学院大学に関することならなんでも投稿できます!... False
468 慶應大学掲示板へようこそ!!\n慶應大学掲示板では慶應義塾大学に関することならなんでも投稿で... False
502 ROX看板提供卡友們討論攻略、情報分享、遊戲心得跟詢問RO\n\n仙境傳說:新世代的誕生之相... False
474 上智大学掲示板へようこそ!!\n上智大学掲示板では上智大学に関することならなんでも投稿できま... False
476 立教大学掲示板へようこそ!!\n立教大学掲示板では立教大学に関することならなんでも投稿できま... False
473 法政大学掲示板へようこそ!!\n法政大学掲示板では法政大学に関することならなんでも投稿できま... False
470 中央大学掲示板へようこそ!!\n中央大学掲示板では中央大学に関することならなんでも投稿できま... False
491 False
472 学習院大学掲示板へようこそ!!\n学習院大学掲示板では学習院大学に関することならなんでも投稿... False
97 False
471 一橋大学掲示板へようこそ!!\n一橋大学掲示板では一橋大学に関することならなんでも投稿できま... False
475 お茶の水大学掲示板へようこそ!!\nお茶の水大学掲示板ではお茶の水大学に関することならなんで... False
169 False
5 臨時回報版本問題 False
493 📋 暖心提醒:\n- 歡迎分享自己在好物購入西斯玩具後的實際使用心得!\n- 如有裸露照片,... False
419 False
381 False
168 False
172 False
207 呢度係比香港澳門嘅同學仔討論化妝、護膚、美髮、任何扮靚相關話題嘅討論區,發文留言前請先閱讀板規 False
208 呢度係比香港澳門嘅同學仔討論同港澳有關既時事議題嘅討論區,發文留言前請先閱讀板規 False
182 專屬於香港澳門o既討論區,日常生活大小事都可以係度傾~發文請注意需超過15個中文字 False
159 馬偕醫護管理專科學校板,一個能讓你暢所欲言的地方。在這裡,卡友們可以盡情討論校園裡的大小事,... False
209 呢度係比香港澳門既同學仔討論同分享各種動漫、遊戲嘅討論區,發文留言前請先閱讀板規 True
favorite fullyAnonymous hasPostCategories ... \
373 False True False ...
228 False True False ...
224 False False False ...
217 False False True ...
233 False False False ...
273 False False False ...
270 False True False ...
230 False False False ...
346 False False False ...
340 False False False ...
231 False False False ...
234 False True False ...
229 False True False ...
212 False False False ...
287 False False False ...
261 False False False ...
327 False False False ...
328 False False False ...
447 False False True ...
347 False True False ...
257 False False False ...
342 False False True ...
274 False False True ...
284 False False False ...
232 False False False ...
283 False False False ...
238 False True False ...
338 False False True ...
226 False False False ...
307 False False False ...
.. ... ... ... ...
469 False False False ...
510 False False False ...
484 False False False ...
380 False False False ...
379 False False False ...
465 False False False ...
466 False False False ...
468 False False False ...
502 False False False ...
474 False False False ...
476 False False False ...
473 False False False ...
470 False False False ...
491 False False False ...
472 False False False ...
97 False False False ...
471 False False False ...
475 False False False ...
169 False False False ...
5 False False False ...
493 False True False ...
419 False False False ...
381 False False False ...
168 False False False ...
172 False False False ...
207 False False False ...
208 False False False ...
182 False False False ...
159 False False False ...
209 False False False ...
postTitlePlaceholder read \
373 False
228 False
224 False
217 發文前請選擇標題分類,提高文章曝光度喔!❤️ False
233 False
273 False
270 False
230 False
346 False
340 False
231 False
234 False
229 False
212 請善用搜尋功能,不要發表相同或類似文章。\n\n發文前請先仔細閱讀板規,若因違反板規而被刪文... False
287 False
261 False
327 發文前看版規,發轉讓會籍、教練課相關貼文會永久禁言哦 False
328 False
447 ⚠️發文請記得選分類⚠️\n❗️選了分類就不用再重複輸入❗️\n#標的 #新聞 #分享 #請... False
347 False
257 False
342 發文前,先選擇↑標題分類 False
274 發文前請先瞭解版規規定喔 False
284 False
232 False
283 False
238 False
338 請務必看完版規再發文\n發文視同同意版規 False
226 False
307 False
.. ... ...
469 False
510 False
484 False
380 False
379 False
465 False
466 False
468 False
502 False
474 False
476 False
473 False
470 False
491 False
472 False
97 False
471 False
475 False
169 False
5 False
493 False
419 False
381 False
168 False
172 False
207 False
208 False
182 False
159 False
209 False
shouldCategorized shouldPostCategorized \
373 False False
228 False False
224 False False
217 False False
233 False False
273 True False
270 True False
230 False False
346 False False
340 True False
231 False False
234 False False
229 False False
212 False False
287 False False
261 True False
327 True False
328 False False
447 False False
347 True False
257 True False
342 False False
274 False False
284 True False
232 False False
283 True False
238 False False
338 False False
226 True False
307 True False
.. ... ...
469 False False
510 False False
484 False False
380 False False
379 False False
465 False False
466 False False
468 False False
502 False False
474 False False
476 False False
473 False False
470 False False
491 False False
472 False False
97 False False
471 False False
475 False False
169 False False
5 False False
493 False False
419 False False
381 False False
168 False False
172 False False
207 False False
208 False False
182 False False
159 False False
209 False False
subcategories subscribed \
373 [創作, 知識, 圖文] False
228 [曖昧, 閃光, 劈腿, 失戀, 分手, 告白] False
224 [精選, 日常, 正式, 情侶, 鞋款] False
217 [精選, 底妝, 眼妝, 唇彩, 保養, 情報] False
233 [] False
273 [精選, 食譜, 食記, 評比, 超商] False
270 [占卜, 心理測驗, 白羊, 金牛, 雙子, 巨蟹, 獅子, 處女, 天秤, 天蠍, 射手,... False
230 [醫療, 法律] False
346 [新聞, 討論, 爆料, 社論] False
340 [請益, 虛擬貨幣, 基金, 股票期貨, 保險, 匯率] False
231 [] False
234 [購物, 髮型, 心事] False
229 [] False
212 [] False
287 [] False
261 [精選, 協尋, 狗, 貓, 小動物, 爬蟲, 水族] False
327 [精選] False
328 [] False
447 [] False
347 [精選, 徵才, 經驗分享, 職業介紹, 勞工權益] False
257 [] False
342 [] False
274 [] False
284 [精選, 情報, 電影, 臺灣, 韓國, 歐美, 日本, 中國] False
232 [] False
283 [精選, 臺灣, 日韓, 亞洲, 歐美] False
238 [心情, 議題] False
338 [] False
226 [教學, 發問, 集運, 心得] False
307 [精選, 情報, 心得, 推坑, 同人, COS] False
.. ... ...
469 [] False
510 [] False
484 [] False
380 [] False
379 [] False
465 [] False
466 [] False
468 [] False
502 [] False
474 [] False
476 [] False
473 [] False
470 [] False
491 [] False
472 [] False
97 [] False
471 [] False
475 [] False
169 [] False
5 [] False
493 [] False
419 [] False
381 [] False
168 [] False
172 [] False
207 [] False
208 [] False
182 [] False
159 [課程評價] False
209 [] False
subscriptionCount titlePlaceholder \
373 609171 發文請記得在下一頁加入話題或其他相關分類喲!
228 550783 發文記得加入「話題」分類喲!
224 534417 發文記得加入「話題」分類喲!
217 450324 發文請記得在下一步驟加入「相關話題」或其他相關分類喲!
233 422567
273 383179 發文記得加入「話題」分類喲!
270 347350 發文記得加入「話題」分類喲!
230 330688 發文記得加入「話題」分類喲!
346 326227 發文記得加入「話題」分類喲!
340 298184 發文記得加入「話題」分類喲!
231 283557 發文記得加入「話題」分類喲!
234 269475 發文記得加入「話題」分類喲!
229 263890 發文記得加入「話題」分類喲!!
212 260715
287 256225
261 231298 發文記得加入「話題」分類喲!
327 227955 發文記得加入「話題」分類喲!
328 221896
447 209699
347 205870 發文記得加入「話題」分類喲!
257 190291 發文記得加入「話題」分類喲!
342 186877
274 182357
284 180189 請記得話題加入「電影名稱」或其他相關分類喲!
232 179987
283 177008 發文記得加入「話題」分類喲!
238 175839 發文記得加入「話題」分類喲!
338 173547
226 169904 發文記得加入「話題」分類喲!
307 161557 請記得在話題加入「作品名稱」或其他相關分類喲!
.. ... ...
469 139
510 132
484 120
380 110
379 109
465 105
466 102
468 100
502 92
474 86
476 84
473 78
470 70
491 67
472 58
97 56
471 50
475 43
169 23
5 17
493 10
419 2
381 2
168 1
172 0
207 0
208 0
182 0
159 0
209 0
topics \
373 [A片, 甲, Les, 無碼片, NTR, 內射, 自慰, 3P, 外流, 意淫自拍OL黑...
228 [微西斯, 愛情, 閃光, 價值觀, 告白, 分手, 遠距離, 失戀, 曖昧, 做愛, 在一...
224 [蝦皮, 耳環, 襯衫, 工裝, 後背包, 寬褲, 淘寶, 涼鞋, 洋裝, 情侶穿搭, 鞋子...
217 [潔膚水, 防曬, 粉餅, 受受狼, 刷具, 遮瑕, 粉刺, 打亮, 眼影, 粉底, 眉筆,...
233 []
273 [台中美食, 高雄美食, 台南美食, 台北美食, 新竹美食, 板橋美食, 全聯, 711, ...
270 [心理測驗, 占卜, 雙魚, 射手, 天蠍, 雙子, 巨蟹, 白羊, 金牛, 水瓶, 獅子,...
230 [網美媽媽, 廢墟探險, 畢旅, 童年回憶, 泰國浴, 租屋, 牙套, 法律, 困擾, 醫療]
346 [校正回歸]
340 [信用卡, 基金, 股票期貨, 虛擬貨幣, 匯率, 儲蓄險, 保險, 比特幣, 投資]
231 [笑話, 梗圖, Wootalk, 愛莉莎莎, 黃金12猛漢, 撩妹, 微西斯, 貼圖, 網...
234 [心事, 男友, 比基尼, 除毛, WhatsInMyBag, 內衣, 家人, 發胖, 桌布...
229 [女大十八變, 租屋糾紛, 畢旅, 感動的事, 一句晚安, 想念你, 謝謝你, 靠北, 勵志...
212 []
287 [Netflix, 影集, 美劇, 電影, 推薦, 觀後感]
261 [領養代替購買, 米克斯, 貓, 狗, 柯基, 柴犬, 認養, 貓咪真的很可愛, 動物醫院,...
327 [生酮飲食, 減脂, 乳清, 增肌, 健身器材, 健身房, 重訓, 臥推, 熱量, 啞鈴, ...
328 [飲食, 運動, 勵志]
447 []
347 [面試經驗, 2020聯合校徵, 面試心得, 面試小技巧, 履歷教學, 航空業, Askme...
257 [WhatsInMyRoom, 居家佈置, 空間風格, 租屋, 室內香氛, 家具, 輕裝潢,...
342 [優惠, 已兌換, 買一送一, 生日優惠, 折價券, 折扣碼]
274 [料理, 提問, 廚具, 烹飪, 食譜, 小資料理]
284 [影評, MARVEL系列, 迪士尼, DC系列, 觀後感, 電影院, 奧斯卡獎, 預告片,...
232 [地獄梗, meme, 梗圖, 冷笑話]
283 [畢旅, 自由行, 賞楓, 海外志工, 台灣秘境, 臥鋪火車, 獨旅, 飛機餐, 沙發衝浪,...
238 [微西斯, 高馬尾和長直髮, PPL, 早知道系列, Les, 天菜老師, 總在夜半消失的室...
338 [AppleLearn, AppleWork, Mac, iPad]
226 [網購教學, 淘寶, 退貨, 蝦皮, 支付寶, 賣家, 集運運費, 官方集運, 私人集運, ...
307 [蠟筆小新, 庫洛魔法使, 聲之形, 動漫展, 初音未來, Cosplay, 動漫周邊, 動...
.. ...
469 []
510 []
484 []
380 []
379 []
465 []
466 []
468 []
502 []
474 []
476 []
473 []
470 []
491 []
472 []
97 []
471 []
475 []
169 []
5 []
493 []
419 []
381 []
168 []
172 []
207 [減肥, 護膚, 打扮, 搽面, 化妝, 分享]
208 [港聞, 正苦, 林鄭, 時事, 社會, 政治]
182 [好玩, 港澳板, 生活, 日常]
159 []
209 [電玩, 動漫節, Cosplay, 動漫, ACG, 遊戲]
updatedAt
373 2021-06-24T04:50:12.522Z
228 2021-04-20T08:36:40.391Z
224 2021-04-20T08:36:37.330Z
217 2021-06-25T07:07:20.485Z
233 2020-08-31T09:47:51.769Z
273 2021-04-20T08:36:35.879Z
270 2021-04-20T08:36:39.207Z
230 2021-04-20T08:36:35.632Z
346 2021-05-22T07:01:38.199Z
340 2021-04-20T08:36:40.750Z
231 2021-04-20T08:36:37.118Z
234 2021-04-20T08:36:37.575Z
229 2021-04-20T08:36:40.189Z
212 2020-10-07T10:21:01.877Z
287 2021-05-02T01:11:11.254Z
261 2021-04-20T08:36:39.170Z
327 2021-04-20T08:36:39.028Z
328 2021-04-20T08:36:40.945Z
447 2020-12-22T18:06:09.500Z
347 2021-04-20T08:36:37.261Z
257 2021-04-20T08:36:40.284Z
342 2021-07-07T06:40:02.753Z
274 2021-04-20T08:36:38.750Z
284 2021-04-20T08:36:36.924Z
232 2021-04-20T08:36:39.292Z
283 2021-04-20T08:36:39.419Z
238 2021-04-20T08:36:36.801Z
338 2021-04-20T08:36:40.086Z
226 2021-04-20T08:36:41.949Z
307 2021-04-20T08:36:39.864Z
.. ...
469 2021-04-09T11:32:05.344Z
510 2021-07-09T03:40:30.704Z
484 2021-04-14T04:11:34.276Z
380 2020-09-16T07:17:36.151Z
379 2020-02-26T09:11:36.709Z
465 2021-04-09T11:33:33.374Z
466 2021-04-09T11:34:13.925Z
468 2021-04-09T11:32:46.924Z
502 2021-06-24T06:17:20.796Z
474 2021-04-09T11:42:13.651Z
476 2021-04-09T11:40:21.329Z
473 2021-04-09T11:42:54.571Z
470 2021-04-09T11:44:59.361Z
491 2021-06-10T06:47:41.577Z
472 2021-04-09T11:43:35.677Z
97 2017-06-18T20:42:53.510Z
471 2021-04-09T11:44:09.340Z
475 2021-04-09T11:41:29.760Z
169 2017-02-25T06:52:03.772Z
5 2017-06-18T03:31:45.331Z
493 2021-06-16T13:58:13.523Z
419 2020-07-29T16:27:34.696Z
381 2021-07-21T08:50:28.127Z
168 2018-02-06T17:18:30.699Z
172 2020-08-13T06:02:20.625Z
207 2021-04-20T10:15:48.549Z
208 2021-04-20T10:31:36.975Z
182 2021-07-16T07:05:15.714Z
159 2020-08-13T08:58:10.340Z
209 2021-07-12T04:51:47.346Z
[514 rows x 34 columns]
|
notebooks/investigate_ha_cachito_fitting.ipynb | ###Markdown
Fit Velocity: Cachito Fit
###Code
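# NOTE (assumption): this section relies on objects defined earlier in the full
# analysis that are not shown here -- e.g. the fit tables `new_fit_cachito`,
# `new_fit_HA`, the explosion time `texpl`, rest wavelengths `HA`, `SiII`, `FeII`,
# the `IR_dates` array, the `FIG_DIR`/`DATA_DIR` paths, and a `calc_velocity`
# helper (roughly v = c * (observed - rest) / rest, returned as an astropy Quantity).
# Imports assumed by the code below:
import os
import numpy as np
from astropy import units as u
from astropy.time import Time
from astropy.modeling import models, fitting
from astropy.io import ascii as asc
from matplotlib import pyplot as plt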
phase_cachito = (Time(new_fit_cachito['date'])-texpl).value
velocity_cachito = -1*calc_velocity(new_fit_cachito['vel0'], HA).to(u.km/u.s).value
fitter_power = fitting.LevMarLSQFitter()
fitter_linear = fitting.LinearLSQFitter()
power_model = models.PowerLaw1D()
poly_model3 = models.Polynomial1D(degree=3)
poly_model4 = models.Polynomial1D(degree=4)
poly_model5 = models.Polynomial1D(degree=5)
power_fit_cachito = fitter_power(power_model, phase_cachito, velocity_cachito)
poly_fit3_cachito = fitter_linear(poly_model3, phase_cachito, velocity_cachito)
poly_fit4_cachito = fitter_linear(poly_model4, phase_cachito, velocity_cachito)
poly_fit5_cachito = fitter_linear(poly_model5, phase_cachito, velocity_cachito)
fit_time = np.arange(1, phase_cachito[-1]+1,1)
fig = plt.figure(figsize=[10, 5])
ax_cachito = fig.add_subplot(2,1,1)
ax_resid = fig.add_subplot(2,1,2, sharex=ax_cachito)
ax_cachito.plot(phase_cachito, velocity_cachito, '^', color='lime', label='new fit separate')
ax_cachito.set_xticks(np.arange(0, 90, 10))
ax_cachito.grid()
ax_cachito.plot(fit_time, power_fit_cachito(fit_time), label='Power Law')
ax_cachito.plot(fit_time, poly_fit4_cachito(fit_time), label='Polynomial deg={}'.format(poly_model4.degree))
ax_cachito.set_title('Cachito Velocity (if Hydrogen)')
ax_cachito.vlines((IR_dates-texpl).value, linestyle='--', ymin=12000, ymax=21000, label='IR spectra')
ax_cachito.set_ylabel('Velocity (km/s)')
ax_cachito.set_ylim(ymin=12000, ymax=21000)
ax_cachito.legend(loc='best')
ax_resid.axhline(0, color='k')
ax_resid.vlines((IR_dates-texpl).value, linestyle='--', ymin=-500, ymax=500, label='IR spectra')
ax_resid.plot(phase_cachito, velocity_cachito - power_fit_cachito(phase_cachito), 'o', label='Power')
ax_resid.plot(phase_cachito, velocity_cachito - poly_fit3_cachito(phase_cachito), 'o', label='deg3')
ax_resid.plot(phase_cachito, velocity_cachito - poly_fit4_cachito(phase_cachito), 'o', label='deg4')
ax_resid.plot(phase_cachito, velocity_cachito - poly_fit5_cachito(phase_cachito), 'o', label='deg5')
ax_resid.set_yticks([-500, -250, 0, 250, 500])
ax_resid.grid()
ax_resid.legend(loc='best', ncol=3)
ax_resid.set_ylabel('Residual (km/s)')
ax_resid.set_xlabel('Phase (days)')
plt.savefig(os.path.join(FIG_DIR, 'cachito_velocity_fit.pdf'))
print('Power law std = {}'.format(np.std(velocity_cachito - power_fit_cachito(phase_cachito))))
print('Deg 4 polynomial std = {}'.format(np.std(velocity_cachito - poly_fit4_cachito(phase_cachito))))
print('Deg 3 polynomial std = {}'.format(np.std(velocity_cachito - poly_fit3_cachito(phase_cachito))))
###Output
Power law std = 391.2385538728443
Deg 4 polynomial std = 193.85033736603393
Deg 3 polynomial std = 246.76558954290823
###Markdown
After speaking with Stefano, we're going to use the power law fit; Nugent (2006) and Faran (2014) both fit power laws. H-Alpha Fit
###Code
phase_HA = (Time(new_fit_HA['date'])-texpl).value
velocity_HA = -1*calc_velocity(new_fit_HA['vel0'], HA).to(u.km/u.s).value
fitter_power = fitting.LevMarLSQFitter()
fitter_linear = fitting.LinearLSQFitter()
power_model = models.PowerLaw1D()
poly_model3 = models.Polynomial1D(degree=3)
poly_model4 = models.Polynomial1D(degree=4)
poly_model5 = models.Polynomial1D(degree=5)
power_fit_HA = fitter_power(power_model, phase_HA, velocity_HA)
poly_fit3_HA = fitter_linear(poly_model3, phase_HA, velocity_HA)
poly_fit4_HA = fitter_linear(poly_model4, phase_HA, velocity_HA)
poly_fit5_HA = fitter_linear(poly_model5, phase_HA, velocity_HA)
fit_time = np.arange(1, phase_HA[-1]+1,1)
fig = plt.figure(figsize=[10, 5])
ax_HA = fig.add_subplot(2,1,1)
ax_resid = fig.add_subplot(2,1,2, sharex=ax_HA)
ax_HA.plot(phase_HA, velocity_HA, '^', color='lime', label='new fit separate')
ax_HA.set_xticks(np.arange(0, 90, 10))
ax_HA.grid()
ax_HA.plot(fit_time, power_fit_HA(fit_time), label='Power Law')
ax_HA.plot(fit_time, poly_fit4_HA(fit_time), label='Polynomial deg={}'.format(poly_model4.degree))
ax_HA.set_title('HA Velocity (if Hydrogen)')
ax_HA.vlines((IR_dates-texpl).value, linestyle='--', ymin=8000, ymax=12000, label='IR spectra')
ax_HA.set_ylim(ymin=8000, ymax=12000)
ax_HA.legend(loc='best')
ax_HA.set_ylabel('velocity (km/s)')
ax_resid.axhline(0, color='k')
ax_resid.vlines((IR_dates-texpl).value, linestyle='--', ymin=-500, ymax=500, label='IR spectra')
ax_resid.plot(phase_HA, velocity_HA - power_fit_HA(phase_HA), 'o', label='Power')
ax_resid.plot(phase_HA, velocity_HA - poly_fit3_HA(phase_HA), 'o', label='deg3')
ax_resid.plot(phase_HA, velocity_HA - poly_fit4_HA(phase_HA), 'o', label='deg4')
ax_resid.plot(phase_HA, velocity_HA - poly_fit5_HA(phase_HA), 'o', label='deg5')
ax_resid.grid()
ax_resid.legend(loc='best', ncol=2)
ax_resid.set_xlabel('Phase (days)')
ax_resid.set_ylabel('Residual')
print('Power law std = {}'.format(np.std(velocity_HA - power_fit_HA(phase_HA))))
print('Deg 4 polynomial std = {}'.format(np.std(velocity_HA - poly_fit4_HA(phase_HA))))
print('Deg 3 polynomial std = {}'.format(np.std(velocity_HA - poly_fit3_HA(phase_HA))))
plt.savefig(os.path.join(FIG_DIR, 'HA_velocity_fit.pdf'))
###Output
Power law std = 215.510555052082
Deg 4 polynomial std = 209.33691258591205
Deg 3 polynomial std = 221.00547094202278
###Markdown
Look at Silicon Velocity and fit the FeII Velocity
###Code
tbdata_feII = asc.read(os.path.join(DATA_DIR, 'FeII_multi.tab'))
tbdata_feII.remove_columns(['vel1', 'vel_err_left_1', 'vel_err_right_1', 'vel_pew_1', 'vel_pew_err1'])
tbdata_feII.rename_column('vel0', 'velocity')
tbdata_feII.rename_column('vel_err_left_0', 'vel_err_left')
tbdata_feII.rename_column('vel_err_right_0', 'vel_err_right')
tbdata_feII.rename_column('vel_pew_0', 'pew')
tbdata_feII.rename_column('vel_pew_err0', 'pew_err')
phase_feII = (Time(tbdata_feII['date'])-texpl).value
velocity_feII = -1*calc_velocity(tbdata_feII['velocity'], FeII).to(u.km/u.s)
power_model_feII = models.PowerLaw1D(alpha=power_fit_cachito.alpha, x_0=power_fit_cachito.x_0)
power_fit_feII = fitter_power(power_model_feII, phase_feII, velocity_feII)
fig = plt.figure(figsize=[10, 5])
ax_Fe = fig.add_subplot(2,1,1)
ax_resid = fig.add_subplot(2,1,2, sharex=ax_Fe)
ax_Fe.plot(phase_feII, velocity_feII, '^', label='FeII (5169)')
ax_Fe.plot((Time(new_fit_cachito['date'])-texpl).value, -1*calc_velocity(new_fit_cachito['vel0'], SiII).to(u.km/u.s), '^', label='Cachito (as SiII 6533)')
ax_Fe.plot(fit_time, power_fit_feII(fit_time))
ax_Fe.vlines((IR_dates-texpl).value, linestyle='--', ymin=-3000, ymax=12000, label='IR spectra')
ax_Fe.set_xticks(np.arange(0, 90, 10))
ax_Fe.legend()
ax_Fe.set_title(r'FeII 5169 Velocity')
ax_Fe.set_ylim(3000, 11000)
ax_resid.axhline(0, color='k')
ax_resid.plot(phase_feII, velocity_feII - power_fit_feII(phase_feII), 'o')
ax_resid.set_yticks([-500, -250, 0, 250, 500])
ax_resid.grid()
ax_resid.vlines((IR_dates-texpl).value, linestyle='--', ymin=-500, ymax=500, label='IR spectra')
print('Power law std = {}'.format(np.std(velocity_feII - power_fit_feII(phase_feII))))
fig = plt.figure()
ax_Fe = fig.add_subplot(1,1,1)
ax_Fe.plot((Time(new_fit_cachito['date'])-texpl).value, -1*calc_velocity(new_fit_cachito['vel0'], SiII).to(u.km/u.s), '^', label='Cachito (as SiII 6533)')
#ax_Fe.plot((Time(new_fit_cachito['date'])-texpl).value, -1*calc_velocity(new_fit_together['vel0'], SiII).to(u.km/u.s), '^', label='Cachito (as SiII 6533); new joint fit', alpha=0.25)
#ax_Fe.plot((Time(new_fit_cachito['date'])-texpl).value, -1*calc_velocity(old_fitting['vel0'], SiII).to(u.km/u.s), '^', label='Cachito (as SiII 6533); old joint fit', alpha=0.25)
ax_Fe.plot(phase_feII, velocity_feII, 'o', label='FeII (5169)')
ax_Fe.set_xticks(np.arange(0, 90, 10))
ax_Fe.legend()
ax_Fe.set_title(r'FeII 5169 Velocity')
ax_Fe.set_ylim(5000, 11000)
ax_Fe.set_xlim(0, 40)
ax_Fe.set_xlabel('Phase (days)')
ax_Fe.set_ylabel('Velocity (km/s)')
plt.savefig(os.path.join(FIG_DIR, 'cachito_fe_vel_comp.pdf'))
cp ../figures/cachito_fe_vel_comp.pdf ../paper/figures/
###Output
_____no_output_____ |
Examples/22-StepCalibration.ipynb | ###Markdown
Figure out the correct number of steps per mm
###Code
import gcode
import numpy as np
# Draw a relative horizontal line.
rel = gcode.GCode()
rel.G91()
baseline = gcode.hline(X0=0, Xf=100, Y=0, n_points=2)
line = rel+gcode.Line(baseline, power=200, feed=200)
line
import grbl
cnc = grbl.Grbl("/dev/ttyUSB0")
def init_machine(**kwargs):
prog = gcode.GCode(**kwargs)
prog.G92(X=0, Y=0)
prog.G90()
prog.G21()
return prog
# Draw a relative vertical line.
rel = gcode.GCode()
rel.G91()
line_pts = gcode.vline(X=0, Y0=0, Yf=100, n_points=2)
line = rel+gcode.Line(line_pts, power=200, feed=200)
line
row_spacing=2
step_cal=init_machine(machine=cnc)
for row, steps_mm in enumerate(np.arange(80, 81, 0.025)):
step_cal.G90()
step_cal.G0(X=0, Y=np.round(row*row_spacing, 4))
step_cal.buffer.append(f"$100={steps_mm}")
step_cal.buffer.append(f"$101={steps_mm}")
step_cal+=line
print(step_cal)
step_cal
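# Once the engraved lines are measured, the corrected steps-per-mm follows from the
# ratio of commanded to measured length. The measured value below is a hypothetical
# number used only for illustration.
commanded_mm = 100.0          # length requested in the G1 moves above
measured_mm = 99.5            # hypothetical caliper measurement of the actual line
current_steps_mm = 80.0       # the $100/$101 value used for that line
corrected_steps_mm = current_steps_mm * commanded_mm / measured_mm
print(f"set $100/$101 to {corrected_steps_mm:.3f}")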
###Output
G92X0.0Y0.0
G90
G21
G90
G0X0.0Y0.0
$100=80.0
$101=80.0
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y2.0
$100=80.025
$101=80.025
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y4.0
$100=80.05000000000001
$101=80.05000000000001
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y6.0
$100=80.07500000000002
$101=80.07500000000002
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y8.0
$100=80.10000000000002
$101=80.10000000000002
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y10.0
$100=80.12500000000003
$101=80.12500000000003
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y12.0
$100=80.15000000000003
$101=80.15000000000003
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y14.0
$100=80.17500000000004
$101=80.17500000000004
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y16.0
$100=80.20000000000005
$101=80.20000000000005
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y18.0
$100=80.22500000000005
$101=80.22500000000005
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y20.0
$100=80.25000000000006
$101=80.25000000000006
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y22.0
$100=80.27500000000006
$101=80.27500000000006
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y24.0
$100=80.30000000000007
$101=80.30000000000007
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y26.0
$100=80.32500000000007
$101=80.32500000000007
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y28.0
$100=80.35000000000008
$101=80.35000000000008
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y30.0
$100=80.37500000000009
$101=80.37500000000009
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y32.0
$100=80.40000000000009
$101=80.40000000000009
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y34.0
$100=80.4250000000001
$101=80.4250000000001
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y36.0
$100=80.4500000000001
$101=80.4500000000001
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y38.0
$100=80.47500000000011
$101=80.47500000000011
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y40.0
$100=80.50000000000011
$101=80.50000000000011
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y42.0
$100=80.52500000000012
$101=80.52500000000012
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y44.0
$100=80.55000000000013
$101=80.55000000000013
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y46.0
$100=80.57500000000013
$101=80.57500000000013
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y48.0
$100=80.60000000000014
$101=80.60000000000014
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y50.0
$100=80.62500000000014
$101=80.62500000000014
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y52.0
$100=80.65000000000015
$101=80.65000000000015
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y54.0
$100=80.67500000000015
$101=80.67500000000015
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y56.0
$100=80.70000000000016
$101=80.70000000000016
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y58.0
$100=80.72500000000016
$101=80.72500000000016
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y60.0
$100=80.75000000000017
$101=80.75000000000017
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y62.0
$100=80.77500000000018
$101=80.77500000000018
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y64.0
$100=80.80000000000018
$101=80.80000000000018
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y66.0
$100=80.82500000000019
$101=80.82500000000019
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y68.0
$100=80.8500000000002
$101=80.8500000000002
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y70.0
$100=80.8750000000002
$101=80.8750000000002
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y72.0
$100=80.9000000000002
$101=80.9000000000002
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y74.0
$100=80.92500000000021
$101=80.92500000000021
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y76.0
$100=80.95000000000022
$101=80.95000000000022
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
G90
G0X0.0Y78.0
$100=80.97500000000022
$101=80.97500000000022
G91
G0X0.0Y0.0
M4S200.0
G1X0.0Y100.0F200.0
M5
|
genre by playlist.ipynb | ###Markdown
setup importing stuff and such
###Code
import spotipy
import matplotlib
import numpy as np
%matplotlib notebook
from matplotlib import pylab as plt
from matplotlib import mlab
sp = spotipy.Spotify()
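# NOTE (assumption): current versions of the Spotify Web API require credentials even
# for search. With spotipy that typically looks like:
#   from spotipy.oauth2 import SpotifyClientCredentials
#   sp = spotipy.Spotify(client_credentials_manager=SpotifyClientCredentials())
# (the client ID/secret are read from the SPOTIPY_CLIENT_ID / SPOTIPY_CLIENT_SECRET
# environment variables)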
###Output
_____no_output_____
###Markdown
fetch all the playlist details that have 'punk' in the name (note that this doesn't get the track lists, we'll do that a bit later)
###Code
results = sp.search(type='playlist', q='punk', limit=50)['playlists']
print "gathering details about", results['total'], "playlists"
punk_playlists = results['items']
while results['next']:
results = sp.next(results)['playlists']
punk_playlists += results['items']
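# (sketch, for the "track lists" step mentioned above -- field names follow the
# playlist objects returned by the search endpoint)
# example_pl = punk_playlists[0]
# tracks = sp.user_playlist_tracks(example_pl['owner']['id'], example_pl['id'])['items']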
###Output
gathering details about 5351 playlists
###Markdown
basic stats: to get a feel for the dataset, let's do some basic stats before we plow ahead with the track analysis. title length: we expect a peak at 4 characters (the minimal 'Punk'), what else happens?
###Code
print "number of results:", len(punk_playlists)
print
title_lengths = filter(lambda c:c<100, map(lambda pl:len(pl['name']), punk_playlists))
n, bins, patches = plt.hist(title_lengths, 50, normed=1, facecolor='green', alpha=0.75)
mu = np.mean(title_lengths)
sigma = np.std(title_lengths)
# add a 'best fit' line
y = mlab.normpdf( bins, mu, sigma)
l = plt.plot(bins, y, 'r--', linewidth=1)
plt.xlabel('Number of Characters')
plt.ylabel('Probability')
plt.title(r'$\mathrm{Histogram\ of\ Punk playlist title lengths:}\ \mu='+str(mu)+',\ \sigma='+str(sigma)+'$')
# plt.axis([40, 160, 0, 0.03])
plt.grid(True)
###Output
number of results: 5351
###Markdown
ok, so _a_ peak where expected, but the vast majority are longer; the mean is just over 16 characters.--- word counts in the titles: picking that apart a little more, let's take a look at some lightly cleaned word counts across all the titles
###Code
from collections import Counter
from string import punctuation
stopwords = "and of the or in".split()
print "top words in titles"
word_count = Counter()
for pl in punk_playlists:
word_count.update([w.strip(punctuation) for w in pl['name'].lower().split() if w not in stopwords and len(w) > 2])
word_count.most_common(10)
###Output
top words in titles
###Markdown
remember friends: Daft Punk may be playing in your house, your house, but it's a pretty good guess that when 'punk' is preceded by 'daft' it's probably not actually a punk playlist...the other results here are basically the expected neighbouring genres (e.g. 'pop', 'rock', 'metal') and of course some self labelling ('playlist'). Small aside: this seems to indicate that some of the playlists don't mention 'punk' in the name. Is that a problem? (this makes me wonder how the search algorithm works...). Let's see how many there are and what they look like. That's not punk.
###Code
print len([pl['name'] for pl in punk_playlists if "punk" not in pl['name'].lower()]), "of the search results don't say punk.\n here they are:"
print '\n'.join([pl['name'] for pl in punk_playlists if "punk" not in pl['name'].lower()])
###Output
87 of the search results don't say punk.
here they are:
Pünk
Workout Shout!
French Touch
Post-Hardcore Crash Course
💰🔪☠
Post Garage Wave Revival
.01
New Rock
🌚 cool songs 🌝
Is It New Wave?
moosic
Locos x los 2010
Teenage Dirtbag
Skunk Rock
Crossfit Hutto Rock
math rock !
Dance Anthems - Ministry of Sound
Hits 2013
From The Garage
Cheap Beer & Dirty Basements
Proper Naughty
Canadians Rock!
Everything Rock
random
THE DEFINITIVE 70s
current songs
Arctic Monkeys: Origens
Emo/Emoish/Indierock
Rock 2
70's ROCK - The Ultimate Playlist
PØP PÛNK
Main Playlist
God tier 2.0
TOPSIFY 80s HITS
for when its cold outside
All Things Post
NOW 85
Die 257ers Party Playlist
Bare tunes
poleng's
Sweat Through This
Tank
Highschool Hits
Vidar
Quote Songs🖊💘
Best of '80s Indie • Alternative eighties
GRUNGE FOR LIFE
Main Playlist
De Boa na Praia
Mi playlist
Today's Best Boy Bands Piano
Mixed Bag
Idk.
Sunday Breakfast
French Touch
Good.
Originals vs Covers + Sampled
EMO
cool tunes
Hard. Heavy. Loud.
Lounge & Warm-up
"Satans Musik"
Don't Call It An Emo Revival
Hard Rock y Rock 70'
favourite shit
Filtr - Playlist MARIAGE
Filtr I LOVE 10s
SWR3 Rock Hits
Summer Hits
Terrorgruppe - Maximilian Playlist
Waning Light Mix
alernative music
CROSSING all OVER! ► by rock.de
This Is Rewind - 90s
Marvin
Discover: Dischord Records
Filtr - Playlist FRIDAY NIGHT
♥ (by tumblr user macleod)
College Rock Playlist | 80s Campus Radio / Alternative / Indie / Grunge | Feat. Sonic Youth, Weezer, Pavement, R.E.M. & more
THE CLASH - 50 CLASSICS
EMO 101
Electrospective / Dance Focus: 1988-'97
Alexa Chung's Picks
Best of Lou Reed
Playlist do Rick - Dead Fish
Up Songs
Build My Throne
|
old_projects/quchem_ibm/Experiments/LiH_Simulation_result/LiH_Analysis_STANDARD.ipynb | ###Markdown
Histogram
###Code
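# NOTE (assumption): earlier cells of this notebook (not shown here) build
# `STANDARD_Hist_data_sim`, a dict of per-term measurement records, and define
# `fci_energy`. Imports assumed by the code below:
import numpy as np
from tqdm import tqdm
from matplotlib import pyplot as plt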
def Get_Hist_data(Histogram_data, I_term):
E_list=[]
for m_index in tqdm(range(Histogram_data[0]['Measurements'].shape[0])):
E=I_term
for M_dict_key in Histogram_data:
coeff = Histogram_data[M_dict_key]['coeff']
parity = 1 if sum(map(int, Histogram_data[M_dict_key]['Measurements'][m_index])) % 2 == 0 else -1
E+=coeff*parity
E_list.append(E)
return E_list
I_term = -4.142299396835105
E_list_STANDARD_sim=Get_Hist_data(STANDARD_Hist_data_sim, I_term)
import json
with open("E_list_STANDARD_sim.json", "w") as write_file:
json.dump(E_list_STANDARD_sim, write_file)
E_list_STANDARD_sim=np.array(E_list_STANDARD_sim)
def gaussian(x, mean, amplitude, standard_deviation):
return amplitude * np.exp( - ((x - mean)**2 / (2*standard_deviation**2)))
from scipy.optimize import curve_fit
# from matplotlib import pyplot
# %matplotlib inline
# # bins_standard = len(set(E_list_STANDARD_sim))
# bins_standard = 1000
# bin_heights_STANDARD, bin_borders_STANDARD, _=pyplot.hist(E_list_STANDARD_sim,
# bins_standard, alpha=0.7,
# label='$E$ standard VQE - sim',
# color='g',
# density=False)
# bin_centers_STANDARD = bin_borders_STANDARD[:-1] + np.diff(bin_borders_STANDARD) / 2
# popt, _ = curve_fit(gaussian, bin_centers_STANDARD, bin_heights_STANDARD, p0=[fci_energy, 0., 1.], **{'maxfev':10000})
# mean_STANDARD, amplitude_STANDARD, standard_deviation_STANDARD= popt
# x_interval_for_fit = np.linspace(bin_borders_STANDARD[0], bin_borders_STANDARD[-1], 10000)
# pyplot.plot(x_interval_for_fit, gaussian(x_interval_for_fit, *popt), label='Gaussian fit', color='g')
# pyplot.axvline(mean_STANDARD, color='g', linestyle='dashed', linewidth=1,
# label='$E_{average}$ standard VQE - sim') # mean of GAUSSIAN FIT
# # pyplot.axvline(E_list_STANDARD_sim.mean(), color='g', linestyle='dashed', linewidth=1,
# # label='$E_{average}$ standard VQE - sim') # mean of DATA
# pyplot.errorbar(mean_STANDARD,65_000,
# xerr=standard_deviation_STANDARD, linestyle="None", color='g',
# uplims=True, lolims=True, label='$\sigma_{E_{av}}$standard VQE - sim')
# pyplot.axvline(fci_energy, color='k', linestyle='solid', linewidth=2,
# label='$E_{FCI}$', alpha=0.4)
# pyplot.legend(loc='upper right')
# # pyplot.legend(bbox_to_anchor=(0.865,1.9), loc="upper left")
# pyplot.ylabel('Frequency')
# pyplot.xlabel('Energy')
# pyplot.tight_layout()
# file_name = 'LiH_Histogram_STANDARD_sim_Gaussian.jpeg'
# pyplot.savefig(file_name, dpi=300,transparent=True,) # edgecolor='black', facecolor='white')
# pyplot.show()
def normal_dist(x, mean, standard_deviation):
return (1/(np.sqrt(2*np.pi)*standard_deviation)) * np.exp( - ((x - mean)**2 / (2*standard_deviation**2)))
# define the example inputs so this quick check runs (values taken from the
# commented-out cell below)
x = np.linspace(-10, 10, 1000)
av, sig = 2, 1
plt.plot(x, normal_dist(x, av, sig))
# from scipy.stats import norm
# x=np.linspace(-10, 10, 1000)
# av=2
# sig=1
# plt.plot(x, norm.pdf(x, av, sig))
len(set(np.around(E_list_STANDARD_sim, 5)))
E_list_STANDARD_sim.shape
E_list_STANDARD_sim.shape[0]**(1/3)
# https://stats.stackexchange.com/questions/798/calculating-optimal-number-of-bins-in-a-histogram
from scipy.stats import iqr
bin_width = 2 * iqr(E_list_STANDARD_sim) / E_list_STANDARD_sim.shape[0]**(1/3)
np.ceil((max(E_list_STANDARD_sim)-min(E_list_STANDARD_sim))/bin_width)
from matplotlib import pyplot
%matplotlib inline
# bins = len(set(E_list_SEQ_ROT_sim))
# bins_standard = len(set(E_list_STANDARD_sim))
# bins_standard = 150_000
bins_standard = 2500
bin_heights_STANDARD, bin_borders_STANDARD, _=pyplot.hist(E_list_STANDARD_sim,
bins_standard, alpha=0.7,
label='$E$ standard VQE - sim',
color='g',
density=True)
#### ,hatch='-')
###### Gaussian fit
bin_centers_STANDARD = bin_borders_STANDARD[:-1] + np.diff(bin_borders_STANDARD) / 2
popt, _ = curve_fit(gaussian, bin_centers_STANDARD, bin_heights_STANDARD, p0=[fci_energy, 0., 1.])#, **{'maxfev':10000})
mean_STANDARD, amplitude_STANDARD, standard_deviation_STANDARD= popt
x_interval_for_fit = np.linspace(bin_borders_STANDARD[0], bin_borders_STANDARD[-1], 10000)
pyplot.plot(x_interval_for_fit, gaussian(x_interval_for_fit, *popt), label='Gaussian fit', color='olive',
linewidth=3)
### normal fit
# popt_norm, _ = curve_fit(normal_dist, bin_centers_STANDARD, bin_heights_STANDARD, p0=[fci_energy, standard_deviation_STANDARD])#, **{'maxfev':10000})
# mean_norm, standard_deviation_norm= popt_norm
# pyplot.plot(x_interval_for_fit, normal_dist(x_interval_for_fit, *popt_norm), label='Normal fit', color='b',
# linestyle='--')
# pyplot.plot(x_interval_for_fit, normal_dist(x_interval_for_fit, mean_STANDARD, standard_deviation_STANDARD),
# label='Normal fit', color='b', linestyle='--')
#### Average energy from data
pyplot.axvline(E_list_STANDARD_sim.mean(), color='g', linestyle='--', linewidth=2,
label='$E_{average}$ standard VQE - sim') # mean of DATA
##############
# chemical accuracy
pyplot.axvline(fci_energy, color='k', linestyle='solid', linewidth=3,
label='$E_{FCI}$', alpha=0.3)
# # chemical accuracy
# pyplot.fill_between([fci_energy-1.6e-3, fci_energy+1.6e-3],
# [0, np.ceil(max(bin_heights_STANDARD))] ,
# color='k',
# label='chemical accuracy',
# alpha=0.5)
pyplot.rcParams["font.family"] = "Times New Roman"
# pyplot.legend(loc='upper right')
# # pyplot.legend(bbox_to_anchor=(0.865,1.9), loc="upper left")
pyplot.ylabel('Probability Density', fontsize=20)
pyplot.xlabel('Energy / Hartree', fontsize=20)
pyplot.xticks(np.arange(-9.5,-5.5,0.5), fontsize=20)
pyplot.yticks(np.arange(0,2.5,0.5), fontsize=20)
# pyplot.xlim(np.floor(min(bin_borders_STANDARD)), np.ceil(max(bin_borders_STANDARD)))
pyplot.xlim(-9.5, -6.5)
pyplot.tight_layout()
file_name = 'LiH_Histogram_STANDARD_sim_Gaussian.jpeg'
pyplot.savefig(file_name, dpi=300,transparent=True,) # edgecolor='black', facecolor='white')
pyplot.show()
from matplotlib import pyplot
%matplotlib inline
# bins = len(set(E_list_SEQ_ROT_sim))
# bins_standard = len(set(E_list_STANDARD_sim))
# bins_standard = 5000
bins_standard = 150_000
bin_heights_STANDARD, bin_borders_STANDARD, _=pyplot.hist(E_list_STANDARD_sim,
bins_standard, alpha=0.7,
label='$E$ standard VQE - sim',
color='g',
density=True)
##############
pyplot.rcParams["font.family"] = "Times New Roman"
# pyplot.legend(loc='upper right')
# # pyplot.legend(bbox_to_anchor=(0.865,1.9), loc="upper left")
pyplot.ylabel('Probability Density', fontsize=20)
pyplot.xlabel('Energy / Hartree', fontsize=20)
pyplot.xticks(np.arange(-9.5,-5.5,0.5), fontsize=20)
pyplot.yticks(np.arange(0,3,0.5), fontsize=20)
# pyplot.xlim(np.floor(min(bin_borders_STANDARD)), np.ceil(max(bin_borders_STANDARD)))
pyplot.xlim(-9.5, -6.5)
pyplot.tight_layout()
# file_name = 'LiH_Histogram_STANDARD_sim_Gaussian.jpeg'
# pyplot.savefig(file_name, dpi=300,transparent=True,) # edgecolor='black', facecolor='white')
pyplot.show()
from scipy import stats
print(stats.shapiro(E_list_STANDARD_sim))
print(stats.kstest(E_list_STANDARD_sim, 'norm'))
###Output
/Users/lex/anaconda3/envs/UpdatedCirq/lib/python3.7/site-packages/scipy/stats/morestats.py:1676: UserWarning: p-value may not be accurate for N > 5000.
warnings.warn("p-value may not be accurate for N > 5000.")
###Markdown
XY vs Z comparison
###Code
i_list_XY=[]
STANDARD_Hist_data_XY={}
i_list_Z=[]
STANDARD_Hist_data_Z={}
amplitude_min=0.00
XY_terms=[]
Z_amp_sum=0
for key in STANDARD_Hist_data_sim:
Pword, const = STANDARD_Hist_data_sim[key]['P']
coeff=STANDARD_Hist_data_sim[key]['coeff']
if np.abs(coeff)>amplitude_min:
qubitNos, qubitPstrs = zip(*(list(Pword)))
# XY terms only!
if ('X' in qubitPstrs) or ('Y' in qubitPstrs):
i_list_XY.append(key)
STANDARD_Hist_data_XY[key]=STANDARD_Hist_data_sim[key]
XY_terms.append(STANDARD_Hist_data_sim[key]['P'])
else:
i_list_Z.append(key)
STANDARD_Hist_data_Z[key]=STANDARD_Hist_data_sim[key]
Z_amp_sum+=coeff
Z_amp_sum
def Get_Hist_data(Histogram_data, I_term):
E_list=[]
for m_index in tqdm(range(Histogram_data[list(Histogram_data.keys())[0]]['Measurements'].shape[0])):
E=I_term
for M_dict_key in Histogram_data:
coeff = Histogram_data[M_dict_key]['coeff']
parity = 1 if sum(map(int, Histogram_data[M_dict_key]['Measurements'][m_index])) % 2 == 0 else -1
E+=coeff*parity
E_list.append(E)
return E_list
I_term = -4.142299396835105
E_list_STANDARD_XY=Get_Hist_data(STANDARD_Hist_data_XY, 0)
E_list_STANDARD_Z=Get_Hist_data(STANDARD_Hist_data_Z, 0)
print(len(set(np.around(E_list_STANDARD_XY, 5))))
print(len(set(np.around(E_list_STANDARD_Z, 5))))
from matplotlib import pyplot
%matplotlib inline
# bins_standard = len(set(E_list_STANDARD_sim))
# bins_standard = 1000
bins_standard=8_000
# bin_heights_XY, bin_borders_XY, _=pyplot.hist(E_list_STANDARD_XY,
# bins_standard, alpha=0.7,
# label='$XY$ terms',
# color='b',
# density=False)
bin_heights_Z, bin_borders_Z, _=pyplot.hist(E_list_STANDARD_Z,
bins_standard, alpha=0.7,
label='$Z$ terms',
color='g',
density=True)
pyplot.rcParams["font.family"] = "Times New Roman"
pyplot.ylabel('Probability Density', fontsize=20)
pyplot.xlabel('Energy / Hartree', fontsize=20)
pyplot.xticks(np.arange(-4.2,-3.0,0.2), fontsize=20)
pyplot.xlim((-4.2, -3.2))
pyplot.yticks(np.arange(0,1200,200), fontsize=20)
pyplot.ylim((0, 1000))
pyplot.tight_layout()
file_name = 'LiH_standard_Z.jpeg'
pyplot.savefig(file_name, dpi=300,transparent=True,) # edgecolor='black', facecolor='white')
pyplot.show()
np.where(bin_heights_Z==max(bin_heights_Z))[0]
print(bin_heights_Z[2334])
print('left sum:',sum(bin_heights_Z[:2334]))
print('right sum:', sum(bin_heights_Z[2335:]))
# therefore slighlt more likely to get more +ve energy!!!
bin_borders_Z[583]
print(len(np.where(np.array(E_list_STANDARD_Z)>-3.8)[0]))
print(len(np.where(np.array(E_list_STANDARD_Z)<-3.89)[0]))
len(E_list_STANDARD_Z)
from matplotlib import pyplot
%matplotlib inline
# bins_standard = len(set(E_list_STANDARD_sim))
# bins_standard = 1000
bins_standard = 5000
bin_heights_XY, bin_borders_XY, _=pyplot.hist(E_list_STANDARD_XY,
bins_standard, alpha=0.7,
label='$XY$ terms',
color='g',
density=True)
pyplot.rcParams["font.family"] = "Times New Roman"
pyplot.ylabel('Probability Density', fontsize=20)
pyplot.xlabel('Energy / Hartree', fontsize=20)
pyplot.xticks(np.arange(-0.8,0.9,0.2), fontsize=20)
pyplot.xlim((-0.8, 0.8))
pyplot.yticks(np.arange(0,3,0.5), fontsize=20)
pyplot.tight_layout()
file_name = 'LiH_standard_XY.jpeg'
pyplot.savefig(file_name, dpi=300,transparent=True,) # edgecolor='black', facecolor='white')
pyplot.show()
###Output
_____no_output_____ |
Toxic_Release_Inventory.ipynb | ###Markdown
**Load only first n rows**
###Code
import pandas as pd
df = pd.read_csv('/content/drive/My Drive/255_Data_Mining/dataset/basic_data_files.csv'
, sep=','
, nrows=1000)
# , chunksize=1000000)
# , usecols=['YEAR', 'TRI_FACILITY_ID', 'FACILITY_NAME', 'ZIP', 'LATITUDE', 'LONGITUDE', 'PRIMARY_NAICS', 'INDUSTRY_SECTOR', 'CLASSIFICATION', 'CARCINOGEN', '5.1_FUGITIVE_AIR', '5.2_STACK_AIR', '5.3_WATER', 'ON-SITE_RELEASE_TOTAL'])
df
###Output
_____no_output_____
###Markdown
**Load all rows with only selected columns**
###Code
import pandas as pd
df = pd.read_csv('/content/drive/My Drive/255_Data_Mining/dataset/basic_data_files.csv'
, sep=','
, usecols=['YEAR', 'TRI_FACILITY_ID', 'FACILITY_NAME', 'ZIP', 'LATITUDE', 'LONGITUDE', 'INDUSTRY_SECTOR', 'CLASSIFICATION', 'METAL', 'METAL_CATEGORY', 'CARCINOGEN', '5.2_STACK_AIR', '5.3_WATER', 'ON-SITE_RELEASE_TOTAL'])
df
df['METAL_CATEGORY']
print(df['YEAR'][23444])
print(df['YEAR'][1222538])
print(df['YEAR'][5051319])
###Output
2016
2008
81082
|
Capstone/M3ExploratoryDataAnalysis-lab.ipynb | ###Markdown
**Exploratory Data Analysis Lab** Estimated time needed: **30** minutes In this module you get to work with the cleaned dataset from the previous module.In this assignment you will perform the task of exploratory data analysis.You will find out the distribution of data, presence of outliers and also determine the correlation between different columns in the dataset. Objectives In this lab you will perform the following: * Identify the distribution of data in the dataset.* Identify outliers in the dataset.* Remove outliers from the dataset.* Identify correlation between features in the dataset. *** Hands on Lab Import the pandas module.
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Load the dataset into a dataframe.
###Code
df = pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DA0321EN-SkillsNetwork/LargeData/m2_survey_data.csv")
df
df['Age'].median()
df[df['Gender'] == 'Woman']['ConvertedComp'].median()
df['Age'].hist()
df['ConvertedComp'].median()
df.boxplot(column=['ConvertedComp'])
Q1 = df['ConvertedComp'].quantile(0.25)
Q3 = df['ConvertedComp'].quantile(0.75)
IQR = Q3 - Q1 #IQR is interquartile range.
filtered = (df['ConvertedComp'] >= Q1 - 1.5 * IQR) & (df['ConvertedComp'] <= Q3 + 1.5 *IQR)
df_remove_outliers = df.loc[filtered]
df_remove_outliers['ConvertedComp'].median()
df_remove_outliers['ConvertedComp'].mean()
df.boxplot(column=['Age'])
df.corr(method ='pearson')
import matplotlib.pyplot as plt
plt.plot(df['Age'],df['WorkWeekHrs'],'o')
###Output
_____no_output_____
###Markdown
Distribution Determine how the data is distributed The column `ConvertedComp` contains Salary converted to annual USD salaries using the exchange rate on 2019-02-01.This assumes 12 working months and 50 working weeks. Plot the distribution curve for the column `ConvertedComp`.
###Code
# your code goes here
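# Hedged sketch (one possible answer, not the official lab solution): a kernel
# density estimate gives the distribution curve of ConvertedComp; df is the
# survey dataframe loaded above.
df['ConvertedComp'].dropna().plot(kind='kde')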
###Output
_____no_output_____
###Markdown
Plot the histogram for the column `ConvertedComp`.
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
What is the median of the column `ConvertedComp`?
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
How many responders identified themselves only as a **Man**?
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Find out the median ConvertedComp of responders who identified themselves only as a **Woman**.
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Give the five number summary for the column `Age`? **Double click here for hint**.<!--min,q1,median,q3,max of a column are its five number summary.-->
###Code
# your code goes here
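# Hedged sketch (one possible answer): describe() already contains the five number
# summary of Age (min, Q1, median, Q3, max).
df['Age'].describe()[['min', '25%', '50%', '75%', 'max']]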
###Output
_____no_output_____
###Markdown
Plot a histogram of the column `Age`.
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Outliers Finding outliers Find out if outliers exist in the column `ConvertedComp` using a box plot?
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Find out the Inter Quartile Range for the column `ConvertedComp`.
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Find out the upper and lower bounds.
###Code
# your code goes here
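# Hedged sketch (one possible answer): the usual 1.5*IQR whisker bounds, reusing the
# Q1, Q3 and IQR values computed in the cells near the top of this notebook.
lower_bound = Q1 - 1.5 * IQR
upper_bound = Q3 + 1.5 * IQR
print(lower_bound, upper_bound)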
###Output
_____no_output_____
###Markdown
Identify how many outliers there are in the `ConvertedComp` column.
###Code
# your code goes here
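# Hedged sketch (one possible answer): count rows falling outside the 1.5*IQR bounds.
outlier_mask = (df['ConvertedComp'] < Q1 - 1.5 * IQR) | (df['ConvertedComp'] > Q3 + 1.5 * IQR)
outlier_mask.sum()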
###Output
_____no_output_____
###Markdown
Create a new dataframe by removing the outliers from the `ConvertedComp` column.
###Code
# your code goes here
###Output
_____no_output_____
###Markdown
Correlation Finding correlation Find the correlation between `Age` and all other numerical columns.
###Code
# your code goes here
###Output
_____no_output_____ |
FindApheroidNotebook.ipynb | ###Markdown
Cell tracking and identificationWe analyze the images starting from the raw images. They are organized in the following order: - experiment - well no. - channel no. We call them from the trackingparr function.
###Code
from nd2reader import ND2Reader
###Output
_____no_output_____
###Markdown
Spheroid segmentationInitial step is to retrieve the spheroid coords so that we can compare them to the cell displacements and classify them. The first set of functions consists of workhorse functions that identify the well center and then crop away the left-over data points.This function saves all the files at the destination indicated in the SAVEPATH.
###Code
import DetermineCellState
DATAPATH = r'\\atlas.pasteur.fr\Multicell\Shreyansh\20191017\exp_matrigel_conc_Tcell_B16\50pc\TIFF\DataFrames'
PATH = r'\\atlas.pasteur.fr\Multicell\Shreyansh\20191017\exp_matrigel_conc_Tcell_B16\50pc\TIFF'
SAVEPATH = r'C:\Users\gronteix\Documents\Research\SpheroidPositionAnalysis\20191017\B1650pcMatrigel'
wellDiameter = 440
marginDistance = 160
aspectRatio = 3
CHANNEL = '2'
DetermineCellState._loopThroughExperiments(PATH, DATAPATH, SAVEPATH, CHANNEL, wellDiameter, marginDistance, aspectRatio)
###Output
_____no_output_____ |
Convolutional Neural Networks Prev/week4/Art+Generation+with+Neural+Style+Transfer+-+v2.ipynb | ###Markdown
Deep Learning & Art: Neural Style TransferWelcome to the second assignment of this week. In this assignment, you will learn about Neural Style Transfer. This algorithm was created by Gatys et al. (2015) (https://arxiv.org/abs/1508.06576). **In this assignment, you will:**- Implement the neural style transfer algorithm - Generate novel artistic images using your algorithm Most of the algorithms you've studied optimize a cost function to get a set of parameter values. In Neural Style Transfer, you'll optimize a cost function to get pixel values!
###Code
import os
import sys
import scipy.io
import scipy.misc
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
from nst_utils import *
import numpy as np
import tensorflow as tf
%matplotlib inline
###Output
_____no_output_____
###Markdown
1 - Problem StatementNeural Style Transfer (NST) is one of the most fun techniques in deep learning. As seen below, it merges two images, namely, a "content" image (C) and a "style" image (S), to create a "generated" image (G). The generated image G combines the "content" of the image C with the "style" of image S. In this example, you are going to generate an image of the Louvre museum in Paris (content image C), mixed with a painting by Claude Monet, a leader of the impressionist movement (style image S).Let's see how you can do this. 2 - Transfer LearningNeural Style Transfer (NST) uses a previously trained convolutional network, and builds on top of that. The idea of using a network trained on a different task and applying it to a new task is called transfer learning. Following the original NST paper (https://arxiv.org/abs/1508.06576), we will use the VGG network. Specifically, we'll use VGG-19, a 19-layer version of the VGG network. This model has already been trained on the very large ImageNet database, and thus has learned to recognize a variety of low level features (at the earlier layers) and high level features (at the deeper layers). Run the following code to load parameters from the VGG model. This may take a few seconds.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
print(model)
###Output
{'input': <tf.Variable 'Variable:0' shape=(1, 300, 400, 3) dtype=float32_ref>, 'conv1_1': <tf.Tensor 'Relu:0' shape=(1, 300, 400, 64) dtype=float32>, 'conv1_2': <tf.Tensor 'Relu_1:0' shape=(1, 300, 400, 64) dtype=float32>, 'avgpool1': <tf.Tensor 'AvgPool:0' shape=(1, 150, 200, 64) dtype=float32>, 'conv2_1': <tf.Tensor 'Relu_2:0' shape=(1, 150, 200, 128) dtype=float32>, 'conv2_2': <tf.Tensor 'Relu_3:0' shape=(1, 150, 200, 128) dtype=float32>, 'avgpool2': <tf.Tensor 'AvgPool_1:0' shape=(1, 75, 100, 128) dtype=float32>, 'conv3_1': <tf.Tensor 'Relu_4:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_2': <tf.Tensor 'Relu_5:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_3': <tf.Tensor 'Relu_6:0' shape=(1, 75, 100, 256) dtype=float32>, 'conv3_4': <tf.Tensor 'Relu_7:0' shape=(1, 75, 100, 256) dtype=float32>, 'avgpool3': <tf.Tensor 'AvgPool_2:0' shape=(1, 38, 50, 256) dtype=float32>, 'conv4_1': <tf.Tensor 'Relu_8:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_2': <tf.Tensor 'Relu_9:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_3': <tf.Tensor 'Relu_10:0' shape=(1, 38, 50, 512) dtype=float32>, 'conv4_4': <tf.Tensor 'Relu_11:0' shape=(1, 38, 50, 512) dtype=float32>, 'avgpool4': <tf.Tensor 'AvgPool_3:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_1': <tf.Tensor 'Relu_12:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_2': <tf.Tensor 'Relu_13:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_3': <tf.Tensor 'Relu_14:0' shape=(1, 19, 25, 512) dtype=float32>, 'conv5_4': <tf.Tensor 'Relu_15:0' shape=(1, 19, 25, 512) dtype=float32>, 'avgpool5': <tf.Tensor 'AvgPool_4:0' shape=(1, 10, 13, 512) dtype=float32>}
###Markdown
The model is stored in a python dictionary where each variable name is the key and the corresponding value is a tensor containing that variable's value. To run an image through this network, you just have to feed the image to the model. In TensorFlow, you can do so using the [tf.assign](https://www.tensorflow.org/api_docs/python/tf/assign) function. In particular, you will use the assign function like this: ```pythonmodel["input"].assign(image)```This assigns the image as an input to the model. After this, if you want to access the activations of a particular layer, say layer `4_2` when the network is run on this image, you would run a TensorFlow session on the correct tensor `conv4_2`, as follows: ```pythonsess.run(model["conv4_2"])``` 3 - Neural Style Transfer We will build the NST algorithm in three steps:- Build the content cost function $J_{content}(C,G)$- Build the style cost function $J_{style}(S,G)$- Put it together to get $J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$. 3.1 - Computing the content costIn our running example, the content image C will be the picture of the Louvre Museum in Paris. Run the code below to see a picture of the Louvre.
###Code
content_image = scipy.misc.imread("images/louvre.jpg")
imshow(content_image)
###Output
_____no_output_____
###Markdown
The content image (C) shows the Louvre museum's pyramid surrounded by old Paris buildings, against a sunny sky with a few clouds.** 3.1.1 - How do you ensure the generated image G matches the content of the image C?**As we saw in lecture, the earlier (shallower) layers of a ConvNet tend to detect lower-level features such as edges and simple textures, and the later (deeper) layers tend to detect higher-level features such as more complex textures as well as object classes. We would like the "generated" image G to have similar content as the input image C. Suppose you have chosen some layer's activations to represent the content of an image. In practice, you'll get the most visually pleasing results if you choose a layer in the middle of the network--neither too shallow nor too deep. (After you have finished this exercise, feel free to come back and experiment with using different layers, to see how the results vary.)So, suppose you have picked one particular hidden layer to use. Now, set the image C as the input to the pretrained VGG network, and run forward propagation. Let $a^{(C)}$ be the hidden layer activations in the layer you had chosen. (In lecture, we had written this as $a^{[l](C)}$, but here we'll drop the superscript $[l]$ to simplify the notation.) This will be a $n_H \times n_W \times n_C$ tensor. Repeat this process with the image G: Set G as the input, and run forward progation. Let $$a^{(G)}$$ be the corresponding hidden layer activation. We will define as the content cost function as:$$J_{content}(C,G) = \frac{1}{4 \times n_H \times n_W \times n_C}\sum _{ \text{all entries}} (a^{(C)} - a^{(G)})^2\tag{1} $$Here, $n_H, n_W$ and $n_C$ are the height, width and number of channels of the hidden layer you have chosen, and appear in a normalization term in the cost. For clarity, note that $a^{(C)}$ and $a^{(G)}$ are the volumes corresponding to a hidden layer's activations. In order to compute the cost $J_{content}(C,G)$, it might also be convenient to unroll these 3D volumes into a 2D matrix, as shown below. (Technically this unrolling step isn't needed to compute $J_{content}$, but it will be good practice for when you do need to carry out a similar operation later for computing the style const $J_{style}$.)**Exercise:** Compute the "content cost" using TensorFlow. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll a_C and a_G as explained in the picture above - If you are stuck, take a look at [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape).3. Compute the content cost: - If you are stuck, take a look at [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract).
###Code
# GRADED FUNCTION: compute_content_cost
def compute_content_cost(a_C, a_G):
"""
Computes the content cost
Arguments:
a_C -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image C
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing content of the image G
Returns:
J_content -- scalar that you compute using equation 1 above.
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape a_C and a_G (≈2 lines)
a_C_unrolled = tf.transpose(tf.reshape(a_C, [n_H * n_W, n_C]))
a_G_unrolled = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# compute the cost with tensorflow (≈1 line)
J_content = tf.reduce_sum(tf.square(tf.subtract(a_C_unrolled,a_G_unrolled))) / (4 * n_H * n_W * n_C)
### END CODE HERE ###
return J_content
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_C = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_content = compute_content_cost(a_C, a_G)
print("J_content = " + str(J_content.eval()))
###Output
J_content = 6.76559
###Markdown
**Expected Output**: **J_content** 6.76559 **What you should remember**:- The content cost takes a hidden layer activation of the neural network, and measures how different $a^{(C)}$ and $a^{(G)}$ are. - When we minimize the content cost later, this will help make sure $G$ has similar content as $C$. 3.2 - Computing the style costFor our running example, we will use the following style image:
###Code
style_image = scipy.misc.imread("images/monet_800600.jpg")
imshow(style_image)
###Output
_____no_output_____
###Markdown
This painting was painted in the style of *[impressionism](https://en.wikipedia.org/wiki/Impressionism)*.Lets see how you can now define a "style" const function $J_{style}(S,G)$. 3.2.1 - Style matrixThe style matrix is also called a "Gram matrix." In linear algebra, the Gram matrix G of a set of vectors $(v_{1},\dots ,v_{n})$ is the matrix of dot products, whose entries are ${\displaystyle G_{ij} = v_{i}^T v_{j} = np.dot(v_{i}, v_{j}) }$. In other words, $G_{ij}$ compares how similar $v_i$ is to $v_j$: If they are highly similar, you would expect them to have a large dot product, and thus for $G_{ij}$ to be large. Note that there is an unfortunate collision in the variable names used here. We are following common terminology used in the literature, but $G$ is used to denote the Style matrix (or Gram matrix) as well as to denote the generated image $G$. We will try to make sure which $G$ we are referring to is always clear from the context. In NST, you can compute the Style matrix by multiplying the "unrolled" filter matrix with their transpose:The result is a matrix of dimension $(n_C,n_C)$ where $n_C$ is the number of filters. The value $G_{ij}$ measures how similar the activations of filter $i$ are to the activations of filter $j$. One important part of the gram matrix is that the diagonal elements such as $G_{ii}$ also measures how active filter $i$ is. For example, suppose filter $i$ is detecting vertical textures in the image. Then $G_{ii}$ measures how common vertical textures are in the image as a whole: If $G_{ii}$ is large, this means that the image has a lot of vertical texture. By capturing the prevalence of different types of features ($G_{ii}$), as well as how much different features occur together ($G_{ij}$), the Style matrix $G$ measures the style of an image. **Exercise**:Using TensorFlow, implement a function that computes the Gram matrix of a matrix A. The formula is: The gram matrix of A is $G_A = AA^T$. If you are stuck, take a look at [Hint 1](https://www.tensorflow.org/api_docs/python/tf/matmul) and [Hint 2](https://www.tensorflow.org/api_docs/python/tf/transpose).
###Code
# GRADED FUNCTION: gram_matrix
def gram_matrix(A):
"""
Argument:
A -- matrix of shape (n_C, n_H*n_W)
Returns:
GA -- Gram matrix of A, of shape (n_C, n_C)
"""
### START CODE HERE ### (≈1 line)
GA = tf.matmul(A, tf.transpose(A))
### END CODE HERE ###
return GA
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
A = tf.random_normal([3, 2*1], mean=1, stddev=4)
GA = gram_matrix(A)
print("GA = " + str(GA.eval()))
###Output
GA = [[ 6.42230511 -4.42912197 -2.09668207]
[ -4.42912197 19.46583748 19.56387138]
[ -2.09668207 19.56387138 20.6864624 ]]
###Markdown
**Expected Output**: **GA** [[ 6.42230511 -4.42912197 -2.09668207] [ -4.42912197 19.46583748 19.56387138] [ -2.09668207 19.56387138 20.6864624 ]] 3.2.2 - Style cost After generating the Style matrix (Gram matrix), your goal will be to minimize the distance between the Gram matrix of the "style" image S and that of the "generated" image G. For now, we are using only a single hidden layer $a^{[l]}$, and the corresponding style cost for this layer is defined as: $$J_{style}^{[l]}(S,G) = \frac{1}{4 \times {n_C}^2 \times (n_H \times n_W)^2} \sum _{i=1}^{n_C}\sum_{j=1}^{n_C}(G^{(S)}_{ij} - G^{(G)}_{ij})^2\tag{2} $$where $G^{(S)}$ and $G^{(G)}$ are respectively the Gram matrices of the "style" image and the "generated" image, computed using the hidden layer activations for a particular hidden layer in the network. **Exercise**: Compute the style cost for a single layer. **Instructions**: The 3 steps to implement this function are:1. Retrieve dimensions from the hidden layer activations a_G: - To retrieve dimensions from a tensor X, use: `X.get_shape().as_list()`2. Unroll the hidden layer activations a_S and a_G into 2D matrices, as explained in the picture above. - You may find [Hint1](https://www.tensorflow.org/versions/r1.3/api_docs/python/tf/transpose) and [Hint2](https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/reshape) useful.3. Compute the Style matrix of the images S and G. (Use the function you had previously written.) 4. Compute the Style cost: - You may find [Hint3](https://www.tensorflow.org/api_docs/python/tf/reduce_sum), [Hint4](https://www.tensorflow.org/api_docs/python/tf/square) and [Hint5](https://www.tensorflow.org/api_docs/python/tf/subtract) useful.
###Code
# GRADED FUNCTION: compute_layer_style_cost
def compute_layer_style_cost(a_S, a_G):
"""
Arguments:
a_S -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image S
a_G -- tensor of dimension (1, n_H, n_W, n_C), hidden layer activations representing style of the image G
Returns:
J_style_layer -- tensor representing a scalar value, style cost defined above by equation (2)
"""
### START CODE HERE ###
# Retrieve dimensions from a_G (≈1 line)
m, n_H, n_W, n_C = a_G.get_shape().as_list()
# Reshape the images to have them of shape (n_C, n_H*n_W) (≈2 lines)
a_S = tf.transpose(tf.reshape(a_S, [n_H * n_W, n_C]))
a_G = tf.transpose(tf.reshape(a_G, [n_H * n_W, n_C]))
# Computing gram_matrices for both images S and G (≈2 lines)
GS = gram_matrix(a_S)
GG = gram_matrix(a_G)
# Computing the loss (≈1 line)
J_style_layer = tf.reduce_sum(tf.square(tf.subtract(GS, GG))) / (4 * n_C **2 * (n_W * n_H) ** 2)
### END CODE HERE ###
return J_style_layer
tf.reset_default_graph()
with tf.Session() as test:
tf.set_random_seed(1)
a_S = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
a_G = tf.random_normal([1, 4, 4, 3], mean=1, stddev=4)
J_style_layer = compute_layer_style_cost(a_S, a_G)
print("J_style_layer = " + str(J_style_layer.eval()))
###Output
J_style_layer = 9.19028
###Markdown
**Expected Output**: **J_style_layer** 9.19028 3.2.3 Style WeightsSo far you have captured the style from only one layer. We'll get better results if we "merge" style costs from several different layers. After completing this exercise, feel free to come back and experiment with different weights to see how it changes the generated image $G$. But for now, this is a pretty reasonable default:
###Code
STYLE_LAYERS = [
('conv1_1', 0.2),
('conv2_1', 0.2),
('conv3_1', 0.2),
('conv4_1', 0.2),
('conv5_1', 0.2)]
###Output
_____no_output_____
###Markdown
You can combine the style costs for different layers as follows:$$J_{style}(S,G) = \sum_{l} \lambda^{[l]} J^{[l]}_{style}(S,G)$$where the values for $\lambda^{[l]}$ are given in `STYLE_LAYERS`. We've implemented a compute_style_cost(...) function. It simply calls your `compute_layer_style_cost(...)` several times, and weights their results using the values in `STYLE_LAYERS`. Read over it to make sure you understand what it's doing. <!-- 2. Loop over (layer_name, coeff) from STYLE_LAYERS: a. Select the output tensor of the current layer. As an example, to call the tensor from the "conv1_1" layer you would do: out = model["conv1_1"] b. Get the style of the style image from the current layer by running the session on the tensor "out" c. Get a tensor representing the style of the generated image from the current layer. It is just "out". d. Now that you have both styles. Use the function you've implemented above to compute the style_cost for the current layer e. Add (style_cost x coeff) of the current layer to overall style cost (J_style)3. Return J_style, which should now be the sum of the (style_cost x coeff) for each layer.!-->
###Code
def compute_style_cost(model, STYLE_LAYERS):
"""
Computes the overall style cost from several chosen layers
Arguments:
model -- our tensorflow model
STYLE_LAYERS -- A python list containing:
- the names of the layers we would like to extract style from
- a coefficient for each of them
Returns:
J_style -- tensor representing a scalar value, style cost defined above by equation (2)
"""
# initialize the overall style cost
J_style = 0
for layer_name, coeff in STYLE_LAYERS:
# Select the output tensor of the currently selected layer
out = model[layer_name]
# Set a_S to be the hidden layer activation from the layer we have selected, by running the session on out
a_S = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model[layer_name]
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute style_cost for the current layer
J_style_layer = compute_layer_style_cost(a_S, a_G)
# Add coeff * J_style_layer of this layer to overall style cost
J_style += coeff * J_style_layer
return J_style
###Output
_____no_output_____
###Markdown
**Note**: In the inner-loop of the for-loop above, `a_G` is a tensor and hasn't been evaluated yet. It will be evaluated and updated at each iteration when we run the TensorFlow graph in model_nn() below.<!-- How do you choose the coefficients for each layer? The deeper layers capture higher-level concepts, and the features in the deeper layers are less localized in the image relative to each other. So if you want the generated image to softly follow the style image, try choosing larger weights for deeper layers and smaller weights for the first layers. In contrast, if you want the generated image to strongly follow the style image, try choosing smaller weights for deeper layers and larger weights for the first layers!-->**What you should remember**:- The style of an image can be represented using the Gram matrix of a hidden layer's activations. However, we get even better results combining this representation from multiple different layers. This is in contrast to the content representation, where usually using just a single hidden layer is sufficient.- Minimizing the style cost will cause the image $G$ to follow the style of the image $S$. 3.3 - Defining the total cost to optimize Finally, let's create a cost function that minimizes both the style and the content cost. The formula is: $$J(G) = \alpha J_{content}(C,G) + \beta J_{style}(S,G)$$**Exercise**: Implement the total cost function which includes both the content cost and the style cost.
###Code
# GRADED FUNCTION: total_cost
def total_cost(J_content, J_style, alpha = 10, beta = 40):
"""
Computes the total cost function
Arguments:
J_content -- content cost coded above
J_style -- style cost coded above
alpha -- hyperparameter weighting the importance of the content cost
beta -- hyperparameter weighting the importance of the style cost
Returns:
J -- total cost as defined by the formula above.
"""
### START CODE HERE ### (≈1 line)
J = alpha * J_content + beta*J_style
### END CODE HERE ###
return J
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(3)
J_content = np.random.randn()
J_style = np.random.randn()
J = total_cost(J_content, J_style)
print("J = " + str(J))
###Output
J = 35.34667875478276
###Markdown
**Expected Output**: **J** 35.34667875478276 **What you should remember**:- The total cost is a linear combination of the content cost $J_{content}(C,G)$ and the style cost $J_{style}(S,G)$- $\alpha$ and $\beta$ are hyperparameters that control the relative weighting between content and style 4 - Solving the optimization problem Finally, let's put everything together to implement Neural Style Transfer!Here's what the program will have to do:1. Create an Interactive Session2. Load the content image 3. Load the style image4. Randomly initialize the image to be generated 5. Load the VGG16 model7. Build the TensorFlow graph: - Run the content image through the VGG16 model and compute the content cost - Run the style image through the VGG16 model and compute the style cost - Compute the total cost - Define the optimizer and the learning rate8. Initialize the TensorFlow graph and run it for a large number of iterations, updating the generated image at every step.Lets go through the individual steps in detail. You've previously implemented the overall cost $J(G)$. We'll now set up TensorFlow to optimize this with respect to $G$. To do so, your program has to reset the graph and use an "[Interactive Session](https://www.tensorflow.org/api_docs/python/tf/InteractiveSession)". Unlike a regular session, the "Interactive Session" installs itself as the default session to build a graph. This allows you to run variables without constantly needing to refer to the session object, which simplifies the code. Lets start the interactive session.
###Code
# Reset the graph
tf.reset_default_graph()
# Start interactive session
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Let's load, reshape, and normalize our "content" image (the Louvre museum picture):
###Code
content_image = scipy.misc.imread("images/louvre_small.jpg")
content_image = reshape_and_normalize_image(content_image)
###Output
_____no_output_____
###Markdown
Let's load, reshape and normalize our "style" image (Claude Monet's painting):
###Code
style_image = scipy.misc.imread("images/monet.jpg")
style_image = reshape_and_normalize_image(style_image)
###Output
_____no_output_____
###Markdown
Now, we initialize the "generated" image as a noisy image created from the content_image. By initializing the pixels of the generated image to be mostly noise but still slightly correlated with the content image, this will help the content of the "generated" image more rapidly match the content of the "content" image. (Feel free to look in `nst_utils.py` to see the details of `generate_noise_image(...)`; to do so, click "File-->Open..." at the upper-left corner of this Jupyter notebook.)
###Code
generated_image = generate_noise_image(content_image)
imshow(generated_image[0])
###Output
_____no_output_____
###Markdown
Next, as explained in part (2), let's load the VGG16 model.
###Code
model = load_vgg_model("pretrained-model/imagenet-vgg-verydeep-19.mat")
###Output
_____no_output_____
###Markdown
To get the program to compute the content cost, we will now assign `a_C` and `a_G` to be the appropriate hidden layer activations. We will use layer `conv4_2` to compute the content cost. The code below does the following:1. Assign the content image to be the input to the VGG model.2. Set a_C to be the tensor giving the hidden layer activation for layer "conv4_2".3. Set a_G to be the tensor giving the hidden layer activation for the same layer. 4. Compute the content cost using a_C and a_G.
###Code
# Assign the content image to be the input of the VGG model.
sess.run(model['input'].assign(content_image))
# Select the output tensor of layer conv4_2
out = model['conv4_2']
# Set a_C to be the hidden layer activation from the layer we have selected
a_C = sess.run(out)
# Set a_G to be the hidden layer activation from same layer. Here, a_G references model['conv4_2']
# and isn't evaluated yet. Later in the code, we'll assign the image G as the model input, so that
# when we run the session, this will be the activations drawn from the appropriate layer, with G as input.
a_G = out
# Compute the content cost
J_content = compute_content_cost(a_C, a_G)
###Output
_____no_output_____
###Markdown
**Note**: At this point, a_G is a tensor and hasn't been evaluated. It will be evaluated and updated at each iteration when we run the Tensorflow graph in model_nn() below.
###Code
# Assign the input of the model to be the "style" image
sess.run(model['input'].assign(style_image))
# Compute the style cost
J_style = compute_style_cost(model, STYLE_LAYERS)
###Output
_____no_output_____
###Markdown
**Exercise**: Now that you have J_content and J_style, compute the total cost J by calling `total_cost()`. Use `alpha = 10` and `beta = 40`.
###Code
### START CODE HERE ### (1 line)
J = total_cost(J_content, J_style, 10, 40)
### END CODE HERE ###
###Output
_____no_output_____
###Markdown
You'd previously learned how to set up the Adam optimizer in TensorFlow. Lets do that here, using a learning rate of 2.0. [See reference](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
###Code
# define optimizer (1 line)
optimizer = tf.train.AdamOptimizer(2.0)
# define train_step (1 line)
train_step = optimizer.minimize(J)
###Output
_____no_output_____
###Markdown
**Exercise**: Implement the model_nn() function which initializes the variables of the tensorflow graph, assigns the input image (initial generated image) as the input of the VGG16 model and runs the train_step for a large number of steps.
###Code
def model_nn(sess, input_image, num_iterations = 200):
# Initialize global variables (you need to run the session on the initializer)
### START CODE HERE ### (1 line)
sess.run(tf.global_variables_initializer())
### END CODE HERE ###
# Run the noisy input image (initial generated image) through the model. Use assign().
### START CODE HERE ### (1 line)
sess.run(model['input'].assign(input_image))
### END CODE HERE ###
for i in range(num_iterations):
# Run the session on the train_step to minimize the total cost
### START CODE HERE ### (1 line)
_ = sess.run(train_step)
### END CODE HERE ###
# Compute the generated image by running the session on the current model['input']
### START CODE HERE ### (1 line)
generated_image = sess.run(model['input'])
### END CODE HERE ###
# Print every 20 iteration.
if i%20 == 0:
Jt, Jc, Js = sess.run([J, J_content, J_style])
print("Iteration " + str(i) + " :")
print("total cost = " + str(Jt))
print("content cost = " + str(Jc))
print("style cost = " + str(Js))
# save current generated image in the "/output" directory
save_image("output/" + str(i) + ".png", generated_image)
# save last generated image
save_image('output/generated_image.jpg', generated_image)
return generated_image
###Output
_____no_output_____
###Markdown
Run the following cell to generate an artistic image. It should take about 3min on CPU for every 20 iterations but you start observing attractive results after ≈140 iterations. Neural Style Transfer is generally trained using GPUs.
###Code
model_nn(sess, generated_image)
###Output
Iteration 0 :
total cost = 5.05035e+09
content cost = 7877.67
style cost = 1.26257e+08
Iteration 20 :
total cost = 9.43276e+08
content cost = 15186.9
style cost = 2.35781e+07
Iteration 40 :
total cost = 4.84898e+08
content cost = 16785.0
style cost = 1.21183e+07
Iteration 60 :
total cost = 3.12574e+08
content cost = 17465.8
style cost = 7.80998e+06
Iteration 80 :
total cost = 2.28137e+08
content cost = 17715.0
style cost = 5.699e+06
Iteration 100 :
total cost = 1.80694e+08
content cost = 17895.5
style cost = 4.51288e+06
Iteration 120 :
total cost = 1.49996e+08
content cost = 18034.4
style cost = 3.74539e+06
Iteration 140 :
total cost = 1.27698e+08
content cost = 18186.8
style cost = 3.18791e+06
Iteration 160 :
total cost = 1.10698e+08
content cost = 18354.2
style cost = 2.76287e+06
Iteration 180 :
total cost = 9.73408e+07
content cost = 18500.9
style cost = 2.4289e+06
|
6_Figures/.ipynb_checkpoints/Fig3_Horizon_Years_distribution-checkpoint.ipynb | ###Markdown
Publication year distribution
###Code
path <- "../2_Treatment_database/output/database_one_row_each_paper.csv"
df <- read_csv(path)
sprintf("%i x %i dataframe", nrow(df), ncol(df))
head(df,1)
df_pub <- df %>%
select(publication_year,horizon_year)%>%
#for easier reading aggregate all pub years up to 2002 into 2002
mutate(pub_year = ifelse(publication_year <= 2002, 2002, publication_year),
hor_year = ifelse(horizon_year <= 2030, "[2025;2030]",
ifelse(horizon_year > 2050, "]2050;2100]", "]2030;2050]")))%>%
#calculate the sum of publi by region
group_by(pub_year,hor_year) %>%
summarise('number'=n()) %>%
ungroup() %>%
#calculate percentage for column labels
mutate('relative'=unlist(by(data = number, INDICES = pub_year,
FUN = function(x) round(x/sum(x)*100, digits = 0)))) %>%
mutate(Estimated = ifelse(pub_year == 2002, "aggregated", "yearly"))
plot_pub <- ggplot(data = df_pub,aes(x=pub_year,y=number,fill=hor_year, pattern = Estimated)) +
ggtitle('a) Horizon year per publication year')+
geom_bar_pattern(stat="identity",
color = "black",
pattern_fill = "black",
pattern_angle = 45,
pattern_density = 0.1,
pattern_spacing = 0.02,
pattern_key_scale_factor = 0.2) +
scale_pattern_manual(values = c(aggregated = "stripe", yearly = "none")) +
labs(x = " \n Publication Year", y = "Number of Papers \n ", fill = "Horizon Year",
pattern = "Level"
) +
guides(pattern = FALSE, fill = guide_legend(override.aes = list(pattern = "none")))+
geom_vline(xintercept= 2002.5, linetype="dashed", size=0.5)+
annotate("text", x = 2002, y = 200, label = "Until 2002",angle = 90) +
xlab(" \n Publication Year")+
ylab("Number of Papers \n ")+
geom_text(data = subset(df_pub,pub_year ==2002 & relative >=15),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2007 & pub_year <=2013 & relative >=15),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2014 & pub_year <=2016 & relative >=5),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_pub,pub_year >=2017 & pub_year <=2020 & relative >=2),
aes(x = pub_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
theme_minimal()+
theme(
plot.title = element_text(size = rel(2)),
legend.title = element_text(size = 16,face ="bold"),
legend.text = element_text(size = 16),
legend.position = 'top',
axis.text.x = element_text(size = 16),
axis.text.y = element_text(size = 16),
axis.title.x = element_text(size = 16, hjust = 0.5,face ="bold"),
axis.title.y = element_text(size = 16, hjust = 0.5,face ="bold")
)
###Output
_____no_output_____
###Markdown
Horizon year distribution
###Code
df_hor <- df %>%
select(horizon_year,Region)%>%
mutate(hor_year = ifelse(horizon_year >=2026 & horizon_year <= 2029, 2027,
ifelse(horizon_year >=2031 & horizon_year <= 2039, 2035,
ifelse(horizon_year >=2041 & horizon_year <= 2049, 2045,
ifelse(horizon_year >=2051 & horizon_year <= 2099, 2075,horizon_year)))),
hor_year = as.character(hor_year),
Region=factor(Region, levels = c('Antarctica','Oceania','Africa','Latin America',
'North America','European Union','Europe','Asia'))) %>%
#calculate the sum of publi by region
group_by(hor_year,Region) %>%
summarise('number'=n()) %>%
ungroup() %>%
#calculate percentage for column labels
mutate('relative'=unlist(by(data = number, INDICES = hor_year,
FUN = function(x) round(x/sum(x)*100, digits = 0))))%>%
mutate(Estimated = ifelse(hor_year == 2027 | hor_year == 2035 | hor_year == 2045 | hor_year == 2075, "aggregated", "yearly"))
head(df_hor,2)
options(repr.plot.width=12, repr.plot.height=10)
plot_hor <- ggplot(data = df_hor, aes(x=hor_year,y=number,fill=Region, pattern = Estimated)) +
ggtitle('b) Regional distribution per horizon year')+
geom_bar_pattern(stat="identity",
color = "black",
pattern_fill = "black",
pattern_angle = 45,
pattern_density = 0.1,
pattern_spacing = 0.02,
pattern_key_scale_factor = 0.2) +
scale_pattern_manual(values = c(aggregated = "stripe", yearly = "none")) +
labs(x = " \n Horizon Year", y = "Number of Papers \n ", fill = "Region", pattern = "Level") +
guides(pattern = FALSE, fill = guide_legend(override.aes = list(pattern = "none")))+
scale_x_discrete(labels = c("2025","","2030","","2040","","2050","","2100")) +
scale_fill_manual(values=c('Asia'='darkorange',
'European Union'='#7CAE00',
'Europe'='seagreen4',
'North America'='darkblue',
'Latin America'='dodgerblue2',
'Africa'='orchid',
'Oceania'='coral2',
'Antarctica'='#CAB2D6')) +
geom_text(data = subset(df_hor,hor_year != 2030 & hor_year !=2050 & hor_year !=2027 & hor_year !=2045 &
relative >=15),
aes(x = hor_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
geom_text(data = subset(df_hor,hor_year == 2030 | hor_year ==2050),
aes(x = hor_year, label = paste0(relative,'%')),
colour = 'black', position=position_stack(vjust=0.5))+
theme_minimal()+
theme(
plot.title = element_text(size = rel(2)),
legend.title = element_text(size = 16,face ="bold"),
legend.text = element_text(size = 16),
legend.position = 'top',
axis.text.x = element_text(size = 16),
axis.text.y = element_text(size = 16),
axis.title.x = element_text(size = 16, hjust = 0.5,face ="bold"),
axis.title.y = element_text(size = 16, hjust = 0.5,face ="bold")
)
plot_hor
options(repr.plot.width=20, repr.plot.height=10)
plot <- ggarrange(plot_pub, plot_hor,
widths = c(5,4),
#common.legend = FALSE,
#legend = "bottom",
ncol=2, nrow = 1
)
plot
ggsave('./output/Fig3_distribution_years.png', height=10, width=20, plot=plot)
###Output
_____no_output_____ |
STAT641Handout9.ipynb | ###Markdown
Handout 9
###Code
#Chi-square goodness of Fit test
#Kolmogorov-Smirnov (K-S) Measure
#evaluating Fit to the chicken Data
#Cramer-von Mises (CvM) Measure
#Anderson Darling (AD) Measure
#replicate gofnormex.R in python
from scipy.stats import norm
from math import sqrt, log
L = sorted([156,162,168,182,186,190,190,196,202,210,214,220,226,230,230,236,236,242,246,270])
n, m, a = 20, 200, 35
z = norm.cdf(L,m,a)
i = list(range(1, n + 1))
print(i)
print(z)
# K-S Computations
d1 = [i/n - z for i, z in zip(i,z)]
dp = max(d1)
d2 = [z - (i -1)/n for i, z in zip(i,z)]
dm = max(d2)
ks = max(dp,dm)
KS = ks*(sqrt(n) + .12+.11/sqrt(n))
#look into formatting values
print("KS Statistic: " + str(KS))
#reject normality at 0.05 level if KS > 1.358
# Cramer-von Mises
wi = [(z-(2*i-1)/(2*n))**2 for i, z in zip(i,z)]
s = sum(wi)
cvm = s + 1/(12*n)
CvM = (cvm - .4/n + .6/n**2)*(1+1/n)
print("CvM: " + str(CvM))
#Anderson-Darling Computations
ali = [(2*i-1)*log(z) for i, z in zip(i,z)]
print(ali)
a2i = [(2*n+1-2*i)*log(1-z) for i, z in zip(i,z)]
#print(a2i)
s1 = sum(ali)
#print(s1)
s2 = sum(a2i)
#print(s2)
AD = -n-(1/n)*(s1+s2)
#AD = -n-(1/n)*(-144-276)
print("AD: " + str(AD))
#functions to do the same thing as above?
#Shapiro Wilk Test
# Correlation Test
from scipy.stats import norm
L = sorted([156,162,168,182,186,190,190,196,202,210,214,220,226,230,230,236,236,242,246,270])
n = len(L)
i = list(range(1,n+1))
u = [(i-.375)/(n+.25) for i in range(1,n+1)] # Blom plotting positions (i - 3/8)/(n + 1/4)
q = norm.ppf(u)
#correlation test - turn formula on pg 28 into a function?
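# Hedged sketch (not from the handout): one way to carry out the two tests named above.
# The probability-plot correlation test is taken here to be the Pearson correlation
# between the ordered sample L and the Blom normal quantiles q computed just above;
# scipy's shapiro() gives the Shapiro-Wilk statistic and p-value.
import numpy as np
from scipy.stats import shapiro
r_corr = np.corrcoef(L, q)[0, 1]
print("probability-plot correlation r =", r_corr)
print("Shapiro-Wilk:", shapiro(L))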
#Modified for the Exponential Distribution
from math import log, exp
w = sorted([12,21,26,27,29,29,48,57,59,70,74,153,326,386,502])
n = len(w)
lam = sum(w)/n
z = [1-exp(-x/lam) for x in w] #computes F0(X(i))
i = list(range(1,n + 1))
# K-S Computations:
d1 = [j/n - a for j, a in zip(i,z)]
dp = max(d1)
d2 = [a - (j - 1)/n for j, a in zip(i,z)]
dm = max(d2)
KS = max(dp,dm)
KSM = (KS-.2/n)*(sqrt(n)+.26+.5/sqrt(n))
print(KSM)
# Cramer-von Mises Computations:
wi = [(a-(2*j-1)/(2*n))**2 for j, a in zip(i,z)]
s = sum(wi)
cvm = s + 1/(12*n)
cvmM = cvm*(1+.16/n)
print(cvmM)
# Anderson-Darling Computations:
a1i = [(2*j-1)*log(a) for j, a in zip(i,z)]
a2i = [(2*n+1-2*j)*log(1-a) for j, a in zip(i,z)]
s1 = sum(a1i)
s2 = sum(a2i)
AD = -n-(1/n)*(s1+s2)
ADM = AD*(1+.6/n)
print(ADM)
# R code (from the handout) to find the Weibull MLEs via fitdistr; a Python sketch follows below:
library(MASS)
x <- c(
17.88 , 28.92 , 33.00 , 41.52 , 42.12 , 45.60 , 48.40, 51.84 ,
51.96 , 54.12 , 55.56 , 67.80 , 68.64 , 68.64 , 68.88 , 84.12 ,
93.12 , 98.64 , 105.12 , 105.84 , 127.92 , 128.04 , 173.40)
fitdistr(x,"weibull")
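# Hedged Python sketch of the R fitdistr(x, "weibull") call above. scipy parameterizes
# the Weibull as weibull_min(c=shape, loc, scale); fixing loc=0 gives the two-parameter
# fit, so shape_mle and scale_mle should correspond to fitdistr's shape and scale.
from scipy.stats import weibull_min
x_py = [17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.40, 51.84,
        51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
        93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40]
shape_mle, loc_mle, scale_mle = weibull_min.fit(x_py, floc=0)
print("Weibull MLEs: shape =", shape_mle, ", scale =", scale_mle)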
# convert gofweibmle.r to gofweibmle.py
# The following program computes the Anderson-Darling Statistics
# for testing goodness of the fit of a
# Weibull Distribution
# with unspecified parameters (need to supply MLE's).
# The statistics include the modification needed to use the Tables included
# in the GOF handout.
# This example is based on a random sample of n=23 observations:
x = c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.40, 51.84,
51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40)
n = length(x)
i = seq(1,n,1)
y = -log(x)
y = sort(y)
# Anderson-Darling: For Weibull Model
library(MASS)
mle <- fitdistr(x,"weibull")
shape = mle$estimate[1]
scale = mle$estimate[2]
a = -log(scale)
b = 1/shape
z = exp(-exp(-(y-a)/b))
A1i = (2*i-1)*log(z)
A2i = (2*n+1-2*i)*log(1-z)
s1 = sum(A1i)
s2 = sum(A2i)
AD = -n-(1/n)*(s1+s2)
ADM = AD*(1+.2/sqrt(n))
AD
ADM
n
n = length(y)
weib= -y
weib= sort(weib)
i= 1:n
ui= (i-.5)/n
QW= log(-log(1-ui))
plot(QW,weib,abline(lm(weib~QW)),
main="Weibull Reference Plot",cex=.75,lab=c(7,11,7),
xlab="Q=ln(-ln(1-ui))",
ylab="y=ln(W(i))")
legend(-3.5,5.0,"y=4.388+.4207Q")
legend(-3.5,4.7,"AD=.3721, p-value>.25")
# boxcox,samozone.R (R source) to be converted to boxcox_samozone.py; a Python sketch follows below
y = scan("u:/meth1/sfiles/ozone1.DAT")
n = length(y)
yt0 = log(y)
s = sum(yt0)
varyt0 = var(yt0)
Lt0 = -1*s - .5*n*(log(2*pi*varyt0)+1)
th = 0
Lt = 0
t = -3.01
i = 0
while(t < 3)
{t = t+.001
i = i+1
th[i] = t
yt = (y^t -1)/t
varyt = var(yt)
Lt[i] = (t-1)*s - .5*n*(log(2*pi*varyt)+1)
if(abs(th[i])<1.0e-10)Lt[i]<-Lt0
if(abs(th[i])<1.0e-10)th[i]<-0
}
# The following outputs the values of the likelihood and theta and yields
# the value of theta where likelihood is a maximum
out = cbind(th,Lt)
Ltmax= max(Lt)
imax= which(Lt==max(Lt))
thmax= th[imax]
postscript("boxcox,plotsam.ps",height=8,horizontal=FALSE)
plot(th,Lt,lab=c(30,50,7),main="Box-Cox Transformations",
xlab=expression(theta),
ylab=expression(Lt(theta)))
#the following plots a 95\% c.i. for theta
cic = Ltmax-.5*qchisq(.95,1)
del= .01
iLtci = which(abs(Lt-cic)<=del)
iLtciL= min(iLtci)
iLtciU= max(iLtci)
thLci= th[iLtciL]
thUci= th[iLtciU]
abline(h=cic)
abline(v=thLci)
abline(v=thUci)
abline(v=thmax)
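# Hedged Python sketch of the Box-Cox likelihood search above: scipy.stats.boxcox returns
# the transformed data, the MLE of lambda and (when alpha is given) a confidence interval.
# It is illustrated on the Weibull sample x_py defined earlier, since the ozone data file
# path used above is machine-specific.
import numpy as np
from scipy import stats as sps
yt_bc, lam_hat, lam_ci = sps.boxcox(np.array(x_py), alpha=0.05)
print("Box-Cox lambda MLE:", lam_hat, " approx 95% CI:", lam_ci)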
#Reference distributions
qqnorm(x,main="Normal Prob Plots of Samford Ozone Data",
xlab="normal quantiles",ylab="ozone concentration",cex=.65)
qqline(x)
text(-2,200,"SW=.9288")
text(-2,190,"p-value=0")
y1= log(x)
y2= x^.23
y3= x^.5
s = shapiro.test(x)
s1 = shapiro.test(y1)
s2 = shapiro.test(y2)
s3 = shapiro.test(y3)
qqnorm(y2,main="Normal Prob Plots of Samford Ozone Data with (Ozone)^.23",
xlab="normal quantiles",ylab=expression(Ozone^.23),cex=.65)
qqline(y2)
text(-2,3.5,"SW=.9872")
text(-2,3.4,"p-value=.2382")
qqnorm(y1,main="Normal Prob Plots of Samford Ozone Data with Log(Ozone)",
xlab="normal quantiles",ylab="Log(Ozone)",cex=.65)
qqline(y1)
text(-2,5.0,"SW=.9806")
text(-2,4.85,"p-value=.0501")
qqnorm(y3,main="Normal Prob Plots of Samford Ozone Data with SQRT(Ozone)",
xlab="normal quantiles",ylab=expression(Ozone^.5),cex=.65)
qqline(y3)
text(-2,14.5,"SW=.9789")
text(-2,13.5,"p-value=.0501")
###Output
_____no_output_____ |
PCOS.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.__version__
from google.colab import files
uploaded = files.upload()
pcos = pd.read_csv('pcos-data.csv')
pcos.head(5)
pcos.drop('Patient File No.', inplace=True, axis=1)
pcos.drop('Sl. No', inplace=True, axis=1)
pcos.drop('Unnamed: 42', inplace=True, axis=1)
pcos.head(5)
pcos.info()
pcos.isnull().sum()
pcos =pcos.dropna()
pcos.isnull().sum()
pcos.info()
for column in pcos:
columnSeriesObj = pcos[column]
pcos[column] = pd.to_numeric(pcos[column], errors='coerce')
sns.pairplot(pcos.iloc[:,1:5])
def plot_hist(variable):
plt.figure(figsize = (9,3))
plt.hist(pcos[variable], bins = 50)
plt.xlabel(variable)
plt.ylabel("Frequency")
plt.title("{} distribution with hist".format(variable))
plt.show()
numericVar = [" Age (yrs)", "Weight (Kg)","Marraige Status (Yrs)"]
for n in numericVar:
plot_hist(n)
pcos=pcos.dropna()
pcos.corr()
corr_matrix= pcos.corr()
plt.subplots(figsize=(30,10))
sns.heatmap(corr_matrix, annot = True, fmt = ".2f");
plt.title("Correlation Between Features")
plt.show()
X = pcos.iloc[:,1:40].values
Y = pcos.iloc[:,0].values
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3 , random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.fit_transform(X_test)
def models(X_train, Y_train):
from sklearn.linear_model import LogisticRegression
log = LogisticRegression(random_state = 0)
log.fit(X_train, Y_train)
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 50, criterion = 'entropy', random_state = 0)
forest.fit(X_train, Y_train)
print('Logistic Regression Training Accuracy:', log.score(X_train, Y_train))
print('Random Forest Classifier:', forest.score(X_train, Y_train))
return log, forest
model = models(X_train, Y_train)
from sklearn.metrics import confusion_matrix
for i in range( len(model) ) :
cm = confusion_matrix(Y_test, model[i].predict(X_test))
TP = cm[1][1]
TN = cm[0][0]
FN = cm[1][0]
FP = cm[0][1]
print(cm)
print('Testing Accuracy = ', (TP + TN)/ (TP + TN + FP + FN))
###Output
_____no_output_____ |
ideas/water_detection/preprocessing.ipynb | ###Markdown
PreprocessingRun the following cell to generate the folder for the Torch ImageLoader class.The cell requires a labels.csv file which contains the filenames of the image files and corresponding resistivity labels (which can be extended from binary to multiclass depending on the resistivity threshold)
###Code
# csv, os and shutil are needed below; img_folder, target_folder_train, target_folder_test
# and YEAR_FLAG are assumed to be defined in an earlier (not shown) cell.
import csv
import os
import shutil
with open('/datadrive/labels.csv') as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
print(row)
img_name = row[1]
label = row[2]#int(row[2]=='True')
month_folder = row[0][:10]
#print(month_folder)
#print(img_name,label)
#print(os.path.join(img_folder,month_folder,img_name))
if YEAR_FLAG == 'train':
shutil.copyfile(os.path.join(img_folder,month_folder,img_name),os.path.join(target_folder_train,label,img_name))
else:
shutil.copyfile(os.path.join(img_folder,month_folder,img_name),os.path.join(target_folder_test,label,img_name))
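# Hedged usage sketch (not in the original notebook): once the class sub-folders are
# populated, they can be consumed with torchvision's ImageFolder. target_folder_train
# is assumed to be the path variable used above.
from torchvision import datasets, transforms
train_ds = datasets.ImageFolder(target_folder_train, transform=transforms.ToTensor())
print(train_ds.classes)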
###Output
_____no_output_____ |
notebooks/chart_data_model.ipynb | ###Markdown
chart.data_model> Interface providing data for charting- toc: True
###Code
# export
import pandas as pd
from bokeh.models import ColumnDataSource
# export
def construct_categorical_data(data_frame: pd.DataFrame, c_col: str=None, v_col: str=None,
color_col: str=None, text_col: str=None):
source = ColumnDataSource(data={
'category': data_frame[c_col].values,
'value': data_frame[v_col].values
})
if text_col is not None:
source.add(data_frame[text_col].values, 'text')
return source
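###Output
_____no_output_____
###Markdown
A minimal usage sketch (not part of the exported module); the toy dataframe and its column names below are illustrative only.
###Code
toy_df = pd.DataFrame({'fruit': ['apple', 'pear', 'plum'], 'count': [3, 5, 2]})
src = construct_categorical_data(toy_df, c_col='fruit', v_col='count')
src.data['category'], src.data['value']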
###Output
_____no_output_____ |
notebooks/experimental/octopus_new_features.ipynb | ###Markdown
Table of Contents
1 EDA and pre-processing
1.1 Descriptive statistics (data shape, balance, etc)
1.2 Data pre-processing
2 ML template starts - training session
2.1 Training model (LGBM) with stratified CV
3 Model evaluation
3.1 Plot of the CV folds - F1 macro and F1 for the positive class
3.2 Scikit learn - Classification report
3.3 ROC curve with AUC
3.4 Confusion Matrix plot (normalized and with absolute values)
3.5 Feature Importance plot
3.6 Correlations analysis (on top features)
3.7 Anomaly detection on the training set (on top features alone)
3.8 Data leakage test
3.9 Analysis of FPs/FNs
###Code
import warnings
import pandas as pd
import numpy as np
from pandas_summary import DataFrameSummary
import octopus_ml as oc
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import seaborn as sns
import re
import optuna
pd.set_option('display.max_columns', None) # or 1000
pd.set_option('display.max_rows', None) # or 1000
pd.set_option('display.max_colwidth', -1) # or 199
%matplotlib inline
warnings.simplefilter("ignore")
###Output
_____no_output_____
###Markdown
Read the Kaggle Titanic competition dataset https://www.kaggle.com/c/titanic
###Code
pwd
XY_df=pd.read_csv('../../datasets/Kaggle_titanic_train.csv')
test_df=pd.read_csv('../../datasets/Kaggle_titanic_test.csv')
###Output
_____no_output_____
###Markdown
EDA and pre-processing Descriptive statistics (data shape, balance, etc)
###Code
XY_df.shape
XY_df.head(5)
###Output
_____no_output_____
###Markdown
Target distribution
###Code
XY_df['Survived'].value_counts()
oc.target_pie(XY_df,'Survived')
XY_df.shape
def convert_to_categorical(df):
categorical_features = []
for c in df.columns:
col_type = df[c].dtype
if col_type == "object" or col_type.name == "category":
# an option in case the data(pandas dataframe) isn't passed with the categorical column type
df[c] = df[c].astype('category')
categorical_features.append(c)
return df, categorical_features
import lightgbm as lgb  # needed by lgbm_fast below; not imported in the setup cell above

def lgbm_fast(X_train, y_train, num, params=None):
# Training function for LGBM with basic categorical features treatment and close to default params
X_train, categorical_features=convert_to_categorical(X_train)
lgb_train = lgb.Dataset(X_train, y_train, categorical_feature=categorical_features)
if params == None:
params = {
"objective": "binary",
"boosting": "gbdt",
"scale_pos_weight": 0.02,
"learning_rate": 0.005,
"seed": 100,
"verbose":-1
# 'categorical_feature': 'auto',
# 'metric': 'auc',
# 'scale_pos_weight':0.1,
# 'learning_rate': 0.02,
# 'num_boost_round':2000,
# "min_sum_hessian_in_leaf":1,
# 'max_depth' : 100,
# "num_leaves":31,
# "bagging_fraction" : 0.4,
# "feature_fraction" : 0.05,
}
clf = lgb.train(
params, lgb_train, num_boost_round=num
)
return clf
###Output
_____no_output_____
###Markdown
Dataset comparisons
###Code
features=XY_df.columns.to_list()
print ('number of features ', len(features))
features_remove=['PassengerId','Survived']
for f in features_remove:
features.remove(f)
def dataset_comparison(df1,df2, top=3):
print ('Datasets shapes:\n df1: '+str(df1.shape)+'\n df2: '+str(df2.shape))
df1['label']=0
df2['label']=1
df=pd.concat([df1,df2])
print (df.shape)
clf=lgbm_fast(df,df['label'], 100, params=None)
oc.plot_imp( clf, df, title="Datasets differences", model="lgbm", num=12, importaince_type="split", save_path=None)
return df
df=dataset_comparison(XY_df[features],test_df)
import lightgbm as lgb
def dataset_comparison(df1,df2, top=3):
print ('Datasets shapes:\n df1: '+str(df1.shape)+'\n df2: '+str(df2.shape))
df1['label']=0
df2['label']=1
df=pd.concat([df1,df2])
print (df.shape)
clf=lgbm_fast(df,df['label'], 100, params=None)
feature_imp_list=oc.plot_imp( clf, df, title="Datasets differences", model="lgbm", num=10, importaince_type="gain", save_path=None)
oc.target_corr(df,df['label'],feature_imp_list)
return df
df=dataset_comparison(XY_df[features],test_df)
df[1700:1800]
###Output
_____no_output_____
###Markdown
Selected features vs target histograms
###Code
oc.hist_target(XY_df, 'Sex', 'Survived')
oc.hist_target(XY_df, 'Fare', 'Survived')
###Output
_____no_output_____
###Markdown
Data summary - and missing values analysis
###Code
import missingno as msno
from pandas_summary import DataFrameSummary
dfs = DataFrameSummary(XY_df)
dfs.summary()
# Top 5 sparse features, mainly labs results
pd.Series(1 - XY_df.count() / len(XY_df)).sort_values(ascending=False).head(5)
###Output
_____no_output_____
###Markdown
Data pre-processing
###Code
XY_df['Cabin'] = XY_df['Cabin'].astype('str').fillna("U0")
deck = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "U": 8}
XY_df['Deck'] = XY_df['Cabin'].map(lambda x: re.compile("([a-zA-Z]+)").search(x).group())
XY_df['Deck'] = XY_df['Deck'].map(deck)
XY_df['Deck'] = XY_df['Deck'].fillna(0)
XY_df['Deck'] = XY_df['Deck'].astype('category')
XY_df['relatives'] = XY_df['SibSp'] + XY_df['Parch']
XY_df.loc[XY_df['relatives'] > 0, 'not_alone'] = 0
XY_df.loc[XY_df['relatives'] == 0, 'not_alone'] = 1
XY_df['not_alone'] = XY_df['not_alone'].astype(int)
def encodeAgeFare(train):
train.loc[train['Age'] <= 16, 'Age_fare'] = 0
train.loc[(train['Age'] > 16) & (train['Age'] <= 32), 'Age_fare'] = 1
train.loc[(train['Age'] > 32) & (train['Age'] <= 48), 'Age_fare'] = 2
train.loc[(train['Age'] > 48) & (train['Age'] <= 64), 'Age_fare'] = 3
train.loc[ (train['Age'] > 48) & (train['Age'] <= 80), 'Age_fare'] = 4
    train.loc[train['Fare'] <= 7.91, 'Fare_adj'] = 0
train.loc[(train['Fare'] > 7.91) & (train['Fare'] <= 14.454), 'Fare_adj'] = 1
train.loc[(train['Fare'] > 14.454) & (train['Fare'] <= 31.0), 'Fare_adj'] = 2
train.loc[(train['Fare'] > 31.0) & (train['Fare'] <= 512.329), 'Fare_adj'] = 3
encodeAgeFare(XY_df)
# Categorical features pre-proccesing
cat_list ,XY_df=oc.cat_features_proccessing(XY_df)
print (cat_list)
features=XY_df.columns.to_list()
print ('number of features ', len(features))
features_remove=['PassengerId','Survived']
for f in features_remove:
features.remove(f)
X=XY_df[features]
y=XY_df['Survived']
from IPython.display import Image
Image("../images/octopus_know_your_data.PNG", width=600, height=600)
XY_sampled=oc.sampling(XY_df,'Survived',200)
###Output
number of positive instances: 342
number of negative instance : 549
new dataset shape: (542, 17)
Method Name : sampling
Current memory usage: 0.066895MB
Peak : 0.080649MB
Total time taken: 17.692 ms
###Markdown
ML template starts - training session Training model (LGBM) with stratified CV
###Code
def create(hyperparams):
"""Create LGBM Classifier for a given set of hyper-parameters."""
model = LGBMClassifier(**hyperparams)
return model
def kfold_evaluation(X, y, k, hyperparams, esr=50):
scores = []
kf = KFold(k)
for i, (train_idx, test_idx) in enumerate(kf.split(X)):
X_train = X.iloc[train_idx]
y_train = y.iloc[train_idx]
X_val = X.iloc[test_idx]
y_val = y.iloc[test_idx]
model = create(hyperparams)
model = fit_with_stop(model, X_train, y_train, X_val, y_val, esr)
train_score = evaluate(model, X_train, y_train)
val_score = evaluate(model, X_val, y_val)
scores.append((train_score, val_score))
scores = pd.DataFrame(scores, columns=['train score', 'validation score'])
return scores
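# The heading above mentions stratified CV, while KFold does not stratify by class.
# A commented sketch of the stratified variant (same loop body as kfold_evaluation):
# from sklearn.model_selection import StratifiedKFold
# skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=42)
# for i, (train_idx, test_idx) in enumerate(skf.split(X, y)):
#     ...  # identical to the body above, but folds preserve the class ratio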
# Constant
K = 5
# Objective function
def objective(trial):
# Search spaces
hyperparams = {
'reg_alpha': trial.suggest_float('reg_alpha', 0.001, 10.0),
'reg_lambda': trial.suggest_float('reg_lambda', 0.001, 10.0),
'num_leaves': trial.suggest_int('num_leaves', 5, 1000),
'min_child_samples': trial.suggest_int('min_child_samples', 5, 100),
'max_depth': trial.suggest_int('max_depth', 5, 64),
'colsample_bytree': trial.suggest_float('colsample_bytree', 0.1, 0.5),
'cat_smooth' : trial.suggest_int('cat_smooth', 10, 100),
'cat_l2': trial.suggest_int('cat_l2', 1, 20),
'min_data_per_group': trial.suggest_int('min_data_per_group', 50, 200)
}
hyperparams.update(best_params)
scores = kfold_evaluation(X, y, K, hyperparams, 10)
return scores['validation score'].mean()
def create(hyperparams):
model = LGBMClassifier(**hyperparams)
return model
def fit(model, X, y):
model.fit(X, y,verbose=-1)
return model
def fit_with_stop(model, X, y, X_val, y_val, esr):
    # Early stopping (early_stopping_rounds=esr) is left disabled below, so esr is
    # currently unused; newer LightGBM versions expect it via callbacks instead.
    # model.fit(X, y,
    #           eval_set=(X_val, y_val),
    #           early_stopping_rounds=esr,
    #           verbose=-1)
    model.fit(X, y,
              eval_set=(X_val, y_val),
              verbose=-1)
    return model
def evaluate(model, X, y):
yp = model.predict_proba(X)[:, 1]
auc_score = roc_auc_score(y, yp)
return auc_score
###Output
_____no_output_____
###Markdown
Hyper Parameter Optimization
###Code
best_params = {
'n_estimators': 1000,
'learning_rate': 0.05,
'metric': 'auc',
'verbose': -1
}
from lightgbm import LGBMClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
study.best_value
best_params.update(study.best_params)
best_params
#plot_param_importances(study)
#plot_optimization_history(study)
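# If the two commented plots above are enabled, they need the optuna visualization
# helpers (sketch; assumes plotly is installed):
# from optuna.visualization import plot_param_importances, plot_optimization_history
# plot_param_importances(study)
# plot_optimization_history(study)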
params = {
'boosting_type': 'gbdt',
'objective': 'binary',
'metric': 'auc',
'learning_rate': 0.1,
'n_estimators': 500,
'verbose': -1,
'max_depth': -1,
'seed':100,
'min_split_gain': 0.01,
'num_leaves': 18,
'reg_alpha': 0.01,
'reg_lambda': 1.50,
'feature_fraction':0.2,
'bagging_fraction':0.84
}
metrics= oc.cv_adv(X,y,0.5,1000,shuffle=True,params=best_params)
###Output
5it [00:03, 1.60it/s]
###Markdown
Model evaluation. Plot of the CV folds - F1 weighted, F1 macro and F1 for the positive class (the dataset is imbalanced)
###Code
oc.cv_plot(metrics['f1_weighted'],metrics['f1_macro'],metrics['f1_positive'],'Titanic Kaggle competition')
###Output
_____no_output_____
###Markdown
Scikit learn - Classification report
###Code
print(classification_report(metrics['y'], metrics['predictions_folds']))
###Output
precision recall f1-score support
0 0.83 0.89 0.86 549
1 0.80 0.71 0.75 342
accuracy 0.82 891
macro avg 0.82 0.80 0.81 891
weighted avg 0.82 0.82 0.82 891
###Markdown
ROC curve with AUC
###Code
oc.roc_curve_plot(metrics['y'], metrics['predictions_proba'])
###Output
_____no_output_____
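###Markdown
`oc.roc_curve_plot` is a project helper; a minimal sketch of the same plot built directly with scikit-learn and matplotlib, assuming the `metrics` keys used above:
###Code
# ROC curve from the out-of-fold probabilities (sketch)
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt
fpr, tpr, _ = roc_curve(metrics['y'], metrics['predictions_proba'])
auc_value = roc_auc_score(metrics['y'], metrics['predictions_proba'])
plt.plot(fpr, tpr, label=f"AUC = {auc_value:.3f}")
plt.plot([0, 1], [0, 1], linestyle='--', color='grey')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.legend()
plt.show()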
###Markdown
Confusion Matrix plot (normalized and with absolute values)
###Code
oc.confusion_matrix_plot(metrics['y'], metrics['predictions_folds'])
###Output
_____no_output_____
###Markdown
Feature Importance plot
###Code
feature_imp_list=oc.plot_imp(metrics['final_clf'],X,'LightGBM Titanic Kaggle',num=15)
top_features=feature_imp_list.sort_values(by='Value', ascending=False).head(20)
top_features
###Output
_____no_output_____
###Markdown
Correlations analysis (on top features)
###Code
list_for_correlations=top_features['Feature'].to_list()
list_for_correlations.append('Survived')
oc.correlations(XY_df,list_for_correlations)
###Output
_____no_output_____
###Markdown
Data leakage test
###Code
oc.data_leakage(X,top_features['Feature'].to_list())
###Output
-> Passed the data leakage test - no duplicate instances detected
Method Name : data_leakage
Current memory usage: 0.020105MB
Peak : 0.189193MB
Total time taken: 9.369 ms
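###Markdown
`oc.data_leakage` reports duplicated instances over the selected features; a minimal sketch of the same check with plain pandas:
###Code
# Count duplicate rows restricted to the top features (a simple leakage signal)
top_cols = top_features['Feature'].to_list()
print('duplicate rows over top features:', X[top_cols].duplicated().sum())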
###Markdown
Analysis of FPs/FNs
###Code
fps=oc.recieve_fps(XY_df, metrics['index'] ,metrics['y'], metrics['predictions_proba'],top=10)
fns=oc.recieve_fns(XY_df, metrics['index'] ,metrics['y'], metrics['predictions_proba'],top=10)
fps
fns
filter_fps = XY_df[XY_df.index.isin(fps['index'])]
filter_fns = XY_df[XY_df.index.isin(fns['index'])]
filter_fps_with_prediction=pd.merge(filter_fps,fps[['index','preds_proba']], left_on=[pd.Series(filter_fps.index.values)], right_on=fps['index'])
filter_fns_with_prediction=pd.merge(filter_fns,fns[['index','preds_proba']], left_on=[pd.Series(filter_fns.index.values)], right_on=fns['index'])
###Output
_____no_output_____
###Markdown
Top FPs with full features
###Code
filter_fps_with_prediction
###Output
_____no_output_____
###Markdown
Top FNs with full features
###Code
filter_fns_with_prediction
###Output
_____no_output_____
module4-makefeatures/Day_4_Make_Features_Assignment.ipynb
###Markdown
ASSIGNMENT- Replicate the lesson code. - This means that if you haven't followed along already, type out the things that we did in class. Forcing your fingers to hit each key will help you internalize the syntax of what we're doing. - [Lambda Learning Method for DS - By Ryan Herr](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit?usp=sharing)- Convert the `term` column from string to integer.- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
###Code
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
df = pd.read_csv('LoanStats_2018Q4.csv', header = 1 , skipfooter = 2, engine = 'python')
print(df.shape)
df.head()
df.tail()
df.isnull().sum().sort_values(ascending = False)
df = df.drop(columns =['id', 'member_id', 'desc', 'url'], axis= 'columns')
df.dtypes
type(df['int_rate'])
int_rate = '15.02%'
int_rate[:-1]
int_list =[ '15.02%', '13.56%', '16.91%' ]
int_list[:2]
int_rate.strip('%')
type(int_rate.strip('%'))
float(int_rate.strip('%'))
type(float(int_rate.strip('%')))
def remove_percent_to_float(string):
return float(string.strip('%'))
int_list = ['15.02%','13.56%', '16.91%']
[remove_percent_to_float(item) for item in int_list]
df['int_rate']= df['int_rate'].apply(remove_percent_to_float)
df.head()
df.dtypes
df['emp_title']
df['emp_title'].value_counts(dropna = False).head(20)
df['emp_title'].isnull().sum()
import numpy as np
examples = ['owner', 'Supervisor', 'Project Manager', np.NaN]
def clean_title(item):
if isinstance(item, str):
return item.strip().title()
else:
return "Unknown"
[clean_title(item) for item in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
df.head()
###Output
_____no_output_____
###Markdown
###Code
df['emp_title'].value_counts(dropna = False).reset_index().shape
df.describe(exclude = 'number')
df['emp_title'].describe(exclude = 'number')
df['emp_title'].nunique()
df.emp_title_manager = True
print(df.emp_title_manager)
df['emp_title_manager'] = True
print(df['emp_title_manager'])
df['emp_title_manager'] = df['emp_title'].str.contains("Manager")
df.head()
condition = (df['emp_title_manager'] == True)
managers = df[condition]
print(managers.shape)
managers.head()
managers = df[df['emp_title'].str.contains('Manager')]
print(managers.shape)
managers.head()
plebians = df[df['emp_title_manager'] == False]
print(plebians.shape)
plebians.head()
managers['int_rate'].hist(bins=20);
plebians['int_rate'].hist(bins=20);
managers['int_rate'].plot.density();
plebians['int_rate'].plot.density();
managers['int_rate'].mean()
plebians['int_rate'].mean()
df['issue_d']
df['issue_d'].describe()
df['issue_d'].value_counts()
df.dtypes
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df.dtypes
df['issue_d'].dt.year
df['issue_d'].dt.month
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.head()
[col for col in df if col.endswith('_d')]
df['earliest_cr_line'].head()
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
25171/365
###Output
_____no_output_____
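###Markdown
The three assignment tasks listed at the top of this notebook (convert `term`, build `loan_status_is_great`, split `last_pymnt_d`) are not completed in the cells above; a minimal sketch, assuming the usual LendingClub column contents (e.g. `term` values like " 36 months"):
###Code
# 1. term: drop " months" and convert to integer
df['term'] = df['term'].str.replace(' months', '').str.strip().astype(int)
# 2. loan_status_is_great: 1 if Current or Fully Paid, else 0
df['loan_status_is_great'] = df['loan_status'].isin(['Current', 'Fully Paid']).astype(int)
# 3. last_pymnt_d month/year columns (missing dates become NaN in the new columns)
df['last_pymnt_d'] = pd.to_datetime(df['last_pymnt_d'], infer_datetime_format=True)
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year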
###Markdown
STRETCH OPTIONS You can do more with the LendingClub or Instacart datasets.LendingClub options:- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20. - Take initiative and work on your own ideas!Instacart options:- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)- Take initiative and work on your own ideas! You can uncomment and run the cells below to re-download and extract the Instacart data
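Before re-downloading anything, here is a minimal sketch of the two LendingClub stretch ideas above, assuming the usual column names (`revol_util` is presumably the other percent-sign column):
###Code
# Strip '%' from the other percent column and convert to float (NaN stays NaN)
df['revol_util'] = df['revol_util'].str.strip('%').astype(float)
# Replace emp_title values outside the top 20 with 'Other'
top20 = df['emp_title'].value_counts().head(20).index
df['emp_title'] = df['emp_title'].where(df['emp_title'].isin(top20), 'Other')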
###Code
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
###Output
_____no_output_____
CORIOLIX_REST_API_Quickstart.ipynb
###Markdown
CORIOLIX REST API Documentation EXAMPLE 1: Query the CORIOLIX REST API - Get a list of all REST endpoints
###Code
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Returns:
# Returns a list of all CORIOLIX REST Endpoints
import requests
import json
# Base URL for Datapresence REST API - MODIFY AS NEEDED
rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/?format=json'
# Make the query to the REST API
response = requests.get(rest_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all the published endpoints
print(json.dumps(responseJSON, indent=4, sort_keys=True))
###Output
{
"anemo_mmast": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/anemo_mmast/?format=json",
"asset": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/asset/?format=json",
"cur_obs": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/cur_obs/?format=json",
"cur_obs_archive": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/cur_obs_archive/?format=json",
"custom_sensor_1": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/custom_sensor_1/?format=json",
"custom_sensor_2": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/custom_sensor_2/?format=json",
"custom_sensor_3": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/custom_sensor_3/?format=json",
"echo_well": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/echo_well/?format=json",
"events": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/events/?format=json",
"fluor_flth": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/fluor_flth/?format=json",
"gnss_gga_bow": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/gnss_gga_bow/?format=json",
"gnss_vtg_bow": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/gnss_vtg_bow/?format=json",
"gyro_brdg": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/gyro_brdg/?format=json",
"metstn_bow": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/metstn_bow/?format=json",
"metstn_stbd": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/metstn_stbd/?format=json",
"par_mmast": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/par_mmast/?format=json",
"parameter": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/parameter/?format=json",
"rad_mmast": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/rad_mmast/?format=json",
"rain_mmast": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/rain_mmast/?format=json",
"sensor": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor/?format=json",
"sensor_float_1": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_1/?format=json",
"sensor_float_2": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_2/?format=json",
"sensor_float_3": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_3/?format=json",
"sensor_float_4": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_4/?format=json",
"sensor_float_5": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_5/?format=json",
"sensor_float_6": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_6/?format=json",
"sensor_float_7": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_float_7/?format=json",
"sensor_integer_1": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_integer_1/?format=json",
"sensor_log": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_log/?format=json",
"sensor_mixed_1": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_mixed_1/?format=json",
"sensor_mixed_2": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_mixed_2/?format=json",
"sensor_mixed_3": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_mixed_3/?format=json",
"sensor_point_1": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/sensor_point_1/?format=json",
"speedlog_well": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/speedlog_well/?format=json",
"station": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/station/?format=json",
"subevent": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/subevent/?format=json",
"therm_fwd": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/therm_fwd/?format=json",
"therm_hull": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/therm_hull/?format=json",
"transmiss_flth": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/transmiss_flth/?format=json",
"true_winds": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/true_winds/?format=json",
"tsg_flth": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/tsg_flth/?format=json",
"user_ship_shapes": "https://coriolix.ceoas.oregonstate.edu/oceanus/api/user_ship_shapes/?format=json"
}
###Markdown
EXAMPLE 2: Query the CORIOLIX REST API - Get the current sensor observation
###Code
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Select a specific endpoint to query.
# Returns:
# Returns a list of all currently valid sensor values
import requests
import json
# URL for Datapresence REST endpoint for the current observations table.
rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/cur_obs/?format=json'
#rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/decimateData/?model=TsgFlth&date_0=2019-10-10%2002:06:55.353%2B00&decfactr=1&format=json'
# Make the query to the REST API
response = requests.get(rest_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all current observations
print(json.dumps(responseJSON, indent=4, sort_keys=True))
## EXAMPLE 3: Query the CORIOLIX REST API - Get the Thermosalinograph data for a user specified window of time
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Select a specific sensor endpoint to query.
# Filter results
# Returns:
# Returns a list of all currently valid sensor values
import requests
import json
# Base URL for the decimateData REST endpoint, querying the thermosalinograph (TsgFlth) model
base_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/decimateData/?model=TsgFlth'
# Set the start date and time using the ISO8601 format, data stored in UTC
start_date = '2019-10-08T20:00:00Z'
end_date = '2019-10-08T21:00:00Z'
# build the query string with the date filter (base_url already contains '?', so append with '&')
query_url = base_url+'&date_0='+start_date+'&date_1='+end_date+'&format=json'
# NOTE: the line below overrides the dated query above with a fixed decimation example;
# comment it out to use the start_date/end_date window instead
query_url = base_url+'&date_0=2019-10-10%2002:06:55.353%2B00&decfactr=1&format=json'
# Make the query to the REST API
response = requests.get(query_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all thermosalinograph observations
print(json.dumps(responseJSON, indent=4, sort_keys=True))
###Output
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecureRequestWarning)
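###Markdown
The warning above is emitted because the requests are made with `verify=False`; a minimal sketch of silencing it (only appropriate if the unverified HTTPS connection is a deliberate choice):
###Code
# Suppress urllib3's InsecureRequestWarning triggered by verify=False
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)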