path | concatenated_notebook
---|---
06 Guess My Number.ipynb | ###Markdown
Guess My Number** How would you go about guessing a whole number that you know is between 1 and 100 inclusive? ** `#` is used to write comments for humans to read in code cells. Everything after a `#` in a line is ignored by the program.
###Code
import random #This imports a library that can generate random numbers
number = random.randint(1,100) # this selects a random integer from 1 to 100 inclusive
guess = 0
number_of_guesses=1
while number != guess:
try:
guess = int(input("What number am I thinking of?"))
except:
print("That's not an integer. Keeping last guess or zero for this try.")
if guess < number:
print("Too low")
elif guess > number:
print("Too high")
else:
print("You got it in " +str(number_of_guesses) + " guesses!")
if number_of_guesses > 7:
print("Can you think of a way to get to the number in fewer guesses?")
number_of_guesses += 1
print("Thanks for playing!")
###Output
_____no_output_____
###Markdown
Could you consistently get the number in 7 or fewer guesses? If so, you probably found an efficient *algorithm* for guessing the number. An *algorithm* is a set of instructions for solving a problem. One way you could have tried is starting your guesses at 1 and continuing until you got to the number. How many guesses could this take? This time you think of a number and tell the computer if its guess is too high or too low.
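A quick sanity check on that number (an added note, not part of the original notebook): each halving guess cuts the remaining candidates at least in half, so `ceil(log2(100)) = 7` guesses always suffice for a range of 100 numbers.

```python
import math

# worst-case number of halving guesses needed to pin down a whole number from 1 to 100
print(math.ceil(math.log2(100)))  # prints 7
```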
###Code
name = input("What is your name?")
print("Hello %s, please think of a whole number between 1 and 100 inclusive." % name )
print("I will try to guess your number by starting at 1 and guessing up until I get it.")
feedback = "l"
guess = 1
while feedback != "c":
feedback = input("Is your number %d? Type l for too low, h for too high, or c for correct:" % guess)
guess += 1
print("I got it in %s tries, %s!" % (str(guess-1), name))
###Output
_____no_output_____
###Markdown
If you picked a really large number you may have gotten frustrated with the simple search algorithm there! Can we give the computer a better way to guess? Perhaps when you were guessing, you noticed that starting in the middle cut the possibilities in half. And then you could repeat that by starting in the middle again until you hit the number or reduce the possibilities to one. Let's think about how we can code that.
###Code
name = input("What is your name?")
print("Hello %s, please think of a whole number between 1 and 100 inclusive." % name )
print("I will try to guess your number by cutting the possibilities in half each time.")
number_of_guesses = 1
feedback = "l"
low = 0
high = 100
while feedback[0] != "c":
guess = (low + high) // 2
feedback = input("Is your number %s? Type l for too low, h for too high, or c for correct:" % str(guess))
feedback = str.lower(feedback[0])
if feedback == "l":
low = guess + 1
elif feedback =="h":
high = guess - 1
elif feedback == "c":
print("I guessed it in %s tries, %s!" % (str(number_of_guesses), name))
else:
print("Please use L for low, H for high, or C for correct in response to my guesses.")
number_of_guesses += 1
print("Thanks for playing!")
###Output
_____no_output_____ |
paper_notebooks/four_well_potential.ipynb | ###Markdown
potential
###Code
# imports used throughout this notebook (added for completeness; the original excerpt relied
# on them being defined in earlier cells). temp_font_size is an assumed value.
import sys
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from numpy.testing import assert_almost_equal
temp_font_size = 16

def autocorr(x, t):
    # lag-t autocorrelation of a 1-D time series
    return np.corrcoef(np.array([x[0:len(x)-t], x[t:len(x)]]))[0, 1]

# define potential
def potential(x):
return 2 * (x ** 8 + 0.8 * np.exp(-80 * x ** 2) + 0.2 * np.exp(-80*(x-0.5)**2) + 0.5 * np.exp(-40 * (x+0.5)**2))
x_list = np.linspace(-1, 1, 100)
V_list = potential(x_list)
fig, ax = plt.subplots()
ax.plot(x_list, V_list)
ax.set_xlabel('x', fontsize = temp_font_size)
ax.set_ylabel('V(x)', fontsize = temp_font_size)
fig.tight_layout()
def get_transition_matrix(V_list, kT=1.0):
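    # Build a column-stochastic transition matrix on the 1-D grid: from each point, hops to the
    # left/right neighbour are weighted by exp(-(V_neighbour - V_here)/kT), with unit weight for
    # staying put, and each column is normalised so that it sums to 1.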
num_points = len(V_list)
transition = np.zeros((num_points, num_points))
for item in range(num_points):
p_left = np.exp(-(V_list[item - 1] - V_list[item]) / kT) if item != 0 else 0
p_right = np.exp(-(V_list[item + 1] - V_list[item]) / kT) if item != num_points -1 else 0
p_sum = p_left + 1 + p_right
transition[item - 1][item] = p_left / p_sum
transition[item][item] = 1 / p_sum
transition[(item + 1) % num_points][item] = p_right / p_sum
assert_almost_equal(transition.sum(axis=0), np.ones(num_points))
return transition
# get transition matrices for lag time = 1 and 100
transition_1 = get_transition_matrix(V_list, kT=1) # transition matrix with lag = 1
lag_time = 100
transition = np.linalg.matrix_power(transition_1, lag_time) # transition matrix with lag = 100
def get_sorted_eigen_values_vecs(matrix):
eigenValues, eigenVectors = np.linalg.eig(matrix)
idx = eigenValues.argsort()[::-1]
eigenValues = eigenValues[idx]
eigenVectors = eigenVectors.T[idx]
return eigenValues, eigenVectors
def inner_prod(xx, yy, equi_dist):
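    # equilibrium-weighted inner product <xx, yy>; used below to normalise eigenstates
    # and to check that they are orthogonal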
return np.sum(xx * yy * equi_dist)
def normalize_state(state, equi_dist):
return state / np.sqrt(inner_prod(state, state, equi_dist))
# get eigenvalues, eigenvectors of the transition matrix
eigenValues, eigenVectors = get_sorted_eigen_values_vecs(transition)
equi_dist = np.real_if_close(eigenVectors[0] / np.sum(eigenVectors[0]))
# get first four eigenstates of transfer matrix, which are first four slowest modes
eigenstates = [item / equi_dist for item in eigenVectors]
eigenstates = [np.real_if_close(normalize_state(item, equi_dist)) for item in eigenstates] # normalization
th_timescales = [-lag_time / np.log(eigenValues[item]) for item in range(1, 4)]
fig, axes = plt.subplots(2,2)
for item in range(4):
ax = axes[item // 2][item % 2]
if item == 0: timescale_label = "$t_%d = \infty$" % item
else:
timescale_label = '$t_%d = %d$' % (item, th_timescales[item - 1])
ax.plot(x_list, eigenstates[item], label=timescale_label)
if item > 1: ax.set_xlabel('x', fontsize=temp_font_size)
ax.set_ylabel('$\psi_%d$' % item, fontsize=temp_font_size)
ax.axes.get_xaxis().set_ticks([])
ax.axes.get_yaxis().set_ticks([])
legend = ax.legend(loc='best', shadow=True, fontsize='x-large')
fig.tight_layout()
plt.subplots_adjust(hspace = 0.1)
fig.set_size_inches(8, 6)
# check these states are orthogonal
corr_matrix = [[inner_prod(eigenstates[xx], eigenstates[yy], equi_dist) for xx in range(4)] for yy in range(4)]
sns.heatmap(corr_matrix, annot=True)
def run_markov_simulation(transition, x_list, start_point = None, steps = 1e6):
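    # Sample a discrete-state trajectory: at each step draw a uniform random number and move
    # left / stay / move right according to the column of `transition` for the current state.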
num_states = len(transition)
if start_point is None:
point = np.random.randint(num_states)
else: point = start_point
traj = [point]
for _ in range(int(steps - 1)):
rand_num = np.random.rand()
if rand_num < transition[point - 1][point]:
point -= 1
elif rand_num > transition[point][point] + transition[point - 1][point]:
point = (point + 1) % num_states
traj.append(point)
traj = np.array(traj)
return traj, x_list[traj]
def traj_to_bin(traj, x_list):
return np.where(x_list == traj)[1]
# generate simulation data
traj_bins, traj = run_markov_simulation(transition_1, x_list, steps=5e6)
traj = traj.reshape(-1, 1)
# compare theoretical and actual probability distributions
equi_dist_actual = []
for xx in np.unique(traj):
equi_dist_actual.append(np.sum(traj == xx))
equi_dist_actual = np.array(equi_dist_actual).astype(float)
equi_dist_actual /= np.sum(equi_dist_actual)
plt.plot(x_list, equi_dist, '--', label='theoretical') # theoretical value
plt.plot(x_list, equi_dist_actual, label='actual')
plt.xlabel('x'); plt.ylabel('probability')
plt.legend()
def remove_component_from_state(state, equi_dist, components):
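    # Project the listed components out of `state` (a Gram-Schmidt-style deflation) using the
    # equilibrium-weighted inner product, and assert that the remaining overlap is ~0.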
result = state
for item in components:
result = result - item * np.sum(result * equi_dist * item) / np.sum(item * equi_dist * item)
assert (np.abs(np.sum(result * item * equi_dist)) < 1e-5), np.abs(np.sum(result * item * equi_dist))
return result
def get_decompose_coeff(state, eigenstates, equi_dist):
return ['%.03f' % inner_prod(state, item, equi_dist) for item in eigenstates]
def plot_learned_eigenstates(model, previous_CV_state_to_remove = [], lag_time=lag_time):
fig, axes = plt.subplots(1, 2)
traj_PCs = model.get_PCs().T
PCs = model.get_PCs(x_list.reshape(len(x_list), 1))
# should use theoretical or experimental distribution data?
PC_states = [normalize_state(remove_component_from_state(
item, equi_dist_actual,
[np.ones(100)] + previous_CV_state_to_remove), equi_dist_actual) for item in PCs.T]
coeffs = [get_decompose_coeff(item, eigenstates, equi_dist)[:4] for item in PC_states]
if len(PC_states) > 1:
CV_corr = np.corrcoef(PC_states[0], PC_states[1])[0, 1]
else: CV_corr = 0
for item in range(len(PC_states)):
axes[item].plot(x_list, PC_states[item])
axes[item].set_xlabel('x'); axes[item].set_ylabel('CV%d' % (item+1))
axes[item].set_title('coeff = %s\nautocorr = %f, timescale = %f' % (
str(coeffs[item]), autocorr(traj_PCs[item], lag_time),
-lag_time / np.log(autocorr(traj_PCs[item], lag_time))))
    fig.suptitle('corr = %.03f' % CV_corr)  # the original also prepended item_a.split('/')[-1], but item_a is not defined in this excerpt
fig.tight_layout()
fig.set_size_inches(12, 5)
return PC_states
###Output
_____no_output_____
###Markdown
kTICA
###Code
from temp_ktica import *
from sklearn.model_selection import train_test_split, KFold
from sklearn.cluster import KMeans
# KMeans clustering to generate landmarks for kernel TICA
n_landmarks = 100
kmeans = KMeans(init='k-means++', n_clusters=n_landmarks, n_init=10)
kmeans.fit(traj[::10])
sigma = 0.05
ktica = Kernel_tica(3, lag_time=100, gamma=1. / (2 * sigma * sigma), shrinkage=None, n_components_nystroem=100,
landmarks = kmeans.cluster_centers_
)
ktica.fit([traj])
# get kernel TICA projections and timescales
ktica_proj = ktica.transform([x_list.reshape(100, 1)])[0]
ktica_proj *= (-np.sign(ktica_proj[0]))
ktica_timescales = ktica._tica.timescales
###Output
_____no_output_____
###Markdown
HDE
###Code
sys.path.append('../')
from hde import HDE
from keras.callbacks import EarlyStopping
earlyStopping = EarlyStopping(monitor='val_loss', patience=50, verbose=1, mode='min', restore_best_weights=True)
hde = HDE(1, n_components=3, lag_time=lag_time, dropout_rate=0, batch_size=50000, n_epochs=100, hidden_size=100,
validation_split=0.2, batch_normalization=True, learning_rate = 0.01,
callbacks=[earlyStopping])
# hde.r_degree = 10 # use VAMP-10 score for pre-training
# hde.n_epochs = 20
# hde.fit(traj)
# hde._recompile = True
hde.r_degree = 2 # switch back to VAMP-2 score for training
hde.n_epochs = 50
hde.fit(traj)
hde_proj = hde.transform(x_list.reshape(-1, 1))
hde_proj *= (-np.sign(hde_proj[0]))
hde_timescales = hde.timescales_
# plot learned eigenfunctions for KTICA and HDE
def plot_CV_state_with_coeff(CVs_state_list_list, row_label_list, timescale_list=None):
num_list = len(CVs_state_list_list)
num_states = len(CVs_state_list_list[0])
fig, axes = plt.subplots(num_states, num_list)
for index_list in range(num_list):
CVs_state_list = CVs_state_list_list[index_list]
for item in range(num_states):
CVs_state_list[item] = normalize_state(CVs_state_list[item], equi_dist)
for _1, item in enumerate(CVs_state_list):
ax = axes[_1, index_list]
ax.plot(x_list, item, label='$\\tilde{\psi}_%d\ (t_%d = %d)$' % (
_1 + 1, _1 + 1, timescale_list[index_list][_1]), color='blue')
ax.plot(x_list, eigenstates[_1 + 1], label='$\psi_%d\ (t_%d = %d)$' % (
_1 + 1, _1 + 1, th_timescales[_1]), linestyle='dashed', color='red')
legend = ax.legend(loc='best', shadow=True, fontsize='x-large')
coeff_list = get_decompose_coeff(item, eigenstates[:4], equi_dist)
title_text = '$\\tilde{\psi}_%d = %s \psi_0 + %s \psi_1 + %s \psi_2 + %s \psi_3$' % (
_1+1, coeff_list[0], coeff_list[1], coeff_list[2], coeff_list[3])
if _1 == 0: ax.set_title(row_label_list[index_list], fontsize=30)
if _1 > 1: ax.set_xlabel('x', fontsize = temp_font_size)
ax.axes.get_xaxis().set_ticks([])
ax.axes.get_yaxis().set_ticks([])
plt.subplots_adjust(hspace=0.1)
fig.set_size_inches(5 * num_list, num_states * 5 + 0.5)
return fig
fig = plot_CV_state_with_coeff([ktica_proj.T, hde_proj.T], ['kTICA', 'HDE'],
[ktica_timescales, hde_timescales])
###Output
/home/kengyangyao/.anaconda2/envs/py36/lib/python3.6/site-packages/ipykernel_launcher.py:14: ComplexWarning: Casting complex values to real discards the imaginary part
|
02_machine_learning_regression/week_4/quiz_2.ipynb | ###Markdown
Regression Week 4: Ridge Regression (gradient descent) In this notebook, you will implement ridge regression via gradient descent. You will:* Convert an SFrame into a Numpy array* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature* Write a gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty Fire up Turi Create Make sure you have the latest version of Turi Create
###Code
import turicreate as tc
###Output
_____no_output_____
###Markdown
Load in house sales data The dataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = tc.SFrame('home_data.sframe/')
###Output
_____no_output_____
###Markdown
If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features. Import useful functions from previous notebook As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste `get_numpy_data()` from the second notebook of Week 2.
###Code
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
###Output
_____no_output_____
###Markdown
Also, copy and paste the `predict_output()` function to compute the predictions for an entire matrix of features given the matrix and the weights:
###Code
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a
# corresponding numpy array create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
###Output
_____no_output_____
###Markdown
Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.```Cost(w)= SUM[ (prediction - output)^2 ]+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).```Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to `w[i]` can be written as: ```2*SUM[ error*[feature_i] ].```The derivative of the regularization term with respect to `w[i]` is:```2*l2_penalty*w[i].```Summing both, we get```2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].```That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus `2*l2_penalty*w[i]`. **We will not regularize the constant.** Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the `2*l2_penalty*w[0]` term).Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus `2*l2_penalty*w[i]`.With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call `feature_is_constant` which you should set to `True` when computing the derivative of the constant and `False` otherwise.
###Code
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
if feature_is_constant:
derivative = 2 * np.dot(errors, feature)
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
else:
derivative = 2 * np.dot(errors, feature) + 2 * l2_penalty * weight
return derivative
###Output
_____no_output_____
###Markdown
To test your feature derivative, run the following:
###Code
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print(feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False))
print(np.sum(errors * example_features[:,1]) * 2 + 20.)
print()
# next two lines should print the same values
print(feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True))
print(np.sum(errors) * 2.)
###Output
-56554166782350.0
-56554166782350.0
-22446749336.0
-22446749336.0
###Markdown
Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of *increase* and therefore the negative gradient is the direction of *decrease* and we're trying to *minimize* a cost function. The amount by which we move in the negative gradient *direction* is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a **maximum number of iterations** and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set to 100 by default. (Use default parameter values in Python.) With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
###Code
def ridge_regression_gradient_descent(feature_matrix, output, initial_weights, step_size, l2_penalty,
max_iterations=100):
print(f'Starting gradient descent with l2_penalty = {l2_penalty}')
weights = np.array(initial_weights) # make sure it's a numpy array
iteration = 0 # iteration counter
print_frequency = 1 # for adjusting frequency of debugging output
#while not reached maximum number of iterations:
while iteration != max_iterations:
iteration += 1 # increment iteration counter
### === code section for adjusting frequency of debugging output. ===
if iteration == 10:
print_frequency = 10
if iteration == 100:
print_frequency = 100
if iteration % print_frequency == 0:
print(f'Iteration = {iteration}')
### === end code section ===
# compute the predictions based on feature_matrix and weights using your predict_output() function
predictions = predict_output(feature_matrix, weights)
# compute the errors as predictions - output
errors = predictions - output
# from time to time, print the value of the cost function
if iteration % print_frequency == 0:
print(f'Cost function = {np.dot(errors, errors) + l2_penalty * (np.dot(weights, weights) - weights[0]**2)}')
for i in range(len(weights)): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
if i:
derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, False)
else:
derivative = feature_derivative_ridge(errors, feature_matrix[:, i], weights[i], l2_penalty, True)
# subtract the step size times the derivative from the current weight
weights[i] = weights[i] - step_size * derivative
print(f'Done with gradient descent at iteration {iteration}')
print(f'Learned weights = {weights}')
return weights
###Output
_____no_output_____
###Markdown
Visualizing effect of L2 penalty The L2 penalty gets its name because it causes weights to have smaller L2 norms than they otherwise would. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
###Code
simple_features = ['sqft_living']
my_output = 'price'
###Output
_____no_output_____
###Markdown
Let us split the dataset into training set and test set. Make sure to use `seed=0`:
###Code
train_data, test_data = sales.random_split(0.8,seed=0)
###Output
_____no_output_____
###Markdown
In this part, we will only use `'sqft_living'` to predict `'price'`. Use the `get_numpy_data` function to get a Numpy versions of your data with only this feature, for both the `train_data` and the `test_data`.
###Code
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
###Output
_____no_output_____
###Markdown
Let's set the parameters for our optimization:
###Code
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
###Output
_____no_output_____
###Markdown
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:`simple_weights_0_penalty`we'll use them later.
###Code
simple_weights_0_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights, step_size,
0.0, max_iterations)
###Output
Starting gradient descent with l2_penalty = 0.0
Iteration = 1
Cost function = 7433051851026171.0
Iteration = 2
Cost function = 5394267213135526.0
Iteration = 3
Cost function = 4023237736501158.0
Iteration = 4
Cost function = 3101256183922414.0
Iteration = 5
Cost function = 2481247644505114.0
Iteration = 6
Cost function = 2064308077891941.2
Iteration = 7
Cost function = 1783927097372279.8
Iteration = 8
Cost function = 1595378203154872.0
Iteration = 9
Cost function = 1468583991054997.0
Iteration = 10
Cost function = 1383318191484981.8
Iteration = 20
Cost function = 1211562140496238.8
Iteration = 30
Cost function = 1208313762678823.5
Iteration = 40
Cost function = 1208252326252870.0
Iteration = 50
Cost function = 1208251163612919.8
Iteration = 60
Cost function = 1208251140915263.0
Iteration = 70
Cost function = 1208251139777036.0
Iteration = 80
Cost function = 1208251139046557.0
Iteration = 90
Cost function = 1208251138323789.2
Iteration = 100
Cost function = 1208251137601167.8
Iteration = 200
Cost function = 1208251130374984.5
Iteration = 300
Cost function = 1208251123148810.0
Iteration = 400
Cost function = 1208251115922643.2
Iteration = 500
Cost function = 1208251108696485.0
Iteration = 600
Cost function = 1208251101470335.0
Iteration = 700
Cost function = 1208251094244193.2
Iteration = 800
Cost function = 1208251087018059.8
Iteration = 900
Cost function = 1208251079791934.5
Iteration = 1000
Cost function = 1208251072565817.5
Done with gradient descent at iteration 1000
Learned weights = [-1.63113501e-01 2.63024369e+02]
###Markdown
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:`simple_weights_high_penalty`we'll use them later.
###Code
simple_weights_high_penalty = ridge_regression_gradient_descent(simple_feature_matrix, output, initial_weights,
step_size, 1e11, max_iterations)
###Output
Starting gradient descent with l2_penalty = 100000000000.0
Iteration = 1
Cost function = 7433051851026171.0
Iteration = 2
Cost function = 5618303898412631.0
Iteration = 3
Cost function = 4920613278115385.0
Iteration = 4
Cost function = 4652381942612294.0
Iteration = 5
Cost function = 4549258764014157.0
Iteration = 6
Cost function = 4509612390882265.0
Iteration = 7
Cost function = 4494370050281118.5
Iteration = 8
Cost function = 4488509984030221.5
Iteration = 9
Cost function = 4486256988531770.0
Iteration = 10
Cost function = 4485390752674687.5
Iteration = 20
Cost function = 4484848868034300.0
Iteration = 30
Cost function = 4484847880479026.0
Iteration = 40
Cost function = 4484846931081658.0
Iteration = 50
Cost function = 4484845981687379.0
Iteration = 60
Cost function = 4484845032293500.0
Iteration = 70
Cost function = 4484844082900019.0
Iteration = 80
Cost function = 4484843133506938.0
Iteration = 90
Cost function = 4484842184114254.5
Iteration = 100
Cost function = 4484841234721970.5
Iteration = 200
Cost function = 4484831740821062.0
Iteration = 300
Cost function = 4484822246960036.0
Iteration = 400
Cost function = 4484812753138891.0
Iteration = 500
Cost function = 4484803259357624.0
Iteration = 600
Cost function = 4484793765616238.0
Iteration = 700
Cost function = 4484784271914732.0
Iteration = 800
Cost function = 4484774778253106.0
Iteration = 900
Cost function = 4484765284631358.5
Iteration = 1000
Cost function = 4484755791049491.5
Done with gradient descent at iteration 1000
Learned weights = [ 9.76730383 124.57217565]
###Markdown
This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16,9))
plt.plot(simple_feature_matrix, output, 'k.')
plt.plot(simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-')
plt.plot(simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
###Output
_____no_output_____
###Markdown
Compute the RSS on the TEST data for the following three sets of weights:1. The initial weights (all zeros)2. The weights learned with no regularization3. The weights learned with high regularization. Which weights perform best?
###Code
prediction_simple_zero = predict_output(simple_test_feature_matrix, initial_weights)
errors_simple_zero = prediction_simple_zero - test_output
rss_simple_zero = sum(errors_simple_zero * errors_simple_zero)
prediction_simple_no_reg = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
errors_simple_no_reg = prediction_simple_no_reg - test_output
rss_simple_no_reg = sum(errors_simple_no_reg * errors_simple_no_reg)
prediction_simple_reg = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
errors_simple_reg = prediction_simple_reg - test_output
rss_simple_reg = sum(errors_simple_reg * errors_simple_reg)
###Output
_____no_output_____
###Markdown
***QUIZ QUESTIONS***1. What is the value of the coefficient for `sqft_living` that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?2. Comparing the lines you fit with no regularization versus high regularization, which one is steeper?3. What are the RSS on the test data for each of the sets of weights above (initial, no regularization, high regularization)?
###Code
print(f'Weight no regularization = {simple_weights_0_penalty[1]:.1f}')
print(f'Weight regularization = {simple_weights_high_penalty[1]:.1f}')
print(f'RSS initial = {rss_simple_zero:.2e}')
print(f'RSS no regularization = {rss_simple_no_reg:.2e}')
print(f'RSS regularization = {rss_simple_reg:.2e}')
###Output
RSS initial = 1.78e+15
RSS no regularization = 2.76e+14
RSS regularization = 6.95e+14
###Markdown
Running a multiple regression with L2 penalty Let us now consider a model with 2 features: `['sqft_living', 'sqft_living15']`. First, create Numpy versions of your training and test data with these two features.
###Code
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
###Output
_____no_output_____
###Markdown
We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
###Code
initial_weights = np.array([0.0, 0.0, 0.0])
step_size = 1e-12
max_iterations = 1000
###Output
_____no_output_____
###Markdown
First, let's consider no regularization. Set the `l2_penalty` to `0.0` and run your ridge regression algorithm to learn the weights of your model. Call your weights:`multiple_weights_0_penalty`
###Code
multiple_weights_0_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights,
step_size, 0.0, max_iterations)
###Output
Starting gradient descent with l2_penalty = 0.0
Iteration = 1
Cost function = 7433051851026171.0
Iteration = 2
Cost function = 4056752331500972.0
Iteration = 3
Cost function = 2529565114333592.5
Iteration = 4
Cost function = 1838556694275926.8
Iteration = 5
Cost function = 1525675575208603.5
Iteration = 6
Cost function = 1383789498674794.0
Iteration = 7
Cost function = 1319232606276634.5
Iteration = 8
Cost function = 1289648872028921.0
Iteration = 9
Cost function = 1275884724079266.8
Iteration = 10
Cost function = 1269278807577156.2
Iteration = 20
Cost function = 1257812386316614.5
Iteration = 30
Cost function = 1251954571266786.0
Iteration = 40
Cost function = 1246755423155437.5
Iteration = 50
Cost function = 1242139508748821.0
Iteration = 60
Cost function = 1238041401137187.8
Iteration = 70
Cost function = 1234403013463993.2
Iteration = 80
Cost function = 1231172774976820.2
Iteration = 90
Cost function = 1228304900059555.0
Iteration = 100
Cost function = 1225758739263726.0
Iteration = 200
Cost function = 1211738881421532.8
Iteration = 300
Cost function = 1207473080962631.8
Iteration = 400
Cost function = 1206175125770959.8
Iteration = 500
Cost function = 1205780190233995.8
Iteration = 600
Cost function = 1205660014471675.5
Iteration = 700
Cost function = 1205623439252682.0
Iteration = 800
Cost function = 1205612300984401.0
Iteration = 900
Cost function = 1205608902360341.5
Iteration = 1000
Cost function = 1205607858660559.5
Done with gradient descent at iteration 1000
Learned weights = [ -0.35743482 243.0541689 22.41481594]
###Markdown
Next, let's consider high regularization. Set the `l2_penalty` to `1e11` and run your ridge regression algorithm to learn the weights of your model. Call your weights:`multiple_weights_high_penalty`
###Code
multiple_weights_high_penalty = ridge_regression_gradient_descent(feature_matrix, output, initial_weights,
step_size, 1e11, max_iterations)
###Output
Starting gradient descent with l2_penalty = 100000000000.0
Iteration = 1
Cost function = 7433051851026171.0
Iteration = 2
Cost function = 4460489790285891.0
Iteration = 3
Cost function = 3796674468844608.0
Iteration = 4
Cost function = 3648319530437361.0
Iteration = 5
Cost function = 3615091103216102.0
Iteration = 6
Cost function = 3607602742514732.0
Iteration = 7
Cost function = 3605886322161656.0
Iteration = 8
Cost function = 3605474874533295.0
Iteration = 9
Cost function = 3605365167765576.0
Iteration = 10
Cost function = 3605329402184649.0
Iteration = 20
Cost function = 3605294281022695.0
Iteration = 30
Cost function = 3605293537267099.0
Iteration = 40
Cost function = 3605293082749905.5
Iteration = 50
Cost function = 3605292631106357.0
Iteration = 60
Cost function = 3605292179491501.0
Iteration = 70
Cost function = 3605291727877070.0
Iteration = 80
Cost function = 3605291276262784.5
Iteration = 90
Cost function = 3605290824648642.0
Iteration = 100
Cost function = 3605290373034643.0
Iteration = 200
Cost function = 3605285856902500.0
Iteration = 300
Cost function = 3605281340784635.0
Iteration = 400
Cost function = 3605276824681046.0
Iteration = 500
Cost function = 3605272308591735.0
Iteration = 600
Cost function = 3605267792516700.0
Iteration = 700
Cost function = 3605263276455942.0
Iteration = 800
Cost function = 3605258760409461.0
Iteration = 900
Cost function = 3605254244377257.0
Iteration = 1000
Cost function = 3605249728359329.0
Done with gradient descent at iteration 1000
Learned weights = [ 6.7429658 91.48927361 78.43658768]
###Markdown
Compute the RSS on the TEST data for the following three sets of weights:1. The initial weights (all zeros)2. The weights learned with no regularization3. The weights learned with high regularization. Which weights perform best?
###Code
prediction_multiple_zero = predict_output(test_feature_matrix, initial_weights)
errors_multiple_zero = prediction_multiple_zero - test_output
rss_multiple_zero = sum(errors_multiple_zero * errors_multiple_zero)
prediction_multiple_no_reg = predict_output(test_feature_matrix, multiple_weights_0_penalty)
errors_multiple_no_reg = prediction_multiple_no_reg - test_output
rss_multiple_no_reg = sum(errors_multiple_no_reg * errors_multiple_no_reg)
prediction_multiple_reg = predict_output(test_feature_matrix, multiple_weights_high_penalty)
errors_multiple_reg = prediction_multiple_reg - test_output
rss_multiple_reg = sum(errors_multiple_reg * errors_multiple_reg)
print(f'RSS initial = {rss_multiple_zero:.2e}')
print(f'RSS no regularization = {rss_multiple_no_reg:.2e}')
print(f'RSS regularization = {rss_multiple_reg:.2e}')
###Output
RSS initial = 1.78e+15
RSS no regularization = 2.74e+14
RSS regularization = 5.00e+14
###Markdown
Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
###Code
prediction_multiple_no_reg[0]
prediction_multiple_reg[0]
test_output[0]
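# Added for illustration (not part of the original notebook): in a single cell only the last bare
# expression is displayed, so print the actual price and how far each model's prediction is from it.
print('Actual price:', test_output[0])
print('No regularization prediction error:', abs(prediction_multiple_no_reg[0] - test_output[0]))
print('High regularization prediction error:', abs(prediction_multiple_reg[0] - test_output[0]))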
###Output
_____no_output_____ |
code/00-Configuration.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.
###Code
# Import Workspace
import os
# import dotenv
from azureml.core import Workspace, Datastore
from azureml.exceptions import ProjectSystemException
# _ = dotenv.load_dotenv('.env')
###Output
_____no_output_____
###Markdown
Create or Import Azure Machine Learning Workspace
###Code
# Fetch Workspace Name, Resource Group and Subscription ID from environment variables
# If these aren't defined as Env Vars, then replace the "None" entries below
WORKSPACE_NAME = os.environ.get('AML_WORKSPACE', None)
RESOURCE_GROUP = os.environ.get('AML_RG', None)
SUBSCRIPTION_ID = os.environ.get('AML_SUB_ID', None)
REGION_NAME = os.environ.get('AML_REGION', None)
try:
ws = Workspace(subscription_id=SUBSCRIPTION_ID,
resource_group=RESOURCE_GROUP,
workspace_name=WORKSPACE_NAME)
ws.write_config()
except ProjectSystemException:
print("Workspace Not Found - please create workspace using code below")
print()
print('Workspace Name: \t', WORKSPACE_NAME)
print('Resource Group: \t', RESOURCE_GROUP)
print('Subscription ID: \t', SUBSCRIPTION_ID)
# # If the above code fails, uncomment this code block to create a new Workspace
# ws = Workspace.create(subscription_id=SUBSCRIPTION_ID,
# resource_group=RESOURCE_GROUP,
# name=WORKSPACE_NAME,
# location=REGION_NAME)
# ws.write_config()
ws = Workspace.from_config()
###Output
_____no_output_____
###Markdown
Create CPU Cluster We can create a CPU-based AML Managed Compute Cluster
###Code
from azureml.core.compute import AmlCompute
cpu_cluster_name = 'cpu-cluster-west'
if ws.compute_targets.get(cpu_cluster_name):
print(f"'{cpu_cluster_name}' found. Skipping cluster creation")
ct = ws.compute_targets[cpu_cluster_name]
else:
    print(f"'{cpu_cluster_name}' not found. Creating a new AML Managed Compute cluster.")
cpu_config = AmlCompute.provisioning_configuration(vm_size='Standard_DS13_v2',
vm_priority='dedicated',
min_nodes=0,
max_nodes=5,
idle_seconds_before_scaledown=360,
tags={'project_name': "lab-demo"},
description="An AML Managed compute cluster leveraging DS Series VMs"
)
ct = AmlCompute.create(workspace=ws,
name=cpu_cluster_name,
provisioning_configuration=cpu_config)
ct.wait_for_completion(show_output=True)
# # If you would like to see the supported VM sizes, you can get a list with the supported_vmsizes method
# AmlCompute.supported_vmsizes(ws)
###Output
_____no_output_____
###Markdown
Create GPU Cluster
###Code
gpu_cluster_name = 'gpu-cluster'
if ws.compute_targets.get(gpu_cluster_name):
print(f"'{gpu_cluster_name}' found. Skipping cluster creation")
ct = ws.compute_targets[gpu_cluster_name]
else:
    print(f"'{gpu_cluster_name}' not found. Creating a new AML Managed Compute cluster.")
gpu_config = AmlCompute.provisioning_configuration(vm_size='Standard_NC6',
vm_priority='dedicated',
min_nodes=0,
max_nodes=5,
idle_seconds_before_scaledown=360,
tags={'project_name': "lab-demo"},
description="An AML Managed compute cluster leveraging NC Series VMs"
)
gpu_ct = AmlCompute.create(workspace=ws,
name=gpu_cluster_name,
provisioning_configuration=gpu_config)
gpu_ct.wait_for_completion(show_output=True)
# Verify clusters are created
ct = ws.compute_targets[cpu_cluster_name]  # use the cluster name defined above ('cpu-cluster-west')
cs = ct.get_status()
cs.serialize()
###Output
_____no_output_____
###Markdown
Attach Data Store
###Code
BLOB_CONTAINER = os.environ.get("BLOB_CONTAINER_NAME", "diabetes")
BLOB_ACCOUNT = os.environ.get("BLOB_ACCT_NAME", "publicmldataeastus")
BLOB_ACCT_KEY = os.environ.get("BLOB_ACCT_KEY", "JzbwX/fvm/bjlQCkSfKGaxhVDuDAsQ22wZNwf6ngE3l/F/ArzfwJQDuJToT899dwwtVSpgrAClFR5aZ3JSvqwQ==")
datastore_name = 'diabetes'
if ws.datastores.get(datastore_name):
print(f"'{datastore_name}' datastore found. Skipping registration")
else:
print(f"'{datastore_name}' datastore not found. Registering with Workspace")
_ = Datastore.register_azure_blob_container(workspace=ws,
datastore_name='diabetes',
container_name=BLOB_CONTAINER,
account_name=BLOB_ACCOUNT,
account_key=BLOB_ACCT_KEY)
datastore_name = 'hymenoptera'
if ws.datastores.get(datastore_name):
print(f"'{datastore_name}' datastore found. Skipping registration")
else:
print(f"'{datastore_name}' datastore not found. Registering with Workspace")
_ = Datastore.register_azure_blob_container(workspace=ws,
datastore_name='hymenoptera',
container_name='hymenoptera',
account_name=BLOB_ACCOUNT,
account_key=BLOB_ACCT_KEY)
ws.datastores
###Output
_____no_output_____ |
_build/jupyter_execute/contents/plot/effective porosity.ipynb | ###Markdown
Effective porosity plot after David and DeWiest (1966)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
df = pd.read_csv("dataset.csv")
df.head(5)
fig = plt.figure(figsize=(10,6))
ax = plt.subplot()
ax.plot(df["A"], df["B"], "-", label="effective porosity")
ax.plot(df["C"], df["D"], "--", label="(Well sorted)")
ax.plot(df["E"], df["F"],"-.", label="Total porosity")
ax.plot(df["G"], df["H"],"+-", label="(well sorted)")
ax.plot(df["I"], df["J"],'*-', label="specific retention")
plt.legend()
ax.set_xscale("log")
ax.set_xlim(0.0001, 200)
ax.set_ylim(0,65)
fmt = '%.0f%%' # Format you want the ticks, e.g. '40%'
yticks = mtick.FormatStrFormatter(fmt)
ax.yaxis.set_major_formatter(yticks)
#x_fmt = mtick.FormatStrFormatter('%e')
#ax.xaxis.set_major_formatter(x_fmt)
ax.set_ylabel("Porosity")
ax.set_xlabel("Particle size (mm) ")
ax2 = ax.twiny() # add second x-axis
ax2.xaxis.set_ticks_position("bottom")
ax2.xaxis.set_label_position("bottom")
ax2.spines["bottom"].set_position(("axes", -0.15))
ax2.set_frame_on(True)
#ax2.patch.set_visible(False)
ax2.set_frame_on(True)
ax2.tick_params(direction='out', length= 15, width=2, colors='r',
grid_color='r', grid_alpha=0.5, axis='x', rotation=90, which="minor")
ax2.set_xscale("log")
vals = [0.0001, 0.002, 0.06, 2.0, 63, 1000]
ax2.set_xticks(vals, minor=True)
ax2.set_xticklabels(vals, minor=True) ;
ax2.set_xlim(0.0001, 200)
plt.setp(ax2.get_xmajorticklabels(), visible=False); # remove the major xaxis label
fig.text(0.15,-0.02 , 'Clay', ha='left', va='top', size=12, fontweight='bold')
fig.text(0.35,-0.02 , 'Silt', ha='left', va='top', size=12, fontweight='bold')
fig.text(0.52,-0.02 , 'Sand', ha='left', va='top', size=12, fontweight='bold')
fig.text(0.7,-0.02 , 'Gravel', ha='left', va='top', size=12, fontweight='bold')
df = pd.read_csv("finaldf.csv")
df.head(5)
###Output
_____no_output_____ |
Desarrollo_Jupyter/ecuacion_difusion_and_burguers.ipynb | ###Markdown
Diffusion equation in 1-D The one-dimensional diffusion equation is given by: $$\frac{\partial u}{\partial t}= \nu \frac{\partial^2 u}{\partial x^2}$$The second-order derivative can be represented geometrically as the line tangent to the curve given by the first derivative. We will discretize the second-order derivative with a Central Difference scheme: a combination of the Forward Difference and the Backward Difference of the first derivative. Consider the Taylor expansion of $ u_{i + 1} $ and $ u_{i-1} $ around $ u_i $:$u_{i+1} = u_i + \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i + \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$$u_{i-1} = u_i - \Delta x \frac{\partial u}{\partial x}\bigg|_i + \frac{\Delta x^2}{2} \frac{\partial ^2 u}{\partial x^2}\bigg|_i - \frac{\Delta x^3}{3!} \frac{\partial ^3 u}{\partial x^3}\bigg|_i + O(\Delta x^4)$ If we add these two expansions, you can see that the odd-order derivative terms cancel each other out. If we neglect any terms of $O(\Delta x^4)$ or higher (and, really, those are very small), then we can rearrange the sum of these two expansions to solve for our second derivative.$$u_{i+1} + u_{i-1} = 2u_i+\Delta x^2 \frac{\partial ^2 u}{\partial x^2}\bigg|_i + O(\Delta x^4)$$Thus,$$\frac{\partial ^2 u}{\partial x^2}=\frac{u_{i+1}-2u_{i}+u_{i-1}}{\Delta x^2} + O(\Delta x^2)$$ Substituting this into the diffusion equation gives$$\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}=\nu\frac{u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n}}{\Delta x^2}$$As before, we note that once we have an initial condition, the only unknown is $u_{i}^{n+1}$, so we rearrange the equation to solve for our unknown:$$u_{i}^{n+1}=u_{i}^{n}+\frac{\nu\Delta t}{\Delta x^2}(u_{i+1}^{n}-2u_{i}^{n}+u_{i-1}^{n})$$
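As a small preview of how this update rule maps onto array code (an illustrative sketch only; the notebook's own loop-based implementation appears a few cells below), the interior points can be advanced by one timestep at once with NumPy slicing, assuming `u`, `nu`, `dt`, and `dx` are defined as in the following cells:

```python
import numpy as np

def diffuse_step(u, nu, dt, dx):
    """One explicit timestep of the 1-D diffusion equation on the interior points."""
    un = u.copy()
    u[1:-1] = un[1:-1] + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2])
    return u
```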
###Code
import numpy as np
import matplotlib.pyplot as plt
import time, sys
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's revisit the functions for the initial data.
###Code
nx = 41
dx = 2 / (nx - 1)
nt = 20 #the number of timesteps we want to calculate
nu = 0.3 #the value of the viscosity
sigma = .2
dt = sigma * dx**2 / nu #note the difference...
x=np.arange(0.0,2+dx,dx)
dt*nt #the final time, once the iteration finishes
def f1(nx):
#dx = 2 / (nx-1)
    u = np.ones(nx) #numpy ones() function
    u[int(.5 / dx):int(1 / dx + 1)] = 2 #set u = 2 between 0.5 and 1, and u = 1 everywhere else, per our initial condition
#print(u)
return u
def f1_var(x):
u=np.piecewise(x,[x<0.5,np.abs(x-0.75)<=0.25,x>1],[lambda x:1,lambda x: 2,lambda x:1])
return u
def f2(nx):
#dx = 2 / (nx-1)
L=2
n=4
    Fi=0.2 #temporal phase angle
    c=10 #wave speed
    A=0.5 #maximum amplitude, related to the intensity of the sound
    t=0.18 #snapshot at time t seconds
    x=np.arange(0.0,L+dx,dx)
    u=A*np.sin(n*np.pi*c*0.005*t/L+Fi)*np.sin(n*np.pi*x/L) #here we define the function
return u
def mostrar_imagen(u):
plt.plot(np.linspace(0, 2, nx), u);
###Output
_____no_output_____
###Markdown
We discretize the diffusion equation (ED)
###Code
def ED(u):
    for n in range(nt): #iterating in time
        un = u.copy()
        for i in range(1, nx - 1): #iterating in space
u[i] = un[i] + nu * dt / dx**2 * (un[i+1] - 2 * un[i] + un[i-1])
return u
u=ED(f1_var(x))
mostrar_imagen(u)
u=ED(f2(nx))
mostrar_imagen(u)
nx = 41
dx = 2 / (nx - 1)
nt = 20 #the number of timesteps we want to calculate
nu = 0.3 #the value of the viscosity
sigma = .2
dt = sigma * dx**2 / nu #note the difference...
x=np.arange(0.0,2+dx,dx)
plt.figure(figsize=(18,12))
#how to place an overall title on a group of subplots
plt.subplot(2, 2, 1)
plt.plot(x,f1(nx),label=f'Initial velocity for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 2)
plt.plot(x,ED(f1(nx)),label=f'Velocity after nt timesteps for f1')
plt.legend(frameon=False)
plt.subplot(2, 2, 3)
plt.plot(x,f2(nx),label=f'Initial velocity for f2')
plt.legend(frameon=False)
plt.subplot(2, 2, 4)
plt.plot(x,ED(f2(nx)),label=f'Velocity after nt timesteps for f2')
plt.legend(frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
Burgers' Equation The Burgers equation (EB) in one spatial dimension is a combination of the nonlinear convection equation and the diffusion equation. Let's stop and study this equation a little, because of how surprising it is and how much we can learn from this small equation.$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial ^2u}{\partial x^2}$$ Using the steps carried out for the discretization of the nonlinear convection equation (ECnL) and the diffusion equation (ED), we obtain: $$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n - u_{i-1}^n}{\Delta x} = \nu \frac{u_{i+1}^n - 2u_i^n + u_{i-1}^n}{\Delta x^2}$$From which,$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$ To examine some interesting properties of the Burgers equation, it is useful to use different initial and boundary conditions from the ones we have been using in the previous steps.Our initial condition for this problem will be:\begin{eqnarray}u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\\phi &=& \exp \bigg(\frac{-x^2}{4 \nu} \bigg) + \exp \bigg(\frac{-(x-2 \pi)^2}{4 \nu} \bigg)\end{eqnarray}Its analytical solution (which can be obtained with the Cole-Hopf transformation) is given by\begin{eqnarray}u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\\phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg)\end{eqnarray}In addition, we will use a periodic boundary condition$$u(0) = u(2\pi).$$ The initial condition we are using for the Burgers equation can be a bit tricky to evaluate by hand. The derivative $\frac{\partial \phi}{\partial x}$ is not too difficult, but it would be easy to drop a sign or forget a factor of $ x $ somewhere, so we are going to use SymPy to help us. [SymPy](http://sympy.org/en/) SymPy is the symbolic math library for Python. It has many of the same symbolic math functions as Mathematica, with the added benefit that we can easily translate its results into our Python calculations (it is also free and open source).
###Code
import sympy
from sympy import init_printing
init_printing(use_latex=True) #salidas renderizadas usando Latex
###Output
_____no_output_____
###Markdown
Start by setting up symbolic variables for the three variables in our initial condition and then write out the full equation for $ \phi $. We should get a nicely rendered version of our $ \phi $ equation.
###Code
x, nu, t = sympy.symbols('x nu t')
phi = (sympy.exp(-(x - 4 * t)**2 / (4 * nu * (t + 1))) +
sympy.exp(-(x - 4 * t - 2 * sympy.pi)**2 / (4 * nu * (t + 1))))
phi
phiprimex = phi.diff(x)
phiprimex
phiprimexx = phiprimex.diff(x)
phiprimexx
phiprimet = phi.diff(t)
phiprimet
###Output
_____no_output_____
###Markdown
Now that we have the Pythonic version of our derivative, we can finish writing out the full initial-condition equation and then translate it into a usable Python expression. For this, we will use the *lambdify* function, which takes a symbolic SymPy equation and converts it into a callable function.
###Code
from sympy.utilities.lambdify import lambdify
u = -2 * nu * (phiprimex / phi) + 4
print(u)
v = u.diff(t) + u * u.diff(x) - nu * u.diff(x,x)
vfunc = lambdify((t, x, nu), v)
print(vfunc(1, 4, 3))
###Output
0.0
###Markdown
Lambdify To lambdify this expression into a usable function, we tell lambdify which variables to request and which function we want to plug them into.
###Code
ufunc = lambdify((t, x, nu), u)
print(ufunc(1, 4, 3))
###Output
3.49170664206445
###Markdown
Now that we have the initial conditions set up, we can go ahead and finish setting up the problem. We can generate the plot of the initial condition using our lambdified function
###Code
nx = 101
nt = 100
dx = 2 * np.pi / (nx - 1)
nu = .07
dt = dx * nu
x = np.linspace(0, 2 * np.pi, nx)
def f3(t):
x = np.linspace(0, 2 * np.pi, nx)
un = np.empty(nx)
u = np.asarray([ufunc(t, x0, nu) for x0 in x])
return u
np.shape(f3(0))
np.shape(x)
u = f3(0)
plt.figure(figsize=(11, 7), dpi=100)
plt.plot(x, u, marker='o', lw=2)
plt.xlim([0, 2 * np.pi])
plt.ylim([0, 10]);
###Output
_____no_output_____
###Markdown
This is definitely not the hat function we have been dealing with until now. We call it a "sawtooth function". Let's move on and see what happens. Periodic boundary conditions One of the big differences between what we had done before and this problem is the use of *periodic* boundary conditions. If you experiment a bit with the earlier steps and make the simulation run longer (by increasing `nt`), you will notice that the wave keeps moving to the right until it no longer even appears in the plot.With periodic boundary conditions, when a point reaches the right-hand side of the frame, it **wraps around** back to the front of the frame.Recall the discretization that we worked out$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n) + \nu \frac{\Delta t}{\Delta x^2}(u_{i+1}^n - 2u_i^n + u_{i-1}^n)$$What does $u_{i+1}^n$ *mean* when $i$ is already at the end of the frame?Think about this for a minute before moving on.
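One convenient way to express that wrap-around in NumPy (it is used in the `EB_profe` variant further below) is `np.roll`, which shifts an array and feeds the elements that fall off one end back in at the other:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4])
print(np.roll(a, 1))   # [4 0 1 2 3]: each entry now holds its left neighbour, wrapping around
print(np.roll(a, -1))  # [1 2 3 4 0]: each entry now holds its right neighbour, wrapping around
```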
###Code
u = f3(0) #Recreate the initial condition
def EB(u):
for n in range(nt):
un = u.copy()
for i in range(nx-1):
u[i] = un[i] - un[i] * dt / dx *(un[i] - un[i-1]) + nu * dt / dx**2 *\
(un[i+1] - 2 * un[i] + un[i-1])
u[-1] = u[0]
return u
u_analitica = np.asarray([ufunc(nt * dt, xi, nu) for xi in x])
u = f3(0) #Recreate the initial condition
def EB_profe(u):
for n in range(nt):
        # the pandas equivalent of np.roll is pd.DataFrame.shift (I think)
        # in that case it does not assume periodicity!
u_der = np.roll(u,-1)
u_izq = np.roll(u,1)
u = u - u *dt / dx * (u -u_izq) + nu * dt / dx**2 * (u_izq - 2*u +u_der)
return u
plt.figure(figsize=(11, 7), dpi=100)
plt.plot(x,u, marker='o', lw=2, label='Inicial')
plt.plot(x,EB(u), marker='o', lw=2, label='Computacional')
plt.plot(x, u_analitica, label='Analítica')
plt.xlim([0, 2 * np.pi])
plt.ylim([0, 10])
plt.legend(frameon = False);
###Output
_____no_output_____
###Markdown
Playing around with the parameters a bit
###Code
nx = 301 # try different values - what about 501?
nt = 1000 # try different values - what about 500?
dx = 2 * np.pi / (nx - 1)
nu = .07
dt = dx * nu
x = np.linspace(0, 2 * np.pi, nx)
u = f3(0) #Volvemos a crear la condición inicial
u_analytical = np.asarray([ufunc(nt * dt, xi, nu) for xi in x])
plt.figure(figsize=(11, 7), dpi=100)
plt.plot(x,u, marker='o', lw=2, label='Inicial')
plt.plot(x,EB(u), marker='o', lw=2, label='Computacional')
plt.plot(x, u_analytical, label='Analítica')
plt.xlim([0, 2 * np.pi])
plt.ylim([0, 10])
plt.legend(frameon = False);
###Output
_____no_output_____ |
Chapter07/Exercise 7.02/Exercise 7.02.ipynb | ###Markdown
Text Summarization of After Twenty Years by O' Henry and Text Summarization of the first section of the Wikipedia article on Oscar Wilde using TextRank (gensim) http://www.gutenberg.org/cache/epub/2776/pg2776.txt After Twenty Years by O'Henry
###Code
from gensim.summarization import summarize
import wikipedia
import re
file_url_after_twenty=r'../data/ohenry/pg2776.txt'
with open(file_url_after_twenty, 'r') as f:
contents = f.read()
start_string='AFTER TWENTY YEARS\n\n\n'
end_string='\n\n\n\n\n\nLOST ON DRESS PARADE'
text_after_twenty=contents[contents.find(start_string):contents.find(end_string)]
text_after_twenty=text_after_twenty.replace('\n',' ')
text_after_twenty=re.sub(r"\s+"," ",text_after_twenty)
text_after_twenty
summary_text_after_twenty=summarize(text_after_twenty, ratio=0.2)
print(summary_text_after_twenty)
summary_text_after_twenty=summarize(text_after_twenty, ratio=0.25)
print(summary_text_after_twenty)
###Output
Now and then you might see the lights of a cigar store or of an all-night lunch counter; but the majority of the doors belonged to business places that had long since been closed.
About that long ago there used to be a restaurant where this store stands--'Big Joe' Brady's restaurant." "Until five years ago," said the policeman.
"Twenty years ago to-night," said the man, "I dined here at 'Big Joe' Brady's with Jimmy Wells, my best chum, and the finest chap in the world.
You couldn't have dragged Jimmy out of New York; he thought it was the only place on earth.
Well, we agreed that night that we would meet here again exactly twenty years from that date and time, no matter what our conditions might be or from what distance we might have to come.
We figured that in twenty years each of us ought to have our destiny worked out and our fortunes made, whatever they were going to be." "It sounds pretty interesting," said the policeman.
I came a thousand miles to stand in this door to-night, and it's worth it if my old partner turns up." The waiting man pulled out a handsome watch, the lids of it set with small diamonds.
"It was exactly ten o'clock when we parted here at the restaurant door." "Did pretty well out West, didn't you?" asked the policeman.
If Jimmy is alive on earth he'll be here by that time.
So long, officer." "Good-night, sir," said the policeman, passing on along his beat, trying doors as he went.
And in the door of the hardware store the man who had come a thousand miles to fill an appointment, uncertain almost to absurdity, with the friend of his youth, smoked his cigar and waited.
About twenty minutes he waited, and then a tall man in a long overcoat, with collar turned up to his ears, hurried across from the opposite side of the street.
"Is that you, Jimmy Wells?" cried the man in the door.
Well, well, well!--twenty years is a long time.
How has the West treated you, old man?" "Bully; it has given me everything I asked it for.
Come on, Bob; we'll go around to a place I know of, and have a good long talk about old times." The two men started up the street, arm in arm.
It's from Patrolman Wells." The man from the West unfolded the little piece of paper handed him.
His hand was steady when he began to read, but it trembled a little by the time he had finished.
"_Bob: I was at the appointed place on time.
When you struck the match to light your cigar I saw it was the face of the man wanted in Chicago.
###Markdown
Wikipedia article on Oscar Wilde (summary section)
###Code
#text_wiki_oscarwilde=wikipedia.summary("Oscar Wilde")
file_url_wiki_oscarwilde=r'../data/oscarwilde/ow_wikipedia_sum.txt'
with open(file_url_wiki_oscarwilde, 'r', encoding='latin-1') as f:
text_wiki_oscarwilde = f.read()
text_wiki_oscarwilde=text_wiki_oscarwilde.replace('\n',' ')
text_wiki_oscarwilde=re.sub(r"\s+"," ",text_wiki_oscarwilde)
text_wiki_oscarwilde
summary_wiki_oscarwilde=summarize(text_wiki_oscarwilde, ratio=0.2)
print(summary_wiki_oscarwilde)
summary_wiki_oscarwilde=summarize(text_wiki_oscarwilde, ratio=0.25)
print(summary_wiki_oscarwilde)
###Output
He is best remembered for his epigrams and plays, his novel The Picture of Dorian Gray, and the circumstances of his criminal conviction for "gross indecency", imprisonment, and early death at age 46.
As a spokesman for aestheticism, he tried his hand at various literary activities: he published a book of poems, lectured in the United States and Canada on the new "English Renaissance in Art" and interior decoration, and then returned to London where he worked prolifically as a journalist.
Unperturbed, Wilde produced four society comedies in the early 1890s, which made him one of the most successful playwrights of late-Victorian London.
At the height of his fame and success, while The Importance of Being Earnest (1895) was still being performed in London, Wilde had the Marquess of Queensberry prosecuted for criminal libel.
During his last year in prison, he wrote De Profundis (published posthumously in 1905), a long letter which discusses his spiritual journey through his trials, forming a dark counterpoint to his earlier philosophy of pleasure.
|
Classification reports/.ipynb_checkpoints/Comparison_All_models-checkpoint.ipynb | ###Markdown
Comparisons of all the work done till now
###Code
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import seaborn as sb
###Output
_____no_output_____
###Markdown
Now let's check the tag analysis we did. Task: Tag Analysis
Here we have trained 2 models:
* One with the top 5500 most important tags and only 100K data points.
* A second one with 500 tags and 3 times more weight given to the title, because the title contains more information about the question.
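For reference, a minimal sketch (not part of the original analysis) of how a micro-averaged F1 score like the ones stored in these report files can be computed with scikit-learn; `y_true` and `y_pred` below are made-up multi-label indicator arrays, not the real data:
```python
# Hypothetical sketch: micro-averaged F1 on multi-label indicator arrays.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])

micro_f1 = f1_score(y_true, y_pred, average="micro")
print(micro_f1)   # -> 0.75 for these toy arrays
```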
###Code
# We get the report of our analysis
tags_5500 = pd.read_csv("report_5500tags_100k.csv")
tags_500_title = pd.read_csv("report_3title_100k.csv")
# Selecting average micro f1 score from report
tags_5500_avg_values = tags_5500["f1-score"].iloc[5500]
tags_500_title_avg_values = tags_500_title["f1-score"].iloc[500]
# Now plot then to get better understanding
y=[tags_5500_avg_values,tags_500_title_avg_values]
sb.barplot(["5500 tags","500 tags with 3x title weight"],y)
plt.xlabel("Model")
plt.ylabel("Micro F1 score (0 to 1, higher is better)")
plt.title("Micro F1 scores for both models")
###Output
_____no_output_____
###Markdown
Now let's look at how the precision and recall values change
###Code
tags_5500.head()
tags_500_title.head()
###Output
_____no_output_____ |
notebooks/2StateFolding.ipynb | ###Markdown
2 State Folding Fitting Notebook
---
[Author] ERGM
---
Fitting equilibrium & kinetic folding data to simple models

In this notebook we will show how equilibrium and kinetic folding data can be imported into a notebook and fitted to folding models.

---
Data Format

Datasets should be in .csv files where:
1. The 1st row should contain the data titles
2. The 1st column should contain the x-values
3. The subsequent columns should contain the y-values.
4. You can have different data sets in different .csv files or all in one (as long as there is only one x-value column).
5. If you wish to perform global analyses on folding or equilibrium data, the datasets concerned must be in the same .csv
6. Except for global analyses using the Ising model, where each dataset must have its own .csv

---
Example .csv structure:

[Urea] (M) | Fraction Unfolded FKBP12
------------ | ------------------------
0 | -0.00207
0.267 | 0.00307
0.533 | -0.00688
0.8 | 0.00605
1.07 | 0.00232
... | ...

---
First off, let's load pyfolding & pyplot into this ipython notebook (pyplot allows us to plot more complex figures of our results):
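As a quick sanity check before handing a file to pyfolding, the column layout described above can be inspected with pandas; this is only a sketch, and the file name `Equilm_FKBP12.csv` is an assumption taken from the cells further down:
```python
# Hypothetical sketch: inspect a dataset laid out as described above.
import pandas as pd

df = pd.read_csv("Equilm_FKBP12.csv")
print(df.columns.tolist())   # first column = x-values (e.g. [Urea] (M)), remaining columns = y-values
print(df.head())
```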
###Code
# use this command to tell Jupyter to plot figures inline with the text
%matplotlib inline
# import pyfolding, the pyfolding models and ising models
import pyfolding
from pyfolding import *
# import the package for plotting, call it plt
import matplotlib.pyplot as plt
# import numpy as well
import numpy as np
###Output
_____no_output_____
###Markdown
--- Now, we need to load some data to analyse. I will import the equilibrium denaturation & unfolding/folding kinetics of wild-type FKBP12 from:
`2. Main E.R.G., Fulton K.F. & Jackson S.E. (1999) “The folding pathway of FKBP12 and characterisation of the transition state.” Journal of Molecular Biology, 291, 429-444.`

Considerations
1. Kinetics data should be entered as rate constants ( *k* ) and NOT as the `log` of the rate constant.
2. There can be no "empty" cells between data points in the .csv file for kinetics data.
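If a kinetics file happens to store log10 rate constants instead, a small conversion like the following can be applied before saving the .csv; this is a hedged sketch, and the file and column names (`Kinetics_FKBP12_log.csv`, `log10_k`, `denaturant`) are assumptions, not part of pyfolding:
```python
# Hypothetical sketch: convert log10(k) back to plain rate constants before export.
import pandas as pd

kin = pd.read_csv("Kinetics_FKBP12_log.csv")           # assumed file with a 'log10_k' column
kin["k"] = 10 ** kin["log10_k"]                        # pyfolding expects rate constants, not their log
kin[["denaturant", "k"]].to_csv("Kinetics_FKBP12.csv", index=False)
```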
###Code
# start by loading a data set
# arguments are "path", "filename"
pth = "/Users/ergm/Dropbox/AlanLoweCollaboration/Datasets/FKBP12_Datasets/"
Equilm_FKBP12 = pyfolding.read_equilibrium_data(pth,"Equilm_FKBP12.csv")
Kinetics_FKBP12 = pyfolding.read_kinetic_data(pth,"Kinetics_FKBP12.csv")
###Output
_____no_output_____
###Markdown
--- Lets plot the data.
###Code
# now plot the equilm data
Equilm_FKBP12.plot()
# now the kinetics
Kinetics_FKBP12.plot()
###Output
_____no_output_____
###Markdown
--- OK, now we can try fitting the data to protein folding models. We will start by fitting the equilibrium data to a two state folding model, without sloping baselines (as the data has been processed as Fraction Unfolded). We can list the models in pyfolding:
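For intuition, a generic two-state fraction-unfolded curve can be sketched as below. This is only the usual textbook form with ΔG = m*(d50 - d), written here for illustration; it is not necessarily the exact parameterisation pyfolding uses internally, and the parameter values are purely illustrative:
```python
# Hypothetical sketch of a generic two-state equilibrium curve (not pyfolding's code).
import numpy as np

def fraction_unfolded(den, m, d50, RT=0.593):   # RT in kcal/mol at 25 degrees C (assumed units)
    dG = m * (d50 - den)                        # stability at denaturant concentration `den`
    K = np.exp(-dG / RT)                        # unfolding equilibrium constant
    return K / (1.0 + K)

print(fraction_unfolded(np.array([0.0, 3.87, 8.0]), m=1.43, d50=3.87))
```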
###Code
# Command imports pyfolding models
from pyfolding.models import *
# command lists models
list_models()
###Output
_____no_output_____
###Markdown
--- We can print out the model for viewing and then print out the variables of the model.
###Code
# printing the equation for viewing
models.TwoStateEquilibrium().print_equation()
# as can be seen this model fits data that has been normalised to "fraction folded" or "fraction unfolded".
# Printing the model variables
# The function has two parts:
# part1 states the model you want to find out about: "******()"
# part2 prints the variables: ".fit_func_args"
TwoStateEquilibrium().fit_func_args
###Output
_____no_output_____
###Markdown
--- Or we can skip straight to fitting the data.
###Code
# Set temperature to 25.00°C
# (NOTE: Careful, this sets the temperature for all subsequent calculations)
pyfolding.set_temperature(25.)
#1st select the fit function and associates it with the data
Equilm_FKBP12.fit_func = models.TwoStateEquilibrium
#then fit it.
#in the brackets you can define starting values for the variables -
#input in the order printed above with the".fit_func_args'
Equilm_FKBP12.fit(p0=[2,4])
###Output
Set temperature to 25.00°C
(NOTE: Careful, this sets the temperature for all subsequent calculations)
==================================================
Fitting results
==================================================
ID: Equilm_FKBP12
Model: TwoStateEquilibrium
Method: scipy.optimize.curve_fit
m: 1.43404 ± 0.00003
d50: 3.86730 ± 0.00001
--------------------------------------------------
R^2: 0.99933
==================================================
###Markdown
--- We can print the resultant graph: We will need a slightly different command, as we need to plot the data and the fitted curve on the same graph.
###Code
Equilm_FKBP12.plot()
# Set temperature to 25.00°C
# (NOTE: Careful, this sets the temperature for all subsequent calculations)
pyfolding.set_temperature(25.)
# We can also fit the data to the Two State Equilm model that has denaturant linear dependent Native & Denatured Baselines:
Equilm_FKBP12.fit_func = models.TwoStateEquilibrium
# printing the equation for viewing
models.TwoStateEquilibrium().print_equation()
#then fit it.
#in the brackets you can define starting values for the variables -
#in this case I have just left them as the default.
Equilm_FKBP12.fit()
###Output
Set temperature to 25.00°C
(NOTE: Careful, this sets the temperature for all subsequent calculations)
###Markdown
--- We can fit the kinetic data separately too:
###Code
# select the fit function
Kinetics_FKBP12.fit_func = models.TwoStateChevron
TwoStateChevron().fit_func_args
# printing the equation for viewing
models.TwoStateChevron().print_equation()
# 1st select the fit function and associates it with the data
Kinetics_FKBP12.fit_func = models.TwoStateChevron
# 2nd fit the data with initial values
Kinetics_FKBP12.fit(p0=[4,2,0.0001,1])
###Output
==================================================
Fitting results
==================================================
ID: Kinetics_FKBP12
Model: TwoStateChevron
Method: scipy.optimize.curve_fit
kf: 4.21728 ± 0.00625
mf: 1.86223 ± 0.00082
ku: 0.00019 ± 0.00000
mu: 0.90557 ± 0.00094
--------------------------------------------------
R^2: 0.99199
==================================================
###Markdown
--- We can print the resultant graph: This is similar to the equilibrium plot, but we need to remember to use a natural log plot for the kinetics dataset and fit.
###Code
pyfolding.plot_chevron(Kinetics_FKBP12)
###Output
_____no_output_____
###Markdown
--- We can also plot a fancier graph that shows both Equilm and Kinetics together
###Code
pyfolding.plot_figure(Equilm_FKBP12, Kinetics_FKBP12, display=True)
###Output
_____no_output_____
###Markdown
Saving out the fit results as a .CSV
###Code
# save out the data
Equilm_FKBP12.save_fit('/Users/ergm/Desktop/test.csv')
###Output
Writing .csv file (/Users/ergm/Desktop/test.csv)...
###Markdown
Fit to multiple models
###Code
# make a list of models to be used to fit
models_to_fit = [models.TwoStateEquilibrium,
models.TwoStateEquilibriumSloping]
# and now lets fit them
for model in models_to_fit:
Equilm_FKBP12.fit_func = model
Equilm_FKBP12.fit()
Equilm_FKBP12.plot()
###Output
==================================================
Fitting results
==================================================
ID: Equilm_FKBP12
Model: TwoStateEquilibrium
Method: scipy.optimize.curve_fit
m: 1.43404 ± 0.00003
d50: 3.86730 ± 0.00001
--------------------------------------------------
R^2: 0.99933
==================================================
|
dmdw_lab_Assignment_4.ipynb | ###Markdown
Dissimilarity matrix
###Code
path="https://raw.githubusercontent.com/chirudukuru/DMDW/main/student-mat.csv"
import pandas as pd
import numpy as np
df=pd.read_csv(path)
df
###Output
_____no_output_____
###Markdown
Proximity measures of binary attributes
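Since these columns are binary attributes, a dissimilarity designed for binary data (Hamming for symmetric, Jaccard for asymmetric attributes) is usually preferred over plain Euclidean distance. A small sketch with SciPy, using made-up vectors rather than the dataframe below:
```python
# Hypothetical sketch: binary dissimilarities with SciPy.
import numpy as np
from scipy.spatial import distance

a = np.array([[1, 0, 1, 1, 0, 0, 1, 0]], dtype=bool)
b = np.array([[1, 1, 0, 1, 0, 0, 0, 0]], dtype=bool)

print(distance.cdist(a, b, metric="hamming"))   # symmetric binary: fraction of mismatching attributes (0.375)
print(distance.cdist(a, b, metric="jaccard"))   # asymmetric binary: ignores 0-0 matches (0.6)
```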
###Code
df1=df[['schoolsup','famsup','paid','activities','nursery','higher','internet','romantic']]
df1.head()
df1=df1.replace('no',0)
df1=df1.replace('yes',1)
df1.head()
n=np.array(df1[['schoolsup','famsup']])
n=n.reshape(-1,2)
n.shape
m=np.array(df1[['internet','romantic']])
m=m.reshape(-1,2)
m.shape
from scipy.spatial import distance
dist_matrix=distance.cdist(n,m)
print(dist_matrix)
import seaborn as sns
import matplotlib.pyplot as plt
sns.heatmap(dist_matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Nominal attributes
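For nominal attributes the standard dissimilarity is simple matching, d(i, j) = (p - m) / p with p attributes and m matches, which is exactly what the Hamming metric computes on label-encoded values. A small illustrative sketch (not using the dataframe defined below):
```python
# Hypothetical sketch: simple matching dissimilarity for nominal attributes.
import numpy as np
from scipy.spatial import distance

obj_i = np.array([[2, 0, 1, 3]])   # label-encoded nominal values for object i
obj_j = np.array([[2, 1, 1, 0]])   # label-encoded nominal values for object j

print(distance.cdist(obj_i, obj_j, metric="hamming"))   # -> 0.5 (2 of 4 attributes differ)
```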
###Code
nominal=df[['Mjob','Fjob','reason','guardian']]
nominal=nominal.replace('at_home','home')
nominal=(nominal.astype('category'))
from sklearn.preprocessing import LabelEncoder
lb=LabelEncoder()
nominal['Mjob']=lb.fit_transform(nominal['Mjob'])
nominal['Fjob']=lb.fit_transform(nominal['Fjob'])
nominal['reason']=lb.fit_transform(nominal['reason'])
nominal['guardian']=lb.fit_transform(nominal['guardian'])
nominal.head()
nominal1=np.array(nominal)
nominal1.reshape(-1,2)
nominal2=np.array(nominal)
nominal2.reshape(-1,2)
from scipy.spatial import distance
dist_matrix=distance.cdist(nominal1,nominal2)
print(dist_matrix)
sns.heatmap(dist_matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Numeric Attributes
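Numeric attributes on very different scales (e.g. age versus failures) can dominate a Euclidean distance, so a common preliminary step is min-max normalisation. A small sketch with made-up values, shown here for reference only:
```python
# Hypothetical sketch: min-max normalise numeric attributes before a distance matrix.
import numpy as np
from scipy.spatial import distance

X = np.array([[15.0, 0.0], [18.0, 3.0], [22.0, 1.0]])      # e.g. [age, failures]
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(distance.cdist(X_norm, X_norm, metric="euclidean"))
```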
###Code
numeric=df[['age','Medu','Fedu','traveltime','studytime','failures']]
numeric.head()
num1=np.array(numeric[['age','failures']])
num1.reshape(-1,2)
num1.shape
num2=np.array(numeric[['Fedu','Medu']])
num2.reshape(-1,2)
num2.shape
from scipy.spatial import distance
dist_matrix=distance.cdist(num1,num2)
print(dist_matrix)
dist_matrix.shape
sns.heatmap(dist_matrix)
###Output
_____no_output_____ |
Colab-bezier-shapes.ipynb | ###Markdown
Generating random shapesThis coalab uses [jviquerat/bezier_shapes](https://github.com/jviquerat/bezier_shapes) and applies some modifications, so it actually works. The original code does not seem to do bezier lines without problems, so at least random shapes are generated.My fork is located in [styler00dollar/Colab-bezier-shapes](https://github.com/styler00dollar/Colab-bezier-shapes).Example output with removed dots:Currently only random shapes without curvy lines.
###Code
#@title connect google drive
from google.colab import drive
drive.mount('/content/drive')
print('Google Drive connected.')
#@title install everything
%cd /content/
!git clone https://github.com/styler00dollar/Colab-bezier-shapes
!sudo apt-get install -y gmsh
!pip install gmsh
!pip install pygmsh
!pip install meshio
!pip install progress
!pip install pythontools
!pip install gmsh-sdk
%cd /content/
!git clone https://gitlab.onelab.info/gmsh/gmsh.git
%cd /content/gmsh
!mkdir build
#!cd build
%cd /content/gmsh
!cmake -DENABLE_BUILD_DYNAMIC=1
#!cmake ..
!make
!make install
#@title modify python file (contains configuration)
%%writefile /content/Colab-bezier-shapes/generate_dataset.py
# fixed file
# Generic imports
import warnings
import os
import os.path
import PIL
import math
import scipy.special
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import numpy
warnings.simplefilter(action='ignore', category=FutureWarning)
# Imports with probable installation required
try:
import pygmsh, meshio
except ImportError:
print('*** Missing required packages, I will install them for you ***')
os.system('pip3 install pygmsh meshio')
import pygmsh, meshio
# Custom imports
from meshes_utils import *
### ************************************************
### Class defining shape object
class Shape:
### ************************************************
### Constructor
def __init__(self,
name ='shape',
control_pts =None,
n_control_pts =None,
n_sampling_pts=None,
radius =None,
edgy =None):
if (name is None): name = 'shape'
if (control_pts is None): control_pts = np.array([])
if (n_control_pts is None): n_control_pts = 0
if (n_sampling_pts is None): n_sampling_pts = 0
if (radius is None): radius = np.array([])
if (edgy is None): edgy = np.array([])
self.name = name
self.control_pts = control_pts
self.n_control_pts = n_control_pts
self.n_sampling_pts = n_sampling_pts
self.curve_pts = np.array([])
self.area = 0.0
self.size_x = 0.0
self.size_y = 0.0
self.index = 0
if (len(radius) == n_control_pts): self.radius = radius
if (len(radius) == 1): self.radius = radius*np.ones([n_control_pts])
if (len(edgy) == n_control_pts): self.edgy = edgy
if (len(edgy) == 1): self.edgy = edgy*np.ones([n_control_pts])
subname = name.split('_')
if (len(subname) == 2): # name is of the form shape_?.xxx
self.name = subname[0]
index = subname[1].split('.')[0]
self.index = int(index)
if (len(subname) > 2): # name contains several '_'
print('Please do not use several "_" char in shape name')
quit()
if (len(control_pts) > 0):
self.control_pts = control_pts
self.n_control_pts = len(control_pts)
### ************************************************
### Reset object
def reset(self):
self.name = 'shape'
self.control_pts = np.array([])
self.n_control_pts = 0
self.n_sampling_pts = 0
self.radius = np.array([])
self.edgy = np.array([])
self.curve_pts = np.array([])
self.area = 0.0
### ************************************************
### Generate shape
def generate(self, *args, **kwargs):
# Handle optional argument
centering = kwargs.get('centering', True)
cylinder = kwargs.get('cylinder', False)
magnify = kwargs.get('magnify', 1.0)
ccws = kwargs.get('ccws', True)
# Generate random control points if empty
if (len(self.control_pts) == 0):
if (cylinder):
self.control_pts = generate_cylinder_pts(self.n_control_pts)
else:
self.control_pts = generate_random_pts(self.n_control_pts)
# Magnify
self.control_pts *= magnify
# Center set of points
center = np.mean(self.control_pts, axis=0)
if (centering):
self.control_pts -= center
# Sort points counter-clockwise
if (ccws):
control_pts, radius, edgy = ccw_sort(self.control_pts,
self.radius,
self.edgy)
else:
control_pts = np.array(self.control_pts)
radius = np.array(self.radius)
edgy = np.array(self.edgy)
local_curves = []
delta = np.zeros([self.n_control_pts,2])
radii = np.zeros([self.n_control_pts,2])
delta_b = np.zeros([self.n_control_pts,2])
# Compute all informations to generate curves
for i in range(self.n_control_pts):
# Collect points
prv = (i-1)
crt = i
nxt = (i+1)%self.n_control_pts
pt_m = control_pts[prv,:]
pt_c = control_pts[crt,:]
pt_p = control_pts[nxt,:]
# Compute delta vector
diff = pt_p - pt_m
diff = diff/np.linalg.norm(diff)
delta[crt,:] = diff
# Compute edgy vector
delta_b[crt,:] = 0.5*(pt_m + pt_p) - pt_c
# Compute radii
dist = compute_distance(pt_m, pt_c)
radii[crt,0] = 0.5*dist*radius[crt]
dist = compute_distance(pt_c, pt_p)
radii[crt,1] = 0.5*dist*radius[crt]
# Generate curves
for i in range(self.n_control_pts):
crt = i
nxt = (i+1)%self.n_control_pts
pt_c = control_pts[crt,:]
pt_p = control_pts[nxt,:]
dist = compute_distance(pt_c, pt_p)
smpl = math.ceil(self.n_sampling_pts*math.sqrt(dist))
local_curve = generate_bezier_curve(pt_c, pt_p,
delta[crt,:], delta[nxt,:],
delta_b[crt,:], delta_b[nxt,:],
radii[crt,1], radii[nxt,0],
edgy[crt], edgy[nxt],
smpl)
local_curves.append(local_curve)
curve = np.concatenate([c for c in local_curves])
x, y = curve.T
z = np.zeros(x.size)
self.curve_pts = np.column_stack((x,y,z))
self.curve_pts = remove_duplicate_pts(self.curve_pts)
# Center set of points
if (centering):
center = np.mean(self.curve_pts, axis=0)
self.curve_pts -= center
self.control_pts[:,0:2] -= center[0:2]
# Compute area
self.compute_area()
# Compute dimensions
self.compute_dimensions()
### ************************************************
### Write image
def generate_image(self, *args, **kwargs):
# Handle optional argument
plot_pts = kwargs.get('plot_pts', False)
override_name = kwargs.get('override_name', '')
show_quadrants = kwargs.get('show_quadrants', False)
max_radius = kwargs.get('max_radius', 1.0)
min_radius = kwargs.get('min_radius', 0.2)
xmin = kwargs.get('xmin', -1.0)
xmax = kwargs.get('xmax', 1.0)
ymin = kwargs.get('ymin', -1.0)
ymax = kwargs.get('ymax', 1.0)
# Plot shape
plt.xlim([xmin,xmax])
plt.ylim([ymin,ymax])
plt.axis('off')
plt.gca().set_aspect('equal', adjustable='box')
plt.fill([xmin,xmax,xmax,xmin],
[ymin,ymin,ymax,ymax],
color=(0.784,0.773,0.741),
linewidth=2.5,
zorder=0)
plt.fill(self.curve_pts[:,0],
self.curve_pts[:,1],
'black',
linewidth=0,
zorder=1)
# Plot points
# Each point gets a different color
if (plot_pts):
colors = matplotlib.cm.ocean(np.linspace(0, 1, self.n_control_pts))
plt.scatter(self.control_pts[:,0],
self.control_pts[:,1],
color=colors,
s=16,
zorder=2,
alpha=0.5)
# Plot quadrants
if (show_quadrants):
for pt in range(self.n_control_pts):
dangle = (360.0/float(self.n_control_pts))
angle = dangle*float(pt)+dangle/2.0
x_max = max_radius*math.cos(math.radians(angle))
y_max = max_radius*math.sin(math.radians(angle))
x_min = min_radius*math.cos(math.radians(angle))
y_min = min_radius*math.sin(math.radians(angle))
plt.plot([x_min, x_max],
[y_min, y_max],
color='w',
linewidth=1)
circle = plt.Circle((0,0),max_radius,fill=False,color='w')
plt.gcf().gca().add_artist(circle)
circle = plt.Circle((0,0),min_radius,fill=False,color='w')
plt.gcf().gca().add_artist(circle)
# Save image
filename = self.name+'_'+str(self.index)+'.png'
if (override_name != ''): filename = override_name
plt.savefig(filename,
dpi=400)
plt.close(plt.gcf())
plt.cla()
trim_white(filename)
### ************************************************
### Write csv
def write_csv(self):
with open(self.name+'_'+str(self.index)+'.csv','w') as file:
# Write header
file.write('{} {}\n'.format(self.n_control_pts,
self.n_sampling_pts))
# Write control points coordinates
for i in range(0,self.n_control_pts):
file.write('{} {} {} {}\n'.format(self.control_pts[i,0],
self.control_pts[i,1],
self.radius[i],
self.edgy[i]))
### ************************************************
### Read csv and initialize shape with it
def read_csv(self, filename, *args, **kwargs):
# Handle optional argument
keep_numbering = kwargs.get('keep_numbering', False)
if (not os.path.isfile(filename)):
print('I could not find csv file: '+filename)
print('Exiting now')
exit()
self.reset()
sfile = filename.split('.')
sfile = sfile[-2]
sfile = sfile.split('/')
name = sfile[-1]
if (keep_numbering):
sname = name.split('_')
name = sname[0]
name = name+'_'+str(self.index)
x = []
y = []
radius = []
edgy = []
with open(filename) as file:
header = file.readline().split()
n_control_pts = int(header[0])
n_sampling_pts = int(header[1])
for i in range(0,n_control_pts):
coords = file.readline().split()
x.append(float(coords[0]))
y.append(float(coords[1]))
radius.append(float(coords[2]))
edgy.append(float(coords[3]))
control_pts = np.column_stack((x,y))
self.__init__(name,
control_pts,
n_control_pts,
n_sampling_pts,
radius,
edgy)
### ************************************************
### Mesh shape
def mesh(self, *args, **kwargs):
# Handle optional argument
mesh_domain = kwargs.get('mesh_domain', False)
xmin = kwargs.get('xmin', -1.0)
xmax = kwargs.get('xmax', 1.0)
ymin = kwargs.get('ymin', -1.0)
ymax = kwargs.get('ymax', 1.0)
domain_h = kwargs.get('domain_h', 1.0)
mesh_format = kwargs.get('mesh_format', 'mesh')
# Convert curve to polygon
geom = pygmsh.built_in.Geometry()
poly = geom.add_polygon(self.curve_pts,
make_surface=not mesh_domain)
# Mesh domain if necessary
if (mesh_domain):
# Compute an intermediate mesh size
border = geom.add_rectangle(xmin, xmax,
ymin, ymax,
0.0,
domain_h,
holes=[poly.line_loop])
# Generate mesh and write in medit format
try:
mesh = pygmsh.generate_mesh(geom, extra_gmsh_arguments=["-v", "0"])
except AssertionError:
print('\n'+'!!!!! Meshing failed !!!!!')
return False, 0
### ************************************************
# fixing output
without_vertex = [x for x in mesh.cells if "vertex" not in x]
triangle_array = [x for x in mesh.cells if "triangle" in x]
triangle_array = numpy.array(list(triangle_array))
n_tri = len(triangle_array[0][1])
mesh.cells = without_vertex
# Remove lines if output format is xml
if (mesh_format == 'xml'): del mesh.cells['line']
# Write mesh
filename = self.name+'_'+str(self.index)+'.'+mesh_format
meshio.write_points_cells(filename, mesh.points, mesh.cells)
return True, n_tri
### ************************************************
### Get shape bounding box
def compute_bounding_box(self):
x_max, y_max = np.amax(self.control_pts,axis=0)
x_min, y_min = np.amin(self.control_pts,axis=0)
dx = x_max - x_min
dy = y_max - y_min
return dx, dy
### ************************************************
### Modify shape given a deformation field
def modify_shape_from_field(self, deformation, *args, **kwargs):
# Handle optional argument
replace = kwargs.get('replace', False)
pts_list = kwargs.get('pts_list', [])
# Check inputs
if (pts_list == []):
if (len(deformation[:,0]) != self.n_control_pts):
print('Input deformation field does not have right length')
quit()
if (len(deformation[0,:]) not in [2, 3]):
print('Input deformation field does not have right width')
quit()
if (pts_list != []):
if (len(pts_list) != len(deformation)):
print('Lengths of pts_list and deformation are different')
quit()
# If shape is to be replaced entirely
if ( replace):
# If a list of points is provided
if (pts_list != []):
for i in range(len(pts_list)):
self.control_pts[pts_list[i],0] = deformation[i,0]
self.control_pts[pts_list[i],1] = deformation[i,1]
self.edgy[pts_list[i]] = deformation[i,2]
# Otherwise
if (pts_list == []):
self.control_pts[:,0] = deformation[:,0]
self.control_pts[:,1] = deformation[:,1]
self.edgy[:] = deformation[:,2]
# Otherwise
if (not replace):
# If a list of points to deform is provided
if (pts_list != []):
for i in range(len(pts_list)):
self.control_pts[pts_list[i],0] += deformation[i,0]
self.control_pts[pts_list[i],1] += deformation[i,1]
self.edgy[pts_list[i]] += deformation[i,2]
# Otherwise
if (pts_list == []):
self.control_pts[:,0] += deformation[:,0]
self.control_pts[:,1] += deformation[:,1]
self.edgy[:] += deformation[:,2]
# Increment shape index
self.index += 1
### ************************************************
### Compute shape area
def compute_area(self):
self.area = 0.0
# Use Green theorem to compute area
for i in range(0,len(self.curve_pts)-1):
x1 = self.curve_pts[i-1,0]
x2 = self.curve_pts[i, 0]
y1 = self.curve_pts[i-1,1]
y2 = self.curve_pts[i, 1]
self.area += 2.0*(y1+y2)*(x2-x1)
### ************************************************
### Compute shape dimensions
def compute_dimensions(self):
self.size_y = 0.0
self.size_x = 0.0
xmin = 1.0e20
xmax =-1.0e20
ymin = 1.0e20
ymax =-1.0e20
for i in range(len(self.curve_pts)):
xmin = min(xmin, self.curve_pts[i,0])
xmax = max(xmax, self.curve_pts[i,0])
ymin = min(ymin, self.curve_pts[i,1])
ymax = max(ymax, self.curve_pts[i,1])
self.size_x = xmax - xmin
self.size_y = ymax - ymin
### End of class Shape
### ************************************************
### ************************************************
### Compute distance between two points
def compute_distance(p1, p2):
return np.sqrt((p1[0]-p2[0])**2+(p1[1]-p2[1])**2)
### ************************************************
### Generate n_pts random points in the unit square
def generate_random_pts(n_pts):
return np.random.rand(n_pts,2)
### ************************************************
### Generate cylinder points
def generate_cylinder_pts(n_pts):
if (n_pts < 4):
print('Not enough points to generate cylinder')
exit()
pts = np.zeros([n_pts, 2])
ang = 2.0*math.pi/n_pts
for i in range(0,n_pts):
pts[i,:] = [math.cos(float(i)*ang),math.sin(float(i)*ang)]
return pts
### ************************************************
### Compute minimal distance between successive pts in array
def compute_min_distance(pts):
dist_min = 1.0e20
for i in range(len(pts)-1):
p1 = pts[i ,:]
p2 = pts[i+1,:]
dist = compute_distance(p1,p2)
dist_min = min(dist_min,dist)
return dist_min
### ************************************************
### Remove duplicate points in input coordinates array
### WARNING : this routine is highly sub-optimal
def remove_duplicate_pts(pts):
to_remove = []
for i in range(len(pts)):
for j in range(len(pts)):
# Check that i and j are not identical
if (i == j):
continue
# Check that i and j are not removed points
if (i in to_remove) or (j in to_remove):
continue
# Compute distance between points
pi = pts[i,:]
pj = pts[j,:]
dist = compute_distance(pi,pj)
# Tag the point to be removed
if (dist < 1.0e-8):
to_remove.append(j)
# Sort elements to remove in reverse order
to_remove.sort(reverse=True)
# Remove elements from pts
for pt in to_remove:
pts = np.delete(pts, pt, 0)
return pts
### ************************************************
### Counter Clock-Wise sort
### - Take a cloud of points and compute its geometric center
### - Translate points to have their geometric center at origin
### - Compute the angle from origin for each point
### - Sort angles by ascending order
def ccw_sort(pts, rad, edg):
geometric_center = np.mean(pts,axis=0)
translated_pts = pts - geometric_center
angles = np.arctan2(translated_pts[:,1], translated_pts[:,0])
x = angles.argsort()
pts2 = np.array(pts)
rad2 = np.array(rad)
edg2 = np.array(edg)
return pts2[x,:], rad2[x], edg2[x]
### ************************************************
### Compute Bernstein polynomial value
def compute_bernstein(n,k,t):
k_choose_n = scipy.special.binom(n,k)
return k_choose_n * (t**k) * ((1.0-t)**(n-k))
### ************************************************
### Sample Bezier curves given set of control points
### and the number of sampling points
### Bezier curves are parameterized with t in [0,1]
### and are defined with n control points P_i :
### B(t) = sum_{i=0,n} B_i^n(t) * P_i
def sample_bezier_curve(control_pts, n_sampling_pts):
n_control_pts = len(control_pts)
t = np.linspace(0, 1, n_sampling_pts)
curve = np.zeros((n_sampling_pts, 2))
for i in range(n_control_pts):
curve += np.outer(compute_bernstein(n_control_pts-1, i, t),
control_pts[i])
return curve
### ************************************************
### Generate Bezier curve between two pts
def generate_bezier_curve(p1, p2,
delta1, delta2,
delta_b1, delta_b2,
radius1, radius2,
edgy1, edgy2,
n_sampling_pts):
# Lambda function to wrap angles
#wrap = lambda angle: (angle >= 0.0)*angle + (angle < 0.0)*(angle+2*np.pi)
# Sample the curve if necessary
if (n_sampling_pts != 0):
# Create array of control pts for cubic Bezier curve
# First and last points are given, while the two intermediate
# points are computed from edge points, angles and radius
control_pts = np.zeros((4,2))
control_pts[0,:] = p1[:]
control_pts[3,:] = p2[:]
# Compute baseline intermediate control pts ctrl_p1 and ctrl_p2
ctrl_p1_base = radius1*delta1
ctrl_p2_base =-radius2*delta2
ctrl_p1_edgy = radius1*delta_b1
ctrl_p2_edgy = radius2*delta_b2
control_pts[1,:] = p1 + edgy1*ctrl_p1_base + (1.0-edgy1)*ctrl_p1_edgy
control_pts[2,:] = p2 + edgy2*ctrl_p2_base + (1.0-edgy2)*ctrl_p2_edgy
# Compute points on the Bezier curve
curve = sample_bezier_curve(control_pts, n_sampling_pts)
# Else return just a straight line
else:
curve = p1
curve = np.vstack([curve,p2])
return curve
### Crop white background from image
def trim_white(filename):
im = PIL.Image.open(filename)
bg = PIL.Image.new(im.mode, im.size, (255,255,255))
diff = PIL.ImageChops.difference(im, bg)
bbox = diff.getbbox()
cp = im.crop(bbox)
cp.save(filename)
# Generic imports
import os
import random
import shutil
from datetime import datetime
# Imports with probable installation required
try:
import progress.bar
except ImportError:
print('*** Missing required packages, I will install them for you ***')
os.system('pip3 install progress')
import progress.bar
### ************************************************
### Generate full dataset
# Parameters
n_sampling_pts = 5
mesh_domain = False
plot_pts = True
n_shapes = 200
time = datetime.now().strftime('%Y-%m-%d_%H_%M_%S')
dataset_dir = 'dataset_'+time+'/'
mesh_dir = dataset_dir+'meshes/'
img_dir = dataset_dir+'images/'
filename = 'shape'
magnify = 1.0
xmin =-3.0
xmax = 3.0
ymin =-3.0
ymax = 3.0
n_tri_max = 5000
# Create directories if necessary
if not os.path.exists(mesh_dir):
os.makedirs(mesh_dir)
if not os.path.exists(img_dir):
os.makedirs(img_dir)
# Generate dataset
bar = progress.bar.Bar('Generating shapes', max=n_shapes)
for i in range(0,n_shapes):
generated = False
while (not generated):
n_pts = random.randint(3, 12)
radius = np.random.uniform(0.0, 1.0, size=n_pts)
edgy = np.random.uniform(0.0, 1.0, size=n_pts)
shape = Shape(filename+'_'+str(i),
None,n_pts,n_sampling_pts,radius,edgy)
shape.generate(magnify=2.0,
xmin=xmin,
xmax=xmax,
ymin=ymin,
ymax=ymax)
meshed, n_tri = shape.mesh()
if (meshed and (n_tri < n_tri_max)):
shape.generate_image(plot_pts=0,
xmin=xmin,
xmax=xmax,
ymin=ymin,
ymax=ymax)
img = filename+'_'+str(i)+'.png'
mesh = filename+'_'+str(i)+'.mesh'
shutil.move(img, img_dir)
shutil.move(mesh, mesh_dir)
generated = True
bar.next()
# End bar
bar.finish()
#@title run shape generator
%cd /content/Colab-bezier-shapes
!python /content/Colab-bezier-shapes/generate_dataset.py
#@title create an archive of repository and copy it to google drive (it contains mesh and png data)
!7z a /content/shape_data.7z /content/Colab-bezier-shapes
!cp /content/shape_data.7z "/content/drive/My Drive/shape_data.7z"
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/1.0-xs-analysis-checkpoint.ipynb | ###Markdown
Formatting and Analysis
=====================
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import sktime
data = pd.read_csv('/home/zone/Documents/advantage-investing/data/raw/Goog.csv')
data
data.dtypes
data.columns
data[:][:1]
for col in data.drop(columns=['Unnamed: 0']):
    if col != 'Unnamed: 0':
print(str(col))
data[col] = data[col].str.split('.').str[0]
for col in data.drop(columns=['Unnamed: 0']):
print(str(col))
data[col] = data[col].str.strip(',')
df.Close
df = data.replace(',','', regex=True)
df.Date = pd.to_datetime(df.Date)
df.Close = pd.to_numeric(df.Close)
df = df.set_index(df.Date,drop = True)
df = pd.DataFrame(df.Close)
df
plt.figure(figsize=(9,5))
plt.grid(True)
plt.xlabel('')
plt.ylabel('')
plt.yticks(np.arange(0, 3000, 100))
plt.plot(df['Close'])
plt.title('goog closing price')
plt.show()
plt.figure(figsize=(10,6))
df_close = df.Close
df_close.plot(style='k.')
plt.title('Scatter plot of closing price')
plt.show()
###Output
_____no_output_____
###Markdown
Is the data stationary? Data points are often non-stationary or have means, variances, and covariances that change over time. Non-stationary behaviors like trends are unpredictable and produce unreliable forecasts
###Code
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeseries):
#Determing rolling statistics
rolmean = timeseries.rolling(12).mean()
rolstd = timeseries.rolling(12).std()
#Plot rolling statistics:
plt.plot(timeseries, color='yellow',label='Original')
plt.plot(rolmean, color='red', label='Rolling Mean')
plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best')
plt.title('Rolling Mean and Standard Deviation')
plt.show(block=False)
print("Results of dickey fuller test")
adft = adfuller(timeseries,autolag='AIC')
# output for dft will give us without defining what the values are.
#hence we manually write what values does it explains using a for loop
output = pd.Series(adft[0:4],index=['Test Statistics','p-value','No. of lags used','Number of observations used'])
for key,values in adft[4].items():
output['critical value (%s)'%key] = values
print(output)
test_stationarity(df.Close)
###Output
_____no_output_____
###Markdown
The mean and standard deviation change over time, so our data is non-stationary, and looking at the graph the data shows a trend. First we'll create train and test sets, then we'll fit an ARIMA model to this data. (ARIMA models use differencing to extract stationary data from our originally non-stationary data.)
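To see the differencing idea in practice, here is a small sketch (not part of the original notebook) that pushes the first difference of the closing price through the `test_stationarity` helper defined above:
```python
# Hypothetical sketch: first-order differencing usually removes the trend.
df_diff = df.Close.diff().dropna()
test_stationarity(df_diff)   # re-run the Dickey-Fuller helper on the differenced series
```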
###Code
df_log = np.log(df.Close)   # assumption: df_log is never defined earlier in this notebook; a log transform of the closing price is the usual choice
df_log = df_log[::-1]
train_data, test_data = df_log[3:int(len(df_log)*0.9)], df_log[int(len(df_log)*0.9):]
plt.figure(figsize=(10,6))
plt.grid(True)
plt.xlabel('Dates')
plt.ylabel('Closing Prices')
plt.plot(df_log, 'green', label='Train data')
plt.plot(test_data, 'blue', label='Test data')
plt.legend()
len(test_data)
from sktime.forecasting.all import *
fh = ForecastingHorizon(test_data.index, is_relative=False)
forecaster = ThetaForecaster(sp=12)
forecaster.fit(train_data)
y_pred = forecaster.predict(fh)
smape_loss(test_data, y_pred)
###Output
_____no_output_____ |
botChallenge/davidepruscini/report.ipynb | ###Markdown
Solution to 'The Bot Challenge' - Davide Pruscini (prushh)

Engine

In order to solve the various tasks, it was necessary to study the behaviour of the engine and find out:
* the available commands
* how to view/complete the quests
* how to close the challenge

To start the bot, made available through three executable files (for Linux, MacOS and Windows respectively), you have to store the **TOKEN** inside a file named `token.txt`; to interact with the bot it is enough to type, in the Telegram search bar, the `` chosen at registration time. For more information: [How to create a bot](https://core.telegram.org/bots)

During the presentation of the challenge some spoilers were given about the available commands:
* **/status** - Shows the number of completed quests
* **/quest0** - Lets you answer the first quest
* **/quest1** - Lets you answer the second quest
* **/quest2** - Lets you answer the third quest

No other usable commands were found beyond these, so I moved on to searching for the keyword tied to *Harry Potter*. After countless attempts, mostly related to *Piton* (Snape), I decided to give in and look inside the executable, hoping to find something. Fortunately that worked: by typing the phrase `giuro solennemente di non avere buone intenzioni` ("I solemnly swear that I am up to no good") the three buttons appeared that allow reading the quests, one after another, but only if the previous one has been completed. Their solutions are discussed later on.

All that was left was to close the challenge, and that was easy: having recently watched the famous saga, an online search was enough. To hide the *Marauder's Map* it is sufficient to say `fatto il misfatto` ("mischief managed"). Of course this is only possible once all the quests have been completed successfully.

Bot Development

As per the guidelines, the [python-telegram-bot](https://github.com/python-telegram-bot/python-telegram-bot) package was used; the GitHub repository contains a folder of [examples](https://github.com/python-telegram-bot/python-telegram-bot/tree/master/examples), and that is exactly where I started from.

Main
###Code
def main():
# Create the EventHandler and pass it your bot's token
updater = Updater(TOKEN, use_context=True)
dp = updater.dispatcher
# Display authorization message
bot_username = updater.bot.get_me()['username']
print(f"{get_now()} Authorized on account {bot_username}")
# Adding all the handler for the commands
dp.add_handler(CommandHandler('status', status))
dp.add_handler(CommandHandler('quest0', quest0))
dp.add_handler(CommandHandler('quest1', quest1))
dp.add_handler(CommandHandler('quest2', quest2))
dp.add_handler(CommandHandler('quest3', quest3))
dp.add_handler(CommandHandler('quest4', quest4))
dp.add_handler(CommandHandler('quest5', quest5))
dp.add_handler(MessageHandler(Filters.command, unknown))
cmd_unlocks = ConversationHandler(
entry_points=[MessageHandler(Filters.text, unlocks)],
states={
1: [MessageHandler(
Filters.regex('^(Quest 0|Quest 1|Quest 2|Quest 3|Quest 4|Quest 5)$'),
quest_choice),
MessageHandler(Filters.text, quest_choice)],
},
fallbacks=[CommandHandler('cancel', cancel)]
)
dp.add_handler(cmd_unlocks)
# Log all errors
dp.add_error_handler(error)
# Start the Bot
updater.start_polling()
# Run the bot until you press Ctrl-C or the process
# receives SIGINT, SIGTERM or SIGABRT.
updater.idle()
return 0
###Output
_____no_output_____
###Markdown
Inside the `main()` function the bot itself is actually created: handlers are added for the known and unknown commands, and a `ConversationHandler` is also used to interact with each individual user.
The bot is then started with the following instruction:
```python
updater.start_polling()
```
Unlocks
###Code
def unlocks(update, context):
'''
Unlock missions with the correct passphrase
'''
msg = update.message.text.lower()
reply = "Non ho niente da dire..."
if msg == passphrase:
if 'quests' not in context.user_data.keys():
context.user_data['quests'] = create_qts()
reply_keyboard = [
['Quest 0', 'Quest 1', 'Quest 2'],
['Quest 3', 'Quest 4', 'Quest 5']]
markup = ReplyKeyboardMarkup(reply_keyboard, resize_keyboard=True)
update.message.reply_text(reply, reply_markup=markup)
return 1
elif msg == endphrase:
if 'quests' in context.user_data.keys():
quests = context.user_data['quests']
if get_solved(quests) == NUM_QTS:
reply = "Congratulazioni! Hai finito tutte le missioni..."
else:
reply = "Non hai completato tutte le missioni. Che peccato..."
update.message.reply_text(reply)
###Output
_____no_output_____
###Markdown
This function is the *entry_points* of the `ConversationHandler`; it is called every time a text message is sent, thanks to the `Filters.text` filter. The incoming message is compared with the passphrase; if they match and it is the first time the passphrase has been entered, a dictionary of dictionaries is created inside the user's dedicated storage:
```python
context.user_data['quests'] = create_qts()
```
This dictionary will contain all the quests; each *key* represents a mission and is structured as follows:
```python
num_quest: {
    'text': question,
    'solution': solution,
    'solved': bool,
    'attemp': int
}
```
The `ReplyKeyboardMarkup` containing the buttons is also created and made visible, and 1 is returned so as to move the `ConversationHandler` to the next state. If instead the endphrase was entered, we check whether all the quests have been completed. In either case a reply message is sent.

Quest choice
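For illustration only, a hypothetical `create_qts()` could build that structure as follows; the real helper is not shown in this report, so the texts and solutions below are placeholders:
```python
# Hypothetical sketch of create_qts(); the real helper is not shown in this report.
def create_qts() -> dict:
    return {
        n: {
            'text': f'Question text for quest {n}',   # placeholder question
            'solution': None,                          # placeholder solution for quest n
            'solved': False,
            'attemp': 0,                               # same (misspelled) key used elsewhere in the bot code
        }
        for n in range(NUM_QTS)                        # NUM_QTS is the constant used elsewhere in the bot code
    }
```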
###Code
def quest_choice(update, context):
'''
Bot core to interact with quests.
'''
msg = update.message.text.lower()
quests = context.user_data['quests']
if msg == endphrase:
if get_solved(quests) == NUM_QTS:
reply = "Congratulazioni! Hai finito tutte le missioni..."
reply_markup = ReplyKeyboardRemove()
update.message.reply_text(reply, reply_markup=reply_markup)
return ConversationHandler.END
reply = "Non hai completato tutte le missioni. Che peccato..."
elif msg == 'quest 0':
reply = check_choice(quests, 0)
elif msg == 'quest 1':
reply = check_choice(quests, 1)
elif msg == 'quest 2':
reply = check_choice(quests, 2)
elif msg == 'quest 3':
reply = check_choice(quests, 3)
elif msg == 'quest 4':
reply = check_choice(quests, 4)
elif msg == 'quest 5':
reply = check_choice(quests, 5)
else:
reply = "Non ho niente da dire..."
update.message.reply_text(reply)
###Output
_____no_output_____
###Markdown
We have now moved to state 1 of the `ConversationHandler`, which this function represents. The message sent by the user is checked, looking for a match either with the endphrase (in which case the `ReplyKeyboardMarkup` is removed and the conversation is ended) or with one of the available quests. If the choice corresponds to a quest, the following function is called, which returns a message depending on the case. It also checks whether all the previous quests have been completed or not.
###Code
def check_choice(qts: dict, n: int) -> str:
'''
Check if the question can be answered.
'''
if qts[n]['solved']:
return "Hai già completato questa quest..."
for idx in range(0, n):
if not qts[idx]['solved']:
return "Devi completare la quest precedente..."
return qts[n]['text']
###Output
_____no_output_____
###Markdown
Quest-i
###Code
def quest0(update, context):
'''Command for quest0.'''
reply = check_qts(context, 0)
update.message.reply_text(reply)
###Output
_____no_output_____
###Markdown
For every quest the same function is called: the correctness of the answer is checked and an appropriate message is sent to the user. The number of attempts made for the given quest is also updated.
###Code
def check_qts(context, n: int) -> str:
'''
Check if the answer is correct.
'''
reply = "Risposta non corretta..."
if 'quests' not in context.user_data.keys():
return reply
qts = context.user_data['quests'][n]
if qts['solved']:
return reply
args = " ".join(context.args)
if _cast_arg(args) == qts['solution']:
context.user_data['quests'][n]['solved'] = True
return "Risposta corretta! Quest completata!"
context.user_data['quests'][n]['attemp'] += 1
reply += f" Tentativi effettuati: {qts['attemp']}"
return reply
###Output
_____no_output_____
###Markdown
Status
###Code
def status(update, context):
'''Show number of completed quests.'''
if 'quests' in context.user_data.keys():
quests = context.user_data['quests']
solved_qts = get_solved(quests)
reply = f"Hai completato {solved_qts} quest su {NUM_QTS}"
else:
reply = f"Hai completato 0 quest su {NUM_QTS}"
update.message.reply_text(reply)
###Output
_____no_output_____
###Markdown
It informs the user of how many quests they have completed and how many there are overall. Unknown
###Code
def unknown(update, context):
'''Reply to all unrecognized commands.'''
update.message.reply_text("Comando non valido...")
###Output
_____no_output_____
###Markdown
Whenever the user sends an unknown command, we simply reply as above.

Quest Solutions

Below are the functions written to solve the quests; *N* represents a random integer drawn from a given range, and *DATE* represents a date in the `%Y/%m/%d` format.

0. Compute the sum of the multiples of 3 and 5 up to *N*
###Code
def _quest0(last: int) -> int:
sum_ = 0
for elm in range(last):
if elm % 3 == 0 or elm % 5 == 0:
sum_ += elm
return sum_
###Output
_____no_output_____
###Markdown
1. Compute the sum of the odd numbers of the Fibonacci series up to the *N*-th one
###Code
def _quest1(n: int) -> int:
n += 2
sum_ = -1
a, b = 0, 1
while n > 0:
if a % 2 != 0:
sum_ += a
a, b = b, a + b
n -= 1
return sum_
###Output
_____no_output_____
###Markdown
2. Decode the string: *SGVsbG8sIFB56K6t57uD6JClIQ==%!(EXTRA int=21)*
###Code
def _quest2(b64: str) -> str:
return b64decode(b64).decode('utf-8')
###Output
_____no_output_____
###Markdown
3. Find decimal digit number *N* of π
###Code
def _quest3(nth: int) -> int:
idx = nth + 1
tmp = "%.48f" % pi
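    # note: a Python float only stores ~15-16 accurate decimal digits of pi,
    # so the digits of `tmp` beyond that point are not reliable; nth must stay small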
return int(tmp[idx])
###Output
_____no_output_____
###Markdown
4. Find a different representation of: *DATE*
###Code
def _quest4(date: str) -> int:
return int(datetime.strptime(date, "%Y/%m/%d").timestamp())
###Output
_____no_output_____
###Markdown
5. Do you know any alphabets? Try spelling: *PyBootCamp*
###Code
def _quest5(text: str) -> str:
return nato(text)
###Output
_____no_output_____ |
pytorch/0225_SigmoidNeuron.ipynb | ###Markdown
Plotting Sigmoid Function
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import matplotlib.colors
import pandas as pd
###Output
_____no_output_____
###Markdown
$S_{w, b}(x) = \frac{1}{1 + e^{-(wx + b)}}$
###Code
def sigmoid(x, w, b):
return 1/(1 + np.exp(-(w*x + b)))
sigmoid(1, 0.5, 0)
w = -1.8 #@param {type: "slider", min: -2, max: 2, step: 0.1}
b = -0.5 #@param {type: "slider", min: -2, max: 2, step: 0.1}
X = np.linspace(-10,10,100)
Y = sigmoid(X, w, b)
plt.plot(X, Y)
plt.show()
###Output
_____no_output_____
###Markdown
$S_{w_1, w_2, b}(x_1, x_2) = \frac{1}{1 + e^{-(w_1x_1 + w_2x_2 + b)}}$
###Code
def sigmoid_2d(x1, x2, w1, w2, b):
return 1/(1 + np.exp(-(w1*x1 + w2*x2 + b)))
sigmoid_2d(1, 0, 0.5, 0, 0)
X1 = np.linspace(-10, 10, 100)
X2 = np.linspace(-10, 10, 100)
XX1, XX2 = np.meshgrid(X1, X2)
print(X1.shape, X2.shape, XX1.shape, XX2.shape)
w1 = 2
w2 = -0.5
b = 0
Y = sigmoid_2d(XX1, XX2, w1, w2, b)
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"])
plt.contourf(XX1, XX2, Y, cmap = my_cmap, alpha = 0.6)
plt.show()
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(XX1, XX2, Y, cmap='viridis')
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.set_zlabel('y')
ax.view_init(30, 270)
###Output
_____no_output_____
###Markdown
Compute Loss for a Given Dataset
###Code
w_unknown = 0.5
b_unknown = 0.25
X = np.random.random(25) * 20 - 10
Y = sigmoid(X, w_unknown, b_unknown)
plt.plot(X, Y, '*')
plt.show()
def calculate_loss(X, Y, w_est, b_est):
loss = 0
for x, y in zip(X, Y):
loss += (y - sigmoid(x, w_est, b_est))**2
return loss
W = np.linspace(0, 2, 101)
B = np.linspace(-1, 1, 101)
WW, BB = np.meshgrid(W, B)
Loss = np.zeros(WW.shape)
WW.shape
for i in range(WW.shape[0]):
for j in range(WW.shape[1]):
Loss[i, j] = calculate_loss(X, Y, WW[i, j], BB[i, j])
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(WW, BB, Loss, cmap='viridis')
ax.set_xlabel('w')
ax.set_ylabel('b')
ax.set_zlabel('Loss')
ax.view_init(30, 270)
ij = np.argmin(Loss)
i = int(np.floor(ij/Loss.shape[1]))
j = int(ij - i * Loss.shape[1])
print(i, j)
print(WW[i, j], BB[i, j])
###Output
0.5 0.26
###Markdown
Class for Sigmoid Neuron
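The `grad_w` and `grad_b` methods of the class below come from a squared-error loss on the sigmoid output; for reference, a short derivation (my own working, consistent with the code):

$\hat{y} = \sigma(wx + b), \quad \mathcal{L} = \frac{1}{2}(\hat{y} - y)^2$

$\frac{\partial \mathcal{L}}{\partial w} = (\hat{y} - y)\,\hat{y}\,(1 - \hat{y})\,x, \qquad \frac{\partial \mathcal{L}}{\partial b} = (\hat{y} - y)\,\hat{y}\,(1 - \hat{y})$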
###Code
class SigmoidNeuron:
def __init__(self):
self.w = None
self.b = None
def perceptron(self, x):
return np.dot(x, self.w.T) + self.b
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
def grad_w(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
return (y_pred - y) * y_pred * (1 - y_pred) * x
def grad_b(self, x, y):
y_pred = self.sigmoid(self.perceptron(x))
return (y_pred - y) * y_pred * (1 - y_pred)
def fit(self, X, Y, epochs=1, learning_rate=1, initialise=True):
# initialise w, b
if initialise:
self.w = np.random.randn(1, X.shape[1])
self.b = 0
for i in range(epochs):
dw = 0
db = 0
for x, y in zip(X, Y):
dw += self.grad_w(x, y)
db += self.grad_b(x, y)
self.w -= learning_rate * dw
self.b -= learning_rate * db
###Output
_____no_output_____
###Markdown
Fit for toy data
###Code
X = np.asarray([[2.5, 2.5], [4, -1], [1, -4], [-3, 1.25], [-2, -4], [1, 5]])
Y = [1, 1, 1, 0, 0, 0]
sn = SigmoidNeuron()
sn.fit(X, Y, 1, 0.25, True)
def plot_sn(X, Y, sn, ax):
X1 = np.linspace(-10, 10, 100)
X2 = np.linspace(-10, 10, 100)
XX1, XX2 = np.meshgrid(X1, X2)
YY = np.zeros(XX1.shape)
for i in range(X2.size):
for j in range(X1.size):
val = np.asarray([X1[j], X2[i]])
YY[i, j] = sn.sigmoid(sn.perceptron(val))
ax.contourf(XX1, XX2, YY, cmap=my_cmap, alpha=0.6)
ax.scatter(X[:,0], X[:,1],c=Y, cmap=my_cmap)
ax.plot()
sn.fit(X, Y, 1, 0.05, True)
N = 30
plt.figure(figsize=(10, N*5))
for i in range(N):
print(sn.w, sn.b)
ax = plt.subplot(N, 1, i + 1)
plot_sn(X, Y, sn, ax)
sn.fit(X, Y, 1, 0.5, False)
###Output
[[1.1475869 0.14338099]] [0.00025004]
[[ 1.14552087 -0.26776881]] [-0.00977853]
[[ 1.14629352 -0.4731143 ]] [-0.07636588]
[[ 1.24938546 -0.3584108 ]] [-0.12709462]
[[ 1.26904094 -0.46124892]] [-0.1781642]
[[ 1.33184019 -0.42412577]] [-0.21976639]
[[ 1.36363794 -0.46294058]] [-0.2596155]
[[ 1.4052129 -0.46376531]] [-0.29481983]
[[ 1.43880387 -0.47889947]] [-0.32764669]
[[ 1.47188452 -0.48817067]] [-0.35779204]
[[ 1.50242874 -0.49820885]] [-0.38579537]
[[ 1.53127057 -0.50736475]] [-0.41185252]
[[ 1.55847742 -0.51605487]] [-0.43619511]
[[ 1.58423325 -0.52427671]] [-0.45900731]
[[ 1.60867751 -0.53208114]] [-0.48044905]
[[ 1.63193273 -0.53950679]] [-0.50065799]
[[ 1.65410544 -0.54658763]] [-0.51975337]
[[ 1.67528882 -0.55335339]] [-0.53783904]
[[ 1.69556483 -0.55983018]] [-0.55500584]
[[ 1.71500587 -0.5660411 ]] [-0.57133357]
[[ 1.73367619 -0.57200665]] [-0.58689267]
[[ 1.75163305 -0.57774507]] [-0.60174554]
[[ 1.76892769 -0.58327272]] [-0.61594767]
[[ 1.7856061 -0.58860425]] [-0.62954862]
[[ 1.80170972 -0.59375287]] [-0.64259274]
[[ 1.81727602 -0.59873052]] [-0.65511991]
[[ 1.83233898 -0.60354804]] [-0.66716608]
[[ 1.8469295 -0.60821526]] [-0.67876374]
[[ 1.86107574 -0.61274116]] [-0.68994235]
[[ 1.87480348 -0.61713394]] [-0.70072869]
|
NYT/GPT/gpt2-150K-cutoff-experiment.ipynb | ###Markdown
Definitions (run first!)
###Code
import gzip
import pickle
import requests
import csv
from torch.utils.data import Dataset, DataLoader, random_split
from transformers import GPT2Tokenizer, GPT2Model
import torch
from sklearn.preprocessing import MultiLabelBinarizer
import numpy as np
import wandb
import gc
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
class GPTEmbeddedDataset(Dataset):
def __init__(self, X, y):
self.X = X
self.y = y
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
return self.X[idx], self.y[idx], idx
import csv
def loadcsv(filename):
with open(filename, newline='', encoding='utf-8') as f:
return list(csv.reader(f))
def load_label_map(out2id_path, id2label_path):
out2id = loadcsv(out2id_path)
out2id = {int(row[0]): row[1] for row in out2id}
id2label_raw = loadcsv(id2label_path)
id2label = {}
for row in id2label_raw:
if row == []:
continue
id2label[row[1]] = row[2]
out2label = [id2label[out2id[out]] for out in sorted(out2id.keys())]
return out2label
out2label = load_label_map('../data/labels_dict_gpt.csv', '../data/nyt-theme-tags.csv')
mlb = MultiLabelBinarizer(classes=out2label)
mlb.fit(out2label)
import time
import math
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
def validation_split(dataset, validation_subset, seed=42):
if validation_subset > 0:
n_total_samples = len(dataset)
n_train_samples = math.floor(n_total_samples * (1-validation_subset))
n_valid_samples = n_total_samples - n_train_samples
train_subset, valid_subset = random_split(
dataset,
[n_train_samples, n_valid_samples],
generator=torch.Generator().manual_seed(seed)
) # reproducible results
else:
train_subset = dataset
valid_subset = None
return train_subset, valid_subset
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
epoch_precision = 0
epoch_recall = 0
epoch_f_score = 0
model.train()
for i, batch in enumerate(iterator):
article_embeddings, labels, idx = batch
article_embeddings = article_embeddings.to(device)
labels = labels.type(torch.float).to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = model(article_embeddings)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# calculate metrics
preds = model.act(outputs) > 0.5
acc, precision, recall, f1 = multi_label_scores(labels.detach().cpu(), preds.detach().cpu())
epoch_loss += loss.item()
epoch_acc += acc.item()
epoch_precision += precision.item()
epoch_recall += recall.item()
epoch_f_score += f1.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator), \
epoch_precision / len(iterator), epoch_recall / len(iterator), \
epoch_f_score / len(iterator)
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
epoch_precision = 0
epoch_recall = 0
epoch_f_score = 0
model.eval()
with torch.no_grad():
for i, batch in enumerate(iterator):
article_embeddings, labels, idx = batch
article_embeddings = article_embeddings.to(device)
labels = labels.type(torch.float).to(device)
outputs = model(article_embeddings)
loss = criterion(outputs, labels)
# calculate metrics
preds = model.act(outputs) > 0.5
acc, precision, recall, f1 = multi_label_scores(labels.detach().cpu(), preds.detach().cpu())
epoch_loss += loss.item()
epoch_acc += acc.item()
epoch_precision += precision.item()
epoch_recall += recall.item()
epoch_f_score += f1.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator), \
epoch_precision / len(iterator), epoch_recall / len(iterator), \
epoch_f_score / len(iterator)
###Output
_____no_output_____
###Markdown
Training
###Code
import io
import os
import torch
import torch.optim as optim
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from tqdm.notebook import tqdm
from mitnewsclassify2.gpt_model import GPTModel as GPTHead2
%load_ext autoreload
%autoreload 2
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
def multi_label_scores(correct_labels, predicted_labels):
accuracy = accuracy_score(correct_labels, predicted_labels)
precision = precision_score(correct_labels, predicted_labels, average='weighted', zero_division=0)
recall = recall_score(correct_labels, predicted_labels, average='weighted', zero_division=0)
f_1_score = f1_score(correct_labels, predicted_labels, average='weighted', zero_division=0)
return accuracy, precision, recall, f_1_score
def gettags(head_model, features, eval=False):
head_model.eval()
features = features.unsqueeze(0).to(device)
with torch.no_grad():
logits = head_model(features)
multi_label_sigmoids = head_model.act(logits)
preds = multi_label_sigmoids > 0.5
preds = preds.detach().cpu()
return mlb.inverse_transform(preds)
%%time
# for ARTICLE_TYPE in ['cutoff', 'complete']:
for ARTICLE_TYPE in ['complete']:
train_dataset = torch.load(f'vectorized/train_150k_{ARTICLE_TYPE}.pt')
print(f'X_train_{ARTICLE_TYPE}', train_dataset.X.shape)
print(f'y_train_{ARTICLE_TYPE}', train_dataset.y.shape)
# splitting train/validation
batch_size = 256
seed = 42
train_subset, valid_subset = validation_split(train_dataset, 0.1, seed)
train_loader = DataLoader(train_subset, batch_size=batch_size)
valid_loader = DataLoader(valid_subset, batch_size=batch_size)
n_training_samples = train_dataset.X.shape[0]
# hyperparams
max_epochs = 1000
patience = 10
# model
model_path = f'models/{ARTICLE_TYPE}.pt'
model = GPTHead2(768, 538).to(device)
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(model.parameters(),
lr = 1e-3, # default is 5e-5, our notebook had 2e-5
)
wandb.init(
entity='ut-mit-news-classify',
project="NYT Multilabeling",
tags=[ARTICLE_TYPE, 'cutoff-experiment'],
)
# training
epochs_of_no_improvement = 0
best_valid_loss = float('inf')
wandb.config.early_stopping_patience = patience
wandb.config.training_samples=n_training_samples
wandb.config.model_path = model_path
wandb.save(model_path) # this will make wandb upload the model after call to `wandb.finish()`
print('Starting training...')
for epoch in range(max_epochs):
start_time = time.time()
train_loss, train_acc, train_precision, train_recall, train_f_score \
= train(model, train_loader, optimizer, criterion)
valid_loss, valid_acc, valid_precision, valid_recall, valid_f_score \
= evaluate(model, valid_loader, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if valid_loss < best_valid_loss:
print(f'New validation loss {valid_loss} is better than the best validation loss {best_valid_loss} so far.')
best_valid_loss = valid_loss
torch.save(model, model_path)
epochs_of_no_improvement = 0
else:
epochs_of_no_improvement += 1
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}% | ' +
f'Train Precision: {train_precision*100:.2f}% | Train Recall: {train_recall*100:.2f}% | ' +
f'Train F1-score: {train_f_score*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}% | ' +
f'Val. Precision: {valid_precision*100:.2f}% | Val. Recall: {valid_recall*100:.2f}% | ' +
f'Val. F1-score: {valid_f_score*100:.2f}%')
wandb.log({"train_loss": train_loss,
"train_precision": train_precision,
"train_f_score": train_f_score,
"train_acc": train_acc,
"train_recall": train_recall,
"valid_loss": valid_loss,
"valid_acc": valid_acc,
"valid_precision": valid_precision,
"valid_recall": valid_recall,
"valid_f_score": valid_f_score,
"epoch": epoch+1,
})
# check if the training should be stopped and then stop the training
if epochs_of_no_improvement == patience :
print(f'Early stopping, on epoch: {epoch+1}.')
break
del train_dataset
del train_subset
del valid_subset
del train_loader
del valid_loader
del model
gc.collect()
####### TESTING #######
print('Testing...')
model = torch.load(model_path)
test_dataset_names = [('cutoff', 'vectorized/test_150k_cutoff.pt'), ('complete', 'vectorized/test_150k_complete.pt')]
for article_type, dataset_path in test_dataset_names:
test_dataset = torch.load(dataset_path)
test_loader = DataLoader(test_dataset, batch_size=128)
start_time = time.time()
test_loss, test_acc, test_precision, test_recall, test_f_score \
= evaluate(model, test_loader, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
wandb.run.summary[f'test_{article_type}_acc'] = test_acc
wandb.run.summary[f'test_{article_type}_precision'] = test_precision
wandb.run.summary[f'test_{article_type}_recall'] = test_recall
wandb.run.summary[f'test_{article_type}_f_score'] = test_f_score
print(f'Epoch: test | Epoch Time: {epoch_mins}m {epoch_secs}s | Dataset: {article_type}')
print(f'\tTest Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}% | ' +
f'Test Precision: {test_precision*100:.2f}% | Test Recall: {test_recall*100:.2f}% | ' +
f'Test F1-score: {test_f_score*100:.2f}%')
wandb.finish()
model = torch.load(model_path)
test_dataset_names = [('cutoff', 'vectorized/test_150k_cutoff.pt'), ('complete', 'vectorized/test_150k_complete.pt')]
for article_type, dataset_path in test_dataset_names:
test_dataset = torch.load(dataset_path)
test_loader = DataLoader(test_dataset, batch_size=128)
start_time = time.time()
test_loss, test_acc, test_precision, test_recall, test_f_score \
= evaluate(model, test_loader, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
wandb.run.summary[f'test_{article_type}_acc'] = test_acc
wandb.run.summary[f'test_{article_type}_precision'] = test_precision
wandb.run.summary[f'test_{article_type}_recall'] = test_recall
wandb.run.summary[f'test_{article_type}_f_score'] = test_f_score
print(f'Epoch: test | Epoch Time: {epoch_mins}m {epoch_secs}s | Dataset: {article_type}')
print(f'\tTest Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}% | ' +
f'Test Precision: {test_precision*100:.2f}% | Test Recall: {test_recall*100:.2f}% | ' +
f'Test F1-score: {test_f_score*100:.2f}%')
wandb.finish()
idx = 15
print('predicted', gettags(model, train_dataset.X[idx]))
print('gold:', mlb.inverse_transform(train_dataset.y[idx].unsqueeze(0)))
###Output
predicted [('politics and government',)]
gold: [('armament, defense and military forces',)]
|
backend/questionAnalysis.ipynb | ###Markdown
import librarys
###Code
import nltk
nltk.download('nps_chat')
nltk.download('punkt')
import pandas as pd
import matplotlib.pyplot as plt
import boto3
from sagemaker import get_execution_role
role = get_execution_role()
bucket_name = 'deploy-sagemaker-conversation'
s3_url = 's3://deploy-sagemaker-conversation/floop_data_15k.json'
conn = boto3.client('s3')
contents = conn.list_objects(Bucket = bucket_name)['Contents']
s3 = boto3.resource('s3')
import json
s3_client = boto3.client('s3')
s3 = boto3.resource('s3')
obj = s3.Object('deploy-sagemaker-conversation',
'auto-floop-s3-export3.json')
data = json.load(obj.get()['Body'])
#print(data)
print(len(data))
dataList=[]
for x in range(50):
dataList.append(data[x])
sentences = []
for x in dataList:
    for y in range(len(x)):
        sentence = x[y].get('Text')
        sentences.append(sentence)
print(sentences)
dataset = conn.get_object(Bucket = bucket_name, Key = 'floop_data_15k.json')
s3_client.get_object(Bucket = bucket_name, Key = 'floop_data_15k.json')
###Output
_____no_output_____
###Markdown
Data
###Code
posts = nltk.corpus.nps_chat.xml_posts()[:10000]
Question_Words = [ 'where','how','why','did','do','does',"isn't",'has','am i', 'are','can','could','is','may',"can't",
"didn't",'will','when',"doesn't","haven't",'have','what',"aren't",'would',"couldn't","wouldn't","won't","shouldn't",'should']
###Output
_____no_output_____
###Markdown
* input: chat posts
* Tokenize sentences using NLTK's word_tokenize
* return dict of tokenized words
###Code
def post_features(post):
features = {}
for word in nltk.word_tokenize(post):
features['contains({})'.format(word.lower())] = True
return features
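# For example (illustrative), post_features("How are you?") would return roughly
# {'contains(how)': True, 'contains(are)': True, 'contains(you)': True, 'contains(?)': True}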
###Output
_____no_output_____
###Markdown
0. Input: none
1. Divide the data into 90% training and 10% testing sets
2. Use NLTK's Naive Bayes classifier to perform classification
3. Return: classifier object
###Code
def perform_classification():
featuresets = [(post_features(post.text), post.get('class')) for post in posts]
training_size = int(len(featuresets) * 0.1)
train_set, test_set = featuresets[training_size:], featuresets[:training_size]
classifier = nltk.NaiveBayesClassifier.train(train_set)
return classifier
myclassifier= perform_classification()
###Output
_____no_output_____
###Markdown
1. Input: a sentence
2. Return: the predicted type of the sentence
###Code
types = ["whQuestion","ynQuestion","Statement"]
def is_ques(ques):
question_type = myclassifier.classify(post_features(ques))
return question_type
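# Illustrative usage (assuming the classifier trained above): is_ques("what time is it")
# would typically return 'whQuestion', while is_ques("I am done.") would lean towards 'Statement'.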
###Output
_____no_output_____
###Markdown
0. Method: predict
1. Input: sentence to be classified
2. Return: the type of the sentence (Question, Statement, or garbled text), using the NLTK classifier above
###Code
def predict(data):
    sentence_type = is_ques(data)
    # print("type is " + sentence_type)
    first_word = data.split()[0].lower()
    # classify as a question if the NLTK classifier says so, or if the sentence starts with a question word
    if (sentence_type == "whQuestion" or sentence_type == "ynQuestion") or (first_word in Question_Words):
        return "Question"
    elif sentence_type == "Statement" or data.endswith(".") or data.endswith("!"):
        return "Statement"
    else:
        return "garbled text"
sent="More details. A lot happened."
diss=predict(sent)
pred=[ ]
pred.append(diss)
mydat={"sentence":diss}
print(mydat)
data = json.load(obj.get()['Body'])
analysis=[]
for i in sentences:
analysis.append(predict(i))
print(len(analysis))
result = pd.DataFrame({'Original Data':dataList, 'Sentence Analysis':analysis})
result.to_csv('results.csv')
print(result['Sentence Analysis'].value_counts())
plt.pie(result['Sentence Analysis'].value_counts(), labels = result['Sentence Analysis'].value_counts().keys(), autopct='%.1f%%')
plt.show()
###Output
Statement 34
garbled text 15
Question 1
Name: Sentence Analysis, dtype: int64
|
imaterialist-challenge-furniture-2018/3. TrainPredict_Mix3model0.ipynb | ###Markdown
3. TrainPredict_Mix3model0
Result:
- Kaggle score:
Tensorboard:
- Input at command: tensorboard --logdir=./log
- Input at browser: http://127.0.0.1:6006
Reference:
- https://www.kaggle.com/codename007/a-very-extensive-landmark-exploratory-analysis
Run name
###Code
import time
project_name = 'ic_furniture2018'
step_name = 'TrainPredict_Mix3model'
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
t0 = time.time()
###Output
run_name: ic_furniture2018_TrainPredict_Mix3model_20180429_112205
###Markdown
Import params
###Code
epochs = 15
print('epochs: ', epochs)
###Output
epochs: 15
###Markdown
Import PKGs
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from IPython.display import display
import os
import gc
import zipfile
import pickle
import math
from PIL import Image
import shutil
from tqdm import tqdm
from multiprocessing import cpu_count
from sklearn.metrics import confusion_matrix, accuracy_score
###Output
_____no_output_____
###Markdown
Important Params
###Code
batch_size = 1024
###Output
_____no_output_____
###Markdown
Project folders
###Code
cwd = os.getcwd()
input_folder = os.path.join(cwd, 'input')
output_folder = os.path.join(cwd, 'output')
model_folder = os.path.join(cwd, 'model')
feature_folder = os.path.join(cwd, 'feature')
post_pca_feature_folder = os.path.join(cwd, 'post_pca_feature')
log_folder = os.path.join(cwd, 'log')
print('input_folder: \t\t\t%s' % input_folder)
print('output_folder: \t\t\t%s' % output_folder)
print('model_folder: \t\t\t%s' % model_folder)
print('feature_folder: \t\t%s' % feature_folder)
print('post_pca_feature_folder: \t%s' % post_pca_feature_folder)
print('log_folder: \t\t\t%s' % log_folder)
org_train_folder = os.path.join(input_folder, 'org_train')
org_val_folder = os.path.join(input_folder, 'org_val')
org_test_folder = os.path.join(input_folder, 'org_test')
train_folder = os.path.join(input_folder, 'data_train')
val_folder = os.path.join(input_folder, 'data_val')
test_folder = os.path.join(input_folder, 'data_test')
test_sub_folder = os.path.join(test_folder, 'test')
if not os.path.exists(post_pca_feature_folder):
os.mkdir(post_pca_feature_folder)
print('Create folder: %s' % post_pca_feature_folder)
train_json_file = os.path.join(input_folder, 'train.json')
val_json_file = os.path.join(input_folder, 'validation.json')
test_json_file = os.path.join(input_folder, 'test.json')
print('\ntrain_json_file: \t\t%s' % train_json_file)
print('val_json_file: \t\t\t%s' % val_json_file)
print('test_json_file: \t\t%s' % test_json_file)
train_csv_file = os.path.join(input_folder, 'train.csv')
val_csv_file = os.path.join(input_folder, 'validation.csv')
test_csv_file = os.path.join(input_folder, 'test.csv')
print('\ntrain_csv_file: \t\t%s' % train_csv_file)
print('val_csv_file: \t\t\t%s' % val_csv_file)
print('test_csv_file: \t\t\t%s' % test_csv_file)
sample_submission_csv_file = os.path.join(input_folder, 'sample_submission_randomlabel.csv')
print('\nsample_submission_csv_file: \t%s' % sample_submission_csv_file)
###Output
input_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/input
output_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/output
model_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/model
feature_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/feature
post_pca_feature_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/post_pca_feature
log_folder: /data1/kaggle/imaterialist-challenge-furniture-2018/log
train_json_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/train.json
val_json_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/validation.json
test_json_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/test.json
train_csv_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/train.csv
val_csv_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/validation.csv
test_csv_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/test.csv
sample_submission_csv_file: /data1/kaggle/imaterialist-challenge-furniture-2018/input/sample_submission_randomlabel.csv
###Markdown
Preview data
###Code
train_csv = pd.read_csv(train_csv_file)
print('train_csv.shape is {0}.'.format(train_csv.shape))
display(train_csv.head(2))
val_csv = pd.read_csv(val_csv_file)
print('val_csv.shape is {0}.'.format(val_csv.shape))
display(val_csv.head(2))
test_csv = pd.read_csv(test_csv_file)
print('test_csv.shape is {0}.'.format(test_csv.shape))
display(test_csv.head(2))
test_csv = pd.read_csv(test_csv_file)
print('test_csv.shape is {0}.'.format(test_csv.shape))
display(test_csv.head(2))
sample_submission_csv = pd.read_csv(sample_submission_csv_file)
print('sample_submission_csv.shape is {0}.'.format(sample_submission_csv.shape))
display(sample_submission_csv.head(2))
train_id = train_csv['image_id']
train_label_id = train_csv['label_id']
id_2_train_label_id_dict = dict(zip(train_id, train_label_id))
print('len(id_2_train_label_id_dict)=%d' % len(id_2_train_label_id_dict))
index = 0
print('id: %s, \tlandmark_id:%s' % (train_id[index], id_2_train_label_id_dict[train_id[index]]))
index = 1
print('id: %s, \tlandmark_id:%s' % (train_id[index], id_2_train_label_id_dict[train_id[index]]))
image_file = '%s_%s.jpg' % (train_id[index], id_2_train_label_id_dict[train_id[index]])
print(image_file)
val_id = val_csv['image_id']
val_label_id = val_csv['label_id']
id_2_val_label_id_dict = dict(zip(val_id, val_label_id))
print('len(id_2_val_label_id_dict)=%d' % len(id_2_val_label_id_dict))
index = 0
print('id: %s, \tlandmark_id:%s' % (val_id[index], id_2_val_label_id_dict[val_id[index]]))
index = 1
print('id: %s, \tlandmark_id:%s' % (val_id[index], id_2_val_label_id_dict[val_id[index]]))
image_file = '%s_%s.jpg' % (val_id[index], id_2_val_label_id_dict[val_id[index]])
print(image_file)
test_id = test_csv['image_id']
index = 0
print('id: %s' % (test_id[index]))
index = 1
print('id: %s' % (test_id[index]))
image_file = '%s.jpg' % (test_id[index])
print(image_file)
###Output
id: 1
id: 2
2.jpg
###Markdown
Load feature
###Code
%%time
import h5py
import numpy as np
np.random.seed(2018)
def load_h5_data(data_str, feature_folder, file_reg, model_name, time_str):
x_data = {}
y_data = {}
feature_model = os.path.join(feature_folder, file_reg % (model_name, time_str))
for filename in [feature_model]:
with h5py.File(filename, 'r') as h:
x_data = np.array(h[data_str])
y_data = np.array(h['%s_labels' % data_str])
return x_data, y_data
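# Note: each feature file is expected to contain HDF5 datasets named '<split>' and
# '<split>_labels' (e.g. 'train' and 'train_labels'), presumably written by the earlier
# feature-extraction step of this project.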
def load_h5_test(feature_folder, file_reg, model_name, time_str):
x_test = {}
feature_model = os.path.join(feature_folder, file_reg % (model_name, time_str))
for filename in [feature_model]:
with h5py.File(filename, 'r') as h:
x_test = np.array(h['test'])
return x_test
def is_files_existed(feature_folder, file_reg, model_names, time_strs):
for model_name in model_names:
for time_str in time_strs:
file_name = file_reg % (model_name, time_str)
file_path = os.path.join(feature_folder, file_name)
if not os.path.exists(file_path):
print('File not existed: %s' % file_path)
return False
else:
print('File existed: %s' % file_path)
return True
# Test
file_reg = 'feature_%s_%s.h5'
model_names = [
# 'MobileNet',
# 'VGG16',
# 'VGG19',
# 'ResNet50',
# 'DenseNet121',
# 'DenseNet169',
# 'DenseNet201',
'Xception',
'InceptionV3',
'InceptionResNetV2'
]
time_strs = [
'300_20180415-150022',
'200_20180415-140848',
'150_20180415-131901',
'300_20180415-111333',
'450_20180409-115039',
'450_20180409-191753',
'450_20180410-032144',
'450_20180410-112448',
'450_20180410-193018',
'20180331-163122',
'20180401-185347',
'20180401-224426',
'20180402-023138',
'20180402-062014',
'20180402-101021',
'20180402-140156',
'20180402-175506',
'20180402-214919',
'20180403-014551',
'20180406-070324',
'20180406-102130',
'20180406-123522',
'20180406-144857',
'150_20180406-230352',
'150_20180407-010446',
'150_20180407-031951',
'150_20180407-053509',
'150_20180407-075132',
'150_20180407-100849',
'150_20180407-122723',
'150_20180407-144712',
'150_20180407-170832',
'150_20180407-193230'
]
print('*'*100)
print(is_files_existed(feature_folder, file_reg, model_names, time_strs))
def time_str_generator(time_strs):
while(1):
for time_str in time_strs:
print(' ' + time_str)
yield time_str
# Test
time_str_gen = time_str_generator(time_strs)
for i in range(len(time_strs)):
next(time_str_gen)
%%time
from keras.utils.np_utils import to_categorical
from sklearn.utils import shuffle
def load_time_str_feature_data(data_str, feature_folder, file_reg, model_names, time_str):
x_data_time_strs = []
y_data_time_strs = None
for model_name in model_names:
x_data_time_str, y_data_time_str = load_h5_data(data_str, feature_folder, file_reg, model_name, time_str)
# Around data to 3 decimals to calculate computation
x_data_time_str = np.round(x_data_time_str, decimals=3)
x_data_time_strs.append(x_data_time_str)
y_data_time_strs = y_data_time_str
x_data_time_strs = np.concatenate(x_data_time_strs, axis=-1)
# print(x_data_time_strs.shape)
# print(y_data_time_strs.shape)
return x_data_time_strs, y_data_time_strs
def data_generator_folder(data_str, feature_folder, file_reg, model_names, time_strs, batch_size, num_classes):
assert is_files_existed(feature_folder, file_reg, model_names, time_strs)
time_str_gen = time_str_generator(time_strs)
x_data, y_data = load_time_str_feature_data(data_str, feature_folder, file_reg, model_names, next(time_str_gen))
# x_data, y_data = shuffle(x_data, y_data, random_state=2018)
x_data, y_data = shuffle(x_data, y_data, random_state=None)
len_x_data = len(x_data)
start_index = 0
end_index = 0
while(1):
end_index = start_index + batch_size
if end_index < len_x_data:
# print(start_index, end_index, end=' ')
x_batch = x_data[start_index: end_index, :]
y_batch = y_data[start_index: end_index]
y_batch_cat = to_categorical(y_batch, num_classes)
start_index = start_index + batch_size
# print(x_batch.shape, y_batch_cat.shape)
yield x_batch, y_batch_cat
else:
end_index = end_index-len_x_data
# print(start_index, end_index, end=' ')
x_data_old = np.array(x_data[start_index:, :], copy=True)
y_data_old = np.array(y_data[start_index:], copy=True)
# Load new datas
x_data, y_data = load_time_str_feature_data(data_str, feature_folder, file_reg, model_names, next(time_str_gen))
# x_data, y_data = shuffle(x_data, y_data, random_state=2018)
x_data, y_data = shuffle(x_data, y_data, random_state=None)
len_x_data = len(x_data)
gc.collect()
x_batch = np.vstack((x_data_old, x_data[:end_index, :]))
y_batch = np.concatenate([y_data_old, y_data[:end_index]])
y_batch_cat = to_categorical(y_batch, num_classes)
start_index = end_index
# print(x_batch.shape, y_batch_cat.shape)
yield x_batch, y_batch_cat
# x_train = np.concatenate([x_train_Xception, x_train_InceptionV3, x_train_InceptionResNetV2], axis=-1)
num_classes = len(list(set(train_label_id)))
print('num_classes: %s' % num_classes)
# Test
file_reg = 'feature_%s_%s.h5'
model_names = [
# 'MobileNet',
# 'VGG16',
# 'VGG19',
# 'ResNet50',
# 'DenseNet121',
# 'DenseNet169',
# 'DenseNet201',
'Xception',
'InceptionV3',
'InceptionResNetV2'
]
time_strs = [
# '300_20180415-150022',
# '200_20180415-140848',
# '150_20180415-131901',
# '300_20180415-111333',
'450_20180409-115039',
'450_20180409-191753',
'450_20180410-032144',
'450_20180410-112448',
'450_20180410-193018',
# '20180331-163122',
# '20180401-185347',
# '20180401-224426',
# '20180402-023138',
# '20180402-062014',
# '20180402-101021',
# '20180402-140156',
# '20180402-175506',
# '20180402-214919',
# '20180403-014551',
# '20180406-070324',
# '20180406-102130',
# '20180406-123522',
# '20180406-144857',
# '150_20180406-230352',
# '150_20180407-010446',
# '150_20180407-031951',
# '150_20180407-053509',
# '150_20180407-075132',
# '150_20180407-100849',
# '150_20180407-122723',
# '150_20180407-144712',
# '150_20180407-170832',
# '150_20180407-193230'
]
print('*' * 80)
len_train_csv = train_csv.shape[0]
steps_per_epoch_train = int(len_train_csv/batch_size) * len(time_strs)
print('len(train_data): %s' % len_train_csv)
print('batch_size: %s' % batch_size)
print('steps_per_epoch_train: %s' % steps_per_epoch_train)
train_gen = data_generator_folder('train', feature_folder, file_reg, model_names, time_strs, batch_size, num_classes)
batch_data = next(train_gen)
print(batch_data[0].shape, batch_data[1].shape)
batch_data = next(train_gen)
print(batch_data[0].shape, batch_data[1].shape)
# for i in range(steps_per_epoch_train*5):
# next(train_gen)
print('*' * 80)
len_val_csv = val_csv.shape[0]
steps_per_epoch_val = int(len_val_csv/batch_size)
print('len(val_data): %s' % len_val_csv)
print('batch_size: %s' % batch_size)
print('steps_per_epoch_val: %s' % steps_per_epoch_val)
val_gen = data_generator_folder('val', feature_folder, file_reg, model_names, time_strs, batch_size, num_classes)
batch_data = next(val_gen)
print(batch_data[0].shape, batch_data[1].shape)
batch_data = next(val_gen)
print(batch_data[0].shape, batch_data[1].shape)
data_dim = batch_data[0].shape[1]
print('data_dim: %s' % data_dim)
print('*' * 80)
x_train, y_train = load_time_str_feature_data('train', feature_folder, file_reg, model_names, time_strs[0])
x_val, y_val = load_time_str_feature_data('val', feature_folder, file_reg, model_names, time_strs[0])
print(x_train.shape)
print(len(y_train))
print(x_val.shape)
print(len(y_val))
def load_time_str_feature_test(feature_folder, file_reg, model_names, time_str):
x_test_time_strs = []
for model_name in model_names:
file_name = file_reg % (model_name, time_str)
file_path = os.path.join(feature_folder, file_name)
x_test_time_str = load_h5_test(feature_folder, file_reg, model_name, time_str)
x_test_time_str = np.round(x_test_time_str, decimals=3)
x_test_time_strs.append(x_test_time_str)
x_test_time_strs = np.concatenate(x_test_time_strs, axis=-1)
# print(x_test_time_strs.shape)
return x_test_time_strs
x_test = load_time_str_feature_test(feature_folder, file_reg, model_names, time_strs[0])
print(x_test.shape)
###Output
(12652, 5632)
###Markdown
Build NN
###Code
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input, Flatten, Conv2D, MaxPooling2D, BatchNormalization, LSTM
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard, EarlyStopping
def get_lr(x):
    # lr = round(3e-4 * 0.96 ** x, 6)
    # step schedule: 3e-4 for the first epochs, then 1e-4 up to epoch 15, then 5e-5
    if x <= 10:
        lr = 3e-4
    elif x <= 15:
        lr = 1e-4
    else:
        lr = 5e-5
    # lr = 3e-4
    print(lr, end=' ')
    return lr
for i in range(epochs):
print(get_lr(i), end=' ')
# annealer = LearningRateScheduler(lambda x: 1e-3 * 0.9 ** x)
annealer = LearningRateScheduler(get_lr)
# early_stop = EarlyStopping(monitor='acc', min_delta=0.001, patience=0.0005, verbose=0, mode='auto')
log_dir = os.path.join(log_folder, run_name)
print('\nlog_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
callbacks = []
# callbacks = [early_stop]
# expected input data shape: (batch_size, timesteps, data_dim)
model = Sequential()
model.add(Dense(1024, input_shape=(data_dim,)))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(1024, activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Dense(num_classes, activation='softmax'))
# model.compile(loss='categorical_crossentropy',
# optimizer='rmsprop',
# metrics=['accuracy'])
model.compile(optimizer=Adam(lr=1e-4),
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
cpu_amount = cpu_count()
print(cpu_amount)
%%time
hist = model.fit_generator(train_gen,
steps_per_epoch=steps_per_epoch_train,
epochs=epochs, #Increase this when not on Kaggle kernel
verbose=1, #1 for ETA, 0 for silent
callbacks=callbacks,
max_queue_size=batch_size,
workers=cpu_amount,
use_multiprocessing=False,
validation_data=val_gen,
validation_steps=steps_per_epoch_val)
final_loss, final_acc = model.evaluate_generator(val_gen, steps=steps_per_epoch_val * len(time_strs))
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
run_name_acc = run_name + '_' + str(int(final_acc*10000)).zfill(4)
print(run_name_acc)
histories = pd.DataFrame(hist.history)
histories['epoch'] = hist.epoch
print(histories.columns)
histories_file = os.path.join(model_folder, 'hist_%s.csv' % run_name_acc)
histories.to_csv(histories_file, index=False)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
def save_network(model, run_name):
cwd = os.getcwd()
modelPath = os.path.join(cwd, 'model')
if not os.path.isdir(modelPath):
os.mkdir(modelPath)
weigthsFile = os.path.join(modelPath, run_name + '.h5')
model.save(weigthsFile)
save_network(model, run_name_acc)
###Output
_____no_output_____
###Markdown
Predict
###Code
%%time
y_train_proba = model.predict(x_train)
print(y_train_proba.shape)
y_train_pred = np.argmax(y_train_proba, -1)
print(y_train_pred.shape)
print(accuracy_score(y_train, y_train_pred))
%%time
y_val_proba = model.predict(x_val)
print(y_val_proba.shape)
y_val_pred = np.argmax(y_val_proba, -1)
print(y_val_pred.shape)
print(accuracy_score(y_val, y_val_pred))
%%time
y_test_proba = model.predict(x_test)
print(y_test_proba.shape)
y_test_pred = np.argmax(y_test_proba, -1)
print(y_test_pred.shape)
# This shows that the list of image names returned by os.listdir() is not in the correct order
files = os.listdir(os.path.join(test_folder, 'test'))
print(files[:10])
# This shows that the list of image names from ImageDataGenerator() is the correct one
gen = ImageDataGenerator()
image_size = (299, 299)
test_generator = gen.flow_from_directory(test_folder, image_size, shuffle=False, batch_size=batch_size)
print('test_generator')
print(len(test_generator.filenames))
def save_proba(y_train_proba, y_train, y_val_proba, y_val, y_test_proba, test_filenames, file_name):
test_filenames = [n.encode('utf8') for n in test_filenames]
print(test_filenames[:10])
if os.path.exists(file_name):
os.remove(file_name)
print('File removed: \t%s' % file_name)
    with h5py.File(file_name, 'w') as h:
h.create_dataset('y_train_proba', data=y_train_proba)
h.create_dataset('y_train', data=y_train)
h.create_dataset('y_val_proba', data=y_val_proba)
h.create_dataset('y_val', data=y_val)
h.create_dataset('y_test_proba', data=y_test_proba)
h.create_dataset('test_filenames', data=test_filenames)
print('File saved: \t%s' % file_name)
def load_proba(file_name):
with h5py.File(file_name, 'r') as h:
y_train_proba = np.array(h['y_train_proba'])
y_train = np.array(h['y_train'])
y_val_proba = np.array(h['y_val_proba'])
y_val = np.array(h['y_val'])
y_test_proba = np.array(h['y_test_proba'])
test_filenames = np.array(h['test_filenames'])
print('File loaded: \t%s' % file_name)
test_filenames = [n.decode('utf8') for n in test_filenames]
print(test_filenames[:10])
return y_train_proba, y_train, y_val_proba, y_val, y_test_proba, test_filenames
y_proba_file = os.path.join(model_folder, 'proba_%s.p' % run_name_acc)
save_proba(y_train_proba, y_train, y_val_proba, y_val, y_test_proba, test_generator.filenames, y_proba_file)
y_train_proba, y_train, y_val_proba, y_val, y_test_proba, test_filenames = load_proba(y_proba_file)
print(y_train_proba.shape)
print(y_train.shape)
print(y_val_proba.shape)
print(y_val.shape)
print(y_test_proba.shape)
print(len(test_filenames))
%%time
max_indexes = np.argmax(y_test_proba, -1)
print(max_indexes.shape)
test_dict = {}
for pair in zip(test_filenames, max_indexes):
image_name, indx = pair[0], int(pair[1])
image_name = image_name.split('/')[-1]
image_id = int(image_name.split('.')[0])
# print(pair[0], image_name, image_id, indx, indx+1, type(image_id), type(indx))
test_dict[image_id] = indx + 1
# Check whether the image ids line up with the filenames from ImageDataGenerator()
for name in test_generator.filenames[:10]:
image_name = name.split('/')[-1]
image_id = int(image_name.split('.')[0])
# print('%s\t%s\t%s' % (name, image_id, test_dict[image_id]))
display(sample_submission_csv.head(2))
%%time
len_sample_submission_csv = len(sample_submission_csv)
print('len(len_sample_submission_csv)=%d' % len_sample_submission_csv)
count = 0
for i in range(len_sample_submission_csv):
image_id = int(sample_submission_csv.iloc[i, 0])
if image_id in test_dict:
pred_label = test_dict[image_id]
# print('%s\t%s' % (image_id, pred_label))
sample_submission_csv.iloc[i, 1] = pred_label
else:
# print('%s\t%s' % (image_id, 20))
        sample_submission_csv.iloc[i, 1] = 20  # class 20 is the most frequent, so defaulting all missing predictions to it is likely better than any other single class
count += 1
if count % 1000 == 0:
print(int(count/1000), end=' ')
display(sample_submission_csv.head(2))
print(list(set(sample_submission_csv['predicted'])))
pred_file = os.path.join(output_folder, 'pred_%s.csv' % run_name_acc)
sample_submission_csv.to_csv(pred_file, index=None)
print(run_name_acc)
t1 = time.time()
print('time cost: %.2f s' % (t1-t0))
print('Done!')
###Output
ic_furniture2018_TrainPredict_Mix3model_20180429_112205_7594
time cost: 9061.06 s
Done!
|
Machine Learning/Jupyter Notebooks/Lab4 Logistic Regression.ipynb | ###Markdown
Logistic Regression on MNIST
Assignment 3 (Deadline: 29/10/2020 11:59 PM)
Total Points: 100
###Code
import numpy as np
np.random.seed(42) # setting random seed for reproducibility
###Output
_____no_output_____
###Markdown
1. Digit Classification : 8 vs others (40 points)
###Code
# Import the required libraries
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split, cross_val_score
# The digits data can be loaded as follows :
from sklearn import datasets
D = datasets.load_digits()
X, y = D["data"], D["target"]
# check the shape of the data
print(f"Shape of X: {X.shape}\nShape of y: {y.shape}")
# Plot a few digits to get a sense of how the data looks like
fig, axes = plt.subplots(4, 5, figsize = (10, 10))
for ax in axes.flatten():
index = np.random.randint(0, X.shape[0])
ax.imshow(X[index].reshape(8, 8), cmap = 'gray')
ax.set_title(y[index])
ax.set_xticks([])
ax.set_yticks([])
###Output
_____no_output_____
###Markdown
Let's plot the number of examples in each class
###Code
pd.Series(y).value_counts().plot(kind = 'bar', title = 'Number of examples in each class',
xlabel = 'Class', ylabel = 'Count', figsize = (12, 6));
pd.Series(y).value_counts()
###Output
_____no_output_____
###Markdown
- The classes are well balanced as can be seen from the above graph
###Code
# Redefine the labels
y[y != 8] = 0
y[y == 8] = 1
pd.Series(y).value_counts().plot(kind = 'bar', title = 'Distribution of examples in each class',
xlabel = 'Class', ylabel = 'Count', figsize = (12, 6));
pd.Series(y).value_counts()
###Output
_____no_output_____
###Markdown
- As can be seen from the above graph, the class distribution is now highly imbalanced.
Plot the number of examples in both classes
###Code
# Create a 2-class classification problem (digit 8 versus other digits)
# 20% for testing and rest for training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, stratify = y, random_state = 42)
print(f"Shape of X_train: {X_train.shape}\nShape of y_train: {y_train.shape}")
print(f"Shape of X_test: {X_test.shape}\nShape of y_test: {y_test.shape}")
###Output
Shape of X_train: (1437, 64)
Shape of y_train: (1437,)
Shape of X_test: (360, 64)
Shape of y_test: (360,)
###Markdown
Logistic Regression Model with no penalty
###Code
# Train a logistic regression model with no regularisation for the problem and obtain the cross validation accuracies
solvers = ['sag', 'saga', 'lbfgs', 'newton-cg']
for solver in solvers:
LR = LogisticRegression(penalty = 'none', solver = solver, max_iter = 10000, random_state = 0)
score = cross_val_score(LR, X_train, y_train, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy: {round(score.mean(), 4)} with {solver} solver")
###Output
Mean accuracy: 0.9548 with sag solver
Mean accuracy: 0.9548 with saga solver
Mean accuracy: 0.9569 with lbfgs solver
Mean accuracy: 0.9569 with newton-cg solver
###Markdown
- The **lbfgs and newton-cg** solvers gave the best performance
###Code
# Predict values for the test set using the model obtained above
LR = LogisticRegression(penalty = 'none', solver = 'lbfgs', max_iter = 10000, random_state = 0)
LR.fit(X_train, y_train)
y_pred = LR.predict_proba(X_test)
# Obtain all the different performance metrics for the model on the test set
confusion_matrix(y_test, np.argmax(y_pred, axis = -1))
# print the accuracy score
round(accuracy_score(y_test, np.argmax(y_pred, axis = -1)), 4)
###Output
_____no_output_____
###Markdown
Logistic Regression with L2 regularization
###Code
# Change the hyperparameters of the model and see what effect it has on the model
# Find the hyperparameters which maximises the model performance. Choose the right performance metric to evaluate the model
lambda_param = [0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30, 100, 300]
for C in lambda_param:
LR = LogisticRegression(penalty = 'l2', C = C, solver = 'lbfgs', max_iter = 10000, random_state = 0)
score = cross_val_score(LR, X_train, y_train, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with l2 penalty: {round(score.mean(), 4)} when C = {C}")
###Output
Mean accuracy using cross validation with l2 penalty: 0.961 when C = 0.001
Mean accuracy using cross validation with l2 penalty: 0.9659 when C = 0.003
Mean accuracy using cross validation with l2 penalty: 0.9631 when C = 0.01
Mean accuracy using cross validation with l2 penalty: 0.9645 when C = 0.03
Mean accuracy using cross validation with l2 penalty: 0.9617 when C = 0.1
Mean accuracy using cross validation with l2 penalty: 0.9624 when C = 0.3
Mean accuracy using cross validation with l2 penalty: 0.9603 when C = 1
Mean accuracy using cross validation with l2 penalty: 0.9603 when C = 3
Mean accuracy using cross validation with l2 penalty: 0.9582 when C = 10
Mean accuracy using cross validation with l2 penalty: 0.9575 when C = 30
Mean accuracy using cross validation with l2 penalty: 0.9582 when C = 100
Mean accuracy using cross validation with l2 penalty: 0.9575 when C = 300
###Markdown
Logistic Regression with L1 regularization using the saga solver
- The **LBFGS solver** doesn't support L1 regularization
###Code
for C in lambda_param:
LR = LogisticRegression(penalty = 'l1', C = C, solver = 'saga', max_iter = 10000, random_state = 0)
score = cross_val_score(LR, X_train, y_train, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with l1 penalty: {round(score.mean(), 4)} when C = {C}")
###Output
Mean accuracy using cross validation with l1 penalty: 0.9033 when C = 0.001
Mean accuracy using cross validation with l1 penalty: 0.9033 when C = 0.003
Mean accuracy using cross validation with l1 penalty: 0.9471 when C = 0.01
Mean accuracy using cross validation with l1 penalty: 0.9596 when C = 0.03
Mean accuracy using cross validation with l1 penalty: 0.9589 when C = 0.1
Mean accuracy using cross validation with l1 penalty: 0.9548 when C = 0.3
Mean accuracy using cross validation with l1 penalty: 0.9534 when C = 1
Mean accuracy using cross validation with l1 penalty: 0.9548 when C = 3
Mean accuracy using cross validation with l1 penalty: 0.9548 when C = 10
Mean accuracy using cross validation with l1 penalty: 0.9555 when C = 30
Mean accuracy using cross validation with l1 penalty: 0.9555 when C = 100
Mean accuracy using cross validation with l1 penalty: 0.9548 when C = 300
###Markdown
Logistic Regression with elasticnet regularization and the saga solver
- The LBFGS solver doesn't support elasticnet regularization
###Code
for C in lambda_param:
LR = LogisticRegression(penalty = 'elasticnet', C = C, solver = 'saga', max_iter = 10000, random_state = 0, l1_ratio = 0.5)
score = cross_val_score(LR, X_train, y_train, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with elasticnet penalty: {round(score.mean(), 4)} when C = {C}")
###Output
Mean accuracy using cross validation with elasticnet penalty: 0.9033 when C = 0.001
Mean accuracy using cross validation with elasticnet penalty: 0.9165 when C = 0.003
Mean accuracy using cross validation with elasticnet penalty: 0.9589 when C = 0.01
Mean accuracy using cross validation with elasticnet penalty: 0.9589 when C = 0.03
Mean accuracy using cross validation with elasticnet penalty: 0.9582 when C = 0.1
Mean accuracy using cross validation with elasticnet penalty: 0.9534 when C = 0.3
Mean accuracy using cross validation with elasticnet penalty: 0.9548 when C = 1
Mean accuracy using cross validation with elasticnet penalty: 0.9541 when C = 3
Mean accuracy using cross validation with elasticnet penalty: 0.9548 when C = 10
Mean accuracy using cross validation with elasticnet penalty: 0.9555 when C = 30
Mean accuracy using cross validation with elasticnet penalty: 0.9548 when C = 100
Mean accuracy using cross validation with elasticnet penalty: 0.9548 when C = 300
###Markdown
Best model parameters:
- Solver: LBFGS
- C = 0.003
- Penalty = l2
###Code
# model with best parameters
LR = LogisticRegression(penalty = 'l2', C = 0.003, solver = 'lbfgs', max_iter = 10000, random_state = 0)
LR.fit(X_train, y_train)
y_pred = LR.predict(X_test)
# plot the confusion matrix
confusion_matrix(y_test, y_pred)
# print the accuracy score
round(accuracy_score(y_test, y_pred), 4)
###Output
_____no_output_____
###Markdown
Preprocess the input data
###Code
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
X_train = scalar.fit_transform(X_train)
X_test = scalar.transform(X_test)
for C in lambda_param:
LR = LogisticRegression(penalty = 'l2', C = C, solver = 'lbfgs', max_iter = 10000, random_state = 0)
score = cross_val_score(LR, X_train, y_train, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with l2 penalty: {round(score.mean(), 4)} when C = {C}")
# model with best parameters
LR = LogisticRegression(penalty = 'l2', C = 0.3, solver = 'lbfgs', max_iter = 10000, random_state = 0)
LR.fit(X_train, y_train)
y_pred = LR.predict(X_test)
# plot the confusion matrix
confusion_matrix(y_test, y_pred)
# print the accuracy score
round(accuracy_score(y_test, y_pred), 4)
###Output
_____no_output_____
###Markdown
2. California Housing Prices (60 points)
In this problem, we will take the same California Housing Prices dataset that was shown in the last lab class and do a binary classification of whether the median house value for a given locality is high or low.
###Code
# Get the data from the website to the local directory
import os
import tarfile
from six.moves import urllib
source_path = "https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/housing/housing.tgz"
local_path = os.path.join("datasets", "housing")
os.makedirs(local_path, exist_ok = True)
tgz_path = os.path.join(local_path, "housing.tgz")
urllib.request.urlretrieve(source_path, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=local_path)
housing_tgz.close()
# Read the data into a dataframe
housing_df = pd.read_csv('datasets/housing/housing.csv')
housing_df.head()
# Shape of the dataframe
housing_df.shape
# Basic info about the dataset
housing_df.info()
# Description of the dataset
housing_df.describe().T
###Output
_____no_output_____
###Markdown
Explore the data to understand it better
###Code
# Check the null values in the dataset
housing_df.isnull().sum()
###Output
_____no_output_____
###Markdown
- **Total bedrooms column has 207 null values**.
###Code
# plot the counts of housing median age
housing_df.housing_median_age.value_counts().plot(kind = 'bar', title = 'Count of housing median age', ylabel = 'Counts',
xlabel = 'Housing Median Age', figsize = (14, 8));
###Output
_____no_output_____
###Markdown
- **Housing median age = 52** occurs far more often than any other value. It may be a capped or incorrectly recorded value, since all other ages occur much less frequently.
###Code
# Plot the distribution of total_rooms area
housing_df.total_rooms.plot(kind = 'hist', bins = 100, title = 'Distribution of Total Rooms Area in the dataset',
figsize = (12, 6));
# largest 15 values in the total rooms column in the dataset
housing_df.total_rooms.sort_values(ascending = True).iloc[-15:]
###Output
_____no_output_____
###Markdown
- A few entries have **very large total rooms values**; these are probably outliers in this column.
- Most values in the above distribution lie in the **range 0 - 10000**.
###Code
# Plot the distribution of total_bedrooms area
housing_df.total_bedrooms.plot(kind = 'hist', bins = 100, title = 'Distribution of Total Bedrooms Area in the dataset',
figsize = (12, 6));
# largest 15 values in the total bedrooms column in the dataset
housing_df.total_bedrooms.dropna().sort_values(ascending = True).iloc[-15:]
###Output
_____no_output_____
###Markdown
- A few entries have **very large values in the total bedrooms column**; these are probably outliers in this column.
- The distribution lies mostly in the **range 0 - 2000**.
###Code
# Plot the distribution of population column
housing_df.population.plot(kind = 'hist', bins = 100, title = 'Distribution of Population in the dataset',
figsize = (12, 6));
# largest 15 values in the population column in the dataset
housing_df.population.sort_values(ascending = True).iloc[-15:]
###Output
_____no_output_____
###Markdown
- A few values in the population column are very large, as can be seen from the above plot and value counts.
- The distribution lies mostly in the **range 0 - 5000**.
###Code
# Plot the distribution of households column in the dataset
housing_df.households.plot(kind = 'hist', bins = 100, title = 'Distribution of households in the dataset',
figsize = (12, 6));
# largest 15 values in the households column in the dataset
housing_df.households.sort_values(ascending = True).iloc[-15:]
###Output
_____no_output_____
###Markdown
- A few values in the households column are very large, as can be seen from the above plot and value counts.
- The distribution lies mostly in the **range 0 - 2000**.
###Code
# Plot the distribution of median income column in the dataset
housing_df.median_income.plot(kind = 'hist', bins = 100, title = 'Distribution of median income in the dataset',
figsize = (12, 6));
# largest 15 values in the median income column in the dataset
housing_df.median_income.sort_values(ascending = True).iloc[-15:]
# Count of median income with value equal to 15.001
(housing_df.median_income == 15.0001).sum()
###Output
_____no_output_____
###Markdown
- There are **49 entries with a median income of exactly 15.0001**. This could be a typo or an outlier in this column.
- This is also the largest value in this column.
###Code
# Plot the distribution of median house values column in the dataset
housing_df.median_house_value.plot(kind = 'hist', bins = 60, title = 'Distribution of median house values in the dataset',
figsize = (12, 6));
# maximum value in the median house values column
housing_df.median_house_value.max()
# Count of median house entry with value equal to 500001.0
(housing_df.median_house_value == 500001.0).sum()
###Output
_____no_output_____
###Markdown
- There are **965 entries with a median house value of exactly 500001.0**. This could be a typo or an outlier in this column.
- This is also the largest value in this column.
###Code
# explore the ocean proximity column
housing_df.ocean_proximity.value_counts().plot(kind = 'bar', title = 'Counts of each category in the ocean proximity column',
ylabel = 'Counts', figsize = (12, 6));
# Counts of each category in the ocean proximity column in the dataset
housing_df.ocean_proximity.value_counts()
###Output
_____no_output_____
###Markdown
- **<1H OCEAN** has the largest frequency in the ocean proximity column.
- **ISLAND** has the lowest frequency.
###Code
# Convert the data to suit a binary classification of High Price vs Low Price for the median_house_value column
# Assume that anything >= $200,000 is high price with output value 1 and anything less than that is low price with output value 0.
housing_df['target'] = housing_df.median_house_value.apply(lambda x: 1 if x >= 200000 else 0)
housing_df.head()
###Output
_____no_output_____
###Markdown
Pipeline
###Code
y = housing_df.target
X = housing_df.drop(['median_house_value', 'target'], axis = 1)
X.head()
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size = 0.2, stratify = y, random_state = 42)
print(f"Shape of train_X: {train_X.shape}\nShape of train_y: {train_y.shape}")
print(f"Shape of test_X: {test_X.shape}\nShape of test_y: {test_y.shape}")
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn import set_config
from sklearn.model_selection import GridSearchCV
from sklearn.decomposition import PCA
from sklearn.utils import estimator_html_repr
from sklearn.preprocessing import StandardScaler, OneHotEncoder, RobustScaler, MinMaxScaler
# set up the numeric and categorical transformer
numeric_transformer = Pipeline(steps = [('imputer', SimpleImputer(strategy = 'median')),
('scaler', 'passthrough')])
categorical_transformer = Pipeline(steps = [('imputer', SimpleImputer(strategy = 'constant', fill_value = 'missing')),
('onehot', OneHotEncoder(handle_unknown = 'ignore'))])
numeric_features = X.select_dtypes(include = ['int64', 'float64']).columns
categorical_features = X.select_dtypes(include = ['object']).columns
data_transformer = ColumnTransformer(transformers = [('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
preprocessor = Pipeline(steps = [('data_transformer', data_transformer),
('reduce_dim', PCA())])
clf = Pipeline(steps = [('preprocessor', preprocessor),
('classifier', LogisticRegression(random_state = 0, max_iter = 10000))])
param_grid = {
'preprocessor__data_transformer__num__imputer__strategy': ['mean', 'median'],
'preprocessor__data_transformer__cat__imputer__strategy': ['constant','most_frequent'],
'preprocessor__data_transformer__num__scaler': [StandardScaler(), RobustScaler(), MinMaxScaler()],
'classifier__C': [0.1, 1.0, 10, 100],
'preprocessor__reduce_dim__n_components': [2, 5, 7, 9],
'classifier__solver': ['liblinear','newton-cg', 'lbfgs','sag','saga']}
CV = GridSearchCV(clf, param_grid, n_jobs = 1)
CV.fit(train_X, train_y)
# print(CV.best_params_)
# print(CV.best_score_)
# set config to diagram for visualizing the pipelines / composite estimators
set_config(display = 'diagram')
# Lets visualize the best estimator from grid search.
CV.best_estimator_
# saving pipeline as html format
# with open('titanic_data_pipeline_estimator.html', 'w') as f:
# f.write(estimator_html_repr(grid_search.best_estimator_))
###Output
_____no_output_____
###Markdown
One hot encode the ocean proximity column
###Code
housing_df = pd.get_dummies(housing_df, columns = ['ocean_proximity'])
housing_df.head()
# Rename the columns of the dataframe
columns = ['longitude', 'latitude', 'housing_median_age', 'total_rooms','total_bedrooms', 'population', 'households',
'median_income', 'median_house_value', 'target', '1h_ocean', 'inland', 'island','near_bay', 'near_ocean']
housing_df.columns = columns
housing_df.head()
###Output
_____no_output_____
###Markdown
Fill the missing values in the total_bedrooms column by the mean of the column
###Code
housing_df.total_bedrooms.fillna(housing_df.total_bedrooms.mean(), inplace = True)
# check the null values after imputing with mean
housing_df.total_bedrooms.isnull().sum()
###Output
_____no_output_____
###Markdown
Check the correlation in the dataset
###Code
housing_df.corr()
# inputs and outputs for the model
columns = ['longitude', 'latitude', 'housing_median_age', 'total_rooms','total_bedrooms', 'population', 'households',
'median_income', '1h_ocean', 'inland', 'island','near_bay', 'near_ocean']
y = housing_df.target
X = housing_df[columns]
print(f"Shape of inputs: {X.shape}\nShape of outputs: {y.shape}")
# check the distribution of targets
y.value_counts()
# Use stratified sampling to create an 80-20 train-test split
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size = 0.2, stratify = y, random_state = 42)
print(f"Shape of train_X: {train_X.shape}\nShape of train_y: {train_y.shape}")
print(f"Shape of test_X: {test_X.shape}\nShape of test_y: {test_y.shape}")
###Output
Shape of train_X: (16512, 13)
Shape of train_y: (16512,)
Shape of test_X: (4128, 13)
Shape of test_y: (4128,)
###Markdown
Baseline Model
###Code
# Define the base line model
lr_model = LogisticRegression(max_iter = 5000)
score = cross_val_score(lr_model, train_X, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy with cross validation: {round(score.mean(), 4)}")
###Output
Mean accuracy with cross validation: 0.8272
###Markdown
Separate features into categorical and numeric
###Code
# separate numeric features
numeric_features = train_X.iloc[:, :-5]
# separate categorical features
categorical_features = train_X.iloc[:, -5:]
###Output
_____no_output_____
###Markdown
Scale numeric features for modelling
###Code
scalar = StandardScaler()
train_X_scaled = scalar.fit_transform(numeric_features)
test_X_scaled = scalar.transform(test_X.iloc[:, :-5])
# Shape of numeric features
train_X_scaled.shape, test_X_scaled.shape
###Output
_____no_output_____
###Markdown
Fitting model only on scaled numeric features
###Code
# fit the logistic model
lr_model = LogisticRegression(max_iter = 5000)
score = cross_val_score(lr_model, train_X_scaled, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy with cross validation only using scaled numeric features: {round(score.mean(), 4)}")
###Output
Mean accuracy with cross validation only using scaled numeric features: 0.843
###Markdown
Using both categorical and numerical features to fit model
###Code
# concatenate numerical and categorical features
train_data = np.concatenate((train_X_scaled, categorical_features), 1)
test_data = np.concatenate((test_X_scaled, test_X.iloc[:, -5:]), 1)
train_data.shape, test_data.shape
lr_model = LogisticRegression(max_iter = 5000)
score = cross_val_score(lr_model, train_data, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy with cross validation using all features: {round(score.mean(), 4)}")
###Output
Mean accuracy with cross validation using all features: 0.8431
###Markdown
Find the best solver for the model
###Code
for solver in solvers:
LR = LogisticRegression(penalty = 'none', solver = solver, max_iter = 10000, random_state = 0)
score = cross_val_score(LR, train_data, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy: {round(score.mean(), 4)} with {solver} solver")
# performance of the model using the lbfgs solver and l2 regularization
for C in lambda_param:
    LR = LogisticRegression(penalty = 'l2', C = C, solver = 'lbfgs', max_iter = 10000, random_state = 0)
    score = cross_val_score(LR, train_data, train_y, cv = 5, scoring = 'accuracy')
    print(f"Mean accuracy using cross validation with lbfgs solver: {round(score.mean(), 4)} when C = {C}")
# performance of model using saga solver and l1 regularization
for C in lambda_param:
    LR = LogisticRegression(penalty = 'l1', C = C, solver = 'saga', max_iter = 10000, random_state = 0)
score = cross_val_score(LR, train_data, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with saga solver: {round(score.mean(), 4)} when C = {C}")
# performance of model using saga solver and elasticnet regularization
for C in lambda_param:
    LR = LogisticRegression(penalty = 'elasticnet', C = C, solver = 'saga', max_iter = 10000, random_state = 0, l1_ratio = 0.5)
score = cross_val_score(LR, train_data, train_y, cv = 5, scoring = 'accuracy')
print(f"Mean accuracy using cross validation with saga solver: {round(score.mean(), 4)} when C = {C}")
###Output
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.001
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.003
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.01
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.03
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.1
Mean accuracy using cross validation with saga solver: 0.8434 when C = 0.3
Mean accuracy using cross validation with saga solver: 0.8434 when C = 1
Mean accuracy using cross validation with saga solver: 0.8434 when C = 3
Mean accuracy using cross validation with saga solver: 0.8434 when C = 10
Mean accuracy using cross validation with saga solver: 0.8434 when C = 30
Mean accuracy using cross validation with saga solver: 0.8434 when C = 100
Mean accuracy using cross validation with saga solver: 0.8434 when C = 300
###Markdown
Best model parameters
- Solver: LBFGS
- Penalty: none
###Code
# Find the best Logistic Regression model that can solve this problem
lr_model = LogisticRegression(penalty = 'none', max_iter = 10000, solver = 'lbfgs')
lr_model.fit(train_data, train_y)
pred_y = lr_model.predict(test_data)
# confusion matrix
confusion_matrix(test_y, pred_y)
# Accuracy
accuracy_score(test_y, pred_y)
###Output
_____no_output_____ |
Notebooks/Age_and_Gender_detection.ipynb | ###Markdown
Deep-Learning project
Age and gender detection
We will work with the dataset **```age_gender.csv```**. This dataset contains images of faces, as well as the gender and age of the person in question.
> The structure of the exercise is as follows:
>> I - [Preparation of the dataset](preparation)
>> II - [Gender classification and age prediction](model)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import callbacks
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
###Output
_____no_output_____
###Markdown
I - Preparation of the dataset
###Code
df = pd.read_csv("../Data/age_gender.csv", sep=",")
df["pixels"] = df["pixels"].apply(lambda x : np.array(x.split()).astype(np.int16).reshape(48, 48, 1))
print("Number of rows in the dataset : {}".format(df.shape[0]))
print("Number of columns in the dataset : {}".format(df.shape[1]))
df.head()
plt.figure(figsize=(10, 6))
df["age"].hist(bins=20, grid=False, color="#2e9a74")
plt.title("Age repartition", fontsize=20)
plt.xlabel("Age")
plt.ylabel("Number of faces")
plt.xticks([0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, 105, 110, 115]);
plt.figure(figsize=(6, 6))
plt.pie(x=df["gender"].value_counts().values,
autopct="%.1f%%",
labels=["Male", "Female"],
pctdistance=0.5,
colors=["#a37bc4", "#79519c"],
explode=[0, 0.03])
plt.title("Gender repartition", fontsize=20);
df = df[df["age"]>=15].reset_index(drop=True)
gender_dict = {0 : "Man", 1 : "Woman"}
age_max = df["age"].max()
fig, axs = plt.subplots(2, 2, figsize=(6, 6))
for row in range(2):
for col in range(2):
i = np.random.randint(len(df))
axs[row, col].imshow(df["pixels"].iloc[i].reshape((48, 48)), cmap="gray")
axs[row, col].axis("off")
axs[row, col].set_title("{} year old {}".format(df["age"].iloc[i], gender_dict[df["gender"].iloc[i]]))
###Output
_____no_output_____
###Markdown
II - Gender classification and age prediction
###Code
class DataGenerator():
def __init__(self, df):
self.df = df
def generate_splits(self):
"""reproduction of the train_test_split function"""
permut = np.random.permutation(len(self.df)) # shuffle of indices
# equivalent of data_without_test, data_test = train_test_split(data, test_size=0.2)
train_up_to = int(len(self.df)*0.8)
indices_without_test = permut[:train_up_to]
test_indices = permut[train_up_to:]
# equivalent of data_train, data_validation = train_test_split(data_without_test, test_size=0.2)
train_up_to = int(train_up_to*0.8)
train_indices = indices_without_test[:train_up_to]
valid_indices = indices_without_test[train_up_to:]
return train_indices, valid_indices, test_indices # return the indices
def generate_images(self, indices, is_training, batch_size=32):
"""used to generate batches with images during training/testing/validation of our Keras model
for example, we will have "indices=train_indices, is_training=True" for a training generator
"indices=test_indices, is_training=False" for a test generator"""
max_age = self.df['age'].max() # max_age is used to scale the ages between 0 and 1
files, genders, ages = [], [], [] # we initialize empty lists to contain what the generator will return
while True:
for i in indices: # we browse the indices of the list taken in argument
person = self.df.iloc[i] # we locate the person of index i
files.append((person['pixels']/255)) # we add the image of the person of index i (pixels scaled between 0 and 1)
genders.append(person['gender']) # we add the gender of the person of index i
ages.append(person['age']/max_age) # we add the scaled age of the person of index i
if len(files) >= batch_size: # as soon as the desired batch size is reached
yield np.array(files), [np.array(genders), np.array(ages)] # we return the data
# make sure you remember the order of the outputs as we will have to define our model accordingly
files, genders, ages = [], [], [] # we reset the lists
# for predictions, you have to stop after going through all the data once
if not is_training:
break
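# Note: generate_splits first holds out 20% of the rows for testing and then 20% of the
# remainder for validation, i.e. roughly 64% train / 16% validation / 20% test overall.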
data_generator = DataGenerator(df)
train_indices, valid_indices, test_indices = data_generator.generate_splits()
batch_size = 64
training_data = data_generator.generate_images(train_indices, is_training=True, batch_size=batch_size)
valid_data = data_generator.generate_images(valid_indices, is_training=True, batch_size=batch_size)
test_data = data_generator.generate_images(test_indices, is_training=False, batch_size=batch_size)
class MultiOutputModel():
def hidden_layers(self, inputs):
x = Conv2D(filters=30, kernel_size=(5, 5), padding="valid", activation="relu")(inputs)
x = MaxPooling2D(pool_size=(3, 3))(x)
x = Conv2D(filters=32, kernel_size=(3, 3), padding="valid", activation="relu")(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(rate=0.25)(x)
return x
def gender_branch(self, inputs):
x = self.hidden_layers(inputs)
x = Flatten()(x)
x = Dense(units=64, activation="relu")(x)
x = Dropout(rate=0.2)(x)
x = Dense(units=1, activation="sigmoid", name="gender_output")(x)
return x
def age_branch(self, inputs):
x = self.hidden_layers(inputs)
x = Flatten()(x)
x = Dense(units=64, activation="relu")(x)
x = Dropout(rate=0.2)(x)
x = Dense(units=1, activation="linear", name="age_output")(x)
return x
def create_model(self, input_shape):
inputs = Input(shape=input_shape)
gender_branch = self.gender_branch(inputs)
age_branch = self.age_branch(inputs)
model = Model(inputs=inputs, outputs=[gender_branch, age_branch])
return model
input_shape = (48,48,1)
model = MultiOutputModel().create_model(input_shape)
early_stopping = callbacks.EarlyStopping(monitor="val_loss",
patience=8,
mode="min",
restore_best_weights=True)
lr_plateau = callbacks.ReduceLROnPlateau(monitor="val_loss",
patience=5,
factor=0.5,
verbose=2,
mode="min")
loss = {"gender_output": "binary_crossentropy", "age_output": "mse"}
metrics = {"gender_output": "accuracy", "age_output": "mse"}
model.compile(optimizer="adam", loss=loss, metrics=metrics)
history = model.fit(training_data,
steps_per_epoch=len(train_indices)//batch_size,
epochs=100,
validation_data=valid_data,
validation_steps=len(valid_indices)//batch_size,
callbacks=[early_stopping, lr_plateau])
plt.figure(figsize=(11, 7))
plt.plot(history.history["loss"], "k--")
plt.plot(history.history["val_loss"], color="#2e9a74")
plt.title("Model loss by epoch", fontsize=20)
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="right");
plt.figure(figsize=(23, 7))
plt.subplot(121)
plt.plot(history.history["gender_output_accuracy"], "k--")
plt.plot(history.history["val_gender_output_accuracy"], color="#2e9a74")
plt.title("Model gender accuracy by epoch", fontsize=20)
plt.ylabel("gender accuracy")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="right")
plt.subplot(122)
plt.plot(history.history["age_output_mse"], "k--")
plt.plot(history.history["val_age_output_mse"], color="#2e9a74")
plt.title("Model age MSE by epoch", fontsize=20)
plt.ylabel("age MSE")
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="right");
model.evaluate(test_data)
gender_dict = {0 : "Man", 1 : "Woman"}
age_max = df["age"].max()
fig, axs = plt.subplots(2, 2, figsize=(7, 7))
for row in range(2):
for col in range(2):
i = np.random.randint(len(test_indices))
pred = model.predict((df["pixels"][test_indices[i]]/255).reshape(1, 48, 48, 1))
axs[row, col].imshow(df["pixels"][test_indices[i]].reshape((48, 48)), cmap="gray")
axs[row, col].axis("off")
axs[row, col].set_title("{} year old {} \n predicted {} year old {}"\
.format(df["age"][test_indices[i]], gender_dict[df["gender"][test_indices[i]]],\
max(int(round(pred[1][0][0]*age_max)), 0), gender_dict[pred[0][0][0].round()]))
model.save("age_gender_detector.h5")
###Output
_____no_output_____ |
loc & iloc method in pandas.ipynb | ###Markdown
pandas loc method: access a group of rows and columns by label(s) or a boolean array
###Code
import pandas as pd
data =pd.read_csv('C:\\Users\\admin\\Desktop\\book1.csv')
data
data.loc[0]
data.loc[3]
data.loc[[1,3]]
data.loc[[4,3]]
data.loc[1,"Date"]
data.loc[1:5,"marks"]
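# As mentioned in the markdown above, .loc also accepts a boolean mask,
# e.g. (illustrative threshold, assuming the 'marks' column is numeric):
# data.loc[data["marks"] > 50]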
###Output
_____no_output_____
###Markdown
pandas iloc method: select data by integer position (position-based indexing)
###Code
data.iloc[[0]]
data.iloc[0]
data.iloc[[0,1]]
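# iloc can also slice by position over both rows and columns, e.g.:
# data.iloc[0:3, 0:2]  # first three rows, first two columns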
###Output
_____no_output_____ |
exploring_the_bitcoin_cryptocurrency_market/exploring_the_bitcoin_cryptocurrency_market.ipynb | ###Markdown
1. Bitcoin and Cryptocurrencies: Full dataset, filtering, and reproducibility. Since the launch of Bitcoin in 2008, hundreds of similar projects based on the blockchain technology have emerged. We call these cryptocurrencies (also coins or cryptos in Internet slang). Some are extremely valuable nowadays, and others may have the potential to become extremely valuable in the future[1]. In fact, on the 6th of December of 2017, Bitcoin had a market capitalization above $200 billion. *The astonishing increase of Bitcoin market capitalization in 2017.* [1] WARNING: The cryptocurrency market is exceptionally volatile[2] and any money you put in might disappear into thin air. Cryptocurrencies mentioned here might be scams similar to Ponzi Schemes or have many other issues (overvaluation, technical, etc.). Please do not mistake this for investment advice. [2] Update on March 2020: Well, it turned out to be volatile indeed :D That said, let's get to business. We will start with a CSV we conveniently downloaded on the 6th of December of 2017 using the coinmarketcap API (NOTE: The public API went private in 2020 and is no longer available) named datasets/coinmarketcap_06122017.csv.
###Code
# Importing pandas
import pandas as pd
# Importing matplotlib and setting aesthetics for plotting later.
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
plt.style.use('fivethirtyeight')
# Reading datasets/coinmarketcap_06122017.csv into pandas
dec6 = pd.read_csv('coinmarketcap_06122017.csv')
# Selecting the 'id' and the 'market_cap_usd' columns
market_cap_raw = dec6[['id', 'market_cap_usd']]
# Counting the number of values
market_cap_raw.count()
###Output
_____no_output_____
###Markdown
2. Discard the cryptocurrencies without a market capitalization. Why do the count() results for id and market_cap_usd differ above? It is because some cryptocurrencies listed on coinmarketcap.com have no known market capitalization; this is represented by NaN in the data, and NaNs are not counted by count(). These cryptocurrencies are of little interest to us in this analysis, so they are safe to remove.
###Code
# Filtering out rows without a market capitalization
cap = market_cap_raw.query('market_cap_usd > 0')
# Counting the number of values again
cap.count()
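# The same filter can be written with a boolean mask instead of query():
# cap = market_cap_raw[market_cap_raw['market_cap_usd'] > 0]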
###Output
_____no_output_____
###Markdown
3. How big is Bitcoin compared with the rest of the cryptocurrencies? At the time of writing, Bitcoin is under serious competition from other projects, but it is still dominant in market capitalization. Let's plot the market capitalization for the top 10 coins as a barplot to better visualize this.
###Code
# Declaring these now for later use in the plots
TOP_CAP_TITLE = 'Top 10 market capitalization'
TOP_CAP_YLABEL = '% of total cap'
# Selecting the first 10 rows and setting the index
cap10 = cap[:10].set_index('id')
# Calculating market_cap_perc
cap10 = cap10.assign(market_cap_perc = lambda x: (x.market_cap_usd / cap.market_cap_usd.sum())*100)
# Plotting the barplot with the title defined above
ax = cap10['market_cap_perc'].plot.bar(title=TOP_CAP_TITLE)
# Annotating the y axis with the label defined above
ax.set_ylabel(TOP_CAP_YLABEL);
###Output
_____no_output_____
###Markdown
4. Making the plot easier to read and more informative. While the plot above is informative enough, it can be improved. Bitcoin is too big, and the other coins are hard to distinguish because of this. Instead of the percentage, let's use a log10 scale of the "raw" capitalization. Plus, let's use color to group similar coins and make the plot more informative[1]. For the colors rationale: bitcoin-cash and bitcoin-gold are forks of the bitcoin blockchain[2]. Ethereum and Cardano both offer Turing Complete smart contracts. Iota and Ripple are not minable. Dash, Litecoin, and Monero get their own color. [1] This coloring is a simplification. There are more differences and similarities that are not being represented here. [2] The bitcoin forks are actually very different, but it is out of scope to talk about them here. Please see the warning above and do your own research.
###Code
# Colors for the bar plot
COLORS = ['orange', 'green', 'orange', 'cyan', 'cyan', 'blue', 'silver', 'orange', 'red', 'green']
# Plotting market_cap_usd as before but adding the colors and scaling the y-axis
ax = cap10['market_cap_usd'].plot.bar(title=TOP_CAP_TITLE, logy=True, color=COLORS)
# Annotating the y axis with 'USD'
ax.set_ylabel('USD')
# Final touch! Removing the xlabel as it is not very informative
ax.set_xlabel('');
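# Note: logy=True is the pandas shorthand used above; on a bare matplotlib Axes the
# equivalent call would be ax.set_yscale('log').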
###Output
_____no_output_____
###Markdown
5. What is going on?! Volatility in cryptocurrencies. The cryptocurrency market has been spectacularly volatile since the first exchange opened. This notebook didn't start with a big, bold warning for nothing. Let's explore this volatility a bit more! We will begin by selecting and plotting the 24 hours and 7 days percentage change, which we already have available.
###Code
# Selecting the id, percent_change_24h and percent_change_7d columns
volatility = dec6[['id', 'percent_change_24h', 'percent_change_7d']]
# Setting the index to 'id' and dropping all NaN rows
volatility = volatility.set_index('id').dropna()
# Sorting the DataFrame by percent_change_24h in ascending order
volatility = volatility.sort_values(by='percent_change_24h')
# Checking the first few rows
volatility.head()
###Output
_____no_output_____
###Markdown
6. Well, we can already see that things are *a bit* crazy. It seems you can lose a lot of money quickly on cryptocurrencies. Let's plot the top 10 biggest gainers and top 10 losers in market capitalization.
###Code
# Defining a function with 2 parameters, the series to plot and the title
def top10_subplot(volatility_series, title):
# Making the subplot and the figure for two side by side plots
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
# Plotting with pandas the barchart for the top 10 losers
ax = volatility_series[:10].plot.bar(color='darkred', ax=axes[0])
# Setting the figure's main title to the text passed as parameter
fig.suptitle(title)
# Setting the ylabel to '% change'
ax.set_ylabel('% change')
# Same as above, but for the top 10 winners
ax = volatility_series[-10:].plot.bar(color='darkblue', ax=axes[1])
# Returning this for good practice, might use later
return fig, ax
DTITLE = "24 hours top losers and winners"
# Calling the function above with the 24 hours period series and title DTITLE
fig, ax = top10_subplot(volatility['percent_change_24h'], DTITLE)
###Output
/home/victor/.local/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:400: MatplotlibDeprecationWarning:
The is_first_col function was deprecated in Matplotlib 3.4 and will be removed two minor releases later. Use ax.get_subplotspec().is_first_col() instead.
if ax.is_first_col():
###Markdown
7. Ok, those are... interesting. Let's check the weekly Series too. 800% daily increase?! Why are we doing this tutorial and not buying random coins?[1] After calming down, let's reuse the function defined above to see what is going on weekly instead of daily. [1] Please take a moment to understand the implications of the red plots on how much value some cryptocurrencies lose in such short periods of time.
###Code
# Sorting in ascending order
volatility7d = volatility.sort_values(by='percent_change_7d')
WTITLE = "Weekly top losers and winners"
# Calling the top10_subplot function
fig, ax = top10_subplot(volatility7d['percent_change_7d'], WTITLE)
###Output
_____no_output_____
###Markdown
8. How small is small? The names of the cryptocurrencies above are quite unknown, and there is a considerable fluctuation between the 1 and 7 days percentage changes. As with stocks, and many other financial products, the smaller the capitalization, the bigger the risk and reward. Smaller cryptocurrencies are less stable projects in general, and therefore even riskier investments than the bigger ones[1]. Let's classify our dataset based on Investopedia's capitalization definitions for company stocks. [1] Cryptocurrencies are a new asset class, so they are not directly comparable to stocks. Furthermore, there are no limits set in stone for what a "small" or "large" stock is. Finally, some investors argue that bitcoin is similar to gold; this would make them more comparable to a commodity instead.
###Code
# Selecting everything bigger than 10 billion
largecaps = cap.query('market_cap_usd >= 10000000000')
# Printing out largecaps
largecaps
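# The same query pattern could select other capitalization buckets, e.g. (illustrative bounds):
# midcaps = cap.query('2000000000 <= market_cap_usd < 10000000000')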
###Output
_____no_output_____
###Markdown
9. Most coins are tiny. Note that many coins are not comparable to large companies in market cap, so let's deviate from the original Investopedia definition by merging categories. This is all for now. Thanks for completing this project!
###Code
# Making a nice function for counting different marketcaps from the "cap" DataFrame. Returns an int.
# INSTRUCTORS NOTE: Since you made it to the end, consider it a gift :D
def capcount(query_string):
return cap.query(query_string).count().id
# Labels for the plot
LABELS = ["biggish", "micro", "nano"]
# Using capcount count the biggish cryptos
biggish = capcount('market_cap_usd > 300000000')
# Same as above for micro ...
micro = capcount('300000000 > market_cap_usd > 50000000')
# ... and for nano
nano = capcount('market_cap_usd < 50000000')
# Making a list with the 3 counts
values = [biggish, micro, nano]
# Plotting them with matplotlib
plt.bar(range(len(values)), values, tick_label=LABELS);
###Output
_____no_output_____ |
tensorflow_1_x/7_kaggle/notebooks/python/raw/ex_7.ipynb | ###Markdown
$EXERCISE_PREAMBLE$ There are only four problems in this last set of exercises, but they're all pretty tricky, so be on guard! If you get stuck, don't hesitate to head to the [Learn Forum](https://kaggle.com/learn-forum) to discuss. Run the setup code below before working on the questions (and run it again if you leave this notebook and come back later).
###Code
#_RM_
%matplotlib inline
# SETUP. You don't need to worry for now about what this code does or how it works. If you're ever curious about the
# code behind these exercises, it's available under an open source license here: https://github.com/Kaggle/learntools/
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex7 import *
print('Setup complete.')
###Output
_____no_output_____
###Markdown
Exercises 1. After completing [the exercises on lists and tuples]($EXERCISE_URL(5)$), Jimmy noticed that, according to his `estimate_average_slot_payout` function, the slot machines at the Learn Python Casino are actually rigged *against* the house, and are profitable to play in the long run. Starting with $200 in his pocket, Jimmy has played the slots 500 times, recording his new balance in a list after each spin. He used Python's `matplotlib` library to make a graph of his balance over time:
###Code
# Import the jimmy_slots submodule
from learntools.python import jimmy_slots
# Call the get_graph() function to get Jimmy's graph
graph = jimmy_slots.get_graph()
graph
###Output
_____no_output_____
###Markdown
As you can see, he's hit a bit of bad luck recently. He wants to tweet this along with some choice emojis, but, as it looks right now, his followers will probably find it confusing. He's asked if you can help him make the following changes:

1. Add the title "Results of 500 slot machine pulls"
2. Make the y-axis start at 0.
3. Add the label "Balance" to the y-axis

After calling `type(graph)` you see that Jimmy's graph is of type `matplotlib.axes._subplots.AxesSubplot`. Hm, that's a new one. By calling `dir(graph)`, you find three methods that seem like they'll be useful: `.set_title()`, `.set_ylim()`, and `.set_ylabel()`. Use these methods to complete the function `prettify_graph` according to Jimmy's requests. We've already checked off the first request for you (setting a title). (Remember: if you don't know what these methods do, use the `help()` function!)
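As a quick, generic illustration (a throwaway plot, not Jimmy's graph), those three methods work on any matplotlib `Axes` like this:

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [200, 150, 180])
ax.set_title("Example title")  # set a title
ax.set_ylim(bottom=0)          # make the y-axis start at 0
ax.set_ylabel("Balance")       # label the y-axis
```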
###Code
def prettify_graph(graph):
"""Modify the given graph according to Jimmy's requests: add a title, make the y-axis
start at 0, label the y-axis. (And, if you're feeling ambitious, format the tick marks
as dollar amounts using the "$" symbol.)
"""
graph.set_title("Results of 500 slot machine pulls")
# Complete steps 2 and 3 here
graph = jimmy_slots.get_graph()
prettify_graph(graph)
graph
###Output
_____no_output_____
###Markdown
**Bonus:** Can you format the numbers on the y-axis so they look like dollar amounts? e.g. $200 instead of just 200. (We're not going to tell you what method(s) to use here. You'll need to go digging yourself with `dir(graph)` and/or `help(graph)`.)
###Code
#_COMMENT_IF(PROD)_
q1.solution()
###Output
_____no_output_____
###Markdown
2. 🌶️🌶️ Luigi is trying to perform an analysis to determine the best items for winning races on the Mario Kart circuit. He has some data in the form of lists of dictionaries that look like...

    [
        {'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
        {'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
        # Sometimes the racer's name wasn't recorded
        {'name': None, 'items': ['mushroom',], 'finish': 2},
        {'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
    ]

`'items'` is a list of all the power-up items the racer picked up in that race, and `'finish'` was their placement in the race (1 for first place, 3 for third, etc.). He wrote the function below to take a list like this and return a dictionary mapping each item to how many times it was picked up by first-place finishers.
###Code
def best_items(racers):
"""Given a list of racer dictionaries, return a dictionary mapping items to the number
of times those items were picked up by racers who finished in first place.
"""
winner_item_counts = {}
for i in range(len(racers)):
# The i'th racer dictionary
racer = racers[i]
# We're only interested in racers who finished in first
if racer['finish'] == 1:
for i in racer['items']:
# Add one to the count for this item (adding it to the dict if necessary)
if i not in winner_item_counts:
winner_item_counts[i] = 0
winner_item_counts[i] += 1
# Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
if racer['name'] is None:
print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
i+1, len(racers), racer['name'])
)
return winner_item_counts
###Output
_____no_output_____
###Markdown
He tried it on a small example list above and it seemed to work correctly:
###Code
sample = [
{'name': 'Peach', 'items': ['green shell', 'banana', 'green shell',], 'finish': 3},
{'name': 'Bowser', 'items': ['green shell',], 'finish': 1},
{'name': None, 'items': ['mushroom',], 'finish': 2},
{'name': 'Toad', 'items': ['green shell', 'mushroom'], 'finish': 1},
]
best_items(sample)
###Output
_____no_output_____
###Markdown
However, when he tried running it on his full dataset, the program crashed with a `TypeError`. Can you guess why? Try running the code cell below to see the error message Luigi is getting. Once you've identified the bug, fix it in the cell below (so that it runs without any errors). Hint: Luigi's bug is similar to one we encountered in the [tutorial]($TUTORIAL_URL$) when we talked about star imports.
###Code
# Import luigi's full dataset of race data
from learntools.python.luigi_analysis import full_dataset
# Fix me!
def best_items(racers):
winner_item_counts = {}
for i in range(len(racers)):
# The i'th racer dictionary
racer = racers[i]
# We're only interested in racers who finished in first
if racer['finish'] == 1:
for i in racer['items']:
# Add one to the count for this item (adding it to the dict if necessary)
if i not in winner_item_counts:
winner_item_counts[i] = 0
winner_item_counts[i] += 1
# Data quality issues :/ Print a warning about racers with no name set. We'll take care of it later.
if racer['name'] is None:
print("WARNING: Encountered racer with unknown name on iteration {}/{} (racer = {})".format(
i+1, len(racers), racer['name'])
)
return winner_item_counts
# Try analyzing the imported full dataset
best_items(full_dataset)
#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
###Output
_____no_output_____
###Markdown
3. 🌶️ Suppose we wanted to create a new type to represent hands in blackjack. One thing we might want to do with this type is overload the comparison operators like `>` and `<=` so that we could use them to check whether one hand beats another. e.g. it'd be cool if we could do this:

```python
>>> hand1 = BlackjackHand(['K', 'A'])
>>> hand2 = BlackjackHand(['7', '10', 'A'])
>>> hand1 > hand2
True
```

Well, we're not going to do all that in this question (defining custom classes is a bit beyond the scope of these lessons), but the code we're asking you to write in the function below is very similar to what we'd have to write if we were defining our own `BlackjackHand` class. (We'd put it in the `__gt__` magic method to define our custom behaviour for `>`.) Fill in the body of the `blackjack_hand_greater_than` function according to the docstring.
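For reference, this is what overloading `>` looks like on a toy class (just a sketch of the mechanism; it does not implement the blackjack rules you still need to write):

```python
class Money:
    def __init__(self, amount):
        self.amount = amount

    def __gt__(self, other):
        # Python calls this method when we write money_a > money_b
        return self.amount > other.amount

print(Money(10) > Money(3))  # True
```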
###Code
def blackjack_hand_greater_than(hand_1, hand_2):
"""
Return True if hand_1 beats hand_2, and False otherwise.
In order for hand_1 to beat hand_2 the following must be true:
- The total of hand_1 must not exceed 21
- The total of hand_1 must exceed the total of hand_2 OR hand_2's total must exceed 21
Hands are represented as a list of cards. Each card is represented by a string.
When adding up a hand's total, cards with numbers count for that many points. Face
cards ('J', 'Q', and 'K') are worth 10 points. 'A' can count for 1 or 11.
When determining a hand's total, you should try to count aces in the way that
maximizes the hand's total without going over 21. e.g. the total of ['A', 'A', '9'] is 21,
the total of ['A', 'A', '9', '3'] is 14.
Examples:
>>> blackjack_hand_greater_than(['K'], ['3', '4'])
True
>>> blackjack_hand_greater_than(['K'], ['10'])
False
>>> blackjack_hand_greater_than(['K', 'K', '2'], ['3'])
False
"""
pass
q3.check()
#_COMMENT_IF(PROD)_
q3.hint()
#_COMMENT_IF(PROD)_
q3.solution()
###Output
_____no_output_____
###Markdown
4. 🌶️🌶️ In [the previous set of exercises]($EXERCISE_URL(6)$), you heard a tip-off that the roulette tables at the Learn Python Casino had some quirk where the probability of landing on a particular number was partly dependent on the number the wheel most recently landed on. You wrote a function `conditional_roulette_probs` which returned a dictionary with counts of how often the wheel landed on `x` then `y` for each value of `x` and `y`. After analyzing the output of your function, you've come to the following conclusion: for each wheel in the casino, there is exactly one pair of numbers `a` and `b`, such that, after the wheel lands on `a`, it's significantly more likely to land on `b` than any other number. If the last spin landed on anything other than `a`, then it acts like a normal roulette wheel, with equal probability of landing on any of the 11 numbers (the casino's wheels are unusually small - they only have the numbers from 0 to 10 inclusive). It's time to exploit this quirk for fun and profit. You'll be writing a roulette-playing agent to beat the house. When called, your agent will have an opportunity to sit down at one of the casino's wheels for 100 spins. You don't need to bet on every spin. For example, the agent below bets on a random number unless the last spin landed on 4 (in which case it just watches).
###Code
from learntools.python import roulette
import random
def random_and_superstitious(wheel):
"""Interact with the given wheel over 100 spins with the following strategy:
- if the wheel lands on 4, don't bet on the next spin
- otherwise, bet on a random number on the wheel (from 0 to 10)
"""
last_number = 0
while wheel.num_remaining_spins() > 0:
if last_number == 4:
# Unlucky! Don't bet anything.
guess = None
else:
guess = random.randint(0, 10)
last_number = wheel.spin(number_to_bet_on=guess)
roulette.evaluate_roulette_strategy(random_and_superstitious)
###Output
_____no_output_____
###Markdown
As you might have guessed, our random/superstitious agent bleeds money. Can you write an agent that beats the house? (i.e. can you make "Average gain per simulation" positive?) For more information on the type of object your agent will be passed, try calling `help(roulette.RouletteSession)`. You can also call `help(roulette.evaluate_roulette_strategy)` to see some optional parameters you can change regarding the conditions under which we test your agent. HINT: it might help to go back to your [strings and dictionaries exercise notebook]($EXERCISE_URL(6)$) and review your code for `conditional_roulette_probs` for inspiration.
###Code
def my_agent(wheel):
pass
roulette.evaluate_roulette_strategy(my_agent)
###Output
_____no_output_____ |
3-ConvNets/CNN-Handwritting digits-TensorFlow 1.X.ipynb | ###Markdown
Handwritten digits with TensorFlow 1.X
###Code
%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
!python --version
import numpy as np
import math
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
from google.colab import drive
drive.mount('/content/gdrive')
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('content/gdrive/MyDrive/Colab/CV-TF1.X-2.X/data/MNIST/', one_hot=True)
print("- Size:Training-set:\t\t{}".format(len(data.train.labels)))
print("- Size:Test-set:\t\t{}".format(len(data.test.labels)))
print("- Size:Validation-set:\t{}".format(len(data.validation.labels)))
#class labels for the test set
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions
###Code
# MNIST images are 28 pixels in each dimension.
img_size = 28
# Flattened format in 1-D array
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes
num_classes = 10
###Output
_____no_output_____
###Markdown
Function for plotting images
###Code
def plot_img(images, cls_label, cls_pred=None):
fig, axes = plt.subplots(1, 10, figsize=(15,15))
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for idx, ax in enumerate(axes.flat):
ax.imshow(images[idx].reshape(img_shape), cmap='binary')
if cls_pred is None:
label = "True: {0}".format(cls_label[idx]) # shows only the class label
else:
label = "True: {0}, Pred: {1}".format(cls_label[idx], cls_pred[idx])
ax.set_xlabel(label)
ax.set_xticks([])
ax.set_yticks([])
plt.show()
# Get the first images from the test-set.
images = data.test.images[0:10]
# Get the true classes for those images.
cls_true = data.test.cls[0:10]
# Plot the images and labels using the function plot_img.
plot_img(images=images, cls_label=cls_true)
###Output
_____no_output_____
###Markdown
Placeholder variables
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
#The convolutional layers expect x to be encoded as a 4-dim tensor [num_images, img_height, img_width, num_channels]
#img_height == img_width == img_size
#num_images can be inferred automatically by using -1 for the size of the first dimension
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
# variable for the true labels associated with the images that were input in the placeholder variable x
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
#we also have a placeholder variable for the class-number, we will calculate it using argmax function.
y_true_cls = tf.argmax(y_true, axis=1)
###Output
_____no_output_____
###Markdown
CNN configuration
###Code
# Convolutional Layer 1.
filter_size1 = 5 # Convolution filters are 5 x 5 pixels.
num_filters1 = 16
# Convolutional Layer 2.
filter_size2 = 5 # Convolution filters are 5 x 5 pixels.
num_filters2 = 36
# Fully-connected layer.
fc_size = 128 # Number of neurons in fully-connected layer.
# function to generate weights with random values with a given shape
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
###Output
_____no_output_____
###Markdown
Creating a new Convolutional Layer
###Code
def conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
):
# Shape of the filter-weights for the convolution.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# The strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME',
use_cudnn_on_gpu=True)
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution
# This is 2x2 max-pooling.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU)
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x))
# we can save 75% of the relu-operations by max-pooling first.
return layer, weights
###Output
_____no_output_____
###Markdown
Flattening a layer
###Code
def flatten_layer(layer):
layer_shape = layer.get_shape()
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
return layer_flat, num_features
###Output
_____no_output_____
###Markdown
Creating a new Fully-Connected Layer
###Code
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the dot product of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
if use_relu:
layer = tf.nn.relu(layer)
return layer
###Output
_____no_output_____
###Markdown
Filling the layers
###Code
## convolutional Layer1
layer_conv1, weights_conv1 = conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
)
layer_conv1
#(?, 14, 14, 16) which means that there is an arbitrary number of images (this is the ?)
layer_conv2, weights_conv2 = conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
)
layer_conv2
#The shape is (?, 7, 7, 36) where the ? again means that there is an arbitrary number of images.
#Flatten layer:
layer_flat, num_features = flatten_layer(layer_conv2)
layer_flat
num_features
#Fully-Connected Layer 1
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc_size,
use_relu=True)
#shape (?, 128) where the ? means there is an arbitrary number of images and fc_size == 128
layer_fc1
#Fully-Connected Layer 2
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
#Predicted Class
y_pred = tf.nn.softmax(layer_fc2)
#The class-number is the index of the largest element
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Cost-function to be optimized
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=layer_fc2, labels=y_true)
#Note that the function calculates the softmax internally so we must use the output of layer_fc2 directly
#rather than y_pred which has already had the softmax applied.
#we take the average of the cross-entropy for all the image classifications.
cost = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization Method
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
###Output
_____no_output_____
###Markdown
Performance Metrics
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
#a vector of booleans whether the predicted class equals the true class of each image
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
#This calculates the classification accuracy by first type-casting the vector of booleans to floats,
#so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
###Output
_____no_output_____
###Markdown
TensorFlow Run
###Code
#For CPU:
#session = tf.Session()
#init = session.run(tf.global_variables_initializer())
#For GPU:
init = tf.global_variables_initializer()
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
sess.run(init)
train_batch_size = 64
total_iterations = 0
def optimize(num_iterations):
global total_iterations
for i in range(total_iterations, total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict
feed_dict_train = {x: x_batch, y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
sess.run(optimizer, feed_dict=feed_dict_train)
if i % 100 == 0:
acc = sess.run(accuracy, feed_dict=feed_dict_train)
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
print(msg.format(i + 1, acc))
total_iterations += num_iterations
def plot_example_errors(cls_pred, correct):
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
plot_img(images=images[0:10],
cls_label=cls_true[0:10],
cls_pred=cls_pred[0:10])
def plot_confusion_matrix(cls_pred):
cls_true = data.test.cls
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = sess.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# We define accuracy as the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimization
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 7.1% (706 / 10000)
###Markdown
Performance after 10 optimization iterations
###Code
optimize(num_iterations=10)
print_test_accuracy()
###Output
Accuracy on Test-Set: 15.4% (1537 / 10000)
###Markdown
Performance after 1000 optimization iterations
###Code
optimize(num_iterations=990)
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 93.6% (9356 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterations
###Code
optimize(num_iterations=9000)
print_test_accuracy(show_example_errors=True, show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.7% (9874 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers
###Code
def plot_conv_weights(weights, input_channel=0):
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = sess.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
num_filters = w.shape[3]# Number of filters in the conv. layer.
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i < num_filters:
# Get the weights for the i'th filter of the input channel.
img = w[:, :, input_channel, i]
ax.imshow(img, vmin=w_min, vmax=w_max, interpolation='nearest', cmap='seismic')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
def plot_conv_layer(layer, image):
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when entering that image.
values = sess.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot. Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
fig, axes = plt.subplots(num_grids, num_grids)
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i < num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
ax.imshow(img, interpolation='nearest', cmap='binary')
ax.set_xticks([])
ax.set_yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
Plotting weights, Convolutional Layers
###Code
#Convolution Layer 1
plot_conv_weights(weights=weights_conv1)
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
image1 = data.test.images[15]
plot_image(image1)
plot_conv_layer(layer=layer_conv1, image=image1)
#There are 16 input channels to the second convolutional layer
plot_conv_weights(weights=weights_conv2, input_channel=0)
plot_conv_layer(layer=layer_conv2, image=image1)
sess.close()
###Output
_____no_output_____ |
test/combine/splitMissieven.ipynb | ###Markdown
Split. We split a big dataset into volumes.
###Code
import os
from tf.fabric import Fabric
from tf.compose import split
from tf.core.helpers import unexpanduser
GH = os.path.expanduser("~/github")
GM = f"{GH}/Dans-labs/clariah-gm"
VERSION = "0.9.1"
SOURCE = f"{GM}/tf/{VERSION}"
TARGET = f"{GM}/_local/tf/{VERSION}"
###Output
_____no_output_____
###Markdown
Loading. We load the dataset and pass its api to the `split()` function. If something goes wrong during the split, we can inspect the dataset without reloading it. In a normal scenario, we can just leave out this step. The `split()` function will automatically load the dataset if no `api` argument is passed.
###Code
TF = Fabric(locations=SOURCE)
api = TF.loadAll()
api.makeAvailableIn(globals())
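# As noted above, the api= argument is optional; without it, split() loads the
# dataset itself, e.g. (sketch): volumes = split(SOURCE, TARGET, overwrite=True)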
###Output
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
40 features found and 0 ignored
0.00s loading features ...
9.51s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
0.65s All additional features loaded - for details use TF.loadLog()
###Markdown
Selective splitting. Splitting happens at top-level sections. We can restrict the processing to just one volume, for debugging purposes. The volume you are interested in can be passed in the `volume=` optional parameter. Set it to `None` or leave it out to process all volumes.
###Code
V = None
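# e.g. setting V = 3 here would restrict the split to volume 3 only, as described above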
volumes = split(SOURCE, TARGET, api=api, overwrite=True, volume=V)
###Output
Splitting dataset in 13 volumes:
| Volume 1 : with slots 1 - 358356
| Volume 2 : with slots 358357 - 765208
| Volume 3 : with slots 765209 - 1213807
| Volume 4 : with slots 1213808 - 1589004
| Volume 5 : with slots 1589005 - 2008807
| Volume 6 : with slots 2008808 - 2450424
| Volume 7 : with slots 2450425 - 2850492
| Volume 8 : with slots 2850493 - 2977520
| Volume 9 : with slots 2977521 - 3447394
| Volume 10 : with slots 3447395 - 4089689
| Volume 11 : with slots 4089690 - 4594831
| Volume 12 : with slots 4594832 - 4941618
| Volume 13 : with slots 4941619 - 5316429
1.91s volume splits determined
1.91s Distribute nodes over volumes ...
| 0.30s volume 1 with 398465 nodes ...
| 0.64s volume 2 with 451426 nodes ...
| 1.03s volume 3 with 500564 nodes ...
| 1.36s volume 4 with 421357 nodes ...
| 1.73s volume 5 with 467818 nodes ...
| 2.10s volume 6 with 490255 nodes ...
| 2.44s volume 7 with 440890 nodes ...
| 2.58s volume 8 with 140198 nodes ...
| 2.93s volume 9 with 519879 nodes ...
| 3.47s volume 10 with 704722 nodes ...
| 3.92s volume 11 with 555400 nodes ...
| 4.22s volume 12 with 382980 nodes ...
| 4.53s volume 13 with 417077 nodes ...
6.48s distribution done
6.48s Remap features ...
| 0.00s volume 1 with 398465 nodes ...
| 2.55s volume 2 with 451426 nodes ...
| 5.31s volume 3 with 500564 nodes ...
| 8.60s volume 4 with 421357 nodes ...
| 11s volume 5 with 467818 nodes ...
| 15s volume 6 with 490255 nodes ...
| 18s volume 7 with 440890 nodes ...
| 21s volume 8 with 140198 nodes ...
| 22s volume 9 with 519879 nodes ...
| 26s volume 10 with 704722 nodes ...
| 30s volume 11 with 555400 nodes ...
| 34s volume 12 with 382980 nodes ...
| 37s volume 13 with 417077 nodes ...
46s remapping done
46s Write TF datasets per volume
| 0.00s Writing volume 1 ...
| 2.78s Writing volume 2 ...
| 5.80s Writing volume 3 ...
| 9.26s Writing volume 4 ...
| 12s Writing volume 5 ...
| 15s Writing volume 6 ...
| 19s Writing volume 7 ...
| 22s Writing volume 8 ...
| 23s Writing volume 9 ...
| 27s Writing volume 10 ...
| 32s Writing volume 11 ...
| 36s Writing volume 12 ...
| 38s Writing volume 13 ...
1m 27s writing done
1m 27s All done
###Markdown
Check out the volumes. The `split()` function returns basic information about the volumes:

* title
* top node in the original dataset
* top node in the volume dataset
* location of the volume dataset on the filesystem
###Code
for v in volumes:
print(f"{v[0]:<20} original volume node={v[1]:>8} top node={v[2]:>7} at {unexpanduser(v[3])}")
###Output
_____no_output_____
###Markdown
Load all volumes. We use the result of the `split()` function to find and load all volumes. We now get one TF-api handle per volume.

volumeMap: note that each volume has an extra feature: `volumeMap`. The value for each node in the volume dataset is the corresponding node in the complete dataset from which the volume is taken. If you use the volume dataset to compute annotations, and you want to publish these annotations against the complete dataset, the feature `volumeMap` provides the necessary information to do so. Suppose `annotvx` is a dict mapping some nodes in the dataset of volume `x` to interesting values; then you apply them to the big dataset as follows:

```python
{F.volumeMap.v(n): value for (n, value) in annotvx.items()}
```
###Code
TFs = {}
apis = {}
for (vt, vn, vl, vloc) in volumes:
TFs[vt] = Fabric(locations=vloc)
apis[vt] = TFs[vt].loadAll()
###Output
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.13s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.87s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.02s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.09s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.08s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.73s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.61s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.62s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.51s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.02s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| | 0.03s C __levels__ from otype, oslots, otext
| | 2.46s C __order__ from otype, oslots, __levels__
| | 0.15s C __rank__ from otype, __order__
| | 2.62s C __levUp__ from otype, oslots, __rank__
| | 0.14s C __levDown__ from otype, __levUp__, __rank__
| | 1.40s C __boundary__ from otype, oslots, __rank__
| | 0.10s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
11s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.06s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.35s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.02s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.01s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.54s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/1
1.02s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.11s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 1.02s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.09s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.06s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.06s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.84s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.68s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.72s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.56s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.10s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| | 0.03s C __levels__ from otype, oslots, otext
| | 2.83s C __order__ from otype, oslots, __levels__
| | 0.16s C __rank__ from otype, __order__
| | 2.99s C __levUp__ from otype, oslots, __rank__
| | 0.15s C __levDown__ from otype, __levUp__, __rank__
| | 1.54s C __boundary__ from otype, oslots, __rank__
| | 0.12s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
12s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.04s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.39s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.01s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.06s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.01s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.62s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/2
1.17s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.12s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 1.09s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.19s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.07s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.06s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.06s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.95s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.68s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.81s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.55s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.22s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| | 0.03s C __levels__ from otype, oslots, otext
| | 3.18s C __order__ from otype, oslots, __levels__
| | 0.18s C __rank__ from otype, __order__
| | 3.15s C __levUp__ from otype, oslots, __rank__
| | 0.18s C __levDown__ from otype, __levUp__, __rank__
| | 1.78s C __boundary__ from otype, oslots, __rank__
| | 0.13s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.03s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
14s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.04s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.36s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.01s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.13s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.01s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.70s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/3
1.29s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.11s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.95s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.22s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.07s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.06s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.79s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.51s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.67s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.39s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.25s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| | 0.03s C __levels__ from otype, oslots, otext
| | 2.74s C __order__ from otype, oslots, __levels__
| | 0.15s C __rank__ from otype, __order__
| | 2.55s C __levUp__ from otype, oslots, __rank__
| | 0.16s C __levDown__ from otype, __levUp__, __rank__
| | 1.51s C __boundary__ from otype, oslots, __rank__
| | 0.11s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.03s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
11s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.04s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.27s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.14s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.01s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.59s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/4
1.11s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.11s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 1.03s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.29s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.05s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.06s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.04s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.90s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.51s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.75s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.42s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.34s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| | 0.03s C __levels__ from otype, oslots, otext
| | 3.11s C __order__ from otype, oslots, __levels__
| | 0.16s C __rank__ from otype, __order__
| | 2.99s C __levUp__ from otype, oslots, __rank__
| | 0.18s C __levDown__ from otype, __levUp__, __rank__
| | 1.68s C __boundary__ from otype, oslots, __rank__
| | 0.12s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
13s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.03s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.30s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.20s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.66s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/5
1.25s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.12s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 1.07s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.35s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.06s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.06s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.05s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.94s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.47s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.79s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.39s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.41s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| | 0.03s C __levels__ from otype, oslots, otext
| | 3.14s C __order__ from otype, oslots, __levels__
| | 0.17s C __rank__ from otype, __order__
| | 3.08s C __levUp__ from otype, oslots, __rank__
| | 0.17s C __levDown__ from otype, __levUp__, __rank__
| | 1.74s C __boundary__ from otype, oslots, __rank__
| | 0.12s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
13s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.03s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.27s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.24s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.01s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.69s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/6
1.29s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.11s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.95s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.37s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.05s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.04s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.84s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.38s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.71s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.31s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.43s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| | 0.03s C __levels__ from otype, oslots, otext
| | 2.83s C __order__ from otype, oslots, __levels__
| | 0.15s C __rank__ from otype, __order__
| | 2.75s C __levUp__ from otype, oslots, __rank__
| | 0.14s C __levDown__ from otype, __levUp__, __rank__
| | 1.58s C __boundary__ from otype, oslots, __rank__
| | 0.10s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.04s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
12s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.03s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.22s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.25s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.62s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/7
1.16s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.03s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.28s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.13s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.01s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.02s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.01s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.27s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.11s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.24s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.10s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.15s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| | 0.01s C __levels__ from otype, oslots, otext
| | 0.90s C __order__ from otype, oslots, __levels__
| | 0.05s C __rank__ from otype, __order__
| | 0.88s C __levUp__ from otype, oslots, __rank__
| | 0.04s C __levDown__ from otype, __levUp__, __rank__
| | 0.44s C __boundary__ from otype, oslots, __rank__
| | 0.04s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.01s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
3.74s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.01s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.07s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.09s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.21s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/8
0.41s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.12s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 1.13s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.38s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.01s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.06s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.01s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.99s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.55s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.87s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.47s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.45s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| | 0.03s C __levels__ from otype, oslots, otext
| | 3.50s C __order__ from otype, oslots, __levels__
| | 0.17s C __rank__ from otype, __order__
| | 3.27s C __levUp__ from otype, oslots, __rank__
| | 0.18s C __levDown__ from otype, __levUp__, __rank__
| | 1.79s C __boundary__ from otype, oslots, __rank__
| | 0.12s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
14s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.01s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.31s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.26s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.01s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.72s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/9
1.35s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.19s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 1.53s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.25s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.08s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.01s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 1.33s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 1.06s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 1.12s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.86s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.30s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| | 0.04s C __levels__ from otype, oslots, otext
| | 4.94s C __order__ from otype, oslots, __levels__
| | 0.25s C __rank__ from otype, __order__
| | 4.48s C __levUp__ from otype, oslots, __rank__
| | 0.21s C __levDown__ from otype, __levUp__, __rank__
| | 2.48s C __boundary__ from otype, oslots, __rank__
| | 0.17s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
19s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.60s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.17s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 1.05s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/10
1.88s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.13s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 1.21s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.38s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.06s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 1.07s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.60s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.87s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.49s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.46s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| | 0.03s C __levels__ from otype, oslots, otext
| | 3.81s C __order__ from otype, oslots, __levels__
| | 0.19s C __rank__ from otype, __order__
| | 3.46s C __levUp__ from otype, oslots, __rank__
| | 0.17s C __levDown__ from otype, __levUp__, __rank__
| | 2.00s C __boundary__ from otype, oslots, __rank__
| | 0.13s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
15s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.35s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.27s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.79s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/11
1.46s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.10s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.84s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.10s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.73s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.62s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.61s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.51s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.11s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| | 0.02s C __levels__ from otype, oslots, otext
| | 2.62s C __order__ from otype, oslots, __levels__
| | 0.13s C __rank__ from otype, __order__
| | 2.32s C __levUp__ from otype, oslots, __rank__
| | 0.12s C __levDown__ from otype, __levUp__, __rank__
| | 1.34s C __boundary__ from otype, oslots, __rank__
| | 0.09s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.02s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
10s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.38s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.07s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T issub from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T status from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.55s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/12
1.04s All additional features loaded - for details use TF.loadLog()
This is Text-Fabric 8.5.14
Api reference : https://annotation.github.io/text-fabric/tf/cheatsheet.html
41 features found and 0 ignored
0.00s loading features ...
| 0.11s T otype from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.90s T oslots from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.42s T puncr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T transn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.05s T n from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T puncn from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T title from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.79s T trans from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.29s T transo from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.67s T punc from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.24s T punco from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.49s T transr from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| | 0.03s C __levels__ from otype, oslots, otext
| | 2.93s C __order__ from otype, oslots, __levels__
| | 0.15s C __rank__ from otype, __order__
| | 2.54s C __levUp__ from otype, oslots, __rank__
| | 0.16s C __levDown__ from otype, __levUp__, __rank__
| | 1.47s C __boundary__ from otype, oslots, __rank__
| | 0.10s C __sections__ from otype, oslots, otext, __levUp__, __levels__, n, n, n
| | 0.01s C __structure__ from otype, oslots, otext, __rank__, __levUp__, n, title, n
11s All features loaded/computed - for details use TF.loadLog()
0.00s loading features ...
| 0.00s T author from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T authorFull from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.01s T col from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T day from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T facs from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isemph from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isfolio from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isnote from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.17s T isorig from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isref from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.29s T isremark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isspecial from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T issuper from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T isund from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T mark from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T month from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T note from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T page from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T place from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T rawdate from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.01s T row from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T seq from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T tpl from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T vol from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.60s T volumeMap from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
| 0.00s T year from ~/github/Dans-labs/clariah-gm/_local/tf/0.9.1/13
1.12s All additional features loaded - for details use TF.loadLog()
|
code_examples/spring17_survey/session_titles.ipynb | ###Markdown
Wordcloud for previous topics presented at The Hacker Within
By R. Stuart Geiger, licensed CC-BY 4.0 and the MIT license
Data scraped from [THW's website](http://www.thehackerwithin.org/berkeley/previous.html) in an embarrassingly manual way. I cleaned up some terms and replaced "R" with "RR" because the wordcloud package has trouble with single-letter words.
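For illustration only, a minimal sketch of that kind of term cleanup (the sample titles below are made up; the real cleanup was done by hand on the TSV file):
###Code
# made-up sample titles, not the real session data
raw_titles = ["Intro to R", "Git basics", "R for data analysis"]
# replace the single-letter term "R" with "RR" so the wordcloud package can handle it
cleaned_titles = [" ".join("RR" if w == "R" else w for w in t.split()) for t in raw_titles]
print(cleaned_titles)
###Output
_____no_output_____
###Markdown
With the terms cleaned up, the word cloud is generated from the TSV file: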
###Code
!pip install wordcloud
%matplotlib inline
from os import path
from wordcloud import WordCloud
# Read the whole text.
text = open('session_titles.tsv').read().lower()
# Generate a word cloud image
wordcloud = WordCloud(width=3000, height=2000, prefer_horizontal=1, stopwords=None).generate(text)
# Display the generated image:
# the matplotlib way:
import matplotlib.pyplot as plt
plt.figure(figsize=(30,15))
plt.imshow(wordcloud)
plt.axis("off")
plt.savefig("cloud2.png")
###Output
_____no_output_____ |
_notebooks/2022-05-06-fanng-day2.ipynb | ###Markdown
Faang Day 2
> Great Developer with great practice
- toc: true
- badges: true
- comments: true
- categories: [practice,dream]
- hide: false
3 questions on arrays
###Code
# K-concatenation
# Kadane's algorithm (maximum subarray sum) using DP
def maxsubarray(arr):
    dp = [0] * len(arr)
    ans = dp[0] = arr[0]
    for i in range(1, len(arr)):
        # either extend the best subarray ending at i-1 or start a new one at i
        dp[i] = max(arr[i], arr[i] + dp[i - 1])
        ans = max(ans, dp[i])
    return ans
def kcon(arr,k):
if k == 1:
return maxsubarray(arr)
else:
sum_arr = sum(arr)
new_arr = arr + arr
return max(maxsubarray(new_arr),maxsubarray(new_arr)+sum_arr*(k-2))
kcon([-1,2,4,-5,0,3],4)
# maximum circular subarray sum
def maxcir_subsum(arr):
sum_pos = maxsubarray(arr)
neg_arr = [-1*i for i in arr]
sum_neg = maxsubarray(neg_arr)
wrap_sum = -(sum(neg_arr) - sum_neg)
res = max(sum_pos,wrap_sum)
print(sum_pos,',',neg_arr,',',sum(neg_arr),',',sum_neg,',',wrap_sum)
return res
a = [11, 10, -20, 5, -3, -5, 8, -13, 10]
maxcir_subsum(a)
# finding subarray with given sum
def find_sum(arr,goal):
n = len(arr)
curr_sum = arr[0]
start = 0
i = 1
while i <= n:
while curr_sum > goal and start < i-1:
curr_sum -= arr[start]
start += 1
if curr_sum == goal:
print(start,i-1)
if i < n:
curr_sum += arr[i]
i += 1
arr = [15, 2, 4, 8, 9, 5, 10, 23]
goal = 23
find_sum(arr,goal)
###Output
1 4
7 7
|
notebooks/Fair Parameter Selection.ipynb | ###Markdown
Fair Parameter Selection
This notebook gives an overview of selecting parameters for the different models so that they can be compared in a fair manner. In order to make a fair comparison of the three models, we must evaluate the models using the same number of free parameters. By setting a baseline for the $TensorTrain$ K-value, we can compute what the K's for the CP and GMM models should be.
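As a quick illustration (a rough sketch only; the notebook itself relies on each model's own `n_parameters()` method), the GMM count used further down can be reproduced by hand as K mixture weights plus K*M means plus K*M*M covariance entries:
###Code
def gmm_free_parameters(K, M):
    # K mixture weights + K*M means + K*M*M (full) covariance entries,
    # mirroring the formula used for the GMM in the cell below
    return K + M * K + K * M * M

print(gmm_free_parameters(10, 2))  # 10 + 20 + 40 = 70
###Output
_____no_output_____
###Markdown
The comparison below computes this kind of count for all three models over a grid of K values.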
###Code
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
import models as m
%load_ext autoreload
%autoreload 2
K_baseline = 10
M_DIM = 2
K_MAX = 100
Ks = np.arange(2, K_MAX, 1)
Ms = np.arange(2, 3, 1)
params = np.zeros((3, len(Ms), len(Ks)))
legends = []
# compute number of parameters for each model
for i, M in enumerate(Ms):
for j, K in enumerate(Ks):
model_Train = m.TensorTrainGaussian(K, M)
model_CP = m.CPGaussian(K, M)
params[0, i, j] = model_Train.n_parameters()
params[1, i, j] = model_CP.n_parameters()
params[2, i, j] = K + M*K + K*M*M
legends.append('TT, M='+str(M))
legends.append('CP, M='+str(M))
legends.append('GMM, M='+str(M))
# Make a vertical line
val = params[0, Ms==M_DIM, Ks==K_baseline][0]
line = val*np.ones(Ks.shape)
legends.append('Equality line')
f,ax = plt.subplots(figsize=(10, 5))
for i in range(len(Ms)):
ax.plot(Ks, params[:,i].T, linewidth=3)
ax.plot(Ks, line, '--k', linewidth=3)
ax.legend(legends)
ax.set_xlabel('K of model')
ax.set_ylabel('Free parameters')
ax.grid('on')
ax.set_title('Number of free parameters vs. K for each model')
ax.set_ylim([0, 500])
ax.set_xlim([0, K_MAX])
plt.show()
# Find indices that achieve the same number of trainable parameters
idx_TT = np.argmax(params[0, Ms==M_DIM] >= val)
idx_CP = np.argmax(params[1, Ms==M_DIM] >= val)
idx_GMM = np.argmax(params[2, Ms==M_DIM] >= val)
# Number of components
K_TT = Ks[idx_TT]
K_CP = Ks[idx_CP]
K_GMM = Ks[idx_GMM]
print('Selected number of components (K-values)')
print(f'TensorTrain : {K_TT}')
print(f'CP : {K_CP}')
print(f'GMM(sklearn) : {K_GMM}')
###Output
Selected number of components (K-values)
TensorTrain : 10
CP : 82
GMM(sklearn) : 59
###Markdown
Method for getting fair K's
###Code
from utils import get_fair_Ks
Ks_tt = [5] #10, 15, 20, 25, 30, 35, 40, 50, 75, 100
M = 3
K_gmm, K_cp = get_fair_Ks(Ks_tt, M)
print(K_gmm)
print(K_cp)
###Output
[33]
[18]
|
_notebooks/2019-09-13-youtube-dl_commented.ipynb | ###Markdown
"Engine knock detection AI part 1/5"
> "This is the first notebook in a series where I'm following along with the fast.ai course Practical Deep Learning for Coders by training an AI to identify knocking in 4-stroke car petrol engines"
- image: images/motor-768750_1920.jpg > Important: This notebook is not guaranteed to work without modification. It's a while since I wrote this and haven't tested it since. For a summary of this project see [Engine knock detection AI part 5/5](http://niklasekman.com/2021/02/10/Train-model-updated.html) Background
The inspiration for this project comes from the presentation of [fast.ai](https://www.fast.ai/) course alumni projects by Jeremy Howard in some of the first lessons of the third installment of [Practical Deep Learning for Coders](https://course.fast.ai/) where training an image classifier using spectrograms to identify sounds is mentioned.
What this means in simpler terms is that an image classifying algorithm is presented with a series of pictures labeled with which class or case they represent. Each picture represents what that sound looks like if one were to plot the sound intensities at different frequencies as a function of time. Method
A set of recordings of both knocking and healthy sounding engines has first to be aquired in order to create a training set of spectrograms. I've chosen to create two playlists on youtube of videos specifically depicting car engines, one for each case. I made sure to just include petrol engines since my suspicion was that the sound of a diesel engine might throw things off.
As a non car mechanic I made that assumption based on the idea that if I could distinguish a well running engine from a knocking one then the resulting model could do it too. I found that I had some trouble making the distinction with some of the cold started diesels I listened to. Downloading audio files
After creating the playlists Google Drive is mounted to store the intermediary sound files.
###Code
#collapse-output
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.photos.readonly%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fpeopleapi.readonly&response_type=code
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
The youtube-dl python package is installed and the soundtrack from each of the videos is downloaded.
###Code
#collapse-output
!pip install --upgrade youtube-dl
import youtube_dl
#collapse-output
ydl = youtube_dl.YoutubeDL({'outtmpl': '%(id)s%(ext)s'})
ydl_opts = {
    # download the best available audio stream and let ffmpeg extract it as a 192 kbps mp3
    'format': 'bestaudio/best',
    'postprocessors': [{
        'key': 'FFmpegExtractAudio',
        'preferredcodec': 'mp3',
        'preferredquality': '192',
    }],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download(['https://www.youtube.com/playlist?list=PL9R1Zswn-XPCh3A3bV9Vnf7ui9YCZWo9w'])
!rm -rf ./sample_data
###Output
_____no_output_____
###Markdown
Checking to see that all the expected files are present.
###Code
!ls *.mp3
###Output
'1996 Subaru Outback, Normal Engine Sound-8BOtWY3RobA.mp3'
'2003 toyota corolla engine sound-ho38ZYkQJxs.mp3'
'2016-2018 Toyota Tacoma Loud Engine sound is Normal-nSB65kClSXM.mp3'
'2016 Kia Rio 1.6L V4 Engine - Normal Running Noises _ Sounds-nGQcC7giMkU.mp3'
'2017 Hyundai Elantra SE 2.0L 4 Cylinder - Normal Engine Running Noises-Eky3PMh76gY.mp3'
'2017 Kia Rio 1.6L 4 Cylinder GDI Engine - Normal Engine Sounds-2EQcfJAU7IM.mp3'
'2017 Nissan Versa Note - Normal Engine Running Noises - 1.6L Engine-PRO0HgD9qx4.mp3'
'2018 Hyundai Tucson 2.0 GDI Nu Engine Sound Normal-pLmX2ws7znE.mp3'
'3.2 fsi engine sound cold (normal or not)-mOFcfwLTNkY.mp3'
'Bmw f30 Noisy Engine Sound-bkPtY1pW82Y.mp3'
'Listen Toyota 2.4 VVT-i engine sound, when engine is very OK. Years 2002 to 2015-6GPgodkLSkQ.mp3'
'Mini R56S normal engine sound-8cXmy_U0_28.mp3'
'Nissan Frontier 4L v6 engine sound idle-YiwsC_UlDFY.mp3'
'Normal Engine Idle Sound Cold & Warm - 2006 Nissan Sentra-5q1uATQu8zg.mp3'
'Volvo S60 2.0T - Normal engine sound-uumxLAHbDsE.mp3'
###Markdown
Moving the sound files from the Colab instance to a folder on Google Drive.
###Code
!mv *.mp3 /content/drive/My\ Drive/Colab\ Notebooks/fast.ai/KnockKnock/data/normal/
###Output
_____no_output_____
###Markdown
Rinse and repeat for the other playlist.
###Code
#collapse-output
ydl_opts = {
'format': 'bestaudio/best',
'postprocessors': [{
'key': 'FFmpegExtractAudio',
'preferredcodec': 'mp3',
'preferredquality': '192',
}],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download(['https://www.youtube.com/playlist?list=PL9R1Zswn-XPCnpyXQLRPYVJr4BXJUYie8'])
!mv *.mp3 /content/drive/My\ Drive/Colab\ Notebooks/fast.ai/KnockKnock/data/knocking/
###Output
_____no_output_____ |
component-library/filter/filter.ipynb | ###Markdown
Filters rows of a pandas data frame based on a predicate.
Example: "predicate=~metadata.filename.str.contains('.gz')" => drops every row whose "filename" column contains '.gz' and keeps the rest.
###Code
# @param predicate (as described in documentation of the component)
# @param file_name csv file name
import os
predicate = os.environ.get('predicate')
file_name = os.environ.get('file_name', 'metadata.csv')
!pip3 install pandas==1.2.1
import pandas as pd
metadata = pd.read_csv(file_name)
exec('metadata = metadata[' + predicate + ']')
metadata.to_csv(file_name, index=False)
###Output
_____no_output_____ |
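###Markdown
For illustration only, a hypothetical way to exercise this component locally is to set its two parameters as environment variables before running the cell above:
###Code
import os

# hypothetical values for a local test run
os.environ['file_name'] = 'metadata.csv'
os.environ['predicate'] = "~metadata.filename.str.contains('.gz')"
# re-running the cell above would then keep only the rows whose "filename"
# does not contain '.gz' and write the result back to the CSV
###Output
_____no_output_____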
wspinbot.ipynb | ###Markdown
WSPIN Bot - Brought to you by WSPIN Social Experiment. Here to change the internet for the best. opensource humanism
For this notebook to function, we must first install the required packages. Notably we need the chatterbot package, as well as the chatterbot corpus, to allow for initialisation and training of our chat bot.
###Code
pip install chatterbot
pip install chatterbot_corpus
from chatterbot import ChatBot
bot = ChatBot(
'Norman',
storage_adapter='chatterbot.storage.SQLStorageAdapter',
logic_adapters=[
'chatterbot.logic.MathematicalEvaluation',
'chatterbot.logic.TimeLogicAdapter'
],
database_uri='sqlite:///database.sqlite3'
)
###Output
_____no_output_____
###Markdown
Here we create a basic bot that can only tell the time and perform mathematical operations and give the result to the user.
###Code
need_response = True
while need_response:
try:
bot_input = bot.get_response(input())
print(bot_input)
need_response = False
except(KeyboardInterrupt, EOFError, SystemExit):
break
###Output
_____no_output_____
###Markdown
And here we train it on a list representing a conversation.
###Code
chat_bot = ChatBot('greeter',
storage_adapter='chatterbot.storage.SQLStorageAdapter',
database_uri='sqlite:///database.sqlite3'
)
from chatterbot.trainers import ListTrainer
# train the newly created 'greeter' bot (not the earlier 'Norman' bot)
trainer = ListTrainer(chat_bot)
trainer.train([
'How are you?',
'I am good.',
'That is good to hear.',
'Thank you',
'You are welcome.'
])
# Get user input
user_input = input()
print(user_input)
# Get bot response
response = chat_bot.get_response(user_input)
print(response)
###Output
_____no_output_____
###Markdown
Next, we follow the terminal example provided by the docs.
###Code
from chatterbot import ChatBot
# Uncomment the following lines to enable verbose logging
import logging
logging.basicConfig(level=logging.INFO)
# Create a new instance of a ChatBot
bot = ChatBot(
'Terminal',
storage_adapter='chatterbot.storage.SQLStorageAdapter',
logic_adapters=[
'chatterbot.logic.MathematicalEvaluation',
'chatterbot.logic.TimeLogicAdapter',
'chatterbot.logic.BestMatch'
],
database_uri='sqlite:///database.db'
)
print("Type something to begin...")
# The following loop will execute each time the user enters input
while True:
try:
user_input=input()
bot_response=bot.get_response(user_input)
print(bot_response)
except(KeyboardInterrupt, EOFError, SystemExit):
break
###Output
_____no_output_____
###Markdown
In order to use a non-relational database, MongoDB for example, run the following commands! Reference: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-ubuntu/
Open a new terminal and launch these bash commands!
1. Import the public key used by the package management system. From a terminal, issue the following command to import the MongoDB public GPG key from https://www.mongodb.org/static/pgp/server-4.4.asc: `wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -` (if the import complains that gnupg is missing, run `sudo apt-get install gnupg` and then re-run `wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -`). Once that is done and you see an OK message printed to the standard output, keep going with this:
2. Create a list file for MongoDB. "The following instruction is for Ubuntu 20.04 (Focal). Create the /etc/apt/sources.list.d/mongodb-org-4.4.list file for Ubuntu 20.04 (Focal):" `echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list`
3. Reload the local package database: `sudo apt-get update`
4. Install the MongoDB packages: `sudo apt-get install -y mongodb-org`
And then start the MongoDB service (systemctl):
1. Start MongoDB: `sudo systemctl start mongod`
2. Verify that MongoDB has started successfully: `sudo systemctl status mongod`
You can optionally ensure that MongoDB will start following a system reboot by issuing the following command: `sudo systemctl enable mongod`
###Code
from chatterbot import ChatBot
# Uncomment the following lines to enable verbose logging
# import logging
# logging.basicConfig(level=logging.INFO)
# Create a new ChatBot instance
bot = ChatBot(
'Terminal',
storage_adapter='chatterbot.storage.MongoDatabaseAdapter',
logic_adapters=[
'chatterbot.logic.BestMatch'
],
database_uri='mongodb://localhost:27017/chatterbot-database'
)
print('Type something to begin...')
while True:
try:
user_input = input()
bot_response = bot.get_response(user_input)
print(bot_response)
# Press ctrl-c or ctrl-d on the keyboard to exit
except (KeyboardInterrupt, EOFError, SystemExit):
break
###Output
_____no_output_____
###Markdown
Preprocessors can be used to transform the input text and ensure that it will be comprehensible to our chatbot.
###Code
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer
# Uncomment the following lines to enable verbose logging
# import logging
# logging.basicConfig(level=logging.INFO)
# Create a new ChatBot instance
chatbot = ChatBot(
'Preprocessor Bot',
storage_adapter='chatterbot.storage.MongoDatabaseAdapter',
preprocessors=[
'chatterbot.preprocessors.clean_whitespace',
'chatterbot.preprocessors.unescape_html',
'chatterbot.preprocessors.convert_to_ascii'
],
logic_adapters=[
'chatterbot.logic.BestMatch',
],
database_uri='mongodb://localhost:27017/chatterbot-database'
)
'''
trainer = ChatterBotCorpusTrainer(chatbot)
trainer.train(
"chatterbot.corpus.english"
)
'''
print('Type something to begin...')
while True:
try:
user_input = input()
bot_response = chatbot.get_response(user_input)
print(bot_response)
# Press ctrl-c or ctrl-d on the keyboard to exit
except (KeyboardInterrupt, EOFError, SystemExit):
break
###Output
_____no_output_____
###Markdown
Specific Response Adapter
If the input that the chat bot receives matches the input text specified for this adapter, the specified response will be returned.
###Code
from chatterbot import ChatBot
# Create a new instance of a ChatBot
bot = ChatBot(
'Exact Response Example Bot',
storage_adapter='chatterbot.storage.SQLStorageAdapter',
# Specific response logic adapter responds to 'Help me!' specifically with a link
logic_adapters=[
{
'import_path': 'chatterbot.logic.BestMatch'
},
{
'import_path': 'chatterbot.logic.SpecificResponseAdapter',
'input_text': 'Help me!',
'output_text': 'Ok, here is a link: http://chatterbot.rtfd.org'
}
]
)
# Get a response given the specific input
response = bot.get_response('Help me!')
print(response)
###Output
_____no_output_____
###Markdown
Low confidence response example
###Code
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer
# Create a new instance of a ChatBot
bot = ChatBot(
'Example Bot',
storage_adapter='chatterbot.storage.SQLStorageAdapter',
logic_adapters=[
{
'import_path': 'chatterbot.logic.BestMatch',
'default_response': 'I am sorry, but I do not understand.',
'maximum_similarity_threshold': 0.90
}
]
)
trainer = ListTrainer(bot)
# Train the chat bot with a few responses
trainer.train([
'How can I help you?',
'I want to create a chat bot',
'Have you read the documentation?',
'No, I have not',
'This should help get you started: http://chatterbot.rtfd.org/en/latest/quickstart.html'
])
# Get a response for some unexpected input
response = bot.get_response('How do I make an omelette?')
print(response)
###Output
_____no_output_____
###Markdown
Django Integration
https://chatterbot.readthedocs.io/en/stable/django/index.html
Django installation: https://docs.djangoproject.com/en/dev/intro/install/
`pip install django chatterbot`
`cd ~/wspinbot/wspinbot` (open a terminal and cd into the directory), then run `django-admin startproject wspinsite`
`cd ./wspinbot/wspinbot/wspinsite`
`python manage.py migrate`
`python manage.py runserver`
Now let's see what a working example looks like! Forked from https://github.com/gunthercox/ChatterBot
We will create a new directory for this. Enter the following bash commands in a new terminal.
`cd ~ && mkdir djbot && cd djbot`
`git clone https://github.com/wspinsocial/ChatterBot.git`
`cd ChatterBot && python -m venv .`
`source bin/activate`
`pip install -r requirements.txt`
`pip install django chatterbot`
`pip install chatterbot_corpus`
`cd examples/django_app/`
`python manage.py runserver`
Very important modification required to make the server run on Django 3.2! https://www.programmersought.com/article/80546800879/
Another important modification to resolve the following error: "TemplateSyntaxError at /: 'staticfiles' is not a registered tag library." https://stackoverflow.com/questions/55929472/django-templatesyntaxerror-staticfiles-is-not-a-registered-tag-library
Finally, it is crucial to actually go ahead and migrate the database to be able to use the tables in the SQLite database: `python manage.py migrate`
In order to train our dj_bot, we must add some lines to settings.py and add a Python script that will train our bot and store the training in the database used for generating responses. So we create a file named train.py where we train the bot on the existing corpus provided by the chatterbot package. We must also specify the training class in our settings.py file in order for it to be taken into account by our django_app. Then we can run the following command at the root of our Django application to train our bot: `python train.py`, followed by `python manage.py runserver`.
To teach our conversational agent to speak proper French, all we have to do is add one line to our code: `'chatterbot.corpus.french'`. To teach it other languages, we can choose among the options found here: https://github.com/gunthercox/chatterbot-corpus/tree/master/chatterbot_corpus/data
Now, in order to make our chatbot perform like a modern artificial intelligence, we should feed it a larger dataset. For this, we will make use of the Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. `chatterbot.trainers.UbuntuCorpusTrainer` will allow chatbots to be trained with the data from the Ubuntu Dialog Corpus. Source: https://chatterbot.readthedocs.io/en/stable/training.html#training-with-the-ubuntu-dialog-corpus
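For illustration, here is a minimal sketch of what such a train.py might look like. This is an assumption rather than the project's actual script: the bot name and database path are hypothetical and would have to match the Django project's settings, and the settings.py changes themselves depend on the chatterbot version, so they are not shown here.
###Code
# train.py - hypothetical training script (adjust the name and database path to your settings.py)
from chatterbot import ChatBot
from chatterbot.trainers import ChatterBotCorpusTrainer

chatbot = ChatBot(
    'dj_bot',
    storage_adapter='chatterbot.storage.SQLStorageAdapter',
    # assumed path; point this at the database your Django app actually uses
    database_uri='sqlite:///db.sqlite3'
)

trainer = ChatterBotCorpusTrainer(chatbot)
# the English corpus plus the single line that adds French, as described above
trainer.train(
    'chatterbot.corpus.english',
    'chatterbot.corpus.french'
)
###Output
_____no_output_____
###Markdown
With a training script in place, we can move on to the much larger Ubuntu Dialogue Corpus: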
###Code
"""
This example shows how to train a chat bot using the
Ubuntu Corpus of conversation dialog.
"""
import logging
from chatterbot import ChatBot
from chatterbot.trainers import UbuntuCorpusTrainer
# Enable info level logging
logging.basicConfig(level=logging.INFO)
chatbot = ChatBot('Example Bot')
trainer = UbuntuCorpusTrainer(chatbot)
# Start by training our bot with the Ubuntu corpus data
trainer.train()
# Now let's get a response to a greeting
response = chatbot.get_response('How are you doing today?')
print(response)
###Output
[nltk_data] Downloading package stopwords to /home/wspin/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /home/wspin/nltk_data...
[nltk_data] Package averaged_perceptron_tagger is already up-to-
[nltk_data] date!
Downloading http://cs.mcgill.ca/~jpineau/datasets/ubuntu-corpus-1.0/ubuntu_dialogs.tgz
[= ]IOPub message rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
`--NotebookApp.iopub_msg_rate_limit`.
Current values:
NotebookApp.iopub_msg_rate_limit=1000.0 (msgs/sec)
NotebookApp.rate_limit_window=3.0 (secs)
(This IOPub warning repeated, unchanged, as the download progress bar advanced; the duplicate blocks are omitted here.)
INFO:chatterbot.chatterbot:File extracted to /home/wspin/ubuntu_data/ubuntu_dialogs
Process ForkPoolWorker-2 through ForkPoolWorker-13:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/usr/lib/python3.8/multiprocessing/pool.py", line 51, in starmapstar
    return list(itertools.starmap(args[0], args[1]))
  File "/home/wspin/.local/lib/python3.8/site-packages/chatterbot/trainers.py", line 203, in read_file
    statement.search_text = tagger.get_bigram_pair_string(statement.text)
  File "/home/wspin/.local/lib/python3.8/site-packages/chatterbot/tagging.py", line 170, in get_bigram_pair_string
    pos_tags.extend(self.get_pos_tags(words))
  File "/home/wspin/.local/lib/python3.8/site-packages/chatterbot/tagging.py", line 109, in get_pos_tags
    tags = pos_tag(words, lang=self.language.ISO_639)
  File "/home/wspin/.local/lib/python3.8/site-packages/nltk/tag/__init__.py", line 165, in pos_tag
    return _pos_tag(tokens, tagset, tagger, lang)
  File "/home/wspin/.local/lib/python3.8/site-packages/nltk/tag/__init__.py", line 122, in _pos_tag
    tagged_tokens = tagger.tag(tokens)
  File "/home/wspin/.local/lib/python3.8/site-packages/nltk/tag/perceptron.py", line 188, in tag
    tag, conf = self.model.predict(features, return_conf)
  File "/home/wspin/.local/lib/python3.8/site-packages/nltk/tag/perceptron.py", line 67, in predict
    scores[label] += value * weight
KeyboardInterrupt
(The remaining worker processes raised equivalent KeyboardInterrupt tracebacks, interleaved in the original output; most were interrupted while waiting on the multiprocessing queue lock.)
|
3. Practical Cases of Linear Regression .ipynb | ###Markdown
Functional Forms

Linear regression requires linearity in the parameters, not in the variables. For instance, an exponential form is nonlinear in the parameters:$$Y_{i}=\beta_{1} X_{i}^{\beta_{2}} e^{u_{i}}$$ Many of you might wonder what happens if the variables are nonlinear, such as$$Y=\beta_{1}+\beta_{2} X_{2}^{3}+\beta_{3} \sqrt{X_{3}}+\beta_{4} \log X_{4} + u_i$$It actually does not matter: the model is still linear in the parameters once the nonlinear transformations are applied,$$Y=\beta_{1}+\beta_{2} Z_2+\beta_{3}Z_3+\beta_{4}Z_4+u_i$$where $Z_2 = X_{2}^{3}$, $Z_3 = \sqrt{X_{3}}$ and $Z_4 =\log X_{4}$. The interpretation of the parameters, however, is no longer the same.

Log Form Transformation

We will explain how the log transformation is used through an economics example. If you have studied microeconomics, you probably remember the concept of elasticity: a unitless measurement of the relative changes of two variables. For example, the price elasticity of demand is defined as$$\frac{\Delta C/C}{\Delta P/P} = \frac{d C/d P}{C/P}$$If the price elasticity of demand is $0.3$, a one percent change in price causes a $0.3$ percent change in demand. If $C$ and $P$ take the form $C = \beta_1 P^{\beta_2}$, then$$\frac{d C}{d P}=\beta_1 \beta_2 P^{\beta_2-1}= \beta_2\frac{C}{P}$$Substituting this back into the elasticity of consumption with respect to price gives$$\frac{\Delta C/C}{\Delta P/P} = \beta_2$$ So $\beta_2$ is the elasticity, which is also why the exponential form is preferred when estimating elasticities. Taking the natural log of $C = \beta_1 P^{\beta_2}u$ gives$$\ln{C} = \ln{\beta_1}+\beta_2\ln{P}+\ln{u}$$which is, again, a linear regression. With a cosmetic substitution the model becomes$$C' = \beta_1'+\beta_2 P'+u'$$

Here is the estimation procedure:
1. Take the natural log of $C$ and $P$.
2. Estimate the regression by OLS.
3. $\beta_2$ has the direct interpretation of an elasticity.
4. To obtain $\beta_1$, take $\exp{\beta_1'}$.

We will retrieve _Total Personal Consumption Expenditures_ (PCE) and _Personal Consumption Expenditures on Durable Goods_ (PCEDG) from FRED.
###Code
start = dt.datetime(1950, 1, 1)
end = dt.datetime(2021, 1, 1)
df_exp = pdr.data.DataReader(['PCEDG','PCE'], 'fred' , start, end)
df_exp.columns = ['Exp_Dur', 'Tot_Exp']
df_exp = df_exp.dropna()
(df_exp/df_exp.iloc[0]).plot(grid=True, figsize=(12, 7), title = 'Total Exp. and Exp. on Durables (Normalised at 1.2010)')
plt.show()
X = np.log(df_exp['Tot_Exp'])
Y = np.log(df_exp['Exp_Dur'])
X = sm.add_constant(X) # adding a constant
model = sm.OLS(Y, X).fit()
print_model = model.summary()
print(print_model)
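# Step 4 of the procedure above (an illustrative addition): the fitted intercept is
# beta_1' = ln(beta_1), so the original-scale beta_1 is recovered with an exponential.
beta_1 = np.exp(model.params[0])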
md('That means every $1\%$ increase of total expenditure will bring up {:.4f}\% of durable good consumption.'.format(model.params[1]))
###Output
_____no_output_____ |
resume_matches.ipynb | ###Markdown
###Code
!pip install docx2txt
import docx2txt
# Extract plain text from the two .docx files (despite the variable name, the first file is itself a CV).
job_description = docx2txt.process('/content/Alice Clark CV.docx')
resume = docx2txt.process('/content/Smith Resume.docx')
print(resume)

content = [job_description, resume]

# Turn both documents into bag-of-words count vectors over a shared vocabulary.
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
matrix = cv.fit_transform(content)

# Pairwise cosine similarity of the two vectors; the off-diagonal entry is the match score.
from sklearn.metrics.pairwise import cosine_similarity
similarity_matrix = cosine_similarity(matrix)
print(similarity_matrix)
print('Resume match score: ' + str(similarity_matrix[1][0]))
###Output
_____no_output_____ |
Hemanshu_SD565_Assignment1.ipynb | ###Markdown
Introductory Machine Learning: Assignment 1

**Deadline:** Assignment 1 is due Thursday, September 23 at 11:59pm. Late work will not be accepted as per the course policies (see the Syllabus and Course policies on [Canvas](https://canvas.yale.edu)). Directly sharing answers is not okay, but discussing problems with the course staff or with other students is encouraged. You should start early so that you have time to get help if you're stuck. The drop-in office hours schedule can be found on [Canvas](https://canvas.yale.edu). You can also post questions or start discussions on [Ed Discussion](https://edstem.org/us/courses/9209/discussion/). The assignment may look long at first glance, but the problems are broken up into steps that should help you to make steady progress.

**Submission:** Submit your assignment as a .pdf on Gradescope. You can access Gradescope through Canvas on the left side of the class home page. The problems in each homework assignment are numbered. Note: When submitting on Gradescope, please select the correct pages of your pdf that correspond to each problem. This will allow graders to more easily find your complete solution to each problem. To produce the .pdf, please do the following in order to preserve the cell structure of the notebook:
1. Go to "File" at the top-left of your Jupyter Notebook
2. Under "Download as", select "HTML (.html)"
3. After the .html has downloaded, open it and then select "File" and "Print" (note you will not actually be printing)
4. From the print window, select the option to save as a .pdf

**Topics**
1. Linear regression
2. k-nearest-neighbor classification

This assignment will also help you to learn the essentials of Python, Pandas, and Jupyter notebooks.

Problem 1: Linear Regression with Covid Data

The New York Times Covid-19 Database is a county-level database of confirmed cases and deaths, compiled from state and local governments and health departments across the United States. The initial release of the database was on Thursday, March 26, 2020, and it is updated daily. The data are publicly available via GitHub: [https://github.com/nytimes/covid-19-data](https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html). In this problem we will only use the data aggregated across states.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import plotly.graph_objects as go
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Load the dataFirst, read the whole dataset including the accumulated cases and deaths for each state for each day.
###Code
covid_table = pd.read_csv("https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-states.csv")
covid_table = covid_table.drop('fips', axis=1)
covid_table.tail(10)
###Output
_____no_output_____
###Markdown
Aggregate data across statesWe can merge data from different states to get the total number of cases and deaths for the country. Here we also show an example of visualizing the data.
###Code
merged_data = covid_table.drop('state', axis=1)
merged_data = merged_data.groupby('date').sum()
merged_data.tail(10)
new_cases = np.diff(merged_data['cases'])
dates = np.array(merged_data['cases'].index)
plt.figure(figsize=(8,5))
plt.plot(dates, merged_data['cases'])
plt.xticks(dates[[0, 100, 200, 300, 400, 500, len(dates)-1]], rotation=90)
_ = plt.title('Cumulative cases in the US')
###Output
_____no_output_____
###Markdown
Problem 1.1Let's call April 1, 2021 to May 1, 2021 Period 1. The following code extracts the cumulative cases, deaths, and days during this period
###Code
first_date = '2021-04-01'
last_date = '2021-05-01'
merged_data_period1 = merged_data[(merged_data['cases'].index >= first_date) & (merged_data['cases'].index <= last_date)]
merged_data_period1.insert(2,"days",np.arange(len(merged_data_period1))+1)
merged_data_period1.head(10)
###Output
_____no_output_____
###Markdown
Problem 1.1.aVisualize the accumulated cases in period 1. You can use any graph you like, such as line plot, scatter plot, and bar plot. The x-axis should be `days`.
###Code
## -- please write code for visualization here. -- ##
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period1.index, y=merged_data_period1['cases'],mode='lines'))
fig.update_layout(title='Cumulative Cases in April 2021',xaxis_title='Date',yaxis_title='Cumulative Cases')
fig.show()
###Output
_____no_output_____
###Markdown
Problem 1.1.bNow, calculate the least-squares estimates of the coefficients for the linear model that includes a slope and an intercept: $$\text{cases}_i = \beta_0 + \beta_1 \text{days}_i + \epsilon_i$$You may either compute these values with explicit expressions, or use a package such as statsmodels.api.OLS. Use our demo from class as an example, if you wish.
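For reference, here is a hedged sketch of the "explicit expressions" route mentioned above, using only numpy; the statsmodels fit in the next cell should produce the same two numbers.

```
# Closed-form simple least squares: beta1 = Cov(x, y) / Var(x), beta0 = ybar - beta1 * xbar.
x = merged_data_period1['days'].to_numpy(dtype=float)
y = merged_data_period1['cases'].to_numpy(dtype=float)
beta1_hat = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()
beta0_hat = y.mean() - beta1_hat * x.mean()
print(beta0_hat, beta1_hat)
```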
###Code
## -- please compute linear regression here. -- ##
X = sm.add_constant(merged_data_period1['days'])
model = sm.OLS(merged_data_period1['cases'], X)
result = model.fit()
beta = [result.params[0], result.params[1]]
###Output
_____no_output_____
###Markdown
Problem 1.1.c Now, plot the data together with the linear fit, shown as a straight line.
###Code
## -- please write code for your visualization go here -- ##
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period1['days'], y=merged_data_period1['cases'],mode='markers', name = 'Actual Data'))
fig.add_trace(go.Scatter(x=merged_data_period1['days'], y=beta[0] + beta[1]*merged_data_period1['days'],mode='lines', name = 'Fitted Line'))
fig.update_layout(title='Cumulative Cases in April 2021',xaxis_title='Days',yaxis_title='Cumulative Cases')
fig.show()
###Output
_____no_output_____
###Markdown
Problem 1.2Modify the code in 1.1 to fit and visualize a linear regression model for Period 2, July 1, 2021 to August 1, 2021.
###Code
## -- please put your code to process the data here -- ##
first_date = '2021-07-01'
last_date = '2021-08-01'
merged_data_period2 = merged_data[(merged_data['cases'].index >= first_date) & (merged_data['cases'].index <= last_date)]
merged_data_period2.insert(2,"days",np.arange(len(merged_data_period2))+1)
#merged_data_period2.head(10)
###Output
_____no_output_____
###Markdown
Problem 1.2.aVisualize the data
###Code
## -- please write code for plotting the data here. --#
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period2.index, y=merged_data_period2['cases'],mode='lines'))
fig.update_layout(title='Cumulative Cases in July 2021',xaxis_title='Date',yaxis_title='Cumulative Cases')
fig.show()
###Output
_____no_output_____
###Markdown
Problem 1.2.bCompute a linear regression
###Code
## -- please write code for the linear regression and visualization here. -- ##
X2 = sm.add_constant(merged_data_period2['days'])
model2 = sm.OLS(merged_data_period2['cases'], X2)
result2 = model2.fit()
beta2 = [result2.params[0], result2.params[1]]
###Output
_____no_output_____
###Markdown
Problem 1.2.cPlot the data together with the linear regression here
###Code
## -- please write the code for your plot here -- ##
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period2['days'], y=merged_data_period2['cases'],mode='markers', name = 'Actual Data'))
fig.add_trace(go.Scatter(x=merged_data_period2['days'], y=beta2[0] + beta2[1]*merged_data_period2['days'],mode='lines', name = 'Fitted Line'))
fig.update_layout(title='Cumulative Cases in July 2021', xaxis_title='Days',yaxis_title='Cumulative Cases')
fig.show()
###Output
_____no_output_____
###Markdown
Problem 1.3Compare the linear regression results for 1.1 and 1.2. In which case does the model better fit the data? Please verify your answer *quantitatively*.
###Code
## -- please write your answer here. -- ##
str_april = 'R-squared for the regression model using April 2021 data:' + str(np.round(result.rsquared_adj, 4))
str_july = 'R-squared for the regression model using July 2021 data:' + str(np.round(result2.rsquared_adj, 4))
print(str_april + ' and ' + str_july)
if result.rsquared_adj > result2.rsquared_adj:
print('The model fit using data from April 2021 fits better')
else:
print('The model fit using data from July 2021 fits better')
###Output
R-squared for the regression model using April 2021 data:0.9961 and R-squared for the regression model using July 2021 data:0.9374
The model fit using data from April 2021 fits better
###Markdown
Problem 1.4 (Extra credit: 5 points)Repeat problems 1.1, 1.2, and 1.3 but this time using *multiple regression*, regressing the cumulative cases onto the number of days and the cumulative deaths on that day. Thus, each of your regressions should have three parameters: an intercept, a coefficient for days, and a coefficient for deaths. When you visualize the results, plot the predicted number of cases versus days, but use the number of deaths to compute your predicted values.
###Code
## -- please write your answer here. -- ##
'''Creating a model using April 2021 data'''
x_data = np.column_stack((merged_data_period1['days'], merged_data_period1['deaths']))
X = sm.add_constant(x_data)
model = sm.OLS(merged_data_period1['cases'], X)
result = model.fit()
beta = [result.params[0], result.params[1], result.params[2]]
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period1['days'], y=merged_data_period1['cases'],mode='markers', name = 'Actual Data'))
fig.add_trace(go.Scatter(x=merged_data_period1['days'], y=beta[0] + beta[1]*merged_data_period1['days'] + beta[2]*merged_data_period1['deaths'],mode='lines', name = 'Fitted Line'))
fig.update_layout(title='Cumulative Cases in April 2021',xaxis_title='Days',yaxis_title='Cumulative Cases')
fig.show()
'''Creating a model using July 2021 data'''
x_data = np.column_stack((merged_data_period2['days'], merged_data_period2['deaths']))
X = sm.add_constant(x_data)
model2 = sm.OLS(merged_data_period2['cases'], X)
result2 = model2.fit()
beta2 = [result2.params[0], result2.params[1], result2.params[2]]
fig = go.Figure()
fig.add_trace(go.Scatter(x=merged_data_period2['days'], y=merged_data_period2['cases'],mode='markers', name = 'Actual Data'))
fig.add_trace(go.Scatter(x=merged_data_period2['days'], y=beta2[0] + beta2[1]*merged_data_period2['days'] + beta2[2]*merged_data_period2['deaths'],mode='lines', name = 'Fitted Line'))
fig.update_layout(title='Cumulative Cases in July 2021', xaxis_title='Days',yaxis_title='Cumulative Cases')
fig.show()
'''Comparing regression results'''
str_april = 'R-squared for the regression model using April 2021 data:' + str(np.round(result.rsquared_adj, 4))
str_july = 'R-squared for the regression model using July 2021 data:' + str(np.round(result2.rsquared_adj, 4))
print(str_april + ' and ' + str_july)
if result.rsquared_adj > result2.rsquared_adj:
print('The model fit using data from April 2021 fits better')
else:
print('The model fit using data from July 2021 fits better')
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import dateutil
import random
import math
###Output
_____no_output_____
###Markdown
Problem 2: Romance vs. Action (30 points) Credit: Data 8 Part 1. Exploring the dataset In this problem, we will try to predict a movie's genre from the text of its screenplay. We have compiled a list of 5,000 words that occur in conversations between movie characters. For each movie, our dataset tells us the frequency with which each of these words occurs in certain conversations in its screenplay. All words have been converted to lowercase. Run the cell below to read the `movies` table. It may take up to a minute or so to load.
###Code
movies = pd.read_csv('https://raw.githubusercontent.com/YData123/sds265-fa21/main/assignments/assn1/movies.csv')
movies.head(10)
movies.iloc[125,[0, 1, 2, 3, 4, 5, 10, 30, 5005]]
###Output
_____no_output_____
###Markdown
The above cell prints a few columns of the row for the action movie *The Matrix*. The movie contains 3792 words. The word "it" appears 115 times, as it makes up a fraction $\frac{115}{3792} \approx 0.030327$ of the words in the movie. The word "not" appears 33 times, as it makes up a fraction $\frac{33}{3792} \approx 0.00870253$ of the words. The word "fling" doesn't appear at all.This numerical representation of a body of text, one that describes only the frequencies of individual words, is called a bag-of-words representation. A lot of information is discarded in this representation: the order of the words, the context of each word, who said what, the cast of characters and actors, etc. However, a bag-of-words representation is often used for machine learning applications as a reasonable starting point, because a great deal of information is also retained and expressed in a convenient and compact format. We will investigate whether this representation is sufficient to build an accurate genre classifier. All movie titles are unique. The `row_for_title` function provides fast access to the one row for each title.
###Code
def row_for_title(title):
"""Return the row for a title
"""
return movies[movies["Title"]==title]
###Output
_____no_output_____
###Markdown
For example, the fastest way to find the frequency of "hey" in the movie *The Terminator* is to access the `'hey'` item from its row. Check the original table to see if this worked for you!
###Code
row_for_title('the terminator')["hey"].item()
###Output
_____no_output_____
###Markdown
This dataset was extracted from [a dataset from Cornell University](http://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html). After transforming the dataset (e.g., converting the words to lowercase, removing profanity, and converting the counts to frequencies), this new dataset was created containing the frequency of 5000 common words in each movie.
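For intuition, the "converting the counts to frequencies" step is just a row-wise normalisation. A toy sketch (made-up numbers, not the real pipeline):

```
# Toy example only: turn per-movie word counts into frequencies by dividing by each row's total.
counts = pd.DataFrame({'hey': [2, 0], 'run': [3, 5]}, index=['movie_a', 'movie_b'])
freqs = counts.div(counts.sum(axis=1), axis=0)
```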
###Code
print('Words with frequencies:', len(movies.drop(movies.columns[np.arange(6)],axis=1).columns))
print('Movies with genres:', len(movies))
###Output
Words with frequencies: 5000
Movies with genres: 242
###Markdown
**Word Stemming**The columns other than "Title", "Genre", "Year", "Rating", " Votes" and " Words" in the `movies` table are all words that appear in some of the movies in our dataset. These words have been *stemmed*, or abbreviated heuristically, in an attempt to make different [inflected](https://en.wikipedia.org/wiki/Inflection) forms of the same base word into the same string. For example, the column "manag" is the sum of proportions of the words "manage", "manager", "managed", and "managerial" (and perhaps others) in each movie. This is a common technique used in machine learning and natural language processing.Stemming makes it a little tricky to search for the words you want to use, so we have provided another table that will let you see examples of unstemmed versions of each stemmed word. Run the code below to load it.
###Code
# Just run this cell.
vocab_mapping = pd.read_csv('https://raw.githubusercontent.com/YData123/sds265-fa21/main/assignments/assn1/stem.csv')
stemmed = list(movies.drop(movies.columns[np.arange(6)],axis=1).columns)
vocab_table = vocab_mapping[vocab_mapping["Stem"].isin(stemmed)]
vocab_table = vocab_table.sort_values('Stem')
vocab_table.iloc[np.arange(2000, 2010)]
###Output
_____no_output_____
###Markdown
Problem 2.1.a:Assign `stemmed_alternating` to the stemmed version of the word "alternating".
###Code
# Set stemmed_alternating to the stemmed version of "alternating" (which
# should be a string). Use vocab_table.
stemmed_alternating = ...
stemmed_alternating
###Output
_____no_output_____
###Markdown
Problem 2.1.b:Assign `unstemmed_run` to an array of words in `vocab_table` that have "run" as its stemmed form.
###Code
# Set unstemmed_run to the unstemmed versions of "run" (which
# should be an array of string).
unstemmed_run = ...
unstemmed_run
###Output
_____no_output_____
###Markdown
**Splitting the dataset**We're going to use our `movies` dataset for two purposes.1. First, we want to *train* a movie genre classifier.2. Second, we want to *test* the performance of the classifier.So, we need two different datasets: *training* and *test*.The purpose of a classifier is to classify unseen data that is similar to the training data. Therefore, we must ensure that there are no movies that appear in both sets. We do so by splitting the dataset randomly. The dataset has already been permuted randomly, so it's easy to split. We just take the top for training and the rest for test. Run the code below (without changing it) to separate the datasets into two tables.
###Code
# Here we have defined the proportion of our data
# that we want to designate for training as 17/20ths
# of our total dataset. 3/20ths of the data is
# reserved for testing.
training_proportion = 17/20
num_movies = len(movies)
num_train = int(num_movies * training_proportion)
num_test = num_movies - num_train
train_movies = movies.iloc[np.arange(num_train)]
test_movies = movies.iloc[np.arange(num_train, num_movies)]
print("Training: ", len(train_movies), ";",
"Test: ", len(test_movies))
###Output
Training: 205 ; Test: 37
###Markdown
Problem 2.1.c:Draw a horizontal bar chart with two bars that show the proportion of Action movies in each dataset. Complete the function `action_proportion` first; it should help you create the bar chart.
###Code
def action_proportion(dataframe):
"""Return the proportion of movies in a table that have the Action genre."""
return ...
###Output
_____no_output_____
###Markdown
Part 2. K-Nearest Neighbors: A guided examplek-Nearest Neighbors (k-NN) is a classification algorithm. Given some *attributes* (also called *features*) of an unseen example, it decides whether that example belongs to one or the other of two categories based on its similarity to previously seen examples. Predicting the category of an example is called *labeling*, and the predicted category is also called a *label*.An attribute (feature) we have about each movie is *the proportion of times a particular word appears in the movies*, and the labels are two movie genres: romance and action. The algorithm requires many previously seen examples for which both the attributes and labels are known: that's the `train_movies` dataframe.To build understanding, we're going to visualize the algorithm instead of just describing it. **Classifying a movie**In k-NN, we classify a movie by finding the `k` movies in the *training set* that are most similar according to the features we choose. We call those movies with similar features the *nearest neighbors*. The k-NN algorithm assigns the movie to the most common category among its `k` nearest neighbors.Let's limit ourselves to just 2 features for now, so we can plot each movie. The features we will use are the proportions of the words "money" and "feel" in the movie. Taking the movie "Batman Returns" (in the test set), 0.000502 of its words are "money" and 0.004016 are "feel". This movie appears in the test set, so let's imagine that we don't yet know its genre.First, we need to make our notion of similarity more precise. We will say that the *distance* between two movies is the straight-line distance between them when we plot their features in a scatter diagram. This distance is called the Euclidean ("yoo-KLID-ee-un") distance, whose formula is $\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$.For example, in the movie *Titanic* (in the training set), 0.0009768 of all the words in the movie are "money" and 0.0017094 are "feel". Its distance from *Batman Returns* on this 2-word feature set is $$\sqrt{(0.000502 - 0.0009768)^2 + (0.004016 - 0.0017094)^2} \approx 0.00235496.$$ (If we included more or different features, the distance could be different.)A third movie, *The Avengers* (in the training set), is 0 "money" and 0.001115 "feel".The function below creates a plot to display the "money" and "feel" features of a test movie and some training movies. As you can see in the result, *Batman Returns* is more similar to *Titanic* than to *The Avengers* based on these features. However, we know that *Batman Returns* and *The Avengers* are both action movies, so intuitively we'd expect them to be more similar. Unfortunately, that isn't always the case. We'll discuss this more later.
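A quick numerical check of the worked distance above (values copied from the text; this is just arithmetic, not part of the assignment):

```
# Distance between "batman returns" and "titanic" on the two features quoted above.
np.sqrt((0.000502 - 0.0009768) ** 2 + (0.004016 - 0.0017094) ** 2)   # approximately 0.00235496
```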
###Code
# Just run this cell.
def plot_embeddings(M_reduced, word2Ind, words):
"""
Plot in a scatterplot the embeddings of the words specified in the list "words".
Include a label next to each point.
"""
for word in words:
x, y = M_reduced[word2Ind[word]]
plt.scatter(x, y, marker='x', color='red')
plt.text(x+.03, y+.03, word, fontsize=9)
plt.show()
M_reduced_plot_test = np.array([[1, 1], [-1, -1], [1, -1], [-1, 1], [0, 0]])
word2Ind_plot_test = {'test1': 0, 'test2': 1, 'test3': 2, 'test4': 3, 'test5': 4}
words = ['test1', 'test2', 'test3', 'test4', 'test5']
plot_embeddings(M_reduced_plot_test, word2Ind_plot_test, words)
# Just run this cell.
def plot_with_two_features(test_movie, training_movies, x_feature, y_feature):
"""Plot a test movie and training movies using two features."""
test_row = row_for_title(test_movie)
test_x = test_row[x_feature].item()
test_y = test_row[y_feature].item()
plt.scatter(test_x, test_y, s=100)
plt.text(test_x, test_y+.0005, test_movie, fontsize=20)
for movie in training_movies:
row = row_for_title(movie)
train_x = row[x_feature].item()
train_y = row[y_feature].item()
plt.scatter(train_x, train_y, s=100)
plt.text(train_x, train_y+.0005, movie, fontsize=20)
plt.show()
plt.figure(figsize=(12, 8))
plt.xlim(-0.0005, 0.002)
plt.ylim(-0.001, 0.006)
plt.xlabel('money', fontsize=25)
plt.ylabel('feel', fontsize=25)
training = ["titanic", "the avengers"]
plot_with_two_features("batman returns", training, "money", "feel")
###Output
_____no_output_____
###Markdown
Problem 2.2.a:Compute the distance between the two action movies, *Batman Returns* and *The Avengers*, using the `money` and `feel` features only. Assign it the name `action_distance`.**Note:** If you have a row, you can use `item` to get a value from a column by its name. For example, if `r` is a row, then `r["Genre"].item()` is the value in column `"Genre"` in row `r`.*Hint*: Remember the function `row_for_title`, redefined for you below.
###Code
def row_for_title(title):
"""Return the row for a title
"""
return movies[movies["Title"]==title]
batman = row_for_title("batman returns")
avengers = row_for_title("the avengers")
action_distance = ...
action_distance
###Output
_____no_output_____
###Markdown
Below, we've added a third training movie, *The Terminator*. Before, the point closest to *Batman Returns* was *Titanic*, a romance movie. However, now the closest point is *The Terminator*, an action movie.
###Code
plt.figure(figsize=(12, 8))
plt.xlim(-0.0005, 0.002)
plt.ylim(-0.001, 0.006)
plt.xlabel('money', fontsize=25)
plt.ylabel('feel', fontsize=25)
training = ["the avengers", "titanic", "the terminator"]
plot_with_two_features("batman returns", training, "money", "feel")
###Output
_____no_output_____
###Markdown
Problem 2.2.b:Complete the function `distance_two_features` that computes the Euclidean distance between any two movies, using two features. The last two lines call your function to show that *Batman Returns* is closer to *The Terminator* than *The Avengers*.
###Code
def distance_two_features(title0, title1, x_feature, y_feature):
"""Compute the distance between two movies with titles title0 and title1
Only the features named x_feature and y_feature are used when computing the distance.
"""
row0 = ...
row1 = ...
...
for movie in ["the terminator", "the avengers"]:
movie_distance = distance_two_features(movie, "batman returns", "money", "feel")
print(movie, 'distance:\t', movie_distance)
###Output
the terminator distance: None
the avengers distance: None
###Markdown
Problem 2.2.c:Define the function `distance_from_batman_returns` so that it works as described in its documentation.**Note:** Your solution should not use arithmetic operations directly. Instead, it should make use of existing functionality above!
###Code
def distance_from_batman_returns(title):
"""The distance between the given movie and "batman returns", based on the features "money" and "feel".
This function takes a single argument:
title: A string, the name of a movie.
"""
...
###Output
_____no_output_____
###Markdown
Problem 2.2.d:Using the features `"money"` and `"feel"`, what are the names and genres of the 7 movies in the **training set** closest to "batman returns"? To answer this question, make a table named `close_movies` containing those 7 movies with columns `"Title"`, `"Genre"`, `"money"`, and `"feel"`, as well as a column called `"distance from batman"` that contains the distance from "batman returns". The dataframe should be **sorted in ascending order by `distance from batman`**.*Hint*: You may find the function [`insert`](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.insert.html) useful.
###Code
# The sample solution took multiple lines.
...
close_movies = ...
close_movies
###Output
_____no_output_____
###Markdown
Problem 2.2.e:Next, we'll classify "batman returns" based on the genres of the closest movies. To do so, define the function `most_common` so that it works as described in its documentation below.
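As a generic illustration (not necessarily the intended solution), pandas can return the most frequent value in a column via `value_counts`:

```
# Hedged sketch with a toy frame; value_counts().idxmax() returns the most frequent value in a column.
toy = pd.DataFrame({'Genre': ['Romance', 'Action', 'Romance']})
toy['Genre'].value_counts().idxmax()   # -> 'Romance'
```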
###Code
def most_common(label, dataframe):
"""The most common element in a column of a table.
This function takes two arguments:
label: The label of a column, a string.
dataframe: A dataframe.
It returns the most common value in that column of that table.
In case of a tie, it returns any one of the most common values
"""
...
# Calling most_common on your table of 7 nearest neighbors classifies
# "batman returns" as a romance movie, 5 votes to 2.
most_common('Genre', close_movies)
###Output
_____no_output_____
###Markdown
Congratulations, you've classified your first movie! However, we can see that the classifier doesn't work too well since it categorized *Batman Returns* as a romance movie. Let's see if we can do better! Part 3. FeaturesNow, we're going to extend our classifier to consider more than two features at a time.Euclidean distance still makes sense with more than two features. For `n` different features, we compute the difference between corresponding feature values for two movies, square each of the `n` differences, sum up the resulting numbers, and take the square root of the sum. Problem 2.3.a:Write a function to compute the Euclidean distance between two **arrays** of features of *arbitrary* (but equal) length. Use it to compute the distance between the first movie in the training set and the first movie in the test set, *using all of the features*. (Remember that the first six columns of your tables are not features.)**Note:** To convert rows to arrays, use `np.array`. For example, if `df` was a dataframe, `np.array(df.iloc[i])` converts row i of `df` into an array.
###Code
def distance(features1, features2):
"""The Euclidean distance between two arrays of feature values."""
...
distance_first_to_first = ...
distance_first_to_first
###Output
_____no_output_____
###Markdown
**Creating your own feature set**Unfortunately, using all of the features has some downsides. One clear downside is *computational* -- computing Euclidean distances just takes a long time when we have lots of features. You might have noticed that in the last question!So we're going to select just 20. We'd like to choose features that are very *discriminative*. That is, features which lead us to correctly classify as much of the test set as possible. This process of choosing features that will make a classifier work well is sometimes called *feature selection*, or more broadly *feature engineering*. Problem 2.3.b:In this question, we will help you get started on selecting more effective features for distinguishing romance from action movies. The plot below (generated for you) shows the average number of times each word occurs in a romance movie on the horizontal axis and the average number of times it occurs in an action movie on the vertical axis.  Problem 2.3.c:Using the plot above, choose 20 common words that you think might let you distinguish between romance and action movies. Make sure to choose words that are frequent enough that every movie contains at least one of them. Don't just choose the 20 most frequent, though... you can do much better.You might want to come back to this question later to improve your list, once you've seen how to evaluate your classifier.
###Code
# Set my_20_features to a list of 20 features (strings that are column labels)
my_20_features = []
train_20 = train_movies[my_20_features]
test_20 = test_movies[my_20_features]
###Output
_____no_output_____
###Markdown
In two sentences or less, describe how you selected your features. *Write your answer here, replacing this text.* Next, let's classify the first movie from our test set using these features. You can examine the movie by running the cells below. Do you think it will be classified correctly?
###Code
print("Movie:")
print(test_movies.iloc[0,[0,1]])
print("Features:")
print(test_20.iloc[0])
###Output
Movie:
Title the mummy
Genre action
Name: 205, dtype: object
Features:
Series([], Name: 205, dtype: float64)
###Markdown
As before, we want to look for the movies in the training set that are most like our test movie. We will calculate the Euclidean distances from the test movie (using the 20 selected features) to all movies in the training set. You could do this with a `for` loop, but to make it computationally faster, we have provided a function, `fast_distances`, to do this for you. Read its documentation to make sure you understand what it does. (You don't need to understand the code in its body unless you want to.)
###Code
# Just run this cell to define fast_distances.
def fast_distances(test_row, train_dataframe):
"""An array of the distances between test_row and each row in train_rows.
Takes 2 arguments:
test_row: A row of a table containing features of one
test movie (e.g., test_20.iloc[0]).
train_table: A table of features (for example, the whole
table train_20)."""
assert len(train_dataframe.columns) < 50, "Make sure you're not using all the features of the movies table."
    counts_matrix = np.asmatrix(train_dataframe.values)  # use the passed-in table, not the global train_20
diff = np.tile(test_row.values, [counts_matrix.shape[0], 1]) - counts_matrix
np.random.seed(0) # For tie breaking purposes
distances = np.squeeze(np.asarray(np.sqrt(np.square(diff).sum(1))))
eps = np.random.uniform(size=distances.shape)*1e-10 #Noise for tie break
distances = distances + eps
return distances
###Output
_____no_output_____
###Markdown
Problem 2.3.d:Use the `fast_distances` function provided above to compute the distance from the first movie in the test set to all the movies in the training set, **using your set of 20 features**. Make a new dataframe called `genre_and_distances` with one row for each movie in the training set and three columns:* The `"Title"` of the training movie* The `"Genre"` of the training movie* The `"Distance"` from the first movie in the test set Ensure that `genre_and_distances` is **sorted in increasing order by distance to the first test movie**.*Hint*: You may find the function [`sort_values`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html) useful.
###Code
genre_and_distances = ...
genre_and_distances
###Output
_____no_output_____
###Markdown
Problem 2.3.e:Now compute the 5-nearest neighbors classification of the first movie in the test set. That is, decide on its genre by finding the most common genre among its 5 nearest neighbors in the training set, according to the distances you've calculated. Then check whether your classifier chose the right genre. (Depending on the features you chose, your classifier might not get this movie right, and that's okay.)
###Code
# Set my_assigned_genre to the most common genre among these.
my_assigned_genre = ...
# Set my_assigned_genre_was_correct to True if my_assigned_genre
# matches the actual genre of the first movie in the test set.
my_assigned_genre_was_correct = ...
print("The assigned genre, {}, was{}correct.".format(my_assigned_genre, " " if my_assigned_genre_was_correct else " not "))
###Output
The assigned genre, Ellipsis, was correct.
###Markdown
**A classifier function**Now we can write a single function that encapsulates the whole process of classification. Problem 2.3.f:Write a function called `classify`. It should take the following four arguments:* A row of features for a movie to classify (e.g., `test_20.iloc[0]`).* A table with a column for each feature (e.g., `train_20`).* An array of classes that has as many items as the previous table has rows, and in the same order.* `k`, the number of neighbors to use in classification.It should return the class a `k`-nearest neighbor classifier picks for the given row of features (the string `'Romance'` or the string `'Action'`).*Hint:* You may find [`Counter().most_common()`](https://docs.python.org/3/library/collections.htmlcollections.Counter) helpful for finding the classification result.
###Code
def classify(test_row, train_rows, train_labels, k):
"""Return the most common class among k nearest neigbors to test_row."""
distances = fast_distances(test_row, train_rows)
genre_and_distances = ...
...
###Output
_____no_output_____
###Markdown
Problem 2.3.g:Assign `king_kong_genre` to the genre predicted by your classifier for the movie "king kong" in the test set, using **11 neighbors** and using your 20 features.
###Code
# The sample solution first defined a row called king_kong_features.
king_kong_features = ...
king_kong_genre = ...
king_kong_genre
###Output
_____no_output_____
###Markdown
Finally, when we evaluate our classifier, it will be useful to have a classification function that is specialized to use a fixed training set and a fixed value of `k`. Problem 2.3.h:Create a classification function that takes as its argument a row containing your 20 features and classifies that row using the 11-nearest neighbors algorithm with `train_20` as its training set.
###Code
def classify_feature_row(row):
...
# When you're done, this should produce 'Romance' or 'Action'.
classify_feature_row(test_20.iloc[0])
###Output
_____no_output_____
###Markdown
Part 4: Evaluating your classifierNow that it's easy to use the classifier, let's see how accurate it is on the whole test set. Problem 2.4.a:Use `classify_feature_row` and [`pandas.DataFrame.apply`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html) (or a loop) to classify every movie in the test set. Assign these guesses as an array to `test_guesses`. **Then**, compute the proportion of correct classifications.
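As a toy illustration of the last step, the proportion of correct classifications is just the mean of an elementwise comparison between two label arrays (hypothetical values shown):

```
guesses = np.array(['Action', 'Romance', 'Action'])
truth = np.array(['Action', 'Action', 'Action'])
np.mean(guesses == truth)   # -> 0.666...
```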
###Code
test_guesses = ...
proportion_correct = ...
proportion_correct
###Output
_____no_output_____
###Markdown
Problem 2.4.b:An important part of evaluating your classifiers is figuring out where they make mistakes. Assign the name `test_movie_correctness` to a dataframe with three columns, `'Title'`, `'Genre'`, and `'Was correct'`. The last column should contain `True` or `False` depending on whether or not the movie was classified correctly.
###Code
test_movie_correctness = ...
test_movie_correctness.sort_values('Was correct')
###Output
_____no_output_____ |
Data_Viz_Titanic/titanic_data_analysis_walkthrough.ipynb | ###Markdown
Titanic Ship Manifest Data Analysis Run this cell below if you'd like to, but it's not required at all.
###Code
# Changing logging_enable = True gives consent to anonymously log. No personal data is collected,
# just what code is run while working. We like to see how far people get in this lesson to help us design better ones.
from Logger import OnlineLogger
logging_enable = False
notebook = "titanic_data_analysis_walkthrough.ipynb"
course = "2021-10-21-Titanic-PythonDataAnalysis-In-Person"
if logging_enable == True:
OnlineLogger.start(notebook,course)
###Output
_____no_output_____
###Markdown
----------- We've Got to Do Some Install(s)Because it's good to know how to do that sort of thing
###Code
# This is a shell command to install/update plotnine for later. Let's practice!
# Run it in a terminal, or uncomment the next line to run it from the notebook:
#!conda install -c conda-forge plotnine
import math
#import matplotlib as mpl ### May need to uncomment these two lines for mac osx users
#mpl.use('TkAgg') ###
from matplotlib import pyplot as plt
plt.style.use('ggplot')
import plotnine as p9
import pandas as pd
import numpy as np
%matplotlib inline
# Here is where I define all the variables that I may want to change later on
###Output
_____no_output_____
###Markdown
Let's load in the dataset
###Code
#You'll have to find your own path! You're looking for /data/train.csv
df = pd.read_csv("data/train.csv")
df.head(12)
df.tail()
###Output
_____no_output_____
###Markdown
Let's see the types that were imported on our behalf
###Code
df.dtypes
df.info()
df.describe()
###Output
_____no_output_____
###Markdown
See the shape of the dataset
###Code
df.shape
###Output
_____no_output_____
###Markdown
Here we can see that it has 891 rows of data and 12 attributes worth of information.
###Code
len(df)
len(df.columns)
# Try to bring in the full data set using the read csv function! You can find data in the /data/ folder
# where df.columns is
df.columns
df["Name"]
my_famous_passenger = df[df["Name"] == "Guggenheim, Mr. Benjamin"]
print(my_famous_passenger)
### Let's get some information about a column
### Like mean age on the boat
print(df["Age"].mean())
### Fare
#df["Fare"].mean()
print(df["Fare"].describe())
### Fare
df["Fare"].mean()
df.describe()
my_rich_passenger = df[df["Fare"] == 0.000]
print(my_rich_passenger)
#### Let's rearrange some columns. This would be very hard to do using a csv library by hand.
#### pandas allows us to do this very intuitively
titanic_cols = list(df.columns.values)
#cols = list(df.columns.values)
#print(cols)
print(titanic_cols)
#Using that list above, we can create a new list, with the values rearranged.
cols = ['Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Ticket', 'Fare', 'Cabin', 'Embarked', 'PassengerId']
new_df = df[cols]
new_df.head()
#### We can create new dataframes from a few attributes
new_df = df[["Sex","Age"]]
new_df.head(10)
df_of_women = df[df["Sex"] == "female"]
df_of_men = df[df["Sex"] == "male"]
df_of_women_noNA = df_of_women.dropna()
df_of_women_noNA.describe()
df_of_men.describe()
# Exercise
# Create three data frames. Capture them by passenger class 'Pclass'.
# There are three of them. Then figure out the size of each one.
df_pclass1 = df[(df['Pclass'] == 1) | (df['Pclass'] == 2)]  # combine boolean masks with |, not np.logical_or
df_pclass1.head()
df_pclass_1 = df[df["Pclass"] == 1]
df_pclass_1.head()
df_pclass_1.shape
df_pclass_2 = df[df["Pclass"] == 2]
df_pclass_2.head()
df_pclass_2.shape
df_pclass_3 = df[df["Pclass"] == 3]
df_pclass_3.head()
df_pclass_3.shape
###Output
_____no_output_____
###Markdown
Observations: There were many people in third class, more than in the other two classes of passengers combined. We can create new attributes from other attributes!
###Code
df['FamilySize'] = df['SibSp'] + df['Parch']
df.head()
###Output
_____no_output_____
###Markdown
Since we know that Parch is the number of parents or children onboard, and SibSp is the number of siblings or spouses, we could collect those together as a FamilySize
###Code
df["Age"].hist()
df["Age"].dropna().hist(bins=16, range=(0,80))
df["Fare"].hist()
plt.scatter(df['Fare'], df['Survived'])
plt.show()
##### Back to the titanic. So we have our original dataset
df.head()
#### Lets group them by gender
grouped_by_sex = df.groupby(["Sex"])
grouped_by_sex.describe()
#### Lets group them by gender
df_grouped_by_sex_and_pclass = df.groupby(["Sex", "Pclass"])
df_grouped_by_sex_and_pclass.describe()
#### Lets group them by gender
df_grouped_by_sex_and_pclass_survived = df.groupby(["Sex", "Pclass", "Survived"])
df_grouped_by_sex_and_pclass_survived.describe()
#df_grouped_by_sex_and_pclass_survived.head()
df_sex_passclass_survival = df.groupby(['Sex', 'Pclass',"Survived"]).count()
df_sex_passclass_survival ### Count of records in each group throughout the dataset
###Output
_____no_output_____
###Markdown
--------------------------------- A Quick Tangent, Or, "Why I'm not a big fan of matplotlib"
###Code
# Lets create a scatter plot
d = {'one' : np.random.rand(10),
'two' : np.random.rand(10)}
print(d)
# And call it something so we can plot it
df_crap = pd.DataFrame(d)
df_crap.plot(style=['ro','bx'])
###Output
_____no_output_____
###Markdown
Needless to say, this isn't publication quality. And honestly, it'll most likely never be. Actual code for a multisubplot figure in matplotlib that I wrote as my first program:%matplotlib inline define figure size, column layout, grid layoutfigsize = (15, (len(query_api_chrs_final))+30)cols = 2gs = gridspec.GridSpec(len(output_dict) // cols + 1, cols) These are the "Tableau 20" colors as RGB tableau20 = [(31, 119, 180), (174, 199, 232), (255, 127, 14), (255, 187, 120), (44, 160, 44), (152, 223, 138), (214, 39, 40), (255, 152, 150), (148, 103, 189), (197, 176, 213), (140, 86, 75), (196, 156, 148), (227, 119, 194), (247, 182, 210), (127, 127, 127), (199, 199, 199), (188, 189, 34), (219, 219, 141), (23, 190, 207), (158, 218, 229)] Scale the RGB values to the [0, 1] range, which is the format matplotlib accepts Yeah I'm actually iterating over the colors to color things in the plot here because as a beginner I couldn't figure out a better way first :|```for i in range(len(tableau20)): r, g, b = tableau20[i] tableau20[i] = (r / 255., g / 255., b / 255.)``` Then I figured out if I added a dependency that can't really be moved around anywhere I could make the code slightly easier`current_palette = sns.color_palette("husl", 40)`("Set2", 30)tableau20 Now the actual figurefig = plt.figure(figsize=figsize, frameon=False)subplt_count = -1ax = [] Yup, looping again to get the number of subplots, their size, and where to put them in the final figure```for tchr in listofchrgraph: subplt_count += 1 print "Plotting subplot: "+str(subplt_count) count = 0 row = (subplt_count // cols) col = subplt_count % cols ax.append(fig.add_subplot(gs[row, col])) for qchr in output_dict[tchr]: count += 1 try: if (max(output_dict[tchr][qchr].itervalues()))>5: x = output_dict[tchr][qchr].keys() y = output_dict[tchr][qchr].values() Sets up plotting conditions ax[-1].spines["top"].set_visible(False) ax[-1].spines["right"].set_visible(False) ax[-1].spines["left"].set_visible(True) ax[-1].spines["bottom"].set_visible(True) ax[-1].patch.set_visible(False) ax[-1].get_xaxis().tick_bottom() ax[-1].get_yaxis().tick_left() ax[-1].plot(x, y, color=current_palette[count], lw=2, label=str(qchr)) ax[-1].set_title(label='Target Chromosome: '+species_name_filter+" "+ tchr, fontweight='bold', fontsize=14, y=1.1, loc='left') ax[-1].set_xlabel('Window Iteration\n(Gene number)', fontsize=12, fontweight='bold') ax[-1].set_ylabel('% Retention\n(Syntenic genes/window size)', fontsize=12, fontweight='bold') ax[-1].legend(bbox_to_anchor=(1.25, 1.13), loc=1, frameon=False, title=" Query\nChromosome", fontsize=10) else: continue except ValueError: continue``` This is code I tried because Stack Overflow said it'd help; it didn't, but I didn't have the heart to actually delete it because I wasn't sure if it'd be useful again some day. It wasn't. fig.tight_layout(pad=2, w_pad = 6) Taking all that above and actually calling the plot```fig.subplots_adjust(wspace=0.45, hspace=0.6)plt.savefig("~/Desktop/SynMapFractBiasAnalysis/fractbias_figure"+str(species_name_filter)+".png", transparent=False)``` But I got something like this This code was changed to six lines of Plotly code. Thank you D3 and Plotly. ------------------------------------------------- Plotting considerations1. List out the head or columns of dataframes of interest.1. Be sure you're getting correct dataframe while graphing.
###Code
# Example 1
df_sex_passclass_survival.head()
###Output
_____no_output_____
###Markdown
Plotting with plotnineWhich is totally ** cough ** ggplot from R ** cough ** but in Python. And that's great. It even uses the same grammar as ggplot. So we can leverage all the knowledge from ggplot examples for everything we're going to do.
###Code
# Create the parts of the plot
titanic_plot = p9.ggplot(data=df_sex_passclass_survival,
mapping=p9.aes(x='Fare', y='Age'))
# Draw the plot
titanic_plot + p9.geom_point()
###Output
_____no_output_____
###Markdown
**Hypothesis 1:** There was a vampire on board the Titanic. Not only did this vampire report their actual age, but they also paid the standard "Passenger Endangerment and Loss of Blood" Ticket Surcharge. This is why that ticket was $300.**Hypothesis 0 =** Something is wrong.
###Code
# Let's try the same thing, but with the original dataframe
titanic_plot = p9.ggplot(data=df,
mapping=p9.aes(x='Fare', y='Age'))
# Draw the plot
titanic_plot + p9.geom_point()
###Output
/opt/anaconda3/lib/python3.8/site-packages/plotnine/layer.py:401: PlotnineWarning: geom_point : Removed 177 rows containing missing values.
###Markdown
Distributions are fun to plot
###Code
(p9.ggplot(data=df,
mapping=p9.aes(x='Pclass',
y='Fare'))
+ p9.geom_boxplot()
)
###Output
_____no_output_____
###Markdown
That's nice, but it's not super useful to put all the classes together. Let's split them out into the factors of Pclass, using the aptly named factor function.
###Code
(p9.ggplot(data=df,
mapping=p9.aes(x='factor(Pclass)',
y='Fare'))
+ p9.geom_boxplot()
+ p9.theme_bw()
)
###Output
_____no_output_____
###Markdown
Let's Get Fancy: we're all fancy people
###Code
(p9.ggplot(data=df,
mapping=p9.aes(x='factor(Pclass)',y='Fare'))
+ p9.geom_boxplot()
+ p9.theme_bw()
+ p9.labs(x = 'Passenger Class', y ='Fare in US Dollars')
)
###Output
_____no_output_____
###Markdown
It's time to break out! Let's create some interesting visualizations and report backMost interesting visualization wins!
###Code
# Logistic Regression Time!
import statsmodels.api as sm
import pylab as pl
print(df.columns)
# Create a new temp data frame
new_df = df
def gender_to_numeric(x):
if x == "male":
return 0
else:
return 1
new_df['Sex'] = new_df['Sex'].apply(gender_to_numeric)
new_df = new_df[["Survived", "Age","Sex", "Pclass"]]
new_df = new_df.dropna()
train_cols = new_df.columns[1:]
train_cols
logit = sm.Logit(new_df['Survived'], new_df[train_cols])
#Fit the model
result = logit.fit()
print(result.summary())
print(result.conf_int())
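# (Added sketch) The logit coefficients above are on the log-odds scale;
# exponentiating them (and their confidence bounds) gives odds ratios, which are
# usually easier to interpret.
print(np.exp(result.params))     # odds ratios
print(np.exp(result.conf_int())) # confidence intervals on the odds-ratio scale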
###Output
0 1
Age -0.012195 0.005269
Sex 2.265226 3.020908
Pclass -0.794116 -0.512863
|
hippocampus/scripts/results_supp_fig_04.ipynb | ###Markdown
Heritability of T1wT2w maps after controlling for the mean T1wT2w scores SUB (Left)
###Code
# we will read heritability values from 1024 sub vertices
tot_node_num_lsub = 1024
node_str = []
for i in range(1, tot_node_num_lsub+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_LSUB = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
# read-in heritability scores
fLSUB = '../solar/solar_mean_msm50_t1t2_lsub/t1t2_lsub_mean_results_herit.txt'
herit_t1t2_LSUB = pd.read_csv(fLSUB, index_col = 0, header = 0)
herit_t1t2_LSUB.index.name = 'node'
for nodeID in range(1, tot_node_num_lsub+1):
iA = herit_t1t2_LSUB.index.get_loc(nodeID)
iB = df_herit_t1t2_LSUB.index.get_loc(nodeID)
df_herit_t1t2_LSUB.iloc[iB]['H2r'] = herit_t1t2_LSUB.iloc[iA]['H2r']
df_herit_t1t2_LSUB.iloc[iB]['rp'] = herit_t1t2_LSUB.iloc[iA]['rp']
dataLSUB = np.array(df_herit_t1t2_LSUB['H2r'], dtype = 'float')
pLSUB = np.array(df_herit_t1t2_LSUB['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pLSUB, q)
pID, len(np.where(dataLSUB <= pID)[0]), dataLSUB.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xLSUB,
plot_funcs.yLSUB,
plot_funcs.zLSUB,
plot_funcs.triLSUB,
dataLSUB,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper2(plot_funcs.xLSUB,
plot_funcs.yLSUB,
plot_funcs.zLSUB,
plot_funcs.triLSUB,
pLSUB,
'copper_r',
0, pID)
###Output
_____no_output_____
###Markdown
CA
###Code
# we will read heritability values from 2048 ca vertices
tot_node_num_lca = 2048
node_str = []
for i in range(1, tot_node_num_lca+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_LCA = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
# read-in heritability scores
fLCA = '../solar/solar_mean_msm50_t1t2_lca/t1t2_lca_mean_results_herit.txt'
herit_t1t2_LCA = pd.read_csv(fLCA, index_col = 0, header = 0)
herit_t1t2_LCA.index.name = 'node'
for nodeID in range(1, tot_node_num_lca+1):
iA = herit_t1t2_LCA.index.get_loc(nodeID)
iB = df_herit_t1t2_LCA.index.get_loc(nodeID)
df_herit_t1t2_LCA.iloc[iB]['H2r'] = herit_t1t2_LCA.iloc[iA]['H2r']
df_herit_t1t2_LCA.iloc[iB]['rp'] = herit_t1t2_LCA.iloc[iA]['rp']
dataLCA = np.array(df_herit_t1t2_LCA['H2r'], dtype = 'float')
pLCA = np.array(df_herit_t1t2_LCA['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pLCA, q)
pID, len(np.where(dataLCA <= pID)[0]), dataLCA.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xLCA,
plot_funcs.yLCA,
plot_funcs.zLCA,
plot_funcs.triLCA,
dataLCA,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper3(plot_funcs.xLCA,
plot_funcs.yLCA,
plot_funcs.zLCA,
plot_funcs.triLCA,
pLCA,
'copper_r',
0, pID)
###Output
_____no_output_____
###Markdown
DG
###Code
# we will read heritability values from 1024 dg vertices
tot_node_num_ldg = 1024
node_str = []
for i in range(1, tot_node_num_ldg+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_LDG = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
# read-in heritability scores
fLDG = '../solar/solar_mean_msm50_t1t2_ldg/t1t2_ldg_mean_results_herit.txt'
herit_t1t2_LDG = pd.read_csv(fLDG, index_col = 0, header = 0)
herit_t1t2_LDG.index.name = 'node'
for nodeID in range(1, tot_node_num_ldg+1):
iA = herit_t1t2_LDG.index.get_loc(nodeID)
iB = df_herit_t1t2_LDG.index.get_loc(nodeID)
df_herit_t1t2_LDG.iloc[iB]['H2r'] = herit_t1t2_LDG.iloc[iA]['H2r']
df_herit_t1t2_LDG.iloc[iB]['rp'] = herit_t1t2_LDG.iloc[iA]['rp']
dataLDG = np.array(df_herit_t1t2_LDG['H2r'], dtype = 'float')
pLDG = np.array(df_herit_t1t2_LDG['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pLDG, q)
pID, len(np.where(dataLDG <= pID)[0]), dataLDG.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xLDG,
plot_funcs.yLDG,
plot_funcs.zLDG,
plot_funcs.triLDG,
dataLDG,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper3(plot_funcs.xLDG,
plot_funcs.yLDG,
plot_funcs.zLDG,
plot_funcs.triLDG,
pLDG,
'copper_r',
0, pID)
###Output
_____no_output_____
###Markdown
RIGHT hemisphere
###Code
# we will read heritability values from 1024 sub vertices
tot_node_num_rsub = 1024
node_str = []
for i in range(1, tot_node_num_rsub+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_RSUB = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
###Output
_____no_output_____
###Markdown
SUB
###Code
# read-in heritability scores
fRSUB = '../solar/solar_mean_msm50_t1t2_rsub/t1t2_rsub_mean_results_herit.txt'
herit_t1t2_RSUB = pd.read_csv(fRSUB, index_col = 0, header = 0)
herit_t1t2_RSUB.index.name = 'node'
for nodeID in range(1, tot_node_num_rsub+1):
iA = herit_t1t2_RSUB.index.get_loc(nodeID)
iB = df_herit_t1t2_RSUB.index.get_loc(nodeID)
df_herit_t1t2_RSUB.iloc[iB]['H2r'] = herit_t1t2_RSUB.iloc[iA]['H2r']
df_herit_t1t2_RSUB.iloc[iB]['rp'] = herit_t1t2_RSUB.iloc[iA]['rp']
dataRSUB = np.array(df_herit_t1t2_RSUB['H2r'], dtype = 'float')
pRSUB = np.array(df_herit_t1t2_RSUB['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pRSUB, q)
pID, len(np.where(dataRSUB <= pID)[0]), dataRSUB.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xRSUB,
plot_funcs.yRSUB,
plot_funcs.zRSUB,
plot_funcs.triRSUB,
dataRSUB,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper3(plot_funcs.xRSUB,
plot_funcs.yRSUB,
plot_funcs.zRSUB,
plot_funcs.triRSUB,
pRSUB,
'copper_r',
0, pID)
###Output
_____no_output_____
###Markdown
CA
###Code
# we will read heritability values from 2048 ca vertices
tot_node_num_rca = 2048
node_str = []
for i in range(1, tot_node_num_rca+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_RCA = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
# read-in heritability scores
fRCA = '../solar/solar_mean_msm50_t1t2_rca/t1t2_rca_mean_results_herit.txt'
herit_t1t2_RCA = pd.read_csv(fRCA, index_col = 0, header = 0)
herit_t1t2_RCA.index.name = 'node'
for nodeID in range(1, tot_node_num_rca+1):
iA = herit_t1t2_RCA.index.get_loc(nodeID)
iB = df_herit_t1t2_RCA.index.get_loc(nodeID)
df_herit_t1t2_RCA.iloc[iB]['H2r'] = herit_t1t2_RCA.iloc[iA]['H2r']
df_herit_t1t2_RCA.iloc[iB]['rp'] = herit_t1t2_RCA.iloc[iA]['rp']
dataRCA = np.array(df_herit_t1t2_RCA['H2r'], dtype = 'float')
pRCA = np.array(df_herit_t1t2_RCA['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pRCA, q)
pID, len(np.where(dataRCA <= pID)[0]), dataRCA.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xRCA,
plot_funcs.yRCA,
plot_funcs.zRCA,
plot_funcs.triRCA,
dataRCA,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper3(plot_funcs.xRCA,
plot_funcs.yRCA,
plot_funcs.zRCA,
plot_funcs.triRCA,
pRCA,
'copper_r',
0, pID)
###Output
_____no_output_____
###Markdown
DG
###Code
# we will read heritability values from 1024 dg vertices
tot_node_num_rdg = 1024
node_str = []
for i in range(1, tot_node_num_rdg+1):
node_str.append(i)
# empty data frame to be filled out
df_herit_t1t2_RDG = pd.DataFrame(index = node_str, columns = ['H2r', 'rp'])
# read-in heritability scores
fRDG = '../solar/solar_mean_msm50_t1t2_rdg/t1t2_rdg_mean_results_herit.txt'
herit_t1t2_RDG = pd.read_csv(fRDG, index_col = 0, header = 0)
herit_t1t2_RDG.index.name = 'node'
for nodeID in range(1, tot_node_num_rdg+1):
iA = herit_t1t2_RDG.index.get_loc(nodeID)
iB = df_herit_t1t2_RDG.index.get_loc(nodeID)
df_herit_t1t2_RDG.iloc[iB]['H2r'] = herit_t1t2_RDG.iloc[iA]['H2r']
df_herit_t1t2_RDG.iloc[iB]['rp'] = herit_t1t2_RDG.iloc[iA]['rp']
dataRDG = np.array(df_herit_t1t2_RDG['H2r'], dtype = 'float')
pRDG = np.array(df_herit_t1t2_RDG['rp'], dtype = 'float')
q = 0.05
pID, pN = FDR_sofie(pRDG, q)
pID, len(np.where(dataRDG <= pID)[0]), dataRDG.max()
fig1 = plot_funcs.plot_surf_upper2(plot_funcs.xRDG,
plot_funcs.yRDG,
plot_funcs.zRDG,
plot_funcs.triRDG,
dataRDG,
'viridis',
0, 0.77)
fig2 = plot_funcs.plot_surf_upper3(plot_funcs.xRDG,
plot_funcs.yRDG,
plot_funcs.zRDG,
plot_funcs.triRDG,
pRDG,
'copper_r',
0, pID)
###Output
_____no_output_____ |
figure2array.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
def figure2array(fig):
"""
Convert a plt.figure to an RGBA array (4 channels).
shape: (height, width, channels)
"""
fig.canvas.draw()
return np.array(fig.canvas.renderer._renderer)
im = np.random.randint(0,255,(512,512))
f = plt.figure()
plt.imshow(im,cmap='gray')
array = figure2array(f)
print(array.shape)
plt.imshow(array)
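# (Added note) figure2array relies on the private attribute renderer._renderer,
# which newer Matplotlib versions no longer expose. A sketch of a public
# alternative, assuming an Agg-based backend, is:
def figure2array_v2(fig):
    fig.canvas.draw()
    return np.asarray(fig.canvas.buffer_rgba())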
###Output
_____no_output_____ |
06 APIs, Scraping I/podestanalyse/podesta emails.ipynb | ###Markdown
1. Open all mails and store them as a list 2. Search for dollar signs combined with "million" 3. Flag every mail in which a dollar amount is found 4. Clean up with pandas
###Code
mails = os.listdir('podesta5000')
list_mailcontent = []
for mail in mails:
f = open("podesta5000/"+mail, "r", encoding='utf-8').read()
list_mailcontent.append(f)
dollar_list = []
dollar = re.compile(r'\$\d+ [mM]illion')
for mail in list_mailcontent:
try:
treffer = re.search(dollar, mail).group()
dic = {"Dollarnennung": treffer, "Mailcontent": mail}
except:
dic = {"Dollarnennung": "N/A", "Mailcontent": mail}
dollar_list.append(dic)
df = pd.DataFrame(dollar_list)
df['Dollarnennung'].value_counts().head(20)
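# (Added sketch) Step 4 above is cleaning with pandas. One way to turn the matched
# strings into numbers (millions of dollars), assuming the "$<n> million" pattern
# captured by the regex; rows without a match become NaN. The column name
# 'millions' is just an illustrative choice.
df['millions'] = df['Dollarnennung'].str.extract(r'\$(\d+)', expand=False).astype(float)
df['millions'].describe()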
###Output
_____no_output_____
###Markdown
Dollar list
###Code
dollar_list = []
dollar = re.compile(r'\$\d+ [mM]illion')
for mail in list_mailcontent:
try:
treffer = re.findall(dollar, mail)
dic = {"Dollernennung": treffer, "Mailcontent": mail}
except:
dic = {"Dollernennung": "N/A", "Mailcontent": mail}
dollar_list.append(dic)
df2 = pd.DataFrame(dollar_list)
df2['Dollernennung'].value_counts()
###Output
_____no_output_____ |
results/Architecture.ipynb | ###Markdown
Architecture results plots
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
# Load the results
e_1_cpu = pd.read_csv("./architecture/1_cpu.csv", sep=";")
e_1_1gpu = pd.read_csv("./architecture/1_1gpu.csv", sep=";")
e_1_2gpu = pd.read_csv("./architecture/1_2gpu.csv", sep=";")
e_2_cpu = pd.read_csv("./architecture/2_cpu.csv", sep=";")
e_2_1gpu = pd.read_csv("./architecture/2_1gpu.csv", sep=";")
e_2_2gpu = pd.read_csv("./architecture/2_2gpu.csv", sep=";")
e_2_3gpu = pd.read_csv("./architecture/2_3gpu.csv", sep=";")
###Output
_____no_output_____
###Markdown
Experiment 1 CPU
###Code
e_1_cpu.describe()
## Lines
plt.plot(e_1_cpu.time, label="CPU", color="blue")
## Configuration
plt.title("Experiment 1 - CPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
GPU - 1
###Code
e_1_1gpu.describe()
## Lines
plt.plot(e_1_1gpu.time, label="1 GPU", color="green")
## Configuration
plt.title("Experiment 1 - 1 GPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
GPU - 2
###Code
e_1_2gpu.describe()
## Lines
plt.plot(e_1_2gpu.time, label="2 GPUs", color="green")
## Configuration
plt.title("Experiment 1 - 2 GPUs")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
Comparison
###Code
e_1_comparison = pd.DataFrame({'cpu': e_1_cpu.time, 'gpu1': e_1_1gpu.time, 'gpu2': e_1_2gpu.time})
e_1_comparison.describe()
## Lines
plt.plot(e_1_cpu.time, label="CPU", color="blue")
plt.plot(e_1_1gpu.time, label="1 GPU", color="green")
plt.plot(e_1_2gpu.time, label="2 GPU", color="red")
plt.axhline(y=e_1_cpu.time.mean(), label="CPU Average", linestyle='--', color="blue")
plt.axhline(y=e_1_1gpu.time.mean(), label="1 GPU Average", linestyle='--', color="green")
plt.axhline(y=e_1_2gpu.time.mean(), label="2 GPU Average", linestyle='--', color="red")
## Configuration
plt.title("Experiment 1 - Comparison")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2);
## Run the plot
plt.show()
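# (Added sketch) The same comparison expressed numerically: mean-epoch-time
# speed-up of each GPU configuration relative to the CPU run.
print("1 GPU speed-up vs CPU: {:.2f}x".format(e_1_cpu.time.mean() / e_1_1gpu.time.mean()))
print("2 GPU speed-up vs CPU: {:.2f}x".format(e_1_cpu.time.mean() / e_1_2gpu.time.mean()))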
###Output
_____no_output_____
###Markdown
Experiment 2 CPU
###Code
e_2_cpu.time.describe()
## Lines
plt.plot(e_2_cpu.time, label="CPU", color="blue")
## Configuration
plt.title("Experiment 2 - CPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
GPU 1
###Code
e_2_1gpu.describe()
## Lines
plt.plot(e_2_1gpu.time, label="1 GPU", color="green")
## Configuration
plt.title("Experiment 2 - 1 GPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
GPU 2
###Code
e_2_2gpu.describe()
## Lines
plt.plot(e_2_2gpu.time, label="2 GPU", color="green")
## Configuration
plt.title("Experiment 2 - 2 GPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
GPU 3
###Code
e_2_3gpu.describe()
## Lines
plt.plot(e_2_3gpu.time, label="3 GPU", color="green")
## Configuration
plt.title("Experiment 2 - 3 GPU")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend();
## Run the plot
plt.show()
###Output
_____no_output_____
###Markdown
Comparison
###Code
e_2_comparison = pd.DataFrame({'cpu': e_2_cpu.time, 'gpu1': e_2_1gpu.time, 'gpu2': e_2_2gpu.time, 'gpu3': e_2_3gpu.time})
e_2_comparison.describe()
## Lines
plt.plot(e_2_cpu.time, label="CPU", color="blue")
plt.plot(e_2_1gpu.time, label="1 GPU", color="green")
plt.plot(e_2_2gpu.time, label="2 GPU", color="red")
plt.plot(e_2_3gpu.time, label="3 GPU", color="orange")
plt.axhline(y=e_2_cpu.time.mean(), label="CPU Average", linestyle='--', color="blue")
plt.axhline(y=e_2_1gpu.time.mean(), label="1 GPU Average", linestyle='--', color="green")
plt.axhline(y=e_2_2gpu.time.mean(), label="2 GPU Average", linestyle='--', color="red")
plt.axhline(y=e_2_3gpu.time.mean(), label="3 GPU Average", linestyle='--', color="orange")
## Configuration
plt.title("Experiment 2 - Comparison")
plt.xlabel("Epoch")
plt.ylabel("Execution time (seconds)");
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.00), shadow=True, ncol=2);
## Run the plot
plt.show()
###Output
_____no_output_____ |
kfp-pipeline/archive/compile_deploy-Copy1.ipynb | ###Markdown
Upload the dataset
###Code
%%sh
PROJECT_ID=mlops-demo
DATASET_LOCATION=US
DATASET_ID=lab_11
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/full/covertype.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:INTEGER,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
Dataset 'mlops-demo:lab_11' successfully created.
###Markdown
Create the staging GCS bucket
###Code
%%sh
PROJECT_ID=mlops-demo
BUCKET_NAME=gs://${PROJECT_ID}-lab-11
gsutil mb -p $PROJECT_ID $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Build a trainer image
###Code
import os
TRAINING_APP_FOLDER = 'training_app'
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import pickle
import numpy as np
import pandas as pd
import hypertune
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_features = ['Elevation', 'Aspect', 'Slope', 'Horizontal_Distance_To_Hydrology',
'Vertical_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways',
'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm',
'Horizontal_Distance_To_Fire_Points']
categorical_features = ['Wilderness_Area', 'Soil_Type']
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_features),
('cat', OneHotEncoder(), categorical_features)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log'))
])
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
PROJECT_ID='mlops-demo'
IMAGE_NAME='trainer_image'
IMAGE_TAG='latest'
IMAGE_URI='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, IMAGE_TAG)
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.4 KiB before compression.
Uploading tarball of [training_app] to [gs://mlops-demo_cloudbuild/source/1578605826.62-1a80ecd6629a41ffb767abb9901eeb69.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/mlops-demo/builds/5be349a9-5e33-4700-974d-1a3c2e54845c].
Logs are available at [https://console.cloud.google.com/gcr/builds/5be349a9-5e33-4700-974d-1a3c2e54845c?project=609934025272].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "5be349a9-5e33-4700-974d-1a3c2e54845c"
FETCHSOURCE
Fetching storage object: gs://mlops-demo_cloudbuild/source/1578605826.62-1a80ecd6629a41ffb767abb9901eeb69.tgz#1578605827100723
Copying gs://mlops-demo_cloudbuild/source/1578605826.62-1a80ecd6629a41ffb767abb9901eeb69.tgz#1578605827100723...
/ [1 files][ 1.2 KiB/ 1.2 KiB]
Operation completed over 1 objects/1.2 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.12kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
35c102085707: Pulling fs layer
251f5509d51d: Pulling fs layer
8e829fe70a46: Pulling fs layer
6001e1789921: Pulling fs layer
1259902c87a2: Pulling fs layer
83ca0edf82af: Pulling fs layer
a459cc7a0819: Pulling fs layer
221c4376244e: Pulling fs layer
6be10f944cd9: Pulling fs layer
34c517f627e3: Pulling fs layer
8bc377099823: Pulling fs layer
f28fcd8ca9f0: Pulling fs layer
a5d245cced6f: Pulling fs layer
8c6be6aa5553: Pulling fs layer
1d7154118978: Pulling fs layer
1df8626a77b0: Pulling fs layer
6001e1789921: Waiting
1259902c87a2: Waiting
83ca0edf82af: Waiting
a459cc7a0819: Waiting
221c4376244e: Waiting
6be10f944cd9: Waiting
34c517f627e3: Waiting
8bc377099823: Waiting
f28fcd8ca9f0: Waiting
a5d245cced6f: Waiting
8c6be6aa5553: Waiting
1d7154118978: Waiting
1df8626a77b0: Waiting
251f5509d51d: Verifying Checksum
251f5509d51d: Download complete
8e829fe70a46: Verifying Checksum
8e829fe70a46: Download complete
35c102085707: Verifying Checksum
35c102085707: Download complete
6001e1789921: Verifying Checksum
6001e1789921: Download complete
a459cc7a0819: Verifying Checksum
a459cc7a0819: Download complete
83ca0edf82af: Verifying Checksum
83ca0edf82af: Download complete
6be10f944cd9: Verifying Checksum
6be10f944cd9: Download complete
34c517f627e3: Verifying Checksum
34c517f627e3: Download complete
8bc377099823: Verifying Checksum
8bc377099823: Download complete
f28fcd8ca9f0: Verifying Checksum
f28fcd8ca9f0: Download complete
a5d245cced6f: Verifying Checksum
a5d245cced6f: Download complete
8c6be6aa5553: Verifying Checksum
8c6be6aa5553: Download complete
1d7154118978: Verifying Checksum
1d7154118978: Download complete
1259902c87a2: Verifying Checksum
1259902c87a2: Download complete
1df8626a77b0: Verifying Checksum
1df8626a77b0: Download complete
35c102085707: Pull complete
221c4376244e: Verifying Checksum
221c4376244e: Download complete
251f5509d51d: Pull complete
8e829fe70a46: Pull complete
6001e1789921: Pull complete
1259902c87a2: Pull complete
83ca0edf82af: Pull complete
a459cc7a0819: Pull complete
221c4376244e: Pull complete
6be10f944cd9: Pull complete
34c517f627e3: Pull complete
8bc377099823: Pull complete
f28fcd8ca9f0: Pull complete
a5d245cced6f: Pull complete
8c6be6aa5553: Pull complete
1d7154118978: Pull complete
1df8626a77b0: Pull complete
Digest: sha256:848d51a70c3608c4acd37c3dd5a5bacef9c6a51aab5b0064daf5d4258237ef62
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 8f1066e7fc0b
Step 2/5 : RUN pip install -U fire cloudml-hypertune
---> Running in 9c6acd6581cc
Collecting fire
Downloading https://files.pythonhosted.org/packages/d9/69/faeaae8687f4de0f5973694d02e9d6c3eb827636a009157352d98de1129e/fire-0.2.1.tar.gz (76kB)
Collecting cloudml-hypertune
Downloading https://files.pythonhosted.org/packages/84/54/142a00a29d1c51dcf8c93b305f35554c947be2faa0d55de1eabcc0a9023c/cloudml-hypertune-0.1.0.dev6.tar.gz
Requirement already satisfied, skipping upgrade: six in /root/miniconda3/lib/python3.7/site-packages (from fire) (1.12.0)
Collecting termcolor
Downloading https://files.pythonhosted.org/packages/8a/48/a76be51647d0eb9f10e2a4511bf3ffb8cc1e6b14e9e4fab46173aa79f981/termcolor-1.1.0.tar.gz
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.2.1-py2.py3-none-any.whl size=103527 sha256=9b30c9d9e3fba2dac074f3273e0308ce7b796699c737d4623dd94f262970e1bd
Stored in directory: /root/.cache/pip/wheels/31/9c/c0/07b6dc7faf1844bb4688f46b569efe6cafaa2179c95db821da
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=7284e1fc09cd65ab84f9d19d32bf98f30ba21e9c0e0f6d6a8e3e01a6a825f3ca
Stored in directory: /root/.cache/pip/wheels/71/ac/62/80b621f3fe2994f3f367a36123d8351d75e3ea5591b4a62c85
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-cp37-none-any.whl size=4832 sha256=52d40dbf3368cdd91ab4bb436f773c163014d524a01c9f76dee2a2f0bd0f41e3
Stored in directory: /root/.cache/pip/wheels/7c/06/54/bc84598ba1daf8f970247f550b175aaaee85f68b4b0c5ab2c6
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, fire, cloudml-hypertune
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.2.1 termcolor-1.1.0
Removing intermediate container 9c6acd6581cc
---> 3c794f526bc1
Step 3/5 : WORKDIR /app
---> Running in 2cc5a226af8d
Removing intermediate container 2cc5a226af8d
---> 1e1341428c31
Step 4/5 : COPY train.py .
---> 9fa719ff717a
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 79178e7c4993
Removing intermediate container 79178e7c4993
---> 2583e976a69c
Successfully built 2583e976a69c
Successfully tagged gcr.io/mlops-demo/trainer_image:latest
PUSH
Pushing gcr.io/mlops-demo/trainer_image:latest
The push refers to repository [gcr.io/mlops-demo/trainer_image]
63b391dcfb56: Preparing
da588dd6cc64: Preparing
7414787a79a2: Preparing
07a867e0ba2d: Preparing
092c50747c65: Preparing
d6fb36f9bda1: Preparing
f36c7efe6784: Preparing
97d733be068e: Preparing
d0ce9f8647d3: Preparing
fa4332f1c95c: Preparing
cd80b8f8deac: Preparing
104fbab0f8e2: Preparing
4019db0181d2: Preparing
5a78197acff6: Preparing
804e87810c15: Preparing
122be11ab4a2: Preparing
7beb13bce073: Preparing
f7eae43028b3: Preparing
6cebf3abed5f: Preparing
d6fb36f9bda1: Waiting
f36c7efe6784: Waiting
97d733be068e: Waiting
d0ce9f8647d3: Waiting
fa4332f1c95c: Waiting
cd80b8f8deac: Waiting
104fbab0f8e2: Waiting
4019db0181d2: Waiting
5a78197acff6: Waiting
804e87810c15: Waiting
122be11ab4a2: Waiting
7beb13bce073: Waiting
f7eae43028b3: Waiting
6cebf3abed5f: Waiting
092c50747c65: Layer already exists
07a867e0ba2d: Layer already exists
f36c7efe6784: Layer already exists
d6fb36f9bda1: Layer already exists
d0ce9f8647d3: Layer already exists
97d733be068e: Layer already exists
fa4332f1c95c: Layer already exists
cd80b8f8deac: Layer already exists
104fbab0f8e2: Layer already exists
4019db0181d2: Layer already exists
5a78197acff6: Layer already exists
122be11ab4a2: Layer already exists
804e87810c15: Layer already exists
7beb13bce073: Layer already exists
f7eae43028b3: Layer already exists
6cebf3abed5f: Layer already exists
da588dd6cc64: Pushed
7414787a79a2: Pushed
63b391dcfb56: Pushed
latest: digest: sha256:3784c2beddc0c490db91e9228bbe77a833d68caa09882c33635bb48ffcfead41 size: 4283
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
5be349a9-5e33-4700-974d-1a3c2e54845c 2020-01-09T21:37:07+00:00 2M33S gs://mlops-demo_cloudbuild/source/1578605826.62-1a80ecd6629a41ffb767abb9901eeb69.tgz gcr.io/mlops-demo/trainer_image (+1 more) SUCCESS
###Markdown
Compile the pipeline
###Code
%%sh
export PROJECT_ID=mlops-demo
export COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.1.36/components/gcp/
export BASE_IMAGE=gcr.io/deeplearning-platform-release/base-cpu
export TRAINER_IMAGE=gcr.io/$PROJECT_ID/trainer_image:latest
export RUNTIME_VERSION=1.15
export PYTHON_VERSION=3.7
dsl-compile --py covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Run the pipeline
###Code
PROJECT_ID='mlops-demo'
INVERSE_PROXY_HOST='1ff32660e53f5d89-dot-us-central1.notebooks.googleusercontent.com'
STAGING_GCS_BUCKET='gs://mlops-demo-lab-11'
RUN_NAME='Training_Run_001'
EXPERIMENT_NAME='Default'
!kfp --endpoint $INVERSE_PROXY_HOST run submit \
-e $EXPERIMENT_NAME \
-r $RUN_NAME \
-f covertype_training_pipeline.yaml \
project_id=$PROJECT_ID \
gcs_root=$STAGING_GCS_BUCKET \
region=us-central1 \
source_table_name=lab_11.covertype \
dataset_id=splits \
evaluation_metric_name=accuracy \
evaluation_metric_threshold=0.69 \
model_id=covertype_classifier \
version_id=v0.1 \
replace_existing_version=True
INVERSE_PROXY_HOST
###Output
_____no_output_____
###Markdown
Upload the pipeline to KFP environment
###Code
PIPELINE_NAME='covertype_classifier_training'
PIPELINE_PACKAGE='covertype_training_pipeline.yaml'
!kfp --endpoint $INVERSE_PROXY_HOST pipeline upload -p $PIPELINE_NAME $PIPELINE_PACKAGE
###Output
Pipeline 4437d79d-9168-457c-8517-e14131fa1cac has been submitted
Pipeline Details
------------------
ID 4437d79d-9168-457c-8517-e14131fa1cac
Name covertype_classifier_training
Description
Uploaded at 2020-01-09T22:03:35+00:00
+-----------------------------+--------------------------------------------------+
| Parameter Name | Default Value |
+=============================+==================================================+
| project_id | |
+-----------------------------+--------------------------------------------------+
| region | |
+-----------------------------+--------------------------------------------------+
| source_table_name | |
+-----------------------------+--------------------------------------------------+
| gcs_root | |
+-----------------------------+--------------------------------------------------+
| dataset_id | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_name | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_threshold | |
+-----------------------------+--------------------------------------------------+
| model_id | |
+-----------------------------+--------------------------------------------------+
| version_id | |
+-----------------------------+--------------------------------------------------+
| replace_existing_version | |
+-----------------------------+--------------------------------------------------+
| hypertune_settings | { |
| | "hyperparameters": { |
| | "goal": "MAXIMIZE", |
| | "maxTrials": 6, |
| | "maxParallelTrials": 3, |
| | "hyperparameterMetricTag": "accuracy", |
| | "enableTrialEarlyStopping": True, |
| | "params": [ |
| | { |
| | "parameterName": "max_iter", |
| | "type": "DISCRETE", |
| | "discreteValues": [500, 1000] |
| | }, |
| | { |
| | "parameterName": "alpha", |
| | "type": "DOUBLE", |
| | "minValue": 0.0001, |
| | "maxValue": 0.001, |
| | "scaleType": "UNIT_LINEAR_SCALE" |
| | } |
| | ] |
| | } |
| | } |
+-----------------------------+--------------------------------------------------+
| dataset_location | US |
+-----------------------------+--------------------------------------------------+
###Markdown
List pipelines
###Code
INVERSE_PROXY_HOST='1ff32660e53f5d89-dot-us-central1.notebooks.googleusercontent.com'
!kfp --endpoint $INVERSE_PROXY_HOST pipeline list
INVERSE_PROXY_HOST='1ff32660e53f5d89-dot-us-central1.notebooks.googleusercontent.com'
STAGING_GCS_BUCKET='gs://mlops-demo-lab-11'
RUN_NAME="Training_Run_003"
EXPERIMENT_NAME='Covertype_Classifier_Training'
PIPELINE_ID='4437d79d-9168-457c-8517-e14131fa1cac'
!kfp --endpoint $INVERSE_PROXY_HOST run submit \
-e $EXPERIMENT_NAME \
-r $RUN_NAME \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$STAGING_GCS_BUCKET \
region=us-central1 \
source_table_name=lab_11.covertype \
dataset_id=splits \
evaluation_metric_name=accuracy \
evaluation_metric_threshold=0.69 \
model_id=covertype_classifier \
version_id=v0.1 \
replace_existing_version=True
###Output
Run c42f5d53-5157-4223-b02a-436d1cba807f is submitted
+--------------------------------------+------------------+----------+---------------------------+
| run id | name | status | created at |
+======================================+==================+==========+===========================+
| c42f5d53-5157-4223-b02a-436d1cba807f | Training_Run_003 | | 2020-01-09T22:43:25+00:00 |
+--------------------------------------+------------------+----------+---------------------------+
|
project2-mem-master/miscellaneous/Untitled1.ipynb | ###Markdown
Automated Feature Engineering
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# setting to display max columns
pd.set_option('display.max_columns', 500)
data = pd.read_csv('minimaly_encoded.csv', index_col=0)
data.head(3)
###Output
_____no_output_____
###Markdown
Feature engineering
###Code
# visits of one user divided by the total visits to that page type
data['frac_Admin'] = data['Administrative']/sum(data['Administrative'])
data['frac_Info'] = data['Informational']/sum(data['Informational'])
data['fract_Prod'] = data['ProductRelated']/sum(data['ProductRelated'])
# divide this by total users in a specific month
def visits_month(df, page):
    # initialise the new column so rows outside the current month keep their value
    df['monthPageFrac'] = np.nan
    # go through the list of unique months
    for month in df['Month'].unique():
        # select the month subset
        month_df = df[df['Month'] == month]
        # divide the user's page visits by the total visits to that page in the month;
        # rows from other months keep their existing value
        df['monthPageFrac'] = np.where(df['Month'] == month, df[page]/month_df[page].sum(), df['monthPageFrac'])
    return df
# visits of one user divided by sum visits to all page types
data['fractAdminTotal'] = data['Administrative']/(data['Administrative']+data['Informational']+data['ProductRelated'])
data['fractInfoTotal'] = data['Informational']/(data['Administrative']+data['Informational']+data['ProductRelated'])
data['fractProdTotal'] = data['ProductRelated']/(data['Administrative']+data['Informational']+data['ProductRelated'])
# different clustering thresholds, but not on the revenue because that will overfit
data.info()
data.dtypes.value_counts().plot.bar(edgecolor = 'k');
plt.title('Variable Type Distribution');
###Output
_____no_output_____
###Markdown
Steps before- cleaning (e.g. missing values, create new features Feature tools can build on top of, remove high correlation, ) EntitySet and EntitiesAn EntitySet in Featuretools holds all of the tables and the relationships between them. At the moment we only have a single table, but we can create multiple tables through normalization. We'll call the first table data since it contains all the information both at the individual level and at the household level.
###Code
# column names of the integer and continuous household variables
hh_bool = list(data.select_dtypes('int64').drop('Y', axis=1).columns)
hh_cont = list(data.select_dtypes('float64').columns)
import featuretools as ft
es = ft.EntitySet(id = 'households')
es.entity_from_dataframe(entity_id = 'data',
dataframe = data,
index = 'id')
es.normalize_entity(base_entity_id='data',
new_entity_id='household',
index = 'idhogar',
additional_variables = hh_bool + hh_cont + ['Y'])
es
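# (Added sketch) With the EntitySet in place, Deep Feature Synthesis can derive
# household-level features automatically. The parameters below are illustrative
# assumptions rather than tuned choices.
feature_matrix, feature_defs = ft.dfs(entityset=es,
                                      target_entity='household',
                                      max_depth=2)
feature_matrix.head()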
###Output
_____no_output_____ |
06-conditional-prob/review.ipynb | ###Markdown
Review Terminology ReviewUse the flashcards below to help you review the terminology introduced in this chapter.
###Code
from jupytercards import display_flashcards
github='https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/06-conditional-prob/flashcards/'
display_flashcards(github+'review.json')
#display_flashcards("flashcards/"+'review.json')
###Output
_____no_output_____
###Markdown
Self Assessment ExercisesAnswer the questions below to assess your understanding of the material in this chapter. $\mbox{ }$
###Code
from jupyterquiz import display_quiz
github='https://raw.githubusercontent.com/jmshea/Foundations-of-Data-Science-with-Python/main/06-conditional-prob/quiz/'
display_quiz(github+'review.json')
# For local testing
# display_quiz('../questions/ch4.json')
###Output
_____no_output_____ |
Final-Binary Digit Classifier Using QNN with GUI input.ipynb | ###Markdown
Binary Digit Classifier Using QNN with GUI input Project Description The project first aims to briefly introduce Quantum Neural Networks and then build a Quantum Neural Network (QNN) to classify handwritten 0s and 1s (using the MNIST handwritten-digit data). We'll then make a Graphical User Interface (GUI) with which the user can draw a digit, integrate the GUI with the QNN, and classify whether the user has drawn a 0 or a 1. References - https://arxiv.org/pdf/1802.06002.pdf- https://www.tensorflow.org/quantum/tutorials/mnist- https://docs.python.org/3/library/tk.html- https://tkdocs.com/tutorial/index.html- https://pennylane.ai/qml/glossary/quantum_neural_network.html- https://en.wikipedia.org/wiki/Quantum_neural_network What are Quantum Neural Networks? A quantum neural network (QNN) is a machine learning model or algorithm that combines concepts from quantum computing and artificial neural networks. Quantum neural networks extend the key features and structures of neural networks to quantum systems. Most quantum neural networks are developed as feed-forward networks: similar to their classical counterparts, this structure takes input from one layer of qubits and passes it on to another layer of qubits, which evaluates the information and passes the output to the next layer, until the path eventually reaches the final layer of qubits. Fig1: Illustration of QNN with the input |ψ>, the parameter θ and linear entanglement structure.[image_source](https://arxiv.org/pdf/2108.01468.pdf) Now let's start building the QNN Model Libraries Used - **cirq**- **tensorflow** - **tensorflow_quantum**- **numpy**- **sympy**- **seaborn**- **matplotlib**- **tkinter**- **opencv** Importing Libraries
###Code
import tensorflow as tf
import tensorflow_quantum as tfq
import cirq
import sympy
import numpy as np
import seaborn as sns
import collections
%matplotlib inline
import matplotlib.pyplot as plt
from cirq.contrib.svg import SVGCircuit
###Output
_____no_output_____
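###Markdown
Before loading the data, here is a minimal sketch (not part of the original project) of the layered structure described above: each data qubit is entangled with a single readout qubit through a two-qubit gate raised to a trainable symbol. The two-qubit toy layout and the choice of the XX gate are illustrative assumptions; Section 2.1 builds the full 4x4 version actually used for classification.
###Code
# Toy illustration of one parameterized "layer" acting on a readout qubit
toy_data_qubits = cirq.GridQubit.rect(1, 2)   # two toy data qubits
toy_readout = cirq.GridQubit(-1, -1)          # single readout qubit
toy_symbols = sympy.symbols('w0 w1')          # trainable weights
toy_layer = cirq.Circuit()
for q, w in zip(toy_data_qubits, toy_symbols):
    # entangle each data qubit with the readout, weighted by a trainable symbol
    toy_layer.append(cirq.XX(q, toy_readout) ** w)
print(toy_layer)
###Output
_____no_output_____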
###Markdown
Flowchart Index 1. Data Loading, Filtering and Encoding  1.1 Data Loading  1.2 Data Filtering  1.3 Downscaling Images to 4x4  1.4 Removing Contradictory Examples  1.5 Encoding the data as quantum Circuits 2. Building QNN (Quantum Neural Network)  2.1 Building the model Circuit  2.2 Wrapping the model_circuit in a tfq.keras model  2.3 Training and Evaluating QNN 3. Saving QNN Model 4. Making GUI using tkinter 5. Integrating GUI with QNN Model 1. Data Loading, Filtering and Encoding 1.1 Data Loading
###Code
#Loading MNIST Dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
# Rescaling the images to [0.0,1.0] Range.
x_train, x_test = x_train[..., np.newaxis]/255.0, x_test[..., np.newaxis]/255.0
print("Number of training examples before filtering:", len(x_train))
print("Number of testing examples before filtering:", len(x_test))
###Output
Number of training examples before filtering: 60000
Number of testing examples before filtering: 10000
###Markdown
1.2 Data Filtering
###Code
# Defining Function to filter dataset to keep just 0's and 1's.
def filter_01(x, y):
keep = (y == 0) | (y == 1)
x, y = x[keep], y[keep]
y = y == 0
return x,y
# Filtering using Above Function to keep 0's and 1's
x_train, y_train = filter_01(x_train, y_train)
x_test, y_test = filter_01(x_test, y_test)
print("Number of training examples after filtering:", len(x_train))
print("Number of testing examples after filtering:", len(x_test))
###Output
Number of training examples after filtering: 12665
Number of testing examples after filtering: 2115
###Markdown
1.3 Downscaling Images to 4x4
###Code
downscaled_x_train = tf.image.resize(x_train, (4,4)).numpy()
downscaled_x_test = tf.image.resize(x_test, (4,4)).numpy()
# Displaying the first training image before and after downscaling
print("Before Downscaling:")
plt.imshow(x_train[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
print("After Downscaling:")
plt.imshow(downscaled_x_train[0,:,:,0], vmin=0, vmax=1)
plt.colorbar()
###Output
After Downscaling:
###Markdown
1.4 Removing Contradictory Examples
###Code
# Defining Function to remove conradictory Examples.
def remove_contradicting(xs, ys):
mapping = collections.defaultdict(set)
orig_x = {}
# Determine the set of labels for each unique image:
for x,y in zip(xs,ys):
orig_x[tuple(x.flatten())] = x
mapping[tuple(x.flatten())].add(y)
new_x = []
new_y = []
for flatten_x in mapping:
x = orig_x[flatten_x]
labels = mapping[flatten_x]
if len(labels) == 1:
new_x.append(x)
new_y.append(next(iter(labels)))
else:
# Throw out images that match more than one label.
pass
num_uniq_0 = sum(1 for value in mapping.values() if len(value) == 1 and True in value)
num_uniq_1 = sum(1 for value in mapping.values() if len(value) == 1 and False in value)
num_uniq_both = sum(1 for value in mapping.values() if len(value) == 2)
print("Number of unique images:", len(mapping.values()))
print("Number of unique 0s: ", num_uniq_0)
print("Number of unique 1s: ", num_uniq_1)
print("Number of unique contradicting labels (both 0 and 1): ", num_uniq_both)
print()
print("Initial number of images: ", len(xs))
print("Remaining non-contradicting unique images: ", len(new_x))
return np.array(new_x), np.array(new_y)
x_train_nocon, y_train_nocon = remove_contradicting(downscaled_x_train, y_train)
###Output
Number of unique images: 7100
Number of unique 0s: 4924
Number of unique 1s: 2068
Number of unique contradicting labels (both 0 and 1): 108
Initial number of images: 12665
Remaining non-contradicting unique images: 6992
###Markdown
1.5 Encoding the data as quantum Circuits
###Code
THRESHOLD = 0.5
x_train_bin = np.array(x_train_nocon > THRESHOLD, dtype=np.float32)
x_test_bin = np.array(downscaled_x_test > THRESHOLD, dtype=np.float32)
_ = remove_contradicting(x_train_bin, y_train_nocon)
# Defining Function to convert images to circuit
def convert_to_circuit(image):
"""Encode truncated classical image into quantum datapoint."""
values = np.ndarray.flatten(image)
qubits = cirq.GridQubit.rect(4, 4)
circuit = cirq.Circuit()
for i, value in enumerate(values):
if value:
circuit.append(cirq.X(qubits[i]))
return circuit
x_train_circ = [convert_to_circuit(x) for x in x_train_bin]
x_test_circ = [convert_to_circuit(x) for x in x_test_bin]
print("Circuit for the first train example")
SVGCircuit(x_train_circ[0])
# Converting Cirq circuits to tensors for TensorflowQuantum
x_train_tfcirc = tfq.convert_to_tensor(x_train_circ)
x_test_tfcirc = tfq.convert_to_tensor(x_test_circ)
###Output
_____no_output_____
###Markdown
2. Building QNN (Quantum Neural Network) 2.1 Building the model Circuit
###Code
class CircuitLayerBuilder():
def __init__(self, data_qubits, readout):
self.data_qubits = data_qubits
self.readout = readout
def add_layer(self, circuit, gate, prefix):
for i, qubit in enumerate(self.data_qubits):
symbol = sympy.Symbol(prefix + '-' + str(i))
circuit.append(gate(qubit, self.readout)**symbol)
def create_quantum_model():
"""Create a QNN model circuit and readout operation to go along with it."""
data_qubits = cirq.GridQubit.rect(4, 4) # a 4x4 grid.
readout = cirq.GridQubit(-1, -1) # a single qubit at [-1,-1]
circuit = cirq.Circuit()
# Prepare the readout qubit.
circuit.append(cirq.X(readout))
circuit.append(cirq.H(readout))
builder = CircuitLayerBuilder(
data_qubits = data_qubits,
readout=readout)
# Then add layers (experiment by adding more).
builder.add_layer(circuit, cirq.XX, "xx1")
builder.add_layer(circuit, cirq.ZZ, "zz1")
# Finally, prepare the readout qubit.
circuit.append(cirq.H(readout))
return circuit, cirq.Z(readout)
model_circuit, model_readout = create_quantum_model()
###Output
_____no_output_____
###Markdown
2.2 Wrapping the model_circuit in a tfq.keras model
###Code
# Build the Keras model.
model = tf.keras.Sequential([
# The input is the data-circuit, encoded as a tf.string
tf.keras.layers.Input(shape=(), dtype=tf.string),
# The PQC layer returns the expected value of the readout gate, range [-1,1].
tfq.layers.PQC(model_circuit, model_readout),
])
y_train_hinge = 2.0*y_train_nocon-1.0
y_test_hinge = 2.0*y_test-1.0
def hinge_accuracy(y_true, y_pred):
y_true = tf.squeeze(y_true) > 0.0
y_pred = tf.squeeze(y_pred) > 0.0
result = tf.cast(y_true == y_pred, tf.float32)
return tf.reduce_mean(result)
model.compile(
loss=tf.keras.losses.Hinge(),
optimizer=tf.keras.optimizers.Adam(),
metrics=[hinge_accuracy])
print(model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
pqc_1 (PQC) (None, 1) 32
=================================================================
Total params: 32
Trainable params: 32
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
2.3 Training and Evaluating QNN
###Code
EPOCHS = 3
BATCH_SIZE = 32
NUM_EXAMPLES = len(x_train_tfcirc)
x_train_tfcirc_sub = x_train_tfcirc[:NUM_EXAMPLES]
y_train_hinge_sub = y_train_hinge[:NUM_EXAMPLES]
qnn_history = model.fit(
x_train_tfcirc_sub, y_train_hinge_sub,
batch_size=32,
epochs=EPOCHS,
verbose=1,
validation_data=(x_test_tfcirc, y_test_hinge))
qnn_results = model.evaluate(x_test_tfcirc, y_test)
###Output
Train on 6992 samples, validate on 2115 samples
Epoch 1/3
6992/6992 [==============================] - 1503s 215ms/sample - loss: 0.8408 - hinge_accuracy: 0.7129 - val_loss: 0.8269 - val_hinge_accuracy: 0.6006
Epoch 2/3
6992/6992 [==============================] - 1467s 210ms/sample - loss: 0.4951 - hinge_accuracy: 0.8146 - val_loss: 0.5883 - val_hinge_accuracy: 0.6248
Epoch 3/3
6992/6992 [==============================] - 2043s 292ms/sample - loss: 0.4454 - hinge_accuracy: 0.8473 - val_loss: 0.5374 - val_hinge_accuracy: 0.9067
2115/2115 [==============================] - 19s 9ms/sample - loss: 0.5374 - hinge_accuracy: 0.9067
###Markdown
3. Saving QNN Model
###Code
model.save_weights("weights_MNIST.tf",save_format='tf')
###Output
_____no_output_____
###Markdown
4. Making GUI using tkinter 4.1 Importing more required libraries
###Code
import tkinter as tk
import cv2
from PIL import ImageTk,Image,ImageDraw
###Output
_____no_output_____
###Markdown
4.2 Defining GUI and functions required for it
###Code
def save():
global count
img_array=np.array(img)
img_array=cv2.resize(img_array,(100,100))
cv2.imwrite(str(count)+'.jpg',img_array)
count=count+1
def clear():
global img,img_draw
canvas.delete('all')
img=Image.new('RGB',(400,400),(0,0,0))
img_draw=ImageDraw.Draw(img)
label_status.config(text='Prediction: None')
def predict():
    img_array=np.array(img)
    img_array=cv2.cvtColor(img_array,cv2.COLOR_BGR2GRAY)
    img_array=cv2.resize(img_array,(4,4))
    img_array=img_array/255.0
    # binarise with the same 0.5 threshold used when encoding the training data
    img_array=np.array(img_array>0.5,dtype=np.float32)
    img_array=img_array.reshape(1,4,4)
    img_circ=convert_to_circuit(img_array)
    img_tfcirc=tfq.convert_to_tensor([img_circ])
    result=model.predict(img_tfcirc)
    # the PQC readout lies in [-1, 1]; +1 was the hinge label for digit 0, -1 for digit 1
    label=0 if result[0][0]>0 else 1
    label_status.config(text='Prediction: '+str(label))
def canvas_function(event):
x=event.x
y=event.y
x1=x-10
y1=y-10
x2=x+10
y2=y+10
canvas.create_oval((x1,y1,x2,y2),fill='black')
img_draw.ellipse((x1,y1,x2,y2),fill='white')
###Output
_____no_output_____
###Markdown
5. Integrating GUI with QNN Model 5.1 Rebuilding Architecture and loading the model weights
###Code
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=(), dtype=tf.string),
tfq.layers.PQC(model_circuit, model_readout),
])
model.add(tf.keras.layers.Flatten(input_shape=(4,4)))
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
model.load_weights('weights_MNIST.tf')
###Output
_____no_output_____
###Markdown
5.2 Final Integration
###Code
count=0
win=tk.Tk()
win.title('Binary Classifier')
canvas=tk.Canvas(win,width=400,height=400,bg='white')
canvas.grid(row=0,column=0,columnspan=4)
button_save=tk.Button(win,text='Save Image',bg='green',fg='white',font='Cambria 22 bold',command=save)
button_save.grid(row=1,column=0)
button_predict=tk.Button(win,text='Predict',bg='magenta',fg='white',font='Cambria 22 bold',command=predict)
button_predict.grid(row=1,column=1)
button_clear=tk.Button(win,text='Clear',bg='gold',fg='white',font='Cambria 22 bold',command=clear)
button_clear.grid(row=1,column=2)
button_exit=tk.Button(win,text='EXIT',bg='red',fg='white',font='Cambria 22 bold',command=win.destroy)
button_exit.grid(row=1,column=3)
label_status=tk.Label(win,text='Prediction: None',bg='white',font='Verdana 24 bold')
label_status.grid(row=2,column=0,columnspan=4)
canvas.bind('<B1-Motion>',canvas_function)
img=Image.new('RGB',(400,400),(0,0,0))
img_draw=ImageDraw.Draw(img)
win.mainloop()
###Output
_____no_output_____ |
.ipynb_checkpoints/G2_data_visualization-checkpoint.ipynb | ###Markdown
G2 - Degree of life satisfaction (Grado di soddisfazione della vita)
###Code
# Libraries
import os
import pandas as pd
import numpy as np
import folium
import matplotlib.pyplot as plt
plt.style.use('ggplot')
get_ipython().magic('pylab inline')
# Input/Output folders
dir_df = os.path.join(os.path.abspath(''),'stg')
dir_out = os.path.join(os.path.abspath(''),'output')
df_g2_filename = r'df_g2.pkl'
df_g2_fullpath = os.path.join(dir_df, df_g2_filename)
df_g2 = pd.read_pickle(df_g2_fullpath)
df_g2 = df_g2[df_g2['Territorio']!='Italia']
df_g2['Popolazione'] = df_g2['Popolazione']/100000
# Report G2
tp = df_g2.plot(
x='Reddito pro capite',
y='Gradio di soddisfazione per la vita',
s=df_g2['Popolazione'],
kind='scatter',
xlim=(0,75000),
ylim=(0,10),
legend = False)
for i, txt in enumerate(df_g2.Territorio):
tp.annotate(txt, (df_g2['Reddito pro capite'].iat[i]*1.070,df_g2['Gradio di soddisfazione per la vita'].iat[i]))
tp.plot()
fig_prj = tp.get_figure()
fig_prj.tight_layout()
fig_prj.savefig(os.path.join(dir_out,'g2.png'), format='png', dpi=300)
###Output
_____no_output_____ |
Autism Screening with Machine Learning.ipynb | ###Markdown
Childhood Autistic Spectrum Disorder Screening using Machine Learning The early diagnosis of neurodevelopmental disorders can improve treatment and significantly decrease the associated healthcare costs. In this project, we will use supervised learning to diagnose Autistic Spectrum Disorder (ASD) based on behavioural features and individual characteristics. More specifically, we will build and deploy a neural network using the Keras API. This project will use a dataset provided by the UCI Machine Learning Repository that contains screening data for 292 patients. The dataset can be found at the following URL: https://archive.ics.uci.edu/ml/datasets/Autistic+Spectrum+Disorder+Screening+Data+for+Children++ Let's dive right in! First, we will import a few of the libraries we will use in this project.
###Code
import sys
import pandas as pd
import sklearn
import keras
print('Python: {}'.format(sys.version))
print('Pandas: {}'.format(pd.__version__))
print('Sklearn: {}'.format(sklearn.__version__))
print('Keras: {}'.format(keras.__version__))
###Output
Using Theano backend.
WARNING (theano.tensor.blas): Using NumPy C-API based implementation for BLAS functions.
###Markdown
1. Importing the Dataset
We will obtain the data from the UCI Machine Learning Repository; however, since the data isn't contained in a csv or txt file, we will have to download the compressed zip file and then extract the data manually. Once that is accomplished, we will read the information in from a text file using Pandas.
###Code
# import the dataset
file = 'C:/users/brend/tutorial/autism-data.txt'
# read the csv
data = pd.read_table(file, sep = ',', index_col = None)
# print the shape of the DataFrame, so we can see how many examples we have
print('Shape of DataFrame: {}'.format(data.shape))
print(data.loc[0])
# print out multiple patients at the same time
data.loc[:10]
# print out a description of the dataframe
data.describe()
###Output
_____no_output_____
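###Markdown
If you prefer to script the manual download step, the cell below is one possible sketch; it is not part of the original workflow and assumes a Python 3 environment. The `zip_url` value is a placeholder: copy the actual archive link from the UCI dataset page into it before running.
###Code
import urllib.request
import zipfile

# hypothetical placeholder: paste the real zip link from the UCI dataset page here
zip_url = 'https://example.com/autism-screening-child.zip'
zip_path = 'autism-screening-child.zip'
# download the archive, then unpack it next to the notebook
urllib.request.urlretrieve(zip_url, zip_path)
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall('autism-data')
    print('Extracted files:', archive.namelist())
###Output
_____no_output_____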
###Markdown
2. Data Preprocessing
This dataset is going to require multiple preprocessing steps. First, we have columns in our DataFrame (attributes) that we don't want to use when training our neural network. We will drop these columns first. Secondly, much of our data is reported using strings; as a result, we will convert our data to categorical labels. During our preprocessing, we will also split the dataset into X and Y datasets, where X has all of the attributes we want to use for prediction and Y has the class labels.
###Code
# drop unwanted columns
data = data.drop(['result', 'age_desc'], axis=1)
data.loc[:10]
# create X and Y datasets for training
x = data.drop(['class'], axis=1)
y = data['class']
x.loc[:10]
# convert the data to categorical values - one-hot-encoded vectors
X = pd.get_dummies(x)
# print the new categorical column labels
X.columns.values
# print an example patient from the categorical data
X.loc[1]
# convert the class data to categorical values - one-hot-encoded vectors
Y = pd.get_dummies(y)
Y.iloc[:10]
###Output
_____no_output_____
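###Markdown
To make the one-hot encoding step above a little more concrete, here is a tiny illustrative sketch (not from the original notebook) of what pd.get_dummies does to a single categorical column.
###Code
# toy example: one string column becomes one indicator column per category
toy = pd.DataFrame({'jaundice': ['yes', 'no', 'yes']})
print(pd.get_dummies(toy))  # produces indicator columns jaundice_no and jaundice_yes
###Output
_____no_output_____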
###Markdown
3. Split the Dataset into Training and Testing Datasets
Before we can begin training our neural network, we need to split the dataset into training and testing datasets. This will allow us to test our network after we are done training to determine how well it will generalize to new data. This step is incredibly easy when using the train_test_split() function provided by scikit-learn!
###Code
from sklearn import model_selection
# split the X and Y data into training and testing datasets
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size = 0.2)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
###Output
(233, 96)
(59, 96)
(233, 2)
(59, 2)
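###Markdown
A small optional variation (not in the original notebook): passing random_state makes the split reproducible, and stratify keeps the class proportions similar in the training and testing sets, which can matter for a dataset of this size.
###Code
# reproducible, class-balanced split; y holds the original string class labels
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(
    X, Y, test_size=0.2, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)
###Output
_____no_output_____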
###Markdown
4. Building the Network - Keras
In this project, we are going to use Keras to build and train our network. This model will be relatively simple and will only use dense (also known as fully connected) layers. This is the most common neural network layer. The network will have two small hidden layers, use an Adam optimizer, and a categorical crossentropy loss. We won't worry about optimizing parameters such as learning rate, number of neurons in each layer, or activation functions in this project; however, if you have the time, manually adjusting these parameters and observing the results is a great way to learn about their function!
###Code
# build a neural network using Keras
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
# define a function to build the keras model
def create_model():
# create model
model = Sequential()
model.add(Dense(8, input_dim=96, kernel_initializer='normal', activation='relu'))
model.add(Dense(4, kernel_initializer='normal', activation='relu'))
model.add(Dense(2, activation='sigmoid'))
# compile model
adam = Adam(lr=0.001)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
return model
model = create_model()
print(model.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 8) 776
_________________________________________________________________
dense_2 (Dense) (None, 4) 36
_________________________________________________________________
dense_3 (Dense) (None, 2) 10
=================================================================
Total params: 822
Trainable params: 822
Non-trainable params: 0
_________________________________________________________________
None
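###Markdown
As a quick sanity check on the summary above: a dense layer with n_in inputs and n_out units has n_in x n_out weights plus n_out biases, so the three layers contribute 96 x 8 + 8 = 776, 8 x 4 + 4 = 36 and 4 x 2 + 2 = 10 parameters, for a total of 822.
###Code
# recompute the parameter counts reported by model.summary()
layer_sizes = [(96, 8), (8, 4), (4, 2)]
params = [n_in * n_out + n_out for n_in, n_out in layer_sizes]
print(params, 'total =', sum(params))  # [776, 36, 10] total = 822
###Output
_____no_output_____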
###Markdown
5. Training the Network
Now it's time for the fun! Training a Keras model is as simple as calling model.fit().
###Code
# fit the model to the training data
model.fit(X_train, Y_train, epochs=50, batch_size=10, verbose = 1)
###Output
Epoch 1/50
233/233 [==============================] - 0s 288us/step - loss: 0.6927 - acc: 0.5794
Epoch 2/50
233/233 [==============================] - 0s 245us/step - loss: 0.6910 - acc: 0.7210
Epoch 3/50
233/233 [==============================] - 0s 258us/step - loss: 0.6868 - acc: 0.7639
Epoch 4/50
233/233 [==============================] - 0s 236us/step - loss: 0.6779 - acc: 0.7082
Epoch 5/50
233/233 [==============================] - 0s 236us/step - loss: 0.6619 - acc: 0.8541
Epoch 6/50
233/233 [==============================] - 0s 305us/step - loss: 0.6340 - acc: 0.8283
Epoch 7/50
233/233 [==============================] - 0s 227us/step - loss: 0.5963 - acc: 0.8541
Epoch 8/50
233/233 [==============================] - 0s 305us/step - loss: 0.5446 - acc: 0.9399
Epoch 9/50
233/233 [==============================] - 0s 240us/step - loss: 0.4884 - acc: 0.8884
Epoch 10/50
233/233 [==============================] - 0s 227us/step - loss: 0.4220 - acc: 0.9227
Epoch 11/50
233/233 [==============================] - 0s 322us/step - loss: 0.3603 - acc: 0.9313
Epoch 12/50
233/233 [==============================] - 0s 245us/step - loss: 0.2935 - acc: 0.9614
Epoch 13/50
233/233 [==============================] - 0s 296us/step - loss: 0.2528 - acc: 0.9657
Epoch 14/50
233/233 [==============================] - 0s 330us/step - loss: 0.2087 - acc: 0.9657
Epoch 15/50
233/233 [==============================] - 0s 305us/step - loss: 0.1788 - acc: 0.9871
Epoch 16/50
233/233 [==============================] - 0s 313us/step - loss: 0.1605 - acc: 0.9700
Epoch 17/50
233/233 [==============================] - 0s 309us/step - loss: 0.1389 - acc: 0.9828
Epoch 18/50
233/233 [==============================] - 0s 335us/step - loss: 0.1258 - acc: 0.9785
Epoch 19/50
233/233 [==============================] - 0s 343us/step - loss: 0.1108 - acc: 0.9871
Epoch 20/50
233/233 [==============================] - 0s 399us/step - loss: 0.1004 - acc: 0.9871
Epoch 21/50
233/233 [==============================] - 0s 416us/step - loss: 0.0910 - acc: 0.9871
Epoch 22/50
233/233 [==============================] - 0s 343us/step - loss: 0.0820 - acc: 0.9871
Epoch 23/50
233/233 [==============================] - 0s 361us/step - loss: 0.0752 - acc: 0.9914
Epoch 24/50
233/233 [==============================] - 0s 356us/step - loss: 0.0714 - acc: 0.9957
Epoch 25/50
233/233 [==============================] - 0s 309us/step - loss: 0.0634 - acc: 0.9957
Epoch 26/50
233/233 [==============================] - 0s 339us/step - loss: 0.0585 - acc: 0.9957
Epoch 27/50
233/233 [==============================] - 0s 335us/step - loss: 0.0571 - acc: 1.0000
Epoch 28/50
233/233 [==============================] - 0s 429us/step - loss: 0.0526 - acc: 0.9957
Epoch 29/50
233/233 [==============================] - 0s 335us/step - loss: 0.0474 - acc: 1.0000
Epoch 30/50
233/233 [==============================] - 0s 322us/step - loss: 0.0463 - acc: 0.9957
Epoch 31/50
233/233 [==============================] - 0s 296us/step - loss: 0.0431 - acc: 1.0000
Epoch 32/50
233/233 [==============================] - 0s 348us/step - loss: 0.0381 - acc: 1.0000
Epoch 33/50
233/233 [==============================] - 0s 322us/step - loss: 0.0357 - acc: 1.0000
Epoch 34/50
233/233 [==============================] - 0s 292us/step - loss: 0.0331 - acc: 1.0000
Epoch 35/50
233/233 [==============================] - 0s 305us/step - loss: 0.0316 - acc: 1.0000
Epoch 36/50
233/233 [==============================] - 0s 335us/step - loss: 0.0294 - acc: 1.0000
Epoch 37/50
233/233 [==============================] - 0s 322us/step - loss: 0.0282 - acc: 1.0000
Epoch 38/50
233/233 [==============================] - 0s 236us/step - loss: 0.0281 - acc: 1.0000
Epoch 39/50
233/233 [==============================] - 0s 339us/step - loss: 0.0253 - acc: 1.0000
Epoch 40/50
233/233 [==============================] - 0s 223us/step - loss: 0.0252 - acc: 1.0000
Epoch 41/50
233/233 [==============================] - 0s 326us/step - loss: 0.0226 - acc: 1.0000
Epoch 42/50
233/233 [==============================] - 0s 326us/step - loss: 0.0213 - acc: 1.0000
Epoch 43/50
233/233 [==============================] - 0s 219us/step - loss: 0.0203 - acc: 1.0000
Epoch 44/50
233/233 [==============================] - 0s 215us/step - loss: 0.0193 - acc: 1.0000
Epoch 45/50
233/233 [==============================] - 0s 318us/step - loss: 0.0190 - acc: 1.0000
Epoch 46/50
233/233 [==============================] - 0s 232us/step - loss: 0.0176 - acc: 1.0000
Epoch 47/50
233/233 [==============================] - 0s 215us/step - loss: 0.0163 - acc: 1.0000
Epoch 48/50
233/233 [==============================] - 0s 202us/step - loss: 0.0161 - acc: 1.0000
Epoch 49/50
233/233 [==============================] - 0s 240us/step - loss: 0.0154 - acc: 1.0000
Epoch 50/50
233/233 [==============================] - 0s 223us/step - loss: 0.0150 - acc: 1.0000
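###Markdown
The log above reaches a training accuracy of 1.0 on only 233 examples, which is a hint that the network may simply be memorizing the training set. One optional way to keep an eye on this (not part of the original notebook) is to hold out a validation fraction and stop training when the validation loss stops improving.
###Code
from keras.callbacks import EarlyStopping
# refit with a validation split and early stopping on the validation loss
model = create_model()
early_stop = EarlyStopping(monitor='val_loss', patience=5)
model.fit(X_train, Y_train, validation_split=0.2, epochs=50,
          batch_size=10, verbose=0, callbacks=[early_stop])
###Output
_____no_output_____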
###Markdown
6. Testing and Performance Metrics
Now that our model has been trained, we need to test its performance on the testing dataset. The model has never seen this information before; as a result, the testing dataset allows us to determine whether or not the model will be able to generalize to information that wasn't used during its training phase. We will use some of the metrics provided by scikit-learn for this purpose!
###Code
# generate classification report using predictions for categorical model
from sklearn.metrics import classification_report, accuracy_score
predictions = model.predict_classes(X_test)
predictions
print('Results for Categorical Model')
print(accuracy_score(Y_test[['YES']], predictions))
print(classification_report(Y_test[['YES']], predictions))
###Output
Results for Categorical Model
0.9661016949152542
precision recall f1-score support
0 0.97 0.97 0.97 36
1 0.96 0.96 0.96 23
avg / total 0.97 0.97 0.97 59
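###Markdown
Beyond accuracy and the classification report, a confusion matrix is a compact way to see which class the few mistakes fall into. This is an optional extra, not part of the original notebook.
###Code
from sklearn.metrics import confusion_matrix
# rows are the true classes (0 = no ASD, 1 = ASD), columns are the predicted classes
print(confusion_matrix(Y_test['YES'], predictions))
###Output
_____no_output_____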
|
d2l/chapter_recurrent-neural-networks/rnn.ipynb | ###Markdown
Recurrent Neural Networks
:label:`sec_rnn`

In :numref:`sec_language_model` we introduced $n$-gram models, in which the conditional probability of the word $x_t$ at time step $t$ depends only on the $n-1$ previous words. If we want to incorporate the possible effect of words earlier than time step $t-(n-1)$ on $x_t$, we need to increase $n$; however, the number of model parameters then grows exponentially with it, since the vocabulary $\mathcal{V}$ requires us to store $|\mathcal{V}|^n$ numbers. Hence, rather than modeling $P(x_t \mid x_{t-1}, \ldots, x_{t-n+1})$, it is preferable to use a latent variable model: $$P(x_t \mid x_{t-1}, \ldots, x_1) \approx P(x_t \mid h_{t-1}),$$ where $h_{t-1}$ is a *hidden state* (also called a hidden variable) that stores the sequence information up to time step $t-1$. In general, the hidden state at any time step $t$ can be computed from the current input $x_{t}$ together with the previous hidden state $h_{t-1}$: $$h_t = f(x_{t}, h_{t-1}).$$ :eqlabel:`eq_ht_xt` For a sufficiently powerful function $f$ in :eqref:`eq_ht_xt`, the latent variable model is not an approximation: after all, $h_t$ could simply store all the data observed so far, although this could make both computation and storage expensive. Recall the hidden layers with hidden units that we discussed in :numref:`chap_perceptrons`. It is worth noting that hidden layers and hidden states are two very different concepts. Hidden layers are layers that are hidden from view on the path from input to output, while hidden states are, technically speaking, *inputs* to whatever we do at a given step, and they can only be computed from data at previous time steps. *Recurrent neural networks* (RNNs) are neural networks with hidden states. Before introducing the RNN model, we first revisit the multilayer perceptron model introduced in :numref:`sec_mlp`.

Neural Networks without Hidden States

Let us take a look at a multilayer perceptron with a single hidden layer. Let the hidden layer's activation function be $\phi$. Given a minibatch of examples $\mathbf{X} \in \mathbb{R}^{n \times d}$ with batch size $n$ and $d$ inputs, the hidden layer output $\mathbf{H} \in \mathbb{R}^{n \times h}$ is computed as $$\mathbf{H} = \phi(\mathbf{X} \mathbf{W}_{xh} + \mathbf{b}_h).$$ :eqlabel:`rnn_h_without_state` In :eqref:`rnn_h_without_state` we have the hidden-layer weight parameter $\mathbf{W}_{xh} \in \mathbb{R}^{d \times h}$, the bias parameter $\mathbf{b}_h \in \mathbb{R}^{1 \times h}$, and the number of hidden units $h$; broadcasting (see :numref:`subsec_broadcasting`) is applied during the summation. Next, the hidden variable $\mathbf{H}$ is used as the input of the output layer, which is given by $$\mathbf{O} = \mathbf{H} \mathbf{W}_{hq} + \mathbf{b}_q,$$ where $\mathbf{O} \in \mathbb{R}^{n \times q}$ is the output variable, $\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}$ is the weight parameter, and $\mathbf{b}_q \in \mathbb{R}^{1 \times q}$ is the bias parameter of the output layer. For a classification problem, we can use $\text{softmax}(\mathbf{O})$ to compute the probability distribution of the output categories. This is entirely analogous to the regression problem we solved earlier in :numref:`sec_sequence`, so we omit the details; suffice it to say that we can pick feature-label pairs at random and learn the parameters of our network via automatic differentiation and stochastic gradient descent.

Recurrent Neural Networks with Hidden States
:label:`subsec_rnn_w_hidden_states`

Matters are entirely different when we have hidden states. Let us look at the structure in more detail. Assume that we have a minibatch of inputs $\mathbf{X}_t \in \mathbb{R}^{n \times d}$ at time step $t$; in other words, for a minibatch of $n$ sequence examples, each row of $\mathbf{X}_t$ corresponds to one example at time step $t$ from the sequence. Next, denote by $\mathbf{H}_t \in \mathbb{R}^{n \times h}$ the hidden variable of time step $t$. Unlike the multilayer perceptron, here we save the hidden variable $\mathbf{H}_{t-1}$ from the previous time step and introduce a new weight parameter $\mathbf{W}_{hh} \in \mathbb{R}^{h \times h}$ that describes how to use the hidden variable of the previous time step in the current time step. Specifically, the hidden variable of the current time step is determined by the input of the current time step together with the hidden variable of the previous time step: $$\mathbf{H}_t = \phi(\mathbf{X}_t \mathbf{W}_{xh} + \mathbf{H}_{t-1} \mathbf{W}_{hh} + \mathbf{b}_h).$$ :eqlabel:`rnn_h_with_state` Compared with :eqref:`rnn_h_without_state`, :eqref:`rnn_h_with_state` adds one more term $\mathbf{H}_{t-1} \mathbf{W}_{hh}$ and thereby instantiates :eqref:`eq_ht_xt`. From the relationship between the hidden variables $\mathbf{H}_t$ and $\mathbf{H}_{t-1}$ of adjacent time steps, we know that these variables capture and retain the sequence's historical information up to the current time step, just like the state or memory of the network at that step; such a hidden variable is therefore called a *hidden state*. Since the hidden state uses the same definition in the current time step as in the previous one, the computation in :eqref:`rnn_h_with_state` is *recurrent*, and neural networks with hidden states based on this recurrent computation are named *recurrent neural networks*. Layers that perform the computation of :eqref:`rnn_h_with_state` are called *recurrent layers*. There are many different ways to construct RNNs; RNNs with a hidden state defined by :eqref:`rnn_h_with_state` are very common. For time step $t$, the output of the output layer is similar to the computation in the multilayer perceptron: $$\mathbf{O}_t = \mathbf{H}_t \mathbf{W}_{hq} + \mathbf{b}_q.$$ The parameters of an RNN include the weights $\mathbf{W}_{xh} \in \mathbb{R}^{d \times h}, \mathbf{W}_{hh} \in \mathbb{R}^{h \times h}$ and the bias $\mathbf{b}_h \in \mathbb{R}^{1 \times h}$ of the hidden layer, together with the weights $\mathbf{W}_{hq} \in \mathbb{R}^{h \times q}$ and the bias $\mathbf{b}_q \in \mathbb{R}^{1 \times q}$ of the output layer. It is worth mentioning that an RNN uses these same model parameters at every time step, so the parameterization cost of an RNN does not grow as the number of time steps increases.

![An RNN with a hidden state.](../img/rnn.svg)

:numref:`fig_rnn`
illustrates the computational logic of an RNN at three adjacent time steps. At any time step $t$, the computation of the hidden state can be treated as: (1) concatenating the input $\mathbf{X}_t$ of the current time step $t$ with the hidden state $\mathbf{H}_{t-1}$ of the previous time step $t-1$; (2) feeding the concatenated result into a fully connected layer with activation function $\phi$. The output of this fully connected layer is the hidden state $\mathbf{H}_t$ of the current time step $t$. In this case, the model parameters are the concatenation of $\mathbf{W}_{xh}$ and $\mathbf{W}_{hh}$, together with the bias $\mathbf{b}_h$, all from :eqref:`rnn_h_with_state`. The hidden state $\mathbf{H}_t$ of the current time step $t$ participates in computing the hidden state $\mathbf{H}_{t+1}$ of the next time step $t+1$, and $\mathbf{H}_t$ is also fed into the fully connected output layer to compute the output $\mathbf{O}_t$ of the current time step $t$.
:label:`fig_rnn`

We just mentioned that the computation of $\mathbf{X}_t \mathbf{W}_{xh} + \mathbf{H}_{t-1} \mathbf{W}_{hh}$ for the hidden state is equivalent to the matrix multiplication of the concatenation of $\mathbf{X}_t$ and $\mathbf{H}_{t-1}$ with the concatenation of $\mathbf{W}_{xh}$ and $\mathbf{W}_{hh}$. Although this can be proven mathematically, below we simply use a small code snippet to illustrate it. First, we define the matrices `X`, `W_xh`, `H`, and `W_hh`, whose shapes are $(3,1)$, $(1,4)$, $(3,4)$, and $(4,4)$, respectively. Multiplying `X` by `W_xh` and `H` by `W_hh`, and then adding these two products, we obtain a matrix of shape $(3,4)$.
###Code
import torch
from d2l import torch as d2l
X, W_xh = torch.normal(0, 1, (3, 1)), torch.normal(0, 1, (1, 4))
H, W_hh = torch.normal(0, 1, (3, 4)), torch.normal(0, 1, (4, 4))
torch.matmul(X, W_xh) + torch.matmul(H, W_hh)
###Output
_____no_output_____
###Markdown
Now, we concatenate the matrices `X` and `H` along columns (axis 1), and the matrices `W_xh` and `W_hh` along rows (axis 0). These two concatenations produce matrices of shape $(3, 5)$ and of shape $(5, 4)$, respectively. Multiplying these two concatenated matrices, we obtain the same output matrix of shape $(3, 4)$ as above.
###Code
torch.matmul(torch.cat((X, H), 1), torch.cat((W_xh, W_hh), 0))
###Output
_____no_output_____ |
ColAppTextProcDemo.ipynb | ###Markdown
Webscrape college applications
This notebook demonstrates:
1. Pre-processing
2. Counting word occurrences
3. Making individual dataframes
4. Merging those dataframes
Inspired by: https://www.kdnuggets.com/2018/03/text-data-preprocessing-walkthrough-python.html
###Code
import re, string, unicodedata
import nltk
import contractions
import inflect
import pandas as pd
from pandas import Series, DataFrame
from operator import itemgetter
from bs4 import BeautifulSoup
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import LancasterStemmer, WordNetLemmatizer
###Output
_____no_output_____
###Markdown
Define Helper Functions
###Code
def strip_html(text):
soup = BeautifulSoup(text, "html.parser")
return soup.get_text()
def stem_words(words):
# Stem words in list of tokenized words
stemmer = LancasterStemmer()
stems = []
for word in words:
stem = stemmer.stem(word)
stems.append(stem)
return stems
def lemmatize_verbs(words):
# Lemmatize verbs in list of tokenized words
lemmatizer = WordNetLemmatizer()
lemmas = []
for word in words:
lemma = lemmatizer.lemmatize(word, pos='v')
lemmas.append(lemma)
return lemmas
def remove_pmarks(text):
text = re.sub(r'~|`|:|;|"|,', '', text)
text = str.replace(text, '"', '')
text = str.replace(text, "'", '')
text = str.replace(text, '.', '')
text = str.replace(text, '?', '')
return text
def remove_common_words(text):
words_to_drop = ['a','is','it','of','in','at','to','the']
for word in words_to_drop:
text = re.sub(''.join((r'\b', word, r'\b')), '', text)
return text
def combine_cword(text):
compound_words = {
'social security number': 'socialsecuritynumber',
'ssn': 'socialsecuritynumber',
'high school': 'highschool'
}
for cword in compound_words:
text = re.sub(''.join((r'\b', cword, r'\b')), compound_words[cword], text)
return text
def get_words(text):
return nltk.word_tokenize(text)
def remove_non_ascii(words):
# Remove non-ASCII characters from list of tokenized words
new_words = []
for word in words:
new_word = unicodedata.normalize('NFKD', word).encode('ascii', 'ignore').decode('utf-8', 'ignore')
new_words.append(new_word)
return new_words
def replace_numbers(words):
# Replace all interger occurrences in list of tokenized words with textual representation
p = inflect.engine()
new_words = []
for word in words:
if word.isdigit():
new_word = p.number_to_words(word)
new_words.append(new_word)
else:
new_words.append(word)
return new_words
def normalize(text):
# Remove punctuation from entire string.
print('Remove punctuation.', end='')
text = remove_pmarks(text)
print(' DONE')
# Put to lowercase first
print('Converting case....', end='')
text = text.lower()
print(' DONE')
# Remove unwanted common words.
print('Remove freq words..', end='')
text = remove_common_words(text)
print(' DONE')
# Handle common application compound words
print('Handle compounds...', end='')
text = combine_cword(text)
print(' DONE')
# Tokenize the string.
print('Tokenize string....', end='')
words = get_words(text)
print(' DONE')
# Remove ascii characters.
print('Remove non ascii...', end='')
words = remove_non_ascii(words)
print(' DONE')
# Convert numebrs to words.
print('Replace numbers....', end='')
words = replace_numbers(words)
print(' DONE')
return words
def get_sorted_count(words):
index_list = sorted(set(words))
count_list = list(range(len(index_list)))
for i in range(len(index_list)):
count_list[i] = 0
for word in words:
if word == index_list[i]:
count_list[i] = count_list[i] + 1
grand_list = []
for i in range(len(index_list)):
item_list = []
item_list.append(index_list[i])
item_list.append(count_list[i])
grand_list.append(item_list)
return sorted(grand_list, key=itemgetter(0), reverse=True)
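# Note (added sketch, not in the original walkthrough): the nested counting loop above is
# quadratic in the vocabulary size. collections.Counter produces the same word counts in a
# single pass; sorting its items reproduces the [word, count] structure used below.
from collections import Counter

def get_sorted_count_fast(words):
    # count every token once, then sort alphabetically in reverse like get_sorted_count
    counts = Counter(words)
    return sorted(([word, count] for word, count in counts.items()),
                  key=itemgetter(0), reverse=True)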
some_test_text = '''Here are a few words that I will use to test.
high
Don't think this is the end of it.
high school
Social Security Number
ssn 'Here are a few words that I will use to test.
high
Don't think this is the end of it.
high school
SSN
high school
social security number
1. Hi hi hi
2. Three four five
6. What?'''
more_test_text = '''Cookie information can be found through the use of a
high school SSN high school plug-in for your web browser. (I use
'Cookie Manager' on FireFox, although there are many other options
for FireFox and other browsers). The two cookies you are looking
for are called Y and T, and they are linked to the domain yahoo.com.
Extract the data from these cookies, and paste it into the appropriate
variables... a cookie will expire after a certain amount of time,
which varies between computers. This means that you may have to
re-fetch the Y and T cookie data every few days, or you will not be
able to archive private groups. 'Here are a few words that I
will use to test. high Don't think this is the end of it. high school'''
print(sorted(set(normalize(some_test_text))))
print(sorted(set(normalize(more_test_text))))
for vname in ['some_test_text', 'more_test_text']:
vars()[vname + '_df'] = DataFrame(get_sorted_count(normalize(vars()[vname])), columns=['word','freq'])
some_test_text_df['freq1'] = some_test_text_df['freq']
some_test_text_df = some_test_text_df.drop(columns=['freq'])
print(some_test_text_df.head())
more_test_text_df['freq2'] = more_test_text_df['freq']
more_test_text_df = more_test_text_df.drop(columns=['freq'])
print(more_test_text_df.head())
pd.merge(some_test_text_df,more_test_text_df,on='word',how='outer').head()
###Output
_____no_output_____ |
11-dqn/cartpole_dqn.ipynb | ###Markdown
Deep Q networksWe have already seen [Q learning](../../1-grid-world/7-q-learning/q_learning_agent.ipynb) in the previous examples.Q learning is a special algorithm that provides an off policy method for Temporal Difference style control.Deep Q network is just an extension of Q learning that uses neural networks, i.e. a *non-linear* function as state-action value function approximator.This is done to scale up to decision making in really large domains of huge state spaces.Deep Q network in particular has proven to achieve great results in games, as we will see in this example too. Characteristics of deep Q networksDeep Q network inherits all the characteristics of [Q learning](../7-q-learning/q_learning_agent.ipynb), except that it allows us to solve problems with continuous or very large state and action spaces. Neural Network as value function approximatorThis can be any neural network architecture that we see fit.In this example, we are using a very simple Dense Neural Network architecture.Dense Neural Networks are simply a *stack of matrices with activation functions in between*. Replay memoryEach tuple of $(s_t, a_t, r, s_{t+1})$ is recorded in a replay memory.As a replay memory we use a FIFO dequeue of constant size. Batch trainingEach step taken by the agent, we feed a *random* sample of tuples from the replay memory to the neural network in order to do *batch training*. Main model and Target modelWe keep two neural network models in our implementation.The reason we keep a second one, called the target model, is to provide some **stability** while training. - with each step taken we update only the first main model. - after some time interval (in this case, every episode), we update the second target model to be the same with the main model. Continuous or very large state and action spacesUsing a neural network as a value function approximator has its benefits.It allows us to solve problems with continuous state and action spaces.Such problems would be almost impossible to solve with normal Q learning, as they are very resource hungry. InitializationFor the Q learning aspect we keep track of the following: - In order to showcase how robust off policy algorithms like Q learning are, we are going to keep the epsilon and learning rate constant - `self.learning_rate` is set to $0.001$ - `self.epsilon` is initially set to $1.0$ and decays each step taken via the variable `self.epsilon_decay = 0.999` until a minimum of `self.epsilon_min = 0.01` is reached - `self.discount_factor` is set to $0.99$ - We keep track of a **replay memory** of size $2000$ via the following `self.memory = deque(maxlen=2000)`For the neural network aspect we keep track of the following: - we keep track of a model (neural network) in `self.model = self.build_model()` - we also keep track of a target model `self.target_model = self.build_model()`. After some time interval we update the target model to be the same with the main model to provide some stability. - we only train after there are at least $1000$ entries in the replay memory. We specify this at `self.train_start = 1000`.
###Code
import sys
import gym
import pylab
import random
import numpy as np
from collections import deque
from keras.layers import Dense
from keras.optimizers import Adam
from keras.models import Sequential
EPISODES = 300
# DQN Agent for the Cartpole
# it uses Neural Network to approximate q function
# and replay memory & target q network
class DQNAgent:
def __init__(self, state_size, action_size):
# if you want to see Cartpole learning, then change to True
self.render = False
self.load_model = False
# get size of state and action
self.state_size = state_size
self.action_size = action_size
# These are hyper parameters for the DQN
self.discount_factor = 0.99
self.learning_rate = 0.001
self.epsilon = 1.0
self.epsilon_decay = 0.999
self.epsilon_min = 0.01
self.batch_size = 64
self.train_start = 1000
# create replay memory using deque
# contains <s,a,r,s'> tuples
self.memory = deque(maxlen=2000)
# create main model and target model
self.model = self.build_model()
self.target_model = self.build_model()
# initialize target model
self.next_states_model()
if self.load_model:
self.model.load_weights("./save_model/cartpole_dqn.h5")
###Output
_____no_output_____
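###Markdown
Before looking at the update rule, here is a tiny standalone sketch (not part of the original agent) of why a deque with maxlen behaves like the FIFO replay memory described above: once it is full, appending a new transition silently drops the oldest one.
###Code
# miniature replay memory with room for only 3 transitions
tiny_memory = deque(maxlen=3)
for t in range(5):
    tiny_memory.append(('s%d' % t, 'a%d' % t, 0.0, 's%d' % (t + 1), False))
print(len(tiny_memory))   # 3 -> only the most recent transitions survive
print(tiny_memory[0][0])  # 's2' -> the two oldest entries were discarded
###Output
_____no_output_____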
###Markdown
Deep Q networkThe update rule for Q values in deep Q network is the following:$\hat{Q}^\pi(s_t, a_t) \gets^{train} R_t+ \gamma max_{a’} \hat{Q}^\pi(s_t ,a’)$ - $\hat{Q}^\pi(s_t, a_t)$ - *predicted* Q value of current state-action pair following the policy $\pi$ - $\gets^{train}$ - this means train the neural network accordingly, instead of *assign* the value - $\gamma$ - the **discount factor**. - $max_{a’} \hat{Q}^\pi(s_t ,a’)$ - maximization operator over the **predicted** Q values of all possible actions in the current stateFirst things first, the update formula is very similar to the update rule we saw in Q learning, although with the following differences: - instead of $Q^\pi$ we now deal with $\hat{Q}^\pi$, which is an approximation, the output of a neural network. - after calculating the result, we do not *assign* the value to $\hat{Q}^\pi$, but rather *train* the neural network with *gradient descent* in order to update the weights with the latest state-action value.Moreover, this seems like a simplified version of the update rule we saw for Q learning.Notice that the right hand side of the formula is simply a **bootstrapped return**.We use this value to update the network and we do not take into account the temporal difference between Q values.The reason behind this is that in Deep Q network, the presence of a neural network will mimic the temporal difference formula we saw on Q learning with a *learning rate*.In a neural network we update the weights via *backpropagation* in *gradient descent*.We also specify a **learning rate** in the process.That is why we do not need the explicit temporal difference aspect of Q learning anymore, since a similar implicit process is provided by backpropagation when we train the neural network with new data.Moreover, keep in mind these two characteristics of training that we also mentioned above: - replay memory - batch training - stability with target networks
###Code
class DQNAgent(DQNAgent):
# pick samples randomly from replay memory (with batch_size)
def train_model(self):
if len(self.memory) < self.train_start:
return
batch_size = min(self.batch_size, len(self.memory))
mini_batch = random.sample(self.memory, batch_size)
# get s (state) as input from mini_batch
# initialize with shape batch_size x state_size
curr_states = np.zeros((batch_size, self.state_size))
# get s' (next state) as input from mini_batch
# initialize with shape batch_size x state_size
next_states = np.zeros((batch_size, self.state_size))
action, reward, done = [], [], []
for i in range(self.batch_size):
# get s (state) as input from mini_batch
curr_states[i] = mini_batch[i][0]
action.append(mini_batch[i][1])
reward.append(mini_batch[i][2])
# get s' (next state) as input from mini_batch
next_states[i] = mini_batch[i][3]
done.append(mini_batch[i][4])
target = self.model.predict(curr_states)
target_val = self.target_model.predict(next_states)
for i in range(self.batch_size):
# Q Learning: get maximum Q value at s' from target model
if done[i]:
target[i][action[i]] = reward[i]
else:
# selection of best action is from *target* model
# evaluation is also from target model
target[i][action[i]] = reward[i] + self.discount_factor * (
np.amax(target_val[i]))
# make minibatch which includes target q value and predicted q value
# and do the model fit!
self.model.fit(curr_states, target, batch_size=self.batch_size,
epochs=1, verbose=0)
###Output
_____no_output_____
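###Markdown
To make the target computation in train_model concrete, here is a small worked example (with made-up numbers) of the bootstrapped return $R_t + \gamma \max_{a'} \hat{Q}(s_{t+1}, a')$ for a single non-terminal transition.
###Code
# made-up numbers: reward 1.0, discount 0.99, target-network Q values for the next state
reward_t = 1.0
discount_factor = 0.99
q_next = np.array([0.5, 2.0])  # estimated Q(s', a') for both CartPole actions
target = reward_t + discount_factor * np.amax(q_next)
print(target)  # 1.0 + 0.99 * 2.0 = 2.98
###Output
_____no_output_____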
###Markdown
The model: neural network
Neural networks are built one layer after the other. Our model is a dense neural network, i.e. a neural network comprised of only dense layers. Dense layers are simply a *set of matrices* with an *activation function* applied at the end.
###Code
class DQNAgent(DQNAgent):
# approximate Q function using Neural Network
# state is input and Q Value of each action is output of network
def build_model(self):
model = Sequential()
model.add(Dense(32, input_dim=self.state_size, activation='relu',
kernel_initializer='he_uniform'))
model.add(Dense(16, activation='relu',
kernel_initializer='he_uniform'))
model.add(Dense(self.action_size, activation='linear',
kernel_initializer='he_uniform'))
model.summary()
model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate))
return model
###Output
_____no_output_____
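###Markdown
The statement that a dense layer is just a matrix (plus a bias) followed by an activation can be checked directly with NumPy. The sketch below is not part of the agent; it pushes one fake CartPole state through a randomly initialized 4 -> 32 layer, mirroring the first Dense layer above.
###Code
# relu(x W + b) for a single fake state of dimension 4
x = np.random.randn(1, 4)
W, b = np.random.randn(4, 32), np.zeros(32)
hidden = np.maximum(0.0, x.dot(W) + b)  # shape (1, 32), same as Dense(32) output
print(hidden.shape)
###Output
_____no_output_____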
###Markdown
Setting target network for stability
###Code
class DQNAgent(DQNAgent):
# after some time interval update the target model to be same with model
def next_states_model(self):
self.target_model.set_weights(self.model.get_weights())
###Output
_____no_output_____
###Markdown
Helper methods
###Code
class DQNAgent(DQNAgent):
# get action from model using epsilon-greedy policy
def get_action(self, state):
if np.random.rand() <= self.epsilon:
return random.randrange(self.action_size)
else:
q_value = self.model.predict(state)
return np.argmax(q_value[0])
class DQNAgent(DQNAgent):
# save sample <s,a,r,s'> to the replay memory
def append_sample(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
if self.epsilon > self.epsilon_min:
self.epsilon *= self.epsilon_decay
###Output
_____no_output_____
###Markdown
Main loop
###Code
if __name__ == "__main__":
# In case of CartPole-v1, maximum length of episode is 500
env = gym.make('CartPole-v1')
# get size of state and action from environment
state_size = env.observation_space.shape[0]
action_size = env.action_space.n
agent = DQNAgent(state_size, action_size)
scores, episodes = [], []
for e in range(EPISODES):
done = False
score = 0
state = env.reset()
state = np.reshape(state, [1, state_size])
while not done:
if agent.render:
env.render()
# get action for the current state and go one step in environment
action = agent.get_action(state)
next_state, reward, done, info = env.step(action)
next_state = np.reshape(next_state, [1, state_size])
# if an action make the episode end, then gives penalty of -100
reward = reward if not done or score == 499 else -100
# save the sample <s, a, r, s'> to the replay memory
agent.append_sample(state, action, reward, next_state, done)
# every time step do the training
agent.train_model()
score += reward
state = next_state
if done:
# every episode update the target model to be same with model
agent.next_states_model()
# every episode, plot the play time
score = score if score == 500 else score + 100
scores.append(score)
episodes.append(e)
pylab.plot(episodes, scores, 'b')
pylab.savefig("./save_graph/cartpole_dqn2.png")
print("episode:", e, " score:", score, " memory length:",
len(agent.memory), " epsilon:", agent.epsilon)
# if the mean of scores of last 10 episode is bigger than 490
# stop training
if np.mean(scores[-min(10, len(scores)):]) > 490:
sys.exit()
# save the model
if e % 50 == 0:
agent.model.save_weights("./save_model/cartpole_dqn2.h5")
###Output
_____no_output_____ |
trained_models/evalcassiepolicy.ipynb | ###Markdown
Time varying trajectory for one full phase.
###Code
from matplotlib import pyplot as plt
def plot_policy(data, title=None):
cassie_action = ["hip roll", "hip yaw", "hip pitch", "knee", "foot"]
# one row for each leg
plot_rows = 2
plot_cols = 10 // 2
fig, axes = plt.subplots(plot_rows, plot_cols, figsize=(20, 10))
if title is not None:
fig.suptitle(title, fontsize=16)
trj_len = len(data["s"])
for r in range(plot_rows): # 2 legs
for c in range(plot_cols): # 5 actions
a = r * plot_cols + c
axes[r][c].plot(np.arange(trj_len), data["u_delta"][:, a], "C0", label="delta")
axes[r][c].plot(np.arange(trj_len), data["u_ref"][:, a], "C1", label="reference")
axes[r][c].plot(np.arange(trj_len), data["u_delta"][:, a] + data["u_ref"][:, a], "C2--", label="summed")
axes[r][c].plot(np.arange(trj_len), data["qpos"][:, env.pos_idx][:, a], "C3--", label="result")
axes[0][c].set_xlabel(cassie_action[c])
axes[0][c].xaxis.set_label_position('top')
axes[r][0].set_ylabel(["left leg", "right leg"][r])
plt.tight_layout()
if title is not None:
plt.subplots_adjust(top=0.93)
axes[0][0].legend(loc='upper left')
data = get_trajectory_data(2*env.phaselen)
plot_policy(data)
###Output
_____no_output_____
###Markdown
Phase portraits for left and right knee angles.
###Code
@torch.no_grad()
def plot_phase_portrait():
qpos = np.copy(traj.qpos)
qvel = np.copy(traj.qvel)
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# down sample trajectory to simrate
traj_len = qpos.shape[0]
sub = [t for t in range(traj_len) if t % env.simrate == 0]
# left leg
axes[0].plot(qpos[:, 14], qvel[:, 12], label="left knee")
axes[1].plot(qpos[sub, 14], qvel[sub, 12], label="left knee")
#right leg
axes[0].plot(qpos[:, 28], qvel[:, 25], label="right knee")
axes[1].plot(qpos[sub, 28], qvel[sub, 25], label="right knee")
axes[0].set_ylabel(r"$\dot{\theta}$ (rad/s)")
axes[0].set_xlabel(r"$\theta$ (rad)")
axes[0].set_title(r"2Khz")
axes[1].set_title(r"30Khz")
axes[0].legend(loc='upper left')
left_x, left_x_dot = [], []
right_x, right_x_dot = [], []
# get the full resolution phase portrait
env.reset()
for t in range(traj_len):
if t % env.simrate == 0:
state = torch.Tensor(env.get_full_state())
v, action = policy.act(state, True)
v, action = v.numpy(), action.numpy()
env.time += 1
env.phase += 1
if env.phase > env.phaselen:
env.phase = 0
env.counter += 1
env.step_simulation(action)
left_x.append(env.sim.qpos()[14])
left_x_dot.append(env.sim.qvel()[12])
right_x.append(env.sim.qpos()[28])
right_x_dot.append(env.sim.qvel()[25])
left_x, left_x_dot = np.array(left_x), np.array(left_x_dot)
right_x, right_x_dot = np.array(right_x), np.array(right_x_dot)
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
axes[0].plot(left_x, left_x_dot, label="left knee")
axes[0].plot(right_x, right_x_dot, label="right knee")
axes[1].plot(left_x[sub], left_x_dot[sub], label="left knee")
axes[1].plot(right_x[sub], right_x_dot[sub], label="right knee")
axes[0].set_ylabel(r"$\dot{\theta}$ (rad/s)")
axes[0].set_xlabel(r"$\theta$ (rad)")
axes[0].set_title(r"2Khz")
axes[1].set_title(r"30Khz")
axes[0].legend(loc='upper left')
plot_phase_portrait()
@torch.no_grad()
def plot_phase_states():
qpos = np.copy(traj.qpos)
qvel = np.copy(traj.qvel)
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# down sample trajectory to simrate
traj_len = qpos.shape[0]
sub = [t for t in range(traj_len) if t % env.simrate == 0]
# full resolution
axes[0].plot(np.arange(traj_len), qpos[:, 14], label="left knee")
axes[0].plot(np.arange(traj_len), qpos[:, 28], label="right knee")
axes[1].plot(np.arange(traj_len), qvel[:, 12], label="left knee")
axes[1].plot(np.arange(traj_len), qvel[:, 25], label="right knee")
axes[0].set_ylabel(r"$\dot{\theta}$ (rad/s)")
axes[0].set_xlabel(r"$\theta$ (rad)")
axes[0].set_title(r"2Khz")
axes[1].set_title(r"30Khz")
axes[0].legend(loc='upper left')
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# downsampled
axes[0].plot(np.arange(len(sub)), qpos[sub, 14], label="left knee")
axes[0].plot(np.arange(len(sub)), qpos[sub, 28], label="right knee")
axes[1].plot(np.arange(len(sub)), qvel[sub, 12], label="left knee")
axes[1].plot(np.arange(len(sub)), qvel[sub, 25], label="right knee")
left_x, left_x_dot = [], []
right_x, right_x_dot = [], []
# get the full resolution phase portrait
env.reset()
for t in range(traj_len):
if t % env.simrate == 0:
state = torch.Tensor(env.get_full_state())
v, action = policy.act(state, True)
v, action = v.numpy(), action.numpy()
env.time += 1
env.phase += 1
if env.phase > env.phaselen:
env.phase = 0
env.counter += 1
env.step_simulation(action)
left_x.append(env.sim.qpos()[14])
left_x_dot.append(env.sim.qvel()[12])
right_x.append(env.sim.qpos()[28])
right_x_dot.append(env.sim.qvel()[25])
left_x, left_x_dot = np.array(left_x), np.array(left_x_dot)
right_x, right_x_dot = np.array(right_x), np.array(right_x_dot)
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# full resolution positions
axes[0].plot(np.arange(traj_len), left_x, label="left knee")
axes[0].plot(np.arange(traj_len), right_x, label="right knee")
# full resolution velocities
axes[1].plot(np.arange(traj_len), left_x_dot, label="left knee")
    axes[1].plot(np.arange(traj_len), right_x_dot, label="right knee")
axes[0].set_ylabel(r"$\dot{\theta}$ (rad/s)")
axes[0].set_xlabel(r"Time")
axes[0].set_title(r"$\theta$ over time (2Khz)")
axes[1].set_title(r"$\frac{d\theta}{dt}$ over time (2Khz)")
axes[0].legend(loc='upper left')
fig, axes = plt.subplots(1, 2, figsize=(10, 5))
# downsampled positions
axes[0].plot(np.arange(len(sub)), left_x[sub], label="left knee")
axes[0].plot(np.arange(len(sub)), right_x[sub], label="right knee")
# downsampled velocities
axes[1].plot(np.arange(len(sub)), left_x_dot[sub], label="left knee")
    axes[1].plot(np.arange(len(sub)), right_x_dot[sub], label="right knee")
axes[0].set_ylabel(r"$\theta$ (rad)")
axes[0].set_xlabel(r"Time")
axes[1].set_ylabel(r"$\dot{\theta}$ (rad/s)")
axes[1].set_xlabel(r"Time")
axes[0].set_title(r"$\theta$ over time (30hz)")
axes[1].set_title(r"$\frac{d\theta}{dt}$ over time (30hz)")
axes[0].legend(loc='upper left')
plot_phase_states()
def plot_reward(data):
fig = plt.figure(figsize=(10, 5))
ax = fig.gca()
ax.plot(np.arange(len(data["reward"])), data["reward"])
ax.set_ylabel(r"Reward")
ax.set_xlabel(r"Timestep")
ax.set_title("Reward over time")
data = get_trajectory_data(10000)
plot_reward(data)
def plot_foot_forces(data):
fig = plt.figure(figsize=(10, 5))
ax = fig.gca()
ax.plot(np.arange(len(data["ff_left"])), data["ff_left"], label="left foot")
ax.plot(np.arange(len(data["ff_right"])), data["ff_right"], label="right foot")
ax.set_ylabel(r"Foot force (Newtons)")
ax.set_xlabel(r"Timestep")
ax.set_title("Foot force in z direction over time")
ax.legend(loc='upper left')
data = get_trajectory_data(110)
plot_foot_forces(data)
###Output
_____no_output_____ |
notebooks/old_altimetry_code.ipynb | ###Markdown
This is the old notebook
The material here is gradually being transferred to new notebooks, e.g. "Altimetry analysis.ipynb". A. Arendt 20200102
Altimetry Module Introduction
Here we will show you the four most important functions in the Altimetry module. We will start with simple low level functions and move to higher and more helpful ones. We will end with some examples of how to use these functions.
Altimetry.Altimetry.ConnectDb(**kwargs)
Purpose: Connect to a postgres database. If the default is set appropriately just execute with no arguments, e.g. connection,cursor = ConnectDb()
Keyword Arguments:
* server - The name of the server; must match a dictionary name in __init__.py
* get_host - Get the default host name
* get_user - Get the default user name
* get_dbname - Get the default database name
Returns: A list including a psycopg2 connection and cursor to the database.
Example:
###Code
#First just checking the default host information stored in the __init__.py
connection,cursor = alt.ConnectDb(connectionString)
print "Host is %s " % alt.ConnectDb(connectionString,get_host=True)
print "User is %s " % alt.ConnectDb(connectionString,get_user=True)
print "DBname is %s \n" % alt.ConnectDb(connectionString,get_dbname=True)
#Now, lets list all of the glaciers bigger than 1000 sq. km.
cursor.execute("SELECT name,area from ergi_mat_view WHERE area>1000 ORDER BY area DESC;")
#Once the select statement is run the cursor one can fetch data line by line or all of the data together.
#Here we are fetching all of the data and looping through to print the area.
for name, area in cursor.fetchall():
print "{glacier} has an area of {area:.0f} sq. km.".format(glacier = name,area=area)
#Close the cursor once you are done.
cursor.close()
###Output
_____no_output_____
###Markdown
The LiDAR Altimetry data and glacier geometry data used in [Larsen et al.(2015)]() are stored in the following table structure. The key point to understand is that here we do not start from the raw LiDAR point clouds. Instead, the table lamb contains the surface elevation change rate profiles for each glacier over each possible interval. Each row was in this table was generated using a semi-manual step (discussed in Arendt et al., [2002] and Johnson et all. [2013]), where a user defines a bin size, a glacier polygon etc. and then runs a matlab script called 'lamb' to generate a top-bottom profile of surfaceelevation change rates. This script also outputs the along profile IQR of the measured surface elevation change and the mass balance integrated across the user defined glacier polygon. All of this data is included in the lamb table. The only part of the lamb table used by Larsen et al., is the elevation change rate profile and the IQR for each glacier and the glacier geometry is extracted from the ergi table and ergi_mat_view materialized view.The glacier geometry is provided by a modified version of the RGI 4.0 called the ergi. The ergi has some differences that accomodate the UAF Altimetry flightlines. Important glacier attributes including Glacier name, terminus type,surge-type,and region are all stored as points. We then run a spatial join with the ergi to create a materialzed view of the ergi table with the appropriate attributes. The ergi has also been split into 30 m elevation bins using the DEM described in Kienholz et al. [2015] to capture every glacier hypsometry in ergibins. Altimetry.Altimetry.GetSqlData2(select, bycolumn = False) Purpose:Extract data from the default database using sql query and return data organized by either row or column. This just condenses the above code into one line and insures all of your code is working off the same database. Arguments: selectAny postgresql select statement as string including ';' Keyword Arguments:bycolumnSet to True to request that data be returned by column instead of row. Returns: If data is returned by row (default) the requested data will be stored as a list of dictionaries where eachrow in the output table is stored as a dictionary where the keys are the column names. If you request bycolumn, each column in the output table is accessed though a dictionary where the key is the column name. Each column of data is stored in that dictionary as a list or as a numpy array. If you request a MULTIPOLYGON geometry, the geometry will be extracted into a list of coordinates for theouter ring and another list of inner rings (which is another list of coordinates). Data is stored in the dictionary as keys 'inner' and 'outer'. If there are no inner rings, None is returned. Example:
###Code
# Lets find the which glaciers have areas greater than 1000 sq. km.
data = alt.GetSqlData(credentials['ice2oceansdb-Altimetry-user'],"SELECT name,area FROM ergi_mat_view WHERE area>1000 ORDER BY area DESC;",bycolumn=False)
for glacier in data:
print ("{name} has an area of {area:.0f} sq. km.".format(**glacier))
###Output
Seward Glacier has an area of 3363 sq. km.
Bering Glacier has an area of 3025 sq. km.
Hubbard Glacier has an area of 2834 sq. km.
Logan Glacier has an area of 1177 sq. km.
Kaskawulsh Glacier has an area of 1054 sq. km.
Nabesna Glacier has an area of 1029 sq. km.
Yahtse Glacier has an area of 1019 sq. km.
Tana Glacier has an area of 1003 sq. km.
###Markdown
Altimetry.Altimetry.GetLambData(\*args,\**kwargs) Purpose:This is the primary function to extract Laser Altimetry Mass Balance (LAMB) data from the database. The key point to understand is that this code does not calculate mass balance from the raw LiDAR point clouds that are also stored in ice2oceans database. Instead, GetLambData queries a table called lamb that contains the surface elevation change profiles for each glacier over each possible interval. Each profile in this table was generated using a semi-manual step (discussed in Arendt et al., (2002) and Johnson et all. (2013)), where a user defines a bin size, a glacier polygon etc and then runs a matlab script called 'lamb' to generate a top-bottom profile of surface elevation change rates. This script also outputs the along profile IQR of the measured surface elevation change and the mass balance integrated across the user defined glacier polygon. All of this data is included in the lamb table and output by GetLambData. The only part of 'lamb' used by Larsen et al., is the elevation change rate profile and the IQR for each glacier. This script will retrieve Lambdata for any group of glaciers and survey intervals. It contains keywords for you to filter what intervals you would like. It will also return the glacier polygon from the RGI (the one used in Larsen et al., [2015] not Johnson et al. [2013]), the glacier hypsometry, and the Larsen et al., 2015 mass balance estimate for that glacier. Keyword Arguments: removerepeatsSet to True to select only the shortest/non-overlapping intervals for each glacier. Set to false to include all data. Default Value=Truelongest_intervalSet to True to retreive only the single longest interval available for each glacier.days_from_yearSet the number of days away from 365 to be considered. For example if you want annual intervals to be within month of each other leave default of 30. If you want sub-annual (seasonal data) set to 365. Default Value = 30interval_minMinimum length of interval in years. This is a rounded value so inputing 1 will include an interval of 0.8 years if it passes the days_from_year threshold above. Default = 0 interval_maxMaximum length of interval in years. This is a rounded value so inputing 3 will include an interval of 3.1 years if it passes the days_from_year threshold above. Default = Noneearliest_dateEarliest date of first acquistion. Enter as string 'YYYY-MM-DD'. Default = Nonelatest_dateLatest date of second acquistion. Enter as string 'YYYY-MM-DD'. Default = NoneuserwhereEnter as string. User can input additional queries as a string to a where statement. Any fields in ergi_mat_view or lamb are valid. Example input:"name NOT LIKE '%Columbia%' AND area > 10". Default="" verboseVerbose output. Default = Trueget_geomSet to True to retrieve the geometry of the glacier polygongeneralizeSet to a value to simplify geometriesby_columnGet data organized by column instead of by lamb file (Default=True)as_objectGet data output as a LambObject. Only works if by_column = True (Default=True)get_glimsidSet to true to retrieve each glaciers glimsid as well.resultsSet to true to retrieve the mass balance of the glacier as is estimated by Larsen et. al. (2015). Note the larsen et al mass balance is returned regardless of the interval you chose for that glacier. Returns: A dictionary or a lamb object with all attributes available in lamb as well as glacier parametersavailable in ergi_mat_view for the selection of surveyed glacier intervals chosen. 
Example: class Altimetry.Altimetry.LambObject()GetLambData can output data as a dictionary, as shown above, but it can also output the same data as an instance of a lamb object or a list of lamb objects. This not only makes life easier syntactically, it also provides access to an assortment of important methods available to this object class. Each lamb oject will contain attributes whose name and value correspond to matching fields in the database: Attribute | Data Type | Description ------------- |:-------------:|:-----lambid | integer | Primary Keyergiid | integer | foreign key to ergidate1 | date | First Acquisition Datedate2 | date | Second Acquisition Dateinterval | smallint | Interval Length (days)volmodel | real | Gacier-wide mass balance in Gt/yr: not used for Larsen et al., [2015]vol25diff | real | Gacier-wide mass balance 25th percentile error estimate in Gt/yr: not used for Larsen et al., [2015]vol75diff | real | Gacier-wide mass balance 75th percentile error estimate in Gt/yr: not used for Larsen et al., [2015]balmodel | real | Gacier-wide mass balance in m w. e./yr: not used for Larsen et al., [2015]bal25diff | real | Gacier-wide mass balance 25th percentile error estimate in m w. e./yr: not used for Larsen et al., [2015]bal75diff | real | Gacier-wide mass balance 75th percentile error estimate in m w. e./yr: not used for Larsen et al., [2015]e | integer[] | Elevation of bottom of bin (m)dz | real[] | Surface elevation change rate along elevation profile specified by e (m/yr)dz25 | real[] | Surface elevation change rate variability: 25th percentile along elevation profile specified by e (m/yr)dz75 | real[] | Surface elevation change rate variability: 75th percentile along elevation profile specified by e (m/yr)aad | real[] | Area Altitude Distribution for the surveyed glacier determined using a polygon drawn by an UAF LiDAR altimetry tech and may/may not be the RGI polygon. Note this AAD was not used for Larsen et al. [2015].masschange | real[] | Lamb output: not used for Larsen et al., 2015 (units?)massbal | real[] | Lamb output: not used for Larsen et al., 2015 (units?)numdata | integer[] | Number of crossing points in Binergiid | integer | Primary key to ergi tablearea | numeric | Glacier Areamin | numeric | Min elevation of the glacier defined by the RGI 4.0max | numeric | Max elevation of the glacier defined by the RGI 4.0continentality | real | Distance from coast (polygon centerpoint) units(?)albersgeom | geometry(MultiPolygon,3338) | Alaska Albers Polygon Geometry of the glacier boundaryname | character varying(45) | Glacier Namegltype | integer | Terminus Type 0=land,1=tide,2=lakesurge | boolean | Surge-Type Glacier?region | character varying(50) | Region defined by Larsen et al., 2015 that this glacier is inarendtregion | integer | Region defined by Arendt et al., 2002 that this glacier is in Using a LambObject, we can do the same thing as before but using a object output you will see the syntax is cleaner:
###Code
import Altimetry as alt
%matplotlib inline
from matplotlib.pyplot import *
#Lets look at the Taku timeseries
#Note the only thing different from above is as_object is set to 'True'
taku = alt.GetLambData(removerepeats=True,by_column=False,as_object=True, userwhere="ergi_mat_view.name='Taku Glacier'")
#Plotting
for t in taku:
plot(t.e,t.dz,label="%s - %s" % (t.date1.year,t.date2.year))
legend(loc=4)
xlabel('Elevation (m)')
ylabel('Elevation Change Rate (m/yr)')
###Output
list
###Markdown
***Lamb objects also have several important methods designed to work with the lambObject in column form (i.e. set by_column = True). That will help you perform standard operations to the sample you have selected. Try to use them in this order.* convert085( )* fix_terminus(slope=-0.05,error=1)* remove_upper_extrap(remove_bottom=True,erase_mean=True,add_mask=True)* normalize_elevation( )* calc_dz_stats(too_few=4)* extend_upper_extrap( )* calc_mb(units='area normalized')* convert085* get_approx_location Altimetry.Altimetry.LambObject.convert085( ) Purpose:Convert attributes dz,dz25 and dz75 to units of water equivalent instead of surface elevation change. Returns: None Example:
###Code
%matplotlib inline
from matplotlib.pyplot import *
#Lets look at the Taku timeseries
taku = alt.GetLambData(longest_interval=True,by_column=True,as_object=True, userwhere="ergi_mat_view.name='Taku Glacier'")
plot(taku.e[0],taku.dz[0],label='Units m/yr')
taku.convert085()
plot(taku.e[0],taku.dz[0],label='Units m e. eq./yr')
legend()
xlabel('Elevation (m)')
ylabel('Elevation Change Rate')
###Output
object
###Markdown
Altimetry.Altimetry.LambObject.fixterminus( ) Purpose:Correct profile's for a retreating terminus as exemplified in figure S11, [Larsen et al., 2015]. Specifically it changes dz,dz25 and dz75. Keyword Arguments:slopeThe threshold used to determine how steep the reduction in thinning rate needs to be to qualify as a section that needs to be corrected. I played with this a lot and thedefault value of -0.05 worked best.errorThe 1 quartile width to set as the error for the corrected portion of the profile Returns: None Example:
###Code
%matplotlib inline
#Lets look at the Gillam timeseries
gillam = alt.GetLambData(longest_interval=True,by_column=True,as_object=True, userwhere="ergi_mat_view.name='Gillam Glacier'")
plot(gillam.e[0],gillam.dz[0],label='Original Profile')
gillam.fix_terminus()
plot(gillam.e[0],gillam.dz[0],label='Corrected Profile')
legend(loc=4)
xlabel("Elevation (m)")
ylabel("Elevation Change Rate (m/yr)")
###Output
object
###Markdown
Altimetry.Altimetry.LambObject.remove_upper_extrap(remove_bottom=True,erase_mean=True) Purpose:The LAMB matlab code extrapolates to the glacier head and glacier terminus when data does not make it all of the way to the top or bottom of the glacier hypsometry. While this is needed when estimating individual glacier mass balance, it isn't really appropriate when using the profile to extrapolate to other glaciers. This method removes those extrapolations by masking the top and bottom. Removing the bottom is optional with the keyword remove_bottom. Keyword Arguments* remove_bottom * set to false to only remove top. Default = True* erase_mean * Set to False to keep the mean and only delete the dz25 and dz75 fields. Default = True Returns: None Example:
###Code
%matplotlib inline
#Lets look at the Gillam timeseries
gil = alt.GetLambData(longest_interval=True,by_column=True,as_object=True, userwhere="ergi_mat_view.name='Gillam Glacier'")
plot(gil.e[0],gil.dz[0],'r',lw=2,label='Full lamb profile')
gil.fix_terminus()
plot(gil.e[0],gil.dz[0],'g',lw=2,label='terminus fixed')
gil.remove_upper_extrap()
plot(gil.e[0],gil.dz[0],'b',lw=2,label='terminus fixed and \nextrapolation removed')
#best to go in this order!
legend(loc=4)
xlabel("Elevation (m)")
ylabel("Elevation Change Rate (m/yr)")
###Output
object
###Markdown
Altimetry.Altimetry.LambObject.normalize_elevation( ) Purpose:Normalize the elevation range in the elevation bins listed in lamb. This normalization assumes the max and min elevation is that available in the ergi table fields max and min. This function creates and updates the class attributes: self.norme,self.normdz,self.norm25,self.norm75,self.survIQRs. This works for an individual glacier or a group within the object. Keyword Arguments:* gaussian * Set to True to place a gaussian smooth over the normalized data Returns:NoneBut adds the following attributes to the object: self.norme, self.normdz, self.norm25, self.norm75, self.survIQRs* norme * The normalized elevation of each bin where 0.00 is the glacier bottom 1 is the glacier top.* normdz * The mean elevation change rate profile on the normalized elevation scale.* norm25,norm75, survIQRs * The 1st and 3rd quartiles and the IQR on the normalized scale Example:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
#Lets look at the Gillam timeseries
akrng = alt.GetLambData(longest_interval=True,by_column=True,as_object=True, userwhere="region = 'Alaska Range'")
fig = plt.figure(figsize=[15,5])
ax1=fig.add_subplot(121)
#PLOTTING ELEVATION PROFILES AND MAX AND MIN ELEVATION OF EACH GLACIER IN THE ALASKA RANGE
labels = ['dhdt profiles','max glacier elevation', 'min glacier elevation']
for e,dz,x,n in zip(akrng.e,akrng.dz,akrng.max,akrng.min):
ax1.plot(e,dz,'k',label=labels[0],zorder=1,alpha=0.5)
ax1.plot(x,0,'ro',label=labels[1])
ax1.plot(n,0,'bo',label=labels[2])
#only labeling each type once
labels=[None,None,None]
ax1.legend()
ax1.set_xlabel('Elevation')
ax1.set_ylabel("Elevation Change Rate (m/yr)")
#PLOTTING THE SAME PROFILES BUT NORMALIZED
akrng.normalize_elevation()
ax2=fig.add_subplot(122)
ax2.set_xlabel('Normalized Elevation')
ax2.set_ylabel("Elevation Change Rate (m/yr)")
for n in akrng.normdz:ax2.plot(akrng.norme,n,'k',alpha=0.5)
###Output
object
###Markdown
Altimetry.Altimetry.calc_mb.LambObject.calc_mb() Purpose:Adds a new attribute to the Lamb object self.mb which is a glacier mass balance estimate given the lamb data, a normalized surface elevation change profile (from normalize_elevation) and a glacier hypsometry(run GetLambData with 'get_hypsometry=True'). This works for an individual glacier or a group within the object. calc_mb(self,units='area normalized') Keyword Arguments:* units * Set to either 'gt' or 'area normalized' Returns:* None * However, self.mb is set to the mb in the units specified Example:
###Code
#Let's look at the altimetry mass balance time series for Gillam Glacier.
%matplotlib inline
from matplotlib.pyplot import *
import Altimetry as alt
gillam = alt.GetLambData(removerepeats=True,by_column=True,as_object=True, userwhere="ergi_mat_view.name='Gillam Glacier'",get_hypsometry=True)
gillam.normalize_elevation()
gillam.calc_mb()
for d1,d2,mb in zip(gillam.date1,gillam.date2,gillam.mb):
plot_date([d1,d2],[mb,mb],'-b',lw=2,label="{d1} - {d2}".format(d1=d1.year,d2=d2.year))
xlabel('Year')
ylabel("Mass Balance (m w. eq./yr)")
###Output
glimsid = G212893E63657N
{'normbins': array([ 0. , 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.09,
0.1 , 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18,
0.19, 0.2 , 0.21, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28,
0.29, 0.3 , 0.31, 0.32, 0.33, 0.34, 0.35, 0.37, 0.38,
0.39, 0.4 , 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47,
0.48, 0.49, 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57,
0.58, 0.59, 0.6 , 0.61, 0.62, 0.64, 0.65, 0.66, 0.67,
0.68, 0.69, 0.7 , 0.71, 0.72, 0.73, 0.74, 0.75, 0.76,
0.78, 0.79, 0.8 , 0.81, 0.82, 0.83, 0.84, 0.85, 0.86,
0.87, 0.88, 0.89, 0.9 , 0.92, 0.93, 0.94, 0.95, 0.96,
0.97, 0.98, 0.99, 0.99]), 'binned_area': array([ 2.22823000e+05, 7.39385000e+05, 9.92011000e+05,
9.90115000e+05, 1.15405000e+06, 1.29920000e+06,
1.30126000e+06, 1.18990000e+06, 1.27579000e+06,
1.37134000e+06, 1.56436000e+06, 1.14943000e+06,
1.44132000e+06, 2.28815000e+06, 2.63588000e+06,
2.58708000e+06, 2.80310000e+06, 2.62507000e+06,
3.10637000e+06, 2.05606000e+06, 1.59287000e+06,
2.05669000e+06, 2.06209000e+06, 2.54242000e+06,
2.35516000e+06, 2.40597000e+06, 1.88172000e+06,
1.79273000e+06, 1.54625000e+06, 1.37281000e+06,
1.20527000e+06, 1.61539000e+06, 1.28631000e+06,
1.32520000e+06, 1.37250000e+06, 1.29303000e+06,
1.67722000e+06, 1.58500000e+06, 1.42959000e+06,
1.62597000e+06, 1.53407000e+06, 1.92728000e+06,
1.75527000e+06, 1.59493000e+06, 1.47208000e+06,
1.50101000e+06, 1.29182000e+06, 1.10406000e+06,
8.69573000e+05, 7.45280000e+05, 7.70389000e+05,
8.56141000e+05, 7.14943000e+05, 6.42839000e+05,
5.37667000e+05, 5.03609000e+05, 4.82125000e+05,
3.87107000e+05, 3.71617000e+05, 3.70635000e+05,
3.31899000e+05, 2.74142000e+05, 2.58129000e+05,
2.65194000e+05, 2.88409000e+05, 2.63761000e+05,
2.29142000e+05, 1.90461000e+05, 1.97628000e+05,
2.05820000e+05, 1.98495000e+05, 1.87795000e+05,
1.68318000e+05, 1.76737000e+05, 1.62792000e+05,
1.50348000e+05, 1.44480000e+05, 1.35247000e+05,
1.44433000e+05, 1.15838000e+05, 1.04177000e+05,
9.58676000e+04, 8.65014000e+04, 7.69227000e+04,
5.58694000e+04, 4.85291000e+04, 3.90169000e+04,
2.38083000e+04, 1.53629000e+04, 9.72044000e+03,
6.00056000e+03, 1.95690000e+03, 5.38790000e+02,
2.34841000e+03]), 'bins': array([ 945., 975., 1005., 1035., 1065., 1095., 1125., 1155.,
1185., 1215., 1245., 1275., 1305., 1335., 1365., 1395.,
1425., 1455., 1485., 1515., 1545., 1575., 1605., 1635.,
1665., 1695., 1725., 1755., 1785., 1815., 1845., 1875.,
1905., 1935., 1965., 1995., 2025., 2055., 2085., 2115.,
2145., 2175., 2205., 2235., 2265., 2295., 2325., 2355.,
2385., 2415., 2445., 2475., 2505., 2535., 2565., 2595.,
2625., 2655., 2685., 2715., 2745., 2775., 2805., 2835.,
2865., 2895., 2925., 2955., 2985., 3015., 3045., 3075.,
3105., 3135., 3165., 3195., 3225., 3255., 3285., 3315.,
3345., 3375., 3405., 3435., 3465., 3495., 3525., 3555.,
3585., 3615., 3645., 3675., 3735., 3705.])}
object
###Markdown
[Go to Table of Contents](toc) Altimetry.Altimetry.LambObject.calc_dz_stats() Purpose:Calculates various statistics for the sample within the object. Requires that one run normalize_elevation first. calc_dz_stats(masked_array=False,too_few=None) Keyword Arguments:* too_few * When calculating kurtosis we normally require samples to be larger than 4. If you happen to be choosing groups with a sample size smaller than 4, set this to None and ignore the kurtosis, and for that matter the skew etc., as that is a really small sample. (Default = 4) Result:Adds the following attributes to the lamb object:* quadsum * Quadrature sum of the sample (m/yr) along the normalized profile. Used as an estimate of surveyed glacier uncertainty for the region when integrated over all surveyed glaciers in this sample.* dzs_std * Standard deviation along the normalized profile.* dzs_mean * Mean surface elevation change rate (m/yr) along the normalized profile.* dzs_median * Median surface elevation change rate (m/yr) along the normalized profile.* dzs_madn * Normalized MAD of surface elevation change rate (m/yr) along the normalized profile.* dzs_sem * Standard error of the mean (m/yr) along the normalized profile. Used as an estimate of uncertainty for the region when integrated over all unsurveyed glaciers.* normalp * P-value probability of normality along profile.* skewz/p * Z-score and p-value for a test of whether the sample elevation change rates have a skew that is non-normal along profile.* kurtz/p * Z-score and p-value for a test of whether the sample elevation change rates have a kurtosis that is non-normal along profile.* skew * The skew of the distribution along profile.* kurtosis * The kurtosis of the distribution along profile.* percentile_5 * The 5th percentile of the distribution along profile (m/yr).* quartile_1 * The first quartile of the distribution along profile (m/yr).* percentile_33 * The 33rd percentile of the distribution along profile (m/yr).* percentile_66 * The 66th percentile of the distribution along profile (m/yr).* quartile_3 * The 3rd quartile of the distribution along profile (m/yr).* percentile_95 * The 95th percentile of the distribution along profile (m/yr).
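Most of these are ordinary column-wise statistics over the stack of normalized profiles. The sketch below uses the standard definitions on a stand-in (n_glaciers, n_bins) array; it is meant only as a reference for what the attributes describe, not as the library's exact implementation.

```python
import numpy as np
from scipy import stats

# Stand-in for the (n_glaciers, n_bins) stack of normalized dh/dt profiles
profiles = np.random.randn(35, 100) - 1.0

dzs_mean = profiles.mean(axis=0)                 # mean profile (m/yr)
dzs_std = profiles.std(axis=0, ddof=1)           # standard deviation along profile
dzs_sem = dzs_std / np.sqrt(profiles.shape[0])   # standard error of the mean
quartile_1, dzs_median, quartile_3 = np.percentile(profiles, [25, 50, 75], axis=0)
dzs_madn = 1.4826 * np.median(np.abs(profiles - dzs_median), axis=0)  # normalized MAD
normal_k2, normalp = stats.normaltest(profiles, axis=0)               # normality p-value
skewz, skewp = stats.skewtest(profiles, axis=0)
kurtz, kurtp = stats.kurtosistest(profiles, axis=0)
```

Example: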
###Code
%matplotlib inline
from matplotlib.pyplot import *
import Altimetry as alt
#Let's look at the distribution of elevation changes in the Alaska Range
wra = alt.GetLambData(longest_interval=True,interval_max=30,interval_min=5,by_column=True,as_object=True, \
userwhere="ergi_mat_view.region='Alaska Range'")
wra.normalize_elevation()
wra.calc_dz_stats()
#Plotting the mean surface elevation profile
plot(wra.norme,wra.dzs_mean)
#Plotting the standard deviation of surface elevation changes
fill_between(wra.norme,wra.dzs_mean+wra.dzs_std,wra.dzs_mean-wra.dzs_std,alpha=0.3)
xlabel("Normalized Elevation (m)")
ylabel("Elevation Change Rate (m/yr)")
###Output
object
(35, 100)
###Markdown
Altimetry.Altimetry.LambObject.extend_upper_extrap() Purpose:Depending on the sample of glaciers you choose, you may run into samples where none of the surveys cover dhdt values at the top or bottom of the normalized elevation curve, and thus you may not have any stats at the top or bottom of the curve. In these cases this function will extend all of the stats from the last known data point to the top or bottom. This is generally only needed, and best run, prior to using extrapolate. If no extension is needed the data will not be changed. Altimetry.Altimetry.partition_dataset() Purpose:This function simplifies partitioning the altimetry dataset in (almost) any way the user would like for purposes of running the extrapolate function to estimate the regional mass balance. Here the user specifies a list of "WHERE" clauses that select samples that can then be used to represent some segment of glaciers from the ergi. This function simply loops through GetLambData and returns a list of Lamb objects, and assumes you want the longest interval available for each glacier. The example should be very helpful in showing how to use this function. Argument:* partitioning * A list of strings that would go into where statements that describe each partition individually. In Larsen et al. we divide our extrapolation by terminus type, so we have three groups, each with a where statement that says either "gltype=0", "gltype=1", or "gltype=2". Thus this input is a list as ["gltype=0","gltype=1","gltype=2"]. Do not include the "WHERE" or "AND"s. This list must be as long as the number of groups you are partitioning. Keyword Arguments:* apply_to_all * There may also be requirements that apply to all of the groups. In our example above, we don't want surge glaciers or a few specific outliers. Requirements for all groups are listed here, as in the example above. You can list as many or as few (None) as you want.* interval_min, interval_max * These have a similar effect to apply_to_all; they are just an easier way to specify requirements on the interval length. Numbers can be entered as an int. (Default = 5, 30 for min and max respectively)* too_few * When calculating kurtosis we normally require samples to be larger than 4. If you happen to be choosing groups with a sample size smaller than 4, set this to None and ignore the kurtosis, and for that matter the skew etc., as that is a really small sample. (Default = 4) Returns: A 5 element list of the following: lamb, userwheres, notused, whereswo, notswo* lamb * A list of LambObjects for each of the partitioned groups specified in the partitioning argument. These are in the same order as in the partitioning argument. If any of the groups requested have no glaciers in them, that LambObject is excluded from this list.* userwheres * A list of where statements that were used to filter altimetry intervals for each group. This combines the where statements in partitioning, apply_to_all and interval_min/max. Note if no glaciers exist in a group, the where is not output here, and instead will come out in notused.* notused * A list of the groups specified by partitioning that have no glaciers in them. The where statements in a list are returned here.* whereswo * Same as userwheres but does not include the where statements associated with the apply_to_all argument. This is useful when used with extrapolate.* notswo * Same as notused but does not include the where statements associated with the apply_to_all argument. Example:See the example with extrapolate below.
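Neither extend_upper_extrap nor partition_dataset has a standalone example in this document, so here is a minimal sketch of how they might be chained ahead of extrapolate. The where clauses and the applytoall value are illustrative only, whether a given partition actually needs the extension depends on the data, and the full worked example follows the extrapolate documentation below.

```python
import Altimetry as alt

# Partition the surveyed glaciers by terminus type (illustrative where clauses)
lamb, userwheres, notused, whereswo, notswo = alt.partition_dataset(
    ["gltype=0", "gltype=1", "gltype=2"], applytoall=["surge='f'"])

# If a group's surveys never reach the very top or bottom of the normalized
# elevation range, carry the last known statistics out to the ends so that
# extrapolate has values over the full profile.
for group in lamb:
    group.extend_upper_extrap()
```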
Altimetry.Altimetry.extrapolate(\*args,\**kwargs) Purpose:This function extrapolates to unsurveyed glaciers and returns estimates of mass balance for all glaciers, including surveyed glaciers, in the ERGI. This function is intended to work with partition_dataset, where partition_dataset splits the altimetry dataset up into samples and then this function applies those sample mean curves to glaciers of choice. There are key subtleties to how this works, so pay attention here. In an effort to give the user the maximum flexibility, this function will allow you to give yourself results that make no sense. So you must be careful and also examine your outputs. There is an example below. Arguments:* user * Input a string that states the user name. This function will output a table with the name alt_result_[user]X where X is 1, or if the user already has a table this script will not over-write, so the number will be increased incrementally. This will prevent different users from confusing their results. * groups * A list of lamb objects output by partition_dataset; each element in the list is a single curve that the user intends to apply to some group of glaciers. It is critical here to note that the glaciers that receive a specific elevation profile do NOT need to be at all related to the glaciers that made the profile. This will be clarified further on. * selections * A list of strings that specify where statements that describe where each group from the groups argument should be applied. This should be of the same length and same intended order as the lamb list presented for the groups argument. The KEY here is that the user must ensure that the selections together include EVERY SINGLE glacier in the ergi AND don't ever have overlapping selections either. Said another way, the selections list must be comprehensive of the glacier inventory and each selection is mutually exclusive of the others. See the example below for further clarification. Keyword Arguments:* insert\_surveyed\_data * Set to a lamb object that includes glaciers for which you would like to insert the actual surveyed glacier profile into the regional estimate. If this keyword is left blank, the extrapolation curves are applied to all glaciers in the selections argument even if they were surveyed glaciers (Default = None)* keep\_postgres\_tbls * Set to True if the user wishes to retain the output dataset (Default=False). If so, this table will be called alt_result_[user]X * export_shp * Set to a pathname if the user would like to output the table as a shapefile for viewing in a GIS. * density * Set to the assumed density (Default=0.85) * density_err * Set to the assumed density uncertainty (Default=0.06) * acrossgl_err * Set to the assumed across glacier uncertainty (Default=0.0) Returns: A dictionary of dictionaries containing partitioned mass balance results, where glaciers are divided in the following ways. 
The following dictionary keys are returned regardless of your partitioning choices: * bysurveyed * Mass balance of glaciers divided by whether they were 'surveyed' or 'unsurveyed' for the data input.* bytype * Same but divided by terminus type.* all * Same but for the region as a whole.* bytype_survey * Same but divided by terminus type and whether they were 'surveyed' or 'unsurveyed' for the data input.Within each of these groups the summed mass balance is presented in another dictionary with keys:* area * Total area of the group* totalkgm2 * Mean mass balance in m w eq/yr (yes it says kgm2)* errkgm2 * Mean mass balance error in m w eq/yr (yes it says kgm2)* totalgt * Total mass balance in Gt/yr* errgt * Total mass balance error in Gt/yr. Note this does not include the 50% increase in error for tidewater glaciers nor the area dependency correction that is discussed in Larsen et al. Those should be added manually if the user wishes. If the keyword keep_postgres_tbls=True, the result table will be retained in the database and will allow the user to query and view this table (in a GIS) to evaluate the extrapolation further.
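Once extrapolate returns, these keys can be read straight off the result; a short sketch (the `results` variable is the dictionary returned in the example below, and the exact container types may differ from plain dicts):

```python
# 'results' is the dictionary returned by alt.extrapolate(...) in the example below
print(results['all']['totalgt'], '+/-', results['all']['errgt'], 'Gt/yr')
print(results['bysurveyed'])   # surveyed vs. unsurveyed breakdown
print(results['bytype'])       # breakdown by terminus type

# Per the note above, the 50% error increase for tidewater glaciers is not included
# in errgt and would be applied manually, e.g. 1.5 * errgt for that group.
```

Example use with GetLambData AND partition_dataset: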
###Code
%matplotlib inline
import Altimetry as alt
#Retrieving all of the surveyed glacier data in one lamb object
#to be applied to those glaciers individually as surveyeddata.
surveyeddata = alt.GetLambData(verbose=False,longest_interval=True,interval_max=30,interval_min=5,by_column=True, as_object=True)
surveyeddata.fix_terminus()
surveyeddata.normalize_elevation()
surveyeddata.calc_dz_stats()
types = ["gltype=0","gltype=1","gltype=2"]
lamb,userwheres,notused,whereswo,notswo = alt.partition_dataset(types,applytoall=["surge='f'","name NOT IN ('Columbia Glacier','West Yakutat Glacier','East Yakutat Glacier')"])
#Lastly, running extrapolate on those groups.
#this applies the group extrapolation to each group, but we use the where statements that include exceptions
# like surgers. To reiterate, we ran partition_dataset on land/lake/tide w/o surgers.
# But we apply those same curves to land/lake/tides but to all glaciers
# including surgers. Here, the user has then inserted the surveyed data so surveyed glacier mass balance
# is included on an individual
# glacier basis. Lastly, we drop the output table; please do this as the output table is several Gb.
results = alt.extrapolate('guest',lamb,whereswo,insert_surveyed_data=surveyeddata,keep_postgres_tbls=False)
results
%matplotlib inline
from matplotlib.pyplot import *
import numpy as N
#Plotting a pie chart of the total amount of mass lost from each glacier type.
typenames = {'0':'Land','1':'Tidewater','2':'Lake'}
labels = [typenames[i] for i in results['bytype']['gltype'].astype(str)]
figure(figsize=(3,3))
pi = pie(N.abs(results['bytype']['totalgt']),labels=labels)
###Output
_____no_output_____ |
exercises/solutions/ex01/01_introduction.ipynb | ###Markdown
IntroductionThis first exercise is based on the Python crash course offered by the departments "Nachrichtentechnik" and "Elektrische Messtechnik".[Python Crash-Course](https://fgnt.github.io/python_crashkurs//) MATLAB vs. Python| MATLAB | Python ||:--------------------------------------------------------------:|:----------------------------------------------------------------------------------:|| Commercial | Open Source || New functions via MATLAB Toolkits (no package manager) | Installation of new modules with package manager (conda or pip) || Mainly procedural programming (Objects exist but are a hassle) | Object oriented || Mathematical Programming Language | General Purpose Language with many mathematical modules || No Namespaces for Core-Functions | Proper Namespaces (e.g. `plt.plot` instead of `plot`) || GUI included | Various GUIs available. We recommend [Pycharm](https://www.jetbrains.com/pycharm/) || Download: [Mathworks](https://de.mathworks.com/downloads/) | Download: [Anaconda](https://www.anaconda.com/download/) | Numpy for MATLAB users[https://docs.scipy.org/doc/numpy-1.15.0/user/numpy-for-matlab-users.html](https://docs.scipy.org/doc/numpy-1.15.0/user/numpy-for-matlab-users.html) Common Libraries* Numpy (Vector and Matrix operations, Numeric computing)* Matplotlib (Plotting)* Pandas (Table operations)* Scikit-Learn (Machine Learning)* Tensorflow / PyTorch (Neural Networks)* SymPy (Symbolic computations)* Seaborn (Advanced Plotting)* ... Quickstart
###Code
import numpy as np
import matplotlib.pyplot as plt
#plt.style.use('dark_background')
U_0 = 3 # V
u_peak = 2 # V
f_0 = 50 # 1/s
# Timevector in s (Sequence of numbers)
t = np.arange(start=0, stop=0.04, step=0.001)
u = U_0 + u_peak * np.sin(2 * np.pi * f_0 * t)
plt.plot(t, u, 'o--')
plt.xlabel('Time $t$ / s')
plt.ylabel('Voltage $u(t)$ / V')
plt.grid(True)
###Output
_____no_output_____ |
Course_4_Residual_Networks_v2a.ipynb | ###Markdown
Residual NetworksWelcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.**In this assignment, you will:**- Implement the basic building blocks of ResNets. - Put together these building blocks to implement and train a state-of-the-art neural network for image classification. Updates If you were working on the notebook before this update...* The current notebook is version "2a".* You can find your original work saved in the notebook with the previous version name ("v2") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates* For testing on an image, replaced `preprocess_input(x)` with `x=x/255.0` to normalize the input image in the same way that the model's training data was normalized.* Refers to "shallower" layers as those layers closer to the input, and "deeper" layers as those closer to the output (Using "shallower" layers instead of "lower" or "earlier").* Added/updated instructions. This assignment will be done in Keras. Before jumping into the problem, let's run the cell below to load the required packages.
###Code
import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import tensorflow as tf  # used by the test cells below (tf.reset_default_graph, tf.Session)
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
###Output
Using TensorFlow backend.
###Markdown
1 - The problem of very deep neural networksLast week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.* The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the shallower layers, closer to the input) to very complex features (at the deeper layers, closer to the output). * However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent prohibitively slow. * More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values). * During training, you might therefore see the magnitude (or norm) of the gradient for the shallower layers decrease to zero very rapidly as training proceeds: **Figure 1** : **Vanishing gradient** The speed of learning decreases very rapidly for the shallower layers as the network trains You are now going to solve this problem by building a Residual Network! 2 - Building a Residual NetworkIn ResNets, a "shortcut" or a "skip connection" allows the model to skip layers: **Figure 2** : A ResNet block showing a **skip-connection** The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network. We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function accounts for ResNets' remarkable performance even more so than skip connections helping with vanishing gradients).Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them: the "identity block" and the "convolutional block." 2.1 - The identity blockThe identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps: **Figure 3** : **Identity block.** Skip connection "skips over" 2 layers. The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras! In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this: **Figure 4** : **Identity block.** Skip connection "skips over" 3 layers. 
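In equation form, the shortcut simply adds the block's input back in just before the final non-linearity. For the 2-layer skip of Figure 3 this is $a^{[l+2]} = g\left(z^{[l+2]} + a^{[l]}\right)$ with $z^{[l+2]} = W^{[l+2]} a^{[l+1]} + b^{[l+2]}$, and the 3-layer block you implement below computes $a^{[l+3]} = g\left(z^{[l+3]} + a^{[l]}\right)$ in exactly the same way.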
Here are the individual steps.First component of main path: - The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization. - The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Second component of main path:- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization. - The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Third component of main path:- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization. - The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. - Note that there is **no** ReLU activation function in this component. Final step: - The `X_shortcut` and the output from the 3rd layer `X` are added together.- **Hint**: The syntax will look something like `Add()([var1,var2])`- Then apply the ReLU activation function. This has no name and no hyperparameters. **Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read this carefully to make sure you understand what it is doing. You should implement the rest. - To implement the Conv2D step: [Conv2D](https://keras.io/layers/convolutional/conv2d)- To implement BatchNorm: [BatchNormalization](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the 'channels' axis))- For the activation, use: `Activation('relu')(X)`- To add the value passed forward by the shortcut: [Add](https://keras.io/layers/merge/add)
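As a tiny standalone illustration of the final merge step the hints refer to (the tensor shapes here are arbitrary and chosen only for demonstration):

```python
from keras.layers import Input, Add, Activation

# Two same-shaped tensors: the main-path output and the saved shortcut
X = Input(shape=(15, 15, 256))
X_shortcut = Input(shape=(15, 15, 256))

# Element-wise sum of the two paths, then the final ReLU
out = Activation('relu')(Add()([X, X_shortcut]))
```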
###Code
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f, f), strides = (1,1), padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = layers.Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
###Output
out = [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
###Markdown
**Expected Output**: **out** [ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003] 2.2 - The convolutional blockThe ResNet "convolutional block" is the second block type. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path: **Figure 4** : **Convolutional block** * The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) * For example, to reduce the activation dimensions' height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. * The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step. The details of the convolutional block are as follows. First component of main path:- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the `glorot_uniform` seed.- The first BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2a'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Second component of main path:- The second CONV2D has $F_2$ filters of shape (f,f) and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the `glorot_uniform` seed.- The second BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2b'`.- Then apply the ReLU activation function. This has no name and no hyperparameters. Third component of main path:- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the `glorot_uniform` seed.- The third BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component. Shortcut path:- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`. Use 0 as the `glorot_uniform` seed.- The BatchNorm is normalizing the 'channels' axis. Its name should be `bn_name_base + '1'`. Final step: - The shortcut and the main path values are added together.- Then apply the ReLU activation function. This has no name and no hyperparameters. **Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.- [Conv2D](https://keras.io/layers/convolutional/conv2d)- [BatchNormalization](https://keras.io/layers/normalization/batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))- For the activation, use: `Activation('relu')(X)`- [Add](https://keras.io/layers/merge/add)
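To summarize the two paths before implementing them: with the learned 1x1 convolution on the shortcut playing the role of $W_s$, the convolutional block computes roughly $a^{[l+3]} = g\left(z^{[l+3]} + W_s\, a^{[l]}\right)$, with a BatchNorm applied to the shortcut term as well, as described above.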
###Code
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2, (f, f), strides = (1,1),padding = 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3, (1, 1), strides = (1,1),padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3, (1, 1), strides = (s,s),padding = 'valid', name = conv_name_base + '1', kernel_initializer = glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis = 3, name = bn_name_base + '1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = layers.Add()([X_shortcut, X])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
###Output
out = [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
###Markdown
**Expected Output**: **out** [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603] 3 - Building your first ResNet model (50 layers)You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together. **Figure 5** : **ResNet-50 model** The details of this ResNet-50 model are:- Zero-padding pads the input with a pad of (3,3)- Stage 1: - The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1". - BatchNorm is applied to the 'channels' axis of the input. - MaxPooling uses a (3,3) window and a (2,2) stride.- Stage 2: - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a". - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".- Stage 3: - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a". - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".- Stage 4: - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a". - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".- Stage 5: - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a". - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".- The 'flatten' layer doesn't have any hyperparameters or name.- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above. You'll need to use this function: - Average pooling [see reference](https://keras.io/layers/pooling/averagepooling2d)Here are some other functions we used in the code below:- Conv2D: [See reference](https://keras.io/layers/convolutional/conv2d)- BatchNorm: [See reference](https://keras.io/layers/normalization/batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))- Zero padding: [See reference](https://keras.io/layers/convolutional/zeropadding2d)- Max pooling: [See reference](https://keras.io/layers/pooling/maxpooling2d)- Fully connected layer: [See reference](https://keras.io/layers/core/dense)- Addition: [See reference](https://keras.io/layers/merge/add)
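As a quick sanity check on the name: counting the convolutions on the main paths plus the final dense layer gives $1 + 3\times3 + 3\times4 + 3\times6 + 3\times3 + 1 = 50$ weighted layers (one conv in Stage 1; three convolutions per block, with 3, 4, 6 and 3 blocks in Stages 2 through 5; plus the fully connected layer). The 1x1 shortcut convolutions and the pooling layers are not part of that count.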
###Code
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
Implementation of the popular ResNet50 the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f = 3, filters = [128, 128, 512], stage = 3, block='a', s = 2)
X = identity_block(X, 3, [128, 128, 512], stage=3, block='b')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f = 3, filters = [256, 256, 1024], stage = 4, block='a', s = 2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, f = 3, filters = [512, 512, 2048], stage = 5, block='a', s = 2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D(pool_size=(2, 2), name = 'avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
###Output
_____no_output_____
###Markdown
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
###Code
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
###Output
_____no_output_____
###Markdown
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
###Code
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The model is now ready to be trained. The only thing you need is a dataset. Let's load the SIGNS Dataset. **Figure 6** : **SIGNS dataset**
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
###Markdown
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
###Code
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
###Output
Epoch 1/2
1080/1080 [==============================] - 266s - loss: 0.7792 - acc: 0.7778
Epoch 2/2
1080/1080 [==============================] - 263s - loss: 1.0705 - acc: 0.7278
###Markdown
**Expected Output**: ** Epoch 1/2** loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours. ** Epoch 2/2** loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing. Let's see how this model (trained on only two epochs) performs on the test set.
###Code
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
120/120 [==============================] - 9s
Loss = 13.1812077204
Test Accuracy = 0.166666667163
###Markdown
**Expected Output**: **Test Accuracy** between 0.16 and 0.25 For the purpose of this assignment, we've asked you to train the model for just two epochs. You can see that it achieves poor performances. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well. After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU. Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
###Code
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
###Output
120/120 [==============================] - 10s
Loss = 0.530178320408
Test Accuracy = 0.866666662693
###Markdown
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy. Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system! 4 - Test on your own image (Optional/Ungraded) If you wish, you can also take a picture of your own hand and see the output of the model. To do this: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
###Code
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = x/255.0
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
###Output
Input image shape: (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] =
[[ 3.41876671e-06 2.77412561e-04 9.99522924e-01 1.98842812e-07
1.95619068e-04 4.11686671e-07]]
###Markdown
You can also print a summary of your model by running the following code.
###Code
model.summary()
###Output
____________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
====================================================================================================
input_1 (InputLayer) (None, 64, 64, 3) 0
____________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D) (None, 70, 70, 3) 0 input_1[0][0]
____________________________________________________________________________________________________
conv1 (Conv2D) (None, 32, 32, 64) 9472 zero_padding2d_1[0][0]
____________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (None, 32, 32, 64) 256 conv1[0][0]
____________________________________________________________________________________________________
activation_4 (Activation) (None, 32, 32, 64) 0 bn_conv1[0][0]
____________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 15, 15, 64) 0 activation_4[0][0]
____________________________________________________________________________________________________
res2a_branch2a (Conv2D) (None, 15, 15, 64) 4160 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2a[0][0]
____________________________________________________________________________________________________
activation_5 (Activation) (None, 15, 15, 64) 0 bn2a_branch2a[0][0]
____________________________________________________________________________________________________
res2a_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_5[0][0]
____________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2a_branch2b[0][0]
____________________________________________________________________________________________________
activation_6 (Activation) (None, 15, 15, 64) 0 bn2a_branch2b[0][0]
____________________________________________________________________________________________________
res2a_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_6[0][0]
____________________________________________________________________________________________________
res2a_branch1 (Conv2D) (None, 15, 15, 256) 16640 max_pooling2d_1[0][0]
____________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2a_branch2c[0][0]
____________________________________________________________________________________________________
bn2a_branch1 (BatchNormalization (None, 15, 15, 256) 1024 res2a_branch1[0][0]
____________________________________________________________________________________________________
add_2 (Add) (None, 15, 15, 256) 0 bn2a_branch2c[0][0]
bn2a_branch1[0][0]
____________________________________________________________________________________________________
activation_7 (Activation) (None, 15, 15, 256) 0 add_2[0][0]
____________________________________________________________________________________________________
res2b_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_7[0][0]
____________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2a[0][0]
____________________________________________________________________________________________________
activation_8 (Activation) (None, 15, 15, 64) 0 bn2b_branch2a[0][0]
____________________________________________________________________________________________________
res2b_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_8[0][0]
____________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2b_branch2b[0][0]
____________________________________________________________________________________________________
activation_9 (Activation) (None, 15, 15, 64) 0 bn2b_branch2b[0][0]
____________________________________________________________________________________________________
res2b_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_9[0][0]
____________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2b_branch2c[0][0]
____________________________________________________________________________________________________
add_3 (Add) (None, 15, 15, 256) 0 bn2b_branch2c[0][0]
activation_7[0][0]
____________________________________________________________________________________________________
activation_10 (Activation) (None, 15, 15, 256) 0 add_3[0][0]
____________________________________________________________________________________________________
res2c_branch2a (Conv2D) (None, 15, 15, 64) 16448 activation_10[0][0]
____________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2a[0][0]
____________________________________________________________________________________________________
activation_11 (Activation) (None, 15, 15, 64) 0 bn2c_branch2a[0][0]
____________________________________________________________________________________________________
res2c_branch2b (Conv2D) (None, 15, 15, 64) 36928 activation_11[0][0]
____________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizatio (None, 15, 15, 64) 256 res2c_branch2b[0][0]
____________________________________________________________________________________________________
activation_12 (Activation) (None, 15, 15, 64) 0 bn2c_branch2b[0][0]
____________________________________________________________________________________________________
res2c_branch2c (Conv2D) (None, 15, 15, 256) 16640 activation_12[0][0]
____________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizatio (None, 15, 15, 256) 1024 res2c_branch2c[0][0]
____________________________________________________________________________________________________
add_4 (Add) (None, 15, 15, 256) 0 bn2c_branch2c[0][0]
activation_10[0][0]
____________________________________________________________________________________________________
activation_13 (Activation) (None, 15, 15, 256) 0 add_4[0][0]
____________________________________________________________________________________________________
res3a_branch2a (Conv2D) (None, 8, 8, 128) 32896 activation_13[0][0]
____________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2a[0][0]
____________________________________________________________________________________________________
activation_14 (Activation) (None, 8, 8, 128) 0 bn3a_branch2a[0][0]
____________________________________________________________________________________________________
res3a_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_14[0][0]
____________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3a_branch2b[0][0]
____________________________________________________________________________________________________
activation_15 (Activation) (None, 8, 8, 128) 0 bn3a_branch2b[0][0]
____________________________________________________________________________________________________
res3a_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_15[0][0]
____________________________________________________________________________________________________
res3a_branch1 (Conv2D) (None, 8, 8, 512) 131584 activation_13[0][0]
____________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3a_branch2c[0][0]
____________________________________________________________________________________________________
bn3a_branch1 (BatchNormalization (None, 8, 8, 512) 2048 res3a_branch1[0][0]
____________________________________________________________________________________________________
add_5 (Add) (None, 8, 8, 512) 0 bn3a_branch2c[0][0]
bn3a_branch1[0][0]
____________________________________________________________________________________________________
activation_16 (Activation) (None, 8, 8, 512) 0 add_5[0][0]
____________________________________________________________________________________________________
res3b_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_16[0][0]
____________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2a[0][0]
____________________________________________________________________________________________________
activation_17 (Activation) (None, 8, 8, 128) 0 bn3b_branch2a[0][0]
____________________________________________________________________________________________________
res3b_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_17[0][0]
____________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3b_branch2b[0][0]
____________________________________________________________________________________________________
activation_18 (Activation) (None, 8, 8, 128) 0 bn3b_branch2b[0][0]
____________________________________________________________________________________________________
res3b_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_18[0][0]
____________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3b_branch2c[0][0]
____________________________________________________________________________________________________
add_6 (Add) (None, 8, 8, 512) 0 bn3b_branch2c[0][0]
activation_16[0][0]
____________________________________________________________________________________________________
activation_19 (Activation) (None, 8, 8, 512) 0 add_6[0][0]
____________________________________________________________________________________________________
res3c_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_19[0][0]
____________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2a[0][0]
____________________________________________________________________________________________________
activation_20 (Activation) (None, 8, 8, 128) 0 bn3c_branch2a[0][0]
____________________________________________________________________________________________________
res3c_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_20[0][0]
____________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3c_branch2b[0][0]
____________________________________________________________________________________________________
activation_21 (Activation) (None, 8, 8, 128) 0 bn3c_branch2b[0][0]
____________________________________________________________________________________________________
res3c_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_21[0][0]
____________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3c_branch2c[0][0]
____________________________________________________________________________________________________
add_7 (Add) (None, 8, 8, 512) 0 bn3c_branch2c[0][0]
activation_19[0][0]
____________________________________________________________________________________________________
activation_22 (Activation) (None, 8, 8, 512) 0 add_7[0][0]
____________________________________________________________________________________________________
res3d_branch2a (Conv2D) (None, 8, 8, 128) 65664 activation_22[0][0]
____________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2a[0][0]
____________________________________________________________________________________________________
activation_23 (Activation) (None, 8, 8, 128) 0 bn3d_branch2a[0][0]
____________________________________________________________________________________________________
res3d_branch2b (Conv2D) (None, 8, 8, 128) 147584 activation_23[0][0]
____________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizatio (None, 8, 8, 128) 512 res3d_branch2b[0][0]
____________________________________________________________________________________________________
activation_24 (Activation) (None, 8, 8, 128) 0 bn3d_branch2b[0][0]
____________________________________________________________________________________________________
res3d_branch2c (Conv2D) (None, 8, 8, 512) 66048 activation_24[0][0]
____________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizatio (None, 8, 8, 512) 2048 res3d_branch2c[0][0]
____________________________________________________________________________________________________
add_8 (Add) (None, 8, 8, 512) 0 bn3d_branch2c[0][0]
activation_22[0][0]
____________________________________________________________________________________________________
activation_25 (Activation) (None, 8, 8, 512) 0 add_8[0][0]
____________________________________________________________________________________________________
res4a_branch2a (Conv2D) (None, 4, 4, 256) 131328 activation_25[0][0]
____________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2a[0][0]
____________________________________________________________________________________________________
activation_26 (Activation) (None, 4, 4, 256) 0 bn4a_branch2a[0][0]
____________________________________________________________________________________________________
res4a_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_26[0][0]
____________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4a_branch2b[0][0]
____________________________________________________________________________________________________
activation_27 (Activation) (None, 4, 4, 256) 0 bn4a_branch2b[0][0]
____________________________________________________________________________________________________
res4a_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_27[0][0]
____________________________________________________________________________________________________
res4a_branch1 (Conv2D) (None, 4, 4, 1024) 525312 activation_25[0][0]
____________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4a_branch2c[0][0]
____________________________________________________________________________________________________
bn4a_branch1 (BatchNormalization (None, 4, 4, 1024) 4096 res4a_branch1[0][0]
____________________________________________________________________________________________________
add_9 (Add) (None, 4, 4, 1024) 0 bn4a_branch2c[0][0]
bn4a_branch1[0][0]
____________________________________________________________________________________________________
activation_28 (Activation) (None, 4, 4, 1024) 0 add_9[0][0]
____________________________________________________________________________________________________
res4b_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_28[0][0]
____________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2a[0][0]
____________________________________________________________________________________________________
activation_29 (Activation) (None, 4, 4, 256) 0 bn4b_branch2a[0][0]
____________________________________________________________________________________________________
res4b_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_29[0][0]
____________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4b_branch2b[0][0]
____________________________________________________________________________________________________
activation_30 (Activation) (None, 4, 4, 256) 0 bn4b_branch2b[0][0]
____________________________________________________________________________________________________
res4b_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_30[0][0]
____________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4b_branch2c[0][0]
____________________________________________________________________________________________________
add_10 (Add) (None, 4, 4, 1024) 0 bn4b_branch2c[0][0]
activation_28[0][0]
____________________________________________________________________________________________________
activation_31 (Activation) (None, 4, 4, 1024) 0 add_10[0][0]
____________________________________________________________________________________________________
res4c_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_31[0][0]
____________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2a[0][0]
____________________________________________________________________________________________________
activation_32 (Activation) (None, 4, 4, 256) 0 bn4c_branch2a[0][0]
____________________________________________________________________________________________________
res4c_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_32[0][0]
____________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4c_branch2b[0][0]
____________________________________________________________________________________________________
activation_33 (Activation) (None, 4, 4, 256) 0 bn4c_branch2b[0][0]
____________________________________________________________________________________________________
res4c_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_33[0][0]
____________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4c_branch2c[0][0]
____________________________________________________________________________________________________
add_11 (Add) (None, 4, 4, 1024) 0 bn4c_branch2c[0][0]
activation_31[0][0]
____________________________________________________________________________________________________
activation_34 (Activation) (None, 4, 4, 1024) 0 add_11[0][0]
____________________________________________________________________________________________________
res4d_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_34[0][0]
____________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2a[0][0]
____________________________________________________________________________________________________
activation_35 (Activation) (None, 4, 4, 256) 0 bn4d_branch2a[0][0]
____________________________________________________________________________________________________
res4d_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_35[0][0]
____________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4d_branch2b[0][0]
____________________________________________________________________________________________________
activation_36 (Activation) (None, 4, 4, 256) 0 bn4d_branch2b[0][0]
____________________________________________________________________________________________________
res4d_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_36[0][0]
____________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4d_branch2c[0][0]
____________________________________________________________________________________________________
add_12 (Add) (None, 4, 4, 1024) 0 bn4d_branch2c[0][0]
activation_34[0][0]
____________________________________________________________________________________________________
activation_37 (Activation) (None, 4, 4, 1024) 0 add_12[0][0]
____________________________________________________________________________________________________
res4e_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_37[0][0]
____________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2a[0][0]
____________________________________________________________________________________________________
activation_38 (Activation) (None, 4, 4, 256) 0 bn4e_branch2a[0][0]
____________________________________________________________________________________________________
res4e_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_38[0][0]
____________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4e_branch2b[0][0]
____________________________________________________________________________________________________
activation_39 (Activation) (None, 4, 4, 256) 0 bn4e_branch2b[0][0]
____________________________________________________________________________________________________
res4e_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_39[0][0]
____________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4e_branch2c[0][0]
____________________________________________________________________________________________________
add_13 (Add) (None, 4, 4, 1024) 0 bn4e_branch2c[0][0]
activation_37[0][0]
____________________________________________________________________________________________________
activation_40 (Activation) (None, 4, 4, 1024) 0 add_13[0][0]
____________________________________________________________________________________________________
res4f_branch2a (Conv2D) (None, 4, 4, 256) 262400 activation_40[0][0]
____________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2a[0][0]
____________________________________________________________________________________________________
activation_41 (Activation) (None, 4, 4, 256) 0 bn4f_branch2a[0][0]
____________________________________________________________________________________________________
res4f_branch2b (Conv2D) (None, 4, 4, 256) 590080 activation_41[0][0]
____________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizatio (None, 4, 4, 256) 1024 res4f_branch2b[0][0]
____________________________________________________________________________________________________
activation_42 (Activation) (None, 4, 4, 256) 0 bn4f_branch2b[0][0]
____________________________________________________________________________________________________
res4f_branch2c (Conv2D) (None, 4, 4, 1024) 263168 activation_42[0][0]
____________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizatio (None, 4, 4, 1024) 4096 res4f_branch2c[0][0]
____________________________________________________________________________________________________
add_14 (Add) (None, 4, 4, 1024) 0 bn4f_branch2c[0][0]
activation_40[0][0]
____________________________________________________________________________________________________
activation_43 (Activation) (None, 4, 4, 1024) 0 add_14[0][0]
____________________________________________________________________________________________________
res5a_branch2a (Conv2D) (None, 2, 2, 512) 524800 activation_43[0][0]
____________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2a[0][0]
____________________________________________________________________________________________________
activation_44 (Activation) (None, 2, 2, 512) 0 bn5a_branch2a[0][0]
____________________________________________________________________________________________________
res5a_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_44[0][0]
____________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5a_branch2b[0][0]
____________________________________________________________________________________________________
activation_45 (Activation) (None, 2, 2, 512) 0 bn5a_branch2b[0][0]
____________________________________________________________________________________________________
res5a_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_45[0][0]
____________________________________________________________________________________________________
res5a_branch1 (Conv2D) (None, 2, 2, 2048) 2099200 activation_43[0][0]
____________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5a_branch2c[0][0]
____________________________________________________________________________________________________
bn5a_branch1 (BatchNormalization (None, 2, 2, 2048) 8192 res5a_branch1[0][0]
____________________________________________________________________________________________________
add_15 (Add) (None, 2, 2, 2048) 0 bn5a_branch2c[0][0]
bn5a_branch1[0][0]
____________________________________________________________________________________________________
activation_46 (Activation) (None, 2, 2, 2048) 0 add_15[0][0]
____________________________________________________________________________________________________
res5b_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_46[0][0]
____________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2a[0][0]
____________________________________________________________________________________________________
activation_47 (Activation) (None, 2, 2, 512) 0 bn5b_branch2a[0][0]
____________________________________________________________________________________________________
res5b_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_47[0][0]
____________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5b_branch2b[0][0]
____________________________________________________________________________________________________
activation_48 (Activation) (None, 2, 2, 512) 0 bn5b_branch2b[0][0]
____________________________________________________________________________________________________
res5b_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_48[0][0]
____________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5b_branch2c[0][0]
____________________________________________________________________________________________________
add_16 (Add) (None, 2, 2, 2048) 0 bn5b_branch2c[0][0]
activation_46[0][0]
____________________________________________________________________________________________________
activation_49 (Activation) (None, 2, 2, 2048) 0 add_16[0][0]
____________________________________________________________________________________________________
res5c_branch2a (Conv2D) (None, 2, 2, 512) 1049088 activation_49[0][0]
____________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2a[0][0]
____________________________________________________________________________________________________
activation_50 (Activation) (None, 2, 2, 512) 0 bn5c_branch2a[0][0]
____________________________________________________________________________________________________
res5c_branch2b (Conv2D) (None, 2, 2, 512) 2359808 activation_50[0][0]
____________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizatio (None, 2, 2, 512) 2048 res5c_branch2b[0][0]
____________________________________________________________________________________________________
activation_51 (Activation) (None, 2, 2, 512) 0 bn5c_branch2b[0][0]
____________________________________________________________________________________________________
res5c_branch2c (Conv2D) (None, 2, 2, 2048) 1050624 activation_51[0][0]
____________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizatio (None, 2, 2, 2048) 8192 res5c_branch2c[0][0]
____________________________________________________________________________________________________
add_17 (Add) (None, 2, 2, 2048) 0 bn5c_branch2c[0][0]
activation_49[0][0]
____________________________________________________________________________________________________
activation_52 (Activation) (None, 2, 2, 2048) 0 add_17[0][0]
____________________________________________________________________________________________________
avg_pool (AveragePooling2D) (None, 1, 1, 2048) 0 activation_52[0][0]
____________________________________________________________________________________________________
flatten_1 (Flatten) (None, 2048) 0 avg_pool[0][0]
____________________________________________________________________________________________________
fc6 (Dense) (None, 6) 12294 flatten_1[0][0]
====================================================================================================
Total params: 23,600,006
Trainable params: 23,546,886
Non-trainable params: 53,120
____________________________________________________________________________________________________
###Markdown
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
###Code
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
###Output
_____no_output_____ |
hands-on/070_imdb_data_exploration_lazy.ipynb | ###Markdown
IMDB data set: opinions and recurrent neural networks Required imports
###Code
from collections import Counter
from keras.datasets import imdb
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from pathlib import Path
import pickle
###Output
_____no_output_____
###Markdown
Loading the data set Load the data set, it will be downloaded and cached.
###Code
(x_train, y_train), (x_test, y_test) = imdb.load_data()
###Output
_____no_output_____
###Markdown
Exploring the data set Data shape and types Shape and type of the input and output.
###Code
x_train.shape, x_train.dtype, y_train.shape, y_train.dtype
x_test.shape, x_test.dtype, y_test.shape, y_test.dtype
###Output
_____no_output_____
###Markdown
Both training and test sets have 25,000 examples each. The input is a list of integers, the output either 0 or 1.
###Code
type(x_train[0]), len(x_train[0]), type(x_train[0][0])
set(y_train)
###Output
_____no_output_____
###Markdown
Each training input consists of a list of integers. Each integer uniquely represents a word, and the list represents a text as an ordered sequence of words. The corresponding output is an integer, either 0 or 1, representing the opinion expressed in the review text. Review lengths We can visualize the distribution of the review lengths in a histogram, one for the training input, the other for the test input.
###Code
figure, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 5))
for i, reviews in enumerate((x_train, x_test)):
review_lengths = map(len, reviews)
axes[i].hist(list(review_lengths), bins=50);
figure.tight_layout()
###Output
_____no_output_____
###Markdown
Word distribution The distribution of the words, or features, can also be visualized. The following computation is rather time consuming, so its results are pickled so that they can be reused for demonstration purposes without redoing the computation.
###Code
pickle_path = Path('feature_count.pkl')
if not pickle_path.exists():
feature_counter = Counter()
for review in x_train:
for feature in review:
feature_counter[feature] += 1
with open('feature_count.pkl', 'wb') as pickle_file:
pickle.dump(feature_counter, pickle_file)
else:
with open('feature_count.pkl', 'rb') as pickle_file:
feature_counter = pickle.load(pickle_file)
feature_counter.most_common(10)
###Output
_____no_output_____
###Markdown
Note that the most common word starts at index 4, which may be unexpected.
###Code
feature_counter[0], feature_counter[1], feature_counter[2], feature_counter[3]
###Output
_____no_output_____
###Markdown
Index 0 serves as padding, 1 as start of a review (note that it occurs as many times as there are reviews in the training set). For more details, see the section on texts below.
###Code
len(feature_counter)
plt.semilogy(list(feature_counter.keys()), list(feature_counter.values()),
'.', alpha=0.3, markersize=1)
plt.xlabel('feature')
plt.ylabel('frequency');
###Output
_____no_output_____
###Markdown
The features, i.e., the words, follow a Zipf-like distribution, which doesn't come as a surprise. Since this computation is time consuming, we assume similar results for the test set. Note that the minimum index is 1, the maximum 88586.
###Code
min(feature_counter.keys()), max(feature_counter.keys())
###Output
_____no_output_____
###Markdown
Sentiment distribution The distribution of the opinions, 0 or 1, can be visualized in a bar plot, again one for the training output, the other for the test output.
###Code
figure, axes = plt.subplots(1, 2)
for i, opinions in enumerate((y_train, y_test)):
counter = [0, 0]
for opinion in opinions:
counter[opinion] += 1
axes[i].bar(['0', '1'], counter, 0.5);
figure.tight_layout()
###Output
_____no_output_____
###Markdown
Positive and negative opinions are evenly balanced in both the training set and the test set. Texts The texts are represented as lists of integers, each integer representing a specific word. The word index, i.e., a dictionary that has the words as keys and the integers as values, is also available in the IMDB dataset.
###Code
word_index = imdb.get_word_index()
word_index['the']
###Output
_____no_output_____
###Markdown
However, in order to translate the lists of integers into the original reviews, the data has to be loaded appropriately. The `load_data` method has some optional arguments that should be specified. Index 0 is usually reserved for padding, i.e., to ensure that short sequences can be extended to the required length. Index 1 indicates the start of a review (`start_char`), while index 2 is used to represent words that have not been indexed, either because they were not part of the data set, they were too infrequently used, or, if the top words are left out, too common to be considered informative (`oov_char`). Hence, the actual word index starts at 4. The word index has to be shifted by `index_from`, and the strings representing padding, start and unknown added. The following function will do this, compute the reverse dictionary, and return both.
###Code
def compute_indices(word_index=None, index_from=3, padding_idx=0, start_idx=1, unknown_idx=2):
if word_index is None:
word_index = imdb.get_word_index()
word_to_idx = {k: v + index_from for k, v in word_index.items()}
word_to_idx['<pad>'] = padding_idx
word_to_idx['<start>'] = start_idx
word_to_idx['<unknown>'] = unknown_idx
return word_to_idx, {v: k for k, v in word_to_idx.items()}
word_to_idx, idx_to_word = compute_indices(word_index)
###Output
_____no_output_____
###Markdown
The first review in the training set can now be "translated" back to English. Note that there is no punctuation, and that when only the most common words are loaded (e.g., with `num_words=1000`), quite a number of `<unknown>` tokens crop up in the text.
###Code
print(' '.join(idx_to_word[idx] for idx in x_train[0]))
###Output
_____no_output_____
###Markdown
The sentiment expressed in this review is positive, so the output should be 1.
###Code
y_train[0]
###Output
_____no_output_____
###Markdown
Find the first review expressing a negative sentiment.
###Code
neg_idx = list(y_train).index(0)
print(' '.join(idx_to_word[idx] for idx in x_train[neg_idx]))
###Output
_____no_output_____
###Markdown
Clearly, the reviewer was not taken by the movie. Stop words When you display the top-25 words, it is quite clear that most will not be very informative, except "but" at index 20 and "not" at index 23.
###Code
for i in range(4, 4 + 26):
print(i, idx_to_word[i])
###Output
_____no_output_____ |
initial_model.ipynb | ###Markdown
Data exploration1. Clean the data and fix any issues you see (missing values?). I think we should start working only with `application_train|test.csv`. If by the time we have a functional model there is enough time left, we can take a look at the rest of the data, i.e. `bureau.csv`, `previous_application.csv`, etc.2. Look at the relationship between variables Load the data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
from sklearn.preprocessing import LabelEncoder
if 'google.colab' in str(get_ipython()):
from google.colab import drive
drive.mount('/content/drive')
path = 'drive/MyDrive/CS 249 Project/Data/'
else:
path = 'Data/'
# Load training data and test data
train_df = pd.read_csv(path+'application_train.csv')
test_df = pd.read_csv(path+'application_test.csv')
print('Training data shape: ', train_df.shape)
print('Training data shape: ', test_df.shape)
train_df.head()
###Output
Training data shape: (307511, 122)
Training data shape: (48744, 121)
###Markdown
EDAExplore the data and address any issues Is the data balanced?Look at the target column and plot its distribution
###Code
train_df.TARGET.plot.hist()
###Output
_____no_output_____
###Markdown
Clearly the data isn't balanced. The number of loans that were repaid is far greater than the number of loans that were not repaid. We don't need to address this issue right now though. Are there missing values?
###Code
missing_values = train_df.isna().sum()
missing_values_percent = missing_values*100/len(train_df)
miss_df = pd.concat(
[missing_values.rename('missing_val'),
missing_values_percent.rename('missing_val_percent')],
axis=1
)
miss_df.sort_values(by='missing_val_percent', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Which features are categorical?
###Code
train_df.dtypes.value_counts()
categorical_features = train_df.select_dtypes('object')
categorical_features.nunique()
###Output
_____no_output_____
###Markdown
Encode these categorical features. We can either use one-hot encoding (which we've used in the class) or label encoding. One-hot encoding seems to be the preferred method, with the only caveat that if there's a large number of categories for a feature, the number of one-hot encoded features can explode. A workaround is to use PCA or another dimensionality reduction method.Let's label encode the features with 2 categories and one-hot encode the ones with more than 2.
###Code
# Label encode the features with 2 categories
label_encoder = LabelEncoder()
for feat in categorical_features:
if len(train_df[feat].unique()) <= 2:
train_df[feat] = label_encoder.fit_transform(train_df[feat])
test_df[feat] = label_encoder.fit_transform(test_df[feat])
train_df.columns
# One-hot encode features with more than 2 categories
train_df = pd.get_dummies(train_df, drop_first=True)
test_df = pd.get_dummies(test_df, drop_first=True)
train_df.columns
###Output
_____no_output_____
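As noted above, PCA is one workaround when one-hot encoding blows up the number of columns. It is not needed for this dataset, but a minimal sketch could look like the following; the `uint8` column selection and the choice of 50 components are illustrative assumptions, not part of the pipeline used below.
```python
# Sketch only: compress the dummy columns created by pd.get_dummies with PCA.
from sklearn.decomposition import PCA

dummy_cols = train_df.select_dtypes('uint8').columns  # assumes get_dummies produced uint8 columns
pca = PCA(n_components=50)                            # arbitrary illustrative choice
dummies_compressed = pca.fit_transform(train_df[dummy_cols])
```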
###Markdown
Aligning training and test data
###Code
# Check the shapes of the training and test data
print(f"Training data: {train_df.shape}")
print(f"Test data: {test_df.shape}")
# Save target labels
train_target = train_df.TARGET
# Align train and test dfs
train_df, test_df = train_df.align(test_df, join='inner', axis=1)
# Add target labels back in
train_df['TARGET'] = train_target
# Check shapes after aligning
print(f"Training data: {train_df.shape}")
print(f"Test data: {test_df.shape}")
###Output
Training data: (307511, 227)
Test data: (48744, 226)
###Markdown
AnomaliesAccording to the reference notebook, the feature `DAYS_BIRTH` has negative numbers because they are recorded relative to the current loan application. The feature is described as client's age in days at the time of the application.It's easier to find anomalies if we transform to years.
###Code
# Convert BIRTH_DAYS to years
(train_df['DAYS_BIRTH'] / -365).describe()
###Output
_____no_output_____
###Markdown
Given the min and the max, there doesn't seem to be any outliers.Now let's look at DAYS_EMPLOYED. This feature is described as:>How many days before the application the person started current employment.
###Code
train_df['DAYS_EMPLOYED'].describe()
train_df['DAYS_EMPLOYED'].plot.hist(alpha=0.5)
###Output
_____no_output_____
###Markdown
There's clearly a chunk of samples for this feature that aren't right. It makes no sense for this value to be positive and so large: 365,243 days is roughly 1,000 years of employment. A safe way to deal with this is to set those values to NaN and impute them later. Before setting all anomalous values to NaN, check to see if the anomalous clients have any patterns of behavior in terms of credit default (higher or lower rates).
###Code
max_days_employed = train_df['DAYS_EMPLOYED'].max()
anom = train_df[train_df['DAYS_EMPLOYED'] == max_days_employed]
non_anom = train_df[train_df['DAYS_EMPLOYED'] != max_days_employed]
print(f'Value of days employed anomaly (aka max of days employed column):', max_days_employed)
print(f'The non-anomalies default on %0.2f%% of loans' % (100 * non_anom['TARGET'].mean()))
print(f'The anomalies default on %0.2f%% of loans' % (100 * anom['TARGET'].mean()))
print(f'There are %d anomalous days of employment' % len(anom))
###Output
Value of days employed anomaly (aka max of days employed column): 365243
The non-anomalies default on 8.66% of loans
The anomalies default on 5.40% of loans
There are 55374 anomalous days of employment
###Markdown
Since the anomalous clients have a lower rate of default, we would like to capture this information in a separate column before clearing the anomalous values. We will fill in the anomalous values with NaN and create a boolean column to indicate whether the value was anomalous or not.
###Code
# Create an anomalous flag column
train_df['DAYS_EMPLOYED_ANOM'] = train_df["DAYS_EMPLOYED"] == max_days_employed
test_df['DAYS_EMPLOYED_ANOM'] = test_df['DAYS_EMPLOYED'] == max_days_employed
# How many samples have the max value as DAYS_EMPLOYED?
max_value_count = (train_df['DAYS_EMPLOYED'] == train_df['DAYS_EMPLOYED'].max()).sum()
print(f"Count of samples with max DAYS_EMPLOYED: {max_value_count}")
# Replace all these occurrences with NaN
train_df.DAYS_EMPLOYED.replace(
to_replace=train_df.DAYS_EMPLOYED.max(),
value=np.nan,
inplace=True)
train_df.DAYS_EMPLOYED.plot.hist(alpha=0.5)
plt.xlabel('Days employed prior to application')
print('There are %d anomalies in the train data out of %d entries' % (train_df["DAYS_EMPLOYED_ANOM"].sum(), len(train_df)))
print('There are %d anomalies in the test data out of %d entries' % (test_df["DAYS_EMPLOYED_ANOM"].sum(), len(test_df)))
###Output
There are 55374 anomalies in the train data out of 307511 entries
There are 9274 anomalies in the test data out of 48744 entries
###Markdown
Relationships between variablesWith the `.corr` method we can compute the Pearson correlation coefficient between each feature and the target in the training set.
###Code
corr_matrix = train_df.corr().TARGET.sort_values()
print(f'Highest positive correlations:\n {corr_matrix.tail(10)}\n')
print(f'Highest negative correlations:\n {corr_matrix.head(10)}')
###Output
Highest positive correlations:
REG_CITY_NOT_WORK_CITY 0.050994
DAYS_ID_PUBLISH 0.051457
CODE_GENDER_M 0.054713
DAYS_LAST_PHONE_CHANGE 0.055218
NAME_INCOME_TYPE_Working 0.057481
REGION_RATING_CLIENT 0.058899
REGION_RATING_CLIENT_W_CITY 0.060893
DAYS_EMPLOYED 0.074958
DAYS_BIRTH 0.078239
TARGET 1.000000
Name: TARGET, dtype: float64
Highest negative correlations:
EXT_SOURCE_3 -0.178919
EXT_SOURCE_2 -0.160472
EXT_SOURCE_1 -0.155317
NAME_EDUCATION_TYPE_Higher education -0.056593
NAME_INCOME_TYPE_Pensioner -0.046209
DAYS_EMPLOYED_ANOM -0.045987
ORGANIZATION_TYPE_XNA -0.045987
FLOORSMAX_AVG -0.044003
FLOORSMAX_MEDI -0.043768
FLOORSMAX_MODE -0.043226
Name: TARGET, dtype: float64
###Markdown
Features with highest positive correlation: Effect of age on repaymentThe feature with the highest positive correlation is `DAYS_BIRTH`. Let's look at it in more detail.A positive correlation means that as `DAYS_BIRTH` increases, the customer is less likely to repay the loan. However, `DAYS_BIRTH` is given as negative numbers, so its relationship to `TARGET` is easier to understand if we multiply by -1.
###Code
plt.style.use('seaborn-muted')
# Make DAYS_BIRTH positive
train_df.DAYS_BIRTH = train_df.DAYS_BIRTH * -1
# Plot the distribution of ages in years
sns.kdeplot(train_df.loc[train_df.TARGET == 0, 'DAYS_BIRTH'] / 365)
sns.kdeplot(train_df.loc[train_df.TARGET == 1, 'DAYS_BIRTH'] / 365)
plt.xlabel('Age in years')
plt.title('KDE Plot for Applicant Age')
plt.legend(['Repaid', 'Not Paid'])
###Output
_____no_output_____
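A numeric complement to this plot is to bin the ages and compute the observed default rate per bin. A short sketch, not run in this notebook (the bin edges are an arbitrary choice):
```python
# Sketch: observed default rate by age group (DAYS_BIRTH is already positive here).
age_years = train_df['DAYS_BIRTH'] / 365
age_bins = pd.cut(age_years, bins=np.linspace(20, 70, 11))
print(train_df.groupby(age_bins)['TARGET'].mean())
```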
###Markdown
From the plot, the applicants who failed to repay skew noticeably younger, with a pronounced concentration below roughly 30 years of age. Features with highest negative correlationThe three features with highest negative correlation are `EXT_SOURCE_3`, `EXT_SOURCE_2`, `EXT_SOURCE_1`. These are described as *Normalized score from external data source*. Let's look at a heat map of their correlation with `TARGET`.
###Code
features = ['EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH', 'TARGET']
ext_corr = train_df[features].corr()
ext_corr
sns.heatmap(ext_corr, annot=True)
###Output
_____no_output_____
###Markdown
The correlation matrix tells us that as the value of the external source features increases, the customer is more likely to repay the loan. Similarly, the older the customer, the higher the chance that the loan will be repaid. Feature Engineering Polynomial featuresCreate polynomial features for the selected features
###Code
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import PolynomialFeatures
# Make new df with polynomial features
poly_features = train_df[features]
poly_features_test = test_df[features[:-1]] # there's no TARGET in test_df
# Assign variables
y_train = poly_features.TARGET
X_train = poly_features.drop(columns='TARGET')
# Handle missing values
imputer = SimpleImputer(strategy='median')
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(poly_features_test)
# Create polynomial features
poly = PolynomialFeatures(degree=3)
X_train = poly.fit_transform(X_train)
X_test = poly.transform(X_test)
feat_names = poly.get_feature_names(features[:-1])
print(X_train.shape)
print(X_test.shape)
print(feat_names)
###Output
(307511, 35)
(48744, 35)
['1', 'EXT_SOURCE_1', 'EXT_SOURCE_2', 'EXT_SOURCE_3', 'DAYS_BIRTH', 'EXT_SOURCE_1^2', 'EXT_SOURCE_1 EXT_SOURCE_2', 'EXT_SOURCE_1 EXT_SOURCE_3', 'EXT_SOURCE_1 DAYS_BIRTH', 'EXT_SOURCE_2^2', 'EXT_SOURCE_2 EXT_SOURCE_3', 'EXT_SOURCE_2 DAYS_BIRTH', 'EXT_SOURCE_3^2', 'EXT_SOURCE_3 DAYS_BIRTH', 'DAYS_BIRTH^2', 'EXT_SOURCE_1^3', 'EXT_SOURCE_1^2 EXT_SOURCE_2', 'EXT_SOURCE_1^2 EXT_SOURCE_3', 'EXT_SOURCE_1^2 DAYS_BIRTH', 'EXT_SOURCE_1 EXT_SOURCE_2^2', 'EXT_SOURCE_1 EXT_SOURCE_2 EXT_SOURCE_3', 'EXT_SOURCE_1 EXT_SOURCE_2 DAYS_BIRTH', 'EXT_SOURCE_1 EXT_SOURCE_3^2', 'EXT_SOURCE_1 EXT_SOURCE_3 DAYS_BIRTH', 'EXT_SOURCE_1 DAYS_BIRTH^2', 'EXT_SOURCE_2^3', 'EXT_SOURCE_2^2 EXT_SOURCE_3', 'EXT_SOURCE_2^2 DAYS_BIRTH', 'EXT_SOURCE_2 EXT_SOURCE_3^2', 'EXT_SOURCE_2 EXT_SOURCE_3 DAYS_BIRTH', 'EXT_SOURCE_2 DAYS_BIRTH^2', 'EXT_SOURCE_3^3', 'EXT_SOURCE_3^2 DAYS_BIRTH', 'EXT_SOURCE_3 DAYS_BIRTH^2', 'DAYS_BIRTH^3']
###Markdown
Verify whether any of these poly features has stronger correlation with `TARGET`.
###Code
# Create a df with the polynomial features
poly_train_df = pd.DataFrame(X_train, columns=feat_names)
poly_test_df = pd.DataFrame(X_test, columns=feat_names)
# Add target
poly_train_df['TARGET'] = y_train
# Find correlations
poly_corr = poly_train_df.corr()['TARGET'].sort_values()
# Display the features with highest correlation
print(poly_corr.head(10))
print(poly_corr.tail(5))
###Output
EXT_SOURCE_2 EXT_SOURCE_3 -0.193939
EXT_SOURCE_1 EXT_SOURCE_2 EXT_SOURCE_3 -0.189605
EXT_SOURCE_2 EXT_SOURCE_3 DAYS_BIRTH -0.181283
EXT_SOURCE_2^2 EXT_SOURCE_3 -0.176428
EXT_SOURCE_2 EXT_SOURCE_3^2 -0.172282
EXT_SOURCE_1 EXT_SOURCE_2 -0.166625
EXT_SOURCE_1 EXT_SOURCE_3 -0.164065
EXT_SOURCE_2 -0.160295
EXT_SOURCE_2 DAYS_BIRTH -0.156873
EXT_SOURCE_1 EXT_SOURCE_2^2 -0.156867
Name: TARGET, dtype: float64
DAYS_BIRTH -0.078239
DAYS_BIRTH^2 -0.076672
DAYS_BIRTH^3 -0.074273
TARGET 1.000000
1 NaN
Name: TARGET, dtype: float64
###Markdown
Several of these features have better correlation with the target than the original features. We will add these features to a copy of the training and test data and then evaluate the model with and without the features.
###Code
# Merge polynomial features into the train and test dataframes
poly_train_df['SK_ID_CURR'] = train_df['SK_ID_CURR']
poly_test_df['SK_ID_CURR'] = test_df['SK_ID_CURR']
poly_train_df = train_df.merge(poly_train_df, on='SK_ID_CURR', how='left')
poly_test_df = test_df.merge(poly_test_df, on='SK_ID_CURR', how='left')
# Align the dataframes
poly_train_df, poly_test_df = poly_train_df.align(poly_test_df, join='inner', axis=1)
print(f'Training data shape: {poly_train_df.shape}')
print(f'Testing data shape: {poly_test_df.shape}')
###Output
Training data shape: (307511, 262)
Testing data shape: (48744, 262)
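To actually compare the two feature sets, the polynomial version can be pushed through the same impute / scale / logistic-regression pipeline used for the baseline below. A sketch, shown without the train/validation split for brevity and not run in this notebook:
```python
# Sketch only: fit the baseline classifier on the polynomial feature set.
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression

X_poly = MinMaxScaler().fit_transform(
    SimpleImputer(strategy='median').fit_transform(poly_train_df))
model_poly = LogisticRegression(penalty='l2', C=0.0001).fit(X_poly, train_df['TARGET'])
```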
###Markdown
Other features New featuresCreate the following new features from existing features:* `CREDIT_INCOME_PERCENT`* `ANNUITY_INCOME_PERCENT`* `CREDIT_TERM`* `DAYS_EMPLOYED_PERCENT`
###Code
domain_train_df = train_df.copy()
domain_test_df = test_df.copy()
domain_train_df['CREDIT_INCOME_PERCENT'] = domain_train_df['AMT_CREDIT'] / domain_train_df['AMT_INCOME_TOTAL']
domain_train_df['ANNUITY_INCOME_PERCENT'] = domain_train_df['AMT_ANNUITY'] / domain_train_df['AMT_INCOME_TOTAL']
domain_train_df['CREDIT_TERM'] = domain_train_df['AMT_ANNUITY'] / domain_train_df['AMT_CREDIT']
domain_train_df['DAYS_EMPLOYED_PERCENT'] = domain_train_df['DAYS_EMPLOYED'] / domain_train_df['DAYS_BIRTH']
domain_test_df['CREDIT_INCOME_PERCENT'] = domain_test_df['AMT_CREDIT'] / domain_test_df['AMT_INCOME_TOTAL']
domain_test_df['ANNUITY_INCOME_PERCENT'] = domain_test_df['AMT_ANNUITY'] / domain_test_df['AMT_INCOME_TOTAL']
domain_test_df['CREDIT_TERM'] = domain_test_df['AMT_ANNUITY'] / domain_test_df['AMT_CREDIT']
domain_test_df['DAYS_EMPLOYED_PERCENT'] = domain_test_df['DAYS_EMPLOYED'] / domain_test_df['DAYS_BIRTH']
###Output
_____no_output_____
###Markdown
Visualize new features
###Code
plt.figure(figsize=(8,12))
for i, feature in enumerate(['CREDIT_INCOME_PERCENT',
'ANNUITY_INCOME_PERCENT',
'CREDIT_TERM',
'DAYS_EMPLOYED_PERCENT']):
plt.subplot(4, 1 ,i+1)
for target in [0,1]:
sns.kdeplot(domain_train_df.loc[domain_train_df['TARGET'] == target, feature])
plt.legend(['Repaid', 'Not paid'])
plt.tight_layout()
###Output
_____no_output_____
###Markdown
It doesn't look like these features help elucidate any significant differences between customers who repaid their loans and those who didn't. We'll still try them out and see whether they give any reasonable results or not. Baseline using Logistic RegressionUse logistic regression with L-2 penalty
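As flagged in the EDA section, the target is heavily imbalanced and the baseline below does not correct for it. If that turned out to matter, class weighting is a low-effort option in scikit-learn; this is a variant, not the model actually fitted below:
```python
# Sketch only: reweight classes inversely to their frequency.
from sklearn.linear_model import LogisticRegression

model_lg_balanced = LogisticRegression(penalty='l2', C=0.0001, class_weight='balanced')
```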
###Code
from sklearn.preprocessing import MinMaxScaler
print(f'Train set shape: {train_df.shape}')
print(f'Test set shape: {test_df.shape}')
# Drop TARGET from train set
y_train = train_df.TARGET
X_train = train_df.drop(columns=['TARGET'])
# Save feature names
features = list(X_train.columns)
# Copy test data to new df
X_test = test_df.copy()
# Impute missing values and scale 0-1
imputer = SimpleImputer(strategy='median')
scaler = MinMaxScaler(feature_range=(0,1))
X_train = imputer.fit_transform(X_train)
X_test = imputer.transform(X_test)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# Do a 70-30 split on the training data
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train,
y_train,
test_size=0.3,
shuffle=True,
random_state=0
)
# Run logistic regression with L-2 regularization
from sklearn.linear_model import LogisticRegression
model_lg = LogisticRegression(penalty='l2', C=0.0001)
model_lg.fit(X_train, y_train)
from sklearn.metrics import roc_curve, auc
# Predict train labels
y_pred_proba_train = model_lg.predict_proba(X_train)[:,1]
fpr, tpr, thresholds = roc_curve(y_train, y_pred_proba_train)
# Compute performance metrics on the training data
auc_train = auc(fpr, tpr)
print(f'AUC of the model on the training set is: {auc_train}')
fig, ax = plt.subplots(figsize=(9,6))
ax.plot(fpr, tpr, label=f'AUC = {auc_train:.3f}')
ax.plot([0,1], [0,1], linestyle='--')
ax.set_title('ROC Curve (Training data)')
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend(loc='best')
# Predict validation set labels
y_pred_proba_val = model_lg.predict_proba(X_val)[:,1]
fpr, tpr, thresholds = roc_curve(y_val, y_pred_proba_val)
# Compute performance metrics on the validation data
auc_val = auc(fpr, tpr)
print(f'AUC of the model in the validation set is: {auc_val}')
fig, ax = plt.subplots(figsize=(9,6))
ax.plot(fpr, tpr, label=f'AUC = {auc_val:.3f}')
ax.plot([0,1], [0,1], linestyle='--')
ax.set_title('ROC Curve (Validation data)')
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend(loc='best')
###Output
AUC of the model in the validation set is: 0.6877487105241726
###Markdown
Kaggle submission fileMake predictions on the test data now so we can submit to Kaggle and get the official AUC score.
###Code
y_pred_proba_test = model_lg.predict_proba(X_test)[:,1]
submit = test_df[['SK_ID_CURR']].copy()
submit['TARGET'] = y_pred_proba_test
submit.head()
submit.to_csv('log_reg_baseline.csv', index=False)
###Output
_____no_output_____ |
notebooks/Evaluation Experiments.ipynb | ###Markdown
Evaluation Entry**Note**: This notebook generates results for evaluations. For evaluation results, please see the folder 'eval_train_size'. default neural network parameters~~~~self.nn_params = { 'hidden_size': 100, 'depth': 3, network depth = 'depth' + 1 'activation': torch.nn.Tanh, 'device': device}~~~~ default training parameters~~~~self.train_params = { 'epochs': 500, 'train_loss': [ 'mse_loss', 'phy_loss', 'energy_loss' ], 'test_loss': [ 'phy_loss', 'energy_loss' ], 'num_batch': 1, 'optimizer': torch.optim.Adamax, 'cyclical': { 'base_lr': 0.001, 'max_lr': 0.1, 'step_size_up': 20, 'step_size_down': 20, 'mode': 'triangular' }, set to False or {} to disable 'L2_reg': 0.0, 'verbose': False, 'print_interval': 10, 'early_stopping': False}~~~~ default input/output parameters~~~~self.io_params = { 'path_out': '../results/', 'path_fig': '../figures/', 'path_log': '../logs/', 'env_path': [], path that will be registered through sys.path 'use_timestamp': True}~~~~ default dataset parameters~~~~self.data_params = { 'data_path': '../datasets/', 'phase': 'single-phase', use 'single-phase', 'dual-phase' or 'multi-phase' 'n_sites': 4, 'train_size': 20000, 'val_size': 2000, 'test_size': 0, set to 0 to use all the test data 'normalize_input': True, 'normalize_output': False, 'device': device, 'dataset': 'new'}~~~~ default loss function parameters~~~~self.loss_params = { 'lambda_s': 1.0, 'lambda_e0': 0.2, 'anneal_factor': 0.9, 'anneal_interval': 50, this is a parameter also for noise and cyclical update 'norm_wf': False, set to False or None or {} to disable the noise 'noise': {'mode': 'uniform', 'mean': 0.0, 'var': 0.5, 'decay': 0.9}, 'cyclical': {'mode': 'sin', 'decay': 0.9, 'mean': 1.0, 'amp': 1.0, 'period': 20, 'cycle_momentum': False}}~~~~ The Models| Models | Abbr. || --------------------------- | ---------------- || Deep Neural Networks | DNN || Schrödinger Loss DNN | S-DNN || Energy Loss S-DNN | SE-DNN || Normalized SE-DNN | NSE-DNN || Extended Training S-DNN | S-DNN$_{ex}$ || Extended Training SE-DNN | SE-DNN$_{ex}$ || Normalized S-DNN$_{ex}$ | NS-DNN$_{ex}$ || Normalized SE-DNN$_{ex}$ | NSE-DNN$_{ex}$ || Label-Free S-DNN$_{ex}$ | S-DNN$_{ex}$-LB || Label-Free SE-DNN$_{ex}$ | SE-DNN$_{ex}$-LB || Normalized S-DNN$_{ex}$-LB | NS-DNN$_{ex}$-LB || Normalized SE-DNN$_{ex}$-LB | NSE-DNN$_{ex}$-LB|
###Code
%matplotlib inline
import seaborn as sns
import numpy as np
import pandas as pd
import torch
import matplotlib.pyplot as plt
import random
import sys
sys.path.append('../scripts/')
from training import Trainer
from presets import LambdaSearch
from config_plots import global_settings
from utils import *
global_settings()
from fastprogress.fastprogress import master_bar, progress_bar
device = torch.device('cuda:0')
###Output
_____no_output_____
###Markdown
Free Log Files
###Code
# Count the files in the log directory; os.path.isfile needs the full path,
# otherwise it checks the current working directory instead of '../logs/'.
log_dir = '../logs/'
num_logs = len(
    [
        name for name in os.listdir(log_dir)
        if os.path.isfile(os.path.join(log_dir, name))
    ]
)
if num_logs > 2000:
    free_logs(log_dir)
###Output
_____no_output_____
###Markdown
Set Up Parameters and Start TasksThis part includes customized training parameters. Change these to whatever you need.
###Code
param_presets = LambdaSearch(data_path='../datasets/')
train_sizes = [100, 200, 500, 1000, 2000, 5000, 10000, 15000, 20000]
break_loop = False
loss_plot = False
mb = master_bar(range(10))
for i in mb:
for train_size in progress_bar(train_sizes):
# the black-box neural networks
param = param_presets.DNN()
param.name = 'DNN'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.loss_params['lambda_s'] = 0.0
param.loss_params['lambda_e0'] = 0.0
param.loss_params['anneal_factor'] = 0.0
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# PINN-analogue
param = param_presets.NSE_DNNex_LB()
param.name = 'NSE-DNNex-LF'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.loss_params['lambda_s'] = 0.566698
param.loss_params['lambda_e0'] = 3.680050
param.loss_params['anneal_factor'] = 0.786220
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# PGNN-analogue
param = param_presets.NSE_DNNex()
param.name = 'NSE-DNNex'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.loss_params['lambda_s'] = 0.433467
param.loss_params['lambda_e0'] = 2.297982
param.loss_params['anneal_factor'] = 0.861043
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# MTL-PGNN
param = param_presets.vNSE_DNNex()
param.name = 'vNSE-NNex'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.train_params['vanilla'] = True
param.loss_params['lambda_s'] = 0.433467
param.loss_params['lambda_e0'] = 2.297982
param.loss_params['anneal_factor'] = 0.861043
param.data_params['device'] = device
param.nn_params['device'] = device
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# CoPhy-PGNN
param = param_presets.NSE_DNNex()
param.name = 'cNSE-NNex'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.loss_params['lambda_s'] = 0.0
param.loss_params['lambda_e0'] = 2.061856
param.loss_params['anneal_factor'] = 0.816932
param.data_params['device'] = device
param.nn_params['device'] = device
param.loss_params['cold_start'] = {
'mode': 'sigmoid',
'lambda_s': 0.846349,
'threshold': 51.0,
'smooth': 0.171778
}
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# CoPhy-PGNN (w/o E-Loss)
param = param_presets.NS_DNNex()
param.name = 'cNS-NNex'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.data_params['device'] = device
param.nn_params['device'] = device
param.loss_params['cold_start'] = {
'mode': 'sigmoid',
'lambda_s': 0.846349,
'threshold': 51.0,
'smooth': 0.171778
}
param.loss_params['lambda_s'] = 0.0
param.loss_params['lambda_e0'] = 0.0
param.loss_params['anneal_factor'] = 0.0
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# CoPhy-PGNN (only-D_Tr)
param = param_presets.NSE_DNN()
param.name = 'cNSE-NN'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.data_params['device'] = device
param.nn_params['device'] = device
param.loss_params['lambda_s'] = 0.274606
param.loss_params['lambda_e0'] = 3.046513
param.loss_params['anneal_factor'] = 0.672920
param.loss_params['cold_start'] = {
'mode': 'sigmoid',
'lambda_s': 0.846349,
'threshold': 51.0,
'smooth': 0.171778
}
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
# CoPhy-PGNN (Label-free)
param = param_presets.NSE_DNNex_LB()
param.name = 'cNSE-NNex-LF'
param.data_params['train_size'] = train_size
param.train_params['break_loop_early'] = break_loop
param.loss_params['lambda_s'] = 0.0
param.loss_params['lambda_e0'] = 3.680050
param.loss_params['anneal_factor'] = 0.786220
param.data_params['device'] = device
param.nn_params['device'] = device
param.loss_params['cold_start'] = {
'mode': 'sigmoid',
'lambda_s': 0.846349,
'threshold': 51.0,
'smooth': 0.171778
}
trainer = Trainer(master_bar=mb, plot=loss_plot)
trainer.start(param)
###Output
_____no_output_____ |
colors_overview-Copy1.ipynb | ###Markdown
Pragmatic color describers
###Code
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
###Output
_____no_output_____
###Markdown
Contents1. [Overview](Overview)1. [Set-up](Set-up)1. [The corpus](The-corpus) 1. [Corpus reader](Corpus-reader) 1. [ColorsCorpusExample instances](ColorsCorpusExample-instances) 1. [Displaying examples](Displaying-examples) 1. [Color representations](Color-representations) 1. [Utterance texts](Utterance-texts) 1. [Far, Split, and Close conditions](Far,-Split,-and-Close-conditions)1. [Toy problems for development work](Toy-problems-for-development-work)1. [Core model](Core-model) 1. [Toy dataset illustration](Toy-dataset-illustration) 1. [Predicting sequences](Predicting-sequences) 1. [Listener-based evaluation](Listener-based-evaluation) 1. [Other prediction and evaluation methods](Other-prediction-and-evaluation-methods) 1. [Cross-validation](Cross-validation)1. [Baseline SCC model](Baseline-SCC-model)1. [Modifying the core model](Modifying-the-core-model) 1. [Illustration: LSTM Cells](Illustration:-LSTM-Cells) 1. [Illustration: Deeper models](Illustration:-Deeper-models) OverviewThis notebook is part of our unit on grounding. It illustrates core concepts from the unit, and it provides useful background material for the associated homework and bake-off. Set-up
###Code
from colors import ColorsCorpusReader
import os
import pandas as pd
from sklearn.model_selection import train_test_split
import torch
from torch_color_describer import (
ContextualColorDescriber, create_example_dataset)
import utils
from utils import START_SYMBOL, END_SYMBOL, UNK_SYMBOL
utils.fix_random_seeds()
###Output
_____no_output_____
###Markdown
The [Stanford English Colors in Context corpus](https://cocolab.stanford.edu/datasets/colors.html) (SCC) is included in the data distribution for this course. If you store the data in a non-standard place, you'll need to update the following:
###Code
COLORS_SRC_FILENAME = os.path.join(
"data", "colors", "filteredCorpus.csv")
COLORS_SRC_FILENAME
###Output
_____no_output_____
###Markdown
The corpus The SCC corpus is based in a two-player interactive game. The two players share a context consisting of three color patches, with the display order randomized between them so that they can't use positional information when communicating.The __speaker__ is privately assigned a target color and asked to produce a description of it that will enable the __listener__ to identify the speaker's target. The listener makes a choice based on the speaker's message, and the two succeed if and only if the listener identifies the target correctly.In the game, the two players played repeated reference games and could communicate with each other in a free-form way. This opens up the possibility of modeling these repeated interactions as task-oriented dialogues. However, for this unit, we'll ignore most of this structure. We'll treat the corpus as a bunch of independent reference games played by anonymous players, and we will ignore the listener and their choices entirely.For the bake-off, we will be distributing a separate test set. Thus, all of the data in the SCC can be used for exploration and development. Corpus reader The corpus reader class is `ColorsCorpusReader` in `colors.py`. The reader's primary function is to let you iterate over corpus examples:
###Code
corpus = ColorsCorpusReader(
COLORS_SRC_FILENAME,
word_count=None,
normalize_colors=True)
###Output
_____no_output_____
###Markdown
The two keyword arguments have their default values here. * If you supply `word_count` with an integer value, it will restrict to just examples where the utterance has that number of words (using a whitespace heuristic). This creates smaller corpora that are useful for development. * The colors in the corpus are in [HLS format](https://en.wikipedia.org/wiki/HSL_and_HSV). With `normalize_colors=False`, the first (hue) value is an integer between 1 and 360 inclusive, and the L (lightness) and S (saturation) values are between 1 and 100 inclusive. With `normalize_colors=True`, these values are all scaled to between 0 and 1 inclusive. The default is `normalize_colors=True` because this is a better choice for all the machine learning models we'll consider.
###Code
examples = list(corpus.read())
###Output
_____no_output_____
###Markdown
We can verify that we read in the same number of examples as reported in [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142):
###Code
# Should be 46994:
len(examples)
###Output
_____no_output_____
###Markdown
ColorsCorpusExample instances The examples are `ColorsCorpusExample` instances:
###Code
ex1 = next(corpus.read())
###Output
_____no_output_____
###Markdown
These objects have a lot of attributes and methods designed to help you study the corpus and use it for our machine learning tasks. Let's review some highlights. Displaying examples You can see what the speaker saw, with the utterance they chose written above the patches:
###Code
ex1.display(typ='speaker')
###Output
The darker blue one
###Markdown
This is the original order of patches for the speaker. The target happens to be the leftmost patch, as indicated by the black box around it. Here's what the listener saw, with the speaker's message printed above the patches:
###Code
ex1.display(typ='listener')
###Output
The darker blue one
###Markdown
The listener isn't shown the target, of course, so no patches are highlighted. If `display` is called with no arguments, then the target is placed in the final position and the other two are given in an order determined by the corpus metadata:
###Code
ex1.display()
###Output
The darker blue one
###Markdown
This is the representation order we use for our machine learning models. Color representations For machine learning, we'll often need to access the color representations directly. The primary attribute for this is `colors`:
###Code
ex1.colors
###Output
_____no_output_____
###Markdown
In this display order, the third element is the target color and the first two are the distractors. The attributes `speaker_context` and `listener_context` return the same colors but in the order that those players saw them. For example:
###Code
ex1.speaker_context
###Output
_____no_output_____
###Markdown
Utterance texts Utterances are just strings:
###Code
ex1.contents
###Output
_____no_output_____
###Markdown
There are cases where the speaker made a sequence of utterances for the same trial. We follow [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142) in concatenating these into a single utterance. To preserve the original information, the individual turns are separated by `" ### "`. Example 3 is the first with this property – let's check it out:
###Code
ex3 = examples[2]
ex3.contents
###Output
_____no_output_____
###Markdown
The method `parse_turns` will parse this into individual turns:
###Code
ex3.parse_turns()
###Output
_____no_output_____
###Markdown
For examples consisting of a single turn, `parse_turns` returns a list of length 1:
###Code
ex1.parse_turns()
###Output
_____no_output_____
###Markdown
Far, Split, and Close conditions The SCC contains three conditions: __Far condition__: All three colors are far apart in color space. Example:
###Code
print("Condition type:", examples[1].condition)
examples[1].display()
###Output
Condition type: far
purple
###Markdown
__Split condition__: The target is close to one of the distractors, and the other is far away from both of them. Example:
###Code
print("Condition type:", examples[3].condition)
examples[3].display()
###Output
Condition type: split
lime
###Markdown
__Close condition__: The target is similar to both distractors. Example:
###Code
print("Condition type:", examples[2].condition)
examples[2].display()
###Output
Condition type: close
Medium pink ### the medium dark one
###Markdown
These conditions go from easiest to hardest when it comes to reliable communication. In the __Far__ condition, the context is hardly relevant, whereas the nature of the distractors reliably shapes the speaker's choices in the other two conditions. You can begin to see how this affects speaker choices in the above examples: "purple" suffices for the __Far__ condition, a more marked single word ("lime") suffices in the __Split__ condition, and the __Close__ condition triggers a pretty long, complex description. The `condition` attribute provides access to this value:
###Code
ex1.condition
###Output
_____no_output_____
###Markdown
The following verifies that we have the same number of examples per condition as reported in [Monroe et al. 2017](https://transacl.org/ojs/index.php/tacl/article/view/1142):
###Code
pd.Series([ex.condition for ex in examples]).value_counts()
###Output
_____no_output_____
###Markdown
Toy problems for development work The SCC corpus is fairly large and quite challenging as an NLU task. This means it isn't ideal when it comes to testing hypotheses and debugging code. Poor performance could trace to a mistake, but it could just as easily trace to the fact that the problem is very challenging from the point of view of optimization.To address this, the module `torch_color_describer.py` includes a function `create_example_dataset` for creating small, easy datasets with the same basic properties as the SCC corpus.Here's a toy problem containing just six examples:
###Code
tiny_contexts, tiny_words, tiny_vocab = create_example_dataset(
group_size=2, vec_dim=2)
tiny_vocab
tiny_words
tiny_contexts
###Output
_____no_output_____
###Markdown
Each member of `tiny_contexts` contains three vectors. The final (target) vector always has values in a range that determines the corresponding word sequence, which is drawn from a set of three fixed sequences. Thus, the model basically just needs to learn to ignore the distractors and find the association between the target vector and the corresponding sequence. All the models we study have the capacity to solve this task with very little data, so you should see perfect or near perfect performance on reasonably-sized versions of this task. Core model Our core model for this problem is implemented in `torch_color_describer.py` as `ContextualColorDescriber`. At its heart, this is a pretty standard encoder–decoder model: * `Encoder`: Processes the color contexts as a sequence. We always place the target in final position so that it is closest to the supervision signals that we get when decoding. * `Decoder`: A neural language model whose initial hidden representation is the final hidden representation of the `Encoder`. * `EncoderDecoder`: Coordinates the operations of the `Encoder` and `Decoder`. Finally, `ContextualColorDescriber` is a wrapper around these model components. It handles the details of training and implements the prediction and evaluation functions that we will use. Many additional details about this model are included in the slides for this unit. Toy dataset illustration To highlight the core functionality of `ContextualColorDescriber`, let's create a small toy dataset and use it to train and evaluate a model:
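Before that, here is a minimal sketch of the encoder-decoder hand-off described above. It is purely illustrative: the real classes live in `torch_color_describer.py` and include more machinery (embedding options, packed sequences, start/end symbol handling).
###Code
import torch.nn as nn

class SketchEncoder(nn.Module):
    """Reads the sequence of color vectors and returns the final GRU hidden state."""
    def __init__(self, color_dim, hidden_dim):
        super().__init__()
        self.rnn = nn.GRU(input_size=color_dim, hidden_size=hidden_dim, batch_first=True)

    def forward(self, color_seqs):
        _, hidden = self.rnn(color_seqs)
        return hidden  # summary of the color context

class SketchDecoder(nn.Module):
    """A neural language model whose initial hidden state comes from the encoder."""
    def __init__(self, vocab_size, embed_dim, hidden_dim):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(input_size=embed_dim, hidden_size=hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word_seqs, hidden):
        out, hidden = self.rnn(self.embedding(word_seqs), hidden)
        return self.output(out), hidden
###Output
_____no_output_____
###Markdown
With that picture in mind, here is the toy dataset: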
###Code
toy_color_seqs, toy_word_seqs, toy_vocab = create_example_dataset(
group_size=50, vec_dim=2)
toy_color_seqs_train, toy_color_seqs_test, toy_word_seqs_train, toy_word_seqs_test = \
train_test_split(toy_color_seqs, toy_word_seqs)
###Output
_____no_output_____
###Markdown
Here we expose all of the available parameters with their default values:
###Code
toy_mod = ContextualColorDescriber(
toy_vocab,
embedding=None, # Option to supply a pretrained matrix as an `np.array`.
embed_dim=10,
hidden_dim=10,
max_iter=100,
eta=0.01,
optimizer=torch.optim.Adam,
batch_size=128,
l2_strength=0.0,
warm_start=False,
device=None)
_ = toy_mod.fit(toy_color_seqs_train, toy_word_seqs_train)
###Output
Epoch 100; err = 0.1345149129629135
###Markdown
Predicting sequences The `predict` method takes a list of color contexts as input and returns model descriptions:
###Code
toy_preds = toy_mod.predict(toy_color_seqs_test)
toy_preds[0]
###Output
_____no_output_____
###Markdown
We can then check that we predicted all correct sequences:
###Code
toy_word_seqs_test
toy_preds[0] = 'AG'
print(toy_preds)
toy_correct = sum(1 for x, p in zip(toy_word_seqs_test, toy_preds) if x == p)
print(toy_correct)
toy_correct / len(toy_word_seqs_test)
###Output
37
###Markdown
For real problems, this is too stringent a requirement, since there are generally many equally good descriptions. This insight gives rise to metrics like [BLEU](https://en.wikipedia.org/wiki/BLEU), [METEOR](https://en.wikipedia.org/wiki/METEOR), [ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric)), [CIDEr](https://arxiv.org/pdf/1411.5726.pdf), and others, which seek to relax the requirement of an exact match with the test sequence. These are reasonable options to explore, but we will instead adopt a communication-based evaluation, as discussed in the next section. Listener-based evaluation `ContextualColorDescriber` implements a method `listener_accuracy` that we will use for our primary evaluations in the assignment and bake-off. The essence of the method is that we can calculate $$c^{*} = \text{argmax}_{c \in C} P_S(\text{utterance} \mid c)$$ where $P_S$ is our describer model and $C$ is the set of all permutations of all three colors in the color context. We take $c^{*}$ to be a correct prediction if it is one where the target is in the privileged final position. (There are two such contexts; we try both in case the order of the distractors influences the predictions, and the model is correct if one of them has the highest probability.) Here's the listener accuracy of our toy model:
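Before we call it, here is a rough sketch of the idea for a single test example, using the `perplexities` method described below as the scoring function (lower perplexity means the utterance is more probable under that ordering). This is only an illustration of the logic, not the library's actual implementation:
###Code
import itertools

colors = toy_color_seqs_test[0]      # three color vectors; the target is last
utterance = toy_word_seqs_test[0]

best_perp, best_order = None, None
for order in itertools.permutations(range(3)):
    context = [colors[i] for i in order]
    # `perplexities` scores a batch, so we pass a batch of size 1:
    perp = toy_mod.perplexities([context], [utterance])[0]
    if best_perp is None or perp < best_perp:
        best_perp, best_order = perp, order

# Correct if the best-scoring ordering places the true target (index 2) last:
print(best_order[-1] == 2)
###Output
_____no_output_____
###Markdown
And here is the built-in method run over the full test set: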
###Code
toy_mod
toy_color_seqs_test
toy_word_seqs_test
toy_mod.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
###Output
_____no_output_____
###Markdown
Other prediction and evaluation methods You can get the perplexities for test examples with `perplexities`:
###Code
toy_perp = toy_mod.perplexities(toy_color_seqs_test, toy_word_seqs_test)
toy_perp[0]
###Output
_____no_output_____
###Markdown
You can use `predict_proba` to see the full probability distributions assigned to test examples:
###Code
toy_proba = toy_mod.predict_proba(toy_color_seqs_test, toy_word_seqs_test)
toy_proba[0].shape
for timestep in toy_proba[0]:
print(dict(zip(toy_vocab, timestep)))
###Output
{'<s>': 1.0, '</s>': 0.0, 'A': 0.0, 'B': 0.0, '$UNK': 0.0}
{'<s>': 0.0036858993, '</s>': 0.00026680966, 'A': 0.98546416, 'B': 0.009143492, '$UNK': 0.0014396018}
{'<s>': 0.004782121, '</s>': 0.024507336, 'A': 0.0019362189, 'B': 0.96381485, '$UNK': 0.004959458}
{'<s>': 0.0050889943, '</s>': 0.9780351, 'A': 0.014443769, 'B': 0.0008280441, '$UNK': 0.00160416}
###Markdown
Cross-validation You can use `utils.fit_classifier_with_crossvalidation` to cross-validate these models. Just be sure to set `scoring=None` so that the sklearn model selection methods use the `score` method of `ContextualColorDescriber`, which is an alias for `listener_accuracy`:
###Code
best_mod = utils.fit_classifier_with_crossvalidation(
toy_color_seqs_train,
toy_word_seqs_train,
toy_mod,
cv=2,
scoring=None,
param_grid={'hidden_dim': [10, 20]})
###Output
Epoch 100; err = 0.12754578888416298
###Markdown
Baseline SCC model Just to show how all the pieces come together, here's a very basic SCC experiment using the core code and very simplistic assumptions (which you will revisit in the assignment) about how to represent the examples: To facilitate quick development, we'll restrict attention to the two-word examples:
###Code
dev_corpus = ColorsCorpusReader(COLORS_SRC_FILENAME, word_count=2)
dev_examples = list(dev_corpus.read())
len(dev_examples)
###Output
_____no_output_____
###Markdown
Here we extract the raw colors and texts (as strings):
###Code
x = [[ex.colors, ex.contents] for ex in dev_examples]
dev_cols, dev_texts = zip(*[[ex.colors, ex.contents] for ex in dev_examples])
len(dev_texts)
len(dev_cols)
###Output
_____no_output_____
###Markdown
To tokenize the examples, we'll just split on whitespace, taking care to add the required boundary symbols:
###Code
dev_word_seqs = [[START_SYMBOL] + text.split() + [END_SYMBOL] for text in dev_texts]
len(dev_word_seqs)
###Output
_____no_output_____
###Markdown
We'll use a random train–test split:
###Code
dev_cols_train, dev_cols_test, dev_word_seqs_train, dev_word_seqs_test = \
train_test_split(dev_cols, dev_word_seqs)
###Output
_____no_output_____
###Markdown
Our vocab is determined by the train set, and we take care to include the `$UNK` token:
###Code
dev_vocab = sorted({w for toks in dev_word_seqs_train for w in toks}) + [UNK_SYMBOL]
len(dev_vocab)
###Output
_____no_output_____
###Markdown
And now we're ready to train a model:
###Code
dev_mod = ContextualColorDescriber(
dev_vocab,
embed_dim=10,
hidden_dim=10,
max_iter=10,
batch_size=128)
print(len(dev_cols_train))
print(len(dev_word_seqs_train))
dev_cols_train[0]
dev_cols_train[100]
_ = dev_mod.fit(dev_cols_train, dev_word_seqs_train)
###Output
Epoch 10; err = 101.75892037153244
###Markdown
And finally an evaluation in terms of listener accuracy:
###Code
dev_mod
print(len(dev_cols_train))
print(len(dev_cols_test))
dev_mod.listener_accuracy(dev_cols_test, dev_word_seqs_test)
###Output
_____no_output_____
###Markdown
Modifying the core model The first few assignment problems concern how you preprocess the data for your model. After that, the goal is to subclass model components in `torch_color_describer.py`. For the bake-off submission, you can do whatever you like in terms of modeling, but my hope is that you'll be able to continue subclassing based on `torch_color_describer.py`.This section provides some illustrative examples designed to give you a feel for how the code is structured and what your options are in terms of creating subclasses. Illustration: LSTM Cells Both the `Encoder` and the `Decoder` of `torch_color_describer` are currently GRU cells. Switching to another cell type is easy: __Step 1__: Subclass the `Encoder`; all we have to do here is change `GRU` from the original to `LSTM`:
###Code
import torch.nn as nn
from torch_color_describer import Encoder
class LSTMEncoder(Encoder):
def __init__(self, color_dim, hidden_dim):
super().__init__(color_dim, hidden_dim)
self.rnn = nn.LSTM(
input_size=self.color_dim,
hidden_size=self.hidden_dim,
batch_first=True)
###Output
_____no_output_____
###Markdown
__Step 2__: Subclass the `Decoder`, making the same simple change as above:
###Code
import torch.nn as nn
from torch_color_describer import Encoder, Decoder
class LSTMDecoder(Decoder):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.rnn = nn.LSTM(
input_size=self.embed_dim,
hidden_size=self.hidden_dim,
batch_first=True)
###Output
_____no_output_____
###Markdown
__Step 3__:`ContextualColorDescriber` has a method called `build_graph` that sets up the `Encoder` and `Decoder`. The needed revision just uses `LSTMEncoder`:
###Code
from torch_color_describer import EncoderDecoder
class LSTMContextualColorDescriber(ContextualColorDescriber):
def build_graph(self):
# Use the new Encoder:
encoder = LSTMEncoder(
color_dim=self.color_dim,
hidden_dim=self.hidden_dim)
# Use the new Decoder:
decoder = LSTMDecoder(
vocab_size=self.vocab_size,
embed_dim=self.embed_dim,
embedding=self.embedding,
hidden_dim=self.hidden_dim)
return EncoderDecoder(encoder, decoder)
###Output
_____no_output_____
###Markdown
Here's an example run:
###Code
lstm_mod = LSTMContextualColorDescriber(
toy_vocab,
embed_dim=10,
hidden_dim=10,
max_iter=100,
batch_size=128)
_ = lstm_mod.fit(toy_color_seqs_train, toy_word_seqs_train)
lstm_mod.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
###Output
_____no_output_____
###Markdown
Illustration: Deeper models The `Encoder` and `Decoder` are both currently hard-coded to have just one hidden layer. It is straightforward to make them deeper as long as we ensure that both the `Encoder` and `Decoder` have the same depth; since the `Encoder` final states are the initial hidden states for the `Decoder`, we need this alignment. (Strictly speaking, we could have different numbers of `Encoder` and `Decoder` layers, as long as we did some kind of averaging or copying to achieve the hand-off from `Encoder` to `Decoder`. I'll set this possibility aside.) __Step 1__: We need to subclass the `Encoder` and `Decoder` so that they have a `num_layers` argument that is fed into the RNN cell:
###Code
import torch.nn as nn
from torch_color_describer import Encoder, Decoder
class DeepEncoder(Encoder):
def __init__(self, *args, num_layers=2, **kwargs):
super().__init__(*args, **kwargs)
self.num_layers = num_layers
self.rnn = nn.GRU(
input_size=self.color_dim,
hidden_size=self.hidden_dim,
num_layers=self.num_layers,
batch_first=True)
class DeepDecoder(Decoder):
def __init__(self, *args, num_layers=2, **kwargs):
super().__init__(*args, **kwargs)
self.num_layers = num_layers
self.rnn = nn.GRU(
input_size=self.embed_dim,
hidden_size=self.hidden_dim,
num_layers=self.num_layers,
batch_first=True)
###Output
_____no_output_____
###Markdown
__Step 2__: As before, we need to update the `build_graph` method of `ContextualColorDescriber`. The needed revision just uses `DeepEncoder` and `DeepDecoder`. To expose this new argument to the user, we also add a new keyword argument to `ContextualColorDescriber`:
###Code
from torch_color_describer import EncoderDecoder
class DeepContextualColorDescriber(ContextualColorDescriber):
def __init__(self, *args, num_layers=2, **kwargs):
self.num_layers = num_layers
super().__init__(*args, **kwargs)
def build_graph(self):
encoder = DeepEncoder(
color_dim=self.color_dim,
hidden_dim=self.hidden_dim,
num_layers=self.num_layers) # The new piece is this argument.
decoder = DeepDecoder(
vocab_size=self.vocab_size,
embed_dim=self.embed_dim,
embedding=self.embedding,
hidden_dim=self.hidden_dim,
num_layers=self.num_layers) # The new piece is this argument.
return EncoderDecoder(encoder, decoder)
###Output
_____no_output_____
###Markdown
An example/test run:
###Code
mod_deep = DeepContextualColorDescriber(
toy_vocab,
embed_dim=10,
hidden_dim=10,
max_iter=100,
batch_size=128)
_ = mod_deep.fit(toy_color_seqs_train, toy_word_seqs_train)
mod_deep.listener_accuracy(toy_color_seqs_test, toy_word_seqs_test)
###Output
_____no_output_____ |
scripts/tutorial-pyswaggerclient.ipynb | ###Markdown
Using FAIRshake API This notebook walks through using the FAIRshake API with coreapi's Python implementation; it works just as easily in JavaScript or any language with a coreapi implementation. Given that swagger is also exposed at , a swagger-based client can *also* be used. For more information, refer to the documentation at https://fairshake.cloud/swagger/ Dependencies: `pip install 'git+https://github.com/u8sand/[email protected]'`
###Code
import pyswaggerclient
url = 'https://fairshake.cloud/'
schema_url = 'https://fairshake.cloud/swagger?format=openapi'
fairshake_client = pyswaggerclient.SwaggerClient(schema_url)
###Output
_____no_output_____
###Markdown
Step 1. Authentication If you don't already have a user registered with FAIRshake, you need to create one. This can be done manually at . Your API Key is available at . **Note**: Authentication is only required if you plan on *creating*/*changing* things. Read-access is available without authentication.
###Code
# Put your API Key in this string
api_key = ''
###Output
_____no_output_____
###Markdown
Now that we have our API key, we can reinstantiate our Client with an authenticated transport layer.
###Code
fairshake_client.update(
headers={
'Authorization': 'Token ' + api_key,
}
)
###Output
_____no_output_____
###Markdown
We can test that it worked by reading information about the logged in user.
###Code
fairshake_client.actions.auth_user_read.call()
###Output
_____no_output_____
###Markdown
Step 2. Project, Digital Object, Rubric, Metric Management All elements expose themselves in the same way, with a common set of attributes for search and identification and a few extra attributes distinguishing each element. Here, quite simply, is the gist of these data models:```pythonclass Identifiable: id: int url: str title: str description: str image: str tags: str type: ['', 'any', 'data', 'repo', 'test', 'tool'] authors: Author[]class Project(Identifiable): passclass DigitalObject(Identifiable): projects: Project[] rubrics: Rubric[]class Rubric(Identifiable): license: str metrics: Metric[]class Metric(Identifiable): license: str rationale: str principle: str fairmetrics: str fairsharing: str```Queries can be made by providing any of the parameters, and we'll return the subset of the database which satisfies those parameter constraints. When you use `title=something` we do a fuzzy search if it makes sense to do so. More fine-tuned queries are actually supported by the API but not yet documented; these allow for django-style filters, e.g. `title__contains=something`.**Note**: Results are paginated; use `params={'page': n}` to go through pages.
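For instance, the pagination and filter options just mentioned can be combined like this (illustrative only; the django-style filter name is an assumption, since those filters are not yet documented):
###Code
# Fetch the second page of the digital object listing:
fairshake_client.actions.digital_object_list.call(params={'page': 2})
# Hypothetical django-style fuzzy filter on the title (undocumented, may change):
fairshake_client.actions.digital_object_list.call(title__contains='test')
###Output
_____no_output_____
###Markdown
Now some concrete calls against the API: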
###Code
# List all projects
fairshake_client.actions.project_list.call()
# List all Digital objects of type Tool
fairshake_client.actions.digital_object_list.call(type='tool')
# Create a metric
metric = fairshake_client.actions.metric_create.call(data=dict(
title='My Metric',
description='It has a url',
type='url',
tags='my project test',
license='MIT',
rationale='https://fairrationals.com/test',
principle='F',
))
metric_id = metric['id']
metric
# Create a rubric
rubric = fairshake_client.actions.rubric_create.call(data=dict(
title='My Rubric',
description='Rubric is great',
tags='my project test',
type='tool',
license='MIT',
metrics=[
metric_id,
],
))
rubric_id = rubric['id']
rubric
# Create a project
proj = fairshake_client.actions.project_create.call(data=dict(
url='http://my-objects.com',
title='My Project',
description='Project is great',
tags='my project test',
))
proj_id = proj['id']
proj
# Create a digital object
obj = fairshake_client.actions.digital_object_create.call(data=dict(
url='http://my-objects.com/00001',
title='My Object',
description='Object is great',
tags='my object test',
type='tool',
rubrics=[rubric_id],
projects=[proj_id],
))
obj_id = obj['id']
obj
###Output
_____no_output_____
###Markdown
Performing Assessments```pythonclass Assessment: project: Project target: DigitalObject rubric: Rubric methodology: ['self', 'user', 'auto', 'test'] answers: Answer[]class Answer: metric: Metric answer: float 0~1 quantified metric satisfaction comment: str url_comment: str```
###Code
# Create an assessment
fairshake_client.actions.assessment_create.call(data=dict(
project=proj_id,
target=obj_id,
rubric=rubric_id,
methodology='test',
answers=[
{
'metric': metric_id,
'answer': 1.0,
'url_comment': 'http://my_url.com',
},
],
))
###Output
_____no_output_____
###Markdown
Obtaining the score of a Digital Object
###Code
score = fairshake_client.actions.score_list.call(data=dict(target=obj_id))
score
###Output
_____no_output_____
###Markdown
Displaying FAIR insignia The insignia client library exposes a function, `build_svg_from_score` which accepts a container and the query dict.
###Code
import json
from IPython.display import display, HTML
HTML("""
<div
id="insignia"
data-target="%s"
style="width: 40px; height: 40px; border: 0px solid black" />
""" % (obj_id))
%%javascript
require(['https://fairshake.cloud/static/scripts/insignia.js'], function(insignia) {
var element = document.getElementById('insignia')
insignia.build_svg_from_score(
element,
{ target: element.getAttribute('data-target') }
)
})
###Output
_____no_output_____
###Markdown
Delete test objects
###Code
# Delete the test objects created above (digital object, project, rubric, metric)
result = fairshake_client.actions.digital_object_delete(data=dict(id=obj_id))
display(result)
result = fairshake_client.actions.project_delete(data=dict(id=proj_id))
display(result)
result = fairshake_client.actions.rubric_delete(data=dict(id=rubric_id))
display(result)
result = fairshake_client.actions.metric_delete(data=dict(id=metric_id))
display(result)
###Output
_____no_output_____ |
Machine Learning Projects/Kelas Pengembangan ML/PengembanganML_1_citrus.ipynb | ###Markdown
###Code
import pandas as pd
citrus = pd.read_csv('citrus.csv') #Upload citrus.csv first
citrus.info()
citrus.head()
# Encode the labels numerically; .loc avoids pandas' chained-assignment warning
citrus.loc[citrus.name == 'orange', 'name'] = 0
citrus.loc[citrus.name == 'grapefruit', 'name'] = 1
citrus.head(10000)
dataset = citrus.values
dataset
x = dataset[:,1:6]  # the five numeric feature columns
y = dataset[:,0]    # the encoded label: 0 = orange, 1 = grapefruit
from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
x_scale = min_max_scaler.fit_transform(x)
x_scale
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x_scale, y, test_size=0.3)
import numpy as np
y_train = y_train.astype(np.float32)
y_test = y_test.astype(np.float32)
from keras.models import Sequential
from keras.layers import Dense
model = Sequential([
Dense(32, activation='relu', input_shape=(5,)),
Dense(32, activation='relu'),
Dense(1, activation='sigmoid'),
])
model.compile(
optimizer='sgd',
loss='binary_crossentropy',
metrics=['accuracy']
)
model.fit(x_train, y_train, epochs=101)
model.evaluate(x_test, y_test)
###Output
_____no_output_____ |
pyML-ex7/.ipynb_checkpoints/c3py-sample-checkpoint.ipynb | ###Markdown
This is a sample of using the c3py library (https://[email protected]/yurz/c3py.git). Only Line, Area and Bar (Unstacked and Stacked) charts are supported at the moment. c3py can be used for displaying charts inside Jupyter Notebooks or for rendering self-contained HTML files.
###Code
from c3py import C3Chart as c3
import sys
import IPython
print(sys.version)
print(IPython.__version__)
###Output
3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)]
4.2.0
###Markdown
For displaying inside Jupyter, load the required JavaScript libraries:
###Code
c3.ipy_load()
###Output
_____no_output_____
###Markdown
Download some GDP data from World Bank and prepare Pandas DataFrame object:
###Code
from pandas_datareader import data, wb
df = wb.download(indicator='NY.GDP.PCAP.PP.KD', country="all", start=2010, end=2015).unstack()
###Output
_____no_output_____
###Markdown
This is how raw df looks:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Cleanup and create df_selected with top 5 and bottom 5 countries based on GDP Per Capita Growth between 2010 and 2015:
###Code
df.columns = df.columns.levels[1]
df = df.dropna(how="any")
df = df.astype(int)
df["5yr_change"] = (df["2014"]-df["2010"])/df["2010"]
df = df.sort_values("5yr_change")
df_selected = df.head().append(df.tail())
df_selected
ch1 = c3(data=df_selected[['2010', '2011', '2012', '2013', '2014', '2015']].T,
width=1020, height=380, kind="line", title="GDP Per Capita 2010-2015 Growth - Top 5 and Bottom 5 Countries", id="ch1")
ch1()
###Output
_____no_output_____
###Markdown
Now let's create a copy of the above chart, presented as an area chart
###Code
ch2 = ch1.copy()
ch2.id = "ch2"
ch2.kind = "area"
ch2()
###Output
_____no_output_____
###Markdown
Render an HTML file with both charts:
###Code
with open("sample.html", "w") as f:
f.write("<html><body>" + c3.head + ch1.to_html() + "<br/><br/>" + ch2.to_html() + "</html></body>")
###Output
_____no_output_____ |
2.2.c.Temperature_climatology_Spain_results.ipynb | ###Markdown
2.2.c Temperature climatology in Spain (results)=================================================In this notebook we will finally generate the climatology results of daily temperature observations.Our final goal is still to know whether the temperature anomaly has influence on the power demand or not. Now, let's import the _classic stack_
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
We now need to load the compressed file into memory.
###Code
DATA = pd.read_csv("aemet_daily_observations_2009-2018.csv.bz2",compression='bz2')
###Output
_____no_output_____
###Markdown
Let's have a quick look at the first and last lines of the file
###Code
DATA.head(2)
DATA.tail(2)
###Output
_____no_output_____
###Markdown
The observations cover the period from the end of 2009 to the end of 2018, 10 years of data. That seems enough to obtain a representative climatology, and the data have already been through some preparation. Group by The **_groupby_** method is a very powerful tool for applying functions based on categories. We will use it to apply functions to our _category_, the _day-of-the-year_. But, first of all, we add datetime properties to the date column and set it as the index
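As a quick illustration before we start, a single `groupby` call like the following computes the mean daily-average temperature per station (just a sketch of the idea):
###Code
# Mean of the daily average temperature, by station (illustration only):
DATA.groupby('station')['tavg'].mean().head()
###Output
_____no_output_____
###Markdown
Back to the preparation steps: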
###Code
DATA.loc[:,'date'] = pd.DatetimeIndex(pd.to_datetime(DATA['date']))
DATA.set_index(['date'],inplace=True)
DATA.loc[:,'doy'] = DATA.index.strftime('Y%m%d')
DATA.head(2)
station_group = DATA.groupby(by='station')
###Output
_____no_output_____
###Markdown
Let's loop through the station_group object to get a first insight into what the groupby method has produced.
###Code
for name,group_st in station_group:
print ('%s (%s) %s' % (name,group_st.iloc[0].region ,len(group_st)))
###Output
A CORUÑA (A CORUÑA) 3318
A CORUÑA AEROPUERTO (A CORUÑA) 3314
A POBRA DE TRIVES (OURENSE) 3194
ABLA (ALMERIA) 3257
ADRA (ALMERIA) 1672
ALAJAR (HUELVA) 3220
ALBACETE (ALBACETE) 3317
ALBACETE BASE AÉREA (ALBACETE) 3318
ALBORÁN (ALMERIA) 1711
ALCANTARILLA, BASE AÉREA (MURCIA) 3318
ALCAÑIZ (TERUEL) 3082
ALICANTE-ELCHE AEROPUERTO (ALICANTE) 3318
ALICANTE/ALACANT (ALICANTE) 3318
ALMERÍA AEROPUERTO (ALMERIA) 3318
ANDÚJAR (JAEN) 3087
ANTEQUERA (MALAGA) 2933
ARAGÜÉS DEL PUERTO (HUESCA) 2915
ARANDA DE DUERO (BURGOS) 3301
ARANGUREN, ILUNDAIN (NAVARRA) 2946
ARANJUEZ (MADRID) 3093
ARENYS DE MAR (BARCELONA) 2975
ARROYO DEL OJANCO (JAEN) 3137
ASTURIAS AEROPUERTO (ASTURIAS) 3318
AUTILLA DEL PINO (PALENCIA) 3257
AYAMONTE (HUELVA) 3281
BADAJOZ AEROPUERTO (BADAJOZ) 3318
BARCELONA (BARCELONA) 3237
BARCELONA AEROPUERTO (BARCELONA) 3318
BARCELONA, FABRA (BARCELONA) 3318
BARDENAS REALES, BASE AÉREA (NAVARRA) 3145
BAZA (GRANADA) 2890
BAZTAN, IRURITA (NAVARRA) 2819
BELORADO (BURGOS) 2932
BENAVENTE (ZAMORA) 3315
BIELSA (HUESCA) 3023
BILBAO AEROPUERTO (BIZKAIA) 3318
BUITRAGO DEL LOZOYA (MADRID) 3032
BUJARALOZ (ZARAGOZA) 2346
BURGOS AEROPUERTO (BURGOS) 3318
CABO BUSTO (ASTURIAS) 2729
CABO PEÑAS (ASTURIAS) 2992
CABO VILAN (A CORUÑA) 2962
CADREITA (NAVARRA) 2265
CALAMOCHA (TERUEL) 3307
CALANDA (TERUEL) 2433
CALATAYUD (ZARAGOZA) 2732
CAPDEPERA (ILLES BALEARS) 3024
CARAVACA DE LA CRUZ (MURCIA) 3309
CARBALLIÑO, O (OURENSE) 3253
CARBONERAS (ALMERIA) 1962
CARRIÓN DE LOS CONDES (PALENCIA) 3281
CARTAGENA (MURCIA) 3318
CASTELLFORT (CASTELLON) 3087
CASTELLÓN - ALMASSORA (CASTELLON) 3318
CASTRO URDIALES (CANTABRIA) 1628
CASTROPOL (ASTURIAS) 1469
CAZALLA DE LA SIERRA (SEVILLA) 3213
CAZORLA (JAEN) 2839
CAÑIZARES (CUENCA) 2603
CERVERA DE PISUERGA (PALENCIA) 3211
CEUTA (CEUTA) 3305
CHINCHILLA (ALBACETE) 2163
CIEZA (MURCIA) 3310
CIUDAD REAL (CIUDAD REAL) 3318
COLMENAR VIEJO (MADRID) 3318
CORIA (CACERES) 3273
CUENCA (CUENCA) 3318
CÁCERES (CACERES) 3318
CÁDIZ (CADIZ) 2893
CÓRDOBA AEROPUERTO (CORDOBA) 3318
DAROCA (ZARAGOZA) 3318
DON BENITO (BADAJOZ) 3297
DONOSTIA/SAN SEBASTIÁN, IGUELDO (GIPUZKOA) 3318
DOÑA MENCÍA (CORDOBA) 3235
ELGOIBAR (GIPUZKOA) 3227
ESCORCA, LLUC (ILLES BALEARS) 3210
ESTACA DE BARES (A CORUÑA) 3189
ESTEPONA (MALAGA) 3264
FISTERRA (A CORUÑA) 3087
FORONDA-TXOKIZA (ARABA/ALAVA) 3318
FUENGIROLA (MALAGA) 3293
FUERTEVENTURA AEROPUERTO (LAS PALMAS) 3303
GETAFE (MADRID) 3318
GIJÓN, CAMPUS (ASTURIAS) 1908
GIJÓN, PUERTO (ASTURIAS) 3318
GIRONA AEROPUERTO (GIRONA) 3316
GRAN CANARIA AEROPUERTO (LAS PALMAS) 3318
GRANADA AEROPUERTO (GRANADA) 3318
GRANADA BASE AÉREA (GRANADA) 3318
GRAZALEMA (CADIZ) 3237
GUADALAJARA (GUADALAJARA) 2710
GUADALAJARA, EL SERRANILLO (GUADALAJARA) 1727
GÜEÑES (BIZKAIA) 3145
GÜÍMAR (STA. CRUZ DE TENERIFE) 1826
HELLÍN (ALBACETE) 3079
HERRERA DEL DUQUE (BADAJOZ) 3261
HIERRO AEROPUERTO (STA. CRUZ DE TENERIFE) 3318
HINOJOSA DEL DUQUE (CORDOBA) 2845
HONDARRIBIA, MALKARROA (GIPUZKOA) 3318
HUELVA, RONDA ESTE (HUELVA) 3309
HUESCA AEROPUERTO (HUESCA) 3318
HUÉRCAL-OVERA (ALMERIA) 3263
HUÉSCAR (GRANADA) 2321
IBIZA, AEROPUERTO (ILLES BALEARS) 3318
IZAÑA (STA. CRUZ DE TENERIFE) 3318
JACA (HUESCA) 2693
JAÉN (JAEN) 3318
JAÉN, INSTITUTO (JAEN) 1278
JEREZ DE LA FRONTERA AEROPUERTO (CADIZ) 3318
JEREZ DE LOS CABALLEROS (BADAJOZ) 3110
JÁVEA/ XÀBIA (ALICANTE) 3174
LA ALDEA DE SAN NICOLÁS (LAS PALMAS) 1484
LA COVATILLA, ESTACIÓN DE ESQUÍ (SALAMANCA) 2952
LA MOLINA (GIRONA) 3060
LA PALMA AEROPUERTO (STA. CRUZ DE TENERIFE) 3303
LA PINILLA, ESTACIÓN DE ESQUÍ (SEGOVIA) 3266
LA RODA DE ANDALUCÍA (SEVILLA) 3182
LA SEU D'URGELL (LLEIDA) 3180
LAGUNAS DE SOMOZA (LEON) 3083
LANZAROTE AEROPUERTO (LAS PALMAS) 3318
LAS PALMAS DE GRAN CANARIA, PL. DE LA FERIA (LAS PALMAS) 1861
LAS PALMAS DE GRAN CANARIA, SAN CRISTOBAL (LAS PALMAS) 2705
LEKEITIO (BIZKAIA) 3125
LEÓN AEROPUERTO (LEON) 1218
LEÓN, VIRGEN DEL CAMINO (LEON) 3318
LLANES (ASTURIAS) 3149
LLEIDA (LLEIDA) 3318
LLERENA (BADAJOZ) 3237
LOGROÑO AEROPUERTO (LA RIOJA) 3318
LOJA (GRANADA) 3147
LORCA (MURCIA) 3282
LUGO AEROPUERTO (LUGO) 3318
MACHICHACO (BIZKAIA) 3069
MADRID AEROPUERTO (MADRID) 3318
MADRID, CIUDAD UNIVERSITARIA (MADRID) 3151
MADRID, CUATRO VIENTOS (MADRID) 3318
MADRID, RETIRO (MADRID) 3318
MADRIDEJOS (TOLEDO) 2934
MANRESA (BARCELONA) 3199
MASPALOMAS (LAS PALMAS) 2867
MEDINA DE POMAR (BURGOS) 3145
MELILLA (MELILLA) 3318
MENORCA, AEROPUERTO (ILLES BALEARS) 3318
MOGUER, EL ARENOSILLO (HUELVA) 3239
MOGÁN, PUERTO (LAS PALMAS) 3219
MOLINA DE ARAGÓN (GUADALAJARA) 3288
MONTE IROITE (A CORUÑA) 2513
MORÓN DE LA FRONTERA (SEVILLA) 3318
MOTRIL (GRANADA) 553
MURCIA (MURCIA) 3318
MÁLAGA AEROPUERTO (MALAGA) 3318
MÁLAGA, CENTRO METEOROLÓGICO (MALAGA) 3310
MÁLAGA, PUERTO (MALAGA) 59
MÉRIDA (BADAJOZ) 3298
NAUT ARAN, ARTIES (LLEIDA) 2941
NAVALMORAL DE LA MATA (CACERES) 3312
NAVARREDONDA DE GREDOS (AVILA) 3130
OLIVA (VALENCIA) 3203
OLMEDO (VALLADOLID) 2886
OURENSE (OURENSE) 3318
OVIEDO (ASTURIAS) 5814
PADRÓN (A CORUÑA) 3131
PALACIOS DE LA SIERRA (BURGOS) 3181
PALMA, AEROPUERTO (ILLES BALEARS) 3318
PALMA, PUERTO (ILLES BALEARS) 3318
PAMPLONA (NAVARRA) 3318
PAMPLONA AEROPUERTO (NAVARRA) 3318
PINOSO (ALICANTE) 3060
PLASENCIA (CACERES) 3238
POLINYÀ DE XÚQUER (VALENCIA) 3188
PONFERRADA (LEON) 3318
PONTEVEDRA (PONTEVEDRA) 3318
PORQUERES (GIRONA) 2060
PORRERES (ILLES BALEARS) 3203
PORTOCOLOM (ILLES BALEARS) 3106
PUEBLA DE DON RODRIGO (CIUDAD REAL) 2660
PUERTO ALTO DEL LEÓN (MADRID) 2691
PUERTO DE LA CRUZ (STA. CRUZ DE TENERIFE) 3220
PUERTO DE LEITARIEGOS (ASTURIAS) 2762
PUERTO DE NAVACERRADA (MADRID) 3318
PUERTO DE PAJARES (ASTURIAS) 3037
PUERTO DE SAN ISIDRO (LEON) 2796
PUNTA GALEA (BIZKAIA) 3257
PÁJARA (LAS PALMAS) 1443
QUINTANAR DE LA ORDEN (TOLEDO) 2859
REINOSA (CANTABRIA) 3176
REUS AEROPUERTO (TARRAGONA) 3318
RIPOLL (GIRONA) 3206
ROBLEDO DE CHAVELA (MADRID) 2749
RONDA (MALAGA) 558
ROQUETAS DE MAR (ALMERIA) 2954
ROTA, BASE NAVAL (CADIZ) 3318
SA POBLA (ILLES BALEARS) 2744
SABADELL AEROPUERTO (BARCELONA) 2241
SAELICES EL CHICO (SALAMANCA) 3297
SALAMANCA (SALAMANCA) 3308
SALAMANCA AEROPUERTO (SALAMANCA) 3318
SAN CLEMENTE (CUENCA) 2112
SAN FERNANDO (CADIZ) 3284
SAN JAVIER AEROPUERTO (MURCIA) 6240
SAN PABLO DE LOS MONTES (TOLEDO) 3175
SAN SEBASTIÁN AEROPUERTO (GIPUZKOA) 2618
SAN SEBASTIÁN DE LA GOMERA (STA. CRUZ DE TENERIFE) 3251
SAN VICENTE DE LA BARQUERA (CANTABRIA) 3299
SANT JAUME D'ENVEJA (TARRAGONA) 2967
SANTA ELENA (JAEN) 3050
SANTA SUSANNA (BARCELONA) 2565
SANTANDER (CANTABRIA) 5815
SANTANDER AEROPUERTO (CANTABRIA) 3318
SANTIAGO DE COMPOSTELA (A CORUÑA) 3214
SANTIAGO DE COMPOSTELA AEROPUERTO (A CORUÑA) 3318
SEGOVIA (SEGOVIA) 3318
SEVILLA AEROPUERTO (SEVILLA) 3318
SIERRA DE ALFABIA, BUNYOLA (ILLES BALEARS) 3090
SIGÜENZA (GUADALAJARA) 3019
SOMOSIERRA (MADRID) 2736
SORIA (SORIA) 3267
SOS DEL REY CATÓLICO (ZARAGOZA) 2810
SOTILLO DE LA ADRADA (AVILA) 2991
STA.CRUZ DE TENERIFE (STA. CRUZ DE TENERIFE) 3318
TALARN (LLEIDA) 2824
TALAVERA DE LA REINA (TOLEDO) 3063
TARANCÓN (CUENCA) 3259
TARIFA (CADIZ) 3102
TAZACORTE (STA. CRUZ DE TENERIFE) 3104
TEGUISE (LAS PALMAS) 2015
TENERIFE NORTE AEROPUERTO (STA. CRUZ DE TENERIFE) 3318
TENERIFE SUR AEROPUERTO (STA. CRUZ DE TENERIFE) 3318
TERUEL (TERUEL) 3318
TOLEDO (TOLEDO) 3315
TOMELLOSO (CIUDAD REAL) 2915
TORLA (HUESCA) 1729
TORREJÓN DE ARDOZ (MADRID) 3318
TORROX (MALAGA) 2807
TORTOSA (TARRAGONA) 3318
TRUJILLO (CACERES) 3211
TÀRREGA (LLEIDA) 2715
UTIEL (VALENCIA) 3267
VALDEPEÑAS (CIUDAD REAL) 3077
VALDERREDIBLE, POLIENTES (CANTABRIA) 3268
VALENCIA AEROPUERTO (VALENCIA) 3318
VALENCIA DE ALCÁNTARA (CACERES) 3200
VALLADOLID (VALLADOLID) 3318
VALLADOLID AEROPUERTO (VALLADOLID) 3318
VALÈNCIA (VALENCIA) 3318
VALÈNCIA, VIVEROS (VALENCIA) 3297
VANDELLÒS (TARRAGONA) 2735
VEJER DE LA FRONTERA (CADIZ) 3308
VIGO AEROPUERTO (PONTEVEDRA) 3318
VILLAFRANCA DEL CID/VILLAFRANCA (CASTELLON) 2096
VILLANUEVA DE CÓRDOBA (CORDOBA) 3191
VILLARDECIERVOS (ZAMORA) 3258
VILLARRODRIGO (JAEN) 2309
VINARÒS (CASTELLON) 3206
VISO DEL MARQUÉS (CIUDAD REAL) 2895
VITIGUDINO (SALAMANCA) 3262
VITORIA GASTEIZ AEROPUERTO (ARABA/ALAVA) 2650
XINZO DE LIMIA (OURENSE) 2912
XÀTIVA (VALENCIA) 3003
YECLA (MURCIA) 3303
ZAMORA (ZAMORA) 3318
ZARAGOZA AEROPUERTO (ZARAGOZA) 3318
ZARAGOZA, VALDESPARTERA (ZARAGOZA) 2245
ZUMAIA (GIPUZKOA) 3310
ZUMARRAGA (GIPUZKOA) 3261
ÁGUILAS (MURCIA) 3316
ÁVILA (AVILA) 3318
ÉCIJA (SEVILLA) 3219
###Markdown
The next step is to group by the day of the year.
###Code
for name,group_doy in group_st.groupby(by='doy'):
print ('%s (%s) %4.2f %4.2f %4.2f' % (name,len(group_doy),np.mean(group_doy['tmin']),
np.mean(group_doy['tavg']),np.mean(group_doy['tmax'])))
###Output
Y0101 (9) 4.66 10.06 15.43
Y0102 (9) 4.17 10.10 16.04
Y0103 (9) 4.53 10.09 15.63
Y0104 (7) 4.84 10.77 16.69
Y0105 (7) 2.37 9.23 16.08
Y0106 (7) 3.20 9.37 15.56
Y0107 (7) 4.46 9.77 15.07
Y0108 (7) 3.57 9.54 15.50
Y0109 (7) 2.69 9.06 15.37
Y0110 (7) 4.27 10.19 16.07
Y0111 (7) 4.67 10.40 16.10
Y0112 (8) 4.00 9.74 15.50
Y0113 (8) 4.23 9.87 15.53
Y0114 (8) 3.39 9.01 14.67
Y0115 (8) 1.99 8.04 14.09
Y0116 (8) 3.34 8.50 13.67
Y0117 (9) 2.43 8.87 15.31
Y0118 (9) 3.53 8.91 14.31
Y0119 (7) 3.87 8.97 14.05
Y0120 (6) 1.62 7.48 13.33
Y0121 (9) 1.65 8.02 14.35
Y0122 (8) 3.07 9.01 14.97
Y0123 (9) 2.33 8.66 14.93
Y0124 (9) 2.85 9.46 16.05
Y0125 (9) 3.66 9.60 15.52
Y0126 (9) 3.77 9.91 16.11
Y0127 (9) 4.81 9.62 14.44
Y0128 (8) 3.46 8.94 14.40
Y0129 (8) 3.15 8.84 14.50
Y0130 (8) 3.24 9.41 15.60
Y0131 (9) 3.47 9.66 15.84
Y0201 (9) 2.79 9.52 16.23
Y0202 (9) 2.51 8.24 13.98
Y0203 (8) 2.45 8.55 14.62
Y0204 (8) 3.59 9.12 14.66
Y0205 (9) 3.54 9.52 15.50
Y0206 (9) 2.88 9.49 16.13
Y0207 (9) 3.10 9.14 15.20
Y0208 (9) 3.02 8.68 14.33
Y0209 (9) 0.79 7.96 15.11
Y0210 (9) 2.33 8.16 13.97
Y0211 (9) 2.67 8.74 14.78
Y0212 (9) 4.97 9.54 14.08
Y0213 (9) 4.56 9.51 14.42
Y0214 (9) 3.67 9.77 15.83
Y0215 (8) 5.09 10.39 15.68
Y0216 (9) 3.66 9.72 15.78
Y0217 (9) 4.84 10.09 15.32
Y0218 (9) 5.28 10.58 15.93
Y0219 (9) 4.58 10.39 16.17
Y0220 (9) 3.97 10.48 16.98
Y0221 (9) 5.99 11.56 17.14
Y0222 (9) 4.43 10.97 17.51
Y0223 (9) 4.78 11.11 17.46
Y0224 (9) 4.89 11.10 17.31
Y0225 (9) 5.21 11.23 17.27
Y0226 (9) 3.85 11.00 18.12
Y0227 (9) 5.89 11.77 17.63
Y0228 (8) 5.50 10.94 16.40
Y0229 (2) 3.65 11.90 20.20
Y0301 (9) 4.14 10.92 17.69
Y0302 (8) 5.89 11.91 17.93
Y0303 (9) 6.19 11.81 17.40
Y0304 (9) 6.60 11.84 17.10
Y0305 (9) 7.50 12.41 17.33
Y0306 (9) 6.43 12.07 17.68
Y0307 (9) 5.27 11.91 18.56
Y0308 (9) 5.67 12.87 20.08
Y0309 (9) 6.26 12.71 19.17
Y0310 (9) 6.26 12.96 19.68
Y0311 (9) 5.47 12.32 19.18
Y0312 (9) 6.30 12.64 18.96
Y0313 (9) 5.77 11.97 18.21
Y0314 (9) 5.48 11.57 17.68
Y0315 (9) 5.09 12.07 19.08
Y0316 (9) 5.77 12.39 19.01
Y0317 (9) 6.81 13.28 19.70
Y0318 (9) 6.82 13.47 20.09
Y0319 (9) 6.36 13.53 20.71
Y0320 (9) 7.69 13.12 18.51
Y0321 (9) 7.01 12.69 18.38
Y0322 (8) 5.35 11.86 18.36
Y0323 (8) 6.70 12.11 17.52
Y0324 (8) 7.18 13.03 18.86
Y0325 (9) 7.02 12.67 18.33
Y0326 (9) 7.86 13.72 19.60
Y0327 (8) 6.91 13.47 19.97
Y0328 (9) 6.74 14.35 21.99
Y0329 (9) 8.30 14.38 20.44
Y0330 (9) 7.87 14.86 21.84
Y0331 (9) 8.71 15.17 21.62
Y0401 (9) 7.76 15.01 22.32
Y0402 (9) 8.33 15.04 21.80
Y0403 (9) 8.53 14.36 20.18
Y0404 (8) 8.43 14.19 19.90
Y0405 (9) 8.90 15.32 21.73
Y0406 (9) 8.19 15.50 22.86
Y0407 (9) 8.66 15.93 23.23
Y0408 (8) 8.56 16.18 23.84
Y0409 (9) 8.21 15.50 22.84
Y0410 (9) 9.83 16.16 22.50
Y0411 (8) 11.02 17.23 23.48
Y0412 (9) 9.79 15.79 21.80
Y0413 (8) 9.30 16.69 24.04
Y0414 (9) 10.00 17.26 24.51
Y0415 (9) 10.59 17.09 23.58
Y0416 (9) 11.50 17.95 24.39
Y0417 (9) 9.86 17.79 25.72
Y0418 (9) 11.18 18.08 25.01
Y0419 (9) 11.72 17.56 23.37
Y0420 (9) 11.83 17.26 22.64
Y0421 (9) 10.74 16.83 22.91
Y0422 (9) 11.22 17.20 23.21
Y0423 (9) 11.52 17.78 24.03
Y0424 (9) 10.76 17.54 24.29
Y0425 (9) 10.64 17.50 24.36
Y0426 (9) 11.84 18.53 25.24
Y0427 (9) 12.21 18.50 24.80
Y0428 (9) 10.50 16.50 22.50
Y0429 (9) 10.71 16.10 21.51
Y0430 (9) 10.32 16.34 22.33
Y0501 (9) 9.22 16.53 23.82
Y0502 (9) 9.47 17.34 25.21
Y0503 (9) 10.49 18.33 26.16
Y0504 (9) 11.34 18.57 25.82
Y0505 (8) 11.25 18.34 25.39
Y0506 (8) 10.96 18.34 25.69
Y0507 (8) 11.30 18.39 25.47
Y0508 (9) 12.66 19.88 27.09
Y0509 (9) 12.80 20.17 27.56
Y0510 (9) 13.43 20.12 26.82
Y0511 (9) 13.17 20.16 27.16
Y0512 (9) 12.71 20.27 27.83
Y0513 (9) 13.37 20.74 28.12
Y0514 (9) 12.70 20.76 28.79
Y0515 (9) 13.30 20.50 27.71
Y0516 (9) 12.84 20.67 28.52
Y0517 (9) 12.37 20.82 29.29
Y0518 (9) 12.69 19.94 27.17
Y0519 (9) 11.87 19.42 26.96
Y0520 (9) 11.67 18.97 26.27
Y0521 (9) 12.86 19.64 26.43
Y0522 (9) 12.16 20.10 28.06
Y0523 (9) 12.09 20.90 29.71
Y0524 (9) 12.74 20.48 28.23
Y0525 (9) 13.37 20.91 28.42
Y0526 (9) 13.37 20.44 27.53
Y0527 (9) 13.96 21.26 28.53
Y0528 (9) 14.54 20.91 27.32
Y0529 (9) 13.26 20.56 27.87
Y0530 (9) 14.27 21.67 29.04
Y0531 (8) 13.59 22.31 31.06
Y0601 (9) 14.77 23.42 32.09
Y0602 (9) 15.09 23.28 31.49
Y0603 (9) 15.15 22.80 30.41
Y0604 (9) 14.03 22.34 30.66
Y0605 (9) 15.04 23.23 31.42
Y0606 (9) 15.48 22.76 30.06
Y0607 (8) 14.81 22.02 29.25
Y0608 (9) 14.53 21.84 29.13
Y0609 (9) 14.02 21.46 28.91
Y0610 (9) 14.86 22.28 29.72
Y0611 (8) 15.58 22.49 29.43
Y0612 (9) 15.37 23.52 31.70
Y0613 (9) 15.64 24.26 32.88
Y0614 (9) 16.01 24.61 33.20
Y0615 (9) 16.80 25.11 33.41
Y0616 (9) 16.57 24.91 33.23
Y0617 (8) 16.75 25.45 34.16
Y0618 (9) 16.61 24.56 32.51
Y0619 (9) 16.41 24.43 32.49
Y0620 (9) 16.01 25.02 34.04
Y0621 (9) 17.14 25.29 33.47
Y0622 (9) 16.74 25.71 34.69
Y0623 (9) 16.89 26.16 35.39
Y0624 (9) 17.79 26.54 35.33
Y0625 (9) 18.21 26.17 34.14
Y0626 (9) 17.47 26.22 34.94
Y0627 (9) 17.76 26.68 35.61
Y0628 (9) 18.29 26.83 35.39
Y0629 (9) 18.51 26.76 34.96
Y0630 (9) 17.27 25.34 33.41
Y0701 (9) 16.87 25.38 33.86
Y0702 (9) 17.16 25.81 34.44
Y0703 (9) 18.08 26.57 35.08
Y0704 (9) 17.54 26.57 35.56
Y0705 (9) 18.83 27.36 35.90
Y0706 (9) 18.66 26.78 34.93
Y0707 (9) 18.84 27.22 35.54
Y0708 (9) 18.18 27.22 36.28
Y0709 (9) 18.27 28.01 37.76
Y0710 (8) 19.21 28.23 37.26
Y0711 (8) 18.41 27.84 37.29
Y0712 (9) 18.40 27.70 36.98
Y0713 (9) 18.56 27.60 36.63
Y0714 (9) 19.17 27.67 36.16
Y0715 (9) 18.11 27.32 36.54
Y0716 (9) 20.01 28.66 37.35
Y0717 (9) 20.03 28.57 37.14
Y0718 (9) 19.23 28.03 36.81
Y0719 (9) 19.49 28.50 37.49
Y0720 (9) 19.65 27.88 36.08
Y0721 (9) 18.63 27.36 36.08
Y0722 (9) 18.61 27.51 36.42
Y0723 (9) 18.96 27.69 36.42
Y0724 (9) 19.38 27.76 36.13
Y0725 (9) 18.80 27.78 36.76
Y0726 (9) 18.52 27.57 36.63
Y0727 (9) 19.36 27.66 35.90
Y0728 (9) 19.33 27.92 36.50
Y0729 (9) 18.56 27.64 36.73
Y0730 (9) 18.93 27.84 36.77
Y0731 (9) 18.63 27.67 36.71
Y0801 (9) 19.09 28.56 38.02
Y0802 (8) 18.72 27.70 36.69
Y0803 (9) 19.53 28.83 38.17
Y0804 (8) 19.49 29.46 39.40
Y0805 (9) 20.58 29.46 38.33
Y0806 (9) 20.28 28.98 37.68
Y0807 (9) 21.36 29.39 37.46
Y0808 (9) 20.96 28.74 36.53
Y0809 (9) 20.67 28.79 36.90
Y0810 (9) 20.28 29.00 37.69
Y0811 (9) 21.10 29.44 37.79
Y0812 (9) 20.50 28.80 37.10
Y0813 (9) 19.61 27.46 35.29
Y0814 (9) 18.43 26.73 35.02
Y0815 (9) 19.57 27.64 35.72
Y0816 (9) 19.20 27.61 36.02
Y0817 (9) 19.83 27.98 36.09
Y0818 (9) 20.59 28.58 36.60
Y0819 (9) 20.91 29.16 37.41
Y0820 (8) 21.00 28.90 36.83
Y0821 (8) 20.59 28.85 37.13
Y0822 (9) 20.51 28.31 36.09
Y0823 (9) 19.30 27.78 36.23
Y0824 (9) 18.86 27.18 35.53
Y0825 (8) 18.99 27.55 36.15
Y0826 (8) 19.80 27.78 35.76
Y0827 (9) 19.21 27.96 36.67
Y0828 (9) 19.76 27.90 36.02
Y0829 (9) 19.26 26.85 34.41
Y0830 (9) 18.82 26.48 34.12
Y0831 (9) 18.62 26.19 33.74
Y0901 (9) 19.77 26.84 33.91
Y0902 (9) 19.32 26.32 33.30
Y0903 (9) 18.33 25.53 32.76
Y0904 (9) 17.94 25.59 33.24
Y0905 (9) 17.60 25.58 33.57
Y0906 (9) 18.04 26.17 34.33
Y0907 (9) 19.38 26.22 33.02
Y0908 (9) 17.74 24.80 31.86
Y0909 (9) 16.54 24.35 32.11
Y0910 (9) 16.19 24.26 32.31
Y0911 (9) 16.88 25.08 33.30
Y0912 (9) 17.67 26.01 34.36
Y0913 (9) 18.07 25.37 32.66
Y0914 (8) 17.43 24.48 31.56
Y0915 (8) 16.81 23.84 30.84
Y0916 (7) 15.94 22.91 29.93
Y0917 (7) 15.99 22.30 28.63
Y0918 (9) 15.10 22.68 30.19
Y0919 (8) 15.81 23.35 30.90
Y0920 (8) 17.20 24.49 31.80
Y0921 (8) 16.86 24.41 31.95
Y0922 (9) 16.86 24.28 31.72
Y0923 (9) 15.78 22.98 30.21
Y0924 (9) 15.47 22.81 30.13
Y0925 (9) 15.18 22.93 30.70
Y0926 (9) 15.79 22.79 29.80
Y0927 (9) 16.31 22.63 28.96
Y0928 (9) 16.78 22.30 27.84
Y0929 (9) 16.18 22.33 28.50
Y0930 (9) 15.30 22.29 29.26
Y1001 (8) 15.97 23.16 30.35
Y1002 (9) 15.00 22.86 30.71
Y1003 (9) 16.21 22.96 29.74
Y1004 (9) 16.34 23.25 30.17
Y1005 (9) 14.94 22.04 29.21
Y1006 (9) 14.62 21.95 29.29
Y1007 (9) 14.70 21.91 29.11
Y1008 (9) 14.53 22.14 29.70
Y1009 (9) 14.10 21.20 28.30
Y1010 (9) 14.27 20.77 27.24
Y1011 (9) 14.86 21.21 27.60
Y1012 (9) 15.02 20.56 26.09
Y1013 (9) 13.46 20.20 26.94
Y1014 (9) 13.01 19.40 25.78
Y1015 (9) 12.48 19.01 25.54
Y1016 (9) 13.23 19.92 26.60
Y1017 (9) 12.94 19.93 26.93
Y1018 (9) 12.91 18.88 24.84
Y1019 (9) 12.84 18.74 24.59
Y1020 (9) 12.57 18.99 25.41
Y1021 (9) 12.14 18.70 25.29
Y1022 (9) 13.22 18.80 24.40
Y1023 (9) 13.42 19.09 24.79
Y1024 (9) 13.49 18.66 23.82
Y1025 (9) 13.08 18.86 24.62
Y1026 (8) 13.64 18.43 23.24
Y1027 (9) 11.94 18.20 24.46
Y1028 (8) 10.72 17.19 23.65
Y1029 (9) 9.90 16.57 23.21
Y1030 (9) 10.23 16.14 22.04
Y1031 (9) 9.72 15.83 21.94
Y1101 (9) 10.41 16.49 22.51
Y1102 (9) 10.59 16.32 22.04
Y1103 (8) 11.01 16.44 21.87
Y1104 (9) 11.77 16.06 20.36
Y1105 (9) 10.56 15.14 19.71
Y1106 (9) 8.53 13.91 19.30
Y1107 (9) 9.17 14.27 19.37
Y1108 (9) 8.61 13.78 18.96
Y1109 (9) 8.21 13.33 18.44
Y1110 (9) 6.79 13.09 19.40
Y1111 (9) 7.71 14.29 20.88
Y1112 (9) 8.44 14.74 21.09
Y1113 (9) 8.29 14.94 21.59
Y1114 (9) 9.18 14.63 20.06
Y1115 (9) 7.76 13.60 19.41
Y1116 (8) 6.58 13.06 19.56
Y1117 (8) 6.41 12.86 19.30
Y1118 (8) 7.31 13.16 18.96
Y1119 (9) 7.30 13.27 19.28
Y1120 (9) 7.85 13.31 18.77
Y1121 (9) 8.53 13.43 18.31
Y1122 (8) 7.51 12.46 17.46
Y1123 (8) 6.83 12.35 17.91
Y1124 (8) 7.02 12.71 18.40
Y1125 (8) 8.01 13.30 18.57
Y1126 (8) 7.89 12.77 17.63
Y1127 (7) 6.21 11.33 16.40
Y1128 (7) 5.06 10.86 16.66
Y1129 (8) 4.07 10.14 16.17
Y1130 (9) 4.67 10.35 16.02
Y1201 (10) 4.67 10.64 16.60
Y1202 (10) 5.60 10.89 16.19
Y1203 (9) 4.51 10.50 16.49
Y1204 (10) 4.72 10.95 17.14
Y1205 (10) 4.70 10.86 17.02
Y1206 (9) 4.87 11.10 17.31
Y1207 (10) 5.62 11.27 16.96
Y1208 (10) 5.86 11.81 17.77
Y1209 (10) 5.52 11.54 17.57
Y1210 (10) 6.58 12.36 18.10
Y1211 (9) 5.20 11.32 17.44
Y1212 (10) 4.70 10.26 15.82
Y1213 (9) 6.62 11.42 16.21
Y1214 (9) 7.34 11.64 15.93
Y1215 (10) 6.19 10.74 15.31
Y1216 (9) 5.12 9.53 13.99
Y1217 (10) 5.47 10.14 14.83
Y1218 (9) 4.54 10.02 15.51
Y1219 (10) 4.71 10.29 15.86
Y1220 (10) 3.71 9.46 15.20
Y1221 (9) 4.24 10.82 17.40
Y1222 (9) 4.44 11.11 17.81
Y1223 (9) 3.60 10.70 17.77
Y1224 (9) 3.28 10.08 16.85
Y1225 (9) 4.01 9.99 15.96
Y1226 (9) 3.56 9.72 15.88
Y1227 (9) 3.45 9.71 15.99
Y1228 (9) 4.60 10.64 16.69
Y1229 (10) 4.15 10.20 16.28
Y1230 (10) 3.31 9.69 16.05
Y1231 (10) 4.04 10.10 16.15
###Markdown
Finally, all together
###Code
line_of_result = []
for station,group_st in station_group:
region = group_st.iloc[0].region
for doy,group_doy in group_st.groupby(by='doy'):
ndata = len(group_doy)
line_of_result.append([station,region,ndata,doy,np.mean(group_doy['tmin']),
np.mean(group_doy['tavg']),np.mean(group_doy['tmax'])])
CLIMATOLOGY = pd.DataFrame.from_records(line_of_result)
CLIMATOLOGY.head()
###Output
_____no_output_____
###Markdown
Let's add the correct columns
###Code
CLIMATOLOGY.columns = ['station','region','ndata','doy','tmin','tavg','tmax']
CLIMATOLOGY.head()
###Output
_____no_output_____
###Markdown
Interactive mode Let's have a quick interactive look
###Code
import ipywidgets as widgets
from ipywidgets import interact, interact_manual,interactive, interactive_output
from ipywidgets import fixed, FloatSlider, Dropdown, HBox, Label, VBox, Layout
# Function to select region and station (based on the selected region...)
# Define the region widget (initialised at region = A CORUÑA)
region_list = sorted(list(set(CLIMATOLOGY['region'])))
region_widget = Dropdown(options=region_list,
value='A CORUÑA',
description='Region:'
)
# Define the station widget (initialised at region = 'A CORUÑA' & statio = 'A CORUÑA')
station_widget = Dropdown(options=sorted(list(set(CLIMATOLOGY[CLIMATOLOGY['region']=='A CORUÑA']['station']))),
value='A CORUÑA',
description='Station:'
)
# The upodate station list
def on_update_brand_widget(*args):
station_widget.options = sorted(list(set(CLIMATOLOGY[CLIMATOLOGY['region']==region_widget.value]['station'])))
# The observe method to link station to region
region_widget.observe(on_update_brand_widget, 'value')
# Function
def plot_region_station(df,region,station,column):
fig = plt.figure(figsize=(20,5))
plt.grid()
df_region = df.loc[df['region'] == region]
DataFrame = df_region.loc[df_region['station'] == station]
ndata = DataFrame['ndata'].values[0]
plt.title('%s at %s (%s) ndata=%s' % (column,station,region,ndata),fontsize=10)
display(plt.plot(DataFrame[column]))
Y = interactive(plot_region_station,df=fixed(CLIMATOLOGY),region=region_widget, station=station_widget,
column=['tmin','tavg','tmax']);
controls_Left = VBox([Y.children[0],Y.children[1]])
controls_Right = VBox([Y.children[2]])
controls = HBox([controls_Left,controls_Right], layout = Layout(flex_flow='row wrap'))
output = Y.children[-1]
display(VBox([controls, output]))
###Output
_____no_output_____
###Markdown
This climatology doesn't look like a smooth climatology... 10 years may not be enough. In order to generate a typical climatology, we need to compute the mean over a window.
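For reference, the same centered moving average can also be written with pandas' `rolling`, padding the series at both ends so the window wraps around the year boundary. This is only a sketch, and it assumes the per-station slice of `CLIMATOLOGY` is sorted by `doy`:
###Code
def smooth_climatology(df_station, column='tavg', longwindow=21):
    # Centered rolling mean with circular padding at the year boundary (sketch only).
    half = (longwindow - 1) // 2
    values = df_station.sort_values('doy')[column]
    padded = pd.concat([values.tail(half), values, values.head(half)])
    smoothed = padded.rolling(window=longwindow, center=True).mean()
    return smoothed.iloc[half:len(padded) - half].values
###Output
_____no_output_____
###Markdown
The interactive version below lets you explore the effect of the window length: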
###Code
window_widgets = widgets.IntSlider(value=5,min=1,max=24,step=2,description='Window:',disabled=False,
continuous_update=False,orientation='horizontal',readout=True,readout_format='d')
doy_list = sorted(list(set(CLIMATOLOGY['doy'])))
# Function
def plot_window(df,region,station,column,longwindow):
window = (longwindow-1)//2
df_region = df.loc[df['region'] == region]
df_station = df_region.loc[df_region['station'] == station]
df_station.reset_index(inplace=True)
line_of_result = []
for doy in doy_list:
doyindex = df_station[df_station['doy'] == doy].index[0]
if (doyindex-window) < 0:
df_concat = pd.concat([df_station.iloc[doyindex-window:],df_station.iloc[0:doyindex],
df_station.iloc[doyindex:1+doyindex+window]])
elif (doyindex+window) > 365:
df_concat = pd.concat([df_station.iloc[doyindex-window:],
df_station.iloc[:window-(365-doyindex)]])
else:
df_concat = pd.concat([df_station.iloc[doyindex-window:doyindex],
df_station.iloc[doyindex:1+doyindex+window]])
line_of_result.append([doy,np.mean(df_concat[column])])
DataFrame = pd.DataFrame.from_records(line_of_result)
fig = plt.figure(figsize=(20,5))
plt.grid()
plt.title('%s at %s (%s) - window : %s days' % (column,station,region,longwindow),fontsize=10)
display(plt.plot(DataFrame[1]))
Z = interactive(plot_window,df=fixed(CLIMATOLOGY),region=region_widget, station=station_widget,
column=['tmin','tavg','tmax'], longwindow=window_widgets);
controls_Left = VBox([Z.children[0],Z.children[1]])
controls_Right = VBox([Z.children[2],Z.children[3]])
controls = HBox([controls_Left,controls_Right], layout = Layout(flex_flow='row wrap'))
output = Z.children[-1]
display(VBox([controls, output]))
###Output
_____no_output_____
###Markdown
Climatology results Taking the mean value over a window produces a more representative climatology. Given a selected window length in days, let's recalculate the climatology for all stations.
###Code
# I've chosen a window covering 21 days
longwindow = 21
station_group = CLIMATOLOGY.groupby(by='station')
window = (longwindow-1)//2
line_of_result = []
for station,df_station in station_group:
df_station.reset_index(inplace=True)
region = df_station.iloc[0].region
for doy in doy_list:
try:
doyindex = df_station[df_station['doy'] == doy].index[0]
if (doyindex-window) < 0:
df_concat = pd.concat([df_station.iloc[doyindex-window:],df_station.iloc[0:doyindex],
df_station.iloc[doyindex:1+doyindex+window]])
elif (doyindex+window) > 365:
df_concat = pd.concat([df_station.iloc[doyindex-window:],
df_station.iloc[:window-(365-doyindex)]])
else:
df_concat = pd.concat([df_station.iloc[doyindex-window:doyindex],
df_station.iloc[doyindex:1+doyindex+window]])
line_of_result.append([station,region,doy,np.mean(df_concat['tmin']),
np.mean(df_concat['tavg']),np.mean(df_concat['tmax'])])
except:
break
CLIMATOLOGY_WINDOW = pd.DataFrame.from_records(line_of_result)
CLIMATOLOGY_WINDOW.columns = ['station','region','doy','tmin','tavg','tmax']
###Output
_____no_output_____
###Markdown
The daily temperature climatology is now ready! Let's export the result to a CSV file.
###Code
CLIMATOLOGY_WINDOW.to_csv('aemet_daily_temperature_climatology_2009-2018_window-21.csv.bz2',
compression='bz2',float_format='%1.1f',index=False)
###Output
_____no_output_____ |
module-3/Evaluation-Recall-and-Precision/your-code/.ipynb_checkpoints/main-checkpoint.ipynb | ###Markdown
Evaluation: Precision & Recall Using the evaluation metrics we have learned, we are going to compare how well some different types of classifiers perform on different evaluation metrics. We are going to use a dataset of handwritten digits (MNIST), which we can import from sklearn. Run the code below to do so.
###Code
# !pip install --upgrade tensorflow   # only needed for the optional Keras loader below
import numpy as np

# fetch_mldata was removed from recent scikit-learn releases; fetch_openml is its replacement.
from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X, y = mnist['data'], mnist['target']

# Alternative loader via Keras (returns the data already split into train/test):
# from tensorflow.keras.datasets import mnist as keras_mnist
# (X_train, y_train), (X_test, y_test) = keras_mnist.load_data()
###Output
_____no_output_____
###Markdown
Now take a look at the shapes of the X and y matricies
###Code
import tensorflow as tf
!pip install keras
###Output
Collecting keras
Downloading https://files.pythonhosted.org/packages/ad/fd/6bfe87920d7f4fd475acd28500a42482b6b84479832bdc0fe9e589a60ceb/Keras-2.3.1-py2.py3-none-any.whl (377kB)
Requirement already satisfied: pyyaml in c:\users\denis\anaconda3\lib\site-packages (from keras) (5.1.2)
Requirement already satisfied: numpy>=1.9.1 in c:\users\denis\anaconda3\lib\site-packages (from keras) (1.16.5)
Requirement already satisfied: h5py in c:\users\denis\anaconda3\lib\site-packages (from keras) (2.9.0)
Requirement already satisfied: six>=1.9.0 in c:\users\denis\anaconda3\lib\site-packages (from keras) (1.12.0)
Requirement already satisfied: scipy>=0.14 in c:\users\denis\anaconda3\lib\site-packages (from keras) (1.4.1)
Requirement already satisfied: keras-preprocessing>=1.0.5 in c:\users\denis\anaconda3\lib\site-packages (from keras) (1.1.0)
Requirement already satisfied: keras-applications>=1.0.6 in c:\users\denis\anaconda3\lib\site-packages (from keras) (1.0.8)
Installing collected packages: keras
Successfully installed keras-2.3.1
Note: you may need to restart the kernel to use updated packages.
###Markdown
Now, let's pick one entry and see what number is written. Use indexing to pick the 36000th digit. You can use the .reshape(28,28) function and the plt.imshow() function with the parameters cmap = matplotlib.cm.binary, interpolation="nearest" to make a plot of the number. Be sure to import matplotlib! Use indexing to see if what the plot shows matches the outcome of the 36000th index. Now let's break into a train/test split to run a classification. Instead of using sklearn, use indexing to select the first 60000 entries for training, and the rest for testing. We are going to make a two-class classifier, so let's restrict to just one number, for example 5s. Do this by defining new y training and y testing sets for just the number 5. Let's train a logistic regression to predict if a number is a 5 or not (remember to use the 'just 5s' y training set!). Does the classifier correctly predict the 36000th digit we picked before? To make some comparisons, we are going to make a very dumb classifier that never predicts 5s. Build the classifier with the code below, and call it using: never_5_clf = Never5Classifier(). A hedged sketch of these steps is appended after the class definition in the cell below.
###Code
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
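# --- Hedged sketch of the exercise steps above (one possible solution, assuming X and y
# --- from the loading cell; the index 36000 and the 60000/10000 split come from the text).
import matplotlib
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

some_digit = X[36000]
plt.imshow(some_digit.reshape(28, 28), cmap=matplotlib.cm.binary, interpolation="nearest")
plt.show()
print("label of entry 36000:", y[36000])

X_train, X_test = X[:60000], X[60000:]
y_train, y_test = y[:60000], y[60000:]
y_train_5, y_test_5 = (y_train == 5), (y_test == 5)

log_clf = LogisticRegression(max_iter=1000)
log_clf.fit(X_train, y_train_5)
print("prediction for entry 36000:", log_clf.predict([some_digit]))

for name, clf in [("logistic", log_clf), ("never-5", never_5_clf)]:
    pred = clf.predict(X_test).ravel()
    print(name, "precision:", precision_score(y_test_5, pred),
          "recall:", recall_score(y_test_5, pred))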
###Output
_____no_output_____ |
notebooks/examples/scatter_matrix.ipynb | ###Markdown
Scatter Matrix--------------An example of using a RepeatChart to construct a multi-panel scatter plotwith linked panning and zooming.
###Code
# category: interactive
import altair as alt
alt.data_transformers.enable('json')
from vega_datasets import data
alt.Chart(data.cars.url).mark_circle().encode(
alt.X(alt.repeat("column"), type='quantitative'),
alt.Y(alt.repeat("row"), type='quantitative'),
color='Origin:N'
).properties(
width=250,
height=250
).repeat(
row=['Horsepower', 'Acceleration', 'Miles_per_Gallon'],
column=['Miles_per_Gallon', 'Acceleration', 'Horsepower']
).interactive()
###Output
_____no_output_____ |
rl_my_way.ipynb | ###Markdown
###Code
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
# Library imports
import torch
import torch.nn as nn
import torch.optim as optim
import string
torch.set_default_tensor_type('torch.cuda.FloatTensor')
class Net(nn.Module):
    def __init__(self, input_size, hidden_size, number_of_classes):
        super(Net, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.number_of_classes = number_of_classes
        # layers matching what forward() below actually uses
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, hidden_size)
        self.relu2 = nn.ReLU()
        self.fc3 = nn.Linear(hidden_size, number_of_classes)
        for layer in (self.fc1, self.fc2, self.fc3):
            layer.weight.data.normal_(0, 0.1)  # initialization
        # assumed loss and optimizer: forward() references self.criterion and
        # self.optimizer, but the original notebook never defined them
        self.criterion = nn.MSELoss()
        self.optimizer = optim.Adam(self.parameters(), lr=0.01)
def forward(self, x, y):
loss = 10
while loss > .1:
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
loss = self.criterion(out, y)
self.optimizer.zero_grad()
loss.backward(retain_graph=True)
self.optimizer.step()
return (out, loss)
def check(self, x):
out = self.fc1(x)
out = self.relu(out)
out = self.fc2(out)
out = self.relu2(out)
out = self.fc3(out)
return(out)
def save(self, f):
f = open('f', 'wb')
torch.save(self, f)
def load(f):
return torch.load(f)
class Environments:
# This is a wrapper class to hold all the envirvonments that have been created
# as well as different time steps of those past, present, future
def __init__(self):
self.holder = {
"past": [],
"present": [],
"future": []
}
def add_environment(self, time_of_env, env):
self.holder[time_of_env].append(env)
def remove_environment(self, time_of_env, selected_env):
selected_index = None
for index, env in enumerate(self.holder[time_of_env]):
            if env == selected_env:
                selected_index = index
                break
        if selected_index is not None:
del self.holder[time_of_env][selected_index]
return
def list_environments(self, time_of_env=None):
if time_of_env:
return self.holder[time_of_env]
else:
return self.holder
def search_for(self, item):
for env in self.holder['present']:
entries = env.describe()
for entry in entries:
if entry[0] == item:
return entry
for env in self.holder['past']:
entries = env.describe()
for entry in entries:
if entry[0] == item:
return entry
for env in self.holder['future']:
entries = env.describe()
for entry in entries:
if entry[0] == item:
return entry
        self.holder['present'].append(item)
class Environment:
# The environment is consists of nouns
def __init__(self):
# an entry is knowledge, feeling, impression, link
self.entries = {}
def add(self, entry, known=1, feeling=0, impression=0, link=None):
self.entries[entry] = [known, feeling, impression, link]
def update(self, entry, known=1, feeling=0, impression=0, link=None):
if self.entries[entry]:
self.entries[entry][0] = known
self.entries[entry][1] = feeling
self.entries[entry][2] = impression
self.entries[entry][3].append(link)
else:
if link:
link = [link]
else:
link = []
self.add(entry, known, feeling, impression, link)
def describe(self):
master_array = []
for k,v in self.entries.items():
temp_array = []
temp_array.append(k)
temp_array.append(v)
master_array.append(temp_array)
return master_array
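# Tiny usage sketch of the two classes above (illustrative only, kept commented out like
# the Language examples further down):
# envs = Environments()
# present = Environment()
# present.add("coffee", known=1, feeling=1)
# envs.add_environment("present", present)
# print(envs.search_for("coffee"))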
class Agent:
# An agent makes decisions based on it's environments
def __init__(self):
# List of actions the agent can take
self.actions = []
# Wait for instruction
        self.learn()
    def learn(self, trainer=None, environment=None):
pass
def evaluate_environment(self, environment):
# look at the environment
# return what you see
return current_observed_environment
def demonstrate(self, enviroment):
# Do learned procedures
action = None
        response = evaluate(action)
def try_again(self, environment, response_from_evaluation):
# Fix what you did wrong
# evaluate again
action = None
response = evaluate(action)
while response is bad:
try_again(response)
else:
# Remember what you did correct
return
class Trainer:
def __init__(self):
pass
def teach(self, environment, agent):
environment = None
agent_needs_to_do = None
# When the environment is in this state -> Do this action
action = environment + agent_evaluation
return action
def evaluate(self, environment, agent):
correct_action = None
agent_action = None
score = correct_action - agent_action
return score
class Language:
def __init__(self):
self.word_to_numbers = {"SOS": 0, "EOS": 1, "SPACE": 2, "UNK": 3}
self.numbers_to_word = {0: "SOS", 1: "EOS", 2: "SPACE", 3: "UNK"}
self.maximum_sentence_length = 15
def pad_sentence_length(self, sentence_in_numbers_array):
# Add a space symbol between words we need to track the index too
temp_array = []
for index, word in enumerate(sentence_in_numbers_array):
# Make sure it is not the first word
if (index == 0):
# Insert the SOS token in the start of the array
temp_array.append(0)
temp_array.append(word)
elif (index == len(sentence_in_numbers_array) - 1):
# Add the EOS token to the end of the sentence
temp_array.append(word)
temp_array.append(1)
else:
temp_array.append(2)
temp_array.append(word)
# Continue to add EOS tokens until the sentence is the minimum length
while len(temp_array) < self.maximum_sentence_length:
temp_array.append(1)
return temp_array
def add_word(self, word):
if word not in self.word_to_numbers.keys():
self.word_to_numbers[word] = len(self.word_to_numbers)
self.numbers_to_word[len(self.numbers_to_word)] = word
def clean_sentence(self, sentence_in_english):
cleaned_sentence_array = []
for word in sentence_in_english.split(" "):
word = word.translate(str.maketrans('', '', string.punctuation))
word = word.lower()
word = word.strip()
cleaned_sentence_array.append(word)
return cleaned_sentence_array
def add_sentence(self, sentence_in_english):
sentence_in_english = self.clean_sentence(sentence_in_english)
for word in sentence_in_english:
self.add_word(word)
def get_numbers_from_sentence(self, sentence_in_english, padded=False):
sentence_in_english = self.clean_sentence(sentence_in_english)
temp_array = []
for word in sentence_in_english:
if word in self.word_to_numbers.keys():
temp_array.append(self.word_to_numbers[word])
else:
temp_array.append(3)
if padded:
return self.pad_sentence_length(temp_array)
else:
return temp_array
def get_sentence_from_numbers(self, sentence_in_unrounded_nums):
temp_array = []
for number in sentence_in_unrounded_nums:
# Lets round the number
number = round(number.item())
# Lets check if the number is in our dictionary
if number in self.numbers_to_word.keys():
temp_array.append(self.numbers_to_word[number])
else:
# Handle an unknown
temp_array.append("UNK")
return " ".join(temp_array)
def get_word_from_number(self, number):
if number in self.numbers_to_word.keys():
return self.numbers_to_word[number]
else:
return ("Unknown word")
# english = Language()
# english.add_sentence("Hello my name is sam.")
# print(english.numbers_to_word)
# english.get_numbers_from_sentence("hello sam")
# network = Net(1,1,1)
# network.save('./f.pkl')
# network = Net.load('./f.pkl')
###Output
/usr/local/lib/python3.6/dist-packages/torch/serialization.py:241: UserWarning: Couldn't retrieve source code for container of type Net. It won't be checked for correctness upon loading.
"type " + obj.__name__ + ". It won't be checked "
|
Invert_Stokeslet_FFT.ipynb | ###Markdown
example: comparison with standard Stokeslet
###Code
eps = 0.00001 #add epsilon^2 to denominator of integrand to regularize at q = 0
dq = 0.07 #spacing in Fourier space
Q = 10 #max value of q
F = np.array([0, 0, 1]) #external force
#values of viscosity coefficients
etaR1, etaR2, etaRo = 0, 0, 0
eta_eQ1, eta_oQ1 = 0, 0
eta_eQ2, eta_oQ2 = 0, 0
eta_eQ3, eta_oQ3 = 0, 0
eta_eA, eta_oA = 0, 0
eta_eS, eta_oS = 0, 0
xi = 0
eta_o1, eta_o2 = 0, 0
mu = 1
visc_coeff_list = [etaR1, etaR2, etaRo, eta_eQ1, eta_oQ1, eta_eQ2, eta_oQ2, eta_eQ3, eta_oQ3, eta_eA,
eta_oA, eta_eS, eta_oS, xi, eta_o1, eta_o2, mu]
#call inverse FFT function
fft_xs, v = v_fourier_ifft(Q, dq, eps, visc_coeff_list, F)
#compute velocities in cylindrical coordinates from v (Cartesian coordinates)
x, y, z = np.meshgrid(fft_xs, fft_xs, fft_xs)
vr = (v[0]*x + v[1]*y) / np.sqrt(x**2+ y**2)
vphi = (-v[0]*y + v[1]*x) / np.sqrt(x**2 + y**2)
#analytical expression for Stokeslet with shear viscosity mu and force in -z
Fz = -F[2]
x, y, z = np.meshgrid(fft_xs, fft_xs, fft_xs)
vx_theory = Fz/(8*np.pi*mu) * z*x / (x**2 + y**2 + z**2)**(3/2)
vy_theory = Fz/(8*np.pi*mu) * z*y / (x**2 + y**2 + z**2)**(3/2)
vz_theory = Fz/(8*np.pi*mu) * (x**2 + y**2 + 2*z**2) / (x**2 + y**2 + z**2)**(3/2)
vr_theory = (vx_theory*x + vy_theory*y) / np.sqrt(x**2+ y**2)
vphi_theory = (-vx_theory*y + vy_theory*x) / np.sqrt(x**2 + y**2)
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 4))
ind1 = 145
ind2 = 145
plt.subplot(121)
plt.plot(vr_theory[ind1, ind2, :] - np.nanmean(vr_theory), fft_xs, color='crimson', label=r'\textbf{theory}')
plt.plot(vr[ind1, ind2, :] - np.nanmean(vr), fft_xs, marker='.', linestyle='None', color='royalblue', label=r'\textbf{numerics}')
plt.ylim(-10, 10)
plt.ylabel(r'\textbf{$z$}',fontsize=16)
plt.xlabel(r'\textbf{$v_r$}',fontsize=16)
plt.xticks([-0.01, 0, 0.01])
plt.legend()
plt.subplot(122)
plt.plot(vz_theory[ind1, ind2, :] - np.nanmean(vz_theory), fft_xs, color='crimson', label=r'\textbf{theory}')
plt.plot(v[2][ind1, ind2, :] - np.nanmean(v[2]), fft_xs, marker='.', linestyle='None', color='royalblue', label=r'\textbf{numerics}')
plt.ylim(-10, 10)
#plt.ylabel(r'\textbf{$z$}',fontsize=16)
plt.yticks([])
plt.xlabel(r'\textbf{$v_z$}',fontsize=16)
plt.legend()
plt.tight_layout()
plt.savefig('/Users/talikhain/Documents/UChicago/Soft Matter/Vitelli/Odd viscosity in three dimensional flows/Figures2/num_validation.png', dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
example: Stokeslet with perturbative odd shear viscosities, $\eta_1^o = -2\eta_2^o$
###Code
eps = 0.00001 #add epsilon^2 to denominator of integrand to regularize at q = 0
dq = 0.07 #spacing in Fourier space
Q = 10 #max value of q
F = np.array([0, 0, 1]) #external force
#values of viscosity coefficients
etaR1, etaR2, etaRo = 0, 0, 0
eta_eQ1, eta_oQ1 = 0, 0
eta_eQ2, eta_oQ2 = 0, 0
eta_eQ3, eta_oQ3 = 0, 0
eta_eA, eta_oA = 0, 0
eta_eS, eta_oS = 0, 0
xi = 0
eta_o1, eta_o2 = -0.2, 0.1
mu = 1
visc_coeff_list = [etaR1, etaR2, etaRo, eta_eQ1, eta_oQ1, eta_eQ2, eta_oQ2, eta_eQ3, eta_oQ3, eta_eA,
eta_oA, eta_eS, eta_oS, xi, eta_o1, eta_o2, mu]
fft_xs, v = v_fourier_ifft(Q, dq, eps, visc_coeff_list, F)
x, y, z = np.meshgrid(fft_xs, fft_xs, fft_xs)
vr = (v[0]*x + v[1]*y) / np.sqrt(x**2+ y**2)
vphi = (-v[0]*y + v[1]*x) / np.sqrt(x**2 + y**2)
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(5, 7))
plt.subplot(111)
ind = 145
cont = plt.contourf(fft_xs, fft_xs, np.transpose(vphi[ind,:,:]), 50, cmap=newcmp)#, vmin = -1, vmax = 1)
circle = mpatches.Circle([0, 0], radius = 1, color = 'white')
axes.add_patch(circle)
plt.xlim(0, 10)
plt.ylim(-10, 10)
plt.xticks([0, 3, 6, 9], size=12)
plt.yticks([-9, -6, -3, 0, 3, 6, 9], size=12)
#plt.text(0.5, 8.5, coeff_label_list[count], fontsize=16)
cb = plt.colorbar(cont)#, ticks = np.arange(-2, 2.5, 0.5))
cb.set_label(label=r'\textbf{$v_{\phi}$}', size=16)
cb.ax.tick_params(labelsize=12)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
example: theory and numerics comparison for Stokeslet with odd viscosity
###Code
eps = 0.00001 #add epsilon^2 to denominator of integrand to regularize at q = 0
dq = 0.07 #spacing in Fourier space
Q = 10 #max value of q
F = np.array([0, 0, 1]) #external force
#values of viscosity coefficients
etaR1, etaR2, etaRo = 0, 0, 0
eta_eQ1, eta_oQ1 = 0, 0
eta_eQ2, eta_oQ2 = 0, 0
eta_eQ3, eta_oQ3 = 0, 0
eta_eA, eta_oA = 0, 0
eta_eS, eta_oS = 0, 0
xi = 0
eta_o1, eta_o2 = 0, 0
mu = 1
vphi_list = []
for i in range(3):
eta_o_list = [0, 0, 0]
eta_o_list[i] = 0.01
eta_o1, eta_o2, etaRo = eta_o_list[0], eta_o_list[1], eta_o_list[2]
visc_coeff_list = [etaR1, etaR2, etaRo, eta_eQ1, eta_oQ1, eta_eQ2, eta_oQ2, eta_eQ3, eta_oQ3, eta_eA,
eta_oA, eta_eS, eta_oS, xi, eta_o1, eta_o2, mu]
fft_xs, v = v_fourier_ifft(Q, dq, eps, visc_coeff_list, F)
x, y, z = np.meshgrid(fft_xs, fft_xs, fft_xs)
vphi = (-v[0]*y + v[1]*x) / np.sqrt(x**2 + y**2)
vphi_list.append(vphi)
#analytical expression for Stokeslet with shear viscosity mu, force in -z,
#and perturbative odd viscosity for eta_1^o, eta_2^o, eta_R^o
Fz = -F[2]
eps = 0.01
x, y, z = np.meshgrid(fft_xs, fft_xs, fft_xs)
vphi_theory_eta1 = eps*Fz/(32*mu*np.pi) * np.sqrt(x**2 + y**2)*z * (x**2 + y**2 + 4*z**2) / (x**2 + y**2 + z**2)**(5/2)
vphi_theory_eta2 = -eps*Fz/(16*mu*np.pi) * np.sqrt(x**2 + y**2)*z * (x**2 + y**2 - 2*z**2) / (x**2 + y**2 + z**2)**(5/2)
vphi_theory_etaR = -eps*Fz/(8*mu*np.pi) * np.sqrt(x**2 + y**2)*z / (x**2 + y**2 + z**2)**(3/2)
vphi_theory_list = [vphi_theory_eta1, vphi_theory_eta2, vphi_theory_etaR]
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(15, 5))
ind1 = 145
ind2 = 145
title_list = [r'\textbf{$\eta_1^o/\mu = 0.01$}', r'\textbf{$\eta_2^o/\mu = 0.01$}', r'\textbf{$\eta_R^o/\mu = 0.01$}']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.plot((vphi_theory_list[i][ind1, ind2, :] - np.nanmean(vphi_theory_list[i]))*1e3, fft_xs, color='crimson', label=r'\textbf{theory}')
plt.plot((vphi_list[i][ind1, ind2, :] - np.nanmean(vphi_list[i]))*1e3, fft_xs, marker='.', linestyle='None', color='royalblue', label=r'\textbf{numerics}')
plt.ylim(-10, 10)
if i == 0:
plt.ylabel(r'\textbf{$z$}',fontsize=22)
plt.xlabel(r'\textbf{$v_{\phi} \cdot 10^3$}',fontsize=22)
plt.title(title_list[i], fontsize=22)
plt.xticks(size=15)
plt.yticks(size=15)
if i > 0:
plt.yticks([])
plt.legend(fontsize=15)
plt.tight_layout()
plt.savefig('/Users/talikhain/Documents/UChicago/Soft Matter/Vitelli/Odd viscosity in three dimensional flows/Figures2/num_validation_odd.png', dpi=300)
plt.show()
###Output
findfont: Font family ['serif'] not found. Falling back to DejaVu Sans.
|
notebooks/1_sokoly35_creating_dataset.ipynb | ###Markdown
Setup
###Code
from dotenv import load_dotenv
import os
import spotipy
from lyricsgenius import Genius
from spotipy.oauth2 import SpotifyClientCredentials
import sys
# Setting the working directory
sys.path.insert(0,'..')
from src.data.make_dataset import create_dataset
# Loading credentials to Spotify and Genius API from .env file
dotenv_path = os.path.join('..', '.env')
load_dotenv(dotenv_path)
SPOTIFY_API_CLIENT_ID = os.getenv('SPOTIFY_API_CLIENT_ID')
SPOTIFY_API_CLIENT_SECRET = os.getenv('SPOTIFY_API_CLIENT_SECRET')
GENIUS_ACCESS_TOKEN = os.getenv('GENIUS_ACCESS_TOKEN')
# Path to save results
path_to_save_df = os.path.join('..', 'data', 'raw', 'data.csv')
# Connecting to spotify API
client_credentials_manager = SpotifyClientCredentials(client_id=SPOTIFY_API_CLIENT_ID,
client_secret=SPOTIFY_API_CLIENT_SECRET)
sp = spotipy.Spotify(auth_manager=client_credentials_manager)
# Connecting to Genius API
genius = Genius(GENIUS_ACCESS_TOKEN, timeout=10, retries=5)
genius.verbose = False
genius.remove_section_headers = True
# Selecting 10 of the subjectively most popular music genres
# from the list generated with the function below
# sp.recommendation_genre_seeds()
chosen_genres_10 = ['blues', 'country', 'disco', 'hip-hop',
'pop', 'punk', 'reggae', 'rock', 'r-n-b', 'jazz']
###Output
_____no_output_____
###Markdown
Scraping data Example
###Code
df = create_dataset(chosen_genres_10, sp, genius,
limit=3,
how_many_in_genre=6,
sleep_time=0)
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 60 entries, 0 to 59
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 artist_name 60 non-null object
1 track_name 60 non-null object
2 popularity 60 non-null int64
3 genre 60 non-null object
4 lyrics 60 non-null object
dtypes: int64(1), object(4)
memory usage: 2.8+ KB
###Markdown
Final data frame We will scrape data from 10 popular music genres. For each genre there will be at most 1,000 unique observations. Be careful: this can take some time until the function finishes.
###Code
df = create_dataset(chosen_genres_10, sp, genius,
limit=50,
how_many_in_genre=1_000,
sleep_time=10,
path_to_save=path_to_save_df,
verbose=False)
###Output
_____no_output_____ |
notebooks/spaqrl_queries.ipynb | ###Markdown
Legislation
###Code
# Legislation titles
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?article ?title
{
?article rdf:type <http://linkeddata.overheid.nl/terms/Wet>.
?article dcterm:title ?title
}
limit 100
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
law_titles = sparql_result_to_df(result)
law_titles #.sort_values('cnt', ascending=False)
###Output
_____no_output_____
###Markdown
Links
###Code
# Link types
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?link_type (count(*) as ?cnt)
{
?link_id overheidrl:heeftLinktype ?link_type.
}
group by ?link_type
order by desc(?cnt)
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
link_titles = sparql_result_to_df(result)
link_titles.head(30)
for l in link_titles.head(20)['link_type']:
print(l)
# Source/target type combinations for lx-referentie links
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?sourceType ?targetType (count(*) as ?cnt)
{
?target rdf:type ?targetType.
?source rdf:type ?sourceType.
?link_id overheidrl:heeftLinktype <http://linkeddata.overheid.nl/terms/linktype/id/lx-referentie>.
?link_id overheidrl:linktNaar ?target.
?link_id overheidrl:linktVan ?source
}
group by ?sourceType ?targetType
order by desc(?cnt)
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
link_types = sparql_result_to_df(result)
link_types
link_types.to_csv('link_types.csv')
# Links from cases to cases
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?link_id ?source ?target ?linktype
{
?target rdf:type overheidrl:Jurisprudentie.
?source rdf:type overheidrl:Jurisprudentie.
?link_id overheidrl:heeftLinktype ?linktype.
?link_id overheidrl:linktNaar ?target.
?link_id overheidrl:linktVan ?source
}
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
case_to_case_links = sparql_result_to_df(result)
print(case_to_case_links.shape)
case_to_case_links.head()
case_to_case_links.to_csv('case_to_case_links.csv', index=False)
case_to_case_links_lx = case_to_case_links[
case_to_case_links['linktype']=='http://linkeddata.overheid.nl/terms/linktype/id/lx-referentie']
case_to_case_links_lx = case_to_case_links_lx[['link_id', 'source', 'target']]
print(case_to_case_links_lx.shape, case_to_case_links_lx.drop_duplicates().shape)
case_to_case_links_lx.to_csv('case_to_case_lx_links.csv', index=False)
case_to_case_links.groupby('linktype').count()['link_id'].sort_values(ascending=False)
# Case - Legislation network
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?link_id ?source ?target ?linktype
{
?target rdf:type overheidrl:Artikel.
?source rdf:type overheidrl:Jurisprudentie.
?link_id overheidrl:heeftLinktype ?linktype.
?link_id overheidrl:linktNaar ?target.
?link_id overheidrl:linktVan ?source
}
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
case_article_network = sparql_result_to_df(result)
case_article_network.shape
case_article_network.to_csv('/media/sf_VBox_Shared/CaseLaw/2018-01-29-lido/derived/case-to-article-links.csv', index=False)
case_article_network.groupby('linktype').count()['link_id']
###Output
_____no_output_____
###Markdown
Nodes
###Code
# Get all articles
queryString = """
prefix dcterm: <http://purl.org/dc/terms/>
prefix overheidrl: <http://linkeddata.overheid.nl/terms/>
prefix owms: <http://standaarden.overheid.nl/owms/terms/>
prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix skos: <http://www.w3.org/2004/02/skos/core#>
select ?id ?title ?label ?authority
{
?id rdf:type overheidrl:Artikel.
optional {?id dcterm:title ?title.}
optional {?id owms:authority ?authority.}
optional {?id skos:prefLabel ?label}
}
"""
sparql.setQuery(queryString)
sparql.setReturnFormat(JSON)
ret = sparql.query()
result = ret.convert()
article_nodes = sparql_result_to_df(result)
article_nodes.shape
article_nodes.to_csv('/media/sf_VBox_Shared/CaseLaw/2018-01-29-lido/derived/article_nodes.csv', index=False)
article_nodes.head()
###Output
_____no_output_____
###Markdown
Sometimes a law or article changes names, so there are multiple titles/labels. Unfortunately, we don't know which the latest version is. Therefore, we just take the alphabetically first option.
###Code
article_nodes_dedup = article_nodes.sort_values(['title', 'label', 'authority']).groupby('id').first()
article_nodes_dedup.shape
article_nodes_dedup.to_csv('/media/sf_VBox_Shared/CaseLaw/2018-01-29-lido/derived/article_nodes_nodup.csv',
encoding='utf-8')
article_nodes_dedup.isnull().sum()
###Output
_____no_output_____ |
applications/solvers/cokeCombustionFoam/SegregatedSteps/runs/complicatedPorousMedia/combustions/optimize/tiny2_6/analysis/analyzeCombustion.ipynb | ###Markdown
Temporal Evolution of Combustion Temperature, Residual Coke and Reaction Rate
###Code
fieldminMaxFile="../postProcessing/minMaxComponents/0/fieldMinMax.dat"
with open(fieldminMaxFile,"r") as fp:
comment=fp.readline()
header=fp.readline()
header=header[1:-1].split()
indexs_processor=[]
for i,name in enumerate(header):
if header[i]=="processor":
indexs_processor.append(i)
indexs_processor.reverse()
data=pd.read_csv(fieldminMaxFile,comment='#', sep='\t',header=None)
data=data.drop(indexs_processor,axis=1)
data.rename(columns=lambda x:header[x],inplace=True)
data.head()
sampling_rate=10
data_sampling=data[data.index%sampling_rate==0]
data_sampling.shape
def readOpenFoamUField(file,nx,ny,normizedValue=1,component=0):
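    # Parse the "internalField" block of an OpenFOAM vector-field file, keep one component
    # of each vector, divide by normizedValue and reshape the values to an (ny, nx) array.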
with open(file,"r") as fp:
lines=fp.readlines()
for i,line in enumerate(lines):
if line.startswith("internalField"):
start=i+3
elif line.startswith("boundaryField"):
end=i-4
break
field=[]
for i in np.arange(start,end+1):
values=lines[i].replace('\n', '').split()
values=[float(value.replace('(', '').replace(')', '')) for value in values]
value=values[component]
field.append(value/normizedValue)
field=np.array(field).reshape(ny,nx)
return field
def readOpenFoamField(file,nx,ny,normizedValue=1):
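    # Same as readOpenFoamUField, but for a scalar field (one value per line).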
with open(file,"r") as fp:
lines=fp.readlines()
for i,line in enumerate(lines):
if line.startswith("internalField"):
start=i+3
elif line.startswith("boundaryField"):
end=i-4
break
field=[]
for i in np.arange(start,end+1):
value=float(lines[i].replace('\n', ''))
field.append(value/normizedValue)
field=np.array(field).reshape(ny,nx)
return field
times=np.arange(timeStep,endTime+timeStep,timeStep)
stimes=pd.Series([f"{t:.2f}".rstrip('.0') for t in times])
sampling_rate=1
stimes=stimes[stimes.index%sampling_rate==0]
stimes.shape
volumeAveragedCoke=[]
volumeAveragedReactionRate=[]
sumReactionRate=[]
inletfluxs=[]
for t in stimes:
cokeField=readOpenFoamField(f"../{str(t)}/coke",lx,ly)
volumeAveragedCoke.append(np.mean(cokeField))
cokeReactionRateField=readOpenFoamField(f"../{str(t)}/cokeRectionRate",lx,ly)
volumeAveragedReactionRate.append(np.mean(cokeReactionRateField))
sumReactionRate.append(np.sum(cokeReactionRateField))
densityField=readOpenFoamField(f"../{str(t)}/rho",lx,ly)
UxField=readOpenFoamUField(f"../{str(t)}/U",lx,ly)
inletFluxProfile=densityField[:,0]*UxField[:,0]
inletfluxs.append(np.sum(inletFluxProfile))
fig, ax = plt.subplots()
ax.set_xlabel(f"Time (s)")
ax.set_title(f"Temporal Evolution",color="k")
ax.plot(data["Time"],data["max"]/Tref,linestyle="-",label="Maximum Temperature",color="b")
ax.set_ylabel(f"Dimensionless T",color="b")
ax.tick_params(axis='y', labelcolor="b")
ax2 = ax.twinx()
ax2.plot(stimes.index*timeStep,volumeAveragedCoke,linestyle="-",color="r")
ax2.set_xlabel('Time (s)',color="r")
ax2.set_ylabel("Residual coke fraction",color="r")
ax2.tick_params(axis='y', labelcolor="r")
fig,ax=plt.subplots()
ax.plot(stimes.index*timeStep,np.array(sumReactionRate)*(pixelResolution*pixelResolution)*-1/MCoke*MO2,linestyle="-",color="b")
plt.rcParams.update({'mathtext.default': 'regular' })
ax.set_xlabel('Time (s)')
ax.set_ylabel("Total $O_2$ Reaction Rate (kg/s)",color="b")
ax.set_ylim([1e-7,2e-5])
ax.set_yscale('log')
ax.tick_params(axis='y', labelcolor="b")
ax2 = ax.twinx()
ax2.plot(stimes.index*timeStep,np.array(inletfluxs)*pixelResolution*YO2,linestyle="--",color="r")
ax2.set_ylabel("Total $O_{2}$ Flux by convection",color="r")
ax2.set_ylim([1e-7,2e-5])
ax2.set_yscale('log')
ax2.tick_params(axis='y', labelcolor="r")
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Transversely averaged O2 fraction and temperature distributions at three typical time instants
###Code
def show(timeInstant):
cokeField=readOpenFoamField(f"../{str(timeInstant)}/coke",lx,ly)
O2Field=readOpenFoamField(f"../{str(timeInstant)}/O2",lx,ly)
TField=readOpenFoamField(f"../{str(timeInstant)}/T",lx,ly,Tref)
fig,axs=plt.subplots(nrows=3, sharex=True, figsize=(13, 6))
fig.tight_layout()
plt.rcParams.update({'mathtext.default': 'regular' })
# fig.suptitle(f"Field contours at time instant of {str(timeInstant)} s", fontsize=20)
fig.text(0.55, 1.02, f'Field contours at time instant of {str(timeInstant)} s', transform=fig.transFigure, horizontalalignment='center', fontsize=18)
im0=axs[0].imshow(cokeField,cmap="coolwarm")
axs[0].set_title("coke fraction")
bbox_ax0 = axs[0].get_position()
loc_cbar0 = fig.add_axes([bbox_ax0.x1*1.01, bbox_ax0.y0, 0.02, bbox_ax0.y1-bbox_ax0.y0])
cbar0 = fig.colorbar(im0, cax=loc_cbar0)
im1=axs[1].imshow(O2Field,cmap="coolwarm")
plt.rcParams.update({'mathtext.default': 'regular' })
axs[1].set_title("${O_2}$ fraction")
bbox_ax1 = axs[1].get_position()
loc_cbar1 = fig.add_axes([bbox_ax1.x1*1.01, bbox_ax1.y0, 0.02, bbox_ax1.y1-bbox_ax1.y0])
cbar1 = fig.colorbar(im1, cax=loc_cbar1)
im2=axs[2].imshow(TField,cmap="coolwarm")
axs[2].set_title("Temperature")
bbox_ax2 = axs[2].get_position()
loc_cbar2 = fig.add_axes([bbox_ax2.x1*1.01, bbox_ax2.y0, 0.02, bbox_ax2.y1-bbox_ax2.y0])
cbar2 = fig.colorbar(im2, cax=loc_cbar2)
# show(t1)
t1=0.01
t2=0.05
t3=0.1
show(t1)
show(t2)
show(t3)
cokeField0=readOpenFoamField(f"../{str(t1)}/coke",lx,ly)
O2Field0=readOpenFoamField(f"../{str(t1)}/O2",lx,ly)
TField0=readOpenFoamField(f"../{str(t1)}/T",lx,ly,Tref)
cokeField1=readOpenFoamField(f"../{str(t2)}/coke",lx,ly)
O2Field1=readOpenFoamField(f"../{str(t2)}/O2",lx,ly)
TField1=readOpenFoamField(f"../{str(t2)}/T",lx,ly,Tref)
cokeField2=readOpenFoamField(f"../{str(t3)}/coke",lx,ly)
O2Field2=readOpenFoamField(f"../{str(t3)}/O2",lx,ly)
TField2=readOpenFoamField(f"../{str(t3)}/T",lx,ly,Tref)
fig,axs=plt.subplots(nrows=3, sharex=True, figsize=(10, 6))
fig.tight_layout()
axs[0].plot(np.mean(cokeField0,axis=0),linestyle="-.",color="k",label=fr"$\mathit{{t}}\ $ = {str(t1)} s")
axs[0].plot(np.mean(cokeField1,axis=0),linestyle="--",color="b",label=fr"$\mathit{{t}}\ $ = {str(t2)} s")
axs[0].plot(np.mean(cokeField2,axis=0),linestyle="-",color="r",label=fr"$\mathit{{t}}\ $ = {str(t3)} s")
axs[0].set_ylabel(f"Coke Fraction")
axs[0].legend()
axs[1].plot(np.mean(O2Field0,axis=0),linestyle="-.",color="k",label=fr"$\mathit{{t}}\ $ = {str(t1)} s")
axs[1].plot(np.mean(O2Field1,axis=0),linestyle="--",color="b",label=fr"$\mathit{{t}}\ $ = {str(t2)} s")
axs[1].plot(np.mean(O2Field2,axis=0),linestyle="-",color="r",label=fr"$\mathit{{t}}\ $ = {str(t3)} s")
axs[1].set_ylabel(f"$O_{2}$ Fraction")
axs[1].legend()
axs[2].plot(np.mean(TField0,axis=0),linestyle="-.",color="k",label=fr"$\mathit{{t}}\ $ = {str(t1)} s")
axs[2].plot(np.mean(TField1,axis=0),linestyle="--",color="b",label=fr"$\mathit{{t}}\ $ = {str(t2)} s")
axs[2].plot(np.mean(TField2,axis=0),linestyle="-",color="r",label=fr"$\mathit{{t}}\ $ = {str(t3)} s")
axs[2].set_ylabel(f"Temperature")
axs[2].legend()
axs[2].set_xlim([0,lx*1.2])
###Output
_____no_output_____ |
Exam project/Marcus3.ipynb | ###Markdown
Gradient descent Let $\boldsymbol{x} = \left[\begin{array}{c}x_1 \\x_2\\\end{array}\right]$ be a two-dimensional vector. Consider the following algorithm: **Algorithm:** `gradient_descent()`**Goal:** Minimize the function $f(\boldsymbol{x})$.1. Choose a tolerance $\epsilon>0$, a scale factor $ \Theta > 0$, and a small number $\Delta > 0$2. Guess on $\boldsymbol{x}_0$ and set $n=1$3. Compute a numerical approximation of the jacobian for $f$ by $$ \nabla f(\boldsymbol{x}_{n-1}) \approx \frac{1}{\Delta}\left[\begin{array}{c} f\left(\boldsymbol{x}_{n-1}+\left[\begin{array}{c} \Delta\\ 0 \end{array}\right]\right)-f(\boldsymbol{x}_{n-1})\\ f\left(\boldsymbol{x}_{n-1}+\left[\begin{array}{c} 0\\ \Delta \end{array}\right]\right)-f(\boldsymbol{x}_{n-1}) \end{array}\right] $$4. Stop if the maximum element in $|\nabla f(\boldsymbol{x}_{n-1})|$ is less than $\epsilon$5. Set $\theta = \Theta$ 6. Compute $f^{\theta}_{n} = f(\boldsymbol{x}_{n-1} - \theta \nabla f(\boldsymbol{x}_{n-1}))$7. If $f^{\theta}_{n} < f(\boldsymbol{x}_{n-1})$ continue to step 98. Set $\theta = \frac{\theta}{2}$ and return to step 6 9. Set $x_{n} = x_{n-1} - \theta \nabla f(\boldsymbol{x}_{n-1})$10. Set $n = n + 1$ and return to step 3 **Question:** Implement the algorithm above such that the code below can run. Jacobian function
###Code
def _rosen(x1,x2):
f = (1.0-x1)**2 + 2*(x2-x1**2)**2
x1 = sm.symbols('x_1')
x2 = sm.symbols('x_2')
f = (1.0-x1)**2 + 2*(x2-x1**2)**2
Df = sm.Matrix([sm.diff(f,i) for i in [x1,x2]])
Df
def rosen(x):
return _rosen(x[0],x[1])
def rosen_jac(x):
return np.array([-(2.0-2*x[0])-8*x[0]*(x[1]-x[0]**2),4*(x[1]-x[0]**2)])
def gradient_descent(f,x0,epsilon=1e-6,Theta=0.1,Delta=1e-8,max_iter=10_000):
""" minimize function with gradient descent
Args:
f (callable): function
x0 (np.ndarray): initial values
jac (callable): jacobian - MADE MYSELF IN FUNCTION
alpha (list): potential step sizes - CHANGED TO Theta
max_iter (int): maximum number of iterations
tol (float): tolerance
Returns:
x (np.ndarray): minimum
n (int): number of iterations used
"""
# step 1: initialize
x = x0
fx = f(x0)
n = 1
# step 2-6: iteration
while n < max_iter:
x_prev = x
fx_prev = fx
# step 2: evaluate gradient
jacx = rosen_jac(x)
# step 3: find good step size (line search)
fx_ast = np.inf
theta_ast = Theta
theta = Theta / 2
x = x_prev - theta*jacx
fx = f(x)
if fx < fx_ast:
fx_ast = fx
theta_ast = theta
# step 4: update guess
x = x_prev - theta_ast*jacx
# step 5: check convergence
fx = f(x)
if abs(fx-fx_prev) < epsilon:
break
# d. update i
n += 1
    return x,n
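# A sketch that follows the algorithm statement more literally (steps 3 and 6-8) for a
# two-dimensional x: a forward-difference jacobian with step Delta, and theta halved until
# f decreases. Provided only for comparison with the implementation above; same signature.
def gradient_descent_backtracking(f, x0, epsilon=1e-6, Theta=0.1, Delta=1e-8, max_iter=10_000):
    x = x0.astype(float)
    for n in range(1, max_iter + 1):
        fx = f(x)
        # step 3: forward-difference approximation of the jacobian
        jac = np.array([(f(x + Delta * np.eye(2)[i]) - fx) / Delta for i in range(2)])
        # step 4: stop when the largest gradient component is small
        if np.max(np.abs(jac)) < epsilon:
            break
        # steps 5-8: halve theta until the step decreases f
        theta = Theta
        while f(x - theta * jac) >= fx:
            theta /= 2
        # step 9: take the step
        x = x - theta * jac
    return x, n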
###Output
_____no_output_____
###Markdown
**Test case:**
###Code
def rosen(x):
return (1.0-x[0])**2+2*(x[1]-x[0]**2)**2
x0 = np.array([1.1,1.1])
try:
x,it = gradient_descent(rosen,x0)
print(f'minimum found at ({x[0]:.4f},{x[1]:.4f}) after {it} iterations')
assert np.allclose(x,[1,1])
except:
print('not implemented yet')
gradient_descent(rosen,x0)
def minimize_gradient_descent(f,x0,jac,alphas=[0.01,0.05,0.1,0.25,0.5,1],max_iter=500,tol=1e-8):
""" minimize function with gradient descent
Args:
f (callable): function
x0 (np.ndarray): initial values
jac (callable): jacobian
alpha (list): potential step sizes
max_iter (int): maximum number of iterations
tol (float): tolerance
Returns:
x (np.ndarray): minimum
n (int): number of iterations used
"""
# step 1: initialize
x = x0
fx = f(x0)
n = 1
# step 2-6: iteration
while n < max_iter:
x_prev = x
fx_prev = fx
# step 2: evaluate gradient
jacx = jac(x)
# step 3: find good step size (line search)
fx_ast = np.inf
alpha_ast = np.nan
for alpha in alphas:
x = x_prev - alpha*jacx
fx = f(x)
if fx < fx_ast:
fx_ast = fx
alpha_ast = alpha
# step 4: update guess
x = x_prev - alpha_ast*jacx
# step 5: check convergence
fx = f(x)
if abs(fx-fx_prev) < tol:
break
# d. update i
n += 1
return x,n
minimize_gradient_descent(rosen,x0,rosen_jac)
###Output
_____no_output_____ |
data/dataLoader.ipynb | ###Markdown
Longitude is divided into 1800 bins and latitude into 900 bins. Conversion formulas: longitude index lon = int(5*lon) + 900, latitude index lat = int(5*lat) + 450.
###Code
# # Save the array so the large dataset does not have to be re-read
np.save("nancatch400fig.np",fig)
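# Small helper sketch of the grid conversion described above (assumes lon in [-180, 180]
# degrees and lat in [-90, 90] degrees, matching the 1800 x 900 grid):
def to_grid_index(lon, lat):
    return int(5 * lon) + 900, int(5 * lat) + 450
# e.g. to_grid_index(116.4, 39.9) -> (1482, 649)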
# b = np.load("fig.np")
# Plot: look at one slice first
# plt.imshow(fig[0], cmap ='gray')
# plt.imsave('nancatch20.png', fig[0])
# plt.imsave('nancatch20gray.png', fig[0], cmap='gray')
for i in range(8):
plt.imsave('nancatch400gray'+str(i)+'.png', fig[i], cmap='gray')
###Output
_____no_output_____ |
PyTorch Tutorial.ipynb | ###Markdown
PyTorch Tutorial for DSC Tetris RL Project Group
###Code
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.distributions import Categorical
###Output
_____no_output_____
###Markdown
Linear Regression with Gradient DescentWe make a linear regression model of the form $\hat{y} = \hat{\beta_0} + \hat{\beta_1}x_1 + \hat{\beta_2}x_2$. First let's define the model.
###Code
class OLS(nn.Module):
def __init__(self):
super(OLS, self).__init__()
self.weights = nn.Linear(2, 1) # slope1, slope2, intercept
def forward(self, x):
return self.weights(x)
###Output
_____no_output_____
###Markdown
Now we make some fake data to test it out.
###Code
X = torch.Tensor([
[1, 2],
[3, 4],
[5, 6],
[4, 2],
[10,9]
])
# y = 1.62 + 3.14x_1 + 1.41x_2
y = torch.sum(X*torch.Tensor([3.14, 1.41]), axis=1, keepdims=True) + 1.62 # 5x1
X, y
###Output
_____no_output_____
###Markdown
Now initialize the model, optimizer, loss function, and train the model.
###Code
model = OLS()
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()
num_steps = 1000
for step in range(num_steps):
optimizer.zero_grad()
y_ = model(X) # predicted
loss = criterion(y_, y)
loss.backward()
optimizer.step()
if step % 100 == 0:
print(f"Step: {step}, Loss: {loss.item()}")
###Output
Step: 0, Loss: 874.3859252929688
Step: 100, Loss: 0.3385462164878845
Step: 200, Loss: 0.11772467195987701
Step: 300, Loss: 0.04475127160549164
Step: 400, Loss: 0.017174752429127693
Step: 500, Loss: 0.006597722880542278
Step: 600, Loss: 0.00253481836989522
Step: 700, Loss: 0.0009739218512549996
Step: 800, Loss: 0.0003741784894373268
Step: 900, Loss: 0.00014376426406670362
###Markdown
Look at the predictions and the expected output.
###Code
model(X), y
###Output
_____no_output_____
###Markdown
Look at the predicted weights (they should be close to what we used to make the $y$ data):
###Code
model.weights.weight, model.weights.bias
###Output
_____no_output_____
###Markdown
Finding the Highest Reward Path on a Lattice Using REINFORCE
###Code
board = np.array([
[0, -1000000, -200000, -300000],
[100000, 10000, -1000000, -400000],
[-10000, 10000, 10000, 10000],
[-10000, -10000, -10000, 0]
])
# typical x: [r, c] --> max{0, b_0 + b_1r + b_2c} =: y,
class Model(nn.Module):
# expects 2D input
# outputs logits of right and down
def __init__(self):
super(Model, self).__init__()
self.fc1 = nn.Linear(2, 3)
self.fc2 = nn.Linear(3, 3)
        self.fc3 = nn.Linear(3, 2) # logit right, logit down
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x #logits
# just some constant vars
RIGHT, DOWN = 1, 0
# plotting stuff
plot_episode = []
plot_rewards = []
fig = plt.figure()
ax = fig.add_subplot(111)
plt.ion()
fig.show()
fig.canvas.draw()
gamma = 0.999
num_episodes = 40000
pool_size = 1000
model = Model()
optimizer = optim.Adagrad(model.parameters(), lr=0.01)
state_pool = []
action_pool = []
reward_pool = []
for e in range(num_episodes):
position = torch.zeros(1, 2)
model.eval()
# takes 6 steps to go from top left to bottom right
for i in range(6):
if position[0, RIGHT] == 3:
# we are all the way right
position = position.clone().detach()
position[0, DOWN] += 1
elif position[0, DOWN] == 3:
# we are all the way down
position = position.clone().detach()
position[0, RIGHT] += 1
else:
prob_action = model(position)
action = Categorical(logits = prob_action).sample() # [-1000, 199] --> softmax([-1000, 199])
if action.item() == RIGHT:
# move right
position = position.clone().detach()
position[0, RIGHT] += 1
elif action.item() == DOWN:
# move down
position = position.clone().detach()
position[0, DOWN] += 1
reward = board[int(position[0, DOWN]), int(position[0, RIGHT])]
state_pool.append(position)
action_pool.append(action)
reward_pool.append(reward)
model.train()
if e % pool_size == pool_size - 1:
ax.clear()
plot_episode.append(e)
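        # Compute discounted returns by scanning the pooled rewards backwards; a reward of 0
        # marks the terminal bottom-right cell, so the running return resets at episode ends.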
for i in reversed(range(len(reward_pool))):
if reward_pool[i] == 0:
prev_reward = 0
else:
prev_reward = prev_reward * gamma + reward_pool[i]
reward_pool[i] = prev_reward
reward_mean = np.mean(reward_pool)
reward_std = np.std(reward_pool)
# plot mean reward
plot_rewards.append(reward_mean)
ax.plot(plot_episode, plot_rewards)
fig.canvas.draw()
for i in range(len(reward_pool)):
reward_pool[i] = (reward_pool[i] - reward_mean) / reward_std
optimizer.zero_grad()
for i in range(0, len(reward_pool), 6):
state = torch.cat(state_pool[i:i+6])
action = torch.tensor(action_pool[i:i+6]).long()
reward = torch.tensor(reward_pool[i:i+6]).float()
loss = torch.sum(-Categorical(logits = model(state)).log_prob(action) * reward)
loss.backward()
optimizer.step()
state_pool = []
action_pool = []
reward_pool = []
model.eval()
position = torch.zeros(1, 2)
reward = 0
# takes 6 steps to go from top left to bottom right
for i in range(6):
if position[0, RIGHT] == 3:
# we are all the way right
position[0, DOWN] += 1
elif position[0, DOWN] == 3:
# we are all the way down
position[0, RIGHT] += 1
else:
prob_action = model(position)
action = torch.argmax(prob_action)
if action.item() == RIGHT:
# move right
position[0, RIGHT] += 1
elif action.item() == DOWN:
# move down
position[0, DOWN] += 1
reward += board[int(position[0, DOWN]), int(position[0, RIGHT])]
print(position, reward)
###Output
tensor([[1., 0.]]) 100000
tensor([[2., 0.]]) 90000
tensor([[3., 0.]]) 80000
tensor([[3., 1.]]) 70000
tensor([[3., 2.]]) 60000
tensor([[3., 3.]]) 60000
|
01-Feature-Extraction-from-Text.ipynb | ###Markdown
___ ___ This unit is divided into two sections:* First, we'll find out what what is necessary to build an NLP system that can turn a body of text into a numerical array of *features*.* Next we'll show how to perform these steps using real tools. Building a Natural Language Processor From ScratchIn this section we'll use basic Python to build a rudimentary NLP system. We'll build a *corpus of documents* (two small text files), create a *vocabulary* from all the words in both documents, and then demonstrate a *Bag of Words* technique to extract features from each document.**This first section is for illustration only!**Don't bother memorizing the code - we'd never do this in real life. Start with some documents:For simplicity we won't use any punctuation.
###Code
%%writefile 1.txt
This is a story about cats
our feline pets
Cats are furry animals
%%writefile 2.txt
This story is about surfing
Catching waves is fun
Surfing is a popular water sport
###Output
Overwriting 2.txt
###Markdown
Build a vocabularyThe goal here is to build a numerical array from all the words that appear in every document. Later we'll create instances (vectors) for each individual document.
###Code
vocab = {}
i = 1
with open('1.txt') as f:
x = f.read().lower().split()
for word in x:
if word in vocab:
continue
else:
vocab[word]=i
i+=1
print(vocab)
with open('2.txt') as f:
x = f.read().lower().split()
for word in x:
if word in vocab:
continue
else:
vocab[word]=i
i+=1
print(vocab)
###Output
{'this': 1, 'is': 2, 'a': 3, 'story': 4, 'about': 5, 'cats': 6, 'our': 7, 'feline': 8, 'pets': 9, 'are': 10, 'furry': 11, 'animals': 12, 'surfing': 13, 'catching': 14, 'waves': 15, 'fun': 16, 'popular': 17, 'water': 18, 'sport': 19}
###Markdown
Even though `2.txt` has 15 words, only 7 new words were added to the dictionary. Feature ExtractionNow that we've encapsulated our "entire language" in a dictionary, let's perform *feature extraction* on each of our original documents:
###Code
# Create an empty vector with space for each word in the vocabulary:
one = ['1.txt']+[0]*len(vocab)
one
# map the frequencies of each word in 1.txt to our vector:
with open('1.txt') as f:
x = f.read().lower().split()
for word in x:
one[vocab[word]]+=1
one
###Output
_____no_output_____
###Markdown
We can see that most of the words in 1.txt appear only once, although "cats" appears twice.
###Code
# Do the same for the second document:
two = ['2.txt']+[0]*len(vocab)
with open('2.txt') as f:
x = f.read().lower().split()
for word in x:
two[vocab[word]]+=1
# Compare the two vectors:
print(f'{one}\n{two}')
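# A small illustration of the tf-idf idea described in the "Bag of Words and Tf-idf" section
# below (a sketch, not scikit-learn's exact weighting): term frequency = count / document
# length, idf = log(N / number of documents containing the term).
import numpy as np
docs = [np.array(one[1:], dtype=float), np.array(two[1:], dtype=float)]
tf = [d / d.sum() for d in docs]
df_counts = sum((d > 0).astype(float) for d in docs)
idf = np.log(len(docs) / df_counts)
tfidf = [t * idf for t in tf]
print(np.round(tfidf[0], 3))
print(np.round(tfidf[1], 3))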
###Output
['1.txt', 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
['2.txt', 1, 3, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 2, 1, 1, 1, 1, 1, 1]
###Markdown
By comparing the vectors we see that some words are common to both, some appear only in `1.txt`, others only in `2.txt`. Extending this logic to tens of thousands of documents, we would see the vocabulary dictionary grow to hundreds of thousands of words. Vectors would contain mostly zero values, making them *sparse matrices*. Bag of Words and Tf-idfIn the above examples, each vector can be considered a *bag of words*. By itself these may not be helpful until we consider *term frequencies*, or how often individual words appear in documents. A simple way to calculate term frequencies is to divide the number of occurrences of a word by the total number of words in the document. In this way, the number of times a word appears in large documents can be compared to that of smaller documents.However, it may be hard to differentiate documents based on term frequency if a word shows up in a majority of documents. To handle this we also consider *inverse document frequency*, which is the total number of documents divided by the number of documents that contain the word. In practice we convert this value to a logarithmic scale, as described [here](https://en.wikipedia.org/wiki/Tf%E2%80%93idfInverse_document_frequency).Together these terms become [**tf-idf**](https://en.wikipedia.org/wiki/Tf%E2%80%93idf). Stop Words and Word StemsSome words like "the" and "and" appear so frequently, and in so many documents, that we needn't bother counting them. Also, it may make sense to only record the root of a word, say `cat` in place of both `cat` and `cats`. This will shrink our vocab array and improve performance. Tokenization and TaggingWhen we created our vectors the first thing we did was split the incoming text on whitespace with `.split()`. This was a crude form of *tokenization* - that is, dividing a document into individual words. In this simple example we didn't worry about punctuation or different parts of speech. In the real world we rely on some fairly sophisticated *morphology* to parse text appropriately.Once the text is divided, we can go back and *tag* our tokens with information about parts of speech, grammatical dependencies, etc. This adds more dimensions to our data and enables a deeper understanding of the context of specific documents. For this reason, vectors become ***high dimensional sparse matrices***. **That's the end of the first section.**In the next section we'll use scikit-learn to perform a real-life analysis. ___ Feature Extraction from TextIn the **Scikit-learn Primer** lecture we applied a simple SVC classification model to the SMSSpamCollection dataset. We tried to predict the ham/spam label based on message length and punctuation counts. In this section we'll actually look at the text of each message and try to perform a classification based on content. We'll take advantage of some of scikit-learn's [feature extraction](https://scikit-learn.org/stable/modules/feature_extraction.htmltext-feature-extraction) tools. Load a dataset
###Code
# Perform imports and load the dataset:
import numpy as np
import pandas as pd
df = pd.read_csv('../TextFiles/smsspamcollection.tsv', sep='\t')
df.head()
###Output
_____no_output_____
###Markdown
Check for missing values:Always a good practice.
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Take a quick look at the *ham* and *spam* `label` column:
###Code
df['label'].value_counts()
###Output
_____no_output_____
###Markdown
4825 out of 5572 messages, or 86.6%, are ham. This means that any text classification model we create has to perform **better than 86.6%** to beat random chance. Split the data into train & test sets:
###Code
from sklearn.model_selection import train_test_split
X = df['message'] # this time we want to look at the text
y = df['label']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
###Output
_____no_output_____
###Markdown
Scikit-learn's CountVectorizerText preprocessing, tokenizing and the ability to filter out stopwords are all included in [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html), which builds a dictionary of features and transforms documents to feature vectors.
###Code
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
X_train_counts.shape
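# A couple of optional ways to inspect the fit (illustrative only): count_vect.vocabulary_
# maps each token to its column index, and stop words could be filtered by constructing
# CountVectorizer(stop_words='english') instead.
print(len(count_vect.vocabulary_))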
###Output
_____no_output_____
###Markdown
This shows that our training set is comprised of 3733 documents, and 7082 features. Transform Counts to Frequencies with Tf-idfWhile counting words is helpful, longer documents will have higher average count values than shorter documents, even though they might talk about the same topics.To avoid this we can simply divide the number of occurrences of each word in a document by the total number of words in the document: these new features are called **tf** for Term Frequencies.Another refinement on top of **tf** is to downscale weights for words that occur in many documents in the corpus and are therefore less informative than those that occur only in a smaller portion of the corpus.This downscaling is called **tf–idf** for “Term Frequency times Inverse Document Frequency”.Both tf and tf–idf can be computed as follows using [TfidfTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfTransformer.html):
###Code
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
###Output
_____no_output_____
###Markdown
Note: the `fit_transform()` method actually performs two operations: it fits an estimator to the data and then transforms our count-matrix to a tf-idf representation. Combine Steps with TfidVectorizerIn the future, we can combine the CountVectorizer and TfidTransformer steps into one using [TfidVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html):
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer()
X_train_tfidf = vectorizer.fit_transform(X_train) # remember to use the original X_train set
X_train_tfidf.shape
###Output
_____no_output_____
###Markdown
Train a ClassifierHere we'll introduce an SVM classifier that's similar to SVC, called [LinearSVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html). LinearSVC handles sparse input better, and scales well to large numbers of samples.
###Code
from sklearn.svm import LinearSVC
clf = LinearSVC()
clf.fit(X_train_tfidf,y_train)
###Output
_____no_output_____
###Markdown
Earlier we named our SVC classifier **svc_model**. Here we're using the more generic name **clf** (for classifier). Build a PipelineRemember that only our training set has been vectorized into a full vocabulary. In order to perform an analysis on our test set we'll have to submit it to the same procedures. Fortunately scikit-learn offers a [**Pipeline**](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) class that behaves like a compound classifier.
###Code
from sklearn.pipeline import Pipeline
# from sklearn.feature_extraction.text import TfidfVectorizer
# from sklearn.svm import LinearSVC
text_clf = Pipeline([('tfidf', TfidfVectorizer()),
('clf', LinearSVC()),
])
# Feed the training data through the pipeline
text_clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Test the classifier and display results
###Code
# Form a prediction set
predictions = text_clf.predict(X_test)
# Report the confusion matrix
from sklearn import metrics
print(metrics.confusion_matrix(y_test,predictions))
# Print a classification report
print(metrics.classification_report(y_test,predictions))
# Print the overall accuracy
print(metrics.accuracy_score(y_test,predictions))
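# Because the whole pipeline starts from raw text, new messages can be classified directly
# (the two example strings here are made up for illustration):
print(text_clf.predict(["Congratulations, you have won a free prize! Call now to claim it.",
                        "Hey, are we still meeting for lunch tomorrow?"]))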
###Output
0.989668297988
|
2019-spring/seminars/seminar4/DL19_seminar4.ipynb | ###Markdown
Training neural networks: optimization and regularization **Developer: Artem Babenko** In this seminar you will need to (1) implement a Dropout layer and trace its effect on the generalization ability of the network, and (2) implement a BatchNormalization layer and observe its effect on the convergence speed of training. Dropout (0.6 points) As always, we will experiment on the MNIST dataset. MNIST is a standard benchmark dataset, and it can be loaded with pytorch utilities.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import clear_output
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import torch.optim as optim
from torch.utils.data.sampler import SubsetRandomSampler
input_size = 784
num_classes = 10
batch_size = 128
train_dataset = dsets.MNIST(root='./MNIST/',
train=True,
transform=transforms.ToTensor(),
download=True)
test_dataset = dsets.MNIST(root='./MNIST/',
train=False,
transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=batch_size,
shuffle=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=batch_size,
shuffle=False)
###Output
_____no_output_____
###Markdown
Let's define a set of standard functions from the previous seminars.
###Code
def train_epoch(model, optimizer, batchsize=32):
loss_log, acc_log = [], []
model.train()
for batch_num, (x_batch, y_batch) in enumerate(train_loader):
data = x_batch
target = y_batch
optimizer.zero_grad()
output = model(data)
pred = torch.max(output, 1)[1]
acc = torch.eq(pred, y_batch).float().mean()
acc_log.append(acc)
loss = F.nll_loss(output, target).cpu()
loss.backward()
optimizer.step()
loss = loss.item()
loss_log.append(loss)
return loss_log, acc_log
def test(model):
loss_log, acc_log = [], []
model.eval()
for batch_num, (x_batch, y_batch) in enumerate(test_loader):
data = x_batch
target = y_batch
output = model(data)
loss = F.nll_loss(output, target).cpu()
pred = torch.max(output, 1)[1]
acc = torch.eq(pred, y_batch).float().mean()
acc_log.append(acc)
loss = loss.item()
loss_log.append(loss)
return loss_log, acc_log
def plot_history(train_history, val_history, title='loss'):
plt.figure()
plt.title('{}'.format(title))
plt.plot(train_history, label='train', zorder=1)
points = np.array(val_history)
plt.scatter(points[:, 0], points[:, 1], marker='+', s=180, c='orange', label='val', zorder=2)
plt.xlabel('train steps')
plt.legend(loc='best')
plt.grid()
plt.show()
def train(model, opt, n_epochs):
train_log, train_acc_log = [], []
val_log, val_acc_log = [], []
for epoch in range(n_epochs):
print("Epoch {0} of {1}".format(epoch, n_epochs))
train_loss, train_acc = train_epoch(model, opt, batchsize=batch_size)
val_loss, val_acc = test(model)
train_log.extend(train_loss)
train_acc_log.extend(train_acc)
steps = train_dataset.train_labels.shape[0] / batch_size
val_log.append((steps * (epoch + 1), np.mean(val_loss)))
val_acc_log.append((steps * (epoch + 1), np.mean(val_acc)))
clear_output()
plot_history(train_log, val_log)
plot_history(train_acc_log, val_acc_log, title='accuracy')
print("Epoch: {2}, val loss: {0}, val accuracy: {1}".format(np.mean(val_loss), np.mean(val_acc), epoch))
###Output
_____no_output_____
###Markdown
Create the simplest single-layer model, a one-layer fully connected network, and train it with the optimization parameters given below.
###Code
class Flatten(nn.Module):
def forward(self, x):
return x.view(x.size()[0], -1)
model = nn.Sequential(
#<your code>
)
opt = torch.optim.Adam(model.parameters(), lr=0.0005)
train(model, opt, 10)
###Output
_____no_output_____
###Markdown
The parameter of the trained network is a weight matrix in which each class corresponds to one of the 784-dimensional columns. Visualize the learned vectors for each class as two-dimensional 28x28 images. For visualization you can reuse the MNIST image visualization code from the previous seminars.
###Code
weights = #<your code>
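# One possibility (assuming the model layout used above: Flatten at index 0, Linear at index 1):
# weights = model[1].weight.data.numpy()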
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
###Output
_____no_output_____
###Markdown
Implement a Dropout layer for the fully connected network. Remember that this layer behaves differently during training and during inference.
###Code
class DropoutLayer(nn.Module):
def __init__(self, p):
super().__init__()
#<your code>
def forward(self, input):
if self.training:
#<your code>
else:
#<your code>
###Output
_____no_output_____
###Markdown
Add the Dropout layer to the network architecture, run the optimization with the parameters given earlier, and visualize the learned weights. Is there a difference between the weights trained with and without Dropout? Use a Dropout parameter of 0.7.
###Code
modelDp = nn.Sequential(
#<your code>
)
opt = torch.optim.Adam(modelDp.parameters(), lr=0.0005)
train(modelDp, opt, 10)
weights = #<your code>
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
###Output
_____no_output_____
###Markdown
Train one more model that uses L2 regularization with coefficient 0.05 instead of Dropout (the `weight_decay` parameter of the optimizer). Visualize the weights and compare them with the two previous approaches.
###Code
model = nn.Sequential(
Flatten(),
nn.Linear(input_size,num_classes),
nn.LogSoftmax(dim=-1)
)
opt = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=0.05)
train(model, opt, 10)
weights = #<your code>
plt.figure(figsize=[10, 10])
for i in range(10):
plt.subplot(5, 5, i + 1)
plt.title("Label: %i" % i)
plt.imshow(weights[i].reshape([28, 28]), cmap='gray');
###Output
_____no_output_____
###Markdown
Batch normalization (0.4 points)Implement a BatchNormalization layer for the fully connected network. It is enough to center the input and divide by the square root of the variance; the affine correction (gamma and beta) does not have to be implemented in this task.
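A common way to fill in the skeleton below (without the affine gamma/beta, as allowed here) normalizes with the batch statistics during training while keeping running estimates for evaluation; the exponential-moving-average update is an assumption on top of the task description, and the class name is only illustrative:
```
class BnExample(nn.Module):
    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        super().__init__()
        self.momentum = momentum
        self.eps = eps
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))

    def forward(self, input):
        if self.training:
            mean = input.mean(dim=0)
            var = input.var(dim=0, unbiased=False)
            # keep running statistics for use at evaluation time
            self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean.detach()
            self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var.detach()
        else:
            mean, var = self.running_mean, self.running_var
        return (input - mean) / torch.sqrt(var + self.eps)
```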
###Code
class BnLayer(nn.Module):
def __init__(self, num_features):
super().__init__()
#<your code>
def forward(self, input):
if self.training:
#<your code>
else:
#<your code>
return #<your code>
###Output
_____no_output_____
###Markdown
Train a three-layer fully connected network (use a hidden layer size of 100) with sigmoids as the activation functions.
###Code
model = nn.Sequential(
#<your code>
)
opt = torch.optim.RMSprop(model.parameters(), lr=0.01)
train(model, opt, 3)
###Output
_____no_output_____
###Markdown
Repeat the training with the same parameters for a network with the same architecture, but with a BatchNorm layer added (for all three hidden layers).
###Code
modelBN = nn.Sequential(
#<your code>
)
opt = torch.optim.RMSprop(modelBN.parameters(), lr=0.01)
train(modelBN, opt, 3)
###Output
_____no_output_____
sentiment/sentiment-rnn/Sentiment RNN.ipynb | ###Markdown
Sentiment Analysis with an RNNIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by labels.The architecture for this network is shown below.Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
###Code
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
###Output
_____no_output_____
###Markdown
Data preprocessingThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. Then I can combine all the reviews back together into one big string.First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
###Code
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
###Output
_____no_output_____
###Markdown
Encoding the wordsThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
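One possible encoding (shown as a sketch, not the only valid approach) ranks words by frequency so the most common word gets integer 1, which keeps 0 free for padding later:
```
from collections import Counter

counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}

reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
```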
###Code
# Create your dictionary that maps vocab words to integers here
vocab_to_int =
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints =
###Output
_____no_output_____
###Markdown
Encoding the labelsOur labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively.
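A minimal way to do this, mirroring how the reviews were split on newlines (a sketch):
```
labels = labels.split('\n')
labels = np.array([1 if label == 'positive' else 0 for label in labels])
```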
###Code
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels =
###Output
_____no_output_____
###Markdown
If you built `labels` correctly, you should see the next output.
###Code
from collections import Counter

review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
###Output
Zero-length reviews: 1
Maximum review length: 2514
###Markdown
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.> **Exercise:** First, remove the review with zero length from the `reviews_ints` list.
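One sketch that removes the empty review and also keeps `labels` aligned with the surviving reviews:
```
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
```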
###Code
# Filter out that review with 0 length
reviews_ints =
###Output
_____no_output_____
###Markdown
> **Exercise:** Now, create an array `features` that contains the data we'll pass to the network. The data should come from `reviews_ints`, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`. For reviews longer than 200, use only the first 200 words as the feature vector.This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
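One approach (a sketch; it relies on `seq_len = 200` as defined in the next cell) pre-allocates a zero matrix and writes each truncated review into the right-hand side of its row:
```
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for ii, row in enumerate(reviews_ints):
    features[ii, -len(row):] = np.array(row)[:seq_len]
```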
###Code
seq_len = 200
features =
###Output
_____no_output_____
###Markdown
If you build features correctly, it should look like that cell output below.
###Code
features[:10,:100]
###Output
_____no_output_____
###Markdown
Training, Validation, Test With our data in nice shape, we'll split it into training, validation, and test sets.> **Exercise:** Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, `train_x` and `train_y` for example. Define a split fraction, `split_frac` as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
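A straightforward split consistent with the shapes shown below (sketch):
```
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]

half = len(val_x) // 2
val_x, test_x = val_x[:half], val_x[half:]
val_y, test_y = val_y[:half], val_y[half:]
```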
###Code
split_frac = 0.8
train_x, val_x =
train_y, val_y =
val_x, test_x =
val_y, test_y =
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
###Output
_____no_output_____
###Markdown
With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:``` Feature Shapes:Train set: (20000, 200) Validation set: (2500, 200) Test set: (2501, 200)``` Build the graphHere, we'll build the graph. First up, defining the hyperparameters.* `lstm_size`: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.* `lstm_layers`: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.* `batch_size`: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.* `learning_rate`: Learning rate
###Code
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
###Output
_____no_output_____
###Markdown
For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be `batch_size` vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability. > **Exercise:** Create the `inputs_`, `labels_`, and drop out `keep_prob` placeholders using `tf.placeholder`. `labels_` needs to be two-dimensional to work with some functions later. Since `keep_prob` is a scalar (a 0-dimensional tensor), you shouldn't provide a size to `tf.placeholder`.
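A sketch of the three placeholders (the names match the exercise; the shapes are left flexible with `None`):
```
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
```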
###Code
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ =
labels_ =
keep_prob =
###Output
_____no_output_____
###Markdown
EmbeddingNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.> **Exercise:** Create the embedding lookup matrix as a `tf.Variable`. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup). This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
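Inside the `graph.as_default()` block in the next cell, this could look like the sketch below; the uniform initialization range is just one reasonable choice:
```
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
```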
###Code
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding =
embed =
###Output
_____no_output_____
###Markdown
LSTM cellNext, we'll create our LSTM cells to use in the recurrent network ([TensorFlow documentation](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn)). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.To create a basic LSTM cell for the graph, you'll want to use `tf.contrib.rnn.BasicLSTMCell`. Looking at the function documentation:```tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=)```you can see it takes a parameter called `num_units`, the number of units in the cell, called `lstm_size` in this code. So then, you can write something like ```lstm = tf.contrib.rnn.BasicLSTMCell(num_units)```to create an LSTM cell with `num_units`. Next, you can add dropout to the cell with `tf.contrib.rnn.DropoutWrapper`. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like```drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)```Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with `tf.contrib.rnn.MultiRNNCell`:```cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)```Here, `[drop] * lstm_layers` creates a list of cells (`drop`) that is `lstm_layers` long. The `MultiRNNCell` wrapper builds this into multiple layers of RNN cells, one for each cell in the list.So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.> **Exercise:** Below, use `tf.contrib.rnn.BasicLSTMCell` to create an LSTM cell. Then, add dropout to it with `tf.contrib.rnn.DropoutWrapper`. Finally, create multiple LSTM layers with `tf.contrib.rnn.MultiRNNCell`.Here is [a tutorial on building RNNs](https://www.tensorflow.org/tutorials/recurrent) that will help you out.
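Putting the three calls from above together, the cell below could be completed like this (a sketch using exactly the wrappers described in this section):
```
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
```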
###Code
with graph.as_default():
# Your basic LSTM cell
lstm =
# Add dropout to the cell
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
###Output
_____no_output_____
###Markdown
RNN forward passNow we need to actually run the data through the RNN nodes. You can use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn) to do this. You'd pass in the RNN cell you created (our multiple layered LSTM `cell` for instance), and the inputs to the network.```outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)```Above I created an initial state, `initial_state`, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. `tf.nn.dynamic_rnn` takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.> **Exercise:** Use `tf.nn.dynamic_rnn` to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, `embed`.
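With the embedded vectors as input, the forward pass is a single call (sketch):
```
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
```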
###Code
with graph.as_default():
outputs, final_state =
###Output
_____no_output_____
###Markdown
OutputWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with `outputs[:, -1]`, then calculate the cost from that and `labels_`.
###Code
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
###Output
_____no_output_____
###Markdown
Validation accuracyHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
###Code
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
###Output
_____no_output_____
###Markdown
BatchingThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the `x` and `y` arrays and returns slices out of those arrays with size `[batch_size]`.
###Code
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
###Output
_____no_output_____
###Markdown
TrainingBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the `checkpoints` directory exists.
###Code
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
###Output
_____no_output_____
###Markdown
Testing
###Code
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
###Output
_____no_output_____
wp/notebooks/experiments/.ipynb_checkpoints/mnist_ios-checkpoint.ipynb | ###Markdown
Mc Dropout
###Code
base_experiment_path = os.path.join(METRICS_PATH, "mnist_ios")
metrics_handler = ExperimentSuitMetrics(base_experiment_path)
df_mc = MetricsTransformer.load_from_dir(metrics_handler, "mc_dropout", dtype=dtypes)
scores = FrameScores(accuracy_column="eval_sparse_categorical_accuracy")
mc_frame = ExperimentFrame(df_mc, scores=scores)
df_mc = mc_frame.get_frame()
fig, ax = plt.subplots(1, 2, figsize=(25, 10))
sns.lineplot(ax=ax[0], data=df_mc, x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="method")
ax[0].legend(title="Method")
ax[0].set_xlabel("Labeled pool size")
ax[0].set_ylabel("Test Accuracy")
sns.lineplot(ax=ax[1], data=df_mc, x="labeled_pool_size", y="eval_sparse_categorical_crossentropy", hue="method")
ax[1].legend(title="Method")
ax[1].set_xlabel("Labeled pool size")
ax[1].set_ylabel("Test Loss")
plt.savefig(os.path.join(STAT_PATH, "mc_dropout_loss_acc.png"))
fig, ax = plt.subplots(figsize=(15, 10))
sns.lineplot(ax=ax, data=df_mc, x="iteration", y="mean_leff", hue="method")
ax.legend(title="Method")
ax.set_xlabel("Iteration")
ax.set_ylabel("Labeling Efficiency")
#ax.set(xlabel="Iteration", ylabel="Query time in seconds per datapoint.")
plt.savefig(os.path.join(STAT_PATH, "mc_dropout_leff.png"))
selector = df_mc["method"] != "Random"
tmp = df_mc.copy()
tmp["query_time"] = tmp["query_time"] * 1000
ax = sns.barplot(data=tmp[selector], x="method", y="query_time", ci="sd")
ax.set(xlabel="Method", ylabel="Query time in [ms]")
ax.set_ylim(0.27, 0.3)
###Output
_____no_output_____
###Markdown
Moment Propagation
###Code
df_mp = MetricsTransformer.load_from_dir(metrics_handler, "moment_propagation", dtype=dtypes)
scores = FrameScores(accuracy_column="eval_sparse_categorical_accuracy")
mp_frame = ExperimentFrame(df_mp, scores=scores)
df_mp = mp_frame.get_frame()
fig, ax = plt.subplots(1, 2, figsize=(25, 10))
selector = df_mp["eval_sparse_categorical_accuracy"] > .7
sns.lineplot(ax=ax[0], data=df_mp[selector], x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="method")
ax[0].set_xlabel("Labeled Pool Size")
ax[0].set_ylabel("Test Accuracy")
ax[0].legend(title="Method")
sns.lineplot(ax=ax[1], data=df_mp, x="labeled_pool_size", y="eval_sparse_categorical_crossentropy", hue="method")
ax[1].set_xlabel("Labeled Pool Size")
ax[1].set_ylabel("Test Accuracy")
ax[1].legend(title="Method")
fig, ax = plt.subplots(1, 2, figsize=(25, 10))
sns.lineplot(ax=ax[0], data=df_mp, x="labeled_pool_size", y="mean_leff", hue="method")
ax[0].set_xlabel("Labeled Pool Size")
ax[0].set_ylabel("Labeling Efficiency")
ax[0].legend(title="Method")
sns.lineplot(ax=ax[1], data=df_mp, x="labeled_pool_size", y="query_time", hue="method")
ax[1].set_xlabel("Labeled Pool Size")
ax[1].set_ylabel("Query time in seconds per datapoints")
ax[1].legend(title="Method")
num_of_datapoints = np.array([10_000, 20_000, 40_000, 80_000, 160_000])
qt_dp_mp = Stats.query_time_per_datapoints(df_mp, num_of_datapoints)
qt_dp_mc = Stats.query_time_per_datapoints(df_mc, num_of_datapoints)
qt_dp_mc.insert(0, "model", "Mc Dropout")
qt_dp_mp.insert(0, "model", "Moment Propagation")
merged_times = pd.concat([qt_dp_mc, qt_dp_mp])
merged_times = merged_times.rename(columns={"model": "Model", "method": "Method"})
fig = sns.lineplot(data=merged_times, x="size", y="times", hue="Method", style="Model")
fig.set(xlabel="Num of datapoints to query", ylabel="Query time in seconds")
composite_df = pd.concat([df_mp, df_mc])
composite_df["query_time"] = composite_df["query_time"]*1000
selector = composite_df["method"] != "Random"
fig = sns.barplot(data=composite_df[selector], x="method", y="query_time", hue="model", ci="sd")
fig.set(xlabel="Method", ylabel="Query time in [ms]")
fig.legend(title="Model")
fig.set_title("Runtime of query functions per datapoint")
df_mp_mean = mp_frame.get_mean_frame()
df_mc_mean = mc_frame.get_mean_frame()
df_mc_mean["query_time"] = df_mc_mean["query_time"]-df_mp_mean["query_time"]
df_mc_mean["query_time"] = df_mc_mean["query_time"]*1000
selector = df_mc_mean["method"] != "Random"
fig = sns.barplot(data=df_mc_mean[selector], x="method", y="query_time", ci=None)
fig.set(xlabel="Method", ylabel="Performance increase in [ms]")
fig.set_title("Performance increase Moment Propagation compared to Mc Dropout")
df_merged = pd.concat([df_mc, df_mp])
fig, ax = plt.subplots(2, 2, figsize=(25, 20))
selector = df_merged["method"] == "BALD"
sns.lineplot(ax=ax[0][0], data=df_merged[selector], x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="model")
ax[0][0].set_xlabel("Labeled pool size")
ax[0][0].set_ylabel("Test Accuracy")
ax[0][0].legend(title="Model")
ax[0][0].set_title("Acquisition function: BALD")
selector = df_merged["method"] == "Max. Entropy"
fig = sns.lineplot(ax=ax[0][1], data=df_merged[selector], x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="model")
ax[0][1].set_xlabel("Labeled pool size")
ax[0][1].set_ylabel("Test Accuracy")
ax[0][1].legend(title="Model")
ax[0][1].set_title("Acquisition function: Max. Entropy")
selector = df_merged["method"] == "Mean Std."
fig = sns.lineplot(ax=ax[1][0], data=df_merged[selector], x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="model")
ax[1][0].set_xlabel("Labeled pool size")
ax[1][0].set_ylabel("Test Accuracy")
ax[1][0].legend(title="Model")
ax[1][0].set_title("Acquisition function: Mean Std.")
selector = df_merged["method"] == "Max. Var. Ratio"
fig = sns.lineplot(ax=ax[1][1], data=df_merged[selector], x="labeled_pool_size", y="eval_sparse_categorical_accuracy", hue="model")
ax[1][1].set_xlabel("Labeled pool size")
ax[1][1].set_ylabel("Test Accuracy")
ax[1][1].legend(title="Model")
ax[1][1].set_title("Acquisition function: Max. Var. Ratio")
###Output
_____no_output_____
###Markdown
Variance in initial training
###Code
selector = df_mc["iteration"] == 1
fig = sns.barplot(data=df_mc[selector], x="method", y="eval_sparse_categorical_accuracy")
fig.set_ylim(0.45, 0.65)
fig.set(xlabel="Method", ylabel="Test Accuracy")
selector = df_mp["iteration"] == 1
fig = sns.barplot(data=df_mp[selector], x="method", y="eval_sparse_categorical_accuracy")
fig.set_ylim(0.5, 0.6)
###Output
_____no_output_____
###Markdown
Runtime boxplot
###Code
composite_df = pd.concat([df_mp, df_mc])
#composite_df = composite_df.groupby(["model", "method", "run"]).mean()
composite_df["query_time"] = composite_df["query_time"]*1000
selector = composite_df["method"] != "Random"
#plt.figure(dpi=300)
fig = sns.boxplot(data=composite_df[selector], y="method", x="query_time", hue="model")
fig.set(xlabel="Query time in [ms]", ylabel="")
fig.legend(title="Model")
#plt.grid()
plt.tight_layout()
plt.savefig(os.path.join(STAT_PATH, "mp_mc_runtime_boxplot.png"))
###Output
_____no_output_____
###Markdown
AUC/Query Time Table
###Code
table_query_time = Table.query_time(pd.concat([df_mc, df_mp]), decimals=6)
table_query_time = Frame.merge_mean_std(table_query_time)
table_query_time.columns = ["Query Time"]
table_query_time
auc_mean_std = Table.auc(pd.concat([df_mc, df_mp]), decimals=3)
auc_mean_std = Frame.merge_mean_std(auc_mean_std)
auc_mean_std.columns = ["AUC"]
auc_mean_std
qt_auc = pd.concat([auc_mean_std, table_query_time], axis=1)
qt_auc.to_latex(os.path.join(STAT_PATH, "tables", "auc_qt_mp_mc.tex"))
qt_auc
###Output
_____no_output_____
fenics_artery.ipynb | ###Markdown
Simulation of artery wall dynamics using FEniCS[FEniCS](https://fenicsproject.org/) is a finite element (FE) solver in Python and is used here to simulate artery wall dynamics under functional hyperaemia. This simulation was used to calculate IPAD under functional hyperaemia [1].FEniCS is best installed as a standalone Anaconda environment:
###Code
conda create -n fenicsproject -c conda-forge fenics
source activate fenicsproject
###Output
_____no_output_____
###Markdown
Installing ```nb_conda``` in both the ```fenicsproject``` and the base environment adds the option to switch the kernel environment to Jupyter Notebook.
###Code
conda install nb_conda
###Output
_____no_output_____
###Markdown
Now we can begin modelling an arteriole using FEniCS. Begin with all your module imports
###Code
from fenics import *
from dolfin import *
from mshr import *
from matplotlib import pyplot as plt
import numpy as np
from scipy.interpolate import interp1d
###Output
_____no_output_____
###Markdown
Next we define useful parameters for units and scaled parameters
###Code
# Units
cm = 1e-2
um = 1e-4 * cm
dyn = 1
pa = 10 * dyn/cm**2
s = 1
# Scaled variables
r0 = 20*um # arteriole radius
R = r0/r0 # dimensionless radius
We = 0.2*R # arteriole wall width
w_ast = 10*um/r0 # width of an astrocyte
gap = 1*um/r0 # gap between astrocytes
Le = 10*R + 5*w_ast + 4*gap # length of the arteriole
tf = 50*s # simulation time
###Output
_____no_output_____
###Markdown
This example simulation will have 5 astrocytes placed along the arteriole wall. They're start and end points along the wall are defined next
###Code
asta1 = Le/2 - 5*w_ast
asta2 = asta1+w_ast
astb1 = asta2+gap
astb2 = astb1+w_ast
astc1 = astb2+gap
astc2 = astc1+w_ast
astd1 = astc2+gap
astd2 = astd1+w_ast
aste1 = astd2+gap
aste2 = aste1+w_ast
###Output
_____no_output_____
###Markdown
Finally, we need the elasticity parameters for the artery wall
###Code
Y = 1.0e6 * pa # Young's modulus
nu = 0.49 # Poisson's ratio
lam = Y*nu/((1+nu)*(1-2*nu)) # First Lame coefficient
mu = Y/(2*(1+nu)) # Second Lame coefficient
Mu = mu/mu # dimensionless Lame coefficient
Lam = lam/mu # dimensionless Lame coefficient
###Output
_____no_output_____
###Markdown
The displacement for the arteriole wall is obtained from the simulations of the neurovascular unit (NVU) following [2].
###Code
disp = np.loadtxt('./nvu/disp.csv', delimiter=',')/r0 # read displacement from data
nt = disp.shape[0] # number of time steps
dt = tf/nt
###Output
_____no_output_____
###Markdown
To set up FEniCS we need to set up the geometry and mesh using meshr
###Code
N = 512
deg = 2
elem = "Lagrange"
geom = Rectangle(Point(0, 0), Point(We, Le))
mesh = generate_mesh(geom, N)
###Output
_____no_output_____
###Markdown
We will need three function spaces for the simulation
###Code
V = VectorFunctionSpace(mesh, elem, deg)
W = FunctionSpace(mesh, elem, deg)
WW = TensorFunctionSpace(mesh, elem, deg)
###Output
_____no_output_____
###Markdown
Astrocytes on the arteriole wall are defined as Dirichlet boundary conditions on the top boundary of the arteriole wall with prescribed displacement obtained from ```disp```. The bottom boundary of the arteriole wall is allowed to move freely. The side boundaries of the rectangular arteriole geometry are fixed to zero displacement.
###Code
def astr_a(x, on_boundary):
return near(x[0], We) and (x[1] < asta2 and x[1] > asta1)
def astr_b(x, on_boundary):
return near(x[0], We) and (x[1] < astb2 and x[1] > astb1)
def astr_c(x, on_boundary):
return near(x[0], We) and (x[1] < astc2 and x[1] > astc1)
def astr_d(x, on_boundary):
return near(x[0], We) and (x[1] < astd2 and x[1] > astd1)
def astr_e(x, on_boundary):
return near(x[0], We) and (x[1] < aste2 and x[1] > aste1)
def fixed_boundary(x, on_boundary):
return near(x[1], 0) or near(x[1], Le)
disps = Expression(('d', '0'), d=disp[0], degree=deg)
bc0 = DirichletBC(V, Constant((0, 0)), fixed_boundary)
bc1 = DirichletBC(V, disps, astr_a, method='pointwise')
bc2 = DirichletBC(V, disps, astr_b, method='pointwise')
bc3 = DirichletBC(V, disps, astr_c, method='pointwise')
bc4 = DirichletBC(V, disps, astr_d, method='pointwise')
bc5 = DirichletBC(V, disps, astr_e, method='pointwise')
bcs = [bc0, bc1, bc2, bc3, bc4, bc5]
###Output
_____no_output_____
###Markdown
Next we define functions for stress and strain in the PDE
###Code
def epsilon(u):
return 0.5*(nabla_grad(u) + nabla_grad(u).T)
def sigma(u):
return Lam*nabla_div(u)*Identity(d) + 2*Mu*epsilon(u)
###Output
_____no_output_____
###Markdown
With all parameters and variables in place, we can set up the variational Problem in FEniCS
###Code
u = TrialFunction(V)
d = u.geometric_dimension()
v = TestFunction(V)
f = Constant((0, 0))
a = inner(sigma(u), epsilon(v))*dx
L = dot(f, v)*dx
###Output
_____no_output_____
###Markdown
Solutions should be stored in a VTK stack
###Code
ufile = File('./fenics_artery/u.pvd')
sfile = File('./fenics_artery/s.pvd')
###Output
_____no_output_____
###Markdown
Finally, the simulation itself is run in a loop over time. Each time step the current displacement gets updated on the boundary conditions and the variational problem is solved.
###Code
u = Function(V)
t = 0.0
for i in range(nt):
disps.d = disp[i] # update displacement in all astrocyte boundary conditions
solve(a == L, u, bcs)
u.rename('u (um/s)', 'u (um/s)')
# Obtain principal stress component in radial direction
sigma_w = project(sigma(u)*Identity(d), WW)
sigma_r = project(sigma_w[0, 0], W)
sigma_r.rename('sigma (Pa)', 'sigma (Pa)')
# Save to file and plot solution
ufile << (u, t)
sfile << (sigma_r, t)
t += dt
###Output
_____no_output_____
assignments/A9/A9_BONUS.ipynb | ###Markdown
BONUSIn this question, we'll do some basic processing on a real-world dataset, courtesy of Kaggle: "A Million News Headlines" from ABC, published over the *15-year period* beginning 2003 and through 2017. You can [download the dataset and play with it yourself](https://www.kaggle.com/therohk/million-headlines/version/6) (though it's available on JupyterHub as well for this assignment). Part AWrite a function to read and process the input into a usable format. This function should - be named `parse_csv`, - take 1 argument: the name of the input file to read - return 1 value: a dictionary, where each key is a unique date, and each value is a *list* of *titles* of all headlines posted on that date The *input* file will be a comma-separated text file, the same one that's available on Kaggle. The very first line contains file metadata and can be safely ignored. Each subsequent line is a single news header, with its accompanying timestamp (year, month, day). These two values are separated by a comma--no other punctuation exists.The first four lines of the 1-million-plus line file look like this:```publish_date,headline_text20030219,aba decides against community broadcasting licence20030219,act fire witnesses must be aware of defamation20030219,a g calls for infrastructure protection summit```Again, skip the very first line. After that, for each line, keep track of both the date/timestamp and the news header. Add the key (timestamp) to the dictionary if it doesn't already exist, and include the value as a new element to a list under that key (make sure you `strip()` both strings before adding them). Do this for each header in the file, then return the dictionary containing all this information.**No external packages are allowed (this includes NumPy).** You can use built-in functions like `range`, `zip`, `enumerate`, and `open`.
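One way you might structure this function (a sketch that follows the steps described above and uses only built-ins):
```
def parse_csv(filename):
    headlines = {}
    with open(filename, 'r') as infile:
        infile.readline()                    # skip the metadata header line
        for line in infile:
            if not line.strip():
                continue                     # ignore any blank lines
            date, title = line.strip().split(',', 1)
            date, title = date.strip(), title.strip()
            if date not in headlines:
                headlines[date] = []
            headlines[date].append(title)
    return headlines
```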
###Code
import json
ytrue = json.load(open("d.json", "r"))
ypred = parse_csv("abcnews-date-text.csv")
assert ytrue == ypred
###Output
_____no_output_____
###Markdown
Part BUsing your `parse_csv` function from Part A, answer the question: **how many news headings are there in total?** Show the code you needed to run to obtain this answer, and use the text field to explain (in a paragraph that could fit in a tweet).**No other imports are allowed.***Hint*: you'll want to pass `abcnews-date-text.csv` into your function as its input argument.
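One way to count them (sketch): read the file once, then sum the lengths of the per-date lists.
```
headlines = parse_csv("abcnews-date-text.csv")
total = sum(len(titles) for titles in headlines.values())
print(total)
```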
###Code
# Please DO NOT MODIFY THIS CELL! Thank you!
### BEGIN HIDDEN TESTS
# The answer: 1103665 headings
# The code should read in the data and count the number of elements
# under each key.
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
Part CUsing your `parse_csv` function from Part A, answer the question: **in what *year* were the *most* headings posted?** Show the code you needed to run to obtain this answer, and use the text field to explain (in a paragraph that could fit in a tweet).**No other imports are allowed.***Hint*: you'll want to pass `abcnews-date-text.csv` into your function as its input argument.
###Code
# Please DO NOT MODIFY THIS CELL! Thank you!
### BEGIN HIDDEN TESTS
# The answer: 2012 and 2016 are tied for the most with 366 headings each.
# The code will probably use another dictionary to key up the years
# from the original data and count the headings under those years.
### END HIDDEN TESTS
###Output
_____no_output_____
###Markdown
Part DUsing your `parse_csv` function from Part A, answer the question: **what are the most common words for each year?** Remove the following stopwords from consideration: `["to", "for", "in", "on", "of", "the", "over", "at", "and", "a", "with", "an", "after"]`. Show the code you needed to run to obtain this answer, and use the text field to explain (in a paragraph that could fit in a tweet).**You may use `defaultdict` and `Counter` to answer this question.** To use them, add the following statement at the beginning of your code:`from collections import defaultdict, Counter`The documentation for `defaultdict` is [here](https://docs.python.org/3/library/collections.htmlcollections.defaultdict), and for `Counter` is [here](https://docs.python.org/3/library/collections.htmlcollections.Counter) (both are on different parts of the same webpage).*Hint*: you'll want to pass `abcnews-date-text.csv` into your function as its input argument.
###Code
# I've gone ahead and imported things you can use and defined your list of stopwords.
from collections import defaultdict, Counter
stopwords = ["to", "for", "in", "on", "of", "the", "over", "at", "and", "a", "with", "an", "after"]
### BEGIN SOLUTION
### END SOLUTION
# Please DO NOT MODIFY THIS CELL! Thank you!
### BEGIN HIDDEN TESTS
# The answer should look something like this:
"""
2003 [('us', 2399)]
2004 [('police', 2767)]
2005 [('police', 2934)]
2006 [('police', 2458)]
2007 [('police', 3330)]
2008 [('police', 3141)]
2009 [('police', 2669)]
2010 [('interview', 2807)]
2011 [('police', 2102)]
2012 [('interview', 2438)]
2013 [('new', 2533)]
2014 [('new', 2405)]
2015 [('new', 2503)]
2016 [('man', 1820)]
2017 [('says', 1369)]
"""
# The code will again need to group the output of parse_csv
# by year, but will also have to break the headings up into
# individual words. defaultdict(int) may be used to count
# these words. Counter can then be used for each year's
# word count to pick out the top word.
### END HIDDEN TESTS
###Output
_____no_output_____
notebooks/toxic-comment-lstm.ipynb | ###Markdown
Configuring TPU'sFor this version of Notebook we will be using TPU's as we have to built a BERT Model
###Code
# Detect hardware, return appropriate distribution strategy
try:
# TPU detection. No parameters necessary if TPU_NAME environment variable is
# set: this is always the case on Kaggle.
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
# Default distribution strategy in Tensorflow. Works on CPU and single GPU.
strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
train = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv')
train2 = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv')
validation = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv')
test = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv')
train.drop(train.columns.difference(['id','comment_text','toxic']),axis=1,inplace=True)
train2.drop(train2.columns.difference(['id','comment_text','toxic']),axis=1,inplace=True)
#print(train.head())
#print(train2.head())
#print(validation.head())
#print(test.head())
print(set(list(train.toxic.values)))
train2['toxic'] = train2['toxic'].apply(lambda x: 1 if x>0.5 else 0)
print(set(list(train2.toxic.values)))
print(train.shape)
train = train.append(train2, ignore_index=True)
print(train.shape)
max_len = train['comment_text'].apply(lambda x:len(str(x).split())).max()
print("max_len", max_len)
def roc_auc(predictions,target):
fpr, tpr, thresholds = metrics.roc_curve(target, predictions)
roc_auc = metrics.auc(fpr, tpr)
return roc_auc
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
train = train.sample(frac=1).reset_index(drop=True)
xtrain, ytrain = train.comment_text.values, train.toxic.values
xvalid, yvalid = validation.comment_text.values, validation.toxic.values
# using keras tokenizer here
token = text.Tokenizer(num_words=None)
#max_len = 1500
token.fit_on_texts(list(xtrain) + list(xvalid))
xtrain_seq = token.texts_to_sequences(xtrain)
xvalid_seq = token.texts_to_sequences(xvalid)
#zero pad the sequences
xtrain_pad = sequence.pad_sequences(xtrain_seq, maxlen=max_len)
xvalid_pad = sequence.pad_sequences(xvalid_seq, maxlen=max_len)
word_index = token.word_index
# load the GloVe vectors in a dictionary:
embeddings_index = {}
f = open('/kaggle/input/glove840b300dtxt/glove.840B.300d.txt','r',encoding='utf-8')
for line in tqdm(f):
values = line.split(' ')
word = values[0]
coefs = np.asarray([float(val) for val in values[1:]])
embeddings_index[word] = coefs
f.close()
print('Found %s word vectors.' % len(embeddings_index))
# create an embedding matrix for the words we have in the dataset
embedding_matrix = np.zeros((len(word_index) + 1, 300))
for word, i in tqdm(word_index.items()):
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
%%time
with strategy.scope():
# A simple bidirectional LSTM with glove embeddings and one dense layer
model = Sequential()
model.add(Embedding(len(word_index) + 1,
300,
weights=[embedding_matrix],
input_length=max_len,
trainable=False))
model.add(Bidirectional(LSTM(300, dropout=0.3, recurrent_dropout=0.3)))
model.add(Dense(1,activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam',metrics=['accuracy'])
model.summary()
model.fit(xtrain_pad, ytrain, epochs=5, batch_size=64*strategy.num_replicas_in_sync)
scores = model.predict(xvalid_pad)
print("Auc: %.2f%%" % (roc_auc(scores,yvalid)))
# Prepare the test inputs and the submission frame (sketch: assumes the test
# file has 'id' and 'content' columns, as in the competition data)
xtest_seq = token.texts_to_sequences(test.content.values)
test_dataset = sequence.pad_sequences(xtest_seq, maxlen=max_len)
sub = test[['id']].copy()
sub['toxic'] = model.predict(test_dataset, verbose=1)
sub.to_csv('submission.csv', index=False)
###Output
_____no_output_____
thesis/graph.ipynb | ###Markdown
Normalizing the sensor values- the vibration sensor is excluded
###Code
index = 1100
sensor_df = df.iloc[index:index+70, [1, 3, 4, 5, 6]].set_index("AddDt")
normalization_df = (sensor_df - sensor_df.mean())/sensor_df.std()
normalization_df.head()
###Output
_____no_output_____
###Markdown
Scatter plot after normalization- in this case the diagonal shows frequency (density) plots
###Code
sns.pairplot(normalization_df, diag_kind="kde")
###Output
_____no_output_____
###Markdown
No clear correlation is visible. For now, start over with the new dataset `senval_new.csv`.
###Code
new_dataset = pd.read_csv("./senval_new.csv")
new_dataset
sensor_df = new_dataset.iloc[:, [3, 4, 5, 6]]
sensor_df.plot()
sensor_df.iloc[10500:11000, :].plot()
sensor_df.iloc[:, [0, 1]].plot()
###Output
_____no_output_____
###Markdown
Extracting data differences- use the difference from each sensor's previous value to pick out values that drop or rise sharply
###Code
new = pd.DataFrame([(0, 0, 0, 0)], columns=["SSO", "FSR", "LIG", "HAL"])
new = new.append(sensor_df.copy(), ignore_index=True)
sensor_df = sensor_df.append(pd.DataFrame([(0, 0, 0, 0)], columns=[
"SSO", "FSR", "LIG", "HAL"]), ignore_index=True)
difference_df = (sensor_df-new).iloc[1:-1, :]
difference_df.plot()
###Output
_____no_output_____
###Markdown
Let's look at the differences of the ultrasonic and pressure sensors- judging from the graph below, the two values move almost simultaneously and in opposite directions.
###Code
difference_df.iloc[10500:11000, :2].plot()
###Output
_____no_output_____
###Markdown
Let's look at the differences between the light sensor and the ultrasonic + pressure sensors
###Code
difference_df.iloc[10650:10700, :3].plot()
difference_df.iloc[10800:10850, :3].plot()
###Output
_____no_output_____
###Markdown
Results- after the light sensor spikes, the pressure and ultrasonic sensors can be seen changing, and once the light value drops again there is no further variation.- each light-sensor spike has a magnitude of at least 150.
###Code
open_list = difference_df.index[difference_df["LIG"] > 150].to_numpy()
closed_list = difference_df.index[difference_df["LIG"] < -150].to_numpy()
open_list, closed_list
###Output
_____no_output_____
###Markdown
Since the two-digit indices are artifacts of errors made while setting up, remove them and proceed with the test. Indices whose values are not sufficiently far apart (consecutive detections of the same event) are also collapsed.
###Code
open_list = open_list[2:]
closed_list = closed_list[1:]
for i in range(open_list.size-1):
    # comparing against 10 is a deliberately generous margin
    # indices where the light-sensor difference exceeds the threshold should appear consecutively; scattered ones indicate bad data
if open_list[i+1] - open_list[i] < 10:
open_list[i+1] = open_list[i]
for i in range(closed_list.size-1):
if closed_list[i+1] - closed_list[i] < 10:
closed_list[i+1] = closed_list[i]
open_list = sorted(list(set(open_list)))
closed_list = sorted(list(set(closed_list)))
###Output
_____no_output_____
###Markdown
Test run
###Code
start, end = open_list[-3], closed_list[-3]
difference_df.iloc[start:end, :3].plot()
difference_df.iloc[start:end, :2].mean()
difference_df.iloc[7583:7596, :3].plot()
difference_df.iloc[7583:7596, :2].mean()
difference_df.iloc[2773:2785, :3].plot()
difference_df.iloc[2773:2785, :2].mean()
###Output
_____no_output_____
###Markdown
Below is a randomly chosen index range (while the locker is not being operated)- you can see that the change in values is very small
###Code
difference_df.iloc[3003:3043, :3].plot()
difference_df.iloc[3003:3043, :2].mean()
for i in range(len(open_list)):
start, end = open_list[i], closed_list[i]
sso, fsr = difference_df.iloc[start:end, :2].mean()
    # threshold: relative to the point where the light-sensor difference spikes (+100),
    # flag an event when the per-sensor mean over the following rows differs by 0.1 or more
    if sso > 0.1 and fsr < -0.1:
        print("An item was taken out : user (customer)")
    elif sso < -0.1 and fsr > 0.1:
        print("An item was put in : courier")
    else:
        print(f"UNKNOWN DATA, Index = {start}, sso = {sso}, fsr = {fsr}")
difference_df["isUsed"] = 0
for i in range(len(open_list)):
difference_df.iloc[open_list[i]:closed_list[i], -1] = 1
plt.plot(difference_df.iloc[:, -1])
plt.plot(difference_df.iloc[4000:5000, -1])
###Output
_____no_output_____
ml/prj3/Preprocessing.ipynb | ###Markdown
Load MNIST on Python 3.x
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import pickle
import numpy as np
with open('./mnist.pkl', 'rb') as f:
training_data, validation_data, test_data = pickle.load(f, encoding='latin1')
plt.imshow(
(training_data[0][1].reshape(28,28) == 0),
cmap='Greys_r'
)
plt.imshow(
plt.imread('./USPSdata/Numerals/0/0001b.png')
)
from glob import glob
imgs = {
i: np.c_[[plt.imread(f) for f in glob(f'./USPSdata/Numerals/{i}/*png')]] for i in range(10)
}
###Output
_____no_output_____
array_strings/ipynb/first_non_repeating_str.ipynb | ###Markdown
Given a string, find the first non-repeating character in it and return its index. If it doesn't exist, return -1.Examples:s = "leetcode"return 0.s = "loveleetcode",return 2.
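For reference, a linear-time variant (the function name is just illustrative) counts every character once with `Counter` instead of calling `count()` inside the loop:
```
from collections import Counter

def first_uniq_char(s):
    counts = Counter(s)              # one pass to count every character
    for i, ch in enumerate(s):       # return the first character whose count is 1
        if counts[ch] == 1:
            return i
    return -1

print(first_uniq_char("leetcode"))      # 0
print(first_uniq_char("loveleetcode"))  # 2
```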
###Code
def n_repeat(ls):
for i in range(len(ls)):
if ls.count(ls[i]) == 1:
return i
return False
print(n_repeat("geekgeeks"))
#In this example we are returning value
# Note we do not need dictionary over here
def n_repeat(s):
dic ={}
for ch in s:
if ch in dic.keys():
continue
else:
if s.count(ch) == 1:
return ch
else:
dic[ch]=s.count(ch)
return -1
print(n_repeat("geekgeeks"))
# In this example iteration is done
def firstUniqChar(s):
"""
:type s: str
:rtype: int
"""
dic ={}
for i in range(len(s)):
if s[i] in dic.keys():
continue
else:
if s.count(s[i]) == 1:
return i
else:
dic[s[i]]=s.count(s[i])
return -1
print(firstUniqChar("saspa"))
def f_dup(st):
dic = {}
for i in range(len(st)):
print("st[i]",st[i])
print("dic",dic)
print("count",st.count(st[i]) )
if st[i] in dic.keys():
print("inside if")
continue
else:
if st.count(st[i]) == 1:
return i
else:
dic[st[i]]=st.count(st[i])
return -1
print(f_dup("sapsa"))
###Output
st[i] s
dic {}
count 2
st[i] a
dic {'s': 2}
count 2
st[i] p
dic {'s': 2, 'a': 2}
count 1
2
analysis/poweranalysis/Neuropower_demo.ipynb | ###Markdown
Example neuropower methodology
###Code
from __future__ import division
from neuropower import cluster
from neuropower import BUM
from neuropower import neuropowermodels
from neuropower import peakdistribution
import nibabel as nib
import nilearn.plotting
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import seaborn as sns
import scipy
import brewer2mpl
import pandas as pd
colors = brewer2mpl.get_map('Set2', 'qualitative', 5).mpl_colors
twocol = brewer2mpl.get_map('Paired', 'qualitative', 5).mpl_colors
%matplotlib inline
###Output
_____no_output_____
###Markdown
load data and extract peaks
###Code
map_display=nilearn.plotting.plot_stat_map("Zstat1.nii",threshold=2,title='Z-statistic image')
SPM = nib.load("Zstat1.nii").get_data()
maskfile = "Mask.nii"
MASK = nib.load(maskfile).get_data()
exc = 2
peaks = cluster.PeakTable(SPM,exc,MASK)
pvalues = np.exp(-exc*(np.array(peaks.peak)-exc))
peaks['pval'] = [max(10**(-6),p) for p in pvalues]
peaks[1:10]
###Output
_____no_output_____
###Markdown
estimate $\pi_1$
###Code
bum = BUM.EstimatePi1(peaks['pval'],starts=10,seed=1)
bum
sns.set_style("white")
sns.set_palette(sns.color_palette("Paired"))
ax = sns.distplot(peaks['pval'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="P-values",ylabel="Frequency",title="Histogram of peak p-values")
xn_p = np.arange(0,1,0.01)
alt_p = bum['pi1']*scipy.stats.beta.pdf(xn_p,bum['a'],1)+1-bum['pi1']
nul_p = [1-bum['pi1']]*len(xn_p)
ax = sns.distplot(peaks['pval'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="P-values",ylabel="Frequency",title="Histogram of peak p-values with estimated mixture distribution \n $\pi_0$="+str(1-bum['pi1']))
plt.plot(xn_p,alt_p,lw=3,color=twocol[1])
plt.plot(xn_p,nul_p,lw=3,color=twocol[0])
###Output
_____no_output_____
###Markdown
Fit model
###Code
plt.figure(figsize=(4.5, 4))
sns.set_palette(sns.color_palette("Paired"))
ax = sns.distplot(peaks['peak'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="peak z-values",ylabel="Frequency",title="Histogram of peak z-values (exc = 2.0)")
modelfit = neuropowermodels.modelfit(peaks.peak,bum['pi1'],exc=exc,starts=10,method="RFT")
modelfit
xn_t = np.arange(exc,10,0.01)
alt_t = bum['pi1']*neuropowermodels.altPDF(xn_t,modelfit['mu'],modelfit['sigma'],exc=exc)
nul_t = [1-bum['pi1']]*neuropowermodels.nulPDF(xn_t,exc=exc)
mix_t = neuropowermodels.mixPDF(xn_t,pi1=bum['pi1'],mu=modelfit['mu'],sigma=modelfit['sigma'],exc=exc)
plt.figure(figsize=(4.5, 4))
ax = sns.distplot(peaks['peak'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="peak z-values",ylabel="Frequency",title="Histogram of peak z-values with theoretical nul distibution",xlim=[exc,6])
plt.plot(xn_t,nul_t)
plt.figure(figsize=(4.5, 4))
ax = sns.distplot(peaks['peak'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="peak z-values",ylabel="Frequency",title="Histogram of peak z-values with fitted distribution",xlim=[exc,6])
plt.plot(xn_t,nul_t,color=twocol[0])
plt.plot(xn_t,alt_t,color=twocol[0])
plt.plot(xn_t,mix_t,color=twocol[1])
thresholds = neuropowermodels.threshold(peaks.peak,peaks.pval,FWHM=[8,8,8],voxsize=[3,3,3],nvox=np.sum(MASK),alpha=0.05,exc=exc)
effect_cohen = modelfit['mu']/np.sqrt(13)
power_predicted = []
newsubs = range(10,71)
for s in newsubs:
projected_effect = effect_cohen*np.sqrt(s)
powerpred = {k:1-neuropowermodels.altCDF(v,projected_effect,modelfit['sigma'],exc=exc,method="RFT") for k,v in thresholds.items() if v!='nan'}
power_predicted.append(powerpred)
power_predicted_df = pd.DataFrame(power_predicted)
thresholds
names = ["UN","BF","RFT","BH"]
plt.figure(figsize=(4.5, 4))
ax = sns.distplot(peaks['peak'],kde=False,norm_hist=True,bins=10)
ax.set(xlabel="peak z-values",ylabel="Frequency",title="Histogram of peak z-values with theoretical nul distibution",xlim=[exc,6])
plt.plot(xn_t,nul_t)
k = 0
for method in names:
k+=1
plt.axvline(thresholds[method],color=colors[k])
plt.figure(figsize=(4.5, 4))
names = ["UN","BF","RFT","BH"]
for col in range(4):
plt.plot(newsubs,power_predicted_df[names[col]],color=colors[col+1])
plt.legend(loc='lower right')
plt.xlabel("subjects")
plt.ylabel("power")
plt.title("Power curves")
power_predicted_df['newsubs'] = newsubs
power_predicted_df[0:10]
ind = np.min([i for i,elem in enumerate(power_predicted_df['UN']) if elem>0.8])
sub = power_predicted_df['newsubs'][ind]
sub
###Output
_____no_output_____
Alcohol-Sales-Prediction/Alcohol-Sales-Forecast.ipynb | ###Markdown
Using RNN (LSTM/GRU) on a Time Series - Alcohol Sales ForecastIn this project, we're using data from the Federal Reserve Economic Database concerning Sales of Beer, Wine, and Distilled Alcoholic Beverages in millions of dollars from January 1992 to January 2020.Data source: https://fred.stlouisfed.org/series/S4248SM144NCEN Perform standard imports
###Code
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# This relates to plotting datetime values with matplotlib:
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
import matplotlib.pyplot as plt
plt.rcParams[
"figure.facecolor"
] = "w" # force white background on plots when using dark mode in JupyterLab
###Output
_____no_output_____
###Markdown
Load the datasetWe'll take advantage of pandas' built-in DatetimeIndex by passing parse_dates=True
###Code
df = pd.read_csv('Data/Alcohol_Sales.csv',index_col=0,parse_dates=True)
len(df)
# Always a good idea with time series data:
# There is no right or wrong here
# You can either drop the null value rows or you can replace null with mean
df.dropna(inplace=True)
len(df)
df.head()
df.tail()
###Output
_____no_output_____
###Markdown
Plotting time series dataWe can add titles, axis labels, and other features to the plot.We're going to tighten the x-axis to fit the width of the actual data with plt.autoscale(axis='x',tight=True).Alternatively you could set your own limits with plt.xlim(pd.Timestamp('1992-01-01'), pd.Timestamp('2019-01-01')) or some other values.
###Code
plt.figure(figsize=(12,4))
plt.title('Beer, Wine, and Alcohol Sales')
plt.ylabel('Sales (millions of dollars)')
plt.grid(True)
plt.autoscale(axis='x',tight=True)
plt.plot(df['S4248SM144NCEN']) # name of the column given by Fed
plt.show()
###Output
_____no_output_____
###Markdown
Prepare the dataIn the next steps we'll divide the data into train/test sets, then normalize the training values so that they fall between -1 and 1 (to improve training). We'll train the model, then predict into a period that matches the test set. Finally, we'll forecast into an unknown future.
###Code
# Extract values from the source .csv file
y = df['S4248SM144NCEN'].values.astype(float)
# Define a test size
test_size = 12
# Create train and test sets
train_set = y[:-test_size]
test_set = y[-test_size:]
test_set
###Output
_____no_output_____
###Markdown
Normalize the dataThe formula for normalizing data around zero is: $X_{norm} = \frac{X - \mu} {\sigma}$where $\mu$ is the population mean, and $\sigma$ is the population standard deviation.We want to perform min/max feature scaling so that our values fall between -1 and 1, as this makes hyperparameters converge faster.The formula for this would be: $X^{\prime} = a + \frac{(X - X_{min}) (b - a)} {X_{max} - X_{min}}$where $a={-1}$ and $b=1$We can use scikit-learn to do this, with sklearn.preprocessing.MinMaxScaler()NOTE: We only want to normalize the training set to avoid data leakage. If we include the test set then the higher average values of the test set could become part of the signal in the training set. There's a good article on data leakage here.After using transformed data to train the model and generate predictions, we'll inverse_transform the predicted values so that we can compare them to the actual test data.
###Code
from sklearn.preprocessing import MinMaxScaler
# Instantiate a scaler with a feature range from -1 to 1
scaler = MinMaxScaler(feature_range=(-1, 1))
# Normalize the training set
train_norm = scaler.fit_transform(train_set.reshape(-1, 1))
train_norm.min()
train_norm.max()
train_norm.mean()
type(train_norm)
###Output
_____no_output_____
###Markdown
Prepare data for LSTMHere we'll create our list of (seq/label) tuples from the training set. LSTM consumes a window of samples toward the first prediction, so with 337 monthly observations the size of our training data becomes ((337 - test_size) - window_size) tuples.
###Code
# Convert train_norm from an array to a tensor
train_norm = torch.FloatTensor(train_norm).view(-1)
# Define a window size
window_size = 12
# Define function to create seq/label tuples
# ws is the window size
def input_data(seq, ws):
out = []
L = len(seq)
for i in range(L-ws):
window = seq[i:i+ws]
label = seq[i+ws:i+ws+1]
out.append((window,label))
return out
# Apply the input_data function to train_norm
train_data = input_data(train_norm, window_size)
len(train_data) # this should equal 337-12-12
# Display the first seq/label tuple in the train data
train_data[0]
# Display the 2nd seq/label tuple in the train data
train_data[1]
# Display the 3rd seq/label tuple in the train data
train_data[2]
###Output
_____no_output_____
###Markdown
Define the modelThis time we'll use an LSTM layer of size (1,100).
###Code
class LSTMnetwork(nn.Module):
def __init__(self, input_size=1, hidden_size=100, output_size=1):
super(LSTMnetwork, self).__init__()
# Hidden dimensions
self.hidden_size = hidden_size
# Add an LSTM layer:
self.lstm = nn.LSTM(input_size, hidden_size)
# Add a fully-connected layer:
self.linear = nn.Linear(hidden_size, output_size)
# Initialize h0 and c0:
self.hidden = (torch.zeros(1,1,self.hidden_size),
torch.zeros(1,1,self.hidden_size))
def forward(self, seq):
lstm_out, self.hidden = self.lstm(seq.view(len(seq),1,-1), self.hidden)
pred = self.linear(lstm_out.view(len(seq),-1))
return pred[-1] # we only want the last value
###Output
_____no_output_____
###Markdown
Instantiate the model, define loss and optimization functions
###Code
model = LSTMnetwork()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
model
# parameters depth
len(list(model.parameters()))
for i in range(len(list(model.parameters()))):
print(list(model.parameters())[i].size())
###Output
torch.Size([400, 1])
torch.Size([400, 100])
torch.Size([400])
torch.Size([400])
torch.Size([1, 100])
torch.Size([1])
###Markdown
Train the model
###Code
epochs = 100
import time
start_time = time.time()
for epoch in range(epochs):
# extract the sequence & label from the training data
for seq, y_train in train_data:
# reset the parameters and hidden states
optimizer.zero_grad()
model.hidden = (torch.zeros(1,1,model.hidden_size),
torch.zeros(1,1,model.hidden_size))
y_pred = model(seq)
loss = criterion(y_pred, y_train)
loss.backward()
optimizer.step()
# print training result
print(f'Epoch: {epoch+1:2} Loss: {loss.item():10.8f}')
print(f'\nDuration: {time.time() - start_time:.0f} seconds')
###Output
Epoch: 1 Loss: 0.27213061
Epoch: 2 Loss: 0.23486409
Epoch: 3 Loss: 0.36305246
Epoch: 4 Loss: 0.23926432
Epoch: 5 Loss: 0.23267537
Epoch: 6 Loss: 0.21156687
Epoch: 7 Loss: 0.19906308
Epoch: 8 Loss: 0.18158565
Epoch: 9 Loss: 0.15881790
Epoch: 10 Loss: 0.15318750
Epoch: 11 Loss: 0.15143064
Epoch: 12 Loss: 0.15995049
Epoch: 13 Loss: 0.16027461
Epoch: 14 Loss: 0.15402775
Epoch: 15 Loss: 0.15876684
Epoch: 16 Loss: 0.15029652
Epoch: 17 Loss: 0.14277886
Epoch: 18 Loss: 0.11314578
Epoch: 19 Loss: 0.08872589
Epoch: 20 Loss: 0.08905081
Epoch: 21 Loss: 0.32246891
Epoch: 22 Loss: 0.27161974
Epoch: 23 Loss: 0.25211734
Epoch: 24 Loss: 0.22751518
Epoch: 25 Loss: 0.19006512
Epoch: 26 Loss: 0.32366836
Epoch: 27 Loss: 0.24500629
Epoch: 28 Loss: 0.15842056
Epoch: 29 Loss: 0.20487115
Epoch: 30 Loss: 0.09797894
Epoch: 31 Loss: 0.22789279
Epoch: 32 Loss: 0.22984600
Epoch: 33 Loss: 0.22901720
Epoch: 34 Loss: 0.21031110
Epoch: 35 Loss: 0.17113298
Epoch: 36 Loss: 0.13098751
Epoch: 37 Loss: 0.05924523
Epoch: 38 Loss: 0.06839544
Epoch: 39 Loss: 0.02218805
Epoch: 40 Loss: 0.05368588
Epoch: 41 Loss: 0.00053905
Epoch: 42 Loss: 0.00397253
Epoch: 43 Loss: 0.00043302
Epoch: 44 Loss: 0.02555052
Epoch: 45 Loss: 0.07280198
Epoch: 46 Loss: 0.06508605
Epoch: 47 Loss: 0.05368875
Epoch: 48 Loss: 0.03782743
Epoch: 49 Loss: 0.04207291
Epoch: 50 Loss: 0.03251502
Epoch: 51 Loss: 0.03522571
Epoch: 52 Loss: 0.04377595
Epoch: 53 Loss: 0.03071015
Epoch: 54 Loss: 0.03000759
Epoch: 55 Loss: 0.02351148
Epoch: 56 Loss: 0.02442423
Epoch: 57 Loss: 0.02387267
Epoch: 58 Loss: 0.03191353
Epoch: 59 Loss: 0.02241893
Epoch: 60 Loss: 0.01816599
Epoch: 61 Loss: 0.01922058
Epoch: 62 Loss: 0.01711075
Epoch: 63 Loss: 0.01479641
Epoch: 64 Loss: 0.01847376
Epoch: 65 Loss: 0.00193354
Epoch: 66 Loss: 0.01306059
Epoch: 67 Loss: 0.01081176
Epoch: 68 Loss: 0.01055785
Epoch: 69 Loss: 0.01070392
Epoch: 70 Loss: 0.01075548
Epoch: 71 Loss: 0.01068058
Epoch: 72 Loss: 0.01043542
Epoch: 73 Loss: 0.01003232
Epoch: 74 Loss: 0.00963149
Epoch: 75 Loss: 0.00944336
Epoch: 76 Loss: 0.00955658
Epoch: 77 Loss: 0.00994068
Epoch: 78 Loss: 0.01043198
Epoch: 79 Loss: 0.01080057
Epoch: 80 Loss: 0.01093336
Epoch: 81 Loss: 0.01084940
Epoch: 82 Loss: 0.01059914
Epoch: 83 Loss: 0.01037212
Epoch: 84 Loss: 0.01060433
Epoch: 85 Loss: 0.01132560
Epoch: 86 Loss: 0.01196929
Epoch: 87 Loss: 0.01217649
Epoch: 88 Loss: 0.01208251
Epoch: 89 Loss: 0.01173907
Epoch: 90 Loss: 0.01115024
Epoch: 91 Loss: 0.01088442
Epoch: 92 Loss: 0.01109031
Epoch: 93 Loss: 0.00801962
Epoch: 94 Loss: 0.00643960
Epoch: 95 Loss: 0.00116488
Epoch: 96 Loss: 0.00349307
Epoch: 97 Loss: 0.00024064
Epoch: 98 Loss: 0.00163518
Epoch: 99 Loss: 0.00022112
Epoch: 100 Loss: 0.00028595
Duration: 186 seconds
###Markdown
Run predictions and compare to known test set
###Code
future = 12
# Add the last window of training values to the list of predictions
preds = train_norm[-window_size:].tolist()
# Set the model to evaluation mode
model.eval()
for i in range(future):
seq = torch.FloatTensor(preds[-window_size:])
with torch.no_grad():
model.hidden = (torch.zeros(1,1,model.hidden_size),
torch.zeros(1,1,model.hidden_size))
preds.append(model(seq).item())
# Display predicted values
preds[window_size:] # equivalent to preds[-future:]
###Output
_____no_output_____
###Markdown
Invert the normalization

We want to compare our test predictions to the original data, so we need to undo the previous normalization step. Note that inverse_transform uses the parameters from the most recent fit; here the scaler was fitted on the training set, so the predictions are rescaled back to the original units using those training-set statistics.
###Code
true_predictions = scaler.inverse_transform(np.array(preds[window_size:]).reshape(-1, 1))
true_predictions
df['S4248SM144NCEN'][-12:]
###Output
_____no_output_____
###Markdown
It looks like our predictions weren't that far off!

Plot the results

Our original data contains a datetime index, but our predicted values do not. We can create a range of dates using NumPy that are spaced one month apart using dtype='datetime64[M]', and then store them with day values to match our dataset with .astype('datetime64[D]').
###Code
# Remember that the stop date has to be later than the last predicted value.
x = np.arange('2019-04-01', '2020-04-01', dtype='datetime64[M]').astype('datetime64[D]')
x
plt.figure(figsize=(12,4))
plt.title('Beer, Wine, and Alcohol Sales')
plt.ylabel('Sales (millions of dollars)')
plt.grid(True)
plt.autoscale(axis='x',tight=True)
plt.plot(df['S4248SM144NCEN'])
plt.plot(x,true_predictions)
plt.show()
# Plot the end of the graph
fig = plt.figure(figsize=(12,4))
plt.title('Beer, Wine, and Alcohol Sales')
plt.ylabel('Sales (millions of dollars)')
plt.grid(True)
plt.autoscale(axis='x',tight=True)
fig.autofmt_xdate()
# Select the end of the graph with slice notation:
plt.plot(df['S4248SM144NCEN']['2017-01-01':])
plt.plot(x,true_predictions)
plt.show()
###Output
_____no_output_____
###Markdown
For more information on x-axis date formatting in matplotlib, check out matplotlib.figure.Figure.autofmt_xdate and matplotlib.dates.DateFormatter.

Forecast into an unknown future

This time we'll continue training the model using the entire dataset, and predict 12 steps into the future.
###Code
epochs = 100
# set model to back to training mode
model.train()
# feature scale the entire dataset
y_norm = scaler.fit_transform(y.reshape(-1, 1))
y_norm = torch.FloatTensor(y_norm).view(-1)
all_data = input_data(y_norm,window_size)
import time
start_time = time.time()
for epoch in range(epochs):
# train on the full set of sequences
for seq, y_train in all_data:
# reset the parameters and hidden states
optimizer.zero_grad()
model.hidden = (torch.zeros(1,1,model.hidden_size),
torch.zeros(1,1,model.hidden_size))
y_pred = model(seq)
loss = criterion(y_pred, y_train)
loss.backward()
optimizer.step()
# print training result
print(f'Epoch: {epoch+1:2} Loss: {loss.item():10.8f}')
print(f'\nDuration: {time.time() - start_time:.0f} seconds')
###Output
Epoch: 1 Loss: 0.00000716
Epoch: 2 Loss: 0.00057569
Epoch: 3 Loss: 0.00000475
Epoch: 4 Loss: 0.00318039
Epoch: 5 Loss: 0.00000296
Epoch: 6 Loss: 0.00535440
Epoch: 7 Loss: 0.00157587
Epoch: 8 Loss: 0.00066996
Epoch: 9 Loss: 0.00001635
Epoch: 10 Loss: 0.00019139
Epoch: 11 Loss: 0.00078499
Epoch: 12 Loss: 0.00206006
Epoch: 13 Loss: 0.00030018
Epoch: 14 Loss: 0.00004410
Epoch: 15 Loss: 0.00191987
Epoch: 16 Loss: 0.00007019
Epoch: 17 Loss: 0.00000234
Epoch: 18 Loss: 0.06088843
Epoch: 19 Loss: 0.00000417
Epoch: 20 Loss: 0.00005600
Epoch: 21 Loss: 0.00000887
Epoch: 22 Loss: 0.00005473
Epoch: 23 Loss: 0.00006495
Epoch: 24 Loss: 0.00005274
Epoch: 25 Loss: 0.00000278
Epoch: 26 Loss: 0.00063272
Epoch: 27 Loss: 0.00016414
Epoch: 28 Loss: 0.00008934
Epoch: 29 Loss: 0.00003102
Epoch: 30 Loss: 0.00002161
Epoch: 31 Loss: 0.00011580
Epoch: 32 Loss: 0.00009500
Epoch: 33 Loss: 0.00008028
Epoch: 34 Loss: 0.00000129
Epoch: 35 Loss: 0.00076566
Epoch: 36 Loss: 0.00007239
Epoch: 37 Loss: 0.00028869
Epoch: 38 Loss: 0.00003524
Epoch: 39 Loss: 0.00035948
Epoch: 40 Loss: 0.00027359
Epoch: 41 Loss: 0.00053236
Epoch: 42 Loss: 0.00016622
Epoch: 43 Loss: 0.00006576
Epoch: 44 Loss: 0.00054587
Epoch: 45 Loss: 0.00168048
Epoch: 46 Loss: 0.00030523
Epoch: 47 Loss: 0.00121771
Epoch: 48 Loss: 0.00109653
Epoch: 49 Loss: 0.00267132
Epoch: 50 Loss: 0.00200672
Epoch: 51 Loss: 0.00070872
Epoch: 52 Loss: 0.00074508
Epoch: 53 Loss: 0.00076156
Epoch: 54 Loss: 0.00145438
Epoch: 55 Loss: 0.00348031
Epoch: 56 Loss: 0.00002929
Epoch: 57 Loss: 0.00032769
Epoch: 58 Loss: 0.00389468
Epoch: 59 Loss: 0.00080655
Epoch: 60 Loss: 0.00185473
Epoch: 61 Loss: 0.00173423
Epoch: 62 Loss: 0.00004778
Epoch: 63 Loss: 0.00050959
Epoch: 64 Loss: 0.00028702
Epoch: 65 Loss: 0.00316277
Epoch: 66 Loss: 0.00050513
Epoch: 67 Loss: 0.00090066
Epoch: 68 Loss: 0.00074043
Epoch: 69 Loss: 0.00000010
Epoch: 70 Loss: 0.00025293
Epoch: 71 Loss: 0.00091662
Epoch: 72 Loss: 0.00003336
Epoch: 73 Loss: 0.00002637
Epoch: 74 Loss: 0.00071511
Epoch: 75 Loss: 0.00439526
Epoch: 76 Loss: 0.00132170
Epoch: 77 Loss: 0.00243867
Epoch: 78 Loss: 0.00036165
Epoch: 79 Loss: 0.00884334
Epoch: 80 Loss: 0.00011419
Epoch: 81 Loss: 0.00094733
Epoch: 82 Loss: 0.00039582
Epoch: 83 Loss: 0.00669908
Epoch: 84 Loss: 0.00047514
Epoch: 85 Loss: 0.00080109
Epoch: 86 Loss: 0.00148919
Epoch: 87 Loss: 0.00030508
Epoch: 88 Loss: 0.00004638
Epoch: 89 Loss: 0.00397905
Epoch: 90 Loss: 0.01397627
Epoch: 91 Loss: 0.00137753
Epoch: 92 Loss: 0.00034056
Epoch: 93 Loss: 0.00000783
Epoch: 94 Loss: 0.00081068
Epoch: 95 Loss: 0.00108648
Epoch: 96 Loss: 0.00043386
Epoch: 97 Loss: 0.00049079
Epoch: 98 Loss: 0.00175973
Epoch: 99 Loss: 0.00037281
Epoch: 100 Loss: 0.00017556
Duration: 200 seconds
###Markdown
Predict future values, plot the result
###Code
window_size = 12
future = 12
L = len(y)
preds = y_norm[-window_size:].tolist()
model.eval()
for i in range(future):
seq = torch.FloatTensor(preds[-window_size:])
with torch.no_grad():
# Reset the hidden parameters here!
model.hidden = (torch.zeros(1,1,model.hidden_size),
torch.zeros(1,1,model.hidden_size))
preds.append(model(seq).item())
# Inverse-normalize the prediction set
true_predictions = scaler.inverse_transform(np.array(preds).reshape(-1, 1))
# PLOT THE RESULT
# Set a data range for the predicted data.
# Remember that the stop date has to be later than the last predicted value.
x = np.arange('2019-02-01', '2020-02-01', dtype='datetime64[M]').astype('datetime64[D]')
plt.figure(figsize=(12,4))
plt.title('Beer, Wine, and Alcohol Sales')
plt.ylabel('Sales (millions of dollars)')
plt.grid(True)
plt.autoscale(axis='x',tight=True)
plt.plot(df['S4248SM144NCEN'])
plt.plot(x,true_predictions[window_size:])
plt.show()
fig = plt.figure(figsize=(12,4))
plt.title('Beer, Wine, and Alcohol Sales')
plt.ylabel('Sales (millions of dollars)')
plt.grid(True)
plt.autoscale(axis='x',tight=True)
fig.autofmt_xdate()
plt.plot(df['S4248SM144NCEN']['2017-01-01':])
plt.plot(x,true_predictions[window_size:])
plt.show()
###Output
_____no_output_____
###Markdown
DONE

BONUS: To save time in the future, we've written a function that takes in a time-series training data set and outputs a list of (seq, label) tuples of tensors.
###Code
# Load dependencies
from sklearn.preprocessing import MinMaxScaler
# Instantiate a scaler
"""
This has to be done outside the function definition so that
we can inverse_transform the prediction set later on.
"""
scaler = MinMaxScaler(feature_range=(-1, 1))
# Extract values from the source .csv file
df = pd.read_csv('data/Alcohol_Sales.csv',index_col=0,parse_dates=True)
y = df['S4248SM144NCEN'].values.astype(float)
# Define a test size
test_size = 12
# Create the training set of values
train_set = y[:-test_size]
# DEFINE A FUNCTION:
def create_train_data(seq,ws=12):
"""Takes in a training sequence and window size (ws) of
default size 12, returns a tensor of (seq/label) tuples"""
seq_norm = scaler.fit_transform(seq.reshape(-1, 1))
seq_norm = torch.FloatTensor(seq_norm).view(-1)
out = []
L = len(seq_norm)
for i in range(L-ws):
window = seq_norm[i:i+ws]
label = seq_norm[i+ws:i+ws+1]
out.append((window,label))
return out
# Apply the function to train_set
train_data = create_train_data(train_set,12)
len(train_data)
train_data[0]
help(create_train_data)
###Output
Help on function create_train_data in module __main__:
create_train_data(seq, ws=12)
Takes in a training sequence and window size (ws) of
default size 12, returns a tensor of (seq/label) tuples
|
datascience/stats_2_sample_independent_t_test_1.ipynb | ###Markdown
PROBLEM

We have an input file (stats_2_sample_independent_t_test_1.csv) with students' marks in Maths. Formulate a hypothesis and test it to see if there is a significant difference between the marks of boys and girls.

Hypothesis Testing

Null hypothesis (H0): µboys = µgirls, i.e. there is no significant difference between the marks of boys and girls.

Alternate hypothesis (Ha): µboys != µgirls, i.e. there is a significant difference between the marks of boys and girls.
###Code
import os
import pandas as pd
import scipy.stats as st
import matplotlib.pyplot as plt
%matplotlib inline
# Check the current working directory and set it appropriately.
# cwd should point to the location where the input csv file resides.
os.getcwd()
# alternate way to list files in a directory
import glob
glob.glob('./*.csv')
# Alternately you can specify the full path to the file.
# Here, I'm giving only the filename.
data = pd.read_csv("stats_2_sample_independent_t_test_1.csv")
data.columns
data.describe()
data.info()
# Using the pandas groupby() for grouping the data by gender
grouped = data.groupby('Gender')
grouped
# Segregating boys and girls based on Gender
# Value of Gender is 1 for girls and 0 for boys
Girls = grouped.get_group(1)
Boys = grouped.get_group(0)
Girls.info()
Girls.columns
Boys.info()
Boys.columns
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 91 entries, 0 to 91
Data columns (total 11 columns):
ID 91 non-null int64
Gender 91 non-null int64
Race 91 non-null int64
SEB 91 non-null int64
School 91 non-null int64
Prog 91 non-null int64
Read 91 non-null int64
Write 91 non-null int64
Math1 91 non-null int64
Math2 91 non-null int64
SST 91 non-null int64
dtypes: int64(11)
memory usage: 8.5 KB
###Markdown
Here, we have two independent samples.

Two Sample Independent T-Test

Note: we don't use a t-test for more than two samples (ANOVA is used in that case).
###Code
st.ttest_ind(Girls.Math1, Boys.Math1)
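# Interpreting the result (a sketch): compare the p-value with a chosen
# significance level, e.g. alpha = 0.05.
t_stat, p_value = st.ttest_ind(Girls.Math1, Boys.Math1)
alpha = 0.05
if p_value < alpha:
    print("Reject the null hypothesis: the difference in Math1 marks is significant.")
else:
    print("Fail to reject the null hypothesis: no significant difference detected.")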
###Output
_____no_output_____ |
recursion/letter_combination_phone.ipynb | ###Markdown
Given a string containing digits from 2-9 inclusive, return all possible letter combinations that the number could represent. Return the answer in any order. A mapping of digits to letters (just like on the telephone buttons) is given below. Note that 1 does not map to any letters.

Example 1: Input: digits = "23", Output: ["ad","ae","af","bd","be","bf","cd","ce","cf"]

Example 2: Input: digits = "", Output: []

Example 3: Input: digits = "2", Output: ["a","b","c"]

Constraints: the length of digits is between 0 and 4, and digits[i] is a digit in the range ['2', '9'].

Solution
###Code
def letterCombinations(digits):
"""
:type digits: str
:rtype: List[str]
"""
phone = {'2': ['a', 'b', 'c'],
'3': ['d', 'e', 'f'],
'4': ['g', 'h', 'i'],
'5': ['j', 'k', 'l'],
'6': ['m', 'n', 'o'],
'7': ['p', 'q', 'r', 's'],
'8': ['t', 'u', 'v'],
'9': ['w', 'x', 'y', 'z']}
def backtrack(combination, next_digits):
# if there is no more digits to check
if len(next_digits) == 0:
# the combination is done
output.append(combination)
# if there are still digits to check
else:
# iterate over all letters which map
# the next available digit
for letter in phone[next_digits[0]]:
# append the current letter to the combination
# and proceed to the next digits
backtrack(combination + letter, next_digits[1:])
output = []
if digits:
backtrack("", digits)
return output
print(letterCombinations("23"))
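# An alternative iterative sketch (not part of the original solution) using
# itertools.product to take the Cartesian product of the letter groups.
from itertools import product
def letterCombinationsIterative(digits):
    phone = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
             '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}
    if not digits:
        return []
    return [''.join(combo) for combo in product(*(phone[d] for d in digits))]
print(letterCombinationsIterative("23"))  # ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']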
def letterCombinations(digits):
"""
:type digits: str
:rtype: List[str]
"""
phone = {'2': ['a', 'b', 'c'],
'3': ['d', 'e', 'f'],
'4': ['g', 'h', 'i'],
'5': ['j', 'k', 'l'],
'6': ['m', 'n', 'o'],
'7': ['p', 'q', 'r', 's'],
'8': ['t', 'u', 'v'],
'9': ['w', 'x', 'y', 'z']}
def backtrack(combination, next_digits):
print("combination",combination)
print("next_digits",next_digits)
# if there is no more digits to check
if len(next_digits) == 0:
# the combination is done
print("\nreturn")
output.append(combination)
print("output",output)
# if there are still digits to check
else:
# iterate over all letters which map
# the next available digit
print("phone[",next_digits[0],"]",phone[next_digits[0]])
for letter in phone[next_digits[0]]:
# append the current letter to the combination
# and proceed to the next digits
print("\nletter",letter)
print("combination + letter",combination + letter)
print("next_digits[1:]",next_digits[1:])
backtrack(combination + letter, next_digits[1:])
output = []
if digits:
print("inside")
backtrack("", digits)
return output
print(letterCombinations("23"))
###Output
inside
combination
next_digits 23
phone[ 2 ] ['a', 'b', 'c']
letter a
combination + letter a
next_digits[1:] 3
combination a
next_digits 3
phone[ 3 ] ['d', 'e', 'f']
letter d
combination + letter ad
next_digits[1:]
combination ad
next_digits
return
output ['ad']
letter e
combination + letter ae
next_digits[1:]
combination ae
next_digits
return
output ['ad', 'ae']
letter f
combination + letter af
next_digits[1:]
combination af
next_digits
return
output ['ad', 'ae', 'af']
letter b
combination + letter b
next_digits[1:] 3
combination b
next_digits 3
phone[ 3 ] ['d', 'e', 'f']
letter d
combination + letter bd
next_digits[1:]
combination bd
next_digits
return
output ['ad', 'ae', 'af', 'bd']
letter e
combination + letter be
next_digits[1:]
combination be
next_digits
return
output ['ad', 'ae', 'af', 'bd', 'be']
letter f
combination + letter bf
next_digits[1:]
combination bf
next_digits
return
output ['ad', 'ae', 'af', 'bd', 'be', 'bf']
letter c
combination + letter c
next_digits[1:] 3
combination c
next_digits 3
phone[ 3 ] ['d', 'e', 'f']
letter d
combination + letter cd
next_digits[1:]
combination cd
next_digits
return
output ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd']
letter e
combination + letter ce
next_digits[1:]
combination ce
next_digits
return
output ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce']
letter f
combination + letter cf
next_digits[1:]
combination cf
next_digits
return
output ['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']
['ad', 'ae', 'af', 'bd', 'be', 'bf', 'cd', 'ce', 'cf']
|
deepLearning/presenciales/Semana_2_Sistemas_cognitivos_artificiales.ipynb | ###Markdown
First, let's import TensorFlow and check its version.
###Code
import tensorflow as tf
print(tf.__version__)
###Output
2.4.1
###Markdown
Now, let's import Keras and check its version.
###Code
from tensorflow import keras
print(keras.__version__)
###Output
2.4.0
###Markdown
**Building an image classifier using the Sequential API.** Now let's load one of the Keras training datasets; the code below uses Fashion-MNIST (images of clothing items).
###Code
fashion_mnist = keras.datasets.fashion_mnist
(X_train_full, y_train_full), (X_test, y_test) = fashion_mnist.load_data()
###Output
_____no_output_____
###Markdown
Let's look at the size of the dataset and the size of each image.
###Code
X_train_full.shape
###Output
_____no_output_____
###Markdown
Here we check the data type.
###Code
X_train_full.dtype
###Output
_____no_output_____
###Markdown
A bit of data preprocessing: since we are going to use gradient descent, we need to rescale the data to the 0-1 range.
###Code
X_valid, X_train = X_train_full[:5000]/255.0, X_train_full[5000:]/255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
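# Pixel intensities are integers in 0-255; dividing by 255.0 rescales them to
# [0, 1], which keeps gradients well-behaved during training. Note that X_test
# is left unscaled here, which likely explains the inflated test loss reported
# by evaluate() further below; scaling it the same way (X_test/255.0) would avoid that.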
###Output
_____no_output_____
###Markdown
We give a name to each of the classes.
###Code
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
class_names[y_train[0]]
###Output
_____no_output_____
###Markdown
**Creating the model using the Sequential API**
###Code
model= keras.models.Sequential()
model.add(keras.layers.Flatten(input_shape=[28,28]))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(300, activation="relu"))
model.add(keras.layers.Dense(10, activation="softmax"))
###Output
_____no_output_____
###Markdown
Another way to perform this same layer-adding operation:
###Code
# model= keras.models.Sequential([]
# keras.layers.Flatten(input_shape=[28,28])
# keras.layers.Dense(300, activation="relu")
# keras.layers.Dense(300, activation="relu")
# keras.layers.Dense(10, activation="softmax")
#])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten (Flatten) (None, 784) 0
_________________________________________________________________
dense (Dense) (None, 300) 235500
_________________________________________________________________
dense_1 (Dense) (None, 300) 90300
_________________________________________________________________
dense_2 (Dense) (None, 10) 3010
=================================================================
Total params: 328,810
Trainable params: 328,810
Non-trainable params: 0
_________________________________________________________________
###Markdown
Now let's look at the layers.
###Code
model.layers
###Output
_____no_output_____
###Markdown
Some interesting operations and commands:
###Code
hidden1=model.layers[1]
hidden1.name
model.get_layer(hidden1.name) is hidden1  # looks the layer up by name; returns True
###Output
_____no_output_____
###Markdown
All of a layer's parameters are accessible using "get_weights()" and "set_weights()".
###Code
weights, biases = hidden1.get_weights()
weights
weights.shape
biases
biases.shape
###Output
_____no_output_____
###Markdown
**Compiling the model**
###Code
model.compile(loss="sparse_categorical_crossentropy",
optimizer= "sgd",
metrics =["accuracy"])
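# sparse_categorical_crossentropy is used because the labels are integer class
# indices (0-9) rather than one-hot vectors; "sgd" is plain stochastic gradient
# descent with Keras' default learning rate.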
###Output
_____no_output_____
###Markdown
**Training and evaluating the model**
###Code
history = model.fit(X_train,y_train, epochs=30, validation_data=(X_valid, y_valid))
###Output
Epoch 1/30
1719/1719 [==============================] - 8s 4ms/step - loss: 1.0155 - accuracy: 0.6828 - val_loss: 0.5399 - val_accuracy: 0.8158
Epoch 2/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.5036 - accuracy: 0.8267 - val_loss: 0.4507 - val_accuracy: 0.8474
Epoch 3/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.4571 - accuracy: 0.8397 - val_loss: 0.4332 - val_accuracy: 0.8536
Epoch 4/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.4168 - accuracy: 0.8537 - val_loss: 0.4342 - val_accuracy: 0.8476
Epoch 5/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.4024 - accuracy: 0.8570 - val_loss: 0.3872 - val_accuracy: 0.8644
Epoch 6/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3818 - accuracy: 0.8659 - val_loss: 0.3819 - val_accuracy: 0.8680
Epoch 7/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3724 - accuracy: 0.8688 - val_loss: 0.3850 - val_accuracy: 0.8658
Epoch 8/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3619 - accuracy: 0.8718 - val_loss: 0.3603 - val_accuracy: 0.8734
Epoch 9/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3442 - accuracy: 0.8785 - val_loss: 0.3535 - val_accuracy: 0.8750
Epoch 10/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3403 - accuracy: 0.8801 - val_loss: 0.3412 - val_accuracy: 0.8784
Epoch 11/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3282 - accuracy: 0.8816 - val_loss: 0.3476 - val_accuracy: 0.8748
Epoch 12/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3231 - accuracy: 0.8837 - val_loss: 0.3485 - val_accuracy: 0.8752
Epoch 13/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3162 - accuracy: 0.8867 - val_loss: 0.3457 - val_accuracy: 0.8758
Epoch 14/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.3085 - accuracy: 0.8886 - val_loss: 0.3344 - val_accuracy: 0.8832
Epoch 15/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2960 - accuracy: 0.8935 - val_loss: 0.3313 - val_accuracy: 0.8828
Epoch 16/30
1719/1719 [==============================] - 7s 4ms/step - loss: 0.3064 - accuracy: 0.8888 - val_loss: 0.3268 - val_accuracy: 0.8842
Epoch 17/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2891 - accuracy: 0.8965 - val_loss: 0.3322 - val_accuracy: 0.8830
Epoch 18/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2903 - accuracy: 0.8951 - val_loss: 0.3177 - val_accuracy: 0.8836
Epoch 19/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2775 - accuracy: 0.9006 - val_loss: 0.3097 - val_accuracy: 0.8896
Epoch 20/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2717 - accuracy: 0.9031 - val_loss: 0.3197 - val_accuracy: 0.8838
Epoch 21/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2650 - accuracy: 0.9042 - val_loss: 0.3080 - val_accuracy: 0.8874
Epoch 22/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2587 - accuracy: 0.9071 - val_loss: 0.3241 - val_accuracy: 0.8848
Epoch 23/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2581 - accuracy: 0.9067 - val_loss: 0.3114 - val_accuracy: 0.8886
Epoch 24/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2503 - accuracy: 0.9102 - val_loss: 0.3032 - val_accuracy: 0.8924
Epoch 25/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2493 - accuracy: 0.9071 - val_loss: 0.3088 - val_accuracy: 0.8854
Epoch 26/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2467 - accuracy: 0.9128 - val_loss: 0.2991 - val_accuracy: 0.8930
Epoch 27/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2398 - accuracy: 0.9128 - val_loss: 0.3328 - val_accuracy: 0.8852
Epoch 28/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2347 - accuracy: 0.9133 - val_loss: 0.2932 - val_accuracy: 0.8938
Epoch 29/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2331 - accuracy: 0.9175 - val_loss: 0.2971 - val_accuracy: 0.8914
Epoch 30/30
1719/1719 [==============================] - 6s 4ms/step - loss: 0.2269 - accuracy: 0.9166 - val_loss: 0.3081 - val_accuracy: 0.8932
###Markdown
Now let's look at the loss, accuracy, validation_loss and validation_accuracy values.
###Code
import pandas as pd
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1)
plt.show()
###Output
_____no_output_____
###Markdown
Notice that the training accuracy and validation accuracy curves increase during training, while the loss curves get smaller.
###Code
model.evaluate(X_test,y_test)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 71.7711 - accuracy: 0.8399
|
facenet_classifier_all_three_datasets.ipynb | ###Markdown
###Code
import numpy as np
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
# Load encoding
# 3_frame_seq
X = np.load('drive/My Drive/dataset_3_frame_seq_faces/X_3.npy')
y = np.load('drive/My Drive/dataset_3_frame_seq_faces/Y_label_3.npy')
frames = 3
# 5_frame_seq
# X = np.load('drive/My Drive/dataset_5_frame_seq_faces/X_5.npy')
# y = np.load('drive/My Drive/dataset_5_frame_seq_faces/Y_label_5.npy')
# frames = 5
# 10_frame_seq
# X = np.load('drive/My Drive/facea_seqs_one_frame_to_the_right/X.npy')
# y = np.load('drive/My Drive/facea_seqs_one_frame_to_the_right/Y_label.npy')
# frames = 10
test_size = 0.3
if frames ==5:
test_size = 0.1
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = test_size, random_state = 0)
# y_train = to_categorical(y_train, num_classes=7)
# y_test = to_categorical(y_test, num_classes=7)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
X_train_new = np.zeros(shape=(X_train.shape[0] * X_train.shape[1], X_train.shape[2]))
X_test_new = np.zeros(shape=(X_test.shape[0] * X_test.shape[1], X_test.shape[2]))
y_train_new = np.zeros(shape=(y_train.shape[0] * X_train.shape[1]))
y_test_new = np.zeros(shape=(y_test.shape[0] * X_train.shape[1]))
train_index = 0
for seq in range(X_train.shape[0]):
for frame in range(X_train.shape[1]):
X_train_new[train_index] = X_train[seq,frame]
y_train_new[train_index] = y_train[seq]
train_index += 1
print(train_index)
test_index = 0
for seq in range(X_test.shape[0]):
for frame in range(X_test.shape[1]):
X_test_new[test_index] = X_test[seq,frame]
y_test_new[test_index] = y_test[seq]
test_index += 1
print(test_index)
y_train_new = to_categorical(y_train_new, num_classes=7)
y_test_new = to_categorical(y_test_new, num_classes=7)
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
def classifier():
model = Sequential()
model.add(Dense(64, input_dim=128, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(7, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
return model
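# The model above maps 128-dimensional FaceNet embeddings to the 7 facial
# expression classes through two small ReLU hidden layers (64 and 32 units)
# and a softmax output.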
model = classifier()
epochs = 50
if frames == 10:
epochs = 100
model.fit(X_train_new, y_train_new, epochs=epochs, verbose=1, validation_split=0.1)
def getFaceExpressionFromIndex(i):
if (i == 0):
return 'surprise'
elif (i == 1):
return 'smile'
elif (i == 2):
return 'sad'
elif (i == 3):
return 'anger'
elif (i == 4):
return 'fear'
elif (i == 5):
return 'disgust'
elif (i == 6):
return 'none'
else:
print(i)
test_predictions = model.predict(X_test_new)
correct = 0
for i in range(len(test_predictions)):
if i % X_test.shape[1] ==0:
truth = np.argmax(y_test_new[i])
predictions = []
for j in range(X_test.shape[1]):
predictions.append(np.argmax(test_predictions[i + j]))
if truth == max(predictions):
correct += 1
else:
print(f'Wrong classification, truth: {getFaceExpressionFromIndex(truth)}')
print(f'Wrong classification, prediction: {getFaceExpressionFromIndex(max(predictions))}')
print("---------------------------------------------------------------------------")
print(f'Accuracy of predicitons: {correct / (len(test_predictions) / X_test.shape[1])}')
print(f'Got correct: {correct}')
print(f'Got wrong: {int(len(test_predictions) / X_test.shape[1]) - correct}')
from sklearn.manifold import TSNE
from matplotlib import pyplot as plt
tsne = TSNE(n_components=2, random_state=0)
X_2d = tsne.fit_transform(X_test_new[:1000])
# Integer class labels recovered from the one-hot encoded test labels
y_labels = np.argmax(y_test_new[:1000], axis=1)
target_ids = range(0,7)
plt.figure(figsize=(20, 18))
colors = 'r', 'g', 'b', 'c', 'm', 'y', 'k'
for i, c, label in zip(target_ids, colors, ["surprise", "smile","sad", "anger","fear", "disgust","none"]):
    plt.scatter(X_2d[y_labels == i, 0], X_2d[y_labels == i, 1], c=c, label=label)
plt.legend(prop={'size': 30})
plt.show()
###Output
_____no_output_____ |
10_pipeline/kubeflow/99_DISABLE_PUBLIC_ENDPOINT_TO_AVOID_GETTING_HACKED.ipynb | ###Markdown
Disable the Public Kubeflow UI

You must run this in your SageMaker notebook - not here in this notebook! This removes `istio-ingress` from the `istio-system` namespace and removes the publicly-available `LoadBalancer` endpoint.

WE ARE REMOVING THE PUBLICLY AVAILABLE LOADBALANCER FOR THE KUBEFLOW DASHBOARD UI BECAUSE WE DON'T WANT TO GET HACKED.
###Code
%%bash
source ~/.bash_profile
cd ${KF_DIR}/.cache/manifests/manifests-1.0.2/
kubectl delete -k aws/istio-ingress/base --namespace istio-system
cd ${KF_DIR}
###Output
_____no_output_____
###Markdown
_If you see an error ^^ above ^^, then you are running this in the wrong notebook server. Run this in your SageMaker Notebook. Not here!!_
###Code
%%javascript
Jupyter.notebook.save_checkpoint()
Jupyter.notebook.session.delete();
###Output
_____no_output_____ |
notebooks/kr08/2_2/python_original.ipynb | ###Markdown
Program 2.1 (SIR model) - original Python code
###Code
import scipy.integrate as spi
import numpy as np
import pylab as pl
%matplotlib inline
beta=1.4247
gamma=0.14286
TS=1.0
ND=70.0
S0=1-1e-6
I0=1e-6
INPUT = (S0, I0, 0.0)
def diff_eqs(INP,t):
'''The main set of equations'''
Y=np.zeros((3))
V = INP
Y[0] = - beta * V[0] * V[1]
Y[1] = beta * V[0] * V[1] - gamma * V[1]
Y[2] = gamma * V[1]
return Y # For odeint
t_start = 0.0; t_end = ND; t_inc = TS
t_range = np.arange(t_start, t_end+t_inc, t_inc)
RES = spi.odeint(diff_eqs,INPUT,t_range)
#Ploting
pl.subplot(211)
pl.plot(RES[:,0], '-g', label='Susceptibles')
pl.plot(RES[:,2], '-k', label='Recovereds')
pl.legend(loc=0)
pl.title('Program_2_1.py')
pl.xlabel('Time')
pl.ylabel('Susceptibles and Recovereds')
pl.subplot(212)
pl.plot(RES[:,1], '-r', label='Infectious')
pl.xlabel('Time')
pl.ylabel('Infectious')
pl.show()
###Output
_____no_output_____ |
Mean_Reversion_BB_algo_tag_1_IBEX.ipynb | ###Markdown
Importing Data

Download master ticker data for the IBEX at the close price.
###Code
ibex = MarketData("IBEX")
ibex_close = ibex.get_close()
ibex_close
###Output
_____no_output_____
###Markdown
Mean Reversion Strategy using Bollinger Bands

Generate an empty signals dataframe for IBEX
###Code
signals_ibex = ibex_close * 0
###Output
_____no_output_____
###Markdown
Calculate moving average for IBEX for the chosen window
###Code
window = 20
sma_ibex=ibex_close.rolling(window=window, min_periods=1, center=False).mean()
###Output
_____no_output_____
###Markdown
Calculate moving standard deviation for IBEX for the chosen window
###Code
smsd_ibex = ibex_close.rolling(window=window, min_periods=1, center=False).std()
###Output
_____no_output_____
###Markdown
Set upper band and lower band for IBEX
###Code
upper_band_ibex = sma_ibex + 2 * smsd_ibex
lower_band_ibex = sma_ibex - 2 * smsd_ibex
###Output
_____no_output_____
###Markdown
Generate position signals for IBEX
###Code
signals_ibex[window:] = np.where(ibex_close[window:]
< lower_band_ibex[window:], 1.0, 0.0)
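# Signal semantics (mean-reversion assumption): 1.0 marks days where the close
# fell below the lower band (treated as oversold, candidate long); 0.0 means no
# position. The first `window` rows are left at zero.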
signals_ibex
###Output
_____no_output_____
###Markdown
Generate trading signals
###Code
buy_sell_ibex = signals_ibex[window:].diff()
buy_sell_ibex
###Output
_____no_output_____
###Markdown
Recommendations
###Code
recomendation_ibex_buy = list(buy_sell_ibex.columns[buy_sell_ibex.iloc[-1,:] == 1.0])
recomendation_ibex_buy
recomendation_ibex_sell = list(buy_sell_ibex.columns[buy_sell_ibex.iloc[-1,:] == -1.0])
recomendation_ibex_sell
if recomendation_ibex_buy == [] and recomendation_ibex_sell == []:
print("No recommendations for IBEX")
raise
else:
print("The algorithm recommends to allocate or re-allocate. Send new allocation to the API.")
###Output
No recommendations for IBEX
###Markdown
Portfolio management: order generation - generate & send the allocation to the API

Check algos & select algo_tag
###Code
url = f'{API.url_base()}/participants/algorithms'
params = {'competi': API.competi(),
'key': API.user_key(),}
response = requests.get(url, params)
algos = response.json()
if algos:
algos_df = pd.DataFrame(algos)
print(algos_df.to_string())
algo_tag = algos_df.iloc[0].algo_tag
###Output
_____no_output_____
###Markdown
Set allocations - assign portfolio weights to the algo
###Code
from utils import gen_alloc_data
allocation_ibex_sell = [gen_alloc_data(ticker, 0.0) for ticker in recomendation_ibex_sell]
###Output
_____no_output_____
###Markdown
We will weight each asset equally and leave a 5% cash reserve.
###Code
allocation_ibex_buy = [gen_alloc_data(ticker, 0.95/len(recomendation_ibex_buy)) for ticker in recomendation_ibex_buy]
allocation_ibex = allocation_ibex_sell + allocation_ibex_buy
###Output
_____no_output_____
###Markdown
Set date and market
###Code
market_alloc = "IBEX"
today = datetime.now().strftime('%Y-%m-%d')
url = f'{API.url_base()}/participants/allocation'
url_auth = f'{url}?key={API.user_key()}'
print(url_auth)
str_date = today
params = {
'competi': API.competi(),
'algo_tag': algo_tag,
'market': market_alloc,
'date': str_date,
'allocation': allocation_ibex
}
response = requests.post(url_auth, data=json.dumps(params))
print (response.json())
###Output
https://miax-gateway-jog4ew3z3q-ew.a.run.app/participants/allocation?key=AIzaSyBIAVe1BIJaxb-LVyMYMmhtoPPdJZSfRqI
{'date': '2020-08-31', 'result': True}
###Markdown
Query allocations
###Code
from utils import allocs_to_frame
url = f'{API.url_base()}/participants/algo_allocations'
params = {
'key': API.user_key(),
'competi': API.competi(),
'algo_tag': algo_tag,
'market': market_alloc,
}
response = requests.get(url, params)
allocs_to_frame(response.json())
###Output
_____no_output_____ |
challenges/ibm-quantum/iqc-2021/ex5/ex5.ipynb | ###Markdown
Exercise 5 - Variational quantum eigensolver Historical backgroundDuring the last decade, quantum computers matured quickly and began to realize Feynman's initial dream of a computing system that could simulate the laws of nature in a quantum way. A 2014 paper first authored by Alberto Peruzzo introduced the **Variational Quantum Eigensolver (VQE)**, an algorithm meant for finding the ground state energy (lowest energy) of a molecule, with much shallower circuits than other approaches.[1] And, in 2017, the IBM Quantum team used the VQE algorithm to simulate the ground state energy of the lithium hydride molecule.[2]VQE's magic comes from outsourcing some of the problem's processing workload to a classical computer. The algorithm starts with a parameterized quantum circuit called an ansatz (a best guess) then finds the optimal parameters for this circuit using a classical optimizer. The VQE's advantage over classical algorithms comes from the fact that a quantum processing unit can represent and store the problem's exact wavefunction, an exponentially hard problem for a classical computer. This exercise 5 allows you to realize Feynman's dream yourself, setting up a variational quantum eigensolver to determine the ground state and the energy of a molecule. This is interesting because the ground state can be used to calculate various molecular properties, for instance the exact forces on nuclei than can serve to run molecular dynamics simulations to explore what happens in chemical systems with time.[3] References1. Peruzzo, Alberto, et al. "A variational eigenvalue solver on a photonic quantum processor." Nature communications 5.1 (2014): 1-7.2. Kandala, Abhinav, et al. "Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets." Nature 549.7671 (2017): 242-246.3. Sokolov, Igor O., et al. "Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers." Physical Review Research 3.1 (2021): 013125. IntroductionFor the implementation of VQE, you will be able to make choices on how you want to compose your simulation, in particular focusing on the ansatz quantum circuits.This is motivated by the fact that one of the important tasks when running VQE on noisy quantum computers is to reduce the loss of fidelity (which introduces errors) by finding the most compact quantum circuit capable of representing the ground state.Practically, this entails to minimizing the number of two-qubit gates (e.g. CNOTs) while not loosing accuracy.Goal Find the shortest ansatz circuits for representing accurately the ground state of given problems. Be creative! Plan First you will learn how to compose a VQE simulation for the smallest molecule and then apply what you have learned to a case of a larger one. **1. Tutorial - VQE for H$_2$:** familiarize yourself with VQE and select the best combination of ansatz/classical optimizer by running statevector simulations.**2. Final Challenge - VQE for LiH:** perform similar investigation as in the first part but restricting to statevector simulator only. Use the qubit number reduction schemes available in Qiskit and find the optimal circuit for this larger system. Optimize the circuit and use your imagination to find ways to select the best building blocks of parameterized circuits and compose them to construct the most compact ansatz circuit for the ground state, better than the ones already available in Qiskit. Below is an introduction to the theory behind VQE simulations. 
You don't have to understand the whole thing before moving on. Don't be scared! TheoryHere below is the general workflow representing how the molecular simulations using VQE are performed on quantum computers.The core idea hybrid quantum-classical approach is to outsource to **CPU (classical processing unit)** and **QPU (quantum processing unit)** the parts that they can do best. The CPU takes care of listing the terms that need to be measured to compute the energy and also optimizing the circuit parameters. The QPU implements a quantum circuit representing the quantum state of a system and measures the energy. Some more details are given below:**CPU** can compute efficiently the energies associated to electron hopping and interactions (one-/two-body integrals by means of a Hartree-Fock calculation) that serve to represent the total energy operator, Hamiltonian. The [Hartree–Fock (HF) method](https://en.wikipedia.org/wiki/Hartree%E2%80%93Fock_method:~:text=In%20computational%20physics%20and%20chemistry,system%20in%20a%20stationary%20state.) efficiently computes an approximate grounds state wavefunction by assuming that the latter can be represented by a single Slater determinant (e.g. for H$_2$ molecule in STO-3G basis with 4 spin-orbitals and qubits, $|\Psi_{HF} \rangle = |0101 \rangle$ where electrons occupy the lowest energy spin-orbitals). What QPU does later in VQE is finding a quantum state (corresponding circuit and its parameters) that can also represent other states associated missing electronic correlations (i.e. $\sum_i c_i |i\rangle$ states in $|\Psi \rangle = c_{HF}|\Psi_{HF} \rangle + \sum_i c_i |i\rangle $ where $i$ is a bitstring). After a HF calculation, operators in the Hamiltonian are mapped to measurements on a QPU using fermion-to-qubit transformations (see Hamiltonian section below). One can further analyze the properties of the system to reduce the number of qubits or shorten the ansatz circuit:- For Z2 symmetries and two-qubit reduction, see [Bravyi *et al*, 2017](https://arxiv.org/abs/1701.08213v1).- For entanglement forging, see [Eddins *et al.*, 2021](https://arxiv.org/abs/2104.10220v1).- For the adaptive ansatz see, [Grimsley *et al.*,2018](https://arxiv.org/abs/1812.11173v2), [Rattew *et al.*,2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*,2019](https://arxiv.org/abs/1911.10205). You may use the ideas found in those works to find ways to shorten the quantum circuits.**QPU** implements quantum circuits (see Ansatzes section below), parameterized by angles $\vec\theta$, that would represent the ground state wavefunction by placing various single qubit rotations and entanglers (e.g. two-qubit gates). The quantum advantage lies in the fact that QPU can efficiently represent and store the exact wavefunction, which becomes intractable on a classical computer for systems that have more than a few atoms. Finally, QPU measures the operators of choice (e.g. ones representing a Hamiltonian).Below we go slightly more in mathematical details of each component of the VQE algorithm. It might be also helpful if you watch our [video episode about VQE](https://www.youtube.com/watch?v=Z-A6G0WVI9w). 
Hamiltonian Here we explain how we obtain the operators that we need to measure to obtain the energy of a given system.These terms are included in the molecular Hamiltonian defined as:$$\begin{aligned}\hat{H} &=\sum_{r s} h_{r s} \hat{a}_{r}^{\dagger} \hat{a}_{s} \\&+\frac{1}{2} \sum_{p q r s} g_{p q r s} \hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}+E_{N N}\end{aligned}$$with$$h_{p q}=\int \phi_{p}^{*}(r)\left(-\frac{1}{2} \nabla^{2}-\sum_{I} \frac{Z_{I}}{R_{I}-r}\right) \phi_{q}(r)$$$$g_{p q r s}=\int \frac{\phi_{p}^{*}\left(r_{1}\right) \phi_{q}^{*}\left(r_{2}\right) \phi_{r}\left(r_{2}\right) \phi_{s}\left(r_{1}\right)}{\left|r_{1}-r_{2}\right|} $$where the $h_{r s}$ and $g_{p q r s}$ are the one-/two-body integrals (using the Hartree-Fock method) and $E_{N N}$ the nuclear repulsion energy. The one-body integrals represent the kinetic energy of the electrons and their interaction with nuclei. The two-body integrals represent the electron-electron interaction.The $\hat{a}_{r}^{\dagger}, \hat{a}_{r}$ operators represent creation and annihilation of electron in spin-orbital $r$ and require mappings to operators, so that we can measure them on a quantum computer.Note that VQE minimizes the electronic energy so you have to retrieve and add the nuclear repulsion energy $E_{NN}$ to compute the total energy. So, for every non-zero matrix element in the $ h_{r s}$ and $g_{p q r s}$ tensors, we can construct corresponding Pauli string (tensor product of Pauli operators) with the following fermion-to-qubit transformation. For instance, in Jordan-Wigner mapping for an orbital $r = 3$, we obtain the following Pauli string:$$\hat a_{3}^{\dagger}= \hat \sigma_z \otimes \hat \sigma_z \otimes\left(\frac{ \hat \sigma_x-i \hat \sigma_y}{2}\right) \otimes 1 \otimes \cdots \otimes 1$$where $\hat \sigma_x, \hat \sigma_y, \hat \sigma_z$ are the well-known Pauli operators. The tensor products of $\hat \sigma_z$ operators are placed to enforce the fermionic anti-commutation relations.A representation of the Jordan-Wigner mapping between the 14 spin-orbitals of a water molecule and some 14 qubits is given below:Then, one simply replaces the one-/two-body excitations (e.g. $\hat{a}_{r}^{\dagger} \hat{a}_{s}$, $\hat{a}_{p}^{\dagger} \hat{a}_{q}^{\dagger} \hat{a}_{r} \hat{a}_{s}$) in the Hamiltonian by corresponding Pauli strings (i.e. $\hat{P}_i$, see picture above). The resulting operator set is ready to be measured on the QPU.For additional details see [Seeley *et al.*, 2012](https://arxiv.org/abs/1208.5986v1). AnsatzesThere are mainly 2 types of ansatzes you can use for chemical problems. - **q-UCC ansatzes** are physically inspired, and roughly map the electron excitations to quantum circuits. The q-UCCSD ansatz (`UCCSD`in Qiskit) possess all possible single and double electron excitations. The paired double q-pUCCD (`PUCCD`) and singlet q-UCCD0 (`SUCCD`) just consider a subset of such excitations (meaning significantly shorter circuits) and have proved to provide good results for dissociation profiles. For instance, q-pUCCD doesn't have single excitations and the double excitations are paired as in the image below.- **Heuristic ansatzes (`TwoLocal`)** were invented to shorten the circuit depth but still be able to represent the ground state. As in the figure below, the R gates represent the parametrized single qubit rotations and $U_{CNOT}$ the entanglers (two-qubit gates). 
The idea is that after repeating certain $D$-times the same block (with independent parameters) one can reach the ground state. For additional details refer to [Sokolov *et al.* (q-UCC ansatzes)](https://arxiv.org/abs/1911.10864v2) and [Barkoutsos *et al.* (Heuristic ansatzes)](https://arxiv.org/pdf/1805.04340.pdf). VQEGiven a Hermitian operator $\hat H$ with an unknown minimum eigenvalue $E_{min}$, associated with the eigenstate $|\psi_{min}\rangle$, VQE provides an estimate $E_{\theta}$, bounded by $E_{min}$:\begin{align*} E_{min} \le E_{\theta} \equiv \langle \psi(\theta) |\hat H|\psi(\theta) \rangle\end{align*} where $|\psi(\theta)\rangle$ is the trial state associated with $E_{\theta}$. By applying a parameterized circuit, represented by $U(\theta)$, to some arbitrary starting state $|\psi\rangle$, the algorithm obtains an estimate $U(\theta)|\psi\rangle \equiv |\psi(\theta)\rangle$ on $|\psi_{min}\rangle$. The estimate is iteratively optimized by a classical optimizer by changing the parameter $\theta$ and minimizing the expectation value of $\langle \psi(\theta) |\hat H|\psi(\theta) \rangle$. As applications of VQE, there are possibilities in molecular dynamics simulations, see [Sokolov *et al.*, 2021](https://arxiv.org/abs/2008.08144v1), and excited states calculations, see [Ollitrault *et al.*, 2019](https://arxiv.org/abs/1910.12890) to name a few. References for additional details For the qiskit-nature tutorial that implements this algorithm see [here](https://qiskit.org/documentation/nature/tutorials/01_electronic_structure.html)but this won't be sufficient and you might want to look on the [first page of github repository](https://github.com/Qiskit/qiskit-nature) and the [test folder](https://github.com/Qiskit/qiskit-nature/tree/main/test) containing tests that are written for each component, they provide the base code for the use of each functionality. Part 1: Tutorial - VQE for H$_2$ molecule In this part, you will simulate H$_2$ molecule using the STO-3G basis with the PySCF driver and Jordan-Wigner mapping.We will guide you through the following parts so then you can tackle harder problems. 1. DriverThe interfaces to the classical chemistry codes that are available in Qiskit are called drivers.We have for example `PSI4Driver`, `PyQuanteDriver`, `PySCFDriver` are available. By running a driver (Hartree-Fock calculation for a given basis set and molecular geometry), in the cell below, we obtain all the necessary information about our molecule to apply then a quantum algorithm.
###Code
from qiskit_nature.drivers import PySCFDriver
molecule = "H .0 .0 .0; H .0 .0 0.739"
driver = PySCFDriver(atom=molecule)
qmolecule = driver.run()
###Output
_____no_output_____
###Markdown
Tutorial questions 1

Look into the attributes of `qmolecule` and answer the questions below.

1. We need to know the basic characteristics of our molecule. What is the total number of electrons in your system?
2. What is the number of molecular orbitals?
3. What is the number of spin-orbitals?
4. How many qubits would you need to simulate this molecule with Jordan-Wigner mapping?
5. What is the value of the nuclear repulsion energy?

You can find the answers at the end of this notebook.
###Code
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
###Output
_____no_output_____
###Markdown
2. Electronic structure problem

You can then create an `ElectronicStructureProblem` that can produce the list of fermionic operators before mapping them to qubits (Pauli strings).
###Code
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
problem = ElectronicStructureProblem(driver)
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
###Output
_____no_output_____
###Markdown
3. QubitConverter

This defines the fermion-to-qubit mapping that you will use in the simulation. You can try different mappings, but we will stick to `JordanWignerMapper` as it allows a simple correspondence: one qubit represents one spin-orbital of the molecule.
###Code
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
# Setup the mapper and qubit converter
mapper_type = 'JordanWignerMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
converter = QubitConverter(mapper=mapper, two_qubit_reduction=False)
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
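# Quick check (a sketch): with Jordan-Wigner and no reduction, the number of
# qubits should equal the number of spin-orbitals (4 for H2 in STO-3G).
print(qubit_op.num_qubits)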
###Output
_____no_output_____
###Markdown
4. Initial state

As we described in the Theory section, a good initial state in chemistry is the HF state (i.e. $|\Psi_{HF} \rangle = |0101 \rangle$). We can initialize it as follows:
###Code
from qiskit_nature.circuit.library import HartreeFock
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
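# With Jordan-Wigner this places X gates on the occupied spin-orbitals
# (qubits 0 and 2 here), i.e. it prepares the |0101> Hartree-Fock determinant.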
print(init_state)
###Output
┌───┐
q_0: ┤ X ├
└───┘
q_1: ─────
┌───┐
q_2: ┤ X ├
└───┘
q_3: ─────
###Markdown
5. Ansatz

One of the most important choices is the quantum circuit that you use to approximate your ground state. The Qiskit circuit library contains many building blocks for making your own circuit; the cell below shows several options.
###Code
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC antatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
entanglement = 'full'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
repetitions = 3
# Skip the final rotation_blocks layer
skip_final_rotation_layer = True
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
theta = Parameter('a')
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
qc.h(qubit_label)
# Place a CNOT ladder
for i in range(n-1):
qc.cx(i, i+1)
# Visual separator
qc.barrier()
# rz rotations on all qubits
qc.rz(theta, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
###Output
┌───┐ ┌──────────┐┌──────────┐ ┌──────────┐»
q_0: ───┤ X ├────┤ RY(θ[0]) ├┤ RZ(θ[4]) ├──■────■─────────■──┤ RY(θ[8]) ├»
┌──┴───┴───┐├──────────┤└──────────┘┌─┴─┐ │ │ └──────────┘»
q_1: ┤ RY(θ[1]) ├┤ RZ(θ[5]) ├────────────┤ X ├──┼────■────┼───────■──────»
└──┬───┬───┘├──────────┤┌──────────┐└───┘┌─┴─┐┌─┴─┐ │ │ »
q_2: ───┤ X ├────┤ RY(θ[2]) ├┤ RZ(θ[6]) ├─────┤ X ├┤ X ├──┼───────┼──────»
┌──┴───┴───┐├──────────┤└──────────┘ └───┘└───┘┌─┴─┐ ┌─┴─┐ »
q_3: ┤ RY(θ[3]) ├┤ RZ(θ[7]) ├───────────────────────────┤ X ├───┤ X ├────»
└──────────┘└──────────┘ └───┘ └───┘ »
« ┌───────────┐ ┌───────────┐»
«q_0: ┤ RZ(θ[12]) ├───────────────────■────────■─────────■──┤ RY(θ[16]) ├»
« └┬──────────┤┌───────────┐ ┌─┴─┐ │ │ └───────────┘»
«q_1: ─┤ RY(θ[9]) ├┤ RZ(θ[13]) ├────┤ X ├──────┼────■────┼────────■──────»
« └──────────┘├───────────┤┌───┴───┴───┐┌─┴─┐┌─┴─┐ │ │ »
«q_2: ──────■──────┤ RY(θ[10]) ├┤ RZ(θ[14]) ├┤ X ├┤ X ├──┼────────┼──────»
« ┌─┴─┐ ├───────────┤├───────────┤└───┘└───┘┌─┴─┐ ┌─┴─┐ »
«q_3: ────┤ X ├────┤ RY(θ[11]) ├┤ RZ(θ[15]) ├──────────┤ X ├────┤ X ├────»
« └───┘ └───────────┘└───────────┘ └───┘ └───┘ »
« ┌───────────┐
«q_0: ┤ RZ(θ[20]) ├───────────────────■────────■─────────■────────────
« ├───────────┤┌───────────┐ ┌─┴─┐ │ │
«q_1: ┤ RY(θ[17]) ├┤ RZ(θ[21]) ├────┤ X ├──────┼────■────┼────■───────
« └───────────┘├───────────┤┌───┴───┴───┐┌─┴─┐┌─┴─┐ │ │
«q_2: ──────■──────┤ RY(θ[18]) ├┤ RZ(θ[22]) ├┤ X ├┤ X ├──┼────┼────■──
« ┌─┴─┐ ├───────────┤├───────────┤└───┘└───┘┌─┴─┐┌─┴─┐┌─┴─┐
«q_3: ────┤ X ├────┤ RY(θ[19]) ├┤ RZ(θ[23]) ├──────────┤ X ├┤ X ├┤ X ├
« └───┘ └───────────┘└───────────┘ └───┘└───┘└───┘
###Markdown
6. Backend

This is where you specify the simulator or device where you want to run your algorithm. We will focus on the `statevector_simulator` in this challenge.
###Code
from qiskit import Aer
backend = Aer.get_backend('statevector_simulator')
###Output
_____no_output_____
###Markdown
7. Optimizer

The optimizer guides the evolution of the ansatz parameters, so it is important to investigate the energy convergence, since it determines the number of measurements that have to be performed on the QPU. A clever choice can drastically reduce the number of energy evaluations needed.
###Code
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
optimizer_type = 'COBYLA'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=500)
###Output
_____no_output_____
###Markdown
8. Exact eigensolver

For learning purposes, we can solve the problem exactly by diagonalizing the Hamiltonian matrix, so we know where to aim with VQE. Of course, the dimensions of this matrix scale exponentially in the number of molecular orbitals, so you can try doing this for a larger molecule of your choice and see how slow it becomes. For very large systems you would run out of memory trying to store their wavefunctions.
###Code
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# The targeted electronic energy for H2 is -1.85336 Ha
# Check with your VQE result.
###Output
Exact electronic energy -1.8533636186720364
=== GROUND STATE ENERGY ===
* Electronic ground state energy (Hartree): -1.853363618672
- computed part: -1.853363618672
~ Nuclear repulsion energy (Hartree): 0.716072003951
> Total ground state energy (Hartree): -1.137291614721
=== MEASURED OBSERVABLES ===
0: # Particles: 2.000 S: 0.000 S^2: 0.000 M: 0.000
=== DIPOLE MOMENTS ===
~ Nuclear dipole moment (a.u.): [0.0 0.0 1.39650761]
0:
* Electronic dipole moment (a.u.): [0.0 0.0 1.39650761]
- computed part: [0.0 0.0 1.39650761]
> Dipole moment (a.u.): [0.0 0.0 0.0] Total: 0.
(debye): [0.0 0.0 0.00000001] Total: 0.00000001
###Markdown
9. VQE and initial parameters for the ansatz Now we can import the VQE class and run the algorithm.
###Code
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
###Output
{ 'aux_operator_eigenvalues': None,
'cost_function_evals': 500,
'eigenstate': array([ 5.55839219e-07+4.33667242e-05j, -2.54817228e-04+1.69136374e-04j,
-1.41198727e-05+1.07852951e-05j, -2.98933473e-05+3.34001403e-05j,
-9.30756867e-06-2.69271113e-06j, 9.59165537e-01+2.59526293e-01j,
4.01826666e-04-2.39628624e-04j, -6.72818471e-08+1.34512394e-07j,
3.66141410e-08-2.52535990e-08j, -7.66350067e-04+4.00786325e-04j,
-1.08531209e-01-2.94523377e-02j, 8.41316477e-05-1.50598909e-05j,
3.82451399e-06-3.01452469e-06j, 1.24046236e-04-9.64048191e-05j,
-3.11599148e-05-1.51052025e-05j, -1.17678002e-04+1.17115410e-05j]),
'eigenvalue': -1.8533628495566552,
'optimal_parameters': { ParameterVectorElement(θ[11]): -0.01878165369801624,
ParameterVectorElement(θ[21]): 1.1822559282436655,
ParameterVectorElement(θ[12]): 0.9182692041658823,
ParameterVectorElement(θ[5]): 0.21004383063116186,
ParameterVectorElement(θ[15]): 0.3924753613255743,
ParameterVectorElement(θ[19]): 0.2186285353170032,
ParameterVectorElement(θ[4]): 0.22579514945005438,
ParameterVectorElement(θ[14]): -0.18989856209494552,
ParameterVectorElement(θ[13]): 0.22866321930163055,
ParameterVectorElement(θ[3]): -0.23605585408984223,
ParameterVectorElement(θ[0]): 0.22539103122433168,
ParameterVectorElement(θ[7]): 0.3638889623695151,
ParameterVectorElement(θ[20]): -0.31980898480092,
ParameterVectorElement(θ[9]): -0.00023120672708110463,
ParameterVectorElement(θ[22]): 1.0536585009278732,
ParameterVectorElement(θ[23]): 0.5696994669998804,
ParameterVectorElement(θ[8]): -6.155468509014689e-05,
ParameterVectorElement(θ[18]): 0.00384029862590771,
ParameterVectorElement(θ[1]): -0.00017629542703102578,
ParameterVectorElement(θ[2]): 0.0038162025598846303,
ParameterVectorElement(θ[17]): 3.1416457403459686,
ParameterVectorElement(θ[16]): -0.00043981623804380666,
ParameterVectorElement(θ[10]): -0.0003977930893254482,
ParameterVectorElement(θ[6]): 0.287211763434586},
'optimal_point': array([ 2.25391031e-01, -3.97793089e-04, -1.87816537e-02, 9.18269204e-01,
2.28663219e-01, -1.89898562e-01, 3.92475361e-01, -4.39816238e-04,
3.14164574e+00, 3.84029863e-03, 2.18628535e-01, -1.76295427e-04,
-3.19808985e-01, 1.18225593e+00, 1.05365850e+00, 5.69699467e-01,
3.81620256e-03, -2.36055854e-01, 2.25795149e-01, 2.10043831e-01,
2.87211763e-01, 3.63888962e-01, -6.15546851e-05, -2.31206727e-04]),
'optimal_value': -1.8533628495566552,
'optimizer_evals': 500,
'optimizer_time': 3.769929885864258}
###Markdown
10. Scoring function We need to judge how good your VQE simulations and your choice of ansatz/optimizer are. For this, we implemented the following simple scoring function: $$ score = N_{CNOT}$$ where $N_{CNOT}$ is the number of CNOTs. But you also have to reach chemical accuracy, which is $\delta E_{chem} = 0.004$ Ha $= 4$ mHa and may be hard to reach depending on the problem. You have to reach the accuracy we set with a minimal number of CNOTs to win the challenge. The lower the score the better!
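As a small, hedged convenience (the helper and its name are illustrative only, not part of the official grader), the pass criterion can be expressed as a one-line check:
###Code
# Hypothetical helper, not part of the grader: does a VQE energy reach chemical
# accuracy (threshold given in mHa) relative to the exact energy in Hartree?
def within_chemical_accuracy(e_vqe, e_exact, threshold_mha=4.0): return abs(e_vqe - e_exact) * 1000.0 <= threshold_mha
print(within_chemical_accuracy(-1.8533, -1.85336))  # True: the error is about 0.06 mHa
###Output
_____no_output_____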
###Code
# Store results in a dictionary
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
# The Unroller pass transpiles your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
import pandas as pd
import os.path
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
###Output
_____no_output_____
###Markdown
Tutorial questions 2 Experiment with all the parameters and then: 1. Can you find your best (best score) heuristic ansatz (by modifying parameters of `TwoLocal` ansatz) and optimizer? 2. Can you find your best q-UCC ansatz (choose among `UCCSD, PUCCD or SUCCD` ansatzes) and optimizer? 3. In the cell where we define the ansatz, can you modify the `Custom` ansatz by placing gates yourself to write a better circuit than your `TwoLocal` circuit? For each question, give `ansatz` objects. Remember, you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
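For question 3, a purely illustrative, hedged sketch of a hand-built circuit is shown below: one rotation layer, a single linear CNOT chain, and a final rotation layer, so an n-qubit version costs only n-1 CNOTs. The variable names and layer choices are assumptions, it is not a tuned solution, and it still has to be composed with `init_state` and checked against the accuracy target.
###Code
# Hedged sketch of a shallow hand-built ansatz (illustrative only, not a tuned answer).
from qiskit.circuit import QuantumCircuit, ParameterVector
n_q = qubit_op.num_qubits
sketch_params = ParameterVector('t', 2 * n_q)
qc_sketch = QuantumCircuit(n_q)
for q in range(n_q): qc_sketch.ry(sketch_params[q], q)            # first rotation layer
for q in range(n_q - 1): qc_sketch.cx(q, q + 1)                   # linear chain: n_q - 1 CNOTs
for q in range(n_q): qc_sketch.ry(sketch_params[n_q + q], q)      # final rotation layer
# qc_sketch.compose(init_state, front=True, inplace=True)  # then pass it on as `ansatz`
print(qc_sketch.count_ops())
###Output
_____no_output_____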
###Code
# WRITE YOUR CODE BETWEEN THESE LINES - START
# WRITE YOUR CODE BETWEEN THESE LINES - END
###Output
_____no_output_____
###Markdown
Part 2: Final Challenge - VQE for LiH molecule In this part, you will simulate the LiH molecule using the STO-3G basis with the PySCF driver. Goal Experiment with all the parameters and then find your best ansatz. You can be as creative as you want! For each question, give `ansatz` objects as for Part 1. Your final score will be based only on Part 2. Be aware that the system is larger now. Work out how many qubits you would need for this system by retrieving the number of spin-orbitals. Reducing the problem size You might want to reduce the number of qubits for your simulation: - you could freeze the core electrons that do not contribute significantly to chemistry and consider only the valence electrons. Qiskit already has this functionality implemented. So inspect the different transformers in `qiskit_nature.transformers` and find the one that performs the freeze core approximation. - you could use `ParityMapper` with `two_qubit_reduction=True` to eliminate 2 qubits. - you could reduce the number of qubits by inspecting the symmetries of your Hamiltonian. Find a way to use `Z2Symmetries` in Qiskit. Custom ansatz You might want to explore the ideas proposed in [Grimsley *et al.*, 2018](https://arxiv.org/abs/1812.11173v2), [H. L. Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205), [Rattew *et al.*, 2019](https://arxiv.org/abs/1910.09694), [Tang *et al.*, 2019](https://arxiv.org/abs/1911.10205). You can even try machine learning algorithms to generate the best ansatz circuits. Setup the simulation Let's now run the Hartree-Fock calculation and the rest is up to you! Attention We give below the `driver`, the `initial_point`, and the `initial_state` that should remain as given. You are then free to explore all other things available in Qiskit. So you have to start from this initial point (all parameters set to 0.01): `initial_point = [0.01] * len(ansatz.ordered_parameters)` or `initial_point = [0.01] * ansatz.num_parameters` and your initial state has to be the Hartree-Fock state: `init_state = HartreeFock(num_spin_orbitals, num_particles, converter)` For each question, give the `ansatz` object. Remember you have to reach the chemical accuracy $|E_{exact} - E_{VQE}| \leq 0.004 $ Ha $= 4$ mHa.
###Code
from qiskit_nature.drivers import PySCFDriver
molecule = 'Li 0.0 0.0 0.0; H 0.0 0.0 1.5474'
driver = PySCFDriver(atom=molecule, basis='sto3g')
qmolecule = driver.run()
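# Hedged sanity check (the attribute name matches its use further below in this notebook):
# each molecular orbital carries two spin-orbitals, i.e. the qubit count before any reduction.
print("Spin-orbitals before reduction:", 2 * qmolecule.num_molecular_orbitals)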
# WRITE YOUR CODE BETWEEN THESE LINES - START
from qiskit_nature.problems.second_quantization.electronic import ElectronicStructureProblem
from qiskit_nature.transformers import FreezeCoreTransformer
from qiskit_nature.mappers.second_quantization import ParityMapper, BravyiKitaevMapper, JordanWignerMapper
from qiskit_nature.converters.second_quantization.qubit_converter import QubitConverter
from qiskit_nature.circuit.library import HartreeFock
from qiskit.circuit.library import TwoLocal
from qiskit_nature.circuit.library import UCCSD, PUCCD, SUCCD
from qiskit import Aer
from qiskit.algorithms.optimizers import COBYLA, L_BFGS_B, SPSA, SLSQP
from qiskit_nature.algorithms.ground_state_solvers.minimum_eigensolver_factories import NumPyMinimumEigensolverFactory
from qiskit_nature.algorithms.ground_state_solvers import GroundStateEigensolver
import numpy as np
from qiskit.transpiler import PassManager
from qiskit.transpiler.passes import Unroller
import matplotlib.pyplot as plt
import pandas as pd
import os.path
# decreases the # of qubits
freezeCT = [FreezeCoreTransformer(freeze_core=True, remove_orbitals=[3,4])]
problem = ElectronicStructureProblem(driver,q_molecule_transformers=freezeCT)
# Generate the second-quantized operators
second_q_ops = problem.second_q_ops()
# Hamiltonian
main_op = second_q_ops[0]
# Setup the mapper and qubit converter
# given in hints above
mapper_type = 'ParityMapper'
if mapper_type == 'ParityMapper':
mapper = ParityMapper()
elif mapper_type == 'JordanWignerMapper':
mapper = JordanWignerMapper()
elif mapper_type == 'BravyiKitaevMapper':
mapper = BravyiKitaevMapper()
# given in hints above
converter = QubitConverter(mapper=mapper, two_qubit_reduction=True, z2symmetry_reduction=[1,1])
# The fermionic operators are mapped to qubit operators
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
qubit_op = converter.convert(main_op, num_particles=num_particles)
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
init_state = HartreeFock(num_spin_orbitals, num_particles, converter)
print(init_state)
# Choose the ansatz
ansatz_type = "TwoLocal"
# Parameters for q-UCC ansatze
num_particles = (problem.molecule_data_transformed.num_alpha,
problem.molecule_data_transformed.num_beta)
num_spin_orbitals = 2 * problem.molecule_data_transformed.num_molecular_orbitals
# Put arguments for twolocal
if ansatz_type == "TwoLocal":
# Single qubit rotations that are placed on all qubits with independent parameters
rotation_blocks = ['ry', 'rz']
# Entangling gates
entanglement_blocks = 'cx'
# How the qubits are entangled
# full -> linear : 18 -> 6
entanglement = 'linear'
# Repetitions of rotation_blocks + entanglement_blocks with independent parameters
# 3 -> 2 : 18 -> 6
repetitions = 2
# Skip the final rotation_blocks layer
skip_final_rotation_layer = True
ansatz = TwoLocal(qubit_op.num_qubits, rotation_blocks, entanglement_blocks, reps=repetitions,
entanglement=entanglement, skip_final_rotation_layer=skip_final_rotation_layer)
# Add the initial state
ansatz.compose(init_state, front=True, inplace=True)
elif ansatz_type == "UCCSD":
ansatz = UCCSD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "PUCCD":
ansatz = PUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "SUCCD":
ansatz = SUCCD(converter,num_particles,num_spin_orbitals,initial_state = init_state)
elif ansatz_type == "Custom":
# Example of how to write your own circuit
from qiskit.circuit import Parameter, QuantumCircuit, QuantumRegister
# Define the variational parameter
theta = Parameter('a')
n = qubit_op.num_qubits
# Make an empty quantum circuit
qc = QuantumCircuit(qubit_op.num_qubits)
qubit_label = 0
# Place a Hadamard gate
qc.h(qubit_label)
# Place a CNOT ladder
for i in range(n-1):
qc.cx(i, i+1)
# Visual separator
qc.barrier()
# rz rotations on all qubits
qc.rz(theta, range(n))
ansatz = qc
ansatz.compose(init_state, front=True, inplace=True)
print(ansatz)
backend = Aer.get_backend('statevector_simulator')
# random choice from my side
optimizer_type = 'SLSQP'
# You may want to tune the parameters
# of each optimizer, here the defaults are used
if optimizer_type == 'COBYLA':
optimizer = COBYLA(maxiter=500)
elif optimizer_type == 'L_BFGS_B':
optimizer = L_BFGS_B(maxfun=500)
elif optimizer_type == 'SPSA':
optimizer = SPSA(maxiter=500)
elif optimizer_type == 'SLSQP':
optimizer = SLSQP(maxiter=500)
def exact_diagonalizer(problem, converter):
solver = NumPyMinimumEigensolverFactory()
calc = GroundStateEigensolver(converter, solver)
result = calc.solve(problem)
return result
result_exact = exact_diagonalizer(problem, converter)
exact_energy = np.real(result_exact.eigenenergies[0])
print("Exact electronic energy", exact_energy)
print(result_exact)
# Note: the exact electronic energy printed above is for LiH (with the frozen-core reduction), not H2.
# Check with your VQE result.
from qiskit.algorithms import VQE
from IPython.display import display, clear_output
# Print and save the data in lists
def callback(eval_count, parameters, mean, std):
# Overwrites the same line when printing
display("Evaluation: {}, Energy: {}, Std: {}".format(eval_count, mean, std))
clear_output(wait=True)
counts.append(eval_count)
values.append(mean)
params.append(parameters)
deviation.append(std)
counts = []
values = []
params = []
deviation = []
# Set initial parameters of the ansatz
# We choose a fixed small displacement
# So all participants start from similar starting point
try:
initial_point = [0.01] * len(ansatz.ordered_parameters)
except:
initial_point = [0.01] * ansatz.num_parameters
algorithm = VQE(ansatz,
optimizer=optimizer,
quantum_instance=backend,
callback=callback,
initial_point=initial_point)
result = algorithm.compute_minimum_eigenvalue(qubit_op)
print(result)
# Store results in a dictionary
# The Unroller pass transpiles your circuit into CNOTs and U gates
pass_ = Unroller(['u', 'cx'])
pm = PassManager(pass_)
ansatz_tp = pm.run(ansatz)
cnots = ansatz_tp.count_ops()['cx']
score = cnots
accuracy_threshold = 4.0 # in mHa
energy = result.optimal_value
if ansatz_type == "TwoLocal":
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': rotation_blocks,
'entanglement_blocks': entanglement_blocks,
'entanglement': entanglement,
'repetitions': repetitions,
'skip_final_rotation_layer': skip_final_rotation_layer,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
else:
result_dict = {
'optimizer': optimizer.__class__.__name__,
'mapping': converter.mapper.__class__.__name__,
'ansatz': ansatz.__class__.__name__,
'rotation blocks': None,
'entanglement_blocks': None,
'entanglement': None,
'repetitions': None,
'skip_final_rotation_layer': None,
'energy (Ha)': energy,
'error (mHa)': (energy-exact_energy)*1000,
'pass': (energy-exact_energy)*1000 <= accuracy_threshold,
'# of parameters': len(result.optimal_point),
'final parameters': result.optimal_point,
'# of evaluations': result.optimizer_evals,
'optimizer time': result.optimizer_time,
'# of qubits': int(qubit_op.num_qubits),
'# of CNOTs': cnots,
'score': score}
# Plot the results
fig, ax = plt.subplots(1, 1)
ax.set_xlabel('Iterations')
ax.set_ylabel('Energy')
ax.grid()
fig.text(0.7, 0.75, f'Energy: {result.optimal_value:.3f}\nScore: {score:.0f}')
plt.title(f"{result_dict['optimizer']}-{result_dict['mapping']}\n{result_dict['ansatz']}")
ax.plot(counts, values)
ax.axhline(exact_energy, linestyle='--')
fig_title = f"\
{result_dict['optimizer']}-\
{result_dict['mapping']}-\
{result_dict['ansatz']}-\
Energy({result_dict['energy (Ha)']:.3f})-\
Score({result_dict['score']:.0f})\
.png"
fig.savefig(fig_title, dpi=300)
# Display and save the data
filename = 'results_h2.csv'
if os.path.isfile(filename):
result_df = pd.read_csv(filename)
result_df = result_df.append([result_dict])
else:
result_df = pd.DataFrame.from_dict([result_dict])
result_df.to_csv(filename)
result_df[['optimizer','ansatz', '# of qubits', '# of parameters','rotation blocks', 'entanglement_blocks',
'entanglement', 'repetitions', 'error (mHa)', 'pass', 'score']]
# WRITE YOUR CODE BETWEEN THESE LINES - END
# Check your answer using following code
from qc_grader import grade_ex5
freeze_core = True # change to True if you froze the core electrons
grade_ex5(ansatz,qubit_op,result,freeze_core)
# Submit your answer. You can re-submit at any time.
from qc_grader import submit_ex5
submit_ex5(ansatz,qubit_op,result,freeze_core)
###Output
Submitting your answer for ex5. Please wait...
Success 🎉! Your answer has been submitted.
|
Regression/Simple LR/Simple_LR.ipynb | ###Markdown
Importing the essential Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
data = pd.read_csv("/content/drive/MyDrive/Machine+Learning+A-Z+(Codes+and+Datasets)/Machine Learning A-Z (Codes and Datasets)/Part 2 - Regression/Section 4 - Simple Linear Regression/Python/Salary_Data.csv")
data
###Output
_____no_output_____
###Markdown
Creating the features and Labels
###Code
X=data.iloc[:,:-1]
Y=data.iloc[:,-1]
X
###Output
_____no_output_____
###Markdown
Splitting the dataset
###Code
from sklearn.model_selection import train_test_split
X_train,X_test,Y_Train,Y_test = train_test_split(X,Y,test_size=1/3,random_state=0)
X_train
###Output
_____no_output_____
###Markdown
Fitting the model to the dataset
###Code
from sklearn.linear_model import LinearRegression
# Creating an Object of the LinearRegression Class
regressor= LinearRegression()
# Fitting the model
regressor.fit(X_train,Y_Train)
# Predicting the values for the test Dataset
y_pred = regressor.predict(X_test)
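# Hedged addition (not in the original course code): quantify the fit on the held-out test set.
from sklearn.metrics import r2_score, mean_absolute_error
print("R^2 on test set:", r2_score(Y_test, y_pred))
print("MAE on test set:", mean_absolute_error(Y_test, y_pred))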
plt.scatter(X_train, Y_Train, color = 'blue')
plt.plot(X_train, regressor.predict(X_train), color = 'red')
plt.title('Salary vs Experience (Training set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
plt.scatter(X_test, Y_test, color = 'blue')
plt.plot(X_train, regressor.predict(X_train), color = 'red')
plt.title('Salary vs Experience (Test set)')
plt.xlabel('Years of Experience')
plt.ylabel('Salary')
plt.show()
###Output
_____no_output_____ |
prem-step/ew-mcmcPost.ipynb | ###Markdown
Eng+Wales model MCMC post-processThis is the MCMC postprocess notebook.Outputs of this notebook:* `ewMod-result_mcmc.pik` : inference params at for representative set of samples* `ewMod-traj_mcmc.pik` : deterministic trajectories with parameters from posteriorThis notebook requires as *input*:* `ewMod-mcmc.pik` : result of MCMC computation. These files are very large in general and are not provided in this repo. This notebook *will not execute correctly* unless such a file is provided.** Note carefully ** : internal details of .pik files that are created by the MCMC notebook may be affected by changes to pyross source code. It is therefore useful to keep track of the specific commitID used for a given run. I am using git commit `be4eabc` . start notebook (the following line is for efficient parallel processing)
###Code
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
import pandas as pd
import matplotlib.image as mpimg
import pickle
import os
import pprint
import scipy.stats
#print(pyross.__file__)
#print(os.getcwd())
#from ew_fns import *
from uk_v2a_fns import * ## these are exactly the same functions as ew_fns,
## imported like this for compatibility with saved pik files (legacy)
import expt_params_local
import model_local
verboseMod=False ## print ancillary info about the model?
## time unit is one week
daysPerWeek = 7.0
## these are params that might be varied in different expts
exptParams = expt_params_local.getLocalParams()
pprint.pprint(exptParams)
## this is used for filename handling throughout
pikFileRoot = exptParams['pikFileRoot']
###Output
{'careFile': '../data/CareHomes.csv',
'chooseCM': 'premEtAl',
'dataFile': '../data/OnsData.csv',
'estimatorTol': 1e-08,
'exCare': True,
'forecastTime': 3,
'freeInitPriors': ['E', 'A', 'Is1', 'Is2', 'Is3'],
'infOptions': {'cma_population': 32,
'cma_processes': None,
'ftol': 5e-05,
'global_atol': 1.0,
'global_max_iter': 1500,
'local_max_iter': 400},
'inferBetaNotAi': True,
'numCohorts': 16,
'numCohortsPopData': 19,
'pikFileRoot': 'ewMod',
'popFile': '../data/EWAgeDistributedNew.csv',
'timeLast': 8,
'timeZero': 0}
###Markdown
convenience
###Code
np.set_printoptions(precision=3)
pltAuto = True
plt.rcParams.update({'figure.autolayout': pltAuto})
plt.rcParams.update({'font.size': 14})
###Output
_____no_output_____
###Markdown
LOAD MODEL
###Code
loadModel = model_local.loadModel(exptParams,daysPerWeek,verboseMod)
## should use a dictionary but...
[ numCohorts, fi, N, Ni, model_spec, estimator, contactBasis, interventionFn,
modParams, priorsAll, initPriorsLinMode, obsDeath, fltrDeath,
simTime, deathCumulativeDat ] = loadModel
###Output
** model
{'A': {'infection': [], 'linear': [['E', 'gammaE'], ['A', '-gammaA']]},
'E': {'infection': [['A', 'beta'],
['Is1', 'beta'],
['Is2', 'betaLate'],
['Is3', 'betaLate']],
'linear': [['E', '-gammaE']]},
'Im': {'infection': [], 'linear': [['Is3', 'cfr*gammaIs3']]},
'Is1': {'infection': [],
'linear': [['A', 'gammaA'],
['Is1', '-alphabar*gammaIs1'],
['Is1', '-alpha*gammaIs1']]},
'Is2': {'infection': [],
'linear': [['Is1', 'alphabar*gammaIs1'], ['Is2', '-gammaIs2']]},
'Is3': {'infection': [],
'linear': [['Is2', 'gammaIs2'],
['Is3', '-cfrbar*gammaIs3'],
['Is3', '-cfr*gammaIs3']]},
'S': {'infection': [['A', '-beta'],
['Is1', '-beta'],
['Is2', '-betaLate'],
['Is3', '-betaLate']],
'linear': []},
'classes': ['S', 'E', 'A', 'Is1', 'Is2', 'Is3', 'Im']}
typC 0.5952004647091569
** using getPriorsControl
###Markdown
helper functions for MCMC
###Code
def dumpPickle(pikFileRoot,sampler) :
opFile = pikFileRoot + "-mcmc.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([sampler,infResult],f)
def loadPickle(pikFileRoot) :
ipFile = pikFileRoot + "-mcmc.pik"
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
[ss,ii] = pickle.load(f)
return [ss,ii]
###Output
_____no_output_____
###Markdown
load data
###Code
[sampler,infResult] = loadPickle(pikFileRoot)
###Output
ipf ewMod-mcmc.pik
###Markdown
plotting helper functions for MCMC
###Code
def plotMCtrace(selected_dims,sampler,numTrace=None):
# Plot the trace for these dimensions:
plot_dim = len(selected_dims)
fig, axes = plt.subplots(plot_dim, figsize=(12, plot_dim), sharex=True)
samples = sampler.get_chain()
if numTrace == None : numTrace = np.shape(samples)[1] ## corrected index
for ii,dd in enumerate(selected_dims):
ax = axes[ii]
ax.plot(samples[:, :numTrace , dd], "k", alpha=0.3)
ax.set_xlim(0, len(samples))
axes[-1].set_xlabel("step number");
plt.show(fig)
plt.close()
nDimMCMC = np.size(infResult['flat_params'])
def plotInterestingTraces() :
offset = 6 # full inference
#offset = 1 # no inference of gammas
## these are cohorts 3,7,11,15 (which includes the eldest)
selected_dims = [ i for i in range(3,numCohorts,4) ]
print('beta, cohorts',selected_dims)
plotMCtrace(selected_dims,sampler,numTrace=40)
print('aF, cohorts',selected_dims)
## index hacking. plot aF with same cohorts as beta above
selected_dims = [ i for i in range(numCohorts+offset+3,numCohorts+offset+numCohorts,4) ]
plotMCtrace(selected_dims,sampler,numTrace=40)
print('lockTime,lockWidth')
selected_dims = [ i for i in range(numCohorts+offset+numCohorts,numCohorts+offset+numCohorts+2) ]
plotMCtrace(selected_dims,sampler,numTrace=40)
print('initConds')
selected_dims = [ i for i in range(nDimMCMC-1-len(exptParams['freeInitPriors']),nDimMCMC) ]
plotMCtrace(selected_dims,sampler,numTrace=40)
###Output
_____no_output_____
###Markdown
MCMC traces (to check mixing)
###Code
plotInterestingTraces()
###Output
beta, cohorts [3, 7, 11, 15]
###Markdown
collect results
###Code
## how many samples in total?
pp = sampler.get_log_prob()
nSampleTot = np.shape(pp)[0]
## for analysis we discard the initial 1/3 as burn-in
## then we pull out representative samples spaced by 0.1 of the total run
## (total samples come out as 6 * batch size where batch size is twice the num of inferred params)
result_mcmc = estimator.latent_infer_mcmc_process_result(sampler, obsDeath, fltrDeath,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
discard=int(nSampleTot/3),
thin=int(nSampleTot/10) )
print("** samples",np.size(result_mcmc))
param_post_mean = pyross.utils.posterior_mean(result_mcmc)
print( '** ave logLikelihood',np.mean( [ rr['log_likelihood'] for rr in result_mcmc ] ) )
print( '** ave logPost',np.mean( [ rr['log_posterior'] for rr in result_mcmc ] ) )
###Output
** samples 552
** ave logLikelihood -267.7965234473959
** ave logPost -188.2400157993843
###Markdown
save this subset of results from MCMC
###Code
opFile = pikFileRoot + "-result_mcmc.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([result_mcmc],f)
###Output
opf ewMod-result_mcmc.pik
###Markdown
run 100 deterministic trajectories using posterior samples
###Code
runTime = 10
nSave = 100
saveTraj = []
for ii,rr in enumerate(result_mcmc[-nSave:]) :
estimator.set_params(rr['params_dict'])
estimator.set_contact_matrix( contactBasis.intervention_custom_temporal( interventionFn,
**rr['control_params_dict'])
)
mytraj = estimator.integrate( rr['x0'], 0, runTime, runTime+1)
saveTraj.append( mytraj )
opFile = pikFileRoot + "-traj_mcmc.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([model_spec,saveTraj,N,numCohorts,deathCumulativeDat],f)
###Output
opf ewMod-traj_mcmc.pik
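###Markdown
 sanity check of the saved files (hedged addition) The cell below simply mirrors the two `pickle.dump` calls above to confirm that the artifacts re-load with the expected structure; it introduces no new results.
###Code
with open(pikFileRoot + "-result_mcmc.pik", 'rb') as f: [result_mcmc_check] = pickle.load(f)
with open(pikFileRoot + "-traj_mcmc.pik", 'rb') as f: model_spec_c, traj_c, N_c, nCoh_c, deaths_c = pickle.load(f)
print(len(result_mcmc_check), 'posterior samples;', len(traj_c), 'trajectories')
###Output
_____no_output_____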
|
src_optimization/21_nonlinear_stencil_01_precalc/e_roofline.ipynb | ###Markdown
based on: https://github.com/essentialsofparallelcomputing/Chapter3/blob/master/JupyterNotebook/HardwarePlatformCharaterization.ipynb Operator Benchmark on Gauss3 Theoretical Parameters
###Code
Sockets=2
ProcessorFrequency=2.35
ProcessorCores=64
Hyperthreads=2
VectorWidth=256
WordSizeBits=64
FMA=2
DataTransferRate=3200
MemoryChannels=8
BytesTransferredPerAccess=8
TheoreticalMaximumFlops=Sockets*ProcessorCores*Hyperthreads*ProcessorFrequency*VectorWidth/WordSizeBits*FMA
TheoreticalMemoryBandwidth=Sockets*DataTransferRate*MemoryChannels*BytesTransferredPerAccess/1000
TheoreticalMachineBalance=TheoreticalMaximumFlops/TheoreticalMemoryBandwidth
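# For reference (hedged arithmetic, derived only from the parameters above):
#   P_T  = 2*64*2*2.35*(256/64)*2 = 4812.8 GFLOP/s
#   B_T  = 2*3200*8*8/1000        = 409.6  GB/s
#   MB_T = P_T / B_T              = 11.75  FLOP/byte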
###Output
_____no_output_____
###Markdown
Benchmark data Previous Versions based on: * `src_master_thesis/optimization/00_unoptimized/exe_op_bench_likwid-perfctr_gauss3_linear_stencil.out`* `src_master_thesis/optimization/01_compiler_optimized/exe_op_bench_linear_stencil_likwid-perfctr_gauss3.out`* * * `src_master_thesis/optimization/00_unoptimized/exe_op_bench_linear_stencil_likwid-perfctr_gauss3.out`* `src_master_thesis/optimization/00_unoptimized/exe_op_bench_linear_stencil_vcl_likwid-perfctr_gauss3.out`
###Code
FLOPS_apply = [
1.0653186,
4.1038111,
5.1812428,
4.2049185,
98.04469560000001
]
AI_apply = [
0.0296,
0.0884,
0.1145,
0.1199455409875806,
0.42690517018704194
]
labels_apply = [
"00_stencil_unoptimized",
"01_stencil_compiler_optimized",
"03_stencil_vcl_vectorization",
"04_stencil_tiling",
"08_openmp_stencil_03_noedgecases"
]
###Output
_____no_output_____
###Markdown
Current Versions
###Code
import csv
def read_Results(p_path):
with open(p_path, 'r') as f:
reader = csv.reader(f)
#
# NOTE skip the header
#
next(reader, None)
for row in reader:
#
# NOTE select row
#
if row[3] == 'apply':
performance = float(row[8])
memory_bandwidth = float(row[9])
break
result = {
#
# NOTE MFLOP/s => GFLOP/s
#
'flops': performance/1000,
'ai': performance/memory_bandwidth
}
return result
results = read_Results('./e_roofline.csv')
print(results)
FLOPS_apply.append(results['flops'])
AI_apply.append(results['ai'])
labels_apply.append('20_several_coeff')
# # Install a pip package in the current Jupyter kernel
# import sys
# !{sys.executable} -m pip install matplotlib
# !{sys.executable} -m pip install numpy
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
print(mcolors.TABLEAU_COLORS)
font = { 'size' : 20}
plt.rc('font', **font)
markersize = 16
# colors = ['b','g','r','y','m','c']
colors = list(mcolors.TABLEAU_COLORS.values())
print(colors)
styles = ['o','s','v','^','D',">","<","*","h","H","+","1","2","3","4","8","p","d","|","_",".",","]
roofline_color = 'r'
fig = plt.figure(1,figsize=(15,10))
plt.clf()
ax = fig.gca()
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Arithmetic Intensity [FLOPS/Byte]')
ax.set_ylabel('Performance [GFLOP/sec]')
ax.grid()
ax.grid(which='minor', linestyle=':', linewidth=0.5, color='black')
nx = 10000
xmin = -3
xmax = 2
ymin = 0.01
ymax = 10000
ax.set_xlim(10**xmin, 10**xmax)
ax.set_ylim(ymin, ymax)
ixx = int(nx*0.02)
xlim = ax.get_xlim()
ylim = ax.get_ylim()
scomp_x_elbow = []
scomp_ix_elbow = []
smem_x_elbow = []
smem_ix_elbow = []
x = np.logspace(xmin,xmax,nx)
#
# rooflines
#
for ix in range(1,nx):
if TheoreticalMemoryBandwidth * x[ix] >= TheoreticalMaximumFlops and TheoreticalMemoryBandwidth * x[ix-1] < TheoreticalMaximumFlops:
theoMem_ix_elbow = ix-1
break
for ix in range(1,nx):
if (TheoreticalMaximumFlops <= TheoreticalMemoryBandwidth * x[ix] and TheoreticalMaximumFlops > TheoreticalMemoryBandwidth * x[ix-1]):
theoFlops_ix_elbow = ix-1
break
y = np.ones(len(x)) * TheoreticalMaximumFlops
ax.plot(x[theoFlops_ix_elbow:],y[theoFlops_ix_elbow:],c=roofline_color,ls='--',lw='2')
ax.text(x[-ixx],TheoreticalMaximumFlops*0.95,
'node1: P_T',
horizontalalignment='right',
verticalalignment='top',
c=roofline_color)
y = x * TheoreticalMemoryBandwidth
ax.plot(x[:theoMem_ix_elbow+1],y[:theoMem_ix_elbow+1],c=roofline_color,ls='--',lw='2')
ang = np.arctan(np.log10(xlim[1]/xlim[0]) / np.log10(ylim[1]/ylim[0])
* fig.get_size_inches()[1]/fig.get_size_inches()[0] )
ax.text(x[ixx],x[ixx]*TheoreticalMemoryBandwidth*(1+0.25*np.sin(ang)**2),
'node1: B_T',
horizontalalignment='left',
verticalalignment='bottom',
rotation=180/np.pi*ang,
c=roofline_color)
plt.vlines(TheoreticalMachineBalance, 0, TheoreticalMaximumFlops, colors=roofline_color, linestyles='dashed', linewidth=2)
ax.text(TheoreticalMachineBalance,2*ymin,
'node1: MB_T',
horizontalalignment='right',
verticalalignment='bottom',
rotation=90,
c=roofline_color)
marker_handles = list()
for i in range(0,len(AI_apply)):
ax.plot(float(AI_apply[i]),float(FLOPS_apply[i]),c=colors[i],marker=styles[0],linestyle='None',ms=markersize,label=labels_apply[i])
marker_handles.append(ax.plot([],[],c=colors[i],marker=styles[0],linestyle='None',ms=markersize,label=labels_apply[i])[0])
leg1 = plt.legend(handles = marker_handles,loc=4, ncol=2)
ax.add_artist(leg1)
# plt.savefig('roofline.png')
# plt.savefig('roofline.eps')
# plt.savefig('roofline.pdf')
# plt.savefig('roofline.svg')
plt.show()
###Output
OrderedDict([('tab:blue', '#1f77b4'), ('tab:orange', '#ff7f0e'), ('tab:green', '#2ca02c'), ('tab:red', '#d62728'), ('tab:purple', '#9467bd'), ('tab:brown', '#8c564b'), ('tab:pink', '#e377c2'), ('tab:gray', '#7f7f7f'), ('tab:olive', '#bcbd22'), ('tab:cyan', '#17becf')])
['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728', '#9467bd', '#8c564b', '#e377c2', '#7f7f7f', '#bcbd22', '#17becf']
|
ML-quantificacao-incertezas-2020-PLE/.ipynb_checkpoints/Bootstrap-checkpoint.ipynb | ###Markdown
6 Generating 100 data samples from the normal distribution $\mu = 5, \, \sigma = 1$
###Code
X = np.random.normal(5,1,100)
###Output
_____no_output_____
###Markdown
$\theta = e^5$
###Code
theta = np.exp(5)
print(theta)
def t_hat(X):
return np.exp(np.mean(X))
theta_hat = t_hat(X)
print(theta_hat)
n = 100
B = 60
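# bootstrap(X, f_hat): draws B resamples of size n from X with replacement and
# evaluates the statistic f_hat on each one; returns the B x n resample matrix
# and the length-B vector of bootstrap statistic values.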
def bootstrap(X, f_hat):
bootMtx = np.array([(np.random.choice(X,n)) for i in range(B)])
bootVec = np.array([f_hat(i) for i in bootMtx])
return bootMtx, bootVec
bootMtx, bootVec = bootstrap(X, t_hat)
np.percentile(bootVec, 97.5)
se = np.sqrt(np.var(bootVec))
alpha = 5
U = np.percentile(bootVec, 100 - alpha/2)
L = np.percentile(bootVec, alpha/2)
interval = (2*theta_hat - U, 2*theta_hat - L)
print(f'The {100-alpha}% confidence interval is {interval}')
fig, axes = plt.subplots(2, 2, figsize=(15, 10), sharex=True)
vecPlot = np.random.randint(0,29,4)
sns.distplot(bootMtx[vecPlot[0]], rug=True, ax = axes[0,0], label='oi')
sns.distplot(bootMtx[vecPlot[1]], rug=True, ax = axes[0,1])
sns.distplot(bootMtx[vecPlot[2]], rug=True, ax = axes[1,0])
sns.distplot(bootMtx[vecPlot[3]], rug=True, ax = axes[1,1])
axes[0,0].set_title(f'B = {vecPlot[0]}')
axes[0,1].set_title(f'B = {vecPlot[1]}')
axes[1,0].set_title(f'B = {vecPlot[2]}')
axes[1,1].set_title(f'B = {vecPlot[3]}')
plt.plot()
###Output
_____no_output_____
###Markdown
Original distribution
###Code
x_axis = np.arange(0, 8, 0.01)
plt.plot(x_axis, norm.pdf(x_axis, 5,1))
plt.show()
###Output
_____no_output_____
###Markdown
7
###Code
Y = np.random.uniform(0,1, 50)
def t_hat2(X):
return np.max(X)
theta_hat2 = t_hat2(Y)
bootMtx_Y, bootVec_Y = bootstrap(Y, t_hat2)
fig, axes = plt.subplots(2, 2, figsize=(15, 10), sharex=True)
vecPlot = np.random.randint(0,29,4)
sns.distplot(bootMtx_Y[vecPlot[0]], rug=True, ax = axes[0,0], label='oi')
sns.distplot(bootMtx_Y[vecPlot[1]], rug=True, ax = axes[0,1])
sns.distplot(bootMtx_Y[vecPlot[2]], rug=True, ax = axes[1,0])
sns.distplot(bootMtx_Y[vecPlot[3]], rug=True, ax = axes[1,1])
axes[0,0].set_title(f'B = {vecPlot[0]}')
axes[0,1].set_title(f'B = {vecPlot[1]}')
axes[1,0].set_title(f'B = {vecPlot[2]}')
axes[1,1].set_title(f'B = {vecPlot[3]}')
plt.plot()
x_axis = np.arange(0, 1, 0.01)
plt.plot(x_axis, uniform.pdf(x_axis, 0,1))
plt.show()
###Output
_____no_output_____ |
7.sem/3.stochastic.programming/task1/task1.ipynb | ###Markdown
Regression
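The cell below fits ordinary least squares through the normal equations, $W = (A^{\top}A)^{+}A^{\top}Y$, where $A$ holds a column of ones followed by the capital values, so $W$ contains the intercept and slope of the rental-vs-capital line.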
###Code
data = pd.read_csv('task_1_capital.txt', sep='\s+')
data.shape
capital = data["Capital"].to_numpy()
rental = data["Rental"].to_numpy()
A = np.ones((data.shape[0], 2),dtype=float)
A[:, 1], Y = capital, rental
T = A.transpose()
W = np.linalg.pinv(T @ A) @ T @ Y
W
plt.figure(figsize=(25, 5))
plt.scatter(capital, rental, c="black")
plt.title("task 1")
plt.xlabel("capital")
plt.ylabel("rental")
plt.plot(capital, A @ W, c="red");
###Output
_____no_output_____
###Markdown
--- SVM
###Code
metrics = {
"accuracy" : accuracy_score,
"precision" : precision_score,
"recall" : recall_score,
"f1" : f1_score,
}
data = np.loadtxt("chips.txt", delimiter=",", dtype=float)
print(data.shape)
data[:10]
X, Y = data[:, :2], data[:, 2]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, shuffle=True, stratify=Y)
svc_param_grid = {
"C" : [0.001, 0.1, 1 , 10, 100],
"kernel" : ["linear", "poly", "rbf"],
"gamma" : ["auto", "scale"]
}
svc = GridSearchCV(SVC(random_state=42), svc_param_grid, n_jobs=-1, scoring=["accuracy", "precision", "recall"], cv=5, iid=False, refit="accuracy")
svc.fit(X_train, Y_train);
svc.best_params_, svc.best_score_
fig = plt.figure(figsize=(10,5))
plot_decision_regions(X=X, y=Y.astype(np.integer), clf=svc.best_estimator_, legend=2)
plt.title('SVM Decision Region Boundary', size=16);
###Output
_____no_output_____
###Markdown
---- KNN
###Code
knn_param_grid = {
"n_neighbors" : [ i for i in range(3, X.shape[0] // 2) ],
"weights" : ["uniform", "distance"],
"algorithm" : ["ball_tree", "kd_tree", "brute"],
}
knn = GridSearchCV(KNeighborsClassifier(), knn_param_grid, n_jobs=-1, scoring=["accuracy", "precision", "recall"], cv=5, iid=False, refit="accuracy")
knn.fit(X_train, Y_train);
knn.best_params_, knn.best_score_
accuracy_score(Y_test, knn.best_estimator_.predict(X_test))
fig = plt.figure(figsize=(10, 5))
plot_decision_regions(X=X, y=Y.astype(np.integer), clf=knn.best_estimator_, legend=2)
plt.title('KNN Decision Region Boundary', size=16);
###Output
_____no_output_____
###Markdown
---
###Code
print("accuracy")
print("svc:", accuracy_score(Y_test, svc.best_estimator_.predict(X_test)))
print("knn:", accuracy_score(Y_test, knn.best_estimator_.predict(X_test)))
###Output
accuracy
svc: 0.7777777777777778
knn: 0.7222222222222222
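###Markdown
 A hedged follow-up: the `metrics` dictionary defined at the top of this notebook is never used above, so one natural extension is to report all four scores for both tuned models on the same test split.
###Code
for name, fn in metrics.items(): print(f"{name:9s} svc: {fn(Y_test, svc.best_estimator_.predict(X_test)):.3f}  knn: {fn(Y_test, knn.best_estimator_.predict(X_test)):.3f}")
###Output
_____no_output_____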
|
Fig2_map_decadal_ohc.ipynb | ###Markdown
Figure 2: Spatial fields of decadal vertically integrated ocean heat content - evaluate **first cell** to load necessary modules - scroll down to **Plotting** to reproduce the figure with derived fields
###Code
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import matplotlib.patches as mpatches
import cmocean as cmo
import numpy as np
import cartopy.crs as ccrs
import cartopy
import pandas as pd
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
import warnings
warnings.filterwarnings("ignore")
import pandas as pd
from scipy.interpolate import griddata
from scipy.io import loadmat
import datetime
# from datetime import datetime
import string
import sys
sys.path.append("./") # adds upper level to working directory\n
from utils_iohc_ummenhofer2020 import deseason,ohc_anomaly,cut_indo,plot_map,finished_plot,add_ipo_bar,monte_carlo
# where to save plots
plotsave = 'plots/'
datapath = '/vortexfs1/share/clidex/data/'
# baseline period for anomalies
base = ['1960-01-01','2012-12-31'] # paper
# values for heat content calculation
cp = 3994 # heat capacity
rho = 1029 # reference density
# cp and rho from https://xgcm.readthedocs.io/en/latest/example_eccov4.html
# masks
masktype = 'orca' # 'levitus'
if masktype=='orca': mask = xr.open_dataset(datapath+'ORCA/mesh_files/new_maskglo.nc')
elif masktype=='levitus': mask = xr.open_dataset('./data/mask_ind_levitus_orca.nc')
###Output
_____no_output_____
###Markdown
Load data
###Code
################### load data ######################
zrange = slice(0,22) #3m-734m for temp data
datapath2 = datapath+'publications/IOHC_Ummenhofer/'
k003 = xr.open_dataset(datapath2 + 'K003.hindcast_temp_IndoPacific_30E_150W_70S_30N.nc',
chunks={'x': 200, 'y': 200, 'time_counter': 200}).isel(deptht=zrange)
k004 = xr.open_dataset(datapath2 + 'K004.thermhal90_temp_IndoPacific_30E_150W_70S_30N.nc',
chunks={'x': 200, 'y': 200, 'time_counter': 200}).isel(deptht=zrange)
k005 = xr.open_dataset(datapath2 + 'K005.wind90_temp_IndoPacific_30E_150W_70S_30N.nc',
chunks={'x': 200, 'y': 200, 'time_counter': 200}).isel(deptht=zrange)
###############################################################################
############################# load mask ########################################
###############################################################################
# load mask
%timeit
# rename dimesion to match
mask_orca = mask.rename({'X':'x', 'Y':'y'}).drop(['x','y'])
# cut out indo region
mask_orca = cut_indo(mask_orca['tmaskind']).to_dataset()
###############################################################################
############################ load mesh variables ###############################
###############################################################################
# horizontal
meshpath = datapath+'ORCA/mesh_files/'
mesh = xr.open_dataset(meshpath + 'mesh_hgr.nc').sel(z=zrange)
# spatical weights (e1t=zonal, e2t=meridional)
e2t = mesh['e2t']
e2t = e2t.rename({'t':'time_counter'}).squeeze()
e1t = mesh['e1t']
e1t = e1t.rename({'t':'time_counter'}).squeeze()
# vertical
mesh = xr.open_dataset(meshpath + 'mesh_zgr.nc').sel(z=zrange)
# vertical weight
e3t = mesh['e3t_0']
e3t = e3t.rename({'t':'time_counter', 'z':'deptht'})
e3t = e3t.squeeze()
# also need to cut weights for multiplication
w3 = cut_indo(e3t).to_dataset()
w3["deptht"] = k003['deptht']
w2 = cut_indo(e2t)
w1 = cut_indo(e1t)
# temperature mask (not sure what this does exactly)
tmask = xr.open_dataset(meshpath + 'mask.nc')['tmask'].isel(z=zrange)
tmask = cut_indo(tmask)
tmask = np.squeeze(tmask.rename({'z':'deptht'}))
###Output
_____no_output_____
###Markdown
Vertically integrate OHC
###Code
# mask_full = xr.ones_like(k003['votemper'][0,0,::]).to_dataset()
# mask_full = mask_full.rename({'votemper':'tmaskind'}).load()
# mask_full = mask_full*tmask[0,::].values
# weights = w3['e3t_0'].values * tmask
# ohc_k003_700 = ohc_anomaly(k003['votemper'],mask_full,weights=weights,dims=['deptht'],base=base)
# ohc_k004_700 = ohc_anomaly(k004['votemper'],mask_full,weights=weights,dims=['deptht'],base=base)
# ohc_k005_700 = ohc_anomaly(k005['votemper'],mask_full,weights=weights,dims=['deptht'],base=base)
###Output
_____no_output_____
###Markdown
Monte Carlo for significanceBecause last panel consists of different number of years we need to run monte-carlo simulation multiple times
###Code
# # two-tailed 90% significance
# years=[10,7,3]
# k003_p5 = {}
# k004_p5 = {}
# k005_p5 = {}
# k004_k005_p5 ={}
# k003_p95 = {}
# k004_p95 = {}
# k005_p95 = {}
# k004_k005_p95 = {}
# for yy in years:
# [k003_p5[str(yy)+'yr'],k003_p95[str(yy)+'yr']] = monte_carlo(ohc_k003_700,duration=yy*12,n=1000,pval=5,timevar='time_counter')
# [k004_p5[str(yy)+'yr'],k004_p95[str(yy)+'yr']] = monte_carlo(ohc_k004_700,duration=yy*12,n=1000,pval=5,timevar='time_counter')
# [k005_p5[str(yy)+'yr'],k005_p95[str(yy)+'yr']] = monte_carlo(ohc_k005_700,duration=yy*12,n=1000,pval=5,timevar='time_counter')
# [k004_k005_p5[str(yy)+'yr'],k004_k005_p95[str(yy)+'yr']] = monte_carlo(ohc_k004_700+ohc_k005_700,
# duration=yy*12,n=1000,pval=5,timevar='time_counter')
# # save each dictionary to separate file
# for dic,name in zip([k003_p95,k003_p5,k004_p95,k004_p5,k005_p95,k005_p5,k004_k005_p95,k004_k005_p5],
# ['k003_p95','k003_p5','k004_p95','k004_p5','k005_p95','k005_p5','k004_k005_p95','k004_k005_p5']):
# np.savez('./data/fig3_' + name + '_two_tail_90p_base_1960_2012_n1000',**dic)
##########
# save files will be loaded during plotting
###Output
_____no_output_____
###Markdown
Plotting Load data
###Code
# load saved files
datapath2 = datapath+'publications/IOHC_Ummenhofer/'
ohc_k003_700 = deseason(xr.open_dataset(datapath2+'k003_ohc_zint_700m.nc'),refperiod=base)['votemper'].sel(y=slice(200,None))
ohc_k004_700 = deseason(xr.open_dataset(datapath2+'k004_ohc_zint_700m.nc'),refperiod=base)['votemper'].sel(y=slice(200,None))
ohc_k005_700 = deseason(xr.open_dataset(datapath2+'k005_ohc_zint_700m.nc'),refperiod=base)['votemper'].sel(y=slice(200,None))
###Output
_____no_output_____
###Markdown
No stippling - much faster
###Code
plt.rcParams.update({'font.size': 12})
fig,ax=plt.subplots(nrows=6,ncols=4,figsize=(12,10),
subplot_kw = dict(projection=ccrs.PlateCarree(central_longitude=120)))
plt.subplots_adjust(bottom=0.1, top=0.9, left=0.03, right=0.97, wspace=0.04, hspace=0.05)
vmin=-1.5
vmax=1.5
cmap = plt.get_cmap('RdBu_r',len(np.arange(-1.2,1.2,0.1)))
# loop over datasets and plot
j=0
ll = 0
for ds in [ohc_k003_700,ohc_k004_700,ohc_k005_700,ohc_k004_700+ohc_k005_700]:
# loop over time
jj = 0
for year in np.arange(1960,2020,10):
year2 = year+9
#print(year,year2)
time_bnds = [str(year) + '-01-01',str(year2) + '-12-31']
hh = ax[jj,j].pcolormesh(ds.nav_lon,ds.nav_lat,(ds/1e09).sel(time_counter=slice(*time_bnds)).mean('time_counter'),
transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax)
ax[jj,j].gridlines(crs=ccrs.PlateCarree(),draw_labels=False,
xlocs=[40,80,120,160,200,-160],ylocs=range(-60,60,30))
ax[jj,j].coastlines(resolution='50m')
ax[jj,j].add_feature(cartopy.feature.LAND, color='lightgray')
ax[jj,j].set_title(str(year) + '-' + str(year2),fontsize=10,loc='left',weight='bold')
ax[jj,j].tick_params(axis="x", direction="out")
ax[jj,j].tick_params(axis="y",direction="out")
# ax[jj,j].set_extent((30,211,-40,31),crs=ccrs.PlateCarree())
ax[jj,j].set_extent((30,211,-40,31),crs=ccrs.PlateCarree())
############ adjust labels for subplots #########################
gl = ax[jj,j].gridlines(crs=ccrs.PlateCarree(),draw_labels=True,
xlocs=[40,80,120,160,-160],ylocs=range(-60,60,30))
gl.ylabels_right = False
gl.xlabels_top = False
# ylabels
if j==0:
gl.yformatter = LATITUDE_FORMATTER
gl.ylabel_style = {'size':10}
else:
gl.ylabels_left = False
# xlabels
if jj==5:
gl.xformatter = LONGITUDE_FORMATTER
gl.xlabel_style = {'size':10}
else:
gl.xlabels_bottom = False
t = ax[jj,j].text(0.02, 0.82, string.ascii_lowercase[ll]+')', transform=ax[jj,j].transAxes,
size=11, weight='bold')
t.set_bbox(dict(facecolor='w',boxstyle='square,pad=0.2'))
jj=jj+1 # move to next row
ll=ll+1 # for labels
j=j+1 # move to next column
# add colorbar
cbaxes = fig.add_axes([0.25, 0.05, 0.5, 0.015])
cb = plt.colorbar(hh,orientation='horizontal', cax = cbaxes,extend='both',label='OHC anomaly [$10^{9}\,$J]')
cb.ax.tick_params(labelsize=9)
# fix title for bottom panels
ax[5,0].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
ax[5,1].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
ax[5,2].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
ax[5,3].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
# add names for columns
ax[0,0].text(0.5, 1.3, 'hindcast', transform=ax[0,0].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,1].text(0.5, 1.3, 'buoyancy', transform=ax[0,1].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,2].text(0.5, 1.3, 'wind', transform=ax[0,2].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,3].text(0.5, 1.3, 'buoyancy+wind', transform=ax[0,3].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
###Output
_____no_output_____
###Markdown
Stippling
###Code
plt.rcParams.update({'font.size': 12})
fig,ax=plt.subplots(nrows=6,ncols=4,figsize=(12,10),
subplot_kw = dict(projection=ccrs.PlateCarree(central_longitude=120)))
plt.subplots_adjust(bottom=0.1, top=0.9, left=0.03, right=0.97, wspace=0.04, hspace=0.05)
vmin=-1.5
vmax=1.5
cmap = plt.get_cmap('RdBu_r',len(np.arange(-1.2,1.2,0.1)))
nyears = 7
# need to change longitude values in order to have hatching plotted properly
lon = ohc_k003_700.nav_lon.values
lon[lon<0] = lon[lon<0]+360
lon=lon[300,:]
lat=ohc_k003_700.nav_lat[:,200].values
# loop over datasets and plot
j=0
ll = 0
for ds,name95,name5 in zip([ohc_k003_700,ohc_k004_700,ohc_k005_700,ohc_k004_700+ohc_k005_700],
['k003_p95','k004_p95','k005_p95','k004_k005_p95'],
['k003_p5','k004_p5','k005_p5','k004_k005_p5']):
p95 = dict(np.load('./data/fig3_' + name95 + '_two_tail_90p_base_1960_2012_n1000.npz'))
p5 = dict(np.load('./data/fig3_' + name5 + '_two_tail_90p_base_1960_2012_n1000.npz'))
# loop over time
jj = 0
for year,i in zip(np.arange(1960,2020,10),range(6)):
if i==5:
year2 = year+nyears-1
dummy_p95 = p95[str(nyears)+'yr']
dummy_p5 = p5[str(nyears)+'yr']
else:
year2 = year+9
dummy_p95 = p95['10yr']
dummy_p5 = p5['10yr']
#print(year,year2)
time_bnds = [str(year) + '-01-01',str(year2) + '-12-31']
hh = ax[jj,j].pcolormesh(lon,lat,(ds/1e09).sel(time_counter=slice(*time_bnds)).mean('time_counter'),
transform=ccrs.PlateCarree(),cmap=cmap,vmin=vmin,vmax=vmax)
print('ohc done, now stippling')
# significance
mask = (xr.zeros_like(ds.sel(time_counter=slice(*time_bnds)).mean('time_counter')).values)*np.nan
dummy = (ds.sel(time_counter=slice(*time_bnds)).mean('time_counter').values-dummy_p95)
mask[dummy>=0]=1
dummy = (ds.sel(time_counter=slice(*time_bnds)).mean('time_counter').values-dummy_p5)
mask[dummy<=0]=1
mask[ds[0,::].values==0] = np.nan
ax[jj,j].pcolor(lon,ds.nav_lat,mask,hatch='...',alpha=0.,transform=ccrs.PlateCarree())
print('stippling done')
ax[jj,j].gridlines(crs=ccrs.PlateCarree(),draw_labels=False,
xlocs=[40,80,120,160,200,-160],ylocs=range(-60,60,30))
ax[jj,j].coastlines(resolution='50m')
ax[jj,j].add_feature(cartopy.feature.LAND, color='lightgray')
ax[jj,j].set_title(str(year) + '-' + str(year2),fontsize=10,loc='left',weight='bold')
ax[jj,j].tick_params(axis="x", direction="out")
ax[jj,j].tick_params(axis="y",direction="out")
ax[jj,j].set_extent((30,211,-40,31),crs=ccrs.PlateCarree())
############ adjust labels for subplots #########################
gl = ax[jj,j].gridlines(crs=ccrs.PlateCarree(),draw_labels=True,
xlocs=[40,80,120,160,-160],ylocs=range(-60,60,30))
gl.ylabels_right = False
gl.xlabels_top = False
# ylabels
if j==0:
gl.yformatter = LATITUDE_FORMATTER
gl.ylabel_style = {'size':10}
else:
gl.ylabels_left = False
# xlabels
if jj==5:
gl.xformatter = LONGITUDE_FORMATTER
gl.xlabel_style = {'size':10}
else:
gl.xlabels_bottom = False
t = ax[jj,j].text(0.02, 0.82, string.ascii_lowercase[ll]+')', transform=ax[jj,j].transAxes,
size=11, weight='bold')
t.set_bbox(dict(facecolor='w',boxstyle='square,pad=0.2'))
jj=jj+1 # move to next row
ll=ll+1 # for labels
print(str(year))
# print(j)
j=j+1 # move to next column
# add colorbar
cbaxes = fig.add_axes([0.25, 0.05, 0.5, 0.015])
cb = plt.colorbar(hh,orientation='horizontal', cax = cbaxes,extend='both',label='OHC anomaly [$10^{9}\,$J]')
cb.ax.tick_params(labelsize=9)
# fix title for bottom panels
ax[5,0].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
ax[5,1].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
ax[5,2].set_title('2010-2016',fontsize=10,loc='left',weight='bold')
# add names for columns
ax[0,0].text(0.5, 1.3, 'hindcast', transform=ax[0,0].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,1].text(0.5, 1.3, 'buoyancy', transform=ax[0,1].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,2].text(0.5, 1.3, 'wind', transform=ax[0,2].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
ax[0,3].text(0.5, 1.3, 'buoyancy+wind', transform=ax[0,3].transAxes,
size=11, weight='bold',horizontalalignment='center',bbox=dict(facecolor='w',edgecolor='k'))
###Output
_____no_output_____ |
Scripts/WrGrowth.ipynb | ###Markdown
Scrape Data
###Code
#set years you want
YearStart = 2010
YearEnd = 2020
TBigDF = pd.DataFrame()
for i in range((YearStart - 1), (YearEnd + 1)):
tempurl1 = "https://www.pro-football-reference.com/years/"
tempurl2 = "/receiving.htm"
tempurl3 = str(i)
tempurlF = tempurl1 + tempurl3 + tempurl2
page = requests.get(tempurlF)
soup = BeautifulSoup(page.text, 'html.parser')
table = soup.find_all('table')
df = pd.read_html(str(table))[0]
df['Year'] = i
TBigDF = TBigDF.append(df)
###Output
_____no_output_____
###Markdown
Wrangle data
###Code
# here we make a copy of the df and we're going to use that so we don't have to scrape again any time we change things
BigDF = TBigDF.copy()
BigDF = BigDF.loc[(BigDF['Pos'] == "WR") | (BigDF['Pos'] == "wr") | (BigDF['Pos'].isnull())]
BigDF['Player'].replace('\*\+', '',inplace = True, regex = True)
BigDF['Player'].replace('\+', '',inplace = True, regex = True)
BigDF['Player'].replace('\*', '',inplace = True, regex = True)
BigDF['Player'] = BigDF['Player'].str.strip()
BigDF['Yds'] = BigDF['Yds'].astype(int)
BigDF.reset_index(inplace = True)
BigDF = BigDF.sort_values(['Player', 'Age'])
RnSDF = BigDF.groupby('Player').head(2)
temp = RnSDF['Player'].value_counts()
RnSDF = RnSDF[RnSDF['Player'].isin(temp[temp>1].index)]
RnSDF['FantasyPoints'] = (RnSDF['Yds'].astype(int) * .1) + (RnSDF['TD'].astype(int) * 6)
RookieDF = RnSDF.drop_duplicates(subset='Player', keep='first')
RookieDF = RookieDF.loc[RookieDF['Year']> (YearStart - 1)]
#set the threshold to whatever you want between -3 and 3. the bigger the number, the more elite the rookie seasons you will be looking at
threshold = 1.5
RookieDF = RookieDF[(stats.zscore(RookieDF.Yds) > threshold)]
SophDF = RnSDF.drop_duplicates(subset='Player', keep='last')
RnSDF = pd.merge(RookieDF, SophDF, how="inner", on='Player')
RnSDF = RnSDF.loc[RnSDF['GS_y'].astype(int)>10]
RnSDF['Growth'] = RnSDF['Yds_y'] - RnSDF['Yds_x']
RnSDF['PercentGrowth'] = ((RnSDF['Yds_y'] - RnSDF['Yds_x']) / RnSDF['Yds_x'])*100
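# Hedged addition: quick numeric summary of how rookie production relates to year-2 change.
print(RnSDF[['Yds_x', 'Yds_y', 'Growth', 'PercentGrowth']].describe())
print("corr(rookie yds, growth):", round(RnSDF['Yds_x'].corr(RnSDF['Growth']), 3))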
###Output
_____no_output_____
###Markdown
Analyze data
###Code
RnSDF.Growth.mean()
#regress year 2 on to year 1
sns.lmplot(x='Yds_x',y='Yds_y', data=RnSDF,fit_reg=True, aspect=1.5)
plt.xlabel("Rookie Season")
plt.ylabel("Sophomore Season")
plt.title("Growth of wr recieving yds from year 1 to year 2")
slope, intercept, r_value, pv, se = stats.linregress(RnSDF.Yds_x.values,RnSDF.Yds_y.values)
print('beta = ', slope,' , ', 'intercept = ', intercept)
# regress growth onto year 1
sns.lmplot(x='Yds_x',y='Growth', data=RnSDF,fit_reg=True, aspect=1.5)
plt.xlabel("Rookie Season")
plt.ylabel("Growth")
plt.title("Growth of wr recieving yds predicted by year 1 yds")
slope, intercept1, r_value, pv, se = stats.linregress(RnSDF.Yds_x.values, RnSDF.Growth.values)
print('beta = ', slope,' , ', 'intercept = ', intercept1)
#regress growth as percentage onto year 1
sns.lmplot(x='Yds_x',y='PercentGrowth', data=RnSDF,fit_reg=True, aspect=1.5)
plt.xlabel("Rookie Season")
plt.ylabel("Percent Growth")
plt.title("Percent Growth of wr recieving yds predicted by year 1 yds")
slope, intercept, r_value, pv, se = stats.linregress(RnSDF.Yds_x.values, RnSDF.PercentGrowth.values)
print('beta = ', slope,' , ', 'intercept = ', intercept)
#regress growth of fantasy points year 2 onto year 1
sns.lmplot(x='FantasyPoints_x',y='FantasyPoints_y', data=RnSDF,fit_reg=True, aspect=1.5)
plt.xlabel("Rookie Season Points")
plt.ylabel("Sophomore Season Points")
plt.title("Second season fantasy points predicted by Rookie Season fantasy points")
slope, intercept, r_value, pv, se = stats.linregress(RnSDF.FantasyPoints_x.values,RnSDF.FantasyPoints_y.values)
print('beta = ', slope,' , ', 'intercept = ', intercept)
###Output
beta = 0.41445050337679895 , intercept = 94.57034984977835
|
PLGA_analysis/PLGA_water.ipynb | ###Markdown
PLGA/water system analysis N = 6
###Code
# For the right Rg calculation using MDAnalysis, use the trajectory with PBC removed
n6_plga_wat = mda.Universe("n6plga_wat/n6plgaonly_wat.pdb", "n6plga_wat/nowat_n6plga.xtc")
n6_plga_wat.trajectory
len(n6_plga_wat.trajectory)
#Select the polymer heavy atoms
plga_n6wat = n6_plga_wat.select_atoms("resname sPLG PLG tPLG and not type H")
crv_n6plga_wat = pers_length(plga_n6wat,6)
crv_n6plga_wat
com_bond = np.zeros(shape=(1,18000))
count = 0
for ts in n6_plga_wat.trajectory[0:18000]:
n6_mon1_wat = n6_plga_wat.select_atoms("resid 1")
n6_mon2_wat = n6_plga_wat.select_atoms("resid 2")
oo_len = mda.analysis.distances.distance_array(n6_mon1_wat.center_of_mass(), n6_mon2_wat.center_of_mass(),
box=n6_plga_wat.trajectory.ts.dimensions)
com_bond[0, count] = oo_len
count += 1
com_bond
lb_avg_pn6 = np.mean(com_bond)
lb_avg_pn6
np.std(com_bond)
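# lb_avg_pn6 is the average monomer-monomer center-of-mass separation (Angstrom, MDAnalysis units);
# it is reused below to convert the bond-index separations from the cosine-correlation analysis into
# arc lengths (e.g., blen_wat = cor_n6plga_wat[3] * lb_avg_pn6) for the persistence-length fits.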
###Output
_____no_output_____
###Markdown
Radius of Gyration vs. time N = 6 PLGA/water system
###Code
n6plga_rgens_wat, cor_n6plga_wat, N6plga_cos_wat, rgwat_n6plga = get_rg_pers_poly(plga_n6wat, n6_plga_wat, 0, 18000)
n6plga_rgens_wat[0].shape
cor_n6plga_wat[3]
N6plga_cos_wat
rgwat_n6plga
np.std(n6plga_rgens_wat)
trj_len = np.arange(18000)
#trj_len += 1
trj_len
plt.figure(figsize=(7,7))
plt.title(r'PLGA Radius of Gyration', fontsize=18, y=1.01)
plt.xlabel(r'Time [ns]', fontsize=15)
plt.ylabel(r'$R_{g}$ [nm]', fontsize=15)
plt.plot(trj_len/100, n6plga_rgens_wat[0]/10,linewidth=2, color='#CCBE9F')
plt.tick_params(labelsize=14)
plt.legend(['N = 6 in water'], frameon=False, fontsize=14)
#plt.text(127, 0.96,r'N = 6 in water', fontsize=18, color='#1F2E69', family='Arial')
plt.xlim(0,180)
plt.ylim(0.2,2)
###Output
_____no_output_____
###Markdown
Relaxation times vs monomer length
###Code
# Key variables
# def pos_bead_autocorr_RA(polymer_atoms, universe, n_monomers, t_corr, window_shift, start, end):
n_mon = 6
start = 0
end = 18000
t_corr = 1000
window_shift = 20
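# Illustrative note (an assumption based on the /100 frame-to-ns conversion used in the plots below):
# with frames written every 0.01 ns, t_corr = 1000 frames corresponds to a 10 ns maximum lag and
# window_shift = 20 frames moves the running-average time origin by 0.2 ns between windows.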
s_time = timeit.default_timer()
tcRA_plgan6wat, tcSUM_plgan6wat = pos_bead_autocorr_RA(plga_n6wat, n6_plga_wat, n_mon, t_corr, window_shift, start, end)
timeit.default_timer() - s_time
tcSUM_plgan6wat.shape
###Output
_____no_output_____
###Markdown
Fitting autocorrelation data
###Code
xdata_n6wat = tcRA_plgan6wat[1]/100
ydata_n6wat = tcRA_plgan6wat[0]
ydata_n6wat.shape
xdata_n6wat.shape
s_n6 =[2 for i in range(len(xdata_n6wat))]
plt.figure(figsize=(8,8))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.yscale('symlog', linthreshy=0.5)
plt.xscale('symlog')
plt.grid(b=True)
plt.xlim(0,30)
plt.ylim(-1,1)
plt.tick_params(labelsize=18, direction='in', which='both', width=1, length=10)
rouse_relax(1, 2, 6)
zimm_relax(1, 2, 0.1, 6)
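# The rouse_relax/zimm_relax helpers are defined elsewhere in this analysis. Purely as an
# illustrative stand-in (an assumption, not the notebook's actual implementation), a normalized
# bead-position autocorrelation can be modeled as a mode sum C(t) = sum_p A_p * exp(-t / tau_p)
# with tau_p = tau_1 / p**2 for Rouse-like dynamics:
def rouse_relax_sketch(t, tau1, n_mon):
    """Minimal Rouse-style mode sum over odd modes, normalized so C(0) = 1 (illustration only)."""
    modes = np.arange(1, n_mon, 2)        # odd modes p = 1, 3, 5, ...
    amps = 1.0 / modes ** 2               # mode amplitudes fall off as 1/p^2
    weights = amps / amps.sum()           # normalize so the t = 0 value is exactly 1
    return np.sum(weights * np.exp(-t * modes ** 2 / tau1))
# rouse_relax_sketch(1, 2, 6)  # same example arguments as the rouse_relax call above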
def res_polyn6wat(variabls, xnp, ynp):
hs_np = variabls['h_star']
tr1_np = variabls['t_first']
n_m = 6
testnp = []
for i in range(len(xnp)):
model_ynp = zimm_relax(xnp[i], tr1_np, hs_np, n_m)
#model_ynp = rouse_relax(xnp[i], tr1_np, n_m)
testnp.append(ynp[i] - model_ynp)
tt_n30 = np.array(testnp)
return tt_n30
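# lmfit's Minimizer.leastsq() minimizes the sum of squares of the residual vector returned above,
# so the objective only needs to return (data - model) at each lag time.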
#x1 = np.array([0,0])
#pfit, pcov, infodict, errmsg, success = leastsq(res_poly, x1, args=(xdata, ydata), full_output=1)
from lmfit import Minimizer, Parameters, report_fit
pfit_n6wat = Parameters()
pfit_n6wat.add(name='h_star', value=0, min=0, max=0.26, vary=True)
pfit_n6wat.add(name='t_first', value=2)
mini_n6wat = Minimizer(res_polyn6wat, pfit_n6wat, fcn_args=(xdata_n6wat, ydata_n6wat))
out_n6wat = mini_n6wat.leastsq()
#bfit_n10 = ydata_n10ace + out_n10ace.residual
report_fit(out_n6wat.params)
out_n6wat.params
twat_n6plga = []
for i in range(len(xdata_n6wat)):
twat_n6plga.append(zimm_relax(xdata_n6wat[i], 0.28, 0.26, n_mon))
s_n6 =[2 for i in range(len(xdata_n6wat))]
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.plot(xdata_n6wat, twat_n6plga, color='#CCBE9F')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
rwat_msen6 = np.array([tcRA_plgan6wat[0] - np.array(twat_n6plga)])
rwat_msen6
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
s_n6 =[2 for i in range(len(xdata_n6wat))]
plt.scatter(xdata_n6wat, rwat_msen6, color='#CCBE9F', s=s_n6)
plt.title(r'Relaxation time Fitting Residuals PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('Residuals', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
###Output
_____no_output_____
###Markdown
Correlation values at each arc length for the whole 180 ns trajectory, N = 6 PLGA/water system
###Code
# x values
blen_wat = cor_n6plga_wat[3]*lb_avg_pn6
#nt_tt[0] = 0
blen_wat
mk_n6p_wat = cor_n6plga_wat[1]/cor_n6plga_wat[0]
mk_n6p_wat
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='b', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
# All the points give the best fits for N = 6 PLGA in water
n6_blksplga_wat , n6plga_lpwat = bavg_pers_cnt(5, plga_n6wat, n6_plga_wat, lb_avg_pn6, 3, 3000 , 18000)
n6_blksplga_wat
n6plga_lpwat
n6plga_lpwat[2]
np.mean(n6plga_lpwat[3])
def line_fit(slope, x):
return slope*x
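# For a worm-like chain, <cos(theta(s))> = exp(-s / Lp), so the slope of ln<cos(theta)> versus
# arc length s fitted with line_fit is expected to be -1/Lp, giving Lp = -1/slope.
# Illustration only: a slope of -0.0625 per Angstrom would correspond to Lp = 16 Angstrom.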
blen_wat
gg_n6plga_wat = line_fit(np.mean(n6plga_lpwat[2]),blen_wat)
gg_n6plga_wat
###Output
_____no_output_____
###Markdown
Block averaged Radius of gyration and persistence length, N = 6 PLGA/water system
###Code
np.mean(n6_blksplga_wat["Avg persistence length"])
np.std(n6_blksplga_wat["Avg persistence length"])
np.mean(n6_blksplga_wat["Avg Radius of gyration"])
np.std(n6_blksplga_wat["Avg Radius of gyration"])
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='b', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(blen_wat, gg_n6plga_wat, color='b')
plt.title(r'Ensemble Averaged ln(Cosine $\theta$) in water', fontsize=15, y=1.01)
plt.xlabel(r'Bond Length', fontsize=15)
plt.ylabel(r'ln$\left< Cos(\theta)\right >$', fontsize=15)
#plt.ylim(-1.9,0)
font = font_manager.FontProperties(family='Arial', style='normal', size='14')
plt.tick_params(labelsize=14)
plt.text(0.5, -1.82,r'$N_{PLGA}$ = 6: $L_{p}$ = 16.0 $\AA$ ± 1.73 $\AA$', fontsize=15, color='b')
#plt.text(5,-0.15,r'R$^{2}$ = 0.98', fontsize=15, color='blue')
rgplga_olig_wat = pd.DataFrame(data=n6_blksplga_wat["Avg Radius of gyration"]
, columns=['$R_{g}$ [Angstrom] N = 6 PLGA in water'])
rgplga_olig_wat
pers_plgat_wat = pd.DataFrame(data=n6_blksplga_wat["Avg persistence length"]
, columns=[r"$L_{p}$ [Angstrom] N = 6 PLGA in water"])
pers_plgat_wat
###Output
_____no_output_____
###Markdown
N = 8 PLGA/water system
###Code
# For the right Rg calculation using MDAnalysis, use the trajectory with PBC removed
n8_plga_wat = mda.Universe("n8plga_wat/n8plgaonly_wat.pdb", "n8plga_wat/nowat_n8plga.xtc")
n8_plga_wat.trajectory
len(n8_plga_wat.trajectory)
#Select the polymer heavy atoms
plga_n8wat = n8_plga_wat.select_atoms("resname sPLG PLG tPLG and not type H")
crv_n8plga_wat = pers_length(plga_n8wat,8)
crv_n8plga_wat
com_bond_n8wat = np.zeros(shape=(1,18000))
count = 0
for ts in n8_plga_wat.trajectory[0:18000]:
n8_mon1_wat = n8_plga_wat.select_atoms("resid 1")
n8_mon2_wat = n8_plga_wat.select_atoms("resid 2")
oo_len = mda.analysis.distances.distance_array(n8_mon1_wat.center_of_mass(), n8_mon2_wat.center_of_mass(),
box=n8_plga_wat.trajectory.ts.dimensions)
com_bond_n8wat[0, count] = oo_len
count += 1
com_bond
np.std(com_bond)
lb_avg_pn6 = np.mean(com_bond)
lb_avg_pn6
np.mean(com_bond_n8wat)
np.std(com_bond_n8wat)
###Output
_____no_output_____
###Markdown
Radius of Gyration vs. time N = 8 PLGA/water system
###Code
n8plga_rgens_wat, cor_n8plga_wat, N8plga_cos_wat, rgwat_n8plga = get_rg_pers_poly(plga_n8wat, n8_plga_wat, 0, 18000)
n8plga_rgens_wat[0].shape
cor_n8plga_wat[3]
N8plga_cos_wat
rgwat_n8plga
np.std(n8plga_rgens_wat)
plt.figure(figsize=(7,7))
plt.title(r'PLGA Radius of Gyration', fontsize=18, y=1.01)
plt.xlabel(r'Time [ns]', fontsize=15)
plt.ylabel(r'$R_{g}$ [nm]', fontsize=15)
plt.plot(trj_len/100, n6plga_rgens_wat[0]/10,linewidth=2, color='#CCBE9F')
plt.plot(trj_len/100, n8plga_rgens_wat[0]/10,linewidth=2, color='#601A4A')
plt.tick_params(labelsize=14)
plt.legend(['N = 6 in water','N = 8 in water'], frameon=False, fontsize=14)
#plt.text(127, 0.96,r'N = 6 in water', fontsize=18, color='#1F2E69', family='Arial')
plt.xlim(0,180)
plt.ylim(0.2,2)
###Output
_____no_output_____
###Markdown
Relaxation times vs monomer length
###Code
# Key variables
# def pos_bead_autocorr_RA(polymer_atoms, universe, n_monomers, t_corr, window_shift, start, end):
n8_mon = 8
start = 0
end = 18000
t_corr = 1000
window_shift = 20
s_time = timeit.default_timer()
tcRA_plgan8wat, tcSUM_plgan8wat = pos_bead_autocorr_RA(plga_n8wat, n8_plga_wat, n8_mon, t_corr, window_shift, start, end)
timeit.default_timer() - s_time
tcSUM_plgan8wat.shape
###Output
_____no_output_____
###Markdown
Fitting autocorrelation data
###Code
xdata_n8wat = tcRA_plgan8wat[1]/100
ydata_n8wat = tcRA_plgan8wat[0]
ydata_n8wat.shape
xdata_n8wat.shape
s_n8 =[2 for i in range(len(xdata_n8wat))]
plt.figure(figsize=(8,8))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.yscale('symlog', linthreshy=0.5)
plt.xscale('symlog')
plt.grid(b=True)
plt.xlim(0,30)
plt.ylim(-1,1)
plt.tick_params(labelsize=18, direction='in', which='both', width=1, length=10)
def res_polyn8wat(variabls, xnp, ynp):
hs_np = variabls['h_star']
tr1_np = variabls['t_first']
n_m = 8
testnp = []
for i in range(len(xnp)):
model_ynp = zimm_relax(xnp[i], tr1_np, hs_np, n_m)
#model_ynp = rouse_relax(xnp[i], tr1_np, n_m)
testnp.append(ynp[i] - model_ynp)
tt_n30 = np.array(testnp)
return tt_n30
#x1 = np.array([0,0])
#pfit, pcov, infodict, errmsg, success = leastsq(res_poly, x1, args=(xdata, ydata), full_output=1)
pfit_n8wat = Parameters()
pfit_n8wat.add(name='h_star', value=0, min=0, max=0.26, vary=True)
pfit_n8wat.add(name='t_first', value=2)
mini_n8wat = Minimizer(res_polyn8wat, pfit_n8wat, fcn_args=(xdata_n8wat, ydata_n8wat))
out_n8wat = mini_n8wat.leastsq()
#bfit_n10 = ydata_n10ace + out_n10ace.residual
report_fit(out_n8wat.params)
out_n8wat.params
twat_n8plga = []
for i in range(len(xdata_n8wat)):
twat_n8plga.append(zimm_relax(xdata_n8wat[i], 0.53, 0.26, n8_mon))
s_n8 =[2 for i in range(len(xdata_n8wat))]
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.plot(xdata_n6wat, twat_n6plga, color='#CCBE9F')
plt.plot(xdata_n8wat, twat_n8plga, color='#601A4A')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
rwat_msen8 = np.array([tcRA_plgan8wat[0] - np.array(twat_n8plga)])
rwat_msen8
plt.figure(figsize=(10,10))
s_n6 =[2 for i in range(len(xdata_n8wat))]
plt.rcParams["font.family"] = "Arial"
plt.scatter(xdata_n6wat, rwat_msen6, color='#CCBE9F', s=s_n6)
plt.scatter(xdata_n8wat, rwat_msen8, color='#601A4A', s=s_n8)
plt.title(r'Relaxation time Fitting Residuals PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('Residuals', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
###Output
_____no_output_____
###Markdown
Correlation values at each arc length for the whole 180 ns trajectory, N = 8 PLGA/water system
###Code
# x values
blen_n8wat = cor_n8plga_wat[3]*lb_avg_pn6
#nt_tt[0] = 0
blen_n8wat
mk_n8p_wat = cor_n8plga_wat[1]/cor_n8plga_wat[0]
mk_n8p_wat
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
# All the points give the best fits for N = 8 PLGA in water
n8_blksplga_wat , n8plga_lpwat = bavg_pers_cnt(5, plga_n8wat, n8_plga_wat, lb_avg_pn6, 4, 3000 , 18000)
n8_blksplga_wat
n8plga_lpwat
n8plga_lpwat[2]
np.mean(n8plga_lpwat[3])
blen_n8wat
gg_n8plga_wat = line_fit(np.mean(n8plga_lpwat[2]),blen_n8wat)
gg_n6plga_n8wat = line_fit(np.mean(n6plga_lpwat[2]),blen_n8wat)
gg_n8plga_wat
###Output
_____no_output_____
###Markdown
Block averaged Radius of gyration and persistence length, N = 8 PLGA/water system
###Code
np.mean(n8_blksplga_wat["Avg persistence length"])
np.std(n8_blksplga_wat["Avg persistence length"])
np.mean(n8_blksplga_wat["Avg Radius of gyration"])
np.std(n8_blksplga_wat["Avg Radius of gyration"])
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(blen_n8wat, gg_n6plga_n8wat, color='#CCBE9F')
plt.plot(blen_n8wat, gg_n8plga_wat, color='#601A4A')
plt.title(r'Ensemble Averaged ln(Cosine $\theta$) in water', fontsize=15, y=1.01)
plt.xlabel(r'Bond Length', fontsize=15)
plt.ylabel(r'ln$\left< Cos(\theta)\right >$', fontsize=15)
plt.ylim(-7,1)
plt.xlim(0,165)
font = font_manager.FontProperties(family='Arial', style='normal', size='14')
#plt.legend([r'$N_{PEG}$ = 6: $L_{p}$ = 10.7 $\AA$ ± 1.62 $\AA$'], loc=3, frameon=0, fontsize=14, prop=font)
plt.tick_params(labelsize=14)
plt.text(0.5, -6.92,r'$N_{PLGA}$ = 6: $L_{p}$ = 16.0 $\AA$ ± 1.73 $\AA$', fontsize=15, color='#CCBE9F')
plt.text(0.5, -6.54,r'$N_{PLGA}$ = 8: $L_{p}$ = 17.2 $\AA$ ± 2.13 $\AA$', fontsize=15, color='#601A4A')
rgplga_olig_wat[r"$R_{g}$ [Angstrom] N = 8 PLGA in water"] = n8_blksplga_wat["Avg Radius of gyration"]
rgplga_olig_wat
pers_plgat_wat[r"$L_{p}$ [Angstrom] N = 8 PLGA in water"] = n8_blksplga_wat["Avg persistence length"]
pers_plgat_wat
###Output
_____no_output_____
###Markdown
N = 10 PLGA/water system
###Code
# For the right Rg calculation using MDAnalysis, use the trajectory with PBC removed
n10_plga_wat = mda.Universe("n10plga_wat/n10plgaonly_wat.pdb", "n10plga_wat/nowat_n10plga.xtc")
n10_plga_wat.trajectory
len(n10_plga_wat.trajectory)
#Select the polymer heavy atoms
plga_n10wat = n10_plga_wat.select_atoms("resname sPLG PLG tPLG and not type H")
crv_n10plga_wat = pers_length(plga_n10wat,10)
crv_n10plga_wat
com_bond_n10wat = np.zeros(shape=(1,18000))
count = 0
for ts in n10_plga_wat.trajectory[0:18000]:
n10_mon1_wat = n10_plga_wat.select_atoms("resid 1")
n10_mon2_wat = n10_plga_wat.select_atoms("resid 2")
oo_len = mda.analysis.distances.distance_array(n10_mon1_wat.center_of_mass(), n10_mon2_wat.center_of_mass(),
box=n10_plga_wat.trajectory.ts.dimensions)
com_bond_n10wat[0, count] = oo_len
count += 1
com_bond
np.std(com_bond)
lb_avg_pn6 = np.mean(com_bond)
lb_avg_pn6
np.mean(com_bond_n10wat)
np.std(com_bond_n10wat)
###Output
_____no_output_____
###Markdown
Radius of Gyration vs. time N = 10 PLGA/water system
###Code
n10plga_rgens_wat, cor_n10plga_wat, N10plga_cos_wat, rgwat_n10plga = get_rg_pers_poly(plga_n10wat, n10_plga_wat, 0, 18000)
n10plga_rgens_wat[0].shape
cor_n10plga_wat[3]
N10plga_cos_wat
rgwat_n10plga
np.std(n10plga_rgens_wat)
plt.figure(figsize=(7,7))
plt.title(r'PLGA Radius of Gyration', fontsize=18, y=1.01)
plt.xlabel(r'Time [ns]', fontsize=15)
plt.ylabel(r'$R_{g}$ [nm]', fontsize=15)
plt.plot(trj_len/100, n6plga_rgens_wat[0]/10,linewidth=2, color='#CCBE9F')
plt.plot(trj_len/100, n8plga_rgens_wat[0]/10,linewidth=2, color='#601A4A')
plt.plot(trj_len/100, n10plga_rgens_wat[0]/10,linewidth=2, color='#2B6322')
plt.tick_params(labelsize=14)
plt.legend(['N = 6 in water','N = 8 in water','N = 10 in water'], frameon=False, fontsize=14)
#plt.text(127, 0.96,r'N = 6 in water', fontsize=18, color='#1F2E69', family='Arial')
plt.xlim(0,180)
plt.ylim(0.2,2)
###Output
_____no_output_____
###Markdown
Relaxation times vs monomer length
###Code
# Key variables
# def pos_bead_autocorr_RA(polymer_atoms, universe, n_monomers, t_corr, window_shift, start, end):
n10_mon = 10
start = 0
end = 18000
t_corr = 1000
window_shift = 20
s_time = timeit.default_timer()
tcRA_plgan10wat, tcSUM_plgan10wat = pos_bead_autocorr_RA(plga_n10wat, n10_plga_wat, n10_mon, t_corr, window_shift, start, end)
timeit.default_timer() - s_time
tcSUM_plgan10wat.shape
###Output
_____no_output_____
###Markdown
Fitting autocorrelation data
###Code
xdata_n10wat = tcRA_plgan10wat[1]/100
ydata_n10wat = tcRA_plgan10wat[0]
xdata_n10wat.shape
ydata_n10wat.shape
s_n10 =[2 for i in range(len(xdata_n10wat))]
plt.figure(figsize=(8,8))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10)
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.yscale('symlog', linthreshy=0.5)
plt.xscale('symlog')
plt.grid(b=True)
plt.xlim(0,30)
plt.ylim(-1,1)
plt.tick_params(labelsize=18, direction='in', which='both', width=1, length=10)
def res_polyn10wat(variabls, xnp, ynp):
hs_np = variabls['h_star']
tr1_np = variabls['t_first']
n_m = 10
testnp = []
for i in range(len(xnp)):
model_ynp = zimm_relax(xnp[i], tr1_np, hs_np, n_m)
#model_ynp = rouse_relax(xnp[i], tr1_np, n_m)
testnp.append(ynp[i] - model_ynp)
tt_n30 = np.array(testnp)
return tt_n30
#x1 = np.array([0,0])
#pfit, pcov, infodict, errmsg, success = leastsq(res_poly, x1, args=(xdata, ydata), full_output=1)
pfit_n10wat = Parameters()
pfit_n10wat.add(name='h_star', value=0, min=0, max=0.26, vary=True)
pfit_n10wat.add(name='t_first', value=2)
mini_n10wat = Minimizer(res_polyn10wat, pfit_n10wat, fcn_args=(xdata_n10wat, ydata_n10wat))
out_n10wat = mini_n10wat.leastsq()
#bfit_n10 = ydata_n10ace + out_n10ace.residual
report_fit(out_n10wat.params)
out_n10wat.params
twat_n10plga = []
for i in range(len(xdata_n10wat)):
twat_n10plga.append(zimm_relax(xdata_n10wat[i], 1.82, 3.24e-6, n10_mon))
s_n10 =[2 for i in range(len(xdata_n10wat))]
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10)
plt.plot(xdata_n6wat, twat_n6plga, color='#CCBE9F')
plt.plot(xdata_n8wat, twat_n8plga, color='#601A4A')
plt.plot(xdata_n10wat, twat_n10plga, color='#2B6322')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
rwat_msen10 = np.array([tcRA_plgan10wat[0] - np.array(twat_n10plga)])
rwat_msen10
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
s_n10 =[2 for i in range(len(xdata_n10wat))]
plt.scatter(xdata_n6wat, rwat_msen6, color='#CCBE9F', s=s_n6)
plt.scatter(xdata_n8wat, rwat_msen8, color='#601A4A', s=s_n8)
plt.scatter(xdata_n10wat, rwat_msen10, color='#2B6322', s=s_n10)
plt.title(r'Relaxation time Fitting Residuals PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('Residuals', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
###Output
_____no_output_____
###Markdown
Correlation values at each arc length for the whole 180 ns trajectory, N = 10 PLGA/water system
###Code
# x values
blen_n10wat = cor_n10plga_wat[3]*lb_avg_pn6
#nt_tt[0] = 0
blen_n10wat
mk_n10p_wat = cor_n10plga_wat[1]/cor_n10plga_wat[0]
mk_n10p_wat
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
# All the points give the best fits for N = 10 PLGA in water
n10_blksplga_wat , n10plga_lpwat = bavg_pers_cnt(5, plga_n10wat, n10_plga_wat, lb_avg_pn6, 4, 3000 , 18000)
n10_blksplga_wat
n10plga_lpwat
n10plga_lpwat[2]
np.mean(n10plga_lpwat[3])
blen_n10wat
gg_n10plga_wat = line_fit(np.mean(n10plga_lpwat[2]),blen_n10wat)
gg_n6plga_n10wat = line_fit(np.mean(n6plga_lpwat[2]),blen_n10wat)
gg_n8plga_n10wat = line_fit(np.mean(n8plga_lpwat[2]),blen_n10wat)
gg_n10plga_wat
###Output
_____no_output_____
###Markdown
Block averaged Radius of gyration and persistence length, N = 10 PLGA/water system
###Code
np.mean(n10_blksplga_wat["Avg persistence length"])
np.std(n10_blksplga_wat["Avg persistence length"])
np.mean(n10_blksplga_wat["Avg Radius of gyration"])
np.std(n10_blksplga_wat["Avg Radius of gyration"])
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(blen_n10wat, gg_n6plga_n10wat, color='#CCBE9F')
plt.plot(blen_n10wat, gg_n8plga_n10wat, color='#601A4A')
plt.plot(blen_n10wat, gg_n10plga_wat, color='#2B6322')
plt.title(r'Ensemble Averaged ln(Cosine $\theta$) in water', fontsize=15, y=1.01)
plt.xlabel(r'Bond Length', fontsize=15)
plt.ylabel(r'ln$\left< Cos(\theta)\right >$', fontsize=15)
plt.ylim(-7,1)
plt.xlim(0,165)
font = font_manager.FontProperties(family='Arial', style='normal', size='14')
#plt.legend([r'$N_{PEG}$ = 6: $L_{p}$ = 10.7 $\AA$ ± 1.62 $\AA$'], loc=3, frameon=0, fontsize=14, prop=font)
plt.tick_params(labelsize=14)
plt.text(0.5, -6.92,r'$N_{PLGA}$ = 6: $L_{p}$ = 16.0 $\AA$ ± 1.73 $\AA$', fontsize=15, color='#CCBE9F')
plt.text(0.5, -6.54,r'$N_{PLGA}$ = 8: $L_{p}$ = 17.2 $\AA$ ± 2.13 $\AA$', fontsize=15, color='#601A4A')
plt.text(0.5, -6.14,r'$N_{PLGA}$ = 10: $L_{p}$ = 15.8 $\AA$ ± 1.01 $\AA$', fontsize=15, color='#2B6322')
rgplga_olig_wat[r"$R_{g}$ [Angstrom] N = 10 PLGA in water"] = n10_blksplga_wat["Avg Radius of gyration"]
rgplga_olig_wat
pers_plgat_wat[r"$L_{p}$ [Angstrom] N = 10 PLGA in water"] = n10_blksplga_wat["Avg persistence length"]
pers_plgat_wat
###Output
_____no_output_____
###Markdown
N = 20 PLGA/water system
###Code
# For the right Rg calculation using MDAnalysis, use the trajectory with PBC removed
n20_plga_wat = mda.Universe("n20plga_wat/n20plgaonly_wat.pdb", "n20plga_wat/nowat_n20plga.xtc")
n20_plga_wat.trajectory
len(n20_plga_wat.trajectory)
#Select the polymer heavy atoms
plga_n20wat = n20_plga_wat.select_atoms("resname sPLG PLG tPLG and not type H")
crv_n20plga_wat = pers_length(plga_n20wat,20)
crv_n20plga_wat
com_bond_n20wat = np.zeros(shape=(1,18000))
count = 0
for ts in n20_plga_wat.trajectory[0:18000]:
n20_mon1_wat = n20_plga_wat.select_atoms("resid 1")
n20_mon2_wat = n20_plga_wat.select_atoms("resid 2")
oo_len = mda.analysis.distances.distance_array(n20_mon1_wat.center_of_mass(), n20_mon2_wat.center_of_mass(),
box=n20_plga_wat.trajectory.ts.dimensions)
com_bond_n20wat[0, count] = oo_len
count += 1
com_bond
np.std(com_bond)
lb_avg_pn6 = np.mean(com_bond)
lb_avg_pn6
np.mean(com_bond_n20wat)
np.std(com_bond_n20wat)
###Output
_____no_output_____
###Markdown
Radius of Gyration vs. time N = 20 PLGA/water system
###Code
n20plga_rgens_wat, cor_n20plga_wat, N20plga_cos_wat, rgwat_n20plga = get_rg_pers_poly(plga_n20wat, n20_plga_wat, 0, 18000)
n20plga_rgens_wat[0].shape
cor_n20plga_wat[3]
N20plga_cos_wat
rgwat_n20plga
np.std(n20plga_rgens_wat)
plt.figure(figsize=(7,7))
plt.title(r'PLGA Radius of Gyration', fontsize=18, y=1.01)
plt.xlabel(r'Time [ns]', fontsize=15)
plt.ylabel(r'$R_{g}$ [nm]', fontsize=15)
plt.plot(trj_len/100, n6plga_rgens_wat[0]/10,linewidth=2, color='#CCBE9F')
plt.plot(trj_len/100, n8plga_rgens_wat[0]/10,linewidth=2, color='#601A4A')
plt.plot(trj_len/100, n10plga_rgens_wat[0]/10,linewidth=2, color='#2B6322')
plt.plot(trj_len/100, n20plga_rgens_wat[0]/10,linewidth=2, color='#562A8B')
plt.tick_params(labelsize=14)
plt.legend(['N = 6 in water','N = 8 in water','N = 10 in water','N = 20 in water'], frameon=False, fontsize=14)
#plt.text(127, 0.96,r'N = 6 in water', fontsize=18, color='#1F2E69', family='Arial')
plt.xlim(0,180)
plt.ylim(0.2,2)
###Output
_____no_output_____
###Markdown
Relaxation times vs monomer length
###Code
# Key variables
# def pos_bead_autocorr_RA(polymer_atoms, universe, n_monomers, t_corr, window_shift, start, end):
n20_mon = 20
start = 0
end = 18000
t_corr = 2500
window_shift = 20
s_time = timeit.default_timer()
tcRA_plgan20wat, tcSUM_plgan20wat = pos_bead_autocorr_RA(plga_n20wat, n20_plga_wat, n20_mon, t_corr, window_shift, start, end)
timeit.default_timer() - s_time
tcSUM_plgan20wat.shape
###Output
_____no_output_____
###Markdown
Fitting autocorrelation data
###Code
xdata_n20wat = tcRA_plgan20wat[1]/100
ydata_n20wat = tcRA_plgan20wat[0]
xdata_n20wat.shape
ydata_n20wat.shape
s_n20 =[2 for i in range(len(xdata_n20wat))]
plt.figure(figsize=(8,8))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10)
plt.scatter(tcRA_plgan20wat[1]/100, tcRA_plgan20wat[0], color='#562A8B', s=s_n20)
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.yscale('symlog', linthreshy=0.5)
plt.xscale('symlog')
plt.grid(b=True)
plt.xlim(0,30)
plt.ylim(-1,1)
plt.tick_params(labelsize=18, direction='in', which='both', width=1, length=10)
def res_polyn20wat(variabls, xnp, ynp):
hs_np = variabls['h_star']
tr1_np = variabls['t_first']
n_m = 20
testnp = []
for i in range(len(xnp)):
model_ynp = zimm_relax(xnp[i], tr1_np, hs_np, n_m)
#model_ynp = rouse_relax(xnp[i], tr1_np, n_m)
testnp.append(ynp[i] - model_ynp)
tt_n30 = np.array(testnp)
return tt_n30
#x1 = np.array([0,0])
#pfit, pcov, infodict, errmsg, success = leastsq(res_poly, x1, args=(xdata, ydata), full_output=1)
pfit_n20wat = Parameters()
pfit_n20wat.add(name='h_star', value=0, min=0, max=0.26, vary=True)
pfit_n20wat.add(name='t_first', value=2)
mini_n20wat = Minimizer(res_polyn20wat, pfit_n20wat, fcn_args=(xdata_n20wat, ydata_n20wat))
out_n20wat = mini_n20wat.leastsq()
#bfit_n10 = ydata_n10ace + out_n10ace.residual
report_fit(out_n20wat.params)
out_n20wat.params
twat_n20plga = []
for i in range(len(xdata_n20wat)):
twat_n20plga.append(zimm_relax(xdata_n20wat[i], 0.96, 0.26, n20_mon))
s_n20 =[2 for i in range(len(xdata_n20wat))]
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10)
plt.scatter(tcRA_plgan20wat[1]/100, tcRA_plgan20wat[0], color='#562A8B', s=s_n20)
plt.plot(xdata_n6wat, twat_n6plga, color='#CCBE9F')
plt.plot(xdata_n8wat, twat_n8plga, color='#601A4A')
plt.plot(xdata_n10wat, twat_n10plga, color='#2B6322')
plt.plot(xdata_n20wat, twat_n20plga, color='#562A8B')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
rwat_msen20 = np.array([tcRA_plgan20wat[0] - np.array(twat_n20plga)])
rwat_msen20
plt.figure(figsize=(10,10))
s_n20 =[2 for i in range(len(xdata_n20wat))]
plt.rcParams["font.family"] = "Arial"
plt.scatter(xdata_n6wat, rwat_msen6, color='#CCBE9F', s=s_n6)
plt.scatter(xdata_n8wat, rwat_msen8, color='#601A4A', s=s_n8)
plt.scatter(xdata_n10wat, rwat_msen10, color='#2B6322', s=s_n10)
plt.scatter(xdata_n20wat, rwat_msen20, color='#562A8B', s=s_n20)
plt.title(r'Relaxation time Fitting Residuals PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('Residuals', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
font = font_manager.FontProperties(family='Cambria', style='normal', size='15')
plt.tick_params(labelsize=15)
###Output
_____no_output_____
###Markdown
Correlation values at each arc length for the whole 180 ns trajectory, N = 20 PLGA/water system
###Code
# x values
blen_n20wat = cor_n20plga_wat[3]*lb_avg_pn6
#nt_tt[0] = 0
blen_n20wat
mk_n20p_wat = cor_n20plga_wat[1]/cor_n20plga_wat[0]
mk_n20p_wat
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n20wat, np.log(cor_n20plga_wat[0]), yerr=mk_n20p_wat, color='#562A8B', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
# All the points give the best fits for N = 20 PLGA in water
n20_blksplga_wat, n20plga_lpwat = bavg_pers_cnt(5, plga_n20wat, n20_plga_wat, lb_avg_pn6, 4, 3000 , 18000)
n20_blksplga_wat
n20plga_lpwat
n20plga_lpwat[2]
np.mean(n20plga_lpwat[3])
blen_n20wat
gg_n20plga_wat = line_fit(np.mean(n20plga_lpwat[2]),blen_n20wat)
gg_n6plga_n20wat = line_fit(np.mean(n6plga_lpwat[2]),blen_n20wat)
gg_n8plga_n20wat = line_fit(np.mean(n8plga_lpwat[2]),blen_n20wat)
gg_n10plga_n20wat = line_fit(np.mean(n10plga_lpwat[2]),blen_n20wat)
gg_n20plga_wat
###Output
_____no_output_____
###Markdown
Block averaged Radius of gyration and persistence length, N = 20 PLGA/water system
###Code
np.mean(n20_blksplga_wat["Avg persistence length"])
np.std(n20_blksplga_wat["Avg persistence length"])
np.mean(n20_blksplga_wat["Avg Radius of gyration"])
np.std(n20_blksplga_wat["Avg Radius of gyration"])
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n20wat, np.log(cor_n20plga_wat[0]), yerr=mk_n20p_wat, color='#562A8B', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(blen_n20wat[:15], gg_n6plga_n20wat[:15], color='#CCBE9F')
plt.plot(blen_n20wat[:15], gg_n8plga_n20wat[:15], color='#601A4A')
plt.plot(blen_n20wat[:15], gg_n10plga_n20wat[:15], color='#2B6322')
plt.plot(blen_n20wat[:15], gg_n20plga_wat[:15], color='#562A8B')
plt.title(r'Ensemble Averaged ln(Cosine $\theta$) in water', fontsize=15, y=1.01)
plt.xlabel(r'Bond Length', fontsize=15)
plt.ylabel(r'ln$\left< Cos(\theta)\right >$', fontsize=15)
plt.ylim(-7,1)
plt.xlim(0,165)
font = font_manager.FontProperties(family='Arial', style='normal', size='14')
#plt.legend([r'$N_{PEG}$ = 6: $L_{p}$ = 10.7 $\AA$ ± 1.62 $\AA$'], loc=3, frameon=0, fontsize=14, prop=font)
plt.tick_params(labelsize=14)
plt.text(0.5, -6.92,r'$N_{PLGA}$ = 6: $L_{p}$ = 16.0 $\AA$ ± 1.73 $\AA$', fontsize=15, color='#CCBE9F')
plt.text(0.5, -6.54,r'$N_{PLGA}$ = 8: $L_{p}$ = 17.2 $\AA$ ± 2.13 $\AA$', fontsize=15, color='#601A4A')
plt.text(0.5, -6.14,r'$N_{PLGA}$ = 10: $L_{p}$ = 15.8 $\AA$ ± 1.01 $\AA$', fontsize=15, color='#2B6322')
plt.text(0.5, -5.70,r'$N_{PLGA}$ = 20: $L_{p}$ = 17.6 $\AA$ ± 1.49 $\AA$', fontsize=15, color='#562A8B')
rgplga_olig_wat[r"$R_{g}$ [Angstrom] N = 20 PLGA in water"] = n20_blksplga_wat["Avg Radius of gyration"]
rgplga_olig_wat
pers_plgat_wat[r"$L_{p}$ [Angstrom] N = 20 PLGA in water"] = n20_blksplga_wat["Avg persistence length"]
pers_plgat_wat
###Output
_____no_output_____
###Markdown
N = 30 PLGA/water system
###Code
# For the right Rg calculation using MDAnalysis, use the trajectory with PBC removed
n30_plga_wat = mda.Universe("n30plga_wat/n30plgaonly_wat.pdb", "n30plga_wat/nowat_n30plga.xtc")
n30_plga_wat.trajectory
len(n30_plga_wat.trajectory)
#Select the polymer heavy atoms
plga_n30wat = n30_plga_wat.select_atoms("resname sPLG PLG tPLG and not type H")
crv_n30plga_wat = pers_length(plga_n30wat,30)
crv_n30plga_wat
com_bond_n30wat = np.zeros(shape=(1,18000))
count = 0
for ts in n30_plga_wat.trajectory[0:18000]:
n30_mon1_wat = n30_plga_wat.select_atoms("resid 1")
n30_mon2_wat = n30_plga_wat.select_atoms("resid 2")
oo_len = mda.analysis.distances.distance_array(n30_mon1_wat.center_of_mass(), n30_mon2_wat.center_of_mass(),
box=n30_plga_wat.trajectory.ts.dimensions)
com_bond_n30wat[0, count] = oo_len
count += 1
com_bond
np.std(com_bond)
lb_avg_pn6 = np.mean(com_bond)
lb_avg_pn6
np.mean(com_bond_n30wat)
np.std(com_bond_n30wat)
###Output
_____no_output_____
###Markdown
Radius of Gyration vs. time N = 30 PLGA/water system
###Code
n30plga_rgens_wat, cor_n30plga_wat, N30plga_cos_wat, rgwat_n30plga = get_rg_pers_poly(plga_n30wat, n30_plga_wat, 0, 18000)
n30plga_rgens_wat[0].shape
cor_n30plga_wat[3]
N30plga_cos_wat
rgwat_n30plga
np.std(n30plga_rgens_wat)
plt.figure(figsize=(7,7))
plt.title(r'PLGA Radius of Gyration', fontsize=18, y=1.01)
plt.xlabel(r'Time [ns]', fontsize=15)
plt.ylabel(r'$R_{g}$ [nm]', fontsize=15)
plt.plot(trj_len/100, n6plga_rgens_wat[0]/10,linewidth=2, color='#CCBE9F')
plt.plot(trj_len/100, n8plga_rgens_wat[0]/10,linewidth=2, color='#601A4A')
plt.plot(trj_len/100, n10plga_rgens_wat[0]/10,linewidth=2, color='#2B6322')
plt.plot(trj_len/100, n20plga_rgens_wat[0]/10,linewidth=2, color='#562A8B')
plt.plot(trj_len/100, n30plga_rgens_wat[0]/10,linewidth=2, color='#1D77CF')
plt.tick_params(labelsize=14)
plt.legend(['N = 6 in water','N = 8 in water','N = 10 in water','N = 20 in water','N = 30 in water'], frameon=False, fontsize=14)
#plt.text(127, 0.96,r'N = 6 in water', fontsize=18, color='#1F2E69', family='Arial')
plt.xlim(0,180)
plt.ylim(0.2,3)
np.save('n6plga_watRg.npy', n6plga_rgens_wat[0])
np.save('n8plga_watRg.npy', n8plga_rgens_wat[0])
np.save('n10plga_watRg.npy', n10plga_rgens_wat[0])
np.save('n20plga_watRg.npy', n20plga_rgens_wat[0])
np.save('n30plga_watRg.npy', n30plga_rgens_wat[0])
###Output
_____no_output_____
###Markdown
Relaxation times vs monomer length
###Code
# Key variables
# def pos_bead_autocorr_RA(polymer_atoms, universe, n_monomers, t_corr, window_shift, start, end):
n30_mon = 30
start = 0
end = 18000
t_corr = 4000
window_shift = 20
s_time = timeit.default_timer()
tcRA_plgan30wat, tcSUM_plgan30wat = pos_bead_autocorr_RA(plga_n30wat, n30_plga_wat, n30_mon, t_corr, window_shift, start, end)
timeit.default_timer() - s_time
tcRA_plgan30wat.shape
tcSUM_plgan30wat.shape
###Output
_____no_output_____
###Markdown
Fitting autocorrelation data
###Code
xdata_n30wat = tcRA_plgan30wat[1]/100
ydata_n30wat = tcRA_plgan30wat[0]
xdata_n30wat.shape
ydata_n30wat.shape
s_n30 =[3 for i in range(len(xdata_n30wat))]
plt.figure(figsize=(8,8))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6, label=r'$N_{PLGA}$ = 6')
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8, label=r'$N_{PLGA}$ = 8')
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10, label=r'$N_{PLGA}$ = 10')
plt.scatter(tcRA_plgan20wat[1]/100, tcRA_plgan20wat[0], color='#562A8B', s=s_n20, label=r'$N_{PLGA}$ = 20')
plt.scatter(tcRA_plgan30wat[1]/100, tcRA_plgan30wat[0], color='#1D77CF', s=s_n30, label=r'$N_{PLGA}$ = 30')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.yscale('symlog', linthreshy=0.5)
plt.xscale('symlog')
plt.grid(b=True, axis='y')
plt.xlim(0,45)
plt.ylim(-1,1)
plt.tick_params(labelsize=18, direction='in', which='both', width=1, length=10)
np.save('n6plga_WATCt_RA.npy', tcRA_plgan6wat)
np.save('n8plga_WATCt_RA.npy', tcRA_plgan8wat)
np.save('n10plga_WATCt_RA.npy', tcRA_plgan10wat)
np.save('n20plga_WATCt_RA.npy', tcRA_plgan20wat)
np.save('n30plga_WATCt_RA.npy', tcRA_plgan30wat)
def res_polyn30wat(variabls, xnp, ynp):
hs_np = variabls['h_star']
tr1_np = variabls['t_first']
n_m = 30
testnp = []
for i in range(len(xnp)):
model_ynp = zimm_relax(xnp[i], tr1_np, hs_np, n_m)
#model_ynp = rouse_relax(xnp[i], tr1_np, n_m)
testnp.append(ynp[i] - model_ynp)
tt_n30 = np.array(testnp)
return tt_n30
#x1 = npdmso.array([0,0])
#pfit, pcov, infodict, errmsg, success = leastsq(res_poly, x1, args=(xdata, ydata), full_output=1)
pfit_n30wat = Parameters()
pfit_n30wat.add(name='h_star', value=0, min=0, max=0.26, vary=True)
pfit_n30wat.add(name='t_first', value=2)
mini_n30wat = Minimizer(res_polyn30wat, pfit_n30wat, fcn_args=(xdata_n30wat, ydata_n30wat))
out_n30wat = mini_n30wat.leastsq()
#bfit_n10 = ydata_n10ace + out_n10ace.residual
report_fit(out_n30wat.params)
out_n30wat.params
twat_n30plga = []
for i in range(len(xdata_n30wat)):
twat_n30plga.append(zimm_relax(xdata_n30wat[i], 4.23, 0.26, n30_mon))
s_n30 =[2 for i in range(len(xdata_n30wat))]
plt.figure(figsize=(10,10))
plt.rcParams["font.family"] = "Arial"
plt.scatter(tcRA_plgan6wat[1]/100, tcRA_plgan6wat[0], color='#CCBE9F', s=s_n6)
plt.scatter(tcRA_plgan8wat[1]/100, tcRA_plgan8wat[0], color='#601A4A', s=s_n8)
plt.scatter(tcRA_plgan10wat[1]/100, tcRA_plgan10wat[0], color='#2B6322', s=s_n10)
plt.scatter(tcRA_plgan20wat[1]/100, tcRA_plgan20wat[0], color='#562A8B', s=s_n20)
plt.scatter(tcRA_plgan30wat[1]/100, tcRA_plgan30wat[0], color='#1D77CF', s=s_n30)
plt.plot(xdata_n6wat, twat_n6plga, color='#CCBE9F')
plt.plot(xdata_n8wat, twat_n8plga, color='#601A4A')
plt.plot(xdata_n10wat, twat_n10plga, color='#2B6322')
plt.plot(xdata_n20wat, twat_n20plga, color='#562A8B')
plt.plot(xdata_n30wat, twat_n30plga, color='#1D77CF')
plt.title(r'Positional bead autocorrelation PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('C(t)', fontsize=18)
#plt.legend(fontsize=14, frameon=False)
plt.text(10, 0.95, r'$N_{PLGA}$ = 6: $\tau_{1}$ = 0.28 ± 0.03 ns, $h^{*}$ = 0.26 ± 0.43', fontsize=15, color='#CCBE9F')
plt.text(10, 0.90, r'$N_{PLGA}$ = 8: $\tau_{1}$ = 0.53 ± 0.04 ns, $h^{*}$ = 0.26 ± 0.29', fontsize=15, color='#601A4A')
plt.text(10, 0.85, r'$N_{PLGA}$ = 10: $\tau_{1}$ = 1.82 ± 0.06 ns, $h^{*}$ = 3.2e-6 ± 0.01', fontsize=15, color='#2B6322')
plt.text(10, 0.80, r'$N_{PLGA}$ = 20: $\tau_{1}$ = 0.96 ± 0.04 ns, $h^{*}$ = 0.26 ± 0.10 ', fontsize=15, color='#562A8B')
plt.text(10, 0.75, r'$N_{PLGA}$ = 30: $\tau_{1}$ = 4.23 ± 0.06 ns, $h^{*}$ = 0.26 ± 0.03', fontsize=15, color='#1D77CF')
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
plt.tick_params(labelsize=15)
rwat_msen30 = np.array([tcRA_plgan30wat[0] - np.array(twat_n30plga)])
rwat_msen30
plt.figure(figsize=(10,10))
s_n30 =[10 for i in range(len(xdata_n30wat))]
plt.rcParams["font.family"] = "Arial"
plt.scatter(xdata_n6wat, rwat_msen6, color='#CCBE9F', s=s_n30, label=r'$N_{PLGA}$ = 6' )
plt.scatter(xdata_n8wat, rwat_msen8, color='#601A4A', s=s_n30, label=r'$N_{PLGA}$ = 8')
plt.scatter(xdata_n10wat, rwat_msen10, color='#2B6322', s=s_n30, label=r'$N_{PLGA}$ = 10')
plt.scatter(xdata_n20wat, rwat_msen20, color='#562A8B', s=s_n30, label=r'$N_{PLGA}$ = 20')
plt.scatter(xdata_n30wat, rwat_msen30, color='#1D77CF', s=s_n30, label=r'$N_{PLGA}$ = 30')
plt.title(r'Relaxation time Fitting Residuals PLGA in water', fontsize=18, y=1.01)
plt.xlabel(r'Time lag [ns]', fontsize=18)
plt.ylabel('Residuals', fontsize=18)
plt.legend(fontsize=14, frameon=False)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([0,30])
plt.ylim([-0.7,1])
font = font_manager.FontProperties(family='Cambria', style='normal', size='15')
plt.tick_params(labelsize=15)
###Output
_____no_output_____
###Markdown
Correlation values at each arc length for the whole 180 ns trajectory, N = 30 PLGA/water system
###Code
# x values
blen_n30wat = cor_n30plga_wat[3]*lb_avg_pn6
#nt_tt[0] = 0
blen_n30wat
mk_n30p_wat = cor_n30plga_wat[1]/cor_n30plga_wat[0]
mk_n30p_wat
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n20wat, np.log(cor_n20plga_wat[0]), yerr=mk_n20p_wat, color='#562A8B', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n30wat, np.log(cor_n30plga_wat[0]), yerr=mk_n30p_wat, color='#1D77CF', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
# All the points give the best fits for N = 30 PLGA in water
n30_blksplga_wat , n30plga_lpwat = bavg_pers_cnt(5, plga_n30wat, n30_plga_wat, lb_avg_pn6, 4, 3000 , 18000)
n30_blksplga_wat
n30plga_lpwat
n30plga_lpwat[2]
np.mean(n30plga_lpwat[3])
blen_n30wat
gg_n30plga_wat = line_fit(np.mean(n30plga_lpwat[2]),blen_n30wat)
gg_n6plga_n30wat = line_fit(np.mean(n6plga_lpwat[2]),blen_n30wat)
gg_n8plga_n30wat = line_fit(np.mean(n8plga_lpwat[2]),blen_n30wat)
gg_n10plga_n30wat = line_fit(np.mean(n10plga_lpwat[2]),blen_n30wat)
gg_n20plga_n30wat = line_fit(np.mean(n20plga_lpwat[2]),blen_n30wat)
gg_n30plga_wat
###Output
_____no_output_____
###Markdown
Block averaged Radius of gyration and persistence length, N = 30 PLGA/water system
###Code
np.mean(n30_blksplga_wat["Avg persistence length"])
np.std(n30_blksplga_wat["Avg persistence length"])
np.mean(n30_blksplga_wat["Avg Radius of gyration"])
np.std(n30_blksplga_wat["Avg Radius of gyration"])
plt.figure(figsize=(7,7))
plt.errorbar(blen_wat, np.log(cor_n6plga_wat[0]), yerr=mk_n6p_wat, color='#CCBE9F', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n8wat, np.log(cor_n8plga_wat[0]), yerr=mk_n8p_wat, color='#601A4A', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n10wat, np.log(cor_n10plga_wat[0]), yerr=mk_n10p_wat, color='#2B6322', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n20wat, np.log(cor_n20plga_wat[0]), yerr=mk_n20p_wat, color='#562A8B', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.errorbar(blen_n30wat, np.log(cor_n30plga_wat[0]), yerr=mk_n30p_wat, color='#1D77CF', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(blen_n30wat[:15], gg_n6plga_n30wat[:15], color='#CCBE9F')
plt.plot(blen_n30wat[:15], gg_n8plga_n30wat[:15], color='#601A4A')
plt.plot(blen_n30wat[:15], gg_n10plga_n30wat[:15], color='#2B6322')
plt.plot(blen_n30wat[:15], gg_n20plga_n30wat[:15], color='#562A8B')
plt.plot(blen_n30wat[:15], gg_n30plga_wat[:15], color='#1D77CF')
plt.title(r'Ensemble Averaged ln(Cosine $\theta$) in water', fontsize=15, y=1.01)
plt.xlabel(r'Bond Length', fontsize=15)
plt.ylabel(r'ln$\left< Cos(\theta)\right >$', fontsize=15)
plt.ylim(-7,1)
plt.xlim(0,165)
font = font_manager.FontProperties(family='Arial', style='normal', size='14')
#plt.legend([r'$N_{PEG}$ = 6: $L_{p}$ = 10.7 $\AA$ ± 1.62 $\AA$'], loc=3, frameon=0, fontsize=14, prop=font)
plt.tick_params(labelsize=14)
plt.text(0.5, -6.92,r'$N_{PLGA}$ = 6: $L_{p}$ = 16.0 $\AA$ ± 1.73 $\AA$', fontsize=15, color='#CCBE9F')
plt.text(0.5, -6.54,r'$N_{PLGA}$ = 8: $L_{p}$ = 17.2 $\AA$ ± 2.13 $\AA$', fontsize=15, color='#601A4A')
plt.text(0.5, -6.14,r'$N_{PLGA}$ = 10: $L_{p}$ = 15.8 $\AA$ ± 1.01 $\AA$', fontsize=15, color='#2B6322')
plt.text(0.5, -5.70,r'$N_{PLGA}$ = 20: $L_{p}$ = 17.6 $\AA$ ± 1.49 $\AA$', fontsize=15, color='#562A8B')
plt.text(0.5, -5.30,r'$N_{PLGA}$ = 30: $L_{p}$ = 18.8 $\AA$ ± 1.49 $\AA$', fontsize=15, color='#1D77CF')
rgplga_olig_wat[r"$R_{g}$ [Angstrom] N = 30 PLGA in water"] = n30_blksplga_wat["Avg Radius of gyration"]
rgplga_olig_wat
pers_plgat_wat[r"$L_{p}$ [Angstrom] N = 30 PLGA in water"] = n30_blksplga_wat["Avg persistence length"]
pers_plgat_wat
rgplga_olig_wat.to_pickle("PLGA_water_Rg.pkl")
pers_plgat_wat.to_pickle("PLGA_water_Lp.pkl")
###Output
_____no_output_____
###Markdown
Flory Exponent, PLGA/water system
###Code
n_plga = np.array([6,8,10,20,30])
rg_nplga_wat = np.array([np.mean(n6_blksplga_wat["Avg Radius of gyration"])
,np.mean(n8_blksplga_wat["Avg Radius of gyration"]),np.mean(n10_blksplga_wat["Avg Radius of gyration"])
,np.mean(n20_blksplga_wat["Avg Radius of gyration"]),np.mean(n30_blksplga_wat["Avg Radius of gyration"])])
rg_nplga_wat
rgwat_nplga_std = np.array([np.std(np.log10(n6_blksplga_wat["Avg Radius of gyration"]))
,np.std(np.log10(n8_blksplga_wat["Avg Radius of gyration"]))
,np.std(np.log10(n10_blksplga_wat["Avg Radius of gyration"]))
,np.std(np.log10(n20_blksplga_wat["Avg Radius of gyration"]))
,np.std(np.log10(n30_blksplga_wat["Avg Radius of gyration"]))])
rgwat_nplga_std
n_plga
np.log10(rg_nplga_wat)
np.log10(n_plga)
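# Background note: Flory scaling assumes Rg ~ b * N**v, so log10(Rg) = v * log10(N) + const and the
# slope of the log-log fit below is the Flory exponent v. For reference, v ~ 1/3 indicates a
# collapsed globule (poor solvent), v = 1/2 a theta solvent, and v ~ 0.588 a good solvent.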
# From fitting all points, I get best fit
from sklearn.linear_model import LinearRegression
model_vpwat = LinearRegression(fit_intercept=True)
model_vpwat.fit(np.log10(n_plga).reshape(-1,1), np.log10(rg_nplga_wat))
# Slope here is in nanometers
print("Model slope: ", model_vpwat.coef_[0])
print("Model intercept:", model_vpwat.intercept_)
gg_wat = model_vpwat.predict(np.log10(n_plga.reshape(-1,1)))
gg_wat
print("Mean Std Error:", sklearn.metrics.mean_squared_error(np.log10(rg_nplga_wat), gg_wat))
print("R2 score:", sklearn.metrics.r2_score(np.log10(rg_nplga_wat), gg_wat))
# Residuals between the true y data and model y data
resid_vwat = np.log10(rg_nplga_wat) - gg_wat
resid_vwat
# How to calculate Sum((Xi - avg(X))^2): X values are the bond length values
nt_ttwat = np.log10(n_plga)
nt_ttwat -= np.mean(nt_ttwat)
nhui_wat = nt_ttwat**2
np.sum(nhui_wat)
# t-value with 95 % confidence intervals
scipy.stats.t.ppf(0.975, 4)
# How to calculate 95% confidence interval for the slope
flc_vwat = scipy.stats.t.ppf(0.975, 4)*np.sqrt((np.sum(resid_vwat**2)/len(resid_vwat))/(np.sum(nhui_wat)))
flc_vwat
plt.figure(figsize=(7,7))
plt.errorbar(np.log10(n_plga), np.log10(rg_nplga_wat), yerr=rgwat_nplga_std, color='#A58262', linestyle="None",marker='o',
capsize=5, capthick=1, ecolor='black')
plt.plot(np.log10(n_plga), gg_wat, color='#A58262')
plt.title(r'Flory Exponent', fontsize=15)
plt.xlabel(r'Log($N_{PLGA}$)', fontsize=15)
plt.ylabel(r'Log($R_{g}$)', fontsize=15)
plt.tick_params(labelsize=14)
plt.text(1.1, 0.95, r'$v_{water}$ = 0.29 ± 0.04', fontsize=15, color='#A58262')
###Output
_____no_output_____
###Markdown
Scaling exponent from relaxation times
###Code
#np.array([1.03, 4.56, 1.39, 2.04, 6.97])
t1_nplga_wat = np.array([0.28, 0.53, 1.82, 0.96, 4.23])
t1_nplga_wat
#np.array([0.05, 0.09, 0.05, 0.06, 0.11])
stdt1_nplga_wat = np.array([0.03, 0.04, 0.06, 0.04, 0.06])
stdt1_nplga_wat
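# A minimal sketch (not part of the original analysis) of how a dynamic scaling exponent could be
# estimated from these relaxation times, assuming tau_1 ~ N**x (Zimm theory predicts x ~ 3v for a
# chain with hydrodynamic interactions); the monomer lengths mirror the N values used above:
x_exp, log_pref = np.polyfit(np.log10(np.array([6, 8, 10, 20, 30])), np.log10(t1_nplga_wat), 1)
x_exp  # apparent scaling exponent x in tau_1 ~ N**x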
tr_plga_wat = pd.DataFrame(data=t1_nplga_wat, columns=[r"$\tau_{1}$ [ns] PLGA in water"])
tr_plga_wat[r'Std. Error for $\tau_{1}$'] = stdt1_nplga_wat
tr_plga_wat['Monomer length'] = np.array([6,8,10,20,30])
tr_plga_wat.to_pickle("PLGA_water_tr.pkl")
#tr_plga_wat.head()
###Output
_____no_output_____ |
course-2-improving-deep-neural-networks/Regularization+-+v2.ipynb | ###Markdown
Regularization Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Sure, it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen! **You will learn to:** Use regularization in your deep learning models. Let's first import the packages you are going to use.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. **Figure 1**: **Football field**. The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head. They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field. - If the dot is blue, it means the French player managed to hit the ball with his/her head. - If the dot is red, it means the other team's player hit the ball with their head. **Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 1 - Non-regularized model: You will use the following neural network (already implemented for you below). This model can be used: - in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than one. You will first try the model without any regularization. Then, you will implement: - *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`" - *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`". In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724216
Cost after iteration 20000: 0.13851642423255986
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting. 2 - L2 Regularization: The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from: $$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$ to: $$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$ Let's modify your cost and observe the consequences. **Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$, use `np.sum(np.square(Wl))`. Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $\frac{1}{m} \frac{\lambda}{2}$.
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (1/m)*(lambd/2)*(np.sum(np.square(W1))+np.sum(np.square(W2))+np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**: **cost** 1.78648594516 Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
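The gradient formula above is easy to verify numerically. Here is a small, self-contained sanity check (not part of the graded function, toy values only) that compares the analytic gradient $\frac{\lambda}{m} W$ of the regularization term against central finite differences:
```python
import numpy as np

# Toy values for the check (illustrative only).
lambd, m = 0.7, 5.0
W = np.array([[1.0, -2.0], [0.5, 3.0]])

def reg_cost(W):
    # L2 regularization term for a single weight matrix: (1/m)*(lambda/2)*sum(W^2)
    return (1. / m) * (lambd / 2.) * np.sum(np.square(W))

grad_analytic = (lambd / m) * W          # analytic derivative of the term above
grad_numeric = np.zeros_like(W)
eps = 1e-6
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        grad_numeric[i, j] = (reg_cost(Wp) - reg_cost(Wm)) / (2 * eps)

print(np.allclose(grad_analytic, grad_numeric))  # True
```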
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m)*W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m)*W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m)*W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
###Output
dW1 = [[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 = [[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 = [[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**: **dW1** [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]] **dW2** [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]] **dW3** [[-1.77691347 -0.11832879 -0.09397446]] Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.2680916337127301
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**:L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember** -- the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - DropoutFinally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. Figure 3 : Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**:You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2. 
Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
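Before the graded function, here is a minimal numpy sketch of the four masking steps on a toy activation matrix (the shape and values below are made up for demonstration):
```python
import numpy as np

np.random.seed(0)
keep_prob = 0.8
A1 = np.random.randn(3, 5)                      # toy activations, shape (3, 5)

D1 = np.random.rand(A1.shape[0], A1.shape[1])   # Step 1: uniform values in [0, 1)
D1 = D1 < keep_prob                             # Step 2: boolean mask, True with probability keep_prob
A1 = A1 * D1                                    # Step 3: dropped units become 0
A1 = A1 / keep_prob                             # Step 4: rescale surviving units (inverted dropout)

print(A1)
```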
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = np.multiply(A1,D1) # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = np.multiply(A2,D2) # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**: **A3** [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] 3.2 - Backward propagation with dropout**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = np.multiply(dA2,D2) # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = np.multiply(dA1,D1) # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
###Output
dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**: **dA1** [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]] **dA2** [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]] Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
Week1/Convolution_model_Application_v1a.ipynb | ###Markdown
Convolutional Neural Networks: ApplicationWelcome to Course 4's second assignment! In this notebook, you will:- Implement helper functions that you will use when implementing a TensorFlow model- Implement a fully functioning ConvNet using TensorFlow **After this assignment you will be able to:**- Build and train a ConvNet in TensorFlow for a classification problem We assume here that you are already familiar with TensorFlow. If you are not, please refer the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*"). Updates to Assignment If you were working on a previous version* The current notebook filename is version "1a". * You can find your work in the file directory as version "1".* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of Updates* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.* Added details about softmax cross entropy with logits.* Added instructions for creating the Adam Optimizer.* Added explanation of how to evaluate tensors (optimizer and cost).* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.* Updated print statements and 'expected output' for easier visual comparisons.* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course! 1.0 - TensorFlow modelIn the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. As usual, we will start by loading in the packages.
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Run the next cell to load the "SIGNS" dataset you are going to use.
###Code
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
###Output
_____no_output_____
###Markdown
As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.
###Code
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
y = 2
###Markdown
In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.To get started, let's examine the shapes of your data.
###Code
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
###Markdown
1.1 - Create placeholdersTensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size; it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder).
###Code
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, [None ,n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, [None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
###Output
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
###Markdown
**Expected Output** X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32) Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32) 1.2 - Initialize parametersYou will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:```pythonW = tf.get_variable("W", [1,2,3,4], initializer = ...)``` tf.get_variable()[Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says:```Gets an existing variable with these parameters or create a new one.```So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Note that we will hard code the shape values in the function to make the grading simpler.
Normally, functions should take values as inputs rather than hard coding.
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4, 4, 3, 8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2, 2, 8, 16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1]))
print("W1.shape: " + str(parameters["W1"].shape))
print("\n")
print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1]))
print("W2.shape: " + str(parameters["W2"].shape))
###Output
W1[1,1,1] =
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394
-0.06847463 0.05245192]
W1.shape: (4, 4, 3, 8)
W2[1,1,1] =
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
W2.shape: (2, 2, 8, 16)
###Markdown
** Expected Output:**```W1[1,1,1] = [ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 -0.06847463 0.05245192]W1.shape: (4, 4, 3, 8)W2[1,1,1] = [-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 -0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 -0.22779644 -0.1601823 -0.16117483 -0.10286498]W2.shape: (2, 2, 8, 16)``` 1.3 - Forward propagationIn TensorFlow, there are built-in functions that implement the convolution steps for you.- **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool).- **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu).- **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten).- **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected).In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters. Window, kernel, filterThe words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window." **Exercise**Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above. 
In detail, we will use the following parameters for all the steps: - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - Conv2D: stride 1, padding is "SAME" - ReLU - Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - Flatten the previous output. - FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
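As a rough sanity check (not required for the exercise), the sketch below traces the spatial dimensions through this stack for a 64x64x3 input; with 'SAME' padding the output size along each dimension is ceil(input_size / stride), so the flattened layer should end up with 2 * 2 * 16 = 64 features per example before the fully connected layer maps them to 6 outputs:
```python
import math

def same_out(n, stride):
    # Output size of a 'SAME'-padded conv/pool along one spatial dimension.
    return math.ceil(n / stride)

h = w = 64                                  # input: 64 x 64 x 3
h, w = same_out(h, 1), same_out(w, 1)       # CONV2D (W1), stride 1   -> 64 x 64 x 8
h, w = same_out(h, 8), same_out(w, 8)       # MAXPOOL 8x8, stride 8   ->  8 x  8 x 8
h, w = same_out(h, 1), same_out(w, 1)       # CONV2D (W2), stride 1   ->  8 x  8 x 16
h, w = same_out(h, 4), same_out(w, 4)       # MAXPOOL 4x4, stride 4   ->  2 x  2 x 16
print(h, w, h * w * 16)                     # flatten -> 2 2 64
```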
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Note that for simplicity and grading purposes, we'll hard-code some values
such as the stride and kernel (filter) sizes.
Normally, functions should take these values as function parameters.
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X,W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, num_outputs = 6,activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = \n" + str(a))
###Output
Z3 =
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
###Markdown
**Expected Output**:```Z3 = [[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] [-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]``` 1.4 - Compute costImplement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.You might find these two functions helpful: - **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax cross-entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the mean of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean). Details on softmax_cross_entropy_with_logits (optional reading)* Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.* Cross entropy compares the model's predicted classifications with the actual labels and results in a numerical value representing the "loss" of the model's predictions.* "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation."* The function `softmax_cross_entropy_with_logits` takes logits as input (and not activations); then uses the model to predict using softmax, and then compares the predictions with the true labels using cross entropy. These are done with a single function to optimize the calculations.**Exercise**: Compute the cost below using the function above.
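For intuition, here is a minimal numpy sketch (toy values, not the graded code) of what the softmax cross-entropy works out to for a single example; the real TensorFlow op operates on the logits directly in a more numerically stable way:
```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])                 # Z3 for one example (made-up values)
labels = np.array([1.0, 0.0, 0.0])                 # one-hot "true" label

softmax = np.exp(logits) / np.sum(np.exp(logits))  # softmax activation
loss = -np.sum(labels * np.log(softmax))           # cross-entropy for this example

print(softmax, loss)
# tf.reduce_mean would then average this per-example loss over the whole batch.
```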
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
###Output
cost = 2.91034
###Markdown
**Expected Output**: ```cost = 2.91034``` 1.5 Model Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset. **Exercise**: Complete the function below. The model below should:- create placeholders- initialize parameters- forward propagate- compute the cost- create an optimizerFinally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer) Adam OptimizerYou can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize.For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer) Random mini batchesIf you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:```Pythonminibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)```(You will want to choose the correct variable names when you use it in your code). Evaluating the optimizer and costWithin a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost.You'll use this kind of syntax:```output_for_var1, output_for_var2 = sess.run( fetches=[var1, var2], feed_dict={var_inputs: the_batch_of_inputs, var_labels: the_batch_of_labels} )```* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost). * It also takes a dictionary for the `feed_dict` parameter. * The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above. * The values are the variables holding the actual numpy arrays for each mini-batch. * The sess.run outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`. For more information on how to use sess.run, see the documentation [tf.Sesssionrun](https://www.tensorflow.org/api_docs/python/tf/Sessionrun) documentation.
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(loss=cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
"""
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost.
# The feedict should contain a minibatch for (X,Y).
"""
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run(
fetches=[optimizer, cost],
feed_dict={X: minibatch_X,
Y: minibatch_Y}
)
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every 5 epochs and record it every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
###Output
_____no_output_____
###Markdown
Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
###Code
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
###Output
Cost after epoch 0: 1.917929
Cost after epoch 5: 1.506757
Cost after epoch 10: 0.955359
Cost after epoch 15: 0.845802
Cost after epoch 20: 0.701174
Cost after epoch 25: 0.571977
Cost after epoch 30: 0.518435
Cost after epoch 35: 0.495806
Cost after epoch 40: 0.429827
Cost after epoch 45: 0.407291
Cost after epoch 50: 0.366394
Cost after epoch 55: 0.376922
Cost after epoch 60: 0.299491
Cost after epoch 65: 0.338870
Cost after epoch 70: 0.316400
Cost after epoch 75: 0.310413
Cost after epoch 80: 0.249549
Cost after epoch 85: 0.243457
Cost after epoch 90: 0.200031
Cost after epoch 95: 0.175452
###Markdown
**Expected output**: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease. **Cost after epoch 0 =** 1.917929 **Cost after epoch 5 =** 1.506757 **Train Accuracy =** 0.940741 **Test Accuracy =** 0.783333 Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance). Once again, here's a thumbs up for your work!
###Code
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
###Output
_____no_output_____ |
SAM-L11 Cortex-M23/Experiments - AES Mode ECB/AES-256_Flash_WolfSSL_library/Experiment_AES_Flash.ipynb | ###Markdown
Import Libraries
###Code
# %matplotlib ipympl
# %matplotlib inline
%matplotlib wx
import matplotlib.pyplot as plt
plt.ion()
from pydgilib_extra import *
from atprogram.atprogram import atprogram
from os import getcwd, path, pardir
import pickle
###Output
_____no_output_____
###Markdown
Compile and program project
###Code
project_path = [path.curdir, "AES_Flash-S"]
project_path
atprogram("./Crypto-Current", verbose=2)
#atprogram(path.abspath(path.join(*project_path)), verbose=2)
###Output
_____no_output_____
###Markdown
Data Logging
###Code
live_plot = False
###Output
_____no_output_____
###Markdown
Create a figure for the plot.
###Code
if live_plot:
fig = plt.figure(figsize=(10, 6))
fig.show()
###Output
_____no_output_____
###Markdown
Create the configuration dictionary for `DGILibExtra`.
###Code
config_dict = {
"loggers": [LOGGER_OBJECT, LOGGER_CSV],
"file_name_base": "experiment_aes_flash"
}
config_dict_plot = {
"loggers": [LOGGER_OBJECT, LOGGER_PLOT, LOGGER_CSV],
"plot_pins": [False, False, True, True],
"plot_pins_method": "line",
"plot_xmax": 5,
"window_title": "Experiment AES-128 Flash",
}
###Output
_____no_output_____
###Markdown
Stop criteria to pass to the logger:
###Code
def stop_fn(logger_data):
return all(logger_data.gpio.values[-1])
###Output
_____no_output_____
###Markdown
Perform the measurement.
###Code
data = []
cd = config_dict.copy()
if live_plot:
fig.clf()
for ax in fig.get_axes():
ax.cla()
cd.update(config_dict_plot)
cd["fig"] = fig
with DGILibExtra(**cd) as dgilib:
dgilib.device_reset()
dgilib.logger.log(1000,stop_fn)
data = dgilib.data
data.length()
print(data)
###Output
Interfaces:
48: gpio, samples: 6005
256: power, samples: 405000
###Markdown
Store Data
###Code
import pickle
pickle.dump(data, open("aes_flash_logger_data.p", "wb"))
###Output
_____no_output_____
###Markdown
Load Data
###Code
data = pickle.load(open("aes_flash_logger_data.p", "rb"))
iteration = 0
name = "AES-128_Flash"
data = pickle.load(open(path.join(path.pardir, path.pardir, f"{name}_{iteration}.p"), "rb"))
###Output
_____no_output_____
###Markdown
Analysis Create Stop Function to stop parsing the data when all pins are high.
###Code
def stop_function(pin_values):
return all(pin_values)
###Output
_____no_output_____
###Markdown
Parse the data.
###Code
aes_charge, aes_time = power_and_time_per_pulse(data, 2, stop_function=stop_function)
flash_charge, flash_time = power_and_time_per_pulse(data, 3, stop_function=stop_function)
print(len(aes_charge), len(aes_time), len(flash_charge), len(flash_time))
# cutoff = min(len(aes_charge), len(aes_time), len(flash_charge), len(flash_time))
# aes_charge = aes_charge[:cutoff]
# aes_time = aes_time[:cutoff]
# flash_charge = flash_charge[:cutoff]
# flash_time = flash_time[:cutoff]
# length = len(aes_charge)
# assert length == len(aes_time)
# assert length == len(flash_charge)
# assert length == len(flash_time)
# print(length)
aes_encrypt_charge = aes_charge[0::2]
aes_decrypt_charge = aes_charge[1::2]
aes_encrypt_time = aes_time[0::2]
aes_decrypt_time = aes_time[1::2]
aes_flash_write_charge = flash_charge[0::2]
aes_flash_read_charge = flash_charge[1::2]
aes_flash_write_time = flash_time[0::2]
aes_flash_read_time = flash_time[1::2]
len(aes_encrypt_charge), len(aes_decrypt_charge), len(aes_encrypt_time), len(aes_decrypt_time), len(aes_flash_write_charge), len(aes_flash_read_charge), len(aes_flash_write_time), len(aes_flash_read_time)
drop = 0
cutoff = min(len(aes_encrypt_charge), len(aes_decrypt_charge), len(aes_encrypt_time), len(aes_decrypt_time), len(aes_flash_write_charge), len(aes_flash_read_charge), len(aes_flash_write_time), len(aes_flash_read_time)) - drop
aes_encrypt_charge = aes_encrypt_charge[:cutoff]
aes_decrypt_charge = aes_decrypt_charge[:cutoff]
aes_encrypt_time = aes_encrypt_time[:cutoff]
aes_decrypt_time = aes_decrypt_time[:cutoff]
aes_flash_write_charge = aes_flash_write_charge[:cutoff]
aes_flash_read_charge = aes_flash_read_charge[:cutoff]
aes_flash_write_time = aes_flash_write_time[:cutoff]
aes_flash_read_time = aes_flash_read_time[:cutoff]
length = len(aes_encrypt_charge)
assert length == len(aes_decrypt_charge)
assert length == len(aes_encrypt_time)
assert length == len(aes_decrypt_time)
assert length == len(aes_flash_write_charge)
assert length == len(aes_flash_read_charge)
assert length == len(aes_flash_write_time)
assert length == len(aes_flash_read_time)
print(length)
###Output
375
###Markdown
Convert to Joule
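A brief note on the conversion in the next cell (assuming the 3.33 V supply is effectively constant during each pulse): energy follows from the measured charge as $E = Q \cdot V$, and the `j_scale` / `t_scale` factors of $10^3$ then rescale joules to millijoules and seconds to milliseconds.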
###Code
voltage = 3.33
j_scale = 1e3 # m
t_scale = 1e3 # m
model_j_scale = 1e6 # n
model_t_scale = 1e3 # u
experiment_name = 'AES-256'
aes_encrypt_energy = aes_encrypt_charge[:cutoff]
aes_flash_write_energy = aes_flash_write_charge[:cutoff]
aes_flash_read_energy = aes_flash_read_charge[:cutoff]
aes_decrypt_energy = aes_decrypt_charge[:cutoff]
aes_encrypt_time_s = aes_encrypt_time[:cutoff]
aes_flash_write_time_s = aes_flash_write_time[:cutoff]
aes_flash_read_time_s = aes_flash_read_time[:cutoff]
aes_decrypt_time_s = aes_decrypt_time[:cutoff]
for i in range(len(aes_encrypt_energy)):
aes_encrypt_energy[i] = aes_encrypt_energy[i] * voltage * j_scale
for i in range(len(aes_flash_write_energy)):
aes_flash_write_energy[i] = aes_flash_write_energy[i] * voltage * j_scale
for i in range(len(aes_flash_read_energy)):
aes_flash_read_energy[i] = aes_flash_read_energy[i] * voltage * j_scale
for i in range(len(aes_decrypt_energy)):
aes_decrypt_energy[i] = aes_decrypt_energy[i] * voltage * j_scale
for i in range(len(aes_encrypt_time_s)):
aes_encrypt_time_s[i] = aes_encrypt_time_s[i] * t_scale
for i in range(len(aes_flash_write_time_s)):
aes_flash_write_time_s[i] = aes_flash_write_time_s[i] * t_scale
for i in range(len(aes_flash_read_time_s)):
aes_flash_read_time_s[i] = aes_flash_read_time_s[i] * t_scale
for i in range(len(aes_decrypt_time_s)):
aes_decrypt_time_s[i] = aes_decrypt_time_s[i] * t_scale
MBEDTLS_AES_BLOCK_SIZE = 16
STEP_SIZE = MBEDTLS_AES_BLOCK_SIZE
MIN_NUM_BYTES = STEP_SIZE
num_bytes = range(MIN_NUM_BYTES, MIN_NUM_BYTES + STEP_SIZE * len(aes_encrypt_energy), STEP_SIZE)
print(f"MAX_NUM_BYTES: {num_bytes[-1]}")
from lmfit import Model
def line(x, slope, intercept):
"""a line"""
return [slope*i + intercept for i in x]
mod = Model(line)
pars = mod.make_params(slope=0, intercept=1)
# pars['intercept'].set(min=0)
results = []
ylabels = (['Energy [mJ]'] * 2 + ['Time [ms]'] * 2) * 2 + ['Energy [mJ]'] + ['Time [ms]']
parameter_names = [
'Encrypt Energy',
'Flash Write Energy',
'Flash Read Energy',
'Decrypt Energy',
'Encrypt Time',
'Flash Write Time',
'Flash Read Time',
'Decrypt Time',
'Total Energy',
'Total Time',
]
for y in [aes_encrypt_energy, aes_flash_write_energy, aes_flash_read_energy, aes_decrypt_energy, aes_encrypt_time_s, aes_flash_write_time_s, aes_flash_read_time_s, aes_decrypt_time_s,
[e + w + r + d for (e,w,r,d) in zip(aes_encrypt_energy, aes_flash_write_energy, aes_flash_read_energy, aes_decrypt_energy)],
[e + w + r + d for (e,w,r,d) in zip(aes_encrypt_time_s, aes_flash_write_time_s, aes_flash_read_time_s, aes_decrypt_time_s)]]:
result = mod.fit(y, pars, x=num_bytes)
print(result.fit_report())
fig, grid = result.plot(
xlabel='Checkpoint Size [Bytes]',
ylabel=ylabels[len(results)])
fig.tight_layout(rect=(0.05, 0.05, 1, 1))
fig.set_size_inches(5, 4.5, forward=True)
fig.canvas.set_window_title(
f"Residuals of {experiment_name} {parameter_names[len(results)]}")
fig.show()
results.append(result)
fig2 = plt.figure(figsize=(8, 6))
fig2.canvas.set_window_title(f"Analysis {experiment_name}")
charge_color = 'r'
time_color = 'b'
fig2.clf()
# fig2.suptitle("Energy analysis of AES")
ax1 = fig2.add_subplot(1, 1, 1)
ax2 = ax1.twinx()
ax1.set_xlabel('Checkpoint Size [Bytes]')
ax1.set_ylabel('Energy [mJ]', color=charge_color)
ax2.set_ylabel('Time [ms]', color=time_color)
ax1.tick_params('y', colors=charge_color)
ax2.tick_params('y', colors=time_color)
lines = []
lines += ax1.plot(num_bytes, aes_encrypt_energy, charge_color+'-', label=f'{parameter_names[len(lines)]}')
lines += ax1.plot(num_bytes, aes_flash_write_energy, charge_color+'-.', label=f'{parameter_names[len(lines)]}')
lines += ax1.plot(num_bytes, aes_flash_read_energy, charge_color+':', label=f'{parameter_names[len(lines)]}')
lines += ax1.plot(num_bytes, aes_decrypt_energy, charge_color+'--', label=f'{parameter_names[len(lines)]}')
lines += ax2.plot(num_bytes, aes_encrypt_time_s, time_color+'-', label=f'{parameter_names[len(lines)]}')
lines += ax2.plot(num_bytes, aes_flash_write_time_s, time_color+'-.', label=f'{parameter_names[len(lines)]}')
lines += ax2.plot(num_bytes, aes_flash_read_time_s, time_color+':', label=f'{parameter_names[len(lines)]}')
lines += ax2.plot(num_bytes, aes_decrypt_time_s, time_color+'--', label=f'{parameter_names[len(lines)]}')
ax1.legend(handles=lines)
ax1.set_title(
f"{parameter_names[0]}: Slope {results[0].params['slope'].value * model_j_scale:.04} nJ/B, Intercept {results[0].params['intercept'].value * model_j_scale:.04} nJ\n" +
f"{parameter_names[1]}: Slope {results[1].params['slope'].value * model_j_scale:.04} nJ/B, Intercept {results[1].params['intercept'].value * model_j_scale:.04} nJ\n" +
f"{parameter_names[2]}: Slope {results[2].params['slope'].value * model_j_scale:.04} nJ/B, Intercept {results[2].params['intercept'].value * model_j_scale:.04} nJ\n" +
f"{parameter_names[3]}: Slope {results[3].params['slope'].value * model_j_scale:.04} nJ/B, Intercept {results[3].params['intercept'].value * model_j_scale:.04} nJ\n" +
f"{parameter_names[4]}: Slope {results[4].params['slope'].value * model_t_scale:.04} $\mu$s/B, Intercept {results[4].params['intercept'].value * model_t_scale:.04} $\mu$s\n" +
f"{parameter_names[5]}: Slope {results[5].params['slope'].value * model_t_scale:.04} $\mu$s/B, Intercept {results[5].params['intercept'].value * model_t_scale:.04} $\mu$s\n" +
f"{parameter_names[6]}: Slope {results[6].params['slope'].value * model_t_scale:.04} $\mu$s/B, Intercept {results[6].params['intercept'].value * model_t_scale:.04} $\mu$s\n" +
f"{parameter_names[7]}: Slope {results[7].params['slope'].value * model_t_scale:.04} $\mu$s/B, Intercept {results[7].params['intercept'].value * model_t_scale:.04} $\mu$s\n" +
f"{parameter_names[8]}: Slope {results[8].params['slope'].value * model_j_scale:.04} nJ/B, Intercept {results[8].params['intercept'].value * model_j_scale:.04} nJ\n" +
f"{parameter_names[9]}: Slope {results[9].params['slope'].value * model_t_scale:.04} $\mu$s/B, Intercept {results[9].params['intercept'].value * model_t_scale:.04} $\mu$s\n")
fig2.tight_layout()
fig2.show()
print(
f"{parameter_names[0]}: Slope {results[0].params['slope'].value * model_j_scale:.020} nJ/B, Intercept {results[0].params['intercept'].value * model_j_scale:.020} nJ\n" +
f"{parameter_names[1]}: Slope {results[1].params['slope'].value * model_j_scale:.020} nJ/B, Intercept {results[1].params['intercept'].value * model_j_scale:.020} nJ\n" +
f"{parameter_names[2]}: Slope {results[2].params['slope'].value * model_j_scale:.020} nJ/B, Intercept {results[2].params['intercept'].value * model_j_scale:.020} nJ\n" +
f"{parameter_names[3]}: Slope {results[3].params['slope'].value * model_j_scale:.020} nJ/B, Intercept {results[3].params['intercept'].value * model_j_scale:.020} nJ\n" +
f"{parameter_names[4]}: Slope {results[4].params['slope'].value * model_t_scale:.020} $\mu$s/B, Intercept {results[4].params['intercept'].value * model_t_scale:.020} $\mu$s\n" +
f"{parameter_names[5]}: Slope {results[5].params['slope'].value * model_t_scale:.020} $\mu$s/B, Intercept {results[5].params['intercept'].value * model_t_scale:.020} $\mu$s\n" +
f"{parameter_names[6]}: Slope {results[6].params['slope'].value * model_t_scale:.020} $\mu$s/B, Intercept {results[6].params['intercept'].value * model_t_scale:.020} $\mu$s\n" +
f"{parameter_names[7]}: Slope {results[7].params['slope'].value * model_t_scale:.020} $\mu$s/B, Intercept {results[7].params['intercept'].value * model_t_scale:.020} $\mu$s\n" +
f"{parameter_names[8]}: Slope {results[8].params['slope'].value * model_j_scale:.020} nJ/B, Intercept {results[8].params['intercept'].value * model_j_scale:.020} nJ\n" +
f"{parameter_names[9]}: Slope {results[9].params['slope'].value * model_t_scale:.020} $\mu$s/B, Intercept {results[9].params['intercept'].value * model_t_scale:.020} $\mu$s\n"
)
# Save the energy (mJ) and time (ms) lists into pickle files
import pickle
pickle.dump(aes_encrypt_energy, open("aes_flash_encrypt_energy_mJ.p", "wb"))
pickle.dump(aes_decrypt_energy, open("aes_flash_decrypt_energy_mJ.p", "wb"))
pickle.dump(aes_flash_write_energy, open("aes_flash_write_energy_mJ.p", "wb"))
pickle.dump(aes_flash_read_energy, open("aes_flash_read_energy_mJ.p", "wb"))
pickle.dump(aes_encrypt_time_s, open("aes_flash_encrypt_time_ms.p", "wb"))
pickle.dump(aes_decrypt_time_s, open("aes_flash_decrypt_time_ms.p", "wb"))
pickle.dump(aes_flash_write_time_s, open("aes_flash_write_time_ms.p", "wb"))
pickle.dump(aes_flash_read_time_s, open("aes_flash_read_time_ms.p", "wb"))
aes = [aes_encrypt_energy, aes_flash_write_energy, aes_flash_read_energy, aes_decrypt_energy, aes_encrypt_time_s, aes_flash_write_time_s, aes_flash_read_time_s, aes_decrypt_time_s]
for i in aes:
print(len(i), len(i)*16)
###Output
375 6000
375 6000
375 6000
375 6000
375 6000
375 6000
375 6000
375 6000
###Markdown
Write config file
###Code
import json
config = {}
config["name"] = "AES-128 Flash"
config["project_paths"] = [project_path]
config["config_dict"] = config_dict
config["config_dict_plot"] = config_dict_plot
config["analysis"] = {"pins":{2: ["AES-128 Encrypt", "AES-128 Decrypt"], 3: ["AES-128 Flash Write", "AES-128 Flash Read"]},
"result_types": ["Charge", "Time"],
"section_types": {"init": [],
"store": ["AES-128 Encrypt", "AES-128 Flash Write"],
"load": ["AES-128 Flash Read", "AES-128 Decrypt"],
"exit": []},
"labels": {
"Charge": {"x":"Data Size", "x_unit": "byte", "y": "Charge", "y_unit": "C"},
"Time": {"x":"Data Size", "x_unit": "byte", "y": "Time", "y_unit": "s"},
},
"x_step": MBEDTLS_AES_BLOCK_SIZE}
with open("looped_experiment.json", 'w') as config_file:
json.dump(config, config_file, indent=4)
###Output
_____no_output_____
###Markdown
Write model data
###Code
dump_pickle = True
fit_lm = True
verbose = 2
show_lm_plot = 2
# drop = 1
# Parse data
analysis_config = config.get("analysis")
result_types = analysis_config.get("result_types")
x_step = analysis_config.get("x_step")
parsed_data = {}
for pin, parameter_names in analysis_config.get("pins").items():
data2 = power_and_time_per_pulse(
data, int(pin), stop_function=stop_function)
num_names = len(parameter_names)
for i, parameter_name in enumerate(parameter_names):
end_index = -drop * num_names or None
parsed_data[parameter_name] = {
result_types[0]: data2[0][i:end_index:num_names],
result_types[1]: data2[1][i:end_index:num_names],
"x_step": x_step}
if dump_pickle:
pickle.dump(parsed_data, open(
path.join(path.curdir,
f"{config_dict.get('file_name_base')}_looped.p"), "wb"))
# Fit a linear model (intercept + slope * bytes) to the charge and time series of each operation
if fit_lm:
model = None
if model is None:
def line(x, intercept, slope):
"""a line"""
return [intercept + slope*i for i in x]
model = Model(line)
params = model.make_params(intercept=0, slope=1)
# params['intercept'].set(min=0)
else:
params = model.params
model_results = {}
labels = analysis_config.get("labels")
for parameter_name in parsed_data.keys():
length = len(parsed_data[parameter_name][result_types[0]])
x_step = parsed_data[parameter_name]["x_step"]
num_bytes = range(x_step, (length+1)*x_step, x_step)
if verbose:
print(
f"Fitting model to {parameter_name} with {length} " +
f"samples, from {min(num_bytes)} to {max(num_bytes)} "
f"bytes in steps of {x_step}.")
model_result = {}
for result_type in result_types:
model_result[result_type] = model.fit(
parsed_data[parameter_name][result_type], params,
x=num_bytes)
if verbose >= 2:
print(model_result[result_type].fit_report())
# Plot multiple view
if show_lm_plot >= 2:
fig, grid = model_result[result_type].plot(
xlabel=f"{labels[result_type]['x']} " +
f"[{labels[result_type]['x_unit']}]",
ylabel=f"{labels[result_type]['y']} " +
f"[{labels[result_type]['y_unit']}]")
fig.canvas.set_window_title(
f"Residuals of {parameter_name}")
fig.tight_layout()
fig.show()
model_results[parameter_name] = model_result
# Plot single view
if show_lm_plot:
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(9, 6))
fig.canvas.set_window_title(f"Analysis {config.get('name')}")
colors = dict(zip(result_types, ['r', 'b']))
line_styles = (
line_style for line_style in ('-', '--', '-.', ':') * 2)
# fig.suptitle(f"Energy analysis of {config.get('name')}")
ax = {}
ax[result_types[0]] = fig.add_subplot(1, 1, 1)
ax[result_types[1]] = ax[result_types[0]].twinx()
ax[result_types[0]].set_xlabel(
f"{labels[result_types[0]]['x']} " +
f"[{labels[result_types[0]]['x_unit']}]")
for result_type in result_types:
ax[result_type].set_ylabel(
f"{labels[result_type]['y']} " +
f"[{labels[result_type]['y_unit']}]",
color=colors[result_type])
ax[result_type].tick_params('y', colors=colors[result_type])
lines = []
title_str = ""
for parameter_name in parsed_data.keys():
length = len(parsed_data[parameter_name][result_types[0]])
x_step = parsed_data[parameter_name]["x_step"]
num_bytes = range(x_step, (length+1)*x_step, x_step)
model_result = {}
line_style = next(line_styles)
for result_type in result_types:
label = f"{parameter_name} {labels[result_type]['y']}"
lines += ax[result_type].plot(
num_bytes, parsed_data[parameter_name][result_type],
colors[result_type] + line_style, label=label)
title_str += f"{label} "
for param in params.keys():
title_str += "".join(
f"{params[param].name.capitalize()}: ")
title_str += "".join(
f"{model_results[parameter_name][result_type].params[param].value: .03} ")
title_str += "".join(
f"{labels[result_type]['y_unit']}, ")
title_str = title_str[:-2] + \
f" per {labels[result_type]['x_unit']}\n"
ax[result_types[0]].legend(handles=lines)
ax[result_types[0]].set_title(title_str[:-1])
# fig.tight_layout()
fig.tight_layout(rect=(0.05, 0.05, 1, 1))
fig.set_size_inches(8, 6, forward=True)
fig.show()
# Save model results to file
if dump_pickle:
model_results_dump = {}
for parameter_name in model_results.keys():
model_results_dump[parameter_name] = {}
for result_type in model_results[parameter_name].keys():
model_results_dump[parameter_name][result_type] = \
model_results[parameter_name][result_type].values
pickle.dump(model_results_dump, open(path.join(
path.curdir,
f"{config_dict.get('file_name_base')}_model.p"), "wb"))
###Output
Fitting model to AES-128 Encrypt with 375 samples, from 16 to 6000 bytes in steps of 16.
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 9
# data points = 375
# variables = 2
chi-square = 5.7782e-12
reduced chi-square = 1.5491e-14
Akaike info crit = -11922.4437
Bayesian info crit = -11914.5898
[[Variables]]
intercept: 3.3739e-06 +/- 1.2880e-08 (0.38%) (init = 0)
slope: -4.9129e-12 +/- 3.7108e-12 (75.53%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 7
# data points = 375
# variables = 2
chi-square = 4.0302e-07
reduced chi-square = 1.0805e-09
Akaike info crit = -7740.19966
Bayesian info crit = -7732.34581
[[Variables]]
intercept: 8.9231e-04 +/- 3.4017e-06 (0.38%) (init = 0)
slope: -1.4025e-09 +/- 9.8002e-10 (69.88%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
Fitting model to AES-128 Decrypt with 375 samples, from 16 to 6000 bytes in steps of 16.
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 9
# data points = 375
# variables = 2
chi-square = 3.7686e-12
reduced chi-square = 1.0103e-14
Akaike info crit = -12082.7175
Bayesian info crit = -12074.8636
[[Variables]]
intercept: 3.3044e-06 +/- 1.0402e-08 (0.31%) (init = 0)
slope: -4.5506e-12 +/- 2.9968e-12 (65.86%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 7
# data points = 375
# variables = 2
chi-square = 2.6456e-07
reduced chi-square = 7.0928e-10
Akaike info crit = -7898.04501
Bayesian info crit = -7890.19116
[[Variables]]
intercept: 8.7554e-04 +/- 2.7561e-06 (0.31%) (init = 0)
slope: -1.3265e-09 +/- 7.9402e-10 (59.86%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
Fitting model to AES-128 Flash Write with 375 samples, from 16 to 6000 bytes in steps of 16.
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 9
# data points = 375
# variables = 2
chi-square = 3.9208e-09
reduced chi-square = 1.0512e-11
Akaike info crit = -9477.46168
Bayesian info crit = -9469.60783
[[Variables]]
intercept: 7.3718e-06 +/- 3.3552e-07 (4.55%) (init = 0)
slope: 6.1834e-08 +/- 9.6662e-11 (0.16%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 7
# data points = 375
# variables = 2
chi-square = 1.2513e-04
reduced chi-square = 3.3546e-07
Akaike info crit = -5588.41432
Bayesian info crit = -5580.56047
[[Variables]]
intercept: 0.00157029 +/- 5.9938e-05 (3.82%) (init = 0)
slope: 1.1497e-05 +/- 1.7268e-08 (0.15%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
Fitting model to AES-128 Flash Read with 375 samples, from 16 to 6000 bytes in steps of 16.
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 10
# data points = 375
# variables = 2
chi-square = 3.7411e-12
reduced chi-square = 1.0030e-14
Akaike info crit = -12085.4601
Bayesian info crit = -12077.6062
[[Variables]]
intercept: 1.7421e-08 +/- 1.0364e-08 (59.49%) (init = 0)
slope: 8.1365e-09 +/- 2.9859e-12 (0.04%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
[[Model]]
Model(line)
[[Fit Statistics]]
# fitting method = leastsq
# function evals = 7
# data points = 375
# variables = 2
chi-square = 2.5668e-07
reduced chi-square = 6.8816e-10
Akaike info crit = -7909.38243
Bayesian info crit = -7901.52858
[[Variables]]
intercept: 1.6196e-05 +/- 2.7147e-06 (16.76%) (init = 0)
slope: 2.1250e-06 +/- 7.8211e-10 (0.04%) (init = 1)
[[Correlations]] (unreported correlations are < 0.100)
C(intercept, slope) = -0.867
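###Markdown
The fitted intercepts and slopes are also dumped to the `*_model.p` pickle, so a later stage can estimate charge and time for an arbitrary payload without refitting. A minimal sketch (`model_params` and `payload_bytes` are illustrative names; it assumes the cell above has written the pickle and that each dumped `.values` dict is keyed by `intercept` and `slope`):
###Code
# Estimate charge [C] and time [s] for a given payload size from the saved linear models
model_params = pickle.load(open(
    path.join(path.curdir, f"{config_dict.get('file_name_base')}_model.p"), "rb"))
payload_bytes = 1024  # example payload size in bytes
for op_name, per_type in model_params.items():
    for result_type, values in per_type.items():
        estimate = values["intercept"] + values["slope"] * payload_bytes
        print(f"{op_name} {result_type}: {estimate:.3e}")
###Output
_____no_output_____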
###Markdown
Total from measurement
###Code
n_samples = 5995  # larger than the 375 entries in each list, so the slices below include every block
total_energy = sum(aes_encrypt_energy[:n_samples]) + sum(aes_flash_write_energy[:n_samples]) + sum(aes_flash_read_energy[:n_samples]) + sum(aes_decrypt_energy[:n_samples])
total_time = sum(aes_encrypt_time_s[:n_samples]) + sum(aes_flash_write_time_s[:n_samples]) + sum(aes_flash_read_time_s[:n_samples]) + sum(aes_decrypt_time_s[:n_samples])
print(total_energy, total_time)
# Per-block totals over one full encrypt -> flash write -> flash read -> decrypt cycle
# (the *_per_block names are only for readability)
total_energy_per_block = [e + w + r + d for (e, w, r, d) in zip(aes_encrypt_energy, aes_flash_write_energy, aes_flash_read_energy, aes_decrypt_energy)]
total_time_per_block = [e + w + r + d for (e, w, r, d) in zip(aes_encrypt_time_s, aes_flash_write_time_s, aes_flash_read_time_s, aes_decrypt_time_s)]
###Output
_____no_output_____ |
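###Markdown
As a quick derived figure, the average cost of a single 16-byte block over the whole run can be computed from the totals above (units follow whatever the lists are stored in; `num_blocks` is an illustrative name):
###Code
# Average energy and time per 16-byte block over the full run
num_blocks = len(aes_encrypt_energy)
print(f"Mean energy per block: {total_energy / num_blocks:.4e}")
print(f"Mean time per block:   {total_time / num_blocks:.4e}")
###Output
_____no_output_____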